A tutorial on CPM-Live.
CpmAntConfig
[[autodoc]] CpmAntConfig
- all
CpmAntTokenizer
[[autodoc]] CpmAntTokenizer
- all
CpmAntModel
[[autodoc]] CpmAntModel
- all
CpmAntForCausalLM
[[autodoc]] CpmAntForCausalLM
- all |
Speech Encoder Decoder Models
The [SpeechEncoderDecoderModel] can be used to initialize a speech-to-text model
with any pretrained speech autoencoding model as the encoder (e.g. Wav2Vec2, Hubert) and any pretrained autoregressive model as the decoder.
The effectiveness of initializing speech-sequence-to-text-sequence models with pretrained checkpoints for speech recognition and speech translation has been shown, for example, in Large-Scale Self- and Semi-Supervised Learning for Speech Translation by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
An example of how to use a [SpeechEncoderDecoderModel] for inference can be seen in Speech2Text2.
Randomly initializing SpeechEncoderDecoderModel from model configurations.
[SpeechEncoderDecoderModel] can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default [Wav2Vec2Model] configuration for the encoder
and the default [BertForCausalLM] configuration for the decoder.
```python
from transformers import BertConfig, Wav2Vec2Config, SpeechEncoderDecoderConfig, SpeechEncoderDecoderModel

config_encoder = Wav2Vec2Config()
config_decoder = BertConfig()

config = SpeechEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
model = SpeechEncoderDecoderModel(config=config)
```
Initializing SpeechEncoderDecoderModel from a pretrained encoder and a pretrained decoder.
[SpeechEncoderDecoderModel] can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained Transformer-based speech model, e.g. Wav2Vec2 or Hubert, can serve as the encoder, and that the decoder can be a pretrained auto-encoding model (e.g. BERT), a pretrained causal language model (e.g. GPT2), or the pretrained decoder part of a sequence-to-sequence model (e.g. the decoder of BART).
Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized.
Initializing [SpeechEncoderDecoderModel] from a pretrained encoder and decoder checkpoint requires the model to be fine-tuned on a downstream task, as has been shown in the Warm-starting-encoder-decoder blog post.
To do so, the SpeechEncoderDecoderModel class provides a [SpeechEncoderDecoderModel.from_encoder_decoder_pretrained] method.
```python
from transformers import SpeechEncoderDecoderModel

model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
    "facebook/hubert-large-ll60k", "google-bert/bert-base-uncased"
)
```
Loading an existing SpeechEncoderDecoderModel checkpoint and performing inference.
To load fine-tuned checkpoints of the SpeechEncoderDecoderModel class, [SpeechEncoderDecoderModel] provides the from_pretrained() method just like any other model architecture in Transformers.
To perform inference, one uses the [generate] method, which allows you to autoregressively generate text. This method supports various forms of decoding, such as greedy, beam search and multinomial sampling.
```python
from transformers import Wav2Vec2Processor, SpeechEncoderDecoderModel
from datasets import load_dataset
import torch

# load a fine-tuned speech translation model and corresponding processor
model = SpeechEncoderDecoderModel.from_pretrained("facebook/wav2vec2-xls-r-300m-en-to-15")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-xls-r-300m-en-to-15")

# let's perform inference on a piece of English speech (which we'll translate to German)
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values

# autoregressively generate transcription (uses greedy decoding by default)
generated_ids = model.generate(input_values)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_text)
# Mr. Quilter ist der Apostel der Mittelschicht und wir freuen uns, sein Evangelium willkommen heißen zu können.
```
Training
Once the model is created, it can be fine-tuned similarly to BART, T5 or any other encoder-decoder model on a dataset of (speech, text) pairs.
As you can see, only 2 inputs are required for the model in order to compute a loss: input_values (which are the
speech inputs) and labels (which are the input_ids of the encoded target sequence).
```python
from transformers import AutoTokenizer, AutoFeatureExtractor, SpeechEncoderDecoderModel
from datasets import load_dataset

encoder_id = "facebook/wav2vec2-base-960h"  # acoustic model encoder
decoder_id = "google-bert/bert-base-uncased"  # text decoder

feature_extractor = AutoFeatureExtractor.from_pretrained(encoder_id)
tokenizer = AutoTokenizer.from_pretrained(decoder_id)

# combine pre-trained encoder and pre-trained decoder to form a Seq2Seq model
model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id)
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

# load an audio input and pre-process (normalise mean/std to 0/1)
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
input_values = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt").input_values

# load its corresponding transcription and tokenize to generate labels
labels = tokenizer(ds[0]["text"], return_tensors="pt").input_ids

# the forward function automatically creates the correct decoder_input_ids
loss = model(input_values=input_values, labels=labels).loss
loss.backward()
```
SpeechEncoderDecoderConfig
[[autodoc]] SpeechEncoderDecoderConfig
SpeechEncoderDecoderModel
[[autodoc]] SpeechEncoderDecoderModel
- forward
- from_encoder_decoder_pretrained
FlaxSpeechEncoderDecoderModel
[[autodoc]] FlaxSpeechEncoderDecoderModel
- call
- from_encoder_decoder_pretrained |
MMS
Overview
The MMS model was proposed in Scaling Speech Technology to 1,000+ Languages
by Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, Alexei Baevski, Yossi Adi, Xiaohui Zhang, Wei-Ning Hsu, Alexis Conneau, Michael Auli
The abstract from the paper is the following:
Expanding the language coverage of speech technology has the potential to improve access to information for many more people.
However, current speech technology is restricted to about one hundred languages which is a small fraction of the over 7,000
languages spoken around the world.
The Massively Multilingual Speech (MMS) project increases the number of supported languages by 10-40x, depending on the task.
The main ingredients are a new dataset based on readings of publicly available religious texts and effectively leveraging
self-supervised learning. We built pre-trained wav2vec 2.0 models covering 1,406 languages,
a single multilingual automatic speech recognition model for 1,107 languages, speech synthesis models
for the same number of languages, as well as a language identification model for 4,017 languages.
Experiments show that our multilingual speech recognition model more than halves the word error rate of
Whisper on 54 languages of the FLEURS benchmark while being trained on a small fraction of the labeled data.
Here are the different models open sourced in the MMS project. The models and code are originally released here. We have added them to the transformers framework, making them easier to use.
Automatic Speech Recognition (ASR)
The ASR model checkpoints can be found here: mms-1b-fl102, mms-1b-l1107, mms-1b-all. For best accuracy, use the mms-1b-all model.
Tips: |
All ASR models accept a float array corresponding to the raw waveform of the speech signal. The raw waveform should be pre-processed with [Wav2Vec2FeatureExtractor].
The models were trained using connectionist temporal classification (CTC) so the model output has to be decoded using
[Wav2Vec2CTCTokenizer].
You can load different language adapter weights for different languages via [~Wav2Vec2PreTrainedModel.load_adapter]. Language adapters only consist of roughly 2 million parameters and can therefore be efficiently loaded on the fly when needed.
Loading
By default MMS loads adapter weights for English. If you want to load adapter weights for another language,
make sure to specify target_lang=<your-chosen-target-lang> as well as ignore_mismatched_sizes=True.
The ignore_mismatched_sizes=True keyword has to be passed to allow the language model head to be resized according
to the vocabulary of the specified language.
Similarly, the processor should be loaded with the same target language:
```python
from transformers import Wav2Vec2ForCTC, AutoProcessor

model_id = "facebook/mms-1b-all"
target_lang = "fra"

processor = AutoProcessor.from_pretrained(model_id, target_lang=target_lang)
model = Wav2Vec2ForCTC.from_pretrained(model_id, target_lang=target_lang, ignore_mismatched_sizes=True)
```
You can safely ignore a warning such as:
```text
Some weights of Wav2Vec2ForCTC were not initialized from the model checkpoint at facebook/mms-1b-all and are newly initialized because the shapes did not match:
- lm_head.bias: found shape torch.Size([154]) in the checkpoint and torch.Size([314]) in the model instantiated
- lm_head.weight: found shape torch.Size([154, 1280]) in the checkpoint and torch.Size([314, 1280]) in the model instantiated
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
If you want to use the ASR pipeline, you can load your chosen target language as such:
```python
from transformers import pipeline

model_id = "facebook/mms-1b-all"
target_lang = "fra"

pipe = pipeline(model=model_id, model_kwargs={"target_lang": target_lang, "ignore_mismatched_sizes": True})
```
Inference
Next, let's look at how we can run MMS in inference and change adapter layers after having called [~PreTrainedModel.from_pretrained].
First, we load audio data in different languages using 🤗 Datasets.
```python
from datasets import load_dataset, Audio

# English
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
en_sample = next(iter(stream_data))["audio"]["array"]

# French
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "fr", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
fr_sample = next(iter(stream_data))["audio"]["array"]
```
Next, we load the model and processor:
```python
from transformers import Wav2Vec2ForCTC, AutoProcessor
import torch

model_id = "facebook/mms-1b-all"

processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
```
Now we process the audio data, pass the processed audio data to the model and transcribe the model output,
just like we usually do for [Wav2Vec2ForCTC].
```python
inputs = processor(en_sample, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs).logits

ids = torch.argmax(outputs, dim=-1)[0]
transcription = processor.decode(ids)
# 'joe keton disapproved of films and buster also had reservations about the media'
```
We can now keep the same model in memory and simply switch out the language adapters by
calling the convenient [~Wav2Vec2ForCTC.load_adapter] function for the model and [~Wav2Vec2CTCTokenizer.set_target_lang] for the tokenizer.
We pass the target language as an input - "fra" for French. |
```python
processor.tokenizer.set_target_lang("fra")
model.load_adapter("fra")

inputs = processor(fr_sample, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs).logits

ids = torch.argmax(outputs, dim=-1)[0]
transcription = processor.decode(ids)
# "ce dernier est volé tout au long de l'histoire romaine"
```
In the same way, the language can be switched out for all other supported languages. To see all supported languages, have a look at:
```python
processor.tokenizer.vocab.keys()
```
To further improve performance from ASR models, language model decoding can be used. See the documentation here for further details.
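As a rough illustration of what LM-boosted decoding looks like, the sketch below swaps the plain processor for [Wav2Vec2ProcessorWithLM], which runs beam search with an n-gram language model via pyctcdecode. The checkpoint id used here is an illustrative demo repository that bundles a kenlm model (it is not an MMS checkpoint), and pyctcdecode and kenlm need to be installed.
```python
# a minimal sketch of CTC decoding with an n-gram language model; the repo id below is an
# illustrative demo checkpoint (not an MMS checkpoint) that ships a kenlm language model
from transformers import Wav2Vec2ProcessorWithLM, Wav2Vec2ForCTC
import torch

processor = Wav2Vec2ProcessorWithLM.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm")
model = Wav2Vec2ForCTC.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm")

# en_sample is the 16 kHz waveform loaded earlier in this guide
inputs = processor(en_sample, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# batch_decode runs beam search against the bundled n-gram LM instead of greedy argmax decoding
transcription = processor.batch_decode(logits.numpy()).text[0]
```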
Speech Synthesis (TTS)
MMS-TTS uses the same model architecture as VITS, which was added to 🤗 Transformers in v4.33. MMS trains a separate
model checkpoint for each of the 1100+ languages in the project. All available checkpoints can be found on the Hugging
Face Hub: facebook/mms-tts, and the inference
documentation under VITS.
Inference
To use the MMS model, first update to the latest version of the Transformers library:
```bash
pip install --upgrade transformers accelerate
```
Since the flow-based model in VITS is non-deterministic, it is good practice to set a seed to ensure reproducibility of
the outputs.
For languages with a Roman alphabet, such as English or French, the tokenizer can be used directly to
pre-process the text inputs. The following code example runs a forward pass using the MMS-TTS English checkpoint: |
```python
import torch
from transformers import VitsTokenizer, VitsModel, set_seed

tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")
model = VitsModel.from_pretrained("facebook/mms-tts-eng")

inputs = tokenizer(text="Hello - my dog is cute", return_tensors="pt")

set_seed(555)  # make deterministic

with torch.no_grad():
    outputs = model(**inputs)

waveform = outputs.waveform[0]
```
The resulting waveform can be saved as a .wav file:
```python
import scipy.io.wavfile

scipy.io.wavfile.write("synthesized_speech.wav", rate=model.config.sampling_rate, data=waveform.numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio

Audio(waveform, rate=model.config.sampling_rate)
```
For certain languages with non-Roman alphabets, such as Arabic, Mandarin or Hindi, the uroman
perl package is required to pre-process the text inputs to the Roman alphabet.
You can check whether you require the uroman package for your language by inspecting the is_uroman attribute of
the pre-trained tokenizer:
```python
from transformers import VitsTokenizer

tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")
print(tokenizer.is_uroman)
```
If required, you should apply the uroman package to your text inputs prior to passing them to the VitsTokenizer,
since currently the tokenizer does not support performing the pre-processing itself.
To do this, first clone the uroman repository to your local machine and set the bash variable UROMAN to the local path:
```bash
git clone https://github.com/isi-nlp/uroman.git
cd uroman
export UROMAN=$(pwd)
```
You can then pre-process the text input using the following code snippet. You can either rely on using the bash variable
UROMAN to point to the uroman repository, or you can pass the uroman directory as an argument to the uromanize function:
```python
import torch
from transformers import VitsTokenizer, VitsModel, set_seed
import os
import subprocess

tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-kor")
model = VitsModel.from_pretrained("facebook/mms-tts-kor")

def uromanize(input_string, uroman_path):
    """Convert non-Roman strings to Roman using the uroman perl package."""
    script_path = os.path.join(uroman_path, "bin", "uroman.pl")
    command = ["perl", script_path]

    process = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    # Execute the perl command
    stdout, stderr = process.communicate(input=input_string.encode())

    if process.returncode != 0:
        raise ValueError(f"Error {process.returncode}: {stderr.decode()}")

    # Return the output as a string and skip the new-line character at the end
    return stdout.decode()[:-1]

text = "이봐 무슨 일이야"
uromanized_text = uromanize(text, uroman_path=os.environ["UROMAN"])

inputs = tokenizer(text=uromanized_text, return_tensors="pt")

set_seed(555)  # make deterministic
with torch.no_grad():
    outputs = model(inputs["input_ids"])

waveform = outputs.waveform[0]
```
Tips: |
The MMS-TTS checkpoints are trained on lower-cased, un-punctuated text. By default, the VitsTokenizer normalizes the inputs by removing any casing and punctuation, to avoid passing out-of-vocabulary characters to the model. Hence, the model is agnostic to casing and punctuation, so these should be avoided in the text prompt. You can disable normalisation by setting normalize=False in the call to the tokenizer, but this will lead to unexpected behaviour and is discouraged.
The speaking rate can be varied by setting the attribute model.speaking_rate to a chosen value. Likewise, the randomness of the noise is controlled by model.noise_scale: |
```python
import torch
from transformers import VitsTokenizer, VitsModel, set_seed

tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")
model = VitsModel.from_pretrained("facebook/mms-tts-eng")

inputs = tokenizer(text="Hello - my dog is cute", return_tensors="pt")

# make deterministic
set_seed(555)

# make speech faster and more noisy
model.speaking_rate = 1.5
model.noise_scale = 0.8

with torch.no_grad():
    outputs = model(**inputs)
```
Language Identification (LID)
Different LID models are available based on the number of languages they can recognize - 126, 256, 512, 1024, 2048, 4017.
Inference
First, we install transformers and some other libraries:
```bash
pip install torch accelerate datasets[audio]
pip install --upgrade transformers
```
Next, we load a couple of audio samples via datasets. Make sure that the audio data is sampled to 16 kHz (16,000 Hz).
```python
from datasets import load_dataset, Audio

# English
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
en_sample = next(iter(stream_data))["audio"]["array"]

# Arabic
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "ar", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
ar_sample = next(iter(stream_data))["audio"]["array"]
```
Next, we load the model and processor:
```python
from transformers import Wav2Vec2ForSequenceClassification, AutoFeatureExtractor
import torch

model_id = "facebook/mms-lid-126"

processor = AutoFeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2ForSequenceClassification.from_pretrained(model_id)
```
Now we process the audio data, pass the processed audio data to the model to classify it into a language, just like we usually do for Wav2Vec2 audio classification models such as ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition |
```python
# English
inputs = processor(en_sample, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs).logits

lang_id = torch.argmax(outputs, dim=-1)[0].item()
detected_lang = model.config.id2label[lang_id]
# 'eng'

# Arabic
inputs = processor(ar_sample, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs).logits

lang_id = torch.argmax(outputs, dim=-1)[0].item()
detected_lang = model.config.id2label[lang_id]
# 'ara'
```
To see all the supported languages of a checkpoint, you can print out the language ids as follows:
```python
model.config.id2label.values()
```
Audio Pretrained Models
Pretrained models are available in two different sizes: 300M and 1B parameters.
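As a quick sketch (assuming the facebook/mms-300m repository id from the MMS release; swap in facebook/mms-1b for the larger model), the pretrained checkpoints can be loaded as plain [Wav2Vec2Model] backbones to extract self-supervised speech representations. The feature extractor below is constructed with default wav2vec 2.0 settings in case the repository does not ship a preprocessor config.
```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# repo id taken from the MMS release
model = Wav2Vec2Model.from_pretrained("facebook/mms-300m")

# default wav2vec 2.0 preprocessing (16 kHz, zero-mean/unit-variance normalization)
feature_extractor = Wav2Vec2FeatureExtractor(sampling_rate=16_000, do_normalize=True)

# en_sample is the 16 kHz waveform loaded earlier in this guide
inputs = feature_extractor(en_sample, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
```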
The MMS model for ASR is based on the Wav2Vec2 architecture; refer to Wav2Vec2's documentation page for further
details on how to finetune the models for various downstream tasks.
MMS-TTS uses the same model architecture as VITS; refer to VITS's documentation page for the API reference.
BORT
This model is in maintenance mode only, we do not accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0.
You can do so by running the following command: pip install -U transformers==4.30.0. |
Overview
The BORT model was proposed in Optimal Subarchitecture Extraction for BERT by
Adrian de Wynter and Daniel J. Perry. It is an optimal subset of architectural parameters for BERT, which the
authors refer to as "Bort".
The abstract from the paper is the following:
We extract an optimal subset of architectural parameters for the BERT architecture from Devlin et al. (2018) by
applying recent breakthroughs in algorithms for neural architecture search. This optimal subset, which we refer to as
"Bort", is demonstrably smaller, having an effective (that is, not counting the embedding layer) size of 5.5% the
original BERT-large architecture, and 16% of the net size. Bort is also able to be pretrained in 288 GPU hours, which
is 1.2% of the time required to pretrain the highest-performing BERT parametric architectural variant, RoBERTa-large
(Liu et al., 2019), and about 33% of that of the world-record, in GPU hours, required to train BERT-large on the same
hardware. It is also 7.9x faster on a CPU, as well as being better performing than other compressed variants of the
architecture, and some of the non-compressed variants: it obtains performance improvements of between 0.3% and 31%,
absolute, with respect to BERT-large, on multiple public natural language understanding (NLU) benchmarks.
This model was contributed by stefan-it. The original code can be found here.
Usage tips |
BORT's model architecture is based on BERT, refer to BERT's documentation page for the
model's API reference as well as usage examples.
BORT uses the RoBERTa tokenizer instead of the BERT tokenizer, refer to RoBERTa's documentation page for the tokenizer's API reference as well as usage examples.
BORT requires a specific fine-tuning algorithm, called Agora,
that is sadly not open-sourced yet. It would be very useful for the community if someone implemented the
algorithm to make BORT fine-tuning work.
|
SqueezeBERT
Overview
The SqueezeBERT model was proposed in SqueezeBERT: What can computer vision teach NLP about efficient neural networks? by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, Kurt W. Keutzer. It's a
bidirectional transformer similar to the BERT model. The key difference between the BERT architecture and the
SqueezeBERT architecture is that SqueezeBERT uses grouped convolutions
instead of fully-connected layers for the Q, K, V and FFN layers.
The abstract from the paper is the following:
Humans read and write hundreds of billions of messages every day. Further, due to the availability of large datasets,
large computing systems, and better neural network models, natural language processing (NLP) technology has made
significant strides in understanding, proofreading, and organizing these messages. Thus, there is a significant
opportunity to deploy NLP in myriad applications to help web users, social networks, and businesses. In particular, we
consider smartphones and other mobile devices as crucial platforms for deploying NLP models at scale. However, today's
highly-accurate NLP neural network models such as BERT and RoBERTa are extremely computationally expensive, with
BERT-base taking 1.7 seconds to classify a text snippet on a Pixel 3 smartphone. In this work, we observe that methods
such as grouped convolutions have yielded significant speedups for computer vision networks, but many of these
techniques have not been adopted by NLP neural network designers. We demonstrate how to replace several operations in
self-attention layers with grouped convolutions, and we use this technique in a novel network architecture called
SqueezeBERT, which runs 4.3x faster than BERT-base on the Pixel 3 while achieving competitive accuracy on the GLUE test
set. The SqueezeBERT code will be released.
This model was contributed by forresti.
Usage tips |
SqueezeBERT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right
rather than the left.
SqueezeBERT is similar to BERT and therefore relies on the masked language modeling (MLM) objective. It is therefore
efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation. Models trained
with a causal language modeling (CLM) objective are better in that regard.
For best results when finetuning on sequence classification tasks, it is recommended to start with the
squeezebert/squeezebert-mnli-headless checkpoint. |
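As a minimal sketch of that recommendation (the two-label setup and example text are placeholders, not part of the checkpoint), starting a sequence classification fine-tune could look like this. The classification head is freshly initialized, so the model still needs training before its predictions are meaningful.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# the "headless" checkpoint ships the pretrained encoder without a task head,
# so a new classification head is initialized for the downstream task
tokenizer = AutoTokenizer.from_pretrained("squeezebert/squeezebert-mnli-headless")
model = AutoModelForSequenceClassification.from_pretrained(
    "squeezebert/squeezebert-mnli-headless", num_labels=2
)

inputs = tokenizer("This movie was surprisingly good!", return_tensors="pt")
logits = model(**inputs).logits  # fine-tune before relying on these predictions
```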
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide |
SqueezeBertConfig
[[autodoc]] SqueezeBertConfig
SqueezeBertTokenizer
[[autodoc]] SqueezeBertTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
SqueezeBertTokenizerFast
[[autodoc]] SqueezeBertTokenizerFast
SqueezeBertModel
[[autodoc]] SqueezeBertModel
SqueezeBertForMaskedLM
[[autodoc]] SqueezeBertForMaskedLM
SqueezeBertForSequenceClassification
[[autodoc]] SqueezeBertForSequenceClassification
SqueezeBertForMultipleChoice
[[autodoc]] SqueezeBertForMultipleChoice
SqueezeBertForTokenClassification
[[autodoc]] SqueezeBertForTokenClassification
SqueezeBertForQuestionAnswering
[[autodoc]] SqueezeBertForQuestionAnswering |
Wav2Vec2Phoneme
Overview
The Wav2Vec2Phoneme model was proposed in Simple and Effective Zero-shot Cross-lingual Phoneme Recognition (Xu et al., 2021)
by Qiantong Xu, Alexei Baevski, Michael Auli.
The abstract from the paper is the following:
Recent progress in self-training, self-supervised pretraining and unsupervised learning enabled well performing speech
recognition systems without any labeled data. However, in many cases there is labeled data available for related
languages which is not utilized by these methods. This paper extends previous work on zero-shot cross-lingual transfer
learning by fine-tuning a multilingually pretrained wav2vec 2.0 model to transcribe unseen languages. This is done by
mapping phonemes of the training languages to the target language using articulatory features. Experiments show that
this simple method significantly outperforms prior work which introduced task-specific architectures and used only part
of a monolingually pretrained model.
Relevant checkpoints can be found under https://huggingface.co/models?other=phoneme-recognition.
This model was contributed by patrickvonplaten.
The original code can be found here.
Usage tips |
Wav2Vec2Phoneme uses the exact same architecture as Wav2Vec2.
Wav2Vec2Phoneme is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
The Wav2Vec2Phoneme model was trained using connectionist temporal classification (CTC), so the model output has to be
decoded using [Wav2Vec2PhonemeCTCTokenizer].
Wav2Vec2Phoneme can be fine-tuned on multiple languages at once and decode unseen languages in a single forward pass
into a sequence of phonemes.
By default, the model outputs a sequence of phonemes. In order to transform the phonemes into a sequence of words, one
should make use of a dictionary and language model.
Wav2Vec2Phoneme's architecture is based on the Wav2Vec2 model; for the API reference of everything except the tokenizer,
check out Wav2Vec2's documentation page.
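A minimal inference sketch is shown below, assuming the facebook/wav2vec2-lv-60-espeak-cv-ft checkpoint released with the paper; the output is a sequence of phonemes rather than words.
```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForCTC, Wav2Vec2PhonemeCTCTokenizer

checkpoint = "facebook/wav2vec2-lv-60-espeak-cv-ft"
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(checkpoint)
tokenizer = Wav2Vec2PhonemeCTCTokenizer.from_pretrained(checkpoint)
model = Wav2Vec2ForCTC.from_pretrained(checkpoint)

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = feature_extractor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# CTC decoding: greedy argmax over frames, then collapse repeats/blanks into phoneme tokens
predicted_ids = torch.argmax(logits, dim=-1)
phonemes = tokenizer.batch_decode(predicted_ids)
```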
Wav2Vec2PhonemeCTCTokenizer
[[autodoc]] Wav2Vec2PhonemeCTCTokenizer
- call
- batch_decode
- decode
- phonemize |
BigBird
Overview
The BigBird model was proposed in Big Bird: Transformers for Longer Sequences by
Zaheer, Manzil and Guruganesh, Guru and Dubey, Kumar Avinava and Ainslie, Joshua and Alberti, Chris and Ontanon,
Santiago and Pham, Philip and Ravula, Anirudh and Wang, Qifan and Yang, Li and others. BigBird is a sparse-attention-based
transformer which extends Transformer-based models, such as BERT, to much longer sequences. In addition to sparse
attention, BigBird also applies global attention as well as random attention to the input sequence. Theoretically, it
has been shown that applying sparse, global, and random attention approximates full attention, while being
computationally much more efficient for longer sequences. As a consequence of the capability to handle longer context,
BigBird has shown improved performance on various long document NLP tasks, such as question answering and
summarization, compared to BERT or RoBERTa.
The abstract from the paper is the following:
Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP.
Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence
length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse attention mechanism that
reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and
is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our
theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS), that attend to the entire
sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to
8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context,
BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also
propose novel applications to genomics data.
This model was contributed by vasudevgupta. The original code can be found
here.
Usage tips |
For an in-detail explanation on how BigBird's attention works, see this blog post.
BigBird comes with 2 implementations: original_full & block_sparse. For sequence lengths < 1024, using
original_full is advised as there is no benefit in using block_sparse attention (see the example after this list).
The code currently uses a window size of 3 blocks and 2 global blocks.
The sequence length must be divisible by the block size.
The current implementation supports only ITC.
The current implementation doesn't support num_random_blocks = 0.
BigBird is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than
the left. |
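The attention settings above can be overridden directly when loading a checkpoint. A minimal sketch, using the public google/bigbird-roberta-base checkpoint, of switching between the two implementations:
```python
from transformers import BigBirdModel

# block_sparse attention (the default) for long inputs; the sequence length
# fed to the model must then be divisible by block_size
model = BigBirdModel.from_pretrained(
    "google/bigbird-roberta-base", attention_type="block_sparse", block_size=64, num_random_blocks=3
)

# for shorter sequences (< 1024 tokens), full attention is just as accurate and faster
model = BigBirdModel.from_pretrained("google/bigbird-roberta-base", attention_type="original_full")
```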
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide |
BigBirdConfig
[[autodoc]] BigBirdConfig
BigBirdTokenizer
[[autodoc]] BigBirdTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
BigBirdTokenizerFast
[[autodoc]] BigBirdTokenizerFast
BigBird specific outputs
[[autodoc]] models.big_bird.modeling_big_bird.BigBirdForPreTrainingOutput |
BigBirdModel
[[autodoc]] BigBirdModel
- forward
BigBirdForPreTraining
[[autodoc]] BigBirdForPreTraining
- forward
BigBirdForCausalLM
[[autodoc]] BigBirdForCausalLM
- forward
BigBirdForMaskedLM
[[autodoc]] BigBirdForMaskedLM
- forward
BigBirdForSequenceClassification
[[autodoc]] BigBirdForSequenceClassification
- forward
BigBirdForMultipleChoice
[[autodoc]] BigBirdForMultipleChoice
- forward
BigBirdForTokenClassification
[[autodoc]] BigBirdForTokenClassification
- forward
BigBirdForQuestionAnswering
[[autodoc]] BigBirdForQuestionAnswering
- forward |
FlaxBigBirdModel
[[autodoc]] FlaxBigBirdModel
- call
FlaxBigBirdForPreTraining
[[autodoc]] FlaxBigBirdForPreTraining
- call
FlaxBigBirdForCausalLM
[[autodoc]] FlaxBigBirdForCausalLM
- call
FlaxBigBirdForMaskedLM
[[autodoc]] FlaxBigBirdForMaskedLM
- call
FlaxBigBirdForSequenceClassification
[[autodoc]] FlaxBigBirdForSequenceClassification
- call
FlaxBigBirdForMultipleChoice
[[autodoc]] FlaxBigBirdForMultipleChoice
- call
FlaxBigBirdForTokenClassification
[[autodoc]] FlaxBigBirdForTokenClassification
- call
FlaxBigBirdForQuestionAnswering
[[autodoc]] FlaxBigBirdForQuestionAnswering
- call |
Mixtral
Overview
Mixtral-8x7B was introduced in the Mixtral of Experts blogpost by Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
The introduction of the blog post says:
Today, the team is proud to release Mixtral 8x7B, a high-quality sparse mixture of experts models (SMoE) with open weights. Licensed under Apache 2.0. Mixtral outperforms Llama 2 70B on most benchmarks with 6x faster inference. It is the strongest open-weight model with a permissive license and the best model overall regarding cost/performance trade-offs. In particular, it matches or outperforms GPT3.5 on most standard benchmarks.
Mixtral-8x7B is the second large language model (LLM) released by mistral.ai, after Mistral-7B.
Architectural details
Mixtral-8x7B is a decoder-only Transformer with the following architectural choices: |
Mixtral is a Mixture of Experts (MoE) model with 8 experts per MLP, with a total of 45 billion parameters. To learn more about mixture-of-experts, refer to the blog post.
Despite the model having 45 billion parameters, the compute required for a single forward pass is the same as that of a 14 billion parameter model. This is because even though each of the experts has to be loaded in RAM (a 70B-like RAM requirement), each token from the hidden states is dispatched twice (top-2 routing), so the compute (the operation required at each forward pass) is just 2 x sequence_length.
The following implementation details are shared with Mistral AI's first model Mistral-7B:
- Sliding Window Attention - Trained with 8k context length and fixed cache size, with a theoretical attention span of 128K tokens
- GQA (Grouped Query Attention) - allowing faster inference and lower cache size.
- Byte-fallback BPE tokenizer - ensures that characters are never mapped to out of vocabulary tokens.
For more details refer to the release blog post.
License
Mixtral-8x7B is released under the Apache 2.0 license.
Usage tips
The Mistral team has released 2 checkpoints:
- a base model, Mixtral-8x7B-v0.1, which has been pre-trained to predict the next token on internet-scale data.
- an instruction tuned model, Mixtral-8x7B-Instruct-v0.1, which is the base model optimized for chat purposes using supervised fine-tuning (SFT) and direct preference optimization (DPO).
The base model can be used as follows:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x7B-v0.1", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-v0.1")

prompt = "My favourite condiment is"

model_inputs = tokenizer([prompt], return_tensors="pt").to("cuda")

generated_ids = model.generate(**model_inputs, max_new_tokens=100, do_sample=True)
tokenizer.batch_decode(generated_ids)[0]
# "My favourite condiment is to "
```
The instruction tuned model can be used as follows:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
    {"role": "user", "content": "Do you have mayonnaise recipes?"}
]

model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")

generated_ids = model.generate(model_inputs, max_new_tokens=100, do_sample=True)
tokenizer.batch_decode(generated_ids)[0]
# "Mayonnaise can be made as follows: ()"
```
As can be seen, the instruction-tuned model requires a chat template to be applied to make sure the inputs are prepared in the right format.
Speeding up Mixtral by using Flash Attention
The code snippets above showcase inference without any optimization tricks. However, one can drastically speed up the model by leveraging Flash Attention, which is a faster implementation of the attention mechanism used inside the model.
First, make sure to install the latest version of Flash Attention 2 to include the sliding window attention feature:
```bash
pip install -U flash-attn --no-build-isolation
```
Also make sure that your hardware is compatible with Flash Attention 2. Read more about it in the official documentation of the flash attention repository. Also make sure to load your model in half-precision (e.g. torch.float16).
To load and run a model using Flash Attention-2, refer to the snippet below:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x7B-v0.1", torch_dtype=torch.float16, attn_implementation="flash_attention_2", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-v0.1")

prompt = "My favourite condiment is"

model_inputs = tokenizer([prompt], return_tensors="pt").to("cuda")

generated_ids = model.generate(**model_inputs, max_new_tokens=100, do_sample=True)
tokenizer.batch_decode(generated_ids)[0]
# "The expected output"
```
Expected speedups
Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using the mistralai/Mixtral-8x7B-v0.1 checkpoint and the Flash Attention 2 version of the model.
Sliding window Attention
The current implementation supports the sliding window attention mechanism and memory efficient cache management.
To enable sliding window attention, just make sure to have a flash-attn version that is compatible with sliding window attention (>=2.3.0).
The Flash Attention-2 model also uses a more memory efficient cache slicing mechanism: as recommended by the official implementation of the Mistral model, which uses a rolling cache mechanism, we keep the cache size fixed (self.config.sliding_window), support batched generation only for padding_side="left", and use the absolute position of the current token to compute the positional embedding.
Shrinking down Mixtral using quantization
As the Mixtral model has 45 billion parameters, that would require about 90GB of GPU RAM in half precision (float16), since each parameter is stored in 2 bytes. However, one can shrink down the size of the model using quantization. If the model is quantized to 4 bits (or half a byte per parameter), a single A100 with 40GB of RAM is enough to fit the entire model, as in that case only about 27 GB of RAM is required.
Quantizing a model is as simple as passing a quantization_config to the model. Below, we'll leverage bitsandbytes quantization (but refer to this page for other quantization methods):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# specify how to quantize the model
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1", quantization_config=quantization_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
    {"role": "user", "content": "Do you have mayonnaise recipes?"}
]

model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")

generated_ids = model.generate(model_inputs, max_new_tokens=100, do_sample=True)
tokenizer.batch_decode(generated_ids)[0]
# "The expected output"
```
This model was contributed by Younes Belkada and Arthur Zucker.
The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Mixtral. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. |
A demo notebook to perform supervised fine-tuning (SFT) of Mixtral-8x7B can be found here. 🌎
A blog post on fine-tuning Mixtral-8x7B using PEFT. 🌎
The Alignment Handbook by Hugging Face includes scripts and recipes to perform supervised fine-tuning (SFT) and direct preference optimization with Mistral-7B. This includes scripts for full fine-tuning, QLoRa on a single GPU as well as multi-GPU fine-tuning.
Causal language modeling task guide |
MixtralConfig
[[autodoc]] MixtralConfig
MixtralModel
[[autodoc]] MixtralModel
- forward
MixtralForCausalLM
[[autodoc]] MixtralForCausalLM
- forward
MixtralForSequenceClassification
[[autodoc]] MixtralForSequenceClassification
- forward |
Graphormer
Overview
The Graphormer model was proposed in Do Transformers Really Perform Bad for Graph Representation? by
Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen and Tie-Yan Liu. It is a Graph Transformer model, modified to allow computations on graphs instead of text sequences by generating embeddings and features of interest during preprocessing and collation, then using a modified attention.
The abstract from the paper is the following:
The Transformer architecture has become a dominant choice in many domains, such as natural language processing and computer vision. Yet, it has not achieved competitive performance on popular leaderboards of graph-level prediction compared to mainstream GNN variants. Therefore, it remains a mystery how Transformers could perform well for graph representation learning. In this paper, we solve this mystery by presenting Graphormer, which is built upon the standard Transformer architecture, and could attain excellent results on a broad range of graph representation learning tasks, especially on the recent OGB Large-Scale Challenge. Our key insight to utilizing Transformer in the graph is the necessity of effectively encoding the structural information of a graph into the model. To this end, we propose several simple yet effective structural encoding methods to help Graphormer better model graph-structured data. Besides, we mathematically characterize the expressive power of Graphormer and exhibit that with our ways of encoding the structural information of graphs, many popular GNN variants could be covered as the special cases of Graphormer.
This model was contributed by clefourrier. The original code can be found here.
Usage tips
This model will not work well on large graphs (more than 100 nodes/edges), as memory usage will explode.
You can reduce the batch size, increase your RAM, or decrease the UNREACHABLE_NODE_DISTANCE parameter in algos_graphormer.pyx, but it will be hard to go above 700 nodes/edges.
This model does not use a tokenizer, but instead a special collator during training.
GraphormerConfig
[[autodoc]] GraphormerConfig
GraphormerModel
[[autodoc]] GraphormerModel
- forward
GraphormerForGraphClassification
[[autodoc]] GraphormerForGraphClassification
- forward |
DeBERTa
Overview
The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It is based on Google's
BERT model released in 2018 and Facebook's RoBERTa model released in 2019.
It builds on RoBERTa with disentangled attention and an enhanced mask decoder, trained with half of the data used in
RoBERTa.
The abstract from the paper is the following:
Recent progress in pre-trained neural language models has significantly improved the performance of many natural
language processing (NLP) tasks. In this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT with
disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The first is the
disentangled attention mechanism, where each word is represented using two vectors that encode its content and
position, respectively, and the attention weights among words are computed using disentangled matrices on their
contents and relative positions. Second, an enhanced mask decoder is used to replace the output softmax layer to
predict the masked tokens for model pretraining. We show that these two techniques significantly improve the efficiency
of model pretraining and performance of downstream tasks. Compared to RoBERTa-Large, a DeBERTa model trained on half of
the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9%
(90.2% vs. 91.1%), on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and RACE by +3.6% (83.2% vs. 86.8%). The DeBERTa code and
pre-trained models will be made publicly available at https://github.com/microsoft/DeBERTa.
This model was contributed by DeBERTa. The TF 2.0 implementation of this model was
contributed by kamalkraj. The original code can be found here.
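A quick sketch of getting started is shown below; the checkpoint id microsoft/deberta-base and the two-label setup are illustrative choices rather than requirements, and the classification head is newly initialized, so the model should be fine-tuned before its predictions are used.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
# the classification head is randomly initialized; fine-tune before relying on the outputs
model = AutoModelForSequenceClassification.from_pretrained("microsoft/deberta-base", num_labels=2)

inputs = tokenizer("DeBERTa improves BERT with disentangled attention.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
```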
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DeBERTa. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. |
A blog post on how to Accelerate Large Model Training using DeepSpeed with DeBERTa.
A blog post on Supercharged Customer Service with Machine Learning with DeBERTa.
[DebertaForSequenceClassification] is supported by this example script and notebook.
[TFDebertaForSequenceClassification] is supported by this example script and notebook.
Text classification task guide |
[DebertaForTokenClassification] is supported by this example script and notebook.
[TFDebertaForTokenClassification] is supported by this example script and notebook.
Token classification chapter of the 🤗 Hugging Face Course.
Byte-Pair Encoding tokenization chapter of the 🤗 Hugging Face Course.
Token classification task guide |
[DebertaForMaskedLM] is supported by this example script and notebook.
[TFDebertaForMaskedLM] is supported by this example script and notebook.
Masked language modeling chapter of the 🤗 Hugging Face Course.
Masked language modeling task guide
[DebertaForQuestionAnswering] is supported by this example script and notebook.
[TFDebertaForQuestionAnswering] is supported by this example script and notebook.
Question answering chapter of the 🤗 Hugging Face Course.
Question answering task guide |
DebertaConfig
[[autodoc]] DebertaConfig
DebertaTokenizer
[[autodoc]] DebertaTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
DebertaTokenizerFast
[[autodoc]] DebertaTokenizerFast
- build_inputs_with_special_tokens
- create_token_type_ids_from_sequences |
DebertaModel
[[autodoc]] DebertaModel
- forward
DebertaPreTrainedModel
[[autodoc]] DebertaPreTrainedModel
DebertaForMaskedLM
[[autodoc]] DebertaForMaskedLM
- forward
DebertaForSequenceClassification
[[autodoc]] DebertaForSequenceClassification
- forward
DebertaForTokenClassification
[[autodoc]] DebertaForTokenClassification
- forward
DebertaForQuestionAnswering
[[autodoc]] DebertaForQuestionAnswering
- forward |
TFDebertaModel
[[autodoc]] TFDebertaModel
- call
TFDebertaPreTrainedModel
[[autodoc]] TFDebertaPreTrainedModel
- call
TFDebertaForMaskedLM
[[autodoc]] TFDebertaForMaskedLM
- call
TFDebertaForSequenceClassification
[[autodoc]] TFDebertaForSequenceClassification
- call
TFDebertaForTokenClassification
[[autodoc]] TFDebertaForTokenClassification
- call
TFDebertaForQuestionAnswering
[[autodoc]] TFDebertaForQuestionAnswering
- call |
Hybrid Vision Transformer (ViT Hybrid)
Overview
The hybrid Vision Transformer (ViT) model was proposed in An Image is Worth 16x16 Words: Transformers for Image Recognition
at Scale by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk
Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob
Uszkoreit, Neil Houlsby. It's the first paper that successfully trains a Transformer encoder on ImageNet, attaining
very good results compared to familiar convolutional architectures. ViT hybrid is a slight variant of the plain Vision Transformer,
by leveraging a convolutional backbone (specifically, BiT) whose features are used as initial "tokens" for the Transformer.
The abstract from the paper is the following:
While the Transformer architecture has become the de-facto standard for natural language processing tasks, its
applications to computer vision remain limited. In vision, attention is either applied in conjunction with
convolutional networks, or used to replace certain components of convolutional networks while keeping their overall
structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to
sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of
data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.),
Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring
substantially fewer computational resources to train.
This model was contributed by nielsr. The original code (written in JAX) can be
found here.
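A minimal image-classification sketch is shown below, assuming the google/vit-hybrid-base-bit-384 checkpoint; the COCO image URL is just a convenient test image.
```python
import requests
import torch
from PIL import Image
from transformers import ViTHybridImageProcessor, ViTHybridForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = ViTHybridImageProcessor.from_pretrained("google/vit-hybrid-base-bit-384")
model = ViTHybridForImageClassification.from_pretrained("google/vit-hybrid-base-bit-384")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# map the highest-scoring class index back to its ImageNet label
predicted_label = model.config.id2label[logits.argmax(-1).item()]
```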
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViT Hybrid. |
[ViTHybridForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide |
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
ViTHybridConfig
[[autodoc]] ViTHybridConfig
ViTHybridImageProcessor
[[autodoc]] ViTHybridImageProcessor
- preprocess
ViTHybridModel
[[autodoc]] ViTHybridModel
- forward
ViTHybridForImageClassification
[[autodoc]] ViTHybridForImageClassification
- forward |
Data2Vec
Overview
The Data2Vec model was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and Michael Auli.
Data2Vec proposes a unified framework for self-supervised learning across different data modalities - text, audio and images.
Importantly, predicted targets for pre-training are contextualized latent representations of the inputs, rather than modality-specific, context-independent targets.
The abstract from the paper is the following:
While the general idea of self-supervised learning is identical across modalities, the actual algorithms and
objectives differ widely because they were developed with a single modality in mind. To get us closer to general
self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech,
NLP or computer vision. The core idea is to predict latent representations of the full input data based on a
masked view of the input in a selfdistillation setup using a standard Transformer architecture.
Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which
are local in nature, data2vec predicts contextualized latent representations that contain information from
the entire input. Experiments on the major benchmarks of speech recognition, image classification, and
natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.
Models and code are available at www.github.com/pytorch/fairseq/tree/master/examples/data2vec.
This model was contributed by edugp and patrickvonplaten.
sayakpaul and Rocketknight1 contributed Data2Vec for vision in TensorFlow.
The original code (for NLP and Speech) can be found here.
The original code for vision can be found here.
Usage tips |
Data2VecAudio, Data2VecText, and Data2VecVision have all been trained using the same self-supervised learning method.
For Data2VecAudio, preprocessing is identical to [Wav2Vec2Model], including feature extraction.
For Data2VecText, preprocessing is identical to [RobertaModel], including tokenization.
For Data2VecVision, preprocessing is identical to [BeitModel], including feature extraction.
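For instance, speech recognition with the audio variant follows the usual Wav2Vec2-style CTC recipe. A minimal sketch, assuming the facebook/data2vec-audio-base-960h ASR checkpoint from the release:
```python
import torch
from datasets import load_dataset
from transformers import AutoProcessor, Data2VecAudioForCTC

processor = AutoProcessor.from_pretrained("facebook/data2vec-audio-base-960h")
model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-base-960h")

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# greedy CTC decoding of the most likely token per frame
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
```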
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Data2Vec.
[Data2VecVisionForImageClassification] is supported by this example script and notebook.
To fine-tune [TFData2VecVisionForImageClassification] on a custom dataset, see this notebook. |
Data2VecText documentation resources
- Text classification task guide
- Token classification task guide
- Question answering task guide
- Causal language modeling task guide
- Masked language modeling task guide
- Multiple choice task guide
Data2VecAudio documentation resources
- Audio classification task guide
- Automatic speech recognition task guide
Data2VecVision documentation resources
- Image classification
- Semantic segmentation
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Data2VecTextConfig
[[autodoc]] Data2VecTextConfig
Data2VecAudioConfig
[[autodoc]] Data2VecAudioConfig
Data2VecVisionConfig
[[autodoc]] Data2VecVisionConfig |
Data2VecAudioModel
[[autodoc]] Data2VecAudioModel
- forward
Data2VecAudioForAudioFrameClassification
[[autodoc]] Data2VecAudioForAudioFrameClassification
- forward
Data2VecAudioForCTC
[[autodoc]] Data2VecAudioForCTC
- forward
Data2VecAudioForSequenceClassification
[[autodoc]] Data2VecAudioForSequenceClassification
- forward
Data2VecAudioForXVector
[[autodoc]] Data2VecAudioForXVector
- forward
Data2VecTextModel
[[autodoc]] Data2VecTextModel
- forward
Data2VecTextForCausalLM
[[autodoc]] Data2VecTextForCausalLM
- forward
Data2VecTextForMaskedLM
[[autodoc]] Data2VecTextForMaskedLM
- forward
Data2VecTextForSequenceClassification
[[autodoc]] Data2VecTextForSequenceClassification
- forward
Data2VecTextForMultipleChoice
[[autodoc]] Data2VecTextForMultipleChoice
- forward
Data2VecTextForTokenClassification
[[autodoc]] Data2VecTextForTokenClassification
- forward
Data2VecTextForQuestionAnswering
[[autodoc]] Data2VecTextForQuestionAnswering
- forward
Data2VecVisionModel
[[autodoc]] Data2VecVisionModel
- forward
Data2VecVisionForImageClassification
[[autodoc]] Data2VecVisionForImageClassification
- forward
Data2VecVisionForSemanticSegmentation
[[autodoc]] Data2VecVisionForSemanticSegmentation
- forward |
TFData2VecVisionModel
[[autodoc]] TFData2VecVisionModel
- call
TFData2VecVisionForImageClassification
[[autodoc]] TFData2VecVisionForImageClassification
- call
TFData2VecVisionForSemanticSegmentation
[[autodoc]] TFData2VecVisionForSemanticSegmentation
- call |
UL2
Overview
The UL2 model was presented in Unifying Language Learning Paradigms by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler.
The abstract from the paper is the following:
Existing pre-trained models are generally geared towards a particular class of problems. To date, there seems to be still no consensus on what the right architecture and pre-training setup should be. This paper presents a unified framework for pre-training models that are universally effective across datasets and setups. We begin by disentangling architectural archetypes with pre-training objectives -- two concepts that are commonly conflated. Next, we present a generalized and unified perspective for self-supervision in NLP and show how different pre-training objectives can be cast as one another and how interpolating between different objectives can be effective. We then propose Mixture-of-Denoisers (MoD), a pre-training objective that combines diverse pre-training paradigms together. We furthermore introduce a notion of mode switching, wherein downstream fine-tuning is associated with specific pre-training schemes. We conduct extensive ablative experiments to compare multiple pre-training objectives and find that our method pushes the Pareto-frontier by outperforming T5 and/or GPT-like models across multiple diverse setups. Finally, by scaling our model up to 20B parameters, we achieve SOTA performance on 50 well-established supervised NLP tasks ranging from language generation (with automated and human evaluation), language understanding, text classification, question answering, commonsense reasoning, long text reasoning, structured knowledge grounding and information retrieval. Our model also achieve strong results at in-context learning, outperforming 175B GPT-3 on zero-shot SuperGLUE and tripling the performance of T5-XXL on one-shot summarization.
This model was contributed by DanielHesslow. The original code can be found here.
Usage tips |
UL2 is an encoder-decoder model pre-trained on a mixture of denoising functions as well as fine-tuned on an array of downstream tasks.
UL2 has the same architecture as T5v1.1 but uses the Gated-SiLU activation function instead of Gated-GELU.
The authors release checkpoints of one architecture which can be seen here.
As UL2 has the same architecture as T5v1.1, refer to T5's documentation page for API reference, tips, code examples and notebooks.
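A minimal generation sketch follows, assuming the released google/ul2 checkpoint (a 20B-parameter model, so it needs substantial memory or an offloaded setup via accelerate) and one of the mode-switching prefixes described in the paper:
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/ul2")
# device_map="auto" (requires accelerate) spreads the 20B weights across available devices
model = T5ForConditionalGeneration.from_pretrained("google/ul2", device_map="auto")

# "[NLG]" is one of UL2's mode-switching prefixes; see the paper for which denoiser each prefix selects
inputs = tokenizer("[NLG] The weather in Paris today is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```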
XLM-RoBERTa |
Overview
The XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume
Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's
RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl
data.
The abstract from the paper is the following:
This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a
wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred
languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly
outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +13.8% average accuracy on
XNLI, +12.3% average F1 score on MLQA, and +2.1% average F1 score on NER. XLM-R performs particularly well on
low-resource languages, improving 11.8% in XNLI accuracy for Swahili and 9.2% for Urdu over the previous XLM model. We
also present a detailed empirical evaluation of the key factors that are required to achieve these gains, including the
trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource
languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing
per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We
will make XLM-R code, data, and models publicly available.
This model was contributed by stefan-it. The original code can be found here.
Usage tips |
XLM-RoBERTa is a multilingual model trained on 100 different languages. Unlike some XLM multilingual models, it does
not require lang tensors to understand which language is used, and should be able to determine the correct
language from the input ids.
Uses RoBERTa tricks on the XLM approach, but does not use the translation language modeling objective. It only uses masked language modeling on sentences coming from one language. |
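A small sketch of that multilingual behaviour, using the fill-mask pipeline with the public FacebookAI/xlm-roberta-base checkpoint (the prompts are arbitrary examples); the same model handles both languages without any language id:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="FacebookAI/xlm-roberta-base")

# English and French prompts go through the exact same model and tokenizer
print(fill_mask("Hello, I'm a <mask> model.")[0]["token_str"])
print(fill_mask("Bonjour, je suis un modèle <mask>.")[0]["token_str"])
```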
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with XLM-RoBERTa. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. |
A blog post on how to finetune XLM RoBERTa for multiclass classification with Habana Gaudi on AWS
[XLMRobertaForSequenceClassification] is supported by this example script and notebook.
[TFXLMRobertaForSequenceClassification] is supported by this example script and notebook.
[FlaxXLMRobertaForSequenceClassification] is supported by this example script and notebook.
Text classification chapter of the 🤗 Hugging Face Task Guides.
Text classification task guide |
[XLMRobertaForTokenClassification] is supported by this example script and notebook.
[TFXLMRobertaForTokenClassification] is supported by this example script and notebook.
[FlaxXLMRobertaForTokenClassification] is supported by this example script.
Token classification chapter of the 🤗 Hugging Face Course.
Token classification task guide |
[XLMRobertaForCausalLM] is supported by this example script and notebook.
Causal language modeling chapter of the 🤗 Hugging Face Task Guides.
Causal language modeling task guide
[XLMRobertaForMaskedLM] is supported by this example script and notebook.
[TFXLMRobertaForMaskedLM] is supported by this example script and notebook.
[FlaxXLMRobertaForMaskedLM] is supported by this example script and notebook.
Masked language modeling chapter of the 🤗 Hugging Face Course.
Masked language modeling |
[XLMRobertaForQuestionAnswering] is supported by this example script and notebook.
[TFXLMRobertaForQuestionAnswering] is supported by this example script and notebook.
[FlaxXLMRobertaForQuestionAnswering] is supported by this example script.
Question answering chapter of the 🤗 Hugging Face Course.
Question answering task guide
Multiple choice |
[XLMRobertaForMultipleChoice] is supported by this example script and notebook.
[TFXLMRobertaForMultipleChoice] is supported by this example script and notebook.
Multiple choice task guide
🚀 Deploy
A blog post on how to Deploy Serverless XLM RoBERTa on AWS Lambda.
This implementation is the same as RoBERTa. Refer to the documentation of RoBERTa for usage examples as well as the information relative to the inputs and outputs. |
XLMRobertaConfig
[[autodoc]] XLMRobertaConfig
XLMRobertaTokenizer
[[autodoc]] XLMRobertaTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
XLMRobertaTokenizerFast
[[autodoc]] XLMRobertaTokenizerFast |
XLMRobertaModel
[[autodoc]] XLMRobertaModel
- forward
XLMRobertaForCausalLM
[[autodoc]] XLMRobertaForCausalLM
- forward
XLMRobertaForMaskedLM
[[autodoc]] XLMRobertaForMaskedLM
- forward
XLMRobertaForSequenceClassification
[[autodoc]] XLMRobertaForSequenceClassification
- forward
XLMRobertaForMultipleChoice
[[autodoc]] XLMRobertaForMultipleChoice
- forward
XLMRobertaForTokenClassification
[[autodoc]] XLMRobertaForTokenClassification
- forward
XLMRobertaForQuestionAnswering
[[autodoc]] XLMRobertaForQuestionAnswering
- forward |