[T5ForConditionalGeneration] is supported by this example script and notebook.
[TFT5ForConditionalGeneration] is supported by this example script and notebook.
Translation task guide
A notebook on how to finetune T5 for question answering with TensorFlow 2. 🌎
A notebook on how to finetune T5 for question answering on a TPU.
🚀 Deploy
- A blog post on how to deploy T5 11B for inference for less than $500.
T5Config
[[autodoc]] T5Config
T5Tokenizer
[[autodoc]] T5Tokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
T5TokenizerFast
[[autodoc]] T5TokenizerFast
T5Model
[[autodoc]] T5Model
- forward
T5ForConditionalGeneration
[[autodoc]] T5ForConditionalGeneration
- forward
T5EncoderModel
[[autodoc]] T5EncoderModel
- forward
T5ForSequenceClassification
[[autodoc]] T5ForSequenceClassification
- forward
T5ForTokenClassification
[[autodoc]] T5ForTokenClassification
- forward
T5ForQuestionAnswering
[[autodoc]] T5ForQuestionAnswering
- forward
TFT5Model
[[autodoc]] TFT5Model
- call
TFT5ForConditionalGeneration
[[autodoc]] TFT5ForConditionalGeneration
- call
TFT5EncoderModel
[[autodoc]] TFT5EncoderModel
- call
FlaxT5Model
[[autodoc]] FlaxT5Model
- call
- encode
- decode
FlaxT5ForConditionalGeneration
[[autodoc]] FlaxT5ForConditionalGeneration
- call
- encode
- decode
FlaxT5EncoderModel
[[autodoc]] FlaxT5EncoderModel
- call
OpenAI GPT2
Overview
The OpenAI GPT-2 model was proposed in Language Models are Unsupervised Multitask Learners by Alec
Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei and Ilya Sutskever from OpenAI. It's a causal (unidirectional)
transformer pretrained using language modeling on a very large corpus of ~40 GB of text data.
The abstract from the paper is the following:
GPT-2 is a large transformer-based language model with 1.5 billion parameters, trained on a dataset[1] of 8 million
web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some
text. The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks
across diverse domains. GPT-2 is a direct scale-up of GPT, with more than 10X the parameters and trained on more than
10X the amount of data.
Write With Transformer is a webapp created and hosted by
Hugging Face showcasing the generative capabilities of several models. GPT-2 is one of them and is available in five
different sizes: small, medium, large, xl and a distilled version of the small checkpoint: distilgpt-2.
This model was contributed by thomwolf. The original code can be found here.
Usage tips
GPT-2 is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than
the left.
GPT-2 was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next
token in a sequence. Leveraging this feature allows GPT-2 to generate syntactically coherent text as it can be
observed in the run_generation.py example script.
The model can take the past_key_values (for PyTorch) or past (for TF) as input, which are the previously computed
key/value attention pairs. Passing this (past_key_values or past) value prevents the model from re-computing
values that were already computed during text generation. For PyTorch, see the past_key_values argument of the
[GPT2Model.forward] method, or for TF the past argument of the
[TFGPT2Model.call] method, for more information on its usage. A minimal sketch follows these tips.
Enabling the scale_attn_by_inverse_layer_idx and reorder_and_upcast_attn flags will apply the training stability
improvements from Mistral (for PyTorch only).
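Below is a minimal sketch of the caching behaviour described above, assuming the gpt2 checkpoint and PyTorch; in practice the generate() method manages the cache automatically since use_cache defaults to True.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("The Manhattan Bridge is", return_tensors="pt")

with torch.no_grad():
    # first pass: returns logits plus the cached key/value pairs for the whole prefix
    outputs = model(**inputs, use_cache=True)
    past_key_values = outputs.past_key_values

    # next pass: feed only the newly chosen token together with the cache,
    # so the prefix is not re-computed
    next_token = outputs.logits[:, -1:].argmax(dim=-1)
    outputs = model(input_ids=next_token, past_key_values=past_key_values, use_cache=True)
```

The scale_attn_by_inverse_layer_idx and reorder_and_upcast_attn flags mentioned above are plain [GPT2Config] arguments, e.g. GPT2Config(scale_attn_by_inverse_layer_idx=True, reorder_and_upcast_attn=True).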
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GPT2. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
A blog on how to Finetune a non-English GPT-2 Model with Hugging Face.
A blog on How to generate text: using different decoding methods for language generation with Transformers with GPT-2.
A blog on Training CodeParrot 🦜 from Scratch, a large GPT-2 model.
A blog on Faster Text Generation with TensorFlow and XLA with GPT-2.
A blog on How to train a Language Model with Megatron-LM with a GPT-2 model.
A notebook on how to finetune GPT2 to generate lyrics in the style of your favorite artist. 🌎
A notebook on how to finetune GPT2 to generate tweets in the style of your favorite Twitter user. 🌎
Causal language modeling chapter of the 🤗 Hugging Face Course.
[GPT2LMHeadModel] is supported by this causal language modeling example script, text generation example script, and notebook.
[TFGPT2LMHeadModel] is supported by this causal language modeling example script and notebook.
[FlaxGPT2LMHeadModel] is supported by this causal language modeling example script and notebook.
Text classification task guide
Token classification task guide
Causal language modeling task guide
GPT2Config
[[autodoc]] GPT2Config
GPT2Tokenizer
[[autodoc]] GPT2Tokenizer
- save_vocabulary
GPT2TokenizerFast
[[autodoc]] GPT2TokenizerFast
GPT2 specific outputs
[[autodoc]] models.gpt2.modeling_gpt2.GPT2DoubleHeadsModelOutput
[[autodoc]] models.gpt2.modeling_tf_gpt2.TFGPT2DoubleHeadsModelOutput
GPT2Model
[[autodoc]] GPT2Model
- forward
GPT2LMHeadModel
[[autodoc]] GPT2LMHeadModel
- forward
GPT2DoubleHeadsModel
[[autodoc]] GPT2DoubleHeadsModel
- forward
GPT2ForQuestionAnswering
[[autodoc]] GPT2ForQuestionAnswering
- forward
GPT2ForSequenceClassification
[[autodoc]] GPT2ForSequenceClassification
- forward
GPT2ForTokenClassification
[[autodoc]] GPT2ForTokenClassification
- forward
TFGPT2Model
[[autodoc]] TFGPT2Model
- call
TFGPT2LMHeadModel
[[autodoc]] TFGPT2LMHeadModel
- call
TFGPT2DoubleHeadsModel
[[autodoc]] TFGPT2DoubleHeadsModel
- call
TFGPT2ForSequenceClassification
[[autodoc]] TFGPT2ForSequenceClassification
- call
TFSequenceClassifierOutputWithPast
[[autodoc]] modeling_tf_outputs.TFSequenceClassifierOutputWithPast
TFGPT2Tokenizer
[[autodoc]] TFGPT2Tokenizer
FlaxGPT2Model
[[autodoc]] FlaxGPT2Model
- call
FlaxGPT2LMHeadModel
[[autodoc]] FlaxGPT2LMHeadModel
- call
BROS
Overview
The BROS model was proposed in BROS: A Pre-trained Language Model Focusing on Text and Layout for Better Key Information Extraction from Documents by Teakgyu Hong, Donghyun Kim, Mingi Ji, Wonseok Hwang, Daehyun Nam, Sungrae Park.
BROS stands for BERT Relying On Spatiality. It is an encoder-only Transformer model that takes a sequence of tokens and their bounding boxes as inputs and outputs a sequence of hidden states. BROS encodes relative spatial information instead of using absolute spatial information.
It is pre-trained with two objectives: a token-masked language modeling objective (TMLM) used in BERT, and a novel area-masked language modeling objective (AMLM).
In TMLM, tokens are randomly masked, and the model predicts the masked tokens using spatial information and other unmasked tokens.
AMLM is a 2D version of TMLM. It randomly masks text tokens and predicts with the same information as TMLM, but it masks text blocks (areas).
BrosForTokenClassification has a simple linear layer on top of BrosModel. It predicts the label of each token.
BrosSpadeEEForTokenClassification has an initial_token_classifier and subsequent_token_classifier on top of BrosModel. initial_token_classifier is used to predict the first token of each entity, and subsequent_token_classifier is used to predict the next token within an entity. BrosSpadeELForTokenClassification has an entity_linker on top of BrosModel. entity_linker is used to predict the relation between two entities.
BrosForTokenClassification and BrosSpadeEEForTokenClassification essentially perform the same job. However, BrosForTokenClassification assumes input tokens are perfectly serialized (which is a very challenging task since they exist in a 2D space), while BrosSpadeEEForTokenClassification allows for more flexibility in handling serialization errors as it predicts the next connection tokens from one token.
BrosSpadeELForTokenClassification performs the intra-entity linking task. It predicts the relation from one token (of one entity) to another token (of another entity) if these two entities share some relation.
BROS achieves comparable or better results on Key Information Extraction (KIE) benchmarks such as FUNSD, SROIE, CORD and SciTSR, without relying on explicit visual features.
The abstract from the paper is the following:
Key information extraction (KIE) from document images requires understanding the contextual and spatial semantics of texts in two-dimensional (2D) space. Many recent studies try to solve the task by developing pre-trained language models focusing on combining visual features from document images with texts and their layout. On the other hand, this paper tackles the problem by going back to the basic: effective combination of text and layout. Specifically, we propose a pre-trained language model, named BROS (BERT Relying On Spatiality), that encodes relative positions of texts in 2D space and learns from unlabeled documents with area-masking strategy. With this optimized training scheme for understanding texts in 2D space, BROS shows comparable or better performance compared to previous methods on four KIE benchmarks (FUNSD, SROIE, CORD, and SciTSR) without relying on visual features. This paper also reveals two real-world challenges in KIE tasks-(1) minimizing the error from incorrect text ordering and (2) efficient learning from fewer downstream examples-and demonstrates the superiority of BROS over previous methods.*
This model was contributed by jinho8345. The original code can be found here.
Usage tips and examples
[~transformers.BrosModel.forward] requires input_ids and bbox (bounding box). Each bounding box should be in (x0, y0, x1, y1) format (top-left corner, bottom-right corner). Obtaining bounding boxes depends on an external OCR system. The x coordinates should be normalized by the document image width, and the y coordinates by the document image height.
```python
import numpy as np


def expand_and_normalize_bbox(bboxes, doc_width, doc_height):
    # here, bboxes is a numpy array of shape (num_boxes, 4) in (x0, y0, x1, y1) format
    # normalize the coordinates to the 0 ~ 1 range
    bboxes[:, [0, 2]] = bboxes[:, [0, 2]] / doc_width
    bboxes[:, [1, 3]] = bboxes[:, [1, 3]] / doc_height
```
[~transformers.BrosForTokenClassification.forward], [~transformers.BrosSpadeEEForTokenClassification.forward] and [~transformers.BrosSpadeELForTokenClassification.forward] require not only input_ids and bbox but also box_first_token_mask for loss calculation. It is a mask to filter out the non-first tokens of each box. You can obtain this mask by saving the start token indices of the bounding boxes when creating input_ids from words. You can make box_first_token_mask with the following code:
```python
import itertools
from typing import List

import numpy as np


def make_box_first_token_mask(bboxes, words, tokenizer, max_seq_length=512):
    box_first_token_mask = np.zeros(max_seq_length, dtype=np.bool_)

    # encode (tokenize) each word from words (List[str])
    input_ids_list: List[List[int]] = [tokenizer.encode(e, add_special_tokens=False) for e in words]

    # get the length of each box
    tokens_length_list: List[int] = [len(l) for l in input_ids_list]

    # compute the start and end token indices of each box
    box_end_token_indices = np.array(list(itertools.accumulate(tokens_length_list)))
    box_start_token_indices = box_end_token_indices - np.array(tokens_length_list)

    # filter out the indices that are out of max_seq_length
    box_end_token_indices = box_end_token_indices[box_end_token_indices < max_seq_length - 1]
    if len(box_start_token_indices) > len(box_end_token_indices):
        box_start_token_indices = box_start_token_indices[: len(box_end_token_indices)]

    # set box_start_token_indices to True
    box_first_token_mask[box_start_token_indices] = True

    return box_first_token_mask
```
Resources
Demo scripts can be found here.
BrosConfig
[[autodoc]] BrosConfig
BrosProcessor
[[autodoc]] BrosProcessor
- call
BrosModel
[[autodoc]] BrosModel
- forward
BrosForTokenClassification
[[autodoc]] BrosForTokenClassification
- forward
BrosSpadeEEForTokenClassification
[[autodoc]] BrosSpadeEEForTokenClassification
- forward
BrosSpadeELForTokenClassification
[[autodoc]] BrosSpadeELForTokenClassification
- forward
RoBERTa
Overview
The RoBERTa model was proposed in RoBERTa: A Robustly Optimized BERT Pretraining Approach by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer
Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. It is based on Google's BERT model released in 2018.
It builds on BERT and modifies key hyperparameters, removing the next-sentence pretraining objective and training with
much larger mini-batches and learning rates.
The abstract from the paper is the following:
Language model pretraining has led to significant performance gains but careful comparison between different
approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes,
and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication
study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and
training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every
model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results
highlight the importance of previously overlooked design choices, and raise questions about the source of recently
reported improvements. We release our models and code.
This model was contributed by julien-c. The original code can be found here.
Usage tips
This implementation is the same as [BertModel] with a tiny embeddings tweak as well as a setup
for Roberta pretrained models.
RoBERTa has the same architecture as BERT, but uses a byte-level BPE as a tokenizer (same as GPT-2) and uses a
different pretraining scheme.
RoBERTa doesn't have token_type_ids, so you don't need to indicate which token belongs to which segment. Just
separate your segments with the separation token tokenizer.sep_token (or </s>); a short sketch follows these tips.
Same as BERT with better pretraining tricks:
dynamic masking: tokens are masked differently at each epoch, whereas BERT does it once and for all
sentences are packed together to reach 512 tokens (so the sentences are in an order that may span several documents)
train with larger batches
use BPE with bytes as a subunit and not characters (because of unicode characters)
CamemBERT is a wrapper around RoBERTa. Refer to this page for usage examples.
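As a minimal sketch of the segment-separation tip above (assuming the roberta-base checkpoint), encoding a text pair lets the tokenizer insert the separator tokens for you:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# encoding a text pair: the tokenizer joins the two segments with </s></s> itself
encoding = tokenizer("This is the first segment.", "This is the second segment.")
print(tokenizer.decode(encoding["input_ids"]))
# expected to look like: <s>This is the first segment.</s></s>This is the second segment.</s>

# equivalently, segments can be joined by hand with the sep token
text = "This is the first segment." + tokenizer.sep_token + "This is the second segment."
```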
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with RoBERTa. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
A blog on Getting Started with Sentiment Analysis on Twitter using RoBERTa and the Inference API.
A blog on Opinion Classification with Kili and Hugging Face AutoTrain using RoBERTa.
A notebook on how to finetune RoBERTa for sentiment analysis. 🌎
[RobertaForSequenceClassification] is supported by this example script and notebook.
[TFRobertaForSequenceClassification] is supported by this example script and notebook.
[FlaxRobertaForSequenceClassification] is supported by this example script and notebook.
Text classification task guide
[RobertaForTokenClassification] is supported by this example script and notebook.
[TFRobertaForTokenClassification] is supported by this example script and notebook.
[FlaxRobertaForTokenClassification] is supported by this example script.
Token classification chapter of the 🤗 Hugging Face Course.
Token classification task guide
A blog on How to train a new language model from scratch using Transformers and Tokenizers with RoBERTa.
[RobertaForMaskedLM] is supported by this example script and notebook.
[TFRobertaForMaskedLM] is supported by this example script and notebook.
[FlaxRobertaForMaskedLM] is supported by this example script and notebook.
Masked language modeling chapter of the 🤗 Hugging Face Course.
Masked language modeling task guide
A blog on Accelerated Inference with Optimum and Transformers Pipelines with RoBERTa for question answering.
[RobertaForQuestionAnswering] is supported by this example script and notebook.
[TFRobertaForQuestionAnswering] is supported by this example script and notebook.
[FlaxRobertaForQuestionAnswering] is supported by this example script.
Question answering chapter of the 🤗 Hugging Face Course.
Question answering task guide
Multiple choice
- [RobertaForMultipleChoice] is supported by this example script and notebook.
- [TFRobertaForMultipleChoice] is supported by this example script and notebook.
- Multiple choice task guide
RobertaConfig
[[autodoc]] RobertaConfig
RobertaTokenizer
[[autodoc]] RobertaTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
RobertaTokenizerFast
[[autodoc]] RobertaTokenizerFast
- build_inputs_with_special_tokens
RobertaModel
[[autodoc]] RobertaModel
- forward
RobertaForCausalLM
[[autodoc]] RobertaForCausalLM
- forward
RobertaForMaskedLM
[[autodoc]] RobertaForMaskedLM
- forward
RobertaForSequenceClassification
[[autodoc]] RobertaForSequenceClassification
- forward
RobertaForMultipleChoice
[[autodoc]] RobertaForMultipleChoice
- forward
RobertaForTokenClassification
[[autodoc]] RobertaForTokenClassification
- forward
RobertaForQuestionAnswering
[[autodoc]] RobertaForQuestionAnswering
- forward
TFRobertaModel
[[autodoc]] TFRobertaModel
- call
TFRobertaForCausalLM
[[autodoc]] TFRobertaForCausalLM
- call
TFRobertaForMaskedLM
[[autodoc]] TFRobertaForMaskedLM
- call
TFRobertaForSequenceClassification
[[autodoc]] TFRobertaForSequenceClassification
- call
TFRobertaForMultipleChoice
[[autodoc]] TFRobertaForMultipleChoice
- call
TFRobertaForTokenClassification
[[autodoc]] TFRobertaForTokenClassification
- call
TFRobertaForQuestionAnswering
[[autodoc]] TFRobertaForQuestionAnswering
- call
FlaxRobertaModel
[[autodoc]] FlaxRobertaModel
- call
FlaxRobertaForCausalLM
[[autodoc]] FlaxRobertaForCausalLM
- call
FlaxRobertaForMaskedLM
[[autodoc]] FlaxRobertaForMaskedLM
- call
FlaxRobertaForSequenceClassification
[[autodoc]] FlaxRobertaForSequenceClassification
- call
FlaxRobertaForMultipleChoice
[[autodoc]] FlaxRobertaForMultipleChoice
- call
FlaxRobertaForTokenClassification
[[autodoc]] FlaxRobertaForTokenClassification
- call
FlaxRobertaForQuestionAnswering
[[autodoc]] FlaxRobertaForQuestionAnswering
- call
Swin Transformer V2
Overview
The Swin Transformer V2 model was proposed in Swin Transformer V2: Scaling Up Capacity and Resolution by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo.
The abstract from the paper is the following:
Large-scale NLP models have been shown to significantly improve the performance on language tasks with no signs of saturation. They also demonstrate amazing few-shot capabilities like that of human beings. This paper aims to explore large-scale models in computer vision. We tackle three major issues in training and application of large vision models, including training instability, resolution gaps between pre-training and fine-tuning, and hunger on labelled data. Three main techniques are proposed: 1) a residual-post-norm method combined with cosine attention to improve training stability; 2) A log-spaced continuous position bias method to effectively transfer models pre-trained using low-resolution images to downstream tasks with high-resolution inputs; 3) A self-supervised pre-training method, SimMIM, to reduce the needs of vast labeled images. Through these techniques, this paper successfully trained a 3 billion-parameter Swin Transformer V2 model, which is the largest dense vision model to date, and makes it capable of training with images of up to 1,536×1,536 resolution. It set new performance records on 4 representative vision tasks, including ImageNet-V2 image classification, COCO object detection, ADE20K semantic segmentation, and Kinetics-400 video action classification. Also note our training is much more efficient than that in Google's billion-level visual models, which consumes 40 times less labelled data and 40 times less training time.
This model was contributed by nandwalritik.
The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Swin Transformer v2.
[Swinv2ForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
Besides that:
[Swinv2ForMaskedImageModeling] is supported by this example script.
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
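As a minimal image-classification sketch (the checkpoint name below is an assumption; any Swin Transformer V2 checkpoint fine-tuned for classification on the Hub can be substituted):

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, Swinv2ForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# assumed checkpoint name; substitute any Swin V2 classification checkpoint
checkpoint = "microsoft/swinv2-tiny-patch4-window8-256"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = Swinv2ForImageClassification.from_pretrained(checkpoint)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```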
Swinv2Config
[[autodoc]] Swinv2Config
Swinv2Model
[[autodoc]] Swinv2Model
- forward
Swinv2ForMaskedImageModeling
[[autodoc]] Swinv2ForMaskedImageModeling
- forward
Swinv2ForImageClassification
[[autodoc]] transformers.Swinv2ForImageClassification
- forward
LED
Overview
The LED model was proposed in Longformer: The Long-Document Transformer by Iz
Beltagy, Matthew E. Peters, Arman Cohan.
The abstract from the paper is the following:
Transformer-based models are unable to process long sequences due to their self-attention operation, which scales
quadratically with the sequence length. To address this limitation, we introduce the Longformer with an attention
mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or
longer. Longformer's attention mechanism is a drop-in replacement for the standard self-attention and combines a local
windowed attention with a task motivated global attention. Following prior work on long-sequence transformers, we
evaluate Longformer on character-level language modeling and achieve state-of-the-art results on text8 and enwik8. In
contrast to most prior work, we also pretrain Longformer and finetune it on a variety of downstream tasks. Our
pretrained Longformer consistently outperforms RoBERTa on long document tasks and sets new state-of-the-art results on
WikiHop and TriviaQA. We finally introduce the Longformer-Encoder-Decoder (LED), a Longformer variant for supporting
long document generative sequence-to-sequence tasks, and demonstrate its effectiveness on the arXiv summarization
dataset.
Usage tips
[LEDForConditionalGeneration] is an extension of
[BartForConditionalGeneration] exchanging the traditional self-attention layer with
Longformer's chunked self-attention layer. [LEDTokenizer] is an alias of
[BartTokenizer].
LED works very well on long-range sequence-to-sequence tasks where the input_ids largely exceed a length of
1024 tokens.
LED pads the input_ids to be a multiple of config.attention_window if required. Therefore, a small speed-up is
gained when [LEDTokenizer] is used with the pad_to_multiple_of argument.
LED makes use of global attention by means of the global_attention_mask (see
[LongformerModel]). For summarization, it is advised to put global attention only on the first
<s> token. For question answering, it is advised to put global attention on all tokens of the question (a sketch follows these tips).
To fine-tune LED on all 16384 tokens, gradient checkpointing can be enabled in case training leads to out-of-memory (OOM)
errors. This can be done by executing model.gradient_checkpointing_enable().
Moreover, the use_cache=False
flag can be used to disable the caching mechanism to save memory.
LED is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than
the left.
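A minimal summarization sketch following the tips above, assuming the allenai/led-base-16384 checkpoint:

```python
import torch
from transformers import LEDTokenizer, LEDForConditionalGeneration

tokenizer = LEDTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")

article = "Replace this with a long document of up to 16384 tokens."
inputs = tokenizer(article, return_tensors="pt")

# put global attention on the first (<s>) token only, as advised above for summarization
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    global_attention_mask=global_attention_mask,
    max_length=128,
)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```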
This model was contributed by patrickvonplaten.
Resources
A notebook showing how to evaluate LED.
A notebook showing how to fine-tune LED.
Text classification task guide
Question answering task guide
Translation task guide
Summarization task guide
LEDConfig
[[autodoc]] LEDConfig
LEDTokenizer
[[autodoc]] LEDTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
LEDTokenizerFast
[[autodoc]] LEDTokenizerFast
LED specific outputs
[[autodoc]] models.led.modeling_led.LEDEncoderBaseModelOutput
[[autodoc]] models.led.modeling_led.LEDSeq2SeqModelOutput
[[autodoc]] models.led.modeling_led.LEDSeq2SeqLMOutput
[[autodoc]] models.led.modeling_led.LEDSeq2SeqSequenceClassifierOutput
[[autodoc]] models.led.modeling_led.LEDSeq2SeqQuestionAnsweringModelOutput
[[autodoc]] models.led.modeling_tf_led.TFLEDEncoderBaseModelOutput
[[autodoc]] models.led.modeling_tf_led.TFLEDSeq2SeqModelOutput
[[autodoc]] models.led.modeling_tf_led.TFLEDSeq2SeqLMOutput
LEDModel
[[autodoc]] LEDModel
- forward
LEDForConditionalGeneration
[[autodoc]] LEDForConditionalGeneration
- forward
LEDForSequenceClassification
[[autodoc]] LEDForSequenceClassification
- forward
LEDForQuestionAnswering
[[autodoc]] LEDForQuestionAnswering
- forward
TFLEDModel
[[autodoc]] TFLEDModel
- call
TFLEDForConditionalGeneration
[[autodoc]] TFLEDForConditionalGeneration
- call
UPerNet
Overview
The UPerNet model was proposed in Unified Perceptual Parsing for Scene Understanding
by Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun. UPerNet is a general framework to effectively segment
a wide range of concepts from images, leveraging any vision backbone like ConvNeXt or Swin.
The abstract from the paper is the following:
Humans recognize the visual world at multiple levels: we effortlessly categorize scenes and detect objects inside, while also identifying the textures and surfaces of the objects along with their different compositional parts. In this paper, we study a new task called Unified Perceptual Parsing, which requires the machine vision systems to recognize as many visual concepts as possible from a given image. A multi-task framework called UPerNet and a training strategy are developed to learn from heterogeneous image annotations. We benchmark our framework on Unified Perceptual Parsing and show that it is able to effectively segment a wide range of concepts from images. The trained networks are further applied to discover visual knowledge in natural scenes. |
UPerNet framework. Taken from the original paper.
This model was contributed by nielsr. The original code is based on OpenMMLab's mmsegmentation here.
Usage examples
UPerNet is a general framework for semantic segmentation. It can be used with any vision backbone, like so:
```python
from transformers import SwinConfig, UperNetConfig, UperNetForSemanticSegmentation

backbone_config = SwinConfig(out_features=["stage1", "stage2", "stage3", "stage4"])
config = UperNetConfig(backbone_config=backbone_config)
model = UperNetForSemanticSegmentation(config)
```

To use another vision backbone, like ConvNeXt, simply instantiate the model with the appropriate backbone:

```python
from transformers import ConvNextConfig, UperNetConfig, UperNetForSemanticSegmentation

backbone_config = ConvNextConfig(out_features=["stage1", "stage2", "stage3", "stage4"])
config = UperNetConfig(backbone_config=backbone_config)
model = UperNetForSemanticSegmentation(config)
```
Note that this will randomly initialize all the weights of the model.
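For inference with pre-trained weights rather than a randomly initialized model, a minimal sketch looks as follows (the checkpoint name is an assumption; substitute any UperNet checkpoint from the Hub):

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, UperNetForSemanticSegmentation

# assumed checkpoint; substitute any UperNet checkpoint from the Hub
checkpoint = "openmmlab/upernet-convnext-tiny"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = UperNetForSemanticSegmentation.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# most segmentation image processors expose post_process_semantic_segmentation,
# which returns a per-pixel label map at the original image size
segmentation = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
```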
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with UPerNet.
Demo notebooks for UPerNet can be found here.
[UperNetForSemanticSegmentation] is supported by this example script and notebook.
See also: Semantic segmentation task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
UperNetConfig
[[autodoc]] UperNetConfig
UperNetForSemanticSegmentation
[[autodoc]] UperNetForSemanticSegmentation
- forward
Blenderbot
Overview
The Blender chatbot model was proposed in Recipes for building an open-domain chatbot by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu,
Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston on 30 Apr 2020.
The abstract of the paper is the following:
Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that
scaling neural models in the number of parameters and the size of the data they are trained on gives improved results,
we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of
skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to
their partners, and displaying knowledge, empathy and personality appropriately, while maintaining a consistent
persona. We show that large scale models can learn these skills when given appropriate training data and choice of
generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter models, and make our models
and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn
dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing
failure cases of our models.
This model was contributed by sshleifer. The authors' code can be found here.
Usage tips and example
Blenderbot is a model with absolute position embeddings so it's usually advised to pad the inputs on the right
rather than the left.
An example:
```python
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration
mname = "facebook/blenderbot-400M-distill"
model = BlenderbotForConditionalGeneration.from_pretrained(mname)
tokenizer = BlenderbotTokenizer.from_pretrained(mname)
UTTERANCE = "My friends are cool but they eat too many carbs."
inputs = tokenizer([UTTERANCE], return_tensors="pt")
reply_ids = model.generate(**inputs)
print(tokenizer.batch_decode(reply_ids))
[" That's unfortunate. Are they trying to lose weight or are they just trying to be healthier?"] |
Implementation Notes
Blenderbot uses a standard seq2seq model transformer based architecture.
Available checkpoints can be found in the model hub.
This is the default Blenderbot model class. However, some smaller checkpoints, such as
facebook/blenderbot_small_90M, have a different architecture and consequently should be used with
BlenderbotSmall.
Resources
Causal language modeling task guide
Translation task guide
Summarization task guide
BlenderbotConfig
[[autodoc]] BlenderbotConfig
BlenderbotTokenizer
[[autodoc]] BlenderbotTokenizer
- build_inputs_with_special_tokens
BlenderbotTokenizerFast
[[autodoc]] BlenderbotTokenizerFast
- build_inputs_with_special_tokens
BlenderbotModel
See [~transformers.BartModel] for arguments to forward and generate
[[autodoc]] BlenderbotModel
- forward
BlenderbotForConditionalGeneration
See [~transformers.BartForConditionalGeneration] for arguments to forward and generate
[[autodoc]] BlenderbotForConditionalGeneration
- forward
BlenderbotForCausalLM
[[autodoc]] BlenderbotForCausalLM
- forward
TFBlenderbotModel
[[autodoc]] TFBlenderbotModel
- call
TFBlenderbotForConditionalGeneration
[[autodoc]] TFBlenderbotForConditionalGeneration
- call
FlaxBlenderbotModel
[[autodoc]] FlaxBlenderbotModel
- call
- encode
- decode
FlaxBlenderbotForConditionalGeneration
[[autodoc]] FlaxBlenderbotForConditionalGeneration
- call
- encode
- decode
Splinter
Overview
The Splinter model was proposed in Few-Shot Question Answering by Pretraining Span Selection by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy. Splinter
is an encoder-only transformer (similar to BERT) pretrained using the recurring span selection task on a large corpus
comprising Wikipedia and the Toronto Book Corpus.
The abstract from the paper is the following:
In several question answering benchmarks, pretrained models have reached human parity through fine-tuning on an order
of 100,000 annotated questions and answers. We explore the more realistic few-shot setting, where only a few hundred
training examples are available, and observe that standard models perform poorly, highlighting the discrepancy between
current pretraining objectives and question answering. We propose a new pretraining scheme tailored for question
answering: recurring span selection. Given a passage with multiple sets of recurring spans, we mask in each set all
recurring spans but one, and ask the model to select the correct span in the passage for each masked span. Masked spans
are replaced with a special token, viewed as a question representation, that is later used during fine-tuning to select
the answer span. The resulting model obtains surprisingly good results on multiple benchmarks (e.g., 72.7 F1 on SQuAD
with only 128 training examples), while maintaining competitive performance in the high-resource setting.
This model was contributed by yuvalkirstain and oriram. The original code can be found here.
Usage tips
Splinter was trained to predict answer spans conditioned on a special [QUESTION] token. These tokens contextualize
to question representations which are used to predict the answers. This layer is called QASS, and is the default
behaviour in the [SplinterForQuestionAnswering] class (a short sketch follows these tips). Therefore:
Use [SplinterTokenizer] (rather than [BertTokenizer]), as it already
contains this special token. Also, its default behavior is to use this token when two sequences are given (for
example, in the run_qa.py script).
If you plan on using Splinter outside run_qa.py, please keep in mind the question token - it might be important for
the success of your model, especially in a few-shot setting.
Please note there are two different checkpoints for each size of Splinter. Both are basically the same, except that
one also has the pretrained weights of the QASS layer (tau/splinter-base-qass and tau/splinter-large-qass) and one
doesn't (tau/splinter-base and tau/splinter-large). This is done to support randomly initializing this layer at
fine-tuning, as it is shown to yield better results for some cases in the paper.
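A minimal extractive-QA sketch using the pre-trained QASS head, assuming the tau/splinter-base-qass checkpoint mentioned above:

```python
import torch
from transformers import SplinterTokenizer, SplinterForQuestionAnswering

tokenizer = SplinterTokenizer.from_pretrained("tau/splinter-base-qass")
model = SplinterForQuestionAnswering.from_pretrained("tau/splinter-base-qass")

question = "Who proposed the Splinter model?"
context = "The Splinter model was proposed by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson and Omer Levy."

# per the tip above, the tokenizer uses the [QUESTION] token when two sequences are given
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

start = outputs.start_logits.argmax(-1).item()
end = outputs.end_logits.argmax(-1).item()
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
```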
Resources
Question answering task guide
SplinterConfig
[[autodoc]] SplinterConfig
SplinterTokenizer
[[autodoc]] SplinterTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
SplinterTokenizerFast
[[autodoc]] SplinterTokenizerFast
SplinterModel
[[autodoc]] SplinterModel
- forward
SplinterForQuestionAnswering
[[autodoc]] SplinterForQuestionAnswering
- forward
SplinterForPreTraining
[[autodoc]] SplinterForPreTraining
- forward
SEW-D
Overview
SEW-D (Squeezed and Efficient Wav2Vec with Disentangled attention) was proposed in Performance-Efficiency Trade-offs
in Unsupervised Pre-training for Speech Recognition by Felix Wu, Kwangyoun Kim,
Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
The abstract from the paper is the following:
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition
(ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance
and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a
pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a
variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x
inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference
time, SEW reduces word error rate by 25-50% across different model sizes.
This model was contributed by anton-l.
Usage tips
SEW-D is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
SEWDForCTC is fine-tuned using connectionist temporal classification (CTC), so the model output has to be decoded
using [Wav2Vec2CTCTokenizer]; a short sketch follows these tips.
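A minimal transcription sketch, assuming the asapp/sew-d-tiny-100k-ft-ls100h checkpoint (an assumption; any SEW-D checkpoint fine-tuned with CTC works) and 16 kHz mono audio:

```python
import numpy as np
import torch
from transformers import AutoProcessor, SEWDForCTC

# assumed checkpoint name; substitute any SEW-D checkpoint fine-tuned with CTC
checkpoint = "asapp/sew-d-tiny-100k-ft-ls100h"
processor = AutoProcessor.from_pretrained(checkpoint)
model = SEWDForCTC.from_pretrained(checkpoint)

# placeholder waveform: one second of silence; use a real 16 kHz mono recording here
speech = np.zeros(16_000, dtype=np.float32)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```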
Resources
Audio classification task guide
Automatic speech recognition task guide
SEWDConfig
[[autodoc]] SEWDConfig
SEWDModel
[[autodoc]] SEWDModel
- forward
SEWDForCTC
[[autodoc]] SEWDForCTC
- forward
SEWDForSequenceClassification
[[autodoc]] SEWDForSequenceClassification
- forward
TAPAS
Overview
The TAPAS model was proposed in TAPAS: Weakly Supervised Table Parsing via Pre-training
by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos. It's a BERT-based model specifically
designed (and pre-trained) for answering questions about tabular data. Compared to BERT, TAPAS uses relative position embeddings and has 7
token types that encode tabular structure. TAPAS is pre-trained on the masked language modeling (MLM) objective on a large dataset comprising
millions of tables from English Wikipedia and corresponding texts.
For question answering, TAPAS has 2 heads on top: a cell selection head and an aggregation head, for (optionally) performing aggregations (such as counting or summing) among selected cells. TAPAS has been fine-tuned on several datasets:
- SQA (Sequential Question Answering by Microsoft)
- WTQ (Wiki Table Questions by Stanford University)
- WikiSQL (by Salesforce).
It achieves state-of-the-art on both SQA and WTQ, while having comparable performance to SOTA on WikiSQL, with a much simpler architecture.
The abstract from the paper is the following:
Answering natural language questions over tables is usually seen as a semantic parsing task. To alleviate the collection cost of full logical forms, one popular approach focuses on weak supervision consisting of denotations instead of logical forms. However, training semantic parsers from weak supervision poses difficulties, and in addition, the generated logical forms are only used as an intermediate step prior to retrieving the denotation. In this paper, we present TAPAS, an approach to question answering over tables without generating logical forms. TAPAS trains from weak supervision, and predicts the denotation by selecting table cells and optionally applying a corresponding aggregation operator to such selection. TAPAS extends BERT's architecture to encode tables as input, initializes from an effective joint pre-training of text segments and tables crawled from Wikipedia, and is trained end-to-end. We experiment with three different semantic parsing datasets, and find that TAPAS outperforms or rivals semantic parsing models by improving state-of-the-art accuracy on SQA from 55.1 to 67.2 and performing on par with the state-of-the-art on WIKISQL and WIKITQ, but with a simpler model architecture. We additionally find that transfer learning, which is trivial in our setting, from WIKISQL to WIKITQ, yields 48.7 accuracy, 4.2 points above the state-of-the-art.
In addition, the authors have further pre-trained TAPAS to recognize table entailment, by creating a balanced dataset of millions of automatically created training examples which are learned in an intermediate step prior to fine-tuning. The authors of TAPAS call this further pre-training intermediate pre-training (since TAPAS is first pre-trained on MLM, and then on another dataset). They found that intermediate pre-training further improves performance on SQA, achieving a new state-of-the-art as well as state-of-the-art on TabFact, a large-scale dataset with 16k Wikipedia tables for table entailment (a binary classification task). For more details, see their follow-up paper: Understanding tables with intermediate pre-training by Julian Martin Eisenschlos, Syrine Krichene and Thomas Müller.
TAPAS architecture. Taken from the original blog post.
This model was contributed by nielsr. The Tensorflow version of this model was contributed by kamalkraj. The original code can be found here.
Usage tips
TAPAS is a model that uses relative position embeddings by default (restarting the position embeddings at every cell of the table). Note that this is something that was added after the publication of the original TAPAS paper. According to the authors, this usually results in a slightly better performance, and allows you to encode longer sequences without running out of embeddings. This is reflected in the reset_position_index_per_cell parameter of [TapasConfig], which is set to True by default. The default versions of the models available on the hub all use relative position embeddings. You can still use the ones with absolute position embeddings by passing in an additional argument revision="no_reset" when calling the from_pretrained() method. Note that it's usually advised to pad the inputs on the right rather than the left.
TAPAS is based on BERT, so TAPAS-base for example corresponds to a BERT-base architecture. Of course, TAPAS-large will result in the best performance (the results reported in the paper are from TAPAS-large). Results of the various sized models are shown on the original GitHub repository.
TAPAS has checkpoints fine-tuned on SQA, which are capable of answering questions related to a table in a conversational set-up. This means that you can ask follow-up questions such as "what is his age?" related to the previous question. Note that the forward pass of TAPAS is a bit different in case of a conversational set-up: in that case, you have to feed every table-question pair one by one to the model, such that the prev_labels token type ids can be overwritten by the predicted labels of the model to the previous question. See "Usage" section for more info.
TAPAS is similar to BERT and therefore relies on the masked language modeling (MLM) objective. It is therefore efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation. Models trained with a causal language modeling (CLM) objective are better in that regard. Note that TAPAS can be used as an encoder in the EncoderDecoderModel framework, to combine it with an autoregressive text decoder such as GPT-2. |
Usage: fine-tuning
Here we explain how you can fine-tune [TapasForQuestionAnswering] on your own dataset.
STEP 1: Choose one of the 3 ways in which you can use TAPAS - or experiment
Basically, there are 3 different ways in which one can fine-tune [TapasForQuestionAnswering], corresponding to the different datasets on which Tapas was fine-tuned:
SQA: if you're interested in asking follow-up questions related to a table, in a conversational set-up. For example if you first ask "what's the name of the first actor?" then you can ask a follow-up question such as "how old is he?". Here, questions do not involve any aggregation (all questions are cell selection questions).
WTQ: if you're not interested in asking questions in a conversational set-up, but rather just asking questions related to a table, which might involve aggregation, such as counting a number of rows, summing up cell values or averaging cell values. You can then for example ask "what's the total number of goals Cristiano Ronaldo made in his career?". This case is also called weak supervision, since the model itself must learn the appropriate aggregation operator (SUM/COUNT/AVERAGE/NONE) given only the answer to the question as supervision.
WikiSQL-supervised: this dataset is based on WikiSQL with the model being given the ground truth aggregation operator during training. This is also called strong supervision. Here, learning the appropriate aggregation operator is much easier. |
To summarize:
| Task | Example dataset | Description |
|-------------------------------------|---------------------|---------------------------------------------------------------------------------------------------------|
| Conversational | SQA | Conversational, only cell selection questions |
| Weak supervision for aggregation | WTQ | Questions might involve aggregation, and the model must learn this given only the answer as supervision |
| Strong supervision for aggregation | WikiSQL-supervised | Questions might involve aggregation, and the model must learn this given the gold aggregation operator | |
Initializing a model with a pre-trained base and randomly initialized classification heads from the hub can be done as shown below.
```python
from transformers import TapasConfig, TapasForQuestionAnswering

# for example, the base sized model with default SQA configuration
model = TapasForQuestionAnswering.from_pretrained("google/tapas-base")

# or, the base sized model with WTQ configuration
config = TapasConfig.from_pretrained("google/tapas-base-finetuned-wtq")
model = TapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)

# or, the base sized model with WikiSQL configuration
config = TapasConfig.from_pretrained("google/tapas-base-finetuned-wikisql-supervised")
model = TapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)
```
Of course, you don't necessarily have to follow one of these three ways in which TAPAS was fine-tuned. You can also experiment by defining any hyperparameters you want when initializing [TapasConfig], and then create a [TapasForQuestionAnswering] based on that configuration. For example, if you have a dataset that has both conversational questions and questions that might involve aggregation, then you can do it this way. Here's an example: |
```python
from transformers import TapasConfig, TapasForQuestionAnswering

# you can initialize the classification heads any way you want (see docs of TapasConfig)
config = TapasConfig(num_aggregation_labels=3, average_logits_per_cell=True)
# initializing the pre-trained base sized model with our custom classification heads
model = TapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)
```
Initializing a model with a pre-trained base and randomly initialized classification heads from the hub can be done as shown below. Be sure to have installed the tensorflow_probability dependency:
```python
from transformers import TapasConfig, TFTapasForQuestionAnswering

# for example, the base sized model with default SQA configuration
model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base")

# or, the base sized model with WTQ configuration
config = TapasConfig.from_pretrained("google/tapas-base-finetuned-wtq")
model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)

# or, the base sized model with WikiSQL configuration
config = TapasConfig.from_pretrained("google/tapas-base-finetuned-wikisql-supervised")
model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)
```
Of course, you don't necessarily have to follow one of these three ways in which TAPAS was fine-tuned. You can also experiment by defining any hyperparameters you want when initializing [TapasConfig], and then create a [TFTapasForQuestionAnswering] based on that configuration. For example, if you have a dataset that has both conversational questions and questions that might involve aggregation, then you can do it this way. Here's an example: |
```python
from transformers import TapasConfig, TFTapasForQuestionAnswering

# you can initialize the classification heads any way you want (see docs of TapasConfig)
config = TapasConfig(num_aggregation_labels=3, average_logits_per_cell=True)
# initializing the pre-trained base sized model with our custom classification heads
model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)
```
What you can also do is start from an already fine-tuned checkpoint. A note here is that the already fine-tuned checkpoint on WTQ has some issues due to the L2-loss which is somewhat brittle. See here for more info.
For a list of all pre-trained and fine-tuned TAPAS checkpoints available on HuggingFace's hub, see here.
STEP 2: Prepare your data in the SQA format
Second, no matter what you picked above, you should prepare your dataset in the SQA format. This format is a TSV/CSV file with the following columns:
id: optional, id of the table-question pair, for bookkeeping purposes.
annotator: optional, id of the person who annotated the table-question pair, for bookkeeping purposes.
position: integer indicating if the question is the first, second, third, etc. related to the table. Only required in case of conversational setup (SQA). You don't need this column in case you're going for WTQ/WikiSQL-supervised.
question: string
table_file: string, name of a csv file containing the tabular data
answer_coordinates: list of one or more tuples (each tuple being a cell coordinate, i.e. row, column pair that is part of the answer)
answer_text: list of one or more strings (each string being a cell value that is part of the answer)
aggregation_label: index of the aggregation operator. Only required in case of strong supervision for aggregation (the WikiSQL-supervised case)
float_answer: the float answer to the question, if there is one (np.nan if there isn't). Only required in case of weak supervision for aggregation (such as WTQ and WikiSQL). An illustrative row is sketched after this list.
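As an illustration only (the values below are made up), a single row of such a file could be written like this:

```python
import pandas as pd

# one hypothetical table-question pair in the SQA format described above
row = {
    "id": "nt-1",
    "annotator": "0",
    "position": 0,  # first question about this table (only needed for the conversational/SQA case)
    "question": "What is the name of the first actor?",
    "table_file": "table_001.csv",  # csv file living in the folder with all tables
    "answer_coordinates": [(0, 0)],  # (row, column) pairs
    "answer_text": ["Brad Pitt"],
    "aggregation_label": 0,  # only needed for the WikiSQL-supervised (strong supervision) case
    "float_answer": float("nan"),  # only needed for the weak supervision case (WTQ / WikiSQL)
}

pd.DataFrame([row]).to_csv("train.tsv", sep="\t", index=False)
```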
The tables themselves should be present in a folder, each table being a separate csv file. Note that the authors of the TAPAS algorithm used conversion scripts with some automated logic to convert the other datasets (WTQ, WikiSQL) into the SQA format. The author explains this here. A conversion of this script that works with HuggingFace's implementation can be found here. Interestingly, these conversion scripts are not perfect (the answer_coordinates and float_answer fields are populated based on the answer_text), meaning that WTQ and WikiSQL results could actually be improved.
STEP 3: Convert your data into tensors using TapasTokenizer
Third, given that you've prepared your data in this TSV/CSV format (and corresponding CSV files containing the tabular data), you can then use [TapasTokenizer] to convert table-question pairs into input_ids, attention_mask, token_type_ids and so on. Again, based on which of the three cases you picked above, [TapasForQuestionAnswering] requires different
inputs to be fine-tuned:
| Task | Required inputs |
|------------------------------------|---------------------------------------------------------------------------------------------------------------------|
| Conversational | input_ids, attention_mask, token_type_ids, labels |
| Weak supervision for aggregation | input_ids, attention_mask, token_type_ids, labels, numeric_values, numeric_values_scale, float_answer |
| Strong supervision for aggregation | input_ids, attention_mask, token_type_ids, labels, aggregation_labels |
[TapasTokenizer] creates the labels, numeric_values and numeric_values_scale based on the answer_coordinates and answer_text columns of the TSV file. The float_answer and aggregation_labels are already in the TSV file of step 2. Here's an example: |
```python
from transformers import TapasTokenizer
import pandas as pd
model_name = "google/tapas-base"
tokenizer = TapasTokenizer.from_pretrained(model_name)
data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
queries = [
"What is the name of the first actor?",
"How many movies has George Clooney played in?",
"What is the total number of movies?",
]
answer_coordinates = [[(0, 0)], [(2, 1)], [(0, 1), (1, 1), (2, 1)]]
answer_text = [["Brad Pitt"], ["69"], ["209"]]
table = pd.DataFrame.from_dict(data)
inputs = tokenizer(
table=table,
queries=queries,
answer_coordinates=answer_coordinates,
answer_text=answer_text,
padding="max_length",
return_tensors="pt",
)
inputs
{'input_ids': tensor([[ ]]), 'attention_mask': tensor([[]]), 'token_type_ids': tensor([[[]]]),
'numeric_values': tensor([[ ]]), 'numeric_values_scale': tensor([[ ]]), 'labels': tensor([[ ]])}
```
Note that [TapasTokenizer] expects the data of the table to be text-only. You can use .astype(str) on a dataframe to turn it into text-only data.
Of course, this only shows how to encode a single training example. It is advised to create a dataloader to iterate over batches:
```python
import torch
import pandas as pd

tsv_path = "your_path_to_the_tsv_file"
table_csv_path = "your_path_to_a_directory_containing_all_csv_files"


class TableDataset(torch.utils.data.Dataset):
    def __init__(self, data, tokenizer):
        self.data = data
        self.tokenizer = tokenizer

    def __getitem__(self, idx):
        item = self.data.iloc[idx]
        table = pd.read_csv(table_csv_path + item.table_file).astype(
            str
        )  # be sure to make your table data text only
        encoding = self.tokenizer(
            table=table,
            queries=item.question,
            answer_coordinates=item.answer_coordinates,
            answer_text=item.answer_text,
            truncation=True,
            padding="max_length",
            return_tensors="pt",
        )
        # remove the batch dimension which the tokenizer adds by default
        encoding = {key: val.squeeze(0) for key, val in encoding.items()}
        # add the float_answer which is also required (weak supervision for aggregation case)
        encoding["float_answer"] = torch.tensor(item.float_answer)
        return encoding

    def __len__(self):
        return len(self.data)


data = pd.read_csv(tsv_path, sep="\t")
train_dataset = TableDataset(data, tokenizer)
train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=32)
```
For the TensorFlow model, the procedure is analogous: given your data in this TSV/CSV format (and corresponding CSV files containing the tabular data), you can use [TapasTokenizer] to convert table-question pairs into input_ids, attention_mask, token_type_ids and so on. Again, based on which of the three cases you picked above, [TFTapasForQuestionAnswering] requires different
inputs to be fine-tuned:
| Task | Required inputs |
|------------------------------------|---------------------------------------------------------------------------------------------------------------------|
| Conversational | input_ids, attention_mask, token_type_ids, labels |
| Weak supervision for aggregation | input_ids, attention_mask, token_type_ids, labels, numeric_values, numeric_values_scale, float_answer |
| Strong supervision for aggregation | input_ids, attention_mask, token_type_ids, labels, aggregation_labels |
[TapasTokenizer] creates the labels, numeric_values and numeric_values_scale based on the answer_coordinates and answer_text columns of the TSV file. The float_answer and aggregation_labels are already in the TSV file of step 2. Here's an example: |
```python
from transformers import TapasTokenizer
import pandas as pd
model_name = "google/tapas-base"
tokenizer = TapasTokenizer.from_pretrained(model_name)
data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
queries = [
"What is the name of the first actor?",
"How many movies has George Clooney played in?",
"What is the total number of movies?",
]
answer_coordinates = [[(0, 0)], [(2, 1)], [(0, 1), (1, 1), (2, 1)]]
answer_text = [["Brad Pitt"], ["69"], ["209"]]
table = pd.DataFrame.from_dict(data)
inputs = tokenizer(
table=table,
queries=queries,
answer_coordinates=answer_coordinates,
answer_text=answer_text,
padding="max_length",
return_tensors="tf",
)
inputs
{'input_ids': tensor([[ ]]), 'attention_mask': tensor([[]]), 'token_type_ids': tensor([[[]]]),
'numeric_values': tensor([[ ]]), 'numeric_values_scale': tensor([[ ]]), 'labels': tensor([[ ]])}
```
Note that [TapasTokenizer] expects the data of the table to be text-only. You can use .astype(str) on a dataframe to turn it into text-only data.
Of course, this only shows how to encode a single training example. It is advised to create a dataloader to iterate over batches:
import tensorflow as tf
import pandas as pd
tsv_path = "your_path_to_the_tsv_file"
table_csv_path = "your_path_to_a_directory_containing_all_csv_files"
class TableDataset:
def init(self, data, tokenizer):
self.data = data
self.tokenizer = tokenizer |
def iter(self):
for idx in range(self.len()):
item = self.data.iloc[idx]
table = pd.read_csv(table_csv_path + item.table_file).astype(
str
) # be sure to make your table data text only
encoding = self.tokenizer(
table=table,
queries=item.question,
answer_coordinates=item.answer_coordinates,
answer_text=item.answer_text,
truncation=True,
padding="max_length",
return_tensors="tf",
)
# remove the batch dimension which the tokenizer adds by default
encoding = {key: tf.squeeze(val, 0) for key, val in encoding.items()}
# add the float_answer which is also required (weak supervision for aggregation case)
encoding["float_answer"] = tf.convert_to_tensor(item.float_answer, dtype=tf.float32)
yield encoding["input_ids"], encoding["attention_mask"], encoding["numeric_values"], encoding[
"numeric_values_scale"
], encoding["token_type_ids"], encoding["labels"], encoding["float_answer"]
def __len__(self):
    return len(self.data)
data = pd.read_csv(tsv_path, sep="\t")
train_dataset = TableDataset(data, tokenizer)
output_signature = (
    tf.TensorSpec(shape=(512,), dtype=tf.int32),    # input_ids
    tf.TensorSpec(shape=(512,), dtype=tf.int32),    # attention_mask
    tf.TensorSpec(shape=(512,), dtype=tf.float32),  # numeric_values
    tf.TensorSpec(shape=(512,), dtype=tf.float32),  # numeric_values_scale
    tf.TensorSpec(shape=(512, 7), dtype=tf.int32),  # token_type_ids (7 token type ids per token)
    tf.TensorSpec(shape=(512,), dtype=tf.int32),    # labels
    tf.TensorSpec(shape=(512,), dtype=tf.float32),  # float_answer
)
train_dataloader = tf.data.Dataset.from_generator(train_dataset, output_signature=output_signature).batch(32)
Note that here, we encode each table-question pair independently. This is fine as long as your dataset is not conversational. If your dataset involves conversational questions (such as in SQA), you should first group together the queries, answer_coordinates and answer_text per table (in the order of their position
index) and batch encode each table with its questions, as in the sketch below. This makes sure that the prev_labels token types (see the docs of [TapasTokenizer]) are set correctly. See this notebook for more info, and this notebook for more info on using the TensorFlow model.
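As an illustration only, here is a minimal sketch of that grouping step. It assumes the TSV columns described in step 2 (question, table_file, position, answer_coordinates, answer_text) and that the coordinate and text columns have already been parsed into Python lists (e.g. with ast.literal_eval); adapt the paths and column names to your own data.

import pandas as pd
from transformers import TapasTokenizer

tokenizer = TapasTokenizer.from_pretrained("google/tapas-base")
tsv_path = "your_path_to_the_tsv_file"
table_csv_path = "your_path_to_a_directory_containing_all_csv_files"
data = pd.read_csv(tsv_path, sep="\t")

# group all questions that refer to the same table and keep them in conversational order,
# so that the tokenizer can set the prev_labels token types from the answers to the previous questions
for table_file, group in data.groupby("table_file"):
    group = group.sort_values("position")
    table = pd.read_csv(table_csv_path + table_file).astype(str)  # table data must be text-only
    encoding = tokenizer(
        table=table,
        queries=list(group.question),
        answer_coordinates=list(group.answer_coordinates),
        answer_text=list(group.answer_text),
        truncation=True,
        padding="max_length",
    )  # pass return_tensors="pt" or "tf" here depending on your framework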
**STEP 4: Train (fine-tune) the model**
You can then fine-tune [TapasForQuestionAnswering] as follows (shown here for the weak supervision for aggregation case):
from transformers import TapasConfig, TapasForQuestionAnswering, AdamW
# this is the default WTQ configuration
config = TapasConfig(
num_aggregation_labels=4,
use_answer_as_supervision=True,
answer_loss_cutoff=0.664694,
cell_selection_preference=0.207951,
huber_loss_delta=0.121194,
init_cell_selection_weights_to_zero=True,
select_one_column=True,
allow_empty_column_selection=False,
temperature=0.0352513,
)
model = TapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)
optimizer = AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(2): # loop over the dataset multiple times
for batch in train_dataloader:
# get the inputs;
input_ids = batch["input_ids"]
attention_mask = batch["attention_mask"]
token_type_ids = batch["token_type_ids"]
labels = batch["labels"]
numeric_values = batch["numeric_values"]
numeric_values_scale = batch["numeric_values_scale"]
float_answer = batch["float_answer"]
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = model(
input_ids=input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
labels=labels,
numeric_values=numeric_values,
numeric_values_scale=numeric_values_scale,
float_answer=float_answer,
)
loss = outputs.loss
loss.backward()
optimizer.step()
</pt>
<tf>
You can then fine-tune [TFTapasForQuestionAnswering] as follows (shown here for the weak supervision for aggregation case):
import tensorflow as tf
from transformers import TapasConfig, TFTapasForQuestionAnswering
# this is the default WTQ configuration
config = TapasConfig(
num_aggregation_labels=4,
use_answer_as_supervision=True,
answer_loss_cutoff=0.664694,
cell_selection_preference=0.207951,
huber_loss_delta=0.121194,
init_cell_selection_weights_to_zero=True,
select_one_column=True,
allow_empty_column_selection=False,
temperature=0.0352513,
)
model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
for epoch in range(2): # loop over the dataset multiple times
for batch in train_dataloader:
# get the inputs;
input_ids = batch[0]
attention_mask = batch[1]
token_type_ids = batch[4]
labels = batch[5]  # order follows the yield in __iter__ / the output_signature above
numeric_values = batch[2]
numeric_values_scale = batch[3]
float_answer = batch[6]
# forward + backward + optimize
with tf.GradientTape() as tape:
outputs = model(
input_ids=input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
labels=labels,
numeric_values=numeric_values,
numeric_values_scale=numeric_values_scale,
float_answer=float_answer,
)
grads = tape.gradient(outputs.loss, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
Usage: inference
Here we explain how you can use [TapasForQuestionAnswering] or [TFTapasForQuestionAnswering] for inference (i.e. making predictions on new data). For inference, only input_ids, attention_mask and token_type_ids (which you can obtain using [TapasTokenizer]) have to be provided to the model to obtain the logits. Next, you can use the handy [~models.tapas.tokenization_tapas.convert_logits_to_predictions] method to convert these into predicted coordinates and optional aggregation indices.
However, note that inference is different depending on whether or not the setup is conversational. In a non-conversational set-up, inference can be done in parallel on all table-question pairs of a batch. Here's an example of that:
from transformers import TapasTokenizer, TapasForQuestionAnswering
import pandas as pd
model_name = "google/tapas-base-finetuned-wtq"
model = TapasForQuestionAnswering.from_pretrained(model_name)
tokenizer = TapasTokenizer.from_pretrained(model_name)
data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
queries = [
"What is the name of the first actor?",
"How many movies has George Clooney played in?",
"What is the total number of movies?",
]
table = pd.DataFrame.from_dict(data)
inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt")
outputs = model(**inputs)
predicted_answer_coordinates, predicted_aggregation_indices = tokenizer.convert_logits_to_predictions(
inputs, outputs.logits.detach(), outputs.logits_aggregation.detach()
)
# let's print out the results:
id2aggregation = {0: "NONE", 1: "SUM", 2: "AVERAGE", 3: "COUNT"}
aggregation_predictions_string = [id2aggregation[x] for x in predicted_aggregation_indices]
answers = []
for coordinates in predicted_answer_coordinates:
if len(coordinates) == 1:
# only a single cell:
answers.append(table.iat[coordinates[0]])
else:
# multiple cells
cell_values = []
for coordinate in coordinates:
cell_values.append(table.iat[coordinate])
answers.append(", ".join(cell_values))
display(table)
print("")
for query, answer, predicted_agg in zip(queries, answers, aggregation_predictions_string):
print(query)
if predicted_agg == "NONE":
print("Predicted answer: " + answer)
else:
print("Predicted answer: " + predicted_agg + " > " + answer)
What is the name of the first actor?
Predicted answer: Brad Pitt
How many movies has George Clooney played in?
Predicted answer: COUNT > 69
What is the total number of movies?
Predicted answer: SUM > 87, 53, 69
</pt>
<tf>
Here we explain how you can use [TFTapasForQuestionAnswering] for inference (i.e. making predictions on new data). For inference, only input_ids, attention_mask and token_type_ids (which you can obtain using [TapasTokenizer]) have to be provided to the model to obtain the logits. Next, you can use the handy [~models.tapas.tokenization_tapas.convert_logits_to_predictions] method to convert these into predicted coordinates and optional aggregation indices.
However, note that inference is different depending on whether or not the setup is conversational. In a non-conversational set-up, inference can be done in parallel on all table-question pairs of a batch. Here's an example of that:
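Below is a minimal sketch of that flow with [TFTapasForQuestionAnswering], mirroring the PyTorch example above; depending on your version of Transformers, you may need to convert the TensorFlow outputs to numpy arrays before calling convert_logits_to_predictions.

from transformers import TapasTokenizer, TFTapasForQuestionAnswering
import pandas as pd

model_name = "google/tapas-base-finetuned-wtq"
model = TFTapasForQuestionAnswering.from_pretrained(model_name)
tokenizer = TapasTokenizer.from_pretrained(model_name)
data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
queries = [
    "What is the name of the first actor?",
    "How many movies has George Clooney played in?",
    "What is the total number of movies?",
]
table = pd.DataFrame.from_dict(data)
inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="tf")
outputs = model(**inputs)
# convert the cell selection and aggregation logits into predicted coordinates and aggregation indices
predicted_answer_coordinates, predicted_aggregation_indices = tokenizer.convert_logits_to_predictions(
    inputs, outputs.logits, outputs.logits_aggregation
)
# the predicted coordinates and aggregation indices can then be turned into answer strings
# exactly as in the PyTorch example above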