In this guide you'll learn how to:
Fine-tune a classification VQA model, specifically ViLT, on the Graphcore/vqa dataset.
Use your fine-tuned ViLT for inference.
Run zero-shot VQA inference with a generative model, like BLIP-2.
Fine-tuning ViLT
The ViLT model incorporates text embeddings into a Vision Transformer (ViT), allowing it to have a minimal design for
Vision-and-Language Pre-training (VLP). This model can be used for several downstream tasks. For the VQA task, a classifier
head is placed on top (a linear layer on top of the final hidden state of the [CLS] token) and randomly initialized.
Visual Question Answering is thus treated as a classification problem.
More recent models, such as BLIP, BLIP-2, and InstructBLIP, treat VQA as a generative task. Later in this guide we
illustrate how to use them for zero-shot VQA inference.
Before you begin, make sure you have all the necessary libraries installed:
pip install -q transformers datasets
We encourage you to share your model with the community. Log in to your Hugging Face account to upload it to the 🤗 Hub.
When prompted, enter your token to log in:
from huggingface_hub import notebook_login
notebook_login()
Let's define the model checkpoint as a global variable.
model_checkpoint = "dandelin/vilt-b32-mlm"
Load the data
For illustration purposes, in this guide we use a very small sample of the annotated visual question answering Graphcore/vqa dataset.
You can find the full dataset on the 🤗 Hub.
As an alternative to the Graphcore/vqa dataset, you can download the
same data manually from the official VQA dataset page. If you prefer to follow the
tutorial with your custom data, check out the Create an image dataset
guide in the 🤗 Datasets documentation.
Let's load the first 200 examples from the validation split and explore the dataset's features:
from datasets import load_dataset
dataset = load_dataset("Graphcore/vqa", split="validation[:200]")
dataset
Dataset({
features: ['question', 'question_type', 'question_id', 'image_id', 'answer_type', 'label'],
num_rows: 200
})
Let's take a look at an example to understand the dataset's features:
dataset[0]
{'question': 'Where is he looking?',
'question_type': 'none of the above',
'question_id': 262148000,
'image_id': '/root/.cache/huggingface/datasets/downloads/extracted/ca733e0e000fb2d7a09fbcc94dbfe7b5a30750681d0e965f8e0a23b1c2f98c75/val2014/COCO_val2014_000000262148.jpg',
'answer_type': 'other',
'label': {'ids': ['at table', 'down', 'skateboard', 'table'],
'weights': [0.30000001192092896,
1.0,
0.30000001192092896,
0.30000001192092896]}}
The features relevant to the task include:
* question: the question to be answered from the image
* image_id: the path to the image the question refers to
* label: the annotations
We can remove the rest of the features as they won't be necessary:
dataset = dataset.remove_columns(['question_type', 'question_id', 'answer_type'])
As you can see, the label feature contains several answers to the same question (called ids here) collected by different human annotators.
This is because the answer to a question can be subjective. In this case, the question is "where is he looking?". Some people
annotated this with "down", others with "at table", another one with "skateboard", etc.
Take a look at the image and consider which answer you would give:
from PIL import Image
image = Image.open(dataset[0]['image_id'])
image
Due to the questions' and answers' ambiguity, datasets like this are treated as a multi-label classification problem (as
multiple answers are possibly valid). Moreover, rather than just creating a one-hot encoded vector, one creates a
soft encoding, based on the number of times a certain answer appeared in the annotations.
For instance, in the example above, because the answer "down" is selected way more often than other answers, it has a
score (called weight in the dataset) of 1.0, and the rest of the answers have scores < 1.0.
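These weights follow a soft-scoring convention for VQA annotations: the more annotators give an answer, the higher its weight, capped at 1.0. As a rough sketch (the exact formula is an assumption based on the common VQA convention of 0.3 per annotator, not code taken from the dataset itself):
def vqa_soft_score(num_annotators: int) -> float:
    # common VQA convention (assumed here): 0.3 per annotator who gave this answer, capped at 1.0
    return min(1.0, 0.3 * num_annotators)

print(vqa_soft_score(1))  # 0.3 -- e.g. "at table", "skateboard", "table" above
print(vqa_soft_score(4))  # 1.0 -- an answer chosen by most annotators, like "down"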
To later instantiate the model with an appropriate classification head, let's create two dictionaries: one that maps
the label name to an integer and vice versa:
import itertools
labels = [item['ids'] for item in dataset['label']]
flattened_labels = list(itertools.chain(*labels))
unique_labels = list(set(flattened_labels))
label2id = {label: idx for idx, label in enumerate(unique_labels)}
id2label = {idx: label for label, idx in label2id.items()}
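As a quick optional check, the two mappings should be inverses of each other:
example_label = unique_labels[0]
assert id2label[label2id[example_label]] == example_label
print(len(unique_labels))  # number of distinct answer classes in this 200-example sample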
Now that we have the mappings, we can replace the string answers with their ids and flatten the dataset for more convenient preprocessing.
def replace_ids(inputs):
inputs["label"]["ids"] = [label2id[x] for x in inputs["label"]["ids"]]
return inputs
dataset = dataset.map(replace_ids)
flat_dataset = dataset.flatten()
flat_dataset.features
{'question': Value(dtype='string', id=None),
'image_id': Value(dtype='string', id=None),
'label.ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None),
'label.weights': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None)}
Preprocessing data
The next step is to load a ViLT processor to prepare the image and text data for the model.
[ViltProcessor] wraps a BERT tokenizer and ViLT image processor into a convenient single processor:
from transformers import ViltProcessor
processor = ViltProcessor.from_pretrained(model_checkpoint)
To preprocess the data we need to encode the images and questions using the [ViltProcessor]. The processor will use
the [BertTokenizerFast] to tokenize the text and create input_ids, attention_mask and token_type_ids for the text data.
As for images, the processor will leverage [ViltImageProcessor] to resize and normalize the image, and create pixel_values and pixel_mask.
All these preprocessing steps are done under the hood; we only need to call the processor. However, we still need to
prepare the target labels. In this representation, each element corresponds to a possible answer (label). For correct answers, the element holds
its respective score (weight), while the remaining elements are set to zero.
The following function applies the processor to the images and questions and formats the labels as described above:
import torch
def preprocess_data(examples):
image_paths = examples['image_id']
images = [Image.open(image_path) for image_path in image_paths]
texts = examples['question']
encoding = processor(images, texts, padding="max_length", truncation=True, return_tensors="pt")
for k, v in encoding.items():
encoding[k] = v.squeeze()
targets = []
for labels, scores in zip(examples['label.ids'], examples['label.weights']):
target = torch.zeros(len(id2label))
for label, score in zip(labels, scores):
target[label] = score
targets.append(target)
encoding["labels"] = targets
return encoding
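Before mapping over the whole dataset, you can optionally sanity-check the function on a couple of examples (a quick sketch; it assumes the images referenced by image_id are available locally, as above):
sample = preprocess_data(flat_dataset[:2])
print(list(sample.keys()))        # the encoded text and image inputs, plus 'labels'
print(sample["labels"][0].shape)  # torch.Size([len(id2label)]) -- one soft score per possible answer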
To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [~datasets.map] function. You can speed up map by
setting batched=True to process multiple elements of the dataset at once. At this point, feel free to remove the columns you don't need.
processed_dataset = flat_dataset.map(preprocess_data, batched=True, remove_columns=['question', 'image_id', 'label.ids', 'label.weights'])
processed_dataset
Dataset({
features: ['input_ids', 'token_type_ids', 'attention_mask', 'pixel_values', 'pixel_mask', 'labels'],
num_rows: 200
})
As a final step, create a batch of examples using [DefaultDataCollator]:
from transformers import DefaultDataCollator
data_collator = DefaultDataCollator()
Train the model
You're ready to start training your model now! Load ViLT with [ViltForQuestionAnswering]. Specify the number of labels
along with the label mappings:
from transformers import ViltForQuestionAnswering
model = ViltForQuestionAnswering.from_pretrained(model_checkpoint, num_labels=len(id2label), id2label=id2label, label2id=label2id)
At this point, only three steps remain:
Define your training hyperparameters in [TrainingArguments]:
from transformers import TrainingArguments
repo_id = "MariaK/vilt_finetuned_200"
training_args = TrainingArguments(
output_dir=repo_id,
per_device_train_batch_size=4,
num_train_epochs=20,
save_steps=200,
logging_steps=50,
learning_rate=5e-5,
save_total_limit=2,
remove_unused_columns=False,
push_to_hub=True,
)
Pass the training arguments to [Trainer] along with the model, dataset, processor, and data collator.
from transformers import Trainer
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=processed_dataset,
tokenizer=processor,
)
Call [~Trainer.train] to finetune your model.
trainer.train()
Once training is completed, share your final model on the 🤗 Hub with the [~Trainer.push_to_hub] method:
trainer.push_to_hub()
Inference
Now that you have fine-tuned a ViLT model, and uploaded it to the π€ Hub, you can use it for inference. The simplest
way to try out your fine-tuned model for inference is to use it in a [Pipeline].
from transformers import pipeline
pipe = pipeline("visual-question-answering", model="MariaK/vilt_finetuned_200")
The model in this guide has only been trained on 200 examples, so don't expect a lot from it. Let's see if it at least
learned something from the data and take the first example from the dataset to illustrate inference:
example = dataset[0]
image = Image.open(example['image_id'])
question = example['question']
print(question)
pipe(image, question, top_k=1)
"Where is he looking?"
[{'score': 0.5498199462890625, 'answer': 'down'}]
Even though it's not very confident, the model has indeed learned something. With more examples and longer training, you'll get far better results!
You can also manually replicate the results of the pipeline if you'd like:
1. Take an image and a question, prepare them for the model using the processor from your model.
2. Forward the result of preprocessing through the model.
3. From the logits, get the most likely answer's id, and find the actual answer in the id2label mapping.
processor = ViltProcessor.from_pretrained("MariaK/vilt_finetuned_200")
image = Image.open(example['image_id'])
question = example['question']
# prepare inputs
inputs = processor(image, question, return_tensors="pt")
model = ViltForQuestionAnswering.from_pretrained("MariaK/vilt_finetuned_200")
# forward pass
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
idx = logits.argmax(-1).item()
print("Predicted answer:", model.config.id2label[idx])
Predicted answer: down
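If you'd like to see more than the single best answer, you can inspect the top few. The visual-question-answering pipeline applies a sigmoid to the logits (the task is treated as multi-label classification), so this optional sketch does the same, reusing the logits and model from above:
probs = torch.sigmoid(logits)
top = torch.topk(probs, k=5, dim=-1)
for score, idx in zip(top.values[0].tolist(), top.indices[0].tolist()):
    # print each candidate answer with its sigmoid score
    print(f"{model.config.id2label[idx]}: {score:.3f}")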
Zero-shot VQA
The previous model treated VQA as a classification task. Some recent models, such as BLIP, BLIP-2, and InstructBLIP, approach
VQA as a generative task. Let's take BLIP-2 as an example. It introduced a new visual-language pre-training
paradigm in which any combination of pre-trained vision encoder and LLM can be used (learn more in the BLIP-2 blog post).
This enables achieving state-of-the-art results on multiple visual-language tasks including visual question answering.
Let's illustrate how you can use this model for VQA. First, let's load the model. Here we'll explicitly send the model to a
GPU, if available, which we didn't need to do earlier when training, as [Trainer] handles this automatically:
from transformers import AutoProcessor, Blip2ForConditionalGeneration
import torch
processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
The model takes image and text as input, so let's use the exact same image/question pair from the first example in the VQA dataset:
example = dataset[0]
image = Image.open(example['image_id'])
question = example['question']
To use BLIP-2 for the visual question answering task, the textual prompt has to follow a specific format: Question: {} Answer:.
prompt = f"Question: {question} Answer:"
Now we need to preprocess the image/prompt with the model's processor, pass the processed input through the model, and decode the output:
inputs = processor(image, text=prompt, return_tensors="pt").to(device, torch.float16)
generated_ids = model.generate(**inputs, max_new_tokens=10)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
print(generated_text)
"He is looking at the crowd"
As you can see, the model recognized the crowd and the direction of the face (looking down); however, it seems to miss
the fact that the crowd is behind the skater. Still, in cases where acquiring human-annotated datasets is not feasible, this
approach can quickly produce useful results.
Before you begin, make sure you have all the necessary libraries installed:
pip install transformers datasets evaluate
We encourage you to log in to your Hugging Face account to upload and share your model with the community. When prompted, enter your token to log in:
from huggingface_hub import notebook_login
notebook_login()
Load Food-101 dataset
Start by loading a smaller subset of the Food-101 dataset from the 🤗 Datasets library. This will give you a chance to
experiment and make sure everything works before spending more time training on the full dataset.
from datasets import load_dataset
food = load_dataset("food101", split="train[:5000]")
Split the dataset's train split into a train and test set with the [~datasets.Dataset.train_test_split] method:
food = food.train_test_split(test_size=0.2)
Then take a look at an example:
food["train"][0]
{'image': ,
'label': 79}
Each example in the dataset has two fields:
image: a PIL image of the food item
label: the label class of the food item
To make it easier for the model to get the label name from the label id, create a dictionary that maps the label name
to an integer and vice versa:
labels = food["train"].features["label"].names
label2id, id2label = dict(), dict()
for i, label in enumerate(labels):
label2id[label] = str(i)
id2label[str(i)] = label
Now you can convert the label id to a label name:
id2label[str(79)]
'prime_rib'
Preprocess
The next step is to load a ViT image processor to process the image into a tensor:
from transformers import AutoImageProcessor
checkpoint = "google/vit-base-patch16-224-in21k"
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
Apply some image transformations to the images to make the model more robust against overfitting. Here you'll use torchvision's transforms module, but you can also use any image library you like.
Crop a random part of the image, resize it, and normalize it with the image mean and standard deviation:
from torchvision.transforms import RandomResizedCrop, Compose, Normalize, ToTensor
normalize = Normalize(mean=image_processor.image_mean, std=image_processor.image_std)
size = (
image_processor.size["shortest_edge"]
if "shortest_edge" in image_processor.size
else (image_processor.size["height"], image_processor.size["width"])
)
_transforms = Compose([RandomResizedCrop(size), ToTensor(), normalize])
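As a quick optional check, you can apply the composed transforms to a single training image and look at the resulting tensor shape; for this checkpoint the image processor's size is 224x224, so the output should be a 3 x 224 x 224 tensor:
sample_image = food["train"][0]["image"]
# the transforms turn a PIL image into a cropped, normalized tensor
print(_transforms(sample_image.convert("RGB")).shape)  # torch.Size([3, 224, 224])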
Then create a preprocessing function to apply the transforms and return the pixel_values - the inputs to the model - of the image:
def transforms(examples):
examples["pixel_values"] = [_transforms(img.convert("RGB")) for img in examples["image"]]
del examples["image"]
return examples
To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [~datasets.Dataset.with_transform] method. The transforms are applied on the fly when you load an element of the dataset:
food = food.with_transform(transforms)
Now create a batch of examples using [DefaultDataCollator]. Unlike other data collators in 🤗 Transformers, the DefaultDataCollator does not apply additional preprocessing such as padding.
from transformers import DefaultDataCollator
data_collator = DefaultDataCollator()
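To make the "no additional preprocessing" point concrete, here is a small self-contained sketch (the toy tensors are made up purely for illustration): the collator simply stacks same-shaped features into batch tensors and renames label to labels.
import torch
from transformers import DefaultDataCollator

toy_features = [
    {"pixel_values": torch.rand(3, 224, 224), "label": 3},
    {"pixel_values": torch.rand(3, 224, 224), "label": 7},
]
toy_batch = DefaultDataCollator()(toy_features)
print(toy_batch["pixel_values"].shape)  # torch.Size([2, 3, 224, 224])
print(toy_batch["labels"])              # tensor([3, 7])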
To avoid overfitting and to make the model more robust, add some data augmentation to the training part of the dataset.
Here we use Keras preprocessing layers to define the transformations for the training data (includes data augmentation),
and transformations for the validation data (only center cropping, resizing and normalizing). You can use tf.image or
any other library you prefer.
from tensorflow import keras
from tensorflow.keras import layers
size = (image_processor.size["height"], image_processor.size["width"])
train_data_augmentation = keras.Sequential(
[
layers.RandomCrop(size[0], size[1]),
layers.Rescaling(scale=1.0 / 127.5, offset=-1),
layers.RandomFlip("horizontal"),
layers.RandomRotation(factor=0.02),
layers.RandomZoom(height_factor=0.2, width_factor=0.2),
],
name="train_data_augmentation",
)
val_data_augmentation = keras.Sequential(
[
layers.CenterCrop(size[0], size[1]),
layers.Rescaling(scale=1.0 / 127.5, offset=-1),
],
name="val_data_augmentation",
)
Next, create functions to apply appropriate transformations to a batch of images, instead of one image at a time.
import numpy as np
import tensorflow as tf
from PIL import Image
def convert_to_tf_tensor(image: Image):
np_image = np.array(image)
tf_image = tf.convert_to_tensor(np_image)
# expand_dims() is used to add a batch dimension since
# the TF augmentation layers operate on batched inputs.
return tf.expand_dims(tf_image, 0)
def preprocess_train(example_batch):
"""Apply train_transforms across a batch."""
images = [
train_data_augmentation(convert_to_tf_tensor(image.convert("RGB"))) for image in example_batch["image"]
]
example_batch["pixel_values"] = [tf.transpose(tf.squeeze(image)) for image in images]
return example_batch
def preprocess_val(example_batch):
"""Apply val_transforms across a batch."""
images = [
val_data_augmentation(convert_to_tf_tensor(image.convert("RGB"))) for image in example_batch["image"]
]
example_batch["pixel_values"] = [tf.transpose(tf.squeeze(image)) for image in images]
return example_batch
Use 🤗 Datasets [~datasets.Dataset.set_transform] to apply the transformations on the fly:
food["train"].set_transform(preprocess_train)
food["test"].set_transform(preprocess_val)
As a final preprocessing step, create a batch of examples using DefaultDataCollator. Unlike other data collators in 🤗 Transformers, the
DefaultDataCollator does not apply additional preprocessing, such as padding.
from transformers import DefaultDataCollator
data_collator = DefaultDataCollator(return_tensors="tf")
Evaluate
Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an
evaluation method with the 🤗 Evaluate library. For this task, load
the accuracy metric (see the 🤗 Evaluate quick tour to learn more about how to load and compute a metric):
import evaluate
accuracy = evaluate.load("accuracy")
Then create a function that passes your predictions and labels to [~evaluate.EvaluationModule.compute] to calculate the accuracy:
import numpy as np
def compute_metrics(eval_pred):
predictions, labels = eval_pred
predictions = np.argmax(predictions, axis=1)
return accuracy.compute(predictions=predictions, references=labels)
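You can quickly see what it returns by calling it with a couple of dummy predictions (purely illustrative values):
dummy_logits = np.array([[0.1, 0.9], [0.8, 0.2]])
dummy_labels = np.array([1, 1])
# argmax picks class 1 and class 0, so only the first prediction matches its label
print(compute_metrics((dummy_logits, dummy_labels)))  # {'accuracy': 0.5}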
Your compute_metrics function is ready to go now, and you'll return to it when you set up your training.
Train
If you aren't familiar with finetuning a model with the [Trainer], take a look at the basic tutorial here!
You're ready to start training your model now! Load ViT with [AutoModelForImageClassification]. Specify the number of expected labels along with the label mappings:
from transformers import AutoModelForImageClassification, TrainingArguments, Trainer
model = AutoModelForImageClassification.from_pretrained(
checkpoint,
num_labels=len(labels),
id2label=id2label,
label2id=label2id,
)
At this point, only three steps remain:
Define your training hyperparameters in [TrainingArguments]. It is important you don't remove unused columns because that'll drop the image column. Without the image column, you can't create pixel_values. Set remove_unused_columns=False to prevent this behavior! The only other required parameter is output_dir which specifies where to save your model. You'll push this model to the Hub by setting push_to_hub=True (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer] will evaluate the accuracy and save the training checkpoint.
Pass the training arguments to [Trainer] along with the model, dataset, tokenizer, data collator, and compute_metrics function.
Call [~Trainer.train] to finetune your model.
training_args = TrainingArguments(
output_dir="my_awesome_food_model",
remove_unused_columns=False,
evaluation_strategy="epoch",
save_strategy="epoch",
learning_rate=5e-5,
per_device_train_batch_size=16,
gradient_accumulation_steps=4,
per_device_eval_batch_size=16,
num_train_epochs=3,
warmup_ratio=0.1,
logging_steps=10,
load_best_model_at_end=True,
metric_for_best_model="accuracy",
push_to_hub=True,
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=food["train"],
eval_dataset=food["test"],
tokenizer=image_processor,
compute_metrics=compute_metrics,
)
trainer.train()
Once training is completed, share your model to the Hub with the [~transformers.Trainer.push_to_hub] method so everyone can use your model:
trainer.push_to_hub()
If you are unfamiliar with fine-tuning a model with Keras, check out the basic tutorial first!
To fine-tune a model in TensorFlow, follow these steps:
1. Define the training hyperparameters, and set up an optimizer and a learning rate schedule.
2. Instantiate a pre-trained model.
3. Convert a 🤗 Dataset to a tf.data.Dataset.
4. Compile your model.
5. Add callbacks and use the fit() method to run the training.
6. Upload your model to the 🤗 Hub to share with the community.
Start by defining the hyperparameters, optimizer and learning rate schedule:
from transformers import create_optimizer
batch_size = 16
num_epochs = 5
num_train_steps = len(food["train"]) * num_epochs
learning_rate = 3e-5
weight_decay_rate = 0.01
optimizer, lr_schedule = create_optimizer(
init_lr=learning_rate,
num_train_steps=num_train_steps,
weight_decay_rate=weight_decay_rate,
num_warmup_steps=0,
)
Then, load ViT with [TFAutoModelForImageClassification] along with the label mappings:
from transformers import TFAutoModelForImageClassification
model = TFAutoModelForImageClassification.from_pretrained(
checkpoint,
id2label=id2label,
label2id=label2id,
)
Convert your datasets to the tf.data.Dataset format using the [~datasets.Dataset.to_tf_dataset] and your data_collator:
# converting our train dataset to tf.data.Dataset
tf_train_dataset = food["train"].to_tf_dataset(
columns="pixel_values", label_cols="label", shuffle=True, batch_size=batch_size, collate_fn=data_collator
)
# converting our test dataset to tf.data.Dataset
tf_eval_dataset = food["test"].to_tf_dataset(
columns="pixel_values", label_cols="label", shuffle=True, batch_size=batch_size, collate_fn=data_collator
)
Configure the model for training with compile():
from tensorflow.keras.losses import SparseCategoricalCrossentropy
loss = SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer=optimizer, loss=loss)
To compute the accuracy from the predictions and push your model to the 🤗 Hub, use Keras callbacks.
Pass your compute_metrics function to KerasMetricCallback,
and use the PushToHubCallback to upload the model:
from transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback
metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_eval_dataset)
push_to_hub_callback = PushToHubCallback(
output_dir="food_classifier",
tokenizer=image_processor,
save_strategy="no",
)
callbacks = [metric_callback, push_to_hub_callback]
Finally, you are ready to train your model! Call fit() with your training and validation datasets, the number of epochs,
and your callbacks to fine-tune the model:
model.fit(tf_train_dataset, validation_data=tf_eval_dataset, epochs=num_epochs, callbacks=callbacks)
Epoch 1/5
250/250 [==============================] - 313s 1s/step - loss: 2.5623 - val_loss: 1.4161 - accuracy: 0.9290
Epoch 2/5
250/250 [==============================] - 265s 1s/step - loss: 0.9181 - val_loss: 0.6808 - accuracy: 0.9690
Epoch 3/5
250/250 [==============================] - 252s 1s/step - loss: 0.3910 - val_loss: 0.4303 - accuracy: 0.9820
Epoch 4/5
250/250 [==============================] - 251s 1s/step - loss: 0.2028 - val_loss: 0.3191 - accuracy: 0.9900
Epoch 5/5
250/250 [==============================] - 238s 949ms/step - loss: 0.1232 - val_loss: 0.3259 - accuracy: 0.9890
Congratulations! You have fine-tuned your model and shared it on the 🤗 Hub. You can now use it for inference!
For a more in-depth example of how to finetune a model for image classification, take a look at the corresponding PyTorch notebook.
Inference
Great, now that you've fine-tuned a model, you can use it for inference!
Load an image you'd like to run inference on:
ds = load_dataset("food101", split="validation[:10]")
image = ds["image"][0]
The simplest way to try out your finetuned model for inference is to use it in a [pipeline]. Instantiate a pipeline for image classification with your model, and pass your image to it:
from transformers import pipeline
classifier = pipeline("image-classification", model="my_awesome_food_model")
classifier(image)
[{'score': 0.31856709718704224, 'label': 'beignets'},
{'score': 0.015232225880026817, 'label': 'bruschetta'},
{'score': 0.01519392803311348, 'label': 'chicken_wings'},
{'score': 0.013022331520915031, 'label': 'pork_chop'},
{'score': 0.012728818692266941, 'label': 'prime_rib'}]
You can also manually replicate the results of the pipeline if you'd like:
Load an image processor to preprocess the image and return the input as PyTorch tensors:
from transformers import AutoImageProcessor
import torch
image_processor = AutoImageProcessor.from_pretrained("my_awesome_food_model")
inputs = image_processor(image, return_tensors="pt")
Pass your inputs to the model and return the logits:
from transformers import AutoModelForImageClassification
model = AutoModelForImageClassification.from_pretrained("my_awesome_food_model")
with torch.no_grad():
logits = model(**inputs).logits
Get the predicted label with the highest probability, and use the model's id2label mapping to convert it to a label:
predicted_label = logits.argmax(-1).item()
model.config.id2label[predicted_label]
'beignets'
Load an image processor to preprocess the image and return the input as TensorFlow tensors:
from transformers import AutoImageProcessor
image_processor = AutoImageProcessor.from_pretrained("MariaK/food_classifier")
inputs = image_processor(image, return_tensors="tf")
Pass your inputs to the model and return the logits:
from transformers import TFAutoModelForImageClassification
model = TFAutoModelForImageClassification.from_pretrained("MariaK/food_classifier")
logits = model(**inputs).logits
Get the predicted label with the highest probability, and use the model's id2label mapping to convert it to a label:
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
model.config.id2label[predicted_class_id]
'beignets'
In this guide you'll learn how to:
create a depth estimation pipeline
run depth estimation inference by hand
Before you begin, make sure you have all the necessary libraries installed:
pip install -q transformers
Depth estimation pipeline
The simplest way to try out inference with a model supporting depth estimation is to use the corresponding [pipeline].
Instantiate a pipeline from a checkpoint on the Hugging Face Hub:
from transformers import pipeline
checkpoint = "vinvino02/glpn-nyu"
depth_estimator = pipeline("depth-estimation", model=checkpoint)
Next, choose an image to analyze:
from PIL import Image
import requests
url = "https://unsplash.com/photos/HwBAsSbPBDU/download?ixid=MnwxMjA3fDB8MXxzZWFyY2h8MzR8fGNhciUyMGluJTIwdGhlJTIwc3RyZWV0fGVufDB8MHx8fDE2Nzg5MDEwODg&force=true&w=640"
image = Image.open(requests.get(url, stream=True).raw)
image
Pass the image to the pipeline.
predictions = depth_estimator(image)
The pipeline returns a dictionary with two entries. The first one, called predicted_depth, is a tensor with the values
being the depth expressed in meters for each pixel.
The second one, depth, is a PIL image that visualizes the depth estimation result.
Let's take a look at the visualized result:
predictions["depth"]
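If you also want to look at the raw predictions, predicted_depth is a tensor you can inspect directly (an optional sketch; the spatial size of this tensor depends on the model's output resolution rather than on the original image):
raw_depth = predictions["predicted_depth"]
print(raw_depth.shape)                                 # batch and spatial dimensions of the raw prediction
print(raw_depth.min().item(), raw_depth.max().item())  # approximate depth range in meters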
Depth estimation inference by hand
Now that you've seen how to use the depth estimation pipeline, let's see how we can replicate the same result by hand.
Start by loading the model and associated processor from a checkpoint on the Hugging Face Hub.
Here we'll use the same checkpoint as before:
from transformers import AutoImageProcessor, AutoModelForDepthEstimation
checkpoint = "vinvino02/glpn-nyu"
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForDepthEstimation.from_pretrained(checkpoint)
Prepare the image input for the model using the image_processor that will take care of the necessary image transformations
such as resizing and normalization:
pixel_values = image_processor(image, return_tensors="pt").pixel_values
Pass the prepared inputs through the model:
import torch
with torch.no_grad():
outputs = model(pixel_values)
predicted_depth = outputs.predicted_depth
Visualize the results:
import numpy as np
# interpolate to original size
prediction = torch.nn.functional.interpolate(
predicted_depth.unsqueeze(1),
size=image.size[::-1],
mode="bicubic",
align_corners=False,
).squeeze()
output = prediction.numpy()
formatted = (output * 255 / np.max(output)).astype("uint8")
depth = Image.fromarray(formatted)
depth
Before you begin, make sure you have all the necessary libraries installed:
pip install transformers datasets evaluate
We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:
from huggingface_hub import notebook_login
notebook_login()
Load SWAG dataset
Start by loading the regular configuration of the SWAG dataset from the 🤗 Datasets library:
from datasets import load_dataset
swag = load_dataset("swag", "regular")
Then take a look at an example:
swag["train"][0]
{'ending0': 'passes by walking down the street playing their instruments.',
'ending1': 'has heard approaching them.',
'ending2': "arrives and they're outside dancing and asleep.",
'ending3': 'turns the lead singer watches the performance.',
'fold-ind': '3416',
'gold-source': 'gold',
'label': 0,
'sent1': 'Members of the procession walk down the street holding small horn brass instruments.',
'sent2': 'A drum line',
'startphrase': 'Members of the procession walk down the street holding small horn brass instruments. A drum line',
'video-id': 'anetv_jkn6uvmqwh4'}
While it looks like there are a lot of fields here, it is actually pretty straightforward:
sent1 and sent2: these fields show how a sentence starts, and if you put the two together, you get the startphrase field.
ending0, ending1, ending2, ending3: each suggests a possible ending for how the sentence can end, but only one of them is correct.
label: identifies the correct sentence ending.
Preprocess
The next step is to load a BERT tokenizer to process the sentence starts and the four possible endings:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
The preprocessing function you want to create needs to:
Make four copies of the sent1 field and combine each of them with sent2 to recreate how a sentence starts.
Combine sent2 with each of the four possible sentence endings.
Flatten these two lists so you can tokenize them, and then unflatten them afterward so each example has a corresponding input_ids, attention_mask, and labels field.
ending_names = ["ending0", "ending1", "ending2", "ending3"]
def preprocess_function(examples):
first_sentences = [[context] * 4 for context in examples["sent1"]]
question_headers = examples["sent2"]
second_sentences = [
[f"{header} {examples[end][i]}" for end in ending_names] for i, header in enumerate(question_headers)
]
first_sentences = sum(first_sentences, [])
second_sentences = sum(second_sentences, [])
tokenized_examples = tokenizer(first_sentences, second_sentences, truncation=True)
return {k: [v[i : i + 4] for i in range(0, len(v), 4)] for k, v in tokenized_examples.items()}
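Before mapping it over the dataset, you can check the function on a single example; each example should come out with four tokenized candidate sequences (a quick sketch):
tokenized_example = preprocess_function(swag["train"][:1])
print(len(tokenized_example["input_ids"]))     # 1 example in this tiny batch
print(len(tokenized_example["input_ids"][0]))  # 4 candidate endings per example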
To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [~datasets.Dataset.map] method. You can speed up the map function by setting batched=True to process multiple elements of the dataset at once:
tokenized_swag = swag.map(preprocess_function, batched=True)
🤗 Transformers doesn't have a data collator for multiple choice, so you'll need to adapt the [DataCollatorWithPadding] to create a batch of examples. It's more efficient to dynamically pad the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.
DataCollatorForMultipleChoice flattens all the model inputs, applies padding, and then unflattens the results:
from dataclasses import dataclass
from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy
from typing import Optional, Union
import torch
@dataclass
class DataCollatorForMultipleChoice:
"""
Data collator that will dynamically pad the inputs for multiple choice received.
"""
tokenizer: PreTrainedTokenizerBase
padding: Union[bool, str, PaddingStrategy] = True
max_length: Optional[int] = None
pad_to_multiple_of: Optional[int] = None
def __call__(self, features):
label_name = "label" if "label" in features[0].keys() else "labels"
labels = [feature.pop(label_name) for feature in features]
batch_size = len(features)
num_choices = len(features[0]["input_ids"])
flattened_features = [
[{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features
]
flattened_features = sum(flattened_features, [])
batch = self.tokenizer.pad(
flattened_features,
padding=self.padding,
max_length=self.max_length,
pad_to_multiple_of=self.pad_to_multiple_of,
return_tensors="pt",
)
batch = {k: v.view(batch_size, num_choices, -1) for k, v in batch.items()}
batch["labels"] = torch.tensor(labels, dtype=torch.int64)
return batch
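To verify the collator's behavior, you can run it on a couple of tokenized examples, keeping only the tokenized fields plus the label (a quick sketch; the padded sequence length depends on the batch):
accepted_keys = ["input_ids", "attention_mask", "label"]
features = [{k: v for k, v in tokenized_swag["train"][i].items() if k in accepted_keys} for i in range(2)]
batch = DataCollatorForMultipleChoice(tokenizer=tokenizer)(features)
print(batch["input_ids"].shape)  # torch.Size([2, 4, padded_length]) -- batch_size x num_choices x sequence length
print(batch["labels"])           # the index of the gold ending for each example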
The TensorFlow version of the collator does the same flattening and padding, but returns TensorFlow tensors and reshapes the batch with tf.reshape:
from dataclasses import dataclass
from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy
from typing import Optional, Union
import tensorflow as tf
@dataclass
class DataCollatorForMultipleChoice:
"""
Data collator that will dynamically pad the inputs for multiple choice received.
"""
tokenizer: PreTrainedTokenizerBase
padding: Union[bool, str, PaddingStrategy] = True
max_length: Optional[int] = None
pad_to_multiple_of: Optional[int] = None
def __call__(self, features):
label_name = "label" if "label" in features[0].keys() else "labels"
labels = [feature.pop(label_name) for feature in features]
batch_size = len(features)
num_choices = len(features[0]["input_ids"])
flattened_features = [
[{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features
]
flattened_features = sum(flattened_features, [])
batch = self.tokenizer.pad(
flattened_features,
padding=self.padding,
max_length=self.max_length,
pad_to_multiple_of=self.pad_to_multiple_of,
return_tensors="tf",
)
batch = {k: tf.reshape(v, (batch_size, num_choices, -1)) for k, v in batch.items()}
batch["labels"] = tf.convert_to_tensor(labels, dtype=tf.int64)
return batch
Evaluate
Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 Evaluate library. For this task, load the accuracy metric (see the 🤗 Evaluate quick tour to learn more about how to load and compute a metric):
import evaluate
accuracy = evaluate.load("accuracy")
Then create a function that passes your predictions and labels to [~evaluate.EvaluationModule.compute] to calculate the accuracy:
import numpy as np
def compute_metrics(eval_pred):
predictions, labels = eval_pred
predictions = np.argmax(predictions, axis=1)
return accuracy.compute(predictions=predictions, references=labels)
Your compute_metrics function is ready to go now, and you'll return to it when you set up your training.
Train
If you aren't familiar with finetuning a model with the [Trainer], take a look at the basic tutorial here!
You're ready to start training your model now! Load BERT with [AutoModelForMultipleChoice]:
from transformers import AutoModelForMultipleChoice, TrainingArguments, Trainer
model = AutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-uncased")
At this point, only three steps remain: |
Define your training hyperparameters in [TrainingArguments]. The only required parameter is output_dir which specifies where to save your model. You'll push this model to the Hub by setting push_to_hub=True (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer] will evaluate the accuracy and save the training checkpoint.
Pass the training arguments to [Trainer] along with the model, dataset, tokenizer, data collator, and compute_metrics function.
Call [~Trainer.train] to finetune your model.
training_args = TrainingArguments(
output_dir="my_awesome_swag_model",
evaluation_strategy="epoch",
save_strategy="epoch",
load_best_model_at_end=True,
learning_rate=5e-5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
num_train_epochs=3,
weight_decay=0.01,
push_to_hub=True,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_swag["train"],
eval_dataset=tokenized_swag["validation"],
tokenizer=tokenizer,
data_collator=DataCollatorForMultipleChoice(tokenizer=tokenizer),
compute_metrics=compute_metrics,
)
trainer.train()
Once training is completed, share your model to the Hub with the [~transformers.Trainer.push_to_hub] method so everyone can use your model:
trainer.push_to_hub()
If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial here!
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:
from transformers import create_optimizer
batch_size = 16
num_train_epochs = 2
total_train_steps = (len(tokenized_swag["train"]) // batch_size) * num_train_epochs
optimizer, schedule = create_optimizer(init_lr=5e-5, num_warmup_steps=0, num_train_steps=total_train_steps)
Then you can load BERT with [TFAutoModelForMultipleChoice]:
from transformers import TFAutoModelForMultipleChoice
model = TFAutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-uncased")
Convert your datasets to the tf.data.Dataset format with [~transformers.TFPreTrainedModel.prepare_tf_dataset]:
data_collator = DataCollatorForMultipleChoice(tokenizer=tokenizer)
tf_train_set = model.prepare_tf_dataset(
tokenized_swag["train"],
shuffle=True,
batch_size=batch_size,
collate_fn=data_collator,
)
tf_validation_set = model.prepare_tf_dataset(
tokenized_swag["validation"],
shuffle=False,
batch_size=batch_size,
collate_fn=data_collator,
)