Batch processing
You can pass multiple sets of images and text queries to search for different (or same) objects in several images.
Let's use both an astronaut image and the beach image together.
For batch processing, you should pass text queries as a nested list to the processor and images as lists of PIL images,
PyTorch tensors, or NumPy arrays. |
images = [image, im]
text_queries = [
["human face", "rocket", "nasa badge", "star-spangled banner"],
["hat", "book", "sunglasses", "camera"],
]
inputs = processor(text=text_queries, images=images, return_tensors="pt")
Previously, for post-processing you passed the single image's size as a tensor, but you can also pass a tuple, or, in the case
of several images, a list of tuples. Let's create predictions for the two examples, and visualize the second one (image_idx = 1).
with torch.no_grad():
    outputs = model(**inputs)
target_sizes = [x.size[::-1] for x in images]
results = processor.post_process_object_detection(outputs, threshold=0.1, target_sizes=target_sizes)
image_idx = 1
draw = ImageDraw.Draw(images[image_idx])
scores = results[image_idx]["scores"].tolist()
labels = results[image_idx]["labels"].tolist()
boxes = results[image_idx]["boxes"].tolist()
for box, score, label in zip(boxes, scores, labels):
    xmin, ymin, xmax, ymax = box
    draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1)
    draw.text((xmin, ymin), f"{text_queries[image_idx][label]}: {round(score,2)}", fill="white")

images[image_idx]
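The same results object also holds the predictions for the first image. If you prefer a quick textual summary over drawing boxes, here is a minimal sketch built only on the objects above:

for box, score, label in zip(
    results[0]["boxes"].tolist(), results[0]["scores"].tolist(), results[0]["labels"].tolist()
):
    print(f"{text_queries[0][label]}: {round(score, 2)} at {[round(c, 1) for c in box]}")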
Image-guided object detection
In addition to zero-shot object detection with text queries, OWL-ViT offers image-guided object detection. This means
you can use an image query to find similar objects in the target image.
Unlike with text queries, only a single example image is allowed.
Let's take an image with two cats on a couch as the target image, and an image of a single cat
as the query:
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image_target = Image.open(requests.get(url, stream=True).raw)
query_url = "http://images.cocodataset.org/val2017/000000524280.jpg"
query_image = Image.open(requests.get(query_url, stream=True).raw)
Let's take a quick look at the images:
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 2)
ax[0].imshow(image_target)
ax[1].imshow(query_image)
In the preprocessing step, instead of text queries, you now need to use query_images:
inputs = processor(images=image_target, query_images=query_image, return_tensors="pt")
For predictions, instead of passing the inputs to the model, pass them to [~OwlViTForObjectDetection.image_guided_detection]. Draw the predictions
as before, except now there are no labels.
with torch.no_grad():
    outputs = model.image_guided_detection(**inputs)
target_sizes = torch.tensor([image_target.size[::-1]])
results = processor.post_process_image_guided_detection(outputs=outputs, target_sizes=target_sizes)[0]
draw = ImageDraw.Draw(image_target)
scores = results["scores"].tolist()
boxes = results["boxes"].tolist()
for box, score in zip(boxes, scores):
    xmin, ymin, xmax, ymax = box
    draw.rectangle((xmin, ymin, xmax, ymax), outline="white", width=4)

image_target
Load SceneParse150 dataset
Start by loading a smaller subset of the SceneParse150 dataset from the 🤗 Datasets library. This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.
from datasets import load_dataset
ds = load_dataset("scene_parse_150", split="train[:50]")
Split the dataset's train split into a train and test set with the [~datasets.Dataset.train_test_split] method:
ds = ds.train_test_split(test_size=0.2)
train_ds = ds["train"]
test_ds = ds["test"]
Then take a look at an example:
train_ds[0]
{'image': <PIL image>,
 'annotation': <PIL image (segmentation map)>,
 'scene_category': 368}
image: a PIL image of the scene.
annotation: a PIL image of the segmentation map, which is also the model's target.
scene_category: a category id that describes the image scene like "kitchen" or "office".
In this guide, you'll only need image and annotation, both of which are PIL images.
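If you want to take a closer look at these two PIL images, here is a quick optional check (a sketch, not required for the rest of the guide):

import numpy as np

example = train_ds[0]
# The image and its segmentation map should share the same (width, height).
print(example["image"].size, example["annotation"].size)
# The annotation stores one class id per pixel; list the ids present in this map.
print(np.unique(np.array(example["annotation"])))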
You'll also want to create a dictionary that maps a label id to a label class which will be useful when you set up the model later. Download the mappings from the Hub and create the id2label and label2id dictionaries: |
import json
from huggingface_hub import cached_download, hf_hub_url
repo_id = "huggingface/label-files"
filename = "ade20k-id2label.json"
id2label = json.load(open(cached_download(hf_hub_url(repo_id, filename, repo_type="dataset")), "r"))
id2label = {int(k): v for k, v in id2label.items()}
label2id = {v: k for k, v in id2label.items()}
num_labels = len(id2label)
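As a quick optional check (a sketch), verify that the mapping covers the 150 ADE20K classes:

print(num_labels)  # 150
print(id2label[0], label2id[id2label[0]])  # look up the first class name and map it back to its id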
Custom dataset
You could also create and use your own dataset if you prefer to train with the run_semantic_segmentation.py script instead of a notebook instance. The script requires:
a [~datasets.DatasetDict] with two [~datasets.Image] columns, "image" and "label" |
from datasets import Dataset, DatasetDict, Image

image_paths_train = ["path/to/image_1.jpg", "path/to/image_2.jpg", ..., "path/to/image_n.jpg"]
label_paths_train = ["path/to/annotation_1.png", "path/to/annotation_2.png", ..., "path/to/annotation_n.png"]
image_paths_validation = [...]
label_paths_validation = [...]

def create_dataset(image_paths, label_paths):
    dataset = Dataset.from_dict({"image": sorted(image_paths),
                                 "label": sorted(label_paths)})
    dataset = dataset.cast_column("image", Image())
    dataset = dataset.cast_column("label", Image())
    return dataset

# step 1: create Dataset objects
train_dataset = create_dataset(image_paths_train, label_paths_train)
validation_dataset = create_dataset(image_paths_validation, label_paths_validation)

# step 2: create DatasetDict
dataset = DatasetDict({
    "train": train_dataset,
    "validation": validation_dataset,
})

# step 3: push to Hub (assumes you have run the huggingface-cli login command in a terminal/notebook)
dataset.push_to_hub("your-name/dataset-repo")

# optionally, you can push to a private repo on the Hub
# dataset.push_to_hub("name of repo on the hub", private=True)
an id2label dictionary mapping the class integers to their class names
import json
# simple example
id2label = {0: 'cat', 1: 'dog'}
with open('id2label.json', 'w') as fp:
    json.dump(id2label, fp)
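If your images and labels already live in a dataset repo on the Hub, you can store this file alongside them. A minimal sketch with huggingface_hub (reusing the placeholder repo name from the snippet above):

from huggingface_hub import HfApi

api = HfApi()
# Upload the mapping next to the images in your dataset repo.
api.upload_file(
    path_or_fileobj="id2label.json",
    path_in_repo="id2label.json",
    repo_id="your-name/dataset-repo",
    repo_type="dataset",
)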
As an example, take a look at this example dataset, which was created with the steps shown above.
Preprocess
The next step is to load a SegFormer image processor to prepare the images and annotations for the model. Some datasets, like this one, use the zero-index as the background class. However, the background class isn't actually included in the 150 classes, so you'll need to set reduce_labels=True to subtract one from all the labels. The zero-index is replaced by 255 so it's ignored by SegFormer's loss function:
from transformers import AutoImageProcessor
checkpoint = "nvidia/mit-b0"
image_processor = AutoImageProcessor.from_pretrained(checkpoint, reduce_labels=True)
It is common to apply some data augmentations to an image dataset to make a model more robust against overfitting. In this guide, you'll use the ColorJitter function from torchvision to randomly change the color properties of an image, but you can also use any image library you like. |
from torchvision.transforms import ColorJitter
jitter = ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.1)
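To see what the augmentation does, you can apply the jitter to a single training image first (an optional check, assuming you are in a notebook where the returned PIL image is displayed):

jittered = jitter(train_ds[0]["image"])
jittered  # the colors should look slightly shifted compared to the original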
Now create two preprocessing functions to prepare the images and annotations for the model. These functions convert the images into pixel_values and annotations to labels. For the training set, jitter is applied before providing the images to the image processor. For the test set, the image processor crops and normalizes the images, and only crops the labels because no data augmentation is applied during testing. |
def train_transforms(example_batch):
    images = [jitter(x) for x in example_batch["image"]]
    labels = [x for x in example_batch["annotation"]]
    inputs = image_processor(images, labels)
    return inputs

def val_transforms(example_batch):
    images = [x for x in example_batch["image"]]
    labels = [x for x in example_batch["annotation"]]
    inputs = image_processor(images, labels)
    return inputs
To apply the jitter over the entire dataset, use the 🤗 Datasets [~datasets.Dataset.set_transform] function. The transform is applied on the fly which is faster and consumes less disk space:
train_ds.set_transform(train_transforms)
test_ds.set_transform(val_transforms)
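As an optional check (a sketch), indexing the dataset now returns model-ready arrays computed on the fly:

import numpy as np

sample = train_ds[0]
print(np.array(sample["pixel_values"]).shape)  # typically (3, 512, 512) with the default SegFormer size
print(np.array(sample["labels"]).shape)        # typically (512, 512)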
It is common to apply some data augmentations to an image dataset to make a model more robust against overfitting.
In this guide, you'll use tf.image to randomly change the color properties of an image, but you can also use any image
library you like.
Define two separate transformation functions:
- training data transformations that include image augmentation
- validation data transformations that only transpose the images, since computer vision models in 🤗 Transformers expect channels-first layout |
import tensorflow as tf

def aug_transforms(image):
    image = tf.keras.utils.img_to_array(image)
    image = tf.image.random_brightness(image, 0.25)
    image = tf.image.random_contrast(image, 0.5, 2.0)
    image = tf.image.random_saturation(image, 0.75, 1.25)
    image = tf.image.random_hue(image, 0.1)
    image = tf.transpose(image, (2, 0, 1))
    return image

def transforms(image):
    image = tf.keras.utils.img_to_array(image)
    image = tf.transpose(image, (2, 0, 1))
    return image
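A quick optional check (a sketch): applying aug_transforms to a single PIL image should return a channels-first array.

img = train_ds[0]["image"].convert("RGB")
out = aug_transforms(img)
print(out.shape)  # (3, height, width)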
Next, create two preprocessing functions to prepare batches of images and annotations for the model. These functions apply
the image transformations and use the earlier loaded image_processor to convert the images into pixel_values and
annotations to labels. ImageProcessor also takes care of resizing and normalizing the images. |
def train_transforms(example_batch):
    images = [aug_transforms(x.convert("RGB")) for x in example_batch["image"]]
    labels = [x for x in example_batch["annotation"]]
    inputs = image_processor(images, labels)
    return inputs

def val_transforms(example_batch):
    images = [transforms(x.convert("RGB")) for x in example_batch["image"]]
    labels = [x for x in example_batch["annotation"]]
    inputs = image_processor(images, labels)
    return inputs
To apply the preprocessing transformations over the entire dataset, use the 🤗 Datasets [~datasets.Dataset.set_transform] function.
The transform is applied on the fly which is faster and consumes less disk space:
train_ds.set_transform(train_transforms)
test_ds.set_transform(val_transforms)
Evaluate
Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 Evaluate library. For this task, load the mean Intersection over Union (IoU) metric (see the 🤗 Evaluate quick tour to learn more about how to load and compute a metric):
import evaluate
metric = evaluate.load("mean_iou")
Then create a function to [~evaluate.EvaluationModule.compute] the metrics. Your predictions need to be converted to
logits first, and then reshaped to match the size of the labels before you can call [~evaluate.EvaluationModule.compute]: |
import numpy as np
import torch
from torch import nn

def compute_metrics(eval_pred):
    with torch.no_grad():
        logits, labels = eval_pred
        logits_tensor = torch.from_numpy(logits)
        logits_tensor = nn.functional.interpolate(
            logits_tensor,
            size=labels.shape[-2:],
            mode="bilinear",
            align_corners=False,
        ).argmax(dim=1)

        pred_labels = logits_tensor.detach().cpu().numpy()
        metrics = metric.compute(
            predictions=pred_labels,
            references=labels,
            num_labels=num_labels,
            ignore_index=255,
            reduce_labels=False,
        )
        for key, value in metrics.items():
            if isinstance(value, np.ndarray):
                metrics[key] = value.tolist()
        return metrics
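If you want to smoke-test the function before training, you can call it on random arrays with compatible shapes (a sketch; the shapes below are arbitrary):

dummy_logits = np.random.randn(2, num_labels, 32, 32).astype("float32")  # (batch, classes, h, w)
dummy_labels = np.random.randint(0, num_labels, size=(2, 128, 128))
print(compute_metrics((dummy_logits, dummy_labels)).keys())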
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    logits = tf.transpose(logits, perm=[0, 2, 3, 1])
    logits_resized = tf.image.resize(
        logits,
        size=tf.shape(labels)[1:],
        method="bilinear",
    )

    pred_labels = tf.argmax(logits_resized, axis=-1)
    metrics = metric.compute(
        predictions=pred_labels,
        references=labels,
        num_labels=num_labels,
        ignore_index=-1,
        reduce_labels=image_processor.do_reduce_labels,
    )

    per_category_accuracy = metrics.pop("per_category_accuracy").tolist()
    per_category_iou = metrics.pop("per_category_iou").tolist()

    metrics.update({f"accuracy_{id2label[i]}": v for i, v in enumerate(per_category_accuracy)})
    metrics.update({f"iou_{id2label[i]}": v for i, v in enumerate(per_category_iou)})
    return {"val_" + k: v for k, v in metrics.items()}
Your compute_metrics function is ready to go now, and you'll return to it when you set up your training.
Train
If you aren't familiar with finetuning a model with the [Trainer], take a look at the basic tutorial here!
You're ready to start training your model now! Load SegFormer with [AutoModelForSemanticSegmentation], and pass the model the mapping between label ids and label classes:
from transformers import AutoModelForSemanticSegmentation, TrainingArguments, Trainer
model = AutoModelForSemanticSegmentation.from_pretrained(checkpoint, id2label=id2label, label2id=label2id)
At this point, only three steps remain: |
1. Define your training hyperparameters in [TrainingArguments]. It is important you don't remove unused columns because this'll drop the image column. Without the image column, you can't create pixel_values. Set remove_unused_columns=False to prevent this behavior! The only other required parameter is output_dir which specifies where to save your model. You'll push this model to the Hub by setting push_to_hub=True (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer] will evaluate the IoU metric and save the training checkpoint.
2. Pass the training arguments to [Trainer] along with the model, dataset, tokenizer, data collator, and compute_metrics function.
3. Call [~Trainer.train] to finetune your model.
training_args = TrainingArguments(
output_dir="segformer-b0-scene-parse-150",
learning_rate=6e-5,
num_train_epochs=50,
per_device_train_batch_size=2,
per_device_eval_batch_size=2,
save_total_limit=3,
evaluation_strategy="steps",
save_strategy="steps",
save_steps=20,
eval_steps=20,
logging_steps=1,
eval_accumulation_steps=5,
remove_unused_columns=False,
push_to_hub=True,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_ds,
eval_dataset=test_ds,
compute_metrics=compute_metrics,
)
trainer.train() |
Once training is completed, share your model to the Hub with the [~transformers.Trainer.push_to_hub] method so everyone can use your model:
trainer.push_to_hub()
If you are unfamiliar with fine-tuning a model with Keras, check out the basic tutorial first! |
To fine-tune a model in TensorFlow, follow these steps:
1. Define the training hyperparameters, and set up an optimizer and a learning rate schedule.
2. Instantiate a pretrained model.
3. Convert a 🤗 Dataset to a tf.data.Dataset.
4. Compile your model.
5. Add callbacks to calculate metrics and upload your model to the 🤗 Hub.
6. Use the fit() method to run the training.
Start by defining the hyperparameters, optimizer and learning rate schedule: |
from transformers import create_optimizer
batch_size = 2
num_epochs = 50
num_train_steps = len(train_ds) * num_epochs
learning_rate = 6e-5
weight_decay_rate = 0.01
optimizer, lr_schedule = create_optimizer(
init_lr=learning_rate,
num_train_steps=num_train_steps,
weight_decay_rate=weight_decay_rate,
num_warmup_steps=0,
) |
Then, load SegFormer with [TFAutoModelForSemanticSegmentation] along with the label mappings, and compile it with the
optimizer. Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:
from transformers import TFAutoModelForSemanticSegmentation
model = TFAutoModelForSemanticSegmentation.from_pretrained(
checkpoint,
id2label=id2label,
label2id=label2id,
)
model.compile(optimizer=optimizer) # No loss argument! |
Convert your datasets to the tf.data.Dataset format using the [~datasets.Dataset.to_tf_dataset] and the [DefaultDataCollator]: |
from transformers import DefaultDataCollator
data_collator = DefaultDataCollator(return_tensors="tf")
tf_train_dataset = train_ds.to_tf_dataset(
columns=["pixel_values", "label"],
shuffle=True,
batch_size=batch_size,
collate_fn=data_collator,
)
tf_eval_dataset = test_ds.to_tf_dataset(
columns=["pixel_values", "label"],
shuffle=True,
batch_size=batch_size,
collate_fn=data_collator,
) |
To compute the accuracy from the predictions and push your model to the 🤗 Hub, use Keras callbacks.
Pass your compute_metrics function to [KerasMetricCallback],
and use the [PushToHubCallback] to upload the model: |
from transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback
metric_callback = KerasMetricCallback(
metric_fn=compute_metrics, eval_dataset=tf_eval_dataset, batch_size=batch_size, label_cols=["labels"]
)
push_to_hub_callback = PushToHubCallback(output_dir="scene_segmentation", tokenizer=image_processor)
callbacks = [metric_callback, push_to_hub_callback] |
Finally, you are ready to train your model! Call fit() with your training and validation datasets, the number of epochs,
and your callbacks to fine-tune the model:
model.fit(
tf_train_dataset,
validation_data=tf_eval_dataset,
callbacks=callbacks,
epochs=num_epochs,
)
Congratulations! You have fine-tuned your model and shared it on the 🤗 Hub. You can now use it for inference!
Inference
Great, now that you've finetuned a model, you can use it for inference!
Load an image for inference:
image = test_ds[0]["image"]
image
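The quickest way to try out your fine-tuned model is in a pipeline. This is a sketch; "MariaK/scene_segmentation" is the checkpoint used in the TensorFlow example below and stands in for your own Hub repository:

from transformers import pipeline

segmenter = pipeline("image-segmentation", model="MariaK/scene_segmentation")
segmenter(image)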
We will now see how to infer without a pipeline. Process the image with an image processor and place the pixel_values on a GPU:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # use GPU if available, otherwise use a CPU
encoding = image_processor(image, return_tensors="pt")
pixel_values = encoding.pixel_values.to(device)
Pass your input to the model and return the logits:
outputs = model(pixel_values=pixel_values)
logits = outputs.logits.cpu()
Next, rescale the logits to the original image size:
upsampled_logits = nn.functional.interpolate(
logits,
size=image.size[::-1],
mode="bilinear",
align_corners=False,
)
pred_seg = upsampled_logits.argmax(dim=1)[0]
Load an image processor to preprocess the image and return the input as TensorFlow tensors:
from transformers import AutoImageProcessor
image_processor = AutoImageProcessor.from_pretrained("MariaK/scene_segmentation")
inputs = image_processor(image, return_tensors="tf")
Pass your input to the model and return the logits:
from transformers import TFAutoModelForSemanticSegmentation
model = TFAutoModelForSemanticSegmentation.from_pretrained("MariaK/scene_segmentation")
logits = model(**inputs).logits
Next, rescale the logits to the original image size and apply argmax on the class dimension:
logits = tf.transpose(logits, [0, 2, 3, 1])
upsampled_logits = tf.image.resize(
logits,
# We reverse the shape of image because image.size returns width and height.
image.size[::-1],
)
pred_seg = tf.math.argmax(upsampled_logits, axis=-1)[0] |
To visualize the results, load the dataset color palette as ade_palette() that maps each class to their RGB values. Then you can combine and plot your image and the predicted segmentation map: |
import matplotlib.pyplot as plt
import numpy as np
color_seg = np.zeros((pred_seg.shape[0], pred_seg.shape[1], 3), dtype=np.uint8)
palette = np.array(ade_palette())
for label, color in enumerate(palette):
    color_seg[pred_seg == label, :] = color
color_seg = color_seg[..., ::-1]  # convert to BGR
img = np.array(image) * 0.5 + color_seg * 0.5 # plot the image with the segmentation map
img = img.astype(np.uint8)
plt.figure(figsize=(15, 10))
plt.imshow(img)
plt.show() |
Image captioning
[[open-in-colab]]
Image captioning is the task of predicting a caption for a given image. A common real-world application is
helping visually impaired people navigate different situations; by describing images to them, image captioning
improves content accessibility.
This guide will show you how to:
Fine-tune an image captioning model.
Use the fine-tuned model for inference.
Before you begin, make sure you have all the necessary libraries installed:
pip install transformers datasets evaluate -q
pip install jiwer -q
We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:
from huggingface_hub import notebook_login
notebook_login() |
Load the Pokémon BLIP captions dataset
Use the 🤗 Datasets library to load a dataset that consists of image-caption pairs. To create your own image captioning dataset
in PyTorch, you can follow this notebook.
from datasets import load_dataset
ds = load_dataset("lambdalabs/pokemon-blip-captions")
ds
DatasetDict({
train: Dataset({
features: ['image', 'text'],
num_rows: 833
})
})
The dataset has two features, image and text. |
The dataset has two features, image and text.
Many image captioning datasets contain multiple captions per image. In those cases, a common strategy is to randomly sample a caption amongst the available ones during training. |
Split the dataset’s train split into a train and test set with the [~datasets.Dataset.train_test_split] method:
ds = ds["train"].train_test_split(test_size=0.1)
train_ds = ds["train"]
test_ds = ds["test"]
Let's visualize a couple of samples from the training set.
from textwrap import wrap
import matplotlib.pyplot as plt
import numpy as np

def plot_images(images, captions):
    plt.figure(figsize=(20, 20))
    for i in range(len(images)):
        ax = plt.subplot(1, len(images), i + 1)
        caption = captions[i]
        caption = "\n".join(wrap(caption, 12))
        plt.title(caption)
        plt.imshow(images[i])
        plt.axis("off")

sample_images_to_visualize = [np.array(train_ds[i]["image"]) for i in range(5)]
sample_captions = [train_ds[i]["text"] for i in range(5)]
plot_images(sample_images_to_visualize, sample_captions)
Preprocess the dataset
Since the dataset has two modalities (image and text), the preprocessing pipeline will preprocess both the images and the captions.
To do so, load the processor class associated with the model you are about to fine-tune.
from transformers import AutoProcessor
checkpoint = "microsoft/git-base"
processor = AutoProcessor.from_pretrained(checkpoint) |
The processor will internally preprocess the image (which includes resizing and pixel scaling) and tokenize the caption.
def transforms(example_batch):
    images = [x for x in example_batch["image"]]
    captions = [x for x in example_batch["text"]]
    inputs = processor(images=images, text=captions, padding="max_length")
    inputs.update({"labels": inputs["input_ids"]})
    return inputs

train_ds.set_transform(transforms)
test_ds.set_transform(transforms)
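As an optional check (a sketch), indexing a transformed example should now return the processor outputs together with the duplicated labels:

sample = train_ds[0]
print(list(sample.keys()))  # expect pixel_values, input_ids, attention_mask, and labels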
With the dataset ready, you can now set up the model for fine-tuning.
Load a base model
Load the "microsoft/git-base" checkpoint into an AutoModelForCausalLM object.
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(checkpoint) |
Evaluate
Image captioning models are typically evaluated with the Rouge Score or Word Error Rate. For this guide, you will use the Word Error Rate (WER).
We use the 🤗 Evaluate library to do so. For potential limitations and other gotchas of the WER, refer to this guide.
from evaluate import load
import torch

wer = load("wer")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predicted = logits.argmax(-1)
    decoded_labels = processor.batch_decode(labels, skip_special_tokens=True)
    decoded_predictions = processor.batch_decode(predicted, skip_special_tokens=True)
    wer_score = wer.compute(predictions=decoded_predictions, references=decoded_labels)
    return {"wer_score": wer_score}
Train!
Now, you are ready to start fine-tuning the model. You will use the 🤗 [Trainer] for this.
First, define the training arguments using [TrainingArguments].
from transformers import TrainingArguments, Trainer
model_name = checkpoint.split("/")[1]
training_args = TrainingArguments(
output_dir=f"{model_name}-pokemon",
learning_rate=5e-5,
num_train_epochs=50,
fp16=True,
per_device_train_batch_size=32,
per_device_eval_batch_size=32,
gradient_accumulation_steps=2,
save_total_limit=3,
evaluation_strategy="steps",
eval_steps=50,
save_strategy="steps",
save_steps=50,
logging_steps=50,
remove_unused_columns=False,
push_to_hub=True,
label_names=["labels"],
load_best_model_at_end=True,
) |
Then pass them along with the datasets and the model to 🤗 Trainer.
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_ds,
eval_dataset=test_ds,
compute_metrics=compute_metrics,
)
To start training, simply call [~Trainer.train] on the [Trainer] object.
trainer.train()
You should see the training loss drop smoothly as training progresses.
Once training is completed, share your model to the Hub with the [~Trainer.push_to_hub] method so everyone can use your model:
trainer.push_to_hub()
Inference
Take a sample image from test_ds to test the model.
from PIL import Image
import requests
url = "https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/pokemon.png"
image = Image.open(requests.get(url, stream=True).raw)
image |
Prepare image for the model.
device = "cuda" if torch.cuda.is_available() else "cpu"
inputs = processor(images=image, return_tensors="pt").to(device)
pixel_values = inputs.pixel_values
Call [generate] and decode the predictions.
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
generated_caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_caption) |
a drawing of a pink and blue pokemon
Looks like the fine-tuned model generated a pretty good caption!
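You can also wrap the same steps in a pipeline. This is a sketch; "your-username/git-base-pokemon" is a placeholder for the repository the Trainer pushed to the Hub above:

from transformers import pipeline

captioner = pipeline("image-to-text", model="your-username/git-base-pokemon")
captioner(image)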
Before you begin, make sure you have all the necessary libraries installed: |
pip install -q datasets transformers evaluate timm albumentations
You'll use 🤗 Datasets to load a dataset from the Hugging Face Hub, 🤗 Transformers to train your model,
and albumentations to augment the data. timm is currently required to load a convolutional backbone for the DETR model.
We encourage you to share your model with the community. Log in to your Hugging Face account to upload it to the Hub.
When prompted, enter your token to log in:
from huggingface_hub import notebook_login
notebook_login()
Load the CPPE-5 dataset
The CPPE-5 dataset contains images with
annotations identifying medical personal protective equipment (PPE) in the context of the COVID-19 pandemic.
Start by loading the dataset: |
from datasets import load_dataset
cppe5 = load_dataset("cppe-5")
cppe5
DatasetDict({
train: Dataset({
features: ['image_id', 'image', 'width', 'height', 'objects'],
num_rows: 1000
})
test: Dataset({
features: ['image_id', 'image', 'width', 'height', 'objects'],
num_rows: 29
})
}) |
You'll see that this dataset already comes with a training set containing 1000 images and a test set with 29 images.
To get familiar with the data, explore what the examples look like.
cppe5["train"][0]
{'image_id': 15,
'image': <PIL image>,
'width': 943,
'height': 663,
'objects': {'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]],
'category': [4, 4, 0, 0]}} |
The examples in the dataset have the following fields:
- image_id: the example image id
- image: a PIL.Image.Image object containing the image
- width: width of the image
- height: height of the image
- objects: a dictionary containing bounding box metadata for the objects in the image:
- id: the annotation id
- area: the area of the bounding box
- bbox: the object's bounding box (in the COCO format)
- category: the object's category, with possible values including Coverall (0), Face_Shield (1), Gloves (2), Goggles (3) and Mask (4)
You may notice that the bbox field follows the COCO format, which is the format that the DETR model expects.
However, the grouping of the fields inside objects differs from the annotation format DETR requires. You will
need to apply some preprocessing transformations before using this data for training.
To get an even better understanding of the data, visualize an example in the dataset. |
import numpy as np
import os
from PIL import Image, ImageDraw

image = cppe5["train"][0]["image"]
annotations = cppe5["train"][0]["objects"]
width, height = image.size
draw = ImageDraw.Draw(image)

categories = cppe5["train"].features["objects"].feature["category"].names
id2label = {index: x for index, x in enumerate(categories, start=0)}
label2id = {v: k for k, v in id2label.items()}

for i in range(len(annotations["id"])):
    box = annotations["bbox"][i]
    class_idx = annotations["category"][i]
    x, y, w, h = tuple(box)
    # Check if coordinates are normalized or not
    if max(box) > 1.0:
        # Coordinates are un-normalized, no need to re-scale them
        x1, y1 = int(x), int(y)
        x2, y2 = int(x + w), int(y + h)
    else:
        # Coordinates are normalized, re-scale them to pixel values
        x1 = int(x * width)
        y1 = int(y * height)
        x2 = int((x + w) * width)
        y2 = int((y + h) * height)
    draw.rectangle((x1, y1, x2, y2), outline="red", width=1)
    draw.text((x1, y1), id2label[class_idx], fill="white")

image
To visualize the bounding boxes with associated labels, you can get the labels from the dataset's metadata, specifically
the category field.
You'll also want to create dictionaries that map a label id to a label class (id2label) and the other way around (label2id).
You can use them later when setting up the model. Including these maps will make your model reusable by others if you share
it on the Hugging Face Hub. Note that the bounding box drawing code above assumes the boxes are in the XYWH format (the x, y coordinates of the top-left corner plus the box width and height); it won't work as-is for other formats such as (x1, y1, x2, y2).
As a final step of getting familiar with the data, explore it for potential issues. One common problem with datasets for
object detection is bounding boxes that "stretch" beyond the edge of the image. Such "runaway" bounding boxes can raise
errors during training and should be addressed at this stage. There are a few examples with this issue in this dataset.
To keep things simple in this guide, we remove these images from the data. |
remove_idx = [590, 821, 822, 875, 876, 878, 879]
keep = [i for i in range(len(cppe5["train"])) if i not in remove_idx]
cppe5["train"] = cppe5["train"].select(keep)
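If you prefer to find such boxes programmatically instead of relying on hard-coded indices, here is a minimal sketch (the has_runaway_box helper is ours, and iterating the dataset decodes every image, so it is slow):

def has_runaway_box(example):
    # Flag boxes whose right or bottom edge extends past the image border.
    for x, y, w, h in example["objects"]["bbox"]:
        if x + w > example["width"] or y + h > example["height"]:
            return True
    return False

bad_idx = [i for i, example in enumerate(cppe5["train"]) if has_runaway_box(example)]
print(bad_idx)  # run before the select() above, this surfaces the problematic indices; run after it, the list should be empty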
Preprocess the data
To finetune a model, you must preprocess the data you plan to use to match precisely the approach used for the pre-trained model.
[AutoImageProcessor] takes care of processing image data to create pixel_values, pixel_mask, and
labels that a DETR model can train with. The image processor has some attributes that you won't have to worry about:
image_mean = [0.485, 0.456, 0.406]
image_std = [0.229, 0.224, 0.225]
These are the mean and standard deviation used to normalize images during the model pre-training. These values are crucial
to replicate when doing inference or finetuning a pre-trained image model.
Instantiate the image processor from the same checkpoint as the model you want to finetune.
from transformers import AutoImageProcessor
checkpoint = "facebook/detr-resnet-50"
image_processor = AutoImageProcessor.from_pretrained(checkpoint) |
Before passing the images to the image_processor, apply two preprocessing transformations to the dataset:
- Augmenting images
- Reformatting annotations to meet DETR expectations
First, to make sure the model does not overfit on the training data, you can apply image augmentation with any data augmentation library. Here we use Albumentations.
This library ensures that transformations affect the image and update the bounding boxes accordingly.
The 🤗 Datasets library documentation has a detailed guide on how to augment images for object detection,
and it uses the exact same dataset as an example. Apply the same approach here, resize each image to (480, 480),
flip it horizontally, and brighten it: |
import albumentations
import numpy as np
import torch
transform = albumentations.Compose(
[
albumentations.Resize(480, 480),
albumentations.HorizontalFlip(p=1.0),
albumentations.RandomBrightnessContrast(p=1.0),
],
bbox_params=albumentations.BboxParams(format="coco", label_fields=["category"]),
)
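A quick optional check (a sketch): applying the transform to a single example returns the augmented image together with the adjusted boxes and labels.

example = cppe5["train"][0]
out = transform(
    image=np.array(example["image"].convert("RGB")),
    bboxes=example["objects"]["bbox"],
    category=example["objects"]["category"],
)
print(out.keys())  # image, bboxes, category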
The image_processor expects the annotations to be in the following format: {'image_id': int, 'annotations': List[Dict]},
where each dictionary is a COCO object annotation. Let's add a function to reformat annotations for a single example: |
def formatted_anns(image_id, category, area, bbox):
    annotations = []
    for i in range(0, len(category)):
        new_ann = {
            "image_id": image_id,
            "category_id": category[i],
            "isCrowd": 0,
            "area": area[i],
            "bbox": list(bbox[i]),
        }
        annotations.append(new_ann)

    return annotations
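For instance (an optional sketch), formatting the annotations of the first training example looks like this:

objects = cppe5["train"][0]["objects"]
formatted_anns(cppe5["train"][0]["image_id"], objects["category"], objects["area"], objects["bbox"])[:1]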
Now you can combine the image and annotation transformations to use on a batch of examples:
# transforming a batch
def transform_aug_ann(examples):
    image_ids = examples["image_id"]
    images, bboxes, area, categories = [], [], [], []
    for image, objects in zip(examples["image"], examples["objects"]):
        image = np.array(image.convert("RGB"))[:, :, ::-1]
        out = transform(image=image, bboxes=objects["bbox"], category=objects["category"])

        area.append(objects["area"])
        images.append(out["image"])
        bboxes.append(out["bboxes"])
        categories.append(out["category"])

    targets = [
        {"image_id": id_, "annotations": formatted_anns(id_, cat_, ar_, box_)}
        for id_, cat_, ar_, box_ in zip(image_ids, categories, area, bboxes)
    ]

    return image_processor(images=images, annotations=targets, return_tensors="pt")
Apply this preprocessing function to the entire dataset using 🤗 Datasets [~datasets.Dataset.with_transform] method. This method applies
transformations on the fly when you load an element of the dataset.
At this point, you can check what an example from the dataset looks like after the transformations. You should see a tensor
with pixel_values, a tensor with pixel_mask, and labels. |
cppe5["train"] = cppe5["train"].with_transform(transform_aug_ann)
cppe5["train"][15]
{'pixel_values': tensor([[[ 0.9132, 0.9132, 0.9132, ..., -1.9809, -1.9809, -1.9809],
[ 0.9132, 0.9132, 0.9132, ..., -1.9809, -1.9809, -1.9809],
[ 0.9132, 0.9132, 0.9132, ..., -1.9638, -1.9638, -1.9638],
...,
[-1.5699, -1.5699, -1.5699, ..., -1.9980, -1.9980, -1.9980],
[-1.5528, -1.5528, -1.5528, ..., -1.9980, -1.9809, -1.9809],
[-1.5528, -1.5528, -1.5528, ..., -1.9980, -1.9809, -1.9809]],
[[ 1.3081, 1.3081, 1.3081, ..., -1.8431, -1.8431, -1.8431],
[ 1.3081, 1.3081, 1.3081, ..., -1.8431, -1.8431, -1.8431],
[ 1.3081, 1.3081, 1.3081, ..., -1.8256, -1.8256, -1.8256],
...,
[-1.3179, -1.3179, -1.3179, ..., -1.8606, -1.8606, -1.8606],
[-1.3004, -1.3004, -1.3004, ..., -1.8606, -1.8431, -1.8431],
[-1.3004, -1.3004, -1.3004, ..., -1.8606, -1.8431, -1.8431]],
[[ 1.4200, 1.4200, 1.4200, ..., -1.6476, -1.6476, -1.6476],
[ 1.4200, 1.4200, 1.4200, ..., -1.6476, -1.6476, -1.6476],
[ 1.4200, 1.4200, 1.4200, ..., -1.6302, -1.6302, -1.6302],
...,
[-1.0201, -1.0201, -1.0201, ..., -1.5604, -1.5604, -1.5604],
[-1.0027, -1.0027, -1.0027, ..., -1.5604, -1.5430, -1.5430],
[-1.0027, -1.0027, -1.0027, ..., -1.5604, -1.5430, -1.5430]]]),
'pixel_mask': tensor([[1, 1, 1, ..., 1, 1, 1],
[1, 1, 1, ..., 1, 1, 1],
[1, 1, 1, ..., 1, 1, 1],
...,
[1, 1, 1, ..., 1, 1, 1],
[1, 1, 1, ..., 1, 1, 1],
[1, 1, 1, ..., 1, 1, 1]]),
'labels': {'size': tensor([800, 800]), 'image_id': tensor([756]), 'class_labels': tensor([4]), 'boxes': tensor([[0.7340, 0.6986, 0.3414, 0.5944]]), 'area': tensor([519544.4375]), 'iscrowd': tensor([0]), 'orig_size': tensor([480, 480])}}
You have successfully augmented the individual images and prepared their annotations. However, preprocessing isn't
complete yet. In the final step, create a custom collate_fn to batch images together.
Pad images (which are now pixel_values) to the largest image in a batch, and create a corresponding pixel_mask
to indicate which pixels are real (1) and which are padding (0). |
def collate_fn(batch):
    pixel_values = [item["pixel_values"] for item in batch]
    encoding = image_processor.pad(pixel_values, return_tensors="pt")
    labels = [item["labels"] for item in batch]
    batch = {}
    batch["pixel_values"] = encoding["pixel_values"]
    batch["pixel_mask"] = encoding["pixel_mask"]
    batch["labels"] = labels
    return batch
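You can verify the collate function by batching a couple of transformed examples yourself (an optional sketch):

from torch.utils.data import DataLoader

loader = DataLoader(cppe5["train"], batch_size=2, collate_fn=collate_fn)
batch = next(iter(loader))
print(batch["pixel_values"].shape)  # (2, 3, height, width), padded to the largest image in the batch
print(batch["pixel_mask"].shape)    # (2, height, width)
print(len(batch["labels"]))         # 2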
Training the DETR model
You have done most of the heavy lifting in the previous sections, so now you are ready to train your model!
The images in this dataset are still quite large, even after resizing. This means that finetuning this model will
require at least one GPU.
Training involves the following steps:
1. Load the model with [AutoModelForObjectDetection] using the same checkpoint as in the preprocessing.
2. Define your training hyperparameters in [TrainingArguments].
3. Pass the training arguments to [Trainer] along with the model, dataset, image processor, and data collator.
4. Call [~Trainer.train] to finetune your model.
When loading the model from the same checkpoint that you used for the preprocessing, remember to pass the label2id
and id2label maps that you created earlier from the dataset's metadata. Additionally, we specify ignore_mismatched_sizes=True to replace the existing classification head with a new one. |
from transformers import AutoModelForObjectDetection
model = AutoModelForObjectDetection.from_pretrained(
checkpoint,
id2label=id2label,
label2id=label2id,
ignore_mismatched_sizes=True,
) |
In the [TrainingArguments] use output_dir to specify where to save your model, then configure hyperparameters as you see fit.
It is important you do not remove unused columns because this will drop the image column. Without the image column, you
can't create pixel_values. For this reason, set remove_unused_columns to False.
If you wish to share your model by pushing to the Hub, set push_to_hub to True (you must be signed in to Hugging
Face to upload your model). |
from transformers import TrainingArguments
training_args = TrainingArguments(
output_dir="detr-resnet-50_finetuned_cppe5",
per_device_train_batch_size=8,
num_train_epochs=10,
fp16=True,
save_steps=200,
logging_steps=50,
learning_rate=1e-5,
weight_decay=1e-4,
save_total_limit=2,
remove_unused_columns=False,
push_to_hub=True,
)
Finally, bring everything together, and call [~transformers.Trainer.train]:
from transformers import Trainer
trainer = Trainer(
model=model,
args=training_args,
data_collator=collate_fn,
train_dataset=cppe5["train"],
tokenizer=image_processor,
)
trainer.train() |
If you have set push_to_hub to True in the training_args, the training checkpoints are pushed to the
Hugging Face Hub. Upon training completion, push the final model to the Hub as well by calling the [~transformers.Trainer.push_to_hub] method.
trainer.push_to_hub() |
Evaluate
Object detection models are commonly evaluated with a set of COCO-style metrics.
You can use one of the existing metrics implementations, but here you'll use the one from torchvision to evaluate the final
model that you pushed to the Hub.
To use the torchvision evaluator, you'll need to prepare a ground truth COCO dataset. The API to build a COCO dataset
requires the data to be stored in a certain format, so you'll need to save images and annotations to disk first. Just like
when you prepared your data for training, the annotations from the cppe5["test"] need to be formatted. However, images
should stay as they are.
The evaluation step requires a bit of work, but it can be split in three major steps.
First, prepare the cppe5["test"] set: format the annotations and save the data to disk. |
import json

# format annotations the same as for training, no need for data augmentation
def val_formatted_anns(image_id, objects):
    annotations = []
    for i in range(0, len(objects["id"])):
        new_ann = {
            "id": objects["id"][i],
            "category_id": objects["category"][i],
            "iscrowd": 0,
            "image_id": image_id,
            "area": objects["area"][i],
            "bbox": objects["bbox"][i],
        }
        annotations.append(new_ann)

    return annotations

# Save images and annotations into the files torchvision.datasets.CocoDetection expects
def save_cppe5_annotation_file_images(cppe5):
    output_json = {}
    path_output_cppe5 = f"{os.getcwd()}/cppe5/"

    if not os.path.exists(path_output_cppe5):
        os.makedirs(path_output_cppe5)

    path_anno = os.path.join(path_output_cppe5, "cppe5_ann.json")
    categories_json = [{"supercategory": "none", "id": id, "name": id2label[id]} for id in id2label]
    output_json["images"] = []
    output_json["annotations"] = []
    for example in cppe5:
        ann = val_formatted_anns(example["image_id"], example["objects"])
        output_json["images"].append(
            {
                "id": example["image_id"],
                "width": example["image"].width,
                "height": example["image"].height,
                "file_name": f"{example['image_id']}.png",
            }
        )
        output_json["annotations"].extend(ann)
    output_json["categories"] = categories_json

    with open(path_anno, "w") as file:
        json.dump(output_json, file, ensure_ascii=False, indent=4)

    for im, img_id in zip(cppe5["image"], cppe5["image_id"]):
        path_img = os.path.join(path_output_cppe5, f"{img_id}.png")
        im.save(path_img)

    return path_output_cppe5, path_anno
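With the helper in place, run the preparation step on the test split (a sketch; the images are saved untouched, as required above):

path_output_cppe5, path_anno = save_cppe5_annotation_file_images(cppe5["test"])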