On the other hand, oneDNN is optimized for the Channels Last memory format and uses it directly for optimal performance, so PyTorch simply passes a memory view to oneDNN. This means the conversion of the input and output tensors is avoided. Fig-2 indicates the memory format propagation behavior of convolution on PyTorch CPU (a solid arrow indicates a memory format conversion, and a dashed arrow indicates a memory view):
Fig-2 CPU Conv memory format propagation
On PyTorch, the default memory format is Channels First. In case a particular operator doesn't have support on Channels Last, the NHWC input would be treated as a non-contiguous NCHW tensor and therefore fall back to Channels First, which will consume precious memory bandwidth on CPU and result in suboptimal performance.
Therefore, it is very important to extend the scope of Channels Last support for optimal performance. We have implemented Channels Last kernels for the commonly used operators in the CV domain, applicable to both inference and training, such as:
Activations (e.g., ReLU, PReLU, etc.)
Convolution (e.g., Conv2d)
Normalization (e.g., BatchNorm2d, GroupNorm, etc.)
Pooling (e.g., AdaptiveAvgPool2d, MaxPool2d, etc.)
Shuffle (e.g., ChannelShuffle, PixelShuffle)
Refer to Operators-with-Channels-Last-support for details.
Native Level Optimization on Channels Last
As mentioned above, PyTorch uses oneDNN to achieve optimal performance on Intel CPUs for convolutions. The rest of the memory format aware operators are optimized at the PyTorch native level, which doesn't require any third-party library support.
Cache friendly parallelization scheme: we keep the same parallelization scheme for all the memory format aware operators, which helps increase data locality when passing each layer's output to the next.
Vectorization on multiple archs: generally, we can vectorize on the innermost dimension of the Channels Last memory format, and each of the vectorized CPU kernels is generated for both AVX2 and AVX512.
While contributing to the Channels Last kernels, we tried our best to optimize the Channels First counterparts as well. The fact is that for some operators, such as Convolution and Pooling, it is physically impossible to achieve optimal performance on Channels First.
Run Vision Models on Channels Last
The Channels Last related APIs are documented at PyTorch memory format tutorial. Typically, we can convert a 4D tensor from Channels First to Channels Last by:
```python
# convert x to channels last
# suppose x's shape is (N, C, H, W)
# then x's stride will be (HWC, 1, WC, C)
x = x.to(memory_format=torch.channels_last)
```
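To verify the memory format of a tensor, a quick check such as the following can be used (a minimal sketch; `is_contiguous` with a `memory_format` argument is the standard API for this):

```python
# x is channels last: dense in memory under NHWC strides
print(x.is_contiguous(memory_format=torch.channels_last))  # True
# the logical shape is still (N, C, H, W)
print(x.shape)
```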
To run models on the Channels Last memory format, you simply need to convert the input and the model to Channels Last and then you are ready to go. The following is a minimal example showing how to run ResNet50 with TorchVision on the Channels Last memory format:
```python
import torch
from torchvision.models import resnet50
N, C, H, W = 1, 3, 224, 224
x = torch.rand(N, C, H, W)
model = resnet50()
model.eval()
# convert input and model to channels last
x = x.to(memory_format=torch.channels_last)
model = model.to(memory_format=torch.channels_last)
model(x)
```
The Channels Last optimization is implemented at the native kernel level, which means you may apply other functionalities such as torch.fx and TorchScript together with Channels Last as well.
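For instance, here is a minimal sketch of combining TorchScript tracing with the Channels Last model and input from the snippet above:

```python
# trace and freeze the channels-last model; x is already channels last
with torch.no_grad():
    traced = torch.jit.trace(model, x)
    traced = torch.jit.freeze(traced)
    traced(x)
```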
Performance Gains
We benchmarked inference performance of TorchVision models on Intel® Xeon® Platinum 8380 CPU @ 2.3 GHz, single instance per socket (batch size = 2 x number of physical cores). Results show that Channels Last has 1.3x to 1.8x performance gain over Channels First.
The performance gain primarily comes from two aspects:
For Convolution layers, Channels Last saved the memory format conversion to blocked format for activations, which improves the overall computation efficiency.
For Pooling and Upsampling layers, Channels Last can use vectorized logic along the innermost dimension, e.g., "C", while Channels First can't.
For memory format non-aware layers, Channels Last and Channels First have the same performance.
Conclusion & Future Work
In this blog we introduced the fundamental concepts of Channels Last and demonstrated the performance benefits of using Channels Last on CPU for vision models. The current work is limited to 2D models, and we will extend the optimization effort to 3D models in the near future!
Acknowledgement
The results presented in this blog are a joint effort of the Meta and Intel PyTorch teams. Special thanks to Vitaly Fedyunin and Wei Wei from Meta who spent precious time and gave substantial assistance! Together we made one more step on the path of improving the PyTorch CPU ecosystem.
References
PyTorch memory format tutorial
oneDNN guide on memory formats
PyTorch operators with Channels Last support
layout: blog_detail
title: 'Announcing the PyTorch Enterprise Support Program'
author: Team PyTorch
Today, we are excited to announce the PyTorch Enterprise Support Program, a participatory program that enables service providers to develop and offer tailored enterprise-grade support to their customers. This new offering, built in collaboration between Facebook and Microsoft, was created in direct response to feedback from PyTorch enterprise users who are developing models in production at scale for mission-critical applications.
The PyTorch Enterprise Support Program is available to any service provider. It is designed to mutually benefit all program Participants by sharing and improving PyTorch long-term support (LTS), including contributions of hotfixes and other improvements found while working closely with customers and on their systems.
To benefit the open source community, all hotfixes developed by Participants will be tested and fed back to the LTS releases of PyTorch regularly through PyTorch’s standard pull request process. To participate in the program, a service provider must apply and meet a set of program terms and certification requirements. Once accepted, the service provider becomes a program Participant and can offer a packaged PyTorch Enterprise support service with LTS, prioritized troubleshooting, useful integrations, and more.
As one of the founding members and an inaugural member of the PyTorch Enterprise Support Program, Microsoft is launching PyTorch Enterprise on Microsoft Azure to deliver a reliable production experience for PyTorch users. Microsoft will support each PyTorch release for as long as it is current. In addition, it will support selected releases for two years, enabling a stable production experience. Microsoft Premier and Unified Support customers can access prioritized troubleshooting for hotfixes, bugs, and security patches at no additional cost. Microsoft will extensively test PyTorch releases for performance regression. The latest release of PyTorch will be integrated with Azure Machine Learning and other PyTorch add-ons including ONNX Runtime for faster inference.
PyTorch Enterprise on Microsoft Azure not only benefits its customers, but also the PyTorch community users. All improvements will be tested and fed back to the future release for PyTorch so everyone in the community can use them.
As an organization or PyTorch user, the standard way of researching and deploying with different release versions of PyTorch does not change. If your organization is looking for managed long-term support, prioritized patches, bug fixes, and additional enterprise-grade support, then you should reach out to service providers participating in the program.
To learn more and participate in the program as a service provider, visit the PyTorch Enterprise Support Program. If you want to learn more about Microsoft’s offering, visit PyTorch Enterprise on Microsoft Azure.
Thank you,
Team PyTorch
layout: blog_detail
title: 'Everything You Need To Know About Torchvision’s SSD Implementation'
author: Vasilis Vryniotis
featured-img: 'assets/images/prediction-examples.png'
In TorchVision v0.10, we’ve released two new Object Detection models based on the SSD architecture. Our plan is to cover the key implementation details of the algorithms along with information on how they were trained in a two-part article.
In part 1 of the series, we will focus on the original implementation of the SSD algorithm as described in the Single Shot MultiBox Detector paper. We will briefly give a high-level description of how the algorithm works, then go through its main components, highlight key parts of its code, and finally discuss how we trained the released model. Our goal is to cover all the necessary details to reproduce the model, including those optimizations which are not covered in the paper but are part of the original implementation.
How Does SSD Work?
Reading the aforementioned paper is highly recommended, but here is a quick oversimplified refresher. Our target is to detect the locations of objects in an image along with their categories. Here is Figure 5 from the SSD paper with prediction examples of the model:
The SSD algorithm uses a CNN backbone, passes the input image through it and takes the convolutional outputs from different levels of the network. These outputs are called feature maps. The feature maps are then passed through the Classification and Regression heads, which are responsible for predicting the class and the location of the boxes.
Since the feature maps of each image contain outputs from different levels of the network, their size varies and thus they can capture objects of different dimensions. On top of each, we tile several default boxes which can be thought of as our rough prior guesses. For each default box, we predict whether there is an object (along with its class) and its offset (correction over the original location). During training time, we need to first match the ground truth to the default boxes and then use those matches to estimate our loss. During inference, similar prediction boxes are combined to estimate the final predictions.
The SSD Network Architecture
In this section, we will discuss the key components of SSD. Our code follows the paper closely and makes use of many of the undocumented optimizations included in the official implementation.
DefaultBoxGenerator
The DefaultBoxGenerator class is responsible for generating the default boxes of SSD and operates similarly to the AnchorGenerator of FasterRCNN (for more info on their differences see pages 4-6 of the paper). It produces a set of predefined boxes of specific width and height which are tiled across the image and serve as the first rough prior guesses of where objects might be located. Here is Figure 1 from the SSD paper with a visualization of ground truths and default boxes:
The class is parameterized by a set of hyperparameters that control their shape and tiling. The implementation will automatically provide good guesses with the default parameters for those who want to experiment with new backbones/datasets, but one can also pass optimized custom values.
SSDMatcher
The SSDMatcher class extends the standard Matcher used by FasterRCNN and it is responsible for matching the default boxes to the ground truth. After estimating the IoUs of all combinations, we use the matcher to find for each default box the best candidate ground truth with overlap higher than the IoU threshold. The SSD version of the matcher has an extra step to ensure that each ground truth is matched with the default box that has the highest overlap. The results of the matcher are used in the loss estimation during the training process of the model.
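To make the matching rule concrete, here is a rough sketch of the two argmax steps described above (an illustration only, with made-up boxes; the real logic lives in the SSDMatcher class):

```python
import torch
from torchvision.ops import box_iou

gt = torch.tensor([[10., 10., 50., 50.], [60., 60., 90., 90.]])  # (num_gt, 4)
defaults = torch.rand(8732, 4) * 50                               # (num_defaults, 4)
defaults[:, 2:] += defaults[:, :2]                                # ensure x2 > x1, y2 > y1

iou = box_iou(gt, defaults)               # (num_gt, num_defaults)
best_gt_per_default = iou.argmax(dim=0)   # best candidate ground truth per default box
best_default_per_gt = iou.argmax(dim=1)   # extra step: each ground truth keeps its best box
```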
Classification and Regression Heads
The SSDHead class is responsible for initializing the Classification and Regression parts of the network. Here are a few notable details about their code:
Both the Classification and the Regression head inherit from the same class which is responsible for making the predictions for each feature map.
Each level of the feature map uses a separate 3x3 Convolution to estimate the class logits and box locations.
The number of predictions that each head makes per level depends on the number of default boxes and the sizes of the feature maps.
Backbone Feature Extractor
The feature extractor reconfigures and enhances a standard VGG backbone with extra layers as depicted on the Figure 2 of the SSD paper:
The class supports all VGG models of TorchVision and one can create a similar extractor class for other types of CNNs (see this example for ResNet). Here are a few implementation details of the class:
Patching the ceil_mode parameter of the 3rd Maxpool layer is necessary to get the same feature map sizes as the paper. This is due to small differences between PyTorch and the original Caffe implementation of the model.
It adds a series of extra feature layers on top of VGG. If the highres parameter is True during its construction, it will append an extra convolution. This is useful for the SSD512 version of the model.
As discussed in section 3 of the paper, the fully connected layers of the original VGG are converted to convolutions, with the first one using Atrous. Moreover, maxpool5’s stride and kernel size are modified.
As described in section 3.1, L2 normalization is used on the output of conv4_3 and a set of learnable weights is introduced to control its scaling.
SSD Algorithm
The final key piece of the implementation is in the SSD class. Here are some notable details:
The algorithm is parameterized by a set of arguments similar to other detection models. The mandatory parameters are: the backbone which is responsible for estimating the feature maps, the anchor_generator which should be a configured instance of the DefaultBoxGenerator class, the size to which the input images will be resized and the num_classes for classification excluding the background.
If a head is not provided, the constructor will initialize the default SSDHead. To do so, we need to know the number of output channels for each feature map produced by the backbone. Initially we try to retrieve this information from the backbone but if not available we will dynamically estimate it.
The algorithm reuses the standard BoxCoder class used by other Detection models. The class is responsible for encoding and decoding the bounding boxes and is configured to use the same prior variances as the original implementation.
Though we reuse the standard GeneralizedRCNNTransform class to resize and normalize the input images, the SSD algorithm configures it to ensure that the image size will remain fixed.
Here are the two core methods of the implementation:
The compute_loss method estimates the standard Multi-box loss as described on page 5 of the SSD paper. It uses the smooth L1 loss for regression and the standard cross-entropy loss with hard-negative sampling for classification.
As in all detection models, the forward method currently behaves differently depending on whether the model is in training or eval mode. It starts by resizing and normalizing the input images and then passes them through the backbone to get the feature maps. The feature maps are then passed through the head to get the predictions and then the method generates the default boxes.
If the model is in training mode, the forward will estimate the IoUs of the default boxes with the ground truth, use the SSDMatcher to produce matches and finally estimate the losses by calling the compute_loss method.
If the model is in eval mode, we first select the best detections by keeping only the ones that pass the score threshold, select the most promising boxes and run NMS to clean up and select the best predictions. Finally we postprocess the predictions to resize them to the original image size.
The SSD300 VGG16 Model
The SSD is a family of models because it can be configured with different backbones and different Head configurations. In this section, we will focus on the provided SSD pre-trained model. We will discuss the details of its configuration and the training process used to reproduce the reported results.
Training process
The model was trained using the COCO dataset and all of its hyper-parameters and scripts can be found in our references folder. Below we provide details on the most notable aspects of the training process.
Paper Hyperparameters
In order to achieve the best possible results on COCO, we adopted the hyperparameters described in section 3 of the paper concerning the optimizer configuration, the weight regularization etc. Moreover, we found it useful to adopt the optimizations that appear in the official implementation concerning the tiling configuration of the DefaultBox generator. This optimization was not described in the paper, but it was crucial for improving the detection precision of smaller objects.
Data Augmentation
Implementing the SSD Data Augmentation strategy, as described on pages 6 and 12 of the paper, was critical to reproducing the results. More specifically, the use of random “Zoom In” and “Zoom Out” transformations makes the model robust to various input sizes and improves its precision on small and medium objects. Finally, since the VGG16 has quite a few parameters, the photometric distortions included in the augmentations have a regularization effect and help avoid overfitting.
Weight Initialization & Input Scaling
Another aspect that we found beneficial was to follow the weight initialization scheme proposed by the paper. To do that, we had to adapt our input scaling method by undoing the 0-1 scaling performed by ToTensor() and use pre-trained ImageNet weights fitted with this scaling (shoutout to Max deGroot for providing them in his repo). All the weights of new convolutions were initialized using Xavier and their biases were set to zero. After initialization, the network was trained end-to-end.
LR Scheme
As reported in the paper, after applying aggressive data augmentations it’s necessary to train the models for longer. Our experiments confirm this, and we had to tweak the learning rate, batch sizes and overall steps to achieve the best results. Our proposed learning scheme is configured to be rather on the safe side; it showed signs of plateauing between the steps, and thus one is likely to be able to train a similar model by doing only 66% of our epochs.
Breakdown of Key Accuracy Improvements
It is important to note that implementing a model directly from a paper is an iterative process that circles between coding, training, bug fixing and adapting the configuration until we match the accuracies reported in the paper. Quite often it also involves simplifying the training recipe or enhancing it with more recent methodologies. It is definitely not a linear process where incremental accuracy improvements are achieved by improving a single direction at a time, but instead involves exploring different hypotheses, making incremental improvements in different aspects and doing a lot of backtracking.
With that in mind, below we try to summarize the optimizations that affected our accuracy the most. We did this by grouping together the various experiments in 4 main groups and attributing the experiment improvements to the closest match. Note that the Y-axis of the graph starts from 18 instead of from 0 to make the difference between optimizations more visible:
| Model Configuration | mAP delta | mAP |
|---|---|---|
| Baseline with "FasterRCNN-style" Hyperparams | - | 19.5 |
| + Paper Hyperparams | 1.6 | 21.1 |
| + Data Augmentation | 1.8 | 22.9 |
| + Weight Initialization & Input Scaling | 1.0 | 23.9 |
| + LR scheme | 1.2 | 25.1 |
Our final model achieves an mAP of 25.1 and exactly reproduces the COCO results reported in the paper. Here is a detailed breakdown of the accuracy metrics.
We hope you found part 1 of the series interesting. In part 2, we will focus on the implementation of SSDlite and discuss its differences from SSD. Until then, we are looking forward to your feedback.
layout: blog_detail
title: "PyTorch 1.13 release, including beta versions of functorch and improved support for Apple’s new M1 chips."
author: Team PyTorch
featured-img: "/assets/images/blog-2022-10-25-Pytorch-1.13-Release.png"
We are excited to announce the release of PyTorch® 1.13 (release note)! This includes Stable versions of BetterTransformer. We deprecated CUDA 10.2 and 11.3 and completed migration of CUDA 11.6 and 11.7. Beta includes improved support for Apple M1 chips and functorch, a library that offers composable vmap (vectorization) and autodiff transforms, being included in-tree with the PyTorch release. This release is composed of over 3,749 commits and 467 contributors since 1.12.1. We want to sincerely thank our dedicated community for your contributions.
Summary:
The BetterTransformer feature set supports fastpath execution for common Transformer models during Inference out-of-the-box, without the need to modify the model. Additional improvements include accelerated add+matmul linear algebra kernels for sizes commonly used in Transformer models, and Nested Tensors use is now enabled by default.
Timely deprecating older CUDA versions allows us to proceed with introducing the latest CUDA version as they are introduced by Nvidia®, and hence allows support for C++17 in PyTorch and new NVIDIA Open GPU Kernel Modules.
Previously, functorch was released out-of-tree in a separate package. After installing PyTorch, a user will be able to import functorch and use functorch without needing to install another package.
PyTorch is offering native builds for Apple® silicon machines that use Apple's new M1 chip as a beta feature, providing improved support across PyTorch's APIs.
| Stable | Beta | Prototype |
|---|---|---|
| Better Transformer | Enable Intel® VTune™ Profiler’s Instrumentation and Tracing Technology APIs | Arm® Compute Library backend support for AWS Graviton |
| CUDA 10.2 and 11.3 CI/CD Deprecation | Extend NNC to support channels last and bf16 | CUDA Sanitizer |
| | Functorch now in PyTorch Core Library | Limited Python 3.11 support |
| | Beta Support for M1 devices | |
Along with 1.13, we are also releasing major updates to the PyTorch libraries; more details can be found in this blog.
Stable Features
(Stable) BetterTransformer API
The BetterTransformer feature set, first released in PyTorch 1.12, is stable. PyTorch BetterTransformer supports fastpath execution for common Transformer models during Inference out-of-the-box, without the need to modify the model. To complement the improvements in Better Transformer, we have also accelerated add+matmul linear algebra kernels for sizes commonly used in Transformer models. | https://pytorch.org/blog/PyTorch-1.13-release/ | pytorch blogs |
Reflecting the performance benefits for many NLP users, Nested Tensors use for Better Transformer is now enabled by default. To ensure compatibility, a mask check is performed to ensure a contiguous mask is supplied. In Transformer Encoder, the mask check for src_key_padding_mask may be suppressed by setting mask_check=False. This accelerates processing for users that can guarantee that only aligned masks are provided. Finally, better error messages are provided to diagnose incorrect inputs, together with improved diagnostics of why fastpath execution cannot be used.
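As a rough sketch of the mask_check knob mentioned above (the layer sizes here are arbitrary assumptions; fastpath additionally requires eval mode and inference-friendly inputs):

```python
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
# mask_check=False skips the contiguous-mask verification for callers
# that can guarantee only aligned masks are supplied
encoder = nn.TransformerEncoder(layer, num_layers=6, mask_check=False).eval()

src = torch.rand(32, 10, 512)  # (batch, seq, feature)
with torch.no_grad():
    out = encoder(src)
```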
Better Transformer is directly integrated into the PyTorch TorchText library, enabling TorchText users to transparently and automatically take advantage of BetterTransformer speed and efficiency performance. (Tutorial)
Figure: BetterTransformer fastpath execution is now stable and enables sparsity optimization using Nested Tensor representation as default
Introduction of CUDA 11.6 and 11.7 and deprecation of CUDA 10.2 and 11.3
Timely deprecating older CUDA versions allows us to proceed with introducing the latest CUDA version as they are introduced by Nvidia®, and hence allows developers to use the latest features of CUDA and benefit from correctness fixes provided by the latest version.
Decommissioning of CUDA 10.2. CUDA 11 is the first CUDA version to support C++17. Hence decommissioning legacy CUDA 10.2 was a major step in adding support for C++17 in PyTorch. It also helps to improve PyTorch code by eliminating legacy CUDA 10.2 specific instructions. | https://pytorch.org/blog/PyTorch-1.13-release/ | pytorch blogs |
Decommissioning of CUDA 11.3 and introduction of CUDA 11.7 brings compatibility support for the new NVIDIA Open GPU Kernel Modules, and another significant highlight is the lazy loading support. CUDA 11.7 is shipped with cuDNN 8.5.0, which contains a number of optimizations accelerating transformer-based models, a 30% reduction in library size, and various improvements in the runtime fusion engine. Learn more on CUDA 11.7 with our release notes.
Beta Features
(Beta) functorch
Inspired by Google® JAX, functorch is a library that offers composable vmap (vectorization) and autodiff transforms. It enables advanced autodiff use cases that would otherwise be tricky to express in PyTorch. Examples include:
model ensembling
efficiently computing jacobians and hessians
computing per-sample-gradients (or other per-sample quantities)
We’re excited to announce that, as a first step towards closer integration with PyTorch, functorch has moved to inside the PyTorch library and no longer requires the installation of a separate functorch package. After installing PyTorch via conda or pip, you’ll be able to `import functorch` in your program. Learn more with our detailed instructions, nightly and release notes.
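For example, here is a minimal sketch of composing the grad and vmap transforms to compute per-sample gradients (the function below is an arbitrary illustration):

```python
import torch
from functorch import grad, vmap  # in-tree since PyTorch 1.13

def f(x):
    # an arbitrary scalar-valued function of one sample
    return torch.sin(x).sum()

x = torch.randn(4, 3)
per_sample_grads = vmap(grad(f))(x)  # shape (4, 3): one gradient per row
```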
(Beta) Intel® VTune™ Profiler's Instrumentation and Tracing Technology APIs (ITT) integration
PyTorch users are able to visualize op-level timeline of PyTorch scripts execution in Intel® VTune™ Profiler when they need to analyze per-op performance with low-level performance metrics on Intel platforms.
```python
with torch.autograd.profiler.emit_itt():
    for i in range(10):
        torch.itt.range_push('step_{}'.format(i))
        model(input)
        torch.itt.range_pop()
```
Learn more with our tutorial.
(Beta) NNC: Add BF16 and Channels last support
TorchScript graph-mode inference performance on x86 CPU is boosted by adding channels last and BF16 support to NNC. PyTorch users may benefit from channels last optimization on most popular x86 CPUs and benefit from BF16 optimization on Intel Cooper Lake Processor and Sapphire Rapids Processor. A >2X geomean performance boost is observed on broad vision models with these two optimizations on Intel Cooper Lake Processor.
The performance benefit can be obtained with existing TorchScript, channels last and BF16 Autocast APIs. See code snippet below. We will migrate the optimizations in NNC to the new PyTorch DL Compiler TorchInductor.
```python
import torch
import torchvision.models as models
model = models.resnet50(pretrained=True)
# Convert the model to channels-last
model = model.to(memory_format=torch.channels_last)
model.eval()
data = torch.rand(1, 3, 224, 224)
# Convert the data to channels-last
data = data.to(memory_format=torch.channels_last)
# Enable autocast to run with BF16
with torch.cpu.amp.autocast(), torch.no_grad():
# Trace the model
model = torch.jit.trace(model, torch.rand(1, 3, 224, 224))
model = torch.jit.freeze(model)
# Run the traced model
model(data)
```
(Beta) Support for M1 Devices
Since v1.12, PyTorch has been offering native builds for Apple® silicon machines that use Apple's new M1 chip as a prototype feature. In this release, we bring this feature to beta, providing improved support across PyTorch's APIs.
We now run tests for all submodules except torch.distributed on M1 macOS 12.6 instances. With this improved testing, we were able to fix features such as cpp extension and convolution correctness for certain inputs.
To get started, just install PyTorch v1.13 on your Apple silicon Mac running macOS 12 or later with a native version (arm64) of Python. Learn more with our release notes.
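As an illustrative sketch (an assumption of typical usage, not part of the original post), Apple silicon builds also expose the Metal Performance Shaders (MPS) backend, which can be probed at runtime:

```python
import torch

# select the MPS device when available, otherwise fall back to CPU
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
x = torch.rand(3, 3, device=device)
print(x.device)
```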
Prototype Features
(Prototype) Arm® Compute Library (ACL) backend support for AWS Graviton
We achieved substantial improvements for CV and NLP inference on aarch64 CPUs by enabling the Arm Compute Library (ACL) backend for the PyTorch and torch-xla modules. Highlights include:
Enabled mkldnn + acl as the default backend for aarch64 torch wheel.
Enabled mkldnn matmul operator for aarch64 bf16 device.
Brought the TensorFlow xla+acl feature into torch-xla. We enhanced TensorFlow XLA with the Arm Compute Library runtime for aarch64 CPU. These changes are included in TensorFlow master and the upcoming TF 2.10. Once the torch-xla repo is updated for the TensorFlow commit, it will have compiling support for torch-xla. We observed a ~2.5-3x improvement for MLPerf BERT inference compared to the torch 1.12 wheel on Graviton3.
(Prototype) CUDA Sanitizer
When enabled, the sanitizer begins to analyze low-level CUDA operations invoked as a result of the user’s PyTorch code to detect data race errors caused by unsynchronized data access from different CUDA streams. The errors found are then printed along with stack traces of faulty accesses, much like Thread Sanitizer does. An example of a simple error and the output produced by the sanitizer can be viewed here. It will be especially useful for machine learning applications, where corrupted data can be easy to miss for a human and the errors may not always manifest themselves; the sanitizer will always be able to detect them.
(Prototype) Limited Python 3.11 support
Binaries for Linux with Python 3.11 support are available to download via pip. Please follow the instructions on the get started page. Please note that Python 3.11 support is only a preview. In particular, features including Distributed, Profiler, FX and JIT might not be fully functional yet.
layout: blog_detail
title: "Celebrate PyTorch 2.0 with New Performance Features for AI Developers"
author: Intel
Congratulations to the PyTorch Foundation for its release of PyTorch 2.0! In this blog, I discuss the four features for which Intel made significant contributions to PyTorch 2.0:
TorchInductor
GNN
INT8 Inference Optimization
oneDNN Graph API
We at Intel are delighted to be part of the PyTorch community and appreciate the collaboration with and feedback from our colleagues at Meta as we co-developed these features.
Let’s get started.
1. TorchInductor CPU FP32 Inference Optimized
As part of the PyTorch 2.0 compilation stack, TorchInductor CPU backend optimization brings notable performance improvements via graph compilation over the PyTorch eager mode.
The TorchInductor CPU backend is sped up by leveraging the technologies from the Intel® Extension for PyTorch for Conv/GEMM ops with post-op fusion and weight prepacking, and PyTorch ATen CPU kernels for memory-bound ops with explicit vectorization on top of OpenMP*-based thread parallelization.
With these optimizations on top of the powerful loop fusions in TorchInductor codegen, we achieved up to a 1.7x FP32 inference performance boost over three representative deep learning benchmarks: TorchBench, HuggingFace, and timm1. Training and low-precision support are under development.
See the Improvements
The performance improvements on various backends are tracked on this TorchInductor CPU Performance Dashboard.
2. Improve Graph Neural Network (GNN) in PyG for Inference and Training Performance on CPU
GNN is a powerful tool to analyze graph structure data. This feature is designed to improve GNN inference and training performance on Intel® CPUs, including the new 4th Gen Intel® Xeon® Scalable processors.
PyTorch Geometric (PyG) is a very popular library built upon PyTorch to perform GNN workflows. Currently on CPU, GNN models of PyG run slowly due to the lack of GNN-related sparse matrix multiplication operations (i.e., SpMM_reduce) and the lack of several critical kernel-level optimizations (scatter/gather, etc.) tuned for GNN compute.
To address this, optimizations are provided for message passing between adjacent neural network nodes:
scatter_reduce: performance hotspot in message-passing when the edge index is stored in coordinate format (COO).
gather: backward computation of scatter_reduce, specially tuned for the GNN compute when the index is an expanded tensor.
torch.sparse.mm with reduce flag: performance hotspot in message-passing when the edge index is stored in compressed sparse row (CSR). Supported reduce flags: sum, mean, amax, amin. (A rough sketch of the message-passing pattern follows this list.)
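As a rough illustration of the gather/scatter_reduce message-passing pattern described above (the shapes and names here are arbitrary assumptions, not PyG code):

```python
import torch

num_nodes, feat_dim, num_edges = 5, 8, 12
x = torch.randn(num_nodes, feat_dim)                      # node features
edge_index = torch.randint(0, num_nodes, (2, num_edges))  # COO layout: [src, dst]

src, dst = edge_index
messages = x[src]                       # gather the features of the source nodes
out = torch.zeros(num_nodes, feat_dim)
# sum each message into its destination node
out.scatter_reduce_(0, dst.unsqueeze(-1).expand_as(messages),
                    messages, reduce="sum", include_self=True)
```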
End-to-end performance benchmark results for both inference and training on 3rd Gen Intel® Xeon® Scalable processors 8380 platform and on 4th Gen 8480+ platform are discussed in Accelerating PyG on Intel CPUs.
3. Optimize int8 Inference with Unified Quantization Backend for x86 CPU Platforms
The new X86 quantization backend is a combination of FBGEMM (Facebook General Matrix-Matrix Multiplication) and oneAPI Deep Neural Network Library (oneDNN) backends and replaces FBGEMM as the default quantization backend for x86 platforms. The result: better end-to-end int8 inference performance than FBGEMM.
Users access the x86 quantization backend by default for x86 platforms, and the selection between different kernels is automatically done behind the scenes. The rules of selection are based on prior performance testing data done by Intel during feature development. Thus, the x86 backend replaces FBGEMM and may offer better performance, depending on the use case.
The selection rules are:
On platforms without VNNI (e.g., Intel® Core™ i7 processors), FBGEMM is always used.
On platforms with VNNI (e.g., 2nd-4th Gen Intel® Xeon® Scalable processors and future platforms):
For linear, FBGEMM is always used.
For convolution layers, FBGEMM is used for depth-wise convolution whose layers > 100; otherwise, oneDNN is used.
Note that, as the kernels continue to evolve, the selection rules above are subject to change in order to achieve better performance. Performance metrics for throughput speed-up ratios of the unified x86 backend vs. pure FBGEMM are discussed in [RFC] Unified quantization backend for x86 CPU platforms #83888.
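A minimal sketch of inspecting and selecting the quantization engine (the x86 backend is already the default on x86 platforms, so setting it explicitly is usually unnecessary):

```python
import torch

print(torch.backends.quantized.supported_engines)  # engines available in this build
torch.backends.quantized.engine = 'x86'            # select the unified x86 backend
```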
4. Leverage oneDNN Graph API to Accelerate Inference on CPU
oneDNN Graph API extends oneDNN with a flexible graph API to maximize the optimization opportunity for generating efficient code on Intel® AI hardware. It automatically identifies the graph partitions to be accelerated via fusion. The fusion patterns focus on fusing compute-intensive operations such as convolution, matmul, and their neighbor operations for both inference and training use cases.
Currently, BFloat16 and Float32 datatypes are supported and only inference workloads can be optimized. BF16 is only optimized on machines with Intel® Advanced Vector Extensions 512 (Intel® AVX-512) BF16 support.
Few or no modifications are needed in PyTorch to support newer oneDNN Graph fusions/optimized kernels. To use oneDNN Graph, users can:
Either use the API torch.jit.enable_onednn_fusion(True) before JIT tracing a model, OR …
Use its context manager, viz. with torch.jit.fuser("fuser3").
For accelerating BFloat16 inference, we rely on eager-mode AMP (Automatic Mixed Precision) support in PyTorch and disable JIT mode’s AMP.
See the PyTorch performance tuning guide.
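A minimal sketch combining these pieces for Float32 inference, assuming a TorchVision model:

```python
import torch
import torchvision.models as models

# enable oneDNN Graph fusion before JIT tracing
torch.jit.enable_onednn_fusion(True)

model = models.resnet50(pretrained=True).eval()
example = torch.rand(1, 3, 224, 224)

with torch.no_grad():
    traced = torch.jit.trace(model, example)
    traced = torch.jit.freeze(traced)
    traced(example)  # warm-up runs trigger the fusion passes
    out = traced(example)
```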
Next Steps
Get the Software
Try out PyTorch 2.0 and realize the performance benefits for yourself from these Intel-contributed features. | https://pytorch.org/blog/celebrate-pytorch-2.0/ | pytorch blogs |
We encourage you to check out Intel’s other AI Tools and Framework optimizations and learn about the open, standards-based oneAPI multiarchitecture, multivendor programming model that forms the foundation of Intel’s AI software portfolio.
For more details about 4th Gen Intel Xeon Scalable processor, visit AI Platform where you can learn about how Intel is empowering developers to run high-performance, efficient end-to-end AI pipelines.
PyTorch Resources
PyTorch Get Started
Dev Discussions
Documentation
layout: blog_detail
title: "What Every User Should Know About Mixed Precision Training in PyTorch"
author: Syed Ahmed, Christian Sarofeen, Mike Ruberry, Eddie Yan, Natalia Gimelshein, Michael Carilli, Szymon Migacz, Piotr Bialecki, Paulius Micikevicius, Dusan Stosic, Dong Yang, and Naoya Maruyama
featured-img: ''
Efficient training of modern neural networks often relies on using lower precision data types. Peak float16 matrix multiplication and convolution performance is 16x faster than peak float32 performance on A100 GPUs. And since the float16 and bfloat16 data types are only half the size of float32 they can double the performance of bandwidth-bound kernels and reduce the memory required to train a network, allowing for larger models, larger batches, or larger inputs. Using a module like torch.amp (short for “Automated Mixed Precision”) makes it easy to get the speed and memory usage benefits of lower precision data types while preserving convergence behavior.
Going faster and using less memory is always advantageous – deep learning practitioners can test more model architectures and hyperparameters, and larger, more powerful models can be trained. Training very large models like those described in Narayanan et al. and Brown et al. (which take thousands of GPUs months to train even with expert handwritten optimizations) is infeasible without using mixed precision.
We’ve talked about mixed precision techniques before (here, here, and here), and this blog post is a summary of those techniques and an introduction if you’re new to mixed precision.
Mixed Precision Training in Practice
Mixed precision training techniques – the use of the lower precision float16 or bfloat16 data types alongside the float32 data type – are broadly applicable and effective. See Figure 1 for a sampling of models successfully trained with mixed precision, and Figures 2 and 3 for example speedups using torch.amp.
Figure 1: Sampling of DL Workloads Successfully Trained with float16 (Source).
Figure 2: Performance of mixed precision training using torch.amp on NVIDIA 8xV100 vs. float32 training on 8xV100 GPU. Bars represent the speedup factor of torch.amp over float32.
(Higher is better.) (Source).
Figure 3. Performance of mixed precision training using torch.amp on NVIDIA 8xA100 vs. 8xV100 GPU. Bars represent the speedup factor of A100 over V100.
(Higher is Better.) (Source).
See the NVIDIA Deep Learning Examples repository for more sample mixed precision workloads.
Similar performance charts can be seen in 3D medical image analysis, gaze estimation, video synthesis, conditional GANs, and convolutional LSTMs. Huang et al. showed that mixed precision training is 1.5x to 5.5x faster over float32 on V100 GPUs, and an additional 1.3x to 2.5x faster on A100 GPUs on a variety of networks. On very large networks the need for mixed precision is even more evident. Narayanan et al. reports that it would take 34 days to train GPT-3 175B on 1024 A100 GPUs (with a batch size of 1536), but it’s estimated it would take over a year using float32!
Getting Started With Mixed Precision Using torch.amp
torch.amp, introduced in PyTorch 1.6, makes it easy to leverage mixed precision training using the float16 or bfloat16 dtypes. See this blog post, tutorial, and documentation for more details. Figure 4 shows an example of applying AMP with grad scaling to a network.
```python
import torch

# Creates the GradScaler once at the beginning of training
scaler = torch.cuda.amp.GradScaler()

for data, label in data_iter:
    optimizer.zero_grad()
    # Casts operations to mixed precision
    with torch.amp.autocast(device_type="cuda", dtype=torch.float16):
        loss = model(data)

    # Scales the loss, and calls backward()
    # to create scaled gradients
    scaler.scale(loss).backward()

    # Unscales gradients and calls
    # or skips optimizer.step()
    scaler.step(optimizer)

    # Updates the scale for next iteration
    scaler.update()
```
Figure 4: AMP recipe
Picking The Right Approach
Out-of-the-box mixed precision training with either float16 or bfloat16 is effective at speeding up the convergence of many deep learning models, but some models may require more careful numerical accuracy management. Here are some options:
Full float32 precision. Floating point tensors and modules are created in float32 precision by default in PyTorch, but this is a historic artifact not representative of training most modern deep learning networks. It’s rare that networks need this much numerical accuracy.
Enabling TensorFloat32 (TF32) mode. On Ampere and later CUDA devices matrix multiplications and convolutions can use the TensorFloat32 (TF32) mode for faster but slightly less accurate computations. See the Accelerating AI Training with NVIDIA TF32 Tensor Cores blog post for more details. By default PyTorch enables TF32 mode for convolutions but not matrix multiplications, and unless a network requires full float32 precision we recommend enabling this setting for matrix multiplications, too (see the documentation here for how to do so, and the snippet after this list). It can significantly speed up computations with typically negligible loss of numerical accuracy.
Using torch.amp with bfloat16 or float16. Both these low precision floating point data types are usually comparably fast, but some networks may only converge with one vs the other. If a network requires more precision it may need to use float16, and if a network requires more dynamic range it may need to use bfloat16, whose dynamic range is equal to that of float32. If overflows are observed, for example, then we suggest trying bfloat16.
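Here is the minimal snippet referenced in the TF32 option above (assuming Ampere or later hardware):

```python
import torch

# TF32 for convolutions is enabled by default; opt in for matmuls too
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True
```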
There are even more advanced options than those presented here, like using torch.amp’s autocasting for only parts of a model, or managing mixed precision directly. These topics are largely beyond the scope of this blog post, but see the “Best Practices” section below.
Best Practices
We strongly recommend using mixed precision with torch.amp or the TF32 mode (on Ampere and later CUDA devices) whenever possible when training a network. If one of those approaches doesn’t work, however, we recommend the following:
High Performance Computing (HPC) applications, regression tasks, and generative networks may simply require full float32 IEEE precision to converge as expected.
Try selectively applying torch.amp. In particular we recommend first disabling it on regions performing operations from the torch.linalg module or when doing pre- or post-processing. These operations are often especially sensitive. Note that TF32 mode is a global switch and can’t be used selectively on regions of a network. Enable TF32 first to check if a network’s operators are sensitive to the mode, otherwise disable it.
If you encounter type mismatches while using torch.amp we don’t suggest inserting manual casts to start. This error is indicative of something being off with the network, and it’s usually worth investigating first.
Figure out by experimentation if your network is sensitive to range and/or precision of a format. For example fine-tuning bfloat16-pretrained models in float16 can easily run into range issues in float16 because of the potentially large range from training in bfloat16, so users should stick with bfloat16 fine-tuning if the model was trained in bfloat16.
The performance gain of mixed precision training can depend on multiple factors (e.g. compute-bound vs memory-bound problems) and users should use the tuning guide to remove other bottlenecks in their training scripts. Although having similar theoretical performance benefits, BF16 and FP16 can have different speeds in practice. It’s recommended to try the mentioned formats and use the one with best speed while maintaining the desired numeric behavior.
For more details, refer to the AMP Tutorial, Training Neural Networks with Tensor Cores, and see the post “More In-Depth Details of Floating Point Precision" on PyTorch Dev Discussion.
Conclusion
Mixed precision training is an essential tool for training deep learning models on modern hardware, and it will become even more important in the future as the performance gap between lower precision operations and float32 continues to grow on newer hardware, as reflected in Figure 5.
Figure 5: Relative peak throughput of float16 (FP16) vs float32 matrix multiplications on Volta and Ampere GPUs. On Ampere relative peak throughput for the TensorFloat32 (TF32) mode and bfloat16 matrix multiplications are shown, too. The relative peak throughput of low precision data types like float16 and bfloat16 vs. float32 matrix multiplications is expected to grow as new hardware is released.
PyTorch’s torch.amp module makes it easy to get started with mixed precision, and we highly recommend using it to train faster and reduce memory usage. torch.amp supports both float16 and bfloat16 mixed precision.
There are still some networks that are tricky to train with mixed precision, and for these networks we recommend trying TF32 accelerated matrix multiplications on Ampere and later CUDA hardware. Networks are rarely so precision sensitive that they require full float32 precision for every operation.
If you have questions or suggestions for torch.amp or mixed precision support in PyTorch then let us know by posting to the mixed precision category on the PyTorch Forums or filing an issue on the PyTorch GitHub page.
layout: blog_detail
title: 'Everything you need to know about TorchVision’s MobileNetV3 implementation'
author: Vasilis Vryniotis and Francisco Massa
In TorchVision v0.9, we released a series of new mobile-friendly models that can be used for Classification, Object Detection and Semantic Segmentation. In this article, we will dig deep into the code of the models, share notable implementation details, explain how we configured and trained them, and highlight important tradeoffs we made during their tuning. Our goal is to disclose technical details that typically remain undocumented in the original papers and repos of the models.
Network Architecture
The implementation of the MobileNetV3 architecture follows the original paper closely. It is customizable and offers different configurations for building Classification, Object Detection and Semantic Segmentation backbones. It was designed to follow a similar structure to MobileNetV2 and the two share common building blocks.
Off-the-shelf, we offer the two variants described on the paper: the Large and the Small. Both are constructed using the same code with the only difference being their configuration which describes the number of blocks, their sizes, their activation functions etc.
Configuration parameters
Even though one can write a custom InvertedResidual setting and pass it to the MobileNetV3 class directly, for the majority of applications we can adapt the existing configs by passing parameters to the model building methods. Some of the key configuration parameters are the following:
The width_mult parameter is a multiplier that affects the number of channels of the model. The default value is 1, and by increasing or decreasing it one can change the number of filters of all convolutions, including the ones of the first and last layers. The implementation ensures that the number of filters is always a multiple of 8. This is a hardware optimization trick which allows for faster vectorization of operations (see the sketch after this list).
The reduced_tail parameter halves the number of channels on the last blocks of the network. This version is used by some Object Detection and Semantic Segmentation models. It’s a speed optimization which is described on the MobileNetV3 paper and reportedly leads to a 15% latency reduction without a significant negative effect on accuracy.
The dilated parameter affects the last 3 InvertedResidual blocks of the model and turns their normal depthwise Convolutions to Atrous Convolutions. This is used to control the output stride of these blocks and has a significant positive effect on the accuracy of Semantic Segmentation models.
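Here is the sketch referenced in the width_mult item above: the "multiple of 8" rounding, modeled after the internal helper TorchVision uses for this purpose (the function name is illustrative):

```python
def make_divisible(v: float, divisor: int = 8) -> int:
    # round to the nearest multiple of `divisor`, never below `divisor` itself
    new_v = max(divisor, int(v + divisor / 2) // divisor * divisor)
    # make sure rounding down does not shrink the channels by more than 10%
    if new_v < 0.9 * v:
        new_v += divisor
    return new_v

print(make_divisible(32 * 1.2))  # 40, for a hypothetical width_mult of 1.2
```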
Implementation details
Below we provide additional information on some notable implementation details of the architecture.
The MobileNetV3 class is responsible for building a network out of the provided configuration. Here are some implementation details of the class:
The last convolution block expands the output of the last InvertedResidual block by a factor of 6. The implementation is aligned with the Large and Small configurations described on the paper and can adapt to different values of the multiplier parameter.
Similarly to other models such as MobileNetV2, a dropout layer is placed just before the final Linear layer of the classifier.
The InvertedResidual class is the main building block of the network. Here are some notable implementation details of the block along with its visualization which comes from Figure 4 of the paper:
There is no expansion step if the input channels and the expanded channels are the same. This happens on the first convolution block of the network.
There is always a projection step even when the expanded channels are the same as the output channels.
The activation method of the depthwise block is placed before the Squeeze-and-Excite layer as this marginally improves the accuracy.
Classification
In this section we provide benchmarks of the pre-trained models and details on how they were configured, trained and quantized.
Benchmarks
Here is how to initialize the pre-trained models:
```python
large = torchvision.models.mobilenet_v3_large(pretrained=True, width_mult=1.0, reduced_tail=False, dilated=False)
small = torchvision.models.mobilenet_v3_small(pretrained=True)
quantized = torchvision.models.quantization.mobilenet_v3_large(pretrained=True)
```
Below we have the detailed benchmarks between new and selected previous models. As we can see, MobileNetV3-Large is a viable replacement for ResNet50 for users who are willing to sacrifice a bit of accuracy for a roughly 6x speed-up:
| Model | Acc@1 | Acc@5 | Inference on CPU (sec) | # Params (M) |
|---|---|---|---|---|
| MobileNetV3-Large | 74.042 | 91.340 | 0.0411 | 5.48 |
| MobileNetV3-Small | 67.668 | 87.402 | 0.0165 | 2.54 |
| Quantized MobileNetV3-Large | 73.004 | 90.858 | 0.0162 | 2.96 |
| MobileNetV2 | 71.880 | 90.290 | 0.0608 | 3.50 |
| ResNet50 | 76.150 | 92.870 | 0.2545 | 25.56 |
| ResNet18 | 69.760 | 89.080 | 0.1032 | 11.69 |
Note that the inference times are measured on CPU. They are not absolute benchmarks, but they allow for relative comparisons between models.
Training process
All pre-trained models are configured with a width multiplier of 1, have full tails, are non-dilated, and were fitted on ImageNet. Both the Large and Small variants were trained using the same hyper-parameters and scripts which can be found in our references folder. Below we provide details on the most notable aspects of the training process.
Achieving fast and stable training
Configuring RMSProp correctly was crucial to achieve fast training with numerical stability. The authors of the paper used TensorFlow in their experiments, and in their runs they reported using a quite high rmsprop_epsilon compared to the default. Typically this hyper-parameter takes small values as it’s used to avoid zero denominators, but in this specific model choosing the right value seems important to avoid numerical instabilities in the loss.
Another important detail is that though PyTorch’s and TensorFlow’s RMSProp implementations typically behave similarly, there are a few differences, with the most notable in our setup being how the epsilon hyper-parameter is handled. More specifically, PyTorch adds the epsilon outside of the square root calculation while TensorFlow adds it inside. The result of this implementation detail is that one needs to adjust the epsilon value while porting the hyper-parameters of the paper. A reasonable approximation can be taken with the formula PyTorch_eps = sqrt(TF_eps).
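For illustration, here is a minimal sketch of how that conversion could look in practice; the TensorFlow epsilon and the remaining hyper-parameter values below are assumptions for the example, not the figures of the published recipe:
```python
import math

import torch
import torchvision

model = torchvision.models.mobilenet_v3_large()

tf_eps = 0.001                   # hypothetical TensorFlow rmsprop_epsilon
pytorch_eps = math.sqrt(tf_eps)  # ~0.0316, per PyTorch_eps = sqrt(TF_eps)

optimizer = torch.optim.RMSprop(
    model.parameters(),
    lr=0.064,                    # illustrative values, not the exact recipe
    momentum=0.9,
    weight_decay=1e-5,
    eps=pytorch_eps,
)
```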
Increasing our accuracy by tuning hyperparameters & improving our training recipe | https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/ | pytorch blogs |
After configuring the optimizer to achieve fast and stable training, we turned to optimizing the accuracy of the model. There are a few techniques that helped us achieve this. First of all, to avoid overfitting we augmented our data using the AutoAugment algorithm, followed by RandomErasing. Additionally, we tuned parameters such as the weight decay using cross-validation. We also found it beneficial to perform weight averaging across different epoch checkpoints after the end of the training. Finally, though not used in our published training recipe, we found that using Label Smoothing, Stochastic Depth and LR noise injection improves the overall accuracy by over 1.5 points.
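As a concrete illustration of the augmentation part of the recipe, the snippet below builds an ImageNet-style training pipeline with AutoAugment followed by RandomErasing; the crop size and erasing probability are assumptions for the example, not the published hyper-parameters:
```python
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    # AutoAugment operates on PIL images, so it goes before ToTensor
    transforms.AutoAugment(transforms.AutoAugmentPolicy.IMAGENET),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
    # RandomErasing operates on tensors, so it goes after ToTensor
    transforms.RandomErasing(p=0.1),
])
```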
The graph and table depict a simplified summary of the most important iterations for improving the accuracy of the MobileNetV3 Large variant. Note that the actual number of iterations done while training the model was significantly larger and that the progress in accuracy was not always monotonically increasing. Also note that the Y-axis of the graph starts from 70% instead of 0% to make the difference between iterations more visible:
| Iteration | Acc@1 | Acc@5 |
|---|---|---|
| Baseline with "MobileNetV2-style" Hyperparams | 71.542 | 90.068 |
| + RMSProp with default eps | 70.684 | 89.380 |
| + RMSProp with adjusted eps & LR scheme | 71.764 | 90.178 |
| + Data Augmentation & Tuned Hyperparams | 73.860 | 91.292 |
| + Checkpoint Averaging | 74.028 | 91.382 |
| + Label Smoothing & Stochastic Depth & LR noise | 75.536 | 92.368 |
Note that once we had achieved an acceptable accuracy, we verified the model performance on the hold-out test dataset, which hadn't been used before for training or hyper-parameter tuning. This process helps us detect overfitting and is always performed for all pre-trained models prior to their release.
Quantization
We currently offer quantized weights for the QNNPACK backend of the MobileNetV3-Large variant, which provides a speed-up of 2.5x. To quantize the model, Quantization Aware Training (QAT) was used. The hyper-parameters and the scripts used to train the model can be found in our references folder.
Note that QAT allows us to model the effects of quantization and adjust the weights so that we can improve the model accuracy. This translates to an accuracy increase of 1.8 points compared to simple post-training quantization (a sketch of the QAT workflow follows the table):
| Quantization Status | Acc@1 | Acc@5 |
|---|---|---|
| Non-quantized | 74.042 | 91.340 |
| Quantization Aware Training | 73.004 | 90.858 |
| Post-training Quantization | 71.160 | 89.834 |
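For reference, a minimal sketch of a QAT workflow with the quantizable MobileNetV3 is shown below; the fine-tuning loop is omitted, and the exact steps of the released recipe live in our references folder:
```python
import torch
import torchvision

# quantize=False loads the quantizable architecture with float weights
model = torchvision.models.quantization.mobilenet_v3_large(pretrained=True, quantize=False)

model.fuse_model()  # fuse Conv/BN/ReLU modules ahead of quantization
model.qconfig = torch.quantization.get_default_qat_qconfig('qnnpack')
model.train()
torch.quantization.prepare_qat(model, inplace=True)

# ... fine-tune here so the fake-quantization observers adjust the weights ...

model.eval()
quantized_model = torch.quantization.convert(model)
```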
Object Detection
In this section, we will first provide benchmarks of the released models, and then discuss how the MobileNetV3-Large backbone was used in a Feature Pyramid Network along with the FasterRCNN detector to perform Object Detection. We will also explain how the network was trained and tuned, along with any trade-offs we had to make. We will not cover details about how it was used with SSDlite, as this will be discussed in a future article.
Benchmarks
Here is how the models are initialized:
```python
import torchvision

high_res = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(pretrained=True)
low_res = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_320_fpn(pretrained=True)
```
Below are some benchmarks between the new and selected previous models. As we can see, the high-resolution Faster R-CNN with MobileNetV3-Large FPN backbone seems to be a viable replacement for the equivalent ResNet50 model for those users who are willing to sacrifice a few accuracy points for a 5x speed-up:
| Model | mAP | Inference on CPU (sec) | # Params (M) |
|---|---|---|---|
| Faster R-CNN MobileNetV3-Large FPN (High-Res) | 32.8 | 0.8409 | 19.39 |
| Faster R-CNN MobileNetV3-Large 320 FPN (Low-Res) | 22.8 | 0.1679 | 19.39 |
| Faster R-CNN ResNet-50 FPN | 37.0 | 4.1514 | 41.76 |
| RetinaNet ResNet-50 FPN | 36.4 | 4.8825 | 34.01 |
Implementation details
The Detector uses a FPN-style backbone which extracts features from different convolutions of the MobileNetV3 model. By default the pre-trained model uses the output of the 13th InvertedResidual block and the output of the Convolution prior to the pooling layer but the implementation supports using the outputs of more stages. | https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/ | pytorch blogs |
All feature maps extracted from the network have their output projected down to 256 channels by the FPN block as this greatly improves the speed of the network. These feature maps provided by the FPN backbone are used by the FasterRCNN detector to provide box and class predictions at different scales.
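To see these feature maps in practice, one can call the detector's backbone directly; the probe below is purely illustrative and simply prints the FPN output levels together with their 256-channel shapes:
```python
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(pretrained=True)
model.eval()

x = torch.rand(1, 3, 800, 800)
with torch.no_grad():
    features = model.backbone(x)  # OrderedDict of FPN feature maps

for name, feature_map in features.items():
    print(name, tuple(feature_map.shape))  # every level has 256 channels
```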
Training & Tuning process
We currently offer two pre-trained models capable of doing object detection at different resolutions. Both models were trained on the COCO dataset using the same hyper-parameters and scripts which can be found in our references folder. | https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/ | pytorch blogs |
The High Resolution detector was trained with images of 800-1333px, while the mobile-friendly Low Resolution detector was trained with images of 320-640px. The reason why we provide two separate sets of pre-trained weights is that training a detector directly on the smaller images leads to a 5 mAP increase in precision compared to passing small images to the pre-trained high-res model. Both backbones were initialized with weights fitted on ImageNet and the last 3 stages of their weights were fine-tuned during the training process.
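When fine-tuning on a custom dataset, a similar freezing behavior can be requested through the model builder; the sketch below relies on the trainable_backbone_layers argument of TorchVision's detection builders, with the value 3 chosen to match the setup described above:
```python
import torchvision

model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(
    pretrained=False,
    pretrained_backbone=True,     # backbone initialized with ImageNet weights
    trainable_backbone_layers=3,  # fine-tune only the last 3 backbone stages
)
```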
An additional speed optimization can be applied on the mobile-friendly model by tuning the RPN NMS thresholds. By sacrificing only 0.2 mAP of precision we were able to improve the CPU speed of the model by roughly 45%. The details of the optimization can be seen in the table below, followed by a sketch of how such thresholds can be applied:
| Tuning Status | mAP | Inference on CPU (sec) |
|---|---|---|
| Before | 23.0 | 0.2904 |
| After | 22.8 | 0.1679 |
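The keyword arguments in the sketch below are forwarded to the underlying FasterRCNN constructor; the specific values are illustrative assumptions rather than the released defaults:
```python
import torchvision

model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_320_fpn(
    pretrained=True,
    rpn_score_thresh=0.05,        # drop low-confidence proposals early
    rpn_pre_nms_top_n_test=150,   # fewer proposals enter NMS at test time
    rpn_post_nms_top_n_test=150,  # fewer proposals survive NMS
)
model.eval()
```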
Below we provide some examples of visualizing the predictions of the Faster R-CNN MobileNetV3-Large FPN model:
Semantic Segmentation | https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/ | pytorch blogs |