Community Contributions
We had more contributions from the open source community in this release than ever before, including several completely new features. We would like to extend our sincere thanks to the community. Please check out the newly added CONTRIBUTING.md for ways to contribute code, and remember that reporting bugs and requesting features are just as valuable. We will continue posting well-scoped work items as issues labeled “help-wanted” and “contributions-welcome” for anyone who would like to contribute code, and are happy to coach new contributors through the contribution process.
Find the full TorchAudio release notes here.
TorchText 0.9.0
[Beta] Dataset API Updates
In this release, we are updating TorchText’s dataset API to be compatible with PyTorch data utilities, such as DataLoader, and are deprecating TorchText’s custom data abstractions such as Field. The updated datasets are simple string-by-string iterators over the data. For guidance about migrating from the legacy abstractions to use modern PyTorch data utilities, please refer to our migration guide.
The text datasets listed below have been updated as part of this work. For examples of how to use these datasets, please refer to our end-to-end text classification tutorial.
* Language modeling: WikiText2, WikiText103, PennTreebank, EnWik9
* Text classification: AG_NEWS, SogouNews, DBpedia, YelpReviewPolarity, YelpReviewFull, YahooAnswers, AmazonReviewPolarity, AmazonReviewFull, IMDB
* Sequence tagging: UDPOS, CoNLL2000Chunking
* Translation: IWSLT2016, IWSLT2017
* Question answering: SQuAD1, SQuAD2
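As a rough illustration of the new iterator-style API with the datasets above (the `AG_NEWS` constructor and `split` argument follow the 0.9 documentation; the no-op collate function below is purely illustrative, as a real pipeline would tokenize and numericalize there):
```python
from torch.utils.data import DataLoader
from torchtext.datasets import AG_NEWS

train_iter = AG_NEWS(split="train")              # yields (label, text) pairs, string by string
label, text = next(iter(train_iter))

def collate_batch(batch):
    # placeholder collate function; tokenize/numericalize here in practice
    labels, texts = zip(*batch)
    return list(labels), list(texts)

train_loader = DataLoader(list(AG_NEWS(split="train")), batch_size=8,
                          shuffle=True, collate_fn=collate_batch)
```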
Find the full TorchText release notes here.
[Stable] TorchCSPRNG 0.2.0
We released TorchCSPRNG in August 2020, a PyTorch C++/CUDA extension that provides cryptographically secure pseudorandom number generators for PyTorch. Today, we are releasing the 0.2.0 version and designating the library as stable. This release includes a new API for encrypt/decrypt with AES128 ECB/CTR as well as CUDA 11 and Windows CUDA support.
Find the full TorchCSPRNG release notes here.
Thanks for reading, and if you are excited about these updates and want to participate in the future of PyTorch, we encourage you to join the discussion forums and open GitHub issues.
Cheers!
Team PyTorch
layout: blog_detail
title: 'An overview of the ML models introduced in TorchVision v0.9'
author: Team PyTorch
TorchVision v0.9 has been released and it is packed with numerous new Machine Learning models and features, speed improvements and bug fixes. In this blog post, we provide a quick overview of the newly introduced ML models and discuss their key features and characteristics.
Classification
MobileNetV3 Large & Small: These two classification models are optimized for Mobile use-cases and are used as backbones on other Computer Vision tasks. The implementation of the new MobileNetV3 architecture supports the Large & Small variants and the depth multiplier parameter as described in the original paper. We offer pre-trained weights on ImageNet for both Large and Small networks with depth multiplier 1.0 and resolution 224x224. Our previous training recipes have been updated and can be used to easily train the models from scratch (shoutout to Ross Wightman for inspiring some of our training configuration). The Large variant offers competitive accuracy compared to ResNet50 while being over 6x faster on CPU, meaning that it is a good candidate for applications where speed is important. For applications where speed is critical, one can sacrifice further accuracy for speed and use the Small variant, which is 15x faster than ResNet50.
Quantized MobileNetV3 Large: The quantized version of MobileNetV3 Large reduces the number of parameters by 45%, and it is roughly 2.5x faster than the non-quantized version while remaining competitive in terms of accuracy. It was fitted on ImageNet using Quantization Aware Training by iterating on the non-quantized version and it can be trained from scratch using the existing reference scripts.
Usage:
model = torchvision.models.mobilenet_v3_large(pretrained=True)
# model = torchvision.models.mobilenet_v3_small(pretrained=True)
# model = torchvision.models.quantization.mobilenet_v3_large(pretrained=True)
model.eval()
predictions = model(img)
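The usage snippets on this page assume `img` is a preprocessed batch tensor. A minimal sketch of producing one (the file name is hypothetical, and the normalization constants are the usual ImageNet values, not something specific to this release):
```python
import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
img = preprocess(Image.open("example.jpg")).unsqueeze(0)  # add a batch dimension
```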
### Object Detection
* **Faster R-CNN MobileNetV3-Large FPN:** Combining the MobileNetV3 Large backbone with a Faster R-CNN detector and a Feature Pyramid Network leads to a highly accurate and fast object detector. The pre-trained weights are fitted on COCO 2017 using the provided reference [scripts](https://github.com/pytorch/vision/tree/master/references/detection#faster-r-cnn-mobilenetv3-large-fpn) and the model is 5x faster on CPU than the equivalent ResNet50 detector while remaining competitive in [terms of accuracy](https://github.com/pytorch/vision/blob/master/docs/source/models.rst#object-detection-instance-segmentation-and-person-keypoint-detection).
* **Faster R-CNN MobileNetV3-Large 320 FPN:** This is an iteration of the previous model that uses reduced resolution (min_size=320 pixel) and sacrifices accuracy for speed. It is 25x faster on CPU than the equivalent ResNet50 detector and thus it is good for real mobile use-cases.
**Usage:**
model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(pretrained=True)
# model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_320_fpn(pretrained=True)
model.eval()
predictions = model(img)
Semantic Segmentation
DeepLabV3 with Dilated MobileNetV3 Large Backbone: A dilated version of the MobileNetV3 Large backbone combined with DeepLabV3 helps us build a highly accurate and fast semantic segmentation model. The pre-trained weights are fitted on COCO 2017 using our standard training recipes. The final model has the same accuracy as the FCN ResNet50 but it is 8.5x faster on CPU, making it an excellent replacement for the majority of applications.
Lite R-ASPP with Dilated MobileNetV3 Large Backbone: We introduce the implementation of a new segmentation head called Lite R-ASPP and combine it with the dilated MobileNetV3 Large backbone to build a very fast segmentation model. The new model sacrifices some accuracy to achieve a 15x speed improvement compared to the previously most lightweight segmentation model, the FCN ResNet50.
Usage:
model = torchvision.models.segmentation.deeplabv3_mobilenet_v3_large(pretrained=True)
# model = torchvision.models.segmentation.lraspp_mobilenet_v3_large(pretrained=True)
model.eval()
predictions = model(img)
In the near future we plan to publish an article that covers the details of how the above models were trained and discuss their tradeoffs and design choices. Until then we encourage you to try out the new models and provide your feedback.
layout: blog_detail
title: 'Introducing nvFuser, a deep learning compiler for PyTorch'
author: Christian Sarofeen, Piotr Bialecki, Jie Jiang, Kevin Stephano, Masaki Kozuki, Neal Vaidya, Stas Bekman
featured-img: "/assets/images/introducing-nvfuser-a-deep-learning-compiler-for-pytorch-1.png"
nvFuser is a Deep Learning Compiler for NVIDIA GPUs that automatically just-in-time compiles fast and flexible kernels to reliably accelerate users' networks. It provides significant speedups for deep learning networks running on Volta and later CUDA accelerators by generating fast custom “fusion” kernels at runtime. nvFuser is specifically designed to meet the unique requirements of the PyTorch community, and it supports diverse network architectures and programs with dynamic inputs of varying shapes and strides.
In this blog post we’ll describe nvFuser and how it’s used today, show the significant performance improvements it can obtain on models from HuggingFace and TIMM, and look ahead to nvFuser in PyTorch 1.13 and beyond. If you would like to know more about how and why fusion improves the speed of training for Deep Learning networks, please see our previous talks on nvFuser from GTC 2022 and GTC 2021.
nvFuser relies on a graph representation of PyTorch operations to optimize and accelerate. Since PyTorch has an eager execution model, the PyTorch operations users are running are not directly accessible as a whole program that can be optimized by a system like nvFuser. Therefore users must utilize systems built on top of nvFuser which are capable of capturing users' programs and translating them into a form that is optimizable by nvFuser. These higher-level systems then pass these captured operations to nvFuser, so that nvFuser can optimize the execution of the user's script for NVIDIA GPUs. There are three systems that capture, translate, and pass user programs to nvFuser for optimization:
TorchScript jit.script
This system directly parses sections of an annotated python script to translate into its own representation what the user is doing. This system then applies its own version of auto differentiation to the graph, and passes sections of the subsequent forward and backwards graphs to nvFuser for optimization.
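For instance, a chain of pointwise operations wrapped in `torch.jit.script` is the kind of region TorchScript can hand to nvFuser. This is only a sketch; it assumes a CUDA-capable GPU and that nvFuser is the active TorchScript fuser (the default on CUDA in recent releases), and the function itself is an arbitrary example:
```python
import torch

@torch.jit.script
def bias_gelu(x, bias):
    # a few pointwise ops: a typical fusion candidate
    y = x + bias
    return y * 0.5 * (1.0 + torch.erf(y / 1.41421))

x = torch.randn(1024, 1024, device="cuda", dtype=torch.float16)
b = torch.randn(1024, device="cuda", dtype=torch.float16)
for _ in range(3):   # the first calls trigger JIT compilation of the fused kernel
    out = bias_gelu(x, b)
```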
FuncTorch
This system doesn’t directly look at the user python script, instead inserting a mechanism that captures PyTorch operations as they’re being run. We refer to this type of capture system as “trace program acquisition”, since we’re tracing what has been performed. FuncTorch doesn’t perform its own auto differentiation – it simply traces PyTorch’s autograd directly to get backward graphs.
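A sketch of this trace-based path, assuming the functorch package that ships alongside these PyTorch releases is installed and a CUDA GPU is available (the function being fused is arbitrary):
```python
import torch
from functorch.compile import memory_efficient_fusion

def silu_mul(x, y):
    return torch.nn.functional.silu(x) * y

fused = memory_efficient_fusion(silu_mul)    # traces forward and backward graphs for nvFuser
a = torch.randn(2048, 2048, device="cuda", requires_grad=True)
b = torch.randn(2048, 2048, device="cuda", requires_grad=True)
out = fused(a, b)
out.sum().backward()
```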
TorchDynamo
TorchDynamo is another program acquisition mechanism built on top of FuncTorch. TorchDynamo parses the Python bytecode produced from the user script in order to select portions to trace with FuncTorch. The benefit of TorchDynamo is that it’s able to apply decorators to a user’s script, effectively isolating what should be sent to FuncTorch, making it easier for FuncTorch to successfully trace complex Python scripts.
These systems are available for users to interact with directly while nvFuser automatically and seamlessly optimizes performance critical regions of the user’s code. These systems automatically send parsed user programs to nvFuser so nvFuser can:
* Analyze the operations being run on GPUs
* Plan parallelization and optimization strategies for those operations
* Apply those strategies in generated GPU code
* Runtime-compile the generated optimized GPU functions
* Execute those CUDA kernels on subsequent iterations
It is important to note nvFuser does not yet support all PyTorch operations, and there are still some scenarios that are actively being improved in nvFuser that are discussed herein. However, nvFuser does support many DL performance critical operations today, and the number of supported operations will grow in subsequent PyTorch releases. nvFuser is capable of generating highly specialized and optimized GPU functions for the operations it does have support for. This means nvFuser is able to power new PyTorch systems like TorchDynamo and FuncTorch to combine the flexibility PyTorch is known for with unbeatable performance.
nvFuser Performance
Before getting into how to use nvFuser, in this section we’ll show the improvements in training speed nvFuser provides for a variety of models from the HuggingFace Transformers and PyTorch Image Models (TIMM) repositories and we will discuss current gaps in nvFuser performance that are under development today. All performance numbers in this section were taken using an NVIDIA A100 40GB GPU, and used either FuncTorch alone or Functorch with TorchDynamo.
HuggingFace Transformer Benchmarks
nvFuser can dramatically accelerate training of HuggingFace Transformers when combined with another important optimization (more on that in a moment). Performance improvements can be seen in Figure 1 to range between 1.12x and 1.50x across a subset of popular HuggingFace Transformer networks.
Figure 1: Performance gains of 8 training scenarios from HuggingFace’s Transformer repository. First performance boost in the dark green is due to replacing the optimizer with an NVIDIA Apex fused AdamW optimizer. The light green is due to adding nvFuser. Models were run with batch size and sequence lengths of [64, 128], [8, 512], [2, 1024], [64, 128], [8, 512], [8, src_seql=512, tgt_seql=128], [8, src_seql=1024, tgt_seql=128], and [8, 512] respectively. All networks were run with Automatic Mixed Precision (AMP) enabled with dtype=float16.
While these speedups are significant, it’s important to understand that nvFuser doesn’t (yet) automate everything about running networks quickly. For HuggingFace Transformers, for example, it was important to use the AdamW fused optimizer from NVIDIA’s Apex repository as the optimizer otherwise consumed a large portion of runtime. Using the fused AdamW optimizer to make the network faster exposes the next major performance bottleneck — memory bound operations. These operations are optimized by nvFuser, providing another large performance boost. With the fused optimizer and nvFuser enabled, the training speed of these networks improved between 1.12x to 1.5x.
HuggingFace Transformer models were run with the torch.amp module. (“amp” stands for Automatic Mixed Precision; see the “What Every User Should Know about Mixed Precision in PyTorch” blog post for details.) An option to use nvFuser was added to HuggingFace’s Trainer. If you have TorchDynamo installed you can activate it to enable nvFuser in HuggingFace by passing torchdynamo = ‘nvfuser’ to the Trainer class.
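As a hedged sketch of what that looks like (it assumes a transformers version whose `TrainingArguments` exposes the `torchdynamo` option and that TorchDynamo is installed; `model` and `train_dataset` are placeholders for your own objects):
```python
from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=8,
    fp16=True,                  # AMP, as used in the benchmarks above
    torchdynamo="nvfuser",      # capture with TorchDynamo, fuse with nvFuser
)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```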
nvFuser has great support for normalization kernels and related fusions frequently found in Natural Language Processing (NLP) models, and it is recommended users try nvFuser in their NLP workloads.
PyTorch Image Models (TIMM) Benchmarks
nvFuser can also significantly reduce the training time of TIMM networks, up to over 1.3x vs. eager PyTorch, and up to 1.44x vs. eager PyTorch when combined with the torch.amp module. Figure 1 shows nvFuser’s speedup without torch.amp, and when torch.amp is used with the NHWC (“channels last”) and NCHW (“channels first”) formats. nvFuser is integrated in TIMM through FuncTorch tracing directly (without TorchDynamo) and can be used by adding the --aot-autograd command line argument when running the TIMM benchmark or training script.
Figure 1: The Y-axis is the performance gain nvFuser provides over not using nvFuser. A value of 1.0 means no change in perf, 2.0 would mean nvFuser is twice as fast, 0.5 would mean nvFuser takes twice the time to run. Square markers are with float16 Automatic Mixed Precision (AMP) and channels first contiguous inputs, circle markers are float32 inputs, and triangles are with float16 AMP and channels last contiguous inputs. Missing data points are due to an error being encountered when tracing.
When running with float32 precision nvFuser provides a 1.12x geometric mean (“geomean”) speedup on TIMM networks, and when running with torch.amp and “channels first” it provides a 1.14x geomean speedup. However, nvFuser currently doesn’t speed up torch.amp and “channels last” training (a 0.9x geomean regression), so we recommend not using it in those cases. We are actively working on improving “channels last” performance now, and soon we will have two additional optimization strategies (grid persistent optimizations for channels-last normalizations and fast transposes) which we expect will provide speedups comparable to “channels first” in PyTorch version 1.13 and later. Many of nvFuser’s optimizations can also help in inference cases. However, in PyTorch when running inference on small batch sizes, the performance is typically limited by CPU overhead, which nvFuser can’t completely remove or fix. Therefore, typically the most important optimization for inference is to enable CUDA Graphs when possible. Once CUDA Graphs is enabled, it can also be beneficial to enable fusion through nvFuser. Performance of inference is shown in Figure 2 and Figure 3. Inference is only run with float16 AMP as it is uncommon to run inference workloads in full float32 precision.
Figure 2: Performance gains of enabling CUDA Graphs, and CUDA Graphs with nvFuser compared to the performance of native PyTorch without CUDA Graphs and nvFuser across TIMM models with float16 AMP, channels first inputs, and a batch size of 1 and 8 respectively. There is a geomean speedup of 2.74x with CUDA Graphs and 2.71x with CUDA Graphs + nvFuser respectively. nvFuser provides a maximum regression of 0.68x and a maximum performance gain of 2.74x (relative to CUDA Graphs without nvFuser). Performance gain is measured relative to the average time per iteration PyTorch takes without CUDA Graphs and without nvFuser. Models are sorted by how much additional performance nvFuser is providing.
Figure 3: Performance gains of enabling CUDA Graphs, and CUDA Graphs with nvFuser compared to the performance of native PyTorch without CUDA Graphs and nvFuser across TIMM models with AMP, channels last inputs, and a batch size of 1 and 8 respectively. There is a geomean speedup of 2.29x with CUDA Graphs and 2.95x with CUDA Graphs + nvFuser respectively. nvFuser provides a maximum regression of 0.86x and a maximum performance gain of 3.82x (relative to CUDA Graphs without nvFuser). Performance gain is measured relative to the average time per iteration PyTorch takes without CUDA Graphs and without nvFuser. Models are sorted by how much additional performance nvFuser is providing.
So far nvFuser performance has not been tuned for inference workloads so its performance benefit is not consistent across all cases. However, there are still many models that benefit significantly from nvFuser during inference and we encourage users to try nvFuser in inference workloads to see if you would benefit today. Performance of nvFuser in inference workloads will improve in the future and if you’re interested in nvFuser in inference workloads please reach out to us on the PyTorch forums.
Getting Started - Accelerate Your Scripts with nvFuser
We’ve created a tutorial demonstrating how to take advantage of nvFuser to accelerate part of a standard transformer block, and how nvFuser can be used to define fast and novel operations. There are still some rough edges in nvFuser that we’re working hard on improving as we’ve outlined in this blog post. However we’ve also demonstrated some great improvements for training speed on multiple networks in HuggingFace and TIMM and we expect there are opportunities in your networks where nvFuser can help today, and many more opportunities it will help in the future.
If you would like to learn more about nvFuser we recommend watching our presentations from NVIDIA’s GTC conference GTC 2022 and GTC 2021.
layout: blog_detail
title: 'Introducing PyTorch Profiler - the new and improved performance tool'
author: Maxim Lukiyanov - Principal PM at Microsoft, Guoliang Hua - Principal Engineering Manager at Microsoft, Geeta Chauhan - Partner Engineering Lead at Facebook, Gisle Dankel - Tech Lead at Facebook
Along with the PyTorch 1.8.1 release, we are excited to announce PyTorch Profiler – the new and improved performance debugging profiler for PyTorch. Developed as part of a collaboration between Microsoft and Facebook, the PyTorch Profiler is an open-source tool that enables accurate and efficient performance analysis and troubleshooting for large-scale deep learning models.
Analyzing and improving large-scale deep learning model performance is an ongoing challenge that grows in importance as the model sizes increase. For a long time, PyTorch users had a hard time solving this challenge due to the lack of available tools. There were standard performance debugging tools that provided GPU hardware-level information but missed the PyTorch-specific context of operations. In order to recover missed information, users needed to combine multiple tools together or manually add minimum correlation information to make sense of the data. There was also the autograd profiler (torch.autograd.profiler) which can capture information about PyTorch operations but does not capture detailed GPU hardware-level information and cannot provide support for visualization.
The new PyTorch Profiler (torch.profiler) is a tool that brings both types of information together and then builds experience that realizes the full potential of that information. This new profiler collects both GPU hardware and PyTorch related information, correlates them, performs automatic detection of bottlenecks in the model, and generates recommendations on how to resolve these bottlenecks. All of this information from the profiler is visualized for the user in TensorBoard. The new Profiler API is natively supported in PyTorch and delivers the simplest experience available to date where users can profile their models without installing any additional packages and see results immediately in TensorBoard with the new PyTorch Profiler plugin. Below is the screenshot of PyTorch Profiler - automatic bottleneck detection.
Getting started
PyTorch Profiler is the next version of the PyTorch autograd profiler. It has a new module namespace torch.profiler but maintains compatibility with autograd profiler APIs. The Profiler uses a new GPU profiling engine, built using Nvidia CUPTI APIs, and is able to capture GPU kernel events with high fidelity. To profile your model training loop, wrap the code in the profiler context manager as shown below.
```python
with torch.profiler.profile(
    schedule=torch.profiler.schedule(
        wait=2,
        warmup=2,
        active=6,
        repeat=1),
    on_trace_ready=tensorboard_trace_handler,
    with_stack=True
) as profiler:
    for step, data in enumerate(trainloader, 0):
        print("step:{}".format(step))
        inputs, labels = data[0].to(device=device), data[1].to(device=device)
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        profiler.step()
```
The `schedule` parameter allows you to limit the number of training steps included in the profile to reduce the amount of data collected and simplify visual analysis by focusing on what’s important. The `tensorboard_trace_handler` automatically saves profiling results to disk for analysis in TensorBoard.
To view the results of the profiling session in TensorBoard, install the PyTorch Profiler TensorBoard plugin package:
```
pip install torch_tb_profiler
```
Visual Studio Code Integration
Microsoft Visual Studio Code is one of the most popular code editors for Python developers and data scientists. The Python extension for VS Code recently added the integration of TensorBoard into the code editor, including support for the PyTorch Profiler. Once you have VS Code and the Python extension installed, you can quickly open the TensorBoard Profiler plugin by launching the Command Palette using the keyboard shortcut CTRL + SHIFT + P (CMD + SHIFT + P on a Mac) and typing the “Launch TensorBoard” command.
This integration comes with a built-in lifecycle management feature. VS Code will install the TensorBoard package and the PyTorch Profiler plugin package (coming in mid-April) automatically if you don’t have them on your system. VS Code will also launch TensorBoard process for you and automatically look for any TensorBoard log files within your current directory. When you’re done, just close the tab and VS Code will automatically close the process. No more Terminal windows running on your system to provide a backend for the TensorBoard UI! Below is PyTorch Profiler Trace View running in TensorBoard.
Learn more about TensorBoard support in VS Code in this blog.
Feedback
Review PyTorch Profiler documentation, give Profiler a try and let us know about your experience. Provide your feedback on PyTorch Discussion Forum or file issues on PyTorch GitHub.
layout: blog_detail
title: "How Disney Improved Activity Recognition Through Multimodal Approaches with PyTorch"
author: Monica Alfaro, Albert Aparicio, Francesc Guitart, Marc Junyent, Pablo Pernias, Marcel Porta, and Miquel Àngel Farré (former Senior Technology Manager)
featured-img: 'assets/images/disney_media_logo.jpg'
Introduction
Among the many things Disney Media & Entertainment Distribution (DMED) is responsible for is the management and distribution of a huge array of media assets, including news, sports, entertainment and features, episodic programs, marketing and advertising, and more.
Our team focuses on media annotation as part of DMED Technology’s content platforms group. In our day-to-day work, we automatically analyze a variety of content that constantly challenges the efficiency of our machine learning workflow and the accuracy of our models.
Several of our colleagues recently discussed the workflow efficiencies that we achieved by switching to an end-to-end video analysis pipeline using PyTorch, as well as how we approach animated character recognition. We invite you to read more about both in this previous post.
While the conversion to an end-to-end PyTorch pipeline is a solution that any company might benefit from, animated character recognition was a uniquely-Disney concept and solution.
In this article we will focus on activity recognition, which is a general challenge across industries — but with some specific opportunities when leveraged in the media production field, because we can combine audio, video, and subtitles to provide a solution.
Experimenting with Multimodality
Working on a multimodal problem adds more complexity to the usual training pipelines. Having multiple information modes for each example means that the multimodal pipeline has to have specific implementations to process each mode in the dataset. Usually after this processing step, the pipeline has to merge or fuse the outputs.
Our initial experiments in multimodality were completed using the MMF framework. MMF is a modular framework for vision and language multimodal research. MMF contains reference implementations of state-of-the-art vision and language models and has also powered multiple research projects at Meta AI Research (as seen in this poster presented in PyTorch Ecosystem Day 2020). Along with the recent release of TorchMultimodal, a PyTorch library for training state-of-the-art multimodal models at scale, MMF highlights the growing interest in Multimodal understanding.
MMF tackles this complexity with modular management of all the elements of the pipeline through a wide set of different implementations for specific modules, ranging from the processing of the modalities to the fusion of the processed information.
In our scenario, MMF was a great entry point to experiment with multimodality. It allowed us to iterate quickly by combining audio, video and closed captioning and experiment at different levels of scale with certain multimodal models, shifting from a single GPU to TPU Pods.
Multimodal Transformers
With a workbench based on MMF, our initial model was based on a concatenation of features from each modality evolving to a pipeline that included a Transformer-based fusion module to combine the different input modes.
Specifically, we made use of the fusion module called MMFTransformer, developed in collaboration with the Meta AI Research team. This is an implementation based on VisualBERT for which the necessary modifications were added to be able to work with text, audio and video.
Despite having decent results with the out-of-the-box MMFTransformer implementation, we were still far from our goal, and the Transformers-based models required more data than we had available.
Searching for less data-hungry solutions
Searching for less data-hungry solutions, our team started studying MLP-Mixer. This new architecture has been proposed by the Google Brain team and it provides an alternative to well established de facto architectures like convolutions or self-attention for computer vision tasks.
MLP-Mixer
The core idea behind mixed variations consists of replacing the convolutions or self-attention mechanisms used in transformers with Multilayer Perceptrons. This change in architecture favors the performance of the model in high data regimes (especially with respect to the Transformers), while also opening some questions regarding the inductive biases hidden in the convolutions and the self-attention layers.
Those proposals perform great in solving image classification tasks by splitting the image in chunks, flattening those chunks into 1D vectors and passing them through a sequence of Mixer Layers.
Inspired by the advantages of Mixer based architectures, our team searched for parallelisms with the type of problems we try to solve in video classification: specifically, instead of a single image, we have a set of frames that need to be classified, along with audio and closed captioning in the shape of new modalities.
Activity Recognition reinterpreting the MLP-Mixer
Our proposal takes the core idea of the MLP-Mixer (using multiple multi-layer perceptrons on a sequence and its transposed sequence) and extends it into a multimodal framework that allows us to process video, audio & text with the same architecture.
For each of the modalities, we use different extractors that will provide embeddings describing the content. Given the embeddings of each modality, the MLP-Mixer architecture solves the problem of deciding which of the modalities might be the most important, while also weighing how much each modality contributes to the final labeling.
For example, when it comes to detecting laughs, sometimes the key information is in audio or in the frames, and in some of the cases we have a strong signal in the closed caption.
We tried processing each frame separately with a ResNet34 to get a sequence of embeddings, and also using a video-specific model called R3D, pre-trained on ImageNet and Kinetics400 respectively.
To process the audio, we use the pretrained ResNet34, and we remove the final layers to be able to extract 2D embeddings from the audio spectrograms (for 224x224 images we end up with 7x7 embeddings).
For closed captioning, we are using a pre-trained BERT-large, with all layers frozen, except for the Embeddings & LayerNorms.
Once we have extracted the embedding from each modality, we concatenate them into a single sequence and pass it through a set of MLP-Mixer blocks; next we use average pooling & a classification head to get predictions.
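A rough sketch of that extract-and-concatenate step is shown below. The layer choices, shapes and random placeholder tensors are illustrative only, not our exact production setup:
```python
import torch
import torchvision

frame_encoder = torchvision.models.resnet34(pretrained=True)
frame_encoder.fc = torch.nn.Identity()          # drop the classifier head, keep 512-d embeddings
frame_encoder.eval()

frames = torch.randn(16, 3, 224, 224)           # 16 video frames (placeholder data)
with torch.no_grad():
    frame_emb = frame_encoder(frames)           # (16, 512)

audio_emb = torch.randn(49, 512)                # e.g. 7x7 spectrogram embeddings, flattened
text_emb = torch.randn(32, 512)                 # e.g. token embeddings projected to 512

sequence = torch.cat([frame_emb, audio_emb, text_emb], dim=0)   # one multimodal sequence
# `sequence` is what gets passed through the MLP-Mixer blocks, then average pooling + head.
```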
Our experiments have been performed on a custom, manually labeled dataset for activity recognition with 15 classes, which we know from experiments are hard and cannot all be predicted accurately using a single modality.
These experiments have shown a significant increase in performance using our approach, especially in a low/mid-data regime (75K training samples).
When it comes to using only Text and Audio, our experiments showed a 15 percent improvement in accuracy over using a classifier on top of the features extracted by state-of-the-art backbones.
Using Text, Audio and Video we have seen a 17 percent improvement in accuracy over using Meta AI’s MMF Framework, which uses a VisualBERT-like model to combine modalities using more powerful state-of-the-art backbones.
Currently, we extended the initial model to cover up to 55 activity classes and 45 event classes. One of the challenges we expect to improve upon in the future is to include all activities and events, even those that are less frequent.
Interpreting the MLP-Mixer mode combinations
An MLP-Mixer is a concatenation of MultiLayer Perceptrons. This can be, very roughly, approximated to a linear operation, in the sense that, once trained, the weights are fixed and the input will directly affect the output.
Once we assume that approximation, we also assume that for an input consisting of NxM numbers, we could find a NxM matrix that (when multiplied elementwise) could approximate the predictions of the MLP-Mixer for a class.
We will call this matrix a stencil, and if we have access to it, we can find what parts of the input embeddings are responsible for a specific prediction.
You can think of it as a punch card with holes in specific positions. Only information in those positions will pass and contribute to a specific prediction. So we can measure the intensity of the input at those positions.
Of course, this is an oversimplification, and there won't exist a unique stencil that perfectly represents all of the contributions of the input to a class (otherwise that would mean that the problem could be solved linearly). So this should be used for visualization purposes only, not as an accurate predictor.
Once we have a set of stencils for each class, we can effortlessly measure input contribution without relying on any external visualization techniques.
To find a stencil, we can start from a "random noise" stencil and optimize it to maximize the activations for a specific class by just back-propagating through the MLP-Mixer.
By doing this we can end up with many valid stencils, and we can reduce them to a few by using K-means to cluster them into similar stencils and averaging each cluster.
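A hedged sketch of that optimization loop is below; the mixer stand-in, shapes, and hyperparameters are all placeholders rather than our production code:
```python
import torch

N, M, num_classes, target_class = 97, 512, 15, 3          # placeholder sizes
mixer = torch.nn.Sequential(                                # stand-in for the trained MLP-Mixer
    torch.nn.Flatten(), torch.nn.Linear(N * M, num_classes))
embeddings = torch.randn(8, N, M)                           # a batch of fused multimodal embeddings

stencil = torch.randn(N, M, requires_grad=True)             # start from "random noise"
opt = torch.optim.Adam([stencil], lr=1e-2)

for _ in range(500):
    logits = mixer(embeddings * stencil)                    # apply the stencil elementwise
    loss = -logits[:, target_class].mean()                  # maximize the target-class activation
    opt.zero_grad()
    loss.backward()
    opt.step()
```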
Using the Mixer to get the best of each world
MLP-Mixer, used as an image classification model without convolutional layers, requires a lot of data, since the lack of inductive bias – one of the model's good points overall – is a weakness when it comes to working in low data domains.
When used as a way to combine information previously extracted by large pretrained backbones (as opposed to being used as a full end-to-end solution), they shine. The Mixer’s strength lies in finding temporal or structural coherence between different inputs. For example, in video-related tasks we could extract embeddings from the frames using a powerful, pretrained model that understands what is going on at frame level and use the mixer to make sense of it in a sequential manner. | https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/ | pytorch blogs |
This way of using the Mixer allows us to work with limited amounts of data and still get better results than what was achieved with Transformers. This is because Mixers seem to be more stable during training and seem to pay attention to all the inputs, while Transformers tend to collapse and pay attention only to some modalities/parts of the sequence.
Acknowledgements: We would like to thank the Meta AI Research and Partner Engineering teams for this collaboration.
layout: blog_detail
title: 'Practical Quantization in PyTorch'
author: Suraj Subramanian, Mark Saroufim, Jerry Zhang
featured-img: ''
Quantization is a cheap and easy way to make your DNN run faster and with lower memory requirements. PyTorch offers a few different approaches to quantize your model. In this blog post, we'll lay a (quick) foundation of quantization in deep learning, and then take a look at how each technique looks in practice. Finally, we'll end with recommendations from the literature for using quantization in your workflows.
Fig 1. PyTorch <3 Quantization
Fundamentals of Quantization
If someone asks you what time it is, you don't respond "10:14:34:430705", but you might say "a quarter past 10".
Quantization has roots in information compression; in deep networks it refers to reducing the numerical precision of their weights and/or activations.
Overparameterized DNNs have more degrees of freedom and this makes them good candidates for information compression [[1]]. When you quantize a model, two things generally happen - the model gets smaller and runs with better efficiency. Hardware vendors explicitly allow for faster processing of 8-bit data (than 32-bit data) resulting in higher throughput. A smaller model has lower memory footprint and power consumption [[2]], crucial for deployment at the edge.
Mapping function
The mapping function is what you might guess: a function that maps values from floating-point to integer space. A commonly used mapping function is a linear transformation given by $Q(r) = \mathrm{round}(r/S + Z)$, where $r$ is the input and $S, Z$ are the quantization parameters.
To reconvert to floating-point space, the inverse function is given by $\tilde{r} = (Q(r) - Z) \cdot S$.
$\tilde{r} \neq r$, and their difference constitutes the quantization error.
Quantization Parameters
The mapping function is parameterized by the scaling factor $S$ and zero-point $Z$.
$S$ is simply the ratio of the input range to the output range:
$$S = \frac{\beta - \alpha}{\beta_q - \alpha_q}$$
where $[\alpha, \beta]$ is the clipping range of the input, i.e. the boundaries of permissible inputs, and $[\alpha_q, \beta_q]$ is the range in quantized output space that it is mapped to. For 8-bit quantization, the output range $\beta_q - \alpha_q \le 2^8 - 1$.
$Z$ acts as a bias to ensure that a 0 in the input space maps perfectly to a 0 in the quantized space.
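To make the roles of $S$ and $Z$ concrete, here is a small sketch (the tensor values and the choice of an 8-bit unsigned range are for illustration only) that computes the qparams by hand, quantizes a tensor with PyTorch's built-in per-tensor quantization, and reads back the quantization error:
```python
import torch

x = torch.tensor([-1.0, 0.0, 0.5, 2.0])
alpha, beta = x.min().item(), x.max().item()     # clipping range taken from the data
scale = (beta - alpha) / 255                     # 8-bit unsigned output range
zero_point = int(round(-alpha / scale))          # maps 0.0 exactly onto an integer

xq = torch.quantize_per_tensor(x, scale=scale, zero_point=zero_point, dtype=torch.quint8)
print(xq.int_repr())         # the stored uint8 values
print(xq.dequantize() - x)   # the quantization error
```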
Calibration
The process of choosing the input clipping range is known as calibration. The simplest technique (also the default in PyTorch) is to record the running minimum and maximum values and assign them to $\alpha$ and $\beta$. TensorRT also uses entropy minimization (KL divergence), mean-square-error minimization, or percentiles of the input range.
In PyTorch, Observer modules (docs, code) collect statistics on the input values and calculate the qparams $S$ and $Z$. Different calibration schemes result in different quantized outputs, and it's best to empirically verify which scheme works best for your application and architecture (more on that later).
```python
import torch
from torch.quantization.observer import MinMaxObserver, MovingAverageMinMaxObserver, HistogramObserver
C, L = 3, 4
normal = torch.distributions.normal.Normal(0,1)
inputs = [normal.sample((C, L)), normal.sample((C, L))]
print(inputs)
>>>>>
[tensor([[-0.0590, 1.1674, 0.7119, -1.1270],
[-1.3974, 0.5077, -0.5601, 0.0683],
[-0.0929, 0.9473, 0.7159, -0.4574]]),
tensor([[-0.0236, -0.7599, 1.0290, 0.8914],
[-1.1727, -1.2556, -0.2271, 0.9568],
[-0.2500, 1.4579, 1.4707, 0.4043]])]
observers = [MinMaxObserver(), MovingAverageMinMaxObserver(), HistogramObserver()]
for obs in observers:
    for x in inputs: obs(x)
    print(obs.__class__.__name__, obs.calculate_qparams())
>>>>>
MinMaxObserver (tensor([0.0112]), tensor([124], dtype=torch.int32))
MovingAverageMinMaxObserver (tensor([0.0101]), tensor([139], dtype=torch.int32))
HistogramObserver (tensor([0.0100]), tensor([106], dtype=torch.int32))
```
Affine and Symmetric Quantization Schemes
Affine or asymmetric quantization schemes assign the input range to the min and max observed values. Affine schemes generally offer tighter clipping ranges and are useful for quantizing non-negative activations (you don't need the input range to contain negative values if your input tensors are never negative). The range is calculated as $\alpha = \min(r)$, $\beta = \max(r)$. Affine quantization leads to more computationally expensive inference when used for weight tensors [[3]].
Symmetric quantization schemes center the input range around 0, eliminating the need to calculate a zero-point offset. The range is calculated as $-\alpha = \beta = \max(|\max(r)|, |\min(r)|)$. For skewed signals (like non-negative activations) this can result in bad quantization resolution because the clipping range includes values that never show up in the input (see the pyplot below).
```python
import numpy as np
import matplotlib.pyplot as plt
import torch

act = torch.distributions.pareto.Pareto(1, 10).sample((1,1024))
weights = torch.distributions.normal.Normal(0, 0.12).sample((3, 64, 7, 7)).flatten()
def get_symmetric_range(x):
    beta = torch.max(x.max(), x.min().abs())
    return -beta.item(), beta.item()

def get_affine_range(x):
    return x.min().item(), x.max().item()

def plot(plt, data, scheme):
    boundaries = get_affine_range(data) if scheme == 'affine' else get_symmetric_range(data)
    a, _, _ = plt.hist(data, density=True, bins=100)
    ymin, ymax = np.quantile(a[a>0], [0.25, 0.95])
    plt.vlines(x=boundaries, ls='--', colors='purple', ymin=ymin, ymax=ymax)
fig, axs = plt.subplots(2,2)
plot(axs[0, 0], act, 'affine')
axs[0, 0].set_title("Activation, Affine-Quantized")
plot(axs[0, 1], act, 'symmetric')
axs[0, 1].set_title("Activation, Symmetric-Quantized")
plot(axs[1, 0], weights, 'affine')
axs[1, 0].set_title("Weights, Affine-Quantized")
plot(axs[1, 1], weights, 'symmetric')
axs[1, 1].set_title("Weights, Symmetric-Quantized")
plt.show()
```
Fig 2. Clipping ranges (in purple) for affine and symmetric schemes
In PyTorch, you can specify affine or symmetric schemes while initializing the Observer. Note that not all observers support both schemes.
```python
for qscheme in [torch.per_tensor_affine, torch.per_tensor_symmetric]:
    obs = MovingAverageMinMaxObserver(qscheme=qscheme)
    for x in inputs: obs(x)
    print(f"Qscheme: {qscheme} | {obs.calculate_qparams()}")
# >>>>>
# Qscheme: torch.per_tensor_affine | (tensor([0.0101]), tensor([139], dtype=torch.int32))
# Qscheme: torch.per_tensor_symmetric | (tensor([0.0109]), tensor([128]))
```
Per-Tensor and Per-Channel Quantization Schemes
Quantization parameters can be calculated for the layer's entire weight tensor as a whole, or separately for each channel. In per-tensor quantization, the same clipping range is applied to all the channels in a layer.
Fig 3. Per-Channel uses one set of qparams for each channel. Per-tensor uses the same qparams for the entire tensor.
For weights quantization, symmetric-per-channel quantization provides better accuracies; per-tensor quantization performs poorly, possibly due to high variance in conv weights across channels from batchnorm folding [[3]].
```python
from torch.quantization.observer import MovingAveragePerChannelMinMaxObserver
obs = MovingAveragePerChannelMinMaxObserver(ch_axis=0)  # calculate qparams for all `C` channels separately
for x in inputs: obs(x)
print(obs.calculate_qparams())
# >>>>>
# (tensor([0.0090, 0.0075, 0.0055]), tensor([125, 187, 82], dtype=torch.int32))
```
### Backend Engine
Currently, quantized operators run on x86 machines via the [FBGEMM backend](https://github.com/pytorch/FBGEMM), or use [QNNPACK](https://github.com/pytorch/QNNPACK) primitives on ARM machines. Backend support for server GPUs (via TensorRT and cuDNN) is coming soon. Learn more about extending quantization to custom backends: [RFC-0019](https://github.com/pytorch/rfcs/blob/master/RFC-0019-Extending-PyTorch-Quantization-to-Custom-Backends.md).
```python
backend = 'fbgemm' if x86 else 'qnnpack'
qconfig = torch.quantization.get_default_qconfig(backend)
torch.backends.quantized.engine = backend
```
QConfig
The QConfig (code, docs) NamedTuple stores the Observers and the quantization schemes used to quantize activations and weights.
Be sure to pass the Observer class (not the instance), or a callable that can return Observer instances. Use with_args() to override the default arguments.
```python
my_qconfig = torch.quantization.QConfig(
    activation=MovingAverageMinMaxObserver.with_args(qscheme=torch.per_tensor_affine),
    weight=MovingAveragePerChannelMinMaxObserver.with_args(qscheme=torch.qint8)
)
# >>>>>
# QConfig(activation=functools.partial(<class 'torch.ao.quantization.observer.MovingAverageMinMaxObserver'>, qscheme=torch.per_tensor_affine){}, weight=functools.partial(<class 'torch.ao.quantization.observer.MovingAveragePerChannelMinMaxObserver'>, qscheme=torch.qint8){})
```
In PyTorch
PyTorch allows you a few different ways to quantize your model depending on
- if you prefer a flexible but manual, or a restricted automagic process (Eager Mode v/s FX Graph Mode)
- if qparams for quantizing activations (layer outputs) are precomputed for all inputs, or calculated afresh with each input (static v/s dynamic)
- if qparams are computed with or without retraining (quantization-aware training v/s post-training quantization)
FX Graph Mode automatically fuses eligible modules, inserts Quant/DeQuant stubs, calibrates the model and returns a quantized module - all in two method calls - but only for networks that are symbolic traceable. The examples below contain the calls using Eager Mode and FX Graph Mode for comparison.
In DNNs, eligible candidates for quantization are the FP32 weights (layer parameters) and activations (layer outputs). Quantizing weights reduces the model size. Quantized activations typically result in faster inference.
As an example, the 50-layer ResNet network has ~26 million weight parameters and computes ~16 million activations in the forward pass.
Post-Training Dynamic/Weight-only Quantization | https://pytorch.org/blog/quantization-in-practice/ | pytorch blogs |
Here the model's weights are pre-quantized; the activations are quantized on-the-fly ("dynamic") during inference. The simplest of all approaches, it has a one line API call in torch.quantization.quantize_dynamic. Currently only Linear and Recurrent (LSTM, GRU, RNN) layers are supported for dynamic quantization.
(+) Can result in higher accuracies since the clipping range is exactly calibrated for each input [[1]].
(+) Dynamic quantization is preferred for models like LSTMs and Transformers where writing/retrieving the model's weights from memory dominate bandwidths [[4]].
(-) Calibrating and quantizing the activations at each layer during runtime can add to the compute overhead.
```python
import torch
from torch import nn
# toy model
m = nn.Sequential(
nn.Conv2d(2, 64, (8,)),
nn.ReLU(),
nn.Linear(16,10),
nn.LSTM(10, 10))
m.eval()
# EAGER MODE
from torch.quantization import quantize_dynamic
model_quantized = quantize_dynamic(
model=m, qconfig_spec={nn.LSTM, nn.Linear}, dtype=torch.qint8, inplace=False
)
# FX MODE
from torch.quantization import quantize_fx
qconfig_dict = {"": torch.quantization.default_dynamic_qconfig} # An empty key denotes the default applied to all modules
model_prepared = quantize_fx.prepare_fx(m, qconfig_dict)
model_quantized = quantize_fx.convert_fx(model_prepared)
```
Post-Training Static Quantization (PTQ)
PTQ also pre-quantizes model weights but instead of calibrating activations on-the-fly, the clipping range is pre-calibrated and fixed ("static") using validation data. Activations stay in quantized precision between operations during inference. About 100 mini-batches of representative data are sufficient to calibrate the observers [[2]]. The examples below use random data in calibration for convenience - using that in your application will result in bad qparams.
Fig 4. Steps in Post-Training Static Quantization
Module fusion combines multiple sequential modules (eg: [Conv2d, BatchNorm, ReLU]) into one. Fusing modules means the compiler needs to only run one kernel instead of many; this speeds things up and improves accuracy by reducing quantization error.
(+) Static quantization has faster inference than dynamic quantization because it eliminates the float<->int conversion costs between layers.
(-) Static quantized models may need regular re-calibration to stay robust against distribution-drift.
```python
# Static quantization of a model consists of the following steps:
#   - Fuse modules
#   - Insert Quant/DeQuant Stubs
#   - Prepare the fused module (insert observers before and after layers)
#   - Calibrate the prepared module (pass it representative data)
#   - Convert the calibrated module (replace with quantized version)
import torch
from torch import nn
import copy
backend = "fbgemm" # running on a x86 CPU. Use "qnnpack" if running on ARM.
model = nn.Sequential(
nn.Conv2d(2,64,3),
nn.ReLU(),
nn.Conv2d(64, 128, 3),
nn.ReLU()
)
# EAGER MODE
m = copy.deepcopy(model)
m.eval()
"""Fuse
- Inplace fusion replaces the first module in the sequence with the fused module, and the rest with identity modules
"""
torch.quantization.fuse_modules(m, ['0','1'], inplace=True) # fuse first Conv-ReLU pair
torch.quantization.fuse_modules(m, ['2','3'], inplace=True) # fuse second Conv-ReLU pair
"""Insert stubs"""
m = nn.Sequential(torch.quantization.QuantStub(),
*m,
torch.quantization.DeQuantStub())
"""Prepare"""
m.qconfig = torch.quantization.get_default_qconfig(backend)
torch.quantization.prepare(m, inplace=True)
"""Calibrate | https://pytorch.org/blog/quantization-in-practice/ | pytorch blogs |
"""Calibrate
- This example uses random data for convenience. Use representative (validation) data instead.
"""
with torch.inference_mode():
    for _ in range(10):
        x = torch.rand(1,2, 28, 28)
        m(x)
"""Convert"""
torch.quantization.convert(m, inplace=True)
"""Check"""
print(m[1].weight().element_size()) # 1 byte instead of 4 bytes for FP32
# FX GRAPH
from torch.quantization import quantize_fx
m = copy.deepcopy(model)
m.eval()
qconfig_dict = {"": torch.quantization.get_default_qconfig(backend)}
# Prepare
model_prepared = quantize_fx.prepare_fx(m, qconfig_dict)
# Calibrate - Use representative (validation) data.
with torch.inference_mode():
    for _ in range(10):
        x = torch.rand(1,2,28, 28)
        model_prepared(x)
# quantize
model_quantized = quantize_fx.convert_fx(model_prepared)
```
Quantization-aware Training (QAT)
Fig 5. Steps in Quantization-Aware Training
The PTQ approach is great for large models, but accuracy suffers in smaller models [[6]]. This is of course due to the loss in numerical precision when adapting a model from FP32 to the INT8 realm (Figure 6(a)). QAT tackles this by including this quantization error in the training loss, thereby training an INT8-first model.
Fig 6. Comparison of PTQ and QAT convergence [3]
All weights and biases are stored in FP32, and backpropagation happens as usual. However in the forward pass, quantization is internally simulated via FakeQuantize modules. They are called fake because they quantize and immediately dequantize the data, adding quantization noise similar to what might be encountered during quantized inference. The final loss thus accounts for any expected quantization errors. Optimizing on this allows the model to identify a wider region in the loss function (Figure 6(b)), and identify FP32 parameters such that quantizing them to INT8 does not significantly affect accuracy.
Fig 7. Fake Quantization in the forward and backward pass
Image source: https://developer.nvidia.com/blog/achieving-fp32-accuracy-for-int8-inference-using-quantization-aware-training-with-tensorrt
(+) QAT yields higher accuracies than PTQ.
(+) Qparams can be learned during model training for more fine-grained accuracy (see LearnableFakeQuantize)
(-) Computational cost of retraining a model in QAT can be several hundred epochs [[1]]
```python
# QAT follows the same steps as PTQ, with the exception of the training loop
# before you actually convert the model to its quantized version
import torch
from torch import nn
backend = "fbgemm" # running on a x86 CPU. Use "qnnpack" if running on ARM.
m = nn.Sequential(
nn.Conv2d(2,64,8),
nn.ReLU(),
nn.Conv2d(64, 128, 8),
nn.ReLU()
)
"""Fuse"""
torch.quantization.fuse_modules(m, ['0','1'], inplace=True) # fuse first Conv-ReLU pair
torch.quantization.fuse_modules(m, ['2','3'], inplace=True) # fuse second Conv-ReLU pair
"""Insert stubs"""
m = nn.Sequential(torch.quantization.QuantStub(),
*m,
torch.quantization.DeQuantStub())
"""Prepare"""
m.train()
m.qconfig = torch.quantization.get_default_qat_qconfig(backend)  # QAT qconfig inserts FakeQuantize modules
torch.quantization.prepare_qat(m, inplace=True)
"""Training Loop"""
n_epochs = 10
opt = torch.optim.SGD(m.parameters(), lr=0.1)
loss_fn = lambda out, tgt: torch.pow(tgt-out, 2).mean()
for epoch in range(n_epochs):
    x = torch.rand(10, 2, 24, 24)
    out = m(x)
    loss = loss_fn(out, torch.rand_like(out))
    opt.zero_grad()
    loss.backward()
    opt.step()
"""Convert"""
m.eval()
torch.quantization.convert(m, inplace=True)
```
Sensitivity Analysis
Not all layers respond to quantization equally; some are more sensitive to precision drops than others. Identifying the optimal combination of layers that minimizes accuracy drop is time-consuming, so [[3]] suggest a one-at-a-time sensitivity analysis to identify which layers are most sensitive and to retain FP32 precision on those. In their experiments, skipping just 2 conv layers (out of a total of 28 in MobileNet v1) gives them near-FP32 accuracy. Using FX Graph Mode, we can create custom qconfigs to do this easily:
```python
# ONE-AT-A-TIME SENSITIVITY ANALYSIS
for quantized_layer, _ in model.named_modules():
print("Only quantizing layer: ", quantized_layer)
# The module_name key allows module-specific qconfigs.
qconfig_dict = {"": None,
"module_name":[(quantized_layer, torch.quantization.get_default_qconfig(backend))]}
model_prepared = quantize_fx.prepare_fx(model, qconfig_dict)
# calibrate
model_quantized = quantize_fx.convert_fx(model_prepared) | https://pytorch.org/blog/quantization-in-practice/ | pytorch blogs |
evaluate(model)
```
Another approach is to compare statistics of the FP32 and INT8 layers; commonly used metrics for this are SQNR (Signal to Quantized Noise Ratio) and Mean-Square Error. Such a comparative analysis may also help guide further optimizations.
Fig 8. Comparing model weights and activations
PyTorch provides tools to help with this analysis under the Numeric Suite. Learn more about using Numeric Suite from the full tutorial.
```python
# extract from https://pytorch.org/tutorials/prototype/numeric_suite_tutorial.html
import torch.quantization._numeric_suite as ns
def SQNR(x, y):
    # Higher is better
    Ps = torch.norm(x)
    Pn = torch.norm(x - y)
    return 20 * torch.log10(Ps / Pn)
wt_compare_dict = ns.compare_weights(fp32_model.state_dict(), int8_model.state_dict())
for key in wt_compare_dict:
    print(key, SQNR(wt_compare_dict[key]['float'], wt_compare_dict[key]['quantized'].dequantize()))

act_compare_dict = ns.compare_model_outputs(fp32_model, int8_model, input_data)
for key in act_compare_dict:
    print(key, SQNR(act_compare_dict[key]['float'][0], act_compare_dict[key]['quantized'][0].dequantize()))
```
Recommendations for your workflow
Fig 9. Suggested quantization workflow
Points to note
Large (10M+ parameters) models are more robust to quantization error. [[2]]
Quantizing a model from an FP32 checkpoint provides better accuracy than training an INT8 model from scratch. [[2]]
Profiling the model runtime is optional but it can help identify layers that bottleneck inference.
Dynamic Quantization is an easy first step, especially if your model has many Linear or Recurrent layers.
Use symmetric per-channel quantization with MinMax observers for quantizing weights. Use affine per-tensor quantization with MovingAverageMinMax observers for quantizing activations [[2], [3]] (a sketch of such a qconfig follows this list).
Use metrics like SQNR to identify which layers are most susceptible to quantization error. Turn off quantization on these layers.
Use QAT to fine-tune for around 10% of the original training schedule with an annealing learning rate schedule starting at 1% of the initial training learning rate. [[3]]
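As an illustration of the observer recommendation above, here is a minimal sketch of a custom QConfig built from the observers named in that point (API names from torch.quantization; exact arguments and defaults may vary slightly across PyTorch versions):

```python
import torch
from torch.quantization import QConfig, PerChannelMinMaxObserver, MovingAverageMinMaxObserver

# Symmetric per-channel MinMax for weights, affine per-tensor MovingAverageMinMax for activations.
my_qconfig = QConfig(
    activation=MovingAverageMinMaxObserver.with_args(
        dtype=torch.quint8, qscheme=torch.per_tensor_affine),
    weight=PerChannelMinMaxObserver.with_args(
        dtype=torch.qint8, qscheme=torch.per_channel_symmetric),
)

# e.g. assign m.qconfig = my_qconfig before calling torch.quantization.prepare(m, inplace=True)
```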
If the above workflow didn't work for you, we want to know more. Post a thread with details of your code (model architecture, accuracy metric, techniques tried). Feel free to cc me @suraj.pt.
That was a lot to digest, congratulations for sticking with it! Next, we'll take a look at quantizing a "real-world" model that uses dynamic control structures (if-else, loops). These elements disallow symbolic tracing a model, which makes it a bit tricky to directly quantize the model out of the box. In the next post of this series, we'll get our hands dirty on a model that is chock full of loops and if-else blocks, and even uses third-party libraries in the forward call. | https://pytorch.org/blog/quantization-in-practice/ | pytorch blogs |
We'll also cover a cool new feature in PyTorch Quantization called Define-by-Run, that tries to ease this constraint by needing only subsets of the model's computational graph to be free of dynamic flow. Check out the Define-by-Run poster at PTDD'21 for a preview.
References
[[1]] Gholami, A., Kim, S., Dong, Z., Yao, Z., Mahoney, M. W., & Keutzer, K. (2021). A survey of quantization methods for efficient neural network inference. arXiv preprint arXiv:2103.13630.
[[2]] Krishnamoorthi, R. (2018). Quantizing deep convolutional networks for efficient inference: A whitepaper. arXiv preprint arXiv:1806.08342.
[[3]] Wu, H., Judd, P., Zhang, X., Isaev, M., & Micikevicius, P. (2020). Integer quantization for deep learning inference: Principles and empirical evaluation. arXiv preprint arXiv:2004.09602.
[[4]] PyTorch Quantization Docs
https://pytorch.org/blog/quantization-in-practice/ | pytorch blogs |
|
layout: blog_detail
title: 'Towards Reproducible Research with PyTorch Hub'
author: Team PyTorch
redirect_from: /2019/06/10/pytorch_hub.html
Reproducibility is an essential requirement for many fields of research, including those based on machine learning techniques. However, many machine learning publications are either not reproducible or are difficult to reproduce. With the continued growth in the number of research publications, including tens of thousands of papers now hosted on arXiv and submissions to conferences at an all-time high, research reproducibility is more important than ever. Many of these publications are accompanied by code as well as trained models, which is helpful but still leaves a number of steps for users to figure out for themselves.
We are excited to announce the availability of PyTorch Hub, a simple API and workflow that provides the basic building blocks for improving machine learning research reproducibility. PyTorch Hub consists of a pre-trained model repository designed specifically to facilitate research reproducibility and enable new research. It also has built-in support for Colab, integration with Papers With Code and currently contains a broad set of models that include Classification and Segmentation, Generative, Transformers, etc.
[Owner] Publishing models
PyTorch Hub supports the publication of pre-trained models (model definitions and pre-trained weights) to a GitHub repository by adding a simple hubconf.py file.
This provides an enumeration of which models are to be supported and a list of dependencies needed to run the models. | https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/ | pytorch blogs |
Examples can be found in the torchvision, huggingface-bert and gan-model-zoo repositories.
Let us look at the simplest case: torchvision's hubconf.py:
```python
# Optional list of dependencies required by the package
dependencies = ['torch']
from torchvision.models.alexnet import alexnet
from torchvision.models.densenet import densenet121, densenet169, densenet201, densenet161
from torchvision.models.inception import inception_v3
from torchvision.models.resnet import resnet18, resnet34, resnet50, resnet101, resnet152,\
resnext50_32x4d, resnext101_32x8d
from torchvision.models.squeezenet import squeezenet1_0, squeezenet1_1
from torchvision.models.vgg import vgg11, vgg13, vgg16, vgg19, vgg11_bn, vgg13_bn, vgg16_bn, vgg19_bn
from torchvision.models.segmentation import fcn_resnet101, deeplabv3_resnet101 | https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/ | pytorch blogs |
from torchvision.models.googlenet import googlenet
from torchvision.models.shufflenetv2 import shufflenet_v2_x0_5, shufflenet_v2_x1_0
from torchvision.models.mobilenet import mobilenet_v2
```
In torchvision, the models have the following properties:
- Each model file can function and be executed independently
- They don't require any package other than PyTorch (encoded in hubconf.py as dependencies['torch'])
- They don't need separate entry-points, because the models, when created, work seamlessly out of the box
Minimizing package dependencies reduces the friction for users to load your model for immediate experimentation.
A more involved example is HuggingFace's BERT models. Here is their hubconf.py
```python
dependencies = ['torch', 'tqdm', 'boto3', 'requests', 'regex']
from hubconfs.bert_hubconf import (
bertTokenizer,
bertModel,
bertForNextSentencePrediction,
bertForPreTraining,
bertForMaskedLM,
bertForSequenceClassification,
bertForMultipleChoice, | https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/ | pytorch blogs |
bertForMultipleChoice,
bertForQuestionAnswering,
bertForTokenClassification
)
```
Each model then requires an entrypoint to be created. Here is a code snippet to specify an entrypoint of the `bertForMaskedLM` model, which returns the pre-trained model weights.
```python
def bertForMaskedLM(*args, **kwargs):
    """
    BertForMaskedLM includes the BertModel Transformer followed by the
    pre-trained masked language modeling head.
    Example:
        ...
    """
    model = BertForMaskedLM.from_pretrained(*args, **kwargs)
    return model
```
These entry-points can serve as wrappers around complex model factories. They can give a clean and consistent help docstring, have logic to support downloading of pretrained weights (for example via pretrained=True) or have additional hub-specific functionality such as visualization.
With a hubconf.py in place, you can send a pull request based on the template here. | https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/ | pytorch blogs |
Our goal is to curate high-quality, easily-reproducible, maximally-beneficial models for research reproducibility.
Hence, we may work with you to refine your pull request and in some cases reject some low-quality models to be published.
Once we accept your pull request, your model will soon appear on Pytorch hub webpage for all users to explore.
[User] Workflow
As a user, PyTorch Hub allows you to follow a few simple steps and do things like: 1) explore available models; 2) load a model; and 3) understand what methods are available for any given model. Let's walk through some examples of each.
Explore available entrypoints.
Users can list all available entrypoints in a repo using the torch.hub.list() API.
```python
torch.hub.list('pytorch/vision')
['alexnet',
'deeplabv3_resnet101',
'densenet121',
...
'vgg16',
'vgg16_bn',
'vgg19',
'vgg19_bn']
```
Note that PyTorch Hub also allows auxiliary entrypoints (other than pretrained models), e.g. bertTokenizer for preprocessing in the BERT models, to make the user workflow smoother.
Load a model
Now that we know which models are available in the Hub, users can load a model entrypoint using the torch.hub.load() API. This only requires a single command without the need to install a wheel. In addition the torch.hub.help() API can provide useful information about how to instantiate the model.
```python
print(torch.hub.help('pytorch/vision', 'deeplabv3_resnet101'))
model = torch.hub.load('pytorch/vision', 'deeplabv3_resnet101', pretrained=True)
```
It is also common that repo owners will want to continually add bug fixes or performance improvements. PyTorch Hub makes it super simple for users to get the latest update by calling:
```python | https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/ | pytorch blogs |
model = torch.hub.load(..., force_reload=True)
```
We believe this will help to alleviate the burden of repetitive package releases by repo owners and instead allow them to focus more on their research.
It also ensures that, as a user, you are getting the freshest available models.
On the other hand, stability is important for users. Hence, some model owners serve their models from a specified branch or tag, rather than the master branch, to ensure stability of the code.
For example, pytorch_GAN_zoo serves them from the hub branch:
```python
model = torch.hub.load('facebookresearch/pytorch_GAN_zoo:hub', 'DCGAN', pretrained=True, useGPU=False)
```
Note that the *args, **kwargs passed to hub.load() are used to instantiate a model. In the above example, pretrained=True and useGPU=False are given to the model's entrypoint.
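For instance, the auxiliary bertTokenizer entrypoint listed in the BERT hubconf.py above takes extra arguments the same way. This hypothetical call assumes the huggingface/pytorch-pretrained-BERT repo and the entrypoint names shown earlier:

```python
import torch

# 'bert-base-cased' and do_basic_tokenize are forwarded to the entrypoint as *args / **kwargs.
tokenizer = torch.hub.load('huggingface/pytorch-pretrained-BERT', 'bertTokenizer',
                           'bert-base-cased', do_basic_tokenize=False)
```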
Explore a loaded model
Once you have a model from PyTorch Hub loaded, you can use the following workflow to find out the available methods that are supported as well as better understand what arguments are required to run it.
dir(model) to see all available methods of the model. Let's take a look at bertForMaskedLM's available methods.
```
>>> dir(model)
>>>
['forward',
...
'to',
'state_dict',
]
```
help(model.forward) provides a view into what arguments are required to make your loaded model run
```
>>> help(model.forward)
>>>
Help on method forward in module pytorch_pretrained_bert.modeling:
forward(input_ids, token_type_ids=None, attention_mask=None, masked_lm_labels=None)
...
```
Have a closer look at the BERT and DeepLabV3 pages, where you can see how these models can be used once loaded.
Other ways to explore
Models available in PyTorch Hub also support Colab and are directly linked on Papers With Code, so you can get started with a single click. Here is a good example to get started with (shown below).
Additional resources:
PyTorch Hub API documentation can be found here.
Submit a model here for publication in PyTorch Hub.
Go to https://pytorch.org/hub to learn more about the available models.
Look for more models to come on paperswithcode.com.
A BIG thanks to the folks at HuggingFace, the PapersWithCode team, fast.ai and Nvidia as well as Morgane Riviere (FAIR Paris) and lots of others for helping bootstrap this effort!!
Cheers!
Team PyTorch
FAQ:
Q: If we would like to contribute a model that is already in the Hub but perhaps mine has better accuracy, should I still contribute?
A: Yes!! A next step for Hub is to implement an upvote/downvote system to surface the best models.
Q: Who hosts the model weights for PyTorch Hub?
A: You, as the contributor, are responsible for hosting the model weights. You can host your model in your favorite cloud storage or, if it fits within the limits, on GitHub. If it is not within your means to host the weights, check with us via opening an issue on the hub repository.
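As a rough sketch of what this looks like in practice, an entrypoint in your hubconf.py can download the weights you host from a URL. MyModel and the URL below are placeholders, and torch.hub.load_state_dict_from_url is assumed to be available (older releases exposed the same functionality as torch.utils.model_zoo.load_url):

```python
import torch
from mymodels import MyModel  # hypothetical model definition living in your repo

WEIGHTS_URL = "https://example.com/weights/mymodel_v1.pth"  # placeholder URL that you host

def mymodel(pretrained=False, **kwargs):
    # Build the model, then optionally pull the hosted weights into it.
    model = MyModel(**kwargs)
    if pretrained:
        state_dict = torch.hub.load_state_dict_from_url(WEIGHTS_URL, progress=True)
        model.load_state_dict(state_dict)
    return model
```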
Q: What if my model is trained on private data? Should I still contribute this model? | https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/ | pytorch blogs |
A: No! PyTorch Hub is centered around open research and that extends to the usage of open datasets to train these models on. If a pull request for a proprietary model is submitted, we will kindly ask that you resubmit a model trained on something open and available.
Q: Where are my downloaded models saved?
A: We follow the XDG Base Directory Specification and adhere to common standards around cached files and directories.
The locations are used in the order of:
Calling hub.set_dir(<PATH_TO_HUB_DIR>) (see the short example after this list)
$TORCH_HOME/hub, if environment variable TORCH_HOME is set.
$XDG_CACHE_HOME/torch/hub, if environment variable XDG_CACHE_HOME is set.
~/.cache/torch/hub
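A short example of overriding the cache location (the path below is just a placeholder):

```python
import torch

torch.hub.set_dir("/tmp/my_hub_cache")  # downloads now land here instead of the defaults above
model = torch.hub.load('pytorch/vision', 'resnet18', pretrained=True)
```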
layout: blog_detail
title: 'The road to 1.0: production ready PyTorch'
author: The PyTorch Team
redirect_from: /2018/05/02/road-to-1.0.html
We would like to give you a preview of the roadmap for PyTorch 1.0 , the next release of PyTorch. Over the last year, we've had 0.2, 0.3 and 0.4 transform PyTorch from a [Torch+Chainer]-like interface into something cleaner, adding double-backwards, numpy-like functions, advanced indexing and removing Variable boilerplate. At this time, we're confident that the API is in a reasonable and stable state to confidently release a 1.0.
However, 1.0 isn't just about stability of the interface.
One of PyTorch's biggest strengths is its first-class Python integration, imperative style, simplicity of the API and options. These are aspects that make PyTorch good for research and hackability.
One of its biggest downsides has been production-support. What we mean by production-support is the countless things one has to do to models to run them efficiently at massive scale: | https://pytorch.org/blog/the-road-to-1_0/ | pytorch blogs |
exporting to C++-only runtimes for use in larger projects
optimizing mobile systems on iPhone, Android, Qualcomm and other systems
using more efficient data layouts and performing kernel fusion to do faster inference (saving 10% of speed or memory at scale is a big win)
quantized inference (such as 8-bit inference)
Startups, large companies and anyone who wants to build a product around PyTorch have asked for production support. At Facebook (the largest stakeholder for PyTorch) we have Caffe2, which has been the production-ready platform, running in our datacenters and shipping to more than 1 billion phones spanning eight generations of iPhones and six generations of Android CPU architectures. It has server-optimized inference on Intel / ARM, TensorRT support, and all the necessary bits for production. Considering all this value locked-in to a platform that the PyTorch team works quite closely with, we decided to marry PyTorch and Caffe2 which gives the production-level readiness for PyTorch. | https://pytorch.org/blog/the-road-to-1_0/ | pytorch blogs |
Supporting production features without adding usability issues for our researchers and end-users needs creative solutions.
Production != Pain for researchers
Adding production capabilities involves increasing the API complexity and number of configurable options for models. One configures memory-layouts (NCHW vs NHWC vs N,C/32,H,W,32, each providing different performance characteristics), quantization (8-bit? 3-bit?), fusion of low-level kernels (you used a Conv + BatchNorm + ReLU, let's fuse them into a single kernel), separate backend options (MKLDNN backend for a few layers and NNPACK backend for other layers), etc.
PyTorch's central goal is to provide a great platform for research and hackability. So, while we add all these optimizations, we've been working with a hard design constraint to never trade these off against usability. | https://pytorch.org/blog/the-road-to-1_0/ | pytorch blogs |
To pull this off, we are introducing torch.jit, a just-in-time (JIT) compiler that at runtime takes your PyTorch models and rewrites them to run at production-efficiency. The JIT compiler can also export your model to run in a C++-only runtime based on Caffe2 bits.
In 1.0, your code continues to work as-is, we're not making any big changes to the existing API.
Making your model production-ready is an opt-in annotation, which uses the torch.jit compiler to export your model to a Python-less environment, and improving its performance. Let's walk through the JIT compiler in detail.
torch.jit: A JIT-compiler for your models
We strongly believe that it's hard to match the productivity you get from specifying your models directly as idiomatic Python code. This is what makes PyTorch so flexible, but it also means that PyTorch pretty much never knows the operation you'll run next. This however is a big blocker for export/productionization and heavyweight automatic performance optimizations because they need full upfront knowledge of how the computation will look before it even gets executed.
We provide two opt-in ways of recovering this information from your code, one based on tracing native python code and one based on compiling a subset of the python language annotated into a python-free intermediate representation. After thorough discussions we concluded that they're both going to be useful in different contexts, and as such you will be able to mix and match them freely.
Tracing Mode
The PyTorch tracer, torch.jit.trace, is a function that records all the native PyTorch operations performed in a code region, along with the data dependencies between them. In fact, PyTorch has had a tracer since 0.3, which has been used for exporting models through ONNX. What changes now, is that you no longer necessarily need to take the trace and run it elsewhere - PyTorch can re-execute it for you, using a carefully designed high-performance C++ runtime. As we develop PyTorch 1.0 this runtime will integrate all the optimizations and hardware integrations that Caffe2 provides. | https://pytorch.org/blog/the-road-to-1_0/ | pytorch blogs |
The biggest benefit of this approach is that it doesn't really care how your Python code is structured — you can trace through generators or coroutines, modules or pure functions. Since we only record native PyTorch operators, these details have no effect on the trace recorded. This behavior, however, is a double-edged sword. For example, if you have a loop in your model, it will get unrolled in the trace, inserting a copy of the loop body for as many times as the loop ran. This opens up opportunities for zero-cost abstraction (e.g. you can loop over modules, and the actual trace will be loop-overhead free!), but on the other hand this will also affect data dependent loops (think of e.g. processing sequences of varying lengths), effectively hard-coding a single length into the trace.
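A hedged sketch of that caveat, using the torch.jit.trace API as it exists in released PyTorch (the preview trace API shown in the next example differs slightly): the trace below bakes in exactly three loop iterations, because that is how many rows the example input had.

```python
import torch

def sum_rows(x):
    total = torch.zeros(x.size(1))
    for row in x.split(1):      # data-dependent loop: one iteration per row
        total = total + row.squeeze(0)
    return total

traced = torch.jit.trace(sum_rows, torch.rand(3, 4))
print(traced(torch.rand(3, 4)))   # fine: same number of rows as the example input
# Calling traced() with a different number of rows does not re-trace the loop;
# the recorded graph still assumes exactly three iterations.
```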
For networks that do not contain loops and if statements, tracing is non-invasive and is robust enough to handle a wide variety of coding styles. This code example illustrates what tracing looks like:
```python | https://pytorch.org/blog/the-road-to-1_0/ | pytorch blogs |
# This will run your nn.Module or regular Python function with the example
# input that you provided. The returned callable can be used to re-execute
# all operations that happened during the example run, but it will no longer
# use the Python interpreter.
from torch.jit import trace
traced_model = trace(model, example_input=input)
traced_fn = trace(fn, example_input=input)
# The training loop doesn't change. Traced model behaves exactly like an
# nn.Module, except that you can't edit what it does or change its attributes.
# Think of it as a "frozen module".
for input, target in data_loader:
    loss = loss_fn(traced_model(input), target)
```
Script Mode
Tracing mode is a great way to minimize the impact on your code, but we're also very excited about the models that fundamentally make use of control flow such as RNNs. Our solution to this is a scripting mode. | https://pytorch.org/blog/the-road-to-1_0/ | pytorch blogs |
In this case you write out a regular Python function, except that you can no longer use certain more complicated language features. Once you isolated the desired functionality, you let us know that you'd like the function to get compiled by decorating it with an @script decorator. This annotation will transform your python function directly into our high-performance C++ runtime. This lets us recover all the PyTorch operations along with loops and conditionals. They will be embedded into our internal representation of this function, and will be accounted for every time this function is run.
```python
from torch.jit import script

@script
def rnn_loop(x):
    hidden = None
    for x_t in x.split(1):
        x, hidden = model(x, hidden)
    return x
```
Optimization and Export
Regardless of whether you use tracing or @script, the result is a python-free representation of your model, which can be used to optimize the model or to export the model from python for use in production environments. | https://pytorch.org/blog/the-road-to-1_0/ | pytorch blogs |
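As a rough sketch of that export path using the torch.jit API from released PyTorch (the exact 1.0-preview API may differ), a traced model can be serialized and later reloaded without the original Python code:

```python
import torch
import torchvision

model = torchvision.models.resnet18().eval()
traced = torch.jit.trace(model, torch.rand(1, 3, 224, 224))

traced.save("resnet18_traced.pt")           # also loadable from C++ via torch::jit::load
loaded = torch.jit.load("resnet18_traced.pt")
print(loaded(torch.rand(1, 3, 224, 224)).shape)
```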