Misc Optimizations In addition to the steps laid out above, we also eliminated overhead between CUDA kernel launches and unnecessary tensor allocations. One example is the tensor device lookup, which initially performed poorly because it triggered many unnecessary allocations. Removing these allocations reduced the time between kernel launches from milliseconds to nanoseconds. Lastly, there might be normalization applied in the custom LSTMCell, such as LayerNorm. Since LayerNorm and other normalization ops contain reduce operations, it is hard to fuse them in their entirety. Instead, we automatically decompose LayerNorm into a statistics computation (reduce operations) plus element-wise transformations, and then fuse those element-wise parts together. As of this post, there are some limitations in our automatic differentiation and graph fuser infrastructure that restrict the current support to inference mode only. We plan to add backward support in a future release.
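To make the decomposition concrete, here is a minimal sketch (an illustration, not the fuser's actual implementation) of how LayerNorm splits into a reduce-based statistics step and a purely element-wise step that can be fused with neighboring element-wise ops:
```python
import torch

def layernorm_decomposed(x, weight, bias, eps=1e-5):
    # Statistics computation: reduce operations over the normalized dimension.
    mean = x.mean(dim=-1, keepdim=True)
    var = x.var(dim=-1, unbiased=False, keepdim=True)
    # Element-wise transformation: this part can be fused together with the
    # surrounding element-wise ops (e.g. the LSTM cell's gate math).
    return (x - mean) / torch.sqrt(var + eps) * weight + bias

x = torch.randn(4, 8)
w, b = torch.ones(8), torch.zeros(8)
ref = torch.nn.functional.layer_norm(x, (8,), w, b)
print(torch.allclose(layernorm_decomposed(x, w, b), ref, atol=1e-6))
```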
https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/
pytorch blogs
With the above optimizations on operation fusion, loop unrolling, batch matrix multiplication and some misc optimizations, we can see a clear performance increase on our custom TorchScript LSTM forward and backward pass in the following figure. There are a number of additional optimizations that we did not cover in this post. With those and the ones laid out here, our custom LSTM forward pass is now on par with cuDNN. We are also working on optimizing the backward pass further and expect to see improvements in future releases. Besides the speed that TorchScript provides, we introduced a much more flexible API that enables you to hand-draft many more custom RNNs than cuDNN can support.
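For reference, a minimal sketch of what hand-drafting a custom recurrent cell in TorchScript can look like (an illustrative LSTM-style cell, not the exact code used in the benchmarks above):
```python
import torch
from torch import nn, Tensor
from typing import Tuple

class CustomLSTMCell(nn.Module):
    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.weight_ih = nn.Parameter(torch.randn(4 * hidden_size, input_size))
        self.weight_hh = nn.Parameter(torch.randn(4 * hidden_size, hidden_size))
        self.bias = nn.Parameter(torch.zeros(4 * hidden_size))

    def forward(self, x: Tensor, state: Tuple[Tensor, Tensor]) -> Tuple[Tensor, Tuple[Tensor, Tensor]]:
        hx, cx = state
        # One batched matrix multiplication per step, followed by element-wise
        # gate math that the fuser can combine into a small number of kernels.
        gates = x @ self.weight_ih.t() + hx @ self.weight_hh.t() + self.bias
        i, f, g, o = gates.chunk(4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        g = torch.tanh(g)
        cy = f * cx + i * g
        hy = o * torch.tanh(cy)
        return hy, (hy, cy)

cell = torch.jit.script(CustomLSTMCell(16, 32))  # compile the cell with TorchScript
x = torch.randn(8, 16)
h = torch.zeros(8, 32)
c = torch.zeros(8, 32)
out, (h, c) = cell(x, (h, c))
```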
https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/
pytorch blogs
layout: blog_detail title: "PyTorch, a year in...." author: "The PyTorch Team" date: 2018-01-19 12:00:00 -0500 redirect_from: /2018/01/19/a-year-in.html Today marks 1 year since PyTorch was released publicly. It's been a wild ride — our quest to build a flexible deep learning research platform. Over the last year, we've seen an amazing community of people using, contributing to and evangelizing PyTorch — thank you for the love. Looking back, we wanted to summarize PyTorch over the past year: the progress, the news and highlights from the community. Community We've been blessed with a strong organic community of researchers and engineers who fell in love with PyTorch. The core team has engineers and researchers from multiple countries, companies and universities, and we couldn't have made PyTorch what it is without each contribution. Research papers, packages and Github
https://pytorch.org/blog/a-year-in/
pytorch blogs
Research papers, packages and Github Within days of release, users from the community started to implement their favorite research papers in PyTorch and release the code on Github. Open-source code is a primary and essential tool for researchers today. Folks came together to create torchtext, torchvision and torchaudio packages to help facilitate and democratize research in different domains. The first community package based on PyTorch came from Brandon Amos, titled Block, and helped with easier manipulation of block matrices. The Locus Lab at CMU subsequently went on to publish PyTorch packages and implementations for most of their research. The first research paper code came from Sergey Zagoruyko titled Paying more attention to attention.
https://pytorch.org/blog/a-year-in/
pytorch blogs
Jun-Yan Zhu, Taesung Park, Phillip Isola, Alyosha Efros and team from U.C. Berkeley released the hugely popular Cycle-GAN and pix2pix, which perform image-to-image transforms. The researchers at HarvardNLP and Systran started developing and improving OpenNMT in PyTorch, seeded by an initial reimplementation of the [Lua]Torch code from Adam Lerer at Facebook. The MagicPony team at Twitter contributed implementations of their super-resolution work early on to PyTorch's examples.
https://pytorch.org/blog/a-year-in/
pytorch blogs
Salesforce Research released several packages, including their highlight release of PyTorch-QRNN, a type of RNN that is 2x to 17x faster than standard LSTMs optimized by cuDNN. James Bradbury and team form one of the most active and engaging forces in the PyTorch community. We're releasing @PyTorch-QRNN, 2-17x faster than NVIDIA's cuDNN LSTM. Speed thanks to 50 lines of CUDA via CuPy. https://t.co/KaWhN4yDZd pic.twitter.com/yoLYj3pMI0 — Smerity (@Smerity) October 9, 2017
https://pytorch.org/blog/a-year-in/
pytorch blogs
Researchers from Uber, Northeastern and Stanford came together to form an active probabilistic programming community around their packages Pyro and ProbTorch. They are actively developing the torch.distributions core package. This community is so active and fast-moving that we had our first pytorch-probabilistic-programming meetup at NIPS 2017 with Fritz Obermeyer, Noah Goodman, Jan-Willem van de Meent, Brooks Paige, Dustin Tran and 22 additional attendees discussing how to make the world Bayesian.
https://pytorch.org/blog/a-year-in/
pytorch blogs
NVIDIA Researchers released three high-quality repositories that implemented pix2pix-HD, Sentiment Neuron and FlowNet2 papers. Their analysis of scalability of different Data Parallel models in PyTorch was helpful to the community. The Allen Institute for AI released AllenNLP which includes several state-of-the-art models in NLP — reference implementations and easy to use web demos for standard NLP tasks.
https://pytorch.org/blog/a-year-in/
pytorch blogs
We also had our first Kaggle winning team grt123 in July. They won the DataScience Bowl 2017 on Lung Cancer detection and subsequently released their PyTorch implementations. On the visualization front, Tzu-Wei Huang implemented a TensorBoard-PyTorch plugin and Facebook AI Research released PyTorch compatibility for their visdom visualization package. Lastly, Facebook AI Research released several projects such as ParlAI, fairseq-py, VoiceLoop and FaderNetworks that implemented cutting-edge models and interfaced datasets in multiple domains.
https://pytorch.org/blog/a-year-in/
pytorch blogs
There are countless good projects that we haven't highlighted for lack of space; you can find a curated list here. We would also like to give a huge shout-out to folks who actively help others out on the Forums, especially ptrblck, jpeg729, QuantScientist, albanD, Thomas Viehmann and chenyuntc. You are providing an invaluable service, thank you so much! Metrics In terms of sheer numbers:
- 87,769 lines of Python code on GitHub that import torch
- 3,983 repositories on GitHub that mention PyTorch in their name or description
https://pytorch.org/blog/a-year-in/
pytorch blogs
- More than half a million downloads of PyTorch binaries (651,916 to be precise)
- 5,400 users wrote 21,500 posts discussing 5,200 topics on our forums discuss.pytorch.org (http://discuss.pytorch.org/)
- 131 mentions of PyTorch on Reddit's /r/machinelearning since the day of release. In the same period, TensorFlow was mentioned 255 times.
Research Metrics PyTorch is a research-focused framework, so one of the metrics of interest is the usage of PyTorch in machine learning research papers. In the recent ICLR 2018 conference submissions, PyTorch was mentioned in 87 papers, compared to TensorFlow at 228 papers, Keras at 42 papers, and Theano and Matlab at 32 papers. Monthly arxiv.org mentions for frameworks had PyTorch at 72 mentions, with TensorFlow at 273 mentions, Keras at 100 mentions, Caffe at 94 mentions and Theano at 53 mentions.
https://pytorch.org/blog/a-year-in/
pytorch blogs
Courses, Tutorials and Books When we released PyTorch, we had good API documentation, but our tutorials were limited to a few ipython notebooks — helpful, but not good enough. Sasank Chilamkurthy took it upon himself to revamp the tutorials into the beautiful website that it is today. Sean Robertson and Justin Johnson wrote great new tutorials — in NLP, and to learn by example. Yunjey Choi wrote a beautiful tutorial where most models were implemented in 30 lines or less. Each new tutorial helped users find their way faster, with different approaches to learning.
https://pytorch.org/blog/a-year-in/
pytorch blogs
Goku Mohandas and Delip Rao switched the code content of their book-in-progress to use PyTorch. We've seen quite a few university machine learning courses being taught with PyTorch as the primary tool, such as Harvard's CS287. Taking it one step further and democratizing learning, we had three online courses pop up that teach using PyTorch. Fast.ai's “Deep Learning for Coders” is a popular online course. In September, Jeremy and Rachel announced that the next fast.ai courses will be nearly entirely based on PyTorch. Ritchie Ng, a researcher with ties to NUS Singapore and Tsinghua, released a Udemy course titled Practical Deep Learning with PyTorch.
https://pytorch.org/blog/a-year-in/
pytorch blogs
Sung Kim from HKUST released an online course on YouTube aimed towards a general audience, titled: “PyTorch Zero to All”. Engineering Over the last year we implemented multiple features, improved performance across the board and fixed lots of bugs. A full list of the work we've done is found in our release notes. Here are highlights from our work over the last year: Higher-order gradients With the release of several papers that implement penalties of gradients and with ongoing research in 2nd order gradient methods, this was an essential and sought-after feature. In August, we implemented a generalized interface that can take n-th order derivatives and increased the coverage of functions that support higher-order gradients over time, such that at the time of writing almost all ops support this.
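As an illustration (a generic sketch, not code from the release notes), higher-order gradients enable patterns like gradient penalties, where you differentiate through a gradient:
```python
import torch

x = torch.randn(8, 3, requires_grad=True)
w = torch.randn(3, 1, requires_grad=True)
out = (x @ w).sum()

# First-order gradient, keeping the graph so it can be differentiated again.
(grad_x,) = torch.autograd.grad(out, x, create_graph=True)

# A gradient-penalty style term; backpropagating through it uses
# second-order derivatives (double backward).
penalty = (grad_x.norm(2, dim=1) - 1.0).pow(2).mean()
penalty.backward()
print(w.grad.shape)
```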
https://pytorch.org/blog/a-year-in/
pytorch blogs
Distributed PyTorch In August, we released a small distributed package that followed the highly popular MPI-collective approach. The package has multiple backends such as TCP, MPI, Gloo and NCCL2 to support various types of CPU/GPU collective operations and use-cases, and integrates distributed technologies such as InfiniBand and RoCE. Distributed is hard, and we had bugs in the initial iteration. Over subsequent releases, we made the package more stable and improved performance. Closer to NumPy One of the biggest demands from users was the set of NumPy features that they were familiar with. Features such as Broadcasting and Advanced Indexing are convenient and save users a lot of verbosity. We implemented these features and started to align our API to be closer to NumPy. Over time, we expect to get closer and closer to NumPy's API where appropriate.
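As a quick illustration of the NumPy-style features mentioned above, broadcasting and advanced indexing look like this in PyTorch (a generic sketch, not code from the release):
```python
import torch

a = torch.randn(4, 3)
b = torch.randn(3)           # broadcast against each row of `a`
c = a + b                    # broadcasting: no explicit expand/repeat needed

idx = torch.tensor([0, 2])   # advanced (integer) indexing selects rows 0 and 2
rows = a[idx]

mask = a > 0                 # boolean mask indexing
positives = a[mask]
print(c.shape, rows.shape, positives.shape)
```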
https://pytorch.org/blog/a-year-in/
pytorch blogs
Sparse Tensors In March, we released a small package supporting sparse Tensors and in May we released CUDA support for the sparse package. The package is small and limited in functionality, and is used for implementing Sparse Embeddings and commonly used sparse paradigms in deep learning. This package is still small in scope and there's demand to expand it; if you are interested in working on expanding the sparse package, reach out to us on our Discussion Boards. Performance Performance is always an ongoing battle, especially for PyTorch, which is a dynamic framework that wants to maximize flexibility. Over the last year, we've improved performance across the board, from our core Tensor library to the neural network operators, writing faster micro-optimized code across the board.
- We've added specialized AVX and AVX2 intrinsics for Tensor operations
- Wrote faster GPU kernels for frequent workloads like concatenation and Softmax (among many other things)
https://pytorch.org/blog/a-year-in/
pytorch blogs
- Rewrote the code for several neural network operators (too many to list), but notably nn.Embedding and group convolutions
Reducing framework overhead by 10x across the board Since PyTorch is a dynamic graph framework, we create a new graph on the fly at every iteration of a training loop. Hence, the framework overhead has to be low, or the workload has to be large enough that the framework overhead is hidden. In August, the authors of DyNet (Graham Neubig and team) showcased that it's much faster than PyTorch on small NLP models. This was an interesting challenge; we didn't realize that models of those sizes were being trained. In a multi-month (and ongoing) effort, we embarked upon a significant rewrite of PyTorch internals that reduced the framework overhead from more than 10 microseconds per operator execution to as little as 1 microsecond.
https://pytorch.org/blog/a-year-in/
pytorch blogs
ATen As we embarked upon a redesign of the PyTorch internals, we built the ATen C++11 library that now powers all of the PyTorch backend. ATen has an API that mirrors PyTorch's Python API, which makes it a convenient C++ library for Tensor computation. ATen can be built and used independently of PyTorch. Exporting models to production — ONNX Support and the JIT compiler One of the common requests we've received was to export PyTorch models to another framework. Users engaged in a rapid research cycle in PyTorch and when they were done, they wanted to ship it to larger projects with C++ only requirements. With this in mind, we built a tracer for PyTorch — which can export PyTorch models into an intermediate representation.
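A minimal sketch of what this tracing and export workflow looks like with today's APIs (illustrative only; the module and the output file name here are made up):
```python
import torch
from torch import nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2)).eval()
example = torch.randn(1, 4)

# Trace the model: records the operations run on the example input and
# produces a TorchScript graph, an optimizable intermediate representation.
traced = torch.jit.trace(model, example)
print(traced.graph)

# Alternatively, export to ONNX (the exporter traces the model itself) so it
# can be consumed by other frameworks or hardware-accelerated runtimes.
torch.onnx.export(model, example, "model.onnx",
                  input_names=["input"], output_names=["output"])
```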
https://pytorch.org/blog/a-year-in/
pytorch blogs
The subsequent trace can be either used to run the current PyTorch model more efficiently (by running optimization passes on it), or be converted to the ONNX format to be shipped to other frameworks such as Caffe2, MXNet, TensorFlow and others or directly to the hardware accelerated libraries like CoreML or TensorRT. Over the next year, you will hear more about the JIT compiler for performance improvements. Users being funny :) Our users express their support in funny ways, made us laugh, thanks for this :) I've been using PyTorch a few months now and I've never felt better. I have more energy. My skin is clearer. My eye sight has improved.— Andrej Karpathy (@karpathy) May 26, 2017
https://pytorch.org/blog/a-year-in/
pytorch blogs
Talk to your doctor to find out if PyTorch is right for you. — Sean Robertson (@sprobertson) May 26, 2017 PyTorch gave me so much life that my skin got cleared, my grades are up, my bills are paid and my crops are watered. — Adam Will (@adam_will_do_it) May 26, 2017
https://pytorch.org/blog/a-year-in/
pytorch blogs
So have I! But my hair is also shinier and I've lost weight. @PyTorch for the win. https://t.co/qgU4oIOB4K — Mariya (@thinkmariya) May 26, 2017
https://pytorch.org/blog/a-year-in/
pytorch blogs
layout: blog_detail title: 'PyTorch for AMD ROCm™ Platform now available as Python package' author: Niles Burbank – Director PM at AMD, Mayank Daga – Director, Deep Learning Software at AMD With the PyTorch 1.8 release, we are delighted to announce a new installation option for users of PyTorch on the ROCm™ open software platform. An installable Python package is now hosted on pytorch.org, along with instructions for local installation in the same simple, selectable format as PyTorch packages for CPU-only configurations and other GPU platforms. PyTorch on ROCm includes full capability for mixed-precision and large-scale training using AMD’s MIOpen & RCCL libraries. This provides a new option for data scientists, researchers, students, and others in the community to get started with accelerated PyTorch using AMD GPUs. The ROCm Ecosystem
https://pytorch.org/blog/pytorch-for-amd-rocm-platform-now-available-as-python-package/
pytorch blogs
The ROCm Ecosystem ROCm is AMD’s open source software platform for GPU-accelerated high performance computing and machine learning. Since the original ROCm release in 2016, the ROCm platform has evolved to support additional libraries and tools, a wider set of Linux® distributions, and a range of new GPUs. This includes the AMD Instinct™ MI100, the first GPU based on AMD CDNA™ architecture. The ROCm ecosystem has an established history of support for PyTorch, which was initially implemented as a fork of the PyTorch project, and more recently through ROCm support in the upstream PyTorch code. PyTorch users can install PyTorch for ROCm using AMD’s public PyTorch docker image, and can of course build PyTorch for ROCm from source. With PyTorch 1.8, these existing installation options are now complemented by the availability of an installable Python package. The primary focus of ROCm has always been high performance computing at scale. The combined
https://pytorch.org/blog/pytorch-for-amd-rocm-platform-now-available-as-python-package/
pytorch blogs
capabilities of ROCm and AMD’s Instinct family of data center GPUs are particularly suited to the challenges of HPC at data center scale. PyTorch is a natural fit for this environment, as HPC and ML workflows become more intertwined. Getting started with PyTorch for ROCm The scope for this build of PyTorch is AMD GPUs with ROCm support, running on Linux. The GPUs supported by ROCm include all of AMD’s Instinct family of compute-focused data center GPUs, along with some other select GPUs. A current list of supported GPUs can be found in the ROCm Github repository. After confirming that the target system includes supported GPUs and the current 4.0.1 release of ROCm, installation of PyTorch follows the same simple Pip-based installation as any other
https://pytorch.org/blog/pytorch-for-amd-rocm-platform-now-available-as-python-package/
pytorch blogs
Python package. As with PyTorch builds for other platforms, the configurator at https://pytorch.org/get-started/locally/ provides the specific command line to be run. PyTorch for ROCm is built from the upstream PyTorch repository, and is a full featured implementation. Notably, it includes support for distributed training across multiple GPUs and supports accelerated mixed precision training. More information A list of ROCm supported GPUs and operating systems can be found at https://github.com/RadeonOpenCompute/ROCm General documentation on the ROCm platform is available at https://rocmdocs.amd.com/en/latest/
https://pytorch.org/blog/pytorch-for-amd-rocm-platform-now-available-as-python-package/
pytorch blogs
ROCm Learning Center at https://developer.amd.com/resources/rocm-resources/rocm-learning-center/ General information on AMD’s offerings for HPC and ML can be found at https://amd.com/hpc Feedback An engaged user base is a tremendously important part of the PyTorch ecosystem. We would be deeply appreciative of feedback on the PyTorch for ROCm experience in the PyTorch discussion forum and, where appropriate, reporting any issues via Github.
https://pytorch.org/blog/pytorch-for-amd-rocm-platform-now-available-as-python-package/
pytorch blogs
layout: blog_detail title: "A Tour of PyTorch Internals (Part I)" author: "Trevor Killeen" date: 2017-05-11 12:00:00 -0500 redirect_from: /2017/05/11/Internals.html The fundamental unit in PyTorch is the Tensor. This post will serve as an overview for how we implement Tensors in PyTorch, such that the user can interact with it from the Python shell. In particular, we want to answer four main questions: How does PyTorch extend the Python interpreter to define a Tensor type that can be manipulated from Python code? How does PyTorch wrap the C libraries that actually define the Tensor's properties and methods? How does PyTorch cwrap work to generate code for Tensor methods? How does PyTorch's build system take all of these components to compile and generate a workable application? Extending the Python Interpreter
https://pytorch.org/blog/a-tour-of-pytorch-internals-1/
pytorch blogs
Extending the Python Interpreter PyTorch defines a new package torch. In this post we will consider the ._C module. This module is known as an "extension module" - a Python module written in C. Such modules allow us to define new built-in object types (e.g. the Tensor) and to call C/C++ functions. The ._C module is defined in torch/csrc/Module.cpp. The init_C() / PyInit__C() function creates the module and adds the method definitions as appropriate. This module is passed around to a number of different __init() functions that add further objects to the module, register new types, etc. One collection of these __init() calls is the following:
```cpp
ASSERT_TRUE(THPDoubleTensor_init(module));
ASSERT_TRUE(THPFloatTensor_init(module));
ASSERT_TRUE(THPHalfTensor_init(module));
ASSERT_TRUE(THPLongTensor_init(module));
ASSERT_TRUE(THPIntTensor_init(module));
ASSERT_TRUE(THPShortTensor_init(module));
ASSERT_TRUE(THPCharTensor_init(module));
ASSERT_TRUE(THPByteTensor_init(module));
```
https://pytorch.org/blog/a-tour-of-pytorch-internals-1/
pytorch blogs
These __init() functions add the Tensor object for each type to the ._C module so that they can be used in the module. Let's learn how these methods work. The THPTensor Type Much like the underlying TH and THC libraries, PyTorch defines a "generic" Tensor which is then specialized to a number of different types. Before considering how this specialization works, let's first consider how defining a new type in Python works, and how we create the generic THPTensor type.
https://pytorch.org/blog/a-tour-of-pytorch-internals-1/
pytorch blogs
The Python runtime sees all Python objects as variables of type PyObject *, which serves as a "base type" for all Python objects. Every Python object contains the refcount for the object, and a pointer to the object's type object. The type object determines the properties of the type. For example, it might contain a list of methods associated with the type, and which C functions get called to implement those methods. The object also contains any fields necessary to represent its state. The formula for defining a new type is as follows: 1. Create a struct that defines what the new object will contain 2. Define the type object for the type The struct itself could be very simple. In Python, all floating point types are actually objects on the heap. The Python float struct is defined as:
```cpp
typedef struct {
    PyObject_HEAD
    double ob_fval;
} PyFloatObject;
```
https://pytorch.org/blog/a-tour-of-pytorch-internals-1/
pytorch blogs
The `PyObject_HEAD` is a macro that brings in the code that implements an object's reference counting, and a pointer to the corresponding type object. So in this case, to implement a float, the only other "state" needed is the floating point value itself. Now, let's see the struct for our `THPTensor` type:
```cpp
struct THPTensor {
    PyObject_HEAD
    THTensor *cdata;
};
```
Pretty simple, right? We are just wrapping the underlying TH tensor by storing a pointer to it. The key part is defining the "type object" for a new type. An example definition of a type object for our Python float takes the form:
```cpp
static PyTypeObject py_FloatType = {
    PyVarObject_HEAD_INIT(NULL, 0)
    "py.FloatObject",          /* tp_name */
    sizeof(PyFloatObject),     /* tp_basicsize */
    0,                         /* tp_itemsize */
    0,                         /* tp_dealloc */
    0,                         /* tp_print */
    0,                         /* tp_getattr */
```
https://pytorch.org/blog/a-tour-of-pytorch-internals-1/
pytorch blogs
```cpp
    0,                         /* tp_setattr */
    0,                         /* tp_as_async */
    0,                         /* tp_repr */
    0,                         /* tp_as_number */
    0,                         /* tp_as_sequence */
    0,                         /* tp_as_mapping */
    0,                         /* tp_hash */
    0,                         /* tp_call */
    0,                         /* tp_str */
    0,                         /* tp_getattro */
    0,                         /* tp_setattro */
    0,                         /* tp_as_buffer */
    Py_TPFLAGS_DEFAULT,        /* tp_flags */
    "A floating point number", /* tp_doc */
};
```
https://pytorch.org/blog/a-tour-of-pytorch-internals-1/
pytorch blogs
"A floating point number", / tp_doc / }; The easiest way to think of a *type object* is as a set of fields which define the properties of the object. For example, the `tp_basicsize` field is set to `sizeof(PyFloatObject)`. This is so that Python knows how much memory to allocate when calling `PyObject_New()` for a `PyFloatObject.` The full list of fields you can set is defined in `object.h` in the CPython backend: https://github.com/python/cpython/blob/master/Include/object.h. The type object for our `THPTensor` is `THPTensorType`, defined in `csrc/generic/Tensor.cpp`. This object defines the name, size, mapping methods, etc. for a `THPTensor`. As an example, let's take a look at the `tp_new` function we set in the `PyTypeObject`: ```cpp PyTypeObject THPTensorType = { PyVarObject_HEAD_INIT(NULL, 0) ... THPTensor_(pynew), /* tp_new */ };
https://pytorch.org/blog/a-tour-of-pytorch-internals-1/
pytorch blogs
The `tp_new` function enables object creation. It is responsible for creating (as opposed to initializing) objects of that type and is equivalent to the `__new__()` method at the Python level. The C implementation is a static method that is passed the type being instantiated and any arguments, and returns a newly created object.
```cpp
static PyObject *
THPTensor_(pynew)(PyTypeObject *type, PyObject *args, PyObject *kwargs)
{
  HANDLE_TH_ERRORS
  Py_ssize_t num_args = args ? PyTuple_Size(args) : 0;

  THPTensorPtr self = (THPTensor *)type->tp_alloc(type, 0);
  // more code below
```
https://pytorch.org/blog/a-tour-of-pytorch-internals-1/
pytorch blogs
The first thing our new function does is allocate the `THPTensor`. It then runs through a series of initializations based off of the args passed to the function. For example, when creating a `THPTensor` *x* from another `THPTensor` *y*, we set the newly created `THPTensor`'s `cdata` field to be the result of calling `THTensor_(newWithTensor)` with *y*'s underlying `TH` Tensor as an argument. Similar constructors exist for sizes, storages, NumPy arrays, and sequences. Note that we solely use `tp_new`, and not a combination of `tp_new` and `tp_init` (which corresponds to the `__init__()` function). The other important thing defined in Tensor.cpp is how indexing works. PyTorch Tensors support Python's **Mapping Protocol**. This allows us to do things like:
```python
x = torch.Tensor(10).fill_(1)
y = x[3]  # y == 1
x[4] = 2  # etc.
```
Note that this indexing extends to Tensors with more than one dimension.
https://pytorch.org/blog/a-tour-of-pytorch-internals-1/
pytorch blogs
We are able to use the []-style notation by defining the three mapping methods described here. The most important methods are THPTensor_(getValue) and THPTensor_(setValue) which describe how to index a Tensor, for returning a new Tensor/Scalar, or updating the values of an existing Tensor in place. Read through these implementations to better understand how PyTorch supports basic tensor indexing. Generic Builds (Part One) We could spend a ton of time exploring various aspects of the THPTensor and how it relates to defining a new Python object. But we still need to see how the THPTensor_(init)() function is translated to the THPIntTensor_init() we used in our module initialization. How do we take our Tensor.cpp file that defines a "generic" Tensor and use it to generate Python objects for all the permutations of types? To put it another way, Tensor.cpp is littered with lines of code like: ```cpp
https://pytorch.org/blog/a-tour-of-pytorch-internals-1/
pytorch blogs
return THPTensor_(New)(THTensor_(new)(LIBRARY_STATE_NOARGS));
```
This illustrates both cases we need to make type-specific: - Our output code will call `THP<Type>Tensor_New(...)` in place of `THPTensor_(New)` - Our output code will call `TH<Type>Tensor_new(...)` in place of `THTensor_(new)` In other words, for all supported Tensor types, we need to "generate" source code that has done the above substitutions. This is part of the "build" process for PyTorch. PyTorch relies on Setuptools (https://setuptools.readthedocs.io/en/latest/) for building the package, and we define a setup.py file in the top-level directory to customize the build process. One component of building an Extension module using Setuptools is to list the source files involved in the compilation. However, our csrc/generic/Tensor.cpp file is not listed! So how does the code in this file end up being a part of the end product?
https://pytorch.org/blog/a-tour-of-pytorch-internals-1/
pytorch blogs
Recall that we are calling the THPTensor* functions (such as init) from the directory above generic. If we take a look in this directory, there is another file Tensor.cpp defined. The last line of this file is important:
```cpp
//generic_include TH torch/csrc/generic/Tensor.cpp
```
Note that this Tensor.cpp file is included in setup.py, but it is wrapped in a call to a Python helper function called split_types. This function takes as input a file, and looks for the "//generic_include" string in the file contents. If it is found, it generates a new output file for each Tensor type, with the following changes: - The output file is renamed to Tensor<Type>.cpp - The output file is slightly modified as follows:
```
# Before:
//generic_include TH torch/csrc/generic/Tensor.cpp

# After:
#define TH_GENERIC_FILE "torch/src/generic/Tensor.cpp"
#include "TH/THGenerate<Type>Type.h"
```
https://pytorch.org/blog/a-tour-of-pytorch-internals-1/
pytorch blogs
include "TH/THGenerateType.h" Including the header file on the second line has the side effect of including the source code in `Tensor.cpp` with some additional context defined. Let's take a look at one of the headers: ```cpp #ifndef TH_GENERIC_FILE #error "You must define TH_GENERIC_FILE before including THGenerateFloatType.h" #endif #define real float #define accreal double #define TH_CONVERT_REAL_TO_ACCREAL(_val) (accreal)(_val) #define TH_CONVERT_ACCREAL_TO_REAL(_val) (real)(_val) #define Real Float #define THInf FLT_MAX #define TH_REAL_IS_FLOAT #line 1 TH_GENERIC_FILE #include TH_GENERIC_FILE #undef accreal #undef real #undef Real #undef THInf #undef TH_REAL_IS_FLOAT #undef TH_CONVERT_REAL_TO_ACCREAL #undef TH_CONVERT_ACCREAL_TO_REAL #ifndef THGenerateManyTypes #undef TH_GENERIC_FILE #endif
https://pytorch.org/blog/a-tour-of-pytorch-internals-1/
pytorch blogs
What this is doing is bringing in the code from the generic Tensor.cpp file and surrounding it with the macro definitions shown above. For example, we define real as a float, so any code in the generic Tensor implementation that refers to something as a real will have that real replaced with a float. In the corresponding file THGenerateIntType.h, the same macro would replace real with int. These output files are returned from split_types and added to the list of source files, so we can see how the .cpp code for different types is created.
https://pytorch.org/blog/a-tour-of-pytorch-internals-1/
pytorch blogs
There are a few things to note here: First, the split_types function is not strictly necessary. We could wrap the code in Tensor.cpp in a single file, repeating it for each type. The reason we split the code into separate files is to speed up compilation. Second, what we mean when we talk about the type replacement (e.g. replace real with a float) is that the C preprocessor will perform these substitutions during compilation. Merely surrounding the source code with these macros has no side effects until preprocessing. Generic Builds (Part Two) Now that we have source files for all the Tensor types, we need to consider how the corresponding header declarations are created, and also how the conversions from THTensor_(method) and THPTensor_(method) to TH<Type>Tensor_method and THP<Type>Tensor_method work. For example, csrc/generic/Tensor.h has declarations like: THP_API PyObject * THPTensor_(New)(THTensor *ptr);
https://pytorch.org/blog/a-tour-of-pytorch-internals-1/
pytorch blogs
We use the same strategy for generating code in the source files for the headers. In `csrc/Tensor.h`, we do the following:
```cpp
#include "generic/Tensor.h"
#include <TH/THGenerateAllTypes.h>

#include "generic/Tensor.h"
#include <TH/THGenerateHalfType.h>
```
This has the same effect, where we draw in the code from the generic header, wrapped with the same macro definitions, for each type. The only difference is that the resulting code is contained all within the same header file, as opposed to being split into multiple source files. Lastly, we need to consider how we "convert" or "substitute" the function types. If we look in the same header file, we see a bunch of #define statements, including:
```cpp
#define THPTensor_(NAME) TH_CONCAT_4(THP,Real,Tensor_,NAME)
```
https://pytorch.org/blog/a-tour-of-pytorch-internals-1/
pytorch blogs
This macro says that any string in the source code matching the format `THPTensor_(NAME)` should be replaced with `THPRealTensor_NAME`, where Real is derived from whatever the symbol `Real` is `#define`'d to be at the time. Because our header code and source code are surrounded by macro definitions for all the types as seen above, after the preprocessor has run, the resulting code is what we would expect. The code in the `TH` library defines the same macro for `THTensor_(NAME)`, supporting the translation of those functions as well. In this way, we end up with header and source files with specialized code. Module Objects and Type Methods Now we have seen how we have wrapped TH's Tensor definition in THP, and generated THP methods such as THPFloatTensor_init(...). Now we can explore what the above code actually does in terms of the module we are creating. The key line in THPTensor_(init) is shown below (note that THPTensorBaseStr and THPTensorType are also macros that are specific to each type):
https://pytorch.org/blog/a-tour-of-pytorch-internals-1/
pytorch blogs
```cpp
PyModule_AddObject(module, THPTensorBaseStr, (PyObject *)&THPTensorType);
```
This function registers our Tensor objects to the extension module, so we can use THPFloatTensor, THPIntTensor, etc. in our Python code. Just being able to create Tensors isn't very useful - we need to be able to call all the methods that `TH` defines. A simple example shows calling the in-place `zero_` method on a Tensor:
```python
x = torch.FloatTensor(10)
x.zero_()
```
Let's start by seeing how we add methods to newly defined types. One of the fields in the "type object" is tp_methods. This field holds an array of method definitions (PyMethodDefs) and is used to associate methods (and their underlying C/C++ implementations) with a type. Suppose we wanted to define a new method on our PyFloatObject that replaces the value. We could implement this as follows:
```cpp
static PyObject * replace(PyFloatObject *self, PyObject *args) {
  double val;
  if (!PyArg_ParseTuple(args, "d", &val))
    return NULL;
```
https://pytorch.org/blog/a-tour-of-pytorch-internals-1/
pytorch blogs
```cpp
  self->ob_fval = val;
  Py_RETURN_NONE;
}
```
This is equivalent to the Python method:
```python
def replace(self, val):
    self.ob_fval = val
```
It is instructive to read more about how defining methods works in CPython. In general, methods take as the first parameter the instance of the object, and optionally parameters for the positional arguments and keyword arguments. This static function is registered as a method on our float:
```cpp
static PyMethodDef float_methods[] = {
  {"replace", (PyCFunction)replace, METH_VARARGS,
   "replace the value in the float"},
  {NULL} /* Sentinel */
};
```
This registers a method called replace, which is implemented by the C function of the same name. The METH_VARARGS flag indicates that the method takes a tuple of arguments representing all the arguments to the function. This array is set to the tp_methods field of the type object, and then we can use the replace method on objects of that type.
https://pytorch.org/blog/a-tour-of-pytorch-internals-1/
pytorch blogs
We would like to be able to call all of the methods for TH tensors on our THP tensor equivalents. However, writing wrappers for all of the TH methods would be time-consuming and error prone. We need a better way to do this. PyTorch cwrap PyTorch implements its own cwrap tool to wrap the TH Tensor methods for use in the Python backend. We define a .cwrap file containing a series of C method declarations in our custom YAML format. The cwrap tool takes this file and outputs .cpp source files containing the wrapped methods in a format that is compatible with our THPTensor Python object and the Python C extension method calling format. This tool is used to generate code to wrap not only TH, but also CuDNN. It is defined to be extensible. An example YAML "declaration" for the in-place addmv_ function is as follows:
```
[[
  name: addmv_
  cname: addmv
  return: self
  arguments:
    - THTensor* self
    - arg: real beta
      default: AS_REAL(1)
    - THTensor* self
```
https://pytorch.org/blog/a-tour-of-pytorch-internals-1/
pytorch blogs
```
    - arg: real alpha
      default: AS_REAL(1)
    - THTensor* mat
    - THTensor* vec
]]
```
The architecture of the cwrap tool is very simple. It reads in a file, and then processes it with a series of **plugins**. See `tools/cwrap/plugins/__init__.py` for documentation on all the ways a plugin can alter the code. The source code generation occurs in a series of passes. First, the YAML "declaration" is parsed and processed. Then the source code is generated piece-by-piece - adding things like argument checks and extractions, defining the method header, and the actual call to the underlying library such as TH. Finally, the cwrap tool allows for processing the entire file at a time. The resulting output for addmv_ can be explored here.
https://pytorch.org/blog/a-tour-of-pytorch-internals-1/
pytorch blogs
In order to interface with the CPython backend, the tool generates an array of PyMethodDefs that can be stored or appended to the THPTensor's tp_methods field. In the specific case of wrapping Tensor methods, the build process first generates the output source file from TensorMethods.cwrap. This source file is #include'd in the generic Tensor source file. This all occurs before the preprocessor does its magic. As a result, all of the method wrappers that are generated undergo the same pass as the THPTensor code above. Thus a single generic declaration and definition is specialized for each type as well. Putting It All Together So far, we have shown how we extend the Python interpreter to create a new extension module, how such a module defines our new THPTensor type, and how we can generate source code for Tensors of all types that interface with TH. Briefly, we will touch on compilation.
https://pytorch.org/blog/a-tour-of-pytorch-internals-1/
pytorch blogs
Setuptools allows us to define an Extension for compilation. The entire torch._C extension is compiled by collecting all of the source files, header files, libraries, etc. and creating a setuptools Extension. Then setuptools handles building the extension itself. I will explore the build process more in a subsequent post. To summarize, let's revisit our four questions: How does PyTorch extend the Python interpreter to define a Tensor type that can be manipulated from Python code? It uses CPython's framework for extending the Python interpreter and defining new types, while taking special care to generate code for all types. How does PyTorch wrap the C libraries that actually define the Tensor's properties and methods? It does so by defining a new type, THPTensor, that is backed by a TH Tensor. Function calls are forwarded to this tensor via the CPython backend's conventions. How does PyTorch cwrap work to generate code for Tensor methods?
https://pytorch.org/blog/a-tour-of-pytorch-internals-1/
pytorch blogs
It takes our custom YAML-formatted code and generates source code for each method by processing it through a series of steps using a number of plugins. How does PyTorch's build system take all of these components to compile and generate a workable application? It takes a bunch of source/header files, libraries, and compilation directives to build an extension using Setuptools. This is just a snapshot of parts of the build system for PyTorch. There is more nuance, and detail, but I hope this serves as a gentle introduction to a lot of the components of our Tensor library. Resources: https://docs.python.org/3.7/extending/index.html is invaluable for understanding how to write C/C++ Extension to Python
https://pytorch.org/blog/a-tour-of-pytorch-internals-1/
pytorch blogs
layout: blog_detail title: "PyTorch 1.12: TorchArrow, Functional API for Modules and nvFuser, are now available" author: Team PyTorch featured-img: '' We are excited to announce the release of PyTorch 1.12 (release note)! This release is composed of over 3124 commits, 433 contributors. Along with 1.12, we are releasing beta versions of AWS S3 Integration, PyTorch Vision Models on Channels Last on CPU, Empowering PyTorch on Intel® Xeon® Scalable processors with Bfloat16 and FSDP API. We want to sincerely thank our dedicated community for your contributions. Summary: - Functional APIs to functionally apply module computation with a given set of parameters - Complex32 and Complex Convolutions in PyTorch - DataPipes from TorchData fully backward compatible with DataLoader - functorch with improved coverage for APIs - nvFuser a deep learning compiler for PyTorch
https://pytorch.org/blog/pytorch-1.12-released/
pytorch blogs
- Changes to float32 matrix multiplication precision on Ampere and later CUDA hardware
- TorchArrow, a new beta library for machine learning preprocessing over batch data
Frontend APIs Introducing TorchArrow We’ve got a new Beta release ready for you to try and use: TorchArrow. This is a library for machine learning preprocessing over batch data. It features a performant and Pandas-style, easy-to-use API in order to speed up your preprocessing workflows and development. Currently, it provides a Python DataFrame interface with the following features: - High-performance CPU backend, vectorized and extensible User-Defined Functions (UDFs) with Velox - Seamless handoff with PyTorch or other model authoring, such as Tensor collation and easily plugging into PyTorch DataLoader and DataPipes - Zero copy for external readers via Arrow in-memory columnar format
https://pytorch.org/blog/pytorch-1.12-released/
pytorch blogs
For more details, please find our 10-min tutorial, installation instructions, API documentation, and a prototype for data preprocessing in TorchRec. (Beta) Functional API for Modules PyTorch 1.12 introduces a new beta feature to functionally apply Module computation with a given set of parameters. Sometimes, the traditional PyTorch Module usage pattern that maintains a static set of parameters internally is too restrictive. This is often the case when implementing algorithms for meta-learning, where multiple sets of parameters may need to be maintained across optimizer steps. The new torch.nn.utils.stateless.functional_call() API allows for: - Module computation with full flexibility over the set of parameters used
https://pytorch.org/blog/pytorch-1.12-released/
pytorch blogs
- No need to reimplement your module in a functional way
- Any parameter or buffer present in the module can be swapped with an externally-defined value for use in the call. Naming for referencing parameters / buffers follows the fully-qualified form in the module’s state_dict()
Example:
```python
import torch
from torch import nn
from torch.nn.utils.stateless import functional_call

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(3, 3)
        self.bn = nn.BatchNorm1d(3)
        self.fc2 = nn.Linear(3, 3)

    def forward(self, x):
        return self.fc2(self.bn(self.fc1(x)))

m = MyModule()

# Define parameter / buffer values to use during module computation.
my_weight = torch.randn(3, 3, requires_grad=True)
my_bias = torch.tensor([1., 2., 3.], requires_grad=True)
params_and_buffers = {
    'fc1.weight': my_weight,
    'fc1.bias': my_bias,
    # Custom buffer values can be used too.
    'bn.running_mean': torch.randn(3),
}
```
https://pytorch.org/blog/pytorch-1.12-released/
pytorch blogs
```python
# Apply module computation to the input with the specified parameters / buffers.
inp = torch.randn(5, 3)
output = functional_call(m, params_and_buffers, inp)
```
(Beta) Complex32 and Complex Convolutions in PyTorch PyTorch today natively supports complex numbers, complex autograd, complex modules, and numerous complex operations, including linear algebra and Fast Fourier Transform (FFT) operators. Many libraries, including torchaudio and ESPNet, already make use of complex numbers in PyTorch, and PyTorch 1.12 further extends complex functionality with complex convolutions and the experimental complex32 (“complex half”) data type that enables half precision FFT operations. Due to bugs in the CUDA 11.3 package, we recommend using the CUDA 11.6 package from wheels if you are using complex numbers.
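As a brief illustration of the complex support described above (a generic sketch, not code from the release notes):
```python
import torch

# A complex64 ("cfloat") tensor; the experimental complex32 dtype is also
# available in 1.12 for half-precision FFT work.
x = torch.randn(8, dtype=torch.cfloat)

spec = torch.fft.fft(x)        # FFT over a complex input
recon = torch.fft.ifft(spec)   # round-trip back to the original signal
print(torch.allclose(recon, x, atol=1e-6))
```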
https://pytorch.org/blog/pytorch-1.12-released/
pytorch blogs
(Beta) Forward-mode Automatic Differentiation Forward-mode AD allows the computation of directional derivatives (or equivalently, Jacobian-vector products) eagerly in the forward pass. PyTorch 1.12 significantly improves the operator coverage for forward-mode AD. See our tutorial for more information. TorchData BC DataLoader + DataPipe `DataPipe` from TorchData becomes fully backward compatible with the existing `DataLoader` regarding shuffle determinism and dynamic sharding in both multiprocessing and distributed environments. For more details, please check out the tutorial. (Beta) AWS S3 Integration DataPipes based on AWSSDK have been integrated into TorchData. It provides the following features backed by native AWSSDK:
https://pytorch.org/blog/pytorch-1.12-released/
pytorch blogs
- Retrieve list of urls from each S3 bucket based on prefix
- Support timeout to prevent hanging indefinitely
- Support to specify S3 bucket region
- Load data from S3 urls
- Support buffered and multi-part download
AWS native DataPipes are still in the beta phase, and we will keep tuning them to improve their performance. (Prototype) DataLoader2 DataLoader2 became available in prototype mode. We are introducing new ways to interact between DataPipes, DataLoading API, and backends (aka ReadingServices). The feature is stable in terms of API, but functionally not complete yet. We welcome early adopters and feedback, as well as potential contributors. For more details, please check out the link.
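To give a feel for the DataPipe plus DataLoader combination discussed above, here is a minimal sketch (assumes torchdata is installed; the toy pipeline itself is made up for illustration):
```python
import torch
from torch.utils.data import DataLoader
from torchdata.datapipes.iter import IterableWrapper

# Build a small DataPipe graph: wrap an in-memory sequence, shuffle, transform.
datapipe = (
    IterableWrapper(range(100))
    .shuffle()
    .map(lambda x: torch.tensor(x, dtype=torch.float32) / 100.0)
)

# DataPipes plug directly into the existing DataLoader; shuffle=True enables
# the Shuffler placed in the pipe above.
loader = DataLoader(datapipe, batch_size=8, shuffle=True)
for batch in loader:
    print(batch.shape)
    break
```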
https://pytorch.org/blog/pytorch-1.12-released/
pytorch blogs
functorch Inspired by Google JAX, functorch is a library that offers composable vmap (vectorization) and autodiff transforms. It enables advanced autodiff use cases that would otherwise be tricky to express in PyTorch. Examples of these include: - running ensembles of models on a single machine - efficiently computing Jacobians and Hessians - computing per-sample-gradients (or other per-sample quantities) We’re excited to announce functorch 0.2.0 with a number of improvements and new experimental features. Significantly improved coverage We significantly improved coverage for functorch.jvp (our forward-mode autodiff API) and other APIs that rely on it (functorch.{jacfwd, hessian}). (Prototype) functorch.experimental.functionalize
https://pytorch.org/blog/pytorch-1.12-released/
pytorch blogs
Given a function f, functionalize(f) returns a new function without mutations (with caveats). This is useful for constructing traces of PyTorch functions without in-place operations. For example, you can use make_fx(functionalize(f)) to construct a mutation-free trace of a pytorch function. To learn more, please see the documentation. For more details, please see our installation instructions, documentation, tutorials, and release notes. Performance Improvements Introducing nvFuser, a deep learning compiler for PyTorch
https://pytorch.org/blog/pytorch-1.12-released/
pytorch blogs
In PyTorch 1.12, Torchscript is updating its default fuser (for Volta and later CUDA accelerators) to nvFuser, which supports a wider range of operations and is faster than NNC, the previous fuser for CUDA devices. A soon to be published blog post will elaborate on nvFuser and show how it speeds up training on a variety of networks. See the nvFuser documentation for more details on usage and debugging. Changes to float32 matrix multiplication precision on Ampere and later CUDA hardware
https://pytorch.org/blog/pytorch-1.12-released/
pytorch blogs
PyTorch supports a variety of “mixed precision” techniques, like the torch.amp (Automated Mixed Precision) module and performing float32 matrix multiplications using the TensorFloat32 datatype on Ampere and later CUDA hardware for faster internal computations. In PyTorch 1.12 we’re changing the default behavior of float32 matrix multiplications to always use full IEEE fp32 precision, which is more precise but slower than using the TensorFloat32 datatype for internal computation. For devices with a particularly high ratio of TensorFloat32 to float32 throughput such as A100, this change in defaults can result in a large slowdown. If you’ve been using TensorFloat32 matrix multiplications then you can continue to do so by setting torch.backends.cuda.matmul.allow_tf32 = True, which has been supported since PyTorch 1.7. Starting in PyTorch 1.12 the new matmul precision API can be used, too: torch.set_float32_matmul_precision("highest"|"high"|"medium")
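For illustration, a minimal sketch of how a user could opt back into TensorFloat32 matmuls under either API (assumes an Ampere-class GPU; both calls are the public APIs referenced above):
```python
import torch

# Option 1: the long-standing flag (available since PyTorch 1.7).
torch.backends.cuda.matmul.allow_tf32 = True

# Option 2: the new PyTorch 1.12 precision API; "high" or "medium"
# enables TensorFloat32 on Ampere and later CUDA devices.
torch.set_float32_matmul_precision("high")

a = torch.randn(1024, 1024, device="cuda")
b = torch.randn(1024, 1024, device="cuda")
c = a @ b  # uses TF32 internal math under the settings above
```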
https://pytorch.org/blog/pytorch-1.12-released/
pytorch blogs
To reiterate, PyTorch’s new default is “highest” precision for all device types. We think this provides better consistency across device types for matrix multiplications. Documentation for the new precision API can be found here. Setting the “high” or “medium” precision types will enable TensorFloat32 on Ampere and later CUDA devices. If you’re updating to PyTorch 1.12 then to preserve the current behavior and faster performance of matrix multiplications on Ampere devices, set precision to “high”. Using mixed precision techniques is essential for training many modern deep learning networks efficiently, and if you’re already using torch.amp this change is unlikely to affect you. If you’re not familiar with mixed precision training then see our soon to be published “What Every User Should Know About Mixed Precision Training in PyTorch” blogpost.
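As a reminder of what torch.amp usage looks like in practice (a generic sketch, not tied to this release; assumes a CUDA device):
```python
import torch
from torch import nn

model = nn.Linear(64, 8).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler()

data = torch.randn(32, 64, device="cuda")
target = torch.randn(32, 8, device="cuda")

for _ in range(3):
    optimizer.zero_grad()
    # Run the forward pass in mixed precision; matmuls use float16 here.
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = nn.functional.mse_loss(model(data), target)
    # Scale the loss to avoid underflow in float16 gradients, then step.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```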
https://pytorch.org/blog/pytorch-1.12-released/
pytorch blogs
(Beta) Accelerating PyTorch Vision Models with Channels Last on CPU Memory formats have a significant impact on performance when running vision models, generally Channels Last is more favorable from a performance perspective due to better data locality. 1.12 includes fundamental concepts of memory formats and demonstrates performance benefits using Channels Last on popular PyTorch vision models on Intel® Xeon® Scalable processors. - Enables Channels Last memory format support for the commonly used operators in CV domain on CPU, applicable for both inference and training - Provides native level optimization on Channels Last kernels from ATen, applicable for both AVX2 and AVX512 - Delivers 1.3x to 1.8x inference performance gain over Channels First for TorchVision models on Intel® Xeon® Ice Lake (or newer) CPUs (Beta) Empowering PyTorch on Intel® Xeon® Scalable processors with Bfloat16
https://pytorch.org/blog/pytorch-1.12-released/
pytorch blogs
Reduced precision numeric formats like bfloat16 improve PyTorch performance across multiple deep learning training workloads. PyTorch 1.12 includes the latest software enhancements on bfloat16 which apply to a broader scope of user scenarios and showcase even higher performance gains. The main improvements include: - 2x hardware compute throughput vs. float32 with the new bfloat16 native instruction VDPBF16PS, introduced on Intel® Xeon® Cooper Lake CPUs - 1/2 memory footprint of float32, faster speed for memory bandwidth intensive operators - 1.4x to 2.2x inference performance gain over float32 for TorchVision models on Intel® Xeon® Cooper Lake (or newer) CPUs (Prototype) Introducing Accelerated PyTorch Training on Mac
https://pytorch.org/blog/pytorch-1.12-released/
pytorch blogs
With the PyTorch 1.12 release, developers and researchers can now take advantage of Apple silicon GPUs for significantly faster model training. This unlocks the ability to perform machine learning workflows like prototyping and fine-tuning locally, right on Mac. Accelerated GPU training is enabled using Apple’s Metal Performance Shaders (MPS) as a backend. The benefits include performance speedup from accelerated GPU training and the ability to train larger networks or batch sizes locally. Learn more here. Accelerated GPU training and evaluation speedups over CPU-only (times faster) Alongside the new MPS device support, the M1 binaries for Core and Domain libraries that have been available for the last few releases are now an official prototype feature. These binaries can be used to run PyTorch natively on Apple Silicon.
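A minimal sketch of how the MPS backend can be selected (assumes a PyTorch build with MPS support on an Apple silicon Mac):
```python
import torch

# Fall back to CPU when MPS is not available in the current build/OS.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

model = torch.nn.Linear(128, 10).to(device)
x = torch.randn(32, 128, device=device)
out = model(x)  # runs on the Apple GPU via Metal Performance Shaders
print(out.device)
```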
https://pytorch.org/blog/pytorch-1.12-released/
pytorch blogs
(Prototype) BetterTransformer: Fastpath execution for Transformer Encoder Inference PyTorch now supports CPU and GPU fastpath implementations (“BetterTransformer”) for several Transformer Encoder modules including TransformerEncoder, TransformerEncoderLayer, and MultiHeadAttention (MHA). The BetterTransformer fastpath architecture Better Transformer is consistently faster – 2x for many common execution scenarios, depending on model and input characteristics. The new BetterTransformer-enabled modules are API compatible with previous releases of the PyTorch Transformer API and will accelerate existing models if they meet fastpath execution requirements, as well as read models trained with previous versions of PyTorch. PyTorch 1.12 includes: - BetterTransformer integration for Torchtext’s pretrained RoBERTa and XLM-R models - Torchtext which builds on the PyTorch Transformer API
https://pytorch.org/blog/pytorch-1.12-released/
pytorch blogs
- Fastpath execution for improved performance by reducing execution overheads with fused kernels which combine multiple operators into a single kernel
- Option to achieve additional speedups by taking advantage of data sparsity during the processing of padding tokens in natural-language processing (by setting enable_nested_tensor=True when creating a TransformerEncoder)
- Diagnostics to help users understand why fastpath execution did not occur
Distributed
https://pytorch.org/blog/pytorch-1.12-released/
pytorch blogs
(Beta) Fully Sharded Data Parallel (FSDP) API FSDP API helps easily scale large model training by sharding a model’s parameters, gradients and optimizer states across data parallel workers while maintaining the simplicity of data parallelism. The prototype version was released in PyTorch 1.11 with a minimum set of features that helped scaling tests of models with up to 1T parameters. In this beta release, FSDP API added the following features to support various production workloads. Highlights of the newly added features in this beta release include:
https://pytorch.org/blog/pytorch-1.12-released/
pytorch blogs
- Universal sharding strategy API - Users can easily change between sharding strategies with a single line change, and thus compare and use DDP (only data sharding), FSDP (full model and data sharding), or Zero2 (only sharding of optimizer and gradients) to optimize memory and performance for their specific training needs
- Fine grained mixed precision policies - Users can specify a mix of half and full data types (bfloat16, fp16 or fp32) for model parameters, gradient communication, and buffers via mixed precision policies. Models are automatically saved in fp32 to allow for maximum portability
- Transformer auto wrapping policy - allows for optimal wrapping of Transformer based models by registering the model's layer class, and thus accelerated training performance
- Faster model initialization using device_id init - initialization is performed in a streaming fashion to avoid OOM issues and optimize init performance vs CPU init
https://pytorch.org/blog/pytorch-1.12-released/
pytorch blogs
- Rank0 streaming for full model saving of larger models - Fully sharded models can be saved by all GPUs streaming their shards to the rank 0 GPU, and the model is built in full state on the rank 0 CPU for saving
For more details and example code, please check out the documentation and the tutorial. Thanks for reading. If you’re interested in these updates and want to join the PyTorch community, we encourage you to join the discussion forums and open GitHub issues. To get the latest news from PyTorch, follow us on Twitter, Medium, YouTube, and LinkedIn. Cheers! Team PyTorch
https://pytorch.org/blog/pytorch-1.12-released/
pytorch blogs
layout: blog_detail title: 'PyTorch 1.4 released, domain libraries updated' author: Team PyTorch Today, we’re announcing the availability of PyTorch 1.4, along with updates to the PyTorch domain libraries. These releases build on top of the announcements from NeurIPS 2019, where we shared the availability of PyTorch Elastic, a new classification framework for image and video, and the addition of Preferred Networks to the PyTorch community. For those that attended the workshops at NeurIPS, the content can be found here. PyTorch 1.4 The 1.4 release of PyTorch adds new capabilities, including the ability to do fine grain build level customization for PyTorch Mobile, and new experimental features including support for model parallel training and Java language bindings. PyTorch Mobile - Build level customization
PyTorch Mobile - Build level customization

Following the open sourcing of PyTorch Mobile in the 1.3 release, PyTorch 1.4 adds additional mobile support, including the ability to customize build scripts at a fine-grained level. This allows mobile developers to optimize library size by including only the operators used by their models and, in the process, reduce their on-device footprint significantly. Initial results show that, for example, a customized MobileNetV2 is 40% to 50% smaller than the prebuilt PyTorch mobile library. You can learn more here about how to create your own custom builds and, as always, please engage with the community on the PyTorch forums to provide any feedback you have.

Example code snippet for selectively compiling only the operators needed for MobileNetV2:

```python
# Dump list of operators used by MobileNetV2:
import torch, yaml

model = torch.jit.load('MobileNetV2.pt')
ops = torch.jit.export_opnames(model)
with open('MobileNetV2.yaml', 'w') as output:
    yaml.dump(ops, output)
```

```bash
# Build PyTorch Android library customized for MobileNetV2:
SELECTED_OP_LIST=MobileNetV2.yaml scripts/build_pytorch_android.sh arm64-v8a

# Build PyTorch iOS library customized for MobileNetV2:
SELECTED_OP_LIST=MobileNetV2.yaml BUILD_PYTORCH_MOBILE=1 IOS_ARCH=arm64 scripts/build_ios.sh
```

Distributed model parallel training (Experimental)
With the scale of models, such as RoBERTa, continuing to increase into the billions of parameters, model parallel training has become ever more important to help researchers push the limits. This release provides a distributed RPC framework to support distributed model parallel training. It allows for running functions remotely and referencing remote objects without copying the real data around, and provides autograd and optimizer APIs to transparently run backwards and update parameters across RPC boundaries. To learn more about the APIs and the design of this feature, see the links below:

- API documentation
- Distributed Autograd design doc
- Remote Reference design doc

For the full tutorials, see the links below:

- A full RPC tutorial
- Examples using model parallel training for reinforcement learning and with an LSTM

As always, you can connect with community members and discuss more on the forums.

Java bindings (Experimental)

In addition to supporting Python and C++, this release adds experimental support for Java bindings. Based on the interface developed for Android in PyTorch Mobile, the new bindings allow you to invoke TorchScript models from any Java program. Note that the Java bindings are only available for Linux for this release, and for inference only. We expect support to expand in subsequent releases. See the code snippet below for how to use PyTorch within Java:

```java
Module mod = Module.load("demo-model.pt1");
Tensor data = Tensor.fromBlob(
    new int[] {1, 2, 3, 4, 5, 6}, // data
    new long[] {2, 3}             // shape
);
IValue result = mod.forward(IValue.from(data), IValue.from(3.0));
Tensor output = result.toTensor();
System.out.println("shape: " + Arrays.toString(output.shape()));
System.out.println("data: " + Arrays.toString(output.getDataAsFloatArray()));
```

Learn more about how to use PyTorch from Java here, and see the full Javadocs API documentation here.

For the full 1.4 release notes, see here.

Domain Libraries

PyTorch domain libraries like torchvision, torchtext, and torchaudio complement PyTorch with common datasets, models, and transforms. We’re excited to share new releases for all three domain libraries alongside the PyTorch 1.4 core release.

torchvision 0.5

The improvements to torchvision 0.5 mainly focus on adding support for production deployment, including quantization, TorchScript, and ONNX. Some of the highlights include:
- All models in torchvision are now torchscriptable, making them easier to ship into non-Python production environments
- ResNets, MobileNet, ShuffleNet, GoogleNet and InceptionV3 now have quantized counterparts with pre-trained models, and also include scripts for quantization-aware training
- In partnership with the Microsoft team, we’ve added ONNX support for all models, including Mask R-CNN

Learn more about torchvision 0.5 here.

torchaudio 0.4

Improvements in torchaudio 0.4 focus on enhancing the currently available transformations, datasets, and backend support. Highlights include:

- SoX is now optional, and a new extensible backend dispatch mechanism exposes SoundFile as an alternative to SoX
- The interface for datasets has been unified. This enables the addition of two large datasets: LibriSpeech and Common Voice
- New filters such as biquad, data augmentation such as time and frequency masking, transforms such as MFCC, gain and dither, and new feature computation such as deltas, are now available
- Transformations now support batches and are jitable
- An interactive speech recognition demo with voice activity detection is available for experimentation

Learn more about torchaudio 0.4 here.

torchtext 0.5

torchtext 0.5 focuses mainly on improvements to the dataset loader APIs, including compatibility with core PyTorch APIs, but also adds support for unsupervised text tokenization. Highlights include:

- Added bindings for SentencePiece for unsupervised text tokenization
- Added a new unsupervised learning dataset: enwik9
- Made revisions to PennTreebank, WikiText103, WikiText2 and IMDb to make them compatible with torch.utils.data. Those datasets are in an experimental folder and we welcome your feedback
Learn more about torchtext 0.5 here.

We’d like to thank the entire PyTorch team and the community for all their contributions to this work.

Cheers!

Team PyTorch
layout: blog_detail
title: 'Stochastic Weight Averaging in PyTorch'
author: Pavel Izmailov and Andrew Gordon Wilson
redirect_from: /2019/04/29/road-to-1.0.html

In this blogpost we describe the recently proposed Stochastic Weight Averaging (SWA) technique [1, 2] and its new implementation in torchcontrib. SWA is a simple procedure that improves generalization in deep learning over Stochastic Gradient Descent (SGD) at no additional cost, and can be used as a drop-in replacement for any other optimizer in PyTorch. SWA has a wide range of applications and features:

- SWA has been shown to significantly improve generalization in computer vision tasks, including VGG, ResNets, Wide ResNets and DenseNets on ImageNet and CIFAR benchmarks [1, 2].
- SWA provides state-of-the-art performance on key benchmarks in semi-supervised learning and domain adaptation [2].
- SWA is shown to improve the stability of training as well as the final average rewards of policy-gradient methods in deep reinforcement learning [3].
- An extension of SWA can obtain efficient Bayesian model averaging, as well as high-quality uncertainty estimates and calibration in deep learning [4].
- SWA for low precision training, SWALP, can match the performance of full-precision SGD even with all numbers quantized down to 8 bits, including gradient accumulators [5].

In short, SWA performs an equal average of the weights traversed by SGD with a modified learning rate schedule (see the left panel of Figure 1). SWA solutions end up in the center of a wide flat region of loss, while SGD tends to converge to the boundary of the low-loss region, making it susceptible to the shift between train and test error surfaces (see the middle and right panels of Figure 1).
Figure 1. Illustrations of SWA and SGD with a Preactivation ResNet-164 on CIFAR-100 [1]. Left: test error surface for three FGE samples and the corresponding SWA solution (averaging in weight space). Middle and Right: test error and train loss surfaces showing the weights proposed by SGD (at convergence) and SWA, starting from the same initialization of SGD after 125 training epochs. Please see [1] for details on how these figures were constructed.

With our new implementation in torchcontrib, using SWA is as easy as using any other optimizer in PyTorch:

```python
from torchcontrib.optim import SWA

...
# training loop
base_opt = torch.optim.SGD(model.parameters(), lr=0.1)
opt = torchcontrib.optim.SWA(base_opt, swa_start=10, swa_freq=5, swa_lr=0.05)
for _ in range(100):
    opt.zero_grad()
    loss_fn(model(input), target).backward()
    opt.step()
opt.swap_swa_sgd()
```

You can wrap any optimizer from torch.optim using the SWA class, and then train your model as usual. When training is complete, you simply call swap_swa_sgd() to set the weights of your model to their SWA averages. Below we explain the SWA procedure and the parameters of the SWA class in detail. We emphasize that SWA can be combined with any optimization procedure, such as Adam, in the same way that it can be combined with SGD.
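For example, a minimal sketch of the same auto-mode loop with Adam as the base optimizer might look like the following; as in the snippet above, model, loss_fn, input, and target are placeholders, and the SWA hyperparameters are illustrative:

```python
import torch
import torchcontrib

base_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
opt = torchcontrib.optim.SWA(base_opt, swa_start=10, swa_freq=5, swa_lr=0.05)

for _ in range(100):
    opt.zero_grad()
    loss_fn(model(input), target).backward()
    opt.step()      # Adam update; SWA snapshots are taken every swa_freq steps after swa_start
opt.swap_swa_sgd()  # replace the model weights with their SWA averages
```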
Is this just Averaged SGD?

At a high level, averaging SGD iterates dates back several decades in convex optimization [6, 7], where it is sometimes referred to as Polyak-Ruppert averaging, or averaged SGD. But the details matter. Averaged SGD is often employed in conjunction with a decaying learning rate and an exponentially moving average, typically for convex optimization. In convex optimization, the focus has been on improved rates of convergence. In deep learning, this form of averaged SGD smooths the trajectory of SGD iterates but does not perform very differently. By contrast, SWA is focused on an equal average of SGD iterates with a modified cyclical or high constant learning rate, and exploits the flatness of training objectives [8] specific to deep learning for improved generalization.
Stochastic Weight Averaging

There are two important ingredients that make SWA work. First, SWA uses a modified learning rate schedule so that SGD continues to explore the set of high-performing networks instead of simply converging to a single solution. For example, we can use the standard decaying learning rate strategy for the first 75% of training time and then set the learning rate to a reasonably high constant value for the remaining 25% of the time (see Figure 2 below). The second ingredient is to average the weights of the networks traversed by SGD. For example, we can maintain a running average of the weights obtained at the end of every epoch within the last 25% of training time (see Figure 2).
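Concretely, the equal average can be maintained incrementally: each time a new weight snapshot $w$ is collected, the running average is updated as below. This is just the standard incremental form of an arithmetic mean; the notation here is ours, not taken verbatim from the paper.

$$
w_{\text{SWA}} \leftarrow \frac{n_{\text{models}} \cdot w_{\text{SWA}} + w}{n_{\text{models}} + 1},
\qquad
n_{\text{models}} \leftarrow n_{\text{models}} + 1
$$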
Figure 2. Illustration of the learning rate schedule adopted by SWA. A standard decaying schedule is used for the first 75% of training, and then a high constant value is used for the remaining 25%. The SWA averages are formed during the last 25% of training.

In our implementation, the auto mode of the SWA optimizer allows us to run the procedure described above. To run SWA in auto mode you just need to wrap your optimizer base_opt of choice (which can be SGD, Adam, or any other torch.optim.Optimizer) with SWA(base_opt, swa_start, swa_freq, swa_lr). After swa_start optimization steps the learning rate is switched to the constant value swa_lr, and at the end of every swa_freq optimization steps a snapshot of the weights is added to the SWA running average. Once you run opt.swap_swa_sgd(), the weights of your model are replaced with their SWA running averages.
Batch Normalization

One important detail to keep in mind is batch normalization. Batch normalization layers compute running statistics of activations during training. Note that the SWA averages of the weights are never used to make predictions during training, so the batch normalization layers do not have the activation statistics computed after you reset the weights of your model with opt.swap_swa_sgd(). To compute the activation statistics, you can just make a forward pass on your training data using the SWA model once training is finished. In the SWA class we provide a helper function, opt.bn_update(train_loader, model), which updates the activation statistics for every batch normalization layer in the model by making a forward pass on the train_loader data loader. You only need to call this function once, at the end of training.
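Putting the two calls together, the end of training might look roughly like this. This continues the auto-mode example above; model and train_loader are assumed to be defined in your training setup:

```python
# After the training loop: swap in the SWA weights, then refresh BatchNorm statistics
opt.swap_swa_sgd()                  # model parameters are replaced by their SWA running averages
opt.bn_update(train_loader, model)  # one forward pass over train_loader to recompute BN statistics
# the model is now ready for evaluation with the SWA weights
```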
Advanced Learning-Rate Schedules

SWA can be used with any learning rate schedule that encourages exploration of the flat region of solutions. For example, you can use cyclical learning rates in the last 25% of training time instead of a constant value, and average the weights of the networks corresponding to the lowest values of the learning rate within each cycle (see Figure 3).

Figure 3. Illustration of SWA with an alternative learning rate schedule. Cyclical learning rates are adopted in the last 25% of training, and models for averaging are collected at the end of each cycle.

In our implementation you can implement custom learning rate and weight averaging strategies by using SWA in manual mode. The following code is equivalent to the auto mode code presented at the beginning of this blogpost.

```python
opt = torchcontrib.optim.SWA(base_opt)
for i in range(100):
    opt.zero_grad()
    loss_fn(model(input), target).backward()
    opt.step()
    if i > 10 and i % 5 == 0:
        opt.update_swa()
opt.swap_swa_sgd()
```

In manual mode you don’t specify swa_start, swa_lr and swa_freq; you just call opt.update_swa() whenever you want to update the SWA running averages (for example, at the end of each learning rate cycle). In manual mode SWA doesn’t change the learning rate, so you can use any schedule you want, as you would normally do with any other torch.optim.Optimizer.

Why does it work?

SGD converges to a solution within a wide flat region of loss. The weight space is extremely high-dimensional, and most of the volume of the flat region is concentrated near the boundary, so SGD solutions will always be found near the boundary of the flat region of the loss. SWA, on the other hand, averages multiple SGD solutions, which allows it to move towards the center of the flat region.
We expect solutions that are centered in the flat region of the loss to generalize better than those near the boundary. Indeed, train and test error surfaces are not perfectly aligned in the weight space. Solutions that are centered in the flat region are not as susceptible to the shifts between train and test error surfaces as those near the boundary. In Figure 4 below we show the train loss and test error surfaces along the direction connecting the SWA and SGD solutions. As you can see, while the SWA solution has a higher train loss compared to the SGD solution, it is centered in the region of low loss and has a substantially better test error.
Figure 4. Train loss and test error along the line connecting the SWA solution (circle) and the SGD solution (square). The SWA solution is centered in a wide region of low train loss, while the SGD solution lies near the boundary. Because of the shift between the train loss and test error surfaces, the SWA solution leads to much better generalization.

Examples and Results

We released a GitHub repo here with examples of using the torchcontrib implementation of SWA for training DNNs. For example, these examples can be used to achieve the following results on CIFAR-100:

| DNN (Budget) | SGD | SWA 1 Budget | SWA 1.25 Budgets | SWA 1.5 Budgets |
| ------------ | --- | ------------ | ---------------- | --------------- |
| VGG16 (200) | 72.55 ± 0.10 | 73.91 ± 0.12 | 74.17 ± 0.15 | 74.27 ± 0.25 |
| PreResNet110 (150) | 76.77 ± 0.38 | 78.75 ± 0.16 | 78.91 ± 0.29 | 79.10 ± 0.21 |
| PreResNet164 (150) | 78.49 ± 0.36 | 79.77 ± 0.17 | 80.18 ± 0.23 | 80.35 ± 0.16 |
| WideResNet28x10 (200) | 80.82 ± 0.23 | 81.46 ± 0.23 | 81.91 ± 0.27 | 82.15 ± 0.27 |

Semi-Supervised Learning

In a follow-up paper, SWA was applied to semi-supervised learning, where it illustrated improvements beyond the best reported results in multiple settings. For example, with SWA you can get 95% accuracy on CIFAR-10 if you only have the training labels for 4k training data points (the previous best reported result on this problem was 93.7%). This paper also explores averaging multiple times within epochs, which can accelerate convergence and find still flatter solutions in a given time.

Figure 5. Performance of fast-SWA on semi-supervised learning with CIFAR-10. fast-SWA achieves record results in every setting considered.
Calibration and Uncertainty Estimates

SWA-Gaussian (SWAG) is a simple, scalable and convenient approach to uncertainty estimation and calibration in Bayesian deep learning. Similarly to SWA, which maintains a running average of SGD iterates, SWAG estimates the first and second moments of the iterates to construct a Gaussian distribution over weights. The SWAG distribution approximates the shape of the true posterior: Figure 6 below shows the SWAG distribution on top of the posterior log-density for PreResNet-164 on CIFAR-100.

Figure 6. The SWAG distribution on top of the posterior log-density for PreResNet-164 on CIFAR-100. The shape of the SWAG distribution is aligned with the posterior.
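To make the moment-matching idea concrete, here is a rough, self-contained sketch of a diagonal version of this construction. It is illustrative only: the weight snapshots are random placeholders, and this is not the official SWAG implementation (linked below).

```python
import torch

torch.manual_seed(0)
dim, n_snapshots = 10, 50

# Placeholder for flattened weight snapshots collected along the SGD trajectory
snapshots = 0.5 + 0.1 * torch.randn(n_snapshots, dim)

# First and second moments of the iterates (diagonal Gaussian approximation)
first_moment = snapshots.mean(dim=0)
second_moment = (snapshots ** 2).mean(dim=0)
variance = (second_moment - first_moment ** 2).clamp_min(1e-8)

# Sample weight vectors from the fitted Gaussian, e.g. for Bayesian model averaging
posterior = torch.distributions.Normal(first_moment, variance.sqrt())
weight_samples = posterior.sample((8,))  # 8 sets of weights to average predictions over
print(weight_samples.shape)              # torch.Size([8, 10])
```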
Empirically, SWAG performs on par with or better than popular alternatives, including MC dropout, KFAC Laplace, and temperature scaling, on uncertainty quantification, out-of-distribution detection, calibration and transfer learning in computer vision tasks. Code for SWAG is available here.

Reinforcement Learning

In another follow-up paper, SWA was shown to improve the performance of the policy gradient methods A2C and DDPG on several Atari games and MuJoCo environments.

| Environment | A2C | A2C + SWA |
| ----------- | --- | --------- |
| Breakout | 522 ± 34 | 703 ± 60 |
| Qbert | 18777 ± 778 | 21272 ± 655 |
| SpaceInvaders | 7727 ± 1121 | 21676 ± 8897 |
| Seaquest | 1779 ± 4 | 1795 ± 4 |
| CrazyClimber | 147030 ± 10239 | 139752 ± 11618 |
| BeamRider | 9999 ± 402 | 11321 ± 1065 |
Low Precision Training

We can filter out quantization noise by combining weights that have been rounded down with weights that have been rounded up. Moreover, by averaging weights to find a flat region of the loss surface, large perturbations of the weights will not affect the quality of the solution (Figures 7 and 8). Recent work shows that by adapting SWA to the low precision setting, in a method called SWALP, one can match the performance of full-precision SGD even with all training in 8 bits [5]. This is quite a practically important result, given that (1) SGD training in 8 bits performs notably worse than full-precision SGD, and (2) low precision training is significantly harder than making predictions in low precision after training (the usual setting). For example, a ResNet-164 trained on CIFAR-100 with float (16-bit) SGD achieves 22.2% error, while 8-bit SGD achieves 24.0% error. By contrast, SWALP with 8-bit training achieves 21.8% error.
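As a toy illustration of why averaging filters quantization noise (this is not the SWALP algorithm itself; the grid size and weight values below are made up), consider averaging many stochastically rounded copies of the same weights:

```python
import torch

torch.manual_seed(0)
w_true = torch.full((5,), 0.3)  # "ideal" full-precision weight values
grid = 0.25                     # coarse quantization grid, for illustration

def stochastic_round(x, grid):
    # Round down or up to the grid, with probability proportional to the distance
    low = torch.floor(x / grid) * grid
    prob_up = (x - low) / grid
    return low + grid * (torch.rand_like(x) < prob_up).float()

samples = torch.stack([stochastic_round(w_true, grid) for _ in range(200)])
print(samples[0])       # each copy sits on the coarse grid (0.25 or 0.50)
print(samples.mean(0))  # the average is close to the true value 0.30
```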
Figure 7. Quantizing in a flat region can still provide solutions with low loss.

Figure 8. Low precision SGD training (with a modified learning rate schedule) and SWALP.
Conclusion

One of the greatest open questions in deep learning is why SGD manages to find good solutions, given that the training objectives are highly multimodal and there are in principle many settings of parameters that achieve no training loss but poor generalization. By understanding geometric features such as flatness, which relate to generalization, we can begin to resolve these questions and build optimizers that provide even better generalization, and many other useful features, such as uncertainty representation. We have presented SWA, a simple drop-in replacement for standard SGD, which can in principle benefit anyone training a deep neural network. SWA has been demonstrated to have strong performance in a number of areas, including computer vision, semi-supervised learning, reinforcement learning, uncertainty representation, calibration, Bayesian model averaging, and low precision training.
We encourage you to try out SWA! Using SWA is now as easy as using any other optimizer in PyTorch. And even if you have already trained your model with SGD (or any other optimizer), it’s very easy to realize the benefits of SWA by running SWA for a small number of epochs starting with a pre-trained model.

[1] Averaging Weights Leads to Wider Optima and Better Generalization; Pavel Izmailov, Dmitry Podoprikhin, Timur Garipov, Dmitry Vetrov, Andrew Gordon Wilson; Uncertainty in Artificial Intelligence (UAI), 2018

[2] There Are Many Consistent Explanations of Unlabeled Data: Why You Should Average; Ben Athiwaratkun, Marc Finzi, Pavel Izmailov, Andrew Gordon Wilson; International Conference on Learning Representations (ICLR), 2019

[3] Improving Stability in Deep Reinforcement Learning with Weight Averaging; Evgenii Nikishin, Pavel Izmailov, Ben Athiwaratkun, Dmitrii Podoprikhin, Timur Garipov, Pavel Shvechikov, Dmitry Vetrov, Andrew Gordon Wilson; UAI 2018 Workshop: Uncertainty in Deep Learning, 2018
[4] A Simple Baseline for Bayesian Uncertainty in Deep Learning; Wesley Maddox, Timur Garipov, Pavel Izmailov, Andrew Gordon Wilson; arXiv pre-print, 2019: https://arxiv.org/abs/1902.02476

[5] SWALP: Stochastic Weight Averaging in Low Precision Training; Guandao Yang, Tianyi Zhang, Polina Kirichenko, Junwen Bai, Andrew Gordon Wilson, Christopher De Sa; to appear at the International Conference on Machine Learning (ICML), 2019

[6] Efficient estimations from a slowly convergent Robbins-Monro process; David Ruppert; Technical report, Cornell University Operations Research and Industrial Engineering, 1988

[7] Acceleration of stochastic approximation by averaging; Boris T. Polyak and Anatoli B. Juditsky; SIAM Journal on Control and Optimization, 30(4):838–855, 1992
[8] Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs, Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry Vetrov, Andrew Gordon Wilson. Neural Information Processing Systems (NeurIPS), 2018