layout: blog_detail
title: 'Announcing PyTorch Ecosystem Day'
author: Team PyTorch

We're proud to announce our first PyTorch Ecosystem Day. The virtual, one-day event will focus entirely on our Ecosystem and Industry PyTorch communities! PyTorch is a deep learning framework of choice for academics and companies, thanks to its rich ecosystem of tools and strong community. Like our developers, our ecosystem partners play a pivotal role in the development and growth of the community. We will be hosting our first PyTorch Ecosystem Day, a virtual event designed for our ecosystem and industry communities to showcase their work and discover new opportunities to collaborate.
https://pytorch.org/blog/ecosystem_day_2021/
pytorch blogs
PyTorch Ecosystem Day will be held on April 21, with both a morning and an evening session, to ensure we reach our global community. Join us virtually for a day filled with discussions on new developments, trends, challenges, and best practices through keynotes, breakout sessions, and a unique networking opportunity hosted through Gather.Town.

Event Details
- April 21, 2021 (Pacific Time)
- Fully digital experience
- Morning Session (EMEA):
  - Opening Talks - 8:00 am - 9:00 am PT
  - Poster Exhibition & Breakout Sessions - 9:00 am - 12:00 pm PT
- Evening Session (APAC/US):
  - Opening Talks - 3:00 pm - 4:00 pm PT
  - Poster Exhibition & Breakout Sessions - 3:00 pm - 6:00 pm PT
- Networking - 9:00 am - 7:00 pm PT

There are two ways to participate in PyTorch Ecosystem Day:
https://pytorch.org/blog/ecosystem_day_2021/
pytorch blogs
1. Poster Exhibition from the PyTorch ecosystem and industry communities covering a variety of topics. Posters are available for viewing throughout the duration of the event. To be part of the poster exhibition, please see below for submission details. If your poster is accepted, we highly recommend being present at your poster during the morning or evening session, or both!
2. Breakout Sessions are 40-minute sessions freely designed by the community. The breakouts can be talks, demos, tutorials, or discussions. Note: you must have an accepted poster to apply for a breakout session.
https://pytorch.org/blog/ecosystem_day_2021/
pytorch blogs
Call for posters now open! Submit your proposal today! Please send us the title and summary of your projects, tools, and libraries that could benefit PyTorch researchers in academia and industry, application developers, and ML engineers for consideration. The focus must be on academic papers, machine learning research, or open-source projects. Please no sales pitches. Deadline for submission is March 18, 2021. Visit pytorchecosystemday.fbreg.com for more information and we look forward to welcoming you to PyTorch Ecosystem Day on April 21st!
https://pytorch.org/blog/ecosystem_day_2021/
pytorch blogs
layout: blog_detail
title: 'PyTorch 1.5 released, new and updated APIs including C++ frontend API parity with Python'
author: Team PyTorch

Today, we're announcing the availability of PyTorch 1.5, along with new and updated libraries. This release includes several major new API additions and improvements. PyTorch now includes a significant update to the C++ frontend, 'channels last' memory format for computer vision models, and a stable release of the distributed RPC framework used for model-parallel training. The release also includes new autograd APIs for computing Hessians and Jacobians, and an API for creating custom C++ classes that was inspired by pybind11. You can find the detailed release notes here.

C++ Frontend API (Stable)

The C++ frontend API is now at parity with Python, and the features overall have been moved to 'stable' (previously tagged as experimental). Some of the major highlights include:
https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/
pytorch blogs
- Now with ~100% coverage and docs for the C++ torch::nn module/functional API, users can easily translate their models from the Python API to the C++ API, making the model-authoring experience much smoother.
- Optimizers in C++ had deviated from their Python equivalents: C++ optimizers could not take parameter groups as input while the Python ones could, and the step function implementations were not exactly the same. With the 1.5 release, C++ optimizers always behave the same as their Python equivalents.
- The lack of a tensor multi-dim indexing API in C++ was a well-known issue that resulted in many posts on the PyTorch GitHub issue tracker and forum. The previous workaround was to use a combination of narrow / select / index_select / masked_select, which was clunky and error-prone compared to the Python API's elegant tensor[:, 0, ..., mask] syntax. With the 1.5 release, users can use tensor.index({Slice(), 0, "...", mask}) to achieve the same result.
https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/
pytorch blogs
'Channels last' memory format for Computer Vision models (Experimental)

'Channels last' memory layout unlocks the ability to use performance-efficient convolution algorithms and hardware (NVIDIA's Tensor Cores, FBGEMM, QNNPACK). Additionally, it is designed to automatically propagate through the operators, which allows easy switching between memory layouts; a short sketch follows below.

Custom C++ Classes (Experimental)

This release adds a new API, torch::class_, for binding custom C++ classes into TorchScript and Python simultaneously. This API is almost identical in syntax to pybind11. It allows users to expose their C++ class and its methods to the TorchScript type system and runtime system such that they can instantiate and manipulate arbitrary C++ objects from TorchScript and Python. An example C++ binding is shown in the next snippet.
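Before turning to the C++ binding example, here is a minimal sketch of how a model can opt into the channels-last format. The model, shapes, and values are illustrative and not taken from the release notes.

```python
import torch
import torch.nn as nn

# Convert both the model weights and the input to channels-last (NHWC) layout.
model = nn.Sequential(nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU())
model = model.to(memory_format=torch.channels_last)

x = torch.randn(16, 3, 224, 224).contiguous(memory_format=torch.channels_last)

# The layout propagates through the operators automatically.
y = model(x)
print(y.is_contiguous(memory_format=torch.channels_last))  # expected: True
```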
https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/
pytorch blogs
```c++
template <class T>
struct MyStackClass : torch::CustomClassHolder {
  std::vector<T> stack_;
  MyStackClass(std::vector<T> init) : stack_(std::move(init)) {}

  void push(T x) {
    stack_.push_back(x);
  }
  T pop() {
    auto val = stack_.back();
    stack_.pop_back();
    return val;
  }
};

static auto testStack = torch::class_<MyStackClass<std::string>>("myclasses", "MyStackClass")
    .def(torch::init<std::vector<std::string>>())
    .def("push", &MyStackClass<std::string>::push)
    .def("pop", &MyStackClass<std::string>::pop)
    .def("size", [](const c10::intrusive_ptr<MyStackClass<std::string>>& self) {
      return self->stack_.size();
    });
```

Which exposes a class you can use in Python and TorchScript like so:

```python
@torch.jit.script
def do_stacks(s: torch.classes.myclasses.MyStackClass):
    s2 = torch.classes.myclasses.MyStackClass(["hi", "mom"])
    print(s2.pop())  # "mom"
    s2.push("foobar")
    return s2  # ["hi", "foobar"]
```
https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/
pytorch blogs
You can try it out in the tutorial here.

Distributed RPC framework APIs (Now Stable)

The Distributed RPC framework was launched as experimental in the 1.4 release, and it is now marked stable and no longer experimental. This work involved a number of enhancements and bug fixes to make the distributed RPC framework more reliable and robust overall, as well as a couple of new features, including profiling support, the use of TorchScript functions in RPC, and several ease-of-use improvements. Below is an overview of the various APIs within the framework:

RPC API

The RPC API allows users to specify functions to run and objects to be instantiated on remote nodes. These functions are transparently recorded so that gradients can backpropagate through remote nodes using Distributed Autograd.
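As a rough illustration of the RPC API described above, the snippet below sketches one of two processes; the worker names, world size, and launch mechanics are assumptions, not part of the release notes.

```python
import torch
import torch.distributed.rpc as rpc

# One of two processes; the peer would call rpc.init_rpc("worker1", rank=1, world_size=2).
rpc.init_rpc("worker0", rank=0, world_size=2)

# Run torch.add on the remote worker and block on the result.
result = rpc.rpc_sync("worker1", torch.add, args=(torch.ones(2), torch.ones(2)))

# Create a remote reference (RRef) to an object living on the remote worker.
rref = rpc.remote("worker1", torch.zeros, args=(2, 2))
print(result, rref.to_here())

rpc.shutdown()
```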
https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/
pytorch blogs
Distributed Autograd

Distributed Autograd connects the autograd graph across several nodes and allows gradients to flow through during the backward pass. Gradients are accumulated into a context (as opposed to the .grad field as with Autograd), and users must run their model's forward pass under a with dist_autograd.context() manager in order to ensure that all RPC communication is recorded properly. Currently, only FAST mode is implemented (see here for the difference between FAST and SMART modes).
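A minimal sketch of the context-manager pattern described above; the worker name and tensors are illustrative.

```python
import torch
import torch.distributed.autograd as dist_autograd
import torch.distributed.rpc as rpc

# Assumes rpc.init_rpc("worker0", rank=0, world_size=2) has already run
# and that a peer process registered itself as "worker1".
with dist_autograd.context() as context_id:
    t1 = torch.rand((3, 3), requires_grad=True)
    t2 = rpc.rpc_sync("worker1", torch.add, args=(t1, t1))
    loss = t2.sum()

    # Gradients are accumulated into this context, not into .grad fields.
    dist_autograd.backward(context_id, [loss])
    grads = dist_autograd.get_gradients(context_id)
    print(grads[t1])
```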
https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/
pytorch blogs
Distributed Optimizer

The distributed optimizer creates RRefs to optimizers on each worker with parameters that require gradients, and then uses the RPC API to run the optimizer remotely. The user must collect all remote parameters and wrap them in an RRef, as this is required input to the distributed optimizer. The user must also specify the distributed autograd context_id so that the optimizer knows in which context to look for gradients.

Learn more about the distributed RPC framework APIs here.

New High level autograd API (Experimental)

PyTorch 1.5 brings new functions including jacobian, hessian, jvp, vjp, hvp and vhp to the torch.autograd.functional submodule. This feature builds on the current API and allows the user to easily compute these quantities. A detailed design discussion can be found on GitHub here.
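The new functional autograd API can be exercised on a tiny example like the following sketch; the function f is made up for illustration.

```python
import torch
from torch.autograd.functional import jacobian, hessian

def f(x):
    return (x ** 3).sum()

x = torch.tensor([1.0, 2.0])
print(jacobian(f, x))  # 3 * x**2    -> tensor([ 3., 12.])
print(hessian(f, x))   # diag(6 * x) -> tensor([[ 6.,  0.], [ 0., 12.]])
```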
https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/
pytorch blogs
Python 2 no longer supported

Starting with PyTorch 1.5.0, we will no longer support Python 2, specifically version 2.7. Going forward, support for Python will be limited to Python 3, specifically Python 3.5, 3.6, 3.7 and 3.8 (first enabled in PyTorch 1.4.0).

We'd like to thank the entire PyTorch team and the community for all their contributions to this work.

Cheers!

Team PyTorch
https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/
pytorch blogs
layout: blog_detail
title: 'OpenMined and PyTorch partner to launch fellowship funding for privacy-preserving ML community'
author: Andrew Trask (OpenMined/U.Oxford), Shubho Sengupta, Laurens van der Maaten, Joe Spisak
excerpt: Many applications of machine learning (ML) pose a range of security and privacy challenges.

Many applications of machine learning (ML) pose a range of security and privacy challenges. In particular, users may not be willing or allowed to share their data, which prevents them from taking full advantage of ML platforms like PyTorch. To take the field of privacy-preserving ML (PPML) forward, OpenMined and PyTorch are announcing plans to jointly develop a combined platform to accelerate PPML research, as well as new funding for fellowships.
https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/
pytorch blogs
There are many techniques attempting to solve the problem of privacy in ML, each at various levels of maturity. These include (1) homomorphic encryption, (2) secure multi-party computation, (3) trusted execution environments, (4) on-device computation, (5) federated learning with secure aggregation, and (6) differential privacy. Additionally, a number of open source projects implementing these techniques were created with the goal of enabling research at the intersection of privacy, security, and ML. Among them, PySyft and CrypTen have taken an “ML-first” approach by presenting an API that is familiar to the ML community, while masking the complexities of privacy and security protocols. We are excited to announce that these two projects are now collaborating closely to build a mature PPML ecosystem around PyTorch.
https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/
pytorch blogs
Additionally, to bolster this ecosystem and take the field of privacy-preserving ML forward, we are also calling for contributions and supporting research efforts on this combined platform by funding the OpenMined community and the researchers who contribute, build proofs of concept, and want to be on the cutting edge of how privacy-preserving technology is applied. We will provide funding through the RAAIS Foundation, a non-profit organization with a mission to advance education and research in artificial intelligence for the common good. We encourage interested parties to apply to one or more of the fellowships listed below.

Tools Powering the Future of Privacy-Preserving ML
https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/
pytorch blogs
The next generation of privacy-preserving open source tools enable ML researchers to easily experiment with ML models using secure computing techniques without needing to be cryptography experts. By integrating with PyTorch, PySyft and CrypTen offer familiar environments for ML developers to research and apply these techniques as part of their work. PySyft is a Python library for secure and private ML developed by the OpenMined community. It is a flexible, easy-to-use library that makes secure computation techniques like multi-party computation (MPC) and privacy-preserving techniques like differential privacy accessible to the ML community. It prioritizes ease of use and focuses on integrating these techniques into end-user use cases like federated learning with mobile phones and other edge devices, encrypted ML as a service, and privacy-preserving data science.
https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/
pytorch blogs
CrypTen is a framework built on PyTorch that enables private and secure ML for the PyTorch community. It is the first step along the journey towards a privacy-preserving mode in PyTorch that will make secure computing techniques accessible beyond cryptography researchers. It currently implements secure multi-party computation, with the goal of offering other secure computing backends in the near future. Other benefits to ML researchers include:

- It is ML-first and presents secure computing techniques via a CrypTensor object that looks and feels exactly like a PyTorch Tensor (a small sketch follows below). This allows the user to use automatic differentiation and neural network modules akin to those in PyTorch.
- The framework focuses on scalability and performance and is built with real-world challenges in mind.
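A rough sketch of the CrypTensor idea, assuming the crypten package is installed; the exact API surface may differ between versions.

```python
import torch
import crypten

crypten.init()

# Encrypt (secret-share) a regular tensor.
x = crypten.cryptensor(torch.tensor([1.0, 2.0, 3.0]))

# Arithmetic looks and feels like plain PyTorch.
y = (x + 2.0).mean()

# Decrypt only when the plaintext result is actually needed.
print(y.get_plain_text())  # tensor(4.)
```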
https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/
pytorch blogs
The focus areas for CrypTen and PySyft are naturally aligned and complement each other. The former focuses on building support for various secure and privacy preserving techniques on PyTorch through an encrypted tensor abstraction, while the latter focuses on end user use cases like deployment on edge devices and a user friendly data science platform. Working together will enable PySyft to use CrypTen as a backend for encrypted tensors. This can lead to an increase in performance for PySyft and the adoption of CrypTen as a runtime by PySyft’s userbase. In addition to this, PyTorch is also adding cryptography friendly features such as support for cryptographically secure random number generation. Over the long run, this allows each library to focus exclusively on its core competencies while enjoying the benefits of the synergistic relationship. New Funding for OpenMined Contributors
https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/
pytorch blogs
New Funding for OpenMined Contributors We are especially excited to announce that the PyTorch team has invested $250,000 to support OpenMined in furthering the development and proliferation of privacy-preserving ML. This gift will be facilitated via the RAAIS Foundation and will be available immediately to support paid fellowship grants for the OpenMined community. How to get involved Thanks to the support from the PyTorch team, OpenMined is able to offer three different opportunities for you to participate in the project’s development. Each of these fellowships furthers our shared mission to lower the barrier-to-entry for privacy-preserving ML and to create a more privacy-preserving world. Core PySyft CrypTen Integration Fellowships
https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/
pytorch blogs
Core PySyft CrypTen Integration Fellowships During these fellowships, we will integrate CrypTen as a supported backend for encrypted computation in PySyft. This will allow for the high-performance, secure multi-party computation capabilities of CrypTen to be used alongside other important tools in PySyft such as differential privacy and federated learning. For more information on the roadmap and how to apply for a paid fellowship, check out the project’s call for contributors. Federated Learning on Mobile, Web, and IoT Devices
https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/
pytorch blogs
During these fellowships, we will be extending PyTorch with the ability to perform federated learning across mobile, web, and IoT devices. To this end, a PyTorch front-end will be able to coordinate across federated learning backends that run in Javascript, Kotlin, Swift, and Python. Furthermore, we will also extend PySyft with the ability to coordinate these backends using peer-to-peer connections, providing low latency and the ability to run secure aggregation as a part of the protocol. For more information on the roadmap and how to apply for a paid fellowship, check out the project’s call for contributors. Development Challenges
https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/
pytorch blogs
Development Challenges Over the coming months, we will issue regular open competitions for increasing the performance and security of the PySyft and PyGrid codebases. For performance-related challenges, contestants will compete (for a cash prize) to make a specific PySyft demo (such as federated learning) as fast as possible. For security-related challenges, contestants will compete to hack into a PyGrid server. The first to demonstrate their ability will win the cash bounty! For more information on the challenges and to sign up to receive emails when each challenge is opened, sign up here. To apply, select one of the above projects and identify a role that matches your strengths! Cheers, Andrew, Laurens, Joe, and Shubho
https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/
pytorch blogs
layout: blog_detail
title: 'How Computational Graphs are Constructed in PyTorch'
author: Preferred Networks
featured-img: 'assets/images/augmented_computational_graph.png'

In the previous post we went over the theoretical foundations of automatic differentiation and reviewed the implementation in PyTorch. In this post, we will show the parts of PyTorch involved in creating the graph and executing it. In order to understand the following contents, please read @ezyang's wonderful blog post about PyTorch internals.

Autograd components

First of all, let's look at where the different components of autograd live:
https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/
pytorch blogs
tools/autograd: Here we can find the definition of the derivatives as we saw in the previous post (derivatives.yaml), several Python scripts, and a folder called templates. These scripts and the templates are used at build time to generate the C++ code for the derivatives as specified in the yaml file. Also, the scripts here generate wrappers for the regular ATen functions so that the computational graph can be constructed.
https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/
pytorch blogs
torch/autograd: This folder is where the autograd components that can be used directly from Python are located. In function.py we find the actual definition of torch.autograd.Function, a class used by users to write their own differentiable functions in Python as per the documentation. functional.py holds components for functionally computing the Jacobian-vector product, Hessian, and other gradient-related computations of a given function. The rest of the files have additional components such as gradient checkers, anomaly detection, and the autograd profiler. torch/csrc/autograd: This is where the graph creation and execution-related code lives.
https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/
pytorch blogs
All this code is written in C++, since it is a critical part that is required to be extremely performant. Here we have several files that implement the engine, metadata storage, and all the needed components. Alongside this, we have several files whose names start with python_, and their main responsibility is to allow Python objects to be used in the autograd engine.

Graph Creation

Previously, we described the creation of a computational graph. Now, we will see how PyTorch creates these graphs with references to the actual codebase.

Figure 1: Example of an augmented computational graph

It all starts in our Python code, when we request that a tensor require the gradient.

```py
>>> x = torch.tensor([0.5, 0.75], requires_grad=True)
```
https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/
pytorch blogs
When the `requires_grad` flag is set during tensor creation, c10 will [allocate](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/c10/core/TensorImpl.cpp#L382-L406) an `AutogradMeta` object that is used to hold the graph information.

```c++
void TensorImpl::set_requires_grad(bool requires_grad) {
  ...
  if (!autograd_meta_)
    autograd_meta_ = impl::GetAutogradMetaFactory()->make();
  autograd_meta_->set_requires_grad(requires_grad, this);
}
```

The AutogradMeta object is defined in torch/csrc/autograd/variable.h as follows:

```c++
struct TORCH_API AutogradMeta : public c10::AutogradMetaInterface {
  std::string name_;

  Variable grad_;
  std::shared_ptr<Node> grad_fn_;
  std::weak_ptr<Node> grad_accumulator_;
  // other fields and methods
  ...
};
```
https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/
pytorch blogs
The most important fields in this structure are the computed gradient in `grad_` and a pointer to the function `grad_fn` that will be called by the engine to produce the actual gradient. Also, there is a gradient accumulator object that is used to add together all the different gradients where this tensor is involved, as we will see in the graph execution.

### Graphs, Nodes and Edges.

Now, when we call a differentiable function that takes this tensor as an argument, the associated metadata will be populated. Let's suppose that we call a regular torch function that is implemented in ATen. Let it be the multiplication as in our previous blog post example. The resulting tensor has a field called `grad_fn` that is essentially a pointer to the function that will be used to compute the gradient of that operation.

```py
>>> x = torch.tensor([0.5, 0.75], requires_grad=True)
>>> v = x[0] * x[1]
>>> v
tensor(0.3750, grad_fn=<MulBackward0>)
```
https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/
pytorch blogs
Here we see that the tensor's grad_fn has a MulBackward0 value. This function is the same one that was declared in the derivatives.yaml file, and its C++ code was generated automatically by the scripts in tools/autograd. Its auto-generated source code can be seen in torch/csrc/autograd/generated/Functions.cpp.

```c++
variable_list MulBackward0::apply(variable_list&& grads) {
  std::lock_guard<std::mutex> lock(mutex_);

  IndexRangeGenerator gen;
  auto self_ix = gen.range(1);
  auto other_ix = gen.range(1);
  variable_list grad_inputs(gen.size());
  auto& grad = grads[0];
  auto self = self_.unpack();
  auto other = other_.unpack();
  bool any_grad_defined = any_variable_defined(grads);
  if (should_compute_output({ other_ix })) {
    auto grad_result = any_grad_defined ? (mul_tensor_backward(grad, self, other_scalar_type)) : Tensor();
    copy_range(grad_inputs, other_ix, grad_result);
  }
  if (should_compute_output({ self_ix })) {
    auto grad_result = any_grad_defined ? (mul_tensor_backward(grad, other, self_scalar_type)) : Tensor();
    copy_range(grad_inputs, self_ix, grad_result);
  }
  return grad_inputs;
}
```
https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/
pytorch blogs
The grad_fn objects inherit from the TraceableFunction class, a descendant of Node with just a property set to enable tracing for debugging and optimization purposes. A graph by definition has nodes and edges, so these functions are indeed the nodes of the computational graph, linked together by using Edge objects to enable the graph traversal later on.
https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/
pytorch blogs
The Node definition can be found in the torch/csrc/autograd/function.h file.

```c++
struct TORCH_API Node : std::enable_shared_from_this<Node> {
  ...
  /// Evaluates the function on the given inputs and returns the result of the
  /// function call.
  variable_list operator()(variable_list&& inputs) {
  ...
  }

protected:
  /// Performs the `Node`'s actual operation.
  virtual variable_list apply(variable_list&& inputs) = 0;
  …
  edge_list next_edges_;
```

Essentially we see that it has an override of the operator () that performs the call to the actual function, and a pure virtual function called apply. The automatically generated functions override this apply method, as we saw in the MulBackward0 example above. Finally, the node also has a list of edges to enable graph connectivity.
https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/
pytorch blogs
The Edge object is used to link Nodes together and its implementation is straightforward.

```c++
struct Edge {
  ...
  /// The function this `Edge` points to.
  std::shared_ptr<Node> function;

  /// The identifier of a particular input to the function.
  uint32_t input_nr;
};
```

It only requires a function pointer (the actual grad_fn objects that the edges link together) and an input number that acts as an id for the edge.

Linking nodes together

When we invoke the product operation of two tensors, we enter into the realm of autogenerated code. All the scripts that we saw in tools/autograd fill a series of templates that wrap the differentiable functions in ATen. These functions have code to construct the backward graph during the forward pass.
https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/
pytorch blogs
The gen_variable_type.py script is in charge of writing all this wrapping code. This script is called from tools/autograd/gen_autograd.py during the PyTorch build process, and it will output the automatically generated function wrappers to torch/csrc/autograd/generated/.

Let's take a look at what the generated tensor multiplication function looks like. The code has been simplified, but it can be found in the torch/csrc/autograd/generated/VariableType_4.cpp file when compiling PyTorch from source.

```c++
at::Tensor mul_Tensor(c10::DispatchKeySet ks, const at::Tensor & self, const at::Tensor & other) {
  ...
  auto _any_requires_grad = compute_requires_grad( self, other );
  std::shared_ptr<MulBackward0> grad_fn;
  if (_any_requires_grad) {
    // Creates the link to the actual grad_fn and links the graph for backward traversal
    grad_fn = std::shared_ptr<MulBackward0>(new MulBackward0(), deleteNode);
    grad_fn->set_next_edges(collect_next_edges( self, other ));
    ...
  }
  …
  // Does the actual function call to ATen
  auto _tmp = ([&]() {
    at::AutoDispatchBelowADInplaceOrView guard;
    return at::redispatch::mul(ks & c10::after_autograd_keyset, self, other_);
  })();

  auto result = std::move(_tmp);
  if (grad_fn) {
      // Connects the result to the graph
      set_history(flatten_tensor_args( result ), grad_fn);
  }
  ...
  return result;
}
```
https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/
pytorch blogs
Let's walk through the most important lines of this code. First of all, the grad_fn object is created with grad_fn = std::shared_ptr<MulBackward0>(new MulBackward0(), deleteNode);.
https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/
pytorch blogs
After the grad_fn object is created, the edges used to link the nodes together are created by using the grad_fn->set_next_edges(collect_next_edges( self, other )); calls.

```c++
struct MakeNextFunctionList : IterArgs<MakeNextFunctionList> {
  edge_list next_edges;
  using IterArgs<MakeNextFunctionList>::operator();
  void operator()(const Variable& variable) {
    if (variable.defined()) {
      next_edges.push_back(impl::gradient_edge(variable));
    } else {
      next_edges.emplace_back();
    }
  }
  void operator()(const c10::optional<Variable>& variable) {
    if (variable.has_value() && variable->defined()) {
      next_edges.push_back(impl::gradient_edge(*variable));
    } else {
      next_edges.emplace_back();
    }
  }
};

template <typename... Variables>
edge_list collect_next_edges(Variables&&... variables) {
  detail::MakeNextFunctionList make;
  make.apply(std::forward<Variables>(variables)...);
  return std::move(make.next_edges);
}
```
https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/
pytorch blogs
Given an input variable (it's just a regular tensor), collect_next_edges will create an Edge object by calling impl::gradient_edge.

```c++
Edge gradient_edge(const Variable& self) {
  // If grad_fn is null (as is the case for a leaf node), we instead
  // interpret the gradient function to be a gradient accumulator, which will
  // accumulate its inputs into the grad property of the variable. These
  // nodes get suppressed in some situations, see "suppress gradient
  // accumulation" below. Note that only variables which have `requires_grad = True`
  // can have gradient accumulators.
  if (const auto& gradient = self.grad_fn()) {
    return Edge(gradient, self.output_nr());
  } else {
    return Edge(grad_accumulator(self), 0);
  }
}
```
https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/
pytorch blogs
To understand how edges work, let's assume that an earlier executed function produced two output tensors, both with their grad_fn set; each tensor also has an output_nr property with the order in which it was returned. When creating the edges for the current grad_fn, an Edge object per input variable will be created. The edges will point to the variable's grad_fn and will also track the output_nr to establish ids used when traversing the graph. In the case that the input variables are "leaf", i.e. they were not produced by any differentiable function, they don't have a grad_fn attribute set. A special function called a gradient accumulator is set by default, as seen in the above code snippet.
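The leaf/gradient-accumulator behaviour described above can be observed directly from Python; a small sketch:

```python
import torch

x = torch.tensor([0.5, 0.75], requires_grad=True)  # leaf: created by the user
v = x[0] * x[1]                                     # interior node of the graph

print(x.grad_fn)  # None -> a gradient accumulator is attached instead
print(v.grad_fn)  # <MulBackward0 ...>

v.backward()
print(x.grad)     # gradients accumulated into x.grad: tensor([0.7500, 0.5000])
```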
https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/
pytorch blogs
After the edges are created, the grad_fn graph Node object that is being currently created will hold them using the set_next_edges function. This is what connects grad_fns together, producing the computational graph.

```c++
void set_next_edges(edge_list&& next_edges) {
  next_edges_ = std::move(next_edges);
  for(const auto& next_edge : next_edges_) {
    update_topological_nr(next_edge);
  }
}
```

Now, the forward pass of the function will execute, and after the execution set_history will connect the output tensors to the grad_fn Node.

```c++
inline void set_history(
    at::Tensor& variable,
    const std::shared_ptr<Node>& grad_fn) {
  AT_ASSERT(grad_fn);
  if (variable.defined()) {
    // If the codegen triggers this, you most likely want to add your newly added function
    // to the DONT_REQUIRE_DERIVATIVE list in tools/autograd/gen_variable_type.py
    TORCH_INTERNAL_ASSERT(isDifferentiableType(variable.scalar_type()));
    auto output_nr = grad_fn->add_input_metadata(variable);
    impl::set_gradient_edge(variable, {grad_fn, output_nr});
  } else {
    grad_fn->add_input_metadata(Node::undefined_input());
  }
}
```
https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/
pytorch blogs
set_history calls set_gradient_edge, which just copies the grad_fn and the output_nr to the AutogradMeta object that the tensor has.

```c++
void set_gradient_edge(const Variable& self, Edge edge) {
  auto* meta = materialize_autograd_meta(self);
  meta->grad_fn_ = std::move(edge.function);
  meta->output_nr_ = edge.input_nr;
  // For views, make sure this new grad_fn_ is not overwritten unless it is necessary
  // in the VariableHooks::grad_fn below.
  // This logic is only relevant for custom autograd Functions for which multiple
  // operations can happen on a given Tensor before its gradient edge is set when
  // exiting the custom Function.
  auto diff_view_meta = get_view_autograd_meta(self);
  if (diff_view_meta && diff_view_meta->has_bw_view()) {
    diff_view_meta->set_attr_version(self._version());
  }
}
```
https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/
pytorch blogs
This tensor will now be the input to another function, and the above steps will all be repeated. Check the animation below to see how the graph is created.

Figure 2: Animation that shows the graph creation

Registering Python Functions in the graph

We have seen how autograd creates the graph for the functions included in ATen. However, when we define our differentiable functions in Python, they are also included in the graph!
https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/
pytorch blogs
An autograd Python-defined function looks like the following:

```python
class Exp(torch.autograd.Function):
    @staticmethod
    def forward(ctx, i):
        result = i.exp()
        ctx.save_for_backward(result)
        return result

    @staticmethod
    def backward(ctx, grad_output):
        result, = ctx.saved_tensors
        return grad_output * result

# Call the function
Exp.apply(torch.tensor(0.5, requires_grad=True))
# Outputs: tensor(1.6487, grad_fn=<ExpBackward>)
```

In the above snippet autograd detected our Python function when creating the graph. All of this is possible thanks to the Function class. Let's take a look at what happens when we call apply.
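Before diving into the C++ side, a quick sanity check (a sketch reusing the Exp class above) shows that the custom Function is indeed recorded in the graph:

```python
import torch

x = torch.tensor(0.5, requires_grad=True)
y = Exp.apply(x)   # forward runs and a backward node is attached

print(y.grad_fn)   # the backward node created for our custom Function

y.backward()
print(x.grad)      # d/dx exp(x) = exp(x) -> tensor(1.6487)
```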
https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/
pytorch blogs
apply is defined in the torch._C._FunctionBase class, but this class is not present in the Python source. _FunctionBase is defined in C++ by using the Python C API to hook C functions together into a single Python class. We are looking for a function named THPFunction_apply.

```c++
PyObject *THPFunction_apply(PyObject *cls, PyObject *inputs)
{
  // Generates the graph node
  THPObjectPtr backward_cls(PyObject_GetAttrString(cls, "_backward_cls"));
  if (!backward_cls) return nullptr;
  THPObjectPtr ctx_obj(PyObject_CallFunctionObjArgs(backward_cls, nullptr));
  if (!ctx_obj) return nullptr;
  THPFunction* ctx = (THPFunction*)ctx_obj.get();

  auto cdata = std::shared_ptr<PyNode>(new PyNode(std::move(ctx_obj)), deleteNode);
```
https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/
pytorch blogs
```c++
  ctx->cdata = cdata;

  // Prepare inputs and allocate context (grad fn)
  // Unpack inputs will collect the edges
  auto info_pair = unpack_input(inputs);
  UnpackedInput& unpacked_input = info_pair.first;
  InputFlags& input_info = info_pair.second;

  // Initialize backward function (and ctx)
  bool is_executable = input_info.is_executable;
  cdata->set_next_edges(std::move(input_info.next_edges));
  ctx->needs_input_grad = input_info.needs_input_grad.release();
  ctx->is_variable_input = std::move(input_info.is_variable_input);

  // Prepend ctx to input_tuple, in preparation for static method call
  auto num_args = PyTuple_GET_SIZE(inputs);
  THPObjectPtr ctx_input_tuple(PyTuple_New(num_args + 1));
  if (!ctx_input_tuple) return nullptr;
  Py_INCREF(ctx);
  PyTuple_SET_ITEM(ctx_input_tuple.get(), 0, (PyObject*)ctx);
  for (int i = 0; i < num_args; ++i) {
    PyObject *arg = PyTuple_GET_ITEM(unpacked_input.input_tuple.get(), i);
    Py_INCREF(arg);
    PyTuple_SET_ITEM(ctx_input_tuple.get(), i + 1, arg);
  }
```
https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/
pytorch blogs
```c++
  // Call forward
  THPObjectPtr tensor_outputs;
  {
    AutoGradMode grad_mode(false);
    THPObjectPtr forward_fn(PyObject_GetAttrString(cls, "forward"));
    if (!forward_fn) return nullptr;
    tensor_outputs = PyObject_CallObject(forward_fn, ctx_input_tuple);
    if (!tensor_outputs) return nullptr;
  }

  // Here is where the outputs get the tensors tracked
  return process_outputs(cls, cdata, ctx, unpacked_input, inputs, std::move(tensor_outputs),
                         is_executable, node);
  END_HANDLE_TH_ERRORS
}
```

Although this code is hard to read at first due to all the Python API calls, it essentially does the same thing as the auto-generated forward functions that we saw for ATen:

1. Create a grad_fn object.
2. Collect the edges to link the current grad_fn with the input tensors' ones.
3. Execute the function's forward.
4. Assign the created grad_fn to the output tensors' metadata.
https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/
pytorch blogs
The grad_fn object is created in:

```c++
  // Generates the graph node
  THPObjectPtr backward_cls(PyObject_GetAttrString(cls, "_backward_cls"));
  if (!backward_cls) return nullptr;
  THPObjectPtr ctx_obj(PyObject_CallFunctionObjArgs(backward_cls, nullptr));
  if (!ctx_obj) return nullptr;
  THPFunction* ctx = (THPFunction*)ctx_obj.get();

  auto cdata = std::shared_ptr<PyNode>(new PyNode(std::move(ctx_obj)), deleteNode);
  ctx->cdata = cdata;
```
https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/
pytorch blogs
Basically, it asks the Python API to get a pointer to the Python object that can execute the user-written function. Then it wraps it into a PyNode object, which is a specialized Node object that calls the Python interpreter with the provided Python function when apply is executed during the forward pass. Note that in the code cdata is the actual Node object that is part of the graph. ctx is the object that is passed to the Python forward/backward functions and is used to store autograd-related information by both the user's function and PyTorch.

As in the regular C++ functions, we also call collect_next_edges to track the inputs' grad_fn objects, but this is done in unpack_input:
https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/
pytorch blogs
```c++
template<bool enforce_variables>
std::pair<UnpackedInput, InputFlags> unpack_input(PyObject *args) {
  ...
  flags.next_edges = (flags.is_executable ? collect_next_edges(unpacked.input_vars) : edge_list());
  return std::make_pair(std::move(unpacked), std::move(flags));
}
```

After this, the edges are assigned to the grad_fn by just doing cdata->set_next_edges(std::move(input_info.next_edges)); and the forward function is called through the Python interpreter C API.

Once the output tensors are returned from the forward pass, they are processed and converted to variables inside the process_outputs function.

```c++
PyObject* process_outputs(PyObject *op_obj, const std::shared_ptr<PyNode>& cdata,
                          THPFunction* grad_fn, const UnpackedInput& unpacked,
                          PyObject *inputs, THPObjectPtr&& raw_output, bool is_executable,
                          torch::jit::Node* node) {
  ...
  _wrap_outputs(cdata, grad_fn, unpacked.input_vars, raw_output, outputs, is_executable);
  _trace_post_record(node, op_obj, unpacked.input_vars, outputs, is_inplace, unpack_output);
  if (is_executable) {
    _save_variables(cdata, grad_fn);
  }
  ...
  return outputs.release();
}
```
https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/
pytorch blogs
Here, _wrap_outputs is in charge of setting the forward outputs' grad_fn to the newly created one. For this, it calls another _wrap_outputs function defined in a different file, so the process here gets a little confusing.

```c++
static void _wrap_outputs(const std::shared_ptr<PyNode>& cdata, THPFunction *self,
                          const variable_list &input_vars, PyObject *raw_output,
                          PyObject *outputs, bool is_executable)
{
  auto cdata_if_executable = is_executable ? cdata : nullptr;
  ...
  // Wrap only the tensor outputs.
  // This calls csrc/autograd/custom_function.cpp
  auto wrapped_outputs = _wrap_outputs(input_vars, non_differentiable, dirty_inputs, raw_output_vars, cdata_if_executable);
  ...
}
```
https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/
pytorch blogs
The called _wrap_outputs is the one in charge of setting the autograd metadata in the output tensors:

```c++
std::vector<c10::optional<Variable>> _wrap_outputs(const variable_list &input_vars,
  const std::unordered_set<at::TensorImpl*> &non_differentiable,
  const std::unordered_set<at::TensorImpl*> &dirty_inputs,
  const at::ArrayRef<c10::optional<Variable>> raw_outputs,
  const std::shared_ptr<Node> &cdata) {

  std::unordered_set<at::TensorImpl*> inputs;
  …
  // Sets the grad_fn and output_nr of an output Variable.
  auto set_history = [&](Variable& var, uint32_t output_nr, bool is_input, bool is_modified,
                         bool is_differentiable) {
    // Lots of checks
    if (!is_differentiable) {
      ...
    } else if (is_input) {
      // An input has been returned, but it wasn't modified. Return it as a view
      // so that we can attach a new grad_fn to the Variable.
      // Run in no_grad mode to mimic the behavior of the forward.
      {
        AutoGradMode grad_mode(false);
        var = var.view_as(var);
      }
      impl::set_gradient_edge(var, {cdata, output_nr});
    } else if (cdata) {
      impl::set_gradient_edge(var, {cdata, output_nr});
    }
  };
```
https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/
pytorch blogs
And this is where set_gradient_edge is called, and this is how a user-written Python function gets included in the computational graph with its associated backward function!

Closing remarks

This blog post is intended to be a code overview of how PyTorch constructs the actual computational graphs that we discussed in the previous post. The next entry will deal with how the autograd engine executes these graphs.
https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/
pytorch blogs
layout: blog_detail title: "Efficient Multi-Objective Neural Architecture Search with Ax" author: David Eriksson, Max Balandat featured-img: "/assets/images/MOO-NAS-blog-img2-pareto_frontier_plot.png" tl;dr Multi-Objective Optimization in Ax enables efficient exploration of tradeoffs (e.g. between model performance and model size or latency) in Neural Architecture Search. This method has been successfully applied at Meta for a variety of products such as On-Device AI. In this post, we provide an end-to-end tutorial that allows you to try it out yourself. Introduction
https://pytorch.org/blog/effective-multi-objective-nueral-architecture/
pytorch blogs
Introduction Neural networks continue to grow in both size and complexity. Developing state-of-the-art architectures is often a cumbersome and time-consuming process that requires both domain expertise and large engineering efforts. In an attempt to overcome these challenges, several Neural Architecture Search (NAS) approaches have been proposed to automatically design well-performing architectures without requiring a human in-the-loop.
https://pytorch.org/blog/effective-multi-objective-nueral-architecture/
pytorch blogs
Despite being very sample-inefficient, naïve approaches like random search and grid search are still popular for both hyperparameter optimization and NAS (a study conducted at NeurIPS 2019 and ICLR 2020 found that 80% of NeurIPS papers and 88% of ICLR papers tuned their ML model hyperparameters using manual tuning, random search, or grid search). But as models are often time-consuming to train and may require large amounts of computational resources, minimizing the number of configurations that are evaluated is important.
https://pytorch.org/blog/effective-multi-objective-nueral-architecture/
pytorch blogs
Ax is a general tool for black-box optimization that allows users to explore large search spaces in a sample-efficient manner using state-of-the-art algorithms such as Bayesian Optimization. At Meta, Ax is used in a variety of domains, including hyperparameter tuning, NAS, identifying optimal product settings through large-scale A/B testing, infrastructure optimization, and designing cutting-edge AR/VR hardware.
https://pytorch.org/blog/effective-multi-objective-nueral-architecture/
pytorch blogs
In many NAS applications, there is a natural tradeoff between multiple metrics of interest. For instance, when deploying models on-device we may want to maximize model performance (e.g., accuracy), while simultaneously minimizing competing metrics such as power consumption, inference latency, or model size, in order to satisfy deployment constraints. In many cases, we have been able to reduce computational requirements or latency of predictions substantially by accepting a small degradation in model performance (in some cases we were able to both increase accuracy and reduce latency!). Principled methods for exploring such tradeoffs efficiently are key enablers of Sustainable AI.
https://pytorch.org/blog/effective-multi-objective-nueral-architecture/
pytorch blogs
At Meta, we have successfully used multi-objective Bayesian NAS in Ax to explore such tradeoffs. Our methodology is being used routinely for optimizing AR/VR on-device ML models. Beyond NAS applications, we have also developed MORBO which is a method for high-dimensional multi-objective optimization that can be used to optimize optical systems for augmented reality (AR). Fully automated Multi-Objective NAS with Ax Ax’s Scheduler allows running experiments asynchronously in a closed-loop fashion by continuously deploying trials to an external system, polling for results, leveraging the fetched data to generate more trials, and repeating the process until a stopping condition is met. No human intervention or oversight is required. Features of the Scheduler include:
https://pytorch.org/blog/effective-multi-objective-nueral-architecture/
pytorch blogs
- Customizability of parallelism, failure tolerance, and many other settings;
- A large selection of state-of-the-art optimization algorithms;
- Saving in-progress experiments (to a SQL DB or json) and resuming an experiment from storage;
- Easy extensibility to new backends for running trial evaluations remotely.

The following illustration from the Ax scheduler tutorial summarizes how the scheduler interacts with any external system used to run trial evaluations:

To run automated NAS with the Scheduler, the main things we need to do are:
https://pytorch.org/blog/effective-multi-objective-nueral-architecture/
pytorch blogs
1. Define a Runner, which is responsible for sending off a model with a particular architecture to be trained on a platform of our choice (like Kubernetes, or maybe just a Docker image on our local machine). In the tutorial below, we use TorchX for handling deployment of training jobs.
2. Define a Metric, which is responsible for fetching the objective metrics (such as accuracy, model size, latency) from the training job. In our tutorial, we use Tensorboard to log data, and so can use the Tensorboard metrics that come bundled with Ax.

Tutorial
https://pytorch.org/blog/effective-multi-objective-nueral-architecture/
pytorch blogs
Tutorial

In our tutorial we show how to use Ax to run multi-objective NAS for a simple neural network model on the popular MNIST dataset. While the underlying methodology can be used for more complicated models and larger datasets, we opt for a tutorial that is easily runnable end-to-end on a laptop in less than an hour. In our example, we will tune the widths of two hidden layers, the learning rate, the dropout probability, the batch size, and the number of training epochs. The goal is to trade off performance (accuracy on the validation set) and model size (the number of model parameters) using multi-objective Bayesian optimization; a schematic code sketch follows the library list below.

The tutorial makes use of the following PyTorch libraries:

- PyTorch Lightning (specifying the model and training loop)
- TorchX (for running training jobs remotely / asynchronously)
https://pytorch.org/blog/effective-multi-objective-nueral-architecture/
pytorch blogs
- BoTorch (the Bayesian optimization library that powers Ax's algorithms)

The complete runnable example is available as a PyTorch Tutorial.

Results

The final results from the NAS optimization performed in the tutorial can be seen in the tradeoff plot below. Here, each point corresponds to the result of a trial, with the color representing its iteration number, and the star indicating the reference point defined by the thresholds we imposed on the objectives. We see that our method was able to successfully explore the trade-offs between validation accuracy and number of parameters and found both large models with high validation accuracy as well as small models with lower validation accuracy. Depending on the performance requirements and model size constraints, the decision maker can now choose which model to use or analyze further.
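The tutorial itself drives the search through the Ax Scheduler together with TorchX; purely as an illustration of the multi-objective setup described above, here is a rough sketch using Ax's Service API. The parameter names, bounds, thresholds, and the train_and_eval helper are all made up for this sketch.

```python
from ax.service.ax_client import AxClient
from ax.service.utils.instantiation import ObjectiveProperties

ax_client = AxClient()
ax_client.create_experiment(
    name="mnist_nas_sketch",
    parameters=[
        {"name": "hidden1", "type": "range", "bounds": [16, 128], "value_type": "int"},
        {"name": "hidden2", "type": "range", "bounds": [16, 128], "value_type": "int"},
        {"name": "lr", "type": "range", "bounds": [1e-4, 1e-1], "log_scale": True},
        {"name": "dropout", "type": "range", "bounds": [0.0, 0.5]},
    ],
    objectives={
        "val_acc": ObjectiveProperties(minimize=False, threshold=0.90),
        "num_params": ObjectiveProperties(minimize=True, threshold=80_000),
    },
)

for _ in range(25):
    params, trial_index = ax_client.get_next_trial()
    val_acc, num_params = train_and_eval(params)  # hypothetical training/eval helper
    ax_client.complete_trial(
        trial_index=trial_index,
        raw_data={"val_acc": val_acc, "num_params": num_params},
    )
```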
https://pytorch.org/blog/effective-multi-objective-nueral-architecture/
pytorch blogs
Visualizations Ax provides a number of visualizations that make it possible to analyze and understand the results of an experiment. Here, we will focus on the performance of the Gaussian process models that model the unknown objectives, which are used to help us discover promising configurations faster. Ax makes it easy to better understand how accurate these models are and how they perform on unseen data via leave-one-out cross-validation. In the figures below, we see that the model fits look quite good - predictions are close to the actual outcomes, and predictive 95% confidence intervals cover the actual outcomes well. Additionally, we observe that the model size (num_params) metric is much easier to model than the validation accuracy (val_acc) metric.
https://pytorch.org/blog/effective-multi-objective-nueral-architecture/
pytorch blogs
Takeaways

We showed how to run a fully automated multi-objective Neural Architecture Search using Ax. Using the Ax Scheduler, we were able to run the optimization automatically in a fully asynchronous fashion - this can be done locally (as done in the tutorial) or by deploying trials remotely to a cluster (simply by changing the TorchX scheduler configuration). The state-of-the-art multi-objective Bayesian optimization algorithms available in Ax allowed us to efficiently explore the tradeoffs between validation accuracy and model size.

Advanced Functionality

Ax has a number of other advanced capabilities that we did not discuss in our tutorial. Among these are the following:

Early Stopping
https://pytorch.org/blog/effective-multi-objective-nueral-architecture/
pytorch blogs
Early Stopping When evaluating a new candidate configuration, partial learning curves are typically available while the NN training job is running. We can use the information contained in the partial curves to identify under-performing trials to stop early in order to free up computational resources for more promising candidates. While not demonstrated in the above tutorial, Ax supports early stopping out-of-the-box - see our early stopping tutorial for more details. High-dimensional search spaces
https://pytorch.org/blog/effective-multi-objective-nueral-architecture/
pytorch blogs
High-dimensional search spaces In our tutorial, we used Bayesian optimization with a standard Gaussian process in order to keep the runtime low. However, these models typically scale to only about 10-20 tunable parameters. Our new SAASBO method (paper, Ax tutorial, BoTorch tutorial) is very sample-efficient and enables tuning hundreds of parameters. SAASBO can easily be enabled by passing use_saasbo=True to choose_generation_strategy. Acknowledgements We thank the TorchX team (in particular Kiuk Chung and Tristan Rice) for their help with integrating TorchX with Ax, and the Adaptive Experimentation team @ Meta for their contributions to Ax and BoTorch. References
https://pytorch.org/blog/effective-multi-objective-nueral-architecture/
pytorch blogs
References D. Eriksson, P. Chuang, S. Daulton, M. Balandat. Optimizing model accuracy and latency using Bayesian multi-objective neural architecture search. Meta Research blog, July 2021.
https://pytorch.org/blog/effective-multi-objective-nueral-architecture/
pytorch blogs
layout: blog_detail
title: 'PyTorch Adds New Ecosystem Projects for Encrypted AI and Quantum Computing, Expands PyTorch Hub'
author: Team PyTorch

The PyTorch ecosystem includes projects, tools, models and libraries from a broad community of researchers in academia and industry, application developers, and ML engineers. The goal of this ecosystem is to support, accelerate, and aid in your exploration with PyTorch and help you push the state of the art, no matter what field you are exploring. Similarly, we are expanding the recently launched PyTorch Hub to further help you discover and reproduce the latest research.

In this post, we'll highlight some of the projects that have been added to the PyTorch ecosystem this year and provide some context on the criteria we use to evaluate community projects. We'll also provide an update on the fast-growing PyTorch Hub and share details on our upcoming PyTorch Summer Hackathon.
https://pytorch.org/blog/pytorch-ecosystem/
pytorch blogs
Recently added ecosystem projects

From private AI to quantum computing, we've seen the community continue to expand into new and interesting areas. The latest projects include:

- Advertorch: A Python toolbox for adversarial robustness research. The primary functionalities are implemented in PyTorch. Specifically, AdverTorch contains modules for generating adversarial perturbations and defending against adversarial examples, as well as scripts for adversarial training.
- botorch: A modular and easily extensible interface for composing Bayesian optimization primitives, including probabilistic models, acquisition functions, and optimizers.
- Skorch: A high-level library for PyTorch that provides full scikit-learn compatibility.
https://pytorch.org/blog/pytorch-ecosystem/
pytorch blogs
- PyTorch Geometric: A library for deep learning on irregular input data such as graphs, point clouds, and manifolds.
- PySyft: A Python library for encrypted, privacy preserving deep learning.
- PennyLane: A library for quantum ML, automatic differentiation, and optimization of hybrid quantum-classical computations.
- Flair: A very simple framework for state-of-the-art natural language processing (NLP).

What makes a great project?

When we review project submissions for the PyTorch ecosystem, we take into account a number of factors that we feel are important and that we would want in the projects we use ourselves. Some of these criteria include:
https://pytorch.org/blog/pytorch-ecosystem/
pytorch blogs
- Well-tested: Users should be confident that ecosystem projects will work well with PyTorch, and include support for CI to ensure that testing is occurring on a continuous basis and the project can run on the latest version of PyTorch.
- Clear utility: Users should understand where each project fits within the PyTorch ecosystem and the value it brings.
- Permissive licensing: Users must be able to utilize ecosystem projects without licensing concerns, e.g. BSD-3, Apache-2 and MIT licenses.
- Easy onboarding: Projects need to have support for binary installation options (pip/Conda), clear documentation and a rich set of tutorials (ideally built into Jupyter notebooks).
- Ongoing maintenance: Project authors need to be committed to supporting and maintaining their projects.
- Community: Projects should have (or be on track to building) an active, broad-based community.
https://pytorch.org/blog/pytorch-ecosystem/
pytorch blogs
If you would like to have your project included in the PyTorch ecosystem and featured on pytorch.org/ecosystem, please complete the form here. If you've previously submitted a project for consideration and haven't heard back, we promise to get back to you as soon as we can - we've received a lot of submissions! PyTorch Hub for reproducible research | New models
https://pytorch.org/blog/pytorch-ecosystem/
pytorch blogs
Since launching the PyTorch Hub in beta, we’ve received a lot of interest from the community including the contribution of many new models. Some of the latest include U-Net for Brain MRI contributed by researchers at Duke University, Single Shot Detection from NVIDIA and Transformer-XL from HuggingFace. We’ve seen organic integration of the PyTorch Hub by folks like paperswithcode, making it even easier for you to try out the state of the art in AI research. In addition, companies like Seldon provide production-level support for PyTorch Hub models on top of Kubernetes.
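Trying a Hub model is essentially a one-liner. The call below is illustrative (it uses the torchvision repo as a stand-in; other Hub entries follow the same pattern):

```python
import torch

# Download the model definition and pretrained weights from the Hub.
model = torch.hub.load('pytorch/vision', 'resnet18', pretrained=True)
model.eval()

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(x)
print(logits.shape)  # torch.Size([1, 1000])
```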
https://pytorch.org/blog/pytorch-ecosystem/
pytorch blogs
What are the benefits of contributing a model in the PyTorch Hub?

- Compatibility: PyTorch Hub models are prioritized first for testing by the TorchScript and Cloud TPU teams, and used as baselines for researchers across a number of fields.
- Visibility: Models in the Hub will be promoted on pytorch.org as well as on paperswithcode.
- Ease of testing and reproducibility: Each model comes with code, clear preprocessing requirements, and methods/dependencies to run. There is also tight integration with Google Colab, making it a true single click to get started.

PyTorch Hub contributions welcome!
https://pytorch.org/blog/pytorch-ecosystem/
pytorch blogs
PyTorch Hub contributions welcome!

We are actively looking to grow the PyTorch Hub and welcome contributions. You don't need to be an original paper author to contribute, and we'd love to see the number of domains and fields broaden. So what types of contributions are we looking for?

Artifacts of a published or an arXiv paper (or something of a similar nature that serves a different audience, such as ULMFit) that a large audience would need, AND

Reproduces the published results (or better)

Overall these models are aimed at researchers either trying to reproduce a baseline, or trying to build downstream research on top of the model (such as feature-extraction or fine-tuning), as well as researchers looking for a demo of the paper for subjective evaluation. Please keep this audience in mind when contributing.
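For contributors wondering what a submission looks like mechanically, the entry point is a hubconf.py file at the root of your GitHub repository: a dependencies list plus one callable per model you expose. The sketch below is a hypothetical example (the my_model entrypoint, toy architecture, and weight URL are placeholders, not a real release), following the general torch.hub conventions.

```python
# hubconf.py -- placed at the root of your GitHub repository.
# torch.hub reads this file to discover the entrypoints you expose.

dependencies = ["torch"]  # packages torch.hub will check are importable

import torch
import torch.nn as nn


def my_model(pretrained=False, **kwargs):
    """Hypothetical entrypoint returning a toy classifier.

    Users would call:
        torch.hub.load("your-org/your-repo", "my_model", pretrained=True)
    """
    model = nn.Sequential(
        nn.Flatten(),
        nn.Linear(784, 128),
        nn.ReLU(),
        nn.Linear(128, 10),
    )
    if pretrained:
        # Placeholder URL -- point this at your released checkpoint.
        state = torch.hub.load_state_dict_from_url(
            "https://example.com/my_model_weights.pth", progress=True
        )
        model.load_state_dict(state)
    return model
```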
https://pytorch.org/blog/pytorch-ecosystem/
pytorch blogs
If you are short on inspiration or would just like to find out what the SOTA is in any given field or domain, check out the Paperswithcode state-of-the-art gallery.

PyTorch Summer Hackathon

We'll be hosting the first PyTorch Summer Hackathon next month. We invite you to apply to participate in the in-person hackathon on August 8th to 9th at Facebook's Menlo Park campus. We'll be bringing the community together to work on innovative ML projects that can solve a broad range of complex challenges.
Applications will be reviewed and accepted on a rolling basis until spaces are filled. For those who cannot join this Hackathon in person, we'll be following up soon with other ways to participate.
Please visit this link to apply.
Thank you for being part of the PyTorch community!
-Team PyTorch
https://pytorch.org/blog/pytorch-ecosystem/
pytorch blogs
layout: blog_detail
title: 'PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models'
author: Chaoyang He, Shen Li, Mahdi Soltanolkotabi, and Salman Avestimehr
featured-img: 'assets/images/pipetransformer_overview.png'

In this blog post, we describe the first peer-reviewed research paper that explores accelerating the hybrid of PyTorch DDP (torch.nn.parallel.DistributedDataParallel) [1] and Pipeline (torch.distributed.pipeline) - PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models (Transformers such as BERT [2] and ViT [3]), published at ICML 2021.
https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/
pytorch blogs
PipeTransformer leverages automated elastic pipelining for efficient distributed training of Transformer models. In PipeTransformer, we designed an adaptive on-the-fly freeze algorithm that can identify and freeze some layers gradually during training and an elastic pipelining system that can dynamically allocate resources to train the remaining active layers. More specifically, PipeTransformer automatically excludes frozen layers from the pipeline, packs active layers into fewer GPUs, and forks more replicas to increase data-parallel width. We evaluate PipeTransformer using Vision Transformer (ViT) on ImageNet and BERT on SQuAD and GLUE datasets. Our results show that compared to the state-of-the-art baseline, PipeTransformer attains up to 2.83-fold speedup without losing accuracy. We also provide various performance analyses for a more comprehensive understanding of our algorithmic and system-wise design.
https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/
pytorch blogs
Next, we will introduce the background, motivation, our idea, design, and how we implement the algorithm and system with PyTorch Distributed APIs.

Paper: http://proceedings.mlr.press/v139/he21a.html
Source Code: https://DistML.ai
Slides: https://docs.google.com/presentation/d/1t6HWL33KIQo2as0nSHeBpXYtTBcy0nXCoLiKd0EashY/edit?usp=sharing

Introduction

Figure 1: the Parameter Number of Transformer Models Increases Dramatically.
https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/
pytorch blogs
Large Transformer models [4][5] have powered accuracy breakthroughs in both natural language processing and computer vision. GPT-3 [4] hit a new record high accuracy for nearly all NLP tasks. Vision Transformer (ViT) [3] also achieved 89% top-1 accuracy on ImageNet, outperforming state-of-the-art convolutional networks ResNet-152 and EfficientNet. To tackle the growth in model sizes, researchers have proposed various distributed training techniques, including parameter servers [6][7][8], pipeline parallelism [9][10][11][12], intra-layer parallelism [13][14][15], and zero redundancy data-parallel [16]. Existing distributed training solutions, however, only study scenarios where all model weights are required to be optimized throughout the training (i.e., computation and communication overhead remains relatively static over different iterations). Recent works on progressive training suggest that parameters in neural networks can be trained dynamically:
https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/
pytorch blogs
Freeze Training: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability. NeurIPS 2017

Efficient Training of BERT by Progressively Stacking. ICML 2019

Accelerating Training of Transformer-Based Language Models with Progressive Layer Dropping. NeurIPS 2020

On the Transformer Growth for Progressive BERT Training. NAACL 2021

Figure 2. Interpretable Freeze Training: DNNs converge bottom-up (Results on CIFAR10 using ResNet). Each pane shows layer-by-layer similarity using SVCCA [17][18]
https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/
pytorch blogs
For example, in freeze training [17][18], neural networks usually converge from the bottom-up (i.e., not all layers need to be trained all the way through training). Figure 2 shows an example of how weights gradually stabilize during training in this approach. This observation motivates us to utilize freeze training for distributed training of Transformer models to accelerate training by dynamically allocating resources to focus on a shrinking set of active layers. Such a layer freezing strategy is especially pertinent to pipeline parallelism, as excluding consecutive bottom layers from the pipeline can reduce computation, memory, and communication overhead. Figure 3. The process of PipeTransformer’s automated and elastic pipelining to accelerate distributed training of Transformer models
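To make the freeze-training idea concrete, the sketch below freezes the bottom layers of a model in plain PyTorch by turning off their gradients. This is a minimal illustration of the general idea, not PipeTransformer's actual freeze algorithm; the toy 12-block model and the choice of 4 frozen layers are made up for the example.

```python
import torch.nn as nn

def freeze_bottom_layers(layers: nn.ModuleList, num_frozen: int) -> None:
    """Disable gradients for the bottom `num_frozen` layers.

    Frozen layers no longer need gradients or optimizer state, which is
    what allows the active part of the pipeline to shrink.
    """
    for layer in list(layers)[:num_frozen]:
        for param in layer.parameters():
            param.requires_grad_(False)
        layer.eval()  # put frozen layers in eval mode (e.g., disables dropout)

# Toy 12-block "Transformer" used purely for illustration.
blocks = nn.ModuleList(
    [nn.TransformerEncoderLayer(d_model=64, nhead=4) for _ in range(12)]
)

# As training progresses, a freeze algorithm would gradually raise num_frozen.
freeze_bottom_layers(blocks, num_frozen=4)
trainable = [p for b in blocks for p in b.parameters() if p.requires_grad]
print(len(trainable))  # parameter tensors that still receive gradients
```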
https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/
pytorch blogs
We propose PipeTransformer, an elastic pipelining training acceleration framework that automatically reacts to frozen layers by dynamically transforming the scope of the pipelined model and the number of pipeline replicas. To the best of our knowledge, this is the first paper that studies layer freezing in the context of both pipeline and data-parallel training. Figure 3 demonstrates the benefits of such a combination. First, by excluding frozen layers from the pipeline, the same model can be packed into fewer GPUs, leading to both fewer cross-GPU communications and smaller pipeline bubbles. Second, after packing the model into fewer GPUs, the same cluster can accommodate more pipeline replicas, increasing the width of data parallelism. More importantly, the speedups acquired from these two benefits are multiplicative rather than additive, further accelerating the training.
https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/
pytorch blogs
The design of PipeTransformer faces four major challenges. First, the freeze algorithm must make on-the-fly and adaptive freezing decisions; however, existing work [17][18] only provides a posterior analysis tool. Second, the efficiency of pipeline re-partitioning results is influenced by multiple factors, including partition granularity, cross-partition activation size, and the chunking (the number of micro-batches) in mini-batches, which require reasoning and searching in a large solution space. Third, to dynamically introduce additional pipeline replicas, PipeTransformer must overcome the static nature of collective communications and avoid potentially complex cross-process messaging protocols when onboarding new processes (one pipeline is handled by one process). Finally, caching can save time for repeated forward propagation of frozen layers, but it must be shared between existing pipelines and newly added ones, as the system cannot afford to create and warm up a dedicated cache for each replica.
https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/
pytorch blogs
Figure 4: An Animation to Show the Dynamics of PipeTransformer
https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/
pytorch blogs
As shown in the animation (Figure 4), PipeTransformer is designed with four core building blocks to address the aforementioned challenges. First, we design a tunable and adaptive algorithm to generate signals that guide the selection of layers to freeze over different iterations (Freeze Algorithm). Once triggered by these signals, our elastic pipelining module (AutoPipe), then packs the remaining active layers into fewer GPUs by taking both activation sizes and variances of workloads across heterogeneous partitions (frozen layers and active layers) into account. It then splits a mini-batch into an optimal number of micro-batches based on prior profiling results for different pipeline lengths. Our next module, AutoDP, spawns additional pipeline replicas to occupy freed-up GPUs and maintains hierarchical communication process groups to attain dynamic membership for collective communications. Our final module, AutoCache, efficiently shares activations across existing and new data-parallel processes and automatically replaces stale caches during transitions.
https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/
pytorch blogs
Overall, PipeTransformer combines the Freeze Algorithm, AutoPipe, AutoDP, and AutoCache modules to provide a significant training speedup. We evaluate PipeTransformer using Vision Transformer (ViT) on ImageNet and BERT on GLUE and SQuAD datasets. Our results show that PipeTransformer attains up to 2.83-fold speedup without losing accuracy. We also provide various performance analyses for a more comprehensive understanding of our algorithmic and system-wise design. Finally, we have also developed open-source flexible APIs for PipeTransformer, which offer a clean separation among the freeze algorithm, model definitions, and training accelerations, allowing for transferability to other algorithms that require similar freezing strategies.
https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/
pytorch blogs
Overall Design

Suppose we aim to train a massive model in a distributed training system where the hybrid of pipelined model parallelism and data parallelism is used to target scenarios where either the memory of a single GPU device cannot hold the model, or if loaded, the batch size is small enough to avoid running out of memory. More specifically, we define our settings as follows:

Training task and model definition. We train Transformer models (e.g., Vision Transformer, BERT) on large-scale image or text datasets. The Transformer model has L layers, in which the i-th layer is composed of a forward computation function and a corresponding set of parameters.
https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/
pytorch blogs
Training infrastructure. Assume the training infrastructure contains a GPU cluster that has N GPU servers (i.e., nodes). Each node has I GPUs. Our cluster is homogeneous, meaning that each GPU and server has the same hardware configuration. Each GPU's memory capacity is M_GPU. Servers are connected by a high bandwidth network interface such as InfiniBand interconnect.
https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/
pytorch blogs
Pipeline parallelism. In each machine, we load a model into a pipeline which has K partitions (K also represents the pipeline length). The k-th partition consists of consecutive layers. We assume each partition is handled by a single GPU device. K ≤ I, meaning that we can build multiple pipelines for multiple model replicas in a single machine. We assume all GPU devices in a pipeline belong to the same machine. Our pipeline is a synchronous pipeline, which does not involve stale gradients, and the number of micro-batches is M. In the Linux OS, each pipeline is handled by a single process. We refer the reader to GPipe [10] for more details.
https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/
pytorch blogs
Data parallelism. DDP is a cross-machine distributed data-parallel process group within R parallel workers. Each worker is a pipeline replica (a single process). The r-th worker's index (ID) is rank r. For any two pipelines in DDP, they can belong to either the same GPU server or different GPU servers, and they can exchange gradients with the AllReduce algorithm.
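As a refresher on how each worker participates in this data-parallel group, here is a minimal, simplified sketch of the standard PyTorch DDP pattern that one pipeline-replica process would follow. It is not PipeTransformer's AutoDP (which creates and re-creates these process groups dynamically); the environment variables, backend, and toy model are placeholder assumptions.

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def run_replica():
    # One process per pipeline replica; rank and world size come from the
    # launcher (e.g., torchrun). The environment variables are placeholders.
    rank = int(os.environ["RANK"])
    world_size = int(os.environ["WORLD_SIZE"])
    dist.init_process_group("nccl", rank=rank, world_size=world_size)

    device = rank % torch.cuda.device_count()
    model = nn.Linear(128, 128).cuda(device)  # stand-in for one replica's active layers

    # DDP AllReduces gradients across the R replicas after each backward pass.
    ddp_model = DDP(model, device_ids=[device])

    x = torch.rand(32, 128, device=torch.device("cuda", device))
    ddp_model(x).sum().backward()  # gradients are synchronized here

    dist.destroy_process_group()

if __name__ == "__main__":
    run_replica()
```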
https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/
pytorch blogs
Under these settings, our goal is to accelerate training by leveraging freeze training, which does not require all layers to be trained throughout the duration of the training. Additionally, it may help save computation, communication, and memory cost, and potentially prevent overfitting by consecutively freezing layers. However, these benefits can only be achieved by overcoming the four challenges of designing an adaptive freezing algorithm, dynamic pipeline re-partitioning, efficient resource reallocation, and cross-process caching, as discussed in the introduction.

Figure 5. Overview of PipeTransformer Training System
https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/
pytorch blogs
PipeTransformer co-designs an on-the-fly freeze algorithm and an automated elastic pipelining training system that can dynamically transform the scope of the pipelined model and the number of pipeline replicas. The overall system architecture is illustrated in Figure 5. To support PipeTransformer's elastic pipelining, we maintain a customized version of PyTorch Pipeline. For data parallelism, we use PyTorch DDP as a baseline. Other components rely only on standard operating-system mechanisms (e.g., multiprocessing), which avoids the need for specialized software or hardware customization.

To ensure the generality of our framework, we have decoupled the training system into four core components: the freeze algorithm, AutoPipe, AutoDP, and AutoCache. The freeze algorithm (grey) samples indicators from the training loop and makes layer-wise freezing decisions, which are shared with AutoPipe (green). AutoPipe is an elastic pipeline module that speeds up training by excluding frozen layers from the pipeline and packing the active layers into fewer GPUs (pink), leading to both fewer cross-GPU communications and smaller pipeline bubbles.

Subsequently, AutoPipe passes pipeline length information to AutoDP (purple), which then spawns more pipeline replicas to increase data-parallel width, if possible. The illustration also includes an example in which AutoDP introduces a new replica (purple). AutoCache (orange edges) is a cross-pipeline caching module, as illustrated by connections between pipelines. The source code architecture is aligned with Figure 5 for readability and generality.
https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/
pytorch blogs
Implementation Using PyTorch APIs

As can be seen from Figure 5, PipeTransformer contains four components: Freeze Algorithm, AutoPipe, AutoDP, and AutoCache. Among them, AutoPipe and AutoDP rely on PyTorch Pipeline (torch.distributed.pipeline) and DDP (torch.nn.parallel.DistributedDataParallel) [1], respectively. In this blog, we only highlight the key implementation details of AutoPipe and AutoDP. For details of the Freeze Algorithm and AutoCache, please refer to our paper.

AutoPipe: Elastic Pipelining

AutoPipe can accelerate training by excluding frozen layers from the pipeline and packing the active layers into fewer GPUs. This section elaborates on the key components of AutoPipe that dynamically 1) partition pipelines, 2) minimize the number of pipeline devices, and 3) optimize mini-batch chunk size accordingly.
https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/
pytorch blogs
Basic Usage of PyTorch Pipeline

Before diving into details of AutoPipe, let us warm up with the basic usage of PyTorch Pipeline (torch.distributed.pipeline.sync.Pipe, see this tutorial). More specifically, we present a simple example to understand the design of Pipeline in practice:

```python
import os
import torch
import torch.nn as nn
from torch.distributed import rpc
from torch.distributed.pipeline.sync import Pipe

# Pipe returns RRefs, so the RPC framework must be initialized first
# (a single-process setup is enough for this toy example).
os.environ.setdefault("MASTER_ADDR", "localhost")
os.environ.setdefault("MASTER_PORT", "29500")
rpc.init_rpc("worker", rank=0, world_size=1)

# Step 1: build a model including two linear layers on two different GPUs
fc1 = nn.Linear(16, 8).cuda(0)
fc2 = nn.Linear(8, 4).cuda(1)
# Step 2: wrap the two layers with nn.Sequential
model = nn.Sequential(fc1, fc2)
# Step 3: build Pipe (torch.distributed.pipeline.sync.Pipe), splitting each mini-batch into 8 micro-batches
model = Pipe(model, chunks=8)
# do training/inference
input = torch.rand(16, 16).cuda(0)
output_rref = model(input)
```
https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/
pytorch blogs
In this basic example, we can see that before initializing Pipe, we need to partition the model nn.Sequential into multiple GPU devices and set an optimal chunk number (chunks). Balancing computation time across partitions is critical to pipeline training speed, as skewed workload distributions across stages can lead to stragglers and force devices with lighter workloads to wait. The chunk number may also have a non-trivial influence on the throughput of the pipeline.

Balanced Pipeline Partitioning

In a dynamic training system such as PipeTransformer, maintaining optimally balanced partitions in terms of parameter numbers does not guarantee the fastest training speed because other factors also play a crucial role:

Figure 6. The partition boundary is in the middle of a skip connection
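Since the best chunks value depends on the model and hardware, one simple way to pick it is to time a few candidate settings empirically. The sketch below is a rough, assumed profiling loop around a Pipe model like the one above; it is not PipeTransformer's actual profiler (which reuses prior profiling results for each pipeline length), and build_pipe is a hypothetical factory supplied by the caller.

```python
import time
import torch

def profile_chunks(build_pipe, candidate_chunks, input_batch, n_iters=10):
    """Crude throughput probe: rebuild the Pipe with each chunks setting,
    run a few forward passes, and return the fastest configuration."""
    timings = {}
    for chunks in candidate_chunks:
        model = build_pipe(chunks)          # hypothetical factory returning a Pipe
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(n_iters):
            model(input_batch).to_here()    # fetch the output RRef as a local tensor
        torch.cuda.synchronize()
        timings[chunks] = (time.time() - start) / n_iters
    best = min(timings, key=timings.get)
    return best, timings

# Example (assuming build_pipe and a suitable input batch exist):
# best_chunks, results = profile_chunks(build_pipe, [2, 4, 8, 16], input_batch)
```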
https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/
pytorch blogs
Cross-partition communication overhead. Placing a partition boundary in the middle of a skip connection leads to additional communications since tensors in the skip connection must now be copied to a different GPU. For example, with the BERT partitions in Figure 6, the partition after the boundary must take intermediate outputs from both of the two preceding partitions. In contrast, if the boundary is placed after the addition layer, the communication overhead between neighboring partitions is visibly smaller. Our measurements show that having cross-device communication is more expensive than having slightly imbalanced partitions (see the Appendix in our paper). Therefore, we do not consider breaking skip connections (an entire attention layer and MLP layer are highlighted separately in green at line 7 in Algorithm 1).
https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/
pytorch blogs
Frozen layer memory footprint. During training, AutoPipe must recompute partition boundaries several times to balance two distinct types of layers: frozen layers and active layers. The frozen layer's memory cost is a fraction of that of an active layer, given that the frozen layer does not need backward activation maps, optimizer states, or gradients. Instead of launching intrusive profilers to obtain thorough metrics on memory and computational cost, we define a tunable cost factor to estimate the memory footprint ratio of a frozen layer over the same active layer, and set it based on empirical measurements on our experimental hardware.
https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/
pytorch blogs
Based on the above two considerations, AutoPipe balances pipeline partitions based on parameter sizes. More specifically, AutoPipe uses a greedy algorithm to allocate all frozen and active layers to evenly distribute partitioned sublayers into K GPU devices. Pseudocode is described as the load_balance() function in Algorithm 1. The frozen layers are extracted from the original model and kept in a separate model instance in the first device of a pipeline. Note that the partition algorithm employed in this paper is not the only option; PipeTransformer is modularized to work with any alternatives.
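To illustrate the flavor of such a greedy balancing step, here is a simplified stand-in for it, not the actual load_balance() from Algorithm 1. It walks the layers in order and assigns them to K partitions by parameter count, discounting frozen layers with an assumed cost factor (the 1/6 default below is purely illustrative).

```python
def balance_partitions(layer_param_counts, frozen_flags, num_partitions, frozen_cost=1.0 / 6):
    """Greedy partitioning sketch: start a new partition once the running
    (discounted) parameter budget for the current one is exceeded.

    layer_param_counts : parameter count per layer, in model order
    frozen_flags       : True where the layer is frozen
    frozen_cost        : assumed memory discount for frozen layers (illustrative)
    """
    costs = [c * (frozen_cost if f else 1.0)
             for c, f in zip(layer_param_counts, frozen_flags)]
    budget = sum(costs) / num_partitions
    partitions, current, current_cost = [], [], 0.0
    for idx, cost in enumerate(costs):
        if current and current_cost + cost > budget and len(partitions) < num_partitions - 1:
            partitions.append(current)
            current, current_cost = [], 0.0
        current.append(idx)
        current_cost += cost
    partitions.append(current)
    return partitions  # one list of layer indices per GPU/partition

# Example: 12 equally sized layers, bottom 4 frozen, packed onto 4 GPUs.
print(balance_partitions([1_000_000] * 12, [True] * 4 + [False] * 8, 4))
```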
https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/
pytorch blogs
Pipeline Compression

Pipeline compression helps to free up GPUs to accommodate more pipeline replicas and reduce the number of cross-device communications between partitions. To determine the timing of compression, we can estimate the memory cost of the largest partition after compression, and then compare it with that of the largest partition of the pipeline at timestep T0. To avoid extensive memory profiling, the compression algorithm uses the parameter size as a proxy for the training memory footprint. Based on this simplification, the criterion of pipeline compression is as follows:
https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/
pytorch blogs
Once the freeze notification is received, AutoPipe will always attempt to divide the pipeline length K by 2 (e.g., from 8 to 4, then 2). By using K/2 as the input, the compression algorithm can verify whether the compression result satisfies the criterion in Equation (1). Pseudocode is shown in lines 25-33 in Algorithm 1. Note that this compression makes the acceleration ratio exponentially increase during training, meaning that if a GPU server has a larger number of GPUs (e.g., more than 8), the acceleration ratio will be further amplified.
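A rough way to picture this check in code, under the parameter-size-as-memory-proxy simplification above: halve the pipeline length, re-balance the active layers, and accept the shorter pipeline only if its largest partition is no bigger than the largest partition observed at timestep T0. This is a hypothetical sketch that reuses the balance_partitions() helper from the earlier example, not the actual lines 25-33 of Algorithm 1.

```python
def try_compress(layer_param_counts, frozen_flags, pipeline_length, max_partition_at_t0):
    """Attempt to halve the pipeline length; return the new length if the
    compressed pipeline's largest partition stays within the T0 bound."""
    candidate = pipeline_length // 2
    if candidate < 1:
        return pipeline_length

    # Only the active layers remain in the pipeline after freezing.
    active = [c for c, f in zip(layer_param_counts, frozen_flags) if not f]
    if not active:
        return pipeline_length

    parts = balance_partitions(active, [False] * len(active), candidate)
    largest = max(sum(active[i] for i in part) for part in parts)

    # Accept only if the proxy memory cost of the largest partition does not
    # exceed the largest partition measured at the start of training (T0).
    return candidate if largest <= max_partition_at_t0 else pipeline_length
```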
https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/
pytorch blogs