diff --git "a/blogs_splitted_dataset.jsonl" "b/blogs_splitted_dataset.jsonl" --- "a/blogs_splitted_dataset.jsonl" +++ "b/blogs_splitted_dataset.jsonl" @@ -29,9 +29,9 @@ {"page_content": "Today, we\u2019re announcing a [new Udacity course](https://blog.udacity.com/2019/05/announcing-the-secure-and-private-ai-scholarship-challenge-with-facebook.html), building upon the Intro to Deep Learning course launched last year. This new course, led by Andrew Trask of Oxford University and OpenMined, covers important concepts around privacy in AI, including methods such as differential privacy and federated learning. Facebook will also be providing scholarships to support students as they continue their ML education in Udacity\u2019s full Nanodegree programs.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}} {"page_content": "The [fast.ai](https://www.fast.ai) community is also continuing to invest energy and resources in PyTorch. In June, fast.ai will launch a new course called Deep Learning from the Foundations, which will show developers how to go all the way from writing matrix multiplication from scratch to how to train and implement a state-of-the-art ImageNet model. The course will include deep dives into the underlying implementation of methods in the PyTorch and fast.ai libraries, and will use the code to explain and illustrate the academic papers that underlie these methods.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}} {"page_content": "As part of the course, fast.ai will also release new software modules, including fastai.audio, which brings the power of fast.ai\u2019s deep abstractions and curated algorithms to the new PyTorch.audio module, and show how fastai.vision can be used to [create stunning high-resolution videos](https://www.fast.ai/2019/05/03/decrappify) from material such as old classic movies, and from cutting-edge microscopy sequences through a collaboration with the [Salk Institute](https://www.salk.edu). In addition, fast.ai is contributing its new X-ResNet module, including a suite of models pretrained on ImageNet.\n\n## Getting started with PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}} -{"page_content": "Everyone in the AI community \u2014 including those new to ML development as well as researchers and engineers looking for ways to accelerate their end-to-end workflows \u2014 can experiment with PyTorch instantly by visiting [pytorch.org](https://pytorch.org) and launching a [tutorial](https://pytorch.org/tutorials) in Colab. There are also many easy ways to [get started](https://pytorch.org/get-started/locally) both locally and on popular cloud platforms.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}} +{"page_content": "## Getting started with PyTorch\n\nEveryone in the AI community \u2014 including those new to ML development as well as researchers and engineers looking for ways to accelerate their end-to-end workflows \u2014 can experiment with PyTorch instantly by visiting [pytorch.org](https://pytorch.org) and launching a [tutorial](https://pytorch.org/tutorials) in Colab. 
There are also many easy ways to [get started](https://pytorch.org/get-started/locally) both locally and on popular cloud platforms.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: \"Introducing Accelerated PyTorch Training on Mac\"\nauthor: PyTorch\nfeatured-img: \"/assets/images/METAPT-002-BarGraph-02-static.png\"\n---\n\nIn collaboration with the Metal engineering team at Apple, we are excited to announce support for GPU-accelerated PyTorch training on Mac. Until now, PyTorch training on Mac only leveraged the CPU, but with the upcoming PyTorch v1.12 release, developers and researchers can take advantage of Apple silicon GPUs for significantly faster model training. This unlocks the ability to perform machine learning workflows like prototyping and fine-tuning locally, right on Mac.\n\n
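Using the new backend is a one-line device change. Below is a minimal, hedged sketch of opting into it (it assumes a PyTorch 1.12+ nightly with MPS support and uses a toy model purely for illustration):

```python
import torch

# Assumes a PyTorch 1.12+ build with MPS support on an Apple silicon Mac.
device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")

model = torch.nn.Linear(128, 10).to(device)   # toy model, illustration only
x = torch.randn(32, 128, device=device)
print(model(x).shape)
```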
\n\n## Metal Acceleration", "metadata": {"source": "https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/", "category": "pytorch blogs"}} -{"page_content": "Accelerated GPU training is enabled using Apple\u2019s Metal Performance Shaders (MPS) as a backend for PyTorch. The MPS backend extends the PyTorch framework, providing scripts and capabilities to set up and run operations on Mac. MPS optimizes compute performance with kernels that are fine-tuned for the unique characteristics of each Metal GPU family. The new device maps machine learning computational graphs and primitives on the MPS Graph framework and tuned kernels provided by MPS. \n\n## Training Benefits on Apple Silicon\n\nEvery Apple silicon Mac has a unified memory architecture, providing the GPU with direct access to the full memory store. This makes Mac a great platform for machine learning, enabling users to train larger networks or batch sizes locally. This reduces costs associated with cloud-based development or the need for additional local GPUs. The Unified Memory architecture also reduces data retrieval latency, improving end-to-end performance.", "metadata": {"source": "https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/", "category": "pytorch blogs"}} +{"page_content": "## Metal Acceleration\n\nAccelerated GPU training is enabled using Apple\u2019s Metal Performance Shaders (MPS) as a backend for PyTorch. The MPS backend extends the PyTorch framework, providing scripts and capabilities to set up and run operations on Mac. MPS optimizes compute performance with kernels that are fine-tuned for the unique characteristics of each Metal GPU family. The new device maps machine learning computational graphs and primitives on the MPS Graph framework and tuned kernels provided by MPS. \n\n## Training Benefits on Apple Silicon\n\nEvery Apple silicon Mac has a unified memory architecture, providing the GPU with direct access to the full memory store. This makes Mac a great platform for machine learning, enabling users to train larger networks or batch sizes locally. This reduces costs associated with cloud-based development or the need for additional local GPUs. The Unified Memory architecture also reduces data retrieval latency, improving end-to-end performance.", "metadata": {"source": "https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/", "category": "pytorch blogs"}} {"page_content": "In the graphs below, you can see the performance speedup from accelerated GPU training and evaluation compared to the CPU baseline:\n\n
\nAccelerated GPU training and evaluation speedups over CPU-only (times faster)\n
\n\n\n## Getting Started\n\nTo get started, just install the latest [Preview (Nightly) build](https://pytorch.org/get-started/locally/) on your Apple silicon Mac running macOS 12.3 or later with a native version (arm64) of Python.\n \nYou can also learn more about Metal and MPS on [Apple\u2019s Metal page](https://developer.apple.com/metal/).", "metadata": {"source": "https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/", "category": "pytorch blogs"}} {"page_content": "\\* _Testing conducted by Apple in April 2022 using production Mac Studio systems with Apple M1 Ultra, 20-core CPU, 64-core GPU 128GB of RAM, and 2TB SSD. Tested with macOS Monterey 12.3, prerelease PyTorch 1.12, ResNet50 (batch size=128), HuggingFace BERT (batch size=64), and VGG16 (batch size=64). Performance tests are conducted using specific computer systems and reflect the approximate performance of Mac Studio._", "metadata": {"source": "https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: \"Accelerating Hugging Face and TIMM models with PyTorch 2.0\"\nauthor: Mark Saroufim\nfeatured-img: \"assets/images/pytorch-2.0-feature-img.png\"\n---\n\n`torch.compile()` makes it easy to experiment with different compiler backends to make PyTorch code faster with a single line decorator `torch.compile()`. It works either directly over an nn.Module as a drop-in replacement for `torch.jit.script()` but without requiring you to make any source code changes. We expect this one line code change to provide you with between 30%-2x training time speedups on the vast majority of models that you\u2019re already running.\n\n```python\n\nopt_module = torch.compile(module)\n\n```\n\ntorch.compile supports arbitrary PyTorch code, control flow, mutation and comes with experimental support for dynamic shapes. We\u2019re so excited about this development that we call it PyTorch 2.0.", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}} @@ -42,17 +42,17 @@ {"page_content": "torch.compile() supports many different backends but one that we\u2019re particularly excited about is Inductor which generates Triton kernels [https://github.com/openai/triton](https://github.com/openai/triton) which are written in Python yet outperform the vast majority of handwritten CUDA kernels. 
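For context, the "example above" referenced in the next paragraph is a small pointwise function with two chained `torch.sin` calls. A minimal sketch of what such a `trig.py` could contain (reconstructed from the generated kernel shown below, not the post's exact code) is:

```python
# trig.py -- hypothetical reconstruction of the pointwise example
import torch

def fn(x):
    # Two chained elementwise ops that Inductor can fuse into a single Triton kernel
    a = torch.sin(x)
    b = torch.sin(a)
    return b

opt_fn = torch.compile(fn, backend="inductor")
# 10,000 elements matches the xnumel in the generated kernel shown below
opt_fn(torch.randn(10000, device="cuda"))
```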
Suppose our example above was called trig.py we can actually inspect the code generated triton kernels by running.\n\n```\nTORCH_COMPILE_DEBUG=1 python trig.py\n```\n\n```python", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}} {"page_content": "```python\n\n@pointwise(size_hints=[16384], filename=__file__, meta={'signature': {0: '*fp32', 1: '*fp32', 2: 'i32'}, 'device': 0, 'constants': {}, 'configs': [instance_descriptor(divisible_by_16=(0, 1, 2), equal_to_1=())]})\n@triton.jit\ndef kernel(in_ptr0, out_ptr0, xnumel, XBLOCK : tl.constexpr):\n xnumel = 10000\n xoffset = tl.program_id(0) * XBLOCK\n xindex = xoffset + tl.reshape(tl.arange(0, XBLOCK), [XBLOCK])\n xmask = xindex < xnumel\n x0 = xindex\n tmp0 = tl.load(in_ptr0 + (x0), xmask)\n tmp1 = tl.sin(tmp0)\n tmp2 = tl.sin(tmp1)\n tl.store(out_ptr0 + (x0 + tl.zeros([XBLOCK], tl.int32)), tmp2, xmask)\n\n```\n\nAnd you can verify that fusing the two `sins` did actually occur because the two `sin` operations occur within a single Triton kernel and the temporary variables are held in registers with very fast access.\n\n### a real model\n\nAs a next step let\u2019s try a real model like resnet50 from the PyTorch hub.", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}} {"page_content": "```python\nimport torch\nmodel = torch.hub.load('pytorch/vision:v0.10.0', 'resnet18', pretrained=True)\nopt_model = torch.compile(model, backend=\"inductor\")\nmodel(torch.randn(1,3,64,64))\n\n```\n\nIf you actually run you may be surprised that the first run is slow and that\u2019s because the model is being compiled. Subsequent runs will be faster so it's common practice to warm up your model before you start benchmarking it.\n\nYou may have noticed how we also passed in the name of a compiler explicitly here with \u201cinductor\u201d but it\u2019s not the only available backend, you can run in a REPL `torch._dynamo.list_backends()` to see the full list of available backends. 
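A minimal sketch of the warm-up-then-benchmark pattern described above (the model, batch size, and iteration count are illustrative choices, not the post's benchmark setup):

```python
import time
import torch

model = torch.hub.load('pytorch/vision:v0.10.0', 'resnet18', pretrained=True).cuda().eval()
opt_model = torch.compile(model, backend="inductor")
x = torch.randn(16, 3, 224, 224, device="cuda")

opt_model(x)                 # first call compiles, so it is slow
torch.cuda.synchronize()

start = time.time()          # later calls reuse the compiled kernels
for _ in range(10):
    opt_model(x)
torch.cuda.synchronize()
print(f"avg forward: {(time.time() - start) / 10:.4f}s")

print(torch._dynamo.list_backends())   # inductor is only one of several backends
```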
For fun you should try out `aot_cudagraphs` or `nvfuser`.\n\n### Hugging Face models", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}} -{"page_content": "Let\u2019s do something a bit more interesting now, our community frequently\nuses pretrained models from transformers [https://github.com/huggingface/transformers](https://github.com/huggingface/transformers) or TIMM [https://github.com/rwightman/pytorch-image-models](https://github.com/rwightman/pytorch-image-models) and one of our design goals for PyTorch 2.0 was that any new compiler stack needs to work out of the box with the vast majority of models people actually run.\n\nSo we\u2019re going to directly download a pretrained model from the Hugging Face hub and optimize it\n\n```python", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}} +{"page_content": "### Hugging Face models\n\nLet\u2019s do something a bit more interesting now, our community frequently\nuses pretrained models from transformers [https://github.com/huggingface/transformers](https://github.com/huggingface/transformers) or TIMM [https://github.com/rwightman/pytorch-image-models](https://github.com/rwightman/pytorch-image-models) and one of our design goals for PyTorch 2.0 was that any new compiler stack needs to work out of the box with the vast majority of models people actually run.\n\nSo we\u2019re going to directly download a pretrained model from the Hugging Face hub and optimize it\n\n```python", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}} {"page_content": "```python\n\nimport torch\nfrom transformers import BertTokenizer, BertModel\n# Copy pasted from here https://huggingface.co/bert-base-uncased\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\nmodel = BertModel.from_pretrained(\"bert-base-uncased\").to(device=\"cuda:0\")\nmodel = torch.compile(model) # This is the only line of code that we changed\ntext = \"Replace me by any text you'd like.\"\nencoded_input = tokenizer(text, return_tensors='pt').to(device=\"cuda:0\")\noutput = model(**encoded_input)\n\n```\n\nIf you remove the `to(device=\"cuda:0\")` from the model and `encoded_input` then PyTorch 2.0 will generate C++ kernels that will be optimized for running on your CPU. You can inspect both Triton or C++ kernels for BERT, they\u2019re obviously more complex than the trigonometry example we had above but you can similarly skim it and understand if you understand PyTorch.\n\nThe same code also works just fine if used with [https://github.com/huggingface/accelerate](https://github.com/huggingface/accelerate) and DDP", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}} {"page_content": "Similarly let\u2019s try out a TIMM example\n\n```python\nimport timm\nimport torch\nmodel = timm.create_model('resnext101_32x8d', pretrained=True, num_classes=2)\nopt_model = torch.compile(model, backend=\"inductor\")\nopt_model(torch.randn(64,3,7,7))\n```\n\nOur goal with PyTorch was to build a breadth-first compiler that would speed up the vast majority of actual models people run in open source. 
The Hugging Face Hub ended up being an extremely valuable benchmarking tool for us, ensuring that any optimization we work on actually helps accelerate models people want to run.\n\nSo please try out PyTorch 2.0, enjoy the free perf and if you\u2019re not seeing it then please open an issue and we will make sure your model is supported [https://github.com/pytorch/torchdynamo/issues](https://github.com/pytorch/torchdynamo/issues)\n\nAfter all, we can\u2019t claim we\u2019re created a breadth-first unless YOUR models actually run faster.", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch 1.6 now includes Stochastic Weight Averaging'\nauthor: Pavel Izmailov, Andrew Gordon Wilson and Vincent Queneneville-Belair\n---\n\nDo you use stochastic gradient descent (SGD) or Adam? Regardless of the procedure you use to train your neural network, you can likely achieve significantly better generalization at virtually no additional cost with a simple new technique now natively supported in PyTorch 1.6, Stochastic Weight Averaging (SWA) [1]. Even if you have already trained your model, it\u2019s easy to realize the benefits of SWA by running SWA for a small number of epochs starting with a pre-trained model. [Again](https://twitter.com/MilesCranmer/status/1282140440892932096) and [again](https://twitter.com/leopd/status/1285969855062192129), researchers are discovering that SWA improves the performance of well-tuned models in a wide array of practical applications with little cost or effort!", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}} {"page_content": "SWA has a wide range of applications and features:\n* SWA significantly improves performance compared to standard training techniques in computer vision (e.g., VGG, ResNets, Wide ResNets and DenseNets on ImageNet and CIFAR benchmarks [1, 2]).\n* SWA provides state-of-the-art performance on key benchmarks in semi-supervised learning and domain adaptation [2].\n* SWA was shown to improve performance in language modeling (e.g., AWD-LSTM on WikiText-2 [4]) and policy-gradient methods in deep reinforcement learning [3].\n* SWAG, an extension of SWA, can approximate Bayesian model averaging in Bayesian deep learning and achieves state-of-the-art uncertainty calibration results in various settings. Moreover, its recent generalization MultiSWAG provides significant additional performance gains and mitigates double-descent [4, 10]. Another approach, Subspace Inference, approximates the Bayesian posterior in a small subspace of the parameter space around the SWA solution [5].\n* SWA for low precision training, SWALP, can match the performance of full-precision SGD training, even with all numbers quantized down to 8 bits, including gradient accumulators [6].\n* SWA in parallel, SWAP, was shown to greatly speed up the training of neural networks by using large batch sizes and, in particular, set a record by training a neural network to 94% accuracy on CIFAR-10 in 27 seconds [11].", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}} {"page_content": "
\n\n**Figure 1**. *Illustrations of SWA and SGD with a Preactivation ResNet-164 on CIFAR-100 [1]. **Left**: test error surface for three FGE samples and the corresponding SWA solution (averaging in weight space). **Middle** and **Right**: test error and train loss surfaces showing the weights proposed by SGD (at convergence) and SWA, starting from the same initialization of SGD after 125 training epochs. Please see [1] for details on how these figures were constructed*.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}} {"page_content": "In short, SWA performs an equal average of the weights traversed by SGD (or any stochastic optimizer) with a modified learning rate schedule (see the left panel of Figure 1.). SWA solutions end up in the center of a wide flat region of loss, while SGD tends to converge to the boundary of the low-loss region, making it susceptible to the shift between train and test error surfaces (see the middle and right panels of Figure 1). We emphasize that SWA **can be used with any optimizer, such as Adam, and is not specific to SGD**.\n\nPreviously, SWA was in PyTorch contrib. In PyTorch 1.6, we provide a new convenient implementation of SWA in [torch.optim.swa_utils](https://pytorch.org/docs/stable/optim.html#stochastic-weight-averaging).\n\n## Is this just Averaged SGD?", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}} -{"page_content": "At a high level, averaging SGD iterates dates back several decades in convex optimization [7, 8], where it is sometimes referred to as Polyak-Ruppert averaging, or averaged SGD. **But the details matter**. Averaged SGD is often used in conjunction with a decaying learning rate, and an exponential moving average (EMA), typically for convex optimization. In convex optimization, the focus has been on improved rates of convergence. In deep learning, this form of averaged SGD smooths the trajectory of SGD iterates but does not perform very differently.\n\nBy contrast, SWA uses an **equal average** of SGD iterates with a modified **cyclical or high constant learning rate** and exploits the flatness of training objectives [8] specific to **deep learning** for **improved generalization**. \n\n## How does Stochastic Weight Averaging Work?", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}} -{"page_content": "There are two important ingredients that make SWA work. First, SWA uses a **modified learning rate** schedule so that SGD (or other optimizers such as Adam) continues to bounce around the optimum and explore diverse models instead of simply converging to a single solution. For example, we can use the standard decaying learning rate strategy for the first 75% of training time and then set the learning rate to a reasonably high constant value for the remaining 25% of the time (see Figure 2 below). The second ingredient is to take an average of the weights **(typically an equal average)** of the networks traversed by SGD. For example, we can maintain a running average of the weights obtained at the end of every epoch within the last 25% of training time (see Figure 2). After training is complete, we then set the weights of the network to the computed SWA averages.\n\n
", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}} -{"page_content": "**Figure 2**. *Illustration of the learning rate schedule adopted by SWA. Standard decaying schedule is used for the first 75% of the training and then a high constant value is used for the remaining 25%. The SWA averages are formed during the last 25% of training*.\n\nOne important detail is the batch normalization. Batch normalization layers compute running statistics of activations during training. Note that the SWA averages of the weights are never used to make predictions during training. So the batch normalization layers do not have the activation statistics computed at the end of training. We can compute these statistics by doing a single forward pass on the train data with the SWA model.\n\nWhile we focus on SGD for simplicity in the description above, SWA can be combined with any optimizer. You can also use cyclical learning rates instead of a high constant value (see e.g., [2]).\n\n## How to use SWA in PyTorch?", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}} -{"page_content": "In `torch.optim.swa_utils` we implement all the SWA ingredients to make it convenient to use SWA with any model. In particular, we implement `AveragedModel` class for SWA models, `SWALR` learning rate scheduler, and `update_bn` utility function to update SWA batch normalization statistics at the end of training. \n\nIn the example below, `swa_model` is the SWA model that accumulates the averages of the weights. We train the model for a total of 300 epochs, and we switch to the SWA learning rate schedule and start to collect SWA averages of the parameters at epoch 160. \n\n```python\nfrom torch.optim.swa_utils import AveragedModel, SWALR\nfrom torch.optim.lr_scheduler import CosineAnnealingLR\n\nloader, optimizer, model, loss_fn = ...\nswa_model = AveragedModel(model)\nscheduler = CosineAnnealingLR(optimizer, T_max=100)\nswa_start = 5\nswa_scheduler = SWALR(optimizer, swa_lr=0.05)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}} +{"page_content": "## Is this just Averaged SGD?\n\nAt a high level, averaging SGD iterates dates back several decades in convex optimization [7, 8], where it is sometimes referred to as Polyak-Ruppert averaging, or averaged SGD. **But the details matter**. Averaged SGD is often used in conjunction with a decaying learning rate, and an exponential moving average (EMA), typically for convex optimization. In convex optimization, the focus has been on improved rates of convergence. In deep learning, this form of averaged SGD smooths the trajectory of SGD iterates but does not perform very differently.\n\nBy contrast, SWA uses an **equal average** of SGD iterates with a modified **cyclical or high constant learning rate** and exploits the flatness of training objectives [8] specific to **deep learning** for **improved generalization**. \n\n## How does Stochastic Weight Averaging Work?", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}} +{"page_content": "## How does Stochastic Weight Averaging Work?\n\nThere are two important ingredients that make SWA work. 
First, SWA uses a **modified learning rate** schedule so that SGD (or other optimizers such as Adam) continues to bounce around the optimum and explore diverse models instead of simply converging to a single solution. For example, we can use the standard decaying learning rate strategy for the first 75% of training time and then set the learning rate to a reasonably high constant value for the remaining 25% of the time (see Figure 2 below). The second ingredient is to take an average of the weights **(typically an equal average)** of the networks traversed by SGD. For example, we can maintain a running average of the weights obtained at the end of every epoch within the last 25% of training time (see Figure 2). After training is complete, we then set the weights of the network to the computed SWA averages.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}} +{"page_content": "
\n\n**Figure 2**. *Illustration of the learning rate schedule adopted by SWA. Standard decaying schedule is used for the first 75% of the training and then a high constant value is used for the remaining 25%. The SWA averages are formed during the last 25% of training*.\n\nOne important detail is the batch normalization. Batch normalization layers compute running statistics of activations during training. Note that the SWA averages of the weights are never used to make predictions during training. So the batch normalization layers do not have the activation statistics computed at the end of training. We can compute these statistics by doing a single forward pass on the train data with the SWA model.\n\nWhile we focus on SGD for simplicity in the description above, SWA can be combined with any optimizer. You can also use cyclical learning rates instead of a high constant value (see e.g., [2]).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}} +{"page_content": "## How to use SWA in PyTorch?\n\nIn `torch.optim.swa_utils` we implement all the SWA ingredients to make it convenient to use SWA with any model. In particular, we implement `AveragedModel` class for SWA models, `SWALR` learning rate scheduler, and `update_bn` utility function to update SWA batch normalization statistics at the end of training. \n\nIn the example below, `swa_model` is the SWA model that accumulates the averages of the weights. We train the model for a total of 300 epochs, and we switch to the SWA learning rate schedule and start to collect SWA averages of the parameters at epoch 160. \n\n```python\nfrom torch.optim.swa_utils import AveragedModel, SWALR\nfrom torch.optim.lr_scheduler import CosineAnnealingLR\n\nloader, optimizer, model, loss_fn = ...\nswa_model = AveragedModel(model)\nscheduler = CosineAnnealingLR(optimizer, T_max=100)\nswa_start = 5\nswa_scheduler = SWALR(optimizer, swa_lr=0.05)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}} {"page_content": "for epoch in range(100):\n for input, target in loader:\n optimizer.zero_grad()\n loss_fn(model(input), target).backward()\n optimizer.step()\n if epoch > swa_start:\n swa_model.update_parameters(model)\n swa_scheduler.step()\n else:\n scheduler.step()\n\n# Update bn statistics for the swa_model at the end\ntorch.optim.swa_utils.update_bn(loader, swa_model)\n# Use swa_model to make predictions on test data \npreds = swa_model(test_input)\n```\n\nNext, we explain each component of `torch.optim.swa_utils` in detail.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}} {"page_content": "`AveragedModel` class serves to compute the weights of the SWA model. You can create an averaged model by running `swa_model = AveragedModel(model)`. You can then update the parameters of the averaged model by `swa_model.update_parameters(model)`. By default, `AveragedModel` computes a running equal average of the parameters that you provide, but you can also use custom averaging functions with the `avg_fn` parameter. 
In the following example, `ema_model` computes an exponential moving average.\n\n```python\nema_avg = lambda averaged_model_parameter, model_parameter, num_averaged:\\\n0.1 * averaged_model_parameter + 0.9 * model_parameter\nema_model = torch.optim.swa_utils.AveragedModel(model, avg_fn=ema_avg)\n```\n\nIn practice, we find an equal average with the modified learning rate schedule in Figure 2 provides the best performance.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}} {"page_content": "`SWALR` is a learning rate scheduler that anneals the learning rate to a fixed value, and then keeps it constant. For example, the following code creates a scheduler that linearly anneals the learning rate from its initial value to `0.05` in `5` epochs within each parameter group.\n\n```python\nswa_scheduler = torch.optim.swa_utils.SWALR(optimizer, \nanneal_strategy=\"linear\", anneal_epochs=5, swa_lr=0.05)\n\n```\nWe also implement cosine annealing to a fixed value (`anneal_strategy=\"cos\"`). In practice, we typically switch to `SWALR` at epoch `swa_start` (e.g. after 75% of the training epochs), and simultaneously start to compute the running averages of the weights:\n\n```python\nscheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)\nswa_start = 75\nfor epoch in range(100):\n # \n if i > swa_start:\n swa_model.update_parameters(model)\n swa_scheduler.step()\n else:\n scheduler.step()\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}} @@ -61,8 +61,8 @@ {"page_content": "**Figure 3**: *visualization of mode connectivity for ResNet-20 with no skip connections on CIFAR-10 dataset. The visualization is created in collaboration with Javier Ideami [(https://losslandscape.com/)](https://losslandscape.com/). For more details, see this [blogpost](https://izmailovpavel.github.io/curves_blogpost/)*.\n\nWe expect solutions that are centered in the flat region of the loss to generalize better than those near the boundary. Indeed, train and test error surfaces are not perfectly aligned in the weight space. Solutions that are centered in the flat region are not as susceptible to the shifts between train and test error surfaces as those near the boundary. In Figure 4 below, we show the train loss and test error surfaces along the direction connecting the SWA and SGD solutions. As you can see, while the SWA solution has a higher train loss compared to the SGD solution, it is centered in a region of low loss and has a substantially better test error.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}} {"page_content": "
\n\n**Figure 4**. *Train loss and test error along the line connecting the SWA solution (circle) and SGD solution (square). The SWA solution is centered in a wide region of low train loss, while the SGD solution lies near the boundary. Because of the shift between train loss and test error surfaces, the SWA solution leads to much better generalization*.\n\n## What are the results achieved with SWA?\n\nWe release a GitHub [repo](https://github.com/izmailovpavel/torch_swa_examples) with examples using the PyTorch implementation of SWA for training DNNs. For example, these examples can be used to achieve the following results on CIFAR-100:\n\n\n {:.table.table-striped.table-bordered}\n | | VGG-16 | ResNet-164 | WideResNet-28x10 | \n| ------------- | ------------- | ------------- | ------------- |\n| SGD | 72.8 \u00b1 0.3 | 78.4 \u00b1 0.3 | 81.0 \u00b1 0.3 | \n| SWA | 74.4 \u00b1 0.3 | 79.8 \u00b1 0.4 | 82.5 \u00b1 0.2 |", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}} {"page_content": "## Semi-Supervised Learning\n\nIn a follow-up [paper](https://arxiv.org/abs/1806.05594) SWA was applied to semi-supervised learning, where it improved the best reported results in multiple settings [2]. For example, with SWA you can get 95% accuracy on CIFAR-10 if you only have the training labels for 4k training data points (the previous best reported result on this problem was 93.7%). This paper also explores averaging multiple times within epochs, which can accelerate convergence and find still flatter solutions in a given time.\n\n
\n**Figure 5**. Performance of fast-SWA on semi-supervised learning with CIFAR-10. fast-SWA achieves record results in every setting considered.\n\n## Reinforcement Learning", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}} -{"page_content": "In another follow-up [paper](http://www.gatsby.ucl.ac.uk/~balaji/udl-camera-ready/UDL-24.pdf) SWA was shown to improve the performance of policy gradient methods A2C and DDPG on several Atari games and MuJoCo environments [3]. This application is also an instance of where SWA is used with Adam. Recall that SWA is not specific to SGD and can benefit essentially any optimizer.\n\n\n{:.table.table-striped.table-bordered}\n | Environment Name | A2C | A2C + SWA | \n| ------------- | ------------- | ------------- | \n| Breakout | 522 \u00b1 34 | 703 \u00b1 60 |\n| Qbert | 18777 \u00b1 778 | 21272 \u00b1 655 |\n| SpaceInvaders | 7727 \u00b1 1121 | 21676 \u00b1 8897 |\n| Seaquest | 1779 \u00b1 4 | 1795 \u00b1 4 |\n| BeamRider | 9999 \u00b1 402 | 11321 \u00b1 1065 |\n| CrazyClimber | 147030 \u00b1 10239 | 139752 \u00b1 11618 |\n\n\n## Low Precision Training", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}} -{"page_content": "We can filter through quantization noise by combining weights that have been rounded down with weights that have been rounded up. Moreover, by averaging weights to find a flat region of the loss surface, large perturbations of the weights will not affect the quality of the solution (Figures 9 and 10). Recent [work](https://arxiv.org/abs/1904.11943) shows that by adapting SWA to the low precision setting, in a method called SWALP, one can match the performance of full-precision SGD even with all training in 8 bits [5]. This is quite a practically important result, given that (1) SGD training in 8 bits performs notably worse than full precision SGD, and (2) low precision training is significantly harder than predictions in low precision after training (the usual setting). For example, a ResNet-164 trained on CIFAR-100 with float (16-bit) SGD achieves 22.2% error, while 8-bit SGD achieves 24.0% error. By contrast, SWALP with 8 bit training achieves 21.8% error.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}} +{"page_content": "## Reinforcement Learning\n\nIn another follow-up [paper](http://www.gatsby.ucl.ac.uk/~balaji/udl-camera-ready/UDL-24.pdf) SWA was shown to improve the performance of policy gradient methods A2C and DDPG on several Atari games and MuJoCo environments [3]. This application is also an instance of where SWA is used with Adam. 
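A minimal sketch of driving the `torch.optim.swa_utils` helpers with Adam instead of SGD (the toy model, data, and hyperparameters are illustrative assumptions, not taken from the paper):

```python
import torch
import torch.nn as nn
from torch.optim.swa_utils import AveragedModel, SWALR

# Toy classifier and data, purely for illustration
model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # SWA on top of Adam
swa_model = AveragedModel(model)
swa_scheduler = SWALR(optimizer, swa_lr=5e-4)
loss_fn = nn.CrossEntropyLoss()
data = [(torch.randn(32, 8), torch.randint(0, 2, (32,))) for _ in range(10)]

swa_start = 15
for epoch in range(20):
    for x, y in data:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    if epoch >= swa_start:
        swa_model.update_parameters(model)   # accumulate the equal average
        swa_scheduler.step()

preds = swa_model(torch.randn(4, 8))         # use the averaged weights for prediction
```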
Recall that SWA is not specific to SGD and can benefit essentially any optimizer.\n\n\n{:.table.table-striped.table-bordered}\n | Environment Name | A2C | A2C + SWA | \n| ------------- | ------------- | ------------- | \n| Breakout | 522 \u00b1 34 | 703 \u00b1 60 |\n| Qbert | 18777 \u00b1 778 | 21272 \u00b1 655 |\n| SpaceInvaders | 7727 \u00b1 1121 | 21676 \u00b1 8897 |\n| Seaquest | 1779 \u00b1 4 | 1795 \u00b1 4 |\n| BeamRider | 9999 \u00b1 402 | 11321 \u00b1 1065 |\n| CrazyClimber | 147030 \u00b1 10239 | 139752 \u00b1 11618 |\n\n\n## Low Precision Training", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}} +{"page_content": "## Low Precision Training\n\nWe can filter through quantization noise by combining weights that have been rounded down with weights that have been rounded up. Moreover, by averaging weights to find a flat region of the loss surface, large perturbations of the weights will not affect the quality of the solution (Figures 9 and 10). Recent [work](https://arxiv.org/abs/1904.11943) shows that by adapting SWA to the low precision setting, in a method called SWALP, one can match the performance of full-precision SGD even with all training in 8 bits [5]. This is quite a practically important result, given that (1) SGD training in 8 bits performs notably worse than full precision SGD, and (2) low precision training is significantly harder than predictions in low precision after training (the usual setting). For example, a ResNet-164 trained on CIFAR-100 with float (16-bit) SGD achieves 22.2% error, while 8-bit SGD achieves 24.0% error. By contrast, SWALP with 8 bit training achieves 21.8% error.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}} {"page_content": "
\n**Figure 9**. *Quantizing a solution leads to a perturbation of the weights which has a greater effect on the quality of the sharp solution (left) compared to wide solution (right)*. \n\n\n
\n**Figure 10**. *The difference between standard low precision training and SWALP*.\n\nAnother [work](https://arxiv.org/abs/2002.00343), SQWA, presents an approach for quantization and fine-tuning of neural networks in low precision [12]. In particular, SQWA achieved state-of-the-art results for DNNs quantized to 2 bits on CIFAR-100 and ImageNet.\n\n## Calibration and Uncertainty Estimates\n\nBy finding a centred solution in the loss, SWA can also improve calibration and uncertainty representation. Indeed, SWA can be viewed as an approximation to an ensemble, resembling a Bayesian model average, but with a single model [1].", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}} {"page_content": "SWA can be viewed as taking the first moment of SGD iterates with a modified learning rate schedule. We can directly generalize SWA by also taking the second moment of iterates to form a Gaussian approximate posterior over the weights, further characterizing the loss geometry with SGD iterates. This approach,[SWA-Gaussian (SWAG)](https://arxiv.org/abs/1902.02476) is a simple, scalable and convenient approach to uncertainty estimation and calibration in Bayesian deep learning [4]. The SWAG distribution approximates the shape of the true posterior: Figure 6 below shows the SWAG distribution and the posterior log-density for ResNet-20 on CIFAR-10.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}} {"page_content": "
\n**Figure 6**. *SWAG posterior approximation and the loss surface for a ResNet-20 without skip-connections trained on CIFAR-10 in the subspace formed by the two largest eigenvalues of the SWAG covariance matrix. The shape of SWAG distribution is aligned with the posterior: the peaks of the two distributions coincide, and both distributions are wider in one direction than in the orthogonal direction. Visualization created in collaboration with* [Javier Ideami](https://losslandscape.com/).\n\nEmpirically, SWAG performs on par or better than popular alternatives including MC dropout, KFAC Laplace, and temperature scaling on uncertainty quantification, out-of-distribution detection, calibration and transfer learning in computer vision tasks. Code for SWAG is available [here](https://github.com/wjmaddox/swa_gaussian).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}} @@ -76,23 +76,23 @@ {"page_content": "---\nlayout: blog_detail\ntitle: \"Introducing TorchRec, and other domain library updates in PyTorch 1.11\"\nauthor: Team PyTorch\nfeatured-img: \"assets/images/pytorch-logo.jpg\"\n---\n\nWe are introducing the beta release of TorchRec and a number of improvements to the current PyTorch domain libraries, alongside the [PyTorch 1.11 release](https://pytorch.org/blog/pytorch-1.11-released/). These updates demonstrate our focus on developing common and extensible APIs across all domains to make it easier for our community to build ecosystem projects on PyTorch. Highlights include:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "- **TorchRec**, a PyTorch domain library for Recommendation Systems, is available in beta. [View it on GitHub](https://github.com/pytorch/torchrec).\n- **TorchAudio** - Added Enformer- and RNN-T-based models and recipes to support the full development lifecycle of a streaming ASR model. See the release notes [here](https://github.com/pytorch/audio/releases).\n- **TorchText** - Added beta support for RoBERTa and XLM-R models, byte-level BPE tokenizer, and text datasets backed by TorchData. See the release notes [here](https://github.com/pytorch/text/releases).\n- **TorchVision** - Added 4 new model families and 14 new classification datasets such as CLEVR, GTSRB, FER2013. See the release notes [here](https://github.com/pytorch/vision/releases).\n\n## TorchRec 0.1", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "## TorchRec 0.1\n\nWe [announced TorchRec](https://pytorch.org/blog/introducing-torchrec/) a few weeks ago and we are excited to release the beta version today. To recap, TorchRec is a PyTorch domain library for Recommendation Systems. This new library provides common sparsity and parallelism primitives, enabling researchers to build state-of-the-art personalization models and deploy them in production. 
TorchRec was used to train a 1.25 trillion parameter model, pushed to production in January 2022.\n\nIn particular, the library includes:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}} -{"page_content": "- Modeling primitives, such as embedding bags and jagged tensors, that enable easy authoring of large, performant multi-device/multi-node models using hybrid data-parallelism and model-parallelism.\n- Optimized RecSys kernels powered by [FBGEMM](https://github.com/pytorch/FBGEMM), including support for sparse and quantized operations.\n- A sharder which can partition embedding tables with a variety of different strategies including data-parallel, table-wise, row-wise, table-wise-row-wise, and column-wise sharding.\n- A planner which can automatically generate optimized sharding plans for models.\n- Pipelining to overlap dataloading device transfer (copy to GPU), inter-device communications (input_dist), and computation (forward, backward) for increased performance.\n- GPU inference support.\n- Common modules for RecSys, such as models and public datasets (Criteo & Movielens).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}} +{"page_content": "In particular, the library includes:\n\n- Modeling primitives, such as embedding bags and jagged tensors, that enable easy authoring of large, performant multi-device/multi-node models using hybrid data-parallelism and model-parallelism.\n- Optimized RecSys kernels powered by [FBGEMM](https://github.com/pytorch/FBGEMM), including support for sparse and quantized operations.\n- A sharder which can partition embedding tables with a variety of different strategies including data-parallel, table-wise, row-wise, table-wise-row-wise, and column-wise sharding.\n- A planner which can automatically generate optimized sharding plans for models.\n- Pipelining to overlap dataloading device transfer (copy to GPU), inter-device communications (input_dist), and computation (forward, backward) for increased performance.\n- GPU inference support.\n- Common modules for RecSys, such as models and public datasets (Criteo & Movielens).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "Please check the TorchRec announcement post [here](https://pytorch.org/blog/introducing-torchrec/), [video tutorial](https://www.youtube.com/watch?v=cjgj41dvSeQ), install instructions [here](https://github.com/pytorch/torchrec#readme), test drive the feature through this tutorial [here](https://pytorch.org/tutorials/intermediate/torchrec_tutorial.html), and refer to the reference document [here](https://pytorch.org/torchrec/).\n\n## TorchAudio 0.11\n\n#### TorchAudio: Building Blocks for Audio and Speech Processing\n\nWe published a paper, [TorchAudio: Building Blocks for Audio and Speech Processing](https://arxiv.org/abs/2110.15018), describing the overview of the TorchAudio library. If you find TorchAudio useful for your research, please help us share with the community by citing our paper.\n\n#### (Beta) RNN-T & (Prototype) Emformer Models and Recipes\n\n
", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "Emformer is an efficient memory-transformer-based streaming acoustic model that has demonstrated state-of-the-art streaming automatic speech recognition (ASR) performance in low-latency, resource-constrained scenarios, such as on-device applications (citation: [https://arxiv.org/abs/2010.10759](https://arxiv.org/abs/2010.10759)).\n\nThe TorchAudio v0.11 release includes the following beta features:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "- Implementation of Emformer ([docs](https://pytorch.org/audio/main/models.html#emformer))\n- Recurrent neural network transducer (RNN-T) streaming ASR model that uses Emformer for its transcription network ([docs](https://pytorch.org/audio/main/models.html#rnn-t))\n- RNN-T beam search decoder with TorchScript support ([docs](https://pytorch.org/audio/main/models.html#rnntbeamsearch))\n- LibriSpeech Emformer RNN-T training recipe ([GitHub](https://github.com/pytorch/audio/tree/release/0.11/examples/asr/librispeech_emformer_rnnt)) and corresponding pre-trained streaming ASR inference pipeline ([docs](https://pytorch.org/audio/main/pipelines.html#emformer-rnnt-base-librispeech))\n\nAlso there are prototype features that are available from nightly builds or the main branch.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "- Training recipes trained on MuST-C and TED-LIUM3 datasets. ([GitHub](https://github.com/pytorch/audio/tree/main/examples/asr/emformer_rnnt))\n- Pre-trained pipelines corresponding to the recipes. ([docs](https://pytorch.org/audio/main/prototype.pipelines.html))\n- Tutorial that steps through performing online speech recognition with RNN-T Emformer model. ([docs](https://pytorch.org/audio/main/tutorials/online_asr_tutorial.html))\n\nCollectively, these features cover the full development lifecycle of a streaming ASR model, from definition through training and inference, and enable users to easily develop their own Emformer- and RNN-T-based models.\n\nSpecial thanks to Yangyang Shi, Jay Mahadeokar, and Gil Keren for their code contributions and guidance.\n\n#### (Beta) HuBERT Pretrain Model", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}} -{"page_content": "The masked prediction training of HuBERT model requires the masked logits, unmasked logits, and feature norm as the outputs. The logits are for cross-entropy losses and the feature norm is for penalty loss. The release adds [HuBERTPretrainModel](https://github.com/pytorch/audio/blob/main/torchaudio/models/wav2vec2/model.py#L120-L205) and corresponding factory functions ([hubert_pretrain_base](https://github.com/pytorch/audio/blob/main/torchaudio/models/wav2vec2/model.py#L964-L1027), [hubert_pretrain_large](https://github.com/pytorch/audio/blob/main/torchaudio/models/wav2vec2/model.py#L1030-L1090), and [hubert_pretrain_xlarge](https://github.com/pytorch/audio/blob/main/torchaudio/models/wav2vec2/model.py#L1093-L1153)) to enable training from scratch.\n\n#### (Prototype) CTC Beam Search Decoder\n\nIn recent releases, TorchAudio has added support for ASR models fine-tuned on CTC loss. 
The addition of an inference time CTC beam search decoder enables running end-to-end ASR evaluation using TorchAudio utils.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}} -{"page_content": "The CTC decoder in TorchAudio supports customizable beam search decoding with lexicon constraint. It also has optional KenLM language model support.\n\nFor more details, please check out the [API tutorial](https://pytorch.org/audio/main/tutorials/asr_inference_with_ctc_decoder_tutorial.html) and [documentation](https://pytorch.org/audio/main/prototype.ctc_decoder.html). This prototype feature is available through nightly builds.\n\n#### (Prototype) Streaming API\n\nTorchAudio started as simple audio I/O APIs that supplement PyTorch. With the recent addition of ASR models and training recipes, the project has received requests to support high-level application development.\n\nStreaming API makes it easy to develop and test the model in online inference. It utilizes ffmpeg under the hood, and enables reading media from online services and hardware devices, decoding media in an incremental manner, and applying filters and preprocessing.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}} -{"page_content": "Please checkout the [API tutorial](https://pytorch.org/audio/main/tutorials/streaming_api_tutorial.html) and [the documentation](https://pytorch.org/audio/main/prototype.io.html). There are also the [streaming ASR](https://pytorch.org/audio/main/tutorials/online_asr_tutorial.html) tutorial and the [device streaming ASR tutorial](https://pytorch.org/audio/main/tutorials/device_asr.html). This feature is available from nightly releases. Please refer to [pytorch.org](https://pytorch.org/get-started/locally/) for how to install nightly builds.\n\n## TorchText 0.12\n\n#### (Beta) RoBERTa and XLM-R Models\n\nTorchText has added support for pre-trained RoBERTa and XLM-R models. It would allow users to train end-2-end Transformer Encoder based models on standard NLP tasks using TorchText.\n\nMore specifically:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}} -{"page_content": "More specifically:\n\n- The models are torchscriptable and hence can be employed for production use-cases.\n- The model APIs let users to easily attach custom task-specific heads with pre-trained encoders.\n- The API also comes equipped with data pre-processing transforms to match the pre-trained weights and model configuration.\n\nWe have added a [tutorial](https://pytorch.org/text/main/tutorials/sst2_classification_non_distributed.html) to demonstrate SST-2 binary text classification task with pre-trained XLM-R base architecture.\n\nFor additional details on model APIs and usage examples, please refer to the [documentation](https://pytorch.org/text/main/models.html).\n\n#### (Beta) byte-level BPE tokenizer", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}} -{"page_content": "TorchText has added support for a Byte-Level BPE tokenizer, as used in GPT-2. This tokenizer is also used for tokenizing inputs to the pre-trained RoBERTa models described previously. In addition to the RoBERTa vocab, users can also load their own custom BPE vocab to use the tokenizer. Furthermore, the tokenizer is fully torchscriptable and hence can be employed for production use-cases. 
For additional details on model APIs and usage examples, please refer to the [documentation](https://pytorch.org/text/main/transforms.html#gpt2bpetokenizer).\n\n#### (Beta) Text datasets backed by TorchData\n\nTorchText has modernized its datasets by migrating from older-style Iterable Datasets to [TorchData\u2019s](https://github.com/pytorch/data#readme) DataPipes. TorchData is a library that provides modular/composable primitives, allowing users to load and transform data in performant data pipelines.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}} +{"page_content": "#### (Beta) HuBERT Pretrain Model\n\nThe masked prediction training of HuBERT model requires the masked logits, unmasked logits, and feature norm as the outputs. The logits are for cross-entropy losses and the feature norm is for penalty loss. The release adds [HuBERTPretrainModel](https://github.com/pytorch/audio/blob/main/torchaudio/models/wav2vec2/model.py#L120-L205) and corresponding factory functions ([hubert_pretrain_base](https://github.com/pytorch/audio/blob/main/torchaudio/models/wav2vec2/model.py#L964-L1027), [hubert_pretrain_large](https://github.com/pytorch/audio/blob/main/torchaudio/models/wav2vec2/model.py#L1030-L1090), and [hubert_pretrain_xlarge](https://github.com/pytorch/audio/blob/main/torchaudio/models/wav2vec2/model.py#L1093-L1153)) to enable training from scratch.\n\n#### (Prototype) CTC Beam Search Decoder", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}} +{"page_content": "#### (Prototype) CTC Beam Search Decoder\n\nIn recent releases, TorchAudio has added support for ASR models fine-tuned on CTC loss. The addition of an inference time CTC beam search decoder enables running end-to-end ASR evaluation using TorchAudio utils.\n\nThe CTC decoder in TorchAudio supports customizable beam search decoding with lexicon constraint. It also has optional KenLM language model support.\n\nFor more details, please check out the [API tutorial](https://pytorch.org/audio/main/tutorials/asr_inference_with_ctc_decoder_tutorial.html) and [documentation](https://pytorch.org/audio/main/prototype.ctc_decoder.html). This prototype feature is available through nightly builds.\n\n#### (Prototype) Streaming API\n\nTorchAudio started as simple audio I/O APIs that supplement PyTorch. With the recent addition of ASR models and training recipes, the project has received requests to support high-level application development.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}} +{"page_content": "Streaming API makes it easy to develop and test the model in online inference. It utilizes ffmpeg under the hood, and enables reading media from online services and hardware devices, decoding media in an incremental manner, and applying filters and preprocessing.\n\nPlease checkout the [API tutorial](https://pytorch.org/audio/main/tutorials/streaming_api_tutorial.html) and [the documentation](https://pytorch.org/audio/main/prototype.io.html). There are also the [streaming ASR](https://pytorch.org/audio/main/tutorials/online_asr_tutorial.html) tutorial and the [device streaming ASR tutorial](https://pytorch.org/audio/main/tutorials/device_asr.html). This feature is available from nightly releases. 
Please refer to [pytorch.org](https://pytorch.org/get-started/locally/) for how to install nightly builds.\n\n## TorchText 0.12\n\n#### (Beta) RoBERTa and XLM-R Models", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}} +{"page_content": "#### (Beta) RoBERTa and XLM-R Models\n\nTorchText has added support for pre-trained RoBERTa and XLM-R models. It would allow users to train end-2-end Transformer Encoder based models on standard NLP tasks using TorchText.\n\nMore specifically:\n\n- The models are torchscriptable and hence can be employed for production use-cases.\n- The model APIs let users to easily attach custom task-specific heads with pre-trained encoders.\n- The API also comes equipped with data pre-processing transforms to match the pre-trained weights and model configuration.\n\nWe have added a [tutorial](https://pytorch.org/text/main/tutorials/sst2_classification_non_distributed.html) to demonstrate SST-2 binary text classification task with pre-trained XLM-R base architecture.\n\nFor additional details on model APIs and usage examples, please refer to the [documentation](https://pytorch.org/text/main/models.html).\n\n#### (Beta) byte-level BPE tokenizer", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}} +{"page_content": "#### (Beta) byte-level BPE tokenizer\n\nTorchText has added support for a Byte-Level BPE tokenizer, as used in GPT-2. This tokenizer is also used for tokenizing inputs to the pre-trained RoBERTa models described previously. In addition to the RoBERTa vocab, users can also load their own custom BPE vocab to use the tokenizer. Furthermore, the tokenizer is fully torchscriptable and hence can be employed for production use-cases. For additional details on model APIs and usage examples, please refer to the [documentation](https://pytorch.org/text/main/transforms.html#gpt2bpetokenizer).\n\n#### (Beta) Text datasets backed by TorchData\n\nTorchText has modernized its datasets by migrating from older-style Iterable Datasets to [TorchData\u2019s](https://github.com/pytorch/data#readme) DataPipes. TorchData is a library that provides modular/composable primitives, allowing users to load and transform data in performant data pipelines.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "These DataPipes work out-of-the-box with PyTorch DataLoader and would enable new functionalities like auto-sharding. Users can now easily do data manipulation and pre-processing using user-defined functions and transformations in a functional style programming. Datasets backed by DataPipes also enable standard flow-control like batching, collation, shuffling and bucketizing.\n\nCollectively, DataPipes provides a comprehensive experience for data preprocessing and tensorization needs in a pythonic and flexible way for model training. 
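A minimal sketch of the functional DataPipe style described above (the in-memory toy data and transforms are illustrative; real datasets would read from files or remote stores):

```python
from torch.utils.data import DataLoader
from torchdata.datapipes.iter import IterableWrapper

# Toy pipeline: wrap an in-memory list, then filter, map, shuffle, and batch
dp = IterableWrapper([(i, f"sample {i}") for i in range(100)])
dp = dp.filter(lambda pair: pair[0] % 2 == 0)
dp = dp.map(lambda pair: pair[1].upper())
dp = dp.shuffle().batch(4)

# DataPipes plug directly into the standard DataLoader
for batch in DataLoader(dp, batch_size=None):
    print(batch)
    break
```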
We have added a [tutorial](https://pytorch.org/text/main/tutorials/sst2_classification_non_distributed.html) to demonstrate data-processing pipelining using the modernized dataset for binary text-classification.\n\nYou can learn more about TorchData DataPipe APIs in its [official documentation](https://pytorch.org/data).\n\n## TorchVision 0.12\n\n### New Models\n\nFour new model families have been released in the latest version along with pre-trained weights for their variants.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "#### #1 Object Detection\n\n[FCOS](https://arxiv.org/pdf/1904.01355.pdf) is a popular, fully convolutional, anchor-free model for object detection. In this release we include a community-contributed model implementation as well as pre-trained weights. The model was trained on COCO train2017 and can be used as follows:\n\n```python\nimport torch\nfrom torchvision import models\n\nx = [torch.rand(3, 224, 224)]\nfcos = models.detection.fcos_resnet50_fpn(pretrained=True).eval()\npredictions = fcos(x)\n```\n\nThe box AP of the pre-trained model on COCO val2017 is 39.2 (see [#4961](https://github.com/pytorch/vision/pull/4961) for more details).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "We would like to thank [Hu Ye](https://github.com/xiaohu2015) and [Zhiqiang Wang](https://github.com/zhiqwang) for contributing to the model implementation and initial training. This was the first community-contributed model in a long while, and given its success, we decided to use the learnings from this process and create a new [model contribution guidelines](https://github.com/pytorch/vision/blob/main/CONTRIBUTING_MODELS.md).\n\n#### #2 Optical Flow support and RAFT model\n\nTorchVision now supports optical flow! Optical Flow models try to predict movement in a video: given two consecutive frames, the model predicts where each pixel of the first frame ends up in the second frame. Check out our [new tutorial on Optical Flow](https://pytorch.org/vision/0.12/auto_examples/plot_optical_flow.html#sphx-glr-auto-examples-plot-optical-flow-py)!", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "We implemented a torchscript-compatible [RAFT](https://arxiv.org/abs/2003.12039) model with pre-trained weights (both normal and \u201csmall\u201d versions), and added support for [training and evaluating](https://github.com/pytorch/vision/tree/main/references/optical_flow) optical flow models. Our training scripts support distributed training across processes and nodes, leading to much faster training time than the original implementation. We also added 5 new [optical flow datasets](https://pytorch.org/vision/0.12/datasets.html#optical-flow): Flying Chairs, Flying Things, Sintel, Kitti, and HD1K.\n\n
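As a rough sketch of how the new RAFT builders can be called (the random tensors below stand in for real frames, which should be pre-processed as described in the optical flow tutorial):

```python
import torch
from torchvision.models.optical_flow import raft_large

# Two consecutive frames as a batch of 1; spatial dimensions should be divisible by 8
img1 = torch.rand(1, 3, 224, 224)
img2 = torch.rand(1, 3, 224, 224)

model = raft_large(pretrained=True).eval()
with torch.no_grad():
    flow_predictions = model(img1, img2)  # list of iteratively refined flow estimates
predicted_flow = flow_predictions[-1]     # final estimate, shape (1, 2, 224, 224)
```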

\n \n

\n\n#### #3. Image Classification", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}} -{"page_content": "[Vision Transformer](https://arxiv.org/abs/2010.11929) (ViT) and [ConvNeXt](https://arxiv.org/abs/2201.03545) are two popular architectures which can be used as image classifiers or as backbones for downstream vision tasks. In this release we include 8 pre-trained weights for their classification variants. The models were trained on ImageNet and can be used as follows:\n\n```python\nimport torch\nfrom torchvision import models\n\nx = torch.rand(1, 3, 224, 224)\nvit = models.vit_b_16(pretrained=True).eval()\nconvnext = models.convnext_tiny(pretrained=True).eval()\npredictions1 = vit(x)\npredictions2 = convnext(x)\n```\n\nThe accuracies of the pre-trained models obtained on ImageNet val are seen below:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}} +{"page_content": "#### #3. Image Classification\n\n[Vision Transformer](https://arxiv.org/abs/2010.11929) (ViT) and [ConvNeXt](https://arxiv.org/abs/2201.03545) are two popular architectures which can be used as image classifiers or as backbones for downstream vision tasks. In this release we include 8 pre-trained weights for their classification variants. The models were trained on ImageNet and can be used as follows:\n\n```python\nimport torch\nfrom torchvision import models\n\nx = torch.rand(1, 3, 224, 224)\nvit = models.vit_b_16(pretrained=True).eval()\nconvnext = models.convnext_tiny(pretrained=True).eval()\npredictions1 = vit(x)\npredictions2 = convnext(x)\n```\n\nThe accuracies of the pre-trained models obtained on ImageNet val are seen below:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "| **Model** | **Acc@1** | **Acc@5** |\n| -------------- | --------: | --------: |\n| vit_b_16 | 81.072 | 95.318 |\n| vit_b_32 | 75.912 | 92.466 |\n| vit_l_16 | 79.662 | 94.638 |\n| vit_l_32 | 76.972 | 93.07 |\n| convnext_tiny | 82.52 | 96.146 |\n| convnext_small | 83.616 | 96.65 |\n| convnext_base | 84.062 | 96.87 |\n| convnext_large | 84.414 | 96.976 |\n\nThe above models have been trained using an adjusted version of our new [training recipe](https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/) and this allows us to offer models with accuracies significantly higher than the ones on the original papers.\n\n#### #4. GPU Video Decoding\n\nIn this release, we add support for GPU video decoding in the video reading API. 
To use hardware-accelerated decoding, we just need to pass a cuda device to the video reading API as shown below:\n\n```python\nimport torchvision", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}} -{"page_content": "reader = torchvision.io.VideoReader(file_name, device=\"cuda:0\")\nfor frame in reader:\n print(frame)\n```\n\nWe also support seeking to anyframe or a keyframe in the video before reading, as shown below:\n\n```python\nreader.seek(seek_time)\n```\n\n### New Datasets\n\nWe have implemented 14 new [classification datasets](https://pytorch.org/vision/0.12/datasets.html#image-classification): CLEVR, GTSRB, FER2013, SUN397, Country211, Flowers102, fvgc_aircraft, OxfordIIITPet, DTD, Food 101, Rendered SST2, Stanford cars, PCAM, and EuroSAT.\n\nAs part of our work on Optical Flow support (see above for more details), we also added 5 new [optical flow datasets](https://pytorch.org/vision/0.12/datasets.html#optical-flow): Flying Chairs, Flying Things, Sintel, Kitti, and HD1K.\n\n### Other Updates", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}} +{"page_content": "```python\nimport torchvision\n\nreader = torchvision.io.VideoReader(file_name, device=\"cuda:0\")\nfor frame in reader:\n print(frame)\n```\n\nWe also support seeking to anyframe or a keyframe in the video before reading, as shown below:\n\n```python\nreader.seek(seek_time)\n```\n\n### New Datasets\n\nWe have implemented 14 new [classification datasets](https://pytorch.org/vision/0.12/datasets.html#image-classification): CLEVR, GTSRB, FER2013, SUN397, Country211, Flowers102, fvgc_aircraft, OxfordIIITPet, DTD, Food 101, Rendered SST2, Stanford cars, PCAM, and EuroSAT.\n\nAs part of our work on Optical Flow support (see above for more details), we also added 5 new [optical flow datasets](https://pytorch.org/vision/0.12/datasets.html#optical-flow): Flying Chairs, Flying Things, Sintel, Kitti, and HD1K.\n\n### Other Updates", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "- **New documentation layout**: Each function / class is now documented in a separate page, clearing up some space in the per-module pages, and easing the discovery of the proposed APIs. Compare e.g. our [previous docs](https://pytorch.org/vision/0.11/transforms.html) vs the [new ones](https://pytorch.org/vision/0.12/transforms.html). Please let us know if you have any [feedback](https://github.com/pytorch/vision/issues/5511)!\n- **New [model contribution guidelines](https://github.com/pytorch/vision/blob/main/CONTRIBUTING_MODELS.md)** have been published following the success of the [FCOS](https://github.com/pytorch/vision/pull/4961) model which was contributed by the community. These guidelines aim to be an overview of the model contribution process for anyone who would like to suggest, implement and train a new model.\n- **Upcoming Prototype API** - We are currently working on a prototype API which adds Multi-weight support on all of our model builder methods. This will enable us to offer multiple pre-trained weights, associated with their meta-data and inference transforms. 
The API is still under review and thus was not included in the release but you can read more about it on our [blogpost](https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/) and provide your feedback on the dedicated [Github issue](https://github.com/pytorch/vision/issues/5088).\n- **Changes in our deprecation policy** - Up until now, torchvision would almost never remove deprecated APIs. In order to be more aligned and consistent with pytorch core, we are updating our deprecation policy. We are now following a 2-release deprecation cycle: deprecated APIs will raise a warning for 2 versions, and will be removed after that. To reflect these changes and to smooth the transition, we have decided to:\n - Remove all APIs that had been deprecated before or on v0.8, released 1.5 years ago.\n - Update the removal timeline of all other deprecated APIs to v0.14, to reflect the new 2-cycle policy starting now in v0.12.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "### Captum 0.5\n\n[Captum](https://captum.ai/) is a PyTorch library for model interpretability. For this release, we expanded Captum with influential instances and added support for both similarity based influences and novel algorithms, [TracIn](https://arxiv.org/abs/2002.08484) and its variants. TracIn variants offer faster approximation of influence scores based on random projections for fully connected layers.\n\nMore specifically the new, influence, subsection of Captum includes:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "- **[SimilarityInfluence](https://captum.ai/api/influence.html#similarityinfluence)** computes similarity scores between test and training examples using default (cosine or euclidean) or custom user definite metrics w.r.t. given input model layers.\n- **[TracInCP](https://captum.ai/api/influence.html#tracincp)** approximates the influential score of each training example on a given test example based on the dot-product similarity between loss gradients w.r.t. model parameters for test and training examples. Note that if we use training examples as test examples then we compute self influence. This method and its variants described below also return top-k proponents and opponents which are the top-k largest positive and negative influential examples respectively.\n- **[TracInCPFast](https://captum.ai/api/influence.html#tracincpfast)** is an approximation of TracInCP that avoids computing the gradients w.r.t. large parameter matrices. It approximates influence score based on the dot products between last fully connected layer activations and loss gradients w.r.t. that layer for training and test examples.\n- **[TracInCPFastRandProj](https://captum.ai/api/influence.html#tracincpfastrandproj)** uses a nearest neighbor approximation library such as annoy to compute the dot product between the training and test quantities. 
In order to reduce the dimensionality of layer activations and corresponding gradients this method, in addition, allows to project those vectors into a lower dimensional space using random projection matrices.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}} @@ -104,25 +104,25 @@ {"page_content": "Here is a small code example of running ResNet18 with `torch.compile`:\n\n```\nimport torch\nimport torchvision\nimport torch_xla.core.xla_model as xm\n\ndef eval_model(loader):\n device = xm.xla_device()\n xla_resnet18 = torchvision.models.resnet18().to(device)\n xla_resnet18.eval()\n dynamo_resnet18 = torch.compile(\n xla_resnet18, backend='torchxla_trace_once')\n for data, _ in loader:\n output = dynamo_resnet18(data)\n```\n\nWith `torch.compile` PyTorch/XLA only traces the ResNet18 model once during the init time and executes the compiled binary everytime `dynamo_resnet18` is invoked, instead of tracing the model every step. To illustrate the benefits of Dynamo+XLA, below is an inference speedup analysis to compare Dynamo and LazyTensor (without Dynamo) using TorchBench on a Cloud TPU v4-8 where the y-axis is the speedup multiplier.\n\n\n![Inference Speedup - PyTorch/XLA Dynamo on TPU](/assets/images/2023-03-22-inferencespeedup.svg){:width=\"100%\"}", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}} {"page_content": "Dynamo for training is in the development stage with its implementation being at an earlier stage than inference. Developers are welcome to test this early feature, however, in the 2.0 release, PyTorch/XLA supports the forward and backward pass graphs and not the optimizer graph; the optimizer graph is available in the nightly builds and will land in the PyTorch/XLA 2.1 release. Below is an example of what training looks like using the ResNet18 example with `torch.compile`:\n\n```\nimport torch\nimport torchvision\nimport torch_xla.core.xla_model as xm\n\ndef train_model(model, data, target):\n loss_fn = torch.nn.CrossEntropyLoss()\n pred = model(data)\n loss = loss_fn(pred, target)\n loss.backward()\n return pred", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}} {"page_content": "def train_model_main(loader):\n device = xm.xla_device()\n xla_resnet18 = torchvision.models.resnet18().to(device)\n xla_resnet18.train()\n dynamo_train_model = torch.compile(\n train_model, backend='aot_torchxla_trace_once')\n for data, target in loader:\n output = dynamo_train_model(xla_resnet18, data, target)\n```\n\nNote that the backend for training is `aot_torchxla_trace_once` (API will be updated for stable release) whereas the inference backend is `torchxla_trace_once` (name subject to change). We expect to extract and execute 3 graphs per training step instead of 1 training step if you use the Lazy tensor. Below is a training speedup analysis to compare Dynamo and Lazy using the TorchBench on Cloud TPU v4-8.\n\n\n![Training Speedup - PyTorch/XLA Dynamo on TPU](/assets/images/2023-03-22-trainingspeedup.svg){:width=\"100%\"}\n\n\n\n## PJRT Runtime (Beta)", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}} -{"page_content": "PyTorch/XLA is migrating from XRT to the new PJRT runtime. PJRT is a better-maintained stack, with demonstrated performance advantages, including, on average, a 35% performance for training on TorchBench 2.0 models. 
It also supports a richer set of features enabling technologies like SPMD. In the PyTorch/XLA 2.0 release, PJRT is the default runtime for TPU and CPU; GPU support is in experimental state. The PJRT features included in the PyTorch/XLA 2.0 release are:\n\n* TPU runtime implementation in `libtpu` using the [PJRT Plugin API](https://github.com/openxla/community/blob/main/rfcs/20230123-pjrt-plugin.md#rfc-openxla-pjrt-plugin) improves performance by up to 30%\n* `torch.distributed` support for TPU v2 and v3, including `pjrt://` `init_method` (Experimental)\n* Single-host GPU support. Multi-host support coming soon. (Experimental)", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}} +{"page_content": "## PJRT Runtime (Beta)\n\nPyTorch/XLA is migrating from XRT to the new PJRT runtime. PJRT is a better-maintained stack, with demonstrated performance advantages, including, on average, a 35% performance for training on TorchBench 2.0 models. It also supports a richer set of features enabling technologies like SPMD. In the PyTorch/XLA 2.0 release, PJRT is the default runtime for TPU and CPU; GPU support is in experimental state. The PJRT features included in the PyTorch/XLA 2.0 release are:\n\n* TPU runtime implementation in `libtpu` using the [PJRT Plugin API](https://github.com/openxla/community/blob/main/rfcs/20230123-pjrt-plugin.md#rfc-openxla-pjrt-plugin) improves performance by up to 30%\n* `torch.distributed` support for TPU v2 and v3, including `pjrt://` `init_method` (Experimental)\n* Single-host GPU support. Multi-host support coming soon. (Experimental)", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}} {"page_content": "Switching to PJRT requires no change (or minimal change for GPUs) to user code (see [pjrt.md](https://github.com/pytorch/xla/blob/master/docs/pjrt.md) for more details). Runtime configuration is as simple as setting the `PJRT_DEVICE` environment variable to the local device type (i.e. `TPU`, `GPU`, `CPU`). Below are examples of using PJRT runtimes on different devices. \n\n```\n# TPU Device\nPJRT_DEVICE=TPU python3 xla/test/test_train_mp_imagenet.py --fake_data --batch_size=256 --num_epochs=1\n```\n\n```\n# TPU Pod Device\ngcloud alpha compute tpus tpu-vm ssh $USER-pjrt --zone=us-central2-b --project=$PROJECT --worker=all --command=\"git clone --depth=1 --branch r2.0 https://github.com/pytorch/xla.git\"\n\ngcloud alpha compute tpus tpu-vm ssh $USER-pjrt --zone=us-central2-b --project=$PROJECT --worker=all --command=\"PJRT_DEVICE=TPU python3 xla/test/test_train_mp_imagenet.py --fake_data --batch_size=256 --num_epochs=1\"\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}} {"page_content": "```\n# GPU Device (Experimental)\nPJRT_DEVICE=GPU GPU_NUM_DEVICES=4 python3 xla/test/test_train_mp_imagenet.py --fake_data --batch_size=128 --num_epochs=1\n```\n\nBelow is a performance comparison between XRT and PJRT by task on TorchBench 2.0 on v4-8 TPU. To learn more about PJRT vs. 
XRT please review the [documentation](https://github.com/pytorch/xla/blob/r2.0/docs/pjrt.md#tpu).\n\n\n![TorchBench Training Time](/assets/images/2023-03-22-torchbenchtraining.svg){:width=\"100%\"}\n\n\n\n## Parallelization\n\n\n### GSPMD (Experimental)", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}} -{"page_content": "We are delighted to introduce General and Scalable Parallelization for ML Computation Graphs ([GSPMD](https://arxiv.org/abs/2105.04663)) in PyTorch as a new experimental data & model sharding solution. [GSPMD](https://arxiv.org/abs/2105.04663) provides automatic parallelization for common ML workloads, allowing developers to write PyTorch programs as if on a single large device and without custom sharded computation ops and/or collective communication ops. The XLA compiler transforms the single device program into a partitioned one with proper collectives, based on the user provided sharding hints. The API ([RFC](https://github.com/pytorch/xla/issues/3871)) will be available in the PyTorch/XLA 2.0 release as an experimental feature on a single TPU VM host. \n\n\n#### Next Steps for GSPMD", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}} -{"page_content": "GSPMD is experimental in 2.0 release. To bring it to Stable status, we plan to address a number of feature gaps and known issues in the following releases, including multi-host support, DTensor integration, partial replication sharding, asynchronous data loading, and checkpointing. \n\n\n### FSDP (Beta)\n\nPyTorch/XLA [introduced](https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/) fully sharded data parallel (FSDP) experimental support in version 1.12. This feature is a parallel representation of PyTorch FSDP and there are subtle differences in how XLA and upstream CUDA kernels are set up. `auto_wrap_policy` is a new argument that enables developers to automatically specify conditions for propagating partitioning specifications to neural network submodules. `auto_wrap_policy`s may be simply passed in as an argument when wrapping a model with FSDP. Two `auto_wrap_policy` callables worth noting are: `size_based_auto_wrap_policy`, `transformer_auto_wrap_policy`.", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}} +{"page_content": "## Parallelization\n\n\n### GSPMD (Experimental)\n\nWe are delighted to introduce General and Scalable Parallelization for ML Computation Graphs ([GSPMD](https://arxiv.org/abs/2105.04663)) in PyTorch as a new experimental data & model sharding solution. [GSPMD](https://arxiv.org/abs/2105.04663) provides automatic parallelization for common ML workloads, allowing developers to write PyTorch programs as if on a single large device and without custom sharded computation ops and/or collective communication ops. The XLA compiler transforms the single device program into a partitioned one with proper collectives, based on the user provided sharding hints. The API ([RFC](https://github.com/pytorch/xla/issues/3871)) will be available in the PyTorch/XLA 2.0 release as an experimental feature on a single TPU VM host. \n\n\n#### Next Steps for GSPMD", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}} +{"page_content": "#### Next Steps for GSPMD\n\nGSPMD is experimental in 2.0 release. 
To bring it to Stable status, we plan to address a number of feature gaps and known issues in the following releases, including multi-host support, DTensor integration, partial replication sharding, asynchronous data loading, and checkpointing. \n\n\n### FSDP (Beta)\n\nPyTorch/XLA [introduced](https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/) fully sharded data parallel (FSDP) experimental support in version 1.12. This feature is a parallel representation of PyTorch FSDP and there are subtle differences in how XLA and upstream CUDA kernels are set up. `auto_wrap_policy` is a new argument that enables developers to automatically specify conditions for propagating partitioning specifications to neural network submodules. `auto_wrap_policy`s may be simply passed in as an argument when wrapping a model with FSDP. Two `auto_wrap_policy` callables worth noting are: `size_based_auto_wrap_policy`, `transformer_auto_wrap_policy`.", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}} {"page_content": "`size_based_auto_wrap_policy` enables users to wrap submodules with a minimum number of parameters. The example below wraps model submodules having at least 10M parameters.\n\n```\nauto_wrap_policy = partial(size_based_auto_wrap_policy, min_num_params=1e7)\n```\n\n`transformer_auto_wrap_policy` enables users to wrap all submodules that match a specific layer type. The example below wraps model submodules named `torch.nn.Conv2d`. To learn more, review [this ResNet example](https://github.com/pytorch/xla/blob/master/test/test_train_mp_imagenet_fsdp.py#L237-L255) by Ronghang Hu.\n\n```\nauto_wrap_policy = partial(transformer_auto_wrap_policy, transformer_layer_cls={torch.nn.Conv2d})\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}} {"page_content": "PyTorch/XLA FSDP is now integrated in HuggingFace trainer class ([PR](https://github.com/huggingface/transformers/pull/21406)) enabling users to train much larger models on PyTorch/XLA ([official Hugging Face documentation](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#pytorchxla-fully-sharded-data-parallel)). A 16B parameters GPT2 model trained on Cloud TPU v4-64 with this FSDP configuration achieved 39% hardware utilization.", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}} {"page_content": "\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
| **Configuration** | **Value** |
| ----------------- | --------- |
| TPU Accelerator - Num Devices | v4-64 |
| GPT2 Parameter Count | 16B |
| Layers Wrapped with FSDP | GPT2Block |
| TFLOPs / Chip | 275 |
| PFLOPs / Step | 50 |
| Hardware Utilization | 39% |
\n\n\n\n### Differences Between FSDP & GSPMD", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}} -{"page_content": "FSDP is a data parallelism technique that reduces device memory footprint by storing model parameters, optimizer states, and gradients all sharded. Note that the actual computation is still local to the device and requires all-gathering the sharded model parameters for both forward and backward passes, hence the name \u201cdata parallel\u201d. FSDP is one of the newest additions to PyTorch/XLA to scale large model training.\n\nGSPMD on the other hand, is a general parallelization system that enables various types of parallelisms, including both data and model parallelisms. PyTorch/XLA provides a sharding annotation API and XLAShardedTensor abstraction, so a user can annotate any tensor with sharding specs in the PyTorch program. Developers don\u2019t need to manually implement sharded computations or inject collective communications ops to get it right. The XLA compiler does the work so that each computation can run in a distributed manner on multiple devices.\n\n\n### Examples & Preliminary Results", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}} -{"page_content": "To learn about PyTorch/XLA parallelism sharding API, visit our [RFC](https://github.com/pytorch/xla/issues/3871) and see the [Sample Code](https://github.com/pytorch/xla/tree/r2.0/test/spmd) references. Below is a simple example to enable data and model parallelism.\n\n```\nmodel = SimpleLinear().to(xm.xla_device())\n# Sharding annotate the linear layer weights.\nxs.mark_sharding(model.fc1.weight, mesh, partition_spec)\n# Training loop\nmodel.train()\nfor step, (data, target) in enumerate(loader):\n optimizer.zero_grad()\n data = data.to(xm.xla_device())\n target = target.to(xm.xla_device())\n # Sharding annotate input data, we can shard any input\n # dimensions. Sharidng the batch dimension enables \n # data parallelism, sharding the feature dimension enables\n # spatial partitioning.\n xs.mark_sharding(data, mesh, partition_spec)\n ouput = model(data)\n loss = loss_fn(output, target)\n optimizer.step()\n xm.mark_step()\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}} +{"page_content": "### Differences Between FSDP & GSPMD\n\nFSDP is a data parallelism technique that reduces device memory footprint by storing model parameters, optimizer states, and gradients all sharded. Note that the actual computation is still local to the device and requires all-gathering the sharded model parameters for both forward and backward passes, hence the name \u201cdata parallel\u201d. FSDP is one of the newest additions to PyTorch/XLA to scale large model training.\n\nGSPMD on the other hand, is a general parallelization system that enables various types of parallelisms, including both data and model parallelisms. PyTorch/XLA provides a sharding annotation API and XLAShardedTensor abstraction, so a user can annotate any tensor with sharding specs in the PyTorch program. Developers don\u2019t need to manually implement sharded computations or inject collective communications ops to get it right. 
The XLA compiler does the work so that each computation can run in a distributed manner on multiple devices.", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}} +{"page_content": "### Examples & Preliminary Results\n\nTo learn about PyTorch/XLA parallelism sharding API, visit our [RFC](https://github.com/pytorch/xla/issues/3871) and see the [Sample Code](https://github.com/pytorch/xla/tree/r2.0/test/spmd) references. Below is a simple example to enable data and model parallelism.\n\n```\nmodel = SimpleLinear().to(xm.xla_device())\n# Sharding annotate the linear layer weights.\nxs.mark_sharding(model.fc1.weight, mesh, partition_spec)\n# Training loop\nmodel.train()\nfor step, (data, target) in enumerate(loader):\n optimizer.zero_grad()\n data = data.to(xm.xla_device())\n target = target.to(xm.xla_device())\n # Sharding annotate input data, we can shard any input\n # dimensions. Sharidng the batch dimension enables \n # data parallelism, sharding the feature dimension enables\n # spatial partitioning.\n xs.mark_sharding(data, mesh, partition_spec)\n ouput = model(data)\n loss = loss_fn(output, target)\n optimizer.step()\n xm.mark_step()\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}} {"page_content": "The following graph highlights the memory efficiency benefits of PyTorch/XLA FSDP and SPMD on Cloud TPU v4-8 running ResNet50.\n\n\n![Batch Size Scaling with Spatial Partitioning](/assets/images/2023-03-22-batchsizescaling.svg){:width=\"100%\"}\n\n\n\n## Closing Thoughts\u2026\n\nWe are excited to bring these features to the PyTorch community, and this is really just the beginning. Areas like dynamic shapes, deeper support for OpenXLA and many others are in development and we plan to put out more blogs to dive into the details. PyTorch/XLA is developed fully open source and we invite you to join the community of developers by filing issues, submitting pull requests, and sending RFCs on [GitHub](github.com/pytorch/xla). You can try PyTorch/XLA on a variety of XLA devices including TPUs and GPUs. [Here](https://colab.sandbox.google.com/github/pytorch/xla/blob/master/contrib/colab/getting-started.ipynb) is how to get started.\n\nCongratulations again to the PyTorch community on this milestone!\n\nCheers,\n\nThe PyTorch Team at Google", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: \"Compromised PyTorch-nightly dependency chain between December 25th and December 30th, 2022.\"\nauthor: The PyTorch Team\n---\n\nIf you installed PyTorch-nightly on Linux via pip between December 25, 2022 and December 30, 2022, please uninstall it and torchtriton immediately, and use the latest nightly binaries (newer than Dec 30th 2022).\n\n```bash\n$ pip3 uninstall -y torch torchvision torchaudio torchtriton\n$ pip3 cache purge\n```\n\nPyTorch-nightly Linux packages installed via pip during that time installed a dependency, torchtriton, which was compromised on the Python Package Index (PyPI) code repository and ran a malicious binary. 
This is what is known as a supply chain attack and directly affects dependencies for packages that are hosted on public package indices.\n\n**NOTE:** Users of the PyTorch **stable** packages **are not** affected by this issue.\n\n\n## How to check if your Python environment is affected", "metadata": {"source": "https://pytorch.org/blog/compromised-nightly-dependency/", "category": "pytorch blogs"}} {"page_content": "The following command searches for the malicious binary in the torchtriton package (`PYTHON_SITE_PACKAGES/triton/runtime/triton`) and prints out whether your current Python environment is affected or not.\n\n```bash\npython3 -c \"import pathlib;import importlib.util;s=importlib.util.find_spec('triton'); affected=any(x.name == 'triton' for x in (pathlib.Path(s.submodule_search_locations[0] if s is not None else '/' ) / 'runtime').glob('*'));print('You are {}affected'.format('' if affected else 'not '))\"\n```\n\nThe malicious binary is executed when the triton package is imported, which requires explicit code to do and is not PyTorch\u2019s default behavior.\n\n## The Background", "metadata": {"source": "https://pytorch.org/blog/compromised-nightly-dependency/", "category": "pytorch blogs"}} {"page_content": "## The Background\n\nAt around 4:40pm GMT on December 30 (Friday), we learned about a malicious dependency package (`torchtriton`) that was uploaded to the Python Package Index (PyPI) code repository with the same package name as the one we ship on the [PyTorch nightly package index](https://download.pytorch.org/whl/nightly). Since the [PyPI index takes precedence](https://github.com/pypa/pip/issues/8606), this malicious package was being installed instead of the version from our official repository. This design enables somebody to register a package by the same name as one that exists in a third party index, and pip will install their version by default.\n\nThis malicious package has the same name `torchtriton` but added in code that uploads sensitive data from the machine.\n\n\n## What we know\n\ntorchtriton on PyPI contains a malicious triton binary which is installed at `PYTHON_SITE_PACKAGES/triton/runtime/triton`. Its SHA256 hash is listed below.", "metadata": {"source": "https://pytorch.org/blog/compromised-nightly-dependency/", "category": "pytorch blogs"}} {"page_content": "`SHA256(triton)= 2385b29489cd9e35f92c072780f903ae2e517ed422eae67246ae50a5cc738a0e`\n\nThe binary\u2019s main function does the following:\n\n- Get system information\n - nameservers from `/etc/resolv.conf`\n - hostname from `gethostname()`\n - current username from `getlogin()`\n - current working directory name from `getcwd()`\n - environment variables\n- Read the following files\n - `/etc/hosts`\n - `/etc/passwd`\n - The first 1,000 files in `$HOME/*`\n - `$HOME/.gitconfig`\n - `$HOME/.ssh/*`\n- Upload all of this information, including file contents, via encrypted DNS queries to the domain *.h4ck[.]cfd, using the DNS server wheezy[.]io\n\nThe binary\u2019s file upload functionality is limited to files less than 99,999 bytes in size. 
It also uploads only the first 1,000 files in $HOME (but all files < 99,999 bytes in the .ssh directory).\n\n## Steps taken towards mitigation", "metadata": {"source": "https://pytorch.org/blog/compromised-nightly-dependency/", "category": "pytorch blogs"}} -{"page_content": "- torchtriton has been removed as a dependency for our nightly packages and replaced with pytorch-triton ([pytorch/pytorch#91539](https://github.com/pytorch/pytorch/pull/91539)) and a dummy package registered on PyPI (so that this issue doesn\u2019t repeat)\n- All nightly packages that depend on torchtriton have been removed from our package indices at https://download.pytorch.org until further notice\n- We have reached out to the PyPI security team to get proper ownership of the `torchtriton` package on PyPI and to delete the malicious version", "metadata": {"source": "https://pytorch.org/blog/compromised-nightly-dependency/", "category": "pytorch blogs"}} +{"page_content": "## Steps taken towards mitigation\n\n- torchtriton has been removed as a dependency for our nightly packages and replaced with pytorch-triton ([pytorch/pytorch#91539](https://github.com/pytorch/pytorch/pull/91539)) and a dummy package registered on PyPI (so that this issue doesn\u2019t repeat)\n- All nightly packages that depend on torchtriton have been removed from our package indices at https://download.pytorch.org until further notice\n- We have reached out to the PyPI security team to get proper ownership of the `torchtriton` package on PyPI and to delete the malicious version", "metadata": {"source": "https://pytorch.org/blog/compromised-nightly-dependency/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'Running PyTorch Models on Jetson Nano'\nauthor: Jeff Tang, Hamid Shojanazeri, Geeta Chauhan\nfeatured-img: 'assets/images/pytorch-logo.jpg'\n---\n\n### Overview\nNVIDIA [Jetson Nano](https://developer.nvidia.com/embedded/jetson-nano-developer-kit), part of the [Jetson family of products](https://developer.nvidia.com/embedded/jetson-modules) or Jetson modules, is a small yet powerful Linux (Ubuntu) based embedded computer with 2/4GB GPU. With it, you can run many PyTorch models efficiently. This document summarizes our experience of running different deep learning models using 3 different mechanisms on Jetson Nano:\n\n 1. Jetson Inference the higher-level NVIDIA API that has built-in support for running most common computer vision models which can be transfer-learned with PyTorch on the Jetson platform.", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}} {"page_content": "2. TensorRT, an SDK for high-performance inference from NVIDIA that requires the conversion of a PyTorch model to ONNX, and then to the TensorRT engine file that the TensorRT runtime can run.\n\n 3. PyTorch with the direct PyTorch API `torch.nn` for inference.\n\n### Setting up Jetson Nano\nAfter purchasing a Jetson Nano [here](https://developer.nvidia.com/buy-jetson?product=jetson_nano&location=US), simply follow the clear step-by-step [instructions](https://developer.nvidia.com/embedded/learn/get-started-jetson-nano-devkit) to download and write the Jetson Nano Developer Kit SD Card Image to a microSD card, and complete the setup. 
After the setup is done and the Nano is booted, you\u2019ll see the standard Linux prompt along with the username and the Nano name used in the setup.\n\nTo check the GPU status on Nano, run the following commands:\n\n```\nsudo pip3 install jetson-stats\nsudo jtop\n```\n\nYou\u2019ll see information, including:", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}} -{"page_content": "
\n \n
\n\nYou can also see the installed CUDA version:\n\n```\n$ ls -lt /usr/local\nlrwxrwxrwx 1 root root 22 Aug 2 01:47 cuda -> /etc/alternatives/cuda\nlrwxrwxrwx 1 root root 25 Aug 2 01:47 cuda-10 -> /etc/alternatives/cuda-10\ndrwxr-xr-x 12 root root 4096 Aug 2 01:47 cuda-10.2\n```\n\nTo use a camera on Jetson Nano, for example, Arducam 8MP IMX219, follow the instructions [here](https://www.arducam.com/docs/camera-for-jetson-nano/mipi-camera-modules-for-jetson-nano/driver-installation/) or run the commands below after [installing a camera module](https://developer.nvidia.com/embedded/learn/jetson-nano-2gb-devkit-user-guide#id-.JetsonNano2GBDeveloperKitUserGuidevbatuu_v1.0-Camera):\n\n```\ncd ~\nwget https://github.com/ArduCAM/MIPI_Camera/releases/download/v0.0.3/install_full.sh\nchmod +x install_full.sh\n./install_full.sh -m arducam\n```", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}} +{"page_content": "You\u2019ll see information, including:\n\n
\n \n
\n\nYou can also see the installed CUDA version:\n\n```\n$ ls -lt /usr/local\nlrwxrwxrwx 1 root root 22 Aug 2 01:47 cuda -> /etc/alternatives/cuda\nlrwxrwxrwx 1 root root 25 Aug 2 01:47 cuda-10 -> /etc/alternatives/cuda-10\ndrwxr-xr-x 12 root root 4096 Aug 2 01:47 cuda-10.2\n```\n\nTo use a camera on Jetson Nano, for example, Arducam 8MP IMX219, follow the instructions [here](https://www.arducam.com/docs/camera-for-jetson-nano/mipi-camera-modules-for-jetson-nano/driver-installation/) or run the commands below after [installing a camera module](https://developer.nvidia.com/embedded/learn/jetson-nano-2gb-devkit-user-guide#id-.JetsonNano2GBDeveloperKitUserGuidevbatuu_v1.0-Camera):\n\n```\ncd ~\nwget https://github.com/ArduCAM/MIPI_Camera/releases/download/v0.0.3/install_full.sh\nchmod +x install_full.sh\n./install_full.sh -m arducam\n```", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}} {"page_content": "Another way to do this is to use the original Jetson Nano camera driver:\n\n```\nsudo dpkg -r arducam-nvidia-l4t-kernel\nsudo shutdown -r now\n```\n\nThen, use ls /dev/video0 to confirm the camera is found:\n\n```\n$ ls /dev/video0\n/dev/video0\n```\n\nAnd finally, the following command to see the camera in action:\n\n```\nnvgstcapture-1.0 --orientation=2\n```\n\n### Using Jetson Inference\nNVIDIA [Jetson Inference](https://github.com/dusty-nv/jetson-inference) API offers the easiest way to run image recognition, object detection, semantic segmentation, and pose estimation models on Jetson Nano. Jetson Inference has TensorRT built-in, so it\u2019s very fast. \n\nTo test run Jetson Inference, first clone the repo and download the models:\n\n```\ngit clone --recursive https://github.com/dusty-nv/jetson-inference\ncd jetson-inference\n```\n\nThen use the pre-built [Docker Container](https://github.com/dusty-nv/jetson-inference/blob/master/docs/jetpack-setup-2.md) that already has PyTorch installed to test run the models:", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}} {"page_content": "```\ndocker/run.sh --volume ~/jetson_inference:/jetson_inference\n```\n\nTo run image recognition, object detection, semantic segmentation, and pose estimation models on test images, use the following:\n\n```\ncd build/aarch64/bin\n./imagenet.py images/jellyfish.jpg /jetson_inference/jellyfish.jpg\n./segnet.py images/dog.jpg /jetson_inference/dog.jpeg\n./detectnet.py images/peds_0.jpg /jetson_inference/peds_0.jpg\n./posenet.py images/humans_0.jpg /jetson_inference/pose_humans_0.jpg\n```\n\nFour result images from running the four different models will be generated. Exit the docker image to see them:\n\n```\n$ ls -lt ~/jetson_inference/\n-rw-r--r-- 1 root root 68834 Oct 15 21:30 pose_humans_0.jpg\n-rw-r--r-- 1 root root 914058 Oct 15 21:30 peds_0.jpg\n-rw-r--r-- 1 root root 666239 Oct 15 21:30 dog.jpeg\n-rw-r--r-- 1 root root 179760 Oct 15 21:29 jellyfish.jpg\n```", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}} {"page_content": "
\n \"Using\n \"Using\n
\n\n\n
\n \"Using\n \"Using\n
\n\nYou can also use the docker image to run PyTorch models because the image has PyTorch, torchvision and torchaudio installed:\n\n```\n# pip list|grep torch\ntorch (1.9.0)\ntorchaudio (0.9.0a0+33b2469)\ntorchvision (0.10.0a0+300a8a4)\n```", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}} @@ -138,25 +138,25 @@ {"page_content": "1. Use Jetson as a portable GPU device to run an NN chess engine model: \n[https://medium.com/@ezchess/jetson-lc0-running-leela-chess-zero-on-nvidia-jetson-a-portable-gpu-device-a213afc9c018](https://medium.com/@ezchess/jetson-lc0-running-leela-chess-zero-on-nvidia-jetson-a-portable-gpu-device-a213afc9c018)\n\n2. A MaskEraser app using PyTorch and torchvision, installed directly with pip:\n[https://github.com/INTEC-ATI/MaskEraser#install-pytorch](https://github.com/INTEC-ATI/MaskEraser#install-pytorch)", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch 1.9 Release, including torch.linalg and Mobile Interpreter'\nauthor: Team PyTorch \n---\n\nWe are excited to announce the release of PyTorch 1.9. The release is composed of more than 3,400 commits since 1.8, made by 398 contributors. The release notes are available [here](https://github.com/pytorch/pytorch/releases). Highlights include:\n1. Major improvements to support scientific computing, including *torch.linalg*, *torch.special*, and Complex Autograd\n2. Major improvements in on-device binary size with Mobile Interpreter\n3. Native support for elastic-fault tolerance training through the upstreaming of TorchElastic into PyTorch Core\n4. Major updates to the PyTorch RPC framework to support large scale distributed training with GPU support\n5. New APIs to optimize performance and packaging for model inference deployment \n6. Support for Distributed training, GPU utilization and SM efficiency in the PyTorch Profiler", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}} {"page_content": "Along with 1.9, we are also releasing major updates to the PyTorch libraries, which you can read about in [this blog post](https://pytorch.org/blog/pytorch-1.9-new-library-releases/). \n\nWe\u2019d like to thank the community for their support and work on this latest release. We\u2019d especially like to thank Quansight and Microsoft for their contributions.\n\nFeatures in PyTorch releases are classified as Stable, Beta, and Prototype. You can learn more about the definitions in [this blog post](https://pytorch.org/blog/pytorch-feature-classification-changes/). \n\n# Frontend APIs\n\n### (Stable) *torch.linalg*", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}} -{"page_content": "In 1.9, the *torch.linalg* module is moving to a stable release. 
Linear algebra is essential to deep learning and scientific computing, and the *torch.linalg* module extends PyTorch\u2019s support for it with implementations of every function from [NumPy\u2019s linear algebra module](https://numpy.org/doc/stable/reference/routines.linalg.html) (now with support for accelerators and autograd) and more, like [*torch.linalg.matrix_norm*](https://pytorch.org/docs/1.9.0/generated/torch.linalg.matrix_norm.html?highlight=matrix_norm#torch.linalg.matrix_norm) and [*torch.linalg.householder_product*](https://pytorch.org/docs/1.9.0/generated/torch.linalg.householder_product.html?highlight=householder_product#torch.linalg.householder_product). This makes the module immediately familiar to users who have worked with NumPy. Refer to [the documentation](https://pytorch.org/docs/1.9.0/linalg.html?highlight=linalg#module-torch.linalg) here.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}} +{"page_content": "# Frontend APIs\n\n### (Stable) *torch.linalg*\n\nIn 1.9, the *torch.linalg* module is moving to a stable release. Linear algebra is essential to deep learning and scientific computing, and the *torch.linalg* module extends PyTorch\u2019s support for it with implementations of every function from [NumPy\u2019s linear algebra module](https://numpy.org/doc/stable/reference/routines.linalg.html) (now with support for accelerators and autograd) and more, like [*torch.linalg.matrix_norm*](https://pytorch.org/docs/1.9.0/generated/torch.linalg.matrix_norm.html?highlight=matrix_norm#torch.linalg.matrix_norm) and [*torch.linalg.householder_product*](https://pytorch.org/docs/1.9.0/generated/torch.linalg.householder_product.html?highlight=householder_product#torch.linalg.householder_product). This makes the module immediately familiar to users who have worked with NumPy. Refer to [the documentation](https://pytorch.org/docs/1.9.0/linalg.html?highlight=linalg#module-torch.linalg) here.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}} {"page_content": "We plan to publish another blog post with more details on the *torch.linalg* module next week!\n\n### (Stable) Complex Autograd \n\nThe Complex Autograd feature, released as a beta in PyTorch 1.8, is now stable. Since the beta release, we have extended support for Complex Autograd for over 98% operators in PyTorch 1.9, improved testing for complex operators by adding more OpInfos, and added greater validation through TorchAudio migration to native complex tensors (refer to [this issue](https://github.com/pytorch/audio/issues/1337)). \n\nThis feature provides users the functionality to calculate complex gradients and optimize real valued loss functions with complex variables. This is a required feature for multiple current and downstream prospective users of complex numbers in PyTorch like TorchAudio, ESPNet, Asteroid, and FastMRI. Refer to [the documentation](https://pytorch.org/docs/1.9.0/notes/autograd.html#autograd-for-complex-numbers) for more details. \n\n### (Stable) torch.use_deterministic_algorithms()", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}} -{"page_content": "To help with debugging and writing reproducible programs, PyTorch 1.9 includes a *torch.use_determinstic_algorithms* option. When this setting is enabled, operations will behave deterministically, if possible, or throw a runtime error if they might behave nondeterministically. 
Here are a couple examples:\n\n```python\n>>> a = torch.randn(100, 100, 100, device='cuda').to_sparse()\n>>> b = torch.randn(100, 100, 100, device='cuda')\n\n# Sparse-dense CUDA bmm is usually nondeterministic\n>>> torch.bmm(a, b).eq(torch.bmm(a, b)).all().item()\nFalse\n\n>>> torch.use_deterministic_algorithms(True)\n\n# Now torch.bmm gives the same result each time, but with reduced performance\n>>> torch.bmm(a, b).eq(torch.bmm(a, b)).all().item()\nTrue\n\n# CUDA kthvalue has no deterministic algorithm, so it throws a runtime error\n>>> torch.zeros(10000, device='cuda').kthvalue(1)\nRuntimeError: kthvalue CUDA does not have a deterministic implementation...\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}} +{"page_content": "### (Stable) torch.use_deterministic_algorithms() \n\nTo help with debugging and writing reproducible programs, PyTorch 1.9 includes a *torch.use_determinstic_algorithms* option. When this setting is enabled, operations will behave deterministically, if possible, or throw a runtime error if they might behave nondeterministically. Here are a couple examples:\n\n```python\n>>> a = torch.randn(100, 100, 100, device='cuda').to_sparse()\n>>> b = torch.randn(100, 100, 100, device='cuda')\n\n# Sparse-dense CUDA bmm is usually nondeterministic\n>>> torch.bmm(a, b).eq(torch.bmm(a, b)).all().item()\nFalse\n\n>>> torch.use_deterministic_algorithms(True)\n\n# Now torch.bmm gives the same result each time, but with reduced performance\n>>> torch.bmm(a, b).eq(torch.bmm(a, b)).all().item()\nTrue\n\n# CUDA kthvalue has no deterministic algorithm, so it throws a runtime error\n>>> torch.zeros(10000, device='cuda').kthvalue(1)\nRuntimeError: kthvalue CUDA does not have a deterministic implementation...\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}} {"page_content": "PyTorch 1.9 adds deterministic implementations for a number of indexing operations, too, including *index_add*, *index_copy*, and *index_put with accum=False*. For more details, refer to the [documentation](https://pytorch.org/docs/1.9.0/generated/torch.use_deterministic_algorithms.html?highlight=use_deterministic#torch.use_deterministic_algorithms) and [reproducibility note](https://pytorch.org/docs/1.9.0/notes/randomness.html?highlight=reproducibility).\n\n### (Beta) *torch.special*\n\nA *torch.special* module, analogous to [SciPy\u2019s special module](https://docs.scipy.org/doc/scipy/reference/special.html), is now available in beta. This module contains many functions useful for scientific computing and working with distributions such as *iv*, *ive*, *erfcx*, *logerfc*, and *logerfcx*. Refer to [the documentation](https://pytorch.org/docs/master/special.html) for more details. \n\n### (Beta) nn.Module parameterization", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}} -{"page_content": "```nn.Module``` parameterization allows users to parametrize any parameter or buffer of an ```nn.Module``` without modifying the ```nn.Module``` itself. It allows you to constrain the space in which your parameters live without the need for special optimization methods.\n\nThis also contains a new implementation of the ```spectral_norm``` parametrization for PyTorch 1.9. More parametrization will be added to this feature (weight_norm, matrix constraints and part of pruning) for the feature to become stable in 1.10. 
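As a small, illustrative sketch of the mechanism (layer sizes are arbitrary), the `spectral_norm` parametrization can be attached to an existing module without changing its code:

```python
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import spectral_norm

layer = spectral_norm(nn.Linear(4, 4))  # registers a parametrization on `weight`
print(layer)  # a ParametrizedLinear whose weight is recomputed on access

# The spectral norm (largest singular value) of the effective weight is ~1
print(torch.linalg.matrix_norm(layer.weight, ord=2))
```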
For more details, refer to the [documentation](https://pytorch.org/docs/1.9.0/generated/torch.nn.utils.parametrizations.spectral_norm.html?highlight=parametrize) and [tutorial](https://pytorch.org/tutorials/intermediate/parametrizations.html).\n\n# PyTorch Mobile\n\n### (Beta) Mobile Interpreter \n\nWe are releasing Mobile Interpreter, a streamlined version of the PyTorch runtime, in beta. The Interpreter will execute PyTorch programs in edge devices, with reduced binary size footprint.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}} -{"page_content": "Mobile Interpreter is one of the top requested features for PyTorch Mobile. This new release will significantly reduce binary size compared with the current on-device runtime. In order for you to get the binary size improvements with our interpreter (which can reduce the binary size up to ~75% for a typical application) follow these instructions. As an example, using Mobile Interpreter, we can reach 2.6 MB compressed with MobileNetV2 in arm64-v7a Android. With this latest release we are making it much simpler to integrate the interpreter by providing pre-built libraries for iOS and Android.\n\n### TorchVision Library", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}} -{"page_content": "Starting from 1.9, users can use the TorchVision library on their iOS/Android apps. The Torchvision library contains the C++ TorchVision ops and needs to be linked together with the main PyTorch library for iOS, for Android it can be added as a gradle dependency. This allows using TorchVision prebuilt MaskRCNN operators for object detections and segmentation. To learn more about the library, please refer to our tutorials and [demo apps](https://github.com/pytorch/android-demo-app/tree/master/D2Go). \n\n### Demo apps", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}} +{"page_content": "### (Beta) nn.Module parameterization \n\n```nn.Module``` parameterization allows users to parametrize any parameter or buffer of an ```nn.Module``` without modifying the ```nn.Module``` itself. It allows you to constrain the space in which your parameters live without the need for special optimization methods.\n\nThis also contains a new implementation of the ```spectral_norm``` parametrization for PyTorch 1.9. More parametrization will be added to this feature (weight_norm, matrix constraints and part of pruning) for the feature to become stable in 1.10. For more details, refer to the [documentation](https://pytorch.org/docs/1.9.0/generated/torch.nn.utils.parametrizations.spectral_norm.html?highlight=parametrize) and [tutorial](https://pytorch.org/tutorials/intermediate/parametrizations.html).\n\n# PyTorch Mobile\n\n### (Beta) Mobile Interpreter", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}} +{"page_content": "# PyTorch Mobile\n\n### (Beta) Mobile Interpreter \n\nWe are releasing Mobile Interpreter, a streamlined version of the PyTorch runtime, in beta. The Interpreter will execute PyTorch programs in edge devices, with reduced binary size footprint. \n\nMobile Interpreter is one of the top requested features for PyTorch Mobile. This new release will significantly reduce binary size compared with the current on-device runtime. 
In order for you to get the binary size improvements with our interpreter (which can reduce the binary size up to ~75% for a typical application) follow these instructions. As an example, using Mobile Interpreter, we can reach 2.6 MB compressed with MobileNetV2 in arm64-v7a Android. With this latest release we are making it much simpler to integrate the interpreter by providing pre-built libraries for iOS and Android.\n\n### TorchVision Library", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}} +{"page_content": "### TorchVision Library\n\nStarting from 1.9, users can use the TorchVision library on their iOS/Android apps. The Torchvision library contains the C++ TorchVision ops and needs to be linked together with the main PyTorch library for iOS, for Android it can be added as a gradle dependency. This allows using TorchVision prebuilt MaskRCNN operators for object detections and segmentation. To learn more about the library, please refer to our tutorials and [demo apps](https://github.com/pytorch/android-demo-app/tree/master/D2Go). \n\n### Demo apps", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}} {"page_content": "### Demo apps\n\nWe are releasing a new video app based on [PyTorch Video](https://pytorchvideo.org/) library and an updated speech recognition app based on the latest torchaudio, wave2vec model. Both are available on [iOS](https://github.com/pytorch/ios-demo-app) and [Android](https://github.com/pytorch/android-demo-app). In addition, we have updated the seven Computer Vision and three Natural Language Processing demo apps, including the HuggingFace DistilBERT, and the DeiT vision transformer models, with PyTorch Mobile v1.9. With the addition of these two apps, we now offer a full suite of demo apps covering image, text, audio, and video. To get started check out our [iOS demo apps](https://github.com/pytorch/ios-demo-app) and [Android demo apps](https://github.com/pytorch/android-demo-app).\n\n
\n\n# Distributed Training\n\n### (Beta) TorchElastic is now part of core", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}} -{"page_content": "[TorchElastic](https://github.com/pytorch/pytorch/issues/50621), which was open sourced over a year ago in the [pytorch/elastic](https://github.com/pytorch/elastic) github repository, is a runner and coordinator for PyTorch worker processes. Since then, it has been adopted by various distributed torch use-cases: 1) [deepspeech.pytorch](https://medium.com/pytorch/training-deepspeech-using-torchelastic-ad013539682) 2) [pytorch-lightning](https://pytorch-lightning.readthedocs.io/en/stable/advanced/multi_gpu.html#torchelastic) 3) [Kubernetes CRD](https://github.com/pytorch/elastic/blob/master/kubernetes/README.md). Now, it is part of PyTorch core.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}} +{"page_content": "### (Beta) TorchElastic is now part of core \n\n[TorchElastic](https://github.com/pytorch/pytorch/issues/50621), which was open sourced over a year ago in the [pytorch/elastic](https://github.com/pytorch/elastic) github repository, is a runner and coordinator for PyTorch worker processes. Since then, it has been adopted by various distributed torch use-cases: 1) [deepspeech.pytorch](https://medium.com/pytorch/training-deepspeech-using-torchelastic-ad013539682) 2) [pytorch-lightning](https://pytorch-lightning.readthedocs.io/en/stable/advanced/multi_gpu.html#torchelastic) 3) [Kubernetes CRD](https://github.com/pytorch/elastic/blob/master/kubernetes/README.md). Now, it is part of PyTorch core.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}} {"page_content": "As its name suggests, the core function of TorcheElastic is to gracefully handle scaling events. A notable corollary of elasticity is that peer discovery and rank assignment are built into TorchElastic enabling users to run distributed training on preemptible instances without requiring a gang scheduler. As a side note, [etcd](https://etcd.io/) used to be a hard dependency of TorchElastic. With the upstream, this is no longer the case since we have added a \u201cstandalone\u201d rendezvous based on c10d::Store. For more details, refer to the [documentation](https://pytorch.org/docs/1.9.0/distributed.elastic.html).\n\n### (Beta) Distributed Training Updates\n\nIn addition to TorchElastic, there are a number of beta features available in the distributed package:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}} {"page_content": "* **(Beta) CUDA support is available in RPC**: Compared to CPU RPC and general-purpose RPC frameworks, CUDA RPC is a much more efficient way for P2P Tensor communication. It is built on top of TensorPipe which can automatically choose a communication channel for each Tensor based on Tensor device type and channel availability on both the caller and the callee. Existing TensorPipe channels cover NVLink, InfiniBand, SHM, CMA, TCP, etc. 
See [this recipe](https://pytorch.org/tutorials/recipes/cuda_rpc.html) for how CUDA RPC can attain a 34x speedup compared to CPU RPC.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
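{"page_content": "As a hedged sketch of what the CUDA RPC setup looks like (loosely following the linked recipe; the worker names, device indices, and two-process layout are illustrative assumptions), the caller maps its local CUDA device to a device on the callee so tensors can be sent without a CPU round trip:\n\n```python\nimport torch\nimport torch.distributed.rpc as rpc\n\ndef run_worker0():\n    # Requires MASTER_ADDR/MASTER_PORT to be set and a matching 'worker1' process\n    opts = rpc.TensorPipeRpcBackendOptions(num_worker_threads=8)\n    # Map the caller's cuda:0 to the callee's cuda:1 so CUDA tensors use the fastest available channel\n    opts.set_device_map('worker1', {0: 1})\n    rpc.init_rpc('worker0', rank=0, world_size=2, rpc_backend_options=opts)\n\n    x = torch.ones(2, 2, device='cuda:0')\n    y = rpc.rpc_sync('worker1', torch.add, args=(x, 1))  # executes on worker1's cuda:1\n    rpc.shutdown()\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}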
\n \n
", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}} +{"page_content": "### (Stable) Freezing API \n\nModule Freezing is the process of inlining module parameters and attributes values as constants into the TorchScript internal representation. This allows further optimization and specialization of your program, both for TorchScript optimizations and lowering to other backends. It is used by [optimize_for_mobile API](https://github.com/pytorch/pytorch/blob/master/torch/utils/mobile_optimizer.py), ONNX, and others. \n\nFreezing is recommended for model deployment. It helps TorchScript JIT optimizations optimize away overhead and bookkeeping that is necessary for training, tuning, or debugging PyTorch models. It enables graph fusions that are not semantically valid on non-frozen graphs - such as fusing Conv-BN. For more details, refer to the [documentation](https://pytorch.org/docs/1.9.0/generated/torch.jit.freeze.html).\n\n### (Beta) PyTorch Profiler \n\n
", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}} {"page_content": "The new PyTorch Profiler graduates to beta and leverages [Kineto](https://github.com/pytorch/kineto/) for GPU profiling, TensorBoard for visualization and is now the standard across our tutorials and documentation. \n\nPyTorch 1.9 extends support for the new *torch.profiler* API to more builds, including Windows and Mac and is recommended in most cases instead of the previous *torch.autograd.profiler* API. The new API supports existing profiler features, integrates with CUPTI library (Linux-only) to trace on-device CUDA kernels and provides support for long-running jobs, e.g.:\n\n```python\ndef trace_handler(p):\n output = p.key_averages().table(sort_by=\"self_cuda_time_total\", row_limit=10)\n print(output)\n p.export_chrome_trace(\"/tmp/trace_\" + str(p.step_num) + \".json\")", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}} {"page_content": "with profile(\n activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],\n # schedule argument specifies the iterations on which the profiler is active\n schedule=torch.profiler.schedule(\n wait=1,\n warmup=1,\n active=2),\n # on_trace_ready argument specifies the handler for the traces\n on_trace_ready=trace_handler\n) as p:\n for idx in range(8):\n model(inputs)\n # profiler will trace iterations 2 and 3, and then 6 and 7 (counting from zero)\n p.step()\n```\n\nMore usage examples can be found on the [profiler recipe page](https://pytorch.org/tutorials/recipes/recipes/profiler_recipe.html). \n\nThe PyTorch Profiler Tensorboard plugin has new features for:\n* Distributed Training summary view with communications overview for NCCL\n* GPU Utilization and SM Efficiency in Trace view and GPU operators view\n* Memory Profiling view\n* Jump to source when launched from Microsoft VSCode\n* Ability for load traces from cloud object storage systems", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}} {"page_content": "### (Beta) Inference Mode API \n\nInference Mode API allows significant speed-up for inference workloads while remaining safe and ensuring no incorrect gradients can ever be computed. It offers the best possible performance when no autograd is required. For more details, refer to [the documentation for inference mode itself](https://pytorch.org/docs/1.9.0/generated/torch.inference_mode.html?highlight=inference%20mode#torch.inference_mode) and [the documentation explaining when to use it and the difference with no_grad mode](https://pytorch.org/docs/1.9.0/notes/autograd.html#locally-disabling-gradient-computation).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}} {"page_content": "### (Beta) *torch.package* \n \n*torch.package* is a new way to package PyTorch models in a self-contained, stable format. A package will include both the model\u2019s data (e.g. parameters, buffers) and its code (model architecture). Packaging a model with its full set of Python dependencies, combined with a description of a conda environment with pinned versions, can be used to easily reproduce training. Representing a model in a self-contained artifact will also allow it to be published and transferred throughout a production ML pipeline while retaining the flexibility of a pure-Python representation. 
For more details, refer to [the documentation](https://pytorch.org/docs/1.9.0/package.html).\n\n### (Prototype) prepare_for_inference", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}} -{"page_content": "prepare_for_inference is a new prototype feature that takes in a module and performs graph-level optimizations to improve inference performance, depending on the device. It is meant to be a PyTorch-native option that requires minimal changes to user\u2019s workflows. For more details, see [the documentation](https://github.com/pytorch/pytorch/blob/master/torch/jit/_freeze.py#L168) for the Torchscript version [here](https://github.com/pytorch/pytorch/blob/master/torch/jit/_freeze.py#L168) or the FX version [here](https://github.com/pytorch/pytorch/blob/master/torch/fx/experimental/optimization.py#L234).\n\n### (Prototype) Profile-directed typing in TorchScript", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}} +{"page_content": "### (Prototype) prepare_for_inference \n\nprepare_for_inference is a new prototype feature that takes in a module and performs graph-level optimizations to improve inference performance, depending on the device. It is meant to be a PyTorch-native option that requires minimal changes to user\u2019s workflows. For more details, see [the documentation](https://github.com/pytorch/pytorch/blob/master/torch/jit/_freeze.py#L168) for the Torchscript version [here](https://github.com/pytorch/pytorch/blob/master/torch/jit/_freeze.py#L168) or the FX version [here](https://github.com/pytorch/pytorch/blob/master/torch/fx/experimental/optimization.py#L234).\n\n### (Prototype) Profile-directed typing in TorchScript", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}} {"page_content": "TorchScript has a hard requirement for source code to have type annotations in order for compilation to be successful. For a long time, it was only possible to add missing or incorrect type annotations through trial and error (i.e., by fixing the type-checking errors generated by *torch.jit.script* one by one), which was inefficient and time consuming. Now, we have enabled profile directed typing for *torch.jit.script* by leveraging existing tools like MonkeyType, which makes the process much easier, faster, and more efficient. For more details, refer to [the documentation](https://pytorch.org/docs/1.9.0/jit.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}} {"page_content": "Thanks for reading. If you\u2019re interested in these updates and want to join the PyTorch community, we encourage you to join the [discussion forums](https://discuss.pytorch.org/) and [open GitHub issues](https://github.com/pytorch/pytorch/issues). To get the latest news from PyTorch, follow us on [Facebook](https://www.facebook.com/pytorch/), [Twitter](https://twitter.com/PyTorch), [Medium](https://medium.com/pytorch), [YouTube](https://www.youtube.com/pytorch), or [LinkedIn](https://www.linkedin.com/company/pytorch). 
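{"page_content": "As a minimal sketch of the *torch.package* workflow (not from the original post; a real project would typically also declare ```intern```/```extern``` patterns for its own Python dependencies), a model can be saved into a single self-contained archive and loaded back without the original source tree:\n\n```python\nimport torch\nfrom torch import nn\nfrom torch.package import PackageExporter, PackageImporter\n\nmodel = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2))\n\n# Write the model's code and weights into one archive\nwith PackageExporter('model_package.pt') as exporter:\n    exporter.save_pickle('model', 'model.pkl', model)\n\n# Later, possibly in a different process or environment\nimporter = PackageImporter('model_package.pt')\nloaded = importer.load_pickle('model', 'model.pkl')\nprint(loaded(torch.randn(1, 8)))\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}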
\n\nCheers!\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'Announcing PyTorch Developer Day 2020'\nauthor: Team PyTorch\n---\n\nStarting this year, we plan to host two separate events for PyTorch: one for developers and users to discuss core technical development, ideas and roadmaps called **\u201cDeveloper Day\u201d**, and another for the PyTorch ecosystem and industry communities to showcase their work and discover opportunities to collaborate called **\u201cEcosystem Day\u201d** (scheduled for early 2021).\n\n
\n\nThe **PyTorch Developer Day** (#PTD2) is kicking off on November 12, 2020, 8AM PST with a full day of technical talks on a variety of topics, including updates to the core framework, new tools and libraries to support development across a variety of domains. You'll also see talks covering the latest research around systems and tooling in ML.", "metadata": {"source": "https://pytorch.org/blog/pytorch-developer-day-2020/", "category": "pytorch blogs"}} @@ -190,15 +190,15 @@ {"page_content": "---\nlayout: blog_detail\ntitle: 'Adding a Contributor License Agreement for PyTorch'\nauthor: Team PyTorch\n---\n\nTo ensure the ongoing growth and success of the framework, we're introducing the use of the Apache Contributor License Agreement (CLA) for PyTorch. We care deeply about the broad community of contributors who make PyTorch such a great framework, so we want to take a moment to explain why we are adding a CLA.\n\n#### Why Does PyTorch Need a CLA?\n\nCLAs help clarify that users and maintainers have the relevant rights to use and maintain code contributed to an open source project, while allowing contributors to retain ownership rights to their code.", "metadata": {"source": "https://pytorch.org/blog/a-contributor-license-agreement-for-pytorch/", "category": "pytorch blogs"}} {"page_content": "PyTorch has grown from a small group of enthusiasts to a now global community with over 1,600 contributors from dozens of countries, each bringing their own diverse perspectives, values and approaches to collaboration. Looking forward, clarity about how this collaboration is happening is an important milestone for the framework as we continue to build a stronger, safer and more scalable community around PyTorch.", "metadata": {"source": "https://pytorch.org/blog/a-contributor-license-agreement-for-pytorch/", "category": "pytorch blogs"}} {"page_content": "The text of the Apache CLA can be found [here](https://www.apache.org/licenses/contributor-agreements.html), together with an accompanying [FAQ](https://www.apache.org/licenses/cla-faq.html). The language in the PyTorch CLA is identical to the Apache template. Although CLAs have been the subject of significant discussion in the open source community, we are seeing that using a CLA, and particularly the Apache CLA, is now standard practice when projects and communities reach a certain scale. Popular projects that have adopted some type of CLA include: Visual Studio Code, Flutter, TensorFlow, kubernetes, Ubuntu, Django, Python, Go, Android and many others.\n\n#### What is Not Changing", "metadata": {"source": "https://pytorch.org/blog/a-contributor-license-agreement-for-pytorch/", "category": "pytorch blogs"}} -{"page_content": "PyTorch\u2019s BSD license is **not** changing. There is no impact to PyTorch users. CLAs will only be required for new contributions to the project. For past contributions, no action is necessary. Everything else stays the same, whether it\u2019s IP ownership, workflows, contributor roles or anything else that you\u2019ve come to expect from PyTorch. \n\n#### How the New CLA will Work\n\nMoving forward, all contributors to projects under the PyTorch GitHub organization will need to sign a CLA to merge their contributions. \n\n
\n \n
\n\nIf you've contributed to other Facebook Open Source projects, you may have already signed the CLA, and no action is required. If you have not signed the CLA, a GitHub check will prompt you to sign it before your pull requests can be merged. You can reach the CLA from this [link](https://code.facebook.com/cla).\n\n
\n \n
", "metadata": {"source": "https://pytorch.org/blog/a-contributor-license-agreement-for-pytorch/", "category": "pytorch blogs"}} -{"page_content": "If you're contributing as an individual, meaning the code is not something you worked on as part of your job, you should sign the individual contributor agreement. This agreement associates your GitHub username with future contributions and only needs to be signed once.\n\nIf you're contributing as part of your employment, you may need to sign the [corporate contributor agreement](https://code.facebook.com/cla/corporate). Check with your legal team on filling this out. Also you will include a list of github ids from your company.\n\nAs always, we continue to be humbled and grateful for all your support, and we look forward to scaling PyTorch together to even greater heights in the years to come.\n\nThank you!\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/a-contributor-license-agreement-for-pytorch/", "category": "pytorch blogs"}} +{"page_content": "#### What is Not Changing\n\nPyTorch\u2019s BSD license is **not** changing. There is no impact to PyTorch users. CLAs will only be required for new contributions to the project. For past contributions, no action is necessary. Everything else stays the same, whether it\u2019s IP ownership, workflows, contributor roles or anything else that you\u2019ve come to expect from PyTorch. \n\n#### How the New CLA will Work\n\nMoving forward, all contributors to projects under the PyTorch GitHub organization will need to sign a CLA to merge their contributions. \n\n
\n\nIf you've contributed to other Facebook Open Source projects, you may have already signed the CLA, and no action is required. If you have not signed the CLA, a GitHub check will prompt you to sign it before your pull requests can be merged. You can reach the CLA from this [link](https://code.facebook.com/cla).", "metadata": {"source": "https://pytorch.org/blog/a-contributor-license-agreement-for-pytorch/", "category": "pytorch blogs"}} +{"page_content": "
\n\nIf you're contributing as an individual, meaning the code is not something you worked on as part of your job, you should sign the individual contributor agreement. This agreement associates your GitHub username with future contributions and only needs to be signed once.\n\nIf you're contributing as part of your employment, you may need to sign the [corporate contributor agreement](https://code.facebook.com/cla/corporate). Check with your legal team before filling it out; you will also need to include a list of GitHub IDs from your company.\n\nAs always, we continue to be humbled and grateful for all your support, and we look forward to scaling PyTorch together to even greater heights in the years to come.\n\nThank you!\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/a-contributor-license-agreement-for-pytorch/", "category": "pytorch blogs"}}

\n\t\"CResNet-50\n\t
\n\t\tFigure 1: ResNet-50 takes an image of a bird and transforms that into the abstract concept \"bird\". Source: Bird image from ImageNet.\n

\n\nWe know, though, that there are many sequential \u201clayers\u201d within the ResNet-50 architecture that transform the input step-by-step. In Figure 2 below, we peek under the hood to show the layers within ResNet-50, and we also show the intermediate transformations of the input as it passes through those layers.", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}

\n\t\"ResNet-50\n\t
\n\t\tFigure 2: ResNet-50 transforms the input image in multiple steps. Conceptually, we may access the intermediate transformation of the image after each one of these steps. Source: Bird image from ImageNet.\n

\n\n\n## Existing Methods In PyTorch: Pros and Cons\n\nThere were already a few ways of doing feature extraction in PyTorch prior to FX based feature extraction being introduced.\n\nTo illustrate these, let\u2019s consider a simple convolutional neural network that does the following\n\n* Applies several \u201cblocks\u201d each with several convolution layers within.\n* After several blocks, it uses a global average pool and flatten operation.\n* Finally it uses a single output classification layer.\n\n```python\nimport torch\nfrom torch import nn", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}} -{"page_content": "class ConvBlock(nn.Module):\n \"\"\"\n Applies `num_layers` 3x3 convolutions each followed by ReLU then downsamples\n via 2x2 max pool.\n \"\"\"\n\n def __init__(self, num_layers, in_channels, out_channels):\n super().__init__()\n self.convs = nn.ModuleList(\n [nn.Sequential(\n nn.Conv2d(in_channels if i==0 else out_channels, out_channels, 3, padding=1),\n nn.ReLU()\n )\n for i in range(num_layers)]\n )\n self.downsample = nn.MaxPool2d(kernel_size=2, stride=2)\n \n def forward(self, x):\n for conv in self.convs:\n x = conv(x)\n x = self.downsample(x)\n return x\n \n\nclass CNN(nn.Module):\n \"\"\"\n Applies several ConvBlocks each doubling the number of channels, and\n halving the feature map size, before taking a global average and classifying.\n \"\"\"", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}} +{"page_content": "```python\nimport torch\nfrom torch import nn\n\n\nclass ConvBlock(nn.Module):\n \"\"\"\n Applies `num_layers` 3x3 convolutions each followed by ReLU then downsamples\n via 2x2 max pool.\n \"\"\"\n\n def __init__(self, num_layers, in_channels, out_channels):\n super().__init__()\n self.convs = nn.ModuleList(\n [nn.Sequential(\n nn.Conv2d(in_channels if i==0 else out_channels, out_channels, 3, padding=1),\n nn.ReLU()\n )\n for i in range(num_layers)]\n )\n self.downsample = nn.MaxPool2d(kernel_size=2, stride=2)\n \n def forward(self, x):\n for conv in self.convs:\n x = conv(x)\n x = self.downsample(x)\n return x\n \n\nclass CNN(nn.Module):\n \"\"\"\n Applies several ConvBlocks each doubling the number of channels, and\n halving the feature map size, before taking a global average and classifying.\n \"\"\"", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}} {"page_content": "def __init__(self, in_channels, num_blocks, num_classes):\n super().__init__()\n first_channels = 64\n self.blocks = nn.ModuleList(\n [ConvBlock(\n 2 if i==0 else 3,\n in_channels=(in_channels if i == 0 else first_channels*(2**(i-1))),\n out_channels=first_channels*(2**i))\n for i in range(num_blocks)]\n )\n self.global_pool = nn.AdaptiveAvgPool2d((1, 1))\n self.cls = nn.Linear(first_channels*(2**(num_blocks-1)), num_classes)\n\n def forward(self, x):\n for block in self.blocks:\n x = block(x)\n x = self.global_pool(x)\n x = x.flatten(1)\n x = self.cls(x)\n return x\n\n\nmodel = CNN(3, 4, 10)\nout = model(torch.zeros(1, 3, 32, 32)) # This will be the final logits over classes\n\n```\n\nLet\u2019s say we want to get the final feature map before global average pooling. 
We could do the following:\n\n### Modify the forward method", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}} -{"page_content": "```python\ndef forward(self, x):\n for block in self.blocks:\n x = block(x)\n self.final_feature_map = x\n x = self.global_pool(x)\n x = x.flatten(1)\n x = self.cls(x)\n return x\n```\n\nOr return it directly:\n\n```python\ndef forward(self, x):\n for block in self.blocks:\n x = block(x)\n final_feature_map = x\n x = self.global_pool(x)\n x = x.flatten(1)\n x = self.cls(x)\n return x, final_feature_map\n```\nThat looks pretty easy. But there are some downsides here which all stem from the same underlying issue: that is, modifying the source code is not ideal:", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}} +{"page_content": "### Modify the forward method\n\n```python\ndef forward(self, x):\n for block in self.blocks:\n x = block(x)\n self.final_feature_map = x\n x = self.global_pool(x)\n x = x.flatten(1)\n x = self.cls(x)\n return x\n```\n\nOr return it directly:\n\n```python\ndef forward(self, x):\n for block in self.blocks:\n x = block(x)\n final_feature_map = x\n x = self.global_pool(x)\n x = x.flatten(1)\n x = self.cls(x)\n return x, final_feature_map\n```\nThat looks pretty easy. But there are some downsides here which all stem from the same underlying issue: that is, modifying the source code is not ideal:", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}} {"page_content": "* It\u2019s not always easy to access and change given the practical considerations of a project.\n* If we want flexibility (switching feature extraction on or off, or having variations on it), we need to further adapt the source code to support that.\n* It\u2019s not always just a question of inserting a single line of code. Think about how you would go about getting the feature map from one of the intermediate blocks with the way I\u2019ve written this module.\n* Overall, we\u2019d rather avoid the overhead of maintaining source code for a model, when we actually don\u2019t need to change anything about how it works.\n\nOne can see how this downside can start to get a lot more thorny when dealing with larger, more complicated models, and trying to get at features from within nested submodules.\n\n### Write a new module using the parameters from the original one\n\nFollowing on the example from above, say we want to get a feature map from each block. We could write a new module like so:", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}} {"page_content": "```python\nclass CNNFeatures(nn.Module):\n def __init__(self, backbone):\n super().__init__()\n self.blocks = backbone.blocks\n\n def forward(self, x):\n feature_maps = []\n for block in self.blocks:\n x = block(x)\n feature_maps.append(x)\n return feature_maps\n\n\nbackbone = CNN(3, 4, 10)\nmodel = CNNFeatures(backbone)\nout = model(torch.zeros(1, 3, 32, 32)) # This is now a list of Tensors, each representing a feature map\n```\n\nIn fact, this is much like the method that TorchVision used internally to make many of its detection models. 
\n\nAlthough this approach solves some of the issues with modifying the source code directly, there are still some major downsides:", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}} {"page_content": "* It\u2019s only really straight-forward to access the outputs of top-level submodules. Dealing with nested submodules rapidly becomes complicated.\n* We have to be careful not to miss any important operations in between the input and the output. We introduce potential for errors in transcribing the exact functionality of the original module to the new module.\n\nOverall, this method and the last both have the complication of tying in feature extraction with the model\u2019s source code itself. Indeed, if we examine the source code for TorchVision models we might suspect that some of the design choices were influenced by the desire to use them in this way for downstream tasks.\n\n### Use hooks\n\nHooks move us away from the paradigm of writing source code, towards one of specifying outputs. Considering our toy CNN example above, and the goal of getting feature maps for each layer, we could use hooks like this:", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}} @@ -216,8 +216,8 @@ {"page_content": "| | Can use source code as is without any modifications or rewriting | Full flexibility in accessing features | Drops unnecessary computational steps | TorchScript friendly |\n|-------------------------------------------------------------------|:-----------------------------------------------------------------:|:--------------------------------------------------------------------------------------:|:--------------------------------------:|:--------------------:|\n| Modify forward method | NO | Technically yes. Depends on how much code you\u2019re willing to write. So in practice, NO. | YES | YES | \n|-------------------------------------------------------------------|:-----------------------------------------------------------------:|:--------------------------------------------------------------------------------------:|:--------------------------------------:|:--------------------:|\n| New module that reuses submodules / parameters of original module | NO | Technically yes. Depends on how much code you\u2019re willing to write. So in practice, NO. | YES | YES |\n|-------------------------------------------------------------------|:-----------------------------------------------------------------:|:--------------------------------------------------------------------------------------:|:--------------------------------------:|:--------------------:|\n| Hooks | YES | Mostly YES. 
Only outputs of submodules | NO | NO |\n|-------------------------------------------------------------------|:-----------------------------------------------------------------:|:--------------------------------------------------------------------------------------:|:--------------------------------------:|:--------------------:|\n| FX | YES | YES | YES | YES |\n|-------------------------------------------------------------------|:-----------------------------------------------------------------:|:--------------------------------------------------------------------------------------:|:--------------------------------------:|:--------------------:|", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}} {"page_content": "Table 2: A copy of Table 1 with an added row for FX feature extraction. FX feature extraction gets YES across the board!\n\n\n## Current FX Limitations\n\nAlthough I would have loved to end the post there, FX does have some of its own limitations which boil down to:\n\n1. There may be some Python code that isn\u2019t yet handled by FX when it comes to the step of interpretation and translation into a graph.\n2. Dynamic control flow can\u2019t be represented in terms of a static graph.\n\nThe easiest thing to do when these problems crop up is to bundle the underlying code into a \u201cleaf node\u201d. Recall the example graph from Figure 3? Conceptually, we may agree that the `submodule` should be treated as a node in itself rather than a set of nodes representing the underlying operations. If we do so, we can redraw the graph as:", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}} {"page_content": "

\n\t\"The\n\t
\n\t\tFigure 5: The individual operations within `submodule` may (left - within red box), may be consolidated into one node (right - node #2) if we consider the `submodule` as a \"leaf\" node.\n

\n\n\nWe would want to do so if there is some problematic code within the submodule, but we don\u2019t have any need for extracting any intermediate transformations from within it. In practice, this is easily achievable by providing a keyword argument to create_feature_extractor or get_graph_node_names.\n\n\n```python\nmodel = CNN(3, 4, 10)\nnodes, _ = get_graph_node_names(model, tracer_kwargs={'leaf_modules': [ConvBlock]})\nprint(nodes)\n```\n\nfor which the output will be:", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}} -{"page_content": "```python\n['x', 'blocks.0', 'blocks.1', 'blocks.2', 'blocks.3', 'global_pool', 'flatten', 'cls']\n```\n\nNotice how, as compared to previously, all the nodes for any given `ConvBlock` are consolidated into a single node.\n\nWe could do something similar with functions. For example, Python\u2019s inbuilt `len` needs to be wrapped and the result should be treated as a leaf node. Here\u2019s how you can do that with core FX functionality:\n\n```python\ntorch.fx.wrap('len')\n\nclass MyModule(nn.Module):\n def forward(self, x):\n x += 1\n len(x)\n\nmodel = MyModule()\nfeature_extractor = create_feature_extractor(model, return_nodes=['add'])\n```\n\nFor functions you define, you may instead use another keyword argument to `create_feature_extractor` (minor detail: here\u2019s[ why you might want to do it this way instead](https://github.com/pytorch/pytorch/issues/62021#issue-950458396)):\n\n\n```python\ndef myfunc(x):\n return len(x)\n\nclass MyModule(nn.Module):\n def forward(self, x):\n x += 1\n myfunc(x)", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}} -{"page_content": "model = MyModule()\nfeature_extractor = create_feature_extractor(\n model, return_nodes=['add'], tracer_kwargs={'autowrap_functions': [myfunc]})\n```\n\nNotice that none of the fixes above involved modifying source code.\n\nOf course, there may be times when the very intermediate transformation one is trying to get access to is within the same forward method or function that is causing problems. Here, we can\u2019t just treat that module or function as a leaf node, because then we can\u2019t access the intermediate transformations within. In these cases, some rewriting of the source code will be needed. Here are some examples (not exhaustive)", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}} +{"page_content": "for which the output will be:\n\n```python\n['x', 'blocks.0', 'blocks.1', 'blocks.2', 'blocks.3', 'global_pool', 'flatten', 'cls']\n```\n\nNotice how, as compared to previously, all the nodes for any given `ConvBlock` are consolidated into a single node.\n\nWe could do something similar with functions. For example, Python\u2019s inbuilt `len` needs to be wrapped and the result should be treated as a leaf node. 
Here\u2019s how you can do that with core FX functionality:\n\n```python\ntorch.fx.wrap('len')\n\nclass MyModule(nn.Module):\n def forward(self, x):\n x += 1\n len(x)\n\nmodel = MyModule()\nfeature_extractor = create_feature_extractor(model, return_nodes=['add'])\n```\n\nFor functions you define, you may instead use another keyword argument to `create_feature_extractor` (minor detail: here\u2019s[ why you might want to do it this way instead](https://github.com/pytorch/pytorch/issues/62021#issue-950458396)):\n\n\n```python\ndef myfunc(x):\n return len(x)", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}} +{"page_content": "```python\ndef myfunc(x):\n return len(x)\n\nclass MyModule(nn.Module):\n def forward(self, x):\n x += 1\n myfunc(x)\n\nmodel = MyModule()\nfeature_extractor = create_feature_extractor(\n model, return_nodes=['add'], tracer_kwargs={'autowrap_functions': [myfunc]})\n```\n\nNotice that none of the fixes above involved modifying source code.\n\nOf course, there may be times when the very intermediate transformation one is trying to get access to is within the same forward method or function that is causing problems. Here, we can\u2019t just treat that module or function as a leaf node, because then we can\u2019t access the intermediate transformations within. In these cases, some rewriting of the source code will be needed. Here are some examples (not exhaustive)", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}} {"page_content": "- FX will raise an error when trying to trace through code with an `assert` statement. In this case you may need to remove that assertion or switch it with [`torch._assert`](https://pytorch.org/docs/stable/generated/torch._assert.html) (this is not a public function - so consider it a bandaid and use with caution).\n- Symbolically tracing in-place changes to slices of tensors is not supported. You will need to make a new variable for the slice, apply the operation, then reconstruct the original tensor using concatenation or stacking.\n- Representing dynamic control flow in a static graph is just not logically possible. See if you can distill the coded logic down to something that is not dynamic - see FX documentation for tips.\n\nIn general, you may consult the FX documentation for more detail on the [limitations of symbolic tracing](https://pytorch.org/docs/stable/fx.html#limitations-of-symbolic-tracing) and the possible workarounds.\n\n## Conclusion", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}} {"page_content": "## Conclusion\n\nWe did a quick recap on feature extraction and why one might want to do it. Although there are existing methods for doing feature extraction in PyTorch they all have rather significant shortcomings. We learned how TorchVision\u2019s FX feature extraction utility works and what makes it so versatile compared to the existing methods. While there are still some minor kinks to iron out for the latter, we understand the limitations, and can trade them off against the limitations of other methods depending on our use case. 
Hopefully by adding this new utility to your PyTorch toolkit, you\u2019re now equipped to handle the vast majority of feature extraction requirements you may come across.\n\nHappy coding!", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch Ecosystem Day 2021 Recap and New Contributor Resources'\nauthor: Team PyTorch\n---\n\nThank you to our incredible community for making the first ever PyTorch Ecosystem Day a success! The day was filled with discussions on new developments, trends and challenges showcased through 71 posters, 32 breakout sessions and 6 keynote speakers. \n\n
\n\nSpecial thanks to our keynote speakers: Piotr Bialecki, Ritchie Ng, Miquel Farr\u00e9, Joe Spisak, Geeta Chauhan, and Suraj Subramanian, who shared updates from the latest PyTorch release, exciting work being done with partners, a use case example from Disney, the growth and development of the PyTorch community in Asia Pacific, and the latest contributor highlights.", "metadata": {"source": "https://pytorch.org/blog/ecosystem-day-2021-recap/", "category": "pytorch blogs"}}


", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}} {"page_content": "

\nEven if you correctly georeference images during indexing, if you don't project them to a common CRS, you'll end up with rotated images with nodata values around them, and the images won't be pixel-aligned.\n

\n\n# The solution\n\nAt the moment, it can be quite challenging to work with both deep learning models and geospatial data without having expertise in both of these very different fields. To address these challenges, we've built TorchGeo, a PyTorch domain library for working with geospatial data. TorchGeo is designed to make it simple:\n\n1. for machine learning experts to work with geospatial data, and\n2. for remote sensing experts to explore machine learning solutions.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}} {"page_content": "TorchGeo is not just a research project, but a production-quality library that uses continuous integration to test every commit with a range of Python versions on a range of platforms (Linux, macOS, Windows). It can be easily installed with any of your favorite package managers, including pip, conda, and [spack](https://spack.io):\n\n```\n$ pip install torchgeo\n```\n\nTorchGeo is designed to have the same API as other PyTorch domain libraries like torchvision, torchtext, and torchaudio. If you already use torchvision in your workflow for computer vision datasets, you can switch to TorchGeo by changing only a few lines of code. All TorchGeo datasets and samplers are compatible with the PyTorch ``DataLoader`` class, meaning that you can take advantage of wrapper libraries like [PyTorch Lightning](https://www.pytorchlightning.ai/) for distributed training. In the following sections, we'll explore possible use cases for TorchGeo to show how simple it is to use.\n\n# Geospatial datasets and samplers", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}} -{"page_content": "

\n \n

\n\n

\nExample application in which we combine A) a scene from Landsat 8 and B) Cropland Data Layer labels, even though these files are in different EPSG projections. We want to sample patches C) and D) from these datasets using a geospatial bounding box as an index.\n

\n\nMany remote sensing applications involve working with [*geospatial datasets*](https://torchgeo.readthedocs.io/en/latest/api/datasets.html#geospatial-datasets) \u2014datasets with geographic metadata. In TorchGeo, we define a ``GeoDataset`` class to represent these kinds of datasets. Instead of being indexed by an integer, each ``GeoDataset`` is indexed by a spatiotemporal bounding box, meaning that two or more datasets covering a different geographic extent can be intelligently combined.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}} -{"page_content": "In this example, we show how easy it is to work with geospatial data and to sample small image patches from a combination of Landsat and Cropland Data Layer (CDL) data using TorchGeo. First, we assume that the user has Landsat 7 and 8 imagery downloaded. Since Landsat 8 has more spectral bands than Landsat 7, we'll only use the bands that both satellites have in common. We'll create a single dataset including all images from both Landsat 7 and 8 data by taking the union between these two datasets.\n\n```c++\nfrom torch.utils.data import DataLoader\nfrom torchgeo.datasets import CDL, Landsat7, Landsat8, stack_samples\nfrom torchgeo.samplers import RandomGeoSampler\n\nlandsat7 = Landsat7(root=\"...\")\nlandsat8 = Landsat8(root=\"...\", bands=Landsat8.all_bands[1:-2])\nlandsat = landsat7 | landsat8\n```", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}} -{"page_content": "Next, we take the intersection between this dataset and the CDL dataset. We want to take the intersection instead of the union to ensure that we only sample from regions where we have both Landsat and CDL data. Note that we can automatically download and checksum CDL data. Also note that each of these datasets may contain files in different CRSs or resolutions, but TorchGeo automatically ensures that a matching CRS and resolution is used.\n\n```c++\ncdl = CDL(root=\"...\", download=True, checksum=True)\ndataset = landsat & cdl\n```", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}} +{"page_content": "# Geospatial datasets and samplers\n\n


\n\n

\nExample application in which we combine A) a scene from Landsat 8 and B) Cropland Data Layer labels, even though these files are in different EPSG projections. We want to sample patches C) and D) from these datasets using a geospatial bounding box as an index.\n

", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}} +{"page_content": "Many remote sensing applications involve working with [*geospatial datasets*](https://torchgeo.readthedocs.io/en/latest/api/datasets.html#geospatial-datasets) \u2014datasets with geographic metadata. In TorchGeo, we define a ``GeoDataset`` class to represent these kinds of datasets. Instead of being indexed by an integer, each ``GeoDataset`` is indexed by a spatiotemporal bounding box, meaning that two or more datasets covering a different geographic extent can be intelligently combined.\n\nIn this example, we show how easy it is to work with geospatial data and to sample small image patches from a combination of Landsat and Cropland Data Layer (CDL) data using TorchGeo. First, we assume that the user has Landsat 7 and 8 imagery downloaded. Since Landsat 8 has more spectral bands than Landsat 7, we'll only use the bands that both satellites have in common. We'll create a single dataset including all images from both Landsat 7 and 8 data by taking the union between these two datasets.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}} +{"page_content": "```c++\nfrom torch.utils.data import DataLoader\nfrom torchgeo.datasets import CDL, Landsat7, Landsat8, stack_samples\nfrom torchgeo.samplers import RandomGeoSampler\n\nlandsat7 = Landsat7(root=\"...\")\nlandsat8 = Landsat8(root=\"...\", bands=Landsat8.all_bands[1:-2])\nlandsat = landsat7 | landsat8\n```\n\nNext, we take the intersection between this dataset and the CDL dataset. We want to take the intersection instead of the union to ensure that we only sample from regions where we have both Landsat and CDL data. Note that we can automatically download and checksum CDL data. Also note that each of these datasets may contain files in different CRSs or resolutions, but TorchGeo automatically ensures that a matching CRS and resolution is used.\n\n```c++\ncdl = CDL(root=\"...\", download=True, checksum=True)\ndataset = landsat & cdl\n```", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}} {"page_content": "This dataset can now be used with a PyTorch data loader. Unlike benchmark datasets, geospatial datasets often include very large images. For example, the CDL dataset consists of a single image covering the entire contiguous United States. In order to sample from these datasets using geospatial coordinates, TorchGeo defines a number of [*samplers*](https://torchgeo.readthedocs.io/en/latest/api/samplers.html). In this example, we'll use a random sampler that returns 256 x 256 pixel images and 10,000 samples per epoch. We'll also use a custom collation function to combine each sample dictionary into a mini-batch of samples.\n\n```c++\nsampler = RandomGeoSampler(dataset, size=256, length=10000)\ndataloader = DataLoader(dataset, batch_size=128, sampler=sampler, collate_fn=stack_samples)\n```\n\nThis data loader can now be used in your normal training/evaluation pipeline.\n\n```c++\nfor batch in dataloader:\n image = batch[\"image\"]\n mask = batch[\"mask\"]", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}} {"page_content": "# train a model, or make predictions using a pre-trained model\n```\n\nMany applications involve intelligently composing datasets based on geospatial metadata like this. 
For example, users may want to:\n\n- Combine datasets for multiple image sources and treat them as equivalent (e.g., Landsat 7 and 8)\n- Combine datasets for disparate geospatial locations (e.g., Chesapeake NY and PA)\n\nThese combinations require that all queries are present in *at least one* dataset, and can be created using a ``UnionDataset``. Similarly, users may want to:\n\n- Combine image and target labels and sample from both simultaneously (e.g., Landsat and CDL)\n- Combine datasets for multiple image sources for multimodal learning or data fusion (e.g., Landsat and Sentinel)\n\nThese combinations require that all queries are present in *both* datasets, and can be created using an ``IntersectionDataset``. TorchGeo automatically composes these datasets for you when you use the intersection (``&``) and union \\(``|``\\) operators.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}} {"page_content": "# Multispectral and geospatial transforms\n\nIn deep learning, it's common to augment and transform the data so that models are robust to variations in the input space. Geospatial data can have variations such as seasonal changes and warping effects, as well as image processing and capture issues like cloud cover and atmospheric distortion. TorchGeo utilizes augmentations and transforms from the [Kornia](https://kornia.github.io/) library, which supports GPU acceleration and supports multispectral imagery with more than 3 channels.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}} @@ -243,7 +243,7 @@ {"page_content": "
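{"page_content": "As a short sketch of the two compositions described above (dataset root paths are placeholders, and we assume the ``ChesapeakeNY``/``ChesapeakePA`` classes from ``torchgeo.datasets``):\n\n```python\nfrom torchgeo.datasets import CDL, ChesapeakeNY, ChesapeakePA, Landsat8\n\n# UnionDataset: a query only needs to hit one of the members\n# (equivalent sources or disjoint regions, e.g. Chesapeake NY and PA)\nchesapeake = ChesapeakeNY(root='...') | ChesapeakePA(root='...')\n\n# IntersectionDataset: a query must be present in both members\n# (e.g. imagery and its labels)\ntraining_data = Landsat8(root='...') & CDL(root='...')\n```", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}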

\nTrue color (left) and NDVI (right) of the Texas Hill Region, taken on November 16, 2018 by the Sentinel-2 satellite. In the NDVI image, red indicates water bodies, yellow indicates barren soil, light green indicates unhealthy vegetation, and dark green indicates healthy vegetation.\n

\n\n# Benchmark datasets\n\nOne of the driving factors behind progress in computer vision is the existence of standardized benchmark datasets like ImageNet and MNIST. Using these datasets, researchers can directly compare the performance of different models and training procedures to determine which perform the best. In the remote sensing domain, there are many such datasets, but due to the aforementioned difficulties of working with this data and the lack of existing libraries for loading these datasets, many researchers opt to use their own custom datasets.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}} {"page_content": "One of the goals of TorchGeo is to provide easy-to-use data loaders for these existing datasets. TorchGeo includes a number of [*benchmark datasets*](https://torchgeo.readthedocs.io/en/latest/api/datasets.html#non-geospatial-datasets) \u2014datasets that include both input images and target labels. This includes datasets for tasks like image classification, regression, semantic segmentation, object detection, instance segmentation, change detection, and more.\n\nIf you've used torchvision before, these types of datasets should be familiar. In this example, we'll create a dataset for the Northwestern Polytechnical University (NWPU) very-high-resolution ten-class (VHR-10) geospatial object detection dataset. This dataset can be automatically downloaded, checksummed, and extracted, just like with torchvision.\n\n```c++\nfrom torch.utils.data import DataLoader\nfrom torchgeo.datasets import VHR10", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}} {"page_content": "dataset = VHR10(root=\"...\", download=True, checksum=True)\ndataloader = DataLoader(dataset, batch_size=128, shuffle=True, num_workers=4)\n\nfor batch in dataloader:\n image = batch[\"image\"]\n label = batch[\"label\"]\n\n # train a model, or make predictions using a pre-trained model\n```\n\nAll TorchGeo datasets are compatible with PyTorch data loaders, making them easy to integrate into existing training workflows. The only difference between a benchmark dataset in TorchGeo and a similar dataset in torchvision is that each dataset returns a dictionary with keys for each PyTorch ``Tensor``.\n\n


\n\n

\nExample predictions from a Mask R-CNN model trained on the NWPU VHR-10 dataset. The model predicts sharp bounding boxes and masks for all objects with high confidence scores.\n

\n\n# Reproducibility with PyTorch Lightning", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}} -{"page_content": "Another key goal of TorchGeo is reproducibility. For many of these benchmark datasets, there is no predefined train-val-test split, or the predefined split has issues with class imbalance or geographic distribution. As a result, the performance metrics reported in the literature either can't be reproduced, or aren't indicative of how well a pre-trained model would work in a different geographic location.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}} +{"page_content": "# Reproducibility with PyTorch Lightning\n\nAnother key goal of TorchGeo is reproducibility. For many of these benchmark datasets, there is no predefined train-val-test split, or the predefined split has issues with class imbalance or geographic distribution. As a result, the performance metrics reported in the literature either can't be reproduced, or aren't indicative of how well a pre-trained model would work in a different geographic location.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}} {"page_content": "In order to facilitate direct comparisons between results published in the literature and further reduce the boilerplate code needed to run experiments with datasets in TorchGeo, we have created PyTorch Lightning [*datamodules*](https://torchgeo.readthedocs.io/en/latest/api/datamodules.html) with well-defined train-val-test splits and [*trainers*](https://torchgeo.readthedocs.io/en/latest/api/trainers.html) for various tasks like classification, regression, and semantic segmentation. These datamodules show how to incorporate augmentations from the kornia library, include preprocessing transforms (with pre-calculated channel statistics), and let users easily experiment with hyperparameters related to the data itself (as opposed to the modeling process). Training a semantic segmentation model on the Inria Aerial Image Labeling dataset is as easy as a few imports and four lines of code.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}} {"page_content": "```c++\nfrom pytorch_lightning import Trainer\nfrom torchgeo.datamodules import InriaAerialImageLabelingDataModule\nfrom torchgeo.trainers import SemanticSegmentationTask\n\ndatamodule = InriaAerialImageLabelingDataModule(root_dir=\"...\", batch_size=64, num_workers=6)\ntask = SemanticSegmentationTask(segmentation_model=\"unet\", encoder_weights=\"imagenet\", learning_rate=0.1)\ntrainer = Trainer(gpus=1, default_root_dir=\"...\")\n\ntrainer.fit(model=task, datamodule=datamodule)\n```\n\n

Figure: Building segmentations produced by a U-Net model trained on the Inria Aerial Image Labeling dataset. Reproducing these results takes just a few imports and four lines of code, making it easy to compare different models and training techniques.
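Evaluation against the predefined split is just as short. Here is a minimal sketch, reusing the datamodule and task classes from the snippet above; the checkpoint path is a placeholder for a checkpoint produced by `trainer.fit`.

```python
# Evaluate a trained task on the datamodule's predefined validation split so the
# reported metrics are directly comparable. The checkpoint path is a placeholder.
from pytorch_lightning import Trainer
from torchgeo.datamodules import InriaAerialImageLabelingDataModule
from torchgeo.trainers import SemanticSegmentationTask

datamodule = InriaAerialImageLabelingDataModule(root_dir="...", batch_size=64, num_workers=6)
task = SemanticSegmentationTask.load_from_checkpoint("path/to/checkpoint.ckpt")
trainer = Trainer(gpus=1, default_root_dir="...")

trainer.validate(model=task, datamodule=datamodule)
```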

", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}} {"page_content": "In our [preprint](https://arxiv.org/abs/2111.08872) we show a set of results that use the aforementioned datamodules and trainers to benchmark simple modeling approaches for several of the datasets in TorchGeo. For example, we find that a simple ResNet-50 can achieve state-of-the-art performance on the [So2Sat](https://ieeexplore.ieee.org/document/9014553) dataset. These types of baseline results are important for evaluating the contribution of different modeling choices when tackling problems with remotely sensed data.\n\n# Future work and contributing\n\nThere is still a lot of remaining work to be done in order to make TorchGeo as easy to use as possible, especially for users without prior deep learning experience. One of the ways in which we plan to achieve this is by expanding our tutorials to include subjects like \"writing a custom dataset\" and \"transfer learning\", or tasks like \"land cover mapping\" and \"object detection\".", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}} @@ -253,13 +253,13 @@ {"page_content": "---\nlayout: blog_detail\ntitle: 'What\u2019s New in PyTorch Profiler 1.9?'\nauthor: Sabrina Smai, Program Manager on the AI Framework team at Microsoft\n---\n\nPyTorch Profiler v1.9 has been released! The goal of this new release (previous [PyTorch Profiler release](https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/)) is to provide you with new state-of-the-art tools to help diagnose and fix machine learning performance issues regardless of whether you are working on one or numerous machines. The objective is to target the execution steps that are the most costly in time and/or memory, and visualize the work load distribution between GPUs and CPUs. \n\nHere is a summary of the five major features being released:", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}} {"page_content": "1.\t**Distributed Training View**: This helps you understand how much time and memory is consumed in your distributed training job. Many issues occur when you take a training model and split the load into worker nodes to be run in parallel as it can be a black box. The overall model goal is to speed up model training. This distributed training view will help you diagnose and debug issues within individual nodes. \n2.\t**Memory View**: This view allows you to understand your memory usage better. This tool will help you avoid the famously pesky Out of Memory error by showing active memory allocations at various points of your program run. \n3.\t**GPU Utilization Visualization**: This tool helps you make sure that your GPU is being fully utilized. \n4.\t**Cloud Storage Support**: Tensorboard plugin can now read profiling data from Azure Blob Storage, Amazon S3, and Google Cloud Platform. \n5.\t**Jump to Source Code**: This feature allows you to visualize stack tracing information and jump directly into the source code. This helps you quickly optimize and iterate on your code based on your profiling results.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}} {"page_content": "## Getting Started with PyTorch Profiling Tool\nPyTorch includes a profiling functionality called \u00ab PyTorch Profiler \u00bb. 
The PyTorch Profiler tutorial can be found [here](https://pytorch.org/tutorials/intermediate/tensorboard_profiler_tutorial.html).\n\nTo instrument your PyTorch code for profiling, you must:\n\n$ pip install torch-tb-profiler\n\n```python\nimport torch.profiler as profiler\nWith profiler.profile(XXXX)\n```\n\n**Comments**:\n\n\u2022 For CUDA and CPU profiling, see [below](https://github.com/pytorch/kineto/blob/master/tb_plugin/examples/resnet50_profiler_api.py): \n```\nwith torch.profiler.profile( \nactivities=[ \ntorch.profiler.ProfilerActivity.CPU, \ntorch.profiler.ProfilerActivity.CUDA], \n```\n\n\u2022\tWith profiler.record_function(\u201c$NAME\u201d): allows putting a decorator (a tag associated to a name) for a block of function\n\n\u2022\tProfile_memory=True parameter under profiler.profile allows you to profile CPU and GPU memory footprint\n\n## Visualizing PyTorch Model Performance using PyTorch Profiler\n\n### Distributed Training", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}} -{"page_content": "Recent advances in deep learning argue for the value of large datasets and large models, which requires you to scale out model training to more computational resources. Distributed Data Parallel (DDP) and NVIDIA Collective Communications Library (NCCL) are the widely adopted paradigms in PyTorch for accelerating your deep learning training. \n\nIn this release of PyTorch Profiler, DDP with NCCL backend is now supported.\n\n
\n\n### Computation/Communication Overview\n\nIn the Computation/Communication overview under the Distributed training view, you can observe the computation-to-communication ratio of each worker and [load balancer](https://en.wikipedia.org/wiki/Load_balancing_(computing) nodes between worker as measured by granularity. \n\n**Scenario 1**:", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}} +{"page_content": "### Distributed Training \n\nRecent advances in deep learning argue for the value of large datasets and large models, which requires you to scale out model training to more computational resources. Distributed Data Parallel (DDP) and NVIDIA Collective Communications Library (NCCL) are the widely adopted paradigms in PyTorch for accelerating your deep learning training. \n\nIn this release of PyTorch Profiler, DDP with NCCL backend is now supported.\n\n
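For readers who want a concrete, runnable starting point before diving into the individual views, here is a minimal sketch that records a few training steps with `torch.profiler` and writes a trace the TensorBoard plugin can display. The tiny model, synthetic data, and log directory are placeholders; the same wrapper applies unchanged to a model wrapped in DistributedDataParallel, which is what populates the distributed views discussed below.

```python
# Minimal sketch (placeholder model, data, and log directory): profile a few
# training steps and emit a trace readable by the torch-tb-profiler plugin.
import torch
import torch.nn as nn
import torch.profiler

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

activities = [torch.profiler.ProfilerActivity.CPU]
if torch.cuda.is_available():
    model = model.cuda()
    activities.append(torch.profiler.ProfilerActivity.CUDA)

with torch.profiler.profile(
    activities=activities,
    schedule=torch.profiler.schedule(wait=1, warmup=1, active=3),
    on_trace_ready=torch.profiler.tensorboard_trace_handler("./log/example"),
    record_shapes=True,
    profile_memory=True,
    with_stack=True,  # needed for the "jump to source" feature described later
) as prof:
    for step in range(8):
        x = torch.randn(32, 128)
        y = torch.randint(0, 10, (32,))
        if torch.cuda.is_available():
            x, y = x.cuda(), y.cuda()
        with torch.profiler.record_function("train_step"):  # tag this block in the trace
            loss = criterion(model(x), y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        prof.step()  # tell the profiler a step boundary has been reached
```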
\n\n### Computation/Communication Overview\n\nIn the Computation/Communication overview under the Distributed training view, you can observe the computation-to-communication ratio of each worker and [load balancer](https://en.wikipedia.org/wiki/Load_balancing_(computing) nodes between worker as measured by granularity. \n\n**Scenario 1**:", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}} {"page_content": "**Scenario 1**:\n\nIf the computation and overlapping time of one worker is much larger than the others, this may suggest an issue in the workload balance or worker being a straggler. Computation is the sum of kernel time on GPU minus the overlapping time. The overlapping time is the time saved by interleaving communications during computation. The more overlapping time represents better parallelism between computation and communication. Ideally the computation and communication completely overlap with each other. Communication is the total communication time minus the overlapping time. The example image below displays how this scenario appears on Tensorboard. \n\n
Figure: A straggler example
\n\n**Scenario 2**:", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}} {"page_content": "**Scenario 2**:\n\nIf there is a small batch size (i.e. less computation on each worker) or the data to be transferred is large, the computation-to-communication may also be small and be seen in the profiler with low GPU utilization and long waiting times. This computation/communication view will allow you to diagnose your code to reduce communication by adopting gradient accumulation, or to decrease the communication proportion by increasing batch size. DDP communication time depends on model size. Batch size has no relationship with model size. So increasing batch size could make computation time longer and make computation-to-communication ratio bigger. \n\n### Synchronizing/Communication Overview", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}} -{"page_content": "In the Synchronizing/Communication view, you can observe the efficiency of communication. This is done by taking the step time minus computation and communication time. Synchronizing time is part of the total communication time for waiting and synchronizing with other workers. The Synchronizing/Communication view includes initialization, data loader, CPU computation, and so on Insights like what is the ratio of total communication is really used for exchanging data and what is the idle time of waiting for data from other workers can be drawn from this view. \n\n
\n\nFor example, if there is an inefficient workload balance or straggler issue, you\u2019ll be able to identify it in this Synchronizing/Communication view. This view will show several workers\u2019 waiting time being longer than others.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}} +{"page_content": "### Synchronizing/Communication Overview\n\nIn the Synchronizing/Communication view, you can observe the efficiency of communication. This is done by taking the step time minus computation and communication time. Synchronizing time is part of the total communication time for waiting and synchronizing with other workers. The Synchronizing/Communication view includes initialization, data loader, CPU computation, and so on Insights like what is the ratio of total communication is really used for exchanging data and what is the idle time of waiting for data from other workers can be drawn from this view. \n\n
\n\nFor example, if there is an inefficient workload balance or straggler issue, you\u2019ll be able to identify it in this Synchronizing/Communication view. This view will show several workers\u2019 waiting time being longer than others.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}} {"page_content": "
\n\nThis table view above allows you to see the detailed statistics of all communication ops in each node. This allows you to see what operation types are being called, how many times each op is called, what is the size of the data being transferred by each op, etc. \n\n### Memory View:\n\nThis memory view tool helps you understand the hardware resource consumption of the operators in your model. Understanding the time and memory consumption on the operator-level allows you to resolve performance bottlenecks and in turn, allow your model to execute faster. Given limited GPU memory size, optimizing the memory usage can: \n\n1. Allow bigger model which can potentially generalize better on end level tasks.\n2. Allow bigger batch size. Bigger batch sizes increase the training speed.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}} {"page_content": "The profiler records all the memory allocation during the profiler interval. Selecting the \u201cDevice\u201d will allow you to see each operator\u2019s memory usage on the GPU side or host side. You must enable ```profile_memory=True``` to generate the below memory data as shown [here](https://github.com/pytorch/kineto/blob/master/tb_plugin/examples/resnet50_profiler_api.py#L39). \n\n```\nWith torch.profiler.profile(\nProfiler_memory=True # this will take 1 \u2013 2 minutes to complete. \n)\n```\n\n**Important Definitions**:\n\n\u2022\t\u201cSize Increase\u201d displays the sum of all allocation bytes and minus all the memory release bytes.\n\n\u2022\t\u201cAllocation Size\u201d shows the sum of all allocation bytes without considering the memory release.\n\n\u2022\t\u201cSelf\u201d means the allocated memory is not from any child operators, instead by the operator itself.\n\n
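As a concrete counterpart to the setting described above, here is a minimal, self-contained sketch (the small model and random input are placeholders) that enables memory profiling and prints the operators sorted by their own memory consumption.

```python
# Minimal sketch: enable profile_memory and report per-operator memory usage.
# The small model and random input are placeholders for your own workload.
import torch
import torch.nn as nn
from torch.profiler import profile, ProfilerActivity

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
inputs = torch.randn(64, 512)

activities = [ProfilerActivity.CPU]
sort_key = "self_cpu_memory_usage"
if torch.cuda.is_available():
    model, inputs = model.cuda(), inputs.cuda()
    activities.append(ProfilerActivity.CUDA)
    sort_key = "self_cuda_memory_usage"

# profile_memory=True records per-operator allocations (this adds some overhead)
with profile(activities=activities, profile_memory=True, record_shapes=True) as prof:
    model(inputs)

print(prof.key_averages().table(sort_by=sort_key, row_limit=10))
```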
\n\n\n### GPU Metric on Timeline:", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}} -{"page_content": "This feature will help you debug performance issues when one or more GPU are underutilized. Ideally, your program should have high GPU utilization (aiming for 100% GPU utilization), minimal CPU to GPU communication, and no overhead. \n\n**Overview**:\nThe overview page highlights the results of three important GPU usage metrics at different levels (i.e. GPU Utilization, Est. SM Efficiency, and Est. Achieved Occupancy). Essentially, each GPU has a bunch of SM each with a bunch of warps that can execute a bunch of threads concurrently. Warps execute a bunch because the amount depends on the GPU. But at a high level, this GPU Metric on Timeline tool allows you can see the whole stack, which is useful. \n\nIf the GPU utilization result is low, this suggests a potential bottleneck is present in your model. Common reasons: \n\n\u2022Insufficient parallelism in kernels (i.e., low batch size) \n\n\u2022Small kernels called in a loop. This is to say the launch overheads are not amortized", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}} +{"page_content": "### GPU Metric on Timeline:\n\nThis feature will help you debug performance issues when one or more GPU are underutilized. Ideally, your program should have high GPU utilization (aiming for 100% GPU utilization), minimal CPU to GPU communication, and no overhead. \n\n**Overview**:\nThe overview page highlights the results of three important GPU usage metrics at different levels (i.e. GPU Utilization, Est. SM Efficiency, and Est. Achieved Occupancy). Essentially, each GPU has a bunch of SM each with a bunch of warps that can execute a bunch of threads concurrently. Warps execute a bunch because the amount depends on the GPU. But at a high level, this GPU Metric on Timeline tool allows you can see the whole stack, which is useful. \n\nIf the GPU utilization result is low, this suggests a potential bottleneck is present in your model. Common reasons: \n\n\u2022Insufficient parallelism in kernels (i.e., low batch size) \n\n\u2022Small kernels called in a loop. This is to say the launch overheads are not amortized", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}} {"page_content": "\u2022CPU or I/O bottlenecks lead to the GPU not receiving enough work to keep busy \n\nLooking of the overview page where the performance recommendation section is where you\u2019ll find potential suggestions on how to increase that GPU utilization. In this example, GPU utilization is low so the performance recommendation was to increase batch size. Increasing batch size 4 to 32, as per the performance recommendation, increased the GPU Utilization by 60.68%.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}} {"page_content": "GPU Utilization: the step interval time in the profiler when a GPU engine was executing a workload. The high the utilization %, the better. The drawback of using GPU utilization solely to diagnose performance bottlenecks is it is too high-level and coarse. It won\u2019t be able to tell you how many Streaming Multiprocessors are in use. Note that while this metric is useful for detecting periods of idleness, a high value does not indicate efficient use of the GPU, only that it is doing anything at all. 
For instance, a kernel with a single thread running continuously will get a GPU Utilization of 100%", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}} {"page_content": "Estimated Stream Multiprocessor Efficiency (Est. SM Efficiency) is a finer grained metric, it indicates what percentage of SMs are in use at any point in the trace This metric reports the percentage of time where there is at least one active warp on a SM and those that are stalled (NVIDIA [doc](https://forums.developer.nvidia.com/t/nvprof-question-about-the-sm-efficiency-metric/72640#:~:text=My%20understanding%20from%20the%20profiler%20documentation%20is%20that,that%20%E2%80%9Cactive%20warps%E2%80%9D%20include%20warps%20that%20are%20stalled.)). Est. SM Efficiency also has it\u2019s limitation. For instance, a kernel with only one thread per block can\u2019t fully use each SM. SM Efficiency does not tell us how busy each SM is, only that they are doing anything at all, which can include stalling while waiting on the result of a memory load. To keep an SM busy, it is necessary to have a sufficient number of ready warps that can be run whenever a stall occurs", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}} @@ -268,7 +268,7 @@ {"page_content": "Mean Est. Achieved Occupancy: \nEst. Achieved Occupancy is defined as above in overview. \u201cMean Est. Achieved Occupancy\u201d is weighted average of all runs of this kernel name, using each run\u2019s duration as weight. \n\n_Trace View_\nThis trace view displays a timeline that shows the duration of operators in your model and which system executed the operation. This view can help you identify whether the high consumption and long execution is because of input or model training. Currently, this trace view shows GPU Utilization and Est. SM Efficiency on a timeline. \n\n
", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}} {"page_content": "GPU utilization is calculated independently and divided into multiple 10 millisecond buckets. The buckets\u2019 GPU utilization values are drawn alongside the timeline between 0 \u2013 100%. In the above example, the \u201cProfilerStep5\u201d GPU utilization during thread 28022\u2019s busy time is higher than the following the one during \u201cOptimizer.step\u201d. This is where you can zoom-in to investigate why that is. \n\n
\n\nFrom above, we can see the former\u2019s kernels are longer than the later\u2019s kernels. The later\u2019s kernels are too short in execution, which results in lower GPU utilization. \n\nEst. SM Efficiency: Each kernel has a calculated est. SM efficiency between 0 \u2013 100%. For example, the below kernel has only 64 blocks, while the SMs in this GPU is 80. Then its \u201cEst. SM Efficiency\u201d is 64/80, which is 0.8.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}} {"page_content": "
\n\n### Cloud Storage Support\n\nAfter running pip install tensorboard, to have data be read through these cloud providers, you can now run: \n\n``` sh \ntorch-tb-profiler[blob] \ntorch-tb-profiler[gs] \ntorch-tb-profiler[s3] \n``` \n```pip install torch-tb-profiler[blob]```, ```pip install torch-tb-profiler[gs]```, or ```pip install torch-tb-profiler[S3]``` to have data be read through these cloud providers. For more information, please refer to this [README](https://github.com/pytorch/kineto/tree/main/tb_plugin). \n\n### Jump to Source Code:", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}} -{"page_content": "One of the great benefits of having both TensorBoard and the PyTorch Profiler being integrated directly in Visual Studio Code (VS Code) is the ability to directly jump to the source code (file and line) from the profiler stack traces. VS Code Python Extension now [supports TensorBoard Integration](https://devblogs.microsoft.com/python/python-in-visual-studio-code-february-2021-release/).\n\nJump to source is ONLY available when Tensorboard is launched within VS Code. Stack tracing will appear on the plugin UI if the profiling with_stack=True. When you click on a stack trace from the PyTorch Profiler, VS Code will automatically open the corresponding file side by side and jump directly to the line of code of interest for you to debug. This allows you to quickly make actionable optimizations and changes to your code based on the profiling results and suggestions.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}} +{"page_content": "### Jump to Source Code:\n\nOne of the great benefits of having both TensorBoard and the PyTorch Profiler being integrated directly in Visual Studio Code (VS Code) is the ability to directly jump to the source code (file and line) from the profiler stack traces. VS Code Python Extension now [supports TensorBoard Integration](https://devblogs.microsoft.com/python/python-in-visual-studio-code-february-2021-release/).\n\nJump to source is ONLY available when Tensorboard is launched within VS Code. Stack tracing will appear on the plugin UI if the profiling with_stack=True. When you click on a stack trace from the PyTorch Profiler, VS Code will automatically open the corresponding file side by side and jump directly to the line of code of interest for you to debug. This allows you to quickly make actionable optimizations and changes to your code based on the profiling results and suggestions.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}} {"page_content": "
GIF: Jump to Source using the Visual Studio Code plugin UI
\n\nFor how to optimize batch size performance, check out the step-by-step tutorial [here](https://opendatascience.com/optimizing-pytorch-performance-batch-size-with-pytorch-profiler/). PyTorch Profiler is also integrated with [PyTorch Lightning](https://pytorch-lightning.readthedocs.io/en/stable/advanced/profiler.html#pytorch-profiling) and you can simply launch your lightning training jobs with --```trainer.profiler=pytorch``` flag to generate the traces. Check out an example [here](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pl_examples/basic_examples/profiler_example.py). \n\n## What\u2019s Next for the PyTorch Profiler?\nYou just saw how PyTorch Profiler can help optimize a model. You can now try the Profiler by ```pip install torch-tb-profiler``` to optimize your PyTorch model.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}} {"page_content": "Look out for an advanced version of this tutorial in the future. We are also thrilled to continue to bring state-of-the-art tool to PyTorch users to improve ML performance. We'd love to hear from you. Feel free to open an issue [here](https://github.com/pytorch/kineto/issues). \n\nFor new and exciting features coming up with PyTorch Profiler, follow @PyTorch on Twitter and check us out on pytorch.org. \n\n## Acknowledgements\n\nThe author would like to thank the contributions of the following individuals to this piece. From the Facebook side: Geeta Chauhan, Gisle Dankel, Woo Kim, Sam Farahzad, and Mark Saroufim. On the Microsoft side: AI Framework engineers (Teng Gao, Mike Guo, and Yang Gu), Guoliang Hua, and Thuy Nguyen.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'Announcing the Winners of the 2020 Global PyTorch Summer Hackathon'\nauthor: Team PyTorch\n---\n\nMore than 2,500 participants in this year\u2019s Global PyTorch Summer Hackathon pushed the envelope to create unique new tools and applications for PyTorch developers and researchers.\n\n
\n\n***Notice**: None of the projects submitted to the hackathon are associated with or offered by Facebook, Inc.* \n\nThis year\u2019s projects fell into three categories:\n\n* **PyTorch Developer Tools:** a tool or library for improving productivity and efficiency for PyTorch researchers and developers.\n\n* **Web/Mobile Applications Powered by PyTorch:** a web or mobile interface and/or an embedded device built using PyTorch.", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}} @@ -283,7 +283,7 @@ {"page_content": "Fluence is a PyTorch-based deep learning library for language research. It specifically addresses the large compute demands of natural language processing (NLP) research. Fluence aims to provide low-resource and computationally efficient algorithms for NLP, giving researchers algorithms that can enhance current NLP methods or help discover where current methods fall short.\n\n**3rd place**: [Causing: CAUSal INterpretation using Graphs](https://devpost.com/software/realrate-explainable-ai-for-company-ratings)", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}} {"page_content": "Causing (CAUSal INterpretation using Graphs) is a multivariate graphic analysis tool for bringing transparency to neural networks. It explains causality and helps researchers and developers interpret the causal effects of a given equation system to ensure fairness. Developers can input data and a model describing the dependencies between the variables within the data set into Causing, and Causing will output a colored graph of quantified effects acting between the model\u2019s variables. In addition, it also allows developers to estimate these effects to validate whether data fits a model.\n\nThank you,\n\n**The PyTorch team**", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: \"PyTorch\u2019s Tracing Based Selective Build\"\nauthor: Dhruv Matani, Suraj Subramanian\nfeatured-img: \"/assets/images/pytorchs-tracing-based-selective-build_Figure_4.png\"\n---\n\n## Introduction\n\n**TL;DR**: It can be challenging to run PyTorch on mobile devices, SBCs (Single Board Computers), and IOT devices. When compiled, the PyTorch library is huge and includes dependencies that might not be needed for the on-device use case. \n\nTo run a specific set of models on-device, we actually require only a small subset of the features in the PyTorch library. We found that using a PyTorch runtime generated using **selective build** can achieve up to 90% reduction in binary size (for the CPU and QuantizedCPU backends on an x86-64 build on Linux). In this blog, we share our experience of generating model-specific minimal runtimes using Selective Build and show you how to do the same.\n\n## Why is this important for app developers?", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}} -{"page_content": "Using a PyTorch runtime generated by **selective build** can reduce the size of AI-powered apps by 30+ MB - a significant reduction for a typical mobile app! 
Making mobile applications more lightweight has many benefits - they are runnable on a wider variety of devices, consume less cellular data, and can be downloaded and updated faster on user\u2019s devices.\n\n## What does the Developer Experience look like?\n\nThis method can work seamlessly with any existing PyTorch Mobile deployment workflows. All you need to do is replace the general PyTorch runtime library with a runtime customized for the specific models you wish to use in your application. The general steps in this process are:", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}} +{"page_content": "## Why is this important for app developers?\n\nUsing a PyTorch runtime generated by **selective build** can reduce the size of AI-powered apps by 30+ MB - a significant reduction for a typical mobile app! Making mobile applications more lightweight has many benefits - they are runnable on a wider variety of devices, consume less cellular data, and can be downloaded and updated faster on user\u2019s devices.\n\n## What does the Developer Experience look like?\n\nThis method can work seamlessly with any existing PyTorch Mobile deployment workflows. All you need to do is replace the general PyTorch runtime library with a runtime customized for the specific models you wish to use in your application. The general steps in this process are:", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}} {"page_content": "1. Build the PyTorch Runtime in **instrumentation mode** (this is called an **instrumentation build** of PyTorch). This will record the used operators, kernels and features.\n2. Run your models through this instrumentation build by using the provided **model_tracer** binary. This will generate a single YAML file that stores all the features used by your model. These features will be preserved in the minimal runtime.\n3. Build PyTorch using this YAML file as input. This is the **selective build** technique, and it greatly reduces the size of the final PyTorch binary.\n4. Use this selectively-built PyTorch library to reduce the size of your mobile application!\n\n\nBuilding the PyTorch Runtime in a special **\u201cinstrumentation\u201d mode** ( by passing the `TRACING_BASED=1` build option) generates an **instrumentation build** runtime of PyTorch, along with a **model_tracer** binary. Running a model with this build allows us to trace the parts of PyTorch used by the model.", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}} {"page_content": "

Figure 1: Instrumentation build of PyTorch

\n\n```python\n# Clone the PyTorch repo\ngit clone https://github.com/pytorch/pytorch.git\ncd pytorch\n\n# Build the model_tracer\nUSE_NUMPY=0 USE_DISTRIBUTED=0 USE_CUDA=0 TRACING_BASED=1 \\\n python setup.py develop\n```\n\nNow this instrumentation build is used to run a model inference with representative inputs. The **model_tracer** binary observes parts of the instrumentation build that were activated during the inference run, and dumps it to a YAML file.\n\n

Figure 2: YAML file generated by running model(s) on an instrumentation build
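The tracer invoked below expects a TorchScript model saved for the lite interpreter (the `.ptl` file passed via `--model_input_path`). Here is a minimal sketch of producing such a file; the torchvision MobileNetV2 is only a stand-in for the model you actually plan to ship.

```python
# Minimal sketch: export a lite-interpreter (.ptl) model for model_tracer.
# MobileNetV2 is a stand-in; replace it with your own model.
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile
from torchvision.models import mobilenet_v2

model = mobilenet_v2()
model.eval()

scripted = torch.jit.script(model)
optimized = optimize_for_mobile(scripted)

# Save in the lite-interpreter format expected by model_tracer and PyTorch Mobile.
optimized._save_for_lite_interpreter("/tmp/path_to_model.ptl")
```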

", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}} {"page_content": "```python\n# Generate YAML file\n./build/bin/model_tracer \\\n --model_input_path /tmp/path_to_model.ptl \\\n --build_yaml_path /tmp/selected_ops.yaml\n```\n\nNow we build the PyTorch Runtime again, but this time using the YAML file generated by the tracer. The runtime now only includes those parts that are needed for this model. This is called **\u201cSelectively built PyTorch runtime\u201d** in the diagram below.\n\n```python\n# Clean out cached configuration\nmake clean\n\n# Build PyTorch using Selected Operators (from the YAML file)\n# using the host toolchain, and use this generated library\nBUILD_PYTORCH_MOBILE_WITH_HOST_TOOLCHAIN=1 \\\nUSE_LIGHTWEIGHT_DISPATCH=0 \\\nBUILD_LITE_INTERPRETER=1 \\\nSELECTED_OP_LIST=/tmp/selected_ops.yaml \\\nTRACING_BASED=1 \\\n ./scripts/build_mobile.sh\n```\n\n

Figure 3: Selective Build of PyTorch and model execution on a selectively built PyTorch runtime
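Before wiring the selectively built runtime into an app, it can be useful to confirm on the host that the `.ptl` file loads and runs. Here is a minimal sketch, assuming the same model path as above and using the lite-interpreter loader from `torch.jit.mobile` (the input shape is a placeholder for an image-like model).

```python
# Host-side sanity check that the saved .ptl file loads and runs before
# deploying it with the selectively built runtime. Input shape is a placeholder.
import torch
from torch.jit.mobile import _load_for_lite_interpreter

lite_model = _load_for_lite_interpreter("/tmp/path_to_model.ptl")
example_input = torch.randn(1, 3, 224, 224)
output = lite_model(example_input)
print(output.shape)
```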

", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}} @@ -305,18 +305,18 @@ {"page_content": "## Introduction\n\nIn recent years, scaling model sizes has become a promising area of research. In the field of NLP, language models have gone from hundreds of millions of parameters (BERT) to hundreds of billions of parameters (GPT-3) demonstrating significant improvements on downstream tasks. The [scaling laws](https://arxiv.org/pdf/2001.08361.pdf) for large scale language models have also been studied extensively in the industry. A similar trend can be observed in the vision field, with the community moving to transformer based models (like [Vision Transformer](https://arxiv.org/pdf/2010.11929.pdf), [Masked Auto Encoders](https://arxiv.org/pdf/2111.06377.pdf)) as well. It is clear that individual modalities - text, image, video - have benefited massively from recent advancements in scale, and frameworks have quickly adapted to accommodate larger models.", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}} {"page_content": "At the same time, multimodality is becoming increasingly important in research with tasks like image-text retrieval, visual question-answering, visual dialog and text to image generation gaining traction in real world applications. Training large scale multimodal models is the natural next step and we already see several efforts in this area like [CLIP](https://openai.com/blog/clip/) from OpenAI, [Parti](https://parti.research.google/) from Google and [CM3](https://arxiv.org/pdf/2201.07520.pdf) from Meta.", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}} {"page_content": "In this blog, we present a case study demonstrating the scaling of [FLAVA](https://flava-model.github.io/) to 10B params using techniques from PyTorch Distributed. FLAVA is a vision and language foundation model, available in [TorchMultimodal](https://github.com/facebookresearch/multimodal/tree/main/torchmultimodal/models/flava), which has shown competitive performance on both unimodal and multimodal benchmarks. We also give the relevant code pointers in this blog. The instructions for running an example script to scale FLAVA can be found [here](https://github.com/facebookresearch/multimodal/tree/main/examples/flava/native).\n\n## Scaling FLAVA Overview", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}} -{"page_content": "FLAVA is a foundation multimodal model which consists of transformer based image and text encoders followed by a transformer-based multimodal fusion module. It is pretrained on both unimodal and multimodal data with a diverse set of losses. This includes masked language, image and multimodal modeling losses that require the model to reconstruct the original input from its context (self-supervised learning). It also uses image text matching loss over positive and negative examples of aligned image-text pairs as well as CLIP style contrastive loss. In addition to multimodal tasks (like image-text retrieval), FLAVA demonstrated competitive performance on unimodal benchmarks as well (GLUE tasks for NLP and image classification for vision).\n\n


", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}} +{"page_content": "## Scaling FLAVA Overview\n\nFLAVA is a foundation multimodal model which consists of transformer based image and text encoders followed by a transformer-based multimodal fusion module. It is pretrained on both unimodal and multimodal data with a diverse set of losses. This includes masked language, image and multimodal modeling losses that require the model to reconstruct the original input from its context (self-supervised learning). It also uses image text matching loss over positive and negative examples of aligned image-text pairs as well as CLIP style contrastive loss. In addition to multimodal tasks (like image-text retrieval), FLAVA demonstrated competitive performance on unimodal benchmarks as well (GLUE tasks for NLP and image classification for vision).\n\n


", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}} {"page_content": "The original FLAVA model has ~350M parameters and uses ViT-B16 configurations (from the [Vision Transformer paper](https://arxiv.org/pdf/2010.11929.pdf)) for image and text encoders. The multimodal fusion transformer follows the unimodal encoders but with half the number of layers. We explore increasing the size of each encoder to larger ViT variants. \n\nAnother aspect of scaling is adding the ability to increase the batch size. FLAVA makes use of contrastive loss over in-batch negatives, which typically benefits from large batch size (as studied [here](https://openreview.net/pdf?id=U2exBrf_SJh)). The largest training efficiency or throughput is also generally achieved when operating near maximum possible batch sizes as determined by the amount of GPU memory available (also see the experiments section). \n\nThe following table displays the different model configurations we experimented with. We also determine the maximum batch size that was able to fit in memory for each configuration in the experiments section.", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}} {"page_content": "| Approx Model params | Hidden size | MLP size | Heads | Unimodal layers | Multimodal layers | Model size (fp32) |\n|-----------------------|---------------|----------|---------|-------------------|---------------------|---------------------|\n| 350M (original) | 768 | 3072 | 12 | 12 | 6 | 1.33GB |\n| 900M | 1024 | 4096 | 16 | 24 | 12 | 3.48GB |\n| 1.8B | 1280 | 5120 | 16 | 32 | 16 | 6.66GB |\n| 2.7B | 1408 | 6144 | 16 | 40 | 20 | 10.3GB |\n| 4.8B | 1664 | 8192 | 16 | 48 | 24 | 18.1GB |\n| 10B | 2048 | 10240 | 16 | 64 | 40 | 38GB |", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}} {"page_content": "## Optimization overview\n\nPyTorch offers several native techniques to efficiently scale models. In the following sections, we go over some of these techniques and show how they can be applied to scale up a FLAVA model to 10 billion parameters.\n\n## Distributed Data Parallel\n\nA common starting point for distributed training is data parallelism. Data parallelism replicates the model across each worker (GPU), and partitions the dataset across the workers. Different workers process different data partitions in parallel and synchronize their gradients (via all reduce) before model weights are updated. The figure below showcases the flow (forward, backward, and weight update steps) for processing a single example for data parallelism:\n\n


", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}} {"page_content": "

Source: https://engineering.fb.com/2021/07/15/open-source/fsdp/
\n\nPyTorch provides a native API, [DistributedDataParallel](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html) (DDP) to enable data parallelism which can be used as a module wrapper as showcased below. Please see PyTorch Distributed [documentation](https://pytorch.org/docs/stable/distributed.html#) for more details.\n\n```Python\nfrom torchmultimodal.models.flava.model import flava_model_for_pretraining\nimport torch\nimport torch.distributed as dist\n\nmodel = flava_model_for_pretraining().cuda()\n# Initialize PyTorch Distributed process groups\n# Please see https://pytorch.org/tutorials/intermediate/dist_tuto.html for details\ndist.init_process_group(backend=\u201dnccl\u201d)\n# Wrap model in DDP\nmodel = torch.nn.parallel.DistributedDataParallel(model, device_ids=[torch.cuda.current_device()])\n```\n\n## Fully Sharded Data Parallel", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}} -{"page_content": "GPU memory usage of a training application can roughly be broken down into model inputs, intermediate activations (needed for gradient computation), model parameters, gradients, and optimizer states. Scaling a model will typically increase each of these elements. Scaling a model with DDP can eventually result in out-of-memory issues when a single GPU's memory becomes insufficient since it replicates the parameters, gradients, and optimizer states on all workers.", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}} +{"page_content": "## Fully Sharded Data Parallel\n\nGPU memory usage of a training application can roughly be broken down into model inputs, intermediate activations (needed for gradient computation), model parameters, gradients, and optimizer states. Scaling a model will typically increase each of these elements. Scaling a model with DDP can eventually result in out-of-memory issues when a single GPU's memory becomes insufficient since it replicates the parameters, gradients, and optimizer states on all workers.", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}} {"page_content": "To reduce this replication and save GPU memory, we can shard the model parameters, gradients, and optimizer states across all workers with each worker only managing a single shard. This technique was popularized by the [ZeRO-3](https://arxiv.org/abs/1910.02054) approach developed by Microsoft. A PyTorch-native implementation of this approach is available as [FullyShardedDataParallel](https://pytorch.org/docs/stable/fsdp.html) (FSDP) API, released as a beta feature in PyTorch 1.12. During a module\u2019s forward and backward passes, FSDP unshards the model parameters as needed for computation (using all-gather) and reshards them after computation. It synchronizes gradients using the reduce-scatter collective to ensure sharded gradients are globally averaged. The forward and backward pass flow of a model wrapped in FSDP are detailed below:\n\n


", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}} {"page_content": "

Source: https://engineering.fb.com/2021/07/15/open-source/fsdp/
\n\nTo use FSDP, the submodules of a model need to be wrapped with the API to control when specific submodules are sharded or unsharded. FSDP provides an auto-wrapping API (see the [auto_wrap_policy](https://pytorch.org/docs/stable/fsdp.html#torch.distributed.fsdp.FullyShardedDataParallel) argument) that can be used out of the box as well as several [wrapping policies](https://github.com/pytorch/pytorch/blob/master/torch/distributed/fsdp/wrap.py) and the ability to [write your own policy](https://github.com/pytorch/pytorch/blob/75c0e3a471c19b883feca15fd4ecfabedf746691/torch/distributed/fsdp/fully_sharded_data_parallel.py#L858).", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}} {"page_content": "The following example demonstrates wrapping the FLAVA model with FSDP. We specify the auto-wrapping policy as `transformer_auto_wrap_policy`. This will wrap individual transformer layers (`TransformerEncoderLayer`), the image transformer (`ImageTransformer`), text encoder (`BERTTextEncoder`) and multimodal encoder (`FLAVATransformerWithoutEmbeddings`) as individual FSDP units. This uses a recursive wrapping approach for efficient memory management. For example, after an individual transformer layer\u2019s forward or backward pass is finished, its parameters are discarded, freeing up memory thereby reducing peak memory usage.", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}} {"page_content": "FSDP also provides a number of configurable options to tune the performance of applications. For example, in our use case, we illustrate the use of the new `limit_all_gathers` flag, which prevents all-gathering model parameters too early thereby alleviating memory pressure on the application. 
We encourage users to experiment with this flag which can potentially improve the performance of applications with high active memory usage.\n\n```Python\nimport torch\nfrom torch.distributed.fsdp import FullyShardedDataParallel as FSDP\nfrom torch.distributed.fsdp.wrap import transformer_auto_wrap_policy\nfrom torchmultimodal.models.flava.model import flava_model_for_pretraining\nfrom torchmultimodal.models.flava.text_encoder import BertTextEncoder\nfrom torchmultimodal.models.flava.image_encoder import ImageTransformer\nfrom torchmultimodal.models.flava.transformer import FLAVATransformerWithoutEmbeddings\nfrom torchmultimodal.modules.layers.transformer import TransformerEncoderLayer", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}} {"page_content": "model = flava_model_for_pretraining().cuda()\ndist.init_process_group(backend=\u201dnccl\u201d)\n\nmodel = FSDP(\n model,\n device_id=torch.cuda.current_device(),\n auto_wrap_policy=partial(\n transformer_auto_wrap_policy,\n transformer_layer_cls={\n TransformerEncoderLayer,\n ImageTransformer,\n BERTTextEncoder,\n FLAVATransformerWithoutEmbeddings\n },\n ),\n limit_all_gathers=True,\n )\n```\n\n## Activation Checkpointing", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}} -{"page_content": "As discussed above, intermediate activations, model parameters, gradients, and optimizer states contribute to the overall GPU memory usage. FSDP can reduce memory consumption due to the latter three but does not reduce memory consumed by activations. Memory used by activations increases with increase in batch size or number of hidden layers. Activation checkpointing is a technique to decrease this memory usage by recomputing the activations during the backward pass instead of holding them in memory for a specific checkpointed module. For example, we observed ~4x reduction in the peak active memory after forward pass by applying activation checkpointing to the 2.7B parameter model.", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}} +{"page_content": "## Activation Checkpointing\n\nAs discussed above, intermediate activations, model parameters, gradients, and optimizer states contribute to the overall GPU memory usage. FSDP can reduce memory consumption due to the latter three but does not reduce memory consumed by activations. Memory used by activations increases with increase in batch size or number of hidden layers. Activation checkpointing is a technique to decrease this memory usage by recomputing the activations during the backward pass instead of holding them in memory for a specific checkpointed module. For example, we observed ~4x reduction in the peak active memory after forward pass by applying activation checkpointing to the 2.7B parameter model.", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}} {"page_content": "PyTorch offers a wrapper based activation checkpointing API. 
In particular, `checkpoint_wrapper` allows users to wrap an individual module with checkpointing, and `apply_activation_checkpointing` allows users to specify a policy with which to wrap modules within an overall module with checkpointing. Both these APIs can be applied to most models as they do not require any modifications to the model definition code. However, if more granular control over checkpointed segments, such as checkpointing specific functions within a module, is required, the functional `torch.utils.checkpoint` [API](https://pytorch.org/docs/stable/checkpoint.html) can be leveraged, although this requires modification to the model code. The application of the activation checkpointing wrapper to individual FLAVA transformer layers (denoted by `TransformerEncoderLayer`) is shown below. For a thorough description of activation checkpointing, please see the description in the [PyTorch documentation](https://pytorch.org/docs/stable/checkpoint.html).", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}} {"page_content": "```Python\nfrom torchmultimodal.models.flava.model import flava_model_for_pretraining\nfrom torch.distributed.algorithms._checkpoint.checkpoint_wrapper import apply_activation_checkpointing, checkpoint_wrapper, CheckpointImpl\nfrom torchmultimodal.modules.layers.transformer import TransformerEncoderLayer\n\nmodel = flava_model_for_pretraining()\ncheckpoint_tformer_layers_policy = lambda submodule: isinstance(submodule, TransformerEncoderLayer)\n\napply_activation_checkpointing(\n model,\n checkpoint_wrapper_fn=checkpoint_wrapper,\n check_fn=checkpoint_tformer_layers_policy,\n )\n```\nUsed together, wrapping FLAVA transformer layers with activation checkpointing and wrapping the overall model with FSDP as demonstrated above, we are able to scale FLAVA to 10B parameters.\n\n## Experiments", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}} {"page_content": "## Experiments\n\nWe conduct an empirical study about the impact of the different optimizations from the previous section on system performance. For all our experiments, we use a single node with 8 A100 40GB GPUs and run the pretraining for 1000 iterations. All runs also used PyTorch\u2019s [automatic mixed precision](https://pytorch.org/docs/stable/amp.html) with the bfloat16 data type. [TensorFloat32](https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices) format is also enabled to improve matmul performance on the A100. We define throughput as the average number of items (text or image) processed per second (we ignore the first 100 iterations while measuring throughput to account for warmup). 
We leave training to convergence and its impact on downstream task metrics as an area for future study.", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}} @@ -327,8 +327,8 @@ {"page_content": "## References\n\n- [Introducing TorchMultimodal - a library for accelerating exploration in Multimodal AI](https://pytorch.org/blog/introducing-torchmultimodal/)\n- [FLAVA paper](https://deploy-preview-1186--pytorch-dot-org-preview.netlify.app/blog/introducing-torchmultimodal/)\n- [Introducing Pytorch FSDP](https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/)", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: \"A BetterTransformer for Fast Transformer Inference\"\nauthor: Michael Gschwind, Eric Han, Scott Wolchok, Rui Zhu, Christian Puhrsch\nfeatured-img: \"/assets/images/2022-7-12-a-better-transformer-for-fast-transformer-encoder-inference-3.png\"\n---\n\n**tl;dr** Transformers achieve state-of-the-art performance for NLP, and are becoming popular for a myriad of other tasks. They are computationally expensive which has been a blocker to their widespread productionisation. Launching with PyTorch 1.12, BetterTransformer implements a backwards-compatible fast path of `torch.nn.TransformerEncoder` for Transformer Encoder Inference and does not require model authors to modify their models. BetterTransformer improvements can exceed 2x in speedup and throughput for many common execution scenarios. To use BetterTransformer, [install](https://pytorch.org/get-started/locally/) PyTorch 1.12 and start using high-quality, high-performance Transformer models with the PyTorch API today.", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}} {"page_content": "

Figure: Diagram of the Transformer Encoder architecture (from "Attention Is All You Need"). During inference, the entire module will execute as a single PyTorch-native function.

\n\nIn this blog post, we share the following topics \u2014 Performance Improvements, Backwards compatibility, and Taking advantage of the FastPath. Learn more about these topics below. \n\n## Performance Improvements", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}} -{"page_content": "BetterTransformer launches with accelerated native implementations of MultiHeadAttention and TransformerEncoderLayer for CPUs and GPUs. These fast paths are integrated in the standard PyTorch Transformer APIs, and will accelerate [TransformerEncoder](https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoder.html), [TransformerEncoderLayer](https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoderLayer.html) and [MultiHeadAttention](https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html) nn.modules. These new modules implement two types of optimizations: (1) fused kernels combine multiple individual operators normally used to implement Transformers to provide a more efficient implementation, and (2) take advantage of sparsity in the inputs to avoid performing unnecessary operations on padding tokens. Padding tokens frequently account for a large fraction of input batches in many Transformer models used for Natural Language Processing. \n\n## Backwards compatibility", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}} -{"page_content": "Advantageously, **no model changes are necessary to benefit from the performance boost offered by BetterTransformer.** To benefit from fast path execution, inputs and operating conditions must satisfy some access conditions (see below). While the internal implementation of Transformer APIs has changed, PyTorch 1.12 maintains strict compatibility with Transformer modules shipped in previous versions, enabling PyTorch users to use models created and trained with previous PyTorch releases while benefiting from BetterTransformer improvements.\n\nIn addition to enabling the PyTorch nn.Modules, BetterTransformer provides improvements for PyTorch libraries. Performance benefits will become available through two different enablement paths:", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}} +{"page_content": "## Performance Improvements\n\nBetterTransformer launches with accelerated native implementations of MultiHeadAttention and TransformerEncoderLayer for CPUs and GPUs. These fast paths are integrated in the standard PyTorch Transformer APIs, and will accelerate [TransformerEncoder](https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoder.html), [TransformerEncoderLayer](https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoderLayer.html) and [MultiHeadAttention](https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html) nn.modules. These new modules implement two types of optimizations: (1) fused kernels combine multiple individual operators normally used to implement Transformers to provide a more efficient implementation, and (2) take advantage of sparsity in the inputs to avoid performing unnecessary operations on padding tokens. 
Padding tokens frequently account for a large fraction of input batches in many Transformer models used for Natural Language Processing.", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}} +{"page_content": "## Backwards compatibility\n\nAdvantageously, **no model changes are necessary to benefit from the performance boost offered by BetterTransformer.** To benefit from fast path execution, inputs and operating conditions must satisfy some access conditions (see below). While the internal implementation of Transformer APIs has changed, PyTorch 1.12 maintains strict compatibility with Transformer modules shipped in previous versions, enabling PyTorch users to use models created and trained with previous PyTorch releases while benefiting from BetterTransformer improvements.\n\nIn addition to enabling the PyTorch nn.Modules, BetterTransformer provides improvements for PyTorch libraries. Performance benefits will become available through two different enablement paths:", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}} {"page_content": "1. Transparent acceleration: Current users of PyTorch nn.Modules such as [MultiHeadAttention](https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html) as well as higher-level Transformer components will benefit from the improved performance of the new nn.Modules automatically. An example of this is the [visual transformer (ViT)](https://arxiv.org/abs/2010.11929) implementation used in the torchvision library ([code link](https://github.com/pytorch/vision/blob/87cde716b7f108f3db7b86047596ebfad1b88380/torchvision/models/vision_transformer.py#L103)).", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}} {"page_content": "2. Torchtext library acceleration: As part of this project, we have optimized Torchtext to build on the PyTorch core API to benefit from BetterTransformer enhancements while maintaining strict and transparent compatibility with previous library versions and models trained with previous Torchtext versions. Using PyTorch Transformers in Torchtext also ensures that Torchtext will benefit from expected future enhancements to the PyTorch Transformer implementation.\n\n## Taking advantage of the Fastpath\n\nBetterTransformer is a fastpath for the PyTorch Transformer API. The fastpath is a native, specialized implementation of key Transformer functions for CPU and GPU that applies to common Transformer use cases.", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}} {"page_content": "To take advantage of input sparsity (i.e. padding) in accelerating your model (see Figure 2), set the keyword argument `enable_nested_tensor=True` when instantiating a [TransformerEncoder](https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoder.html) and pass in the `src_key_padding_mask` argument (which denotes padding tokens) during inference. 
This requires the padding mask to be contiguous, which is the typical case.", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}} @@ -347,7 +347,7 @@ {"page_content": "MI200-89 \u2013 PyTorch Inductor mode HuggingFace Transformers training speedup, running the standard PyTorch 2.0 test suite, over PyTorch eager-mode comparison based on AMD internal testing on a single GCD as of 3/10/2023 using a 2P AMD EPYC\u2122 7763 production server with 4x AMD Instinct\u2122 MI250 (128GB HBM2e) 560W GPUs with Infinity Fabric\u2122 technology; host ROCm\u2122 5.3, guest ROCm\u2122 5.4.4, PyTorch 2.0.0, Triton 2.0. Server manufacturers may vary configurations, yielding different results. Performance may vary based on factors including use of latest drivers and optimizations. \n\n\u00a9 2023 Advanced Micro Devices, Inc. All rights reserved. AMD, the AMD Arrow logo, AMD CDNA, AMD Instinct, EPYC, Radeon, ROCm, Ryzen, and combinations thereof are trademarks of Advanced Micro Devices, Inc. Other product names used in this publication are for identification purposes only and may be trademarks of their respective owners.", "metadata": {"source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'Announcing PyTorch Developer Day 2021'\nauthor: Team PyTorch\nfeatured-img: 'assets/images/ptdevday21.gif'\n---\n\nWe are excited to announce PyTorch Developer Day (#PTD2), taking place virtually from December 1 & 2, 2021. Developer Day is designed for developers and users to discuss core technical developments, ideas, and roadmaps. \n\n
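{"page_content": "For example, a minimal sketch of fastpath-eligible inference (the model size, batch shape, and mask below are made up) might look as follows; the fastpath applies during inference with gradients disabled:\n\n```python\nimport torch\nimport torch.nn as nn\n\nlayer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)\nencoder = nn.TransformerEncoder(layer, num_layers=6, enable_nested_tensor=True).eval()\n\nsrc = torch.rand(32, 128, 512)                         # (batch, seq, d_model)\npadding_mask = torch.zeros(32, 128, dtype=torch.bool)\npadding_mask[:, 100:] = True                           # mark trailing positions as padding\n\nwith torch.inference_mode():\n    out = encoder(src, src_key_padding_mask=padding_mask)\n```", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}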
\n \n
\n\n## Event Details \n**Technical Talks Live Stream - December 1, 2021**\n\nJoin us for technical talks on a variety of topics, including updates to the core framework, new tools and libraries to support development across a variety of domains, responsible AI and industry use cases. All talks will take place on December 1 and will be live streamed on PyTorch channels.", "metadata": {"source": "https://pytorch.org/blog/pytorch-developer-day-2021/", "category": "pytorch blogs"}} {"page_content": "Stay up to date by following us on our social channels: [Twitter](https://twitter.com/PyTorch), [Facebook](https://facebook.com/PyTorch), or [LinkedIn](https://www.linkedin.com/company/pytorch).\n\n**Poster Exhibition & Networking - December 2, 2021**\n\nOn the second day, we\u2019ll be hosting an online poster exhibition on Gather.Town. There will be opportunities to meet the authors and learn more about their PyTorch projects as well as network with the community. This poster and networking event is limited to people composed of PyTorch maintainers and contributors, long-time stakeholders and experts in areas relevant to PyTorch\u2019s future. Conversations from the networking event will strongly shape the future of PyTorch. As such, invitations are required to attend the networking event. \n\nApply for an invitation to the networking event by clicking [here](https://pytorchdeveloperday.fbreg.com/).\n\n## Call for Content Now Open", "metadata": {"source": "https://pytorch.org/blog/pytorch-developer-day-2021/", "category": "pytorch blogs"}} -{"page_content": "Submit your poster abstracts today! Please send us the title and brief summary of your project, tools and libraries that could benefit PyTorch researchers in academia and industry, application developers, and ML engineers for consideration. The focus must be on academic papers, machine learning research, or open-source projects related to PyTorch development, Responsible AI or Mobile. Please no sales pitches. **Deadline for submission is September 24, 2021**. \n\nYou can submit your poster abstract during your application & registration process [here](https://pytorchdeveloperday.fbreg.com/apply).\n\nVisit the [event website](https://pytorchdeveloperday.fbreg.com/) for more information and we look forward to having you at PyTorch Developer Day. For any questions about the event, contact [pytorch@fbreg.com](mailto:pytorch@fbreg.com).", "metadata": {"source": "https://pytorch.org/blog/pytorch-developer-day-2021/", "category": "pytorch blogs"}} +{"page_content": "## Call for Content Now Open\n\nSubmit your poster abstracts today! Please send us the title and brief summary of your project, tools and libraries that could benefit PyTorch researchers in academia and industry, application developers, and ML engineers for consideration. The focus must be on academic papers, machine learning research, or open-source projects related to PyTorch development, Responsible AI or Mobile. Please no sales pitches. **Deadline for submission is September 24, 2021**. \n\nYou can submit your poster abstract during your application & registration process [here](https://pytorchdeveloperday.fbreg.com/apply).\n\nVisit the [event website](https://pytorchdeveloperday.fbreg.com/) for more information and we look forward to having you at PyTorch Developer Day. 
For any questions about the event, contact [pytorch@fbreg.com](mailto:pytorch@fbreg.com).", "metadata": {"source": "https://pytorch.org/blog/pytorch-developer-day-2021/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'Efficient PyTorch: Tensor Memory Format Matters'\nauthor: 'Dhruv Matani, Suraj Subramanian'\nfeatured-img: ''\n---\n\nEnsuring the right memory format for your inputs can significantly impact the running time of your PyTorch vision models. When in doubt, choose a Channels Last memory format.\n\nWhen dealing with vision models in PyTorch that accept multimedia (for example image Tensorts) as input, the Tensor\u2019s memory format can significantly impact **the inference execution speed of your model on mobile platforms when using the CPU backend along with XNNPACK**. This holds true for training and inference on server platforms as well, but latency is particularly critical for mobile devices and users.\n\n", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}} {"page_content": "## Outline of this article\n1. Deep Dive into matrix storage/memory representation in C++. Introduction to [Row and Column major order](https://en.wikipedia.org/wiki/Row-_and_column-major_order).\n2. Impact of looping over a matrix in the same or different order as the storage representation, along with an example.\n3. Introduction to Cachegrind; a tool to inspect the cache friendliness of your code.\n4. Memory formats supported by PyTorch Operators.\n5. Best practices example to ensure efficient model execution with XNNPACK optimizations\n\n## Matrix Storage Representation in C++\n\nImages are fed into PyTorch ML models as multi-dimensional Tensors. These Tensors have specific memory formats. To understand this concept better, let\u2019s take a look at how a 2-d matrix may be stored in memory.", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}} {"page_content": "Broadly speaking, there are 2 main ways of efficiently storing multi-dimensional data in memory.\n1. **Row Major Order:** In this format, the matrix is stored in row order, with each row stored before the next row in memory. I.e. row N comes before row N+1.\n2. **Column Major Order:** In this format, the matrix is stored in column-order, with each column stored before the next column in memory. I.e. column N comes before column N+1.\n\nYou can see the differences graphically below.\n\n

\nC++ stores multi-dimensional data in row-major format.\n

\n\n## Efficiently accessing elements of a 2d matrix\n\nSimilar to the storage format, there are 2 ways to access data in a 2d matrix.", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}} @@ -375,22 +375,22 @@ {"page_content": "## 2. Background\n\nEmbedding tables are ubiquitous in recommendation systems. Section 3 will discuss three FX transformations that optimize accesses to embedding tables. In this section, we provide some background on FX (Section 2.1) and embedding tables (Section 2.2).\n\n### 2.1 FX\n\nFigure 1 is a simple example adopted from [3] which illustrates using FX to transform a PyTorch program. It contains three steps: (1) capturing the graph from a program, (2) modifying the graph (in this example, all uses of RELU are replaced by GELU), and (3) generating a new program from the modified graph.\n\n
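{"page_content": "The same distinction is visible from PyTorch itself: a freshly created tensor is stored row-major (contiguous), and its strides show which dimension is cheapest to walk along. A small sketch with arbitrary shapes:\n\n```python\nimport torch\n\nx = torch.arange(12).reshape(3, 4)   # stored row-major (contiguous) by default\nprint(x.stride())                    # (4, 1): step 1 element along a row, 4 elements down a column\nprint(x.is_contiguous())             # True\nprint(x.t().stride())                # (1, 4): the transposed view is effectively column-major\nprint(x.t().is_contiguous())         # False\n```", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}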

\n\n

\n\n**Figure 1: An FX example which replaces all uses of RELU by GELU in a PyTorch module.**\n\nThe FX API [4] provides many more functionalities for inspecting and transforming PyTorch program graphs.\n\n### 2.2 Embedding Tables\n\n

\n\n

", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}} {"page_content": "**Figure 2: Illustration of an embedding table for a sparse feature with batch size = 1**\n\nIn a recommendation system, sparse features (e.g., User ID, Story ID) are represented by embedding tables. An embedding table E is an HxD matrix, where H is the hash size, D is the embedding dimension. Each row of E is a vector of floats. Feature hashing [5] is used to map a sparse feature to a list of indices to E, say [S1,S2, \u2026, Sk], where 0<=Si<H. Its output value is computed as f(E[S1], E[S2], \u2026, E[Sk]), where E[Si] is the vector at row Si, and f is called the pooling function, which is typically one of the following functions: sum, average, maximum. See Figure 2 for an illustration.", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}} {"page_content": "To fully utilize the GPU, sparse features are usually processed in a batch. Each entity in a batch has its own list of indices. If a batch has B entities, a naive representation has B lists of indices. A more compact representation is to combine the B lists of indices into a single list of indices and add a list of the lengths of indices (one length for each entity in the batch). For example, if a batch has 3 entities whose lists of indices are as follows:\n\n- Entity 1: indices = [10, 20]\n- Entity 2: indices = [5, 9, 77, 81]\n- Entity 3: indices = [15, 20, 45]\n\nThen the indices and lengths for the entire batch will be:\n\n- Indices = [10, 20, 5, 9, 77, 81, 15, 20, 45]\n- Lengths = [2, 4, 3]\n\nAnd the output of the embedding table lookup for the whole batch is a BxD matrix.\n\n## 3. Three FX Transformations", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}} -{"page_content": "We have developed three FX transformations that accelerate accesses to embedding tables. Section 3.1 discusses a transformation that combines multiple small input tensors into a single big tensor; Section 3.2 a transformation that fuses multiple, parallel compute chains into a single compute chain; and Section 3.3 a transformation that overlaps communication with computation.\n\n### 3.1 Combining Input Sparse Features", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}} -{"page_content": "Recall that an input sparse feature in a batch is represented by two lists: a list of indices and a list of B lengths, where B is the batch size. In PyTorch, these two lists are implemented as two tensors. When a PyTorch model is run on a GPU, embedding tables are commonly stored in the GPU memory (which is closer to the GPU and has much higher read/write bandwidth than the CPU memory). To use an input sparse feature, its two tensors need to be first copied from CPU to GPU. Nevertheless, per host-to-device memory copying requires a kernel launch, which is relatively expensive compared to the actual data transfer time. 
If a model uses many input sparse features, this copying could become a performance bottleneck (e.g., 1000 input sparse features would require copying 2000 tensors from host to device).\n\nAn optimization that reduces the number of host-to-device memcpy is to combine multiple input sparse features before sending them to the device. For instance, given the following three input features:", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}} -{"page_content": "- Feature_A: indices = [106, 211, 7], lengths = [2, 1]\n- Feature_B: indices = [52, 498, 616, 870, 1013], lengths = [3, 2]\n- Feature_C: indices = [2011, 19, 351, 790], lengths = [1, 3]\n\nThe combined form is:\n\n- Features_A_B_C: indices = [106, 211, 7, 52, 498, 616, 870, 1013, 2011, 19, 351, 790], lengths = [2, 1, 3, 2, 1, 3]\n\nSo, instead of copying 3x2=6 tensors from host to device, we only need to copy 2 tensors.\n\nFigure 3(b) describes an implementation of this optimization, which has two components:\n\n- On the CPU side: The input pipeline is modified to combine all the indices of sparse features into a single tensor and similarly all the lengths into another tensor. Then the two tensors are copied to the GPU.\n- On the GPU side: Using FX, we insert a Permute_and_Split op into the model graph to recover the indices and lengths tensors of individual features from the combined tensors, and route them to the corresponding nodes downstream.", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}} -{"page_content": "

\n\n

\n\n(a). **Without the optimization**\n\n

\n\n

\n\n(b). **With the optimization**\n\n**Figure 3: Combining input sparse features**\n\n### 3.2 Horizontal fusion of computation chains started with accesses to embedding tables", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}} +{"page_content": "## 3. Three FX Transformations\n\nWe have developed three FX transformations that accelerate accesses to embedding tables. Section 3.1 discusses a transformation that combines multiple small input tensors into a single big tensor; Section 3.2 a transformation that fuses multiple, parallel compute chains into a single compute chain; and Section 3.3 a transformation that overlaps communication with computation.\n\n### 3.1 Combining Input Sparse Features", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}} +{"page_content": "### 3.1 Combining Input Sparse Features\n\nRecall that an input sparse feature in a batch is represented by two lists: a list of indices and a list of B lengths, where B is the batch size. In PyTorch, these two lists are implemented as two tensors. When a PyTorch model is run on a GPU, embedding tables are commonly stored in the GPU memory (which is closer to the GPU and has much higher read/write bandwidth than the CPU memory). To use an input sparse feature, its two tensors need to be first copied from CPU to GPU. Nevertheless, per host-to-device memory copying requires a kernel launch, which is relatively expensive compared to the actual data transfer time. If a model uses many input sparse features, this copying could become a performance bottleneck (e.g., 1000 input sparse features would require copying 2000 tensors from host to device).", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}} +{"page_content": "An optimization that reduces the number of host-to-device memcpy is to combine multiple input sparse features before sending them to the device. For instance, given the following three input features:\n\n- Feature_A: indices = [106, 211, 7], lengths = [2, 1]\n- Feature_B: indices = [52, 498, 616, 870, 1013], lengths = [3, 2]\n- Feature_C: indices = [2011, 19, 351, 790], lengths = [1, 3]\n\nThe combined form is:\n\n- Features_A_B_C: indices = [106, 211, 7, 52, 498, 616, 870, 1013, 2011, 19, 351, 790], lengths = [2, 1, 3, 2, 1, 3]\n\nSo, instead of copying 3x2=6 tensors from host to device, we only need to copy 2 tensors.\n\nFigure 3(b) describes an implementation of this optimization, which has two components:", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}} +{"page_content": "- On the CPU side: The input pipeline is modified to combine all the indices of sparse features into a single tensor and similarly all the lengths into another tensor. Then the two tensors are copied to the GPU.\n- On the GPU side: Using FX, we insert a Permute_and_Split op into the model graph to recover the indices and lengths tensors of individual features from the combined tensors, and route them to the corresponding nodes downstream.\n\n

\n\n

\n\n(a). **Without the optimization**\n\n

\n\n

\n\n(b). **With the optimization**\n\n**Figure 3: Combining input sparse features**\n\n### 3.2 Horizontal fusion of computation chains started with accesses to embedding tables", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}} {"page_content": "In a production model, it is fairly common to have 10s of embedding tables residing on each GPU. For performance reasons, lookups to these tables are grouped together so that their outputs are concatenated in a single big tensor (see the red part in Figure 4(a)). To apply computations to individual feature outputs, a Split op is used to divide the big tensors into N smaller tensors (where N is the number of features) and then the desired computations are applied to each tensor. This is shown in Figure 4(a), where the computation applied to each feature output O is Tanh(LayerNorm(O)). All the computation results are concatenated back to a big tensor, which is then passed to downstream ops (Op1 in Figure 4(a)).", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}} {"page_content": "The main runtime cost here is the GPU kernel launch overhead. For instance, the number of GPU kernel launches in Figure 4(a) is 2\\*N + 3 (each oval in the figure is a GPU kernel). This could become a performance issue because execution times of LayerNorm and Tanh on the GPU are short compared to their kernel launch times. In addition, the Split op may create an extra copy of the embedding output tensor, consuming additional GPU memory.", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}} {"page_content": "We use FX to implement an optimization called horizontal fusion which dramatically reduces the number of GPU kernel launches (in this example, the optimized number of GPU kernel launches is 5, see Figure 4(b)). Instead of doing an explicit Split, we use the Add_middle_dim op to reshape the 2D embedding tensor of shape (B, NxD) to a 3D tensor of shape (B, N, D). Then a single LayerNorm is applied to the last dimension of it. Then a single Tanh is applied to the result of the LayerNorm. At the end, we use the Remove_middle_dim op to reshape the Tanh\u2019s result back to a 2D tensor. In addition, since Add_middle_dim and Remove_middle_dim only reshape the tensor without creating an extra copy, the amount of GPU memory consumption could be reduced as well.\n\n

\n\n

\n\n(a). **Without the optimization**\n\n

\n\n

\n\n(b). **With the optimization**\n\n**Figure 4: Horizontal fusion**", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}} -{"page_content": "### 3.3 Overlapping Computation with Communication\n\nTraining of a production recommendation model is typically done on a distributed GPU system. Since the capacity of the device memory per GPU is not big enough to hold all the embedding tables in the model, they need to be distributed among the GPUs.\n\nWithin a training step, a GPU needs to read/write feature values from/to the embedding tables on the other GPUs. This is known as all-to-all communication [6] and can be a major performance bottleneck.\n\nWe use FX to implement a transformation that can overlap computation with all-to-all communication. Figure 5(a) shows the example of a model graph which has embedding table accesses (EmbeddingAllToAll) and other ops. Without any optimization, they are sequentially executed on a GPU stream, as shown in Figure 5(b). Using FX, we break EmbeddingAllToAll into EmbeddingAllToAll_Request and EmbeddingAllToAll_Wait, and schedule independent ops in between them.", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}} +{"page_content": "**Figure 4: Horizontal fusion**\n\n### 3.3 Overlapping Computation with Communication\n\nTraining of a production recommendation model is typically done on a distributed GPU system. Since the capacity of the device memory per GPU is not big enough to hold all the embedding tables in the model, they need to be distributed among the GPUs.\n\nWithin a training step, a GPU needs to read/write feature values from/to the embedding tables on the other GPUs. This is known as all-to-all communication [6] and can be a major performance bottleneck.\n\nWe use FX to implement a transformation that can overlap computation with all-to-all communication. Figure 5(a) shows the example of a model graph which has embedding table accesses (EmbeddingAllToAll) and other ops. Without any optimization, they are sequentially executed on a GPU stream, as shown in Figure 5(b). Using FX, we break EmbeddingAllToAll into EmbeddingAllToAll_Request and EmbeddingAllToAll_Wait, and schedule independent ops in between them.", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}} {"page_content": "
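{"page_content": "In plain PyTorch terms, the horizontal fusion amounts to the following sketch (using view as a stand-in for the Add_middle_dim/Remove_middle_dim ops, and a single shared LayerNorm for simplicity; the shapes are made up):\n\n```python\nimport torch\n\nB, N, D = 1024, 40, 64\npooled = torch.randn(B, N * D)        # concatenated embedding outputs of N features\nlayer_norm = torch.nn.LayerNorm(D)\n\n# Unfused: one Split plus a LayerNorm/Tanh pair per feature (roughly 2*N + 3 kernels)\nunfused = torch.cat([torch.tanh(layer_norm(t)) for t in pooled.split(D, dim=1)], dim=1)\n\n# Horizontally fused: reshape to (B, N, D), apply LayerNorm/Tanh once, reshape back\nfused = torch.tanh(layer_norm(pooled.view(B, N, D))).view(B, N * D)\n\ntorch.testing.assert_close(unfused, fused)\n```", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}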

\n\n

\n\n**(a) Model graph**\n\n

\n\n

\n\n**(b) Original execution order**\n\n

\n\n

\n\n**(c) Optimized execution order**\n\n**Figure 5: Overlapping Computation with Communication**\n\n### 3.4 Summary\n\nTable 1 summarizes the optimizations discussed in this section and the corresponding performance bottlenecks addressed.\n\n| Optimization | Performance Bottleneck Addressed |\n|---------------------------------------------|----------------------------------|\n| Combining Input Sparse Features | Host-to-device memory copy |\n| Horizontal fusion | GPU kernel launch overhead |\n| Overlapping Computation with Communication | Embedding all-to-all access time |", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
Facebook, Microsoft, Uber, and other organizations across industries are increasingly using it as the foundation for their most important machine learning (ML) research and production workloads.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}} {"page_content": "We are now advancing the platform further with the release of PyTorch 1.3, which includes experimental support for features such as seamless model deployment to mobile devices, model quantization for better performance at inference time, and front-end improvements, like the ability to name tensors and create clearer code with less need for inline comments. We\u2019re also launching a number of additional tools and libraries to support model interpretability and bringing multimodal research to production.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}} {"page_content": "Additionally, we\u2019ve collaborated with Google and Salesforce to add broad support for Cloud Tensor Processing Units, providing a significantly accelerated option for training large-scale deep neural networks. [Alibaba Cloud](https://data.aliyun.com/bigdata/pai-pytorch?spm=5176.12825654.a9ylfrljh.d112.7b652c4ayuOO4M&scm=20140722.1068.1.1098&aly_as=-PvJ5e4c) also joins Amazon Web Services, Microsoft Azure, and Google Cloud as supported cloud platforms for PyTorch users. You can get started now at [pytorch.org](https://pytorch.org/get-started/locally/).\n\n# PyTorch 1.3\n\nThe 1.3 release of PyTorch brings significant new features, including experimental support for mobile device deployment, eager mode quantization at 8-bit integer, and the ability to name tensors. With each of these enhancements, we look forward to additional contributions and improvements from the PyTorch community.\n\n## Named tensors (experimental)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}} -{"page_content": "Cornell University\u2019s [Sasha Rush has argued](http://nlp.seas.harvard.edu/NamedTensor) that, despite its ubiquity in deep learning, the traditional implementation of tensors has significant shortcomings, such as exposing private dimensions, broadcasting based on absolute position, and keeping type information in documentation. He proposed named tensors as an alternative approach.\n\nToday, we name and access dimensions by comment:\n\n```python\n# Tensor[N, C, H, W]\n images = torch.randn(32, 3, 56, 56)\n images.sum(dim=1)\n images.select(dim=1, index=0)\n```\n\nBut naming explicitly leads to more readable and maintainable code:\n\n```python\nNCHW = [\u2018N\u2019, \u2018C\u2019, \u2018H\u2019, \u2018W\u2019]\n images = torch.randn(32, 3, 56, 56, names=NCHW)\n images.sum('C')\n images.select('C', index=0)\n```\n\n## Quantization (experimental)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}} -{"page_content": "It\u2019s important to make efficient use of both server-side and on-device compute resources when developing ML applications. To support more efficient deployment on servers and edge devices, PyTorch 1.3 now supports 8-bit model quantization using the familiar eager mode Python API. 
Quantization refers to techniques used to perform computation and storage at reduced precision, such as 8-bit integer. This currently experimental feature includes support for post-training quantization, dynamic quantization, and quantization-aware training. It leverages the [FBGEMM](https://github.com/pytorch/FBGEMM) and [QNNPACK](https://github.com/pytorch/QNNPACK) state-of-the-art quantized kernel back ends, for x86 and ARM CPUs, respectively, which are integrated with PyTorch and now share a common API.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}} +{"page_content": "## Named tensors (experimental)\n\nCornell University\u2019s [Sasha Rush has argued](http://nlp.seas.harvard.edu/NamedTensor) that, despite its ubiquity in deep learning, the traditional implementation of tensors has significant shortcomings, such as exposing private dimensions, broadcasting based on absolute position, and keeping type information in documentation. He proposed named tensors as an alternative approach.\n\nToday, we name and access dimensions by comment:\n\n```python\n# Tensor[N, C, H, W]\n images = torch.randn(32, 3, 56, 56)\n images.sum(dim=1)\n images.select(dim=1, index=0)\n```\n\nBut naming explicitly leads to more readable and maintainable code:\n\n```python\nNCHW = [\u2018N\u2019, \u2018C\u2019, \u2018H\u2019, \u2018W\u2019]\n images = torch.randn(32, 3, 56, 56, names=NCHW)\n images.sum('C')\n images.select('C', index=0)\n```\n\n## Quantization (experimental)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}} +{"page_content": "## Quantization (experimental)\n\nIt\u2019s important to make efficient use of both server-side and on-device compute resources when developing ML applications. To support more efficient deployment on servers and edge devices, PyTorch 1.3 now supports 8-bit model quantization using the familiar eager mode Python API. Quantization refers to techniques used to perform computation and storage at reduced precision, such as 8-bit integer. This currently experimental feature includes support for post-training quantization, dynamic quantization, and quantization-aware training. It leverages the [FBGEMM](https://github.com/pytorch/FBGEMM) and [QNNPACK](https://github.com/pytorch/QNNPACK) state-of-the-art quantized kernel back ends, for x86 and ARM CPUs, respectively, which are integrated with PyTorch and now share a common API.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}} {"page_content": "To learn more about the design and architecture, check out the API docs [here](https://pytorch.org/docs/master/quantization.html), and get started with any of the supported techniques using the tutorials available [here](https://pytorch.org/tutorials/).\n\n## PyTorch mobile (experimental)\n\nRunning ML on edge devices is growing in importance as applications continue to demand lower latency. It is also a foundational element for privacy-preserving techniques such as federated learning. To enable more efficient on-device ML, PyTorch 1.3 now supports an end-to-end workflow from Python to deployment on iOS and Android.\n\nThis is an early, experimental release, optimized for end-to-end development. 
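{"page_content": "As a small, hedged example of the dynamic quantization path (the toy model below is illustrative), the Linear layers of a float model can be converted in a single call:\n\n```python\nimport torch\n\nmodel = torch.nn.Sequential(\n    torch.nn.Linear(128, 256),\n    torch.nn.ReLU(),\n    torch.nn.Linear(256, 10),\n).eval()\n\n# Weights are stored as int8; activations are quantized on the fly at inference time\nquantized = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)\nout = quantized(torch.randn(1, 128))\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}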
Coming releases will focus on:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}} {"page_content": "* Optimization for size: Build level optimization and selective compilation depending on the operators needed for user applications (i.e., you pay binary size for only the operators you need)\n* Performance: Further improvements to performance and coverage on mobile CPUs and GPUs\n* High level API: Extend mobile native APIs to cover common preprocessing and integration tasks needed for incorporating ML in mobile applications. e.g. Computer vision and NLP\n\nLearn more or get started on Android or iOS [here](http://pytorch.org/mobile).\n\n# New tools for model interpretability and privacy\n\n## Captum", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}} {"page_content": "## Captum\n\nAs models become ever more complex, it is increasingly important to develop new methods for model interpretability. To help address this need, we\u2019re launching Captum, a tool to help developers working in PyTorch understand why their model generates a specific output. Captum provides state-of-the-art tools to understand how the importance of specific neurons and layers and affect predictions made by the models. Captum\u2019s algorithms include integrated gradients, conductance, SmoothGrad and VarGrad, and DeepLift.\n\nThe example below shows how to apply model interpretability algorithms on a pretrained ResNet model and then visualize the attributions for each pixel by overlaying them on the image.\n\n```python\nnoise_tunnel = NoiseTunnel(integrated_gradients)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}} @@ -398,8 +398,8 @@ {"page_content": "## CrypTen\n\nPractical applications of ML via cloud-based or machine-learning-as-a-service (MLaaS) platforms pose a range of security and privacy challenges. In particular, users of these platforms may not want or be able to share unencrypted data, which prevents them from taking full advantage of ML tools. To address these challenges, the ML community is exploring a number of technical approaches, at various levels of maturity. These include homomorphic encryption, secure multiparty computation, trusted execution environments, on-device computation, and differential privacy.\n\nTo provide a better understanding of how some of these technologies can be applied, we are releasing CrypTen, a new community-based research platform for taking the field of privacy-preserving ML forward. Learn more about CrypTen [here](https://ai.facebook.com/blog/crypten-a-new-research-tool-for-secure-machine-learning-with-pytorch). It is available on GitHub [here](https://github.com/facebookresearch/CrypTen).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}} {"page_content": "# Tools for multimodal AI systems\n\nDigital content is often made up of several modalities, such as text, images, audio, and video. For example, a single public post might contain an image, body text, a title, a video, and a landing page. 
Even one particular component may have more than one modality, such as a video that contains both visual and audio signals, or a landing page that is composed of images, text, and HTML sources.\n\nThe ecosystem of tools and libraries that work with PyTorch offer enhanced ways to address the challenges of building multimodal ML systems. Here are some of the latest libraries launching today:\n\n## Detectron2", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}} {"page_content": "## Detectron2\n\nObject detection and segmentation are used for tasks ranging from autonomous vehicles to content understanding for platform integrity. To advance this work, Facebook AI Research (FAIR) is releasing Detectron2, an object detection library now implemented in PyTorch. Detectron2 provides support for the latest models and tasks, increased flexibility to aid computer vision research, and improvements in maintainability and scalability to support production use cases.\n\nDetectron2 is available [here](https://github.com/facebookresearch/detectron2) and you can learn more [here](https://ai.facebook.com/blog/-detectron2-a-pytorch-based-modular-object-detection-library-).\n\n## Speech extensions to fairseq", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}} -{"page_content": "Language translation and audio processing are critical components in systems and applications such as search, translation, speech, and assistants. There has been tremendous progress in these fields recently thanks to the development of new architectures like transformers, as well as large-scale pretraining methods. We\u2019ve extended fairseq, a framework for sequence-to-sequence applications such as language translation, to include support for end-to-end learning for speech and audio recognition tasks.These extensions to fairseq enable faster exploration and prototyping of new speech research ideas while offering a clear path to production.\n\nGet started with fairseq [here](https://github.com/pytorch/fairseq/tree/master/examples/speech_recognition).\n\n# Cloud provider and hardware ecosystem support", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}} -{"page_content": "Cloud providers such as Amazon Web Services, Microsoft Azure, and Google Cloud provide extensive support for anyone looking to develop ML on PyTorch and deploy in production. We\u2019re excited to share the general availability of Google Cloud TPU support and a newly launched integration with Alibaba Cloud. We\u2019re also expanding hardware ecosystem support.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}} +{"page_content": "## Speech extensions to fairseq\n\nLanguage translation and audio processing are critical components in systems and applications such as search, translation, speech, and assistants. There has been tremendous progress in these fields recently thanks to the development of new architectures like transformers, as well as large-scale pretraining methods. 
We\u2019ve extended fairseq, a framework for sequence-to-sequence applications such as language translation, to include support for end-to-end learning for speech and audio recognition tasks. These extensions to fairseq enable faster exploration and prototyping of new speech research ideas while offering a clear path to production.\n\nGet started with fairseq [here](https://github.com/pytorch/fairseq/tree/master/examples/speech_recognition).\n\n# Cloud provider and hardware ecosystem support", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
Here are some recent examples:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}} {"page_content": "* Mila SpeechBrain aims to provide an open source, all-in-one speech toolkit based on PyTorch. The goal is to develop a single, flexible, user-friendly toolkit that can be used to easily develop state-of-the-art systems for speech recognition (both end to end and HMM-DNN), speaker recognition, speech separation, multi-microphone signal processing (e.g., beamforming), self-supervised learning, and many others. [Learn more](https://speechbrain.github.io/)\n* SpaCy is a new wrapping library with consistent and easy-to-use interfaces to several models, in order to extract features to power NLP pipelines. Support is provided for via spaCy\u2019s standard training API. The library also calculates an alignment so the transformer features can be related back to actual words instead of just wordpieces. [Learn more](https://explosion.ai/blog/spacy-pytorch-transformers)\n* HuggingFace PyTorch-Transformers (formerly known as pytorch-pretrained-bert is a library of state-of-the-art pretrained models for Natural Language Processing (NLP). The library currently contains PyTorch implementations, pretrained model weights, usage scripts, and conversion utilities for models such as BERT, GPT-2, RoBERTa, and DistilBERT. It has also grown quickly, with more than 13,000 GitHub stars and a broad set of users. [Learn more](https://github.com/huggingface/transformers)\n* PyTorch Lightning is a Keras-like ML library for PyTorch. It leaves core training and validation logic to you and automates the rest. Reproducibility is a crucial requirement for many fields of research, including those based on ML techniques. As the number of research papers submitted to arXiv and conferences skyrockets into the tens of thousands, scaling reproducibility becomes difficult. [Learn more](https://github.com/williamFalcon/pytorch-lightning).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}} @@ -410,7 +410,7 @@ {"page_content": "Summary:\n- **TorchVision** - Added multi-weight support API, new architectures, model variants, and pretrained weight. See the release notes [here](https://github.com/pytorch/vision/releases).\n- **TorchAudio** - Introduced beta features including a streaming API, a CTC beam search decoder, and new beamforming modules and methods. See the release notes [here](https://github.com/pytorch/audio/releases).\n- **TorchText** - Extended support for scriptable BERT tokenizer and added datasets for GLUE benchmark. See the release notes [here](https://github.com/pytorch/text/releases).\n- **TorchRec** - Added EmbeddingModule benchmarks, examples for TwoTower Retrieval, inference and sequential embeddings, metrics, improved planner and demonstrated integration with production components. See the release notes [here](https://github.com/pytorch/torchrec/releases).\n- **TorchX** - Launch PyTorch trainers developed on local workspaces onto five different types of schedulers. 
See the release notes [here](https://github.com/pytorch/torchx/blob/main/CHANGELOG.md?plain=1#L3).\n- **FBGemm** - Added and improved kernels for Recommendation Systems inference workloads, including table batched embedding bag, jagged tensor operations, and other special-case optimizations.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "## TorchVision v0.13\n\n### Multi-weight support API\n\nTorchVision v0.13 offers a new [Multi-weight support API](https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/) for loading different weights to the existing model builder methods:\n\n```python\nfrom torchvision.models import *\n\n# Old weights with accuracy 76.130%\nresnet50(weights=ResNet50_Weights.IMAGENET1K_V1)\n\n# New weights with accuracy 80.858%\nresnet50(weights=ResNet50_Weights.IMAGENET1K_V2)\n\n# Best available weights (currently alias for IMAGENET1K_V2)\n# Note that these weights may change across versions\nresnet50(weights=ResNet50_Weights.DEFAULT)\n\n# Strings are also supported\nresnet50(weights=\"IMAGENET1K_V2\")\n\n# No weights - random initialization\nresnet50(weights=None)\n```\n\nThe new API bundles along with the weights important details such as the preprocessing transforms and meta-data such as labels. Here is how to make the most out of it:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "```python\nfrom torchvision.io import read_image\nfrom torchvision.models import resnet50, ResNet50_Weights\n\nimg = read_image(\"test/assets/encode_jpeg/grace_hopper_517x606.jpg\")\n\n# Step 1: Initialize model with the best available weights\nweights = ResNet50_Weights.DEFAULT\nmodel = resnet50(weights=weights)\nmodel.eval()\n\n# Step 2: Initialize the inference transforms\npreprocess = weights.transforms()\n\n# Step 3: Apply inference preprocessing transforms\nbatch = preprocess(img).unsqueeze(0)\n\n# Step 4: Use the model and print the predicted category\nprediction = model(batch).squeeze(0).softmax(0)\nclass_id = prediction.argmax().item()\nscore = prediction[class_id].item()\ncategory_name = weights.meta[\"categories\"][class_id]\nprint(f\"{category_name}: {100 * score:.1f}%\")\n```\n\nYou can read more about the new API in the [docs](https://pytorch.org/vision/0.13/models.html). To provide your feedback, please use this dedicated [Github issue](https://github.com/pytorch/vision/issues/5088).\n\n### New architectures and model variants", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} -{"page_content": "#### Classification\n\nThe [Swin Transformer](https://arxiv.org/abs/2103.14030) and [EfficienetNetV2](https://arxiv.org/abs/2104.00298) are two popular classification models which are often used for downstream vision tasks. This release includes 6 pre-trained weights for their classification variants. Here is how to use the new models:\n\n```python\nimport torch\nfrom torchvision.models import *\n\nimage = torch.rand(1, 3, 224, 224)\nmodel = swin_t(weights=\"DEFAULT\").eval()\nprediction = model(image)\n\nimage = torch.rand(1, 3, 384, 384)\nmodel = efficientnet_v2_s(weights=\"DEFAULT\").eval()\nprediction = model(image)\n```\n\nIn addition to the above, we also provide new variants for existing architectures such as ShuffleNetV2, ResNeXt and MNASNet. 
The accuracies of all the new pre-trained models obtained on ImageNet-1K are seen below:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} +{"page_content": "### New architectures and model variants\n\n#### Classification\n\nThe [Swin Transformer](https://arxiv.org/abs/2103.14030) and [EfficienetNetV2](https://arxiv.org/abs/2104.00298) are two popular classification models which are often used for downstream vision tasks. This release includes 6 pre-trained weights for their classification variants. Here is how to use the new models:\n\n```python\nimport torch\nfrom torchvision.models import *\n\nimage = torch.rand(1, 3, 224, 224)\nmodel = swin_t(weights=\"DEFAULT\").eval()\nprediction = model(image)\n\nimage = torch.rand(1, 3, 384, 384)\nmodel = efficientnet_v2_s(weights=\"DEFAULT\").eval()\nprediction = model(image)\n```\n\nIn addition to the above, we also provide new variants for existing architectures such as ShuffleNetV2, ResNeXt and MNASNet. The accuracies of all the new pre-trained models obtained on ImageNet-1K are seen below:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "| **Model** | **Acc@1** | **Acc@5** |\n|--------------------------------|-----------|-----------|\n| swin_t | 81.474 | 95.776 |\n| swin_s | 83.196 | 96.36 |\n| swin_b | 83.582 | 96.64 |\n| efficientnet_v2_s | 84.228 | 96.878 |\n| efficientnet_v2_m | 85.112 | 97.156 |\n| efficientnet_v2_l | 85.808 | 97.788 |\n| resnext101_64x4d | 83.246 | 96.454 |\n| resnext101_64x4d (quantized) | 82.898 | 96.326 |\n| shufflenet_v2_x1_5 | 72.996 | 91.086 |\n| shufflenet_v2_x1_5 (quantized) | 72.052 | 0.700 |\n| shufflenet_v2_x2_0 | 76.230 | 93.006 |\n| shufflenet_v2_x2_0 (quantized) | 75.354 | 92.488 |\n| mnasnet0_75 | 71.180 | 90.496 |\n| mnas1_3 | 76.506 | 93.522 |", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "We would like to thank Hu Ye for contributing to TorchVision the Swin Transformer implementation.\n\n#### (BETA) Object Detection and Instance Segmentation\n\nWe have introduced 3 new model variants for RetinaNet, FasterRCNN and MaskRCNN that include several [post-paper architectural optimizations](https://github.com/pytorch/vision/pull/5444) and improved training recipes. All models can be used similarly:\n\n```python\nimport torch\nfrom torchvision.models.detection import *\n\nimages = [torch.rand(3, 800, 600)]\nmodel = retinanet_resnet50_fpn_v2(weights=\"DEFAULT\")\n# model = fasterrcnn_resnet50_fpn_v2(weights=\"DEFAULT\")\n# model = maskrcnn_resnet50_fpn_v2(weights=\"DEFAULT\")\nmodel.eval()\nprediction = model(images)\n```\n\nBelow we present the metrics of the new variants on COCO val2017. 
In parentheses we denote the improvement over the old variants:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}

\n \n

\n\n\nStreamReader is TorchAudio\u2019s new I/O API. It is backed by FFmpeg\u2020, and allows users to:\n- Decode audio and video formats, including MP4 and AAC\n- Handle input forms, such as local files, network protocols, microphones, webcams, screen captures and file-like objects\n- Iterate over and decode chunk-by-chunk, while changing the sample rate or frame rate\n- Apply audio and video filters, such as low-pass filter and image scaling\n- Decode video with Nvidia's hardware-based decoder (NVDEC)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "For usage details, please check out the [documentation](https://pytorch.org/audio/0.12.0/io.html#streamreader) and tutorials:\n- [Media Stream API - Pt.1](https://pytorch.org/audio/0.12.0/tutorials/streaming_api_tutorial.html)\n- [Media Stream API - Pt.2](https://pytorch.org/audio/0.12.0/tutorials/streaming_api2_tutorial.html)\n- [Online ASR with Emformer RNN-T](https://pytorch.org/audio/0.12.0/tutorials/online_asr_tutorial.html)\n- [Device ASR with Emformer RNN-T](https://pytorch.org/audio/0.12.0/tutorials/device_asr.html)\n- [Accelerated Video Decoding with NVDEC](https://pytorch.org/audio/0.12.0/hw_acceleration_tutorial.html)\n\n\u2020 To use StreamReader, FFmpeg libraries are required. Please install FFmpeg. The coverage of codecs depends on how these libraries are configured. TorchAudio official binaries are compiled to work with FFmpeg 4 libraries; FFmpeg 5 can be used if TorchAudio is built from source.\n\n### (BETA) CTC Beam Search Decoder", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} -{"page_content": "TorchAudio integrates the wav2letter CTC beam search decoder from [Flashlight](https://arxiv.org/pdf/2201.12465.pdf) ([GitHub](https://github.com/flashlight/flashlight)). The addition of this inference time decoder enables running end-to-end CTC ASR evaluation using TorchAudio utils.\n\nCustomizable lexicon and lexicon-free decoders are supported, and both are compatible with KenLM n-gram language models or without using a language model. TorchAudio additionally supports downloading token, lexicon, and pretrained KenLM files for the LibriSpeech dataset.\n\nFor usage details, please check out the [documentation](https://pytorch.org/audio/0.12.0/models.decoder.html#ctcdecoder) and [ASR inference tutorial](https://pytorch.org/audio/0.12.0/tutorials/asr_inference_with_ctc_decoder_tutorial.html).\n\n### (BETA) New Beamforming Modules and Methods", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} -{"page_content": "To improve flexibility in usage, the release adds two new beamforming modules under torchaudio.transforms: [SoudenMVDR](https://pytorch.org/audio/0.12.0/transforms.html#soudenmvdr) and [RTFMVDR](https://pytorch.org/audio/0.12.0/transforms.html#rtfmvdr). The main differences from [MVDR](https://pytorch.org/audio/0.11.0/transforms.html#mvdr) are:\n- Use power spectral density (PSD) and relative transfer function (RTF) matrices as inputs instead of time-frequency masks. 
The module can be integrated with neural networks that directly predict complex-valued STFT coefficients of speech and noise\n- Add \\'reference_channel\\' as an input argument in the forward method, to allow users to select the reference channel in model training or dynamically change the reference channel in inference", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} +{"page_content": "### (BETA) CTC Beam Search Decoder\n\nTorchAudio integrates the wav2letter CTC beam search decoder from [Flashlight](https://arxiv.org/pdf/2201.12465.pdf) ([GitHub](https://github.com/flashlight/flashlight)). The addition of this inference time decoder enables running end-to-end CTC ASR evaluation using TorchAudio utils.\n\nCustomizable lexicon and lexicon-free decoders are supported, and both are compatible with KenLM n-gram language models or without using a language model. TorchAudio additionally supports downloading token, lexicon, and pretrained KenLM files for the LibriSpeech dataset.\n\nFor usage details, please check out the [documentation](https://pytorch.org/audio/0.12.0/models.decoder.html#ctcdecoder) and [ASR inference tutorial](https://pytorch.org/audio/0.12.0/tutorials/asr_inference_with_ctc_decoder_tutorial.html).\n\n### (BETA) New Beamforming Modules and Methods", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} +{"page_content": "### (BETA) New Beamforming Modules and Methods\n\nTo improve flexibility in usage, the release adds two new beamforming modules under torchaudio.transforms: [SoudenMVDR](https://pytorch.org/audio/0.12.0/transforms.html#soudenmvdr) and [RTFMVDR](https://pytorch.org/audio/0.12.0/transforms.html#rtfmvdr). The main differences from [MVDR](https://pytorch.org/audio/0.11.0/transforms.html#mvdr) are:\n- Use power spectral density (PSD) and relative transfer function (RTF) matrices as inputs instead of time-frequency masks. The module can be integrated with neural networks that directly predict complex-valued STFT coefficients of speech and noise\n- Add \\'reference_channel\\' as an input argument in the forward method, to allow users to select the reference channel in model training or dynamically change the reference channel in inference", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "Besides the two modules, new function-level beamforming methods are added under torchaudio.functional. 
These include:\n- [psd](https://pytorch.org/audio/0.12.0/functional.html#psd)\n- [mvdr_weights_souden](https://pytorch.org/audio/0.12.0/functional.html#mvdr-weights-souden)\n- [mvdr_weights_rtf](https://pytorch.org/audio/0.12.0/functional.html#mvdr-weights-rtf)\n- [rtf_evd](https://pytorch.org/audio/0.12.0/functional.html#rtf-evd)\n- [rtf_power](https://pytorch.org/audio/0.12.0/functional.html#rtf-power)\n- [apply_beamforming](https://pytorch.org/audio/0.12.0/functional.html#apply-beamforming)\n\nFor usage details, please check out the documentation at [torchaudio.transforms](https://pytorch.org/audio/0.12.0/transforms.html#multi-channel) and [torchaudio.functional](https://pytorch.org/audio/0.12.0/functional.html#multi-channel) and the [Speech Enhancement with MVDR Beamforming tutorial](https://pytorch.org/audio/0.12.0/tutorials/mvdr_tutorial.html).\n\n## TorchText v0.13\n\n### Glue Datasets", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "We increased the number of datasets in TorchText from 22 to 30 by adding the remaining 8 datasets from the GLUE benchmark (SST-2 was already supported). The complete list of GLUE datasets is as follows:\n- [CoLA](https://nyu-mll.github.io/CoLA/) ([paper](https://arxiv.org/pdf/1805.12471.pdf)): Single sentence binary classification acceptability task\n- [SST-2](https://nlp.stanford.edu/sentiment/) ([paper](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf)): Single sentence binary classification sentiment task\n- [MRPC](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf) ([paper](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/I05-50025B15D.pdf)): Dual sentence binary classification paraphrase task\n- [QQP](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs): Dual sentence binary classification paraphrase task\n- [STS-B](https://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark) ([paper](https://aclanthology.org/S17-2001.pdf)): Single sentence to float regression sentence similarity task\n- [MNLI](https://cims.nyu.edu/~sbowman/multinli/) ([paper](https://cims.nyu.edu/~sbowman/multinli/paper.pdf)): Sentence ternary classification NLI task\n- [QNLI](https://gluebenchmark.com/) ([paper](https://arxiv.org/pdf/1804.07461.pdf)): Sentence binary classification QA and NLI tasks\n- [RTE](https://aclweb.org/aclwiki/Recognizing_Textual_Entailment) ([paper](https://arxiv.org/pdf/2010.03061.pdf)): Dual sentence binary classification NLI task\n- [WNLI](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html) ([paper](http://commonsensereasoning.org/2011/papers/Levesque.pdf)): Dual sentence binary classification coreference and NLI tasks", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "### Scriptable BERT Tokenizer\n\nTorchText has extended support for scriptable tokenizer by adding the WordPiece tokenizer used in BERT. It is one of the commonly used algorithms for splitting input text into sub-words units and was introduced in [Japanese and Korean Voice Search (Schuster et al., 2012)](https://static.googleusercontent.com/media/research.google.com/ja//pubs/archive/37842.pdf). \n\nTorchScriptabilty support would allow users to embed the BERT text-preprocessing natively in C++ without needing the support of python runtime. 
As TorchText now supports the CMAKE build system to natively link torchtext binaries with application code, users can easily integrate BERT tokenizers for deployment needs.\n\nFor usage details, please refer to the corresponding [documentation](https://pytorch.org/text/main/transforms.html#torchtext.transforms.BERTTokenizer).\n\n## TorchRec v0.2.0\n\n### EmbeddingModule + DLRM benchmarks", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} -{"page_content": "A set of [benchmarking tests](https://github.com/pytorch/torchrec/tree/main/benchmarks), showing performance characteristics of TorchRec\u2019s base modules and research models built out of TorchRec.\n\n### TwoTower Retrieval Example, with FAISS\n\nWe provide an [example](https://github.com/pytorch/torchrec/tree/main/examples/retrieval) demonstrating training a distributed TwoTower (i.e. User-Item) Retrieval model that is sharded using TorchRec. The projected item embeddings are added to an IVFPQ FAISS index for candidate generation. The retrieval model and KNN lookup are bundled in a Pytorch model for efficient end-to-end retrieval.\n\n### Integrations", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} +{"page_content": "### EmbeddingModule + DLRM benchmarks\n\nA set of [benchmarking tests](https://github.com/pytorch/torchrec/tree/main/benchmarks), showing performance characteristics of TorchRec\u2019s base modules and research models built out of TorchRec.\n\n### TwoTower Retrieval Example, with FAISS\n\nWe provide an [example](https://github.com/pytorch/torchrec/tree/main/examples/retrieval) demonstrating training a distributed TwoTower (i.e. User-Item) Retrieval model that is sharded using TorchRec. The projected item embeddings are added to an IVFPQ FAISS index for candidate generation. The retrieval model and KNN lookup are bundled in a Pytorch model for efficient end-to-end retrieval.\n\n### Integrations", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "### Integrations\n\nWe demonstrate that TorchRec works out of the box with many components commonly used alongside PyTorch models in production like systems, such as \n- [Training](https://github.com/pytorch/torchrec/tree/main/examples/ray) a TorchRec model on Ray Clusters utilizing the Torchx Ray scheduler\n- [Preprocessing](https://github.com/pytorch/torchrec/tree/main/torchrec/datasets/scripts/nvt) and DataLoading with NVTabular on DLRM\n- [Training](https://github.com/pytorch/torchrec/tree/main/examples/torcharrow) a TorchRec model with on-the-fly preprocessing with TorchArrow showcasing RecSys domain UDFs\n\n### Sequential Embeddings Example: Bert4Rec\n\nWe provide an [example](https://github.com/pytorch/torchrec/tree/main/examples/bert4rec), using TorchRec, that reimplements the [BERT4REC](https://arxiv.org/abs/1904.06690) paper, showcasing EmbeddingCollection for non-pooled embeddings. Using DistributedModelParallel we see a 35% QPS gain over conventional data parallelism.\n\n### (Beta) Planner", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "### (Beta) Planner\n\nThe TorchRec library includes a built-in [planner](https://pytorch.org/torchrec/torchrec.distributed.planner.html) that selects near optimal sharding plan for a given model. 
The planner attempts to identify the best sharding plan by evaluating a series of proposals which are statically analyzed and fed into an integer partitioner. The planner is able to automatically adjust plans for a wide range of hardware setups, allowing users to scale performance seamlessly from local development environment to large scale production hardware. See this [notebook](https://github.com/pytorch/torchrec/blob/main/torchrec/distributed/planner/Planner_Introduction.ipynb) for a more detailed tutorial.\n\n### (Beta) Inference", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "### (Beta) Inference\n\n[TorchRec Inference](https://github.com/pytorch/torchrec/tree/main/torchrec/inference) is a C++ library that supports multi-gpu inference. The TorchRec library is used to shard models written and packaged in Python via torch.package (an alternative to TorchScript). The torch.deploy library is used to serve inference from C++ by launching multiple Python interpreters carrying the packaged model, thus subverting the GIL. Two models are provided as examples: [DLRM multi-GPU](https://github.com/pytorch/torchrec/blob/main/examples/inference/dlrm_predict.py) (sharded via TorchRec) and [DLRM single-GPU](https://github.com/pytorch/torchrec/blob/main/examples/inference/dlrm_predict_single_gpu.py).\n\n### (Beta) RecMetrics", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} -{"page_content": "RecMetrics is a [metrics](https://github.com/pytorch/torchrec/tree/main/torchrec/metrics) library that collects common utilities and optimizations for Recommendation models. It extends [torchmetrics](https://torchmetrics.readthedocs.io/en/stable/).\n- A centralized metrics module that allows users to add new metrics\n- Commonly used metrics, including AUC, Calibration, CTR, MSE/RMSE, NE & Throughput\n- Optimization for metrics related operations to reduce the overhead of metric computation\n- Checkpointing\n\n### (Prototype) Single process Batched + Fused Embeddings", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} +{"page_content": "### (Beta) RecMetrics\n\nRecMetrics is a [metrics](https://github.com/pytorch/torchrec/tree/main/torchrec/metrics) library that collects common utilities and optimizations for Recommendation models. It extends [torchmetrics](https://torchmetrics.readthedocs.io/en/stable/).\n- A centralized metrics module that allows users to add new metrics\n- Commonly used metrics, including AUC, Calibration, CTR, MSE/RMSE, NE & Throughput\n- Optimization for metrics related operations to reduce the overhead of metric computation\n- Checkpointing\n\n### (Prototype) Single process Batched + Fused Embeddings", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "Previously TorchRec\u2019s abstractions (EmbeddingBagCollection/EmbeddingCollection) over FBGEMM kernels, which provide benefits such as table batching, optimizer fusion, and UVM placement, could only be used in conjunction with DistributedModelParallel. 
We\u2019ve decoupled these notions from sharding, and introduced the [FusedEmbeddingBagCollection](https://github.com/pytorch/torchrec/blob/eb1247d8a2d16edc4952e5c2617e69acfe5477a5/torchrec/modules/fused_embedding_modules.py#L271), which can be used as a standalone module, with all of the above features, and can also be sharded.\n\n## TorchX v0.2.0\n\nTorchX is a job launcher that makes it easier to run PyTorch in distributed training clusters with many scheduler integrations including Kubernetes and Slurm. We're excited to release TorchX 0.2.0 with a number of improvements. TorchX is currently being used in production in both on-premise and cloud environments.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "Check out the [quickstart](https://pytorch.org/torchx/main/quickstart.html) to start launching local and remote jobs.\n\n### Workspaces\n\nTorchX [now supports workspaces](https://pytorch.org/torchx/main/workspace.html) which allows users to easily launch training jobs using their local workspace. TorchX can automatically build a patch with your local training code on top of a base image to minimize iteration time and time to training.\n\n### .torchxconfig\n\nSpecifying options in [.torchxconfig](https://pytorch.org/torchx/latest/runner.config.html) saves you from having to type long CLI commands each time you launch a job. You can also define project level generic configs and drop a config file in your home directory for user-level overrides.\n\n### Expanded Scheduler Support\n\nTorchX now supports [AWS Batch](https://pytorch.org/torchx/main/schedulers/aws_batch.html) and [Ray (experimental)](https://pytorch.org/torchx/main/schedulers/ray.html) schedulers in addition to our existing integrations.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "### Distributed Training On All Schedulers\n\nThe TorchX dist.ddp component now works on all schedulers without any configuration. Distributed training workers will automatically discover each other when using [torchelastic](https://pytorch.org/docs/stable/distributed.elastic.html) via [the builtin dist.ddp component](https://pytorch.org/torchx/main/components/distributed.html).\n\n### Hyper Parameter Optimization\n\nTorchX [integrates with Ax](https://ax.dev/versions/latest/api/runners.html#module-ax.runners.torchx) to let you scale hyper-parameter optimizations (HPO) by launching the search trials onto remote clusters.\n\n### File and Device Mounts\n\nTorchX now supports [remote filesystem mounts and custom devices](https://pytorch.org/torchx/main/specs.html#mounts). This enables your PyTorch jobs to efficiently access cloud storage such as NFS or Lustre. The device mounts enables usage of network accelerators like Infiniband and custom inference/training accelerators.\n\n## FBGemm v0.2.0", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "## FBGemm v0.2.0\n\nThe FBGEMM library contains optimized kernels meant to improve the performance of PyTorch workloads. 
We\u2019ve added a number of new features and optimizations over the last few months that we are excited to report.\n\n### Inference Table Batched Embedding (TBE)\n\nThe [table batched embedding bag](https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/fbgemm_gpu/split_table_batched_embeddings_ops.py#L1541) (TBE) operator is an important base operation for embedding lookup for recommendation system inference on GPU. We added the following enhancements for performance and flexibility:\n\nAlignment restriction removed\n- Embedding dimension \\* data type size had to be multiple of 4B before and now, it is 1B.\n\nUnified Virtual Memory (UVM) caching kernel optimizations\n- UVM caching kernels now scale linearly with # of tables using UVM caching. Previously, it was having similar overhead as all tables using UVM caching\n- UVM caching kernel overhead is much smaller than before", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "### Inference FP8 Table Batched Embedding (TBE) \n\nThe [table batched embedding bag](https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/fbgemm_gpu/split_table_batched_embeddings_ops.py#L1541) (TBE) previously supported FP32, FP16, INT8, INT4, and INT2 embedding weight types. While these weight types work well in many models, we integrate FP8 weight types (in both GPU and CPU operations) to allow for numerical and performance evaluations of FP8 in our models. Compared to INT8, FP8 does not require the additional bias and scale storage and calculations. Additionally, the next generation of H100 GPUs has the FP8 support on Tensor Core (mainly matmul ops).\n\n### Jagged Tensor Kernels", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} -{"page_content": "We added optimized kernels to speed up [TorchRec JaggedTensor](https://pytorch.org/torchrec/torchrec.sparse.html). The purpose of JaggedTensor is to handle the case where one dimension of the input data is \u201cjagged\u201d, meaning that each consecutive row in a given dimension may be a different length, which is often the case with sparse feature inputs in recommendation systems. The internal representation is shown below:\n\n
", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} +{"page_content": "### Jagged Tensor Kernels\n\nWe added optimized kernels to speed up [TorchRec JaggedTensor](https://pytorch.org/torchrec/torchrec.sparse.html). The purpose of JaggedTensor is to handle the case where one dimension of the input data is \u201cjagged\u201d, meaning that each consecutive row in a given dimension may be a different length, which is often the case with sparse feature inputs in recommendation systems. The internal representation is shown below:\n\n
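The figure showing the internal representation did not survive extraction. As a hedged illustration of the layout it depicted, a jagged batch can be stored as a flat values tensor plus per-row lengths (or, equivalently, offsets); the snippet below is a plain PyTorch sketch of that idea, not the exact TorchRec class internals.

```python
import torch

# Jagged input: three "rows" of different lengths,
# conceptually [[1.0, 2.0], [3.0], [4.0, 5.0, 6.0]].
values = torch.tensor([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])       # flat, contiguous values
lengths = torch.tensor([2, 1, 3])                            # number of values per row
offsets = torch.cat([torch.tensor([0]), lengths.cumsum(0)])  # row boundaries: [0, 2, 3, 6]

# Recover row i without materializing a padded dense tensor.
row_1 = values[offsets[1]:offsets[2]]  # tensor([3.])
```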
", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "We added ops for [converting jagged tensors from sparse to dense formats](https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/src/jagged_tensor_ops_cpu.cpp#L982) [and back](https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/src/jagged_tensor_ops_cpu.cpp#L968), performing [matrix multiplications with jagged tensors](https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/src/jagged_tensor_ops_cpu.cpp#L996), and [elementwise ops](https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/src/jagged_tensor_ops_cpu.cpp#L995).\n \n### Optimized permute102-baddbmm-permute102", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "It is difficult to fuse various matrix multiplications where the batch size is not the batch size of the model, switching the batch dimension is a quick solution. We created the [permute102_baddbmm_permute102](https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/src/sparse_ops_cpu.cpp#L2401) operation that switches the first and the second dimension, performs the batched matrix multiplication and then switches back. Currently we only support forward pass with FP16 data type and will support FP32 type and backward pass in the future.\n\n### Optimized index_select for dim 0 index selection", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "index_select is normally used as part of a sparse operation. While PyTorch supports a generic index_select for an arbitrary-dimension index selection, its performance for a special case like the dim 0 index selection is suboptimal. For this reason, we implement a [specialized index_select for dim 0](https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/src/sparse_ops_cpu.cpp#L2421). In some cases, we have observed 1.4x performance gain from FBGEMM\u2019s index_select compared to the one from PyTorch (using uniform index distribution).\n\nMore about the implementation of influential instances can be found on our [GitHub](https://github.com/pytorch/captum/tree/master/captum/influence) page and [tutorials](https://captum.ai/tutorials/TracInCP_Tutorial).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}} @@ -450,17 +450,17 @@ {"page_content": "```python\na = np.array(((1, 2), (3, 4)), dtype=np.float32)\n\ninv_np = np.linalg.inv(a)\n\ndef inv_backward(result, grad):\n return -(result.transpose(-2, -1) @ (grad @ result.transpose(-2, -1)))\ngrad_np = inv_backward(inv_np, np.ones_like(inv_np))\n\nprint(grad_np)\n: [[-0.5 0.5]\n [ 0.5 -0.5]]\n```\n\nOf course, as programs become more complicated it\u2019s convenient to have builtin autograd support, and PyTorch\u2019s linear algebra module supports both real and complex autograd.\n\n# CUDA Support\n\nSupport for autograd and accelerators, like CUDA devices, is a core part of PyTorch. The ```torch.linalg``` module was developed with NVIDIA\u2019s PyTorch and cuSOLVER teams, who helped optimize its performance on CUDA devices with the cuSOLVER, cuBLAS, and MAGMA libraries. These improvements make PyTorch\u2019s CUDA linear algebra operations faster than ever. For example, let\u2019s look at the performance of PyTorch 1.9\u2019s ```torch.linalg.cholesky``` vs. 
PyTorch 1.8\u2019s (now deprecated) ```torch.cholesky```:", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}} {"page_content": "
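The performance charts embedded at this point are not reproduced here. As a rough, hedged sketch of how such a CUDA factorization benchmark can be timed (matrix size and iteration count are illustrative only):

```python
import time
import torch

n = 2048
a = torch.randn(n, n, device="cuda", dtype=torch.float64)
# Build a symmetric positive-definite matrix so the factorization is well defined.
spd = a @ a.transpose(-2, -1) + n * torch.eye(n, device="cuda", dtype=torch.float64)

torch.cuda.synchronize()
start = time.perf_counter()
for _ in range(10):
    torch.linalg.cholesky(spd)
torch.cuda.synchronize()
print(f"avg time per factorization: {(time.perf_counter() - start) / 10:.4f}s")
```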
\n\n(The above charts were created using an Ampere A100 GPU with CUDA 11.3, cuSOLVER 11.1.1.58, and MAGMA 2.5.2. Matrices are in double precision.)\n\nThese charts show that performance has increased significantly on larger matrices, and that batched performance is better across the board. Other linear algebra operations, including ```torch.linalg.qr``` and ```torch.linalg.lstsq```, have also had their CUDA performance improved.\n\n# Beyond NumPy", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}} {"page_content": "# Beyond NumPy\n\nIn addition to offering all the functions in NumPy\u2019s linear algebra module with support for autograd and accelerators, ```torch.linalg``` has a few new functions of its own. NumPy\u2019s ```linalg.norm``` does not allow users to compute vector norms over arbitrary subsets of dimensions, so to enable this functionality we added ```torch.linalg.vector_norm```. We\u2019ve also started modernizing other linear algebra functionality in PyTorch, so we created ```torch.linalg.householder_product``` to replace the older ```torch.orgqr```, and we plan to continue adding more linear algebra functionality in the future, too.\n\n# The Future of Linear Algebra in PyTorch", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}} -{"page_content": "The ```torch.linalg``` module is fast and familiar with great support for autograd and accelerators. It\u2019s already being used in libraries like [botorch](https://github.com/pytorch/botorch), too. But we\u2019re not stopping here. We plan to continue updating more of PyTorch\u2019s existing linear algebra functionality (like ```torch.lobpcg```) and offering more support for low rank and sparse linear algebra. We also want to hear your feedback on how we can improve, so start a conversation on the [forum](https://discuss.pytorch.org/) or file an issue on our [Github](https://github.com/pytorch/pytorch) and share your thoughts. \n\nWe look forward to hearing from you and seeing what the community does with PyTorch\u2019s new linear algebra functionality!", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}} +{"page_content": "# The Future of Linear Algebra in PyTorch\n\nThe ```torch.linalg``` module is fast and familiar with great support for autograd and accelerators. It\u2019s already being used in libraries like [botorch](https://github.com/pytorch/botorch), too. But we\u2019re not stopping here. We plan to continue updating more of PyTorch\u2019s existing linear algebra functionality (like ```torch.lobpcg```) and offering more support for low rank and sparse linear algebra. We also want to hear your feedback on how we can improve, so start a conversation on the [forum](https://discuss.pytorch.org/) or file an issue on our [Github](https://github.com/pytorch/pytorch) and share your thoughts. 
\n\nWe look forward to hearing from you and seeing what the community does with PyTorch\u2019s new linear algebra functionality!", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: \"Democratizing AI with PyTorch Foundation and ROCm\u2122 support for PyTorch\"\nauthor: AMD\n---\n\n![AMD Founding Member](/assets/images/2023-02-14-democratizing-ai-with-pytorch-1.png){:width=\"50%\" style=\"display:block; margin-left:auto; margin-right:auto\"}\n\nLast year, Meta announced that [PyTorch](https://pytorch.org/) joined the Linux Foundation as a neutral home for growing the machine learning project and community with AMD representation as a part of the founding membership and governing board.\n\n[PyTorch Foundation\u2019s](https://pytorch.org/foundation) mission is to drive AI adoption by democratizing its software ecosystem through open source principles aligning with the AMD core principle of an Open software ecosystem. AMD strives to foster innovation through the support for latest generations of hardware, tools, libraries, and other components to simplify and accelerate adoption of AI across a broad range of scientific discoveries.", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "
\nAMD, along with key PyTorch codebase developers (including those at Meta AI), delivered a set of updates to the ROCm\u2122 open software ecosystem that brings stable support for AMD Instinct\u2122 accelerators as well as many Radeon\u2122 GPUs. This now gives PyTorch developers the ability to build their next great AI solutions leveraging AMD GPU accelerators and ROCm. The support from the PyTorch community in identifying gaps, prioritizing key updates, providing feedback on performance optimization, and supporting our journey from \u201cBeta\u201d to \u201cStable\u201d was immensely helpful, and we deeply appreciate the strong collaboration between the two teams at AMD and PyTorch. The move of ROCm support from \u201cBeta\u201d to \u201cStable\u201d in the PyTorch 1.12 release (June 2022) added the ability to easily run PyTorch in a native environment without having to configure custom Docker images. This is a sign of confidence in the quality of support and performance of PyTorch on AMD Instinct and ROCm. The results of these collaborative efforts are evident in the performance measured on key industry benchmarks like Microsoft\u2019s SuperBench, shown below in Graph 1.\n
\n\u201cWe are excited to see the significant impact of developers at AMD to contribute to and extend features within PyTorch to make AI models run in a more performant, efficient, and scalable way. A great example of this is the thought-leadership around unified memory approaches between the framework and future hardware systems, and we look forward to seeing that feature progress.\u201d
\n- Soumith Chintala, PyTorch lead-maintainer and Director of Engineering, Meta AI\n
", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "The progressive improvements on both the AMD CDNA\u2122 architecture as well as ROCm and PyTorch shows single GPU model throughput increase from AMD Instinct MI100 to the latest generation AMD Instinct MI200 family GPUs going from ROCm 4.2 to ROCm 5.3 and from PyTorch 1.7 to PyTorch 1.12.\n\n![Graph 1: ML model performance over generation using Microsoft Superbench Suite](/assets/images/2023-02-14-democratizing-ai-with-pytorch-2.png){:width=\"100%\"}\n\nGraph 1: ML model performance over generation using Microsoft Superbench Suite 1, 2, 3\n\n\nBelow are a few of the key updates for ROCm support since the PyTorch 1.12 release\n\n \n\n## Full Continuous Integration (CI) for ROCm on PyTorch", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "With the ROCm support for PyTorch move from \u201cBeta\u201d to \u201cStable,\u201d all the functions and features commits are now verified through a full Continuous Integration (CI) process. The CI process helps ensure the proper build and test process ahead of an expected Docker and PIP wheel release with stable commits forthcoming.\n\n\n## Support for [Kineto Profiler](https://github.com/pytorch/kineto)\n\nThe addition of Kineto profiler support to ROCm now helps developers and users understand performance bottlenecks through effective diagnosis and profiling tools. The tool also provides recommendations to improve known issues and visualization through TensorBoard UI.\n\n## Key PyTorch Libraries support added", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}} -{"page_content": "PyTorch ecosystem libraries like [TorchText](https://pytorch.org/tutorials/beginner/text_sentiment_ngrams_tutorial.html) (Text classification), [TorchRec](https://pytorch.org/torchrec/) (libraries for recommender systems - RecSys), [TorchVision](https://pytorch.org/vision/stable/index.html) (Computer Vision), [TorchAudio](https://pytorch.org/audio/stable/index.html) (audio and signal processing) are fully supported since ROCm 5.1 and upstreamed with PyTorch 1.12.\n\nKey libraries provided with the ROCm software stack including [MIOpen](https://github.com/ROCmSoftwarePlatform/MIOpen) (Convolution models), [RCCL](https://github.com/ROCmSoftwarePlatform/rccl) (ROCm Collective Communications) and [rocBLAS](https://github.com/ROCmSoftwarePlatform/rocBLAS) (BLAS for transformers) were further optimized to offer new potential efficiencies and higher performance.", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}} +{"page_content": "## Key PyTorch Libraries support added\n\nPyTorch ecosystem libraries like [TorchText](https://pytorch.org/tutorials/beginner/text_sentiment_ngrams_tutorial.html) (Text classification), [TorchRec](https://pytorch.org/torchrec/) (libraries for recommender systems - RecSys), [TorchVision](https://pytorch.org/vision/stable/index.html) (Computer Vision), [TorchAudio](https://pytorch.org/audio/stable/index.html) (audio and signal processing) are fully supported since ROCm 5.1 and upstreamed with PyTorch 1.12.\n\nKey libraries provided with the ROCm software stack including [MIOpen](https://github.com/ROCmSoftwarePlatform/MIOpen) (Convolution models), [RCCL](https://github.com/ROCmSoftwarePlatform/rccl) (ROCm Collective 
Communications) and [rocBLAS](https://github.com/ROCmSoftwarePlatform/rocBLAS) (BLAS for transformers) were further optimized to offer new potential efficiencies and higher performance.", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "MIOpen innovates on several fronts, such as implementing fusion to optimize for memory bandwidth and GPU launch overheads, providing an auto-tuning infrastructure to overcome the large design space of problem configurations, and implementing different algorithms to optimize convolutions for different filter and input sizes. MIOpen is one of the first libraries to publicly support the bfloat16 data-type for convolutions, allowing efficient training at lower precision maintaining expected accuracy.", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "RCCL (pronounced \"Rickle\") is a stand-alone library of standard collective communication routines for GPUs, implementing all-reduce, all-gather, reduce, broadcast, reduce-scatter, gather, scatter, and all-to-all. There is support for direct GPU-to-GPU send and receive operations. It has been optimized to achieve high bandwidth on platforms using PCIe\u00ae, Infinity Fabric\u2122 (GPU to GPU) as well as networking using InfiniBand Verbs or TCP/IP sockets. RCCL supports an arbitrary number of GPUs installed in single or multiple nodes and can be used in either single- or multi-process (e.g., MPI) applications.", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "Along with the above key highlights, over 50 features and functionality improvements were completed jointly between AMD and PyTorch to add stable support for ROCm. These include improvements to tools, compilers, runtime, graph optimizations through TorchScript, INT8 quant path usage, and [ONNX runtime integration](https://onnxruntime.ai/) including support for Navi 21 based Radeon\u2122 PRO datacenter graphics card to name a few.\n\n## [AITemplate](https://github.com/facebookincubator/AITemplate) Inference Engine", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "MetaAI recently published a blog announcing the release of its open source AITemplate ([link](https://ai.facebook.com/blog/gpu-inference-engine-nvidia-amd-open-source/)) for a unified inference system supporting AMD Instinct GPU accelerators using the AMD ROCm stack. This Python based framework can help significantly improve performance through increased utilization of AMD matrix cores for transformer blocks. This is achieved through the AMD [Composable Kernel (CK) library](https://github.com/ROCmSoftwarePlatform/composable_kernel) which provides performance critical Kernels for ML AI workloads across multiple architectures including GPUs and CPUs through HIP & C++.\n\nMoreover, the AITemplate also provides out-of-the-box support for widely used AI models like BERT, ResNET, Vision Transformer, Stable Diffusion etc. 
simplifying deployment process through these pretrained models.\n\n \n## What\u2019s coming with future ROCm releases?\n\n### Unified memory models for CPU + GPU", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}} -{"page_content": "As system architecture evolves to address the complexity of large problem sizes and data sets, memory management becomes a key performance bottle neck that needs a cohesive strategy to be addressed through innovations at both hardware and software levels. AMD is uniquely positioned to address this problem with its effective data center solutions integrating AMD EPYC\u2122 CPU cores with its AMD Instinct GPU compute units in a truly unified datacenter APU (Accelerated Processing Unit) form factor set to be launched in 2H 2023.\n\nThe software work to leverage the unified CPU + GPU memory has already started in collaboration with the PyTorch team, to enable the usage of a fast, low latency, synchronized memory model that enables not only AMD but also other AI accelerators to address the complex memory management problem of today. We are looking forward to this joint effort and announcement soon.\n\n## Acknowledgement", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}} +{"page_content": "### Unified memory models for CPU + GPU\n\n \n\nAs system architecture evolves to address the complexity of large problem sizes and data sets, memory management becomes a key performance bottle neck that needs a cohesive strategy to be addressed through innovations at both hardware and software levels. AMD is uniquely positioned to address this problem with its effective data center solutions integrating AMD EPYC\u2122 CPU cores with its AMD Instinct GPU compute units in a truly unified datacenter APU (Accelerated Processing Unit) form factor set to be launched in 2H 2023.\n\nThe software work to leverage the unified CPU + GPU memory has already started in collaboration with the PyTorch team, to enable the usage of a fast, low latency, synchronized memory model that enables not only AMD but also other AI accelerators to address the complex memory management problem of today. We are looking forward to this joint effort and announcement soon.\n\n## Acknowledgement", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "## Acknowledgement\n\nThe content in this blog highlights the joint work between AMD and key PyTorch contributors including Meta, working on many of the core features, as well as Microsoft enabling ONNX Runtime support. We are looking forward to working with the other founding members at the PyTorch Foundation on the next steps and improvements to democratize and grow adoption of PyTorch across the industry.\n\n## CAUTIONARY STATEMENT", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "\nThis blog contains forward-looking statements concerning Advanced Micro Devices, Inc. (AMD) such as the availability, timing and expected benefits of an AMD datacenter APU form factor, which are made pursuant to the Safe Harbor provisions of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are commonly identified by words such as \"would,\" \"may,\" \"expects,\" \"believes,\" \"plans,\" \"intends,\" \"projects\" and other terms with similar meaning. 
Investors are cautioned that the forward-looking statements in this blog are based on current beliefs, assumptions and expectations, speak only as of the date of this blog and involve risks and uncertainties that could cause actual results to differ materially from current expectations. Such statements are subject to certain known and unknown risks and uncertainties, many of which are difficult to predict and generally beyond AMD's control, that could cause actual results and other future events to differ materially from those expressed in, or implied or projected by, the forward-looking information and statements. Investors are urged to review in detail the risks and uncertainties in AMD\u2019s Securities and Exchange Commission filings, including but not limited to AMD\u2019s most recent reports on Forms 10-K and 10-Q. AMD does not assume, and hereby disclaims, any obligation to update forward-looking statements made in this blog, except as may be required by law. \n", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "## Endnotes", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}} @@ -472,35 +472,36 @@ {"page_content": "#### Auto Wrapping\n\nModel layers should be wrapped in FSDP in a nested way to save peak memory and enable communication and computation overlapping. The simplest way to do it is auto wrapping, which can serve as a drop-in replacement for DDP without changing the rest of the code.\n\nfsdp_auto_wrap_policy argument allows specifying a callable function to recursively wrap layers with FSDP. default_auto_wrap_policy function provided by the PyTorch FSDP recursively wraps layers with the number of parameters larger than 100M. You can supply your own wrapping policy as needed. The example of writing a customized wrapping policy is shown in the [FSDP API doc](https://pytorch.org/docs/stable/fsdp.html).\n\nIn addition, cpu_offload could be configured optionally to offload wrapped parameters to CPUs when these parameters are not used in computation. This can further improve memory efficiency at the cost of data transfer overhead between host and device.\n\nThe example below shows how FSDP is wrapped using auto wrapping.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}} {"page_content": "```python\nfrom torch.distributed.fsdp import (\n FullyShardedDataParallel,\n CPUOffload,\n)\nfrom torch.distributed.fsdp.wrap import (\n default_auto_wrap_policy,\n)\nimport torch.nn as nn\n \nclass model(nn.Module):\n def __init__(self):\n super().__init__()\n self.layer1 = nn.Linear(8, 4)\n self.layer2 = nn.Linear(4, 16)\n self.layer3 = nn.Linear(16, 4)\n \nmodel = DistributedDataParallel(model())\nfsdp_model = FullyShardedDataParallel(\n model(),\n fsdp_auto_wrap_policy=default_auto_wrap_policy,\n cpu_offload=CPUOffload(offload_params=True),\n)\n```\n\n#### Manual Wrapping\n\nManual wrapping can be useful to explore complex sharding strategies by applying `wrap` selectively to some parts of the model. 
Overall settings can be passed to the enable_wrap() context manager.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}} {"page_content": "```python\nfrom torch.distributed.fsdp import (\n FullyShardedDataParallel,\n CPUOffload,\n)\nfrom torch.distributed.fsdp.wrap import (\n enable_wrap,\n wrap,\n)\nimport torch.nn as nn\nfrom typing import Dict\n \n \nclass model(nn.Module):\n def __init__(self):\n super().__init__()\n self.layer1 = wrap(nn.Linear(8, 4))\n self.layer2 = nn.Linear(4, 16)\n self.layer3 = wrap(nn.Linear(16, 4))\n \nwrapper_kwargs = Dict(cpu_offload=CPUOffload(offload_params=True))\nwith enable_wrap(wrapper_cls=FullyShardedDataParallel, **wrapper_kwargs):\n fsdp_model = wrap(model())\n```\n\nAfter wrapping the model with FSDP using one of the two above approaches, the model can be trained in a similar way as local training, like this:\n\n```python\noptim = torch.optim.Adam(fsdp_model.parameters(), lr=0.0001)\nfor sample, label in next_batch():\n out = fsdp_model(input)\n loss = criterion(out, label)\n loss.backward()\n optim.step()\n```\n\n### Benchmark Results", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}} -{"page_content": "We ran extensive scaling tests for 175B and 1T GPT models on AWS clusters using PyTorch FSDP. Each cluster node is an instance with 8 [NVIDIA A100-SXM4-40GB](https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/a100/pdf/nvidia-a100-datasheet-us-nvidia-1758950-r4-web.pdf) GPUs, and inter-nodes are connected via AWS Elastic Fabric Adapter (EFA) with 400 Gbps network bandwidth.\n\nGPT models are implemented using [minGPT](https://github.com/karpathy/minGPT). A randomly generated input dataset is used for benchmarking purposes. All experiments ran with 50K vocabulary size, fp16 precision and [SGD](https://pytorch.org/docs/stable/generated/torch.optim.SGD.html) optimizer.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}} +{"page_content": "### Benchmark Results\n\nWe ran extensive scaling tests for 175B and 1T GPT models on AWS clusters using PyTorch FSDP. Each cluster node is an instance with 8 [NVIDIA A100-SXM4-40GB](https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/a100/pdf/nvidia-a100-datasheet-us-nvidia-1758950-r4-web.pdf) GPUs, and inter-nodes are connected via AWS Elastic Fabric Adapter (EFA) with 400 Gbps network bandwidth.\n\nGPT models are implemented using [minGPT](https://github.com/karpathy/minGPT). A randomly generated input dataset is used for benchmarking purposes. 
All experiments ran with 50K vocabulary size, fp16 precision and [SGD](https://pytorch.org/docs/stable/generated/torch.optim.SGD.html) optimizer.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}} {"page_content": "| Model | Number of layers | Hidden size | Attention heads | Model size, billions of parameters |\n|----------|------------------|-------------|-----------------|------------------------------------|\n| GPT 175B | 96 | 12288 | 96 | 175 |\n| GPT 1T | 128 | 25600 | 160 | 1008 |\n\nIn addition to using FSDP with parameters CPU offloading in the experiments, the [activation checkpointing feature](https://pytorch.org/docs/stable/checkpoint.html) in PyTorch is also applied in the tests.\n\nThe maximum per-GPU throughput of 159 teraFLOP/s (51% of NVIDIA A100 peak theoretical performance 312 teraFLOP/s/GPU) is achieved with batch size 20 and sequence length 512 on 128 GPUs for the GPT 175B model; further increase of the number of GPUs leads to per-GPU throughput degradation because of growing communication between the nodes.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}} {"page_content": "For the GPT 1T model, the maximum per-GPU throughput of 84 teraFLOP/s (27% of the peak teraFLOP/s) is achieved with batch size 4 and sequence length 2048 on 128 GPUs. However, further increase of the number of GPUs doesn\u2019t affect the per-GPU throughput too much because we observed that the largest bottleneck in the 1T model training is not from communication but from the slow CUDA cache allocator when peak GPU memory is reaching the limit. The use of A100 80G GPUs with larger memory capacity will mostly resolve this issue and also help scale the batch size to achieve much larger throughput.\n\n
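The throughput charts originally shown here are not reproduced. For reference, the utilization percentages quoted above follow directly from the stated A100 peak; a short hedged check using only the numbers from the text:

```python
a100_peak_tflops = 312  # per-GPU peak quoted in the text above
print(159 / a100_peak_tflops)  # ~0.51 -> 51% for the GPT 175B run
print(84 / a100_peak_tflops)   # ~0.27 -> 27% for the GPT 1T run
```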
\n\n### Future Work", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}} {"page_content": "### Future Work\n\nIn the next beta release, we are planning to add efficient distributed model/states checkpointing APIs, meta device support for large model materialization, and mixed-precision support inside FSDP computation and communication. We\u2019re also going to make it easier to switch between [DDP](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html), [ZeRO1, ZeRO2](https://arxiv.org/abs/1910.02054) and FSDP flavors of data parallelism in the new API. To further improve FSDP performance, memory fragmentation reduction and communication efficiency improvements are also planned.\n\n### A Bit of History of 2 Versions of FSDP", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}} -{"page_content": "[FairScale FSDP](https://engineering.fb.com/2021/07/15/open-source/fsdp/) was released in early 2021 as part of the FairScale library. And then we started the effort to upstream FairScale FSDP to PyTorch in PT 1.11, making it production-ready. We have selectively upstreamed and refactored key features from FairScale FSDP, redesigned user interfaces and made performance improvements.\n\nIn the near future, FairScale FSDP will stay in the FairScale repository for research projects, while generic and widely adopted features will be upstreamed to PyTorch incrementally and hardened accordingly.\n\nMeanwhile, PyTorch FSDP will focus more on production readiness and long-term support. This includes better integration with ecosystems and improvements on performance, usability, reliability, debuggability and composability.\n\n### Acknowledgments", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}} +{"page_content": "### A Bit of History of 2 Versions of FSDP\n\n[FairScale FSDP](https://engineering.fb.com/2021/07/15/open-source/fsdp/) was released in early 2021 as part of the FairScale library. And then we started the effort to upstream FairScale FSDP to PyTorch in PT 1.11, making it production-ready. We have selectively upstreamed and refactored key features from FairScale FSDP, redesigned user interfaces and made performance improvements.\n\nIn the near future, FairScale FSDP will stay in the FairScale repository for research projects, while generic and widely adopted features will be upstreamed to PyTorch incrementally and hardened accordingly.\n\nMeanwhile, PyTorch FSDP will focus more on production readiness and long-term support. This includes better integration with ecosystems and improvements on performance, usability, reliability, debuggability and composability.\n\n### Acknowledgments", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}} {"page_content": "### Acknowledgments\n\nWe would like to thank the authors of FairScale FSDP: Myle Ott, Sam Shleifer, Min Xu, Priya Goyal, Quentin Duval, Vittorio Caggiano, Tingting Markstrum, Anjali Sridhar. Thanks to the Microsoft DeepSpeed ZeRO team for developing and popularizing sharded data parallel techniques. Thanks to Pavel Belevich, Jessica Choi, Sisil Mehta for running experiments using PyTorch FSDP on different clusters. 
Thanks to Geeta Chauhan, Mahesh Yadav, Pritam Damania, Dmytro Dzhulgakov for supporting this effort and insightful discussions.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch library updates including new model serving library '\nauthor: Team PyTorch\n---\n\n\nAlong with the PyTorch 1.5 release, we are announcing new libraries for high-performance PyTorch model serving and tight integration with TorchElastic and Kubernetes. Additionally, we are releasing updated packages for torch_xla (Google Cloud TPUs), torchaudio, torchvision, and torchtext. All of these new libraries and enhanced capabilities are available today and accompany all of the core features [released in PyTorch 1.5](https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis). \n\n## TorchServe (Experimental)", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}} -{"page_content": "TorchServe is a flexible and easy to use library for serving PyTorch models in production performantly at scale. It is cloud and environment agnostic and supports features such as multi-model serving, logging, metrics, and the creation of RESTful endpoints for application integration. TorchServe was jointly developed by engineers from Facebook and AWS with feedback and engagement from the broader PyTorch community. The experimental release of TorchServe is available today. Some of the highlights include:", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}} +{"page_content": "## TorchServe (Experimental)\n\nTorchServe is a flexible and easy to use library for serving PyTorch models in production performantly at scale. It is cloud and environment agnostic and supports features such as multi-model serving, logging, metrics, and the creation of RESTful endpoints for application integration. TorchServe was jointly developed by engineers from Facebook and AWS with feedback and engagement from the broader PyTorch community. The experimental release of TorchServe is available today. Some of the highlights include:", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}} {"page_content": "* Support for both Python-based and TorchScript-based models\n* Default handlers for common use cases (e.g., image segmentation, text classification) as well as the ability to write custom handlers for other use cases\n* Model versioning, the ability to run multiple versions of a model at the same time, and the ability to roll back to an earlier version\n* The ability to package a model, learning weights, and supporting files (e.g., class mappings, vocabularies) into a single, persistent artifact (a.k.a. 
the \u201cmodel archive\u201d)\n* Robust management capability, allowing full configuration of models, versions, and individual worker threads via command line, config file, or run-time API\n* Automatic batching of individual inferences across HTTP requests\n* Logging including common metrics, and the ability to incorporate custom metrics\n* Ready-made Dockerfile for easy deployment\n* HTTPS support for secure deployment", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}} {"page_content": "To learn more about the APIs and the design of this feature, see the links below:\n* See for a full multi-node deployment reference architecture.\n* The full documentation can be found [here](https://pytorch.org/serve).\n\n## TorchElastic integration with Kubernetes (Experimental)", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}} {"page_content": "[TorchElastic](https://github.com/pytorch/elastic) is a proven library for training large scale deep neural networks at scale within companies like Facebook, where having the ability to dynamically adapt to server availability and scale as new compute resources come online is critical. Kubernetes enables customers using machine learning frameworks like PyTorch to run training jobs distributed across fleets of powerful GPU instances like the Amazon EC2 P3. Distributed training jobs, however, are not fault-tolerant, and a job cannot continue if a node failure or reclamation interrupts training. Further, jobs cannot start without acquiring all required resources, or scale up and down without being restarted. This lack of resiliency and flexibility results in increased training time and costs from idle resources. TorchElastic addresses these limitations by enabling distributed training jobs to be executed in a fault-tolerant and elastic manner. Until today, Kubernetes users needed to manage Pods and Services required for TorchElastic training jobs manually.", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}} {"page_content": "Through the joint collaboration of engineers at Facebook and AWS, TorchElastic, adding elasticity and fault tolerance, is now supported using vanilla Kubernetes and through the managed EKS service from AWS.\n\nTo learn more see the [TorchElastic repo](http://pytorch.org/elastic/0.2.0rc0/kubernetes.html) for the controller implementation and docs on how to use it.\n\n## torch_xla 1.5 now available", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}} -{"page_content": "[torch_xla](http://pytorch.org/xla/) is a Python package that uses the [XLA linear algebra compiler](https://www.tensorflow.org/xla) to accelerate the [PyTorch deep learning framework](https://pytorch.org/) on [Cloud TPUs](https://cloud.google.com/tpu/) and [Cloud TPU Pods](https://cloud.google.com/tpu/docs/tutorials/pytorch-pod). torch_xla aims to give PyTorch users the ability to do everything they can do on GPUs on Cloud TPUs as well while minimizing changes to the user experience. The project began with a conversation at NeurIPS 2017 and gathered momentum in 2018 when teams from Facebook and Google came together to create a proof of concept. We announced this collaboration at PTDC 2018 and made the PyTorch/XLA integration broadly available at PTDC 2019. 
The project already has 28 contributors, nearly 2k commits, and a repo that has been forked more than 100 times.", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}} +{"page_content": "## torch_xla 1.5 now available\n\n[torch_xla](http://pytorch.org/xla/) is a Python package that uses the [XLA linear algebra compiler](https://www.tensorflow.org/xla) to accelerate the [PyTorch deep learning framework](https://pytorch.org/) on [Cloud TPUs](https://cloud.google.com/tpu/) and [Cloud TPU Pods](https://cloud.google.com/tpu/docs/tutorials/pytorch-pod). torch_xla aims to give PyTorch users the ability to do everything they can do on GPUs on Cloud TPUs as well while minimizing changes to the user experience. The project began with a conversation at NeurIPS 2017 and gathered momentum in 2018 when teams from Facebook and Google came together to create a proof of concept. We announced this collaboration at PTDC 2018 and made the PyTorch/XLA integration broadly available at PTDC 2019. The project already has 28 contributors, nearly 2k commits, and a repo that has been forked more than 100 times.", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}} {"page_content": "This release of [torch_xla](http://pytorch.org/xla/) is aligned and tested with PyTorch 1.5 to reduce friction for developers and to provide a stable and mature PyTorch/XLA stack for training models using Cloud TPU hardware. You can [try it for free](https://medium.com/pytorch/get-started-with-pytorch-cloud-tpus-and-colab-a24757b8f7fc) in your browser on an 8-core Cloud TPU device with [Google Colab](https://colab.research.google.com/), and you can use it at a much larger scaleon [Google Cloud](https://cloud.google.com/gcp).\n\nSee the full torch_xla release notes [here](https://github.com/pytorch/xla/releases). Full docs and tutorials can be found [here](https://pytorch.org/xla/) and [here](https://cloud.google.com/tpu/docs/tutorials).\n\n## PyTorch Domain Libraries", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}} -{"page_content": "torchaudio, torchvision, and torchtext complement PyTorch with common datasets, models, and transforms in each domain area. We\u2019re excited to share new releases for all three domain libraries alongside PyTorch 1.5 and the rest of the library updates. For this release, all three domain libraries are removing support for Python2 and will support Python3 only.\n\n### torchaudio 0.5\nThe torchaudio 0.5 release includes new transforms, functionals, and datasets. Highlights for the release include:\n\n* Added the Griffin-Lim functional and transform, `InverseMelScale` and `Vol` transforms, and `DB_to_amplitude`. \n* Added support for `allpass`, `fade`, `bandpass`, `bandreject`, `band`, `treble`, `deemph`, and `riaa` filters and transformations.\n* New datasets added including `LJSpeech` and `SpeechCommands` datasets. \n\nSee the release full notes [here](https://github.com/pytorch/audio/releases) and full docs can be found [here](https://pytorch.org/audio/).", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}} +{"page_content": "## PyTorch Domain Libraries\n\ntorchaudio, torchvision, and torchtext complement PyTorch with common datasets, models, and transforms in each domain area. 
We\u2019re excited to share new releases for all three domain libraries alongside PyTorch 1.5 and the rest of the library updates. For this release, all three domain libraries are removing support for Python2 and will support Python3 only.\n\n### torchaudio 0.5\nThe torchaudio 0.5 release includes new transforms, functionals, and datasets. Highlights for the release include:\n\n* Added the Griffin-Lim functional and transform, `InverseMelScale` and `Vol` transforms, and `DB_to_amplitude`. \n* Added support for `allpass`, `fade`, `bandpass`, `bandreject`, `band`, `treble`, `deemph`, and `riaa` filters and transformations.\n* New datasets added including `LJSpeech` and `SpeechCommands` datasets. \n\nSee the release full notes [here](https://github.com/pytorch/audio/releases) and full docs can be found [here](https://pytorch.org/audio/).", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}} {"page_content": "### torchvision 0.6\nThe torchvision 0.6 release includes updates to datasets, models and a significant number of bug fixes. Highlights include:\n\n* Faster R-CNN now supports negative samples which allows the feeding of images without annotations at training time.\n* Added `aligned` flag to `RoIAlign` to match Detectron2. \n* Refactored abstractions for C++ video decoder\n\nSee the release full notes [here](https://github.com/pytorch/vision/releases) and full docs can be found [here](https://pytorch.org/docs/stable/torchvision/index.html).\n\n### torchtext 0.6\nThe torchtext 0.6 release includes a number of bug fixes and improvements to documentation. Based on user's feedback, dataset abstractions are currently being redesigned also. Highlights for the release include:", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}} {"page_content": "* Fixed an issue related to the SentencePiece dependency in conda package.\n* Added support for the experimental IMDB dataset to allow a custom vocab.\n* A number of documentation updates including adding a code of conduct and a deduplication of the docs on the torchtext site. \n\nYour feedback and discussions on the experimental datasets API are welcomed. You can send them to [issue #664](https://github.com/pytorch/text/issues/664). We would also like to highlight the pull request [here](https://github.com/pytorch/text/pull/701) where the latest dataset abstraction is applied to the text classification datasets. The feedback can be beneficial to finalizing this abstraction. \n\nSee the release full notes [here](https://github.com/pytorch/text/releases) and full docs can be found [here](https://pytorch.org/text/).\n\n\n*We\u2019d like to thank the entire PyTorch team, the Amazon team and the community for all their contributions to this work.*\n\nCheers!\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: \"Understanding LazyTensor System Performance with PyTorch/XLA on Cloud TPU\"\nauthor: Vaibhav Singh\nfeatured-img: \"\"\n---\n\n## Introduction\n\nEase of use, expressivity, and debuggability are among the core principles of PyTorch. One of the key drivers for the ease of use is that PyTorch execution is by default \u201ceager, i.e. op by op execution preserves the imperative nature of the program. 
However, eager execution does not offer the compiler based optimization, for example, the optimizations when the computation can be expressed as a graph.\n\nLazyTensor [[1]], first introduced with PyTorch/XLA, helps combine these seemingly disparate approaches. While PyTorch eager execution is widely used, intuitive, and well understood, lazy execution is not as prevalent yet.", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}} {"page_content": "In this post we will explore some of the basic concepts of the LazyTensor System with the goal of applying these concepts to understand and debug performance of LazyTensor based implementations in PyTorch. Although we will use PyTorch/XLA on Cloud TPU as the vehicle for exploring these concepts, we hope that these ideas will be useful to understand other system(s) built on LazyTensors.\n\n## LazyTensor\n\nAny operation performed on a PyTorch tensor is by default dispatched as a kernel or a composition of kernels to the underlying hardware. These kernels are executed asynchronously on the underlying hardware. The program execution is not blocked until the value of a tensor is fetched. This approach scales extremely well with massively parallel programmed hardware such as GPUs.", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}} {"page_content": "The starting point of a LazyTensor system is a custom tensor type. In PyTorch/XLA, this type is called XLA tensor. In contrast to PyTorch\u2019s native tensor type, operations performed on XLA tensors are recorded into an IR graph. Let\u2019s examine an example that sums the product of two tensors:\n\n```python\nimport torch\nimport torch_xla\nimport torch_xla.core.xla_model as xm\n\ndev = xm.xla_device()\n\nx1 = torch.rand((3, 3)).to(dev)\nx2 = torch.rand((3, 8)).to(dev)\n\ny1 = torch.einsum('bs,st->bt', x1, x2)\nprint(torch_xla._XLAC._get_xla_tensors_text([y1]))\n```\n\nYou can execute [this](https://github.com/ultrons/xla/blob/lazy-tensor-post/contrib/colab/LazyTensor_Basics.ipynb) colab notebook to examine the resulting graph for y1. Notice that no computation has been performed yet.\n\n```python\ny1 = y1 + x2\nprint(torch_xla._XLAC._get_xla_tensors_text([y1]))\n```", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}} {"page_content": "The operations will continue until PyTorch/XLA encounters a barrier. This barrier can either be a [mark step()](https://github.com/pytorch/xla/blob/ff079bb48744e5aa6696201ccf34057f15fc7cac/torch_xla/core/xla_model.py#L751) api call or any other event which forces the execution of the graph recorded so far.\n\n```python\nxm.mark_step()\nprint(torch_xla._XLAC._get_xla_tensors_text([y1]))\n```\n\nOnce the mark_step() is called, the graph is compiled and then executed on TPU, i.e. the tensors have been materialized. Therefore, the graph is now reduced to a single line y1 tensor which holds the result of the computation.\n\n### Compile Once, Execute Often", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}} -{"page_content": "XLA compilation passes offer optimizations (e.g. 
op-fusion, which reduces HBM pressure by using scratch-pad memory for multiple ops, [ref](https://arxiv.org/pdf/2004.13336.pdf) ) and leverages lower level XLA infrastructure to optimally use the underlying hardware. However, there is one caveat, compilation passes are expensive, i.e. can add to the training step time. Therefore, this approach scales well if and only if we can **compile once and execute often** (compilation cache helps, such that the same graph is not compiled more than once).\n\nIn the following example, we create a small computation graph and time the execution:\n\n```python\ny1 = torch.rand((3, 8)).to(dev)\ndef dummy_step() :\n y1 = torch.einsum('bs,st->bt', y1, x)\n xm.mark_step()\n return y1\n```\n\n```python\n%timeit dummy_step\n```\n\n```python\nThe slowest run took 29.74 times longer than the fastest. This could mean that an intermediate result is being cached.\n10000000 loops, best of 5: 34.2 ns per loop\n```", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}} +{"page_content": "### Compile Once, Execute Often\n\nXLA compilation passes offer optimizations (e.g. op-fusion, which reduces HBM pressure by using scratch-pad memory for multiple ops, [ref](https://arxiv.org/pdf/2004.13336.pdf) ) and leverages lower level XLA infrastructure to optimally use the underlying hardware. However, there is one caveat, compilation passes are expensive, i.e. can add to the training step time. Therefore, this approach scales well if and only if we can **compile once and execute often** (compilation cache helps, such that the same graph is not compiled more than once).\n\nIn the following example, we create a small computation graph and time the execution:\n\n```python\ny1 = torch.rand((3, 8)).to(dev)\ndef dummy_step() :\n y1 = torch.einsum('bs,st->bt', y1, x)\n xm.mark_step()\n return y1\n```\n\n```python\n%timeit dummy_step\n```\n\n```python\nThe slowest run took 29.74 times longer than the fastest. This could mean that an intermediate result is being cached.\n10000000 loops, best of 5: 34.2 ns per loop\n```", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}} {"page_content": "You notice that the slowest step is quite longer than the fastest. This is because of the graph compilation overhead which is incurred only once for a given shape of graph, input shape, and output shape. Subsequent steps are faster because no graph compilation is necessary.\n\nThis also implies that we expect to see performance cliffs when the \u201ccompile once and execute often\u201d assumption breaks. Understanding when this assumption breaks is the key to understanding and optimizing the performance of a LazyTensor system. Let\u2019s examine what triggers the compilation.\n\n### Graph Compilation and Execution and LazyTensor Barrier", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}} {"page_content": "We saw that the computation graph is compiled and executed when a LazyTensor barrier is encountered. There are three scenarios when the LazyTensor barrier is automatically or manually introduced. The first is the explicit call of mark_step() api as shown in the preceding example. 
mark_step() is also called implicitly at every step when you wrap your dataloader with MpDeviceLoader (highly recommended to overlap compute and data upload to TPU device). The [Optimizer step](https://github.com/pytorch/xla/blob/master/torch_xla/core/xla_model.py#L804) method of xla_model also allows to implicitly call mark_step (when you set barrier=True).", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}} {"page_content": "The second scenario where a barrier is introduced is when PyTorch/XLA finds an op with no mapping (lowering) to equivalent XLA HLO ops. PyTorch has [2000+](https://dev-discuss.pytorch.org/t/where-do-the-2000-pytorch-operators-come-from-more-than-you-wanted-to-know/373) operations. Although most of these operations are composite (i.e. can be expressed in terms of other fundamental operations), some of these operations do not have corresponding lowering in XLA.\n\n
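One practical way to find out whether your own model hits such ops is the PyTorch/XLA debug metrics report. The snippet below is a minimal sketch, assuming the `torch_xla.debug.metrics` helpers that ship with recent PyTorch/XLA releases:

```python
import torch_xla.debug.metrics as met

# Dump the PyTorch/XLA metrics report after running a few steps.
# Counters whose names start with "aten::" mark operations that had no XLA
# lowering and therefore fell back to the CPU, cutting the recorded graph.
print(met.metrics_report())

# The same information is available programmatically:
cpu_fallback_ops = [name for name in met.counter_names() if name.startswith("aten::")]
print(cpu_fallback_ops)
```

If such `aten::` counters keep growing step over step, those are the ops forcing the host round-trips described next.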
\n\nWhat happens when an op with no XLA lowering is used? PyTorch XLA stops the operation recording and cuts the graph(s) leading to the input(s) of the unlowered op. This cut graph is then compiled and dispatched for execution. The results (materialized tensor) of execution are sent back from device to host, the unlowered op is then executed on the host (cpu), and then downstream LazyTensor operations creating a new graph(s) until a barrier is encountered again.", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}} {"page_content": "The third and final scenario which results in a LazyTensor barrier is when there is a control structure/statement or another method which requires the value of a tensor. This statement would at the minimum cause the execution of the computation graph leading to the tensor (if the graph has already been seen) or cause compilation and execution of both.\n\nOther examples of such methods include .item(), isEqual(). In general, any operation that maps Tensor -> Scalar will cause this behavior.\n\n### Dynamic Graph\n\nAs illustrated in the preceding section, graph compilation cost is amortized if the same shape of the graph is executed many times. It\u2019s because the compiled graph is cached with a hash derived from the graph shape, input shape, and the output shape. If these shapes change it will trigger compilation, and too frequent compilation will result in training time degradation.\n\nLet\u2019s consider the following example:", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}} -{"page_content": "```python\ndef dummy_step(x, y, loss, acc=False):\n z = torch.einsum('bs,st->bt', y, x)\n step_loss = z.sum().view(1,)\n if acc:\n loss = torch.cat((loss, step_loss))\n else:\n loss = step_loss\n xm.mark_step()\n return loss\n\n\nimport time\ndef measure_time(acc=False):\n exec_times = []\n iter_count = 100\n x = torch.rand((512, 8)).to(dev)\n y = torch.rand((512, 512)).to(dev)\n loss = torch.zeros(1).to(dev)\n for i in range(iter_count):\n tic = time.time()\n loss = dummy_step(x, y, loss, acc=acc)\n toc = time.time()\n exec_times.append(toc - tic)\n return exec_times\n\ndyn = measure_time(acc=True) # acc= True Results in dynamic graph\nst = measure_time(acc=False) # Static graph, computation shape, inputs and output shapes don't change\n\nimport matplotlib.pyplot as plt\nplt.plot(st, label = 'static graph')\nplt.plot(dyn, label = 'dynamic graph')\nplt.legend()\nplt.title('Execution time in seconds')\n```\n\n
", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}} -{"page_content": "Note that static and dynamic cases have the same computation but dynamic graph compiles every time, leading to the higher overall run-time. In practice, the training step with recompilation can sometimes be an order of magnitude or slower. In the next section we discuss some of the PyTorch/XLA tools to debug training degradation.\n\n### Profiling Training Performance with PyTorch/XLA\n\nPyTorch/XLA profiling consists of two major components. First is the client side profiling. This feature is turned on by simply setting the environment variable PT_XLA_DEBUG to 1. Client side profiling points to unlowered ops or device-to-host transfer in your source code. Client side profiling also reports if there are too frequent compilations happening during the training. You can explore some metrics and counters provided by PyTorch/XLA in conjunction with the profiler in [this](https://github.com/ultrons/xla/blob/lazy-tensor-post/contrib/colab/Exploring_LazyTensor_with_Debug_Metrics.ipynb) notebook.", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}} -{"page_content": "The second component offered by PyTorch/XLA profiler is the inline trace annotation. For example:\n\n```python\nimport torch_xla.debug.profiler as xp\n\ndef train_imagenet():\n print('==> Preparing data..')\n img_dim = get_model_property('img_dim')\n ....\n server = xp.start_server(3294)\n def train_loop_fn(loader, epoch):\n ....\n model.train()\n for step, (data, target) in enumerate(loader):\n with xp.StepTrace('Train_Step', step_num=step):\n ....\n if FLAGS.amp:\n ....\n else:\n with xp.Trace('build_graph'):\n output = model(data)\n loss = loss_fn(output, target)\n loss.backward()\n xm.optimizer_step(optimizer)\n```\n\nNotice the start_server api call. The port number that you have used here is the same port number you will use with the tensorboard profiler in order to view the op trace similar to:\n\n
", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}} +{"page_content": "Let\u2019s consider the following example:\n\n```python\ndef dummy_step(x, y, loss, acc=False):\n z = torch.einsum('bs,st->bt', y, x)\n step_loss = z.sum().view(1,)\n if acc:\n loss = torch.cat((loss, step_loss))\n else:\n loss = step_loss\n xm.mark_step()\n return loss\n\n\nimport time\ndef measure_time(acc=False):\n exec_times = []\n iter_count = 100\n x = torch.rand((512, 8)).to(dev)\n y = torch.rand((512, 512)).to(dev)\n loss = torch.zeros(1).to(dev)\n for i in range(iter_count):\n tic = time.time()\n loss = dummy_step(x, y, loss, acc=acc)\n toc = time.time()\n exec_times.append(toc - tic)\n return exec_times\n\ndyn = measure_time(acc=True) # acc= True Results in dynamic graph\nst = measure_time(acc=False) # Static graph, computation shape, inputs and output shapes don't change\n\nimport matplotlib.pyplot as plt\nplt.plot(st, label = 'static graph')\nplt.plot(dyn, label = 'dynamic graph')\nplt.legend()\nplt.title('Execution time in seconds')\n```", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}} +{"page_content": "
[figure: per-step execution time in seconds, static vs. dynamic graph]
\n\nNote that static and dynamic cases have the same computation but dynamic graph compiles every time, leading to the higher overall run-time. In practice, the training step with recompilation can sometimes be an order of magnitude or slower. In the next section we discuss some of the PyTorch/XLA tools to debug training degradation.\n\n### Profiling Training Performance with PyTorch/XLA", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}} +{"page_content": "PyTorch/XLA profiling consists of two major components. First is the client side profiling. This feature is turned on by simply setting the environment variable PT_XLA_DEBUG to 1. Client side profiling points to unlowered ops or device-to-host transfer in your source code. Client side profiling also reports if there are too frequent compilations happening during the training. You can explore some metrics and counters provided by PyTorch/XLA in conjunction with the profiler in [this](https://github.com/ultrons/xla/blob/lazy-tensor-post/contrib/colab/Exploring_LazyTensor_with_Debug_Metrics.ipynb) notebook.\n\nThe second component offered by PyTorch/XLA profiler is the inline trace annotation. For example:\n\n```python\nimport torch_xla.debug.profiler as xp", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}} +{"page_content": "```python\nimport torch_xla.debug.profiler as xp\n\ndef train_imagenet():\n print('==> Preparing data..')\n img_dim = get_model_property('img_dim')\n ....\n server = xp.start_server(3294)\n def train_loop_fn(loader, epoch):\n ....\n model.train()\n for step, (data, target) in enumerate(loader):\n with xp.StepTrace('Train_Step', step_num=step):\n ....\n if FLAGS.amp:\n ....\n else:\n with xp.Trace('build_graph'):\n output = model(data)\n loss = loss_fn(output, target)\n loss.backward()\n xm.optimizer_step(optimizer)\n```\n\nNotice the start_server api call. The port number that you have used here is the same port number you will use with the tensorboard profiler in order to view the op trace similar to:\n\n
[figure: op trace viewed in the TensorBoard profiler]
", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}} {"page_content": "Op trace along with the client-side debugging function is a powerful set of tools to debug and optimize your training performance with PyTorch/XLA. For more detailed instructions on the profiler usage, the reader is encouraged to explore blogs [part-1](https://cloud.google.com/blog/topics/developers-practitioners/pytorchxla-performance-debugging-tpu-vm-part-1), [part-2](https://cloud.google.com/blog/topics/developers-practitioners/pytorchxla-performance-debugging-cloud-tpu-vm-part-ii), and [part-3](https://cloud.google.com/blog/topics/developers-practitioners/pytorchxla-performance-debugging-cloud-tpu-vm-part-iii) of the blog series on PyTorch/XLA performance debugging.\n\n### Summary", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}} {"page_content": "### Summary\n\nIn this article we have reviewed the fundamentals of the LazyTensor system. We built on those fundamentals with PyTorch/XLA to understand the potential causes of training performance degradation. We discussed why \u201ccompile once and execute often\u201d helps to get the best performance on LazyTensor systems, and why training slows down when this assumption breaks.\n\nWe hope that PyTorch users will find these insights helpful for their novel works with LazyTensor systems.\n\n### Acknowledgements\n\nA big thank you to my outstanding colleagues Jack Cao, Milad Mohammedi, Karl Weinmeister, Rajesh Thallam, Jordan Tottan (Google) and Geeta Chauhan (Meta) for their meticulous reviews and feedback. And thanks to the extended PyTorch/XLA development team from Google, Meta, and the open source community to make PyTorch possible on TPUs. 
And finally, thanks to the authors of the [LazyTensor paper](https://arxiv.org/pdf/2102.13267.pdf) not only for developing LazyTensor but also for writing such an accessible paper.", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}} {"page_content": "## Refrences\n\n[[1]] LazyTensor: combining eager execution with domain-specific compilers\n\n[1]: https://arxiv.org/pdf/2102.13267.pdf", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}} @@ -512,7 +513,7 @@ {"page_content": "The Transform Classes make sure that they apply the same random transforms to all the inputs to ensure consistent results.\n\nThe functional API has been updated to support all necessary signal processing kernels (resizing, cropping, affine transforms, padding etc) for all inputs:\n\n```python\nfrom torchvision.transforms.v2 import functional as F\n\n\n# High-level dispatcher, accepts any supported input type, fully BC\nF.resize(inpt, size=[224, 224])\n# Image tensor kernel\nF.resize_image_tensor(img_tensor, size=[224, 224], antialias=True) \n# PIL image kernel\nF.resize_image_pil(img_pil, size=[224, 224], interpolation=BILINEAR)\n# Video kernel\nF.resize_video(video, size=[224, 224], antialias=True) \n# Mask kernel\nF.resize_mask(mask, size=[224, 224])\n# Bounding box kernel\nF.resize_bounding_box(bbox, size=[224, 224], spatial_size=[256, 256])\n```", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}} {"page_content": "Under the hood, the API uses Tensor subclassing to wrap the input, attach useful meta-data and dispatch to the right kernel. For your data to be compatible with these new transforms, you can either use the provided dataset wrapper which should work with most of torchvision built-in datasets, or your can wrap your data manually into Datapoints:\n\n```python\nfrom torchvision.datasets import wrap_dataset_for_transforms_v2\nds = CocoDetection(..., transforms=v2_transforms)\nds = wrap_dataset_for_transforms_v2(ds) # data is now compatible with transforms v2!\n\n# Or wrap your data manually using the lower-level Datapoint classes:\nfrom torchvision import datapoints\n\nimgs = datapoints.Image(images)\nvids = datapoints.Video(videos)\nmasks = datapoints.Mask(target[\"masks\u201c])\nbboxes = datapoints.BoundingBox(target[\"boxes\u201c], format=\u201dXYXY\u201d, spatial_size=imgs.shape)\n```", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}} {"page_content": "In addition to the new API, we now provide importable implementations for several data augmentations that are used in SoTA research such as [_Large Scale Jitter_](https://github.com/pytorch/vision/blob/928b05cad36eadb13e169f03028767c8bcd1f21d/torchvision/transforms/v2/_geometry.py#L1109), [_AutoAugmentation_](https://github.com/pytorch/vision/blob/main/torchvision/transforms/v2/_auto_augment.py) methods and [_several_](https://github.com/pytorch/vision/blob/main/torchvision/transforms/v2/__init__.py) new Geometric, Color and Type Conversion transforms.\n\nThe API continues to support both PIL and Tensor backends for Images, single or batched input and maintains JIT-scriptability on both the functional and class APIs.. 
The new API has been [_verified_](https://github.com/pytorch/vision/pull/6433#issuecomment-1256741233) to achieve the same accuracy as the previous implementation.\n\n## An end-to-end example", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}} -{"page_content": "Here is an example of the new API using the following [_image_](https://user-images.githubusercontent.com/5347466/195350223-8683ef25-1367-4292-9174-c15f85c7358e.jpg). It works both with PIL images and Tensors. For more examples and tutorials, [_take a look at our gallery!_](https://pytorch.org/vision/0.15/auto_examples/index.html)\n\n\n```python\nfrom torchvision import io, utils\nfrom torchvision import datapoints\nfrom torchvision.transforms import v2 as T\nfrom torchvision.transforms.v2 import functional as F", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}} +{"page_content": "## An end-to-end example\n\n Here is an example of the new API using the following [_image_](https://user-images.githubusercontent.com/5347466/195350223-8683ef25-1367-4292-9174-c15f85c7358e.jpg). It works both with PIL images and Tensors. For more examples and tutorials, [_take a look at our gallery!_](https://pytorch.org/vision/0.15/auto_examples/index.html)\n\n\n```python\nfrom torchvision import io, utils\nfrom torchvision import datapoints\nfrom torchvision.transforms import v2 as T\nfrom torchvision.transforms.v2 import functional as F", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}} {"page_content": "# Defining and wrapping input to appropriate Tensor Subclasses\npath = \"COCO_val2014_000000418825.jpg\"\nimg = datapoints.Image(io.read_image(path))\n# img = PIL.Image.open(path)\nbboxes = datapoints.BoundingBox(\n [[2, 0, 206, 253], [396, 92, 479, 241], [328, 253, 417, 332],\n [148, 68, 256, 182], [93, 158, 170, 260], [432, 0, 438, 26],\n [422, 0, 480, 25], [419, 39, 424, 52], [448, 37, 456, 62],\n [435, 43, 437, 50], [461, 36, 469, 63], [461, 75, 469, 94],\n [469, 36, 480, 64], [440, 37, 446, 56], [398, 233, 480, 304],\n [452, 39, 463, 63], [424, 38, 429, 50]],\n format=datapoints.BoundingBoxFormat.XYXY,\n spatial_size=F.get_spatial_size(img),\n)\nlabels = [59, 58, 50, 64, 76, 74, 74, 74, 74, 74, 74, 74, 74, 74, 50, 74, 74]\n# Defining and applying Transforms V2\ntrans = T.Compose(\n [\n T.ColorJitter(contrast=0.5),\n T.RandomRotation(30),\n T.CenterCrop(480),\n ]\n)\nimg, bboxes, labels = trans(img, bboxes, labels)\n# Visualizing results\nviz = utils.draw_bounding_boxes(F.to_image_tensor(img), boxes=bboxes)\nF.to_pil_image(viz).show()\n```", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}} {"page_content": "## Development milestones and future work\n\nHere is where we are in development:\n\n- [x] Design API\n- [x] Write Kernels for transforming Videos, Bounding Boxes, Masks and Labels\n- [x] Rewrite all existing Transform Classes (stable + references) on the new API:\n - [x] Image Classification\n - [x] Video Classification\n - [x] Object Detection\n - [x] Instance Segmentation\n - [x] Semantic Segmentation\n- [x] Verify the accuracy of the new API for all supported Tasks and Backends\n- [x] 
Speed Benchmarks and Performance Optimizations (in progress - planned for Dec)\n- [x] Graduate from Prototype (planned for Q1)\n- [ ] Add support of Depth Perception, Keypoint Detection, Optical Flow and more (future)\n- [ ] Add smooth support for batch-wise transforms like MixUp and CutMix\n\n\nWe would love to get [_feedback_](https://github.com/pytorch/vision/issues/6753) from you to improve its functionality. Please reach out to us if you have any questions or suggestions.", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: \"New Library Updates in PyTorch 1.13\"\nauthor: Team PyTorch\nfeatured-img: \"assets/images/new-library-updates-in-pytorch-1.13-2.jpg\"\n---\n\n## Summary\n\nWe are bringing a number of improvements to the current PyTorch libraries, alongside the PyTorch 1.13 [release](https://github.com/pytorch/pytorch/releases). These updates demonstrate our focus on developing common and extensible APIs across all domains to make it easier for our community to build ecosystem projects on PyTorch.\n\nAlong with **1.13**, we are releasing updates to the PyTorch Libraries, please find them below.\n\n### TorchAudio \n\n#### (Beta) Hybrid Demucs Model and Pipeline\n\nHybrid Demucs is a music source separation model that uses both spectrogram and time domain features. It has demonstrated state-of-the-art performance in the Sony\u00ae Music DeMixing Challenge. (citation: [https://arxiv.org/abs/2111.03600](https://arxiv.org/abs/2111.03600))\n\nThe TorchAudio v0.13 release includes the following features", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}} @@ -522,34 +523,35 @@ {"page_content": "- LIBRISPEECH ([docs](https://pytorch.org/audio/0.13.0/generated/torchaudio.datasets.LIBRISPEECH.html#torchaudio.datasets.LIBRISPEECH))\n- LibriMix ([docs](https://pytorch.org/audio/0.13.0/generated/torchaudio.datasets.LibriMix.html#torchaudio.datasets.LibriMix))\n- QUESST14 ([docs](https://pytorch.org/audio/0.13.0/generated/torchaudio.datasets.QUESST14.html#torchaudio.datasets.QUESST14))\n- SPEECHCOMMANDS ([docs](https://pytorch.org/audio/0.13.0/generated/torchaudio.datasets.SPEECHCOMMANDS.html#torchaudio.datasets.SPEECHCOMMANDS))\n- (new) FluentSpeechCommands ([docs](https://pytorch.org/audio/0.13.0/generated/torchaudio.datasets.FluentSpeechCommands.html#torchaudio.datasets.FluentSpeechCommands))\n- (new) Snips ([docs](https://pytorch.org/audio/0.13.0/generated/torchaudio.datasets.Snips.html#torchaudio.datasets.Snips))\n- (new) IEMOCAP ([docs](https://pytorch.org/audio/0.13.0/generated/torchaudio.datasets.IEMOCAP.html#torchaudio.datasets.IEMOCAP))\n- (new) VoxCeleb1 ([Identification](https://pytorch.org/audio/0.13.0/generated/torchaudio.datasets.VoxCeleb1Identification.html#torchaudio.datasets.VoxCeleb1Identification), [Verification](https://pytorch.org/audio/0.13.0/generated/torchaudio.datasets.VoxCeleb1Verification.html#torchaudio.datasets.VoxCeleb1Verification))", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}} {"page_content": "#### (Beta) Custom Language Model support in CTC Beam Search Decoding\n\nTorchAudio released a CTC beam search decoder in release 0.12, with KenLM language model support. 
This release, there is added functionality for creating custom Python language models that are compatible with the decoder, using the `torchaudio.models.decoder.CTCDecoderLM` wrapper.\n\nFor more information on using a custom language model, please refer to the [documentation](https://pytorch.org/audio/0.13.0/generated/torchaudio.models.decoder.CTCDecoder.html#ctcdecoderlm) and [tutorial](https://pytorch.org/audio/0.13.0/tutorials/asr_inference_with_ctc_decoder_tutorial.html#custom-language-model).\n\n#### (Beta) StreamWriter\n\ntorchaudio.io.StreamWriter is a class for encoding media including audio and video. This can handle a wide variety of codecs, chunk-by-chunk encoding and GPU encoding.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}} {"page_content": "```python\nwriter = StreamWriter(\"example.mp4\")\nwriter.add_audio_stream(\n sample_rate=16_000,\n num_channels=2,\n)\nwriter.add_video_stream(\n frame_rate=30,\n height=96,\n width=128,\n format=\"rgb24\",\n)\nwith writer.open():\n writer.write_audio_chunk(0, audio)\n writer.write_video_chunk(1, video)\n```\n\nFor more information, refer to [the documentation](https://pytorch.org/audio/0.13.0/generated/torchaudio.io.StreamWriter.html) and the following tutorials\n- [StreamWriter Basic Usage](https://pytorch.org/audio/0.13.0/tutorials/streamwriter_basic_tutorial.html)\n- [StreamWriter Advanced Usage](https://pytorch.org/audio/0.13.0/tutorials/streamwriter_advanced.html)\n- [Hardware-Accelerated Video Decoding and Encoding](https://pytorch.org/audio/0.13.0/hw_acceleration_tutorial.html)\n\n### TorchData\n\nFor a complete list of changes and new features, please visit [our repository\u2019s 0.5.0 release note](https://github.com/pytorch/data/releases).\n\n#### (Prototype) DataLoader2", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}} -{"page_content": "`DataLoader2` was introduced in the last release to execute `DataPipe` graph, with support for dynamic sharding for multi-process/distributed data loading, multiple backend ReadingServices, and `DataPipe` graph in-place modification (e.g. shuffle control).\n\nIn this release, we further consolidated the API for `DataLoader2` and a [detailed documentation is now available here](https://pytorch.org/data/0.5/dataloader2.html). We continue to welcome early adopters and feedback, as well as potential contributors. If you are interested in trying it out, we encourage you to install the nightly version of TorchData.\n\n#### (Beta) Data Loading from Cloud Service Providers\n\nWe extended our support to load data from additional cloud storage providers via DataPipes, now covering AWS, Google Cloud Storage, and Azure. A [tutorial is also available](https://pytorch.org/data/0.5/tutorial.html#working-with-cloud-storage-providers). We are open to feedback and feature requests.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}} +{"page_content": "#### (Prototype) DataLoader2\n\n`DataLoader2` was introduced in the last release to execute `DataPipe` graph, with support for dynamic sharding for multi-process/distributed data loading, multiple backend ReadingServices, and `DataPipe` graph in-place modification (e.g. 
shuffle control).\n\nIn this release, we further consolidated the API for `DataLoader2` and a [detailed documentation is now available here](https://pytorch.org/data/0.5/dataloader2.html). We continue to welcome early adopters and feedback, as well as potential contributors. If you are interested in trying it out, we encourage you to install the nightly version of TorchData.\n\n#### (Beta) Data Loading from Cloud Service Providers\n\nWe extended our support to load data from additional cloud storage providers via DataPipes, now covering AWS, Google Cloud Storage, and Azure. A [tutorial is also available](https://pytorch.org/data/0.5/tutorial.html#working-with-cloud-storage-providers). We are open to feedback and feature requests.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}} {"page_content": "We also performed a simple benchmark, comparing the performance of data loading from AWS S3 and attached volume on an AWS EC2 instance. The results are [visible here](https://github.com/pytorch/data/blob/gh/NivekT/100/head/benchmarks/cloud/aws_s3_results.md).\n\n### torch::deploy (Beta)\n\ntorch::deploy is now in Beta! torch::deploy is a C++ library for Linux based operating systems that allows you to run multiple Python interpreters in a single process. You can run your existing eager PyTorch models without any changes for production inference use cases. Highlights include: \n\n- Existing models work out of the box\u2013no need to modify your python code to support tracing.\n- Full support for your existing Python environment including C extensions.\n- No need to cross process boundaries to load balance in multi-GPU serving environments.\n- Model weight can be shared between multiple Python interpreters.\n- A vastly improved installation and setup process.\n\n```Python\ntorch::deploy::InterpreterManager manager(4);", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}} {"page_content": "// access one of the 4 interpreters\nauto I = manager.acquireOne();\n\n// run infer from your_model.py\nI.global(\"your_model\", \"infer\")({at::randn({10, 240, 320})});\n```\n\nLearn more [here](https://github.com/pytorch/multipy).\n\n#### (Beta) CUDA/ROCm/CPU Backends\n\ntorch::deploy now links against standard PyTorch Python distributions so all accelerators that PyTorch core supports such as CUDA and AMD/HIP work out of the box.\n\n- Can install any device variant of PyTorch via pip/conda like normal.\n- [https://pytorch.org/get-started/locally/](https://pytorch.org/get-started/locally/)\n\n#### (Prototype) aarch64/arm64 support\n\ntorch::deploy now has basic support for aarch64 Linux systems.\n\n- We're looking to gather feedback on it and learn more about arm use cases for eager PyTorch models.\n- Learn more / share your use case at [https://github.com/pytorch/multipy/issues/64](https://github.com/pytorch/multipy/issues/64)\n\n### TorchEval\n\n#### (Prototype) Introducing Native Metrics Support for PyTorch", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}} {"page_content": "TorchEval is a library built for users who want highly performant implementations of common metrics to evaluate machine learning models. It also provides an easy to use interface for building custom metrics with the same toolkit. 
Building your metrics with TorchEval makes running distributed training loops with [torch.distributed](https://pytorch.org/docs/stable/distributed.html) a breeze.\n\nLearn more with our [docs](https://pytorch.org/torcheval), see our [examples](https://pytorch.org/torcheval/metric_example.html), or check out our [GitHub repo](http://github.com/pytorch/torcheval).\n\n### TorchMultimodal Release (Beta)\n\nPlease watch for upcoming blogs in early November that will introduce TorchMultimodal, a PyTorch domain library for training SoTA multi-task multimodal models at scale, in more details; in the meantime, play around with the library and models through our [tutorial](https://pytorch.org/tutorials/beginner/flava_finetuning_tutorial.html).\n\n### TorchRec", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}} {"page_content": "### TorchRec\n\n#### (Prototype) Simplified Optimizer Fusion APIs\n\nWe\u2019ve provided a simplified and more intuitive API for setting fused optimizer settings via apply_optimizer_in_backward. This new approach enables the ability to specify optimizer settings on a per-parameter basis and sharded modules will configure [FBGEMM\u2019s TableBatchedEmbedding modules accordingly](https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/fbgemm_gpu/split_table_batched_embeddings_ops.py#L181). Additionally, this now let's TorchRec\u2019s planner account for optimizer memory usage. This should alleviate reports of sharding jobs OOMing after using Adam using a plan generated from planner.\n\n#### (Prototype) Simplified Sharding APIs", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}} -{"page_content": "We\u2019re introducing the shard API, which now allows you to shard only the embedding modules within a model, and provides an alternative to the current main entry point - DistributedModelParallel. This lets you have a finer grained control over the rest of the model, which can be useful for customized parallelization logic, and inference use cases (which may not require any parallelization on the dense layers). We\u2019re also introducing construct_module_sharding_plan, providing a simpler interface to the TorchRec sharder.\n\n#### (Beta) Quantized Comms", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}} -{"page_content": "Applying [quantization or mixed precision](https://dlp-kdd.github.io/assets/pdf/a11-yang.pdf) to tensors in a collective call during model parallel training greatly improves training efficiency, with little to no effect on model quality. TorchRec now integrates with the [quantized comms library provided by FBGEMM GPU](https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/fbgemm_gpu/quantize_comm.py) and provides an interface to construct encoders and decoders (codecs) that surround the all_to_all, and reduce_scatter collective calls in the output_dist of a sharded module. We also allow you to construct your own codecs to apply to your sharded module. 
The codecs provided by FBGEMM allow FP16, BF16, FP8, and INT8 compressions, and you may use different quantizations for the forward pass and backward pass.\n\n### TorchSnapshot (Beta)", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
{"page_content": "### TorchSnapshot (Beta)\n\nAlong with PyTorch 1.13, we are releasing the beta version of TorchSnapshot, which is a performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind. 
Highlights include:\n\n- Performance: TorchSnapshot provides a fast checkpointing implementation employing various optimizations, including zero-copy serialization for most tensor types, overlapped device-to-host copy and storage I/O, parallelized storage I/O\n- Memory Use: TorchSnapshot's memory usage adapts to the host's available resources, greatly reducing the chance of out-of-memory issues when saving and loading checkpoints\n- Usability: Simple APIs that are consistent between distributed and non-distributed workloads\n\nLearn more with our [tutorial](https://pytorch.org/torchsnapshot/main/getting_started.html).\n\n### TorchVision", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}} {"page_content": "### TorchVision \n\nWe are happy to introduce torchvision v0.14 [(release note)](https://github.com/pytorch/vision/releases). This version introduces a new [model registration API](https://pytorch.org/blog/easily-list-and-initialize-models-with-new-apis-in-torchvision/) to help users retrieving and listing models and weights. It also includes new image and video classification models such as MViT, S3D, Swin Transformer V2, and MaxViT. Last but not least, we also have new primitives and augmentation such as PolynomicalLR scheduler and SimpleCopyPaste.\n\n#### (Beta) Model Registration API", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}} -{"page_content": "Following up on the [multi-weight support API](https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/) that was released on the previous version, we have added a new [model registration API](https://pytorch.org/blog/easily-list-and-initialize-models-with-new-apis-in-torchvision/) to help users retrieve models and weights. There are now 4 new methods under the torchvision.models module: get_model, get_model_weights, get_weight, and list_models. Here are examples of how we can use them:\n\n```Python\nimport torchvision\nfrom torchvision.models import get_model, get_model_weights, list_models\n\n\nmax_params = 5000000\n\ntiny_models = []\nfor model_name in list_models(module=torchvision.models):\n weights_enum = get_model_weights(model_name)\n if len([w for w in weights_enum if w.meta[\"num_params\"] <= max_params]) > 0:\n tiny_models.append(model_name)\n\nprint(tiny_models)\n# ['mnasnet0_5', 'mnasnet0_75', 'mnasnet1_0', 'mobilenet_v2', ...]", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}} +{"page_content": "#### (Beta) Model Registration API\n\nFollowing up on the [multi-weight support API](https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/) that was released on the previous version, we have added a new [model registration API](https://pytorch.org/blog/easily-list-and-initialize-models-with-new-apis-in-torchvision/) to help users retrieve models and weights. There are now 4 new methods under the torchvision.models module: get_model, get_model_weights, get_weight, and list_models. 
Here are examples of how we can use them:\n\n```Python\nimport torchvision\nfrom torchvision.models import get_model, get_model_weights, list_models\n\n\nmax_params = 5000000\n\ntiny_models = []\nfor model_name in list_models(module=torchvision.models):\n weights_enum = get_model_weights(model_name)\n if len([w for w in weights_enum if w.meta[\"num_params\"] <= max_params]) > 0:\n tiny_models.append(model_name)\n\nprint(tiny_models)\n# ['mnasnet0_5', 'mnasnet0_75', 'mnasnet1_0', 'mobilenet_v2', ...]", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}} {"page_content": "model = get_model(tiny_models[0], weights=\"DEFAULT\")\nprint(sum(x.numel() for x in model.state_dict().values()))\n# 2239188\n```\n\n#### (Beta) New Video Classification Models\n\nWe added two new video classification models, MViT and S3D. MViT is a state of the art video classification transformer model which has 80.757% accuracy on the Kinetics400 dataset, while S3D is a relatively small model with good accuracy for its size. These models can be used as follows:\n\n```Python\nimport torch\nfrom torchvision.models.video import *\n\nvideo = torch.rand(3, 32, 800, 600)\nmodel = mvit_v2_s(weights=\"DEFAULT\")\n# model = s3d(weights=\"DEFAULT\")\nmodel.eval()\nprediction = model(images)\n```\n\nHere is the table showing the accuracy of the new video classification models tested in the Kinetics400 dataset.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}} {"page_content": "| **Model** | **Acc@1** | **Acc@5** |\n|--------------------------------|-----------|-----------|\n| mvit_v1_b | 81.474 | 95.776 |\n| mvit_v2_s | 83.196 | 96.36 |\n| s3d | 83.582 | 96.64 |\n\nWe would like to thank Haoqi Fan, Yanghao Li, Christoph Feichtenhofer and Wan-Yen Lo for their work on [PyTorchVideo](https://github.com/facebookresearch/pytorchvideo/) and their support during the development of the MViT model. We would like to thank Sophia Zhi for her contribution implementing the S3D model in torchvision.\n\n#### (Stable) New Architecture and Model Variants\n\nFor Classification Models, we\u2019ve added the Swin Transformer V2 architecture along with pre-trained weights for its tiny/small/base variants. In addition, we have added support for the MaxViT transformer. 
Here is an example on how to use the models:\n\n```Python\nimport torch\nfrom torchvision.models import *", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}} {"page_content": "image = torch.rand(1, 3, 224, 224)\nmodel = swin_v2_t(weights=\"DEFAULT\").eval()\n# model = maxvit_t(weights=\"DEFAULT\").eval()\nprediction = model(image)\n```\n\nHere is the table showing the accuracy of the models tested on ImageNet1K dataset.\n\n| **Model** | **Acc@1** | **Acc@1 change over V1** | **Acc@5** | **Acc@5 change over V1** |\n|---------------|-----------|--------------------------|-----------|--------------------------|\n| swin_v2_t | 82.072 | + 0.598 | 96.132 | + 0.356 |\n| swin_v2_s | 83.712 | + 0.516 | 96.816 | + 0.456 |\n| swin_v2_b | 84.112 | + 0.530 | 96.864 | + 0.224 |\n| maxvit_t | 83.700 | - | 96.722 | - |\n\nWe would like to thank [Ren Pang](https://github.com/ain-soph) and [Teodor Poncu](https://github.com/TeodorPoncu) for contributing the 2 models to torchvision.\n\n### (Stable) New Primitives & Augmentations", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}} -{"page_content": "In this release we\u2019ve added the [SimpleCopyPaste](https://arxiv.org/abs/2012.07177) augmentation in our reference scripts and we up-streamed the PolynomialLR scheduler to PyTorch Core. We would like to thank [Lezwon Castelino](https://github.com/lezwon) and [Federico Pozzi](https://github.com/federicopozzi33) for their contributions. We are continuing our efforts to modernize TorchVision by adding more SoTA primitives, Augmentations and architectures with the help of our community. If you are interested in contributing, have a look at the following [issue](https://github.com/pytorch/vision/issues/6323).\n\n### Torch-TensorRT\n\n#### (Prototype) TensorRT with FX2TRT frontend\n\nTorch-TensorRT is the PyTorch integration for TensorRT, providing high performance inference on NVIDIA GPUs. Torch-TRT allows for optimizing models directly in PyTorch for deployment providing up to 6x performance improvement.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}} +{"page_content": "### (Stable) New Primitives & Augmentations\n\nIn this release we\u2019ve added the [SimpleCopyPaste](https://arxiv.org/abs/2012.07177) augmentation in our reference scripts and we up-streamed the PolynomialLR scheduler to PyTorch Core. We would like to thank [Lezwon Castelino](https://github.com/lezwon) and [Federico Pozzi](https://github.com/federicopozzi33) for their contributions. We are continuing our efforts to modernize TorchVision by adding more SoTA primitives, Augmentations and architectures with the help of our community. If you are interested in contributing, have a look at the following [issue](https://github.com/pytorch/vision/issues/6323).\n\n### Torch-TensorRT\n\n#### (Prototype) TensorRT with FX2TRT frontend\n\nTorch-TensorRT is the PyTorch integration for TensorRT, providing high performance inference on NVIDIA GPUs. Torch-TRT allows for optimizing models directly in PyTorch for deployment providing up to 6x performance improvement.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}} {"page_content": "Torch-TRT is an AoT compiler which ingests an nn.Module or TorchScript module, optimizes compatible subgraphs in TensorRT & leaves the rest to run in PyTorch. 
This gives users the performance of TensorRT, but the usability and familiarity of Torch.\n\nTorch-TensorRT is part of the PyTorch ecosystem, and was released as v1.0 in November \u201821. There are currently two distinct front-ends: Torchscript & FX. Each provides the same value proposition and underlying operation with the primary difference being the input & output formats (TS vs FX / Python).\n\nThe Torchscript front-end was included in v1.0 and should be considered stable. The FX front-end is first released in v1.2 and should be considered a Beta.\n\nRelevant Links:", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}} {"page_content": "Relevant Links:\n\n- [Github](https://github.com/pytorch/TensorRT)\n- [Documentation](https://pytorch.org/TensorRT/)\n- [Generic (TS) getting started guide](https://pytorch.org/TensorRT/getting_started/getting_started_with_python_api.html)\n- [FX getting started guide](https://pytorch.org/TensorRT/tutorials/getting_started_with_fx_path.html)\n\n#### (Stable) Introducing Torch-TensorRT", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}} -{"page_content": "Torch-TensorRT is an integration for PyTorch that leverages inference optimizations of TensorRT on NVIDIA GPUs. It takes advantage of TensorRT optimizations, such as FP16 and INT8 reduced precision, graph optimization, operation fusion, etc. while offering a fallback to native PyTorch when TensorRT does not support the model subgraphs. Currently, there are two frontend paths existing in the library that help to convert a PyTorch model to tensorRT engine. One path is through Torch Script (TS) and the other is through FX frontend. That being said, the models are traced by either TS or FX into their IR graph and then converted to TensorRT from it.\n\nLearn more with our [tutorial](https://pytorch.org/TensorRT/).\n\n### TorchX\n\nTorchX 0.3 updates include a new list API, experiment tracking, elastic training and improved scheduler support. There\u2019s also a new Multi-Objective NAS [tutorial](https://pytorch.org/tutorials/intermediate/ax_multiobjective_nas_tutorial.html) using TorchX + Ax.\n\n#### (Prototype) List", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}} -{"page_content": "The newly added list command and API allows you to list recently launched jobs and their statuses for a given scheduler directly from within TorchX.\n\n- This removes the need for using secondary tools to list the jobs.\n- Full programmatic access to recent jobs for integration with custom tools.\n\n```Python\n$ torchx list -s kubernetes\nAPP HANDLE APP STATUS\n----------------------------------------------- -----------------\nkubernetes://torchx/default:train-f2nx4459p5crr SUCCEEDED\n```\n\nLearn more with our [documentation](https://pytorch.org/torchx/main/schedulers.html#torchx.schedulers.Scheduler.list).\n\n#### (Prototype) Tracker\n\nTorchX Tracker is a new prototype library that provides a flexible and customizable experiment and artifact tracking interface. 
This allows you to track inputs and outputs for jobs across multiple steps to make it easier to use TorchX with pipelines and other external systems.\n\n```Python\nfrom torchx import tracker", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}} -{"page_content": "app_run = tracker.app_run_from_env()\napp_run.add_metadata(lr=lr, gamma=gamma) # hyper parameters\napp_run.add_artifact(\"model\", \"storage://path/mnist_cnn.pt\") # logs / checkpoints\napp_run.add_source(parent_run_id, \"model\") # lineage\n```\n\nExample:\n\n- [https://github.com/pytorch/torchx/tree/main/torchx/examples/apps/tracker](https://github.com/pytorch/torchx/tree/main/torchx/examples/apps/tracker)\n- [https://pytorch.org/torchx/main/tracker.html](https://pytorch.org/torchx/main/tracker.html)\n\n#### (Prototype) Elastic Training and Autoscaling\n\nElasticity on Ray and Kubernetes \u2013 automatic scale up of distributed training jobs when using a supported scheduler. Learn more with our [documentation](https://pytorch.org/torchx/main/components/distributed.html).\n\n#### (Prototype) Scheduler Improvements: IBM\u00ae Spectrum LSF\n\nAdded prototype support for the IBM Spectrum LSF scheduler.\n\n#### (Beta) AWS Batch Scheduler\n\nThe AWS Batch scheduler integration is now in beta.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}} -{"page_content": "- log fetching and listing jobs is now supported.\n- Added configs for job priorities and queue policies\n- Easily access job UI via ui_url\n[https://pytorch.org/torchx/main/schedulers/aws_batch.html](https://pytorch.org/torchx/main/schedulers/aws_batch.html)\n\n#### (Prototype) AnyPrecision Optimizer \n\nDrop in replacement for AdamW optimizer that reduces GPU memory, enables two main features:\n\n- Ability to successfully train the entire model pipeline in full BFloat16.\nKahan summation ensures precision. This can improve training throughput, especially on huge models, by reduced memory and increased computation speed.\n- Ability to change the variance state to BFloat16. This can reduce overall memory required for model training with additional speed improvements.\n\nFind more information [here](https://github.com/pytorch/torchdistx/pull/52).", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}} +{"page_content": "#### (Stable) Introducing Torch-TensorRT\n\nTorch-TensorRT is an integration for PyTorch that leverages inference optimizations of TensorRT on NVIDIA GPUs. It takes advantage of TensorRT optimizations, such as FP16 and INT8 reduced precision, graph optimization, operation fusion, etc. while offering a fallback to native PyTorch when TensorRT does not support the model subgraphs. Currently, there are two frontend paths existing in the library that help to convert a PyTorch model to tensorRT engine. One path is through Torch Script (TS) and the other is through FX frontend. That being said, the models are traced by either TS or FX into their IR graph and then converted to TensorRT from it.\n\nLearn more with our [tutorial](https://pytorch.org/TensorRT/).\n\n### TorchX", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}} +{"page_content": "### TorchX\n\nTorchX 0.3 updates include a new list API, experiment tracking, elastic training and improved scheduler support. 
There\u2019s also a new Multi-Objective NAS [tutorial](https://pytorch.org/tutorials/intermediate/ax_multiobjective_nas_tutorial.html) using TorchX + Ax.\n\n#### (Prototype) List\n\nThe newly added list command and API allows you to list recently launched jobs and their statuses for a given scheduler directly from within TorchX.\n\n- This removes the need for using secondary tools to list the jobs.\n- Full programmatic access to recent jobs for integration with custom tools.\n\n```Python\n$ torchx list -s kubernetes\nAPP HANDLE APP STATUS\n----------------------------------------------- -----------------\nkubernetes://torchx/default:train-f2nx4459p5crr SUCCEEDED\n```\n\nLearn more with our [documentation](https://pytorch.org/torchx/main/schedulers.html#torchx.schedulers.Scheduler.list).\n\n#### (Prototype) Tracker", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}} +{"page_content": "#### (Prototype) Tracker\n\nTorchX Tracker is a new prototype library that provides a flexible and customizable experiment and artifact tracking interface. This allows you to track inputs and outputs for jobs across multiple steps to make it easier to use TorchX with pipelines and other external systems.\n\n```Python\nfrom torchx import tracker\n\napp_run = tracker.app_run_from_env()\napp_run.add_metadata(lr=lr, gamma=gamma) # hyper parameters\napp_run.add_artifact(\"model\", \"storage://path/mnist_cnn.pt\") # logs / checkpoints\napp_run.add_source(parent_run_id, \"model\") # lineage\n```\n\nExample:\n\n- [https://github.com/pytorch/torchx/tree/main/torchx/examples/apps/tracker](https://github.com/pytorch/torchx/tree/main/torchx/examples/apps/tracker)\n- [https://pytorch.org/torchx/main/tracker.html](https://pytorch.org/torchx/main/tracker.html)\n\n#### (Prototype) Elastic Training and Autoscaling", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}} +{"page_content": "#### (Prototype) Elastic Training and Autoscaling\n\nElasticity on Ray and Kubernetes \u2013 automatic scale up of distributed training jobs when using a supported scheduler. Learn more with our [documentation](https://pytorch.org/torchx/main/components/distributed.html).\n\n#### (Prototype) Scheduler Improvements: IBM\u00ae Spectrum LSF\n\nAdded prototype support for the IBM Spectrum LSF scheduler.\n\n#### (Beta) AWS Batch Scheduler\n\nThe AWS Batch scheduler integration is now in beta.\n\n- log fetching and listing jobs is now supported.\n- Added configs for job priorities and queue policies\n- Easily access job UI via ui_url\n[https://pytorch.org/torchx/main/schedulers/aws_batch.html](https://pytorch.org/torchx/main/schedulers/aws_batch.html)\n\n#### (Prototype) AnyPrecision Optimizer \n\nDrop in replacement for AdamW optimizer that reduces GPU memory, enables two main features:", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}} +{"page_content": "- Ability to successfully train the entire model pipeline in full BFloat16.\nKahan summation ensures precision. This can improve training throughput, especially on huge models, by reduced memory and increased computation speed.\n- Ability to change the variance state to BFloat16. 
This can reduce overall memory required for model training with additional speed improvements.\n\nFind more information [here](https://github.com/pytorch/torchdistx/pull/52).", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: \"PyTorch 1.11, TorchData, and functorch are now available\"\nauthor: Team PyTorch\nfeatured-img: \"assets/images/pytorch-logo.jpg\"\n---\n\nWe are excited to announce the release of PyTorch 1.11 ([release notes](https://github.com/pytorch/pytorch/releases/tag/v1.11.0)). This release is composed of over 3,300 commits since 1.10, made by 434 contributors. Along with 1.11, we are releasing beta versions of TorchData and functorch.\n\nSummary:\n\n* **TorchData** is a new library for common modular data loading primitives for easily constructing flexible and performant data pipelines. [View it on GitHub](https://github.com/pytorch/data).\n* **functorch**, a library that adds composable function transforms to PyTorch, is now available in beta. [View it on GitHub](https://github.com/pytorch/functorch).\n* Distributed Data Parallel (DDP) static graph optimizations available in stable.\n\n## Introducing TorchData", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}} -{"page_content": "We are delighted to present the Beta release of [TorchData](https://github.com/pytorch/data). This is a library of common modular data loading primitives for easily constructing flexible and performant data pipelines. Based on community feedback, we have found that the existing DataLoader bundled too many features together and can be difficult to extend. Moreover, different use cases often have to rewrite the same data loading utilities over and over again. The goal here is to enable composable data loading through Iterable-style and Map-style building blocks called \u201c[DataPipes](https://github.com/pytorch/data#what-are-datapipes)\u201d that work well out of the box with the [PyTorch\u2019s DataLoader](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}} +{"page_content": "## Introducing TorchData\n\nWe are delighted to present the Beta release of [TorchData](https://github.com/pytorch/data). This is a library of common modular data loading primitives for easily constructing flexible and performant data pipelines. Based on community feedback, we have found that the existing DataLoader bundled too many features together and can be difficult to extend. Moreover, different use cases often have to rewrite the same data loading utilities over and over again. The goal here is to enable composable data loading through Iterable-style and Map-style building blocks called \u201c[DataPipes](https://github.com/pytorch/data#what-are-datapipes)\u201d that work well out of the box with the [PyTorch\u2019s DataLoader](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}} {"page_content": "A `DataPipe` takes in some access function over Python data structures, `__iter__` for `IterDataPipe` and `__getitem__` for `MapDataPipe`, and returns a new access function with a slight transformation applied. 
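For a concrete feel of what that composition looks like (a minimal sketch; the DataPipe names come from the TorchData API and the toy data is made up):

```Python
from torchdata.datapipes.iter import IterableWrapper

# Wrap a plain Python iterable in an IterDataPipe, then compose transformations;
# each functional call returns a new DataPipe wrapping the previous one.
dp = IterableWrapper(range(10))
dp = dp.map(lambda x: x * 2).filter(lambda x: x % 3 == 0).batch(2)

print(list(dp))  # [[0, 6], [12, 18]]
```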
You can chain multiple DataPipes together to form a data pipeline that performs all the necessary data transformation.\n\nWe have implemented over 50 DataPipes that provide different core functionalities, such as opening files, parsing texts, transforming samples, caching, shuffling, and batching. For users who are interested in connecting to cloud providers (such as Google Drive or AWS S3), the [fsspec](https://pytorch.org/data/0.3.0/torchdata.datapipes.iter.html#io-datapipes) and iopath DataPipes will allow you to do so. The documentation provides detailed explanations and usage examples of each [IterDataPipe](https://pytorch.org/data/0.3.0/torchdata.datapipes.iter.html) and [MapDataPipe](https://pytorch.org/data/0.3.0/torchdata.datapipes.map.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}} {"page_content": "In this release, some of the PyTorch domain libraries have migrated their datasets to use DataPipes. In TorchText, the [popular datasets provided by the library](https://github.com/pytorch/text/tree/release/0.12/torchtext/datasets) are implemented using DataPipes and a [section of its SST-2 binary text classification tutorial](https://pytorch.org/text/0.12.0/tutorials/sst2_classification_non_distributed.html#dataset) demonstrates how you can use DataPipes to preprocess data for your model. There also are other prototype implementations of datasets with DataPipes in [TorchVision (available in nightly releases)](https://github.com/pytorch/vision/tree/main/torchvision/prototype/datasets/_builtin) and in [TorchRec](https://pytorch.org/torchrec/torchrec.datasets.html). You can find more [specific examples here](https://pytorch.org/data/0.3.0/examples.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}} {"page_content": "The [documentation for TorchData](https://pytorch.org/data) is now live. It contains a tutorial that covers [how to use DataPipes](https://pytorch.org/data/0.3.0/tutorial.html#using-datapipes), [use them with DataLoader](https://pytorch.org/data/0.3.0/tutorial.html#working-with-dataloader), and [implement custom ones](https://pytorch.org/data/0.3.0/tutorial.html#implementing-a-custom-datapipe). FAQs and future plans related to DataLoader are described in [our project\u2019s README file](https://github.com/pytorch/data#readme).\n\n## Introducing functorch\n\nWe\u2019re excited to announce the first beta release of [functorch](https://github.com/pytorch/functorch). Heavily inspired by [Google JAX](https://github.com/google/jax), functorch is a library that adds composable function transforms to PyTorch. 
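To give a flavor of what "composable function transforms" means before the fuller description below, here is a small sketch (per-sample gradients via `vmap` and `grad`; the toy loss and data are made up):

```Python
import torch
from functorch import grad, vmap

def loss(weight, x, y):
    return ((x @ weight - y) ** 2).mean()

weight = torch.randn(3)
xs, ys = torch.randn(8, 3), torch.randn(8)

# grad(loss) differentiates w.r.t. the first argument; vmap maps it over the batch,
# producing one gradient per sample without writing a Python loop.
per_sample_grads = vmap(grad(loss), in_dims=(None, 0, 0))(weight, xs, ys)
print(per_sample_grads.shape)  # torch.Size([8, 3])
```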
It aims to provide composable vmap (vectorization) and autodiff transforms that work with PyTorch modules and PyTorch autograd with good eager-mode performance.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}} {"page_content": "Composable function transforms can help with a number of use cases that are tricky to do in PyTorch today:\n\n* computing per-sample-gradients (or other per-sample quantities)\n* running ensembles of models on a single machine\n* efficiently batching together tasks in the inner-loop of MAML\n* efficiently computing Jacobians and Hessians as well as batched ones\n\nComposing vmap (vectorization), vjp (reverse-mode AD), and jvp (forward-mode AD) transforms allows us to effortlessly express the above without designing a separate library for each.\n\nFor more details, please see our [documentation](https://pytorch.org/functorch/), [tutorials](https://pytorch.org/functorch), and [installation instructions](https://pytorch.org/functorch/stable/install.html).\n\n## Distributed Training\n\n### (Stable) DDP static graph", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}} -{"page_content": "DDP static graph assumes that your model employs the same set of used/unused parameters in every iteration, so that it can deterministically know states like which hooks will fire, how many times the hooks will fire and gradients computation ready order after the first iteration. Static graph caches these states in the first iteration, and thus it could support features that DDP can not support in previous releases, e.g., support multiple activation checkpoints on the same parameters regardless of whether there are unused parameters or not. The static graph feature also applies performance optimizations when there are unused parameters, e.g., it avoids traversing graphs to search unused parameters every iteration, and enables dynamic bucketing order. These optimizations in the DDP static graph brought 10% QPS gain for some recommendation models.\n\nTo enable static graph, just simply set static_graph=True in the DDP API like this:\n\n```\nddp_model = DistributedDataParallel(model, static_graph=True)\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}} -{"page_content": "For more details, please see our [documentation](https://pytorch.org/docs/master/generated/torch.nn.parallel.DistributedDataParallel.html) and [tutorials](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html).\n\nThanks for reading, If you\u2019re interested in these updates and want to join the PyTorch community, we encourage you to join the [discussion forums](https://discuss.pytorch.org/) and [open GitHub issues](https://github.com/pytorch/pytorch/issues). To get the latest news from PyTorch, follow us on [Twitter](https://twitter.com/PyTorch), [Medium](https://medium.com/pytorch), [YouTube](https://www.youtube.com/pytorch), and [LinkedIn](https://www.linkedin.com/company/pytorch).\n\nCheers!\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}} +{"page_content": "### (Stable) DDP static graph\n\nDDP static graph assumes that your model employs the same set of used/unused parameters in every iteration, so that it can deterministically know states like which hooks will fire, how many times the hooks will fire and gradients computation ready order after the first iteration. 
Static graph caches these states in the first iteration, and thus it could support features that DDP can not support in previous releases, e.g., support multiple activation checkpoints on the same parameters regardless of whether there are unused parameters or not. The static graph feature also applies performance optimizations when there are unused parameters, e.g., it avoids traversing graphs to search unused parameters every iteration, and enables dynamic bucketing order. These optimizations in the DDP static graph brought 10% QPS gain for some recommendation models.\n\nTo enable static graph, just simply set static_graph=True in the DDP API like this:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}} +{"page_content": "```\nddp_model = DistributedDataParallel(model, static_graph=True)\n```\n\nFor more details, please see our [documentation](https://pytorch.org/docs/master/generated/torch.nn.parallel.DistributedDataParallel.html) and [tutorials](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html).\n\nThanks for reading, If you\u2019re interested in these updates and want to join the PyTorch community, we encourage you to join the [discussion forums](https://discuss.pytorch.org/) and [open GitHub issues](https://github.com/pytorch/pytorch/issues). To get the latest news from PyTorch, follow us on [Twitter](https://twitter.com/PyTorch), [Medium](https://medium.com/pytorch), [YouTube](https://www.youtube.com/pytorch), and [LinkedIn](https://www.linkedin.com/company/pytorch).\n\nCheers!\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: \"Torchserve Performance Tuning, Animated Drawings Case-Study\"\nauthor: Hamid Shojanazeri, Geeta Chauhan, Mark Saroufim, Jesse Smith\nfeatured-img: \"assets/images/sketch_animator.png\"\n---", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}} {"page_content": "In this post we discuss performance tuning of Torchserve for serving your models in production. One of the biggest challenges in the life cycle of a ML project is deploying models in production. This requires a reliable serving solution along with solutions that address the MLOps needs. A robust serving solution needs to provide support for multi model serving, model versioning, metric logging, monitoring and scaling to serve the peak traffic. In this post, we will have an overview of Torchserve and how to tune its performance for production use-cases. We discuss the [Animated Drawings app](https://ai.facebook.com/blog/using-ai-to-bring-childrens-drawings-to-life/) from Meta that can turn your human figure sketches to animations and how it could serve the peak traffic with Torchserve. The Animated Drawing\u2019s workflow is below.\n\n
", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}} {"page_content": "[https://ai.facebook.com/blog/using-ai-to-bring-childrens-drawings-to-life/](https://ai.facebook.com/blog/using-ai-to-bring-childrens-drawings-to-life/)\n\nMany AI systems and tools are designed to handle realistic images of humans, children's drawings add a level of complexity and unpredictability as they are often constructed in abstract, fanciful ways. These types of morphological and stylistic variations can confuse even state-of-the-art AI systems that excel at spotting objects in photorealistic images and drawings.\nMeta AI researchers are working to overcome this challenge so that AI systems will be better able to recognize drawings of human figures in the wildly varied ways that children create them. This great blog post provides more details about the Animated Drawings and the approach taken.\n\n## Torchserve\n\n
Fig 1. Overall flow of Torchserve performance tuning
", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}} @@ -564,7 +566,7 @@ {"page_content": "def postprocess(self, output):\n responses = []\n for inference_output in inference_outputs:\n responses_json = {\n 'classes': inference_output['pred_classes'].tolist(),\n 'scores': inference_output['scores'].tolist(),\n \"boxes\": inference_output['pred_boxes'].tolist()\n }\n responses.append(json.dumps(responses_json))\n\n return responses\n```\n\n## Metrics\n\nAn essential component in serving models in production is the ability to monitor them. **Torchserve** **collects** **system level** [metrics](https://github.com/pytorch/serve/blob/master/docs/metrics.md) regularly and **allows** adding **custom metrics** as well.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}} {"page_content": "**[System level metrics](https://github.com/pytorch/serve/blob/master/docs/metrics.md#system-metrics)** consist of CPU utilization, available and used disk space and memory on the host machine along with number of requests with different response codes (e.g 200-300, 400-500 and above 500). **Custom metrics** can be **added** to the metrics as explained [here](https://github.com/pytorch/serve/blob/master/docs/metrics.md#custom-metrics-api). TorchServe logs these two sets of metrics to different log files. Metrics are collected by default at:\n\n* System metrics - log_directory/ts_metrics.log\n* Custom metrics - log directory/model_metrics.log", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}} {"page_content": "As mentioned before, Torchserve also exposes [metric API](https://github.com/pytorch/serve/blob/master/docs/metrics_api.md), that by default listens to port 8082 and enables users to query and monitor the collected metrics. The default metrics endpoint returns Prometheus formatted metrics. You can query metrics using curl requests or point a [Prometheus Server](https://github.com/pytorch/serve/blob/master/docs/metrics_api.md#prometheus-server) to the endpoint and use [Grafana](https://github.com/pytorch/serve/blob/master/docs/metrics_api.md#grafana) for dashboards. \n\nWhile serving a model you can query metrics using curl request as follows:\n\n```\ncurl http://127.0.0.1:8082/metrics\n```", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}} -{"page_content": "In case you are looking into exporting the logged metrics, please refer to this [example](https://github.com/google/mtail) that uses mtail to export metrics to Prometheus. Tracking these metrics in a dashboard allows you to monitor performance regressions that may have been sporadic or hard to spot during an offline benchmark run.\n\n## What to consider for tuning performance of a model in production\n\nThe workflow suggested in Fig 1, is the general idea on how to approach model deployment in production with Torchserve.\n\nIn many cases serving models in production is **optimized** **based** on **throughput** or **latency** service level agreement (**SLA)s**. 
Usually **real-time** **applications** are more concerned about **latency** whereas **off-line applications** may care more about higher **throughput**.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}} +{"page_content": "```\ncurl http://127.0.0.1:8082/metrics\n```\n\nIn case you are looking into exporting the logged metrics, please refer to this [example](https://github.com/google/mtail) that uses mtail to export metrics to Prometheus. Tracking these metrics in a dashboard allows you to monitor performance regressions that may have been sporadic or hard to spot during an offline benchmark run.\n\n## What to consider for tuning performance of a model in production\n\nThe workflow suggested in Fig 1, is the general idea on how to approach model deployment in production with Torchserve.\n\nIn many cases serving models in production is **optimized** **based** on **throughput** or **latency** service level agreement (**SLA)s**. Usually **real-time** **applications** are more concerned about **latency** whereas **off-line applications** may care more about higher **throughput**.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}} {"page_content": "There are a number of main factors contributing to the performance of a serving model in production. In particular, we are focusing on serving Pytorch models with Torchserve here, however most of these factors generalize to all models from other frameworks as well.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}} {"page_content": "* **Model optimizations**: this is a pre-step for deploying models into production. This is a very broad discussion that we will get into in a series of future blogs. This includes techniques like quantization, pruning to decrease the size of the model, using Intermediate representations (IR graphs) such as Torchscript in Pytorch, fusing kernels and many others. Currently [torchprep](https://github.com/msaroufim/torchprep) provides many of these techniques as a CLI tool. \n* **Batch inference:** it refers to feeding multiple inputs into a model, while it is essential during training, it can be very helpful to manage the cost at inference time as well. Hardware accelerators are optimized for parallelism and batching helps to saturate the compute capacity and often leads to higher throughput. The main difference in inference is you can\u2019t wait too long to get a batch filled from clients, something we call dynamic batching\n* **Number of Workers :** Torchserve uses workers to serve models. Torchserve workers are Python processes that hold a copy of the model weights for running inference. Too few workers means you\u2019re not benefitting from enough parallelism but too many can cause worker contention and degrade end to end performance.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}} {"page_content": "- **Hardware :** choosing the appropriate hardware based on the model, application and latency, throughput budget. This could be one of the **supported** hardwares in Torchserve, **CPU, GPU, AWS Inferentia**. Some hardware configurations are intended for best in class performance and others are better suited for cost effective inference. 
From our experiments we\u2019ve found that GPUs shine best at larger batch sizes whereas the right CPUs and AWS Inferentia can be far more cost effective for lower batch sizes and low latency.\n\n## Best Practices for Performance tuning on Torchserve\n\nTo get the best performance out of your model while serving it with Torchserve, we are sharing some of the best practices here. Torchserve provides a [benchmark](https://github.com/pytorch/serve/tree/c87bfec8916d340de5de5810b14a016049b0e395/benchmarks#benchmarking-with-apache-bench) suite that provides helpful insight to make informed decisions on different choices as detailed below.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}} @@ -572,7 +574,7 @@ {"page_content": "Torchserve exposes a [config property](https://github.com/pytorch/serve/blob/c87bfec8916d340de5de5810b14a016049b0e395/docs/configuration.md#config-model) to set the number of workers. To provide an **efficient parallelism** through **multiple workers** and avoiding them to compete over resources, as a baseline we **recommend** following setting on CPU and GPU:\n\n\n **CPU** : In the handler, `torch.set_num_threads(1) `then set the number of workers to `num physical cores / 2. `But the the best threading configurations can be achieved by leveraging the Intel CPU launcher script.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}} {"page_content": "**GPU**: number of available GPUs can be set through[ number_gpus](https://github.com/pytorch/serve/blob/c87bfec8916d340de5de5810b14a016049b0e395/docs/configuration.md#limit-gpu-usage) in config.properties. Torchserve uses round robin to assign workers to GPUs. We recommend setting the number of workers as follows. `Number of worker = (Number of available GPUs) / (Number of Unique Models). `Note that GPUs that are pre-Ampere do not provide any resource isolation with Multi Instance GPUs.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}} {"page_content": "* **Batch size** can directly affect the latency and the throughput. To better utilize the compute resources batch size needs to be increased. However, there is a tradeoff between latency and throughput. **Larger batch sizes** can **increase** the **throughput but results in a higher latency** as well. Batch size can be set in Torchserve in two ways, either through[ model config](https://github.com/pytorch/serve/blob/c87bfec8916d340de5de5810b14a016049b0e395/docs/configuration.md#config-model) in config.properties or while registering the model using [Management API](https://github.com/pytorch/serve/blob/c87bfec8916d340de5de5810b14a016049b0e395/docs/management_api.md#scale-workers). \n\nIn the next section, we are going to use Torchserve benchmark suite to decide the best combination of model optimization, hardware, workers, and batch size. \n\n## Animated Drawings Performance Tuning", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}} -{"page_content": "To use the Torchserve benchmark suite, first we need to have an archived file, \u201c.mar\u201d file as discussed above, that contains the model, handler and all other artifacts to load and run inference. Animated Drawings uses Detectron2\u2019s implementation of Mask-RCNN for an object detection model. 
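As a point of reference for where these knobs live, a model can be registered with a specific number of workers, batch size and batch delay through the Management API; the archive name and values below are made up and should be tuned using the benchmark suite described next:

```
curl -X POST "http://localhost:8081/models?url=maskrcnn.mar&initial_workers=4&batch_size=8&max_batch_delay=100"
```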
\n\n### How to run benchmark suite \n\nThe [Automated benchmark suite](https://github.com/pytorch/serve/tree/master/benchmarks#auto-benchmarking-with-apache-bench) in Torchserve let you benchmark multiple models with different setting including batch size and number of worker and finally generate a report for you. To get started:\n\n```\ngit clone https://github.com/pytorch/serve.git\n\ncd serve/benchmarks\n\npip install -r requirements-ab.txt\n\napt-get install apache2-utils\n```\n\nModel level settings can be configured in a yaml file similar to \n\n```yaml", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}} +{"page_content": "## Animated Drawings Performance Tuning \n\nTo use the Torchserve benchmark suite, first we need to have an archived file, \u201c.mar\u201d file as discussed above, that contains the model, handler and all other artifacts to load and run inference. Animated Drawings uses Detectron2\u2019s implementation of Mask-RCNN for an object detection model. \n\n### How to run benchmark suite \n\nThe [Automated benchmark suite](https://github.com/pytorch/serve/tree/master/benchmarks#auto-benchmarking-with-apache-bench) in Torchserve let you benchmark multiple models with different setting including batch size and number of worker and finally generate a report for you. To get started:\n\n```\ngit clone https://github.com/pytorch/serve.git\n\ncd serve/benchmarks\n\npip install -r requirements-ab.txt\n\napt-get install apache2-utils\n```\n\nModel level settings can be configured in a yaml file similar to \n\n```yaml", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}} {"page_content": "```yaml\n\nModel_name:\n eager_mode:\n benchmark_engine: \"ab\"\n url: \"Path to .mar file\"\n workers:\n - 1\n - 4\n batch_delay: 100\n batch_size:\n - 1\n - 2\n - 4\n - 8\n requests: 10000\n concurrency: 10\n input: \"Path to model input\"\n backend_profiling: False\n exec_env: \"local\"\n processors:\n - \"cpu\"\n - \"gpus\": \"all\"\n\n```\n\nThis yaml file will be referenced in the [benchmark_config_template](https://github.com/pytorch/serve/blob/master/benchmarks/benchmark_config_template.yaml#L12).yaml file that includes other settings for generating reports, this can optionally work with AWS cloud watch for logs as well.\n\n```\npython benchmarks/auto_benchmark.py --input benchmark_config_template.yaml\n```", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}} {"page_content": "Running the **benchmarks**, results will be written in \u201ccsv\u201d file that can be found in \u201c_ /tmp/benchmark/ab_report.csv_\u201d and full report \u201c/tmp/ts_benchmark/report.md\". It will include items such as Torchserve average latency, model P99 latency, throughput, number of concurrency, number of requests, handler time, and some other metrics. Here we focus on some of the important ones that we track to tune the performance which are, **concurrency**, **model P99** latency, **throughput**. 
We look at these numbers specifically in **combination** with **batch size**, the used **device, number of workers** and if any **model optimization** has been done.\n\n\nThe **latency SLA** for this model has been set to **100 ms,** this is real-time application and as we discussed earlier, latency is more of a concern and **throughput** ideally should be as high as possible while it does **not violate** the **latency SLA.**", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}} {"page_content": "Through searching the space, over different batch sizes (1-32), number of workers (1-16) and devices (CPU,GPU), we have run a set of experiments that summarized the best ones in the table below.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}} @@ -583,14 +585,14 @@ {"page_content": "### Acknowledgement\n\nWe would like to thank Somya Jain (Meta), Christopher Gustave (Meta) for their great support and guidance throughout many steps of this blog and providing insights to Sketch Animator workflow. Also, special thanks to[ Li Ning](https://www.linkedin.com/in/li-ning-7274604/) from AWS for the great efforts to make performance tuning much easier on Torchserve with automated benchmark suite.\n\n\n", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: \"Scaling Vision Model Training Platforms with PyTorch\"\nauthor: Vaibhav Aggarwal, Mannat Singh, Anjali Sridhar, Yanghao Li, Shoubhik Debnath, Ronghang Hu, Will Feng, Xinlei Chen, Tingting Markstrum, Diana Liskovich, Anupam Bhatnagar, Chay Ryali, Haoqi Fan, Tete Xiao, Min Xu, Rahul Iyer, Christoph Feichtenhofer, Ross Girshick, Piotr Dollar, Aaron Adcock, Wan-Yen Lo, CK Luk\nfeatured-img: \"/assets/images/scaling-vision-figure_1-solutions-to-the-challenges.png\"\n---\n\n*TL;DR: We demonstrate the use of PyTorch with FairScale\u2019s FullyShardedDataParallel (FSDP) API in writing large vision transformer models. We discuss our techniques for scaling and optimizing these models on a GPU cluster. The goal of this platform scaling effort is to enable research at scale. This blog does not discuss model accuracy, new model architectures, or new training recipes.*\n\n## 1. Introduction", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "## 1. Introduction\n\nLatest vision research [1, 2] demonstrates model scaling as a promising research direction. In this project, we aim to enable our platforms to train massive vision transformer (ViT) [3] models. We present our work on scaling the largest trainable ViT from 1B to 120B parameters in FAIR vision platforms. We wrote ViT in PyTorch and leveraged its support for large-scale, distributed training on a GPU cluster.\n\nIn the rest of this blog, we will first discuss the main challenges, namely *scalability*, *optimization*, and *numerical stability*. Then we will discuss how we tackle them with techniques including *data and model parallelism*, *automatic mixed precision*, *kernel fusion*, and *bfloat16*. Finally, we present our results and conclude.\n\n## 2. 
Main Challenges\n\n### 2.1 Scalability", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}} -{"page_content": "### 2.1 Scalability\n\nThe key scalability challenge is to efficiently shard a model\u2019s operations and state across multiple GPUs. A 100B parameter model requires ~200GB of RAM just for parameters, assuming fp16 representation. So, it is impossible to fit the model on a single GPU (A100 has at most 80GB RAM). Therefore, we need some way to efficiently shard a model\u2019s data (input, parameters, activations, and optimizer state) across multiple GPUs.\n\nAnother aspect of this problem is to scale without significantly changing the training recipe. E.g. Certain representation learning recipes use a global batch size of up to 4096 beyond which we start to see accuracy degradation. We cannot scale to more than 4096 GPUs without using some form of tensor or pipeline parallelism.\n\n### 2.2 Optimization", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}} +{"page_content": "## 2. Main Challenges\n\n### 2.1 Scalability\n\nThe key scalability challenge is to efficiently shard a model\u2019s operations and state across multiple GPUs. A 100B parameter model requires ~200GB of RAM just for parameters, assuming fp16 representation. So, it is impossible to fit the model on a single GPU (A100 has at most 80GB RAM). Therefore, we need some way to efficiently shard a model\u2019s data (input, parameters, activations, and optimizer state) across multiple GPUs.\n\nAnother aspect of this problem is to scale without significantly changing the training recipe. E.g. Certain representation learning recipes use a global batch size of up to 4096 beyond which we start to see accuracy degradation. We cannot scale to more than 4096 GPUs without using some form of tensor or pipeline parallelism.\n\n### 2.2 Optimization", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "### 2.2 Optimization\n\nThe key optimization challenge is to maintain high GPU utilization even as we scale the number of model parameters and flops. When we scale models to teraflops and beyond, we start to hit major bottlenecks in our software stack that super-linearly increase training time and reduce accelerator utilization. We require hundreds or thousands of GPUs to run just a single experiment. Improvements in accelerator utilization can lead to significant reductions in cost and improve fleet utilization. It enables us to fund more projects and run more experiments in parallel.\n\n### 2.3 Numerical Stability", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}} -{"page_content": "The key stability challenge is to avoid numerical instability and divergence at large scale. We empirically observed in our experiments that the training instability gets severe and hard to deal with when we scale up model sizes, data, batch sizes, learning rate, etc. Vision Transformers particularly face training instability even at a lower parameter threshold. E.g., we find it challenging to train even ViT-H (with just 630M parameters) in mixed-precision mode without using strong data augmentation. 
We need to study the model properties and training recipes to make sure that the models train stably and converge.\n\n## 3. Our Solutions\n\n**Figure 1** depicts our solutions to each of the challenges.\n\n
\n\n### 3.1 Addressing scaling challenges with data parallelism and model parallelism\n\nWe apply various forms of data and model parallelism to enable fitting very large models in GPU memory.", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}} -{"page_content": "We use FairScale\u2019s *FullyShardedDataParallel (FSDP)* API [4], based on PyTorch, to shard parameters, gradients, and optimizer state across multiple GPUs, thereby reducing the memory footprint per GPU. This process consists of the following three steps:\n\n- Step 1: We wrapped the entire model in a single FSDP instance. This shards the model parameters at the end of a forward pass and gathers parameters at the beginning of a forward pass. This enabled us to scale ~3x from 1.5B to 4.5B parameters. \n\n- Step 2: We experimented with wrapping individual model layers in separate FSDP instances. This nested wrapping further reduced the memory footprint by sharding and gathering parameters of individual model layers instead of an entire model. The peak memory is then determined by an individually wrapped transformer block in GPU memory in this mode instead of the entire model.", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}} +{"page_content": "### 2.3 Numerical Stability\n\nThe key stability challenge is to avoid numerical instability and divergence at large scale. We empirically observed in our experiments that the training instability gets severe and hard to deal with when we scale up model sizes, data, batch sizes, learning rate, etc. Vision Transformers particularly face training instability even at a lower parameter threshold. E.g., we find it challenging to train even ViT-H (with just 630M parameters) in mixed-precision mode without using strong data augmentation. We need to study the model properties and training recipes to make sure that the models train stably and converge.\n\n## 3. Our Solutions\n\n**Figure 1** depicts our solutions to each of the challenges.\n\n
\n\n### 3.1 Addressing scaling challenges with data parallelism and model parallelism", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}} +{"page_content": "We apply various forms of data and model parallelism to enable fitting very large models in GPU memory.\n\nWe use FairScale\u2019s *FullyShardedDataParallel (FSDP)* API [4], based on PyTorch, to shard parameters, gradients, and optimizer state across multiple GPUs, thereby reducing the memory footprint per GPU. This process consists of the following three steps:\n\n- Step 1: We wrapped the entire model in a single FSDP instance. This shards the model parameters at the end of a forward pass and gathers parameters at the beginning of a forward pass. This enabled us to scale ~3x from 1.5B to 4.5B parameters. \n\n- Step 2: We experimented with wrapping individual model layers in separate FSDP instances. This nested wrapping further reduced the memory footprint by sharding and gathering parameters of individual model layers instead of an entire model. The peak memory is then determined by an individually wrapped transformer block in GPU memory in this mode instead of the entire model.", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "- Step 3: We used *activation-checkpoint* to reduce the memory consumption by activations. It saves the input tensors and discards the intermediate activation tensors during the forward pass. These are recomputed during the backward pass.\n\nIn addition, we experimented with model-parallelism techniques such as pipeline parallelism [5], which allow us to scale to more GPUs without increasing the batch size.\n\n### 3.2 Addressing optimization challenges with advanced AMP and kernel fusion\n\n#### Advanced AMP\n\nAutomatic Mixed Precision (AMP) [6] training refers to training models using a lower precision of bits than FP32 or the default but still maintaining accuracy. We experimented with three levels of AMP as described below:\n\n- AMP O1: This refers to training in mixed precision where weights are in FP32 and some operations are in FP16. With AMP O1, the ops that might impact accuracy remain in FP32 and are not autocasted to FP16.", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "- AMP O2: This refers to training in mixed precision but with more weights and ops in FP16 than in O1. Weights do not implicitly remain in FP32 and are cast to FP16. A copy of the master weights is maintained in the FP32 precision that is used by the optimizer. If we want the normalization layer weights in FP32 then we need to explicitly use layer wrapping to ensure that.\n\n- Full FP16: This refers to training in full FP16 where weights and operations are in FP16. 
FP16 is challenging to enable for training due to convergence issues.\n\nWe found that AMP O2 with LayerNorm wrapping in FP32 leads to the best performance without sacrificing accuracy.\n\n#### Kernel Fusion\n\n- To reduce GPU kernel launch overhead and increase GPU work granularity, we experimented with kernel fusions, including fused dropout and fused layer-norm, using the [xformers library](https://github.com/facebookresearch/xformers) [7].\n\n### 3.3 Addressing stability challenges by studying ops numerical stability and training recipes", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "#### BFloat16 in general but with LayerNorm in FP32\n\nThe [bfloat16](https://cloud.google.com/tpu/docs/bfloat16) (BF16) [8] floating-point format provides the same dynamic range as FP32 with a memory footprint identical to FP16. We found that we could train models in the BF16 format using the same set of hyperparameters as in FP32, without special parameter tuning. Nevertheless, we found that we need to keep LayerNorm in FP32 mode in order for the training to converge.\n\n### 3.4 Final training recipe\n\nA summary of the final training recipe.", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}} -{"page_content": "1. Wrap the outer model in an FSDP instance. Enable parameter sharding after the forward pass.\n2. Wrap individual ViT blocks with activation checkpointing, nested FSDP wrapping, and parameter flattening.\n3. Enable mixed precision mode (AMP O2) with bfloat16 representation. Maintain the optimizer state in FP32 precision to enhance numerical stability.\n4. Wrap normalization layers like LayerNorm in FP32 for better numerical stability.\n5. Maximize the Nvidia TensorCore utilization by keeping matrix dimensions to be multiple of 8. For More details check [Nvidia Tensor Core Performance Guide](https://developer.download.nvidia.com/video/gputechconf/gtc/2019/presentation/s9926-tensor-core-performance-the-ultimate-guide.pdf).\n\n## 4. Results", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}} +{"page_content": "A summary of the final training recipe.\n\n1. Wrap the outer model in an FSDP instance. Enable parameter sharding after the forward pass.\n2. Wrap individual ViT blocks with activation checkpointing, nested FSDP wrapping, and parameter flattening.\n3. Enable mixed precision mode (AMP O2) with bfloat16 representation. Maintain the optimizer state in FP32 precision to enhance numerical stability.\n4. Wrap normalization layers like LayerNorm in FP32 for better numerical stability.\n5. Maximize the Nvidia TensorCore utilization by keeping matrix dimensions to be multiple of 8. For More details check [Nvidia Tensor Core Performance Guide](https://developer.download.nvidia.com/video/gputechconf/gtc/2019/presentation/s9926-tensor-core-performance-the-ultimate-guide.pdf).\n\n## 4. Results", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "## 4. Results\n\nIn this section, we show the scaling results of ViT on three types of tasks: (1) image classification, (2) object detection (3) video understanding. 
**Our key result is that we are able to train massive ViT backbones across these vision tasks after applying the discussed scaling and optimization techniques. This enables vision research at a much larger scale.** We trained the models to convergence to verify that we maintain the current baselines even with all the optimizations. A common trend in Figures 2, 3, 4 is that we are able to train up to 25B-param models with an epoch time of less than 4 hours on 128 A100 GPUs. The 60B and 120B models are relatively slower to train.\n\n**Figure 2** shows the *image-classification* scaling result. It plots the epoch time for training ViTs on ImageNet using 128 A100-80GB GPUs with different model sizes.\n\n
", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "
\nFigure 2: Image-classification scaling result.\n
\n\n**Figure 3** shows the *object-detection* scaling result. It plots the epoch time for training [ViTDet](https://arxiv.org/abs/2203.16527) [9] with different ViT backbones on COCO using 128 A100-80GB GPUs.\n\n
\nFigure 3: Object-detection scaling result.\n
\n\n**Figure 4** shows the *video-understanding* scaling result. It plots the epoch time for training [MViTv2](https://arxiv.org/abs/2112.01526) [10] models on [Kinetics 400](https://www.deepmind.com/open-source/kinetics) [11] using 128 V100 (32 GB) GPUs in FP32.\n\n
\nFigure 4: Video-understanding scaling result.\n
", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "**Figure 5** shows the optimization result with the ViT-H model in Figure 2 on 8 A100-40GB GPUs.\nThree versions are used: (1) the baseline uses PyTorch\u2019s DDP [12] with AMP O1, (2) FSDP + AMP-O2 + other optimizations, and (3) FSDP + FP16 + other optimizations. These optimizations altogether speed up the training by up to 2.2x.\n\n
\nFigure 5: Training speedups from various optimizations.\n
\n\n## 5. Concluding Remarks\n\nWe have demonstrated the use of PyTorch with FairScale\u2019s FullyShardedDataParallel (FSDP) API in writing large vision transformer models. We discuss our techniques for scaling and optimizing these models on a GPU cluster. We hope that this article can motivate others to develop large-scale ML models with PyTorch and its ecosystem.\n\n## References\n\n[1] [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377)", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}} @@ -598,7 +600,7 @@ {"page_content": "[12] [Getting Started with Distributed Data Parallel (DDP)](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html)", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'Announcing PyTorch Ecosystem Day'\nauthor: Team PyTorch\n---\n\nWe\u2019re proud to announce our first PyTorch Ecosystem Day. The virtual, one-day event will focus completely on our Ecosystem and Industry PyTorch communities!\n\n\nPyTorch is a deep learning framework of choice for academics and companies, all thanks to its rich ecosystem of tools and strong community. As with our developers, our ecosystem partners play a pivotal role in the development and growth of the community.\n\n
", "metadata": {"source": "https://pytorch.org/blog/ecosystem_day_2021/", "category": "pytorch blogs"}} {"page_content": "We will be hosting our first PyTorch Ecosystem Day, a virtual event designed for our ecosystem and industry communities to showcase their work and discover new opportunities to collaborate. \n \nPyTorch Ecosystem Day will be held on April 21, with both a morning and evening session, to ensure we reach our global community. Join us virtually for a day filled with discussions on new developments, trends, challenges, and best practices through keynotes, breakout sessions, and a unique networking opportunity hosted through Gather.Town . \n\n## Event Details\nApril 21, 2021 (Pacific Time)\nFully digital experience \n \n* Morning Session: (EMEA)\nOpening Talks - 8:00 am-9:00 am PT\nPoster Exhibition & Breakout Sessions - 9:00 am-12:00 pm PT \n\n* Evening Session (APAC/US)\nOpening Talks - 3:00 pm-4:00 pm PT\nPoster Exhibition & Breakout Sessions - 3:00 pm-6:00 pm PT \n\n* Networking - 9:00 am-7:00 pm PT", "metadata": {"source": "https://pytorch.org/blog/ecosystem_day_2021/", "category": "pytorch blogs"}} -{"page_content": "### There are two ways to participate in PyTorch Ecosystem Day:\n \n1. **Poster Exhibition** from the PyTorch ecosystem and industry communities covering a variety of topics. Posters are available for viewing throughout the duration of the event. To be part of the poster exhibition, please see below for submission details. If your poster is accepted, we highly recommend tending your poster during one of the morning or evening sessions or both!\n \n2. **Breakout Sessions** are 40-min sessions freely designed by the community. The breakouts can be talks, demos, tutorials or discussions. Note: you must have an accepted poster to apply for the breakout sessions.", "metadata": {"source": "https://pytorch.org/blog/ecosystem_day_2021/", "category": "pytorch blogs"}} +{"page_content": "* Networking - 9:00 am-7:00 pm PT\n\n### There are two ways to participate in PyTorch Ecosystem Day:\n \n1. **Poster Exhibition** from the PyTorch ecosystem and industry communities covering a variety of topics. Posters are available for viewing throughout the duration of the event. To be part of the poster exhibition, please see below for submission details. If your poster is accepted, we highly recommend tending your poster during one of the morning or evening sessions or both!\n \n2. **Breakout Sessions** are 40-min sessions freely designed by the community. The breakouts can be talks, demos, tutorials or discussions. Note: you must have an accepted poster to apply for the breakout sessions.", "metadata": {"source": "https://pytorch.org/blog/ecosystem_day_2021/", "category": "pytorch blogs"}} {"page_content": "Call for posters now open! [Submit your proposal](https://pytorchecosystemday.fbreg.com/posters) today! Please send us the **title** and **summary** of your projects, tools, and libraries that could benefit PyTorch researchers in academia and industry, application developers, and ML engineers for consideration. The focus must be on academic papers, machine learning research, or open-source projects. Please no sales pitches. 
**Deadline for submission is March 18, 2021.** \n\nVisit [pytorchecosystemday.fbreg.com](http://pytorchecosystemday.fbreg.com) for more information and we look forward to welcoming you to PyTorch Ecosystem Day on April 21st!", "metadata": {"source": "https://pytorch.org/blog/ecosystem_day_2021/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch 1.5 released, new and updated APIs including C++ frontend API parity with Python'\nauthor: Team PyTorch\n---\n\n\nToday, we\u2019re announcing the availability of PyTorch 1.5, along with new and updated libraries. This release includes several major new API additions and improvements. PyTorch now includes a significant update to the C++ frontend, \u2018channels last\u2019 memory format for computer vision models, and a stable release of the distributed RPC framework used for model-parallel training. The release also has new APIs for autograd for hessians and jacobians, and an API that allows the creation of Custom C++ Classes that was inspired by pybind.\n\nYou can find the detailed release notes [here](https://github.com/pytorch/pytorch/releases).\n\n## C++ Frontend API (Stable)\n\nThe C++ frontend API is now at parity with Python, and the features overall have been moved to \u2018stable\u2019 (previously tagged as experimental). Some of the major highlights include:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}} {"page_content": "* Now with ~100% coverage and docs for C++ torch::nn module/functional, users can easily translate their model from Python API to C++ API, making the model authoring experience much smoother.\n* Optimizers in C++ had deviated from the Python equivalent: C++ optimizers can\u2019t take parameter groups as input while the Python ones can. Additionally, step function implementations were not exactly the same. With the 1.5 release, C++ optimizers will always behave the same as the Python equivalent.\n* The lack of tensor multi-dim indexing API in C++ is a well-known issue and had resulted in many posts in PyTorch Github issue tracker and forum. The previous workaround was to use a combination of `narrow` / `select` / `index_select` / `masked_select`, which was clunky and error-prone compared to the Python API\u2019s elegant `tensor[:, 0, ..., mask]` syntax. With the 1.5 release, users can use `tensor.index({Slice(), 0, \"...\", mask})` to achieve the same purpose.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}} @@ -607,17 +609,17 @@ {"page_content": "You can try it out in the tutorial [here](https://pytorch.org/tutorials/advanced/torch_script_custom_classes.html).\n\n\n## Distributed RPC framework APIs (Now Stable)\n\nThe Distributed [RPC framework](https://pytorch.org/docs/stable/rpc.html) was launched as experimental in the 1.4 release and the proposal is to mark Distributed RPC framework as stable and no longer experimental. This work involves a lot of enhancements and bug fixes to make the distributed RPC framework more reliable and robust overall, as well as adding a couple of new features, including profiling support, using TorchScript functions in RPC, and several enhancements for ease of use. Below is an overview of the various APIs within the framework:\n\n### RPC API\nThe RPC API allows users to specify functions to run and objects to be instantiated on remote nodes. 
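For instance, a minimal sketch of a synchronous remote call (assuming two RPC worker processes have been launched and the usual MASTER_ADDR/MASTER_PORT environment variables are set):

```Python
import torch
import torch.distributed.rpc as rpc

# This process plays the role of "worker0"; a second process would call
# rpc.init_rpc("worker1", rank=1, world_size=2).
rpc.init_rpc("worker0", rank=0, world_size=2)

# Run torch.add remotely on worker1 and block until the result is returned.
result = rpc.rpc_sync("worker1", torch.add, args=(torch.ones(2), 3))

rpc.shutdown()
```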
These functions are transparently recorded so that gradients can backpropagate through remote nodes using Distributed Autograd.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}} {"page_content": "### Distributed Autograd\nDistributed Autograd connects the autograd graph across several nodes and allows gradients to flow through during the backwards pass. Gradients are accumulated into a context (as opposed to the .grad field as with Autograd) and users must specify their model\u2019s forward pass under a with `dist_autograd.context()` manager in order to ensure that all RPC communication is recorded properly. Currently, only FAST mode is implemented (see [here](https://pytorch.org/docs/stable/rpc/distributed_autograd.html#distributed-autograd-design) for the difference between FAST and SMART modes).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}} {"page_content": "### Distributed Optimizer\nThe distributed optimizer creates RRefs to optimizers on each worker with parameters that require gradients, and then uses the RPC API to run the optimizer remotely. The user must collect all remote parameters and wrap them in an `RRef`, as this is required input to the distributed optimizer. The user must also specify the distributed autograd `context_id` so that the optimizer knows in which context to look for gradients.\n\nLearn more about distributed RPC framework APIs [here](https://pytorch.org/docs/stable/rpc.html).\n\n## New High level autograd API (Experimental)\n\nPyTorch 1.5 brings new functions including jacobian, hessian, jvp, vjp, hvp and vhp to the `torch.autograd.functional` submodule. This feature builds on the current API and allows the user to easily perform these functions.\n\nDetailed design discussion on GitHub can be found [here](https://github.com/pytorch/pytorch/issues/30632).\n\n## Python 2 no longer supported", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}} -{"page_content": "Starting PyTorch 1.5.0, we will no longer support Python 2, specifically version 2.7. Going forward support for Python will be limited to Python 3, specifically Python 3.5, 3.6, 3.7 and 3.8 (first enabled in PyTorch 1.4.0).\n\n\n*We\u2019d like to thank the entire PyTorch team and the community for all their contributions to this work.*\n\nCheers!\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}} +{"page_content": "## Python 2 no longer supported\n\nStarting PyTorch 1.5.0, we will no longer support Python 2, specifically version 2.7. 
Going forward support for Python will be limited to Python 3, specifically Python 3.5, 3.6, 3.7 and 3.8 (first enabled in PyTorch 1.4.0).\n\n\n*We\u2019d like to thank the entire PyTorch team and the community for all their contributions to this work.*\n\nCheers!\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'OpenMined and PyTorch partner to launch fellowship funding for privacy-preserving ML community'\nauthor: Andrew Trask (OpenMined/U.Oxford), Shubho Sengupta, Laurens van der Maaten, Joe Spisak\nexcerpt: Many applications of machine learning (ML) pose a range of security and privacy challenges.\n---\n\n
\n\nMany applications of machine learning (ML) pose a range of security and privacy challenges. In particular, users may not be willing or allowed to share their data, which prevents them from taking full advantage of ML platforms like PyTorch. To take the field of privacy-preserving ML (PPML) forward, OpenMined and PyTorch are announcing plans to jointly develop a combined platform to accelerate PPML research as well as new funding for fellowships.", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}} {"page_content": "There are many techniques attempting to solve the problem of privacy in ML, each at various levels of maturity. These include (1) homomorphic encryption, (2) secure multi-party computation, (3) trusted execution environments, (4) on-device computation, (5) federated learning with secure aggregation, and (6) differential privacy. Additionally, a number of open source projects implementing these techniques were created with the goal of enabling research at the intersection of privacy, security, and ML. Among them, PySyft and CrypTen have taken an \u201cML-first\u201d approach by presenting an API that is familiar to the ML community, while masking the complexities of privacy and security protocols. We are excited to announce that these two projects are now collaborating closely to build a mature PPML ecosystem around PyTorch.", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}} {"page_content": "Additionally, to bolster this ecosystem and take the field of privacy preserving ML forward, we are also calling for contributions and supporting research efforts on this combined platform by providing funding to support the OpenMined community and the researchers that contribute, build proofs of concepts and desire to be on the cutting edge of how privacy-preserving technology is applied. We will provide funding through the [RAAIS Foundation](https://www.raais.org/), a non-profit organization with a mission to advance education and research in artificial intelligence for the common good. We encourage interested parties to apply to one or more of the fellowships listed below.\n\n## Tools Powering the Future of Privacy-Preserving ML", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}} {"page_content": "The next generation of privacy-preserving open source tools enable ML researchers to easily experiment with ML models using secure computing techniques without needing to be cryptography experts. By integrating with PyTorch, PySyft and CrypTen offer familiar environments for ML developers to research and apply these techniques as part of their work.\n\n**PySyft** is a Python library for secure and private ML developed by the OpenMined community. It is a flexible, easy-to-use library that makes secure computation techniques like [multi-party computation (MPC)](https://en.wikipedia.org/wiki/Secure_multi-party_computation) and privacy-preserving techniques like [differential privacy](https://en.wikipedia.org/wiki/Differential_privacy) accessible to the ML community. 
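To give a feel for this ML-first approach, here is a minimal sketch in the spirit of the PySyft API around the time of this post (the worker name is illustrative and the exact API has evolved across releases):\n\n```python\nimport torch\nimport syft as sy\n\n# Wrap torch so tensors gain remote-execution methods (PySyft 0.2.x-era API).\nhook = sy.TorchHook(torch)\nbob = sy.VirtualWorker(hook, id='bob')\n\n# Send a tensor to a (virtual) remote worker, compute there, then retrieve the result.\nx = torch.tensor([1.0, 2.0, 3.0]).send(bob)\ny = (x + x).get()\nprint(y)  # tensor([2., 4., 6.])\n```\n\n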
It prioritizes ease of use and focuses on integrating these techniques into end-user use cases like federated learning with mobile phones and other edge devices, encrypted ML as a service, and privacy-preserving data science.", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}} {"page_content": "**CrypTen** is a framework built on PyTorch that enables private and secure ML for the PyTorch community. It is the first step along the journey towards a privacy-preserving mode in PyTorch that will make secure computing techniques accessible beyond cryptography researchers. It currently implements [secure multiparty computation](https://en.wikipedia.org/wiki/Secure_multi-party_computation) with the goal of offering other secure computing backends in the near future. Other benefits to ML researchers include:\n\n* It is **ML first** and presents secure computing techniques via a CrypTensor object that looks and feels exactly like a PyTorch Tensor. This allows the user to use automatic differentiation and neural network modules akin to those in PyTorch.\n* The framework focuses on **scalability and performance** and is built with real-world challenges in mind.", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}} {"page_content": "The focus areas for CrypTen and PySyft are naturally aligned and complement each other. The former focuses on building support for various secure and privacy preserving techniques on PyTorch through an encrypted tensor abstraction, while the latter focuses on end user use cases like deployment on edge devices and a user friendly data science platform.\n\nWorking together will enable PySyft to use CrypTen as a backend for encrypted tensors. This can lead to an increase in performance for PySyft and the adoption of CrypTen as a runtime by PySyft\u2019s userbase. In addition to this, PyTorch is also adding cryptography friendly features such as support for cryptographically secure random number generation. Over the long run, this allows each library to focus exclusively on its core competencies while enjoying the benefits of the synergistic relationship.\n\n## New Funding for OpenMined Contributors", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}} -{"page_content": "We are especially excited to announce that the PyTorch team has invested $250,000 to support OpenMined in furthering the development and proliferation of privacy-preserving ML. This gift will be facilitated via the [RAAIS Foundation](https://www.raais.org/) and will be available immediately to support paid fellowship grants for the OpenMined community.\n\n## How to get involved\n\nThanks to the support from the PyTorch team, OpenMined is able to offer three different opportunities for you to participate in the project\u2019s development. 
Each of these fellowships furthers our shared mission to lower the barrier-to-entry for privacy-preserving ML and to create a more privacy-preserving world.\n\n### Core PySyft CrypTen Integration Fellowships", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}} -{"page_content": "During these fellowships, we will integrate CrypTen as a supported backend for encrypted computation in PySyft. This will allow for the high-performance, secure multi-party computation capabilities of CrypTen to be used alongside other important tools in PySyft such as differential privacy and federated learning. For more information on the roadmap and how to apply for a paid fellowship, check out the project\u2019s [call for contributors](https://blog.openmined.org/openmined-pytorch-fellowship-crypten-project).\n\n### Federated Learning on Mobile, Web, and IoT Devices", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}} +{"page_content": "## New Funding for OpenMined Contributors\n\nWe are especially excited to announce that the PyTorch team has invested $250,000 to support OpenMined in furthering the development and proliferation of privacy-preserving ML. This gift will be facilitated via the [RAAIS Foundation](https://www.raais.org/) and will be available immediately to support paid fellowship grants for the OpenMined community.\n\n## How to get involved\n\nThanks to the support from the PyTorch team, OpenMined is able to offer three different opportunities for you to participate in the project\u2019s development. Each of these fellowships furthers our shared mission to lower the barrier-to-entry for privacy-preserving ML and to create a more privacy-preserving world.\n\n### Core PySyft CrypTen Integration Fellowships", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}} +{"page_content": "### Core PySyft CrypTen Integration Fellowships\n\nDuring these fellowships, we will integrate CrypTen as a supported backend for encrypted computation in PySyft. This will allow for the high-performance, secure multi-party computation capabilities of CrypTen to be used alongside other important tools in PySyft such as differential privacy and federated learning. For more information on the roadmap and how to apply for a paid fellowship, check out the project\u2019s [call for contributors](https://blog.openmined.org/openmined-pytorch-fellowship-crypten-project).\n\n### Federated Learning on Mobile, Web, and IoT Devices", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}} {"page_content": "During these fellowships, we will be extending PyTorch with the ability to perform federated learning across mobile, web, and IoT devices. To this end, a PyTorch front-end will be able to coordinate across federated learning backends that run in Javascript, Kotlin, Swift, and Python. Furthermore, we will also extend PySyft with the ability to coordinate these backends using peer-to-peer connections, providing low latency and the ability to run secure aggregation as a part of the protocol. 
For more information on the roadmap and how to apply for a paid fellowship, check out the project\u2019s [call for contributors](https://blog.openmined.org/announcing-the-pytorch-openmined-federated-learning-fellowships).\n\n### Development Challenges", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}} -{"page_content": "Over the coming months, we will issue regular open competitions for increasing the performance and security of the PySyft and PyGrid codebases. For performance-related challenges, contestants will compete (for a cash prize) to make a specific PySyft demo (such as federated learning) as fast as possible. For security-related challenges, contestants will compete to hack into a PyGrid server. The first to demonstrate their ability will win the cash bounty! For more information on the challenges and to sign up to receive emails when each challenge is opened, [sign up here](http://blog.openmined.org/announcing-the-openmined-pytorch-development-challenges).\n\nTo apply, select one of the above projects and identify a role that matches your strengths!\n\nCheers,\n\nAndrew, Laurens, Joe, and Shubho", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}} +{"page_content": "### Development Challenges\n\nOver the coming months, we will issue regular open competitions for increasing the performance and security of the PySyft and PyGrid codebases. For performance-related challenges, contestants will compete (for a cash prize) to make a specific PySyft demo (such as federated learning) as fast as possible. For security-related challenges, contestants will compete to hack into a PyGrid server. The first to demonstrate their ability will win the cash bounty! For more information on the challenges and to sign up to receive emails when each challenge is opened, [sign up here](http://blog.openmined.org/announcing-the-openmined-pytorch-development-challenges).\n\nTo apply, select one of the above projects and identify a role that matches your strengths!\n\nCheers,\n\nAndrew, Laurens, Joe, and Shubho", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'How Computational Graphs are Constructed in PyTorch'\nauthor: Preferred Networks\nfeatured-img: 'assets/images/augmented_computational_graph.png'\n---\n\nIn the previous [post](https://pytorch.org/blog/overview-of-pytorch-autograd-engine/) we went over the theoretical foundations of automatic differentiation and reviewed the implementation in PyTorch. In this post, we will be showing the parts of PyTorch involved in creating the graph and executing it. 
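As a concrete anchor for the walk-through, here is the kind of user-level snippet that exercises this machinery (a minimal sketch, not code from the post itself):\n\n```python\nimport torch\n\n# Leaf tensors that require gradients are the inputs of the graph.\nx = torch.tensor([0.5, 0.75], requires_grad=True)\n\n# Every differentiable operation appends a node to the graph, reachable via grad_fn.\ny = torch.log(x[0] * x[1]) * torch.sin(x[1])\nprint(y.grad_fn)  # e.g. <MulBackward0 object at 0x...>\n\n# Executing the graph backwards accumulates gradients into x.grad.\ny.backward()\nprint(x.grad)\n```\n\n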
In order to understand the following contents, please read @ezyang\u2019s wonderful [blog post](http://blog.ezyang.com/2019/05/pytorch-internals/) about PyTorch internals.\n\n# Autograd components\n\nFirst of all, let\u2019s look at where the different components of autograd live:", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}} {"page_content": "[tools/autograd](https://github.com/pytorch/pytorch/tree/release/1.9/tools/autograd): Here we can find the definition of the derivatives as we saw in the previous post [derivatives.yaml](https://github.com/pytorch/pytorch/blob/release/1.9/tools/autograd/derivatives.yaml), several python scripts and a folder called [templates](https://github.com/pytorch/pytorch/tree/release/1.9/tools/autograd/templates). These scripts and the templates are used at building time to generate the C++ code for the derivatives as specified in the yaml file. Also, the scripts here generate wrappers for the regular ATen functions so that the computational graph can be constructed.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}} {"page_content": "[torch/autograd](https://github.com/pytorch/pytorch/tree/release/1.9/torch/autograd): This folder is where the autograd components that can be used directly from python are located. In [function.py](https://github.com/pytorch/pytorch/blob/release/1.9/torch/autograd/function.py) we find the actual definition of `torch.autograd.Function`, a class used by users to write their own differentiable functions in python as per the documentation. [functional.py](https://github.com/pytorch/pytorch/blob/release/1.9/torch/autograd/functional.py) holds components for functionally computing the jacobian vector product, hessian, and other gradient related computations of a given function.\nThe rest of the files have additional components such as gradient checkers, anomaly detection, and the autograd profiler.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}} @@ -663,7 +665,7 @@ {"page_content": "
\n\n### Visualizations\n\nAx provides a number of visualizations that make it possible to analyze and understand the results of an experiment. Here, we will focus on the performance of the Gaussian process models that model the unknown objectives, which are used to help us discover promising configurations faster. Ax makes it easy to better understand how accurate these models are and how they perform on unseen data via leave-one-out cross-validation. In the figures below, we see that the model fits look quite good - predictions are close to the actual outcomes, and predictive 95% confidence intervals cover the actual outcomes well. Additionally, we observe that the model size `(num_params)` metric is much easier to model than the validation accuracy `(val_acc)` metric.\n\n\n\n", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}} {"page_content": "\n\n
\n\n## Takeaways\n\n- We showed how to run a fully automated multi-objective Neural Architecture Search using Ax.\n\n- Using the Ax Scheduler, we were able to run the optimization automatically in a fully asynchronous fashion - this can be done locally (as done in the tutorial) or by deploying trials remotely to a cluster (simply by changing the TorchX scheduler configuration).\n\n- The state-of-the-art multi-objective Bayesian optimization algorithms available in Ax allowed us to efficiently explore the tradeoffs between validation accuracy and model size.\n\n## Advanced Functionality\n\nAx has a number of other advanced capabilities that we did not discuss in our tutorial. Among these are the following:\n\n### Early Stopping", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}} {"page_content": "### Early Stopping\n\nWhen evaluating a new candidate configuration, partial learning curves are typically available while the NN training job is running. We can use the information contained in the partial curves to identify under-performing trials to stop early in order to free up computational resources for more promising candidates. While not demonstrated in the above tutorial, Ax supports early stopping out-of-the-box - see our [early stopping tutorial](https://ax.dev/versions/latest/tutorials/early_stopping/early_stopping.html) for more details.\n\n### High-dimensional search spaces", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}} -{"page_content": "In our tutorial, we used Bayesian optimization with a standard Gaussian process in order to keep the runtime low. However, these models typically scale to only about 10-20 tunable parameters. Our new SAASBO method ([paper](https://proceedings.mlr.press/v161/eriksson21a/eriksson21a.pdf), [Ax tutorial](https://ax.dev/tutorials/saasbo.html), [BoTorch tutorial](https://botorch.org/tutorials/saasbo)) is very sample-efficient and enables tuning hundreds of parameters. SAASBO can easily be enabled by passing `use_saasbo=True` to `choose_generation_strategy`.\n\n## Acknowledgements\n\nWe thank the TorchX team (in particular Kiuk Chung and Tristan Rice) for their help with integrating TorchX with Ax, and the Adaptive Experimentation team @ Meta for their contributions to Ax and BoTorch.\n\n## References", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}} +{"page_content": "### High-dimensional search spaces\n\nIn our tutorial, we used Bayesian optimization with a standard Gaussian process in order to keep the runtime low. However, these models typically scale to only about 10-20 tunable parameters. Our new SAASBO method ([paper](https://proceedings.mlr.press/v161/eriksson21a/eriksson21a.pdf), [Ax tutorial](https://ax.dev/tutorials/saasbo.html), [BoTorch tutorial](https://botorch.org/tutorials/saasbo)) is very sample-efficient and enables tuning hundreds of parameters. 
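In practice this is a one-flag change; a minimal sketch using Ax's core and dispatch APIs (the toy search space below is ours, not the tutorial's):\n\n```python\nfrom ax.core.parameter import ParameterType, RangeParameter\nfrom ax.core.search_space import SearchSpace\nfrom ax.modelbridge.dispatch_utils import choose_generation_strategy\n\n# A small illustrative search space; a real NAS space would have many more parameters.\nsearch_space = SearchSpace(parameters=[\n    RangeParameter(name='lr', parameter_type=ParameterType.FLOAT, lower=1e-4, upper=1e-1, log_scale=True),\n    RangeParameter(name='hidden_size', parameter_type=ParameterType.INT, lower=32, upper=512),\n])\n\n# Opt into the sparse axis-aligned subspace (SAAS) prior for the surrogate model.\ngeneration_strategy = choose_generation_strategy(search_space=search_space, use_saasbo=True)\n```\n\n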
SAASBO can easily be enabled by passing `use_saasbo=True` to `choose_generation_strategy`.\n\n## Acknowledgements\n\nWe thank the TorchX team (in particular Kiuk Chung and Tristan Rice) for their help with integrating TorchX with Ax, and the Adaptive Experimentation team @ Meta for their contributions to Ax and BoTorch.\n\n## References", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}} {"page_content": "## References\n\n[D. Eriksson, P. Chuang, S. Daulton, M. Balandat. Optimizing model accuracy and latency using Bayesian multi-objective neural architecture search. Meta Research blog, July 2021.](https://research.facebook.com/blog/2021/07/optimizing-model-accuracy-and-latency-using-bayesian-multi-objective-neural-architecture-search/)", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch Adds New Ecosystem Projects for Encrypted AI and Quantum Computing, Expands PyTorch Hub'\nauthor: Team PyTorch\n---\n\nThe PyTorch ecosystem includes projects, tools, models and libraries from a broad community of researchers in academia and industry, application developers, and ML engineers. The goal of this ecosystem is to support, accelerate, and aid in your exploration with PyTorch and help you push the state of the art, no matter what field you are exploring. Similarly, we are expanding the recently launched PyTorch Hub to further help you discover and reproduce the latest research.\n\nIn this post, we\u2019ll highlight some of the projects that have been added to the PyTorch ecosystem this year and provide some context on the criteria we use to evaluate community projects. We\u2019ll also provide an update on the fast-growing PyTorch Hub and share details on our upcoming PyTorch Summer Hackathon.", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}} {"page_content": "
\n\n## Recently added ecosystem projects\n\nFrom private AI to quantum computing, we\u2019ve seen the community continue to expand into new and interesting areas. The latest projects include:\n\n- [Advertorch](https://github.com/BorealisAI/advertorch): A Python toolbox for adversarial robustness research. The primary functionalities are implemented in PyTorch. Specifically, AdverTorch contains modules for generating adversarial perturbations and defending against adversarial examples, as well as scripts for adversarial training.\n\n- [botorch](https://botorch.org/): A modular and easily extensible interface for composing Bayesian optimization primitives, including probabilistic models, acquisition functions, and optimizers.\n\n- [Skorch](https://github.com/skorch-dev/skorch): A high-level library for PyTorch that provides full scikit-learn compatibility.", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}} @@ -672,8 +674,8 @@ {"page_content": "If you would like to have your project included in the PyTorch ecosystem and featured on [pytorch.org/ecosystem](http://pytorch.org/ecosystem), please complete the form [here](https://pytorch.org/ecosystem/join). If you've previously submitted a project for consideration and haven't heard back, we promise to get back to you as soon as we can - we've received a lot of submissions!\n\n## PyTorch Hub for reproducible research | New models", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}} {"page_content": "Since [launching](https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/) the PyTorch Hub in beta, we\u2019ve received a lot of interest from the community including the contribution of many new models. Some of the latest include [U-Net for Brain MRI](https://pytorch.org/hub/mateuszbuda_brain-segmentation-pytorch_unet/) contributed by researchers at Duke University, [Single Shot Detection](https://pytorch.org/hub/nvidia_deeplearningexamples_ssd/) from NVIDIA and [Transformer-XL](https://pytorch.org/hub/huggingface_pytorch-pretrained-bert_transformerXL/) from HuggingFace.\n\nWe\u2019ve seen organic integration of the PyTorch Hub by folks like [paperswithcode](https://paperswithcode.com/), making it even easier for you to try out the state of the art in AI research. In addition, companies like [Seldon](https://github.com/axsaucedo/seldon-core/tree/pytorch_hub/examples/models/pytorchhub) provide production-level support for PyTorch Hub models on top of Kubernetes.", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}} {"page_content": "### What are the benefits of contributing a model in the PyTorch Hub?\n\n- *Compatibility:* PyTorch Hub models are prioritized first for testing by the TorchScript and Cloud TPU teams, and used as baselines for researchers across a number of fields.\n\n- *Visibility:* Models in the Hub will be promoted on [pytorch.org](http://pytorch.org/) as well as on [paperswithcode](https://paperswithcode.com/).\n\n- *Ease of testing and reproducibility:* Each model comes with code, clear preprocessing requirements, and methods/dependencies to run. 
There is also tight integration with [Google Colab](https://colab.research.google.com/github/pytorch/pytorch.github.io/blob/master/assets/hub/facebookresearch_WSL-Images_resnext.ipynb#scrollTo=LM_l7vXJvnDM), making it a true single click to get started.\n\n### PyTorch Hub contributions welcome!", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}} -{"page_content": "We are actively looking to grow the PyTorch Hub and welcome contributions. You don\u2019t need to be an original paper author to contribute, and we\u2019d love to see the number of domains and fields broaden. So what types of contributions are we looking for?\n\n- Artifacts of a published or an arXiv paper (or something of a similar nature that serves a different audience \u2014 such as ULMFit) that a large audience would need.\n\n AND\n\n- Reproduces the published results (or better)\n\nOverall these models are aimed at researchers either trying to reproduce a baseline, or trying to build downstream research on top of the model (such as feature-extraction or fine-tuning) as well as researchers looking for a demo of the paper for subjective evaluation. Please keep this audience in mind when contributing.\n\nIf you are short on inspiration or would just like to find out what the SOTA is an any given field or domain, checkout the Paperswithcode [state-of-the-art gallery](https://paperswithcode.com/sota).\n\n## PyTorch Summer Hackathon", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}} -{"page_content": "We\u2019ll be hosting the first PyTorch Summer Hackathon next month. We invite you to apply to participate in the in-person hackathon on August 8th to 9th at Facebook's Menlo Park campus. We'll be bringing the community together to work on innovative ML projects that can solve a broad range of complex challenges.\n\nApplications will be reviewed and accepted on a rolling basis until spaces are filled. For those who cannot join this Hackathon in person, we\u2019ll be following up soon with other ways to participate.\n\nPlease visit [this link to apply](https://www.eventbrite.com/e/pytorch-summer-hackathon-in-menlo-park-registration-63756668913).\n\nThank you for being part of the PyTorch community!\n\n-Team PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}} +{"page_content": "### PyTorch Hub contributions welcome!\n\nWe are actively looking to grow the PyTorch Hub and welcome contributions. You don\u2019t need to be an original paper author to contribute, and we\u2019d love to see the number of domains and fields broaden. So what types of contributions are we looking for?\n\n- Artifacts of a published or an arXiv paper (or something of a similar nature that serves a different audience \u2014 such as ULMFit) that a large audience would need.\n\n AND\n\n- Reproduces the published results (or better)\n\nOverall these models are aimed at researchers either trying to reproduce a baseline, or trying to build downstream research on top of the model (such as feature-extraction or fine-tuning) as well as researchers looking for a demo of the paper for subjective evaluation. 
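For instance, such a user will often pull a Hub model with a single call and reuse it as a baseline or a feature extractor (a sketch; the repo and entrypoint names vary per model):\n\n```python\nimport torch\n\n# Load a pretrained model straight from the Hub.\nmodel = torch.hub.load('pytorch/vision', 'resnet18', pretrained=True)\nmodel.eval()\n\n# Reuse everything except the classifier head as a feature extractor.\nbackbone = torch.nn.Sequential(*list(model.children())[:-1])\nwith torch.no_grad():\n    features = backbone(torch.randn(1, 3, 224, 224)).flatten(1)\nprint(features.shape)  # torch.Size([1, 512])\n```\n\n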
Please keep this audience in mind when contributing.", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}} +{"page_content": "If you are short on inspiration or would just like to find out what the SOTA is an any given field or domain, checkout the Paperswithcode [state-of-the-art gallery](https://paperswithcode.com/sota).\n\n## PyTorch Summer Hackathon\n\nWe\u2019ll be hosting the first PyTorch Summer Hackathon next month. We invite you to apply to participate in the in-person hackathon on August 8th to 9th at Facebook's Menlo Park campus. We'll be bringing the community together to work on innovative ML projects that can solve a broad range of complex challenges.\n\nApplications will be reviewed and accepted on a rolling basis until spaces are filled. For those who cannot join this Hackathon in person, we\u2019ll be following up soon with other ways to participate.\n\nPlease visit [this link to apply](https://www.eventbrite.com/e/pytorch-summer-hackathon-in-menlo-park-registration-63756668913).\n\nThank you for being part of the PyTorch community!\n\n-Team PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models'\nauthor: Chaoyang He, Shen Li, Mahdi Soltanolkotabi, and Salman Avestimehr\nfeatured-img: 'assets/images/pipetransformer_overview.png'\n---\n\nIn this blog post, we describe the first peer-reviewed research paper that explores accelerating the hybrid of PyTorch DDP (`torch.nn.parallel.DistributedDataParallel`) [1] and Pipeline (`torch.distributed.pipeline`) - [PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models](http://proceedings.mlr.press/v139/he21a.html) (Transformers such as BERT [2] and ViT [3]), published at ICML 2021.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}} {"page_content": "PipeTransformer leverages automated elastic pipelining for efficient distributed training of Transformer models. In PipeTransformer, we designed an adaptive on-the-fly freeze algorithm that can identify and freeze some layers gradually during training and an elastic pipelining system that can dynamically allocate resources to train the remaining active layers. More specifically, PipeTransformer automatically excludes frozen layers from the pipeline, packs active layers into fewer GPUs, and forks more replicas to increase data-parallel width. We evaluate PipeTransformer using Vision Transformer (ViT) on ImageNet and BERT on SQuAD and GLUE datasets. Our results show that compared to the state-of-the-art baseline, PipeTransformer attains up to 2.83-fold speedup without losing accuracy. 
We also provide various performance analyses for a more comprehensive understanding of our algorithmic and system-wise design.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}} {"page_content": "Next, we will introduce the background, motivation, our idea, design, and how we implement the algorithm and system with PyTorch Distributed APIs.\n\n* Paper: [http://proceedings.mlr.press/v139/he21a.html](http://proceedings.mlr.press/v139/he21a.html)\n* Source Code: [https://DistML.ai](https://distml.ai).\n* Slides: [https://docs.google.com/presentation/d/1t6HWL33KIQo2as0nSHeBpXYtTBcy0nXCoLiKd0EashY/edit?usp=sharing](https://docs.google.com/presentation/d/1t6HWL33KIQo2as0nSHeBpXYtTBcy0nXCoLiKd0EashY/edit?usp=sharing)\n\n# Introduction\n
\nFigure 1: The Parameter Number of Transformer Models Increases Dramatically.\n
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}} @@ -692,12 +694,12 @@ {"page_content": "Under these settings, our goal is to accelerate training by leveraging freeze training, which does not require all layers to be trained throughout the duration of the training. Additionally, it may help save computation, communication, memory cost, and potentially prevent overfitting by consecutively freezing layers. However, these benefits can only be achieved by overcoming the four challenges of designing an adaptive freezing algorithm, dynamical pipeline re-partitioning, efficient resource reallocation, and cross-process caching, as discussed in the introduction.\n\n\n
\nFigure 5. Overview of PipeTransformer Training System\n
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}} {"page_content": "PipeTransformer co-designs an on-the-fly freeze algorithm and an automated elastic pipelining training system that can dynamically transform the scope of the pipelined model and the number of pipeline replicas. The overall system architecture is illustrated in Figure 5. To support PipeTransformer\u2019s elastic pipelining, we maintain a customized version of PyTorch Pipeline. For data parallelism, we use PyTorch DDP as a baseline. Other libraries are standard mechanisms of an operating system (e.g.,multi-processing) and thus avoid specialized software or hardware customization requirements. To ensure the generality of our framework, we have decoupled the training system into four core components: freeze algorithm, AutoPipe, AutoDP, and AutoCache. The freeze algorithm (grey) samples indicators from the training loop and makes layer-wise freezing decisions, which will be shared with AutoPipe (green). AutoPipe is an elastic pipeline module that speeds up training by excluding frozen layers from the pipeline and packing the active layers into fewer GPUs (pink), leading to both fewer cross-GPU communications and smaller pipeline bubbles. Subsequently, AutoPipe passes pipeline length information to AutoDP (purple), which then spawns more pipeline replicas to increase data-parallel width, if possible. The illustration also includes an example in which AutoDP introduces a new replica (purple). AutoCache (orange edges) is a cross-pipeline caching module, as illustrated by connections between pipelines. The source code architecture is aligned with Figure 5 for readability and generality.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}} {"page_content": "# Implementation Using PyTorch APIs\n\nAs can be seen from Figure 5, PipeTransformers contain four components: Freeze Algorithm, AutoPipe, AutoDP, and AutoCache. Among them, AutoPipe and AutoDP relies on PyTorch DDP (`torch.nn.parallel.DistributedDataParallel`) [1] and Pipeline (`torch.distributed.pipeline`), respectively. In this blog, we only highlight the key implementation details of AutoPipe and AutoDP. For details of Freeze Algorithm and AutoCache, please refer to our paper.\n\n## AutoPipe: Elastic Pipelining\n\nAutoPipe can accelerate training by excluding frozen layers from the pipeline and packing the active layers into fewer GPUs. This section elaborates on the key components of AutoPipe that dynamically 1) partition pipelines, 2) minimize the number of pipeline devices, and 3) optimize mini-batch chunk size accordingly.\n\n### Basic Usage of PyTorch Pipeline", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}} -{"page_content": "Before diving into details of AutoPipe, let us warm up the basic usage of PyTorch Pipeline (`torch.distributed.pipeline.sync.Pipe`, see [this tutorial](https://pytorch.org/docs/stable/pipeline.html)). 
More specifically, we present a simple example to understand the design of Pipeline in practice:\n\n```python\n# Step 1: build a model including two linear layers\nfc1 = nn.Linear(16, 8).cuda(0)\nfc2 = nn.Linear(8, 4).cuda(1)\n\n# Step 2: wrap the two layers with nn.Sequential\nmodel = nn.Sequential(fc1, fc2)\n\n# Step 3: build Pipe (torch.distributed.pipeline.sync.Pipe)\nmodel = Pipe(model, chunks=8)\n\n# do training/inference\ninput = torch.rand(16, 16).cuda(0)\noutput_rref = model(input)\n```", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
{"page_content": "In this basic example, we can see that before initializing `Pipe`, we need to partition the model `nn.Sequential` into multiple GPU devices and set the optimal chunk number (`chunks`). Balancing computation time across partitions is critical to pipeline training speed, as skewed workload distributions across stages can lead to stragglers and force devices with lighter workloads to wait. The chunk number may also have a non-trivial influence on the throughput of the pipeline.\n\n\n### Balanced Pipeline Partitioning\n\nIn a dynamic training system such as PipeTransformer, maintaining optimally balanced partitions in terms of parameter numbers does not guarantee the fastest training speed because other factors also play a crucial role:\n\n
\nFigure 6. The partition boundary is in the middle of a skip connection\n
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}} {"page_content": "1. Cross-partition communication overhead. Placing a partition boundary in the middle of a skip connection leads to additional communications since tensors in the skip connection must now be copied to a different GPU. For example, with BERT partitions in Figure 6, partition must take intermediate outputs from both partition and partition . In contrast, if the boundary is placed after the addition layer, the communication overhead between partition and is visibly smaller. Our measurements show that having cross-device communication is more expensive than having slightly imbalanced partitions (see the Appendix in our paper). Therefore, we do not consider breaking skip connections (highlighted separately as an entire attention layer and MLP layer in green color at line 7 in Algorithm 1.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}} {"page_content": "2. Frozen layer memory footprint. During training, AutoPipe must recompute partition boundaries several times to balance two distinct types of layers: frozen layers and active layers. The frozen layer's memory cost is a fraction of that inactive layer, given that the frozen layer does not need backward activation maps, optimizer states, and gradients. Instead of launching intrusive profilers to obtain thorough metrics on memory and computational cost, we define a tunable cost factor to estimate the memory footprint ratio of a frozen layer over the same active layer. Based on empirical measurements in our experimental hardware, we set it to .\n\n\n\n
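In place of the pseudo-code figure, here is a rough illustration (ours, not the paper's Algorithm 1) of greedy, parameter-size-based balancing in which frozen layers are discounted by a tunable cost factor:\n\n```python\ndef load_balance_sketch(layers, num_frozen, num_partitions, frozen_cost_factor=0.5):\n    # frozen_cost_factor is an arbitrary placeholder here, not the value used in the paper.\n    costs = [\n        sum(p.numel() for p in layer.parameters()) * (frozen_cost_factor if i < num_frozen else 1.0)\n        for i, layer in enumerate(layers)\n    ]\n    target = sum(costs) / num_partitions\n    partitions, current, current_cost = [], [], 0.0\n    for layer, cost in zip(layers, costs):\n        # Close the current partition once it reaches the target cost, keeping layers in order.\n        if current and current_cost + cost > target and len(partitions) < num_partitions - 1:\n            partitions.append(current)\n            current, current_cost = [], 0.0\n        current.append(layer)\n        current_cost += cost\n    partitions.append(current)\n    return partitions\n```\n\n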
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}} {"page_content": "Based on the above two considerations, AutoPipe balances pipeline partitions based on parameter sizes. More specifically, AutoPipe uses a greedy algorithm to allocate all frozen and active layers to evenly distribute partitioned sublayers into GPU devices. Pseudocode is described as the `load\\_balance()` function in Algorithm 1. The frozen layers are extracted from the original model and kept in a separate model instance in the first device of a pipeline.\n\nNote that the partition algorithm employed in this paper is not the only option; PipeTransformer is modularized to work with any alternatives.\n\n\n### Pipeline Compression", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}} -{"page_content": "Pipeline compression helps to free up GPUs to accommodate more pipeline replicas and reduce the number of cross-device communications between partitions. To determine the timing of compression, we can estimate the memory cost of the largest partition after compression, and then compare it with that of the largest partition of a pipeline at timestep . To avoid extensive memory profiling, the compression algorithm uses the parameter size as a proxy for the training memory footprint. Based on this simplification, the criterion of pipeline compression is as follows:\n\n
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}} +{"page_content": "### Pipeline Compression\n\nPipeline compression helps to free up GPUs to accommodate more pipeline replicas and reduce the number of cross-device communications between partitions. To determine the timing of compression, we can estimate the memory cost of the largest partition after compression, and then compare it with that of the largest partition of a pipeline at timestep . To avoid extensive memory profiling, the compression algorithm uses the parameter size as a proxy for the training memory footprint. Based on this simplification, the criterion of pipeline compression is as follows:\n\n
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}} {"page_content": "Once the freeze notification is received, AutoPipe will always attempt to divide the pipeline length by 2 (e.g., from 8 to 4, then 2). By using as the input, the compression algorithm can verify if the result satisfies the criterion in Equation (1). Pseudocode is shown in lines 25-33 in Algorithm 1. Note that this compression makes the acceleration ratio exponentially increase during training, meaning that if a GPU server has a larger number of GPUs (e.g., more than 8), the acceleration ratio will be further amplified.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}} {"page_content": "
\nFigure 7. Pipeline Bubble: F_{d,b}, B_{d,b}, and U_d denote the forward pass, backward pass, and optimizer update of micro-batch b on device d, respectively. The total bubble size in each iteration is (K - 1) times the per-micro-batch forward and backward cost, where K is the pipeline length.\n
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
{"page_content": "Additionally, such a technique can also speed up training by shrinking the size of pipeline bubbles. To explain bubble sizes in a pipeline, Figure 7 depicts how 4 micro-batches run through a 4-device pipeline (K = 4). In general, the total bubble size is (K - 1) times the per-micro-batch forward and backward cost. Therefore, it is clear that shorter pipelines have smaller bubble sizes.\n\n### Dynamic Number of Micro-Batches", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
{"page_content": "We summarize the overall experimental results in the table above. Note that the speedup we report is based on a conservative α value that can obtain comparable or even higher accuracy. A more aggressive α can obtain a higher speedup but may lead to a slight loss in accuracy. Note that the model size of BERT (24 layers) is larger than ViT-B/16 (12 layers), thus it takes more time for communication.\n\n## Performance Analysis\n\n### Speedup Breakdown\n\nThis section presents evaluation results and analyzes the performance of different components in PipeTransformer. More experimental results can be found in the Appendix.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
{"page_content": "
\nFigure 9. Speedup Breakdown (ViT on ImageNet)\n
\n\nTo understand the efficacy of all four components and their impact on training speed, we experimented with different combinations and used their training sample throughput (samples/second) and speedup ratio as metrics. Results are illustrated in Figure 9. Key takeaways from these experimental results are:\n\n1. the main speedup is the result of elastic pipelining, which is achieved through the joint use of AutoPipe and AutoDP;\n2. AutoCache's contribution is amplified by AutoDP;\n3. freeze training alone without system-wise adjustment even downgrades the training speed.\n\n### Tuning α in the Freezing Algorithm", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
{"page_content": "
\nFigure 10. Tuning α in the Freezing Algorithm\n
\n\nWe ran experiments to show how the value of α in the freeze algorithm influences training speed. The results clearly demonstrate that a larger α (excessive freezing) leads to a greater speedup but suffers from a slight performance degradation. In the case shown in Figure 10, freeze training outperforms normal training and obtains a sizeable speedup. We provide more results in the Appendix.\n\n### Optimal Chunks in the elastic pipeline", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "
\nFigure 11. Optimal chunk number in the elastic pipeline\n
\n\nWe profiled the optimal number of micro-batches for different pipeline lengths . Results are summarized in Figure 11. As we can see, different values lead to different optimal , and the throughput gaps across different M values are large (as shown when ), which confirms the necessity of an anterior profiler in elastic pipelining.\n\n### Understanding the Timing of Caching\n\n
\nFigure 12. The timing of caching\n
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}} -{"page_content": "To evaluate AutoCache, we compared the sample throughput of training that activates AutoCache from epoch (blue) with the training job without AutoCache (red). Figure 12 shows that enabling caching too early can slow down training, as caching can be more expensive than the forward propagation on a small number of frozen layers. After more layers are frozen, caching activations clearly outperform the corresponding forward propagation. As a result, AutoCache uses a profiler to determine the proper timing to enable caching. In our system, for ViT (12 layers), caching starts from 3 frozen layers, while for BERT (24 layers), caching starts from 5 frozen layers.\n\nFor more detailed experimental analysis, please refer to our paper.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}} +{"page_content": "### Optimal Chunks in the elastic pipeline\n\n
\nFigure 11. Optimal chunk number in the elastic pipeline\n
\n\nWe profiled the optimal number of micro-batches M for different pipeline lengths K. Results are summarized in Figure 11. As we can see, different K values lead to different optimal M, and the throughput gaps across different M values are large, which confirms the necessity of an anterior profiler in elastic pipelining.\n\n### Understanding the Timing of Caching", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "### Understanding the Timing of Caching\n\n
\nFigure 12. The timing of caching\n
\n\nTo evaluate AutoCache, we compared the sample throughput of training that activates AutoCache from epoch (blue) with the training job without AutoCache (red). Figure 12 shows that enabling caching too early can slow down training, as caching can be more expensive than the forward propagation on a small number of frozen layers. After more layers are frozen, caching activations clearly outperform the corresponding forward propagation. As a result, AutoCache uses a profiler to determine the proper timing to enable caching. In our system, for ViT (12 layers), caching starts from 3 frozen layers, while for BERT (24 layers), caching starts from 5 frozen layers.\n\nFor more detailed experimental analysis, please refer to our paper.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}} {"page_content": "# Summarization\nThis blog introduces PipeTransformer, a holistic solution that combines elastic pipeline-parallel and data-parallel for distributed training using PyTorch Distributed APIs. More specifically, PipeTransformer incrementally freezes layers in the pipeline, packs remaining active layers into fewer GPUs, and forks more pipeline replicas to increase the data-parallel width. Evaluations on ViT and BERT models show that compared to the state-of-the-art baseline, PipeTransformer attains up to 2.83\u00d7 speedups without accuracy loss.\n\n\n# Reference\n\n[1] Li, S., Zhao, Y., Varma, R., Salpekar, O., Noordhuis, P., Li,T., Paszke, A., Smith, J., Vaughan, B., Damania, P., et al. Pytorch Distributed: Experiences on Accelerating Dataparallel Training. Proceedings of the VLDB Endowment,13(12), 2020\n\n[2] Devlin, J., Chang, M. W., Lee, K., and Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL-HLT, 2019", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}} {"page_content": "[3] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al. An image is Worth 16x16 words: Transformers for Image Recognition at Scale.\n\n[4] Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. Language Models are Few-shot Learners.\n\n[5] Lepikhin, D., Lee, H., Xu, Y., Chen, D., Firat, O., Huang, Y., Krikun, M., Shazeer, N., and Chen, Z. Gshard: Scaling Giant Models with Conditional Computation and Automatic Sharding.\n\n[6] Li, M., Andersen, D. G., Park, J. W., Smola, A. J., Ahmed, A., Josifovski, V., Long, J., Shekita, E. J., and Su, B. Y. Scaling Distributed Machine Learning with the Parameter Server. In 11th {USENIX} Symposium on Operating Systems Design and Implementation ({OSDI} 14), pp. 583\u2013598, 2014.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}} {"page_content": "[7] Jiang, Y., Zhu, Y., Lan, C., Yi, B., Cui, Y., and Guo, C. A Unified Architecture for Accelerating Distributed DNN Training in Heterogeneous GPU/CPU Clusters. In 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20), pp. 463\u2013479. USENIX Association, November 2020. ISBN 978-1-939133-19- 9.\n\n[8] Kim, S., Yu, G. I., Park, H., Cho, S., Jeong, E., Ha, H., Lee, S., Jeong, J. S., and Chun, B. G. Parallax: Sparsity-aware Data Parallel Training of Deep Neural Networks. 
In Proceedings of the Fourteenth EuroSys Conference 2019, pp. 1\u201315, 2019.\n\n[9] Kim, C., Lee, H., Jeong, M., Baek, W., Yoon, B., Kim, I., Lim, S., and Kim, S. TorchGPipe: On-the-fly Pipeline Parallelism for Training Giant Models.\n\n[10] Huang, Y., Cheng, Y., Bapna, A., Firat, O., Chen, M. X., Chen, D., Lee, H., Ngiam, J., Le, Q. V., Wu, Y., et al. Gpipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}} @@ -723,10 +725,10 @@ {"page_content": "[14] Shazeer, N., Cheng, Y., Parmar, N., Tran, D., Vaswani, A., Koanantakool, P., Hawkins, P., Lee, H., Hong, M., Young, C., Sepassi, R., and Hechtman, B. Mesh-Tensorflow: Deep Learning for Supercomputers. In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 31, pp. 10414\u201310423. Curran Associates, Inc., 2018.\n\n[15] Shoeybi, M., Patwary, M., Puri, R., LeGresley, P., Casper, J., and Catanzaro, B. Megatron-LM: Training Multi-billion Parameter Language Models using Model Parallelism.\n\n[16] Rajbhandari, S., Rasley, J., Ruwase, O., and He, Y. ZERO: Memory Optimization towards Training a Trillion Parameter Models.\n\n[17] Raghu, M., Gilmer, J., Yosinski, J., and Sohl Dickstein, J. Svcca: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability. In NIPS, 2017.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}} {"page_content": "[18] Morcos, A., Raghu, M., and Bengio, S. Insights on Representational Similarity in Neural Networks with Canonical Correlation. In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 31, pp. 5732\u20135741. Curran Associates, Inc., 2018.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'Model Serving in PyTorch'\nauthor: Jeff Smith\nredirect_from: /2019/05/08/model-serving-in-pyorch.html\n---\n\nPyTorch has seen a lot of adoption in research, but people can get confused about how well PyTorch models can be taken into production. This blog post is meant to clear up any confusion people might have about the road to production in PyTorch.\nUsually when people talk about taking a model \u201cto production,\u201d they usually mean performing **inference**, sometimes called model evaluation or prediction or serving. At the level of a function call, in PyTorch, inference looks something like this:\n\n* In Python\n * `module(input)`\n* In traced modules\n * `module(input)`\n* In C++\n * `at::Tensor output = module->forward(inputs).toTensor();`\n\nSince we at Facebook perform inference operations using PyTorch hundreds of trillions of times per day, we've done a lot to make sure that inference runs as efficiently as possible.\n\n## Serving Strategies", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}} -{"page_content": "That zoomed-in view of how you use models in inference isn't usually the whole story, though. In a real world machine learning system, you often need to do more than just run a single inference operation in the REPL or Jupyter notebook. 
Instead, you usually need to integrate your model into a larger application in some way. Depending on what you need to do, you can usually take one of the following approaches.\n\n### Direct embedding\n\nIn application settings like mobile, we often just directly call the model as part of a larger program. This isn't just for apps; usually this is how robotics and dedicated devices work as well. At a code-level, the call to the model is exactly the same as what is shown above in the section about inference shown above. A key concern is often that a Python interpreter is not present in such environments, which is why PyTorch allows you to call your models from C++ and ship a model without the need for a Python runtime.\n\n### Model microservices", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}} -{"page_content": "If you're using your model in a server side context and you're managing multiple models, you might choose to treat each individual model (or each individual model version) as a separate service, usually using some sort of packaging mechanism like a Docker container. Then that service is often made network accessible via some sort of service, either using JSON over HTTP or an RPC technology like gRPC. The key characteristic of this approach is that you're defining a service with a single endpoint that just calls your model. Then you do do all of your model management (promotion, rollback, etc.) via whatever system you already use to manage your services (e.g. kubernetes, ECS).\n\n### Model servers", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}} +{"page_content": "## Serving Strategies\n\nThat zoomed-in view of how you use models in inference isn't usually the whole story, though. In a real world machine learning system, you often need to do more than just run a single inference operation in the REPL or Jupyter notebook. Instead, you usually need to integrate your model into a larger application in some way. Depending on what you need to do, you can usually take one of the following approaches.\n\n### Direct embedding\n\nIn application settings like mobile, we often just directly call the model as part of a larger program. This isn't just for apps; usually this is how robotics and dedicated devices work as well. At a code-level, the call to the model is exactly the same as what is shown above in the section about inference shown above. A key concern is often that a Python interpreter is not present in such environments, which is why PyTorch allows you to call your models from C++ and ship a model without the need for a Python runtime.\n\n### Model microservices", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}} +{"page_content": "### Model microservices\n\nIf you're using your model in a server side context and you're managing multiple models, you might choose to treat each individual model (or each individual model version) as a separate service, usually using some sort of packaging mechanism like a Docker container. Then that service is often made network accessible via some sort of service, either using JSON over HTTP or an RPC technology like gRPC. The key characteristic of this approach is that you're defining a service with a single endpoint that just calls your model. Then you do do all of your model management (promotion, rollback, etc.) via whatever system you already use to manage your services (e.g. 
kubernetes, ECS).\n\n### Model servers", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}} {"page_content": "### Model servers\n\nAn additional possible solution is to use a model server. This is an application built to manage and serve models. It allows you to upload multiple models and get distinct prediction endpoints for each of them. Typically such systems include a number of other features to help solve more of the whole problem of managing and serving models. This can include things like metrics, visualization, data pre-processing, and more. Even something as simple as having a system for automatically versioning models can make building important features like model rollbacks much easier.\n\n### Evolving Patterns", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}} -{"page_content": "The above is a somewhat arbitrary breakdown of different approaches based on a snapshot in time. Design patterns are still evolving. Recently, model server designs have started to adopt more of the technologies of general service infrastructure such as Docker containers and kubernetes, so many model servers have started to share properties of the model microservice design discussed above. For a deeper dive into the general concepts of model server designs, you can check out my [book on machine learning systems](https://www.manning.com/books/machine-learning-systems).\n\n## Serving PyTorch Models\n\nSo, if you're a PyTorch user, what should you use if you want to take your models to production?", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}} +{"page_content": "### Evolving Patterns\n\nThe above is a somewhat arbitrary breakdown of different approaches based on a snapshot in time. Design patterns are still evolving. Recently, model server designs have started to adopt more of the technologies of general service infrastructure such as Docker containers and kubernetes, so many model servers have started to share properties of the model microservice design discussed above. For a deeper dive into the general concepts of model server designs, you can check out my [book on machine learning systems](https://www.manning.com/books/machine-learning-systems).\n\n## Serving PyTorch Models\n\nSo, if you're a PyTorch user, what should you use if you want to take your models to production?", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}} {"page_content": "If you're on mobile or working on an embedded system like a robot, direct embedding in your application is often the right choice. \nFor mobile specifically, your use case might be served by the ONNX export functionality.\nNote that ONNX, by its very nature, has limitations and doesn't support all of the functionality provided by the larger PyTorch project.\nYou can check out [this tutorial](https://pytorch.org/tutorials/advanced/super_resolution_with_caffe2.html) on deploying PyTorch models to mobile using ONNX to see if this path might suit your use case. 
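As a rough illustration of what that export step involves, a minimal ONNX export could look like the sketch below; the model, input shape, and file name are placeholders rather than details from the tutorial.

```python
import torch
import torchvision

# Any eager-mode model works here; resnet18 is just a placeholder.
model = torchvision.models.resnet18(pretrained=True)
model.eval()

# ONNX export runs the model once on a dummy input to record the graph.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["output"])
```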
\nThat said, we've heard that there's a lot more that PyTorch users want to do on mobile, so look for more mobile-specific functionality in PyTorch in the future.\nFor other embedded systems, like robots, running [inference on a PyTorch model from the C++ API](https://pytorch.org/tutorials/advanced/cpp_export.html) could be the right solution.", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}} {"page_content": "If you can't use the cloud or prefer to manage all services using the same technology, you can follow [this example](https://medium.com/datadriveninvestor/deploy-your-pytorch-model-to-production-f69460192217) to build a simple model microservice using the Flask web framework.\n\nIf you want to manage multiple models within a non-cloud service solution, there are teams developing PyTorch support in model servers like [MLFlow](https://mlflow.org/), [Kubeflow](https://www.kubeflow.org/), and [RedisAI.](https://oss.redislabs.com/redisai/) We're excited to see innovation from multiple teams building OSS model servers, and we'll continue to highlight innovation in the PyTorch ecosystem in the future.", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}} {"page_content": "If you can use the cloud for your application, there are several great choices for working with models in the cloud. For AWS Sagemaker, you can start find a guide to [all of the resources from AWS for working with PyTorch](https://docs.aws.amazon.com/sagemaker/latest/dg/pytorch.html), including docs on how to use the [Sagemaker Python SDK](https://sagemaker.readthedocs.io/en/stable/using_pytorch.html). You can also see [some](https://youtu.be/5h1Ot2dPi2E) [talks](https://youtu.be/qc5ZikKw9_w) we've given on using PyTorch on Sagemaker. Finally, if you happen to be using PyTorch via FastAI, then they've written a [really simple guide](https://course.fast.ai/deployment_amzn_sagemaker.html) to getting up and running on Sagemaker.", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}} @@ -757,7 +759,7 @@ {"page_content": "Similarly, by graph capturing the model, we eliminate CPU overhead and accompanying synchronization overhead. CUDA graphs implementation results in a 1.12x performance boost for our max-scale BERT configuration. To maximize the benefits from CUDA graphs, it is important to keep the scope of the graph as large as possible. To achieve this, we modified the model script to remove CPU-GPU synchronizations during the execution such that the full model can be graph captured. Furthermore, we also made sure that the tensor sizes during the execution are static within the scope of the graph. For instance, in BERT, only a specific subset of total tokens contribute to loss function, determined by a pre-generated mask tensor. Extracting the indices of valid tokens from this mask, and using these indices to gather the tokens that contribute to the loss, results in a tensor with a dynamic shape, i.e. with shape that is not constant across iterations. In order to make sure tensor sizes are static, instead of using the dynamic-shape tensors in the loss computation, we used static shape tensors where a mask is used to indicate which elements are valid. As a result, all tensor shapes are static. Dynamic shapes also require CPU-GPU synchronization since it has to involve the framework\u2019s memory management on the CPU side. 
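The masking pattern can be sketched as follows; this is a simplified, device-agnostic toy example with made-up tensor names and sizes, not the actual MLPerf BERT code.

```python
import torch

batch, seq, vocab = 8, 128, 1000
logits = torch.randn(batch, seq, vocab)          # same shape every iteration
labels = torch.randint(0, vocab, (batch, seq))
valid_mask = torch.rand(batch, seq) < 0.15       # which tokens contribute to the loss

# Dynamic-shape variant (avoided): the number of selected tokens changes per
# iteration, so the framework must sync with the CPU to size the result.
# selected_logits = logits[valid_mask]

# Static-shape variant: compute a per-token loss everywhere, then zero out the
# invalid positions with the mask; every tensor keeps a constant shape.
per_token_loss = torch.nn.functional.cross_entropy(
    logits.view(-1, vocab), labels.view(-1), reduction="none")
loss = (per_token_loss * valid_mask.view(-1).float()).sum() / valid_mask.sum()
```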
With static-only shapes, no CPU-GPU synchronizations are necessary. This is shown in Figure 5.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}} {"page_content": "

\n\t\"Synchronization\n\t
\n\tFigure 5. By using a fixed size tensor and a boolean mask as described in the text, we are able to eliminate CPU synchronizations needed for dynamic sized tensors \n

\n\n\n## CUDA graphs in NVIDIA DL examples collection\n\nSingle GPU use cases can also benefit from using CUDA Graphs. This is particularly true for workloads launching many short kernels with small batches. A good example is training and inference for recommender systems. Below we present preliminary benchmark results for NVIDIA's implementation of the Deep Learning Recommendation Model (DLRM) from our Deep Learning Examples collection. Using CUDA graphs for this workload provides significant speedups for both training and inference. The effect is particularly visible when using very small batch sizes, where CPU overheads are more pronounced.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}} {"page_content": "CUDA graphs are being actively integrated into other PyTorch NGC model scripts and the NVIDIA Github deep learning examples. Stay tuned for more examples on how to use it.\n\n\n
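Until then, the core capture-and-replay pattern of the `torch.cuda` graph API is short enough to sketch here; this is a simplified toy example on a single linear layer, not one of the NGC model scripts.

```python
import torch

model = torch.nn.Linear(128, 64).cuda()
static_input = torch.randn(32, 128, device="cuda")

# Warm up on a side stream before capture, as recommended in the docs.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s), torch.no_grad():
    for _ in range(3):
        model(static_input)
torch.cuda.current_stream().wait_stream(s)

# Capture one forward pass; subsequent replays launch the whole graph at once.
g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g), torch.no_grad():
    static_output = model(static_input)

# To run on new data, copy it into the static input tensor and replay the graph.
static_input.copy_(torch.randn(32, 128, device="cuda"))
g.replay()
print(static_output.sum().item())
```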

\n\t\"CUDA\n

\n

\n\t\"CUDA\n
\n\tFigure 6: CUDA graphs optimization for the DLRM model.\n

\n\n\n# Call to action: CUDA Graphs in PyTorch v1.10", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}} -{"page_content": "CUDA graphs can provide substantial benefits for workloads that comprise many small GPU kernels and hence bogged down by CPU launch overheads. This has been demonstrated in our MLPerf efforts, optimizing PyTorch models. Many of these optimizations, including CUDA graphs, have or will eventually be integrated into our PyTorch NGC model scripts [collection](https://ngc.nvidia.com/catalog/collections?orderBy=scoreDESC&pageNumber=0&query=pytorch&quickFilter=&filters=) and the NVIDIA [Github deep learning examples](https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/). For now, check out our open-source MLPerf training v1.0 [implementation](https://github.com/mlcommons/training_results_v1.0/tree/master/NVIDIA) which could serve as a good starting point to see CUDA graph in action. Alternatively, try the PyTorch CUDA graphs API on your own workloads.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}} +{"page_content": "# Call to action: CUDA Graphs in PyTorch v1.10\n\nCUDA graphs can provide substantial benefits for workloads that comprise many small GPU kernels and hence bogged down by CPU launch overheads. This has been demonstrated in our MLPerf efforts, optimizing PyTorch models. Many of these optimizations, including CUDA graphs, have or will eventually be integrated into our PyTorch NGC model scripts [collection](https://ngc.nvidia.com/catalog/collections?orderBy=scoreDESC&pageNumber=0&query=pytorch&quickFilter=&filters=) and the NVIDIA [Github deep learning examples](https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/). For now, check out our open-source MLPerf training v1.0 [implementation](https://github.com/mlcommons/training_results_v1.0/tree/master/NVIDIA) which could serve as a good starting point to see CUDA graph in action. Alternatively, try the PyTorch CUDA graphs API on your own workloads.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}} {"page_content": "We thank many NVIDIAN\u2019s and Facebook engineers for their discussions and suggestions: \n[Karthik Mandakolathur US](mailto:karthik@nvidia.com),\n[Tomasz Grel](mailto:tgrel@nvidia.com), \n[PLJoey Conway](mailto:jconway@nvidia.com), \n[Arslan Zulfiqar US](mailto:azulfiqar@nvidia.com)\n\n## Authors bios\n\n[**Vinh Nguyen**](mailto:vinhn@nvidia.com)\n*DL Engineer, NVIDIA*\n\nVinh is a Deep learning engineer and data scientist, having published more than 50 scientific articles attracting more than 2500 citations. At NVIDIA, his work spans a wide range of deep learning and AI applications, including speech, language and vision processing, and recommender systems.\n\n[**Michael Carilli**](mailto:mcarilli@nvidia.com)\n*Senior Developer Technology Engineer, NVIDIA*", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}} {"page_content": "Michael worked at the Air Force Research Laboratory optimizing CFD code for modern parallel architectures. He holds a PhD in computational physics from the University of California, Santa Barbara. 
A member of the PyTorch team, he focuses on making GPU training fast, numerically stable, and easy(er) for internal teams, external customers, and Pytorch community users.\n\n[**Sukru Burc Eryilmaz**](mailto:seryilmaz@nvidia.com)\n*Senior Architect in Dev Arch, NVIDIA*\n\nSukru received his PhD from Stanford University, and B.S from Bilkent University. He currently works on improving the end-to-end performance of neural network training both at single-node scale and supercomputer scale. \n\n[**Vartika Singh**](mailto:vartikas@nvidia.com)\n*Tech Partner Lead for DL Frameworks and Libraries, NVIDIA*", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}} {"page_content": "Vartika has led teams working in confluence of cloud and distributed computing, scaling and AI, influencing the design and strategy of major corporations. She currently works with the major frameworks and compiler organizations and developers within and outside NVIDIA, to help the design to work efficiently and optimally on NVIDIA hardware.\n\n[**Michelle Lin**](mailto:miclin@nvidia.com)\n*Product Intern, NVIDIA*\n\nMichelle is currently pursuing an undergraduate degree in Computer Science and Business Administration at UC Berkeley. She is currently managing execution of projects such as conducting market research and creating marketing assets for Magnum IO.\n\n[**Natalia Gimelshein**](mailto:ngimel@fb.com)\n*Applied Research Scientist, Facebook*\n\nNatalia Gimelshein worked on GPU performance optimization for deep learning workloads at NVIDIA and Facebook. She is currently a member of the PyTorch core team, working with partners to seamlessly support new software and hardware features.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}} @@ -779,7 +781,7 @@ {"page_content": "* [42% of all physicians reported having burnout. \u2013 Medscape](https://www.medscape.com/slideshow/2020-lifestyle-burnout-6012460)\n* [The problem has grown worse due to the pandemic with 64% of U.S. physicians now reporting burnout. - AAFP](https://www.aafp.org/journals/fpm/blogs/inpractice/entry/covid_burnout_survey.html#:~:text=Physician%20burnout%20was%20already%20a,5%2C000%20%E2%80%94%20practice%20in%20the%20U.S.)\n* [\"Too many bureaucratic tasks e.g., charting and paperwork\" is the leading contribution to burnout, increased computerization ranks 4th.](https://login.medscape.com/login/sso/getlogin?urlCache=aHR0cHM6Ly93d3cubWVkc2NhcGUuY29tL3NsaWRlc2hvdy8yMDIwLWxpZmVzdHlsZS1idXJub3V0LTYwMTI0NjA%3D&ac=401) - Medscape\n* [75% of U.S. 
Consumers Wish Their Healthcare Experiences Were More Personalized,](https://www.businesswire.com/news/home/20200218005006/en/75-of-U.S.-Consumers-Wish-Their-Healthcare-Experiences-Were-More-Personalized-Redpoint-Global-Survey-Reveals)- Business Wire\n* [61% of patients would visit their healthcare provider more often if the communication experience felt more personalized.](https://www.businesswire.com/news/home/20200218005006/en/75-of-U.S.-Consumers-Wish-Their-Healthcare-Experiences-Were-More-Personalized-Redpoint-Global-Survey-Reveals) \u2013 Business Wire", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "Physician burnout is one of the primary causes for increased [medical errors](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6175626/), malpractice suits, turnover, and decreased access to care. Burnout leads to an increase in healthcare costs and a decrease in overall patient satisfaction. [Burnout costs the United States $4.6 billion a year.](https://www.nejm.org/doi/full/10.1056/NEJMp2003149)\n\nWhat can we do to bring back trust, joy, and humanity to the delivery of healthcare? A significant portion of the administrative work consists of entering patient data into Electronic Health Records (EHRs) and creating clinical documentation. Clinical documentation is created from information already in the EHR as well as from the patient-provider encounter conversation.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "This article will showcase how the Nuance Dragon Ambient eXperience (DAX), an AI-powered, voice-enabled, ambient clinical intelligence solution, automatically documents patient encounters accurately and efficiently at the point of care and the technologies that enable it.\n\nNuance DAX enhances the quality of care and patient experience, increases provider efficiency and satisfaction, and improves financial outcomes. It can be used in office and telehealth settings in all ambulatory specialties, including primary and urgent care.\n\n

\n \n

\n\n## Natural Language Processing", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}} -{"page_content": "Natural Language Processing (NLP) is one of the most challenging fields in Artificial Intelligence (AI). It comprehends a set of algorithms that allow computers to understand or generate the language used by humans. These algorithms can process and analyze vast amounts of natural language data from different sources (either sound or text) to build models that can understand, classify, or even generate natural language as humans would. Like other fields in AI, NLP has significantly progressed thanks to the advent of Deep Learning (DL), which has resulted in models that can obtain results on par with humans in some tasks.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}} +{"page_content": "## Natural Language Processing\n\nNatural Language Processing (NLP) is one of the most challenging fields in Artificial Intelligence (AI). It comprehends a set of algorithms that allow computers to understand or generate the language used by humans. These algorithms can process and analyze vast amounts of natural language data from different sources (either sound or text) to build models that can understand, classify, or even generate natural language as humans would. Like other fields in AI, NLP has significantly progressed thanks to the advent of Deep Learning (DL), which has resulted in models that can obtain results on par with humans in some tasks.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "These advanced NLP techniques are being applied in healthcare. During a typical patient-provider encounter, a conversation ensues where the doctor constructs, through questions and answers, a chronological description of the development of the patient's presenting illness or symptoms. A physician examines the patient and makes clinical decisions to establish a diagnosis and determine a treatment plan. This conversation, and data in the EHR, provide the required information for physicians to generate the clinical documentation, referred to as medical reports.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "Two main NLP components play a role in automating the creation of clinical documentation. The first component, Automatic Speech Recognition (ASR), is used to translate speech into text. It takes the audio recording of the encounter and generates a conversation transcription (cf. Figure 2). The second component, Automatic Text Summarization, helps generate summaries from large text documents. This component is responsible for understanding and capturing the nuances and most essential aspects from the transcribed conversation into a final report in narrative form (cf. 
Figure 3), structured form, or a combination of both.\n\nWe will focus on this second component, Automatic Text Summarization, which is a difficult task with many challenges:", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "* Its performance is tied to the ASR quality from multiple speakers (noisy input).\n* The input is conversational in nature and contains layman's terms.\n* Protected Health Information (PHI) regulations limit medical data access.\n* The information for one output sentence is potentially spread across multiple conversation turns.\n* There is no explicit sentence alignment between input and output.\n* Various medical specialties, encounter types, and EHR systems constitute a broad and complex output space. \n* Physicians have different styles of conducting encounters and have their preferences for medical reports; there is no standard. \n* Standard summarization metrics might differ from human judgment of quality.\n\n

\n \n

\n\n

\nFigure 2: Transcript of a patient-doctor conversation\n

\n\n

\n \n

", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}} @@ -789,12 +791,12 @@ {"page_content": "All these challenges require our researchers to run a battery of extensive experiments. Thanks to the flexibility of PyTorch and Fairseq, their productivity has greatly increased. Further, the ecosystem offers an easy path from ideation, implementation, experimentation, and final roll-out to production. Using multiple GPUs or CPUs is as simple as providing an additional argument to the tools, and because of the tight Python integration, PyTorch code can be easily debugged.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "In our continuous effort to contribute to the open-source community, features have been developed at Nuance and pushed to the Fairseq GitHub repository. These try to overcome some of the challenges mentioned such as, facilitating copying of, especially rare or unseen, words from the input to summary, training speedups by improving Tensor Core utilization, and ensuring TorchScript compatibility of different Transformer configurations. Following, we will show an example of how to train a Transformer model with a Pointer Generator mechanism (Transformer-PG), which can copy words from the input.\n\n## How to build a Transformer model with a Pointer Generator mechanism\n\nIn this step-by-step guide, it is assumed the user has already installed PyTorch and Fairseq.\n\n### 1. Create a vocabulary and extend it with source position markers:\n\nThese markers will allow the model to point to any word in the input sequence.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "```python\nvocab_size=\nposition_markers=512\nexport LC_ALL=C\ncat train.src train.tgt |\n tr -s '[:space:]' '\\n' |\n sort |\n uniq -c |\n sort -k1,1bnr -k2 |\n head -n \"$((vocab_size - 4))\" |\n awk '{ print $2 \" \" $1 }' > dict.pg.txt\npython3 -c \"[print(' 0'.format(n)) for n in range($position_markers)]\" >> dict.pg.txt\n```\n\nThis will create a file \"dict.pg.txt\" that contains the \\ most frequent words followed by 512 position markers named from \"\\\" to \"\\\".\n\nIn case we have an input like\n\n```python\nsrc = \"Hello, I'm The Dogtor\"\n```\n\nit could happen that our model has been trained without the word \"Dogtor\" in its vocabulary. Therefore, when we feed this sequence into the model, it should be converted to:\n\n```python\nsrc = \"Hello, I'm The \"\n```", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}} -{"page_content": "Now, \"\\\" is part of our vocabulary and could be predicted by the model (this is where the pointer-generator comes in). In such a case, we will only need to post-process the output to replace \"\\\" by the word at input position 3.\n\n### 2. 
Preprocess the text data to replace unknown words by its positional markers:\n\nWe can use the scripts from [https://github.com/pytorch/fairseq/tree/master/examples/pointer_generator](https://github.com/pytorch/fairseq/tree/master/examples/pointer_generator).\n\n```python\n# Considering we have our data in:\n# train_src = /path/to/train.src\n# train_tgt = /path/to/train.tgt\n# valid_src = /path/to/valid.src\n# valid_tgt = /path/to/valid.tgt\n./preprocess.py --source /path/to/train.src \\\n --target /path/to/train.tgt \\\n --vocab <(cut -d' ' -f1 dict.pg.txt) \\\n --source-out /path/to/train.pg.src \\\n --target-out /path/to/train.pg.tgt", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}} +{"page_content": "```python\nsrc = \"Hello, I'm The \"\n```\n\nNow, \"\\\" is part of our vocabulary and could be predicted by the model (this is where the pointer-generator comes in). In such a case, we will only need to post-process the output to replace \"\\\" by the word at input position 3.\n\n### 2. Preprocess the text data to replace unknown words by its positional markers:\n\nWe can use the scripts from [https://github.com/pytorch/fairseq/tree/master/examples/pointer_generator](https://github.com/pytorch/fairseq/tree/master/examples/pointer_generator).\n\n```python\n# Considering we have our data in:\n# train_src = /path/to/train.src\n# train_tgt = /path/to/train.tgt\n# valid_src = /path/to/valid.src\n# valid_tgt = /path/to/valid.tgt\n./preprocess.py --source /path/to/train.src \\\n --target /path/to/train.tgt \\\n --vocab <(cut -d' ' -f1 dict.pg.txt) \\\n --source-out /path/to/train.pg.src \\\n --target-out /path/to/train.pg.tgt", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "./preprocess.py --source /path/to/valid.src \\\n --target /path/to/valid.tgt \\\n --vocab <(cut -d' ' -f1 dict.pg.txt) \\\n --source-out /path/to/valid.pg.src \\\n --target-out /path/to/valid.pg.tgt\n\n./preprocess.py --source /path/to/test.src \\\n --vocab <(cut -d' ' -f1 dict.pg.txt) \\\n --source-out /path/to/test.pg.src\n```\n\n### 3. Now let's binarize the data, so that it can be processed faster:", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "```python\nfairseq-preprocess --task \"translation\" \\\n --source-lang \"pg.src\" \\\n --target-lang \"pg.tgt\" \\\n --trainpref /path/to/train \\\n --validpref /path/to/valid \\\n --srcdict dict.pg.txt \\\n --cpu \\\n --joined-dictionary \\\n --destdir \n```\t\t \n\t\t\t\t \nYou might notice the type of task is \"translation\". This is because there is no \"summarization\" task available; we could understand it as a kind of NMT task where the input and output languages are shared and the output (summary) is shorter than the input.\n\n### 4. 
Now we can train the model:", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}} -{"page_content": "```python\nfairseq-train \\\n --save-dir \\\n --task \"translation\" \\\n --source-lang \"src\" \\\n --target-lang \"tgt\" \\\n --arch \"transformer_pointer_generator\" \\\n --max-source-positions 512 \\\n --max-target-positions 128 \\\n --truncate-source \\\n --max-tokens 2048 \\\n --required-batch-size-multiple 1 \\\n --required-seq-len-multiple 8 \\\n --share-all-embeddings \\\n --dropout 0.1 \\\n --criterion \"cross_entropy\" \\\n --optimizer adam \\\n --adam-betas '(0.9, 0.98)' \\\n --adam-eps 1e-9 \\\n --update-freq 4 \\\n --lr 0.004 \\\n # Pointer Generator\n --alignment-layer -1 \\\n --alignment-heads 1 \\\n --source-position-markers 512\n```\n\nThis configuration makes use of features Nuance has contributed back to Fairseq:", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}} -{"page_content": "* Transformer with a Pointer Generator mechanism to facilitate copying of words from the input.\n* Sequence length padded to a multiple of 8 to better use tensor cores and reduce training time.\n\n### 5. Now let's take a look at how to generate a summary with our new medical report generation system:\n\n```python\nimport torch\nfrom examples.pointer_generator.pointer_generator_src.transformer_pg import TransformerPointerGeneratorModel\n\n# Patient-Doctor conversation\ninput = \"[doctor] Lisa Simpson, thirty six year old female, presents to the clinic today because \" \\\n \"she has severe right wrist pain\"\n\n# Load the model\nmodel = TransformerPointerGeneratorModel.from_pretrained(data_name_or_path=,\n model_name_or_path=,\n checkpoint_file=\"checkpoint_best.pt\")\n\nresult = model.translate([input], beam=2)", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}} -{"page_content": "print(result[0])\nMs. is a 36-year-old female who presents to the clinic today for evaluation of her right wrist.\n```\n\n### 6. Alternatively, we can use fairseq-interactive and a postprocessing tool to substitute positional unknown tokens by its words from the input:\n\n```python\nfairseq-interactive \\\n --batch-size \\\n --task translation \\\n --source-lang src \\\n --target-lang tgt \\\n --path /checkpoint_last.pt \\\n --input /path/to/test.pg.src \\\n --buffer-size 20 \\\n --max-len-a 0 \\\n --max-len-b 128 \\\n --beam 2 \\\n --skip-invalid-size-inputs-valid-test | tee generate.out\n\ngrep \"^H-\" generate.out | cut -f 3- > generate.hyp\n\n./postprocess.py \\\n\t--source <(awk 'NF<512' /path/to/test.pg.src) \\\n\t--target generate.hyp \\\n\t--target-out generate.hyp.processed\n```", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}} +{"page_content": "### 4. 
Now we can train the model:\n\n```python\nfairseq-train \\\n --save-dir \\\n --task \"translation\" \\\n --source-lang \"src\" \\\n --target-lang \"tgt\" \\\n --arch \"transformer_pointer_generator\" \\\n --max-source-positions 512 \\\n --max-target-positions 128 \\\n --truncate-source \\\n --max-tokens 2048 \\\n --required-batch-size-multiple 1 \\\n --required-seq-len-multiple 8 \\\n --share-all-embeddings \\\n --dropout 0.1 \\\n --criterion \"cross_entropy\" \\\n --optimizer adam \\\n --adam-betas '(0.9, 0.98)' \\\n --adam-eps 1e-9 \\\n --update-freq 4 \\\n --lr 0.004 \\\n # Pointer Generator\n --alignment-layer -1 \\\n --alignment-heads 1 \\\n --source-position-markers 512\n```", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}} +{"page_content": "This configuration makes use of features Nuance has contributed back to Fairseq:\n\n* Transformer with a Pointer Generator mechanism to facilitate copying of words from the input.\n* Sequence length padded to a multiple of 8 to better use tensor cores and reduce training time.\n\n### 5. Now let's take a look at how to generate a summary with our new medical report generation system:\n\n```python\nimport torch\nfrom examples.pointer_generator.pointer_generator_src.transformer_pg import TransformerPointerGeneratorModel\n\n# Patient-Doctor conversation\ninput = \"[doctor] Lisa Simpson, thirty six year old female, presents to the clinic today because \" \\\n \"she has severe right wrist pain\"\n\n# Load the model\nmodel = TransformerPointerGeneratorModel.from_pretrained(data_name_or_path=,\n model_name_or_path=,\n checkpoint_file=\"checkpoint_best.pt\")\n\nresult = model.translate([input], beam=2)", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}} +{"page_content": "result = model.translate([input], beam=2)\n\nprint(result[0])\nMs. is a 36-year-old female who presents to the clinic today for evaluation of her right wrist.\n```\n\n### 6. Alternatively, we can use fairseq-interactive and a postprocessing tool to substitute positional unknown tokens by its words from the input:\n\n```python\nfairseq-interactive \\\n --batch-size \\\n --task translation \\\n --source-lang src \\\n --target-lang tgt \\\n --path /checkpoint_last.pt \\\n --input /path/to/test.pg.src \\\n --buffer-size 20 \\\n --max-len-a 0 \\\n --max-len-b 128 \\\n --beam 2 \\\n --skip-invalid-size-inputs-valid-test | tee generate.out\n\ngrep \"^H-\" generate.out | cut -f 3- > generate.hyp\n\n./postprocess.py \\\n\t--source <(awk 'NF<512' /path/to/test.pg.src) \\\n\t--target generate.hyp \\\n\t--target-out generate.hyp.processed\n```", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "Now we have the final set of reports in \"generate.hyp.processed\", with \"\\\" replaced by the original word from the input sequence.\n\n## Model Deployment\n\nPyTorch offers great flexibility in modeling and a rich surrounding ecosystem. However, while several recent articles have suggested that the use of PyTorch in research and academia may be close to surpassing TensorFlow, there seems to be an overall sense of TensorFlow being the preferred platform for deployment to production. Is this still the case in 2021? 
Teams looking to serve their PyTorch models in production have a few options.\n\nBefore describing our journey, let's take a brief detour and define the term model.\n\n### Models as computation graphs", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "A few years back, it was still common for machine learning toolkits to support only particular classes of models of a rather fixed and rigid structure, with only a few degrees of freedom (like the kernel of a support vector machine or the number of hidden layers of a neural network). Inspired by foundational work in Theano, toolkits like Microsoft's CNTK or Google's TensorFlow were among the first to popularize a more flexible view on models, as computation graphs with associated parameters that can be estimated from data. This view blurred the boundaries between popular types of models (such as DNNs or SVMs), as it became easy to blend the characteristics of each into your type of graph. Still, such a graph had to be defined upfront before estimating its parameters, and it was pretty static. This made it easy to save models to a self-contained bundle, like a TensorFlow SavedModel (such a bundle simply contains the structure of the graph, as well as the concrete values of the estimated parameters). However, debugging such models can be difficult because the statements in the Python code that build the graph are logically separate from the lines that execute it. Researchers also long for easier ways of expressing dynamic behavior, such as the computation steps of the forward pass of a model being conditionally dependent on its input data (or its previous output).", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "Most recently, the above limitations have led to a second revolution spearheaded by PyTorch and TensorFlow 2. The computation graph is no longer defined explicitly. Instead, it will be populated implicitly as the Python code executes operations on tensor arguments. An essential technique that powers this development is automatic differentiation. As the computation graph is being built implicitly while executing the steps of the forward pass, all the necessary data will be tracked for later computation of the gradient concerning the model parameters. This allows for great flexibility in training a model, but it raises an important question. If the computation happening inside a model is only implicitly defined through our Python code's steps as it executes concrete data, what is it that we want to save as a model? The answer \u2013 at least initially \u2013 was the Python code with all its dependencies, along with the estimated parameters. This is undesirable for practical reasons. For instance, there is a danger that the team working on model deployment does not exactly reproduce the Python code dependencies used during training, leading to subtly divergent behavior. The solution typically consists of combining two techniques, scripting and tracing, that is, extra annotations in your Python code and execution of your code on exemplary input data, allowing PyTorch to define and save the graph that should be executed during later inference on new, unseen data. 
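At its simplest, that export step can be sketched as follows, using a generic toy module rather than our report-generation model:

```python
import torch

class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(16, 4)

    def forward(self, x):
        return torch.relu(self.linear(x))

model = TinyModel().eval()

# Tracing: execute the model on exemplary input and record the graph it runs.
example = torch.randn(2, 16)
traced = torch.jit.trace(model, example)

# Scripting would instead compile the Python code directly, preserving control flow:
# scripted = torch.jit.script(model)

# Either way, the result is a self-contained TorchScript bundle.
traced.save("model.torchscript.pt")
loaded = torch.jit.load("model.torchscript.pt")
print(loaded(example).shape)
```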
This requires some discipline by whoever creates the model code (arguably voiding some of the original flexibility of eager execution), but it results in a self-contained model bundle in TorchScript format. The solution in TensorFlow 2 is remarkably similar.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}} @@ -802,7 +804,7 @@ {"page_content": "Our journey in deploying the report generation models reflects the above discussion. We started out serving our models by deploying the model code and its dependencies along with the parameter checkpoints in a custom Docker image exposing a gRPC service interface. However, we soon noticed that it became error-prone to replicate the exact code and environment used by the modeling team while estimating the parameters. Moreover, this approach prevented us from leveraging high-performance model serving frameworks like NVIDIA's Triton, which is written in C++ and requires self-contained models that can be used without a Python interpreter. At this stage, we were facing a choice between attempting to export our PyTorch models to ONNX or TorchScript format. ONNX is an open specification for representing machine learning models that increasingly finds adoption. It is powered by a high-performance runtime developed by Microsoft (ONNX Runtime). While we were able to achieve performance acceleration for our TensorFlow BERT-based model using ONNX Runtime, at the time one of our PyTorch model required some operators that weren\u2019t yet supported in ONNX. Rather than implement these using custom operators, we decided to look into TorchScript for the time being.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "### A maturing ecosystem\n\nIs it all roses? No, it has been a rockier journey than we expected. We encountered what seems to be a memory leak in the MKL libraries used by PyTorch while serving the PyTorch code directly. We encountered deadlocks in trying to load multiple models from multiple threads. We had difficulties exporting our models to ONNX and TorchScript formats. Models would not work out-of-the-box on hardware with multiple GPUs, they always accessed the particular GPU device on which they were exported. We encountered excessive memory usage in the Triton inference server while serving TorchScript models, which we found out was due to automatic differentiation accidentally being enabled during the forward pass. However, the ecosystem keeps improving, and there is a helpful and vibrant open-source community eager to work with us to mitigate such issues.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "Where to go from here? For those that require the flexibility of serving PyTorch code directly, without going through the extra step of exporting self-contained models, it is worth pointing out that the TorchServe project now provides a way of bundling the code together with parameter checkpoints into a single servable archive, greatly reducing the risk of code and parameters running apart. To us, however, exporting models to TorchScript has proven beneficial. 
It provides a clear interface between modeling and deployment teams, and TorchScript further reduces the latency when serving models on GPU via its just-in-time compilation engine.\n\n### Scaling at large and the future", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}} -{"page_content": "Finally, efficient deployment to the cloud is about more than just computing the response of a single model instance efficiently. Flexibility is needed in managing, versioning and updating models. High-level scalability must be achieved via techniques such as load-balancing, horizontal scaling and vertical scaling. If many models are involved, scale-to-zero quickly becomes a topic as it is unacceptable to pay for serving models that do not answer any requests. Providing such extra functionality on top of a low-level inference server like Triton is the job of an orchestration framework. After gaining some first experience with KubeFlow, to that end, we decided to turn our attention to Azure ML, which provides similar functionality but integrates more deeply with the Azure platform, on which we crucially rely for large parts of our technology stack already. This part of our journey has just begun.\n\n## Conclusion", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}} +{"page_content": "### Scaling at large and the future\n\nFinally, efficient deployment to the cloud is about more than just computing the response of a single model instance efficiently. Flexibility is needed in managing, versioning and updating models. High-level scalability must be achieved via techniques such as load-balancing, horizontal scaling and vertical scaling. If many models are involved, scale-to-zero quickly becomes a topic as it is unacceptable to pay for serving models that do not answer any requests. Providing such extra functionality on top of a low-level inference server like Triton is the job of an orchestration framework. After gaining some first experience with KubeFlow, to that end, we decided to turn our attention to Azure ML, which provides similar functionality but integrates more deeply with the Azure platform, on which we crucially rely for large parts of our technology stack already. This part of our journey has just begun.\n\n## Conclusion", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "Academia has long recognized that we are \"standing on the shoulders of giants.\" As Artificial Intelligence is maturing from a scientific discipline into technology, the same spirit of collaboration that originally fueled its scientific foundation has carried over into the world of software engineering. Open-source enthusiasts join technology companies worldwide to build open software ecosystems that allow for new angles at solving some of the most pressing challenges of modern society. In this article, we've taken a look at Nuance's [Dragon Ambient eXperience](http://www.nuance.com/ambient), an AI-powered, voice-enabled solution that automatically documents patient care, reducing healthcare providers' administrative burdens. Nuance DAX improves the patient-provider experience, reduces physician burnout, and improves financial outcomes. It brings back trust, joy, and humanity to the delivery of healthcare. 
Fairseq and PyTorch have proven to be an incredible platform for powering this AI technology, and in turn, Nuance has contributed back some of its innovations in this space. For further reading, we invite you to take a look at our recent [ACL publication](https://www.aclweb.org/anthology/2020.nlpmc-1.4/) and the [Nuance \"What's Next\" blog](https://whatsnext.nuance.com/rd/using-deep-learning-to-generate-medical-reports/).", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'Microsoft becomes maintainer of the Windows version of PyTorch'\nauthor: Maxim Lukiyanov - Principal PM at Microsoft, Emad Barsoum - Group EM at Microsoft, Guoliang Hua - Principal EM at Microsoft, Nikita Shulga - Tech Lead at Facebook, Geeta Chauhan - PE Lead at Facebook, Chris Gottbrath - Technical PM at Facebook, Jiachen Pu - Engineer at Facebook\n\n---\n\nAlong with the PyTorch 1.6 release, we are excited to announce that Microsoft has expanded its participation in the PyTorch community and will be responsible for the development and maintenance of the PyTorch build for Windows.", "metadata": {"source": "https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch/", "category": "pytorch blogs"}} {"page_content": "According to the latest [Stack Overflow developer survey](https://insights.stackoverflow.com/survey/2020#technology-developers-primary-operating-systems), Windows remains the primary operating system for the developer community (46% Windows vs 28% MacOS). [Jiachen Pu](https://github.com/peterjc123) initially made a heroic effort to add support for PyTorch on Windows, but due to limited resources, Windows support for PyTorch has lagged behind other platforms. Lack of test coverage resulted in unexpected issues popping up every now and then. Some of the core tutorials, meant for new users to learn and adopt PyTorch, would fail to run. The installation experience was also not as smooth, with the lack of official PyPI support for PyTorch on Windows. Lastly, some of the PyTorch functionality was simply not available on the Windows platform, such as the TorchAudio domain library and distributed training support. To help alleviate this pain, Microsoft is happy to bring its Windows expertise to the table and bring PyTorch on Windows to its best possible self.", "metadata": {"source": "https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch/", "category": "pytorch blogs"}} @@ -825,7 +827,7 @@ {"page_content": "The objective of Seamless Scene Segmentation is to predict a \u201cpanoptic\u201d segmentation [3] from an image, that is a complete labeling where each pixel is assigned with a class id and, where possible, an instance id. Like many modern CNNs dealing with instance detection and segmentation, we adopt the Mask R-CNN framework [4], using ResNet50 + FPN [5] as a backbone. This architecture works in two stages: first, the \u201cProposal Head\u201d selects a set of candidate bounding boxes on the image (i.e. the proposals) that could contain an object; then, the \u201cMask Head\u201d focuses on each proposal, predicting its class and segmentation mask. The output of this process is a \u201csparse\u201d instance segmentation, covering only the parts of the image that contain countable objects (e.g. 
cars and pedestrians).", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}} {"page_content": "To complete our panoptic approach coined Seamless Scene Segmentation, we add a third stage to Mask R-CNN. Stemming from the same backbone, the \u201cSemantic Head\u201d predicts a dense semantic segmentation over the whole image, also accounting for the uncountable or amorphous classes (e.g. road and sky). The outputs of the Mask and Semantic heads are finally fused using a simple non-maximum suppression algorithm to generate the final panoptic prediction. All details about the actual network architecture, used losses and underlying math can be found at the [project website](https://research.mapillary.com/publication/cvpr19a) for our CVPR 2019 paper [1].", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}} {"page_content": "While several versions of Mask R-CNN are publicly available, including an [official implementation](https://github.com/facebookresearch/Detectron) written in Caffe2, at Mapillary we decided to build Seamless Scene Segmentation from scratch using PyTorch, in order to have full control and understanding of the whole pipeline. While doing so we encountered a couple of main stumbling blocks, and had to come up with some creative workarounds we are going to describe next.\n\n## Dealing with variable-sized tensors", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}} -{"page_content": "Something that sets aside panoptic segmentation networks from traditional CNNs is the prevalence of variable-sized data. In fact, many of the quantities we are dealing with cannot be easily represented with fixed sized tensors: each image contains a different number of objects, the Proposal head can produce a different number of proposals for each image, and the images themselves can have different sizes. While this is not a problem per-se -- one could just process images one at a time -- we would still like to exploit batch-level parallelism as much as possible. Furthermore, when performing distributed training with multiple GPUs, `DistributedDataParallel` expects its inputs to be batched, uniformly-sized tensors.\n\n
\n \n
", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}} +{"page_content": "## Dealing with variable-sized tensors\n\nSomething that sets aside panoptic segmentation networks from traditional CNNs is the prevalence of variable-sized data. In fact, many of the quantities we are dealing with cannot be easily represented with fixed sized tensors: each image contains a different number of objects, the Proposal head can produce a different number of proposals for each image, and the images themselves can have different sizes. While this is not a problem per-se -- one could just process images one at a time -- we would still like to exploit batch-level parallelism as much as possible. Furthermore, when performing distributed training with multiple GPUs, `DistributedDataParallel` expects its inputs to be batched, uniformly-sized tensors.\n\n
\n \n
", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}} {"page_content": "Our solution to these issues is to wrap each batch of variable-sized tensors in a `PackedSequence`. `PackedSequence` is little more than a glorified list class for tensors, tagging its contents as \u201crelated\u201d, ensuring that they all share the same type, and providing useful methods like moving all the tensors to a particular device, etc. When performing light-weight operations that wouldn\u2019t be much faster with batch-level parallelism, we simply iterate over the contents of the `PackedSequence` in a for loop. When performance is crucial, e.g. in the body of the network, we simply concatenate the contents of the PackedSequence, adding zero padding as required (like in RNNs with variable-length inputs), and keeping track of the original dimensions of each tensor.\n\n`PackedSequence`s also help us deal with the second problem highlighted above. We slightly modify `DistributedDataParallel` to recognize `PackedSequence` inputs, splitting them in equally sized chunks and distributing their contents across the GPUs.", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}} {"page_content": "## Asymmetric computational graphs with Distributed Data Parallel\n\nAnother, perhaps more subtle, peculiarity of our network is that it can generate asymmetric computational graphs across GPUs. In fact, some of the modules that compose the network are \u201coptional\u201d, in the sense that they are not always computed for all images. As an example, when the Proposal head doesn\u2019t output any proposal, the Mask head is not traversed at all. If we are training on multiple GPUs with `DistributedDataParallel`, this results in one of the replicas not computing gradients for the Mask head parameters.\n\nPrior to PyTorch 1.1, this resulted in a crash, so we had to develop a workaround. Our simple but effective solution was to compute a \u201cfake forward pass\u201d when no actual forward is required, i.e. something like this:\n\n```python\ndef fake_forward():\n fake_input = get_correctly_shaped_fake_input()\n fake_output = mask_head(fake_input)\n fake_loss = fake_output.sum() * 0\n return fake_loss\n```", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}} {"page_content": "Here, we generate a batch of bogus data, pass it through the Mask head, and return a loss that always back-progates zeros to all parameters.\n\nStarting from PyTorch 1.1 this workaround is no longer required: by setting `find_unused_parameters=True` in the constructor, `DistributedDataParallel` is told to identify parameters whose gradients have not been computed by all replicas and correctly handle them. 
This leads to some substantial simplifications in our code base!\n\n## In-place Activated BatchNorm\n\n_Github project page: [https://github.com/mapillary/inplace_abn/](https://github.com/mapillary/inplace_abn/)_", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}} @@ -839,12 +841,12 @@ {"page_content": "[2] In-place Activated BatchNorm for Memory-Optimized Training of DNNs; Samuel Rota Bul\u00f2, Lorenzo Porzi, Peter Kontschieder; Computer Vision and Pattern Recognition (CVPR), 2018\n\n[3] Panoptic Segmentation; Alexander Kirillov, Kaiming He, Ross Girshick, Carsten Rother, Piotr Dollar; Computer Vision and Pattern Recognition (CVPR), 2019\n\n[4] Mask R-CNN; Kaiming He, Georgia Gkioxari, Piotr Dollar, Ross Girshick; International Conference on Computer Vision (ICCV), 2017\n\n[5] Feature Pyramid Networks for Object Detection; Tsung-Yi Lin, Piotr Dollar, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie; Computer Vision and Pattern Recognition (CVPR), 2017", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'Introduction to Quantization on PyTorch'\nauthor: Raghuraman Krishnamoorthi, James Reed, Min Ni, Chris Gottbrath, and Seth Weidman\n---\n\nIt\u2019s important to make efficient use of both server-side and on-device compute resources when developing machine learning applications. To support more efficient deployment on servers and edge devices, PyTorch added a support for model quantization using the familiar eager mode Python API.", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}} {"page_content": "Quantization leverages 8bit integer (int8) instructions to reduce the model size and run the inference faster (reduced latency) and can be the difference between a model achieving quality of service goals or even fitting into the resources available on a mobile device. Even when resources aren\u2019t quite so constrained it may enable you to deploy a larger and more accurate model. Quantization is available in PyTorch starting in version 1.3 and with the release of PyTorch 1.4 we published quantized models for ResNet, ResNext, MobileNetV2, GoogleNet, InceptionV3 and ShuffleNetV2 in the PyTorch torchvision 0.5 library.\n\nThis blog post provides an overview of the quantization support on PyTorch and its incorporation with the TorchVision domain library.\n\n## **What is Quantization?**", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}} -{"page_content": "Quantization refers to techniques for doing both computations and memory accesses with lower precision data, usually int8 compared to floating point implementations. This enables performance gains in several important areas:\n* 4x reduction in model size;\n* 2-4x reduction in memory bandwidth;\n* 2-4x faster inference due to savings in memory bandwidth and faster compute with int8 arithmetic (the exact speed up varies depending on the hardware, the runtime, and the model).\n\nQuantization does not however come without additional cost. Fundamentally quantization means introducing approximations and the resulting networks have slightly less accuracy. 
These techniques attempt to minimize the gap between the full floating point accuracy and the quantized accuracy.", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "## **What is Quantization?**\n\nQuantization refers to techniques for doing both computations and memory accesses with lower precision data, usually int8 compared to floating point implementations. This enables performance gains in several important areas:\n* 4x reduction in model size;\n* 2-4x reduction in memory bandwidth;\n* 2-4x faster inference due to savings in memory bandwidth and faster compute with int8 arithmetic (the exact speed up varies depending on the hardware, the runtime, and the model).\n\nQuantization does not, however, come without additional cost. Fundamentally quantization means introducing approximations and the resulting networks have slightly less accuracy. These techniques attempt to minimize the gap between the full floating point accuracy and the quantized accuracy.", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
{"page_content": "We designed quantization to fit into the PyTorch framework. This means that:\n1. PyTorch has data types corresponding to [quantized tensors](https://github.com/pytorch/pytorch/wiki/Introducing-Quantized-Tensor), which share many of the features of tensors.\n2. One can write kernels with quantized tensors, much like kernels for floating point tensors to customize their implementation. PyTorch supports quantized modules for common operations as part of the `torch.nn.quantized` and `torch.nn.quantized.dynamic` name-space.\n3. Quantization is compatible with the rest of PyTorch: quantized models are traceable and scriptable. The quantization method is virtually identical for both server and mobile backends. One can easily mix quantized and floating point operations in a model.\n4. Mapping of floating point tensors to quantized tensors is customizable with user-defined observer/fake-quantization blocks. PyTorch provides default implementations that should work for most use cases.", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
{"page_content": "
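As a small illustration of the first point, a float tensor can be converted to a quantized tensor and back (the scale and zero point here are arbitrary):\n\n```python\nimport torch\n\nx = torch.randn(4)\n\n# Affine mapping to 8-bit integers: q = round(x / scale) + zero_point\nq = torch.quantize_per_tensor(x, scale=0.1, zero_point=128, dtype=torch.quint8)\n\nprint(q.int_repr())    # underlying uint8 storage\nprint(q.dequantize())  # approximate float32 reconstruction of x\n```\n\n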
\n\nWe developed three techniques for quantizing neural networks in PyTorch as part of quantization tooling in the `torch.quantization` name-space.\n\n## **The Three Modes of Quantization Supported in PyTorch starting version 1.3**", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}} {"page_content": "1. ### **Dynamic Quantization**\n The easiest method of quantization PyTorch supports is called **dynamic quantization**. This involves not just converting the weights to int8 - as happens in all quantization variants - but also converting the activations to int8 on the fly, just before doing the computation (hence \u201cdynamic\u201d). The computations will thus be performed using efficient int8 matrix multiplication and convolution implementations, resulting in faster compute. However, the activations are read and written to memory in floating point format.\n * **PyTorch API**: we have a simple API for dynamic quantization in PyTorch. `torch.quantization.quantize_dynamic` takes in a model, as well as a couple other arguments, and produces a quantized model! Our [end-to-end tutorial](https://pytorch.org/tutorials/intermediate/dynamic_quantization_bert_tutorial.html) illustrates this for a BERT model; while the tutorial is long and contains sections on loading pre-trained models and other concepts unrelated to quantization, the part the quantizes the BERT model is simply:", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}} {"page_content": "```python\n import torch.quantization\n quantized_model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)\n ```\n * See the documentation for the function [here](https://pytorch.org/docs/stable/quantization.html#torch.quantization.quantize_dynamic) an end-to-end example in our tutorials [here](https://pytorch.org/tutorials/advanced/dynamic_quantization_tutorial.html) and [here](https://pytorch.org/tutorials/intermediate/dynamic_quantization_bert_tutorial.html).\n\n2. ### **Post-Training Static Quantization**", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}} -{"page_content": "One can further improve the performance (latency) by converting networks to use both integer arithmetic and int8 memory accesses. Static quantization performs the additional step of first feeding batches of data through the network and computing the resulting distributions of the different activations (specifically, this is done by inserting \u201cobserver\u201d modules at different points that record these distributions). This information is used to determine how specifically the different activations should be quantized at inference time (a simple technique would be to simply divide the entire range of activations into 256 levels, but we support more sophisticated methods as well). Importantly, this additional step allows us to pass quantized values between operations instead of converting these values to floats - and then back to ints - between every operation, resulting in a significant speed-up.", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}} +{"page_content": "2. ### **Post-Training Static Quantization**\n\n One can further improve the performance (latency) by converting networks to use both integer arithmetic and int8 memory accesses. 
Static quantization performs the additional step of first feeding batches of data through the network and computing the resulting distributions of the different activations (specifically, this is done by inserting \u201cobserver\u201d modules at different points that record these distributions). This information is used to determine how specifically the different activations should be quantized at inference time (a simple technique would be to divide the entire range of activations into 256 levels, but we support more sophisticated methods as well). Importantly, this additional step allows us to pass quantized values between operations instead of converting these values to floats - and then back to ints - between every operation, resulting in a significant speed-up.", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
{"page_content": "With this release, we\u2019re supporting several features that allow users to optimize their static quantization:\n 1. Observers: you can customize observer modules which specify how statistics are collected prior to quantization to try out more advanced methods to quantize your data.\n 2. Operator fusion: you can fuse multiple operations into a single operation, saving on memory access while also improving the operation\u2019s numerical accuracy.\n 3. Per-channel quantization: we can independently quantize weights for each output channel in a convolution/linear layer, which can lead to higher accuracy with almost the same speed.\n\n * ### **PyTorch API**:\n * To fuse modules, we have `torch.quantization.fuse_modules`\n * Observers are inserted using `torch.quantization.prepare`\n * Finally, quantization itself is done using `torch.quantization.convert`", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
{"page_content": "We have a tutorial with an end-to-end example of quantization (this same tutorial also covers our third quantization method, quantization-aware training), but because of our simple API, the three lines that perform post-training static quantization on the pre-trained model `myModel` are:\n ```python\n # set quantization config for server (x86) deployment\n myModel.qconfig = torch.quantization.get_default_qconfig('fbgemm')\n\n # insert observers\n torch.quantization.prepare(myModel, inplace=True)\n # Calibrate the model and collect statistics by running representative data through it here\n\n # convert to quantized version\n torch.quantization.convert(myModel, inplace=True)\n ```", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
{"page_content": "3. ### **Quantization Aware Training**\n **Quantization-aware training (QAT)** is the third method, and the one that typically results in the highest accuracy of the three. With QAT, all weights and activations are \u201cfake quantized\u201d during both the forward and backward passes of training: that is, float values are rounded to mimic int8 values, but all computations are still done with floating point numbers. 
Thus, all the weight adjustments during training are made while \u201caware\u201d of the fact that the model will ultimately be quantized; after quantizing, therefore, this method usually yields higher accuracy than the other two methods.\n* ### **PyTorch API**:\n * `torch.quantization.prepare_qat` inserts fake quantization modules to model quantization.\n * Mimicking the static quantization API, `torch.quantization.convert` actually quantizes the model once training is complete.", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}} @@ -862,8 +864,8 @@ {"page_content": "### **Conclusion**\nTo get started on quantizing your models in PyTorch, start with [the tutorials on the PyTorch website](https://pytorch.org/tutorials/#model-optimization). If you are working with sequence data start with [dynamic quantization for LSTM](https://pytorch.org/tutorials/advanced/dynamic_quantization_tutorial.html), or [BERT](https://pytorch.org/tutorials/intermediate/dynamic_quantization_bert_tutorial.html). If you are working with image data then we recommend starting with the [transfer learning with quantization](https://pytorch.org/tutorials/intermediate/quantized_transfer_learning_tutorial.html) tutorial. Then you can explore [static post training quantization](https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html). If you find that the accuracy drop with post training quantization is too high, then try [quantization aware training](https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html).", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}} {"page_content": "If you run into issues you can get community help by posting in at [discuss.pytorch.org](https://discuss.pytorch.org/), use the quantization category for quantization related issues.\n\n_This post is authored by Raghuraman Krishnamoorthi, James Reed, Min Ni, Chris Gottbrath and Seth Weidman. Special thanks to Jianyu Huang, Lingyi Liu and Haixin Liu for producing quantization metrics included in this post._\n\n### **Further reading**:\n1. PyTorch quantization presentation at Neurips: [(https://research.fb.com/wp-content/uploads/2019/12/2.-Quantization.pptx)](https://research.fb.com/wp-content/uploads/2019/12/2.-Quantization.pptx)\n2. Quantized Tensors [(https://github.com/pytorch/pytorch/wiki/\nIntroducing-Quantized-Tensor)](https://github.com/pytorch/pytorch/wiki/Introducing-Quantized-Tensor)\n3. Quantization RFC on Github [(https://github.com/pytorch/pytorch/\nissues/18318)](https://github.com/pytorch/pytorch/issues/18318)", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: \"Scaling PyTorch FSDP for Training Foundation Models on IBM Cloud\"\nauthor: Linsong Chu, Less Wright, Hamid Shojanazeri, Sophia Wen, Raghu Ganti, Geeta Chauhan\nfeatured-img: \"/assets/images/scaling-pytorch-fsdp-image1-IBM_scaling_FSDP_visual_new.png\"\n---\n\nLarge model training using a cloud native approach is of growing interest for many enterprises given the emergence and success of [foundation models](https://research.ibm.com/blog/what-are-foundation-models). 
Some AI practitioners may assume that the only way they can achieve high GPU utilization for distributed training jobs is to run them on HPC systems, such as those inter-connected with Infiniband and may not consider Ethernet connected systems. We demonstrate how the latest distributed training technique, Fully Sharded Data Parallel (FSDP) from PyTorch, successfully scales to models of size 10B+ parameters using commodity Ethernet networking in IBM Cloud.\n\n## PyTorch FSDP Scaling", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}} -{"page_content": "As models get larger, the standard techniques for data parallel training work only if the GPU can hold a full replica of the model, along with its training state (optimizer, activations, etc.). However, GPU memory increases have not kept up with the model size increases and new techniques for training such models have emerged (e.g., Fully Sharded Data Parallel, [DeepSpeed](https://www.deepspeed.ai/)), which allow us to efficiently distribute the model and data over multiple GPUs during training. In this blog post, we demonstrate a path to achieve remarkable scaling of model training to 64 nodes (512 GPUs) using PyTorch native FSDP APIs as we increase model sizes to 11B.\n\n### What is Fully Sharded Data Parallel?", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}} -{"page_content": "FSDP extends the distributed data parallel training (DDP) approach by sharding model parameters, gradient and optimizer states into K FSDP units, determined by using a wrapping policy. FSDP achieves large model training efficiency in terms of resources and performance by significantly reducing the memory footprint on each GPU and overlapping computation and communication.\n\nResource efficiency is achieved with memory footprint reduction by having all GPUs own a portion of each FSDP unit. To process a given FSDP unit, all GPUs share their locally owned portion via all_gather communication calls.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}} +{"page_content": "## PyTorch FSDP Scaling\n\nAs models get larger, the standard techniques for data parallel training work only if the GPU can hold a full replica of the model, along with its training state (optimizer, activations, etc.). However, GPU memory increases have not kept up with the model size increases and new techniques for training such models have emerged (e.g., Fully Sharded Data Parallel, [DeepSpeed](https://www.deepspeed.ai/)), which allow us to efficiently distribute the model and data over multiple GPUs during training. In this blog post, we demonstrate a path to achieve remarkable scaling of model training to 64 nodes (512 GPUs) using PyTorch native FSDP APIs as we increase model sizes to 11B.\n\n### What is Fully Sharded Data Parallel?", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}} +{"page_content": "### What is Fully Sharded Data Parallel?\n\nFSDP extends the distributed data parallel training (DDP) approach by sharding model parameters, gradient and optimizer states into K FSDP units, determined by using a wrapping policy. 
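As a rough sketch of what such a wrapping policy looks like in code (the transformer block class and the `model` variable below are illustrative assumptions, not the exact experiment configuration):\n\n```python\nimport functools\nimport torch\nfrom torch.distributed.fsdp import FullyShardedDataParallel as FSDP\nfrom torch.distributed.fsdp.wrap import transformer_auto_wrap_policy\nfrom transformers.models.t5.modeling_t5 import T5Block  # assumed transformer block class\n\n# Assumes torch.distributed.init_process_group(...) has been called and that\n# `model` is a T5-style network on the current CUDA device.\nwrap_policy = functools.partial(\n    transformer_auto_wrap_policy,\n    transformer_layer_cls={T5Block},  # one FSDP unit per transformer block\n)\nsharded_model = FSDP(\n    model,\n    auto_wrap_policy=wrap_policy,\n    device_id=torch.cuda.current_device(),\n)\n```\n\n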
FSDP achieves large model training efficiency in terms of resources and performance by significantly reducing the memory footprint on each GPU and overlapping computation and communication.\n\nResource efficiency is achieved with memory footprint reduction by having all GPUs own a portion of each FSDP unit. To process a given FSDP unit, all GPUs share their locally owned portion via all_gather communication calls.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}} {"page_content": "Performance efficiency is accomplished by overlapping all_gather communication calls for upcoming FSDP units with computation of the current FSDP unit. Once the current FSDP unit has been processed, the non-locally owned parameters are dropped, freeing memory for the upcoming FSDP units. This process achieves training efficiency by the overlap of computation and communication, while also reducing the peak memory needed by each GPU.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}} {"page_content": "In what follows, we demonstrate how FSDP allows us to keep hundreds of GPUs highly utilized throughout a distributed training job, while running over standard Ethernet networking (system description towards the end of the blog). We chose the T5 architecture for our experiments and leveraged the code from the [FSDP workshop](https://github.com/pytorch/workshops/tree/master/FSDP_Workshop). In each of our experiments, we start with a single node experiment to create a baseline and report the metric seconds/iteration normalized by the batch size as well as compute the teraflops based on the [Megatron-LM paper](https://cs.stanford.edu/~matei/papers/2021/sc_megatron_lm.pdf) (see Appendix for details of teraflop computation for T5). Our experiments aim to maximize the batch size (while avoiding cudaMalloc retries) to take full advantage of overlap in computation and communications, as discussed below. Scaling is defined as the ratio of the seconds/iteration normalized by batch size for N nodes versus a single node, representing how well we can utilize the additional GPUs as more nodes are added.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}} {"page_content": "### Experimental Results\n\nOur first set of experiments using the T5-3B configuration (mixed precision with BF16, activation checkpointing, and transformer wrapping policy) demonstrated scaling efficiency of 95% as we increased the number of GPUs from 8 to 512 (1 to 64 nodes, respectively). We achieved these results without any modifications to the existing FSDP APIs. 
We observed that, for this scale, over Ethernet based network, there is sufficient bandwidth to enable continuous overlap of communication and computation.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}} @@ -879,16 +881,16 @@ {"page_content": "---\nlayout: blog_detail\ntitle: \"Empowering PyTorch on Intel\u00ae Xeon\u00ae Scalable processors with Bfloat16\"\nauthor: Mingfei Ma (Intel), Vitaly Fedyunin (Meta), Wei Wei (Meta)\nfeatured-img: '\\assets\\images\\empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16.png'\n---\n\n## Overview\n\nRecent years, the growing complexity of AI models have been posing requirements on hardware for more and more compute capability. Reduced precision numeric format has been proposed to address this problem. Bfloat16 is a custom 16-bit floating point format for AI which consists of one sign bit, eight exponent bits, and seven mantissa bits. With the same dynamic range as float32, bfloat16 doesn\u2019t require a special handling such as loss scaling. Therefore, bfloat16 is a drop-in replacement for float32 when running deep neural networks for both inference and training.", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}} {"page_content": "The 3rd Gen Intel\u00ae Xeon\u00ae Scalable processor (codenamed Cooper Lake), is the first general purpose x86 CPU with native bfloat16 support. Three new bfloat16 instructions were introduced in Intel\u00ae Advanced Vector Extensions-512 (Intel\u00ae AVX-512): VCVTNE2PS2BF16, VCVTNEPS2BF16, and VDPBF16PS. The first two instructions perform conversion from float32 to bfloat16, and the last one performs a dot product of bfloat16 pairs. Bfloat16 theoretical compute throughput is doubled over float32 on Cooper Lake. On the next generation of Intel\u00ae Xeon\u00ae Scalable Processors, bfloat16 compute throughput will be further enhanced through Advanced Matrix Extensions (Intel\u00ae AMX) instruction set extension.", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}} {"page_content": "Intel and Meta previously collaborated to enable bfloat16 on PyTorch, and the related work was published in an earlier [blog](https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Intel-and-Facebook-Accelerate-PyTorch-Performance-with-3rd-Gen/post/1335659) during launch of Cooper Lake. In that blog, we introduced the hardware advancement for native bfloat16 support and showcased a performance boost of 1.4x to 1.6x of bfloat16 over float32 from DLRM, ResNet-50 and ResNext-101-32x4d.\n\nIn this blog, we will introduce the latest software enhancement on bfloat16 in PyTorch 1.12, which would apply to much broader scope of user scenarios and showcase even higher performance boost.\n\n## Native Level Optimization on Bfloat16", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}} -{"page_content": "On PyTorch CPU bfloat16 path, the compute intensive operators, e.g., convolution, linear and bmm, use oneDNN (oneAPI Deep Neural Network Library) to achieve optimal performance on Intel CPUs with AVX512_BF16 or AMX support. 
The other operators, such as tensor operators and neural network operators, are optimized at PyTorch native level. We have enlarged bfloat16 kernel level optimizations to majority of operators on dense tensors, both inference and training applicable (sparse tensor bfloat16 support will be covered in future work), specifically:", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}} +{"page_content": "## Native Level Optimization on Bfloat16\n\nOn PyTorch CPU bfloat16 path, the compute intensive operators, e.g., convolution, linear and bmm, use oneDNN (oneAPI Deep Neural Network Library) to achieve optimal performance on Intel CPUs with AVX512_BF16 or AMX support. The other operators, such as tensor operators and neural network operators, are optimized at PyTorch native level. We have enlarged bfloat16 kernel level optimizations to majority of operators on dense tensors, both inference and training applicable (sparse tensor bfloat16 support will be covered in future work), specifically:", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}} {"page_content": "- **Bfloat16 vectorization**: Bfloat16 is stored as unsigned 16-bit integer, which requires it to be casted to float32 for arithmetic operations such as add, mul, etc. Specifically, each bfloat16 vector will be converted to two float32 vectors, processed accordingly and then converted back. While for non-arithmetic operations such as cat, copy, etc., it is a straight memory copy and no data type conversion will be involved.\n- **Bfloat16 reduction**: Reduction on bfloat16 data uses float32 as accumulation type to guarantee numerical stability, e.g., sum, BatchNorm2d, MaxPool2d, etc.\n- **Channels Last optimization**: For vision models, Channels Last is the preferable memory format over Channels First from performance perspective. We have implemented fully optimized CPU kernels for all the commonly used CV modules on channels last memory format, taking care of both float32 and bfloat16.\n\n## Run Bfloat16 with Auto Mixed Precision", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}} -{"page_content": "To run model on bfloat16, typically user can either explicitly convert the data and model to bfloat16, for example:\n\n```console\n# with explicit conversion\ninput = input.to(dtype=torch.bfloat16)\nmodel = model.to(dtype=torch.bfloat16)\n```\n\nor utilize torch.amp (Automatic Mixed Precision) package. The autocast instance serves as context managers or decorators that allow regions of your script to run in mixed precision, for example:\n\n```console\n# with AMP\nwith torch.autocast(device_type=\"cpu\", dtype=torch.bfloat16):\n output = model(input)\n```\n\nGenerally, the explicit conversion approach and AMP approach have similar performance. 
Even though, we recommend run bfloat16 models with AMP, because:\n\n- **Better user experience with automatic fallback**: If your script includes operators that don\u2019t have bfloat16 support, autocast will implicitly convert them back to float32 while the explicit converted model will give a runtime error.", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}} +{"page_content": "## Run Bfloat16 with Auto Mixed Precision\n\nTo run model on bfloat16, typically user can either explicitly convert the data and model to bfloat16, for example:\n\n```console\n# with explicit conversion\ninput = input.to(dtype=torch.bfloat16)\nmodel = model.to(dtype=torch.bfloat16)\n```\n\nor utilize torch.amp (Automatic Mixed Precision) package. The autocast instance serves as context managers or decorators that allow regions of your script to run in mixed precision, for example:\n\n```console\n# with AMP\nwith torch.autocast(device_type=\"cpu\", dtype=torch.bfloat16):\n output = model(input)\n```\n\nGenerally, the explicit conversion approach and AMP approach have similar performance. Even though, we recommend run bfloat16 models with AMP, because:\n\n- **Better user experience with automatic fallback**: If your script includes operators that don\u2019t have bfloat16 support, autocast will implicitly convert them back to float32 while the explicit converted model will give a runtime error.", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}} {"page_content": "- **Mixed data type for activation and parameters**: Unlike the explicit conversion which converts all the model parameters to bfloat16, AMP mode will run in mixed data type. To be specific, input/output will be kept in bfloat16 while parameters, e.g., weight/bias, will be kept in float32. The mixed data type of activation and parameters will help improve performance while maintaining the accuracy.\n\n## Performance Gains\n\nWe benchmarked inference performance of TorchVision models on Intel\u00ae Xeon\u00ae Platinum 8380H CPU @ 2.90GHz (codenamed Cooper Lake), single instance per socket (batch size = 2 x number of physical cores). Results show that bfloat16 has 1.4x to 2.2x performance gain over float32.\n\n
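As a rough sketch of this kind of measurement (the model choice and batch size are illustrative rather than the exact benchmark configuration):\n\n```python\nimport torch\nimport torchvision\n\nmodel = torchvision.models.resnet50(pretrained=True).eval()\nmodel = model.to(memory_format=torch.channels_last)\n\nx = torch.randn(64, 3, 224, 224).to(memory_format=torch.channels_last)\n\n# bfloat16 inference on CPU through AMP\nwith torch.no_grad(), torch.autocast(device_type='cpu', dtype=torch.bfloat16):\n    y = model(x)\nprint(y.dtype)  # torch.bfloat16\n```\n\n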
\n\n## The performance boost of bfloat16 over float32 primarily comes from 3 aspects:", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}} {"page_content": "- The compute intensive operators take advantage of the new bfloat16 native instruction VDPBF16PS which doubles the hardware compute throughput.\n- Bfloat16 have only half the memory footprint of float32, so theoretically the memory bandwidth intensive operators will be twice faster.\n- On Channels Last, we intentionally keep the same parallelization scheme for all the memory format aware operators (can\u2019t do this on Channels First though), which increases the data locality when passing each layer\u2019s output to the next. Basically, it keeps the data closer to CPU cores while data would reside in cache anyway. And bfloat16 will have a higher cache hit rate compared with float32 in such scenarios due to smaller memory footprint.\n\n## Conclusion & Future Work", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}} -{"page_content": "In this blog, we introduced recent software optimizations on bfloat16 introduced in PyTorch 1.12. Results on the 3rd Gen Intel\u00ae Xeon\u00ae Scalable processor show that bfloat16 has 1.4x to 2.2x performance gain over float32 on the TorchVision models. Further improvement is expected on the next generation of Intel\u00ae Xeon\u00ae Scalable Processors with AMX instruction support. Though the performance number for this blog is collected with TorchVision models, the benefit is broad across all topologies. And we will continue to extend the bfloat16 optimization effort to a broader scope in the future!\n\n## Acknowledgement\n\nThe results presented in this blog is a joint effort of Meta and Intel PyTorch team. Special thanks to Vitaly Fedyunin and Wei Wei from Meta who spent precious time and gave substantial assistance! Together we made one more step on the path of improving the PyTorch CPU eco system.\n\n## Reference", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}} +{"page_content": "## Conclusion & Future Work\n\nIn this blog, we introduced recent software optimizations on bfloat16 introduced in PyTorch 1.12. Results on the 3rd Gen Intel\u00ae Xeon\u00ae Scalable processor show that bfloat16 has 1.4x to 2.2x performance gain over float32 on the TorchVision models. Further improvement is expected on the next generation of Intel\u00ae Xeon\u00ae Scalable Processors with AMX instruction support. Though the performance number for this blog is collected with TorchVision models, the benefit is broad across all topologies. And we will continue to extend the bfloat16 optimization effort to a broader scope in the future!\n\n## Acknowledgement\n\nThe results presented in this blog is a joint effort of Meta and Intel PyTorch team. Special thanks to Vitaly Fedyunin and Wei Wei from Meta who spent precious time and gave substantial assistance! 
Together we made one more step on the path of improving the PyTorch CPU eco system.\n\n## Reference", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}} {"page_content": "## Reference\n\n- [The bfloat16 numerical format](https://cloud.google.com/tpu/docs/bfloat16?hl=en)\n- [https://pytorch.org/docs/master/amp.html#torch.autocast](https://pytorch.org/docs/master/amp.html#torch.autocast)\n- [Intel and Facebook Accelerate PyTorch Performance with 3rd Gen Intel\u00ae Xeon\u00ae Processors and Intel\u00ae Deep Learning Boost\u2019s new BFloat16 capability](https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Intel-and-Facebook-Accelerate-PyTorch-Performance-with-3rd-Gen/post/1335659)", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: \"Announcing PyTorch Conference 2022\"\nauthor:\nfeatured-img: \"/assets/images/pytorch-conference-2022.png\"\n---\n\nWe are excited to announce that the PyTorch Conference returns in-person as a satellite event to [NeurlPS](https://l.workplace.com/l.php?u=https%3A%2F%2Fnips.cc%2F&h=AT3cdRwSEhyuNXpH2ptWjk-KxMxcceaYeTfflT6PEezDQ_zeUxRv1gjX7GhTQBgvZxFAR0wlSBwuhpipdMjUknMnhY5oJ5C4HjLNO40-12UnoeYALriwrvdxGfgigo8KYlWu_gRIQwlO-2r0wTnNft0whoSaOdVAxw&__tn__=-UK-R&c[0]=AT3z6QRLu8Uw48lKQ_P6FFq7ncHfjsfI16OGZvWO9kALatCY4sZcMjNzR7a4OiOG25RKVHpDX0TGutZHyM_R8Kl2s71Y3DEbq5QccmUVaSzCbcMUSc5Ms2zXHoeGxUlw1XirihAydPsX4Y1OmF6GRjqH8YFTNTFQRN3I8j2SFhR8LEUDxDmfnZ8Q7c2hXi0HeGc) (Neural Information Processing Systems) in New Orleans on Dec. 2nd.\n\n
", "metadata": {"source": "https://pytorch.org/blog/announcing-pytorch-conference-2022/", "category": "pytorch blogs"}} {"page_content": "We changed the name from PyTorch Developer Day to PyTorch Conference to signify the turning of a new chapter as we look to the future of PyTorch, encompassing the entire PyTorch Community. This conference will bring together leading researchers, academics and developers from the Machine Learning (ML) and Deep Learning (DL) communities to join a multiple set of talks and a poster session; covering new software releases on [PyTorch](https://pytorch.org/), use cases in academia and industry, as well as ML/DL development and production trends.\n\n### EVENT OVERVIEW\n\nWhen: Dec 2nd, 2022 (In-Person and Virtual)\n\nWhere: New Orleans, Louisiana (USA) | *Virtual option as well*\n\n### SCHEDULE\n\nAll times in Central Standard.\n\n8:00-9:00 am   Registration/Check in\n\n9:00-11:20 am   Keynote & Technical Talks\n\n11:30-1:00 pm   Lunch\n\n1:00-3:00 pm   Poster Session & Breakouts\n\n3:00-4:00 pm   Community/Partner Talks\n\n4:00-5:00 pm   Panel Discussion", "metadata": {"source": "https://pytorch.org/blog/announcing-pytorch-conference-2022/", "category": "pytorch blogs"}} -{"page_content": "Agenda subject to change.\n\nAll talks will be livestreamed and available to the public. The in-person event will be by invitation only as space is limited. If you\u2019d like to apply to attend in person, please submit all requests [here](https://pytorchconference22.splashthat.com/).\n\n### LINKS\n\n- [Submit Content for Consideration by Sept. 30th](https://docs.google.com/forms/d/121ptOuhqhmcPev9g5Zt2Ffl-NtB_oeyFk5CWjumUVLQ/edit)\n- [Livestream event page](https://www.facebook.com/events/1562940847455759)\n- [Apply for an invitation to the in-person event](https://pytorchconference22.splashthat.com/)", "metadata": {"source": "https://pytorch.org/blog/announcing-pytorch-conference-2022/", "category": "pytorch blogs"}} +{"page_content": "4:00-5:00 pm   Panel Discussion\n\nAgenda subject to change.\n\nAll talks will be livestreamed and available to the public. The in-person event will be by invitation only as space is limited. If you\u2019d like to apply to attend in person, please submit all requests [here](https://pytorchconference22.splashthat.com/).\n\n### LINKS\n\n- [Submit Content for Consideration by Sept. 30th](https://docs.google.com/forms/d/121ptOuhqhmcPev9g5Zt2Ffl-NtB_oeyFk5CWjumUVLQ/edit)\n- [Livestream event page](https://www.facebook.com/events/1562940847455759)\n- [Apply for an invitation to the in-person event](https://pytorchconference22.splashthat.com/)", "metadata": {"source": "https://pytorch.org/blog/announcing-pytorch-conference-2022/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'New PyTorch library releases including TorchVision Mobile, TorchAudio I/O, and more'\nauthor: Team PyTorch \n---\n\nToday, we are announcing updates to a number of PyTorch libraries, alongside the [PyTorch 1.8 release](https://pytorch.org/blog/pytorch-1.8-released). The updates include new releases for the domain libraries including TorchVision, TorchText and TorchAudio as well as new version of TorchCSPRNG. 
These releases include a number of new features and improvements and, along with the PyTorch 1.8 release, provide a broad set of updates for the PyTorch community to build on and leverage.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "Some highlights include:\n* **TorchVision** - Added support for PyTorch Mobile including [Detectron2Go](https://ai.facebook.com/blog/d2go-brings-detectron2-to-mobile) (D2Go), auto-augmentation of data during training, on the fly type conversion, and [AMP autocasting](https://pytorch.org/docs/stable/amp.html). \n* **TorchAudio** - Major improvements to I/O, including defaulting to sox_io backend and file-like object support. Added Kaldi Pitch feature and support for CMake based build allowing TorchAudio to better support no-Python environments.\n* **TorchText** - Updated the dataset loading API to be compatible with standard PyTorch data loading utilities.\n* **TorchCSPRNG** - Support for cryptographically secure pseudorandom number generators for PyTorch is now stable with new APIs for AES128 ECB/CTR and CUDA support on Windows.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "Please note that, starting in PyTorch 1.6, features are classified as Stable, Beta, and Prototype. Prototype features are not included as part of the binary distribution and are instead available through either building from source, using nightlies or via compiler flag. You can see the detailed announcement [here](https://pytorch.org/blog/pytorch-feature-classification-changes/).\n\n\n# TorchVision 0.9.0\n### [Stable] TorchVision Mobile: Operators, Android Binaries, and Tutorial\nWe are excited to announce the first on-device support and binaries for a PyTorch domain library. We have seen significant appetite in both research and industry for on-device vision support to allow low latency, privacy friendly, and resource efficient mobile vision experiences. You can follow this [new tutorial](https://github.com/pytorch/android-demo-app/tree/master/D2Go) to build your own Android object detection app using TorchVision operators, D2Go, or your own custom operators and model.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}} @@ -915,8 +917,8 @@ {"page_content": "- [TorchScript jit.script](https://pytorch.org/docs/stable/generated/torch.jit.script.html#torch.jit.script)\n - This system directly parses sections of an annotated python script to translate into its own representation what the user is doing. This system then applies its own version of auto differentiation to the graph, and passes sections of the subsequent forward and backwards graphs to nvFuser for optimization.\n- [FuncTorch](https://pytorch.org/functorch/stable/generated/functorch.compile.memory_efficient_fusion.html#functorch.compile.memory_efficient_fusion)\n - This system doesn\u2019t directly look at the user python script, instead inserting a mechanism that captures PyTorch operations as they\u2019re being run. We refer to this type of capture system as \u201ctrace program acquisition\u201d, since we\u2019re tracing what has been performed. 
FuncTorch doesn\u2019t perform its own auto differentiation \u2013 it simply traces PyTorch\u2019s autograd directly to get backward graphs.\n- [TorchDynamo](https://github.com/pytorch/torchdynamo)\n - TorchDynamo is another program acquisition mechanism built on top of FuncTorch. TorchDynamo parses the Python bytecode produced from the user script in order to select portions to trace with FuncTorch. The benefit of TorchDynamo is that it\u2019s able to apply decorators to a user\u2019s script, effectively isolating what should be sent to FuncTorch, making it easier for FuncTorch to successfully trace complex Python scripts.", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}} {"page_content": "These systems are available for users to interact with directly while nvFuser automatically and seamlessly optimizes performance critical regions of the user\u2019s code. These systems automatically send parsed user programs to nvFuser so nvFuser can:\n\n1. Analyze the operations being run on GPUs\n2. Plan parallelization and optimization strategies for those operations\n3. Apply those strategies in generated GPU code\n4. Runtime-compile the generated optimized GPU functions\n5. Execute those CUDA kernels on subsequent iterations", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}} {"page_content": "It is important to note nvFuser does not yet support all PyTorch operations, and there are still some scenarios that are actively being improved in nvFuser that are discussed herein. However, nvFuser does support many DL performance critical operations today, and the number of supported operations will grow in subsequent PyTorch releases. nvFuser is capable of generating highly specialized and optimized GPU functions for the operations it does have support for. This means nvFuser is able to power new PyTorch systems like TorchDynamo and FuncTorch to combine the flexibility PyTorch is known for with unbeatable performance.\n\n## nvFuser Performance", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}} -{"page_content": "Before getting into how to use nvFuser, in this section we\u2019ll show the improvements in training speed nvFuser provides for a variety of models from the [HuggingFace Transformers](https://github.com/huggingface/transformers) and [PyTorch Image Models (TIMM)](https://github.com/rwightman/pytorch-image-models) repositories and we will discuss current gaps in nvFuser performance that are under development today. All performance numbers in this section were taken using an NVIDIA A100 40GB GPU, and used either FuncTorch alone or Functorch with TorchDynamo.\n\n## HuggingFace Transformer Benchmarks\n\nnvFuser can dramatically accelerate training of HuggingFace Transformers when combined with another important optimization (more on that in a moment). Performance improvements can be seen in Figure 1 to range between 1.12x and 1.50x across a subset of popular HuggingFace Transformer networks.\n\n
", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}} -{"page_content": "
\nFigure 1: Performance gains of 8 training scenarios from HuggingFace\u2019s Transformer repository. First performance boost in the dark green is due to replacing the optimizer with an NVIDIA Apex fused AdamW optimizer. The light green is due to adding nvFuser. Models were run with batch size and sequence lengths of [64, 128], [8, 512], [2, 1024], [64, 128], [8, 512], [8, src_seql=512, tgt_seql=128], [8, src_seql=1024, tgt_seql=128], and [8, 512] respectively. All networks were run with Automatic Mixed Precision (AMP) enabled with dtype=float16.\n
", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}} +{"page_content": "## nvFuser Performance\n\nBefore getting into how to use nvFuser, in this section we\u2019ll show the improvements in training speed nvFuser provides for a variety of models from the [HuggingFace Transformers](https://github.com/huggingface/transformers) and [PyTorch Image Models (TIMM)](https://github.com/rwightman/pytorch-image-models) repositories and we will discuss current gaps in nvFuser performance that are under development today. All performance numbers in this section were taken using an NVIDIA A100 40GB GPU, and used either FuncTorch alone or Functorch with TorchDynamo.\n\n## HuggingFace Transformer Benchmarks\n\nnvFuser can dramatically accelerate training of HuggingFace Transformers when combined with another important optimization (more on that in a moment). Performance improvements can be seen in Figure 1 to range between 1.12x and 1.50x across a subset of popular HuggingFace Transformer networks.", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}} +{"page_content": "
\nFigure 1: Performance gains of 8 training scenarios from HuggingFace\u2019s Transformer repository. First performance boost in the dark green is due to replacing the optimizer with an NVIDIA Apex fused AdamW optimizer. The light green is due to adding nvFuser. Models were run with batch size and sequence lengths of [64, 128], [8, 512], [2, 1024], [64, 128], [8, 512], [8, src_seql=512, tgt_seql=128], [8, src_seql=1024, tgt_seql=128], and [8, 512] respectively. All networks were run with Automatic Mixed Precision (AMP) enabled with dtype=float16.\n
", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}} {"page_content": "While these speedups are significant, it\u2019s important to understand that nvFuser doesn\u2019t (yet) automate everything about running networks quickly. For HuggingFace Transformers, for example, it was important to use the AdamW fused optimizer from [NVIDIA\u2019s Apex repository](https://github.com/NVIDIA/apex) as the optimizer otherwise consumed a large portion of runtime. Using the fused AdamW optimizer to make the network faster exposes the next major performance bottleneck \u2014 memory bound operations. These operations are optimized by nvFuser, providing another large performance boost. With the fused optimizer and nvFuser enabled, the training speed of these networks improved between 1.12x to 1.5x.\nHuggingFace Transformer models were run with [the torch.amp module](https://pytorch.org/docs/stable/amp.html). (\u201camp\u201d stands for Automated Mixed Precision, see the [\u201cWhat Every User Should Know about Mixed Precision in PyTorch\u201d](https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/) blog post for details.) An option to use nvFuser was added to HuggingFace\u2019sTrainer. If you have [TorchDynamo installed](https://github.com/pytorch/torchdynamo#requirements-and-setup) you can activate it to enable nvFuser in HuggingFace by passing *torchdynamo = \u2018nvfuser\u2019* to the Trainer class.\nnvFuser has great support for normalization kernels and related fusions frequently found in Natural Language Processing (NLP) models, and it is recommended users try nvFuser in their NLP workloads.", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}} {"page_content": "## PyTorch Image Models (TIMM) Benchmarks\nnvFuser, can also significantly reduce the training time of TIMM networks, up to over 1.3x vs. eager PyTorch, and up to 1.44x vs. eager PyTorch when combined with the torch.amp module. Figure 1 shows nvFuser\u2019s speedup without torch.amp, and when torch.amp is used with the NHWC (\u201cchannels last\u201d) and NCHW (\u201cchannels first\u201d) formats. nvFuser is integrated in TIMM through FuncTorch tracing directly (without TorchDynamo) and can be used by adding the [--aot-autograd command line argument](https://github.com/rwightman/pytorch-image-models/commit/ca991c1fa57373286b9876aa63370fd19f5d6032) when running the TIMM benchmark or training script.\n\n
", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}} {"page_content": "
\nFigure 1: The Y-axis is the performance gain nvFuser provides over not using nvFuser. A value of 1.0 means no change in perf, 2.0 would mean nvFuser is twice as fast, 0.5 would mean nvFuser takes twice the time to run. Square markers are with float16 Automatic Mixed Precision (AMP) and channels first contiguous inputs, circle markers are float32 inputs, and triangles are with float16 AMP and channels last contiguous inputs. Missing data points are due to an error being encountered when tracing.\n
", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}} @@ -935,17 +937,18 @@ {"page_content": "## Feedback\n\nReview [PyTorch Profiler documentation](https://pytorch.org/docs/stable/profiler.html), give Profiler a try and let us know about your experience. Provide your feedback on [PyTorch Discussion Forum](https://discuss.pytorch.org/) or file issues on [PyTorch GitHub](https://github.com/pytorch/pytorch).", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: \"How Disney Improved Activity Recognition Through Multimodal Approaches with PyTorch\"\nauthor: Monica Alfaro, Albert Aparicio, Francesc Guitart, Marc Junyent, Pablo Pernias, Marcel Porta, and Miquel \u00c0ngel Farr\u00e9 (former Senior Technology Manager)\nfeatured-img: 'assets/images/disney_media_logo.jpg'\n---\n\n# Introduction\n\nAmong the many things Disney Media & Entertainment Distribution (DMED) is responsible for, is the management and distribution of a huge array of media assets including news, sports, entertainment and features, episodic programs, marketing and advertising and more.\n\n\n\n
\n\n\n\nOur team focuses on media annotation as part of DMED Technology\u2019s content platforms group. In our day-to-day work, we automatically analyze a variety of content that constantly challenges the efficiency of our machine learning workflow and the accuracy of our models.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "Several of our colleagues recently discussed the workflow efficiencies that we achieved by switching to an end-to-end video analysis pipeline using PyTorch, as well as how we approach animated character recognition. We invite you to read more about both in this previous post.\n\nWhile the conversion to an end-to-end PyTorch pipeline is a solution that any company might benefit from, animated character recognition was a uniquely-Disney concept and solution.\n\nIn this article we will focus on activity recognition, which is a general challenge across industries \u2014 but with some specific opportunities when leveraged in the media production field, because we can combine audio, video, and subtitles to provide a solution.\n\n# Experimenting with Multimodality", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}} -{"page_content": "Working on a multimodal problem adds more complexity to the usual training pipelines. Having multiple information modes for each example means that the multimodal pipeline has to have specific implementations to process each mode in the dataset. Usually after this processing step, the pipeline has to merge or fuse the outputs.\n\nOur initial experiments in multimodality were completed using the [MMF framework](https://github.com/facebookresearch/mmf). MMF is a modular framework for vision and language multimodal research. MMF contains reference implementations of state-of-the-art vision and language models and has also powered multiple research projects at Meta AI Research (as seen in this [poster](https://assets.pytorch.org/pted2021/posters/A3.png) presented in PyTorch Ecosystem Day 2020). Along with the recent release of TorchMultimodal, a PyTorch library for training state-of-the-art multimodal models at scale, MMF highlights the growing interest in Multimodal understanding.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}} -{"page_content": "MMF tackles this complexity with modular management of all the elements of the pipeline through a wide set of different implementations for specific modules, ranging from the processing of the modalities to the fusion of the processed information.\n\nIn our scenario, MMF was a great entry point to experiment with multimodality. 
It allowed us to iterate quickly by combining audio, video and closed captioning and experiment at different levels of scale with certain multimodal models, shifting from a single GPU to TPU Pods.\n\n# Multimodal Transformers\n\nWith a workbench based on MMF, our initial model was based on a concatenation of features from each modality evolving to a pipeline that included a Transformer-based fusion module to combine the different input modes.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}} -{"page_content": "Specifically, we made use of the fusion module called MMFTransformer, developed in collaboration with the Meta AI Research team. This is an implementation based on [VisualBERT](https://arxiv.org/abs/1908.03557) for which the necessary modifications were added to be able to work with text, audio and video.\n\nDespite having decent results with the out-of-box implementation MMFTransformer, we were still far from our goal, and the Transformers-based models required more data than we had available.\n\n# Searching for less data-hungry solutions\n\nSearching for less data-hungry solutions, our team started studying [MLP-Mixer](https://arxiv.org/abs/2105.01601). This new architecture has been proposed by the Google Brain team and it provides an alternative to well established de facto architectures like convolutions or self-attention for computer vision tasks.\n\n## MLP-Mixer", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}} -{"page_content": "## MLP-Mixer\n\nThe core idea behind mixed variations consists of replacing the convolutions or self-attention mechanisms used in transformers with Multilayer Perceptrons. This change in architecture favors the performance of the model in high data regimes (especially with respect to the Transformers), while also opening some questions regarding the inductive biases hidden in the convolutions and the self-attention layers.\n\nThose proposals perform great in solving image classification tasks by splitting the image in chunks, flattening those chunks into 1D vectors and passing them through a sequence of Mixer Layers.\n\n\n\n
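As an illustration of that idea, a single Mixer layer can be sketched as follows (dimensions are arbitrary and this is a simplification of the published architecture):\n\n```python\nimport torch\nfrom torch import nn\n\nclass MixerBlock(nn.Module):\n    # One Mixer layer: an MLP applied across tokens, then an MLP applied across channels.\n    def __init__(self, num_tokens: int, dim: int, hidden: int = 256):\n        super().__init__()\n        self.norm1 = nn.LayerNorm(dim)\n        self.token_mlp = nn.Sequential(nn.Linear(num_tokens, hidden), nn.GELU(), nn.Linear(hidden, num_tokens))\n        self.norm2 = nn.LayerNorm(dim)\n        self.channel_mlp = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))\n\n    def forward(self, x):  # x: (batch, tokens, dim)\n        y = self.norm1(x).transpose(1, 2)         # mix information across tokens\n        x = x + self.token_mlp(y).transpose(1, 2)\n        x = x + self.channel_mlp(self.norm2(x))   # mix information across channels\n        return x\n\nx = torch.randn(8, 49, 512)  # e.g. 49 image chunks with 512-dim embeddings\nprint(MixerBlock(num_tokens=49, dim=512)(x).shape)  # torch.Size([8, 49, 512])\n```\n\n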

\n \n

", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}} -{"page_content": "Inspired by the advantages of Mixer based architectures, our team searched for parallelisms with the type of problems we try to solve in video classification: specifically, instead of a single image, we have a set of frames that need to be classified, along with audio and closed captioning in the shape of new modalities.\n\n# Activity Recognition reinterpreting the MLP-Mixer\n\nOur proposal takes the core idea of the [MLP-Mixer](https://arxiv.org/abs/2105.01601) \u2014 using multiple multi-layer perceptrons on a sequence and transposed sequence and extends it into a Multi Modal framework that allows us to process video, audio & text with the same architecture.\n\nFor each of the modalities, we use different extractors that will provide embeddings describing the content. Given the embeddings of each modality, the MLP-Mixer architecture solves the problem of deciding which of the modalities might be the most important, while also weighing how much each modality contributes to the final labeling.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}} -{"page_content": "For example, when it comes to detecting laughs, sometimes the key information is in audio or in the frames, and in some of the cases we have a strong signal in the closed caption.\n\nWe tried processing each frame separately with a ResNet34 and getting a sequence of embeddings and by using a video-specific model called R3D, both pre-trained on ImageNet and Kinetics400 respectively.\n\n\n\n

\n \n

\n\n\n\nTo process the audio, we use the pretrained ResNet34, and we remove the final layers to be able to extract 2D embeddings from the audio spectrograms (for 224x224 images we end up with 7x7 embeddings).\n\n\n\n

\n \n

\n\n\n\nFor closed captioning, we are using a pre-trained BERT-large, with all layers frozen, except for the Embeddings & LayerNorms.\n\n\n\n

\n \n

", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}} -{"page_content": "Once we have extracted the embedding from each modality, we concatenate them into a single sequence and pass it through a set of MLP-Mixer blocks; next we use average pooling & a classification head to get predictions.\n\n\n\n

\n \n

\n\n\n\nOur experiments have been performed on a custom, manually labeled dataset for activity recognition with 15 classes, which we know from experiments are hard and cannot all be predicted accurately using a single modality.\n\nThese experiments have shown a significant increase in performance using our approach, especially in a low/mid-data regime (75K training samples).\n\nWhen it comes to using only Text and Audio, our experiments showed a 15 percent improvement in accuracy over using a classifier on top of the features extracted by state-of-the-art backbones.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}} -{"page_content": "Using Text, Audio and Video we have seen a 17 percent improvement in accuracy over using Meta AIFacebook\u2019s MMF Framework, which uses a VisualBERT-like model to combine modalities using more powerful state of the art backbones.\n\nCurrently, we extended the initial model to cover up to 55 activity classes and 45 event classes. One of the challenges we expect to improve upon in the future is to include all activities and events, even those that are less frequent.\n\n## Interpreting the MLP-Mixer mode combinations \n\nAn MLP-Mixer is a concatenation of MultiLayer Perceptrons. This can be, very roughly, approximated to a linear operation, in the sense that, once trained, the weights are fixed and the input will directly affect the output.\n\nOnce we assume that approximation, we also assume that for an input consisting of NxM numbers, we could find a NxM matrix that (when multiplied elementwise) could approximate the predictions of the MLP-Mixer for a class.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}} -{"page_content": "

\n \n

\n\n\n\nWe will call this matrix a stencil, and if we have access to it, we can find what parts of the input embeddings are responsible for a specific prediction.\n\nYou can think of it as a punch card with holes in specific positions. Only information in those positions will pass and contribute to a specific prediction. So we can measure the intensity of the input at those positions.\n\n\n\n

\n \n

\n\n\n\nOf course, this is an oversimplification, and there won't exist a unique stencil that perfectly represents all of the contributions of the input to a class (otherwise that would mean that the problem could be solved linearly). So this should be used for visualization purposes only, not as an accurate predictor.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}} -{"page_content": "Once we have a set of stencils for each class, we can effortlessly measure input contribution without relying on any external visualization techniques.\n\nTo find a stencil, we can start from a \"random noise\" stencil and optimize it to maximize the activations for a specific class by just back-propagating through the MLP-Mixer.\n\n\n\n

\n \n

\n\n\n\nBy doing this we can end up with many valid stencils, and we can reduce them to a few by using K-means to cluster them into similar stencils and averaging each cluster.\n\n# Using the Mixer to get the best of each world\n\nMLP-Mixer, used as an image classification model without convolutional layers, requires a lot of data, since the lack of inductive bias \u2013 one of the model's good points overall \u2013 is a weakness when it comes to working in low data domains.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}} -{"page_content": "When used as a way to combine information previously extracted by large pretrained backbones (as opposed to being used as a full end-to-end solution), they shine. The Mixer\u2019s strength lies in finding temporal or structural coherence between different inputs. For example, in video-related tasks we could extract embeddings from the frames using a powerful, pretrained model that understands what is going on at frame level and use the mixer to make sense of it in a sequential manner.\n\nThis way of using the Mixer allows us to work with limited amounts of data and still get better results than what was achieved with Transformers. This is because Mixers seem to be more stable during training and seem to pay attention to all the inputs, while Transformers tend to collapse and pay attention only to some modalities/parts of the sequence.\n\nAcknowledgements: We would like to thank the Meta AI Research and Partner Engineering teams for this collaboration.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}} +{"page_content": "# Experimenting with Multimodality\n\nWorking on a multimodal problem adds more complexity to the usual training pipelines. Having multiple information modes for each example means that the multimodal pipeline has to have specific implementations to process each mode in the dataset. Usually after this processing step, the pipeline has to merge or fuse the outputs.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}} +{"page_content": "Our initial experiments in multimodality were completed using the [MMF framework](https://github.com/facebookresearch/mmf). MMF is a modular framework for vision and language multimodal research. MMF contains reference implementations of state-of-the-art vision and language models and has also powered multiple research projects at Meta AI Research (as seen in this [poster](https://assets.pytorch.org/pted2021/posters/A3.png) presented in PyTorch Ecosystem Day 2020). Along with the recent release of TorchMultimodal, a PyTorch library for training state-of-the-art multimodal models at scale, MMF highlights the growing interest in Multimodal understanding.\n\nMMF tackles this complexity with modular management of all the elements of the pipeline through a wide set of different implementations for specific modules, ranging from the processing of the modalities to the fusion of the processed information.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}} +{"page_content": "In our scenario, MMF was a great entry point to experiment with multimodality. 
It allowed us to iterate quickly by combining audio, video and closed captioning and experiment at different levels of scale with certain multimodal models, shifting from a single GPU to TPU Pods.\n\n# Multimodal Transformers\n\nWith a workbench based on MMF, our initial model was based on a concatenation of features from each modality evolving to a pipeline that included a Transformer-based fusion module to combine the different input modes.\n\nSpecifically, we made use of the fusion module called MMFTransformer, developed in collaboration with the Meta AI Research team. This is an implementation based on [VisualBERT](https://arxiv.org/abs/1908.03557) for which the necessary modifications were added to be able to work with text, audio and video.\n\nDespite having decent results with the out-of-box implementation MMFTransformer, we were still far from our goal, and the Transformers-based models required more data than we had available.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}} +{"page_content": "# Searching for less data-hungry solutions\n\nSearching for less data-hungry solutions, our team started studying [MLP-Mixer](https://arxiv.org/abs/2105.01601). This new architecture has been proposed by the Google Brain team and it provides an alternative to well established de facto architectures like convolutions or self-attention for computer vision tasks.\n\n## MLP-Mixer\n\nThe core idea behind mixed variations consists of replacing the convolutions or self-attention mechanisms used in transformers with Multilayer Perceptrons. This change in architecture favors the performance of the model in high data regimes (especially with respect to the Transformers), while also opening some questions regarding the inductive biases hidden in the convolutions and the self-attention layers.\n\nThose proposals perform great in solving image classification tasks by splitting the image in chunks, flattening those chunks into 1D vectors and passing them through a sequence of Mixer Layers.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}} +{"page_content": "

\n \n

\n\n\n\nInspired by the advantages of Mixer based architectures, our team searched for parallelisms with the type of problems we try to solve in video classification: specifically, instead of a single image, we have a set of frames that need to be classified, along with audio and closed captioning in the shape of new modalities.\n\n# Activity Recognition reinterpreting the MLP-Mixer\n\nOur proposal takes the core idea of the [MLP-Mixer](https://arxiv.org/abs/2105.01601) \u2014 using multiple multi-layer perceptrons on a sequence and transposed sequence and extends it into a Multi Modal framework that allows us to process video, audio & text with the same architecture.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}} +{"page_content": "For each of the modalities, we use different extractors that will provide embeddings describing the content. Given the embeddings of each modality, the MLP-Mixer architecture solves the problem of deciding which of the modalities might be the most important, while also weighing how much each modality contributes to the final labeling.\n\nFor example, when it comes to detecting laughs, sometimes the key information is in audio or in the frames, and in some of the cases we have a strong signal in the closed caption.\n\nWe tried processing each frame separately with a ResNet34 and getting a sequence of embeddings and by using a video-specific model called R3D, both pre-trained on ImageNet and Kinetics400 respectively.\n\n\n\n

\n \n

\n\n\n\nTo process the audio, we use the pretrained ResNet34, and we remove the final layers to be able to extract 2D embeddings from the audio spectrograms (for 224x224 images we end up with 7x7 embeddings).", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}} +{"page_content": "
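In code, that audio branch might look roughly like the sketch below. It assumes the torchvision ResNet34 weights and that single-channel spectrograms are replicated to three channels to match the RGB-trained backbone; the names and shapes are illustrative, not the production pipeline.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34

# Drop the average-pool and fc layers so the backbone returns a 512 x 7 x 7 feature map.
backbone = resnet34(pretrained=True)
audio_encoder = nn.Sequential(*list(backbone.children())[:-2])

spectrograms = torch.randn(8, 1, 224, 224)          # a batch of single-channel spectrograms
spectrograms = spectrograms.repeat(1, 3, 1, 1)      # replicate channels for the RGB-trained weights
features = audio_encoder(spectrograms)              # (8, 512, 7, 7)
audio_tokens = features.flatten(2).transpose(1, 2)  # (8, 49, 512): a sequence of 7x7 embeddings
```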

\n \n

\n\n\n\nFor closed captioning, we are using a pre-trained BERT-large, with all layers frozen, except for the Embeddings & LayerNorms.\n\n\n\n
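A minimal sketch of that setup, using the Hugging Face `transformers` API as an example (the post does not name the library or checkpoint, so `bert-large-uncased` is an assumption):

```python
from transformers import BertModel, BertTokenizer

bert = BertModel.from_pretrained("bert-large-uncased")
tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")

# Freeze every parameter except the embedding tables and the LayerNorm weights/biases.
for name, param in bert.named_parameters():
    param.requires_grad = ("embeddings" in name) or ("LayerNorm" in name)

inputs = tokenizer("A character laughs out loud.", return_tensors="pt")
text_tokens = bert(**inputs).last_hidden_state   # (1, seq_len, 1024)
```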

\n \n

\n\n\n\nOnce we have extracted the embedding from each modality, we concatenate them into a single sequence and pass it through a set of MLP-Mixer blocks; next we use average pooling & a classification head to get predictions.\n\n\n\n
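A minimal PyTorch sketch of that fusion step is shown below. The token counts, hidden sizes and depth are illustrative assumptions (only the 15-class output matches the experiments described later), not the configuration used in production.

```python
import torch
import torch.nn as nn

class MlpBlock(nn.Module):
    def __init__(self, dim, hidden):
        super().__init__()
        self.fc1, self.fc2 = nn.Linear(dim, hidden), nn.Linear(hidden, dim)

    def forward(self, x):
        return self.fc2(torch.nn.functional.gelu(self.fc1(x)))

class MixerBlock(nn.Module):
    """One Mixer layer: an MLP across tokens (on the transposed sequence), then an MLP across channels."""
    def __init__(self, tokens, dim):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.token_mlp = MlpBlock(tokens, tokens * 2)
        self.channel_mlp = MlpBlock(dim, dim * 2)

    def forward(self, x):                                   # x: (batch, tokens, dim)
        x = x + self.token_mlp(self.norm1(x).transpose(1, 2)).transpose(1, 2)
        return x + self.channel_mlp(self.norm2(x))

class MultimodalMixer(nn.Module):
    def __init__(self, tokens, dim, num_classes, depth=4):
        super().__init__()
        self.blocks = nn.Sequential(*[MixerBlock(tokens, dim) for _ in range(depth)])
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, video_emb, audio_emb, text_emb):
        x = torch.cat([video_emb, audio_emb, text_emb], dim=1)  # concatenate along the token axis
        x = self.norm(self.blocks(x))
        return self.head(x.mean(dim=1))                         # average pooling + classification head

# Example with made-up token counts: 16 video + 49 audio + 32 text tokens, all projected to dim 512.
model = MultimodalMixer(tokens=16 + 49 + 32, dim=512, num_classes=15)
logits = model(torch.randn(2, 16, 512), torch.randn(2, 49, 512), torch.randn(2, 32, 512))
```

Keeping the token-mixing and channel-mixing MLPs separate is what lets the model weigh the modalities against each other: the transposed pass mixes information across the concatenated video, audio and text positions.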

\n \n

\n\n\n\nOur experiments have been performed on a custom, manually labeled dataset for activity recognition with 15 classes, which we know from experiments are hard and cannot all be predicted accurately using a single modality.\n\nThese experiments have shown a significant increase in performance using our approach, especially in a low/mid-data regime (75K training samples).", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}} +{"page_content": "When it comes to using only Text and Audio, our experiments showed a 15 percent improvement in accuracy over using a classifier on top of the features extracted by state-of-the-art backbones.\n\nUsing Text, Audio and Video we have seen a 17 percent improvement in accuracy over using Meta AIFacebook\u2019s MMF Framework, which uses a VisualBERT-like model to combine modalities using more powerful state of the art backbones.\n\nCurrently, we extended the initial model to cover up to 55 activity classes and 45 event classes. One of the challenges we expect to improve upon in the future is to include all activities and events, even those that are less frequent.\n\n## Interpreting the MLP-Mixer mode combinations \n\nAn MLP-Mixer is a concatenation of MultiLayer Perceptrons. This can be, very roughly, approximated to a linear operation, in the sense that, once trained, the weights are fixed and the input will directly affect the output.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}} +{"page_content": "Once we assume that approximation, we also assume that for an input consisting of NxM numbers, we could find a NxM matrix that (when multiplied elementwise) could approximate the predictions of the MLP-Mixer for a class.\n\n\n\n

\n \n

\n\n\n\nWe will call this matrix a stencil, and if we have access to it, we can find what parts of the input embeddings are responsible for a specific prediction.\n\nYou can think of it as a punch card with holes in specific positions. Only information in those positions will pass and contribute to a specific prediction. So we can measure the intensity of the input at those positions.\n\n\n\n
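A tiny sketch of that "punch card" reading, with illustrative sizes (the real token count and embedding width are not given here):

```python
import torch

tokens, dim = 97, 512                       # illustrative sizes
stencil = torch.rand(tokens, dim)           # assumed to be already found for one class
embeddings = torch.rand(tokens, dim)        # one example's concatenated video/audio/text embeddings

masked = stencil * embeddings               # elementwise: only "holes" in the stencil let signal through
per_token_contribution = masked.sum(dim=1)  # how strongly each position supports the class
```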

\n \n

", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}} +{"page_content": "Of course, this is an oversimplification, and there won't exist a unique stencil that perfectly represents all of the contributions of the input to a class (otherwise that would mean that the problem could be solved linearly). So this should be used for visualization purposes only, not as an accurate predictor.\n\nOnce we have a set of stencils for each class, we can effortlessly measure input contribution without relying on any external visualization techniques.\n\nTo find a stencil, we can start from a \"random noise\" stencil and optimize it to maximize the activations for a specific class by just back-propagating through the MLP-Mixer.\n\n\n\n

\n \n

\n\n\n\nBy doing this we can end up with many valid stencils, and we can reduce them to a few by using K-means to cluster them into similar stencils and averaging each cluster.\n\n# Using the Mixer to get the best of each world", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}} +{"page_content": "# Using the Mixer to get the best of each world\n\nMLP-Mixer, used as an image classification model without convolutional layers, requires a lot of data, since the lack of inductive bias \u2013 one of the model's good points overall \u2013 is a weakness when it comes to working in low data domains.\n\nWhen used as a way to combine information previously extracted by large pretrained backbones (as opposed to being used as a full end-to-end solution), they shine. The Mixer\u2019s strength lies in finding temporal or structural coherence between different inputs. For example, in video-related tasks we could extract embeddings from the frames using a powerful, pretrained model that understands what is going on at frame level and use the mixer to make sense of it in a sequential manner.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}} +{"page_content": "This way of using the Mixer allows us to work with limited amounts of data and still get better results than what was achieved with Transformers. This is because Mixers seem to be more stable during training and seem to pay attention to all the inputs, while Transformers tend to collapse and pay attention only to some modalities/parts of the sequence.\n\nAcknowledgements: We would like to thank the Meta AI Research and Partner Engineering teams for this collaboration.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'Practical Quantization in PyTorch'\nauthor: Suraj Subramanian, Mark Saroufim, Jerry Zhang\nfeatured-img: ''\n---\n\nQuantization is a cheap and easy way to make your DNN run faster and with lower memory requirements. PyTorch offers a few different approaches to quantize your model. In this blog post, we'll lay a (quick) foundation of quantization in deep learning, and then take a look at how each technique looks like in practice. Finally we'll end with recommendations from the literature for using quantization in your workflows.\n\n
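In code, the two steps (optimizing stencils from random noise by back-propagation, then reducing them with K-means) might look roughly like this. The classification head below is a stand-in and the sizes are illustrative; the sketch only assumes a frozen model that maps an embedding sequence to class logits.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

tokens, dim, num_classes = 97, 512, 15

# Stand-in for the trained, frozen model: anything mapping (batch, tokens, dim) to logits.
class_head = nn.Sequential(nn.Flatten(), nn.Linear(tokens * dim, num_classes)).eval()
for p in class_head.parameters():
    p.requires_grad_(False)

def find_stencil(model, shape, class_idx, steps=300, lr=0.05):
    """Gradient-ascend a random-noise stencil so that it maximizes one class logit."""
    stencil = torch.randn(shape, requires_grad=True)
    optimizer = torch.optim.Adam([stencil], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(stencil.unsqueeze(0))
        (-logits[0, class_idx]).backward()      # maximizing the activation for this class
        optimizer.step()
    return stencil.detach()

# Many runs give many valid stencils; K-means reduces them to a few representatives.
stencils = torch.stack([find_stencil(class_head, (tokens, dim), class_idx=3) for _ in range(20)])
kmeans = KMeans(n_clusters=3).fit(stencils.flatten(1).numpy())
representatives = torch.from_numpy(kmeans.cluster_centers_).view(3, tokens, dim)
```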

\n \n
\n Fig 1. PyTorch <3 Quantization\n

\n\n**Contents**\n* TOC\n{:toc}\n## Fundamentals of Quantization\n\n> If someone asks you what time it is, you don't respond \"10:14:34:430705\", but you might say \"a quarter past 10\".\n\nQuantization has roots in information compression; in deep networks it refers to reducing the numerical precision of its weights and/or activations.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}} {"page_content": "Overparameterized DNNs have more degrees of freedom and this makes them good candidates for information compression [[1]]. When you quantize a model, two things generally happen - the model gets smaller and runs with better efficiency. Hardware vendors explicitly allow for faster processing of 8-bit data (than 32-bit data) resulting in higher throughput. A smaller model has lower memory footprint and power consumption [[2]], crucial for deployment at the edge.\n\n### Mapping function\nThe mapping function is what you might guess - a function that maps values from floating-point to integer space. A commonly used mapping function is a linear transformation given by , where is the input and are **quantization parameters**.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}} {"page_content": "To reconvert to floating point space, the inverse function is given by . \n\n, and their difference constitutes the *quantization error*.\n\n### Quantization Parameters\nThe mapping function is parameterized by the **scaling factor** and **zero-point** . \n\n is simply the ratio of the input range to the output range \n", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}} @@ -978,7 +981,7 @@ {"page_content": "## References\n[[1]] Gholami, A., Kim, S., Dong, Z., Yao, Z., Mahoney, M. W., & Keutzer, K. (2021). A survey of quantization methods for efficient neural network inference. arXiv preprint arXiv:2103.13630.\n\n[[2]] Krishnamoorthi, R. (2018). Quantizing deep convolutional networks for efficient inference: A whitepaper. arXiv preprint arXiv:1806.08342.\n\n[[3]] Wu, H., Judd, P., Zhang, X., Isaev, M., & Micikevicius, P. (2020). Integer quantization for deep learning inference: Principles and empirical evaluation. arXiv preprint arXiv:2004.09602.\n\n[[4]] PyTorch Quantization Docs\n\n\n[1]: https://arxiv.org/pdf/2103.13630.pdf\n[2]: https://arxiv.org/pdf/1806.08342.pdf\n[3]: https://arxiv.org/abs/2004.09602\n[4]: https://pytorch.org/docs/stable/quantization.html#prototype-fx-graph-mode-quantization", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'Towards Reproducible Research with PyTorch Hub'\nauthor: Team PyTorch\nredirect_from: /2019/06/10/pytorch_hub.html\n---\n\nReproducibility is an essential requirement for many fields of research including those based on machine learning techniques. However, many machine learning publications are either not reproducible or are difficult to reproduce. With the continued growth in the number of research publications, including tens of thousands of papers now hosted on arXiv and submissions to conferences at an all time high, research reproducibility is more important than ever. 
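To make that concrete before going into the details, here is what quantizing a handful of values to 8-bit integers looks like in PyTorch; the scale and zero-point are picked by hand purely for illustration, and the following sections explain how they are actually derived:

```python
import torch

x = torch.tensor([0.3012, -1.2500, 0.6672, 2.1000])
xq = torch.quantize_per_tensor(x, scale=0.02, zero_point=0, dtype=torch.qint8)

print(xq.int_repr())    # the stored int8 values
print(xq.dequantize())  # the reconstruction; its difference from x is the quantization error
```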
While many of these publications are accompanied by code as well as trained models which is helpful but still leaves a number of steps for users to figure out for themselves.", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}} {"page_content": "We are excited to announce the availability of PyTorch Hub, a simple API and workflow that provides the basic building blocks for improving machine learning research reproducibility. PyTorch Hub consists of a pre-trained model repository designed specifically to facilitate research reproducibility and enable new research. It also has built-in support for [Colab](https://colab.research.google.com/), integration with [*Papers With Code*](https://paperswithcode.com/) and currently contains a broad set of models that include Classification and Segmentation, Generative, Transformers, etc.\n\n
\n \n
\n\n## [Owner] Publishing models", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}} -{"page_content": "PyTorch Hub supports the publication of pre-trained models (model definitions and pre-trained weights) to a GitHub repository by adding a simple ```hubconf.py``` file.\nThis provides an enumeration of which models are to be supported and a list of dependencies needed to run the models.\nExamples can be found in the [torchvision](https://github.com/pytorch/vision/blob/master/hubconf.py), [huggingface-bert](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/hubconf.py) and [gan-model-zoo](https://github.com/facebookresearch/pytorch_GAN_zoo) repositories.\n\nLet us look at the simplest case: `torchvision`'s `hubconf.py`:\n\n```python\n# Optional list of dependencies required by the package\ndependencies = ['torch']", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}} +{"page_content": "## [Owner] Publishing models\n\nPyTorch Hub supports the publication of pre-trained models (model definitions and pre-trained weights) to a GitHub repository by adding a simple ```hubconf.py``` file.\nThis provides an enumeration of which models are to be supported and a list of dependencies needed to run the models.\nExamples can be found in the [torchvision](https://github.com/pytorch/vision/blob/master/hubconf.py), [huggingface-bert](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/hubconf.py) and [gan-model-zoo](https://github.com/facebookresearch/pytorch_GAN_zoo) repositories.\n\nLet us look at the simplest case: `torchvision`'s `hubconf.py`:\n\n```python\n# Optional list of dependencies required by the package\ndependencies = ['torch']", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}} {"page_content": "from torchvision.models.alexnet import alexnet\nfrom torchvision.models.densenet import densenet121, densenet169, densenet201, densenet161\nfrom torchvision.models.inception import inception_v3\nfrom torchvision.models.resnet import resnet18, resnet34, resnet50, resnet101, resnet152,\\\nresnext50_32x4d, resnext101_32x8d\nfrom torchvision.models.squeezenet import squeezenet1_0, squeezenet1_1\nfrom torchvision.models.vgg import vgg11, vgg13, vgg16, vgg19, vgg11_bn, vgg13_bn, vgg16_bn, vgg19_bn\nfrom torchvision.models.segmentation import fcn_resnet101, deeplabv3_resnet101\nfrom torchvision.models.googlenet import googlenet\nfrom torchvision.models.shufflenetv2 import shufflenet_v2_x0_5, shufflenet_v2_x1_0\nfrom torchvision.models.mobilenet import mobilenet_v2\n```", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}} {"page_content": "In `torchvision`, the models have the following properties:\n- Each model file can function and be executed independently\n- They dont require any package other than PyTorch (encoded in `hubconf.py` as `dependencies['torch']`)\n- They dont need separate entry-points, because the models when created, work seamlessly out of the box\n\nMinimizing package dependencies reduces the friction for users to load your model for immediate experimentation.\n\nA more involved example is HuggingFace's BERT models. 
Here is their `hubconf.py`\n\n```python\ndependencies = ['torch', 'tqdm', 'boto3', 'requests', 'regex']\n\nfrom hubconfs.bert_hubconf import (\n bertTokenizer,\n bertModel,\n bertForNextSentencePrediction,\n bertForPreTraining,\n bertForMaskedLM,\n bertForSequenceClassification,\n bertForMultipleChoice,\n bertForQuestionAnswering,\n bertForTokenClassification\n)\n```", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}} {"page_content": "Each model then requires an entrypoint to be created. Here is a code snippet to specify an entrypoint of the ```bertForMaskedLM``` model, which returns the pre-trained model weights.\n\n```python\ndef bertForMaskedLM(*args, **kwargs):\n \"\"\"\n BertForMaskedLM includes the BertModel Transformer followed by the\n pre-trained masked language modeling head.\n Example:\n ...\n \"\"\"\n model = BertForMaskedLM.from_pretrained(*args, **kwargs)\n return model\n```\n\nThese entry-points can serve as wrappers around complex model factories. They can give a clean and consistent help docstring, have logic to support downloading of pretrained weights (for example via `pretrained=True`) or have additional hub-specific functionality such as visualization.", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}} @@ -987,14 +990,14 @@ {"page_content": "It is also common that repo owners will want to continually add bug fixes or performance improvements. PyTorch Hub makes it super simple for users to get the latest update by calling:\n\n```python\nmodel = torch.hub.load(..., force_reload=True)\n```\n\nWe believe this will help to alleviate the burden of repetitive package releases by repo owners and instead allow them to focus more on their research.\nIt also ensures that, as a user, you are getting the freshest available models.\n\nOn the contrary, stability is important for users. Hence, some model owners serve them from a specificed branch or tag, rather than the `master` branch, to ensure stability of the code.\nFor example, `pytorch_GAN_zoo` serves them from the `hub` branch:\n\n```python\nmodel = torch.hub.load('facebookresearch/pytorch_GAN_zoo:hub', 'DCGAN', pretrained=True, useGPU=False)\n```", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}} {"page_content": "Note that the ```*args```, ```**kwargs``` passed to `hub.load()` are used to *instantiate* a model. In the above example, `pretrained=True` and `useGPU=False` are given to the model's entrypoint.\n\n\n### Explore a loaded model\n\nOnce you have a model from PyTorch Hub loaded, you can use the following workflow to find out the available methods that are supported as well as understand better what arguments are requires to run it.\n\n\n```dir(model)``` to see all available methods of the model. 
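For a repository of your own, the file can be even smaller. The sketch below is a hypothetical minimal `hubconf.py`; the entrypoint name and the weights URL are placeholders rather than an existing repo:

```python
# hubconf.py -- hypothetical minimal example
dependencies = ['torch']

import torch
import torch.nn as nn

def tiny_mlp(pretrained=False, **kwargs):
    """Entrypoint: returns a toy two-layer model, optionally loading released weights."""
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
    if pretrained:
        state_dict = torch.hub.load_state_dict_from_url(
            'https://example.com/weights/tiny_mlp.pth', progress=True)  # placeholder URL
        model.load_state_dict(state_dict)
    return model
```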
Let's take a look at `bertForMaskedLM`'s available methods.\n\n```python\n>>> dir(model)\n>>>\n['forward'\n...\n'to'\n'state_dict',\n]\n```\n\n```help(model.forward)``` provides a view into what arguments are required to make your loaded model run\n\n```python\n>>> help(model.forward)\n>>>\nHelp on method forward in module pytorch_pretrained_bert.modeling:\nforward(input_ids, token_type_ids=None, attention_mask=None, masked_lm_labels=None)\n...\n```", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}} {"page_content": "Have a closer look at the [BERT](https://pytorch.org/hub/huggingface_pytorch-pretrained-bert_bert/) and [DeepLabV3](https://pytorch.org/hub/pytorch_vision_deeplabv3_resnet101/) pages, where you can see how these models can be used once loaded.\n\n### Other ways to explore\n\nModels available in PyTorch Hub also support both [Colab](https://colab.research.google.com/github/pytorch/pytorch.github.io/blob/master/assets/hub/facebookresearch_pytorch-gan-zoo_pgan.ipynb) and are directly linked on [Papers With Code](https://paperswithcode.com/) and you can get started with a single click. [Here](https://paperswithcode.com/paper/densely-connected-convolutional-networks) is a good example to get started with (shown below).\n\n
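Putting the owner and user sides together, the whole consumption flow is only a few lines (a sketch; `resnet18` is one of the torchvision entrypoints listed earlier):

```python
import torch

# Discover what a repo exposes, then load one entrypoint with pretrained weights.
print(torch.hub.list('pytorch/vision'))
model = torch.hub.load('pytorch/vision', 'resnet18', pretrained=True)
model.eval()

with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))   # a dummy image batch
print(logits.shape)                               # torch.Size([1, 1000])
```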
\n \n
\n\n## Additional resources:", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}} -{"page_content": "* PyTorch Hub API documentation can be found [here](https://pytorch.org/docs/stable/hub.html).\n* Submit a model [here](https://github.com/pytorch/hub) for publication in PyTorch Hub.\n* Go to [https://pytorch.org/hub](https://pytorch.org/hub) to learn more about the available models.\n* Look for more models to come on [paperswithcode.com](https://paperswithcode.com/).\n\n\nA BIG thanks to the folks at HuggingFace, the PapersWithCode team, fast.ai and Nvidia as well as Morgane Riviere (FAIR Paris) and lots of others for helping bootstrap this effort!!\n\nCheers!\n\nTeam PyTorch\n\n\n\n\n## FAQ:\n\n**Q: If we would like to contribute a model that is already in the Hub but perhaps mine has better accuracy, should I still contribute?**\n\n\nA: Yes!! A next step for Hub is to implement an upvote/downvote system to surface the best models.\n\n**Q: Who hosts the model weights for PyTorch Hub?**", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}} +{"page_content": "## Additional resources:\n\n* PyTorch Hub API documentation can be found [here](https://pytorch.org/docs/stable/hub.html).\n* Submit a model [here](https://github.com/pytorch/hub) for publication in PyTorch Hub.\n* Go to [https://pytorch.org/hub](https://pytorch.org/hub) to learn more about the available models.\n* Look for more models to come on [paperswithcode.com](https://paperswithcode.com/).\n\n\nA BIG thanks to the folks at HuggingFace, the PapersWithCode team, fast.ai and Nvidia as well as Morgane Riviere (FAIR Paris) and lots of others for helping bootstrap this effort!!\n\nCheers!\n\nTeam PyTorch\n\n\n\n\n## FAQ:\n\n**Q: If we would like to contribute a model that is already in the Hub but perhaps mine has better accuracy, should I still contribute?**\n\n\nA: Yes!! A next step for Hub is to implement an upvote/downvote system to surface the best models.\n\n**Q: Who hosts the model weights for PyTorch Hub?**", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}} {"page_content": "A: You, as the contributor, are responsible to host the model weights. You can host your model in your favorite cloud storage or, if it fits within the limits, on GitHub. If it is not within your means to host the weights, check with us via opening an issue on the hub repository.\n\n**Q: What if my model is trained on private data? Should I still contribute this model?**\n\n\nA: No! PyTorch Hub is centered around open research and that extends to the usage of open datasets to train these models on. 
If a pull request for a proprietary model is submitted, we will kindly ask that you resubmit a model trained on something open and available.\n\n**Q: Where are my downloaded models saved?**\n\n\nA: We follow the [XDG Base Directory Specification](https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html) and adhere to common standards around cached files and directories.\n\nThe locations are used in the order of:", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}} -{"page_content": "* Calling ```hub.set_dir()```\n* ```$TORCH_HOME/hub```, if environment variable ```TORCH_HOME``` is set.\n* ```$XDG_CACHE_HOME/torch/hub```, if environment variable ```XDG_CACHE_HOME``` is set.\n* ```~/.cache/torch/hub```", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}} +{"page_content": "The locations are used in the order of:\n\n* Calling ```hub.set_dir()```\n* ```$TORCH_HOME/hub```, if environment variable ```TORCH_HOME``` is set.\n* ```$XDG_CACHE_HOME/torch/hub```, if environment variable ```XDG_CACHE_HOME``` is set.\n* ```~/.cache/torch/hub```", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'The road to 1.0: production ready PyTorch'\nauthor: The PyTorch Team\nredirect_from: /2018/05/02/road-to-1.0.html\n---\n\nWe would like to give you a preview of the roadmap for PyTorch 1.0 , the next release of PyTorch. Over the last year, we've had 0.2, 0.3 and 0.4 transform PyTorch from a [Torch+Chainer]-like interface into something cleaner, adding double-backwards, numpy-like functions, advanced indexing and removing Variable boilerplate. At this time, we're confident that the API is in a reasonable and stable state to confidently release a 1.0.\n\nHowever, 1.0 isn't just about stability of the interface.\n\nOne of PyTorch's biggest strengths is its first-class Python integration, imperative style, simplicity of the API and options. These are aspects that make PyTorch good for research and hackability.\n\nOne of its biggest downsides has been production-support. What we mean by production-support is the countless things one has to do to models to run them efficiently at massive scale:", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}} {"page_content": "- exporting to C++-only runtimes for use in larger projects\n- optimizing mobile systems on iPhone, Android, Qualcomm and other systems\n- using more efficient data layouts and performing kernel fusion to do faster inference (saving 10% of speed or memory at scale is a big win)\n- quantized inference (such as 8-bit inference)\n\nStartups, large companies and anyone who wants to build a product around PyTorch have asked for production support. At Facebook (the largest stakeholder for PyTorch) we have Caffe2, which has been the production-ready platform, running in our datacenters and shipping to more than 1 billion phones spanning eight generations of iPhones and six generations of Android CPU architectures. It has server-optimized inference on Intel / ARM, TensorRT support, and all the necessary bits for production. 
Considering all this value locked-in to a platform that the PyTorch team works quite closely with, **we decided to marry PyTorch and Caffe2 which gives the production-level readiness for PyTorch**.", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}} {"page_content": "Supporting production features without adding usability issues for our researchers and end-users needs creative solutions.\n\n## Production != Pain for researchers\n\nAdding production capabilities involves increasing the API complexity and number of configurable options for models. One configures memory-layouts (NCHW vs NHWC vs N,C/32,H,W,32, each providing different performance characteristics), quantization (8-bit? 3-bit?), fusion of low-level kernels (you used a Conv + BatchNorm + ReLU, let's fuse them into a single kernel), separate backend options (MKLDNN backend for a few layers and NNPACK backend for other layers), etc.\n\nPyTorch's central goal is to provide a great platform for research and hackability. So, while we add all these optimizations, we've been working with a hard design constraint to never trade these off against usability.", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}} {"page_content": "To pull this off, we are introducing `torch.jit`, a just-in-time (JIT) compiler that at runtime takes your PyTorch models and rewrites them to run at production-efficiency. The JIT compiler can also export your model to run in a C++-only runtime based on Caffe2 bits.\n\n> In 1.0, your code continues to work as-is, we're not making any big changes to the existing API.\n\nMaking your model production-ready is an opt-in annotation, which uses the `torch.jit` compiler to export your model to a Python-less environment, and improving its performance. Let's walk through the JIT compiler in detail.\n\n## `torch.jit`: A JIT-compiler for your models", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}} -{"page_content": "We strongly believe that it's hard to match the productivity you get from specifying your models directly as idiomatic Python code. This is what makes PyTorch so flexible, but it also means that PyTorch pretty much never knows the operation you'll run next. This however is a big blocker for export/productionization and heavyweight automatic performance optimizations because they need full upfront knowledge of how the computation will look before it even gets executed.\n\nWe provide two opt-in ways of recovering this information from your code, one based on tracing native python code and one based on compiling a subset of the python language annotated into a python-free intermediate representation. After thorough discussions we concluded that they're both going to be useful in different contexts, and as such you will be able to mix and match them freely.\n\n## Tracing Mode", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}} +{"page_content": "## `torch.jit`: A JIT-compiler for your models\n\nWe strongly believe that it's hard to match the productivity you get from specifying your models directly as idiomatic Python code. This is what makes PyTorch so flexible, but it also means that PyTorch pretty much never knows the operation you'll run next. 
This however is a big blocker for export/productionization and heavyweight automatic performance optimizations because they need full upfront knowledge of how the computation will look before it even gets executed.\n\nWe provide two opt-in ways of recovering this information from your code, one based on tracing native python code and one based on compiling a subset of the python language annotated into a python-free intermediate representation. After thorough discussions we concluded that they're both going to be useful in different contexts, and as such you will be able to mix and match them freely.\n\n## Tracing Mode", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}} {"page_content": "## Tracing Mode\n\nThe PyTorch tracer, `torch.jit.trace`, is a function that records all the native PyTorch operations performed in a code region, along with the data dependencies between them. In fact, PyTorch has had a tracer since 0.3, which has been used for exporting models through ONNX. What changes now, is that you no longer necessarily need to take the trace and run it elsewhere - PyTorch can re-execute it for you, using a carefully designed high-performance C++ runtime. As we develop PyTorch 1.0 this runtime will integrate all the optimizations and hardware integrations that Caffe2 provides.", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}} {"page_content": "The biggest benefit of this approach is that it doesn't really care how your Python code is structured \u2014 you can trace through generators or coroutines, modules or pure functions. Since we only record native PyTorch operators, these details have no effect on the trace recorded. This behavior, however, is a double-edged sword. For example, if you have a loop in your model, it will get unrolled in the trace, inserting a copy of the loop body for as many times as the loop ran. This opens up opportunities for zero-cost abstraction (e.g. you can loop over modules, and the actual trace will be loop-overhead free!), but on the other hand this will also affect data dependent loops (think of e.g. processing sequences of varying lengths), effectively hard-coding a single length into the trace.\n\nFor networks that do not contain loops and if statements, tracing is non-invasive and is robust enough to handle a wide variety of coding styles. This code example illustrates what tracing looks like:", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}} {"page_content": "```python\n# This will run your nn.Module or regular Python function with the example\n# input that you provided. The returned callable can be used to re-execute\n# all operations that happened during the example run, but it will no longer\n# use the Python interpreter.\nfrom torch.jit import trace\ntraced_model = trace(model, example_input=input)\ntraced_fn = trace(fn, example_input=input)\n\n# The training loop doesn't change. Traced model behaves exactly like an\n# nn.Module, except that you can't edit what it does or change its attributes.\n# Think of it as a \"frozen module\".\nfor input, target in data_loader:\n loss = loss_fn(traced_model(input), target)\n```\n\n## Script Mode\n\nTracing mode is a great way to minimize the impact on your code, but we're also very excited about the models that fundamentally make use of control flow such as RNNs. 
Our solution to this is a scripting mode.", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}} @@ -1010,7 +1013,7 @@ {"page_content": "We are excited to welcome PFN to the PyTorch community, and to jointly work towards the common goal of furthering advances in deep learning technology. Learn more about the PFN\u2019s migration to PyTorch [here](https://preferred.jp/en/news/pr20191205/).\n\n## Tools for elastic training and large scale computer vision\n\n### PyTorch Elastic (Experimental)\n\nLarge scale model training is becoming commonplace with architectures like BERT and the growth of model parameter counts into the billions or even tens of billions. To achieve convergence at this scale in a reasonable amount of time, the use of distributed training is needed.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}} {"page_content": "The current PyTorch Distributed Data Parallel (DDP) module enables data parallel training where each process trains the same model but on different shards of data. It enables bulk synchronous, multi-host, multi-GPU/CPU execution of ML training. However, DDP has several shortcomings; e.g. jobs cannot start without acquiring all the requested nodes; jobs cannot continue after a node fails due to error or transient issue; jobs cannot incorporate a node that joined later; and lastly; progress cannot be made with the presence of a slow/stuck node.\n\nThe focus of [PyTorch Elastic](https://github.com/pytorch/elastic), which uses Elastic Distributed Data Parallelism, is to address these issues and build a generic framework/APIs for PyTorch to enable reliable and elastic execution of these data parallel training workloads. It will provide better programmability, higher resilience to failures of all kinds, higher-efficiency and larger-scale training compared with pure DDP.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}} {"page_content": "Elasticity, in this case, means both: 1) the ability for a job to continue after node failure (by running with fewer nodes and/or by incorporating a new host and transferring state to it); and 2) the ability to add/remove nodes dynamically due to resource availability changes or bottlenecks.\n\nWhile this feature is still experimental, you can try it out on AWS EC2, with the instructions [here](https://github.com/pytorch/elastic/tree/master/aws). Additionally, the PyTorch distributed team is working closely with teams across AWS to support PyTorch Elastic training within services such as Amazon Sagemaker and Elastic Kubernetes Service (EKS). Look for additional updates in the near future.\n\n### New Classification Framework", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}} -{"page_content": "Image and video classification are at the core of content understanding. To that end, you can now leverage a new end-to-end framework for large-scale training of state-of-the-art image and video classification models. It allows researchers to quickly prototype and iterate on large distributed training jobs at the scale of billions of images. 
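As a rough illustration (this sketch uses the `torch.jit.script` spelling from current releases; the exact annotation evolved during the 1.0 development cycle), data-dependent control flow survives compilation instead of being unrolled:

```python
import torch

@torch.jit.script
def count_positive(x):
    # The loop and the branch are preserved in the compiled IR rather than unrolled.
    total = 0
    for i in range(x.size(0)):
        if bool(x[i] > 0):
            total += 1
    return total

print(count_positive(torch.tensor([1.0, -2.0, 3.0])))  # 2
print(count_positive.graph)                            # inspect the recovered IR
```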
Advantages include:\n\n* Ease of use - This framework features a modular, flexible design that allows anyone to train machine learning models on top of PyTorch using very simple abstractions. The system also has out-of-the-box integration with AWS on PyTorch Elastic, facilitating research at scale and making it simple to move between research and production.\n* High performance - Researchers can use the framework to train models such as Resnet50 on ImageNet in as little as 15 minutes.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}} +{"page_content": "### New Classification Framework\n\nImage and video classification are at the core of content understanding. To that end, you can now leverage a new end-to-end framework for large-scale training of state-of-the-art image and video classification models. It allows researchers to quickly prototype and iterate on large distributed training jobs at the scale of billions of images. Advantages include:\n\n* Ease of use - This framework features a modular, flexible design that allows anyone to train machine learning models on top of PyTorch using very simple abstractions. The system also has out-of-the-box integration with AWS on PyTorch Elastic, facilitating research at scale and making it simple to move between research and production.\n* High performance - Researchers can use the framework to train models such as Resnet50 on ImageNet in as little as 15 minutes.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}} {"page_content": "You can learn more at the [NeurIPS Expo workshop](https://nips.cc/ExpoConferences/2019/schedule?workshop_id=16) on Multi-Modal research to production or get started with the PyTorch Elastic Imagenet example [here](https://github.com/pytorch/elastic/blob/master/examples/imagenet/main.py).\n\n## Come see us at NeurIPS\n\nThe PyTorch team will be hosting workshops at NeurIPS during the industry expo on 12/8. Join the sessions below to learn more, and visit the team at the PyTorch booth on the show floor and during the Poster Session. At the booth, we\u2019ll be walking through an interactive demo of PyTorch running fast neural style transfer on a Cloud TPU - here\u2019s a [sneak peek](https://colab.research.google.com/github/pytorch/xla/blob/master/contrib/colab/style_transfer_inference-xrt-1-15.ipynb).", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}} {"page_content": "We\u2019re also publishing a [paper that details the principles that drove the implementation of PyTorch](https://papers.nips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library) and how they\u2019re reflected in its architecture.\n\n*[Multi-modal Research to Production](https://nips.cc/ExpoConferences/2019/schedule?workshop_id=16)* - This workshop will dive into a number of modalities such as computer vision (large scale image classification and instance segmentation) and Translation and Speech (seq-to-seq Transformers) from the lens of taking cutting edge research to production. 
Lastly, we will also walk through how to use the latest APIs in PyTorch to take eager mode developed models into graph mode via Torchscript and quantize them for scale production deployment on servers or mobile devices. Libraries used include:", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}} {"page_content": "* Classification Framework - a newly open sourced PyTorch framework developed by Facebook AI for research on large-scale image and video classification. It allows researchers to quickly prototype and iterate on large distributed training jobs. Models built on the framework can be seamlessly deployed to production.\n* Detectron2 - the recently released object detection library built by the Facebook AI Research computer vision team. We will articulate the improvements over the previous version including: 1) Support for latest models and new tasks; 2) Increased flexibility, to enable new computer vision research; 3) Maintainable and scalable, to support production use cases.\n* Fairseq - general purpose sequence-to-sequence library, can be used in many applications, including (unsupervised) translation, summarization, dialog and speech recognition.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}} @@ -1025,7 +1028,7 @@ {"page_content": "```python\n>>> x = torch.tensor([0.5, 0.75], requires_grad=True)\n>>> y = torch.tensor([0.1, 0.90], requires_grad=True)\n>>> z = torch.exp(x * y).sum()\n>>> torch.autograd.backward([z], inputs=[x])\n>>> x.grad\ntensor([0.1051, 1.7676])\n>>> y.grad # None\n>>>\n\n```\n## Using `torch.autograd.grad`\n\nAn alternative to `backward()` is to use [`torch.autograd.grad()`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/autograd/__init__.py#L177-L277). The main difference to `backward()` is that `grad()` returns a tuple of tensors with the gradients of the `outputs` w.r.t. the `inputs` kwargs instead of storing them in the `.grad` field of the tensors. 
As you can see, the `grad()` code shown below is very similar to backward.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}} {"page_content": "```python\ndef grad(\n outputs: _TensorOrTensors,\n inputs: _TensorOrTensors,\n grad_outputs: Optional[_TensorOrTensors] = None,\n retain_graph: Optional[bool] = None,\n create_graph: bool = False,\n only_inputs: bool = True,\n allow_unused: bool = False,\n is_grads_batched: bool = False\n) -> Tuple[torch.Tensor, ...]:\n \n outputs = (outputs,) if isinstance(outputs, torch.Tensor) else tuple(outputs)\n inputs = (inputs,) if isinstance(inputs, torch.Tensor) else tuple(inputs)\n overridable_args = outputs + inputs\n if has_torch_function(overridable_args):\n return handle_torch_function(\n grad,\n overridable_args,\n outputs,\n inputs,\n grad_outputs=grad_outputs,\n retain_graph=retain_graph,\n create_graph=create_graph,\n only_inputs=only_inputs,\n allow_unused=allow_unused,\n )", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}} {"page_content": "grad_outputs_ = _tensor_or_tensors_to_tuple(grad_outputs, len(outputs))\n grad_outputs_ = _make_grads(outputs, grad_outputs_)\n\n if retain_graph is None:\n retain_graph = create_graph\n\n if is_grads_batched:\n # \u2026. It will not be covered here\n else:\n return Variable._execution_engine.run_backward(\n outputs, grad_outputs_, retain_graph, create_graph, inputs,\n allow_unused, accumulate_grad=False) # Calls into the C++ engine to run the backward pass\n\n```\n\nFigure 1 shows the computational graph with the `backward()` and `grad()` arguments highlighted in red and blue, respectively:\n\n

\n \n

\n\n

\nFigure 1: Correspondence of `backward`/`grad` arguments in the graphs.\n
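Reading the figure alongside the snippets above, the `grad()` form of the earlier toy example looks like this (a small sketch; the gradient values match the `backward()` example):

```python
import torch

x = torch.tensor([0.5, 0.75], requires_grad=True)
y = torch.tensor([0.1, 0.90], requires_grad=True)
z = torch.exp(x * y).sum()

# grad() returns the gradients as a tuple instead of writing them into .grad
dz_dx, = torch.autograd.grad(z, inputs=[x])
print(dz_dx)    # tensor([0.1051, 1.7676])
print(x.grad)   # None: nothing was accumulated on the leaf
```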

\n\n# Going Inside the Autograd Engine\n\n## Refreshing Concepts: Nodes and Edges", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}} -{"page_content": "As we saw in [2](https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/)\nThe computational graph comprises `Node` and `Edge` objects. Please read that post if you haven\u2019t done it yet.\n\n### Nodes\n\n`Node` objects are defined in [`torch/csrc/autograd/function.h`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/function.h#L105-L176), and they provide an overload of `operator()` for the associated function and a list of edges to do the graph traversal. Note that `Node` is a base class that autograd functions inherit from and override the `apply` method to execute the backward function.\n```c++\nstruct TORCH_API Node : std::enable_shared_from_this {\n ...\n /// Evaluates the function on the given inputs and returns the result of the\n /// function call.\n variable_list operator()(variable_list&& inputs) {\n ...\n }", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}} +{"page_content": "## Refreshing Concepts: Nodes and Edges\n\nAs we saw in [2](https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/)\nThe computational graph comprises `Node` and `Edge` objects. Please read that post if you haven\u2019t done it yet.\n\n### Nodes\n\n`Node` objects are defined in [`torch/csrc/autograd/function.h`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/function.h#L105-L176), and they provide an overload of `operator()` for the associated function and a list of edges to do the graph traversal. Note that `Node` is a base class that autograd functions inherit from and override the `apply` method to execute the backward function.\n```c++\nstruct TORCH_API Node : std::enable_shared_from_this {\n ...\n /// Evaluates the function on the given inputs and returns the result of the\n /// function call.\n variable_list operator()(variable_list&& inputs) {\n ...\n }", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}} {"page_content": "protected:\n /// Performs the `Node`'s actual operation.\n virtual variable_list apply(variable_list&& inputs) = 0;\n \u2026\n edge_list next_edges_;\n uint64_t topological_nr_ = 0;\n \u2026\n\n```", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}} {"page_content": "```\n\nThere is an attribute called [`topological_nr_`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/function.h#L481) in every node object. This number is used to optimize the graph execution as it allows to discard of graph branches under certain conditions. The topological number is the longest distance between this node and any leaf node and it is shown in Figure 2. Its main property is that for any pair of nodes `x`, `y` in a directed graph `topo_nr(x) < topo_nr(y)` means that there is no path from `x` to `y`. So this allows for reducing the number of paths in the graph in need of traversal. 
Check the [topological_nr](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/function.h#L314-L343) method comment for further details.\n\n
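As a rough standalone illustration (plain Python, not a PyTorch API; the node names are made up), the topological number of a node is simply the longest distance to any leaf of the graph:

```python
# Toy backward graph: each node lists the nodes its next_edges point to.
# Leaf nodes (e.g. AccumulateGrad) have no outgoing edges.
graph = {
    "MulBackward": ["PowBackward", "AccumulateGrad_b"],
    "PowBackward": ["AccumulateGrad_a"],
    "AccumulateGrad_a": [],
    "AccumulateGrad_b": [],
}

def topo_nr(node):
    """Longest distance from `node` to any leaf node."""
    children = graph[node]
    return 0 if not children else 1 + max(topo_nr(c) for c in children)

assert topo_nr("AccumulateGrad_a") == 0
assert topo_nr("PowBackward") == 1
assert topo_nr("MulBackward") == 2
# topo_nr("PowBackward") < topo_nr("MulBackward"), and indeed there is
# no path from PowBackward to MulBackward in this graph.
```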

Figure 2: Example of the Topological Number calculation
", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}} {"page_content": "### Edges\n\nThe [`Edge`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/edge.h#L14-L39) object links `Node`s together, and its implementation is straightforward.\n\n```c++\nstruct Edge {\n ...\n /// The function this `Edge` points to.\n std::shared_ptr function;\n /// The identifier of a particular input to the function.\n uint32_t input_nr;\n};\n\n```\n\nIt only requires a function pointer to the `Node` and an input number that is the index of the output from the forward function this edge points to. When preparing the set of gradients before calling \"function\", we know that what is flowing from this edge should be accumulated in the \"input_nr\"th argument. Note that the input/output name is flipped here and this is the input to the backward function.\n `Edge` objects are constructed using the [`gradient_edge`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/variable.cpp#L221-L233) function method.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}} @@ -1039,7 +1042,7 @@ {"page_content": "`exec_info_` and the associated `ExecInfo` struct are used only when the `inputs` argument is specified or it is a call to `autograd.grad()`. They allow filter paths on the graph that are not needeed since only the gradients are calculated only for the variables in the `inputs` list.\n\n `captured_vars_` is where the results of the graph execution are temporarily stored if we used the `torch.autograd.grad()` api instead of `torch.autograd.backward()` since `grad()` returns the gradients as tensors instead of just filling the `.grad` field of the inputs.\n\n\n### NodeTask", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}} {"page_content": "### NodeTask\n\nThe [`NodeTask`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.h#L210-L242) struct is a basic class that holds an `fn_` pointer to the node to execute, and an `inputs_` buffer to store the input arguments to this function. Note that the functions executed by the backward pass are the derivatives specified in the `derivatives.yaml` file. or the user provided backward function when using custom functions as described in the second blog post.\n\nThe `inputs_` buffer is also where the output gradients of the previously executed functions are aggregated, and it is defined as a [`std::vector` container](https://github.com/pytorch/pytorch/blob/release/1.10/torch/csrc/autograd/input_buffer.h) with facilities to accumulate values at a given position.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}} {"page_content": "```c++\nstruct NodeTask {\n std::weak_ptr base_;\n std::shared_ptr fn_;\n // This buffer serves as an implicit \"addition\" node for all of the\n // gradients flowing here. 
Once all the dependencies are finished, we\n // use the contents of this buffer to run the function.\n InputBuffer inputs_;\n};\n\n```\n### GraphRoot\n\nThe [`GraphRoot`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/functions/basic_ops.h#L72-L89) is a special function used to hold multiple input variables in a single place. The code is pretty simple as it only acts as a container of variables.\n\n```c++\nstruct TORCH_API GraphRoot : public Node {\n GraphRoot(edge_list functions, variable_list inputs)\n : Node(std::move(functions)),\n outputs(std::move(inputs)) {\n for (const auto& t : outputs) {\n add_input_metadata(t);\n }\n }\n\n variable_list apply(variable_list&& inputs) override {\n return outputs;\n }\n\n```\n\n### AccumulateGrad", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}} -{"page_content": "### AccumulateGrad\n\nThis function is set during the graph creation in `gradient_edge` when the `Variable` object doesn\u2019t have a `grad_fn`. This is, it is a leaf node.\n\n```c++\n if (const auto& gradient = self.grad_fn()) {\n // \u2026\n } else {\n return Edge(grad_accumulator(self), 0);\n }\n\n```\n\nThe function body is defined in [`torch/csrc/autograd/functions/accumulate_grad.cpp`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/functions/accumulate_grad.cpp#L25-L63) and it essentially accumulates the input grads in the object\u2019s `.grad` attribute.\n\n```c++\nauto AccumulateGrad::apply(variable_list&& grads) -> variable_list {\n check_input_variables(\"AccumulateGrad\", grads, 1, 0);\n \u2026\n\n at::Tensor new_grad = callHooks(variable, std::move(grads[0]));\n std::lock_guard lock(mutex_);", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}} +{"page_content": "```\n\n### AccumulateGrad\n\nThis function is set during the graph creation in `gradient_edge` when the `Variable` object doesn\u2019t have a `grad_fn`. 
This is, it is a leaf node.\n\n```c++\n if (const auto& gradient = self.grad_fn()) {\n // \u2026\n } else {\n return Edge(grad_accumulator(self), 0);\n }\n\n```\n\nThe function body is defined in [`torch/csrc/autograd/functions/accumulate_grad.cpp`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/functions/accumulate_grad.cpp#L25-L63) and it essentially accumulates the input grads in the object\u2019s `.grad` attribute.\n\n```c++\nauto AccumulateGrad::apply(variable_list&& grads) -> variable_list {\n check_input_variables(\"AccumulateGrad\", grads, 1, 0);\n \u2026\n\n at::Tensor new_grad = callHooks(variable, std::move(grads[0]));\n std::lock_guard lock(mutex_);", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}} {"page_content": "at::Tensor& grad = variable.mutable_grad();\n accumulateGrad(\n variable,\n grad,\n new_grad,\n 1 + !post_hooks().empty() /* num_expected_refs */,\n [&grad](at::Tensor&& grad_update) { grad = std::move(grad_update); });\n return variable_list();\n}\n}} // namespace torch::autograd\n\n\n\n```\n\n[`accumulateGrad`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/functions/accumulate_grad.h#L100)\ndoes several checks on the tensors format and eventually performs the `variable_grad += new_grad;` accumulation.\n\n## Preparing the graph for execution\n\nNow, let\u2019s walk through [`Engine::execute`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L969-L1126). The first thing to do besides arguments consistency checks is to create the actual `GraphTask` object we described above. This object keeps all the metadata of the graph execution.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}} {"page_content": "```c++\nauto Engine::execute(const edge_list& roots,\n const variable_list& inputs,\n bool keep_graph,\n bool create_graph,\n bool accumulate_grad,\n const edge_list& outputs) -> variable_list {\n\n validate_outputs(roots, const_cast(inputs), [](const std::string& msg) {\n return msg;\n });\n\n // Checks\n\n auto graph_task = std::make_shared(\n /* keep_graph */ keep_graph,\n /* create_graph */ create_graph,\n /* depth */ not_reentrant_backward_call ? 0 : total_depth + 1,\n /* cpu_ready_queue */ local_ready_queue);\n\n // If we receive a single root, skip creating extra root node\n // \u2026\n // Prepare graph by computing dependencies\n // \u2026\n // Queue the root \n // \u2026\n // launch execution\n // \u2026\n}\n\n```", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}} {"page_content": "```\n\nAfter creating the `GraphTask`, we use its associated function if we only have one root node. If we have multiple root nodes, we create a special `GraphRoot` object as described before.\n\n```c++\n bool skip_dummy_node = roots.size() == 1;\n auto graph_root = skip_dummy_node ?\n roots.at(0).function :\n std::make_shared(roots, inputs);\n\n```\n\nThe next step is to fill the `dependencies_` map in the `GraphTask` object since the engine must know when it can execute a task. The `outputs` here is the `inputs` argument passed to the `torch.autograd.backward()` call in Python. But here, we have reversed the names since the gradients w.r.t. 
the inputs of the forward pass are now the outputs of the backward pass. And from now on, there is no concept of forward/backward, but only graph traversal and execution.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}} @@ -1083,18 +1086,18 @@ {"page_content": "1. Automatic mixed precision (AMP) training is now natively supported and a stable feature (See [here](https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/) for more details) - thanks for NVIDIA\u2019s contributions; \n2. Native TensorPipe support now added for tensor-aware, point-to-point communication primitives built specifically for machine learning; \n3. Added support for complex tensors to the frontend API surface;\n4. New profiling tools providing tensor-level memory consumption information;\n5. Numerous improvements and new features for both distributed data parallel (DDP) training and the remote procedural call (RPC) packages.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}} {"page_content": "Additionally, from this release onward, features will be classified as Stable, Beta and Prototype. Prototype features are not included as part of the binary distribution and are instead available through either building from source, using nightlies or via compiler flag. You can learn more about what this change means in the post [here](https://pytorch.org/blog/pytorch-feature-classification-changes/). You can also find the full release notes [here](https://github.com/pytorch/pytorch/releases). \n\n# Performance & Profiling\n\n## [Stable] Automatic Mixed Precision (AMP) Training", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}} {"page_content": "AMP allows users to easily enable automatic mixed precision training enabling higher performance and memory savings of up to 50% on Tensor Core GPUs. Using the natively supported `torch.cuda.amp` API, AMP provides convenience methods for mixed precision, where some operations use the `torch.float32 (float)` datatype and other operations use `torch.float16 (half)`. Some ops, like linear layers and convolutions, are much faster in `float16`. Other ops, like reductions, often require the dynamic range of `float32`. Mixed precision tries to match each op to its appropriate datatype.\n\n* Design doc ([Link](https://github.com/pytorch/pytorch/issues/25081))\n* Documentation ([Link](https://pytorch.org/docs/stable/amp.html))\n* Usage examples ([Link](https://pytorch.org/docs/stable/notes/amp_examples.html))\n\n## [Beta] Fork/Join Parallelism", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}} -{"page_content": "This release adds support for a language-level construct as well as runtime support for coarse-grained parallelism in TorchScript code. This support is useful for situations such as running models in an ensemble in parallel, or running bidirectional components of recurrent nets in parallel, and allows the ability to unlock the computational power of parallel architectures (e.g. many-core CPUs) for task level parallelism.\n\nParallel execution of TorchScript programs is enabled through two primitives: `torch.jit.fork` and `torch.jit.wait`. 
In the below example, we parallelize execution of `foo`:\n\n```python\nimport torch\nfrom typing import List\n\ndef foo(x):\n return torch.neg(x)\n\n@torch.jit.script\ndef example(x):\n futures = [torch.jit.fork(foo, x) for _ in range(100)]\n results = [torch.jit.wait(future) for future in futures]\n return torch.sum(torch.stack(results))\n\nprint(example(torch.ones([])))\n ```\n \n* Documentation ([Link](https://pytorch.org/docs/stable/jit.html))\n\n## [Beta] Memory Profiler", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}} -{"page_content": "The `torch.autograd.profiler` API now includes a memory profiler that lets you inspect the tensor memory cost of different operators inside your CPU and GPU models.\n\nHere is an example usage of the API:\n\n```python\nimport torch\nimport torchvision.models as models\nimport torch.autograd.profiler as profiler\n\nmodel = models.resnet18()\ninputs = torch.randn(5, 3, 224, 224)\nwith profiler.profile(profile_memory=True, record_shapes=True) as prof:\n model(inputs)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}} +{"page_content": "## [Beta] Fork/Join Parallelism \n\nThis release adds support for a language-level construct as well as runtime support for coarse-grained parallelism in TorchScript code. This support is useful for situations such as running models in an ensemble in parallel, or running bidirectional components of recurrent nets in parallel, and allows the ability to unlock the computational power of parallel architectures (e.g. many-core CPUs) for task level parallelism.\n\nParallel execution of TorchScript programs is enabled through two primitives: `torch.jit.fork` and `torch.jit.wait`. 
In the below example, we parallelize execution of `foo`:\n\n```python\nimport torch\nfrom typing import List\n\ndef foo(x):\n return torch.neg(x)\n\n@torch.jit.script\ndef example(x):\n futures = [torch.jit.fork(foo, x) for _ in range(100)]\n results = [torch.jit.wait(future) for future in futures]\n return torch.sum(torch.stack(results))\n\nprint(example(torch.ones([])))\n ```\n \n* Documentation ([Link](https://pytorch.org/docs/stable/jit.html))", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}} +{"page_content": "## [Beta] Memory Profiler \n\nThe `torch.autograd.profiler` API now includes a memory profiler that lets you inspect the tensor memory cost of different operators inside your CPU and GPU models.\n\nHere is an example usage of the API:\n\n```python\nimport torch\nimport torchvision.models as models\nimport torch.autograd.profiler as profiler\n\nmodel = models.resnet18()\ninputs = torch.randn(5, 3, 224, 224)\nwith profiler.profile(profile_memory=True, record_shapes=True) as prof:\n model(inputs)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}} {"page_content": "# NOTE: some columns were removed for brevity\nprint(prof.key_averages().table(sort_by=\"self_cpu_memory_usage\", row_limit=10))\n# --------------------------- --------------- --------------- ---------------\n# Name CPU Mem Self CPU Mem Number of Calls\n# --------------------------- --------------- --------------- ---------------\n# empty 94.79 Mb 94.79 Mb 123\n# resize_ 11.48 Mb 11.48 Mb 2\n# addmm 19.53 Kb 19.53 Kb 1\n# empty_strided 4 b 4 b 1\n# conv2d 47.37 Mb 0 b 20\n# --------------------------- --------------- --------------- ---------------\n ```\n\n* PR ([Link](https://github.com/pytorch/pytorch/pull/37775))\n* Documentation ([Link](https://pytorch.org/docs/stable/autograd.html#profiler))\n\n# Distributed Training & RPC \n\n## [Beta] TensorPipe backend for RPC", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}} -{"page_content": "PyTorch 1.6 introduces a new backend for the RPC module which leverages the TensorPipe library, a tensor-aware point-to-point communication primitive targeted at machine learning, intended to complement the current primitives for distributed training in PyTorch (Gloo, MPI, ...) which are collective and blocking. The pairwise and asynchronous nature of TensorPipe lends itself to new networking paradigms that go beyond data parallel: client-server approaches (e.g., parameter server for embeddings, actor-learner separation in Impala-style RL, ...) and model and pipeline parallel training (think GPipe), gossip SGD, etc.\n\n```python\n# One-line change needed to opt in\ntorch.distributed.rpc.init_rpc(\n ...\n backend=torch.distributed.rpc.BackendType.TENSORPIPE,\n)\n\n# No changes to the rest of the RPC API\ntorch.distributed.rpc.rpc_sync(...)\n```\n\n* Design doc ([Link](https://github.com/pytorch/pytorch/issues/35251))\n* Documentation ([Link](https://pytorch.org/docs/stable/rpc/index.html))\n\n## [Beta] DDP+RPC", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}} -{"page_content": "## [Beta] DDP+RPC \n\nPyTorch Distributed supports two powerful paradigms: DDP for full sync data parallel training of models and the RPC framework which allows for distributed model parallelism. 
Previously, these two features worked independently and users couldn\u2019t mix and match these to try out hybrid parallelism paradigms.\n\nStarting in PyTorch 1.6, we\u2019ve enabled DDP and RPC to work together seamlessly so that users can combine these two techniques to achieve both data parallelism and model parallelism. An example is where users would like to place large embedding tables on parameter servers and use the RPC framework for embedding lookups, but store smaller dense parameters on trainers and use DDP to synchronize the dense parameters. Below is a simple code snippet. \n\n```python\n// On each trainer\n\nremote_emb = create_emb(on=\"ps\", ...)\nddp_model = DDP(dense_model)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}} +{"page_content": "## [Beta] TensorPipe backend for RPC\n\nPyTorch 1.6 introduces a new backend for the RPC module which leverages the TensorPipe library, a tensor-aware point-to-point communication primitive targeted at machine learning, intended to complement the current primitives for distributed training in PyTorch (Gloo, MPI, ...) which are collective and blocking. The pairwise and asynchronous nature of TensorPipe lends itself to new networking paradigms that go beyond data parallel: client-server approaches (e.g., parameter server for embeddings, actor-learner separation in Impala-style RL, ...) and model and pipeline parallel training (think GPipe), gossip SGD, etc.\n\n```python\n# One-line change needed to opt in\ntorch.distributed.rpc.init_rpc(\n ...\n backend=torch.distributed.rpc.BackendType.TENSORPIPE,\n)\n\n# No changes to the rest of the RPC API\ntorch.distributed.rpc.rpc_sync(...)\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}} +{"page_content": "* Design doc ([Link](https://github.com/pytorch/pytorch/issues/35251))\n* Documentation ([Link](https://pytorch.org/docs/stable/rpc/index.html))\n\n## [Beta] DDP+RPC \n\nPyTorch Distributed supports two powerful paradigms: DDP for full sync data parallel training of models and the RPC framework which allows for distributed model parallelism. Previously, these two features worked independently and users couldn\u2019t mix and match these to try out hybrid parallelism paradigms.\n\nStarting in PyTorch 1.6, we\u2019ve enabled DDP and RPC to work together seamlessly so that users can combine these two techniques to achieve both data parallelism and model parallelism. An example is where users would like to place large embedding tables on parameter servers and use the RPC framework for embedding lookups, but store smaller dense parameters on trainers and use DDP to synchronize the dense parameters. Below is a simple code snippet. 
\n\n```python\n// On each trainer\n\nremote_emb = create_emb(on=\"ps\", ...)\nddp_model = DDP(dense_model)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}} {"page_content": "for data in batch:\n with torch.distributed.autograd.context():\n res = remote_emb(data)\n loss = ddp_model(res)\n torch.distributed.autograd.backward([loss])\n```\n\n* DDP+RPC Tutorial ([Link](https://pytorch.org/tutorials/advanced/rpc_ddp_tutorial.html))\n* Documentation ([Link](https://pytorch.org/docs/stable/rpc/index.html))\n* Usage Examples ([Link](https://github.com/pytorch/examples/pull/800))\n\n## [Beta] RPC - Asynchronous User Functions", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}} -{"page_content": "RPC Asynchronous User Functions supports the ability to yield and resume on the server side when executing a user-defined function. Prior to this feature, when a callee processes a request, one RPC thread waits until the user function returns. If the user function contains IO (e.g., nested RPC) or signaling (e.g., waiting for another request to unblock), the corresponding RPC thread would sit idle waiting for these events. As a result, some applications have to use a very large number of threads and send additional RPC requests, which can potentially lead to performance degradation. To make a user function yield on such events, applications need to: 1) Decorate the function with the `@rpc.functions.async_execution` decorator; and 2) Let the function return a `torch.futures.Future` and install the resume logic as callbacks on the `Future` object. See below for an example:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}} +{"page_content": "## [Beta] RPC - Asynchronous User Functions\n\nRPC Asynchronous User Functions supports the ability to yield and resume on the server side when executing a user-defined function. Prior to this feature, when a callee processes a request, one RPC thread waits until the user function returns. If the user function contains IO (e.g., nested RPC) or signaling (e.g., waiting for another request to unblock), the corresponding RPC thread would sit idle waiting for these events. As a result, some applications have to use a very large number of threads and send additional RPC requests, which can potentially lead to performance degradation. To make a user function yield on such events, applications need to: 1) Decorate the function with the `@rpc.functions.async_execution` decorator; and 2) Let the function return a `torch.futures.Future` and install the resume logic as callbacks on the `Future` object. 
See below for an example:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}} {"page_content": "```python\n@rpc.functions.async_execution\ndef async_add_chained(to, x, y, z):\n return rpc.rpc_async(to, torch.add, args=(x, y)).then(\n lambda fut: fut.wait() + z\n )\n\nret = rpc.rpc_sync(\n \"worker1\", \n async_add_chained, \n args=(\"worker2\", torch.ones(2), 1, 1)\n)\n \nprint(ret) # prints tensor([3., 3.])\n```\n\n* Tutorial for performant batch RPC using Asynchronous User Functions ([Link](https://github.com/pytorch/tutorials/blob/release/1.6/intermediate_source/rpc_async_execution.rst))\n* Documentation ([Link](https://pytorch.org/docs/stable/rpc.html#torch.distributed.rpc.functions.async_execution))\n* Usage examples ([Link](https://github.com/pytorch/examples/tree/master/distributed/rpc/batch))\n\n# Frontend API Updates\n\n## [Beta] Complex Numbers", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}} -{"page_content": "The PyTorch 1.6 release brings beta level support for complex tensors including torch.complex64 and torch.complex128 dtypes. A complex number is a number that can be expressed in the form a + bj, where a and b are real numbers, and j is a solution of the equation x^2 = \u22121. Complex numbers frequently occur in mathematics and engineering, especially in signal processing and the area of complex neural networks is an active area of research. The beta release of complex tensors will support common PyTorch and complex tensor functionality, plus functions needed by Torchaudio, ESPnet and others. While this is an early version of this feature, and we expect it to improve over time, the overall goal is provide a NumPy compatible user experience that leverages PyTorch\u2019s ability to run on accelerators and work with autograd to better support the scientific community. \n\n# Mobile Updates", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}} +{"page_content": "# Frontend API Updates\n\n## [Beta] Complex Numbers \n\nThe PyTorch 1.6 release brings beta level support for complex tensors including torch.complex64 and torch.complex128 dtypes. A complex number is a number that can be expressed in the form a + bj, where a and b are real numbers, and j is a solution of the equation x^2 = \u22121. Complex numbers frequently occur in mathematics and engineering, especially in signal processing and the area of complex neural networks is an active area of research. The beta release of complex tensors will support common PyTorch and complex tensor functionality, plus functions needed by Torchaudio, ESPnet and others. While this is an early version of this feature, and we expect it to improve over time, the overall goal is provide a NumPy compatible user experience that leverages PyTorch\u2019s ability to run on accelerators and work with autograd to better support the scientific community. \n\n# Mobile Updates", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}} {"page_content": "# Mobile Updates\n\nPyTorch 1.6 brings increased performance and general stability for mobile on-device inference. 
We squashed a few bugs, continued maintenance and added few new features while improving fp32 and int8 performance on a large variety of ML model inference on CPU backend.\n\n## [Beta] Mobile Features and Performance \n\n* Stateless and stateful XNNPACK Conv and Linear operators\n* Stateless MaxPool2d + JIT optimization passes\n* JIT pass optimizations: Conv + BatchNorm fusion, graph rewrite to replace conv2d/linear with xnnpack ops, relu/hardtanh fusion, dropout removal\n* QNNPACK integration removes requantization scale constraint\n* Per-channel quantization for conv, linear and dynamic linear\n* Disable tracing for mobile client to save ~600 KB on full-jit builds\n\n# Updated Domain Libraries\n\n## torchvision 0.7", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}} -{"page_content": "## torchvision 0.7 \n\ntorchvision 0.7 introduces two new pretrained semantic segmentation models, [FCN ResNet50](https://arxiv.org/abs/1411.4038) and [DeepLabV3 ResNet50](https://arxiv.org/abs/1706.05587), both trained on COCO and using smaller memory footprints than the ResNet101 backbone. We also introduced support for AMP (Automatic Mixed Precision) autocasting for torchvision models and operators, which automatically selects the floating point precision for different GPU operations to improve performance while maintaining accuracy. \n\n* Release notes ([Link](https://github.com/pytorch/vision/releases))\n\n## torchaudio 0.6\n\ntorchaudio now officially supports Windows. This release also introduces a new model module (with wav2letter included), new functionals (contrast, cvm, dcshift, overdrive, vad, phaser, flanger, biquad), datasets (GTZAN, CMU), and a new optional sox backend with support for TorchScript.\n\n* Release notes ([Link](https://github.com/pytorch/audio/releases))\n\n# Additional updates\n\n## HACKATHON", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}} -{"page_content": "## HACKATHON\n\nThe Global PyTorch Summer Hackathon is back! This year, teams can compete in three categories virtually:\n\n 1. **PyTorch Developer Tools:** Tools or libraries designed to improve productivity and efficiency of PyTorch for researchers and developers\n 2. **Web/Mobile Applications powered by PyTorch:** Applications with web/mobile interfaces and/or embedded devices powered by PyTorch \n 3. **PyTorch Responsible AI Development Tools:** Tools, libraries, or web/mobile apps for responsible AI development\n\nThis is a great opportunity to connect with the community and practice your machine learning skills. \n\n* [Join the hackathon](http://pytorch2020.devpost.com/)\n* [Watch educational videos](https://www.youtube.com/pytorch)\n\n\n## LPCV Challenge", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}} +{"page_content": "# Updated Domain Libraries\n\n## torchvision 0.7 \n\ntorchvision 0.7 introduces two new pretrained semantic segmentation models, [FCN ResNet50](https://arxiv.org/abs/1411.4038) and [DeepLabV3 ResNet50](https://arxiv.org/abs/1706.05587), both trained on COCO and using smaller memory footprints than the ResNet101 backbone. We also introduced support for AMP (Automatic Mixed Precision) autocasting for torchvision models and operators, which automatically selects the floating point precision for different GPU operations to improve performance while maintaining accuracy. 
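As a rough sketch of what autocasting looks like with one of the new segmentation models (the model choice, input shape, and CUDA device here are assumptions, not part of the release notes):

```python
import torch
import torchvision

model = torchvision.models.segmentation.fcn_resnet50(pretrained=True).eval().cuda()
images = torch.randn(2, 3, 520, 520, device="cuda")

with torch.no_grad(), torch.cuda.amp.autocast():
    # Convolutions run in float16 where safe; precision-sensitive ops stay in float32.
    out = model(images)["out"]
```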
\n\n* Release notes ([Link](https://github.com/pytorch/vision/releases))\n\n## torchaudio 0.6\n\ntorchaudio now officially supports Windows. This release also introduces a new model module (with wav2letter included), new functionals (contrast, cvm, dcshift, overdrive, vad, phaser, flanger, biquad), datasets (GTZAN, CMU), and a new optional sox backend with support for TorchScript.\n\n* Release notes ([Link](https://github.com/pytorch/audio/releases))", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}} +{"page_content": "# Additional updates\n\n## HACKATHON\n\nThe Global PyTorch Summer Hackathon is back! This year, teams can compete in three categories virtually:\n\n 1. **PyTorch Developer Tools:** Tools or libraries designed to improve productivity and efficiency of PyTorch for researchers and developers\n 2. **Web/Mobile Applications powered by PyTorch:** Applications with web/mobile interfaces and/or embedded devices powered by PyTorch \n 3. **PyTorch Responsible AI Development Tools:** Tools, libraries, or web/mobile apps for responsible AI development\n\nThis is a great opportunity to connect with the community and practice your machine learning skills. \n\n* [Join the hackathon](http://pytorch2020.devpost.com/)\n* [Watch educational videos](https://www.youtube.com/pytorch)\n\n\n## LPCV Challenge", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}} {"page_content": "## LPCV Challenge\n\nThe [2020 CVPR Low-Power Vision Challenge (LPCV) - Online Track for UAV video](https://lpcv.ai/2020CVPR/video-track) submission deadline is coming up shortly. You have until July 31, 2020 to build a system that can discover and recognize characters in video captured by an unmanned aerial vehicle (UAV) accurately using PyTorch and Raspberry Pi 3B+. \n\n## Prototype Features\n\nTo reiterate, Prototype features in PyTorch are early features that we are looking to gather feedback on, gauge the usefulness of and improve ahead of graduating them to Beta or Stable. The following features are not part of the PyTorch 1.6 release and instead are available in nightlies with separate docs/tutorials to help facilitate early usage and feedback.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}} {"page_content": "#### Distributed RPC/Profiler\nAllow users to profile training jobs that use `torch.distributed.rpc` using the autograd profiler, and remotely invoke the profiler in order to collect profiling information across different nodes. The RFC can be found [here](https://github.com/pytorch/pytorch/issues/39675) and a short recipe on how to use this feature can be found [here](https://github.com/pytorch/tutorials/tree/master/prototype_source).\n\n#### TorchScript Module Freezing\nModule Freezing is the process of inlining module parameters and attributes values into the TorchScript internal representation. Parameter and attribute values are treated as final value and they cannot be modified in the frozen module. 
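A minimal sketch of the workflow, assuming the `torch.jit.freeze` entry point that the feature eventually shipped under (the prototype in this release is only available in nightlies, so the exact call may differ):

```python
import torch
import torchvision

model = torchvision.models.resnet18().eval()   # freezing requires eval mode
scripted = torch.jit.script(model)
frozen = torch.jit.freeze(scripted)            # parameters/attributes inlined as constants
out = frozen(torch.randn(1, 3, 224, 224))
```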
The PR for this feature can be found [here](https://github.com/pytorch/pytorch/pull/32178) and a short tutorial on how to use this feature can be found [here](https://github.com/pytorch/tutorials/tree/master/prototype_source).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}} {"page_content": "#### Graph Mode Quantization\nEager mode quantization requires users to make changes to their model, including explicitly quantizing activations, module fusion, rewriting use of torch ops with Functional Modules and quantization of functionals are not supported. If we can trace or script the model, then the quantization can be done automatically with graph mode quantization without any of the complexities in eager mode, and it is configurable through a `qconfig_dict`. A tutorial on how to use this feature can be found [here](https://github.com/pytorch/tutorials/tree/master/prototype_source).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}} @@ -1113,15 +1116,15 @@ {"page_content": "Alternatively, the original mask can be passed into the `attn_mask` field however due to the mentioned kernel constraints that would limit the implementation to only support the generic `sdpa_math` kernel.\n\n\n### Step 3 (Bonus): Faster matmuls with padding\n\nOn top of the performance improvements from SDPA, our analysis yielded a nice ancillary win. In Andrej's words \"The most dramatic optimization to nanoGPT so far (~25% speedup) is to simply increase the vocab size from 50257 to 50304 (nearest multiple of 64).\"\n\n\n![Tweet by Andrej Karpathy](/assets/images/2023-04-18-accelerating-large-language-models/tweet.png){:style=\"max-height:800px; width:100%; max-width:600px\"}", "metadata": {"source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"}} {"page_content": "The vocab size determines the dimensions of matmuls in the output layer of GPT, and these are so large that they were taking a _majority_ of the time for the entire training loop! We discovered that they were achieving performance significantly below the peak throughput achievable on the A100 GPU, and guessed from [NVIDIA's matmul documentation](https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html) that 64-element alignment would yield better results. Indeed, padding these matmuls achieves nearly a 3x speedup! The underlying cause is that unaligned memory accesses significantly reduce efficiency. A deeper analysis can be found in [this Twitter thread](https://twitter.com/cHHillee/status/1630274804795445248).\n\nWith this optimization we were able to further reduce training time from ~113 ms (using flash attention) to ~87 ms per batch.\n\n\n## Results\n\nThe figure below demonstrates the performance gained using Pytorch custom kernels. 
Here are the exact figures:", "metadata": {"source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"}} {"page_content": "* baseline (nanoGPT implementation): ~143ms\n* sdpa_math (generic): ~134ms (6.71% faster)\n* `mem_efficient` kernel: ~119ms (20.16% faster)\n* `flash_attention` kernel: ~113ms (26.54% faster)\n* flash_attention + padded vocab: ~87ms (64.37% faster)\n\nAll code was run on an 8 x NVIDIA Corporation A100 server with 80 GB HBM [A100 SXM4 80GB], and for the purpose of this experiment dropout was set to 0.\n\n\n![Using scaled dot product attention with custom kernels and torch.compile delivers significant speedups for training large language models](/assets/images/2023-04-18-accelerating-large-language-models/PyTorch_Better-Transformer_Chart-2.png){:style=\"max-height:800px; width:100%\"} \n\n\n**Figure 2:** Using scaled dot product attention with custom kernels and torch.compile delivers significant speedups for training large language models, such as for [nanoGPT](https://github.com/karpathy/nanoGPT) shown here.\n\n\n## Enhancing Numerical Model Stability", "metadata": {"source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"}} -{"page_content": "In addition to being faster, PyTorch's implementation offers increased numerical stability by avoiding loss of precision in many execution scenarios. There is a great explanation [here](https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/118), but essentially the PyTorch implementation scales the Query and Key matrices _before_ multiplication, which is said to be more stable and avoid loss of precision. Because of the merged custom kernel architecture of SDPA, this scaling does not introduce additional overhead in the computation of the attention result. In comparison, an implementation from the individual computational components would require separate pre-scaling at additional cost. For an additional explanation, see Appendix A.\n\n\n### Improved Memory Consumption", "metadata": {"source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"}} -{"page_content": "Yet another large advantage of using the torch SDPA kernels is the reduced memory footprint, which allows for the utilization of larger batch sizes. The following chart compares the best validation loss after one hour of training for both flash attention and the baseline implementations of causal attention. As can be seen, the maximum batch size achieved with the baseline causal attention implementation (on 8 x NVIDIA Corporation A100 server with 80 GB HBM) was 24, significantly less then the maximum achieved with flash attention, which was 39.\n\n![Using Flash Attention enables the usage of larger batch sizes](/assets/images/2023-04-18-accelerating-large-language-models/chart.png){:style=\"max-height:800px; width:100%\"} \n\n\n**Figure 3:** Using Flash Attention enables the usage of larger batch sizes, allowing users to achieve lower validation loss after one hour of training (smaller is better).\n\n\n## Conclusion", "metadata": {"source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"}} +{"page_content": "## Enhancing Numerical Model Stability\n\nIn addition to being faster, PyTorch's implementation offers increased numerical stability by avoiding loss of precision in many execution scenarios. 
There is a great explanation [here](https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/118), but essentially the PyTorch implementation scales the Query and Key matrices _before_ multiplication, which is said to be more stable and avoid loss of precision. Because of the merged custom kernel architecture of SDPA, this scaling does not introduce additional overhead in the computation of the attention result. In comparison, an implementation from the individual computational components would require separate pre-scaling at additional cost. For an additional explanation, see Appendix A.\n\n\n### Improved Memory Consumption", "metadata": {"source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"}} +{"page_content": "### Improved Memory Consumption\n\nYet another large advantage of using the torch SDPA kernels is the reduced memory footprint, which allows for the utilization of larger batch sizes. The following chart compares the best validation loss after one hour of training for both flash attention and the baseline implementations of causal attention. As can be seen, the maximum batch size achieved with the baseline causal attention implementation (on 8 x NVIDIA Corporation A100 server with 80 GB HBM) was 24, significantly less then the maximum achieved with flash attention, which was 39.\n\n![Using Flash Attention enables the usage of larger batch sizes](/assets/images/2023-04-18-accelerating-large-language-models/chart.png){:style=\"max-height:800px; width:100%\"} \n\n\n**Figure 3:** Using Flash Attention enables the usage of larger batch sizes, allowing users to achieve lower validation loss after one hour of training (smaller is better).\n\n\n## Conclusion", "metadata": {"source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"}} {"page_content": "## Conclusion\n\nAccelerated PyTorch 2 Transformers were designed to make the training and production deployment of state-of-the-art transformer models affordable and integrated with PyTorch 2.0 model JIT compilation. The newly introduced PyTorch SDPA operator provides improved performance for training Transformer models and is particularly valuable for the expensive Large Language Model training. In this post we demonstrate a number of optimizations on the exemplary nanoGPT model including:\n\n\n\n* Over 26% training speedup, when compared against the baseline with constant batch size\n* An additional speedup achieved with padded vocabulary, bringing the total optimization to approximately 64% compared to the baseline\n* Additional numerical stability\n\n\n## Appendix A: Analyzing Attention Numeric Stability", "metadata": {"source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"}} {"page_content": "In this section we provide a more in depth explanation of the previously mentioned enhanced numerical stability which is gained by prescaling SDPA\u2019s input vectors. The following is a simplified version of nanoGPT\u2019s mathematical implementation of SDPA. 
The important thing to note here is that the query undergoes matrix multiplication without being scaled.\n\n```\n# nanoGPT implementation of SDPA\n# notice q (our query vector) is not scaled !\natt = (q @ k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1)))\natt = att.masked_fill(self.bias[:,:,:T,:T] == 0, float('-inf'))\natt = F.softmax(att, dim=-1)\n\n# Dropout is set to 0, so we can safely ignore this line in the implementation# att = self.attn_dropout(att) \n\ny_nanogpt = att @ v # (B, nh, T, T) x (B, nh, T, hs) -> (B, nh, T, hs)\n```\n\nThe following is the equivalent mathematical implementation in torch\u2019s `scaled_dot_product_attention`.", "metadata": {"source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"}} {"page_content": "```\n# PyTorch implementation of SDPA\nembed_size = q.size(-1)\nscaling_factor = math.sqrt(math.sqrt(embed_size))\nq = q / scaling_factor \t# notice q _is_ scaled here !\n\n# same as above, but with scaling factor\natt = q @ (k.transpose(-2, -1) / scaling_factor)\natt = att.masked_fill(self.bias[:,:,:T,:T] == 0, float('-inf'))\natt = F.softmax(att0, dim=-1)\n\n# Dropout is set to 0, so we can safely ignore this line in the implementation# att = self.attn_dropout(att) \n\ny_scale_before = att @ v\n```\n\nMathematically both approaches should be equivalent, however our experimentation shows that in practice we receive different results from each approach. \n\nUsing the approach above, we verified `y_scale_before` matches the expected output from using the `scaled_dot_product_attention `method while `y_nanogpt` does not.\n\nThe `torch.allclose` method was used to test equivalence. Specifically, we showed that:", "metadata": {"source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"}} {"page_content": "```\ny_sdpa = torch.nn.functional._scaled_dot_product_attention(\n\tq,\n\tk,\n\tv,\n\tattn_mask=self.bias[:,:,:T,:T] != 0,\n\tdropout_p=0.0,\n\tneed_attn_weights=False,\n\tis_causal=False,\n)\n\ntorch.allclose(y_sdpa, y_nanogpt) # False, indicating fp issues\ntorch.allclose(y_sdpa, y_scale_before) # True, as expected\n```\n\n## Appendix B: Reproducing Experiment Results", "metadata": {"source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"}} -{"page_content": "Researchers seeking to reproduce these results should start with the following commit from Andrej\u2019s nanoGPT repository - **b3c17c6c6a363357623f223aaa4a8b1e89d0a465**. This commit was used as the baseline when measuring the per batch speed improvements. For results which include padded vocabulary optimizations (which yielded the most significant improvements to batch speed), use the following commit - **77e7e04c2657846ddf30c1ca2dd9f7cbb93ddeab**. From either checkout, selecting kernels for experimentation is made trivial with the use of the [torch.backends](https://pytorch.org/docs/stable/backends.html) API. \n\nThe desired kernel can be selected via a context manager:\n\n```\nwith torch.backends.cuda.sdp_kernel (\n enable_math = False,\n enable_flash = False,\n enable_mem_efficient = True\n):\n train(model)\n```", "metadata": {"source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"}} +{"page_content": "## Appendix B: Reproducing Experiment Results\n\nResearchers seeking to reproduce these results should start with the following commit from Andrej\u2019s nanoGPT repository - **b3c17c6c6a363357623f223aaa4a8b1e89d0a465**. 
This commit was used as the baseline when measuring the per batch speed improvements. For results which include padded vocabulary optimizations (which yielded the most significant improvements to batch speed), use the following commit - **77e7e04c2657846ddf30c1ca2dd9f7cbb93ddeab**. From either checkout, selecting kernels for experimentation is made trivial with the use of the [torch.backends](https://pytorch.org/docs/stable/backends.html) API. \n\nThe desired kernel can be selected via a context manager:\n\n```\nwith torch.backends.cuda.sdp_kernel (\n enable_math = False,\n enable_flash = False,\n enable_mem_efficient = True\n):\n train(model)\n```", "metadata": {"source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: \"Straggler Mitigation On PyTorch DDP By Hierarchical SGD\"\nauthor: Yi Wang (Cruise AI), Rohan Varma (Meta AI)\n---\n\n[PyTorch DDP](https://pytorch.org/docs/stable/notes/ddp.html) has been widely adopted across the industry for distributed training, which by default runs synchronous SGD to synchronize gradients across model replicas at every step. The performance of this technique is critical for fast iteration during model exploration as well as resource and cost saving. The performance is critical for fast iteration and cost saving of model development and exploration. To resolve a ubiquitous performance bottleneck introduced by slow nodes in large-scale training, Cruise and Meta co-developed a solution based on the [Hierarchical SGD](https://arxiv.org/abs/2007.13819) algorithm to significantly accelerate training in the presence of these stragglers.\n\n\n## The Need For Straggler Mitigation", "metadata": {"source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"}} -{"page_content": "In DDP setup, a straggler problem can occur when one or more processes run much slower (\"stragglers\") than other processes. When this happens, all the processes have to wait for the stragglers before synchronizing gradients and completing the communication, which essentially bottlenecks distributed performance to the slowest worker.As a result, even for the cases of training relatively small models, the communication cost can still be a major performance bottleneck.\n\n\n### Potential Causes of Stragglers\n\nSevere straggler issues are usually caused by workload imbalance before synchronization, and many factors can contribute to this imbalance. For instance, some data loader workers in the distributed environment can become stragglers, because some input examples can be outliers in terms of the data size, or the data transfer of some examples can be drastically slowed down due to unstable network I/O, or the on-the-fly data transformation costs can have a high variance.", "metadata": {"source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"}} +{"page_content": "## The Need For Straggler Mitigation\n\nIn DDP setup, a straggler problem can occur when one or more processes run much slower (\"stragglers\") than other processes. 
When this happens, all the processes have to wait for the stragglers before synchronizing gradients and completing the communication, which essentially bottlenecks distributed performance to the slowest worker.As a result, even for the cases of training relatively small models, the communication cost can still be a major performance bottleneck.\n\n\n### Potential Causes of Stragglers\n\nSevere straggler issues are usually caused by workload imbalance before synchronization, and many factors can contribute to this imbalance. For instance, some data loader workers in the distributed environment can become stragglers, because some input examples can be outliers in terms of the data size, or the data transfer of some examples can be drastically slowed down due to unstable network I/O, or the on-the-fly data transformation costs can have a high variance.", "metadata": {"source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"}} {"page_content": "Besides data loading, other phases before gradient synchronization can also cause stragglers, such as unbalanced workloads of embedding table lookup during the forward pass in recommendation systems.\n\n\n### The Appearance of Stragglers\n\nIf we profile DDP training jobs that have stragglers, we can find that some processes may have much higher gradient synchronization costs (a.k.a., allreducing gradients) than other processes at a certain step. As a result, the distributed performance can be dominated by the communication cost even if the model size is very small. In this case, some processes run faster than the straggler(s) at a step, and hence they have to wait for the stragglers and spend a much longer time on allreduce.", "metadata": {"source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"}} {"page_content": "The below shows screenshots of two trace files output by PyTorch profiler in a use case. Each screenshot profiles 3 steps.\n* The first screenshot shows that a process has a very high allreduce cost in both the first and the third steps, because this process reaches the synchronization phase earlier than the straggler(s), and it spends more time on waiting. On the other hand, the allreduce cost is relatively small in the second step, this suggests that 1) there is no straggler at this step; or 2) this process is the straggler among all the processes, so it does not need to wait for any other process.\n\n\n![chart showing allreduce cost](/assets/images/straggler-mitigation/straggler-mitigation-1.png){:style=\"max-height:800px; width:100%\"} \n\nBoth the 1st and the 3rd Steps Are Slowed Down by Stragglers\n\n\n* The second screenshot shows a normal case without stragglers. In this case, all the gradient synchronizations are relatively short.", "metadata": {"source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"}} {"page_content": "![chart showing normal case without stragglers](/assets/images/straggler-mitigation/straggler-mitigation-2.png){:style=\"max-height:800px; width:100%\"} \n\nNormal Case Without Stragglers\n\n\n## Hierarchical SGD in PyTorch\n\nRecently hierarchical SGD has been proposed to optimize the communication costs by mainly reducing the total amount of data transfer in large-scale distributed training, and multiple convergence analyses have been provided ([example](https://arxiv.org/pdf/2010.12998.pdf)). 
As a main novelty of this post, at Cruise we could leverage hierarchical SGD to mitigate stragglers, which may also occur on training relatively small models. Our implementation has been upstreamed by Cruise to PyTorch in early 2022.\n\n\n### How Does Hierarchical SGD Work?\n\nAs the name implies, hierarchical SGD organizes all the processes into groups at different levels as a hierarchy, and runs synchronization by following the rules below:", "metadata": {"source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"}} @@ -1133,7 +1136,7 @@ {"page_content": "Straggler mitigation is not a novel study in distributed training. Multiple approaches have been proposed, such as [gossip SGD](https://arxiv.org/pdf/1705.09056.pdf), [data encoding](https://proceedings.neurips.cc/paper/2017/file/663772ea088360f95bac3dc7ffb841be-Paper.pdf), [gradient coding](http://proceedings.mlr.press/v70/tandon17a/tandon17a.pdf), as well as some particularly designed for parameter-server architecture, including [backup workers](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/45187.pdf) and [stale synchronous parallel](http://www.cs.cmu.edu/~seunghak/SSPTable_NIPS2013.pdf). However, to the best of our knowledge, before this effort we have not found a good open-source PyTorch implementation of straggler mitigation that can work like a plugin to our training system at Cruise. In contrast, our implementation only requires the minimal changes \u2013 no need to modify the existing code or tune any existing hyperparameters. This is a very appealing advantage for industry users.", "metadata": {"source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"}} {"page_content": "As the code example below shows, only a few lines need to be added to the setup of DDP model, and the training loop code can keep untouched. As explained previously, hierarchical SGD is an extended form of local SGD, so the enablement can be quite similar to local SGD (see PyTorch docs of [PostLocalSGDOptimizer](https://pytorch.org/docs/stable/distributed.optim.html#torch.distributed.optim.PostLocalSGDOptimizer)):\n\n1. Register a post-local SGD communication hook to run a warmup stage of fully synchronous SGD and defer hierarchical SGD.\n2. 
Create a post-local SGD optimizer that wraps an existing local optimizer and a hierarchical SGD configuration.\n\n```\nimport torch.distributed.algorithms.model_averaging.hierarchical_model_averager as hierarchicalSGD\nfrom torch.distributed.algorithms.ddp_comm_hooks.post_localSGD_hook import (\n PostLocalSGDState,\n post_localSGD_hook,\n)\nfrom torch.distributed.optim import PostLocalSGDOptimizer", "metadata": {"source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"}} {"page_content": "ddp_model = nn.parallel.DistributedDataParallel(\n module=model,\n device_ids=[rank],\n)\n\n# Register a post-local SGD communication hook for the warmup.\nsubgroup, _ = torch.distributed.new_subgroups()\nstate = PostLocalSGDState(subgroup=subgroup, start_localSGD_iter=1_000)\nddp_model.register_comm_hook(state, post_localSGD_hook)\n\n# Wraps the existing (local) optimizer to run hierarchical model averaging.\noptim = PostLocalSGDOptimizer(\n optim=optim,\n averager=hierarchicalSGD.HierarchicalModelAverager(\n # The config runs a 4-level hierarchy SGD among 128 processes:\n # 1) Each process runs mini-batch SGD locally;\n # 2) Each 8-process group synchronize every 2 steps;\n # 3) Each 32-process group synchronize every 4 steps;\n # 4) All 128 processes synchronize every 8 steps.\n period_group_size_dict=OrderedDict([(2, 8), (4, 32), (8, 128)]),\n # Do not run hierarchical SGD until 1K steps for model parity.\n warmup_steps=1_000)\n)\n```\n\n### Algorithm Hyperparameters", "metadata": {"source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"}} -{"page_content": "Hierarchical SGD has two major hyperparameters: _period_group_size_dict_ and _warmup_steps_.\n\n* **period_group_size_dict** is an ordered dictionary mapping from synchronization period to process group size, used for initializing process groups of different sizes in a hierarchy to synchronize parameters concurrently. A larger group is expected to use a larger synchronization period.\n* **warmup_steps** specifies a number of steps as the warmup stage to run synchronous SGD before hierarchical SGD. Similar to [post-local SGD](https://arxiv.org/pdf/1808.07217.pdf) algorithm, a warmup stage is usually recommended to achieve a higher accuracy. The value should be the same as _start_localSGD_iter_ arg used in _PostLocalSGDState_ when post_localSGD_hook is registered. Typically the warmup stage should at least cover the beginning of training when the loss is decreased drastically.", "metadata": {"source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"}} +{"page_content": "### Algorithm Hyperparameters\n\nHierarchical SGD has two major hyperparameters: _period_group_size_dict_ and _warmup_steps_.\n\n* **period_group_size_dict** is an ordered dictionary mapping from synchronization period to process group size, used for initializing process groups of different sizes in a hierarchy to synchronize parameters concurrently. A larger group is expected to use a larger synchronization period.\n* **warmup_steps** specifies a number of steps as the warmup stage to run synchronous SGD before hierarchical SGD. Similar to [post-local SGD](https://arxiv.org/pdf/1808.07217.pdf) algorithm, a warmup stage is usually recommended to achieve a higher accuracy. The value should be the same as _start_localSGD_iter_ arg used in _PostLocalSGDState_ when post_localSGD_hook is registered. 
Typically the warmup stage should at least cover the beginning of training when the loss is decreased drastically.", "metadata": {"source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"}} {"page_content": "A subtle difference between the PyTorch implementation and the initial design proposed by relevant papers is that, after the warmup stage, by default the processes within each host still run intra-host gradient synchronization at every step. This is because that:\n\n1. The intra-host communication is relatively cheap, and it can usually significantly accelerate the convergence;\n2. The intra-host group (of size 4 or 8 for most industry users) can usually be a good choice of the smallest group of processes that synchronize most frequently in hierarchical SGD. If the synchronization period is 1, then gradient synchronization is faster than model parameter synchronization (a.k.a., model averaging), because DDP automatically overlaps gradient synchronization and the backward pass.\n\nSuch intra-host gradient synchronization can be disabled by unsetting _post_local_gradient_allreduce_ arg in _PostLocalSGDState_.\n\n\n## Demonstration", "metadata": {"source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"}} {"page_content": "## Demonstration\n\nNow we demonstrate that hierarchical SGD can accelerate distributed training by mitigating stragglers.\n\n\n### Experimental Setup\n\nWe compared the performance of hierarchical SGD against local SGD and synchronous SGD on [ResNet18](https://pytorch.org/vision/main/models/generated/torchvision.models.resnet18.html) (model size: 45MB). Since the model is so small, the training is not bottlenecked by data transfer cost during synchronization. To avoid the noises incurred by data loading from remote storage, the input data was randomly simulated from memory. We varied the number of GPUs used by training from 64 to 256. The batch size per worker is 32, and the number of iterations of training is 1,000. Since we don\u2019t evaluate convergence efficiency in this set of experiments, warmup is not enabled.", "metadata": {"source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"}} {"page_content": "We also emulated stragglers at a rate of 1% on 128 and 256 GPUs, and 2% on 64 GPUs, to make sure at least one stragglers at every step on average. These stragglers randomly appear on different CUDA devices. Each straggler stalls for 1 second besides the normal per-step training time (~55ms in our setup). This can be perceived as a practical scenario where 1% or 2% of input data are outliers in terms of the data pre-processing cost (I/O and/or data transformation on the fly) during training, and such cost is 20X+ larger than the average.\n\nThe code snippet below shows how a straggler can be emulated in the training loop. We applied it to a ResNet model, and it can be easily applied to the other models as well.\n\n```\n loss = loss_fn(y_pred, y)\n # Emulate a straggler that lags for 1 second at a rate of 1%.\n if random.randint(1, 100) == 1:\n time.sleep(1)\n loss.backward()\n optimizer.step()\n```", "metadata": {"source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"}} @@ -1151,7 +1154,7 @@ {"page_content": "The above approach does not always produce the expected results and is hard to discover. 
For example, since the [``get_weight()``](https://pytorch.org/vision/main/models.html#using-models-from-hub) method is exposed publicly under the same module, it will be included in the list despite not being a model. In general, reducing the verbosity (less imports, shorter names etc) and being able to initialize models and weights directly from their names (better support of configs, TorchHub etc) was [feedback](https://github.com/pytorch/vision/issues/5088) provided previously by the community. To solve this problem, we have developed a model registration API.\n\n## A new approach\n\nWe\u2019ve added 4 new methods under the torchvision.models module:\n\n```python\nfrom torchvision.models import get_model, get_model_weights, get_weight, list_models\n```", "metadata": {"source": "https://pytorch.org/blog/easily-list-and-initialize-models-with-new-apis-in-torchvision/", "category": "pytorch blogs"}} {"page_content": "The styles and naming conventions align closely with a prototype mechanism proposed by Philip Meier for the [Datasets V2](https://github.com/pytorch/vision/blob/main/torchvision/prototype/datasets/_api.py) API, aiming to offer a similar user experience. The model registration methods are kept private on purpose as we currently focus only on supporting the built-in models of TorchVision.\n\n### List models\n\nListing all available models in TorchVision can be done with a single function call:\n\n```python\n>>> list_models()\n['alexnet', 'mobilenet_v3_large', 'mobilenet_v3_small', 'quantized_mobilenet_v3_large', ...]\n```\n\nTo list the available models of specific submodules:\n\n```python\n>>> list_models(module=torchvision.models)\n['alexnet', 'mobilenet_v3_large', 'mobilenet_v3_small', ...]\n>>> list_models(module=torchvision.models.quantization)\n['quantized_mobilenet_v3_large', ...]\n```\n\n### Initialize models\n\nNow that you know which models are available, you can easily initialize a model with pre-trained weights:", "metadata": {"source": "https://pytorch.org/blog/easily-list-and-initialize-models-with-new-apis-in-torchvision/", "category": "pytorch blogs"}} {"page_content": "```python\n>>> get_model(\"quantized_mobilenet_v3_large\", weights=\"DEFAULT\")\nQuantizableMobileNetV3(\n (features): Sequential(\n ....\n )\n)\n```\n\n### Get weights\nSometimes, while working with config files or using TorchHub, you might have the name of a specific weight entry and wish to get its instance. 
This can be easily done with the following method:\n\n```python\n>>> get_weight(\"ResNet50_Weights.IMAGENET1K_V2\")\nResNet50_Weights.IMAGENET1K_V2\n```\n\nTo get the enum class with all available weights of a specific model you can use either its name:\n\n```python\n>>> get_model_weights(\"quantized_mobilenet_v3_large\")\n\n```\n\nOr its model builder method:\n\n```python\n>>> get_model_weights(torchvision.models.quantization.mobilenet_v3_large)\n\n```\n\n### TorchHub support\nThe new methods are also available via TorchHub:\n\n```python\nimport torch", "metadata": {"source": "https://pytorch.org/blog/easily-list-and-initialize-models-with-new-apis-in-torchvision/", "category": "pytorch blogs"}} -{"page_content": "# Fetching a specific weight entry by its name:\nweights = torch.hub.load(\"pytorch/vision\", \"get_weight\", weights=\"ResNet50_Weights.IMAGENET1K_V2\")\n\n# Fetching the weights enum class to list all available entries:\nweight_enum = torch.hub.load(\"pytorch/vision\", \"get_model_weights\", name=\"resnet50\")\nprint([weight for weight in weight_enum])\n```\n\n## Putting it all together\n\nFor example, if you wanted to retrieve all the small-sized models with pre-trained weights and initialize one of them, it\u2019s a matter of using the above APIs:\n\n```python\nimport torchvision\nfrom torchvision.models import get_model, get_model_weights, list_models\n\n\nmax_params = 5000000\n\ntiny_models = []\nfor model_name in list_models(module=torchvision.models):\n weights_enum = get_model_weights(model_name)\n if len([w for w in weights_enum if w.meta[\"num_params\"] <= max_params]) > 0:\n tiny_models.append(model_name)\n\nprint(tiny_models)\n# ['mnasnet0_5', 'mnasnet0_75', 'mnasnet1_0', 'mobilenet_v2', ...]", "metadata": {"source": "https://pytorch.org/blog/easily-list-and-initialize-models-with-new-apis-in-torchvision/", "category": "pytorch blogs"}} +{"page_content": "```python\nimport torch\n\n# Fetching a specific weight entry by its name:\nweights = torch.hub.load(\"pytorch/vision\", \"get_weight\", weights=\"ResNet50_Weights.IMAGENET1K_V2\")\n\n# Fetching the weights enum class to list all available entries:\nweight_enum = torch.hub.load(\"pytorch/vision\", \"get_model_weights\", name=\"resnet50\")\nprint([weight for weight in weight_enum])\n```\n\n## Putting it all together\n\nFor example, if you wanted to retrieve all the small-sized models with pre-trained weights and initialize one of them, it\u2019s a matter of using the above APIs:\n\n```python\nimport torchvision\nfrom torchvision.models import get_model, get_model_weights, list_models\n\n\nmax_params = 5000000\n\ntiny_models = []\nfor model_name in list_models(module=torchvision.models):\n weights_enum = get_model_weights(model_name)\n if len([w for w in weights_enum if w.meta[\"num_params\"] <= max_params]) > 0:\n tiny_models.append(model_name)\n\nprint(tiny_models)\n# ['mnasnet0_5', 'mnasnet0_75', 'mnasnet1_0', 'mobilenet_v2', ...]", "metadata": {"source": "https://pytorch.org/blog/easily-list-and-initialize-models-with-new-apis-in-torchvision/", "category": "pytorch blogs"}} {"page_content": "model = get_model(tiny_models[0], weights=\"DEFAULT\")\nprint(sum(x.numel() for x in model.state_dict().values()))\n# 2239188\n```\n\nFor more technical details please see the original [RFC](https://github.com/pytorch/vision/pull/6330). Please spare a few minutes to provide your feedback on the new API, as this is crucial for graduating it from beta and including it in the next release. 
You can do this on the dedicated [Github Issue](https://github.com/pytorch/vision/issues/6365). We are looking forward to reading your comments!", "metadata": {"source": "https://pytorch.org/blog/easily-list-and-initialize-models-with-new-apis-in-torchvision/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'Introducing native PyTorch automatic mixed precision for faster training on NVIDIA GPUs'\nauthor: Mengdi Huang, Chetan Tekur, Michael Carilli\n---\n\nMost deep learning frameworks, including PyTorch, train with 32-bit floating point (FP32) arithmetic by default. However this is not essential to achieve full accuracy for many deep learning models. In 2017, NVIDIA researchers developed a methodology for [mixed-precision training](https://developer.nvidia.com/blog/mixed-precision-training-deep-neural-networks/), which combined [single-precision](https://blogs.nvidia.com/blog/2019/11/15/whats-the-difference-between-single-double-multi-and-mixed-precision-computing/) (FP32) with half-precision (e.g. FP16) format when training a network, and achieved the same accuracy as FP32 training using the same hyperparameters, with additional performance benefits on NVIDIA GPUs:\n\n* Shorter training time;\n* Lower memory requirements, enabling larger batch sizes, larger models, or larger inputs.", "metadata": {"source": "https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/", "category": "pytorch blogs"}} {"page_content": "In order to streamline the user experience of training in mixed precision for researchers and practitioners, NVIDIA developed [Apex](https://developer.nvidia.com/blog/apex-pytorch-easy-mixed-precision-training/) in 2018, which is a lightweight PyTorch extension with [Automatic Mixed Precision](https://developer.nvidia.com/automatic-mixed-precision) (AMP) feature. This feature enables automatic conversion of certain GPU operations from FP32 precision to mixed precision, thus improving performance while maintaining accuracy.\n\nFor the PyTorch 1.6 release, developers at NVIDIA and Facebook moved mixed precision functionality into PyTorch core as the AMP package, [torch.cuda.amp](https://pytorch.org/docs/stable/amp.html). `torch.cuda.amp` is more flexible and intuitive compared to `apex.amp`. 
Some of `apex.amp`'s known pain points that `torch.cuda.amp` has been able to fix:", "metadata": {"source": "https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/", "category": "pytorch blogs"}} @@ -1159,14 +1162,14 @@ {"page_content": "With AMP being added to PyTorch core, we have started the process of deprecating `apex.amp.` We have moved `apex.amp` to maintenance mode and will support customers using `apex.amp.` However, we highly encourage `apex.amp` customers to transition to using `torch.cuda.amp` from PyTorch Core.\n\n# Example Walkthrough\nPlease see official docs for usage:\n* [https://pytorch.org/docs/stable/amp.html](https://pytorch.org/docs/stable/amp.html )\n* [https://pytorch.org/docs/stable/notes/amp_examples.html](https://pytorch.org/docs/stable/notes/amp_examples.html)\n\nExample:\n\n```python\nimport torch\n# Creates once at the beginning of training\nscaler = torch.cuda.amp.GradScaler()\n\nfor data, label in data_iter:\n optimizer.zero_grad()\n # Casts operations to mixed precision\n with torch.cuda.amp.autocast():\n loss = model(data)\n\n # Scales the loss, and calls backward()\n # to create scaled gradients\n scaler.scale(loss).backward()", "metadata": {"source": "https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/", "category": "pytorch blogs"}} {"page_content": "# Unscales gradients and calls\n # or skips optimizer.step()\n scaler.step(optimizer)\n\n # Updates the scale for next iteration\n scaler.update()\n```\n\n# Performance Benchmarks\nIn this section, we discuss the accuracy and performance of mixed precision training with AMP on the latest NVIDIA GPU A100 and also previous generation V100 GPU. The mixed precision performance is compared to FP32 performance, when running Deep Learning workloads in the [NVIDIA pytorch:20.06-py3 container](https://ngc.nvidia.com/catalog/containers/nvidia:pytorch?ncid=partn-52193#cid=ngc01_partn_en-us) from NGC.", "metadata": {"source": "https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/", "category": "pytorch blogs"}} {"page_content": "## Accuracy: AMP (FP16), FP32\nThe advantage of using AMP for Deep Learning training is that the models converge to the similar final accuracy while providing improved training performance. To illustrate this point, for [Resnet 50 v1.5 training](https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/Classification/ConvNets/resnet50v1.5#training-accuracy-nvidia-dgx-a100-8x-a100-40gb), we see the following accuracy results where higher is better. Please note that the below accuracy numbers are sample numbers that are subject to run to run variance of up to 0.4%. Accuracy numbers for other models including BERT, Transformer, ResNeXt-101, Mask-RCNN, DLRM can be found at [NVIDIA Deep Learning Examples Github](https://github.com/NVIDIA/DeepLearningExamples).\n\nTraining accuracy: NVIDIA DGX A100 (8x A100 40GB)", "metadata": {"source": "https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/", "category": "pytorch blogs"}} -{"page_content": "\n \n \n \n \n \n \n \n \n \n \n \n \n
 epochs Mixed Precision Top 1(%) TF32 Top1(%)
 90 76.93 76.85
\n\nTraining accuracy: NVIDIA DGX-1 (8x V100 16GB)\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\t \n \n \n \n \n \n
 epochs Mixed Precision Top 1(%) FP32 Top1(%)
50 76.25 76.26
90 77.09 77.01
250 78.42 78.30
\n\n## Speedup Performance:", "metadata": {"source": "https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/", "category": "pytorch blogs"}} -{"page_content": "### FP16 on NVIDIA V100 vs. FP32 on V100\nAMP with FP16 is the most performant option for DL training on the V100. In Table 1, we can observe that for various models, AMP on V100 provides a speedup of 1.5x to 5.5x over FP32 on V100 while converging to the same final accuracy.\n\n
\n \n
\n*Figure 2. Performance of mixed precision training on NVIDIA 8xV100 vs. FP32 training on 8xV100 GPU. Bars represent the speedup factor of V100 AMP over V100 FP32. The higher the better.*\n\n## FP16 on NVIDIA A100 vs. FP16 on V100\n\nAMP with FP16 remains the most performant option for DL training on the A100. In Figure 3, we can observe that for various models, AMP on A100 provides a speedup of 1.3x to 2.5x over AMP on V100 while converging to the same final accuracy.", "metadata": {"source": "https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/", "category": "pytorch blogs"}} +{"page_content": "Training accuracy: NVIDIA DGX A100 (8x A100 40GB)\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n
 epochs Mixed Precision Top 1(%) TF32 Top1(%)
 90 76.93 76.85
\n\nTraining accuracy: NVIDIA DGX-1 (8x V100 16GB)\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\t \n \n \n \n \n \n
 epochs Mixed Precision Top 1(%) FP32 Top1(%)
50 76.25 76.26
90 77.09 77.01
250 78.42 78.30
\n\n## Speedup Performance:", "metadata": {"source": "https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/", "category": "pytorch blogs"}} +{"page_content": "## Speedup Performance:\n\n### FP16 on NVIDIA V100 vs. FP32 on V100\nAMP with FP16 is the most performant option for DL training on the V100. In Table 1, we can observe that for various models, AMP on V100 provides a speedup of 1.5x to 5.5x over FP32 on V100 while converging to the same final accuracy.\n\n
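If you want a rough estimate of the speedup on your own workload before looking at the charts below, you can time a fixed number of training steps with mixed precision switched on and off. The snippet below is a minimal sketch rather than a rigorous benchmark; `model`, `data` and `optimizer` are placeholders for your own training objects:

```python
import time
import torch

def time_training_steps(model, data, optimizer, use_amp, n_steps=100):
    # autocast and GradScaler become pass-throughs when enabled=False,
    # so the same loop can be timed in FP32 and in mixed precision.
    scaler = torch.cuda.amp.GradScaler(enabled=use_amp)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(n_steps):
        optimizer.zero_grad()
        with torch.cuda.amp.autocast(enabled=use_amp):
            loss = model(data).sum()  # stand-in for your real loss
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()
    torch.cuda.synchronize()
    return (time.time() - start) / n_steps
```

Comparing the return values for `use_amp=False` and `use_amp=True` gives a per-step timing ratio on your own hardware, which you can then put side by side with the speedup factors reported below.
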
\n \n
\n*Figure 2. Performance of mixed precision training on NVIDIA 8xV100 vs. FP32 training on 8xV100 GPU. Bars represent the speedup factor of V100 AMP over V100 FP32. The higher the better.*\n\n## FP16 on NVIDIA A100 vs. FP16 on V100\n\nAMP with FP16 remains the most performant option for DL training on the A100. In Figure 3, we can observe that for various models, AMP on A100 provides a speedup of 1.3x to 2.5x over AMP on V100 while converging to the same final accuracy.", "metadata": {"source": "https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/", "category": "pytorch blogs"}} {"page_content": "
\n \n
\n*Figure 3. Performance of mixed precision training on NVIDIA 8xA100 vs. 8xV100 GPU. Bars represent the speedup factor of A100 over V100. The higher the better.*", "metadata": {"source": "https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/", "category": "pytorch blogs"}} {"page_content": "# Call to action\nAMP provides a healthy speedup for Deep Learning training workloads on Nvidia Tensor Core GPUs, especially on the latest Ampere generation A100 GPUs. You can start experimenting with AMP enabled models and model scripts for A100, V100, T4 and other GPUs available at NVIDIA deep learning [examples](https://github.com/NVIDIA/DeepLearningExamples). NVIDIA PyTorch with native AMP support is available from the [PyTorch NGC container](https://ngc.nvidia.com/catalog/containers/nvidia:pytorch?ncid=partn-52193#cid=ngc01_partn_en-us) version 20.06. We highly encourage existing `apex.amp` customers to transition to using `torch.cuda.amp` from PyTorch Core available in the latest [PyTorch 1.6 release](https://pytorch.org/blog/pytorch-1.6-released/).", "metadata": {"source": "https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'Tensor Comprehensions in PyTorch'\nauthor: Priya Goyal (FAIR), Nicolas Vasilache (FAIR), Oleksandr Zinenko (Inria & DI ENS), Theodoros Theodoridis (ETH Z\u00fcrich), Zachary DeVito (FAIR), William S. Moses (MIT CSAIL), Sven Verdoolaege (FAIR), Andrew Adams (FAIR), Albert Cohen (Inria & DI ENS & FAIR)\nredirect_from: /2018/03/05/tensor-comprehensions.html\n---\n\nTensor Comprehensions (TC) is a tool that lowers the barrier for writing high-performance code. It generates GPU code from a simple high-level language and autotunes the code for specific input sizes.\n\n**We highly recommend reading the [Tensor Comprehensions blogpost](https://research.fb.com/announcing-tensor-comprehensions/) first.**\n\nIf you ran into any of the following scenarios, TC is a useful tool for you.\n\n- Your PyTorch layer is large and slow, and you contemplated writing a dedicated C++ or CUDA code for it. But you don't know how to program in CUDA or write low-level code.", "metadata": {"source": "https://pytorch.org/blog/tensor-comprehensions/", "category": "pytorch blogs"}} {"page_content": "- You wrote a CUDA layer, but it took a week to write, debug, optimize for speed. You wished you could do this in an hour.\n\n- You want to fuse multiple layers like Conv-ReLU-BatchNorm or Linear-ReLU-Linear-ReLU in your network for speed, but it was quite difficult to comprehend\n\n- Your research involves weird Tensor shapes that CuDNN and MKL are not optimized for. For example, you do convolutions of 13 x 24 with an input image of 143 x 55. You tried running it with CuDNN and it was slower than you wished.\n\n- Your code is slowed-down by transposing Tensors constantly to fit a particular memory layout. You wish it was easy to write custom code that operates efficiently on your input layout.\n\n\nTensor Comprehensions are seamless to use in PyTorch, interoperating with PyTorch Tensors and `nn` Variables.\n\nLet us run through using TC with PyTorch.\n\n#### 1. 
Install the package\n\n```bash\nconda install -c pytorch -c tensorcomp tensor_comprehensions\n```", "metadata": {"source": "https://pytorch.org/blog/tensor-comprehensions/", "category": "pytorch blogs"}} {"page_content": "At this time we only provide Linux-64 binaries which have been tested on Ubuntu 16.04 and CentOS7.\n\nTC depends on heavyweight C++ projects such as [Halide](http://halide-lang.org/), [Tapir-LLVM](https://github.com/wsmoses/Tapir-LLVM) and [ISL](http://isl.gforge.inria.fr/). Hence, we rely on Anaconda to distribute these dependencies reliably. For the same reason, TC is not available via PyPI.\n\n#### 2. Import the python package\n\n```python\nimport tensor_comprehensions as tc\n```\n\n#### 3. Define the TC expression and create a python function\n\n```python\nlang = \"\"\"\ndef fcrelu(float(B,M) I, float(N,M) W1, float(N) B1) -> (O1) {\n O1(b, n) +=! I(b, m) * W1(n, m)\n O1(b, n) = O1(b, n) + B1(n)\n O1(b, n) = fmax(O1(b, n), 0)\n}\n\"\"\"\nfcrelu = tc.define(lang, name=\"fcrelu\")\n```\n\nThis `fcrelu` function takes PyTorch Tensors as input and returns a PyTorch Tensor. It takes input `I`, weight `W1`, bias `B1` and returns output `O1`.\n\n#### 4. Let's create some dummy input tensors", "metadata": {"source": "https://pytorch.org/blog/tensor-comprehensions/", "category": "pytorch blogs"}} -{"page_content": "```python\nB, M, N = 100, 128, 100\nI, W1, B1 = torch.randn(B, M).cuda(), torch.randn(N, M).cuda(), torch.randn(N).cuda()\n```\n\n#### 5. Now autotune the function for your input sizes\n\n```python\nfcrelu.autotune(I, W1, B1, cache=\"fcrelu_100_128_100.tc\")\n```\n\nThe autotuner is your biggest friend. You generally do not want to use a `tc` function without autotuning it first.\n\nWhen the autotuning is running, the current best performance is displayed. If you are satisfied with the current result or you are out of time, stop the tuning procedure by pressing `Ctrl+C`.\n\n![tc-autotuner](https://pytorch.org/static/img/tc_autotuner.gif)\n\n`cache` saves the results of the autotuned kernel search and saves it to the file `fcrelu_100_128_100.tc`. The next time you call the same line of code, it loads the results of the autotuning without recomputing it.", "metadata": {"source": "https://pytorch.org/blog/tensor-comprehensions/", "category": "pytorch blogs"}} +{"page_content": "#### 4. Let's create some dummy input tensors\n\n```python\nB, M, N = 100, 128, 100\nI, W1, B1 = torch.randn(B, M).cuda(), torch.randn(N, M).cuda(), torch.randn(N).cuda()\n```\n\n#### 5. Now autotune the function for your input sizes\n\n```python\nfcrelu.autotune(I, W1, B1, cache=\"fcrelu_100_128_100.tc\")\n```\n\nThe autotuner is your biggest friend. You generally do not want to use a `tc` function without autotuning it first.\n\nWhen the autotuning is running, the current best performance is displayed. If you are satisfied with the current result or you are out of time, stop the tuning procedure by pressing `Ctrl+C`.\n\n![tc-autotuner](https://pytorch.org/static/img/tc_autotuner.gif)\n\n`cache` saves the results of the autotuned kernel search and saves it to the file `fcrelu_100_128_100.tc`. The next time you call the same line of code, it loads the results of the autotuning without recomputing it.", "metadata": {"source": "https://pytorch.org/blog/tensor-comprehensions/", "category": "pytorch blogs"}} {"page_content": "The autotuner has a few hyperparameters (just like your ConvNet has learning rate, number of layers, etc.). 
We pick reasonable defaults, but you can read about using advanced options [here](https://facebookresearch.github.io/TensorComprehensions/framework/pytorch_integration/writing_layers.html#specifying-mapping-options).\n\n#### 6. Call the function with the inputs, to get your result\n\n```python\nout = fcrelu(I, W1, B1)\n```\n\nNow, let's look at how to write TC expressions.\n\n## A quick primer on the TC language\n\nThe TC notation focuses on the mathematical nature of the layer, leaving performance considerations to it's backend code that uses Halide and polyhedral compilation techniques which accumulate decades of cutting edge Loop Nest Optimization (LNO) research.\n\nTC is close to [np.einsum](https://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html). We shall quickly learn TC by example", "metadata": {"source": "https://pytorch.org/blog/tensor-comprehensions/", "category": "pytorch blogs"}} {"page_content": "```python\nlang = \"\"\"\ndef matmul(float(M,N) A, float(N,K) B) -> (output) {\n output(i, j) +=! A(i, kk) * B(kk, j)\n}\n\"\"\"\n```\n\nIn this example, we define a function `matmul` which takes two input `A` and `B` of shapes `M x N` and `N x K` and returns a single `output`. The shape of `output` is automatically inferred by the TC language (discussed below).\n\nLet's look at this line:\n\n```python\noutput(i, j) +=! A(i, kk) * B(kk, j)\n```\n\nIt says:\n\n- `output(i, j)` means output is 2D.\n- for each location `output(i, j)`, we add (`+=`) `A(i, kk) * B(kk, j)`.\n- `i` is well-defined as all locations in `A` dim=0, i.e. `i in range(0, M)`\n- `j` is well-defined as all locations in `B` dim=1, i.e. `j in range(0, K)`\n- `kk` is inferred as all locations from `0` to `N`\n\nThe shape of output is inferred from the maximum values `i` and `j` can take, which is `M` and `K`, so output is of size `M x K`.\n\nThe `!` symbol initializes output with `0.0`. It is equivalent to:\n\n```python\noutput(i, j) = 0\noutput(i, j) += A(i, kk) * B(kk, j)\n```", "metadata": {"source": "https://pytorch.org/blog/tensor-comprehensions/", "category": "pytorch blogs"}} {"page_content": "**Scalar inputs and range constraints: implement AvgPool2d**\n\n```python\n\"\"\"\n\n{% raw %}def avgpool(float(B, C, H, W) input) -> (output) {{{% endraw %}\n output(b, c, h, w) += input(b, c, h * {sH} + kh, w * {sW} + kw) where kh in 0:{kH}, kw in 0:{kW}\n{% raw %}}}{% endraw %}\n\n\"\"\"\navgpool = tc.define(LANG, name=\"avgpool\", constants={\"sH\":1, \"sW\":1, \"kH\":2, \"kW\":2})\n```\n\nhere the `where` keyword can take ranges of values to operate on. `0:{kH}` is equivalent `range(kH)` in Python.\n\nNote: the syntax for passing in scalars is subject to change in the next release.\n\n## torch.nn layers\n\nWe added some sugar-coating around the basic PyTorch integration of TC to make it easy to integrate TC into larger `torch.nn` models by defining the forward and backward TC expressions and taking `Variable` inputs / outputs. 
Here is an [example](https://github.com/facebookresearch/TensorComprehensions/blob/master/test_python/layers/test_convolution_train.py) of defining a convolution layer with TC.", "metadata": {"source": "https://pytorch.org/blog/tensor-comprehensions/", "category": "pytorch blogs"}} @@ -1175,7 +1178,7 @@ {"page_content": "## Getting Started\n\n- [Walk through Tutorial](https://facebookresearch.github.io/TensorComprehensions/framework/pytorch_integration/writing_layers.html) to quickly get started with understanding and using Tensor Comprehensions PyTorch package.\n- [Over 20 examples](https://github.com/facebookresearch/TensorComprehensions/tree/master/test_python/layers) of various ML layers with TC, including `avgpool`, `maxpool`, `matmul`, matmul - give output buffers and `batch-matmul`, `convolution`, `strided-convolution`, `batchnorm`, `copy`, `cosine similarity`, `Linear`, `Linear + ReLU`, `group-convolutions`, strided `group-convolutions`, `indexing`, `Embedding` (lookup table), small-mobilenet, `softmax`, `tensordot`, `transpose`\n- [Detailed docs](https://facebookresearch.github.io/TensorComprehensions/framework/pytorch_integration/getting_started.html) on Tensor Comprehensions and integration with PyTorch.\n\n## Communication", "metadata": {"source": "https://pytorch.org/blog/tensor-comprehensions/", "category": "pytorch blogs"}} {"page_content": "## Communication\n\n- [Slack](https://tensorcomprehensions.herokuapp.com/): For discussion around framework integration, build support, collaboration, etc. join our slack channel.\n- Email: tensorcomp@fb.com\n- [GitHub](https://github.com/facebookresearch/TensorComprehensions): bug reports, feature requests, install issues, RFCs, thoughts, etc.\n\n## Acknowledgements\n\nWe would like to thank Soumith Chintala, [Edward Yang](https://github.com/ezyang) and [Sam Gross](https://github.com/colesbury) for their immense guidance and help in making the integration API nice and smooth. We would also like to thank rest of the PyTorch team and our pre-release users for their helpful feedback that guided us in making the integration better.", "metadata": {"source": "https://pytorch.org/blog/tensor-comprehensions/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: \"Introducing TorchVision\u2019s New Multi-Weight Support API\"\nauthor: Vasilis Vryniotis\nfeatured-img: \"assets/images/torchvision_featured.png\"\n---\n\nTorchVision has a new backwards compatible API for building models with multi-weight support. The new API allows loading different pre-trained weights on the same model variant, keeps track of vital meta-data such as the classification labels and includes the preprocessing transforms necessary for using the models. In this blog post, we plan to review the prototype API, show-case its features and highlight key differences with the existing one.\n\n
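Before diving into the details, here is a quick taste of what the new API looks like; the full walkthrough and the comparison with the existing API follow below, so treat this as a preview rather than a complete example:

```Python
from torchvision.prototype import models as PM

# Each model variant exposes an enum of the available pre-trained weights...
weights = PM.ResNet50_Weights.IMAGENET1K_V1
model = PM.resnet50(weights=weights)

# ...and the chosen weights carry their own preprocessing and meta-data.
preprocess = weights.transforms()
categories = weights.meta["categories"]
```
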
\n \n
\n\nWe are hoping to get your thoughts about the API prior finalizing it. To collect your feedback, we have created a [Github issue](https://github.com/pytorch/vision/issues/5088) where you can post your thoughts, questions and comments.\n\n## Limitations of the current API", "metadata": {"source": "https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/", "category": "pytorch blogs"}} -{"page_content": "TorchVision currently provides pre-trained models which could be a starting point for transfer learning or used as-is in Computer Vision applications. The typical way to instantiate a pre-trained model and make a prediction is:\n\n```Python\nimport torch\n\nfrom PIL import Image\nfrom torchvision import models as M\nfrom torchvision.transforms import transforms as T\n\n\nimg = Image.open(\"test/assets/encode_jpeg/grace_hopper_517x606.jpg\")\n\n# Step 1: Initialize model\nmodel = M.resnet50(pretrained=True)\nmodel.eval()\n\n# Step 2: Define and initialize the inference transforms\npreprocess = T.Compose([\n T.Resize([256, ]),\n T.CenterCrop(224),\n T.PILToTensor(),\n T.ConvertImageDtype(torch.float),\n T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\n])\n\n# Step 3: Apply inference preprocessing transforms\nbatch = preprocess(img).unsqueeze(0)\nprediction = model(batch).squeeze(0).softmax(0)", "metadata": {"source": "https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/", "category": "pytorch blogs"}} +{"page_content": "## Limitations of the current API\n\nTorchVision currently provides pre-trained models which could be a starting point for transfer learning or used as-is in Computer Vision applications. The typical way to instantiate a pre-trained model and make a prediction is:\n\n```Python\nimport torch\n\nfrom PIL import Image\nfrom torchvision import models as M\nfrom torchvision.transforms import transforms as T\n\n\nimg = Image.open(\"test/assets/encode_jpeg/grace_hopper_517x606.jpg\")\n\n# Step 1: Initialize model\nmodel = M.resnet50(pretrained=True)\nmodel.eval()\n\n# Step 2: Define and initialize the inference transforms\npreprocess = T.Compose([\n T.Resize([256, ]),\n T.CenterCrop(224),\n T.PILToTensor(),\n T.ConvertImageDtype(torch.float),\n T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\n])\n\n# Step 3: Apply inference preprocessing transforms\nbatch = preprocess(img).unsqueeze(0)\nprediction = model(batch).squeeze(0).softmax(0)", "metadata": {"source": "https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/", "category": "pytorch blogs"}} {"page_content": "# Step 4: Use the model and print the predicted category\nclass_id = prediction.argmax().item()\nscore = prediction[class_id].item()\nwith open(\"imagenet_classes.txt\", \"r\") as f:\n categories = [s.strip() for s in f.readlines()]\n category_name = categories[class_id]\nprint(f\"{category_name}: {100 * score}%\")\n\n```\n\nThere are a few limitations with the above approach:", "metadata": {"source": "https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/", "category": "pytorch blogs"}} {"page_content": "1. **Inability to support multiple pre-trained weights:** Since the `pretrained` variable is boolean, we can only offer one set of weights. This poses a severe limitation when we significantly [improve the accuracy of existing models](https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/) and we want to make those improvements available to the community. 
It also stops us from offering pre-trained weights of the same model variant on different datasets.\n2. **Missing inference/preprocessing transforms:** The user is forced to define the necessary transforms prior using the model. The inference transforms are usually linked to the training process and dataset used to estimate the weights. Any minor discrepancies in these transforms (such as interpolation value, resize/crop sizes etc) can lead to major reductions in accuracy or unusable models.\n3. **Lack of meta-data:** Critical pieces of information in relation to the weights are unavailable to the users. For example, one needs to look into external sources and the documentation to find things like the [category labels](https://github.com/pytorch/vision/issues/1946), the training recipe, the accuracy metrics etc.", "metadata": {"source": "https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/", "category": "pytorch blogs"}} {"page_content": "The new API addresses the above limitations and reduces the amount of boilerplate code needed for standard tasks.\n\n## Overview of the prototype API\n\nLet\u2019s see how we can achieve exactly the same results as above using the new API:\n\n```Python\nfrom PIL import Image\nfrom torchvision.prototype import models as PM\n\n\nimg = Image.open(\"test/assets/encode_jpeg/grace_hopper_517x606.jpg\")\n\n# Step 1: Initialize model\nweights = PM.ResNet50_Weights.IMAGENET1K_V1\nmodel = PM.resnet50(weights=weights)\nmodel.eval()\n\n# Step 2: Initialize the inference transforms\npreprocess = weights.transforms()\n\n# Step 3: Apply inference preprocessing transforms\nbatch = preprocess(img).unsqueeze(0)\nprediction = model(batch).squeeze(0).softmax(0)\n\n# Step 4: Use the model and print the predicted category\nclass_id = prediction.argmax().item()\nscore = prediction[class_id].item()\ncategory_name = weights.meta[\"categories\"][class_id]\nprint(f\"{category_name}: {100 * score}*%*\")\n```", "metadata": {"source": "https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/", "category": "pytorch blogs"}} @@ -1206,21 +1209,21 @@ {"page_content": "* Loop Unrolling: We automatically unroll loops in the code (for big loops, we unroll a small subset of it), which then empowers us to do further optimizations on the for loops control flow. For example, the fuser can fuse together operations across iterations of the loop body, which results in a good performance improvement for control flow intensive models like LSTMs.\n* Batch Matrix Multiplication: For RNNs where the input is pre-multiplied (i.e. the model has a lot of matrix multiplies with the same LHS or RHS), we can efficiently batch those operations together into a single matrix multiply while chunking the outputs to achieve equivalent semantics. \n\nBy applying these techniques, we reduced our time in the forward pass by an additional 1.6ms to 8.4ms (1.2x speed up) and timing in backward by 7ms to around 20ms (1.35x speed up). \n\n### LSTM Layer (backward)", "metadata": {"source": "https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/", "category": "pytorch blogs"}} {"page_content": "* \u201cTree\u201d Batch Matrix Muplication: It is often the case that a single weight is reused multiple times in the LSTM backward graph, forming a tree where the leaves are matrix multiplies and nodes are adds. These nodes can be combined together by concatenating the LHSs and RHSs in different dimensions, then computed as a single matrix multiplication. 
The formula of equivalence can be denoted as follows:\n \n $L1 * R1 + L2 * R2 = torch.cat((L1, L2), dim=1) * torch.cat((R1, R2), dim=0)$\n \n* Autograd is a critical component of what makes PyTorch such an elegant ML framework. As such, we carried this through to PyTorch JIT, but using a new **Automatic Differentiation** (AD) mechanism that works on the IR level. JIT automatic differentiation will slice the forward graph into symbolically differentiable subgraphs, and generate backwards nodes for those subgraphs. Taking the above IR as an example, we group the graph nodes into a single `prim::DifferentiableGraph_0` for the operations that has AD formulas. For operations that have not been added to AD formulas, we will fall back to Autograd during execution.", "metadata": {"source": "https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/", "category": "pytorch blogs"}} {"page_content": "* Optimizing the backwards path is hard, and the implicit broadcasting semantics make the optimization of automatic differentiation harder. PyTorch makes it convenient to write tensor operations without worrying about the shapes by broadcasting the tensors for you. For performance, the painful point in backward is that we need to have a summation for such kind of broadcastable operations. This results in the derivative of every broadcastable op being followed by a summation. Since we cannot currently fuse reduce operations, this causes FusionGroups to break into multiple small groups leading to bad performance. To deal with this, refer to this great [post](http://lernapparat.de/fast-lstm-pytorch/) written by Thomas Viehmann.\n\n### Misc Optimizations", "metadata": {"source": "https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/", "category": "pytorch blogs"}} -{"page_content": "* In addition to the steps laid about above, we also eliminated overhead between CUDA kernel launches and unnecessary tensor allocations. One example is when you do a tensor device look up. This can provide some poor performance initially with a lot of unnecessary allocations. When we remove these this results in a reduction from milliseconds to nanoseconds between kernel launches.\n* Lastly, there might be normalization applied in the custom LSTMCell like LayerNorm. Since LayerNorm and other normalization ops contains reduce operations, it is hard to fuse it in its entirety. Instead, we automatically decompose Layernorm to a statistics computation (reduce operations) + element-wise transformations, and then fuse those element-wise parts together. As of this post, there are some limitations on our auto differentiation and graph fuser infrastructure which limits the current support to inference mode only. We plan to add backward support in a future release.", "metadata": {"source": "https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/", "category": "pytorch blogs"}} +{"page_content": "### Misc Optimizations\n\n* In addition to the steps laid about above, we also eliminated overhead between CUDA kernel launches and unnecessary tensor allocations. One example is when you do a tensor device look up. This can provide some poor performance initially with a lot of unnecessary allocations. When we remove these this results in a reduction from milliseconds to nanoseconds between kernel launches.\n* Lastly, there might be normalization applied in the custom LSTMCell like LayerNorm. Since LayerNorm and other normalization ops contains reduce operations, it is hard to fuse it in its entirety. 
Instead, we automatically decompose Layernorm to a statistics computation (reduce operations) + element-wise transformations, and then fuse those element-wise parts together. As of this post, there are some limitations on our auto differentiation and graph fuser infrastructure which limits the current support to inference mode only. We plan to add backward support in a future release.", "metadata": {"source": "https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/", "category": "pytorch blogs"}} {"page_content": "With the above optimizations on operation fusion, loop unrolling, batch matrix multiplication and some misc optimizations, we can see a clear performance increase on our custom TorchScript LSTM forward and backward from the following figure: \n\n
\n \n
\n\n\nThere are a number of additional optimizations that we did not cover in this post. In addition to the ones laid out in this post, we now see that our custom LSTM forward pass is on par with cuDNN. We are also working on optimizing backward more and expect to see improvements in future releases. Besides the speed that TorchScript provides, we introduced a much more flexible API that enable you to hand draft a lot more custom RNNs, which cuDNN could not provide.", "metadata": {"source": "https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: \"PyTorch, a year in....\"\nauthor: \"The PyTorch Team\"\ndate: 2018-01-19 12:00:00 -0500\nredirect_from: /2018/01/19/a-year-in.html\n---\n\nToday marks 1 year since PyTorch was released publicly. It's been a wild ride \u2014 our quest to build a flexible deep learning research platform. Over the last year, we've seen an amazing community of people using, contributing to and evangelizing PyTorch \u2014 thank you for the love.\n\nLooking back, we wanted to summarize PyTorch over the past year: the progress, the news and highlights from the community.\n\n## Community\n\nWe've been blessed with a strong organic community of researchers and engineers who fell in love with PyTorch. The core team has engineers and researchers from multiple countries, companies and universities, and we couldn't have made PyTorch what it is without each contribution.\n\n\n### Research papers, packages and Github", "metadata": {"source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"}} -{"page_content": "Within days of release, users from the community started to implement their favorite research papers in PyTorch and release the code on Github. Open-source code is a primary and essential tool for researchers today.\n\nFolks came together to create [torchtext](https://github.com/pytorch/text), [torchvision](https://github.com/pytorch/vision) and [torchaudio](https://github.com/pytorch/audio) packages to help facilitate and democratize research in different domains.\n\nThe first community package based on PyTorch came from Brandon Amos, [titled Block](https://twitter.com/brandondamos/status/828652480573607937), and helped with easier manipulation of block matrices. The Locus Lab at **CMU** subsequently went on to [publish PyTorch packages](https://github.com/locuslab) and implementations for most of their research. The first research paper code came from Sergey Zagoruyko titled [Paying more attention to attention](https://twitter.com/PyTorch/status/822561885744726016).", "metadata": {"source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"}} +{"page_content": "### Research papers, packages and Github\n\nWithin days of release, users from the community started to implement their favorite research papers in PyTorch and release the code on Github. Open-source code is a primary and essential tool for researchers today.\n\nFolks came together to create [torchtext](https://github.com/pytorch/text), [torchvision](https://github.com/pytorch/vision) and [torchaudio](https://github.com/pytorch/audio) packages to help facilitate and democratize research in different domains.\n\nThe first community package based on PyTorch came from Brandon Amos, [titled Block](https://twitter.com/brandondamos/status/828652480573607937), and helped with easier manipulation of block matrices. 
The Locus Lab at **CMU** subsequently went on to [publish PyTorch packages](https://github.com/locuslab) and implementations for most of their research. The first research paper code came from Sergey Zagoruyko titled [Paying more attention to attention](https://twitter.com/PyTorch/status/822561885744726016).", "metadata": {"source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"}} {"page_content": "Jun-Yan Zhu, Taesung Park, Phillip Isola, Alyosha Efros and team from **U.C.Berkeley** released the hugely popular [Cycle-GAN and pix2pix](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix) which does image to image transforms.\n\n
\n \n
\n\nThe researchers at **HarvardNLP** and **Systran** started developing and improving [OpenNMT in PyTorch](https://github.com/OpenNMT/OpenNMT-py), seeded by initial reimplementation of the [Lua]Torch code from Adam Lerer at Facebook.\n\nThe MagicPony team at **Twitter** contributed implementations of their [Super-resolution work early on into PyTorch's examples](https://twitter.com/Rob_Bishop/status/821793080877588480).", "metadata": {"source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"}} {"page_content": "**Salesforce Research** released several packages, including their highlight release of [PyTorch-QRNN](https://twitter.com/Smerity/status/917472260851560448), a type of RNN that is 2x to 17x faster than standard LSTMs optimized by CuDNN. James Bradbury and team form one of the most active and engaging forces in the PyTorch community.\n\n

We're releasing @PyTorch-QRNN, 2-17x faster than NVIDIA's cuDNN LSTM.
Speed thanks to 50 lines of CUDA via CuPy.https://t.co/KaWhN4yDZd pic.twitter.com/yoLYj3pMI0

— Smerity (@Smerity) October 9, 2017
\n", "metadata": {"source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"}} {"page_content": "Researchers from **Uber**, **Northeastern** and **Stanford** came together to form an active probabilistic programming community around their packages [Pyro](http://pyro.ai/) and [ProbTorch](https://github.com/probtorch/probtorch). They are actively developing the torch.distributions core package. This community is so active and fast-moving, we had our first pytorch-probabilistic-programming meetup at NIPS 2017 with Fritz Obermeyer, Noah Goodman, Jan-Willem van de Meent, Brooks Paige, Dustin Tran and 22 additional attendees discussing how to make the world bayesian.\n\n
\n \n
", "metadata": {"source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"}} {"page_content": "**NVIDIA** Researchers released three high-quality repositories that implemented [pix2pix-HD](https://github.com/NVIDIA/pix2pixHD), [Sentiment Neuron](https://github.com/NVIDIA/sentiment-discovery) and [FlowNet2](https://github.com/NVIDIA/flownet2-pytorch) papers. Their analysis of scalability of different [Data Parallel models in PyTorch](https://github.com/NVIDIA/sentiment-discovery/blob/master/analysis/scale.md) was helpful to the community.\n\n
\n \n
\n\nThe Allen Institute for AI released [AllenNLP](http://allennlp.org/) which includes several state-of-the-art models in NLP \u2014 reference implementations and easy to use [web demos](http://demo.allennlp.org/machine-comprehension) for standard NLP tasks.\n\n
\n \n
", "metadata": {"source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"}} {"page_content": "We also had our first Kaggle winning team grt123 in July. They won the DataScience Bowl 2017 on Lung Cancer detection and [subsequently released their PyTorch implementations](https://twitter.com/PyTorch/status/881573658166267904).\n\nOn the visualization front, Tzu-Wei Huang implemented a [TensorBoard-PyTorch plugin](https://github.com/lanpa/tensorboard-pytorch) and Facebook AI Research released PyTorch compatibility for their [visdom](https://github.com/facebookresearch/visdom) visualization package.\n\n
\n \n \n
\n\nLastly, **Facebook AI Research** released several projects such as [ParlAI, fairseq-py, VoiceLoop and FaderNetworks](https://github.com/facebookresearch/) that implemented cutting-edge models and interfaced datasets in multiple domains.", "metadata": {"source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"}} {"page_content": "There are countless good projects that we haven't highlighted for the lack of space, you can find a curated list [here](https://github.com/soumith?tab=stars).\n\nWe would also like to give a huge shout-out to folks who actively help others out on the Forums, especially [ptrblck](https://discuss.pytorch.org/u/ptrblck/summary), [jpeg729](https://discuss.pytorch.org/u/jpeg729/summary), [QuantScientist](https://discuss.pytorch.org/u/quantscientist/summary), [albanD](https://discuss.pytorch.org/u/alband/summary), [Thomas Viehmann](https://discuss.pytorch.org/u/tom/summary) and [chenyuntc](https://discuss.pytorch.org/u/chenyuntc/summary). You are providing an invaluable service, thank you so much!\n\n## Metrics\n\nIn terms of sheer numbers,", "metadata": {"source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"}} -{"page_content": "* 87,769 lines of Python code on github that [import torch](https://github.com/search?l=Python&q=import+torch&type=Code)\n* [3,983 repositories on Github that mention PyTorch in their name or description](https://github.com/search?q=pytorch&type=Repositories)\n* More than half a million downloads of PyTorch binaries. 651,916 to be precise.\n* **5,400 users** wrote **21,500 posts** discussing 5,200 topics on our forums discuss.pytorch.org (http://discuss.pytorch.org/)\n* 131 mentions of PyTorch on Reddit's /r/machinelearning since the day of release. In the same period, TensorFlow was mentioned 255 times.\n\n\n### Research Metrics\n\nPyTorch is a research-focused framework. So one of the metrics of interest is to see the usage of PyTorch in machine learning research papers.\n\n\n* In the recent ICLR2018 conference submissions, PyTorch was mentioned in **87 papers**, compared to TensorFlow at 228 papers, Keras at 42 papers, Theano and Matlab at 32 papers.", "metadata": {"source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"}} +{"page_content": "## Metrics\n\nIn terms of sheer numbers,\n\n* 87,769 lines of Python code on github that [import torch](https://github.com/search?l=Python&q=import+torch&type=Code)\n* [3,983 repositories on Github that mention PyTorch in their name or description](https://github.com/search?q=pytorch&type=Repositories)\n* More than half a million downloads of PyTorch binaries. 651,916 to be precise.\n* **5,400 users** wrote **21,500 posts** discussing 5,200 topics on our forums discuss.pytorch.org (http://discuss.pytorch.org/)\n* 131 mentions of PyTorch on Reddit's /r/machinelearning since the day of release. In the same period, TensorFlow was mentioned 255 times.\n\n\n### Research Metrics\n\nPyTorch is a research-focused framework. 
So one of the metrics of interest is to see the usage of PyTorch in machine learning research papers.\n\n\n* In the recent ICLR2018 conference submissions, PyTorch was mentioned in **87 papers**, compared to TensorFlow at 228 papers, Keras at 42 papers, Theano and Matlab at 32 papers.", "metadata": {"source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"}} {"page_content": "* [Monthly arxiv.org mentions for frameworks](https://twitter.com/fchollet/status/951828914103402497) had PyTorch at 72 mentions, with TensorFlow at 273 mentions, Keras at 100 mentions, Caffe at 94 mentions and Theano at 53 mentions.\n\n## Courses, Tutorials and Books\n\nWhen we released PyTorch, we had good API documentation, but our tutorials were limited to a few ipython notebooks \u2014 helpful, but not good enough.\n\n[Sasank Chilamkurthy](https://github.com/chsasank) took it upon himself to revamp the tutorials into the [beautiful website](https://pytorch.org/tutorials/) that it is today.\n\n
\n \n
", "metadata": {"source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"}} {"page_content": "[Sean Robertson](https://github.com/spro/practical-pytorch) and [Justin Johnson](https://github.com/jcjohnson/pytorch-examples) wrote great new tutorials \u2014 in NLP, and to learn by example. [Yunjey Choi](https://github.com/yunjey/pytorch-tutorial) wrote a beautiful tutorial where most models were implemented in 30 lines or less.\nEach new tutorial helped users find their way faster, with different approaches to learning.\n\n[Goku Mohandas and Delip Rao](https://twitter.com/PyTorch/status/888500355943641088) switched the code content of their book-in-progress to use PyTorch.\n\nWe've seen quite a few university machine learning courses being taught with PyTorch as the primary tool, such as Harvard's [CS287](https://harvard-ml-courses.github.io/cs287-web/). Taking it one step further and democratizing learning, we had three online courses pop up that teach using PyTorch.", "metadata": {"source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"}} {"page_content": "- **Fast.ai's** \u201cDeep Learning for Coders\u201d is a popular online course. In September, Jeremy and Rachel [announced that the next fast.ai courses will be nearly entirely based on PyTorch](http://www.fast.ai/2017/09/08/introducing-pytorch-for-fastai/).\n- Ritchie Ng, a researcher with ties to NUS Singapore and Tsinghua released [a Udemy course](https://www.udemy.com/practical-deep-learning-with-pytorch/) titled Practical Deep Learning with PyTorch.\n- Sung Kim from HKUST released an [online course on Youtube](https://www.youtube.com/playlist?list=PLlMkM4tgfjnJ3I-dbhO9JTw7gNty6o_2m) that was aimed towards a general audience, titled: \u201cPyTorch Zero to All\u201d.\n\n\n## Engineering\n\nOver the last year we implemented multiple features, improved performance across the board and fixed lots of bugs. A full list of the work we've done is found in our [release notes](https://github.com/pytorch/pytorch/releases).\nHere are highlights from our work over the last year:\n\n## Higher-order gradients", "metadata": {"source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"}} -{"page_content": "With the release of several papers that implement penalties of gradients and with ongoing research in 2nd order gradient methods, this was an essential and sought-after feature. In August, we implemented a generalized interface that can take n-th order derivatives and increased the coverage of functions that support higher-order gradients over time, such that at the moment of writing almost all ops support this.\n\n\n## Distributed PyTorch\n\nIn August, we released a small distributed package that followed the highly popular MPI-collective approach. The package has multiple backends such as TCP, MPI, Gloo and NCCL2 to support various types of CPU/GPU collective operations and use-cases, and integrates distributed technologies such as Infiniband and RoCE. Distributed is hard, and we had bugs in the initial iteration. Over subsequent releases, we made the package more stable and improved performance.\n\n## Closer to NumPy", "metadata": {"source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"}} +{"page_content": "## Higher-order gradients\n\n With the release of several papers that implement penalties of gradients and with ongoing research in 2nd order gradient methods, this was an essential and sought-after feature. 
In August, we implemented a generalized interface that can take n-th order derivatives and increased the coverage of functions that support higher-order gradients over time, such that at the moment of writing almost all ops support this.\n\n\n## Distributed PyTorch\n\nIn August, we released a small distributed package that followed the highly popular MPI-collective approach. The package has multiple backends such as TCP, MPI, Gloo and NCCL2 to support various types of CPU/GPU collective operations and use-cases, and integrates distributed technologies such as Infiniband and RoCE. Distributed is hard, and we had bugs in the initial iteration. Over subsequent releases, we made the package more stable and improved performance.\n\n## Closer to NumPy", "metadata": {"source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"}} {"page_content": "## Closer to NumPy\n\nOne of the biggest demands from users were NumPy features that they were familiar with. Features such as Broadcasting and Advanced Indexing are convenient and save users a lot of verbosity. We implemented these features and started to align our API to be closer to NumPy. Over time, we expect to get closer and closer to NumPy's API where appropriate.\n\n## Sparse Tensors\n\nIn March, we released a small package supporting sparse Tensors and in May we released CUDA support for the sparse package. The package is small and limited in functionality, and is used for implementing Sparse Embeddings and commonly used sparse paradigms in deep learning. This package is still small in scope and there's demand to expand it \u2014 if you are interested in working on expanding the sparse package, reach out to us on our [Discussion Boards](https://discuss.pytorch.org/)\n\n\n## Performance", "metadata": {"source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"}} {"page_content": "## Performance\n\nPerformance is always an ongoing battle, especially for PyTorch which is a dynamic framework that wants to maximize flexibility. Over the last year, we've improved performance across board, from our core Tensor library to the neural network operators, writing faster micro-optimized across board.\n\n* We've added specialized AVX and AVX2 intrinsics for Tensor operations\n* Wrote faster GPU kernels for frequent workloads like concatenation and Softmax (among many other things)\n* Rewrote the code for several neural network operators (too many to list), but notably nn.Embedding and group convolutions.\n\n**Reducing framework overhead by 10x across board**", "metadata": {"source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"}} {"page_content": "Since PyTorch is a dynamic graph framework, we create a new graph on the fly at every iteration of a training loop. Hence, the framework overhead has to be low, or the workload has to be large enough that the framework overhead is hidden. In August, the authors of DyNet (Graham Neubig and team) showcased that it's much faster than PyTorch on small NLP models. This was an interesting challenge, we didn't realize that models of those sizes were being trained. In a multi-month (and ongoing) effort, we embarked upon a significant rewrite of PyTorch internals that reduced the framework overhead from more than 10 microseconds per operator execution to as little as 1 microsecond.\n\n**ATen**", "metadata": {"source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"}} @@ -1229,27 +1232,28 @@ {"page_content": "

I've been using PyTorch a few months now and I've never felt better. I have more energy. My skin is clearer. My eye sight has improved.
— Andrej Karpathy (@karpathy) May 26, 2017

Talk to your doctor to find out if PyTorch is right for you.
— Sean Robertson (@sprobertson) May 26, 2017
\n", "metadata": {"source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"}} {"page_content": "

PyTorch gave me so much life that my skin got cleared, my grades are up, my bills are paid and my crops are watered.
— Adam Will (@adam_will_do_it) May 26, 2017

So have I! But my hair is also shiner and I've lost weight. @PyTorch for the win. https://t.co/qgU4oIOB4K
— Mariya (@thinkmariya) May 26, 2017
\n", "metadata": {"source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch for AMD ROCm\u2122 Platform now available as Python package'\nauthor: Niles Burbank \u2013 Director PM at AMD, Mayank Daga \u2013 Director, Deep Learning Software at AMD\n---\n\nWith the PyTorch 1.8 release, we are delighted to announce a new installation option for users of\nPyTorch on the ROCm\u2122 open software platform. An installable Python package is now hosted on\npytorch.org, along with instructions for local installation in the same simple, selectable format as\nPyTorch packages for CPU-only configurations and other GPU platforms. PyTorch on ROCm includes full\ncapability for mixed-precision and large-scale training using AMD\u2019s MIOpen & RCCL libraries. This\nprovides a new option for data scientists, researchers, students, and others in the community to get\nstarted with accelerated PyTorch using AMD GPUs.\n\n
\n\n## The ROCm Ecosystem", "metadata": {"source": "https://pytorch.org/blog/pytorch-for-amd-rocm-platform-now-available-as-python-package/", "category": "pytorch blogs"}} -{"page_content": "ROCm is AMD\u2019s open source software platform for GPU-accelerated high performance computing and\nmachine learning. Since the original ROCm release in 2016, the ROCm platform has evolved to support\nadditional libraries and tools, a wider set of Linux\u00ae distributions, and a range of new GPUs. This includes\nthe AMD Instinct\u2122 MI100, the first GPU based on AMD CDNA\u2122 architecture. \n \nThe ROCm ecosystem has an established history of support for PyTorch, which was initially implemented\nas a fork of the PyTorch project, and more recently through ROCm support in the upstream PyTorch\ncode. PyTorch users can install PyTorch for ROCm using AMD\u2019s public PyTorch docker image, and can of\ncourse build PyTorch for ROCm from source. With PyTorch 1.8, these existing installation options are\nnow complemented by the availability of an installable Python package.", "metadata": {"source": "https://pytorch.org/blog/pytorch-for-amd-rocm-platform-now-available-as-python-package/", "category": "pytorch blogs"}} +{"page_content": "## The ROCm Ecosystem\n\nROCm is AMD\u2019s open source software platform for GPU-accelerated high performance computing and\nmachine learning. Since the original ROCm release in 2016, the ROCm platform has evolved to support\nadditional libraries and tools, a wider set of Linux\u00ae distributions, and a range of new GPUs. This includes\nthe AMD Instinct\u2122 MI100, the first GPU based on AMD CDNA\u2122 architecture. \n \nThe ROCm ecosystem has an established history of support for PyTorch, which was initially implemented\nas a fork of the PyTorch project, and more recently through ROCm support in the upstream PyTorch\ncode. PyTorch users can install PyTorch for ROCm using AMD\u2019s public PyTorch docker image, and can of\ncourse build PyTorch for ROCm from source. With PyTorch 1.8, these existing installation options are\nnow complemented by the availability of an installable Python package.", "metadata": {"source": "https://pytorch.org/blog/pytorch-for-amd-rocm-platform-now-available-as-python-package/", "category": "pytorch blogs"}} {"page_content": "The primary focus of ROCm has always been high performance computing at scale. The combined\ncapabilities of ROCm and AMD\u2019s Instinct family of data center GPUs are particularly suited to the\nchallenges of HPC at data center scale. PyTorch is a natural fit for this environment, as HPC and ML\nworkflows become more intertwined.\n\n### Getting started with PyTorch for ROCm", "metadata": {"source": "https://pytorch.org/blog/pytorch-for-amd-rocm-platform-now-available-as-python-package/", "category": "pytorch blogs"}} -{"page_content": "The scope for this build of PyTorch is AMD GPUs with ROCm support, running on Linux. The GPUs\nsupported by ROCm include all of AMD\u2019s Instinct family of compute-focused data center GPUs, along\nwith some other select GPUs. A current list of supported GPUs can be found in the [ROCm Github\nrepository](https://github.com/RadeonOpenCompute/ROCm#supported-gpus). After confirming that the target system includes supported GPUs and the current 4.0.1\nrelease of ROCm, installation of PyTorch follows the same simple Pip-based installation as any other\nPython package. 
As with PyTorch builds for other platforms, the configurator at [https://pytorch.org/get-started/locally/](https://pytorch.org/get-started/locally/) provides the specific command line to be run.\n\nPyTorch for ROCm is built from the upstream PyTorch repository, and is a full featured implementation.\nNotably, it includes support for distributed training across multiple GPUs and supports accelerated\nmixed precision training.\n\n### More information", "metadata": {"source": "https://pytorch.org/blog/pytorch-for-amd-rocm-platform-now-available-as-python-package/", "category": "pytorch blogs"}} -{"page_content": "### More information\n\nA list of ROCm supported GPUs and operating systems can be found at\n[https://github.com/RadeonOpenCompute/ROCm](https://github.com/RadeonOpenCompute/ROCm)\nGeneral documentation on the ROCm platform is available at [https://rocmdocs.amd.com/en/latest/](https://rocmdocs.amd.com/en/latest/)\nROCm Learning Center at [https://developer.amd.com/resources/rocm-resources/rocm-learning-center/](https://developer.amd.com/resources/rocm-resources/rocm-learning-center/) General information on AMD\u2019s offerings for HPC and ML can be found at [https://amd.com/hpc](https://amd.com/hpc)\n\n### Feedback\nAn engaged user base is a tremendously important part of the PyTorch ecosystem. We would be deeply\nappreciative of feedback on the PyTorch for ROCm experience in the [PyTorch discussion forum](https://discuss.pytorch.org/) and, where appropriate, reporting any issues via [Github](https://github.com/pytorch/pytorch).", "metadata": {"source": "https://pytorch.org/blog/pytorch-for-amd-rocm-platform-now-available-as-python-package/", "category": "pytorch blogs"}} +{"page_content": "### Getting started with PyTorch for ROCm\n\nThe scope for this build of PyTorch is AMD GPUs with ROCm support, running on Linux. The GPUs\nsupported by ROCm include all of AMD\u2019s Instinct family of compute-focused data center GPUs, along\nwith some other select GPUs. A current list of supported GPUs can be found in the [ROCm Github\nrepository](https://github.com/RadeonOpenCompute/ROCm#supported-gpus). After confirming that the target system includes supported GPUs and the current 4.0.1\nrelease of ROCm, installation of PyTorch follows the same simple Pip-based installation as any other\nPython package. 
As with PyTorch builds for other platforms, the configurator at [https://pytorch.org/get-started/locally/](https://pytorch.org/get-started/locally/) provides the specific command line to be run.", "metadata": {"source": "https://pytorch.org/blog/pytorch-for-amd-rocm-platform-now-available-as-python-package/", "category": "pytorch blogs"}} +{"page_content": "PyTorch for ROCm is built from the upstream PyTorch repository, and is a full featured implementation.\nNotably, it includes support for distributed training across multiple GPUs and supports accelerated\nmixed precision training.\n\n### More information\n\nA list of ROCm supported GPUs and operating systems can be found at\n[https://github.com/RadeonOpenCompute/ROCm](https://github.com/RadeonOpenCompute/ROCm)\nGeneral documentation on the ROCm platform is available at [https://rocmdocs.amd.com/en/latest/](https://rocmdocs.amd.com/en/latest/)\nROCm Learning Center at [https://developer.amd.com/resources/rocm-resources/rocm-learning-center/](https://developer.amd.com/resources/rocm-resources/rocm-learning-center/) General information on AMD\u2019s offerings for HPC and ML can be found at [https://amd.com/hpc](https://amd.com/hpc)", "metadata": {"source": "https://pytorch.org/blog/pytorch-for-amd-rocm-platform-now-available-as-python-package/", "category": "pytorch blogs"}} +{"page_content": "### Feedback\nAn engaged user base is a tremendously important part of the PyTorch ecosystem. We would be deeply\nappreciative of feedback on the PyTorch for ROCm experience in the [PyTorch discussion forum](https://discuss.pytorch.org/) and, where appropriate, reporting any issues via [Github](https://github.com/pytorch/pytorch).", "metadata": {"source": "https://pytorch.org/blog/pytorch-for-amd-rocm-platform-now-available-as-python-package/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: \"A Tour of PyTorch Internals (Part I)\"\nauthor: \"Trevor Killeen\"\ndate: 2017-05-11 12:00:00 -0500\nredirect_from: /2017/05/11/Internals.html\n---\n\nThe fundamental unit in PyTorch is the Tensor. This post will serve as an overview for how we implement Tensors in PyTorch, such that the user can interact with it from the Python shell. In particular, we want to answer four main questions:\n\n- How does PyTorch extend the Python interpreter to define a Tensor type that can be manipulated from Python code?\n- How does PyTorch wrap the C libraries that actually define the Tensor's properties and methods?\n- How does PyTorch cwrap work to generate code for Tensor methods?\n- How does PyTorch's build system take all of these components to compile and generate a workable application?\n\n## Extending the Python Interpreter", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"}} -{"page_content": "PyTorch defines a new package `torch`. In this post we will consider the `._C` module. This module is known as an \"extension module\" - a Python module written in C. Such modules allow us to define new built-in object types (e.g. the `Tensor`) and to call C/C++ functions.\n\nThe `._C` module is defined in `torch/csrc/Module.cpp`. The `init_C()` / `PyInit__C()` function creates the module and adds the method definitions as appropriate. 
This module is passed around to a number of different `__init()` functions that add further objects to the module, register new types, etc.\n\nOne collection of these `__init()` calls is the following:\n\n```cpp\nASSERT_TRUE(THPDoubleTensor_init(module));\nASSERT_TRUE(THPFloatTensor_init(module));\nASSERT_TRUE(THPHalfTensor_init(module));\nASSERT_TRUE(THPLongTensor_init(module));\nASSERT_TRUE(THPIntTensor_init(module));\nASSERT_TRUE(THPShortTensor_init(module));\nASSERT_TRUE(THPCharTensor_init(module));\nASSERT_TRUE(THPByteTensor_init(module));\n```", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"}} +{"page_content": "## Extending the Python Interpreter\n\nPyTorch defines a new package `torch`. In this post we will consider the `._C` module. This module is known as an \"extension module\" - a Python module written in C. Such modules allow us to define new built-in object types (e.g. the `Tensor`) and to call C/C++ functions.\n\nThe `._C` module is defined in `torch/csrc/Module.cpp`. The `init_C()` / `PyInit__C()` function creates the module and adds the method definitions as appropriate. This module is passed around to a number of different `__init()` functions that add further objects to the module, register new types, etc.\n\nOne collection of these `__init()` calls is the following:\n\n```cpp\nASSERT_TRUE(THPDoubleTensor_init(module));\nASSERT_TRUE(THPFloatTensor_init(module));\nASSERT_TRUE(THPHalfTensor_init(module));\nASSERT_TRUE(THPLongTensor_init(module));\nASSERT_TRUE(THPIntTensor_init(module));\nASSERT_TRUE(THPShortTensor_init(module));\nASSERT_TRUE(THPCharTensor_init(module));\nASSERT_TRUE(THPByteTensor_init(module));\n```", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"}} {"page_content": "These `__init()` functions add the Tensor object for each type to the `._C` module so that they can be used in the module. Let's learn how these methods work.\n\n## The THPTensor Type\n\nMuch like the underlying `TH` and `THC` libraries, PyTorch defines a \"generic\" Tensor which is then specialized to a number of different types. Before considering how this specialization works, let's first consider how defining a new type in Python works, and how we create the generic `THPTensor` type.\n\nThe Python runtime sees all Python objects as variables of type `PyObject *`, which serves as a \"base type\" for all Python objects. Every Python type contains the refcount for the object, and a pointer to the object's *type object*. The type object determines the properties of the type. For example, it might contain a list of methods associated with the type, and which C functions get called to implement those methods. The object also contains any fields necessary to represent its state.", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"}} {"page_content": "The formula for defining a new type is as follows:\n\n- Create a struct that defines what the new object will contain\n- Define the type object for the type\n\nThe struct itself could be very simple. Inn Python, all floating point types are actually objects on the heap. The Python float struct is defined as:\n```cpp\ntypedef struct {\n PyObject_HEAD\n double ob_fval;\n} PyFloatObject;\n```\nThe `PyObject_HEAD` is a macro that brings in the code that implements an object's reference counting, and a pointer to the corresponding type object. 
So in this case, to implement a float, the only other \"state\" needed is the floating point value itself.\n\nNow, let's see the struct for our `THPTensor` type:\n```cpp\nstruct THPTensor {\n PyObject_HEAD\n THTensor *cdata;\n};\n```\nPretty simple, right? We are just wrapping the underlying `TH` tensor by storing a pointer to it.", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"}} {"page_content": "The key part is defining the \"type object\" for a new type. An example definition of a type object for our Python float takes the form:\n```cpp\nstatic PyTypeObject py_FloatType = {\n PyVarObject_HEAD_INIT(NULL, 0)\n \"py.FloatObject\", /* tp_name */\n sizeof(PyFloatObject), /* tp_basicsize */\n 0, /* tp_itemsize */\n 0, /* tp_dealloc */\n 0, /* tp_print */\n 0, /* tp_getattr */\n 0, /* tp_setattr */\n 0, /* tp_as_async */\n 0, /* tp_repr */\n 0, /* tp_as_number */\n 0, /* tp_as_sequence */\n 0, /* tp_as_mapping */\n 0, /* tp_hash */\n 0, /* tp_call */\n 0, /* tp_str */\n 0, /* tp_getattro */\n 0, /* tp_setattro */\n 0, /* tp_as_buffer */\n Py_TPFLAGS_DEFAULT, /* tp_flags */\n \"A floating point number\", /* tp_doc */\n};\n```\nThe easiest way to think of a *type object* is as a set of fields which define the properties of the object. For example, the `tp_basicsize` field is set to `sizeof(PyFloatObject)`. This is so that Python knows how much memory to allocate when calling `PyObject_New()` for a `PyFloatObject.` The full list of fields you can set is defined in `object.h` in the CPython backend:\nhttps://github.com/python/cpython/blob/master/Include/object.h.", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"}} {"page_content": "The type object for our `THPTensor` is `THPTensorType`, defined in `csrc/generic/Tensor.cpp`. This object defines the name, size, mapping methods, etc. for a `THPTensor`.\n\nAs an example, let's take a look at the `tp_new` function we set in the `PyTypeObject`:\n\n```cpp\nPyTypeObject THPTensorType = {\n PyVarObject_HEAD_INIT(NULL, 0)\n ...\n THPTensor_(pynew), /* tp_new */\n};\n```\nThe `tp_new` function enables object creation. It is responsible for creating (as opposed to initializing) objects of that type and is equivalent to the `__new__()` method at the Python level. The C implementation is a static method that is passed the type being instantiated and any arguments, and returns a newly created object.\n\n```cpp\nstatic PyObject * THPTensor_(pynew)(PyTypeObject *type, PyObject *args, PyObject *kwargs)\n{\n HANDLE_TH_ERRORS\n Py_ssize_t num_args = args ? PyTuple_Size(args) : 0;", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"}} {"page_content": "THPTensorPtr self = (THPTensor *)type->tp_alloc(type, 0);\n// more code below\n```\nThe first thing our new function does is allocate the `THPTensor`. It then runs through a series of initializations based off of the args passed to the function. For example, when creating a `THPTensor` *x* from another `THPTensor` *y*, we set the newly created `THPTensor`'s `cdata` field to be the result of calling `THTensor_(newWithTensor)` with the *y*'s underlying `TH` Tensor as an argument. 
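From the Python shell, these code paths were exercised by the type-specific constructors of that era; a small illustrative sketch (not from the original post):

```python
import torch

# Each call below enters the THPTensor tp_new entry point described above
# (shown with the legacy torch.FloatTensor constructor).
a = torch.FloatTensor(5)                # from a size: uninitialized 1-D tensor of length 5
b = torch.FloatTensor([1.0, 2.0, 3.0])  # from a Python sequence
c = torch.FloatTensor(b)                # from another tensor, wrapping the same underlying data
print(a.size(), b.size(), c.size())
```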
Similar constructors exist for sizes, storages, NumPy arrays, and sequences.\n\n** Note that we solely use `tp_new`, and not a combination of `tp_new` and `tp_init` (which corresponds to the `__init__()` function).\n\nThe other important thing defined in Tensor.cpp is how indexing works. PyTorch Tensors support Python's **Mapping Protocol**. This allows us to do things like:\n```python\nx = torch.Tensor(10).fill_(1)\ny = x[3] // y == 1\nx[4] = 2\n// etc.\n```\n** Note that this indexing extends to Tensor with more than one dimension", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"}} {"page_content": "We are able to use the `[]`-style notation by defining the three mapping methods described [here.](https://docs.python.org/3.7/c-api/typeobj.html#c.PyMappingMethods)\n\nThe most important methods are `THPTensor_(getValue)` and `THPTensor_(setValue)` which describe how to index a Tensor, for returning a new Tensor/Scalar, or updating the values of an existing Tensor in place. Read through these implementations to better understand how PyTorch supports basic tensor indexing.\n\n### Generic Builds (Part One)", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"}} -{"page_content": "We could spend a ton of time exploring various aspects of the `THPTensor` and how it relates to defining a new Python object. But we still need to see how the `THPTensor_(init)()` function is translated to the `THPIntTensor_init()` we used in our module initialization. How do we take our `Tensor.cpp` file that defines a \"generic\" Tensor and use it to generate Python objects for all the permutations of types? To put it another way, `Tensor.cpp` is littered with lines of code like:\n```cpp\nreturn THPTensor_(New)(THTensor_(new)(LIBRARY_STATE_NOARGS));\n```\nThis illustrates both cases we need to make type-specific:\n\n* Our output code will call `THPTensor_New(...)` in place of `THPTensor_(New)`\n* Our output code will call `THTensor_new(...)` in place of `THTensor_(new)`", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"}} +{"page_content": "### Generic Builds (Part One)\n\nWe could spend a ton of time exploring various aspects of the `THPTensor` and how it relates to defining a new Python object. But we still need to see how the `THPTensor_(init)()` function is translated to the `THPIntTensor_init()` we used in our module initialization. How do we take our `Tensor.cpp` file that defines a \"generic\" Tensor and use it to generate Python objects for all the permutations of types? To put it another way, `Tensor.cpp` is littered with lines of code like:\n```cpp\nreturn THPTensor_(New)(THTensor_(new)(LIBRARY_STATE_NOARGS));\n```\nThis illustrates both cases we need to make type-specific:\n\n* Our output code will call `THPTensor_New(...)` in place of `THPTensor_(New)`\n* Our output code will call `THTensor_new(...)` in place of `THTensor_(new)`", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"}} {"page_content": "In other words, for all supported Tensor types, we need to \"generate\" source code that has done the above substitutions. This is part of the \"build\" process for PyTorch. 
PyTorch relies on Setuptools (https://setuptools.readthedocs.io/en/latest/) for building the package, and we define a `setup.py` file in the top-level directory to customize the build process.\n\nOne component building an Extension module using Setuptools is to list the source files involved in the compilation. However, our `csrc/generic/Tensor.cpp` file is not listed! So how does the code in this file end up being a part of the end product?", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"}} {"page_content": "Recall that we are calling the `THPTensor*` functions (such as `init`) from the directory above `generic`. If we take a look in this directory, there is another file `Tensor.cpp` defined. The last line of this file is important:\n```cpp\n//generic_include TH torch/csrc/generic/Tensor.cpp\n```\nNote that this `Tensor.cpp` file is included in `setup.py`, but it is wrapped in a call to a Python helper function called `split_types`. This function takes as input a file, and looks for the \"//generic_include\" string in the file contents. If it is found, it generates a new output file for each Tensor type, with the following changes:\n\n- The output file is renamed to `Tensor.cpp`\n- The output file is slightly modified as follows:\n\n```cpp\n# Before:\n//generic_include TH torch/csrc/generic/Tensor.cpp", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"}} {"page_content": "# After:\n#define TH_GENERIC_FILE \"torch/src/generic/Tensor.cpp\"\n#include \"TH/THGenerateType.h\"\n```\nIncluding the header file on the second line has the side effect of including the source code in `Tensor.cpp` with some additional context defined. Let's take a look at one of the headers:\n\n```cpp\n#ifndef TH_GENERIC_FILE\n#error \"You must define TH_GENERIC_FILE before including THGenerateFloatType.h\"\n#endif\n\n#define real float\n#define accreal double\n#define TH_CONVERT_REAL_TO_ACCREAL(_val) (accreal)(_val)\n#define TH_CONVERT_ACCREAL_TO_REAL(_val) (real)(_val)\n#define Real Float\n#define THInf FLT_MAX\n#define TH_REAL_IS_FLOAT\n#line 1 TH_GENERIC_FILE\n#include TH_GENERIC_FILE\n#undef accreal\n#undef real\n#undef Real\n#undef THInf\n#undef TH_REAL_IS_FLOAT\n#undef TH_CONVERT_REAL_TO_ACCREAL\n#undef TH_CONVERT_ACCREAL_TO_REAL\n\n#ifndef THGenerateManyTypes\n#undef TH_GENERIC_FILE\n#endif\n```", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"}} {"page_content": "What this is doing is bringing in the code from the generic `Tensor.cpp` file and surrounding it with the following macro definitions. For example, we define real as a float, so any code in the generic Tensor implementation that refers to something as a real will have that real replaced with a float. In the corresponding file `THGenerateIntType.h`, the same macro would replace `real` with `int`.\n\nThese output files are returned from `split_types` and added to the list of source files, so we can see how the `.cpp` code for different types is created.", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"}} {"page_content": "There are a few things to note here: First, the `split_types` function is not strictly necessary. We could wrap the code in `Tensor.cpp` in a single file, repeating it for each type. The reason we split the code into separate files is to speed up compilation. 
Second, what we mean when we talk about the type replacement (e.g. replace real with a float) is that the C preprocessor will perform these substitutions during compilation. Merely surrounding the source code with these macros has no side effects until preprocessing.\n\n### Generic Builds (Part Two)", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"}} -{"page_content": "Now that we have source files for all the Tensor types, we need to consider how the corresponding header declarations are created, and also how the conversions from `THTensor_(method)` and `THPTensor_(method)` to `THTensor_method` and `THPTensor_method` work. For example, `csrc/generic/Tensor.h` has declarations like:\n```cpp\nTHP_API PyObject * THPTensor_(New)(THTensor *ptr);\n```\nWe use the same strategy for generating code in the source files for the headers. In `csrc/Tensor.h`, we do the following:\n```cpp\n#include \"generic/Tensor.h\"\n#include \n\n#include \"generic/Tensor.h\"\n#include \n```\nThis has the same effect, where we draw in the code from the generic header, wrapped with the same macro definitions, for each type. The only difference is that the resulting code is contained all within the same header file, as opposed to being split into multiple source files.", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"}} +{"page_content": "### Generic Builds (Part Two)\n\nNow that we have source files for all the Tensor types, we need to consider how the corresponding header declarations are created, and also how the conversions from `THTensor_(method)` and `THPTensor_(method)` to `THTensor_method` and `THPTensor_method` work. For example, `csrc/generic/Tensor.h` has declarations like:\n```cpp\nTHP_API PyObject * THPTensor_(New)(THTensor *ptr);\n```\nWe use the same strategy for generating code in the source files for the headers. In `csrc/Tensor.h`, we do the following:\n```cpp\n#include \"generic/Tensor.h\"\n#include \n\n#include \"generic/Tensor.h\"\n#include \n```\nThis has the same effect, where we draw in the code from the generic header, wrapped with the same macro definitions, for each type. The only difference is that the resulting code is contained all within the same header file, as opposed to being split into multiple source files.", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"}} {"page_content": "Lastly, we need to consider how we \"convert\" or \"substitute\" the function types. If we look in the same header file, we see a bunch of `#define` statements, including:\n```cpp\n#define THPTensor_(NAME) TH_CONCAT_4(THP,Real,Tensor_,NAME)\n```\nThis macro says that any string in the source code matching the format `THPTensor_(NAME)` should be replaced with `THPRealTensor_NAME`, where Real is derived from whatever the symbol Real is `#define`'d to be at the time. Because our header code and source code is surrounded by macro definitions for all the types as seen above, after the preprocessor has run, the resulting code is what we would expect. The code in the `TH` library defines the same macro for `THTensor_(NAME)`, supporting the translation of those functions as well. 
In this way, we end up with header and source files with specialized code.\n\n#### Module Objects and Type Methods", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"}} -{"page_content": "Now we have seen how we have wrapped `TH`'s Tensor definition in `THP`, and generated THP methods such as `THPFloatTensor_init(...)`. Now we can explore what the above code actually does in terms of the module we are creating. The key line in `THPTensor_(init)` is:\n```cpp\n# THPTensorBaseStr, THPTensorType are also macros that are specific\n# to each type\nPyModule_AddObject(module, THPTensorBaseStr, (PyObject *)&THPTensorType);\n```\nThis function registers our Tensor objects to the extension module, so we can use THPFloatTensor, THPIntTensor, etc. in our Python code.", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"}} +{"page_content": "#### Module Objects and Type Methods\n\nNow we have seen how we have wrapped `TH`'s Tensor definition in `THP`, and generated THP methods such as `THPFloatTensor_init(...)`. Now we can explore what the above code actually does in terms of the module we are creating. The key line in `THPTensor_(init)` is:\n```cpp\n# THPTensorBaseStr, THPTensorType are also macros that are specific\n# to each type\nPyModule_AddObject(module, THPTensorBaseStr, (PyObject *)&THPTensorType);\n```\nThis function registers our Tensor objects to the extension module, so we can use THPFloatTensor, THPIntTensor, etc. in our Python code.", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"}} {"page_content": "Just being able to create Tensors isn't very useful - we need to be able to call all the methods that `TH` defines. A simple example shows calling the in-place `zero_` method on a Tensor.\n```python\nx = torch.FloatTensor(10)\nx.zero_()\n```\nLet's start by seeing how we add methods to newly defined types. One of the fields in the \"type object\" is `tp_methods`. This field holds an array of method definitions (`PyMethodDef`s) and is used to associate methods (and their underlying C/C++ implementations) with a type. Suppose we wanted to define a new method on our `PyFloatObject` that replaces the value. We could implement this as follows:\n```cpp\nstatic PyObject * replace(PyFloatObject *self, PyObject *args) {\n\tdouble val;\n\tif (!PyArg_ParseTuple(args, \"d\", &val))\n\t\treturn NULL;\n\tself->ob_fval = val;\n\tPy_RETURN_NONE\n}\n```\nThis is equivalent to the Python method:\n```python\ndef replace(self, val):\n\tself.ob_fval = val\n```\nIt is instructive to read more about how defining methods works in CPython. In general, methods take as the first parameter the instance of the object, and optionally parameters for the positional arguments and keyword arguments. This static function is registered as a method on our float:\n```cpp\nstatic PyMethodDef float_methods[] = {\n\t{\"replace\", (PyCFunction)replace, METH_VARARGS,\n\t\"replace the value in the float\"\n\t},\n\t{NULL} /* Sentinel */\n}\n```\nThis registers a method called replace, which is implemented by the C function of the same name. The `METH_VARARGS` flag indicates that the method takes a tuple of arguments representing all the arguments to the function. 
This array is set to the `tp_methods` field of the type object, and then we can use the `replace` method on objects of that type.", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"}} {"page_content": "We would like to be able to call all of the methods for `TH` tensors on our `THP` tensor equivalents. However, writing wrappers for all of the `TH` methods would be time-consuming and error prone. We need a better way to do this.\n\n### PyTorch cwrap\n\nPyTorch implements its own cwrap tool to wrap the `TH` Tensor methods for use in the Python backend. We define a `.cwrap` file containing a series of C method declarations in our custom [YAML format](http://yaml.org). The cwrap tool takes this file and outputs `.cpp` source files containing the wrapped methods in a format that is compatible with our `THPTensor` Python object and the Python C extension method calling format. This tool is used to generate code to wrap not only `TH`, but also `CuDNN`. It is defined to be extensible.", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"}} {"page_content": "An example YAML \"declaration\" for the in-place `addmv_` function is as follows:\n```\n[[\n name: addmv_\n cname: addmv\n return: self\n arguments:\n - THTensor* self\n - arg: real beta\n default: AS_REAL(1)\n - THTensor* self\n - arg: real alpha\n default: AS_REAL(1)\n - THTensor* mat\n - THTensor* vec\n]]\n```\nThe architecture of the cwrap tool is very simple. It reads in a file, and then processes it with a series of **plugins.** See `tools/cwrap/plugins/__init__.py` for documentation on all the ways a plugin can alter the code.", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"}} @@ -1260,11 +1264,11 @@ {"page_content": "---\nlayout: blog_detail\ntitle: \"PyTorch 1.12: TorchArrow, Functional API for Modules and nvFuser, are now available\"\nauthor: Team PyTorch\nfeatured-img: ''\n---\n\nWe are excited to announce the release of PyTorch 1.12 ([release note](https://github.com/pytorch/pytorch/releases/tag/v1.12.0))! This release is composed of over 3124 commits, 433 contributors. Along with 1.12, we are releasing beta versions of AWS S3 Integration, PyTorch Vision Models on Channels Last on CPU, Empowering PyTorch on Intel\u00ae Xeon\u00ae Scalable processors with Bfloat16 and FSDP API. We want to sincerely thank our dedicated community for your contributions.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"}} {"page_content": "Summary:\n- Functional APIs to functionally apply module computation with a given set of parameters\n- Complex32 and Complex Convolutions in PyTorch\n- DataPipes from TorchData fully backward compatible with DataLoader \n- functorch with improved coverage for APIs\n- nvFuser a deep learning compiler for PyTorch\n- Changes to float32 matrix multiplication precision on Ampere and later CUDA hardware\n- TorchArrow, a new beta library for machine learning preprocessing over batch data\n\n## Frontend APIs\n\n### Introducing TorchArrow\n\nWe\u2019ve got a new Beta release ready for you to try and use: TorchArrow. This is a library for machine learning preprocessing over batch data. 
It features a performant and Pandas-style, easy-to-use API in order to speed up your preprocessing workflows and development.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"}} {"page_content": "Currently, it provides a Python DataFrame interface with the following features:\n- High-performance CPU backend, vectorized and extensible User-Defined Functions (UDFs) with [Velox](https://github.com/facebookincubator/velox)\n- Seamless handoff with PyTorch or other model authoring, such as Tensor collation and easily plugging into PyTorch DataLoader and DataPipes\n- Zero copy for external readers via Arrow in-memory columnar format\n\nFor more details, please find our [10-min tutorial](https://github.com/pytorch/torcharrow/blob/main/tutorial/tutorial.ipynb), installation [instructions](https://github.com/pytorch/torcharrow#installation), API [documentation](https://pytorch.org/torcharrow/beta/), and a [prototype](https://github.com/pytorch/torchrec/tree/main/examples/torcharrow) for data preprocessing in TorchRec.\n\n### (Beta) Functional API for Modules", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"}} -{"page_content": "PyTorch 1.12 introduces a new beta feature to functionally apply Module computation with a given set of parameters. Sometimes, the traditional PyTorch Module usage pattern that maintains a static set of parameters internally is too restrictive. This is often the case when implementing algorithms for meta-learning, where multiple sets of parameters may need to be maintained across optimizer steps. \n\nThe new ``torch.nn.utils.stateless.functional_call()`` API allows for: \n- Module computation with full flexibility over the set of parameters used\n- No need to reimplement your module in a functional way\n- Any parameter or buffer present in the module can be swapped with an externally-defined value for use in the call. Naming for referencing parameters / buffers follows the fully-qualified form in the module\u2019s ``state_dict()``\n\nExample:\n```python\nimport torch\nfrom torch import nn\nfrom torch.nn.utils.stateless import functional_call", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"}} +{"page_content": "### (Beta) Functional API for Modules\n\nPyTorch 1.12 introduces a new beta feature to functionally apply Module computation with a given set of parameters. Sometimes, the traditional PyTorch Module usage pattern that maintains a static set of parameters internally is too restrictive. This is often the case when implementing algorithms for meta-learning, where multiple sets of parameters may need to be maintained across optimizer steps. \n\nThe new ``torch.nn.utils.stateless.functional_call()`` API allows for: \n- Module computation with full flexibility over the set of parameters used\n- No need to reimplement your module in a functional way\n- Any parameter or buffer present in the module can be swapped with an externally-defined value for use in the call. 
Naming for referencing parameters / buffers follows the fully-qualified form in the module\u2019s ``state_dict()``\n\nExample:\n```python\nimport torch\nfrom torch import nn\nfrom torch.nn.utils.stateless import functional_call", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"}} {"page_content": "class MyModule(nn.Module):\n def __init__(self):\n super().__init__()\n self.fc1 = nn.Linear(3, 3)\n self.bn = nn.BatchNorm1d(3)\n self.fc2 = nn.Linear(3, 3)\n\n def forward(self, x):\n return self.fc2(self.bn(self.fc1(x)))\n\nm = MyModule()\n\n# Define parameter / buffer values to use during module computation.\nmy_weight = torch.randn(3, 3, requires_grad=True)\nmy_bias = torch.tensor([1., 2., 3.], requires_grad=True)\nparams_and_buffers = {\n 'fc1.weight': my_weight,\n 'fc1.bias': my_bias,\n # Custom buffer values can be used too.\n 'bn.running_mean': torch.randn(3),\n}\n\n# Apply module computation to the input with the specified parameters / buffers.\ninp = torch.randn(5, 3)\noutput = functional_call(m, params_and_buffers, inp)\n```\n\n### (Beta) Complex32 and Complex Convolutions in PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"}} {"page_content": "PyTorch today natively supports complex numbers, complex autograd, complex modules, and numerous complex operations, including linear algebra and Fast Fourier Transform (FFT) operators. Many libraries, including torchaudio and ESPNet, already make use of complex numbers in PyTorch, and PyTorch 1.12 further extends complex functionality with complex convolutions and the experimental complex32 (\u201ccomplex half\u201d) data type that enables half precision FFT operations. Due to the bugs in CUDA 11.3 package, we recommend using CUDA 11.6 package from wheels if you are using complex numbers.\n\n### (Beta) Forward-mode Automatic Differentiation", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"}} -{"page_content": "Forward-mode AD allows the computation of directional derivatives (or equivalently, Jacobian-vector products) eagerly in the forward pass. PyTorch 1.12 significantly improves the operator coverage for forward-mode AD. See our [tutorial](https://pytorch.org/tutorials/search.html?q=forward-mode+automatic+differentiation+%28beta%29&check_keywords=yes&area=default#) for more information.\n\n### TorchData \n\n#### BC DataLoader + DataPipe\n\n\\`DataPipe\\` from TorchData becomes fully backward compatible with the existing \\`DataLoader\\` regarding shuffle determinism and dynamic sharding in both multiprocessing and distributed environments. For more details, please check out the [tutorial](https://pytorch.org/data/0.4.0/tutorial.html#working-with-dataloader).\n\n#### (Beta) AWS S3 Integration", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"}} -{"page_content": "DataPipes based on [AWSSDK](https://github.com/aws/aws-sdk-cpp) have been integrated into TorchData. It provides the following features backed by native AWSSDK:\n- Retrieve list of urls from each S3 bucket based on prefix\n\t- Support timeout to prevent hanging indefinitely\n\t- Support to specify S3 bucket region\n\t\t\n- Load data from S3 urls\n\t- Support buffered and multi-part download\n\t- Support to specify S3 bucket region\n\nAWS native DataPipes are still in the beta phase. 
And, we will keep tuning them to improve their performance.\n\n#### (Prototype) DataLoader2\n\nDataLoader2 became available in prototype mode. We are introducing new ways to interact between DataPipes, DataLoading API, and backends (aka ReadingServices). Feature is stable in terms of API, but functionally not complete yet. We welcome early adopters and feedback, as well as potential contributors.\n\nFor more details, please checkout the [link](https://github.com/pytorch/data/tree/main/torchdata/dataloader2).\n\n### functorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"}} +{"page_content": "### (Beta) Forward-mode Automatic Differentiation\n\nForward-mode AD allows the computation of directional derivatives (or equivalently, Jacobian-vector products) eagerly in the forward pass. PyTorch 1.12 significantly improves the operator coverage for forward-mode AD. See our [tutorial](https://pytorch.org/tutorials/search.html?q=forward-mode+automatic+differentiation+%28beta%29&check_keywords=yes&area=default#) for more information.\n\n### TorchData \n\n#### BC DataLoader + DataPipe\n\n\\`DataPipe\\` from TorchData becomes fully backward compatible with the existing \\`DataLoader\\` regarding shuffle determinism and dynamic sharding in both multiprocessing and distributed environments. For more details, please check out the [tutorial](https://pytorch.org/data/0.4.0/tutorial.html#working-with-dataloader).\n\n#### (Beta) AWS S3 Integration", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"}} +{"page_content": "#### (Beta) AWS S3 Integration\n\nDataPipes based on [AWSSDK](https://github.com/aws/aws-sdk-cpp) have been integrated into TorchData. It provides the following features backed by native AWSSDK:\n- Retrieve list of urls from each S3 bucket based on prefix\n\t- Support timeout to prevent hanging indefinitely\n\t- Support to specify S3 bucket region\n\t\t\n- Load data from S3 urls\n\t- Support buffered and multi-part download\n\t- Support to specify S3 bucket region\n\nAWS native DataPipes are still in the beta phase. And, we will keep tuning them to improve their performance.\n\n#### (Prototype) DataLoader2\n\nDataLoader2 became available in prototype mode. We are introducing new ways to interact between DataPipes, DataLoading API, and backends (aka ReadingServices). Feature is stable in terms of API, but functionally not complete yet. We welcome early adopters and feedback, as well as potential contributors.\n\nFor more details, please checkout the [link](https://github.com/pytorch/data/tree/main/torchdata/dataloader2).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"}} {"page_content": "### functorch\n\nInspired by [Google JAX](https://github.com/google/jax), functorch is a library that offers composable vmap (vectorization) and autodiff transforms. It enables advanced autodiff use cases that would otherwise be tricky to express in PyTorch. 
Examples of these include:\n- [running ensembles of models on a single machine](https://pytorch.org/functorch/stable/notebooks/ensembling.html)\n- [efficiently computing Jacobians and Hessians](https://pytorch.org/functorch/stable/notebooks/jacobians_hessians.html)\n- [computing per-sample-gradients (or other per-sample quantities)](https://pytorch.org/functorch/stable/notebooks/per_sample_grads.html)\n\nWe\u2019re excited to announce functorch 0.2.0 with a number of improvements and new experimental features.\n\n#### Significantly improved coverage\n\nWe significantly improved coverage for ``functorch.jvp`` (our forward-mode autodiff API) and other APIs that rely on it (``functorch.{jacfwd, hessian}``).\n\n#### (Prototype) functorch.experimental.functionalize", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"}} {"page_content": "Given a function f, ``functionalize(f)`` returns a new function without mutations (with caveats). This is useful for constructing traces of PyTorch functions without in-place operations. For example, you can use ``make_fx(functionalize(f))`` to construct a mutation-free trace of a pytorch function. To learn more, please see the [documentation](https://pytorch.org/functorch/stable/generated/functorch.experimental.functionalize.html#functorch.experimental.functionalize).\n\nFor more details, please see our [installation instructions](https://pytorch.org/functorch/stable/install.html), [documentation](https://pytorch.org/functorch/), [tutorials](https://pytorch.org/functorch), and [release notes](https://github.com/pytorch/functorch/releases).\n\n## Performance Improvements\n\n### Introducing nvFuser, a deep learning compiler for PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"}} {"page_content": "In PyTorch 1.12, Torchscript is updating its default fuser (for Volta and later CUDA accelerators) to nvFuser, which supports a wider range of operations and is faster than NNC, the previous fuser for CUDA devices. A soon to be published blog post will elaborate on nvFuser and show how it speeds up training on a variety of networks. \n\nSee [the nvFuser documentation](https://github.com/pytorch/pytorch/blob/release/1.12/torch/csrc/jit/codegen/cuda/README.md) for more details on usage and debugging.\n\n### Changes to float32 matrix multiplication precision on Ampere and later CUDA hardware", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"}} @@ -1279,10 +1283,10 @@ {"page_content": "In this beta release, FSDP API added the following features to support various production workloads. Highlights of the the newly added features in this beta release include: \n1. Universal sharding strategy API - Users can easily change between sharding strategies with a single line change, and thus compare and use DDP (only data sharding), FSDP (full model and data sharding), or Zero2 (only sharding of optimizer and gradients) to optimize memory and performance for their specific training needs\n2. Fine grained mixed precision policies - Users can specify a mix of half and full data types (bfloat16, fp16 or fp32) for model parameters, gradient communication, and buffers via mixed precision policies. Models are automatically saved in fp32 to allow for maximum portability\n3. 
Transformer auto wrapping policy - allows for optimal wrapping of Transformer based models by registering the models layer class, and thus accelerated training performance\n4. Faster model initialization using device_id init - initialization is performed in a streaming fashion to avoid OOM issues and optimize init performance vs CPU init \n5. Rank0 streaming for full model saving of larger models - Fully sharded models can be saved by all GPU\u2019s streaming their shards to the rank 0 GPU, and the model is built in full state on the rank 0 CPU for saving", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"}} {"page_content": "For more details and example code, please checkout the [documentation](https://pytorch.org/docs/1.11/fsdp.html?highlight=fsdp#module-torch.distributed.fsdp) and the [tutorial](https://pytorch.org/tutorials/intermediate/FSDP_tutorial.html). \n\n\nThanks for reading, If you\u2019re interested in these updates and want to join the PyTorch community, we encourage you to join the [discussion forums](https://discuss.pytorch.org/) and [open GitHub issues](https://github.com/pytorch/pytorch/issues). To get the latest news from PyTorch, follow us on [Twitter](https://twitter.com/PyTorch), [Medium](https://medium.com/pytorch), [YouTube](https://www.youtube.com/pytorch), and [LinkedIn](https://www.linkedin.com/company/pytorch).\n\nCheers! \n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch 1.4 released, domain libraries updated'\nauthor: Team PyTorch\n---\n\nToday, we\u2019re announcing the availability of PyTorch 1.4, along with updates to the PyTorch domain libraries. These releases build on top of the announcements from [NeurIPS 2019](https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/), where we shared the availability of PyTorch Elastic, a new classification framework for image and video, and the addition of Preferred Networks to the PyTorch community. For those that attended the workshops at NeurIPS, the content can be found [here](https://research.fb.com/neurips-2019-expo-workshops/).\n\n## PyTorch 1.4\n\nThe 1.4 release of PyTorch adds new capabilities, including the ability to do fine grain build level customization for PyTorch Mobile, and new experimental features including support for model parallel training and Java language bindings.\n\n### PyTorch Mobile - Build level customization", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-4-released-and-domain-libraries-updated/", "category": "pytorch blogs"}} -{"page_content": "Following the open sourcing of [PyTorch Mobile in the 1.3 release](https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/), PyTorch 1.4 adds additional mobile support including the ability to customize build scripts at a fine-grain level. This allows mobile developers to optimize library size by only including the operators used by their models and, in the process, reduce their on device footprint significantly. Initial results show that, for example, a customized MobileNetV2 is 40% to 50% smaller than the prebuilt PyTorch mobile library. 
You can learn more [here](https://pytorch.org/mobile/home/) about how to create your own custom builds and, as always, please engage with the community on the [PyTorch forums](https://discuss.pytorch.org/c/mobile) to provide any feedback you have.\n\nExample code snippet for selectively compiling only the operators needed for MobileNetV2:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-4-released-and-domain-libraries-updated/", "category": "pytorch blogs"}} +{"page_content": "### PyTorch Mobile - Build level customization\n\nFollowing the open sourcing of [PyTorch Mobile in the 1.3 release](https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/), PyTorch 1.4 adds additional mobile support including the ability to customize build scripts at a fine-grain level. This allows mobile developers to optimize library size by only including the operators used by their models and, in the process, reduce their on device footprint significantly. Initial results show that, for example, a customized MobileNetV2 is 40% to 50% smaller than the prebuilt PyTorch mobile library. You can learn more [here](https://pytorch.org/mobile/home/) about how to create your own custom builds and, as always, please engage with the community on the [PyTorch forums](https://discuss.pytorch.org/c/mobile) to provide any feedback you have.\n\nExample code snippet for selectively compiling only the operators needed for MobileNetV2:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-4-released-and-domain-libraries-updated/", "category": "pytorch blogs"}} {"page_content": "```python\n# Dump list of operators used by MobileNetV2:\nimport torch, yaml\nmodel = torch.jit.load('MobileNetV2.pt')\nops = torch.jit.export_opnames(model)\nwith open('MobileNetV2.yaml', 'w') as output:\n yaml.dump(ops, output)\n```\n\n```console\n# Build PyTorch Android library customized for MobileNetV2:\nSELECTED_OP_LIST=MobileNetV2.yaml scripts/build_pytorch_android.sh arm64-v8a\n\n# Build PyTorch iOS library customized for MobileNetV2:\nSELECTED_OP_LIST=MobileNetV2.yaml BUILD_PYTORCH_MOBILE=1 IOS_ARCH=arm64 scripts/build_ios.sh\n```\n\n### Distributed model parallel training (Experimental)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-4-released-and-domain-libraries-updated/", "category": "pytorch blogs"}} {"page_content": "With the scale of models, such as RoBERTa, continuing to increase into the billions of parameters, model parallel training has become ever more important to help researchers push the limits. This release provides a distributed RPC framework to support distributed model parallel training. 
It allows for running functions remotely and referencing remote objects without copying the real data around, and provides autograd and optimizer APIs to transparently run backwards and update parameters across RPC boundaries.\n\nTo learn more about the APIs and the design of this feature, see the links below:\n\n* [API documentation](https://pytorch.org/docs/stable/rpc.html)\n* [Distributed Autograd design doc](https://pytorch.org/docs/stable/notes/distributed_autograd.html)\n* [Remote Reference design doc](https://pytorch.org/docs/stable/notes/rref.html)\n\nFor the full tutorials, see the links below:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-4-released-and-domain-libraries-updated/", "category": "pytorch blogs"}} -{"page_content": "* [A full RPC tutorial](https://pytorch.org/tutorials/intermediate/rpc_tutorial.html)\n* [Examples using model parallel training for reinforcement learning and with an LSTM](https://github.com/pytorch/examples/tree/master/distributed/rpc)\n\nAs always, you can connect with community members and discuss more on the [forums](https://discuss.pytorch.org/c/distributed/distributed-rpc).\n\n### Java bindings (Experimental)\n\nIn addition to supporting Python and C++, this release adds experimental support for Java bindings. Based on the interface developed for Android in PyTorch Mobile, the new bindings allow you to invoke TorchScript models from any Java program. Note that the Java bindings are only available for Linux for this release, and for inference only. We expect support to expand in subsequent releases. See the code snippet below for how to use PyTorch within Java:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-4-released-and-domain-libraries-updated/", "category": "pytorch blogs"}} +{"page_content": "For the full tutorials, see the links below: \n\n* [A full RPC tutorial](https://pytorch.org/tutorials/intermediate/rpc_tutorial.html)\n* [Examples using model parallel training for reinforcement learning and with an LSTM](https://github.com/pytorch/examples/tree/master/distributed/rpc)\n\nAs always, you can connect with community members and discuss more on the [forums](https://discuss.pytorch.org/c/distributed/distributed-rpc).\n\n### Java bindings (Experimental)\n\nIn addition to supporting Python and C++, this release adds experimental support for Java bindings. Based on the interface developed for Android in PyTorch Mobile, the new bindings allow you to invoke TorchScript models from any Java program. Note that the Java bindings are only available for Linux for this release, and for inference only. We expect support to expand in subsequent releases. 
See the code snippet below for how to use PyTorch within Java:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-4-released-and-domain-libraries-updated/", "category": "pytorch blogs"}} {"page_content": "```java\nModule mod = Module.load(\"demo-model.pt1\");\nTensor data =\n Tensor.fromBlob(\n new int[] {1, 2, 3, 4, 5, 6}, // data\n new long[] {2, 3} // shape\n );\nIValue result = mod.forward(IValue.from(data), IValue.from(3.0));\nTensor output = result.toTensor();\nSystem.out.println(\"shape: \" + Arrays.toString(output.shape()));\nSystem.out.println(\"data: \" + Arrays.toString(output.getDataAsFloatArray()));\n```\n\nLearn more about how to use PyTorch from Java [here](https://github.com/pytorch/java-demo), and see the full Javadocs API documentation [here](https://pytorch.org/javadoc/1.4.0/).\n\nFor the full 1.4 release notes, see [here](https://github.com/pytorch/pytorch/releases).\n\n## Domain Libraries\n\nPyTorch domain libraries like torchvision, torchtext, and torchaudio complement PyTorch with common datasets, models, and transforms. We\u2019re excited to share new releases for all three domain libraries alongside the PyTorch 1.4 core release.\n\n### torchvision 0.5", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-4-released-and-domain-libraries-updated/", "category": "pytorch blogs"}} {"page_content": "### torchvision 0.5\n\nThe improvements to torchvision 0.5 mainly focus on adding support for production deployment including quantization, TorchScript, and ONNX. Some of the highlights include:\n\n* All models in torchvision are now torchscriptable making them easier to ship into non-Python production environments\n* ResNets, MobileNet, ShuffleNet, GoogleNet and InceptionV3 now have quantized counterparts with pre-trained models, and also include scripts for quantization-aware training.\n* In partnership with the Microsoft team, we\u2019ve added ONNX support for all models including Mask R-CNN.\n\nLearn more about torchvision 0.5 [here](https://github.com/pytorch/vision/releases).\n\n### torchaudio 0.4\n\nImprovements in torchaudio 0.4 focus on enhancing the currently available transformations, datasets, and backend support. Highlights include:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-4-released-and-domain-libraries-updated/", "category": "pytorch blogs"}} {"page_content": "* SoX is now optional, and a new extensible backend dispatch mechanism exposes SoundFile as an alternative to SoX.\n* The interface for datasets has been unified. This enables the addition of two large datasets: LibriSpeech and Common Voice.\n* New filters such as biquad, data augmentation such as time and frequency masking, transforms such as MFCC, gain and dither, and new feature computation such as deltas, are now available.\n* Transformations now support batches and are jitable.\n* An interactive speech recognition demo with voice activity detection is available for experimentation.\n\nLearn more about torchaudio 0.4 [here](https://github.com/pytorch/audio/releases).\n\n### torchtext 0.5\n\ntorchtext 0.5 focuses mainly on improvements to the dataset loader APIs, including compatibility with core PyTorch APIs, but also adds support for unsupervised text tokenization. Highlights include:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-4-released-and-domain-libraries-updated/", "category": "pytorch blogs"}} @@ -1291,16 +1295,16 @@ {"page_content": "1. 
SWA has been shown to significantly improve generalization in computer vision tasks, including VGG, ResNets, Wide ResNets and DenseNets on ImageNet and CIFAR benchmarks [1, 2].\n2. SWA provides state-of-the-art performance on key benchmarks in semi-supervised learning and domain adaptation [2].\n3. SWA is shown to improve the stability of training as well as the final average rewards of policy-gradient methods in deep reinforcement learning [3].\n4. An extension of SWA can obtain efficient Bayesian model averaging, as well as high quality uncertainty estimates and calibration in deep learning [4].\n5. SWA for low precision training, SWALP, can match the performance of full-precision SGD even with all numbers quantized down to 8 bits, including gradient accumulators [5].", "metadata": {"source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"}} {"page_content": "In short, SWA performs an equal average of the weights traversed by SGD with a modified learning rate schedule (see the left panel of Figure 1.). SWA solutions end up in the center of a wide flat region of loss, while SGD tends to converge to the boundary of the low-loss region, making it susceptible to the shift between train and test error surfaces (see the middle and right panels of Figure 1).\n\n
\n \n
\n\n**Figure 1.** Illustrations of SWA and SGD with a Preactivation ResNet-164 on CIFAR-100 [1]. **Left:** test error surface for three FGE samples and the corresponding SWA solution (averaging in weight space). **Middle** and **Right:** test error and train loss surfaces showing the weights proposed by SGD (at convergence) and SWA, starting from the same initialization of SGD after 125 training epochs. Please see [1] for details on how these figures were constructed.", "metadata": {"source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"}} {"page_content": "**With our new implementation in [torchcontrib](https://github.com/pytorch/contrib) using SWA is as easy as using any other optimizer in PyTorch:**\n\n```python\nfrom torchcontrib.optim import SWA\n\n...\n...\n\n# training loop\nbase_opt = torch.optim.SGD(model.parameters(), lr=0.1)\nopt = torchcontrib.optim.SWA(base_opt, swa_start=10, swa_freq=5, swa_lr=0.05)\nfor _ in range(100):\n opt.zero_grad()\n loss_fn(model(input), target).backward()\n opt.step()\nopt.swap_swa_sgd()\n```\n\nYou can wrap any optimizer from `torch.optim` using the `SWA` class, and then train your model as usual. When training is complete you simply call `swap_swa_sgd()` to set the weights of your model to their SWA averages. Below we explain the SWA procedure and the parameters of the `SWA` class in detail. We emphasize that SWA can be combined with *any* optimization procedure, such as Adam, in the same way that it can be combined with SGD.\n\n## Is this just Averaged SGD?", "metadata": {"source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"}} -{"page_content": "At a high level, averaging SGD iterates dates back several decades in convex optimization [6, 7], where it is sometimes referred to as Polyak-Ruppert averaging, or *averaged* SGD. **But the details matter**. *Averaged SGD* is often employed in conjunction with a decaying learning rate, and an exponentially moving average, typically for convex optimization. In convex optimization, the focus has been on improved rates of convergence. In deep learning, this form of averaged SGD smooths the trajectory of SGD iterates, but does not perform very differently.\n\nBy contrast, SWA is focused on an **equal average** of SGD iterates with a modified **cyclical or high constant learning rate**, and exploits the flatness of training objectives [8] specific to **deep learning** for **improved generalization**.\n\n## Stochastic Weight Averaging", "metadata": {"source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"}} -{"page_content": "There are two important ingredients that make SWA work. First, SWA uses a modified learning rate schedule so that SGD continues to explore the set of high-performing networks instead of simply converging to a single solution. For example, we can use the standard decaying learning rate strategy for the first 75% of training time, and then set the learning rate to a reasonably high constant value for the remaining 25% of the time (see the Figure 2 below). The second ingredient is to average the weights of the networks traversed by SGD. For example, we can maintain a running average of the weights obtained in the end of every epoch within the last 25% of training time (see Figure 2).\n
\n \n
", "metadata": {"source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"}} +{"page_content": "## Is this just Averaged SGD?\n\nAt a high level, averaging SGD iterates dates back several decades in convex optimization [6, 7], where it is sometimes referred to as Polyak-Ruppert averaging, or *averaged* SGD. **But the details matter**. *Averaged SGD* is often employed in conjunction with a decaying learning rate, and an exponentially moving average, typically for convex optimization. In convex optimization, the focus has been on improved rates of convergence. In deep learning, this form of averaged SGD smooths the trajectory of SGD iterates, but does not perform very differently.\n\nBy contrast, SWA is focused on an **equal average** of SGD iterates with a modified **cyclical or high constant learning rate**, and exploits the flatness of training objectives [8] specific to **deep learning** for **improved generalization**.\n\n## Stochastic Weight Averaging", "metadata": {"source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"}} +{"page_content": "## Stochastic Weight Averaging\n\nThere are two important ingredients that make SWA work. First, SWA uses a modified learning rate schedule so that SGD continues to explore the set of high-performing networks instead of simply converging to a single solution. For example, we can use the standard decaying learning rate strategy for the first 75% of training time, and then set the learning rate to a reasonably high constant value for the remaining 25% of the time (see the Figure 2 below). The second ingredient is to average the weights of the networks traversed by SGD. For example, we can maintain a running average of the weights obtained in the end of every epoch within the last 25% of training time (see Figure 2).\n
\n \n
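\n\nAs a rough sketch of the recipe just described (an illustrative addition, not part of the original post), the 75%/25% split could be expressed with the `SWA` wrapper for a hypothetical 100-epoch run; the model, data, and learning rates below are placeholders, and `swa_start` / `swa_freq` are counted in optimization steps:\n\n```python\nimport torch\nfrom torchcontrib.optim import SWA\n\n# Toy stand-ins: a tiny model and 20 batches of random data per epoch\nmodel = torch.nn.Linear(10, 1)\nloader = [(torch.randn(8, 10), torch.randn(8, 1)) for _ in range(20)]\nloss_fn = torch.nn.MSELoss()\n\nbase_opt = torch.optim.SGD(model.parameters(), lr=0.1)\n# Switch to the constant swa_lr after 75% of the steps; snapshot the weights once per epoch\nopt = SWA(base_opt, swa_start=75 * len(loader), swa_freq=len(loader), swa_lr=0.05)\n\nfor epoch in range(100):\n    for input, target in loader:\n        opt.zero_grad()\n        loss_fn(model(input), target).backward()\n        opt.step()\n\nopt.swap_swa_sgd()  # replace the model weights with their SWA running average\n```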
", "metadata": {"source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"}} {"page_content": "**Figure 2.** Illustration of the learning rate schedule adopted by SWA. Standard decaying schedule is used for the first 75% of the training and then a high constant value is used for the remaining 25%. The SWA averages are formed during the last 25% of training.\n\nIn our implementation the auto mode of the `SWA` optimizer allows us to run the procedure described above. To run SWA in auto mode you just need to wrap your optimizer `base_opt` of choice (can be SGD, Adam, or any other `torch.optim.Optimizer`) with `SWA(base_opt, swa_start, swa_freq, swa_lr)`. After `swa_start` optimization steps the learning rate will be switched to a constant value `swa_lr`, and in the end of every `swa_freq` optimization steps a snapshot of the weights will be added to the SWA running average. Once you run `opt.swap_swa_sgd()`, the weights of your model are replaced with their SWA running averages.\n\n## Batch Normalization", "metadata": {"source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"}} -{"page_content": "One important detail to keep in mind is batch normalization. Batch normalization layers compute running statistics of activations during training. Note that the SWA averages of the weights are never used to make predictions during training, and so the batch normalization layers do not have the activation statistics computed after you reset the weights of your model with `opt.swap_swa_sgd()`. To compute the activation statistics you can just make a forward pass on your training data using the SWA model once the training is finished. In the `SWA` class we provide a helper function `opt.bn_update(train_loader, model)`. It updates the activation statistics for every batch normalization layer in the model by making a forward pass on the `train_loader` data loader. You only need to call this function once in the end of training.\n\n## Advanced Learning-Rate Schedules", "metadata": {"source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"}} -{"page_content": "SWA can be used with any learning rate schedule that encourages exploration of the flat region of solutions. For example, you can use cyclical learning rates in the last 25% of the training time instead of a constant value, and average the weights of the networks corresponding to the lowest values of the learning rate within each cycle (see Figure 3).\n\n
\n \n
\n\n**Figure 3.** Illustration of SWA with an alternative learning rate schedule. Cyclical learning rates are adopted in the last 25% of training, and models for averaging are collected in the end of each cycle.\n\nIn our implementation you can implement custom learning rate and weight averaging strategies by using `SWA` in the manual mode. The following code is equivalent to the auto mode code presented in the beginning of this blogpost.", "metadata": {"source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"}} +{"page_content": "## Batch Normalization\n\nOne important detail to keep in mind is batch normalization. Batch normalization layers compute running statistics of activations during training. Note that the SWA averages of the weights are never used to make predictions during training, and so the batch normalization layers do not have the activation statistics computed after you reset the weights of your model with `opt.swap_swa_sgd()`. To compute the activation statistics you can just make a forward pass on your training data using the SWA model once the training is finished. In the `SWA` class we provide a helper function `opt.bn_update(train_loader, model)`. It updates the activation statistics for every batch normalization layer in the model by making a forward pass on the `train_loader` data loader. You only need to call this function once in the end of training.\n\n## Advanced Learning-Rate Schedules", "metadata": {"source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"}} +{"page_content": "## Advanced Learning-Rate Schedules\n\nSWA can be used with any learning rate schedule that encourages exploration of the flat region of solutions. For example, you can use cyclical learning rates in the last 25% of the training time instead of a constant value, and average the weights of the networks corresponding to the lowest values of the learning rate within each cycle (see Figure 3).\n\n
\n \n
\n\n**Figure 3.** Illustration of SWA with an alternative learning rate schedule. Cyclical learning rates are adopted in the last 25% of training, and models for averaging are collected in the end of each cycle.\n\nIn our implementation you can implement custom learning rate and weight averaging strategies by using `SWA` in the manual mode. The following code is equivalent to the auto mode code presented in the beginning of this blogpost.", "metadata": {"source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"}} {"page_content": "```python\nopt = torchcontrib.optim.SWA(base_opt)\nfor i in range(100):\n opt.zero_grad()\n loss_fn(model(input), target).backward()\n opt.step()\n if i > 10 and i % 5 == 0:\n opt.update_swa()\nopt.swap_swa_sgd()\n```\n\nIn manual mode you don\u2019t specify `swa_start`, `swa_lr` and `swa_freq`, and just call `opt.update_swa()` whenever you want to update the SWA running averages (for example in the end of each learning rate cycle). In manual mode `SWA` doesn\u2019t change the learning rate, so you can use any schedule you want as you would normally do with any other `torch.optim.Optimizer`.\n\n## Why does it work?\n\nSGD converges to a solution within a wide flat region of loss. The weight space is extremely high-dimensional, and most of the volume of the flat region is concentrated near the boundary, so SGD solutions will always be found near the boundary of the flat region of the loss. SWA on the other hand averages multiple SGD solutions, which allows it to move towards the center of the flat region.", "metadata": {"source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"}} {"page_content": "We expect solutions that are centered in the flat region of the loss to generalize better than those near the boundary. Indeed, train and test error surfaces are not perfectly aligned in the weight space. Solutions that are centered in the flat region are not as susceptible to the shifts between train and test error surfaces as those near the boundary. In Figure 4 below we show the train loss and test error surfaces along the direction connecting the SWA and SGD solutions. As you can see, while SWA solution has a higher train loss compared to the SGD solution, it is centered in the region of low loss, and has a substantially better test error.\n\n
\n \n
", "metadata": {"source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"}} {"page_content": "**Figure 4.** Train loss and test error along the line connecting the SWA solution (circle) and SGD solution (square). SWA solution is centered in a wide region of low train loss while the SGD solution lies near the boundary. Because of the shift between train loss and test error surfaces, SWA solution leads to much better generalization.\n\n## Examples and Results\n\nWe released a GitHub repo [here](https://github.com/izmailovpavel/contrib_swa_examples) with examples of using the `torchcontrib` implementation of SWA for training DNNs. For example, these examples can be used to achieve the following results on CIFAR-100:", "metadata": {"source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"}} {"page_content": "| DNN (Budget) | SGD | SWA 1 Budget | SWA 1.25 Budgets | SWA 1.5 Budgets |\n| ------------------------- |:------------:|:------------:|:----------------:|:---------------:|\n| VGG16 (200) | 72.55 \u00b1 0.10 | 73.91 \u00b1 0.12 | 74.17 \u00b1 0.15 | 74.27 \u00b1 0.25 |\n| PreResNet110 (150) | 76.77 \u00b1 0.38 | 78.75 \u00b1 0.16 | 78.91 \u00b1 0.29 | 79.10 \u00b1 0.21 |\n| PreResNet164 (150) | 78.49 \u00b1 0.36 | 79.77 \u00b1 0.17 | 80.18 \u00b1 0.23 | 80.35 \u00b1 0.16 |\n| WideResNet28x10 (200) | 80.82 \u00b1 0.23 | 81.46 \u00b1 0.23 | 81.91 \u00b1 0.27 | 82.15 \u00b1 0.27 |\n\n## Semi-Supervised Learning", "metadata": {"source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"}} -{"page_content": "In a follow-up [paper](https://arxiv.org/abs/1806.05594) SWA was applied to semi-supervised learning, where it illustrated improvements beyond the best reported results in multiple settings. For example, with SWA you can get 95% accuracy on CIFAR-10 if you only have the training labels for 4k training data points (the previous best reported result on this problem was 93.7%). This paper also explores averaging multiple times within epochs, which can accelerate convergence and find still flatter solutions in a given time.\n
\n\n
\n\n**Figure 5.** Performance of fast-SWA on semi-supervised learning with CIFAR-10. fast-SWA achieves record results in every setting considered.", "metadata": {"source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"}} +{"page_content": "## Semi-Supervised Learning\n\nIn a follow-up [paper](https://arxiv.org/abs/1806.05594) SWA was applied to semi-supervised learning, where it illustrated improvements beyond the best reported results in multiple settings. For example, with SWA you can get 95% accuracy on CIFAR-10 if you only have the training labels for 4k training data points (the previous best reported result on this problem was 93.7%). This paper also explores averaging multiple times within epochs, which can accelerate convergence and find still flatter solutions in a given time.\n
\n\n
\n\n**Figure 5.** Performance of fast-SWA on semi-supervised learning with CIFAR-10. fast-SWA achieves record results in every setting considered.", "metadata": {"source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"}} {"page_content": "## Calibration and Uncertainty Estimates\n[SWA-Gaussian](https://arxiv.org/abs/1902.02476) (SWAG) is a simple, scalable and convenient approach to uncertainty estimation and calibration in Bayesian deep learning. Similarly to SWA, which maintains a running average of SGD iterates, SWAG estimates the first and second moments of the iterates to construct a Gaussian distribution over weights. SWAG distribution approximates the shape of the true posterior: Figure 6 below shows the SWAG distribution on top of the posterior log-density for PreResNet-164 on CIFAR-100.\n
\n\n
\n**Figure 6.** SWAG distribution on top of posterior log-density for PreResNet-164 on CIFAR-100. The shape of SWAG distribution is aligned with the posterior.", "metadata": {"source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"}} {"page_content": "Empirically, SWAG performs on par or better than popular alternatives including MC dropout, KFAC Laplace, and temperature scaling on uncertainty quantification, out-of-distribution detection, calibration and transfer learning in computer vision tasks. Code for SWAG is available [here](https://github.com/wjmaddox/swa_gaussian).\n\n## Reinforcement Learning\n\nIn another follow-up [paper](http://www.gatsby.ucl.ac.uk/~balaji/udl-camera-ready/UDL-24.pdf) SWA was shown to improve the performance of policy gradient methods A2C and DDPG on several Atari games and MuJoCo environments.\n\n| Environment | A2C | A2C + SWA |\n|---------------|:----------------:|:----------------:|\n| Breakout | 522 \u00b1 34 | 703 \u00b1 60 |\n| Qbert | 18777 \u00b1 778 | 21272 \u00b1 655 |\n| SpaceInvaders | 7727 \u00b1 1121 | 21676 \u00b1 8897 |\n| Seaquest | 1779 \u00b1 4 | 1795 \u00b1 4 |\n| CrazyClimber | 147030 \u00b1 10239 | 139752 \u00b1 11618 |\n| BeamRider | 9999 \u00b1 402 | 11321 \u00b1 1065 |", "metadata": {"source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"}} {"page_content": "## Low Precision Training\nWe can filter through quantization noise by combining weights that have been rounded down with weights that have been rounded up. Moreover, by averaging weights to find a flat region of the loss surface, large perturbations of the weights will not affect the quality of the solution (Figures 7 and 8). Recent work shows that by adapting SWA to the low precision setting, in a method called SWALP, one can *match the performance of full-precision SGD even with all training in 8 bits* [5]. This is quite a practically important result, given that (1) SGD training in 8 bits performs notably worse than full precision SGD, and (2) low precision training is significantly harder than predictions in low precision after training (the usual setting). For example, a ResNet-164 trained on CIFAR-100 with float (16-bit) SGD achieves 22.2% error, while 8-bit SGD achieves 24.0% error. By contrast, SWALP with 8 bit training achieves 21.8% error.\n
\n\n
", "metadata": {"source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"}} @@ -1309,14 +1313,14 @@ {"page_content": "We encourage you try out SWA! Using SWA is now as easy as using any other optimizer in PyTorch. And even if you have already trained your model with SGD (or any other optimizer), it\u2019s very easy to realize the benefits of SWA by running SWA for a small number of epochs starting with a pre-trained model.", "metadata": {"source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"}} {"page_content": "- [1] Averaging Weights Leads to Wider Optima and Better Generalization; Pavel Izmailov, Dmitry Podoprikhin, Timur Garipov, Dmitry Vetrov, Andrew Gordon Wilson; Uncertainty in Artificial Intelligence (UAI), 2018\n- [2] There Are Many Consistent Explanations of Unlabeled Data: Why You Should Average; Ben Athiwaratkun, Marc Finzi, Pavel Izmailov, Andrew Gordon Wilson; International Conference on Learning Representations (ICLR), 2019\n- [3] Improving Stability in Deep Reinforcement Learning with Weight Averaging; Evgenii Nikishin, Pavel Izmailov, Ben Athiwaratkun, Dmitrii Podoprikhin, Timur Garipov, Pavel Shvechikov, Dmitry Vetrov, Andrew Gordon Wilson, UAI 2018 Workshop: Uncertainty in Deep Learning, 2018\n- [4] A Simple Baseline for Bayesian Uncertainty in Deep Learning, Wesley Maddox, Timur Garipov, Pavel Izmailov, Andrew Gordon Wilson, arXiv pre-print, 2019: [https://arxiv.org/abs/1902.02476](https://arxiv.org/abs/1902.02476)\n- [5] SWALP : Stochastic Weight Averaging in Low Precision Training, Guandao Yang, Tianyi Zhang, Polina Kirichenko, Junwen Bai, Andrew Gordon Wilson, Christopher De Sa, To appear at the International Conference on Machine Learning (ICML), 2019.\n- [6] David Ruppert. Efficient estimations from a slowly convergent Robbins-Monro process. Technical report, Cornell University Operations Research and Industrial Engineering, 1988.\n- [7] Acceleration of stochastic approximation by averaging. Boris T Polyak and Anatoli B Juditsky. SIAM Journal on Control and Optimization, 30(4):838\u2013855, 1992.\n- [8] Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs, Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry Vetrov, Andrew Gordon Wilson. Neural Information Processing Systems (NeurIPS), 2018", "metadata": {"source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: \"Introducing the PlayTorch app: Rapidly Create Mobile AI Experiences\"\nauthor: PlayTorch Team\nfeatured-img: \"\"\n---\n\n

\n \n

\n\nIn December, we announced PyTorch Live, a toolkit for building AI-powered mobile prototypes in minutes. The initial release included a command-line interface to set up a development environment and an SDK for building AI-powered experiences in React Native. Today, we're excited to share that PyTorch Live will now be known as PlayTorch. This new release provides an improved and simplified developer experience. PlayTorch development is independent from the PyTorch project and the PlayTorch code repository is moving into the Meta Research GitHub organization.\n\n## A New Workflow: The PlayTorch App", "metadata": {"source": "https://pytorch.org/blog/introducing-the-playtorch-app/", "category": "pytorch blogs"}} -{"page_content": "The PlayTorch team is excited to announce that we have partnered with [Expo](https://expo.dev) to change the way AI powered mobile experiences are built. Our new release simplifies the process of building mobile AI experiences by eliminating the need for a complicated development environment. You will now be able to build cross platform AI powered prototypes from the very browser you are using to read this blog.\n\nIn order to make this happen, we are releasing the [PlayTorch app](https://playtorch.dev/) which is able to run AI-powered experiences built in the [Expo Snack](https://snack.expo.dev/@playtorch/playtorch-starter?supportedPlatforms=my-device) web based code editor.\n\n

\n \n

", "metadata": {"source": "https://pytorch.org/blog/introducing-the-playtorch-app/", "category": "pytorch blogs"}} +{"page_content": "## A New Workflow: The PlayTorch App\n\nThe PlayTorch team is excited to announce that we have partnered with [Expo](https://expo.dev) to change the way AI powered mobile experiences are built. Our new release simplifies the process of building mobile AI experiences by eliminating the need for a complicated development environment. You will now be able to build cross platform AI powered prototypes from the very browser you are using to read this blog.\n\nIn order to make this happen, we are releasing the [PlayTorch app](https://playtorch.dev/) which is able to run AI-powered experiences built in the [Expo Snack](https://snack.expo.dev/@playtorch/playtorch-starter?supportedPlatforms=my-device) web based code editor.\n\n

\n \n

", "metadata": {"source": "https://pytorch.org/blog/introducing-the-playtorch-app/", "category": "pytorch blogs"}} {"page_content": "The PlayTorch app can be downloaded from the Apple App Store and Google Play Store. With the app installed, you can head over to [playtorch.dev/snack](https://playtorch.dev/snack) and write the code for your AI-powered PlayTorch Snack. When you want to try what you\u2019ve built, you can use the PlayTorch app\u2019s QR code scanner to scan the QR code on the Snack page and load the code to your device.\n\nNOTE: PlayTorch Snacks will not work in the Expo Go app.\n\n## More to Explore in the PlayTorch App\n\n### AI Demos\n\nThe PlayTorch app comes with several examples of how you can build AI powered experiences with a variety of different machine learning models from object detection to natural language processing. See what can be built with the PlayTorch SDK and be inspired to make something of your own as you play with the examples.\n\n

\n \n

\n\n### Sharing Your Creations", "metadata": {"source": "https://pytorch.org/blog/introducing-the-playtorch-app/", "category": "pytorch blogs"}} -{"page_content": "Any PlayTorch Snack that you run in the PlayTorch app can be shared with others in an instant. When they open the link on their device, the PlayTorch app will instantly load what you\u2019ve built from the cloud so they can experience it first hand.\n\n

\n \n

\n\nWhen you have something you want to share, let us know on [Discord](https://discord.gg/sQkXTqEt33) or [Twitter](https://twitter.com/PlayTorch) or embed the PlayTorch Snack on your own webpage.\n\n## SDK Overhaul\n\nWe learned a lot from the community after our initial launch in December and have been hard at work over the past several months to make the PlayTorch SDK (formerly known as PyTorch Live) simple, performant, and robust. In our initial version, the SDK relied on config files to define how a model ingested and output data.", "metadata": {"source": "https://pytorch.org/blog/introducing-the-playtorch-app/", "category": "pytorch blogs"}} +{"page_content": "### Sharing Your Creations\n\nAny PlayTorch Snack that you run in the PlayTorch app can be shared with others in an instant. When they open the link on their device, the PlayTorch app will instantly load what you\u2019ve built from the cloud so they can experience it first hand.\n\n

\n \n

\n\nWhen you have something you want to share, let us know on [Discord](https://discord.gg/sQkXTqEt33) or [Twitter](https://twitter.com/PlayTorch) or embed the PlayTorch Snack on your own webpage.\n\n## SDK Overhaul\n\nWe learned a lot from the community after our initial launch in December and have been hard at work over the past several months to make the PlayTorch SDK (formerly known as PyTorch Live) simple, performant, and robust. In our initial version, the SDK relied on config files to define how a model ingested and output data.", "metadata": {"source": "https://pytorch.org/blog/introducing-the-playtorch-app/", "category": "pytorch blogs"}} {"page_content": "Today, we are happy to announce the next version of our SDK can handle data processing in JavaScript for your prototypes with the new PlayTorch API that leverages the JavaScript Interface (JSI) to directly call C++ code. Not only have we completely redone the way you can interact with models, but we have also greatly expanded the variety of supported model architectures.\n\n## A New Data Processing API for Prototyping\n\nWith this JSI API, we now allow users direct access to tensors (data format for machine learning). Instead of only having access to predefined transformations, you can now manipulate tensors however you would like for your prototypes.\n\n

\n \n

\n\nNo more switching back and forth between code and config. You will now be able to write everything in JavaScript and have access to all of the type annotations and autocomplete features available to you in those languages.", "metadata": {"source": "https://pytorch.org/blog/introducing-the-playtorch-app/", "category": "pytorch blogs"}} {"page_content": "Check out our [tutorials](https://playtorch.dev/tutorials) to see the new Data Processing API in action, take a deeper dive in the [API docs](https://playtorch.dev/docs/api/core/), or inspect the code yourself on [GitHub](https://github.com/facebookresearch/playtorch).\n\n### Expanded Use Cases\n\nWith the new version of the SDK, we have added support for several cutting edge models.\n\n

\n \n

\n\nImage-to-image transformations are now supported thanks to our robust JSI API, so you can see what your world would look like if it were an anime.\n\n

\n \n

\n\nTranslate French to English with an AI powered translator using the Seq2Seq model.\n\n

\n \n

\n\nUse DeepLab V3 to segment images!\n\n## Start Playing", "metadata": {"source": "https://pytorch.org/blog/introducing-the-playtorch-app/", "category": "pytorch blogs"}} {"page_content": "## Start Playing\n\nIf you want to start creating AI experiences yourself, head over to [playtorch.dev](https://playtorch.dev) and try out our [tutorials](https://playtorch.dev/tutorials/). Each tutorial will guide you through building a simple AI powered experience that you can instantly run on your phone and share with others.\n\n## How to Get Support\n\nJoin us on [Discord](https://discord.gg/sQkXTqEt33), collaborate with us on [GitHub](https://github.com/facebookresearch/playtorch), or follow us on [Twitter](https://twitter.com/playtorch). Got questions or feedback? We\u2019d love to hear from you!", "metadata": {"source": "https://pytorch.org/blog/introducing-the-playtorch-app/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'Overview of PyTorch Autograd Engine'\nauthor: Preferred Networks, Inc.\n---\n\nThis blog post is based on PyTorch version 1.8, although it should apply for older versions too, since most of the mechanics have remained constant.\n\nTo help understand the concepts explained here, it is recommended that you read the awesome blog post by [@ezyang](https://twitter.com/ezyang): [PyTorch internals](http://blog.ezyang.com/2019/05/pytorch-internals/) if you are not familiar with PyTorch architecture components such as ATen or c10d.\n\n### What is autograd?\n\n**Background**", "metadata": {"source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"}} -{"page_content": "**Background**\n\nPyTorch computes the gradient of a function with respect to the inputs by using automatic differentiation. Automatic differentiation is a technique that, given a computational graph, calculates the gradients of the inputs. Automatic differentiation can be performed in two different ways; forward and reverse mode. Forward mode means that we calculate the gradients along with the result of the function, while reverse mode requires us to evaluate the function first, and then we calculate the gradients starting from the output. While both modes have their pros and cons, the reverse mode is the de-facto choice since the number of outputs is smaller than the number of inputs, which allows a much more efficient computation. Check [3] to learn more about this.\n\nAutomatic differentiation relies on a classic calculus formula known as the chain-rule. The chain rule allows us to calculate very complex derivatives by splitting them and recombining them later.", "metadata": {"source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"}} +{"page_content": "### What is autograd?\n\n**Background**\n\nPyTorch computes the gradient of a function with respect to the inputs by using automatic differentiation. Automatic differentiation is a technique that, given a computational graph, calculates the gradients of the inputs. Automatic differentiation can be performed in two different ways; forward and reverse mode. Forward mode means that we calculate the gradients along with the result of the function, while reverse mode requires us to evaluate the function first, and then we calculate the gradients starting from the output. 
While both modes have their pros and cons, the reverse mode is the de-facto choice since the number of outputs is smaller than the number of inputs, which allows a much more efficient computation. Check [3] to learn more about this.\n\nAutomatic differentiation relies on a classic calculus formula known as the chain-rule. The chain rule allows us to calculate very complex derivatives by splitting them and recombining them later.", "metadata": {"source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"}} {"page_content": "Formally speaking, given a composite function , we can calculate its derivative as . This result is what makes automatic differentiation work.\nBy combining the derivatives of the simpler functions that compose a larger one, such as a neural network, it is possible to compute the exact value of the gradient at a given point rather than relying on the numerical approximation, which would require multiple perturbations in the input to obtain a value.", "metadata": {"source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"}} {"page_content": "To get the intuition of how the reverse mode works, let\u2019s look at a simple function . Figure 1 shows its computational graph where the inputs x, y in the left, flow through a series of operations to generate the output z.\n\n
\n \n

Figure 1: Computational graph of f(x, y) = log(x*y)

\n
\n\nThe automatic differentiation engine will normally execute this graph. It will also extend it to calculate the derivatives of w with respect to the inputs x, y, and the intermediate result v.", "metadata": {"source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"}} {"page_content": "The example function can be decomposed in f and g, where and . Every time the engine executes an operation in the graph, the derivative of that operation is added to the graph to be executed later in the backward pass. Note, that the engine knows the derivatives of the basic functions.", "metadata": {"source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"}} @@ -1335,7 +1339,7 @@ {"page_content": "If we now calculate the transpose-Jacobian vector product obeying the chain rule, we obtain the following expression:\n
\n \n
", "metadata": {"source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"}} {"page_content": "Evaluating the Jvp for yields the result:\n\nWe can execute the same expression in PyTorch and calculate the gradient of the input:\n
</div>\n\n```python\n>>> import torch\n>>> x = torch.tensor([0.5, 0.75], requires_grad=True)\n>>> y = torch.log(x[0] * x[1]) * torch.sin(x[1])\n>>> y.backward(1.0)\n>>> x.grad\ntensor([1.3633, 0.1912])\n```
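\n\nAs a quick sanity check (an illustrative addition, not from the original post), the two partial derivatives can also be written out by hand with the product and chain rules and evaluated at the same point:\n\n```python\nimport math\n\nx0, x1 = 0.5, 0.75\n# y = log(x0 * x1) * sin(x1)\ndy_dx0 = math.sin(x1) / x0                                     # chain rule through log(x0 * x1)\ndy_dx1 = math.sin(x1) / x1 + math.log(x0 * x1) * math.cos(x1)  # product rule plus chain rule\nprint(round(dy_dx0, 4), round(dy_dx1, 4))  # 1.3633 0.1912, matching x.grad above\n```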
", "metadata": {"source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"}} {"page_content": "The result is the same as our hand-calculated Jacobian-vector product!\nHowever, PyTorch never constructed the matrix as it could grow prohibitively large but instead, created a graph of operations that traversed backward while applying the Jacobian-vector products defined in [tools/autograd/derivatives.yaml](https://github.com/pytorch/pytorch/blob/master/tools/autograd/derivatives.yaml).\n\n**Going through the graph**", "metadata": {"source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"}} -{"page_content": "Every time PyTorch executes an operation, the autograd engine constructs the graph to be traversed backward.\nThe reverse mode auto differentiation starts by adding a scalar variable at the end so that as we saw in the introduction. This is the initial gradient value that is supplied to the Jvp engine calculation as we saw in the section above.\n\nIn PyTorch, the initial gradient is explicitly set by the user when he calls the backward method.", "metadata": {"source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"}} +{"page_content": "**Going through the graph**\n\nEvery time PyTorch executes an operation, the autograd engine constructs the graph to be traversed backward.\nThe reverse mode auto differentiation starts by adding a scalar variable at the end so that as we saw in the introduction. This is the initial gradient value that is supplied to the Jvp engine calculation as we saw in the section above.\n\nIn PyTorch, the initial gradient is explicitly set by the user when he calls the backward method.", "metadata": {"source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"}} {"page_content": "Then, the Jvp calculation starts but it never constructs the matrix. Instead, when PyTorch records the computational graph, the derivatives of the executed forward operations are added (Backward Nodes). Figure 5 shows a backward graph generated by the execution of the functions and seen before.\n\n
\n \n

Figure 5: Computational Graph extended with the backward pass

\n
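\n\nAs a small illustrative aside (not part of the original post), these backward nodes can be inspected from Python through the `grad_fn` attribute of the output tensor:\n\n```python\nimport torch\n\nx = torch.tensor([0.5, 0.75], requires_grad=True)\ny = torch.log(x[0] * x[1]) * torch.sin(x[1])\n\nprint(y.grad_fn)                 # node for the outermost multiplication, e.g. MulBackward0\nprint(y.grad_fn.next_functions)  # the next backward nodes (for the log and sin inputs)\n```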
", "metadata": {"source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"}} {"page_content": "Once the forward pass is done, the results are used in the backward pass where the derivatives in the computational graph are executed. The basic derivatives are stored in the [tools/autograd/derivatives.yaml](https://github.com/pytorch/pytorch/blob/master/tools/autograd/derivatives.yaml) file and they are not regular derivatives but the Jvp versions of them [3]. They take their primitive function inputs and outputs as parameters along with the gradient of the function outputs with respect to the final outputs. By repeatedly multiplying the resulting gradients by the next Jvp derivatives in the graph, the gradients up to the inputs will be generated following the chain rule.\n\n
\n \n

Figure 6: How the chain rule is applied in backward differentiation

\n
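\n\nOne more illustrative sketch (not from the original post): the partially accumulated gradient at an intermediate node can be observed by asking PyTorch to retain it during the backward pass:\n\n```python\nimport torch\n\nx = torch.tensor([0.5, 0.75], requires_grad=True)\nv = x[0] * x[1]   # intermediate result v = x0 * x1\nv.retain_grad()   # keep the gradient of this non-leaf tensor for inspection\ny = torch.log(v) * torch.sin(x[1])\ny.backward()\n\nprint(v.grad)  # dy/dv = sin(x1) / v, the partial product of the chain rule so far\nprint(x.grad)  # gradients propagated all the way back to the inputs\n```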
", "metadata": {"source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"}} {"page_content": "Figure 6 represents the process by showing the chain rule. We started with a value of 1.0 as detailed before which is the already calculated gradient highlighted in green. And we move to the next node in the graph. The *backward* function registered in [derivatives.yaml](https://github.com/pytorch/pytorch/blob/a0a7a2d648f05b0192e6943c9684406cdf404fbf/tools/autograd/derivatives.yaml#L635-L636) will calculate the associated\n value highlighted in red and multiply it by . By the chain rule this results in which will be the already calculated gradient (green) when we process the next backward node in the graph.", "metadata": {"source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"}} @@ -1347,7 +1351,7 @@ {"page_content": "---\nlayout: blog_detail\ntitle: \"Case Study: Amazon Ads Uses PyTorch and AWS Inferentia to Scale Models for Ads Processing\"\nauthor: Yashal Kanungo \u2013 Applied Scientist, Kamran Khan - Sr. Technical Product Manager, Shubha Kumbadakone \u2013 Sr. Specialist, ML Frameworks\nfeatured-img: \"\"\n---\n\nAmazon Ads uses PyTorch, TorchServe, and AWS Inferentia to reduce inference costs by 71% and drive scale out.\n\nAmazon Ads helps companies build their brand and connect with shoppers through ads shown both within and beyond Amazon\u2019s store, including websites, apps, and streaming TV content in more than 15 countries. Businesses and brands of all sizes, including registered sellers, vendors, book vendors, Kindle Direct Publishing (KDP) authors, app developers, and agencies can upload their own ad creatives, which can include images, video, audio, and, of course, products sold on Amazon.\n\n

\n \n

", "metadata": {"source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"}} {"page_content": "To promote an accurate, safe, and pleasant shopping experience, these ads must comply with content guidelines. For example, ads cannot flash on and off, products must be featured in an appropriate context, and images and text should be appropriate for a general audience. To help ensure that ads meet the required policies and standards, we needed to develop scalable mechanisms and tools.", "metadata": {"source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"}} {"page_content": "As a solution, we used machine learning (ML) models to surface ads that might need revision. As deep neural networks flourished over the past decade, our data science team began exploring more versatile deep learning (DL) methods capable of processing text, images, audio, or video with minimal human intervention. To that end, we\u2019ve used PyTorch to build computer vision (CV) and natural language processing (NLP) models that automatically flag potentially non-compliant ads. PyTorch is intuitive, flexible, and user-friendly, and has made our transition to using DL models seamless. Deploying these new models on [AWS Inferentia-based Amazon EC2 Inf1 instances](https://aws.amazon.com/ec2/instance-types/inf1/), rather than on GPU-based instances, reduced our inference latency by 30 percent and our inference costs by 71 percent for the same workloads.\n\n## Transition to deep learning", "metadata": {"source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"}} -{"page_content": "Our ML systems paired classical models with word embeddings to evaluate ad text. But our requirements evolved, and as the volume of submissions continued to expand, we needed a method nimble enough to scale along with our business. In addition, our models must be fast and serve ads within milliseconds to provide an optimal customer experience.\n\nOver the last decade, DL has become very popular in numerous domains, including natural language, vision, and audio. Because deep neural networks channel data sets through many layers \u2014 extracting progressively higher-level features \u2014 they can make more nuanced inferences than classical ML models. Rather than simply detecting prohibited language, for example, a DL model can reject an ad for making false claims.", "metadata": {"source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"}} +{"page_content": "## Transition to deep learning\n\nOur ML systems paired classical models with word embeddings to evaluate ad text. But our requirements evolved, and as the volume of submissions continued to expand, we needed a method nimble enough to scale along with our business. In addition, our models must be fast and serve ads within milliseconds to provide an optimal customer experience.\n\nOver the last decade, DL has become very popular in numerous domains, including natural language, vision, and audio. Because deep neural networks channel data sets through many layers \u2014 extracting progressively higher-level features \u2014 they can make more nuanced inferences than classical ML models. 
Rather than simply detecting prohibited language, for example, a DL model can reject an ad for making false claims.", "metadata": {"source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"}} {"page_content": "In addition, DL techniques are transferable\u2013 a model trained for one task can be adapted to carry out a related task. For instance, a pre-trained neural network can be optimized to detect objects in images and then fine-tuned to identify specific objects that are not allowed to be displayed in an ad.\n\nDeep neural networks can automate two of classical ML\u2019s most time-consuming steps: feature engineering and data labeling. Unlike traditional supervised learning approaches, which require exploratory data analysis and hand-engineered features, deep neural networks learn the relevant features directly from the data. DL models can also analyze unstructured data, like text and images, without the preprocessing necessary in ML. Deep neural networks scale effectively with more data and perform especially well in applications involving large data sets.", "metadata": {"source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"}} {"page_content": "We chose PyTorch to develop our models because it helped us maximize the performance of our systems. With PyTorch, we can serve our customers better while taking advantage of Python\u2019s most intuitive concepts. The programming in PyTorch is object-oriented: it groups processing functions with the data they modify. As a result, our codebase is modular, and we can reuse pieces of code in different applications. In addition, PyTorch\u2019s eager mode allows loops and control structures and, therefore, more complex operations in the model. Eager mode makes it easy to prototype and iterate upon our models, and we can work with various data structures. This flexibility helps us update our models quickly to meet changing business requirements.", "metadata": {"source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"}} {"page_content": "\u201cBefore this, we experimented with other frameworks that were \u201cPythonic,\u201d but PyTorch was the clear winner for us here.\u201d said Yashal Kanungo, Applied Scientist. \u201cUsing PyTorch was easy because the structure felt native to Python programming, which the data scientists were very familiar with\u201d.\n\n### Training pipeline\n\nToday, we build our text models entirely in PyTorch. To save time and money, we often skip the early stages of training by fine-tuning a pre-trained NLP model for language analysis. If we need a new model to evaluate images or video, we start by browsing PyTorch\u2019s [torchvision](https://pytorch.org/vision/stable/index.html) library, which offers pretrained options for image and video classification, object detection, instance segmentation, and pose estimation. For specialized tasks, we build a custom model from the ground up. PyTorch is perfect for this, because eager mode and the user-friendly front end make it easy to experiment with different architectures.", "metadata": {"source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"}} @@ -1356,9 +1360,9 @@ {"page_content": "We prototype and iterate upon our models using SageMaker Notebooks. 
Eager mode lets us prototype models quickly by building a new computational graph for each training batch; the sequence of operations can change from iteration to iteration to accommodate different data structures or to jibe with intermediate results. That frees us to adjust the network during training without starting over from scratch. These dynamic graphs are particularly valuable for recursive computations based on variable sequence lengths, such as the words, sentences, and paragraphs in an ad that are analyzed with NLP.", "metadata": {"source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"}} {"page_content": "When we\u2019ve finalized the model architecture, we deploy training jobs on [SageMaker](https://aws.amazon.com/sagemaker/). PyTorch helps us develop large models faster by running numerous training jobs at the same time. PyTorch\u2019s [Distributed Data Parallel](https://sagemaker.readthedocs.io/en/stable/api/training/sdp_versions/v1.0.0/smd_data_parallel_pytorch.html) (DDP) module replicates a single model across multiple interconnected machines within SageMaker, and all the processes run forward passes simultaneously on their own unique portion of the data set. During the backward pass, the module averages the gradients of all the processes, so each local model is updated with the same parameter values.\n\n### Model deployment pipeline\n\nWhen we deploy the model in production, we want to ensure lower inference costs without impacting prediction accuracy. Several PyTorch features and AWS services have helped us address the challenge.", "metadata": {"source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"}} {"page_content": "The flexibility of a dynamic graph enriches training, but in deployment we want to maximize performance and portability. An advantage of developing NLP models in PyTorch is that out of the box, they can be traced into a static sequence of operations by [TorchScript](https://pytorch.org/docs/stable/jit.html), a subset of Python specialized for ML applications. Torchscript converts PyTorch models to a more efficient, production-friendly intermediate representation (IR) graph that is easily compiled. We run a sample input through the model, and TorchScript records the operations executed during the forward pass. The resulting IR graph can run in high-performance environments, including C++ and other multithreaded Python-free contexts, and optimizations such as operator fusion can speed up the runtime.\n\n### Neuron SDK and AWS Inferentia powered compute", "metadata": {"source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"}} -{"page_content": "We deploy our models on [Amazon EC2 Inf1 instances](https://aws.amazon.com/ec2/instance-types/inf1/) powered by AWS Inferentia, Amazon's first ML silicon designed to accelerate deep learning inference workloads. Inferentia has shown to reduce inference costs by up to 70% compared to Amazon EC2 GPU-based instances.\nWe used the [AWS Neuron](https://aws.amazon.com/machine-learning/neuron/) SDK \u2014 a set of software tools used with Inferentia \u2014 to compile and optimize our models for deployment on EC2 Inf1 instances.\n\nThe code snippet below shows how to compile a Hugging Face BERT model with Neuron. 
Like torch.jit.trace(), neuron.trace() records the model\u2019s operations on an example input during the forward pass to build a static IR graph.", "metadata": {"source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"}} +{"page_content": "### Neuron SDK and AWS Inferentia powered compute\n\nWe deploy our models on [Amazon EC2 Inf1 instances](https://aws.amazon.com/ec2/instance-types/inf1/) powered by AWS Inferentia, Amazon's first ML silicon designed to accelerate deep learning inference workloads. Inferentia has shown to reduce inference costs by up to 70% compared to Amazon EC2 GPU-based instances.\nWe used the [AWS Neuron](https://aws.amazon.com/machine-learning/neuron/) SDK \u2014 a set of software tools used with Inferentia \u2014 to compile and optimize our models for deployment on EC2 Inf1 instances.\n\nThe code snippet below shows how to compile a Hugging Face BERT model with Neuron. Like torch.jit.trace(), neuron.trace() records the model\u2019s operations on an example input during the forward pass to build a static IR graph.", "metadata": {"source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"}} {"page_content": "```python\nimport torch\nfrom transformers import BertModel, BertTokenizer\nimport torch.neuron\ntokenizer = BertTokenizer.from_pretrained(\"path to saved vocab\")\nmodel = BertModel.from_pretrained(\"path to the saved model\", returned_dict=False)\ninputs = tokenizer (\"sample input\", return_tensor=\"pt\")\nneuron_model = torch.neuron.trace(model,\n example_inputs = (inputs['input_ids'], inputs['attention_mask']),\n verbose = 1)\noutput = neuron_model(*(inputs['input_ids'], inputs['attention_mask']))\n```\n\n### Autocasting and recalibration", "metadata": {"source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"}} -{"page_content": "Under the hood, Neuron optimizes our models for performance by autocasting them to a smaller data type. As a default, most applications represent neural network values in the 32-bit single-precision floating point (FP32) number format. Autocasting the model to a 16-bit format \u2014 half-precision floating point (FP16) or Brain Floating Point (BF16) \u2014 reduces a model\u2019s memory footprint and execution time. In our case, we decided to use FP16 to optimize for performance while maintaining high accuracy.", "metadata": {"source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"}} +{"page_content": "### Autocasting and recalibration\n\nUnder the hood, Neuron optimizes our models for performance by autocasting them to a smaller data type. As a default, most applications represent neural network values in the 32-bit single-precision floating point (FP32) number format. Autocasting the model to a 16-bit format \u2014 half-precision floating point (FP16) or Brain Floating Point (BF16) \u2014 reduces a model\u2019s memory footprint and execution time. In our case, we decided to use FP16 to optimize for performance while maintaining high accuracy.", "metadata": {"source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"}} {"page_content": "Autocasting to a smaller data type can, in some cases, trigger slight differences in the model\u2019s predictions. To ensure that the model\u2019s accuracy is not affected, Neuron compares the performance metrics and predictions of the FP16 and FP32 models. 
When autocasting diminishes the model\u2019s accuracy, we can tell the Neuron compiler to convert only the weights and certain data inputs to FP16, keeping the rest of the intermediate results in FP32. In addition, we often run a few iterations with the training data to recalibrate our autocasted models. This process is much less intensive than the original training.\n\n### Deployment", "metadata": {"source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"}} {"page_content": "### Deployment\n\nTo analyze multimedia ads, we run an ensemble of DL models. All ads uploaded to Amazon are run through specialized models that assess every type of content they include: images, video and audio, headlines, texts, backgrounds, and even syntax, grammar, and potentially inappropriate language. The signals we receive from these models indicate whether or not an advertisement complies with our criteria.", "metadata": {"source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"}} {"page_content": "Deploying and monitoring multiple models is significantly complex, so we depend on [TorchServe](https://github.com/pytorch/serve), SageMaker\u2019s default PyTorch model serving library. Jointly developed by Facebook\u2019s PyTorch team and AWS to streamline the transition from prototyping to production, TorchServe helps us deploy trained PyTorch models at scale without having to write custom code. It provides a secure set of REST APIs for inference, management, metrics, and explanations. With features such as multi-model serving, model versioning, ensemble support, and automatic batching, TorchServe is ideal for supporting our immense workload. You can read more about deploying your Pytorch models on SageMaker with native TorchServe integration in this [blog post](https://aws.amazon.com/blogs/machine-learning/serving-pytorch-models-in-production-with-the-amazon-sagemaker-native-torchserve-integration/).", "metadata": {"source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"}} @@ -1368,7 +1372,7 @@ {"page_content": "### Batching\n\nHardware accelerators are optimized for parallelism, and batching \u2014 feeding a model multiple inputs in a single step \u2014 helps saturate all available capacity, typically resulting in higher throughputs. Excessively high batch sizes, however, can increase latency with minimal improvement in throughputs. Experimenting with different batch sizes helps us identify the sweet spot for our models and hardware accelerator. We run experiments to determine the best batch size for our model size, payload size, and request traffic patterns.\n\nThe Neuron compiler now supports variable batch sizes. Previously, tracing a model hardcoded the predefined batch size, so we had to pad our data, which can waste compute, slow throughputs, and exacerbate latency. Inferentia is optimized to maximize throughput for small batches, reducing latency by easing the load on the system.\n\n### Parallelism", "metadata": {"source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"}} {"page_content": "### Parallelism\n\nModel parallelism on multi-cores also improves throughput and latency, which is crucial for our heavy workloads. Each Inferentia chip contains four NeuronCores that can either run separate models simultaneously or form a pipeline to stream a single model. 
In our use case, the data parallel configuration offers the highest throughput at the lowest cost, because it scales out concurrent processing requests.\n\nData Parallel:\n\n

*(figure: data parallel configuration)*

\n\nModel Parallel:\n\n

*(figure: model parallel configuration)*

\n\n### Monitoring\n\nIt is critical that we monitor the accuracy of our inferences in production. Models that initially make good predictions can eventually degrade in deployment as they are exposed to a wider variety of data. This phenomenon, called model drift, usually occurs when the input data distributions or the prediction targets change.", "metadata": {"source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"}} {"page_content": "We use [SageMaker Model Monitor](https://aws.amazon.com/sagemaker/model-monitor/) to track parity between the training and production data. Model Monitor notifies us when predictions in production begin to deviate from the training and validation results. Thanks to this early warning, we can restore accuracy \u2014 by retraining the model if necessary \u2014 before our advertisers are affected. To track performance in real time, Model Monitor also sends us metrics about the quality of predictions, such as accuracy, F-scores, and the distribution of the predicted classes.\n\nTo determine if our application needs to scale, TorchServe logs resource utilization metrics for the CPU, Memory, and Disk at regular intervals; it also records the number of requests received versus the number served. For custom metrics, TorchServe offers a [Metrics API](https://github.com/pytorch/serve/blob/master/docs/metrics_api.md).\n\n### A rewarding result", "metadata": {"source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"}} -{"page_content": "Our DL models, developed in PyTorch and deployed on Inferentia, sped up our ads analysis while cutting costs. Starting with our first explorations in DL, programming in PyTorch felt natural. Its user-friendly features helped smooth the course from our early experiments to the deployment of our multimodal ensembles. PyTorch lets us prototype and build models quickly, which is vital as our advertising service evolves and expands. For an added benefit, PyTorch works seamlessly with Inferentia and our AWS ML stack. We look forward to building more use cases with PyTorch, so we can continue to serve our clients accurate, real-time results.", "metadata": {"source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"}} +{"page_content": "### A rewarding result\n\nOur DL models, developed in PyTorch and deployed on Inferentia, sped up our ads analysis while cutting costs. Starting with our first explorations in DL, programming in PyTorch felt natural. Its user-friendly features helped smooth the course from our early experiments to the deployment of our multimodal ensembles. PyTorch lets us prototype and build models quickly, which is vital as our advertising service evolves and expands. For an added benefit, PyTorch works seamlessly with Inferentia and our AWS ML stack. We look forward to building more use cases with PyTorch, so we can continue to serve our clients accurate, real-time results.", "metadata": {"source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch feature classification changes'\nauthor: Team PyTorch\n---\n\nTraditionally features in PyTorch were classified as either stable or experimental with an implicit third option of testing bleeding edge features by building master or through installing nightly builds (available via prebuilt whls). 
This has, in a few cases, caused some confusion around the level of readiness, commitment to the feature and backward compatibility that can be expected from a user perspective. Moving forward, we\u2019d like to better classify the 3 types of features as well as define explicitly here what each mean from a user perspective.\n\n# New Feature Designations\n\nWe will continue to have three designations for features but, as mentioned, with a few changes: Stable, Beta (previously Experimental) and Prototype (previously Nightlies). Below is a brief description of each and a comment on the backward compatibility expected:", "metadata": {"source": "https://pytorch.org/blog/pytorch-feature-classification-changes/", "category": "pytorch blogs"}} {"page_content": "## Stable\nNothing changes here. A stable feature means that the user value-add is or has been proven, the API isn\u2019t expected to change, the feature is performant and all documentation exists to support end user adoption.\n\n*Level of commitment*: We expect to maintain these features long term and generally there should be no major performance limitations, gaps in documentation and we also expect to maintain backwards compatibility (although breaking changes can happen and notice will be given one release ahead of time).", "metadata": {"source": "https://pytorch.org/blog/pytorch-feature-classification-changes/", "category": "pytorch blogs"}} {"page_content": "## Beta\nWe previously called these features \u2018Experimental\u2019 and we found that this created confusion amongst some of the users. In the case of a Beta level features, the value add, similar to a Stable feature, has been proven (e.g. pruning is a commonly used technique for reducing the number of parameters in NN models, independent of the implementation details of our particular choices) and the feature generally works and is documented. This feature is tagged as Beta because the API may change based on user feedback, because the performance needs to improve or because coverage across operators is not yet complete.\n\n*Level of commitment*: We are committing to seeing the feature through to the Stable classification. We are however not committing to Backwards Compatibility. Users can depend on us providing a solution for problems in this area going forward, but the APIs and performance characteristics of this feature may change.", "metadata": {"source": "https://pytorch.org/blog/pytorch-feature-classification-changes/", "category": "pytorch blogs"}} @@ -1392,16 +1396,16 @@ {"page_content": "A comparison of them with the vanilla data-parallel examples of [MNIST](https://github.com/pytorch/xla/blob/master/test/test_train_mp_mnist.py) and [ImageNet](https://github.com/pytorch/xla/blob/master/test/test_train_mp_imagenet.py) illustrates how to adapt a training script to use FSDP. A major distinction to keep in mind is that when stepping the optimizer on an FSDP-wrapped model, one should directly call `optimizer.step()` instead of `xm.optimizer_step(optimizer)`. The latter reduces the gradients across ranks, which is not what we need in FSDP, where the gradients are already reduced and sharded (from a reduce-scatter op in its backward pass).\n\n#### Installation\n\nFSDP is available from the PyTorch/XLA 1.12 and newer nightly releases. Please refer to [https://github.com/pytorch/xla#-available-images-and-wheels](https://github.com/pytorch/xla#-available-images-and-wheels) for a guide on installation as well as Cloud TPU allocation. 
Then clone PyTorch/XLA repo on a TPU VM as follows", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/", "category": "pytorch blogs"}} {"page_content": "```python\nmkdir -p ~/pytorch && cd ~/pytorch\ngit clone --recursive https://github.com/pytorch/xla.git\ncd ~/\n```\n\n#### Train MNIST on v3-8 TPU\n\nIt gets around 98.9 accuracy for 2 epochs:\n\n```python\npython3 ~/pytorch/xla/test/test_train_mp_mnist_fsdp_with_ckpt.py \\\n --batch_size 16 --drop_last --num_epochs 2 \\\n --use_nested_fsdp\n```\n\nThe script above automatically tests consolidation of the sharded model checkpoints at the end. You can also manually consolidate the sharded checkpoint files via\n\n```python\npython3 -m torch_xla.distributed.fsdp.consolidate_sharded_ckpts \\\n --ckpt_prefix /tmp/mnist-fsdp/final_ckpt \\\n --ckpt_suffix \"_rank-*-of-*.pth\"\n```\n\n#### Train ImageNet with ResNet-50 on v3-8 TPU\n\nIt gets around 75.9 accuracy for 100 epochs, same as what one would get without using FSDP; download and preprocess the [ImageNet-1k](https://github.com/pytorch/examples/tree/master/imagenet#requirements) dataset to `/datasets/imagenet-1k`:", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/", "category": "pytorch blogs"}} {"page_content": "```python\npython3 ~/pytorch/xla/test/test_train_mp_imagenet_fsdp.py \\\n --datadir /datasets/imagenet-1k --drop_last \\\n --model resnet50 --test_set_batch_size 64 --eval_interval 10 \\\n --lr 0.4 --batch_size 128 --num_warmup_epochs 5 \\\n --lr_scheduler_divide_every_n_epochs 30 --lr_scheduler_divisor 10 \\\n --num_epochs 100 \\\n --use_nested_fsdp\n```\n\nYou can also explore other options in these two examples, such as `--use_gradient_checkpointing` to apply gradient checkpointing (i.e. activation checkpointing) on the ResNet blocks, or `--compute_dtype bfloat16` to perform forward and backward passes in bfloat16 precision.\n\n### Examples on large-scale models", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/", "category": "pytorch blogs"}} -{"page_content": "When building large models on TPUs, we often need to be aware of the memory constraints (e.g. 16 GB per core in TPU v3 and 32 GB per chip in TPU v4). For large models that cannot fit into a single TPU memory or the host CPU memory, one should use nested FSDP to implement the ZeRO-3 algorithm interleave submodule construction with inner FSDP wrapping, so that the full model never needs to be stored in memory during construction.\n\nWe illustrate these cases in [https://github.com/ronghanghu/ptxla_scaling_examples](https://github.com/ronghanghu/ptxla_scaling_examples), which provides examples of training a Vision Transformer (ViT) model with 10B+ parameters on a TPU v3 pod (with 128 cores) as well as other cases.\n\n### Design Notes", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/", "category": "pytorch blogs"}} +{"page_content": "### Examples on large-scale models\n\nWhen building large models on TPUs, we often need to be aware of the memory constraints (e.g. 16 GB per core in TPU v3 and 32 GB per chip in TPU v4). 
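\n\nA quick back-of-envelope estimate (weights only, ignoring gradients, optimizer states, and activations) shows why this matters for the 10B+ parameter models discussed below:\n\n```python\n# Rough memory estimate for the FP32 weights of a 10B-parameter model.\nnum_params = 10e9\nbytes_per_param = 4  # FP32\nprint(f'{num_params * bytes_per_param / 1024**3:.0f} GiB of weights alone')\n# ~37 GiB, already far beyond the 16 GB available to a single TPU v3 core.\n```\n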
For large models that cannot fit into a single TPU memory or the host CPU memory, one should use nested FSDP to implement the ZeRO-3 algorithm and interleave submodule construction with inner FSDP wrapping, so that the full model never needs to be stored in memory during construction.\n\nWe illustrate these cases in [https://github.com/ronghanghu/ptxla_scaling_examples](https://github.com/ronghanghu/ptxla_scaling_examples), which provides examples of training a Vision Transformer (ViT) model with 10B+ parameters on a TPU v3 pod (with 128 cores) as well as other cases.\n\n### Design Notes", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/", "category": "pytorch blogs"}}
{"page_content": "### Design Notes\n\nOne might wonder why we need to develop a separate FSDP class in PyTorch/XLA instead of directly reusing [PyTorch's FSDP class](https://pytorch.org/docs/stable/fsdp.html) or extending it to the XLA backend. The main motivation behind a separate FSDP class in PyTorch/XLA is that the native PyTorch FSDP class heavily relies on CUDA features that are not supported by XLA devices, while XLA also has several unique characteristics that need special handling. These distinctions require a different implementation of FSDP that would be much easier to build in a separate class.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/", "category": "pytorch blogs"}}
{"page_content": "#### Changes in API calls\nOne prominent distinction is that the native PyTorch FSDP is built upon separate CUDA streams for asynchronous execution in eager mode, while PyTorch/XLA runs in lazy mode and also does not support streams. In addition, TPU requires that all devices homogeneously run the same program. As a result, in the PyTorch/XLA FSDP implementation, CUDA calls and per-process heterogeneity need to be replaced by XLA APIs and alternative homogeneous implementations.\n\n#### Tensor Storage Handling", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/", "category": "pytorch blogs"}}
{"page_content": "#### Tensor Storage Handling\n\nAnother prominent distinction is how to free a tensor's storage, which is much harder in XLA than in CUDA.
To implement ZeRO-3, one needs to free the storage of full parameters after a module's forward pass, so that the next module can reuse this memory buffer for subsequent computation. PyTorch's FSPD accomplishes this on CUDA by freeing the actual storage of a parameter `p` via `p.data.storage().resize_(0)`. However, XLA tensors do not have this `.storage()` handle given that the XLA HLO IRs are completely functional and do not provide any ops to deallocate a tensor or resize its storage. Below the PyTorch interface, only the XLA compiler can decide when to free a TPU device memory corresponding to an XLA tensor, and a prerequisite is that the memory can only be released when the tensor object gets deallocated in Python -- which cannot happen in FSDP because these parameter tensors are referenced as module attributes and also saved by PyTorch autograd for the backward pass.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/", "category": "pytorch blogs"}} {"page_content": "Our solution to this issue is to split a tensor's value properties from its autograd Variable properties, and to free a `nn.Parameter` tensor by setting its `.data` attribute to a dummy scalar of size 1. This way the actual data tensor for the full parameter gets dereferenced in Python so that XLA can recycle its memory for other computation, while autograd can still trace the base `nn.Parameter` as a weak reference to the parameter data. To get this to work, one also needs to handle views over the parameters as views in PyTorch also hold references to its actual data (this required fixing a shape-related issue with views in PyTorch/XLA).\n\n#### Working with XLA compiler", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/", "category": "pytorch blogs"}} {"page_content": "The solution above should be enough to free full parameters if the XLA compiler faithfully preserves the operations and their execution order in our PyTorch program. But there is another problem -- XLA attempts to optimize the program to speed up its execution by applying common subexpression elimination (CSE) to the HLO IRs. In a naive implementation of FSDP, the XLA compiler typically eliminates the 2nd all-gather in the backward pass to reconstruct the full parameters when it sees that it is a repeated computation from the forward pass, and directly holds and reuses the full parameters we want to free up after the forward pass. To guard against this undesired compiler behavior, we introduced the [optimization barrier op](https://www.tensorflow.org/xla/operation_semantics#optimizationbarrier) into PyTorch/XLA and used it to stop eliminating the 2nd all-gather. This optimization barrier is also applied to a similar case of gradient checkpointing to prevent CSE between forward and backward passes that could eliminate the rematerialization.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/", "category": "pytorch blogs"}} {"page_content": "In the future, if the distinctions between CUDA and XLA become not as prominent as mentioned above, it could be worth considering a merge of the PyTorch/XLA FSDP with the native PyTorch FSDP to have a unified interface.\n\n## Acknowledgments\n\nThanks to Junmin Hao from AWS for reviewing the PyTorch/XLA FSDP pull request. Thanks to Brian Hirsh from the Meta PyTorch team for support on the PyTorch core issues. 
Thanks to Isaack Karanja, Will Cromar, and Blake Hechtman from Google for support on GCP, XLA, and TPU issues.\n\nThanks to Piotr Dollar, Wan-Yen Lo, Alex Berg, Ryan Mark, Kaiming He, Xinlei Chen, Saining Xie, Shoubhik Debnath, Min Xu, and Vaibhav Aggarwal from Meta FAIR for various TPU-related discussions.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: \"Accelerated Diffusers with PyTorch 2.0\"\nauthor: Pedro Cuenca, Patrick von Platen, Suraj Patil\n---\n\nPyTorch 2.0 has just been released. Its flagship new feature is `torch.compile()`, a one-line code change that promises to automatically improve performance across codebases. We have previously [checked on that promise in Hugging Face Transformers and TIMM models](https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/), and delved deep into its [motivation, architecture and the road ahead](https://pytorch.org/get-started/pytorch-2.0/).", "metadata": {"source": "https://pytorch.org/blog/accelerated-diffusers-pt-20/", "category": "pytorch blogs"}} {"page_content": "As important as `torch.compile()` is, there\u2019s much more to PyTorch 2.0. Notably, PyTorch 2.0 incorporates several strategies to accelerate transformer blocks, and these improvements are very relevant for diffusion models too. Techniques such as [FlashAttention](https://arxiv.org/abs/2205.14135), for example, have become very popular in the diffusion community thanks to their ability to significantly speed up Stable Diffusion and achieve larger batch sizes, and they are now part of PyTorch 2.0.\n\nIn this post we discuss how attention layers are optimized in PyTorch 2.0 and how these optimization are applied to the popular [\ud83e\udde8 Diffusers library](https://github.com/huggingface/diffusers). We finish with a benchmark that shows how the use of PyTorch 2.0 and Diffusers immediately translates to significant performance improvements across different hardware.\n\n\n## Accelerating transformer blocks", "metadata": {"source": "https://pytorch.org/blog/accelerated-diffusers-pt-20/", "category": "pytorch blogs"}} -{"page_content": "PyTorch 2.0 includes a _scaled dot-product attention_ function as part of `torch.nn.functional`. This function encompasses several implementations that can be applied depending on the inputs and the hardware in use. Before PyTorch 2.0, you had to search for third-party implementations and install separate packages in order to take advantage of memory optimized algorithms, such as FlashAttention. The available implementations are:\n* FlashAttention, from the official [FlashAttention project](https://github.com/HazyResearch/flash-attention). \n* Memory-Efficient Attention, from the [xFormers project](https://github.com/facebookresearch/xformers).\n* A native C++ implementation suitable for non-CUDA devices or when high-precision is required.", "metadata": {"source": "https://pytorch.org/blog/accelerated-diffusers-pt-20/", "category": "pytorch blogs"}} +{"page_content": "## Accelerating transformer blocks\n\nPyTorch 2.0 includes a _scaled dot-product attention_ function as part of `torch.nn.functional`. This function encompasses several implementations that can be applied depending on the inputs and the hardware in use. Before PyTorch 2.0, you had to search for third-party implementations and install separate packages in order to take advantage of memory optimized algorithms, such as FlashAttention. 
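\n\nA minimal sketch of calling the new function with random tensors (the shapes below are arbitrary):\n\n```python\nimport torch\nimport torch.nn.functional as F\n\n# batch of 2, 8 heads, sequence length 64, head dimension 32\nq = torch.randn(2, 8, 64, 32)\nk = torch.randn(2, 8, 64, 32)\nv = torch.randn(2, 8, 64, 32)\n\n# PyTorch dispatches to the best backend available for the inputs and hardware.\nout = F.scaled_dot_product_attention(q, k, v)\nprint(out.shape)  # torch.Size([2, 8, 64, 32])\n```\n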
The available implementations are:\n* FlashAttention, from the official [FlashAttention project](https://github.com/HazyResearch/flash-attention). \n* Memory-Efficient Attention, from the [xFormers project](https://github.com/facebookresearch/xformers).\n* A native C++ implementation suitable for non-CUDA devices or when high-precision is required.", "metadata": {"source": "https://pytorch.org/blog/accelerated-diffusers-pt-20/", "category": "pytorch blogs"}} {"page_content": "All these methods are available by default, and PyTorch will try to select the optimal one automatically through the use of the new scaled dot-product attention (SDPA) API. You can also individually toggle them for finer-grained control, see [the documentation](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention) for details.\n\n\n## Using scaled dot-product attention in diffusers", "metadata": {"source": "https://pytorch.org/blog/accelerated-diffusers-pt-20/", "category": "pytorch blogs"}} {"page_content": "The incorporation of Accelerated PyTorch 2.0 Transformer attention to the Diffusers library was achieved through the use of the [`set_attn_processor` method](https://huggingface.co/docs/diffusers/v0.13.0/en/api/models#diffusers.UNet2DConditionModel.set_attn_processor), which allows for pluggable attention modules to be configured. In this case, a [new attention processor was created](https://github.com/huggingface/diffusers/blob/856dad57/src/diffusers/models/cross_attention.py#L469), which is [enabled by default when PyTorch 2.0 is available](https://github.com/huggingface/diffusers/blob/856dad57bb7a9ee13af4a08492e524b0a145a2c5/src/diffusers/models/cross_attention.py#L105). For clarity, this is how you could enable it manually (but it\u2019s usually not necessary since diffusers will automatically take care of it):\n\n```\nfrom diffusers import StableDiffusionPipeline\nfrom diffusers.models.cross_attention import AttnProcessor2_0", "metadata": {"source": "https://pytorch.org/blog/accelerated-diffusers-pt-20/", "category": "pytorch blogs"}} {"page_content": "pipe = StableDiffusionPipeline.from_pretrained(\"runwayml/stable-diffusion-v1-5\")\npipe.to(\"cuda\")\npipe.unet.set_attn_processor(AttnProcessor2_0())\n\nprompt = \"a photo of an astronaut riding a horse on mars\"\nimage = pipe(prompt).images[0]\n```\n\n## Stable Diffusion Benchmark\n\nWe ran a number of tests using accelerated dot-product attention from PyTorch 2.0 in Diffusers. We installed diffusers from pip and used nightly versions of PyTorch 2.0, since our tests were performed before the official release. We also used `torch.set_float32_matmul_precision('high')` to enable additional fast matrix multiplication algorithms.\n\nWe compared results with the traditional attention implementation in `diffusers` (referred to as `vanilla` below) as well as with the best-performing solution in pre-2.0 PyTorch: PyTorch 1.13.1 with the xFormers package (v0.0.16) installed.", "metadata": {"source": "https://pytorch.org/blog/accelerated-diffusers-pt-20/", "category": "pytorch blogs"}} @@ -1414,7 +1418,7 @@ {"page_content": "---\nlayout: blog_detail\ntitle: \"Get Started with PyTorch 2.0 Summary and Overview\"\nauthor: Team PyTorch\nfeatured-img: \"assets/images/Pytorch_2_0_Animation_AdobeExpress.gif\"\n---\n\nIntroducing PyTorch 2.0, our first steps toward the next generation 2-series release of PyTorch. 
Over the last few years we have innovated and iterated from PyTorch 1.0 to the most recent 1.13 and moved to the newly formed PyTorch Foundation, part of the Linux Foundation.\n\nTo complement the PyTorch 2.0 announcement and conference, we have also posted a comprehensive introduction and technical overview within the Get Started menu at [https://pytorch.org/get-started/pytorch-2.0](https://pytorch.org/get-started/pytorch-2.0).\n\nWe also wanted to ensure you had all the information to quickly leverage PyTorch 2.0 in your models so we added the technical requirements, tutorial, user experience, Hugging Face benchmarks and FAQs to get you started today!", "metadata": {"source": "https://pytorch.org/blog/getting-started-with-pytorch-2.0/", "category": "pytorch blogs"}} {"page_content": "Finally we are launching a new \u201cAsk the Engineers: 2.0 Live Q&A\u201d series that allows you to go deeper on a range of topics with PyTorch subject matter experts. We hope this content is helpful for the entire community and level of users/contributors.\n\n[https://pytorch.org/get-started/pytorch-2.0](https://pytorch.org/get-started/pytorch-2.0)", "metadata": {"source": "https://pytorch.org/blog/getting-started-with-pytorch-2.0/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'New Library Releases in PyTorch 1.10, including TorchX, TorchAudio, TorchVision'\nauthor: Team PyTorch \n---\n\nToday, we are announcing a number of new features and improvements to PyTorch libraries, alongside the [PyTorch 1.10 release](https://pytorch.org/blog/pytorch-1.10-released/). Some highlights include:\n\nSome highlights include:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-new-library-releases/", "category": "pytorch blogs"}} -{"page_content": "* **TorchX** - a new SDK for quickly building and deploying ML applications from research & development to production. \n* **TorchAudio** - Added text-to-speech pipeline, self-supervised model support, multi-channel support and MVDR beamforming module, RNN transducer (RNNT) loss function, and batch and filterbank support to `lfilter` function. See the TorchAudio release notes [here](https://github.com/pytorch/audio/releases).\n* **TorchVision** - Added new RegNet and EfficientNet models, FX based feature extraction added to utilities, two new Automatic Augmentation techniques: Rand Augment and Trivial Augment, and updated training recipes. See the TorchVision release notes [here](https://github.com/pytorch/vision/releases).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-new-library-releases/", "category": "pytorch blogs"}} +{"page_content": "Some highlights include:\n\n* **TorchX** - a new SDK for quickly building and deploying ML applications from research & development to production. \n* **TorchAudio** - Added text-to-speech pipeline, self-supervised model support, multi-channel support and MVDR beamforming module, RNN transducer (RNNT) loss function, and batch and filterbank support to `lfilter` function. See the TorchAudio release notes [here](https://github.com/pytorch/audio/releases).\n* **TorchVision** - Added new RegNet and EfficientNet models, FX based feature extraction added to utilities, two new Automatic Augmentation techniques: Rand Augment and Trivial Augment, and updated training recipes. 
See the TorchVision release notes [here](https://github.com/pytorch/vision/releases).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "# Introducing TorchX\nTorchX is a new SDK for quickly building and deploying ML applications from research & development to production. It offers various builtin components that encode MLOps best practices and make advanced features like distributed training and hyperparameter optimization accessible to all. \n\nUsers can get started with TorchX 0.1 with no added setup cost since it supports popular ML schedulers and pipeline orchestrators that are already widely adopted and deployed in production. No two production environments are the same. To comply with various use cases, TorchX\u2019s core APIs allow tons of customization at well-defined extension points so that even the most unique applications can be serviced without customizing the whole vertical stack.\n\nRead the [documentation](https://pytorch.org/torchx) for more details and try out this feature using this quickstart [tutorial](https://pytorch.org/torchx/latest/quickstart.html). \n\n\n# TorchAudio 0.10", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "# TorchAudio 0.10\n\n### [Beta] Text-to-speech pipeline\nTorchAudio now adds the Tacotron2 model and pretrained weights. It is now possible to build a text-to-speech pipeline with existing vocoder implementations like WaveRNN and Griffin-Lim. Building a TTS pipeline requires matching data processing and pretrained weights, which are often non-trivial to users. So TorchAudio introduces a bundle API so that constructing pipelines for specific pretrained weights is easy. 
The following example illustrates this.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "```python\n>>> import torchaudio\n>>>\n>>> bundle = torchaudio.pipelines.TACOTRON2_WAVERNN_CHAR_LJSPEECH\n>>>\n>>> # Build text processor, Tacotron2 and vocoder (WaveRNN) model\n>>> processor = bundle.get_text_processor()\n>>> tacotron2 = bundle.get_tacotron2()\nDownloading:\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 107M/107M [00:01<00:00, 87.9MB/s]\n>>> vocoder = bundle.get_vocoder()\nDownloading:\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 16.7M/16.7M [00:00<00:00, 78.1MB/s]\n>>>\n>>> text = \"Hello World!\"\n>>>\n>>> # Encode text\n>>> input, lengths = processor(text)\n>>>\n>>> # Generate (mel-scale) spectrogram\n>>> specgram, lengths, _ = tacotron2.infer(input, lengths)\n>>>\n>>> # Convert spectrogram to waveform\n>>> waveforms, lengths = vocoder(specgram, lengths)\n>>>\n>>> # Save audio\n>>> torchaudio.save('hello-world.wav', waveforms, vocoder.sample_rate)\n\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-new-library-releases/", "category": "pytorch blogs"}} @@ -1429,10 +1433,10 @@ {"page_content": "efficientnet = models.efficientnet_b0(pretrained=True)\nefficientnet.eval()\npredictions = efficientnet(x)\n```\nSee the full list of new models on the [torchvision.models](https://pytorch.org/vision/master/models.html) documentation page.\n\nWe would like to thank Ross Wightman and Luke Melas-Kyriazi for contributing the weights of the EfficientNet variants.\n\n### (Beta) FX-based Feature Extraction \nA new Feature Extraction method has been added to our utilities. It uses [torch.fx](https://pytorch.org/docs/stable/fx.html) and enables us to retrieve the outputs of intermediate layers of a network which is useful for feature extraction and visualization. \n\nHere is an example of how to use the new utility:\n\n```python\nimport torch\nfrom torchvision.models import resnet50\nfrom torchvision.models.feature_extraction import create_feature_extractor\n\n\nx = torch.rand(1, 3, 224, 224)\n\nmodel = resnet50()", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "model = resnet50()\n\nreturn_nodes = {\n\"layer4.2.relu_2\": \"layer4\"\n}\nmodel2 = create_feature_extractor(model, return_nodes=return_nodes)\nintermediate_outputs = model2(x)\n\nprint(intermediate_outputs['layer4'].shape)\n```\nWe would like to thank Alexander Soare for developing this utility.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "### (Stable) New Data Augmentations\nTwo new Automatic Augmentation techniques were added: [RandAugment](https://arxiv.org/abs/1909.13719) and [Trivial Augment](https://arxiv.org/abs/2103.10158). They apply a series of transformations on the original data to enhance them and to boost the performance of the models. 
The new techniques build on top of the previously added [AutoAugment](https://github.com/pytorch/vision/pull/3123) and focus on simplifying the approach, reducing the search space for the optimal policy and improving the performance gain in terms of accuracy. These techniques enable users to reproduce recipes to achieve state-of-the-art performance on the offered models. Additionally, it enables users to apply these techniques in order to do transfer learning and achieve optimal accuracy on new datasets.\n\nBoth methods can be used as drop-in replacement of the AutoAugment technique as seen below:\n\n```python\nfrom torchvision import transforms", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-new-library-releases/", "category": "pytorch blogs"}} -{"page_content": "t = transforms.RandAugment()\n# t = transforms.TrivialAugmentWide()\ntransformed = t(image)\n\ntransform = transforms.Compose([\ntransforms.Resize(256),\ntransforms.RandAugment(), # transforms.TrivialAugmentWide()\ntransforms.ToTensor()])\n```\nRead the [automatic augmentation transforms](https://pytorch.org/vision/master/transforms.html#automatic-augmentation-transforms) for more details.\n\nWe would like to thank Samuel G. M\u00fcller for contributing to Trivial Augment and for his help on refactoring the AA package.\n\n### Updated Training Recipes\nWe have updated our training reference scripts to add support for Exponential Moving Average, Label Smoothing, Learning-Rate Warmup, [Mixup](https://arxiv.org/abs/1710.09412), [Cutmix](https://arxiv.org/abs/1905.04899) and other [SOTA primitives](https://github.com/pytorch/vision/issues/3911). The above enabled us to improve the classification Acc@1 of some pre-trained models by over 4 points. A major update of the existing pre-trained weights is expected in the next release.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-new-library-releases/", "category": "pytorch blogs"}} -{"page_content": "Thanks for reading. If you\u2019re interested in these updates and want to join the PyTorch community, we encourage you to join [the discussion](https://discuss.pytorch.org/) forums and [open GitHub issues](https://github.com/pytorch/pytorch/issues). To get the latest news from PyTorch, follow us on [Twitter](https://twitter.com/PyTorch), [Medium](https://medium.com/pytorch), [YouTube](https://www.youtube.com/pytorch) and [LinkedIn](https://www.linkedin.com/company/pytorch). \n\nCheers!\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-new-library-releases/", "category": "pytorch blogs"}} +{"page_content": "```python\nfrom torchvision import transforms\n\nt = transforms.RandAugment()\n# t = transforms.TrivialAugmentWide()\ntransformed = t(image)\n\ntransform = transforms.Compose([\ntransforms.Resize(256),\ntransforms.RandAugment(), # transforms.TrivialAugmentWide()\ntransforms.ToTensor()])\n```\nRead the [automatic augmentation transforms](https://pytorch.org/vision/master/transforms.html#automatic-augmentation-transforms) for more details.\n\nWe would like to thank Samuel G. 
M\u00fcller for contributing to Trivial Augment and for his help on refactoring the AA package.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-new-library-releases/", "category": "pytorch blogs"}} +{"page_content": "### Updated Training Recipes\nWe have updated our training reference scripts to add support for Exponential Moving Average, Label Smoothing, Learning-Rate Warmup, [Mixup](https://arxiv.org/abs/1710.09412), [Cutmix](https://arxiv.org/abs/1905.04899) and other [SOTA primitives](https://github.com/pytorch/vision/issues/3911). The above enabled us to improve the classification Acc@1 of some pre-trained models by over 4 points. A major update of the existing pre-trained weights is expected in the next release.\n\nThanks for reading. If you\u2019re interested in these updates and want to join the PyTorch community, we encourage you to join [the discussion](https://discuss.pytorch.org/) forums and [open GitHub issues](https://github.com/pytorch/pytorch/issues). To get the latest news from PyTorch, follow us on [Twitter](https://twitter.com/PyTorch), [Medium](https://medium.com/pytorch), [YouTube](https://www.youtube.com/pytorch) and [LinkedIn](https://www.linkedin.com/company/pytorch). \n\nCheers!\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: \"Introducing TorchMultimodal - a library for accelerating exploration in Multimodal AI\"\nauthor: Kartikay Khandelwal, Ankita De\nfeatured-img: \"assets/images/torch-multimodal-feature-image.png\"\n---\n\nWe are announcing TorchMultimodal Beta, a PyTorch domain library for training SoTA multi-task multimodal models at scale. The library provides composable building blocks (modules, transforms, loss functions) to accelerate model development, SoTA model architectures (FLAVA, MDETR, Omnivore) from published research, training and evaluation scripts, as well as notebooks for exploring these models. The library is under active development, and we\u2019d love to hear your feedback! You can find more details on how to get started [here](https://github.com/facebookresearch/multimodal#installation).\n\n## Why TorchMultimodal?", "metadata": {"source": "https://pytorch.org/blog/introducing-torchmultimodal/", "category": "pytorch blogs"}} -{"page_content": "Interest is rising around AI models that understand multiple input types (text, images, videos and audio signals), and optionally use this understanding to generate different forms of outputs (sentences, pictures, videos). Recent work from FAIR such as [FLAVA](https://arxiv.org/abs/2112.04482), [Omnivore](https://arxiv.org/pdf/2201.08377.pdf) and [data2vec](https://arxiv.org/abs/2202.03555) have shown that [multimodal models for understanding](https://ai.facebook.com/blog/advances-in-multimodal-understanding-research-at-meta-ai/) are competitive with unimodal counterparts, and in some cases are establishing the new state-of-the art. 
Generative models such as [Make-a-video](https://ai.facebook.com/blog/generative-ai-text-to-video/) and [Make-a-scene](https://ai.facebook.com/blog/greater-creative-control-for-ai-image-generation/) are redefining what modern AI systems can do.", "metadata": {"source": "https://pytorch.org/blog/introducing-torchmultimodal/", "category": "pytorch blogs"}} +{"page_content": "## Why TorchMultimodal?\n\nInterest is rising around AI models that understand multiple input types (text, images, videos and audio signals), and optionally use this understanding to generate different forms of outputs (sentences, pictures, videos). Recent work from FAIR such as [FLAVA](https://arxiv.org/abs/2112.04482), [Omnivore](https://arxiv.org/pdf/2201.08377.pdf) and [data2vec](https://arxiv.org/abs/2202.03555) have shown that [multimodal models for understanding](https://ai.facebook.com/blog/advances-in-multimodal-understanding-research-at-meta-ai/) are competitive with unimodal counterparts, and in some cases are establishing the new state-of-the art. Generative models such as [Make-a-video](https://ai.facebook.com/blog/generative-ai-text-to-video/) and [Make-a-scene](https://ai.facebook.com/blog/greater-creative-control-for-ai-image-generation/) are redefining what modern AI systems can do.", "metadata": {"source": "https://pytorch.org/blog/introducing-torchmultimodal/", "category": "pytorch blogs"}} {"page_content": "As interest in multimodal AI has grown, researchers are looking for tools and libraries to quickly experiment with ideas, and build on top of the latest research in the field. While the PyTorch ecosystem has a rich repository of libraries and frameworks, it\u2019s not always obvious how components from these interoperate with each other, or how they can be stitched together to build SoTA multimodal models.\n\nTorchMultimodal solves this problem by providing:\n\n- **Composable and easy-to-use building blocks** which researchers can use to accelerate model development and experimentation in their own workflows. These are designed to be modular, and can be easily extended to handle new modalities.\n\n- **End-to-end examples for training and evaluating the latest models from research.** These should serve as starting points for ongoing/future research, as well as examples for using advanced features such as integrating with FSDP and activation checkpointing for scaling up model and batch sizes.", "metadata": {"source": "https://pytorch.org/blog/introducing-torchmultimodal/", "category": "pytorch blogs"}} {"page_content": "## Introducing TorchMultimodal\n\nTorchMultimodal is a PyTorch domain library for training multi-task multimodal models at scale. In the repository, we provide:\n\n- **[Building Blocks](https://github.com/facebookresearch/multimodal/tree/main/torchmultimodal)**. A collection of modular and composable building blocks like models, fusion layers, loss functions, datasets and utilities. Some examples include:\n\n - [Contrastive Loss with Temperature](https://github.com/facebookresearch/multimodal/blob/4d2236877467ff8f56aa1935dd92d7782751b135/torchmultimodal/modules/losses/contrastive_loss_with_temperature.py#L145). Commonly used function for training models like CLIP and FLAVA. 
We also include variants such as [ImageTextContrastiveLoss](https://github.com/facebookresearch/multimodal/blob/4d2236877467ff8f56aa1935dd92d7782751b135/torchmultimodal/modules/losses/albef.py#L14) used in models like ALBEF.", "metadata": {"source": "https://pytorch.org/blog/introducing-torchmultimodal/", "category": "pytorch blogs"}} {"page_content": "- [Codebook layers](https://github.com/facebookresearch/multimodal/blob/main/torchmultimodal/modules/layers/codebook.py#L31) which compresses high dimensional data by nearest neighbor lookup in an embedding space and is a vital component of VQVAEs (provided as a [model](https://github.com/facebookresearch/multimodal/blob/4d2236877467ff8f56aa1935dd92d7782751b135/torchmultimodal/models/vqvae.py#L26) in the repository).\n\n - [Shifted-window Attention](https://github.com/facebookresearch/multimodal/blob/main/torchmultimodal/modules/encoders/swin_transformer_3d_encoder.py#L76) window based multi-head self attention which is a vital component of encoders like Swin 3D Transformers.\n\n - [Components for CLIP.](https://github.com/facebookresearch/multimodal/tree/4d2236877467ff8f56aa1935dd92d7782751b135/torchmultimodal/models/clip) A popular model published by OpenAI which has proven to be extremely effective at learning text and image representations.", "metadata": {"source": "https://pytorch.org/blog/introducing-torchmultimodal/", "category": "pytorch blogs"}} @@ -1445,11 +1449,11 @@ {"page_content": "## Team\n\nThe primary contributors and developers of TorchMultimodal include Ankita De, Evan Smothers, Kartikay Khandelwal, Lan Gong, Laurence Rouesnel, Nahiyan Malik, Rafi Ayub and Yosua Michael Maranatha.", "metadata": {"source": "https://pytorch.org/blog/introducing-torchmultimodal/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'Announcing the Winners of the 2021 PyTorch Annual Hackathon'\nauthor: Team PyTorch\nfeatured-img: 'assets/images/social_hackathon21.png'\n---\n\nMore than 1,900 people worked hard in this year\u2019s PyTorch Annual Hackathon to create unique tools and applications for PyTorch developers and researchers.\n\n*Notice: None of the projects submitted to the hackathon are associated with or offered by Meta Platforms, Inc.*\n\n
", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2021-pytorch-annual-hackathon/", "category": "pytorch blogs"}} {"page_content": "This year, participants could enter their projects into following three categories:\n* **PyTorch Developer Tools**: a tool or library for improving productivity and efficiency for PyTorch researchers and developers.\n* **Web and Mobile Applications Powered by PyTorch**: a web or mobile interface and/or an embedded device built using PyTorch.\n* **PyTorch Responsible AI Development Tools**: a tool, library, or web/mobile app to support researchers and developers in creating responsible AI that factors in fairness, security, privacy, and more throughout its entire development process.\n\nThe virtual hackathon ran from September 8 through November 2, 2021, with more than 1,900 registered participants from 110 countries, submitting a total of 65 projects. Entrants were judged on their idea\u2019s quality, originality, potential impact, and how well they implemented it. All projects can be viewed [here](https://pytorch2021.devpost.com/project-gallery).\n\nMeet the winners of each category below!\n\n## PYTORCH DEVELOPER TOOLS", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2021-pytorch-annual-hackathon/", "category": "pytorch blogs"}} -{"page_content": "#### **First Place: [RaNNC](https://devpost.com/software/rannc-rapid-neural-network-connector)**\nRaNNC is a middleware to automate hybrid model/data parallelism for training very large-scale neural networks capable of training 100 billion parameter models without any manual tuning.\n\n#### **Second Place: [XiTorch](https://devpost.com/software/xitorch-differentiable-scientific-computing-library)**\nXiTorch provides first and higher order gradients of functional routines, such as optimization, rootfinder, and ODE solver. It also contains operations for implicit linear operators (e.g. large matrix that is expressed only by its matrix-vector multiplication) such as symmetric eigen-decomposition, linear solve, and singular value decomposition.\n\n#### **Third Place: [TorchLiberator](https://devpost.com/software/torchliberator-partial-weight-loading)**\nTorchLiberator automates model surgery, finding the maximum correspondence between weights in two networks.", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2021-pytorch-annual-hackathon/", "category": "pytorch blogs"}} +{"page_content": "## PYTORCH DEVELOPER TOOLS\n\n#### **First Place: [RaNNC](https://devpost.com/software/rannc-rapid-neural-network-connector)**\nRaNNC is a middleware to automate hybrid model/data parallelism for training very large-scale neural networks capable of training 100 billion parameter models without any manual tuning.\n\n#### **Second Place: [XiTorch](https://devpost.com/software/xitorch-differentiable-scientific-computing-library)**\nXiTorch provides first and higher order gradients of functional routines, such as optimization, rootfinder, and ODE solver. It also contains operations for implicit linear operators (e.g. 
large matrix that is expressed only by its matrix-vector multiplication) such as symmetric eigen-decomposition, linear solve, and singular value decomposition.\n\n#### **Third Place: [TorchLiberator](https://devpost.com/software/torchliberator-partial-weight-loading)**\nTorchLiberator automates model surgery, finding the maximum correspondence between weights in two networks.", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2021-pytorch-annual-hackathon/", "category": "pytorch blogs"}} {"page_content": "#### **Honorable Mentions**\n* [PADL](https://devpost.com/software/doch) manages your entire PyTorch work flow with a single python abstraction and a beautiful functional API, so there\u2019s no more complex configuration or juggling preprocessing, postprocessing and forward passes.\n* [PyTree](https://devpost.com/software/pytree) is a PyTorch package for recursive neural networks that provides highly generic recursive neural network implementations as well as efficient batching methods. \n* [IndicLP](https://devpost.com/software/indiclp) makes it easier for developers and researchers to build applications and models in Indian Languages, thus making NLP a more diverse field. \n\n## WEB/MOBILE APPLICATIONS POWERED BY PYTORCH\n\n#### **First Place: [PyTorch Driving Guardian](https://devpost.com/software/pytorch-driving-guardian)**\nPyTorch Driving Guardian is a tool that monitors driver alertness, emotional state, and potential blind spots on the road.", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2021-pytorch-annual-hackathon/", "category": "pytorch blogs"}} {"page_content": "#### **Second Place: [Kronia](https://devpost.com/software/kronia)**\nKronia is an Android mobile app built to maximize the harvest outputs for farmers. \n\n#### **Third Place: [Heyoh camera for Mac](https://devpost.com/software/heyoh-camera)**\nHeyoh is a Mac virtual camera for Zoom and Meets that augments live video by recognizing hand gestures and smiles and shows animated effects to other video participants.", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2021-pytorch-annual-hackathon/", "category": "pytorch blogs"}} {"page_content": "#### **Honorable Mentions**\n* [Mamma AI](https://devpost.com/software/mamma-ai) is a tool that helps doctors with the breast cancer identification process by identifying areas likely to have cancer using ultrasonic and x-ray images. \n* [AgingClock](https://devpost.com/software/agingclock) is a tool that predicts biological age first with methylation genome data, then blood test data and eventually with multimodal omics and lifestyle data.\n* [Iris](https://devpost.com/software/iris-7s3yna) is an open source photos platform which is more of an alternative of Google Photos that includes features such as Listing photos, Detecting Categories, Detecting and Classifying Faces from Photos, Detecting and Clustering by Location and Things in Photos.\n\n## PYTORCH RESPONSIBLE AI DEVELOPMENT TOOLS", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2021-pytorch-annual-hackathon/", "category": "pytorch blogs"}} -{"page_content": "#### **First Place: [FairWell](https://devpost.com/software/fairwell-a-tool-to-bid-goodbye-to-unknown-ai-biasness)**\nFairWell aims to address model bias on specific groups of people by allowing data scientists to evaluate their dataset and model predictions and take steps to make their datasets more inclusive and their models less biased. 
\n\n#### **Second Place: [promp2slip](https://devpost.com/software/promp2slip)**\nPromp2slip is a library that tests the ethics of language models by using natural adversarial texts. \n\n#### **Third Place: [Phorch](https://devpost.com/software/phorch)**\nPhorch adversarially attacks the data using FIGA (Feature Importance Guided Attack) and creates 3 different attack sets of data based on certain parameters. These features are utilized to implement adversarial training as a defense against FIGA using neural net architecture in PyTorch.", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2021-pytorch-annual-hackathon/", "category": "pytorch blogs"}} +{"page_content": "## PYTORCH RESPONSIBLE AI DEVELOPMENT TOOLS\n\n#### **First Place: [FairWell](https://devpost.com/software/fairwell-a-tool-to-bid-goodbye-to-unknown-ai-biasness)**\nFairWell aims to address model bias on specific groups of people by allowing data scientists to evaluate their dataset and model predictions and take steps to make their datasets more inclusive and their models less biased. \n\n#### **Second Place: [promp2slip](https://devpost.com/software/promp2slip)**\nPromp2slip is a library that tests the ethics of language models by using natural adversarial texts. \n\n#### **Third Place: [Phorch](https://devpost.com/software/phorch)**\nPhorch adversarially attacks the data using FIGA (Feature Importance Guided Attack) and creates 3 different attack sets of data based on certain parameters. These features are utilized to implement adversarial training as a defense against FIGA using neural net architecture in PyTorch.", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2021-pytorch-annual-hackathon/", "category": "pytorch blogs"}} {"page_content": "#### **Honorable Mentions**\n* [Greenops](https://devpost.com/software/greenops) helps to measure the footprints of deep learning models at training, testing and evaluating to reduce energy consumption and carbon footprints.\n* [Xaitk-saliency](https://devpost.com/software/xaitk-saliency) is an open-source, explainable AI toolkit for visual saliency algorithm interfaces and implementations, built for analytic and autonomy applications.\n\nThank you,\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2021-pytorch-annual-hackathon/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch 1.8 Release, including Compiler and Distributed Training updates, and New Mobile Tutorials'\nauthor: Team PyTorch \n---", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-released/", "category": "pytorch blogs"}} {"page_content": "We are excited to announce the availability of PyTorch 1.8. This release is composed of more than 3,000 commits since 1.7. It includes major updates and new features for compilation, code optimization, frontend APIs for scientific computing, and AMD ROCm support through binaries that are available via pytorch.org. It also provides improved features for large-scale training for pipeline and model parallelism, and gradient compression. A few of the highlights include:\n1. Support for doing python to python functional transformations via ```torch.fx```;\n2. Added or stabilized APIs to support FFTs (```torch.fft```), Linear Algebra functions (```torch.linalg```), added support for autograd for complex tensors and updates to improve performance for calculating hessians and jacobians; and\n3. 
Significant updates and improvements to distributed training including: Improved NCCL reliability; Pipeline parallelism support; RPC profiling; and support for communication hooks adding gradient compression.\nSee the full release notes [here](https://github.com/pytorch/pytorch/releases).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-released/", "category": "pytorch blogs"}} @@ -1479,7 +1483,7 @@ {"page_content": "{:.table.table-striped.table-bordered}\n | Environment | Benefits of WebDataset |\n| ------------- | ------------- |\n| Local Cluster with AIStore | AIStore can be deployed easily as K8s containers and offers linear scalability and near 100% utilization of network and I/O bandwidth. Suitable for petascale deep learning. |\n| Cloud Computing | WebDataset deep learning jobs can be trained directly against datasets stored in cloud buckets; no volume plugins required. Local and cloud jobs work identically. Suitable for petascale learning. |\n| Local Cluster with existing distributed FS or object store | WebDataset\u2019s large sequential reads improve performance with existing distributed stores and eliminate the need for dedicated volume plugins. |\n| Educational Environments | WebDatasets can be stored on existing web servers and web caches, and can be accessed directly by students by URL |\n| Training on Workstations from Local Drives | Jobs can start training as the data still downloads. Data doesn\u2019t need to be unpacked for training. Ten-fold improvements in I/O performance on hard drives over random access file-based datasets. |\n| All Environments | Datasets are represented in an archival format and contain metadata such as file types. Data is compressed in native formats (JPEG, MP4, etc.). Data management, ETL-style jobs, and data transformations and I/O are simplified and easily parallelized. |", "metadata": {"source": "https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/", "category": "pytorch blogs"}} {"page_content": "We will be adding more examples giving benchmarks and showing how to use WebDataset in these environments over the coming months.\n\n## High-Performance\nFor high-performance computation on local clusters, the companion open-source [AIStore](https://github.com/NVIDIA/AIStore) server provides full disk to GPU I/O bandwidth, subject only to hardware constraints. [This Bigdata 2019 Paper](https://arxiv.org/abs/2001.01858) contains detailed benchmarks and performance measurements. In addition to benchmarks, research projects at NVIDIA and Microsoft have used WebDataset for petascale datasets and billions of training samples.\n\nBelow is a benchmark of AIStore with WebDataset clients using 12 server nodes with 10 rotational drives each.\n\n
\n \n
", "metadata": {"source": "https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/", "category": "pytorch blogs"}} {"page_content": "The left axis shows the aggregate bandwidth from the cluster, while the right scale shows the measured per drive I/O bandwidth. WebDataset and AIStore scale linearly to about 300 clients, at which point they are increasingly limited by the maximum I/O bandwidth available from the rotational drives (about 150 MBytes/s per drive). For comparison, HDFS is shown. HDFS uses a similar approach to AIStore/WebDataset and also exhibits linear scaling up to about 192 clients; at that point, it hits a performance limit of about 120 MBytes/s per drive, and it failed when using more than 1024 clients. Unlike HDFS, the WebDataset-based code just uses standard URLs and HTTP to access data and works identically with local files, with files stored on web servers, and with AIStore. For comparison, NFS in similar experiments delivers about 10-20 MBytes/s per drive.\n\n## Storing Datasets in Tar Archives", "metadata": {"source": "https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/", "category": "pytorch blogs"}} -{"page_content": "The format used for WebDataset is standard POSIX tar archives, the same archives used for backup and data distribution. In order to use the format to store training samples for deep learning, we adopt some simple naming conventions:\n* datasets are POSIX tar archives\n* each training sample consists of adjacent files with the same basename\n* shards are numbered consecutively\n\nFor example, ImageNet is stored in 1282 separate 100 Mbyte shards with names ```pythonimagenet-train-000000.tar to imagenet-train-001281.tar,``` the contents of the first shard are:", "metadata": {"source": "https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/", "category": "pytorch blogs"}} +{"page_content": "## Storing Datasets in Tar Archives\n\nThe format used for WebDataset is standard POSIX tar archives, the same archives used for backup and data distribution. In order to use the format to store training samples for deep learning, we adopt some simple naming conventions:\n* datasets are POSIX tar archives\n* each training sample consists of adjacent files with the same basename\n* shards are numbered consecutively\n\nFor example, ImageNet is stored in 1282 separate 100 Mbyte shards with names ```pythonimagenet-train-000000.tar to imagenet-train-001281.tar,``` the contents of the first shard are:", "metadata": {"source": "https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/", "category": "pytorch blogs"}} {"page_content": "```python\n-r--r--r-- bigdata/bigdata 3 2020-05-08 21:23 n03991062_24866.cls\n-r--r--r-- bigdata/bigdata 108611 2020-05-08 21:23 n03991062_24866.jpg\n-r--r--r-- bigdata/bigdata 3 2020-05-08 21:23 n07749582_9506.cls\n-r--r--r-- bigdata/bigdata 129044 2020-05-08 21:23 n07749582_9506.jpg\n-r--r--r-- bigdata/bigdata 3 2020-05-08 21:23 n03425413_23604.cls\n-r--r--r-- bigdata/bigdata 106255 2020-05-08 21:23 n03425413_23604.jpg\n-r--r--r-- bigdata/bigdata 3 2020-05-08 21:23 n02795169_27274.cls\n```\n\nWebDataset datasets can be used directly from local disk, from web servers (hence the name), from cloud storage and object stores, just by changing a URL. 
WebDataset datasets can be used for training without unpacking, and training can even be carried out on streaming data, with no local storage.", "metadata": {"source": "https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/", "category": "pytorch blogs"}} {"page_content": "Shuffling during training is important for many deep learning applications, and WebDataset performs shuffling both at the shard level and at the sample level. Splitting of data across multiple workers is performed at the shard level using a user-provided ```shard_selection``` function that defaults to a function that splits based on ```get_worker_info.``` (WebDataset can be combined with the [tensorcom](https://github.com/NVLabs/tensorcom) library to offload decompression/data augmentation and provide RDMA and direct-to-GPU loading; see below.)\n\n## Code Sample\nHere are some code snippets illustrating the use of WebDataset in a typical PyTorch deep learning application (you can find a full example at [http://github.com/tmbdev/pytorch-imagenet-wds](http://github.com/tmbdev/pytorch-imagenet-wds).\n\n```python\nimport webdataset as wds\nimport ...\n\nsharedurl = \"/imagenet/imagenet-train-{000000..001281}.tar\"\n\nnormalize = transforms.Normalize(\n mean=[0.485, 0.456, 0.406],\n std=[0.229, 0.224, 0.225])", "metadata": {"source": "https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/", "category": "pytorch blogs"}} {"page_content": "preproc = transforms.Compose([\n transforms.RandomResizedCrop(224),\n transforms.RandomHorizontalFlip(),\n transforms.ToTensor(),\n normalize,\n])\n\ndataset = (\n wds.Dataset(sharedurl)\n .shuffle(1000)\n .decode(\"pil\")\n .rename(image=\"jpg;png\", data=\"json\")\n .map_dict(image=preproc)\n .to_tuple(\"image\", \"data\")\n)\n\nloader = torch.utils.data.DataLoader(dataset, batch_size=64, num_workers=8)\n\nfor inputs, targets in loader:\n ...\n ```\n\nThis code is nearly identical to the file-based I/O pipeline found in the PyTorch Imagenet example: it creates a preprocessing/augmentation pipeline, instantiates a dataset using that pipeline and a data source location, and then constructs a DataLoader instance from the dataset.", "metadata": {"source": "https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/", "category": "pytorch blogs"}} @@ -1490,10 +1494,10 @@ {"page_content": "[`torch.Tensor`](https://pytorch.org/docs/0.4.0/tensors.html) and `torch.autograd.Variable` are now the same class. More precisely, [`torch.Tensor`](https://pytorch.org/docs/0.4.0/tensors.html) is capable of tracking history and behaves like the old `Variable`; `Variable` wrapping continues to work as before but returns an object of type [`torch.Tensor`](https://pytorch.org/docs/0.4.0/tensors.html). This means that you don't need the `Variable` wrapper everywhere in your code anymore.\n\n### The `type()` of a [`Tensor`](https://pytorch.org/docs/0.4.0/tensors.html) has changed\n\nNote also that the `type()` of a Tensor no longer reflects the data type. 
Use `isinstance()` or `x.type()`instead:\n\n```python\n>>> x = torch.DoubleTensor([1, 1, 1])\n>>> print(type(x)) # was torch.DoubleTensor\n\"\"\n>>> print(x.type()) # OK: 'torch.DoubleTensor'\n'torch.DoubleTensor'\n>>> print(isinstance(x, torch.DoubleTensor)) # OK: True\nTrue\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"}} {"page_content": "### When does [`autograd`](https://pytorch.org/docs/0.4.0/autograd.html) start tracking history now?\n\n`requires_grad`, the central flag for [`autograd`](https://pytorch.org/docs/0.4.0/autograd.html), is now an attribute on `Tensors`. The same rules previously used for `Variables` applies to `Tensors`; [`autograd`](https://pytorch.org/docs/0.4.0/autograd.html) starts tracking history when any input `Tensor` of an operation has `requires_grad=True`. For example,", "metadata": {"source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"}} {"page_content": "```python\n>>> x = torch.ones(1) # create a tensor with requires_grad=False (default)\n>>> x.requires_grad\nFalse\n>>> y = torch.ones(1) # another tensor with requires_grad=False\n>>> z = x + y\n>>> # both inputs have requires_grad=False. so does the output\n>>> z.requires_grad\nFalse\n>>> # then autograd won't track this computation. let's verify!\n>>> z.backward()\nRuntimeError: element 0 of tensors does not require grad and does not have a grad_fn\n>>>\n>>> # now create a tensor with requires_grad=True\n>>> w = torch.ones(1, requires_grad=True)\n>>> w.requires_grad\nTrue\n>>> # add to the previous result that has require_grad=False\n>>> total = w + z\n>>> # the total sum now requires grad!\n>>> total.requires_grad\nTrue\n>>> # autograd can compute the gradients as well\n>>> total.backward()\n>>> w.grad\ntensor([ 1.])\n>>> # and no computation is wasted to compute gradients for x, y and z, which don't require grad\n>>> z.grad == x.grad == y.grad == None\nTrue\n```\n\n#### Manipulating `requires_grad` flag", "metadata": {"source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"}} -{"page_content": "Other than directly setting the attribute, you can change this flag `in-place` using [`my_tensor.requires_grad_()`](https://pytorch.org/docs/0.4.0/tensors.html#torch.Tensor.requires_grad_), or, as in the above example, at creation time by passing it in as an argument (default is `False`), e.g.,\n\n```python\n>>> existing_tensor.requires_grad_()\n>>> existing_tensor.requires_grad\nTrue\n>>> my_tensor = torch.zeros(3, 4, requires_grad=True)\n>>> my_tensor.requires_grad\nTrue\n```\n\n### What about `.data?`\n\n`.data` was the primary way to get the underlying `Tensor` from a `Variable`. After this merge, calling `y = x.data` still has similar semantics. 
So `y` will be a `Tensor` that shares the same data with `x`, is unrelated with the computation history of `x`, and has `requires_grad=False`.", "metadata": {"source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"}} +{"page_content": "#### Manipulating `requires_grad` flag\n\nOther than directly setting the attribute, you can change this flag `in-place` using [`my_tensor.requires_grad_()`](https://pytorch.org/docs/0.4.0/tensors.html#torch.Tensor.requires_grad_), or, as in the above example, at creation time by passing it in as an argument (default is `False`), e.g.,\n\n```python\n>>> existing_tensor.requires_grad_()\n>>> existing_tensor.requires_grad\nTrue\n>>> my_tensor = torch.zeros(3, 4, requires_grad=True)\n>>> my_tensor.requires_grad\nTrue\n```\n\n### What about `.data?`\n\n`.data` was the primary way to get the underlying `Tensor` from a `Variable`. After this merge, calling `y = x.data` still has similar semantics. So `y` will be a `Tensor` that shares the same data with `x`, is unrelated with the computation history of `x`, and has `requires_grad=False`.", "metadata": {"source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"}} {"page_content": "However, `.data` can be unsafe in some cases. Any changes on `x.data` wouldn't be tracked by `autograd`, and the computed gradients would be incorrect if `x` is needed in a backward pass. A safer alternative is to use [`x.detach()`](https://pytorch.org/docs/master/autograd.html#torch.Tensor.detach), which also returns a `Tensor` that shares data with `requires_grad=False`, but will have its in-place changes reported by `autograd` if `x` is needed in backward.\n\nHere is an example of the difference between `.data` and `x.detach()` (and why we recommend using `detach` in general).\n\nIf you use `Tensor.detach()`, the gradient computation is guaranteed to be correct.\n\n```python\n>>> a = torch.tensor([1,2,3.], requires_grad = True)\n>>> out = a.sigmoid()\n>>> c = out.detach()\n>>> c.zero_()\ntensor([ 0., 0., 0.])\n\n>>> out # modified by c.zero_() !!\ntensor([ 0., 0., 0.])", "metadata": {"source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"}} {"page_content": ">>> out.sum().backward() # Requires the original value of out, but that was overwritten by c.zero_()\nRuntimeError: one of the variables needed for gradient computation has been modified by an\n```\n\nHowever, using `Tensor.data` can be unsafe and can easily result in incorrect gradients when a tensor is required for gradient computation but modified in-place.\n\n```python\n>>> a = torch.tensor([1,2,3.], requires_grad = True)\n>>> out = a.sigmoid()\n>>> c = out.data\n>>> c.zero_()\ntensor([ 0., 0., 0.])\n\n>>> out # out was modified by c.zero_()\ntensor([ 0., 0., 0.])\n\n>>> out.sum().backward()\n>>> a.grad # The result is very, very wrong because `out` changed!\ntensor([ 0., 0., 0.])\n```\n\n## Support for 0-dimensional (scalar) Tensors", "metadata": {"source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"}} -{"page_content": "Previously, indexing into a `Tensor` vector (1-dimensional tensor) gave a Python number but indexing into a `Variable` vector gave (inconsistently!) a vector of size `(1,)`! Similar behavior existed with reduction functions, e.g. 
`tensor.sum()` would return a Python number, but `variable.sum()` would return a vector of size `(1,)`.\n\nFortunately, this release introduces proper scalar (0-dimensional tensor) support in PyTorch! Scalars can be created using the new `torch.tensor` function (which will be explained in more detail later; for now just think of it as the PyTorch equivalent of `numpy.array`). Now you can do things like:", "metadata": {"source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"}} +{"page_content": "## Support for 0-dimensional (scalar) Tensors\n\nPreviously, indexing into a `Tensor` vector (1-dimensional tensor) gave a Python number but indexing into a `Variable` vector gave (inconsistently!) a vector of size `(1,)`! Similar behavior existed with reduction functions, e.g. `tensor.sum()` would return a Python number, but `variable.sum()` would return a vector of size `(1,)`.\n\nFortunately, this release introduces proper scalar (0-dimensional tensor) support in PyTorch! Scalars can be created using the new `torch.tensor` function (which will be explained in more detail later; for now just think of it as the PyTorch equivalent of `numpy.array`). Now you can do things like:", "metadata": {"source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"}} {"page_content": "```python\n>>> torch.tensor(3.1416) # create a scalar directly\ntensor(3.1416)\n>>> torch.tensor(3.1416).size() # scalar is 0-dimensional\ntorch.Size([])\n>>> torch.tensor([3]).size() # compare to a vector of size 1\ntorch.Size([1])\n>>>\n>>> vector = torch.arange(2, 6) # this is a vector\n>>> vector\ntensor([ 2., 3., 4., 5.])\n>>> vector.size()\ntorch.Size([4])\n>>> vector[3] # indexing into a vector gives a scalar\ntensor(5.)\n>>> vector[3].item() # .item() gives the value as a Python number\n5.0\n>>> mysum = torch.tensor([2, 3]).sum()\n>>> mysum\ntensor(5)\n>>> mysum.size()\ntorch.Size([])\n```\n\n### Accumulating losses\n\nConsider the widely used pattern `total_loss += loss.data[0]`. Before 0.4.0. `loss` was a `Variable` wrapping a tensor of size `(1,)`, but in 0.4.0 `loss` is now a scalar and has `0` dimensions. Indexing into a scalar doesn't make sense (it gives a warning now, but will be a hard error in 0.5.0). Use `loss.item()` to get the Python number from a scalar.", "metadata": {"source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"}} {"page_content": "Note that if you don't convert to a Python number when accumulating losses, you may find increased memory usage in your program. This is because the right-hand-side of the above expression used to be a Python float, while it is now a zero-dim Tensor. The total loss is thus accumulating Tensors and their gradient history, which may keep around large autograd graphs for much longer than necessary.\n\n## Deprecation of volatile flag\n\nThe `volatile` flag is now deprecated and has no effect. Previously, any computation that involves a `Variable` with `volatile=True` wouldn't be tracked by `autograd`. This has now been replaced by a [set of more flexible context managers](https://pytorch.org/docs/0.4.0/torch.html#locally-disabling-gradient-computation) including `torch.no_grad()`, `torch.set_grad_enabled(grad_mode)`, and others.", "metadata": {"source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"}} {"page_content": "```python\n>>> x = torch.zeros(1, requires_grad=True)\n>>> with torch.no_grad():\n... 
y = x * 2\n>>> y.requires_grad\nFalse\n>>>\n>>> is_train = False\n>>> with torch.set_grad_enabled(is_train):\n... y = x * 2\n>>> y.requires_grad\nFalse\n>>> torch.set_grad_enabled(True) # this can also be used as a function\n>>> y = x * 2\n>>> y.requires_grad\nTrue\n>>> torch.set_grad_enabled(False)\n>>> y = x * 2\n>>> y.requires_grad\nFalse\n```\n\n## [`dtypes`](https://pytorch.org/docs/0.4.0/tensor_attributes.html#torch.torch.dtype), [`devices`](https://pytorch.org/docs/0.4.0/tensor_attributes.html#torch.torch.device) and NumPy-style creation functions", "metadata": {"source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"}} @@ -1541,8 +1545,8 @@ {"page_content": "Activation checkpointing is where the intermediate activations are freed during the forward pass, and a checkpoint is left as a placeholder. This generally increases available GPU memory by over 30%.\n\nThe tradeoff is that during the backward pass, these previously removed intermediate activations must be re-calculated again using information in the checkpoint (duplicate compute), but by leveraging the increased GPU memory, one can increase the batch size such that the net throughput can increase substantially.\n\n```python\n# verify we have FSDP activation support ready by importing:\nfrom torch.distributed.algorithms._checkpoint.checkpoint_wrapper import (\n checkpoint_wrapper,\n CheckpointImpl,\n apply_activation_checkpointing_wrapper,\n)\n```\n\n\nThe steps required to implement activation checkpointing is to first import the FSDP checkpointing functions. We need declare our checkpointer wrapper type which is non-reentrant and create a check function to identify which layer to wrap as follows", "metadata": {"source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "```python\nnon_reentrant_wrapper = partial(\n checkpoint_wrapper,\n offload_to_cpu=False,\n checkpoint_impl=CheckpointImpl.NO_REENTRANT,\n)\ncheck_fn = lambda submodule: isinstance(submodule, T5Block)\n```\n\n```python\napply_activation_checkpointing_wrapper(\n model, checkpoint_wrapper_fn=non_reentrant_wrapper, check_fn=check_fn\n )\n```\n\n_Important note - this must be run after the model has been initialized with FSDP._\n\nHowever, hopefully you\u2019ve seen how some initial tuning with FSDP options can have a large impact on your training performance. 
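Putting the snippets above together, the ordering is the important part: wrap the model with FSDP first, then apply the checkpoint wrapper to the target submodules. Below is a minimal sketch under stated assumptions: it presumes a distributed process group has already been initialized (e.g., via `torchrun`), and it substitutes a toy `nn.Linear` stack for the real T5 model, so the `FSDP` import and the stand-in model are ours for illustration rather than part of the original snippets.

```python
from functools import partial

import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.algorithms._checkpoint.checkpoint_wrapper import (
    checkpoint_wrapper,
    CheckpointImpl,
    apply_activation_checkpointing_wrapper,
)

# stand-in for a real T5 model; the post wraps T5Block submodules instead of nn.Linear
model = nn.Sequential(*[nn.Linear(16, 16) for _ in range(4)])

# 1) shard the model with FSDP first (assumes torch.distributed is already initialized)
model = FSDP(model)

# 2) only after FSDP wrapping, apply non-reentrant activation checkpointing to the chosen layers
non_reentrant_wrapper = partial(
    checkpoint_wrapper,
    offload_to_cpu=False,
    checkpoint_impl=CheckpointImpl.NO_REENTRANT,
)
apply_activation_checkpointing_wrapper(
    model,
    checkpoint_wrapper_fn=non_reentrant_wrapper,
    check_fn=lambda submodule: isinstance(submodule, nn.Linear),
)
```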
\n\nWith that, we turn our attention from how to scale within FSDP, to how to scale your server hardware for FSDP using AWS.\n\n**Large Scale Training with FSDP on AWS - _For multi-node prioritize high speed network_**", "metadata": {"source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "AWS provides several services that can be used to run distributed training with FSDP: [Amazon EC2 Accelerated Computing instances](https://aws.amazon.com/ec2/instance-types/#Accelerated_Computing), AWS [ParallelCluster](https://aws.amazon.com/hpc/parallelcluster/), and Amazon [Sagemaker](https://aws.amazon.com/sagemaker/features/?nc=sn&loc=2).\n\nIn this series of blog posts, we used [Amazon EC2 p4d](https://aws.amazon.com/ec2/instance-types/p4/) instances in a single-instance multi-GPU configuration and in a multi-instance configuration using AWS [ParallelCluster](https://aws.amazon.com/hpc/parallelcluster/) and SageMaker in order to run our training jobs.\n\nHere, we\u2019ll focus specifically on AWS parallel cluster and provide an overview of how to utilize it for training purposes.\n\n**AWS ParallelCluster Setup**", "metadata": {"source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"}} -{"page_content": "

AWS ParallelCluster is an open source, cluster management tool that makes it easy for you to deploy and manage High Performance Computing (HPC) clusters on AWS. AWS ParallelCluster uses yaml configuration files to provision all the necessary resources. It also supports multiple instance types, job submission queues, shared file systems like Amazon EFS (NFS) or Amazon FSx for Lustre, and job schedulers like AWS Batch and Slurm.

\n\n

\n\n

\n\n**Workflow on Clusters**", "metadata": {"source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"}} -{"page_content": "The high level idea is to have a cluster that has a head node which controls the compute nodes. The actual training job runs on the compute nodes. Overall steps to run a training job on a cluster are as follows:\n\n1. Set up an AWS ParallelCuster (we discuss below)\n2. Connect to the head node, and import the training code/ setup the environment.\n3. Pull the data and place it in a shared folder that compute nodes can access (FSx Lustre drive).\n4. Run the training job using a job scheduler (in this case Slurm).\n\n**Setup AWS ParallelCuster**\n\nTo setup AWS ParallelCluster,\n\n1. **Deploy a network stack.** This step is optional since you could use your account default VPC and let AWS ParallelCluster create your subnets and security groups. However, we prefer to compartmentalize our desired network infrastructure and do this deployment via a CloudFormation stack.", "metadata": {"source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"}} +{"page_content": "**AWS ParallelCluster Setup**\n\n

AWS ParallelCluster is an open source, cluster management tool that makes it easy for you to deploy and manage High Performance Computing (HPC) clusters on AWS. AWS ParallelCluster uses yaml configuration files to provision all the necessary resources. It also supports multiple instance types, job submission queues, shared file systems like Amazon EFS (NFS) or Amazon FSx for Lustre, and job schedulers like AWS Batch and Slurm.

\n\n

\n\n

", "metadata": {"source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"}} +{"page_content": "**Workflow on Clusters**\n\nThe high level idea is to have a cluster that has a head node which controls the compute nodes. The actual training job runs on the compute nodes. Overall steps to run a training job on a cluster are as follows:\n\n1. Set up an AWS ParallelCuster (we discuss below)\n2. Connect to the head node, and import the training code/ setup the environment.\n3. Pull the data and place it in a shared folder that compute nodes can access (FSx Lustre drive).\n4. Run the training job using a job scheduler (in this case Slurm).\n\n**Setup AWS ParallelCuster**\n\nTo setup AWS ParallelCluster,\n\n1. **Deploy a network stack.** This step is optional since you could use your account default VPC and let AWS ParallelCluster create your subnets and security groups. However, we prefer to compartmentalize our desired network infrastructure and do this deployment via a CloudFormation stack.", "metadata": {"source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "Since we deploy a public and a private subnet, we want to create them into an Availability Zone that contains our target instances, in this case p4d. We consult their availability in the region we use (us-east-1) through the following AWS CLI command:\n\n `aws ec2 describe-instance-type-offerings --location-type availability-zone \\ --filters Name=instance-type,Values=p4d.24xlarge --region us-east-1 --output table`\n\n We see three availability zones containing p4d instances, we pick one of them (`us-east-1c`, yours may be different) when deploying our network stack. This can be done with the AWS Console or the AWS CLI. In our case we use the latter as follows\n\n `aws cloudformation create-stack --stack-name VPC-Large-Scale --capabilities CAPABILITY_IAM --template-body file://VPC-Large-Scale.yaml --parameters ParameterKey=SubnetsAZ,ParameterValue=us-east-1c`", "metadata": {"source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "CloudFormation will deploy our new VPC, subnets, security groups and endpoints on our behalf. Once done, you can retrieve the IDs of the public and private subnets by querying the stack outputs and the values `PublicSubnet` and `PrivateSubnet`.\n\n For example, using the AWS CLI for the private subnet:\n\n `aws cloudformation describe-stacks --stack-name VPC-Large-Scale --query \"Stacks[0].Outputs[?OutputKey=='PrivateSubnet'].OutputValue\" --output text`\n\n2. **Create ParallelCluster,** The cluster configuration file specifies the resources for our cluster. These resources include instance type for Head node, compute nodes, access to S3 buckets, shared storage where our data will be located. We will use Amazon FSx for Lustre that offers a fully managed shared storage service with [Lustre]().", "metadata": {"source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"}} {"page_content": "[Here](https://github.com/lessw2020/t5_11/blob/main/hpc-cluster/cluster.yaml) is an example of a cluster configuration file. We can use AWs ParallelCluster CLI to create the cluster. Please note that the private and public subnet IDs will need to be replaced by the ones you retrieved earlier. 
You will be able to control the cluster using the AWS ParallelCluster CLI to start, stop, pause, etc.\n\n ```\n pcluster create-cluster --cluster-name my-hpc-cluster --cluster-configuration cluster.yaml\n ```\n\n3. **SSH to Head node -** once the cluster is ready, we can connect to the Head node using the SSH protocol, pull our training code with and place the data in the shared storage specified in the cluster configuration file.\n\n pcluster ssh --cluster-name cluster -i your-key_pair", "metadata": {"source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"}} @@ -1559,14 +1563,14 @@ {"page_content": "We made the following observations from Figure 4:\n\n- The streaming multiprocessor (SM) runs the model\u2019s CUDA kernels. Its utilization [1] is 9.1%, indicating that the parallel compute units on the GPU are not well utilized.\n- Tensor Core utilization is 0, meaning that Tensor Core (the mixed-precision compute unit on GPU) [2] is not used at all.\n- Max GPU memory utilization is 47.13%, indicating that half of the GPU memory is left unused.\n\n#### Step 4:\n\nCollect a GPU trace (aka Kineto trace) of the training loop as shown in Figure 5.\n\n

\n \n

\n\n

\nFigure 5: A GPU trace (aka Kineto trace) of the training loop.\n

", "metadata": {"source": "https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/", "category": "pytorch blogs"}} {"page_content": "Since commonly used PyTorch functions are already annotated, their names are automatically shown on the trace. With them, we can roughly divide the trace into the four phases in a training iteration: (1) data loading, (2) forward pass, (3) backward pass, (4) gradient optimization (note: In Figure 5, the \u201coptimizer\u201d phase is from the previous batch while the other three phases are from the current batch).\n\n### 2.2 Optimizations\n\nWe performed four simple optimizations that target the bottlenecks identified above, each requiring only a change in a config parameter or at most a few source lines. They are listed in Figure 6.", "metadata": {"source": "https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/", "category": "pytorch blogs"}} {"page_content": "| Optimization | Amount of changes | Bottlenecks addressed |\n| ------------ | ----------------- | --------------------- |\n|Tune `num_worker_threads` by trying a few possible values within the number of CPU cores on each host. | 1 source line | GPU totally idle time |\n| Double the batch sizes | 2 config parameters | GPU memory under-utilization |\n| Use [automatic mixed precision](https://pytorch.org/tutorials/recipes/recipes/amp_recipe.html) in PyTorch | 13 source lines | Zero Tensor Core utilization |\n| Use [mulitensor optimizer](https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html#torch.optim.AdamW) in PyTorch | 1 source line | Many small GPU kernels in the optimizer |\n\n

\nFigure 6: Four simple optimizations applied.\n

\n\n## 3. Concluding Remarks", "metadata": {"source": "https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/", "category": "pytorch blogs"}} -{"page_content": "Performance tuning for PyTorch in production environments is increasingly important. A capable performance-debugging tool is a key to this process. We demonstrate with a case study on a production model that MAIProf is a powerful infrastructure for identifying optimization opportunities. \n\nAt Meta, MAIProf has been used by 100s of engineers, from performance novices to experts, to identify many more types of bottlenecks. These include slow data loading, small and/or slow GPU kernels, distributed training issues such as load imbalance and excessive communication. MAIProf covers major classes of models, including recommendation, vision, and natural language processing. In summary, it is now an indispensable tool for tuning the performance of production PyTorch workloads.\n\n## References", "metadata": {"source": "https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/", "category": "pytorch blogs"}} +{"page_content": "## 3. Concluding Remarks\n\nPerformance tuning for PyTorch in production environments is increasingly important. A capable performance-debugging tool is a key to this process. We demonstrate with a case study on a production model that MAIProf is a powerful infrastructure for identifying optimization opportunities. \n\nAt Meta, MAIProf has been used by 100s of engineers, from performance novices to experts, to identify many more types of bottlenecks. These include slow data loading, small and/or slow GPU kernels, distributed training issues such as load imbalance and excessive communication. MAIProf covers major classes of models, including recommendation, vision, and natural language processing. In summary, it is now an indispensable tool for tuning the performance of production PyTorch workloads.\n\n## References", "metadata": {"source": "https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/", "category": "pytorch blogs"}} {"page_content": "## References\n\n[1] [https://docs.nvidia.com/gameworks/content/developertools/desktop/analysis/report/ cudaexperiments/kernellevel/achievedoccupancy.htm](https://docs.nvidia.com/gameworks/content/developertools/desktop/analysis/report/cudaexperiments/kernellevel/achievedoccupancy.htm)\n\n[2] [https://www.nvidia.com/en-us/data-center/tensor-cores/](https://www.nvidia.com/en-us/data-center/tensor-cores/)", "metadata": {"source": "https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'The torch.fft module: Accelerated Fast Fourier Transforms with Autograd in PyTorch'\nauthor: Mike Ruberry, Peter Bell, and Joe Spisak \n---\n\nThe Fast Fourier Transform (FFT) calculates the Discrete Fourier Transform in O(n log n) time. It is foundational to a wide variety of numerical algorithms and signal processing techniques since it makes working in signals\u2019 \u201cfrequency domains\u201d as tractable as working in their spatial or temporal domains.\n\nAs part of PyTorch\u2019s goal to support hardware-accelerated deep learning and scientific computing, we have invested in improving our FFT support, and with PyTorch 1.8, we are releasing the ``torch.fft`` module. 
This module implements the same functions as NumPy\u2019s ``np.fft`` module, but with support for accelerators, like GPUs, and autograd. \n\n## Getting started", "metadata": {"source": "https://pytorch.org/blog/the-torch.fft-module-accelerated-fast-fourier-transforms-with-autograd-in-pyTorch/", "category": "pytorch blogs"}} {"page_content": "## Getting started\n\nGetting started with the new ``torch.fft`` module is easy whether you are familiar with NumPy\u2019s ``np.fft`` module or not. While complete documentation for each function in the module can be found [here](https://pytorch.org/docs/1.8.0/fft.html), a breakdown of what it offers is:\n\n* ``fft``, which computes a complex FFT over a single dimension, and ``ifft``, its inverse\n* the more general ``fftn`` and ``ifftn``, which support multiple dimensions\n* The \u201creal\u201d FFT functions, ``rfft``, ``irfft``, ``rfftn``, ``irfftn``, designed to work with signals that are real-valued in their time domains\n* The \"Hermitian\" FFT functions, ``hfft`` and ``ihfft``, designed to work with signals that are real-valued in their frequency domains\n* Helper functions, like ``fftfreq``, ``rfftfreq``, ``fftshift``, ``ifftshift``, that make it easier to manipulate signals", "metadata": {"source": "https://pytorch.org/blog/the-torch.fft-module-accelerated-fast-fourier-transforms-with-autograd-in-pyTorch/", "category": "pytorch blogs"}} {"page_content": "We think these functions provide a straightforward interface for FFT functionality, as vetted by the NumPy community, although we are always interested in feedback and suggestions!\n\nTo better illustrate how easy it is to move from NumPy\u2019s ``np.fft`` module to PyTorch\u2019s ``torch.fft`` module, let\u2019s look at a NumPy implementation of a simple low-pass filter that removes high-frequency variance from a 2-dimensional image, a form of noise reduction or blurring:\n\n```python\nimport numpy as np\nimport numpy.fft as fft\n\ndef lowpass_np(input, limit):\n pass1 = np.abs(fft.rfftfreq(input.shape[-1])) < limit\n pass2 = np.abs(fft.fftfreq(input.shape[-2])) < limit\n kernel = np.outer(pass2, pass1)\n \n fft_input = fft.rfft2(input)\n return fft.irfft2(fft_input * kernel, s=input.shape[-2:])\n```\n\nNow let\u2019s see the same filter implemented in PyTorch:\n\n```python\nimport torch\nimport torch.fft as fft", "metadata": {"source": "https://pytorch.org/blog/the-torch.fft-module-accelerated-fast-fourier-transforms-with-autograd-in-pyTorch/", "category": "pytorch blogs"}} -{"page_content": "def lowpass_torch(input, limit):\n pass1 = torch.abs(fft.rfftfreq(input.shape[-1])) < limit\n pass2 = torch.abs(fft.fftfreq(input.shape[-2])) < limit\n kernel = torch.outer(pass2, pass1)\n \n fft_input = fft.rfft2(input)\n return fft.irfft2(fft_input * kernel, s=input.shape[-2:])\n```\n\nNot only do current uses of NumPy\u2019s ``np.fft`` module translate directly to ``torch.fft``, the ``torch.fft`` operations also support tensors on accelerators, like GPUs and autograd. 
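As a quick illustration of the autograd support, gradients flow straight through the FFT-based filter defined above (a small sketch; the input shape and `limit` value are arbitrary):

```python
import torch

# gradients propagate through the rfft2/irfft2 calls inside lowpass_torch
image = torch.randn(128, 128, requires_grad=True)
filtered = lowpass_torch(image, limit=0.1)
filtered.sum().backward()
print(image.grad.shape)  # torch.Size([128, 128])
```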
This makes it possible to (among other things) develop new neural network modules using the FFT.\n\n\n## Performance", "metadata": {"source": "https://pytorch.org/blog/the-torch.fft-module-accelerated-fast-fourier-transforms-with-autograd-in-pyTorch/", "category": "pytorch blogs"}} +{"page_content": "```python\nimport torch\nimport torch.fft as fft\n\ndef lowpass_torch(input, limit):\n pass1 = torch.abs(fft.rfftfreq(input.shape[-1])) < limit\n pass2 = torch.abs(fft.fftfreq(input.shape[-2])) < limit\n kernel = torch.outer(pass2, pass1)\n \n fft_input = fft.rfft2(input)\n return fft.irfft2(fft_input * kernel, s=input.shape[-2:])\n```\n\nNot only do current uses of NumPy\u2019s ``np.fft`` module translate directly to ``torch.fft``, the ``torch.fft`` operations also support tensors on accelerators, like GPUs and autograd. This makes it possible to (among other things) develop new neural network modules using the FFT.\n\n\n## Performance", "metadata": {"source": "https://pytorch.org/blog/the-torch.fft-module-accelerated-fast-fourier-transforms-with-autograd-in-pyTorch/", "category": "pytorch blogs"}} {"page_content": "## Performance\n\nThe ``torch.fft`` module is not only easy to use \u2014 it is also fast! PyTorch natively supports Intel\u2019s MKL-FFT library on Intel CPUs, and NVIDIA\u2019s cuFFT library on CUDA devices, and we have carefully optimized how we use those libraries to maximize performance. While your own results will depend on your CPU and CUDA hardware, computing Fast Fourier Transforms on CUDA devices can be many times faster than computing it on the CPU, especially for larger signals.\n\nIn the future, we may add support for additional math libraries to support more hardware. See below for where you can request additional hardware support.\n\n## Updating from older PyTorch versions", "metadata": {"source": "https://pytorch.org/blog/the-torch.fft-module-accelerated-fast-fourier-transforms-with-autograd-in-pyTorch/", "category": "pytorch blogs"}} -{"page_content": "Some PyTorch users might know that older versions of PyTorch also offered FFT functionality with the ``torch.fft()`` function. Unfortunately, this function had to be removed because its name conflicted with the new module\u2019s name, and we think the new functionality is the best way to use the Fast Fourier Transform in PyTorch. In particular, ``torch.fft()`` was developed before PyTorch supported complex tensors, while the ``torch.fft`` module was designed to work with them.\n\nPyTorch also has a \u201cShort Time Fourier Transform\u201d, ``torch.stft``, and its inverse ``torch.istft``. These functions are being kept but updated to support complex tensors. \n\n## Future\n\nAs mentioned, PyTorch 1.8 offers the torch.fft module, which makes it easy to use the Fast Fourier Transform (FFT) on accelerators and with support for autograd. We encourage you to try it out!", "metadata": {"source": "https://pytorch.org/blog/the-torch.fft-module-accelerated-fast-fourier-transforms-with-autograd-in-pyTorch/", "category": "pytorch blogs"}} +{"page_content": "## Updating from older PyTorch versions\n\nSome PyTorch users might know that older versions of PyTorch also offered FFT functionality with the ``torch.fft()`` function. Unfortunately, this function had to be removed because its name conflicted with the new module\u2019s name, and we think the new functionality is the best way to use the Fast Fourier Transform in PyTorch. 
In particular, ``torch.fft()`` was developed before PyTorch supported complex tensors, while the ``torch.fft`` module was designed to work with them.\n\nPyTorch also has a \u201cShort Time Fourier Transform\u201d, ``torch.stft``, and its inverse ``torch.istft``. These functions are being kept but updated to support complex tensors. \n\n## Future\n\nAs mentioned, PyTorch 1.8 offers the torch.fft module, which makes it easy to use the Fast Fourier Transform (FFT) on accelerators and with support for autograd. We encourage you to try it out!", "metadata": {"source": "https://pytorch.org/blog/the-torch.fft-module-accelerated-fast-fourier-transforms-with-autograd-in-pyTorch/", "category": "pytorch blogs"}} {"page_content": "While this module has been modeled after NumPy\u2019s ``np.fft`` module so far, we are not stopping there. We are eager to hear from you, our community, on what FFT-related functionality you need, and we encourage you to create posts on our forums at [https://discuss.pytorch.org/](https://discuss.pytorch.org/), or [file issues on our Github](https://github.com/pytorch/pytorch/issues/new?assignees=&labels=&template=feature-request.md) with your feedback and requests. Early adopters have already started asking about Discrete Cosine Transforms and support for more hardware platforms, for example, and we are investigating those features now.\n\nWe look forward to hearing from you and seeing what the community does with PyTorch\u2019s new FFT functionality!", "metadata": {"source": "https://pytorch.org/blog/the-torch.fft-module-accelerated-fast-fourier-transforms-with-autograd-in-pyTorch/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: \"PyTorch Internals Part II - The Build System\"\nauthor: \"Trevor Killeen\"\ndate: 2017-06-27 12:00:00 -0500\nredirect_from: /2017/06/27/Internals2.html\n---\n\nIn the first [post]({{ site.baseurl }}{% link _posts/2017-5-11-a-tour-of-pytorch-internals-1.md %}) I explained how we generate a `torch.Tensor` object that you can use in your Python interpreter. Next, I will explore the build system for PyTorch. The PyTorch codebase has a variety of components:\n\n - The core Torch libraries: TH, THC, THNN, THCUNN\n - Vendor libraries: CuDNN, NCCL\n - Python Extension libraries\n - Additional third-party libraries: NumPy, MKL, LAPACK\n\nHow does a simple invocation of `python setup.py install` do the work that allows you to call `import torch` and use the PyTorch library in your code?", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"}} {"page_content": "The first part of this document will explain the build process from and end-user point of view. This will explain how we take the components above to build the library. The second part of the document will be important for PyTorch developers. It will document ways to improve your iteration speed by building only a subset of the code that you are working on.\n\n### Setuptools and PyTorch's setup( ) function\n\nPython uses [Setuptools](https://setuptools.readthedocs.io/en/latest/index.html) to build the library. Setuptools is an extension to the original distutils system from the core Python library. The core component of Setuptools is the `setup.py` file which contains all the information needed to build the project. The most important function is the `setup()` function which serves as the main entry point. 
Let's take a look at the one in PyTorch:", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"}} @@ -1582,7 +1586,7 @@ {"page_content": "```bash\n(p3) killeent@devgpu047:lib (master)$ ldd libTHC.so.1\n\t...\n\tlibTH.so.1 => /home/killeent/github/pytorch/torch/lib/tmp_install/lib/./libTH.so.1 (0x00007f84478b7000)\n```\n\nThe way the `build_all.sh` specifies the include and library paths is a little messy but this is representative of the overall idea. Finally, at the end of the script:\n\n```bash\n# If all the builds succeed we copy the libraries, headers,\n# binaries to torch/lib\ncp $INSTALL_DIR/lib/* .\ncp THNN/generic/THNN.h .\ncp THCUNN/generic/THCUNN.h .\ncp -r $INSTALL_DIR/include .\ncp $INSTALL_DIR/bin/* .\n```\n\nAs we can see, at the end, we copy everything to the top-level `torch/lib` directory - explaining the contents we saw above. We'll see why we do this next:\n\n### NN Wrappers", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"}} {"page_content": "### NN Wrappers\n\nBriefly, let's touch on the last part of the `build_deps` command: `generate_nn_wrappers()`. We bind into the backend libraries using PyTorch's custom `cwrap` tooling, which we touched upon in a previous post. For binding `TH` and `THC` we manually write the YAML declarations for each function. However, due to the relative simplicity of the `THNN` and `THCUNN` libraries, we auto-generate both the cwrap declarations and the resulting C++ code.\n\nThe reason we copy the `THNN.h` and `THCUNN.h` header files into `torch/lib` is that this is where the `generate_nn_wrappers()` code expects these files to be located. `generate_nn_wrappers()` does a few things:", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"}} {"page_content": "1. Parses the header files, generating cwrap YAML declarations and writing them to output `.cwrap` files\n2. Calls `cwrap` with the appropriate plugins on these `.cwrap` files to generate source code for each\n3. 
Parses the headers *a second time* to generate `THNN_generic.h` - a library that takes `THPP` Tensors, PyTorch's \"generic\" C++ Tensor Library, and calls into the appropriate `THNN`/`THCUNN` library function based on the dynamic type of the Tensor\n\nIf we take a look into `torch/csrc/nn` after running `generate_nn_wrappers()` we can see the output:\n\n```bash\n(p3) killeent@devgpu047:nn (master)$ ls\nTHCUNN.cpp THCUNN.cwrap THNN.cpp THNN.cwrap THNN_generic.cpp THNN_generic.cwrap THNN_generic.h THNN_generic.inc.h\n```\n\nFor example, the code generates cwrap like:", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"}} -{"page_content": "```\n[[\n name: FloatBatchNormalization_updateOutput\n return: void\n cname: THNN_FloatBatchNormalization_updateOutput\n arguments:\n - void* state\n - THFloatTensor* input\n - THFloatTensor* output\n - type: THFloatTensor*\n name: weight\n nullable: True\n - type: THFloatTensor*\n name: bias\n nullable: True\n - THFloatTensor* running_mean\n - THFloatTensor* running_var\n - THFloatTensor* save_mean\n - THFloatTensor* save_std\n - bool train\n - double momentum\n - double eps\n]]\n```\n\nwith corresponding `.cpp`:\n\n```cpp\nextern \"C\" void THNN_FloatBatchNormalization_updateOutput(void*, THFloatTensor*, THFloatTensor*, THFloatTensor*, THFloatTensor*, THFloatTensor*, THFloatTensor*, THFloatTensor*, THFloatTensor*, bool, double, double);", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"}} +{"page_content": "For example, the code generates cwrap like:\n\n```\n[[\n name: FloatBatchNormalization_updateOutput\n return: void\n cname: THNN_FloatBatchNormalization_updateOutput\n arguments:\n - void* state\n - THFloatTensor* input\n - THFloatTensor* output\n - type: THFloatTensor*\n name: weight\n nullable: True\n - type: THFloatTensor*\n name: bias\n nullable: True\n - THFloatTensor* running_mean\n - THFloatTensor* running_var\n - THFloatTensor* save_mean\n - THFloatTensor* save_std\n - bool train\n - double momentum\n - double eps\n]]\n```\n\nwith corresponding `.cpp`:\n\n```cpp\nextern \"C\" void THNN_FloatBatchNormalization_updateOutput(void*, THFloatTensor*, THFloatTensor*, THFloatTensor*, THFloatTensor*, THFloatTensor*, THFloatTensor*, THFloatTensor*, THFloatTensor*, bool, double, double);", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"}} {"page_content": "PyObject * FloatBatchNormalization_updateOutput(PyObject *_unused, PyObject *args) {\n\t// argument checking, unpacking\n\t PyThreadState *_save = NULL;\n try {\n Py_UNBLOCK_THREADS;\n THNN_FloatBatchNormalization_updateOutput(arg_state, arg_input, arg_output, arg_weight, arg_bias, arg_running_mean, arg_running_var, arg_save_mean, arg_save_std, arg_train, arg_momentum, arg_eps);\n Py_BLOCK_THREADS;\n Py_RETURN_NONE;\n } catch (...) 
{\n if (_save) {\n Py_BLOCK_THREADS;\n }\n throw;\n }\n\n ...\n}\n```\n\nIn the `THPP` generated code, the function looks like this:\n\n```cpp\nvoid BatchNormalization_updateOutput(thpp::Tensor* input, thpp::Tensor* output, thpp::Tensor* weight, thpp::Tensor* bias, thpp::Tensor* running_mean, thpp::Tensor* running_var, thpp::Tensor* save_mean, thpp::Tensor* save_std, bool train, double momentum, double eps) {\n\t// Call appropriate THNN function based on tensor type, whether its on CUDA, etc.\n}\n```", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"}} {"page_content": "We will look a little more at how these source files are used later.\n\n### \"Building\" the Pure Python Modules\n\nNow that we have built the backend libraries (the \"dependencies\") we can move forward with building the actual PyTorch code. The next Setuptools command that runs is `build_py`, which is used to build all the \"Pure\" python modules in our library. These are the \"packages\" passed to `setup.py`.\n\nThe packages are found using the Setuptools' utility function `find_packages()`:", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"}} {"page_content": "```python\npackages = find_packages(exclude=('tools.*',))\n['torch', 'torch._thnn', 'torch.autograd', 'torch.backends', 'torch.cuda', 'torch.distributed', 'torch.legacy', 'torch.multiprocessing', 'torch.nn', 'torch.optim', 'torch.sparse', 'torch.utils', 'torch.autograd._functions', 'torch.backends.cudnn', 'torch.legacy.nn', 'torch.legacy.optim', 'torch.nn._functions', 'torch.nn.backends', 'torch.nn.modules', 'torch.nn.parallel', 'torch.nn.utils', 'torch.nn._functions.thnn', 'torch.utils.data', 'torch.utils.ffi', 'torch.utils.serialization', 'torch.utils.trainer', 'torch.utils.backcompat', 'torch.utils.trainer.plugins']\n```\n\nAs we can see, `find_package` has recursively traversed the `torch` directory, finding all the directory paths that have an `__init__.py` file.", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"}} @@ -1590,7 +1594,7 @@ {"page_content": "```bash\ncopying torch/autograd/_functions/blas.py -> build/lib.linux-x86_64-3.6/torch/autograd/_functions\n```\n\nWe also noted earlier that we could pass files and directories to the `package_data` keyword argument to the main `setup()` function, and that Setuptools would handle copying those files to the installation location. During `build_py`, these files are copied to the `build/` directory, so we also see lines like:\n\n```bash\ncopying torch/lib/libTH.so.1 -> build/lib.linux-x86_64-3.6/torch/lib\n...\ncopying torch/lib/include/THC/generic/THCTensor.h -> build/lib.linux-x86_64-3.6/torch/lib/include/THC/generic\n```\n\n### Building the Extension Modules\n\nFinally, we need to build the Extension Modules, i.e. the PyTorch modules written in C++ using the CPython backend. This also constitutes the majority of the code logic in `setup.py`. 
Our overridden `build_ext` Command has some special logic before the extensions themselves are actually built:", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"}} {"page_content": "```python\nfrom tools.cwrap import cwrap\nfrom tools.cwrap.plugins.THPPlugin import THPPlugin\nfrom tools.cwrap.plugins.ArgcountSortPlugin import ArgcountSortPlugin\nfrom tools.cwrap.plugins.AutoGPU import AutoGPU\nfrom tools.cwrap.plugins.BoolOption import BoolOption\nfrom tools.cwrap.plugins.KwargsPlugin import KwargsPlugin\nfrom tools.cwrap.plugins.NullableArguments import NullableArguments\nfrom tools.cwrap.plugins.CuDNNPlugin import CuDNNPlugin\nfrom tools.cwrap.plugins.WrapDim import WrapDim\nfrom tools.cwrap.plugins.AssertNDim import AssertNDim\nfrom tools.cwrap.plugins.Broadcast import Broadcast\nfrom tools.cwrap.plugins.ProcessorSpecificPlugin import ProcessorSpecificPlugin\n thp_plugin = THPPlugin()\n cwrap('torch/csrc/generic/TensorMethods.cwrap', plugins=[\n ProcessorSpecificPlugin(), BoolOption(), thp_plugin,\n AutoGPU(condition='IS_CUDA'), ArgcountSortPlugin(), KwargsPlugin(),\n AssertNDim(), WrapDim(), Broadcast()\n ])\n cwrap('torch/csrc/cudnn/cuDNN.cwrap', plugins=[\n CuDNNPlugin(), NullableArguments()\n ])\n```", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"}} {"page_content": "Recall above that I documented that we auto-generated C++ code for calling into the `THNN` etc. libraries. Here is where we bind `TH`, `THC` and `CuDNN`. We take the YAML declarations in `TensorMethods.cwrap`, and use them to generate output C++ source files that contain implementations that work within PyTorch's C++ Ecosystem. For example, a simple declaration like zero_:\n\n```\n[[\n name: zero_\n cname: zero\n return: self\n arguments:\n - THTensor* self\n]]\n```\n\nGenerates code like:\n\n```cpp\n PyObject * THPTensor_(zero_)(PyObject *self, PyObject *args, PyObject *kwargs) {\n\t...\n\tTHTensor_(zero)(LIBRARY_STATE arg_self);\n\t...\n}\n```\n\nIn the previous post we documented how these functions are tied to specific Tensor types, so I won't expand on that there. For the build process its enough to know that these C++ files are generated prior to the extension being built, because these source files are used during Extension compilation.\n\n### Specifying the Extensions", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"}} -{"page_content": "Unlike pure modules, it\u2019s not enough just to list modules or packages and expect the Setuptools to go out and find the right files; you have to specify the extension name, source file(s), and any compile/link requirements (include directories, libraries to link with, etc.).\n\nThe bulk (200~ LOC at the time of this writing) of the `setup.py` goes into specifying how to build these Extensions. Here, some of the choices we make in `build_all.sh` begin to make sense. For example, we saw that our build script specified a `tmp_install` directory where we installed our backend libraries. 
In our `setup.py` code, we reference this directory when adding to the list of directories containing header files to include:\n\n```python\n# tmp_install_path is torch/lib/tmp_install\ninclude_dirs += [\n cwd,\n os.path.join(cwd, \"torch\", \"csrc\"),\n tmp_install_path + \"/include\",\n tmp_install_path + \"/include/TH\",\n tmp_install_path + \"/include/THPP\",\n tmp_install_path + \"/include/THNN\",\n```", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"}} +{"page_content": "### Specifying the Extensions\n\nUnlike pure modules, it\u2019s not enough just to list modules or packages and expect the Setuptools to go out and find the right files; you have to specify the extension name, source file(s), and any compile/link requirements (include directories, libraries to link with, etc.).\n\nThe bulk (200~ LOC at the time of this writing) of the `setup.py` goes into specifying how to build these Extensions. Here, some of the choices we make in `build_all.sh` begin to make sense. For example, we saw that our build script specified a `tmp_install` directory where we installed our backend libraries. In our `setup.py` code, we reference this directory when adding to the list of directories containing header files to include:\n\n```python\n# tmp_install_path is torch/lib/tmp_install\ninclude_dirs += [\n cwd,\n os.path.join(cwd, \"torch\", \"csrc\"),\n tmp_install_path + \"/include\",\n tmp_install_path + \"/include/TH\",\n tmp_install_path + \"/include/THPP\",\n tmp_install_path + \"/include/THNN\",\n```", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"}} {"page_content": "Similarly, we copied the shared object libraries to `torch/csrc` at the end of the `build_all.sh` script. We reference these locations directly in our `setup.py` code when identifying libraries that we may link against:\n\n```python\n# lib_path is torch/lib\nTH_LIB = os.path.join(lib_path, 'libTH.so.1')\nTHS_LIB = os.path.join(lib_path, 'libTHS.so.1')\nTHC_LIB = os.path.join(lib_path, 'libTHC.so.1')\nTHCS_LIB = os.path.join(lib_path, 'libTHCS.so.1')\nTHNN_LIB = os.path.join(lib_path, 'libTHNN.so.1')\n# ...\n```\n\nLet's consider how we build the main `torch._C` Extension Module:\n\n```python\nC = Extension(\"torch._C\",\n libraries=main_libraries,\n sources=main_sources,\n language='c++',\n extra_compile_args=main_compile_args + extra_compile_args,\n include_dirs=include_dirs,\n library_dirs=library_dirs,\n extra_link_args=extra_link_args + main_link_args + [make_relative_rpath('lib')],\n )\n```", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"}} {"page_content": "- The main libraries are all the libraries we link against. This includes things like `shm`, PyTorch's shared memory management library, and also system libraries like `cudart` and `cudnn`. Note that the `TH` libraries *are not* listed here\n - The main sources are the C++ files that make up the C++ backend for PyTorch\n - The compile args are various flags that configure compilation. For example, we might want to add debug flags when compiling in debug mode\n - The include dirs are the paths to all the directories containing header files. 
This is also another example where the `build_all.sh` script is important - for example, we look for the `TH` header files in `torch/lib/tmp_install/include/TH` - which is the install location we specified with our CMake configuration\n - The library dirs are directories to search for shared libraries at link time. For example, we include `torch/lib` - the location we copied our `.so` files to at the end of `build_all.sh`, but also the paths to the CUDA and CuDNN directories\n - The link arguments are used when linking object files together to create the extension. In PyTorch, this includes more *normal* options like decided to link `libstdc++` statically. However, there is one key component: **this is where we link the backend TH libraries**. Note that we have lines like:", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"}} {"page_content": "```python\n# The explicit paths to .so files we described above\nmain_link_args = [TH_LIB, THS_LIB, THPP_LIB, THNN_LIB]\n```\n\nYou might be wondering why we do this as opposed to adding these libraries to the list we pass to the `libraries` keyword argument. After all, that is a list of libraries to link against. The issue is that Lua Torch installs often set the `LD_LIBRARY_PATH` variable, and thus we could mistakenly link against a `TH` library built for Lua Torch, instead of the library we have built locally. This would be problematic because the code could be out of date, and also there are various configuration options for Lua Torch's `TH` that would not play nicely with PyTorch.\n\nAs such, we manually specify the paths to the shared libraries we generated directly to the linker.", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"}} @@ -1603,7 +1607,7 @@ {"page_content": "But how does it work? Suppose we run `python setup.py build develop` in the PyTorch directory. The `build` command is run, building our dependencies (`TH`, `THPP`, etc.) and the extension libraries. However, if we look inside `site-packages`:\n\n```bash\n(p3) killeent@devgpu047:site-packages$ ls -la torch*\n-rw-r--r--. 1 killeent users 31 Jun 27 08:02 torch.egg-link\n```\n\nLooking at the contents of the `torch.egg-link` file, it simply references the PyTorch directory:\n\n```bash\n(p3) killeent@devgpu047:site-packages$ cat torch.egg-link\n/home/killeent/github/pytorch\n```\n\nIf we navigate back to the PyTorch directory, we see there is a new directory `torch.egg-info`:", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"}} {"page_content": "```bash\n(p3) killeent@devgpu047:pytorch (master)$ ls -la torch.egg-info/\ntotal 28\ndrwxr-xr-x. 2 killeent users 4096 Jun 27 08:09 .\ndrwxr-xr-x. 10 killeent users 4096 Jun 27 08:01 ..\n-rw-r--r--. 1 killeent users 1 Jun 27 08:01 dependency_links.txt\n-rw-r--r--. 1 killeent users 255 Jun 27 08:01 PKG-INFO\n-rw-r--r--. 1 killeent users 7 Jun 27 08:01 requires.txt\n-rw-r--r--. 1 killeent users 16080 Jun 27 08:01 SOURCES.txt\n-rw-r--r--. 1 killeent users 12 Jun 27 08:01 top_level.txt\n```\n\nThis file contains metadata about the PyTorch project. 
For example, `requirements.txt` lists all of the dependencies for setting up PyTorch:\n\n```bash\n(p3) killeent@devgpu047:pytorch (master)$ cat torch.egg-info/requires.txt\npyyaml\n```\n\nWithout going into too much detail, `develop` allows us to essentially treat the PyTorch repo itself as if it were in `site-packages`, so we can import the module and it just works:", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"}} {"page_content": "```bash\n(p3) killeent@devgpu047:~$ python\nPython 3.6.1 |Continuum Analytics, Inc.| (default, Mar 22 2017, 19:54:23)\n[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import torch\n>>> torch.__file__\n'/home/killeent/github/pytorch/torch/__init__.py'\n```\n\nAs a result, the following consequences hold:\n\n- If we change a Python source file, the changes are automatically picked up, and we don't have to run any commands to let the Python interpreter *see* this change\n- If we change a C++ Source File in one of the extension libraries, we can re-run the `develop` command, it will re-build the extension\n\nThus we can develop the PyTorch codebases seamlessly, and test our changes in an easy way.\n\n#### Working on the Dependency Libraries", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"}} -{"page_content": "If we are working on the dependencies (e.g. `TH`, `THPP`, etc.) we can re-build our changes more quickly by simply running the `build_deps` command directly. This will automatically call into `build_all.sh` to re-build our libraries, and copy the generated libraries appropriately. If we are using Setuptools `develop` mode, we will be using the local extension library built in the PyTorch directory. Because we have specified the paths to the shared libraries when compiling our Extension Libraries, the changes will be picked up:\n\n```bash\n# we are using the local extension\n(p3) killeent@devgpu047:~$ python\nPython 3.6.1 |Continuum Analytics, Inc.| (default, Mar 22 2017, 19:54:23)\n[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import torch\n>>> torch._C.__file__\n'/home/killeent/github/pytorch/torch/_C.cpython-36m-x86_64-linux-gnu.so'", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"}} +{"page_content": "#### Working on the Dependency Libraries\n\nIf we are working on the dependencies (e.g. `TH`, `THPP`, etc.) we can re-build our changes more quickly by simply running the `build_deps` command directly. This will automatically call into `build_all.sh` to re-build our libraries, and copy the generated libraries appropriately. If we are using Setuptools `develop` mode, we will be using the local extension library built in the PyTorch directory. 
Because we have specified the paths to the shared libraries when compiling our Extension Libraries, the changes will be picked up:\n\n```bash\n# we are using the local extension\n(p3) killeent@devgpu047:~$ python\nPython 3.6.1 |Continuum Analytics, Inc.| (default, Mar 22 2017, 19:54:23)\n[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import torch\n>>> torch._C.__file__\n'/home/killeent/github/pytorch/torch/_C.cpython-36m-x86_64-linux-gnu.so'", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"}} {"page_content": "# it references the local shared object library we just re-built\n(p3) killeent@devgpu047:~$ ldd /home/killeent/github/pytorch/torch/_C.cpython-36m-x86_64-linux-gnu.so\n# ...\nlibTH.so.1 => /home/killeent/github/pytorch/torch/lib/libTH.so.1 (0x00007f543d0e2000)\n# ...\n```\n\nAs such, we can test any changes here without having to do a full rebuild.\n\n#### 3rd Party Libraries\n\nPyTorch has dependencies on some 3rd party libraries. The usual mechanism for using these libraries is to install them via Anaconda, and then link against them. For example, we can use the `mkl` library with PyTorch by doing:\n\n```bash\n# installed to miniconda2/envs/p3/lib/libmkl_intel_lp64.so\nconda install mkl\n```\n\nAnd then as long as we have the path to this `lib` directory on our `$CMAKE_PREFIX_PATH`, it will successfully find this library when compiling:", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"}} {"page_content": "```bash\n# in the site-packages dir\n(p3) killeent@devgpu047:torch$ ldd _C.cpython-36m-x86_64-linux-gnu.so\n# ...\nlibmkl_intel_lp64.so => /home/killeent/local/miniconda2/envs/p3/lib/libmkl_intel_lp64.so (0x00007f3450bba000)\n# ...\n```\n\n### Not Covered, But Also Relevant\n\n- How `ccache` is used to speed up build times\n- How PyTorch's top-level `__init__.py` file handles the initial module import and pulling together all the various modules and extension libraries\n- The CMake build system, how the backend libraries are configured and built with CMake", "metadata": {"source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'An Overview of the PyTorch Mobile Demo Apps'\nauthor: Jeff Tang and Mark Saroufim\nfeatured-img: 'assets/images/android-demo-app.png'\ndate: 2021-06-18 12:00:00 -0500\n---\n\n\nPyTorch Mobile provides a runtime environment to execute state-of-the-art machine learning models on mobile devices. Latency is reduced, privacy preserved, and models can run on mobile devices anytime, anywhere.\n\nIn this blog post, we provide a quick overview of 10 currently available PyTorch Mobile powered demo apps running various state-of-the-art PyTorch 1.9 machine learning models spanning images, video, audio and text.\n\nIt\u2019s never been easier to deploy a state-of-the-art ML model to a phone. You don\u2019t need any domain knowledge in Machine Learning and we hope one of the below examples resonates enough with you to be the starting point for your next project.\n\n
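As a rough sketch of the export step these demo apps build on (the exact conversion scripts live in each app's repo; the model choice here is only illustrative), an eager-mode model can be scripted, optimized, and saved for the PyTorch Mobile lite interpreter like this:

```python
import torch
import torchvision
from torch.utils.mobile_optimizer import optimize_for_mobile

# Any TorchScript-able model works; MobileNetV3 is just an illustrative choice.
model = torchvision.models.mobilenet_v3_small(pretrained=True)
model.eval()

# Script the model, apply mobile-specific optimizations, and save it for the lite interpreter.
scripted = torch.jit.script(model)
optimized = optimize_for_mobile(scripted)
optimized._save_for_lite_interpreter("mobilenet_v3_small.ptl")
```

The resulting `.ptl` file is what the iOS and Android demo apps load at runtime.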
", "metadata": {"source": "https://pytorch.org/blog/mobile-demo-apps-overview/", "category": "pytorch blogs"}} @@ -1618,13 +1622,13 @@ {"page_content": " [iOS](https://github.com/pytorch/ios-demo-app/tree/master/SpeechRecognition) [Android](https://github.com/pytorch/android-demo-app/tree/master/SpeechRecognition)\n\n
\n\n\nWe really hope one of these demo apps stood out for you. For the full list, make sure to visit the [iOS](https://github.com/pytorch/ios-demo-app) and [Android](https://github.com/pytorch/android-demo-app) demo app repos. You should also definitely check out the video [An Overview of the PyTorch Mobile Demo Apps](https://www.youtube.com/watch?v=Qb4vDm-ruwI) which provides both an overview of the PyTorch mobile demo apps and a deep dive into the PyTorch Video app for iOS and Android.", "metadata": {"source": "https://pytorch.org/blog/mobile-demo-apps-overview/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: \"Accelerating PyTorch Vision Models with Channels Last on CPU\"\nauthor: Mingfei Ma (Intel), Vitaly Fedyunin (Meta), Wei Wei (Meta)\nfeatured-img: '/assets/images/accelerating-pytorch-vision-models-with-channels-last-on-cpu-2.png'\n---\n\n## Overview\n\nMemory formats has significant impact on performance when running vision models, generally Channels Last is a more favorable from performance perspective due to better data locality.\n\nThis blog will introduce fundamental concepts of memory formats and demonstrate performance benefits using Channels Last on popular PyTorch vision models on Intel\u00ae Xeon\u00ae Scalable processors.\n\n## Memory Formats Introduction\n\nMemory format refers to data representation that describes how a multidimensional (nD) array is stored in linear (1D) memory address space. The concept of memory format has two aspects:", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-vision-models-with-channels-last-on-cpu/", "category": "pytorch blogs"}} {"page_content": "- **Physical Order** is the layout of data storage in physical memory. For vision models, usually we talk about NCHW, NHWC. These are the descriptions of physical memory layout, also referred as Channels First and Channels Last respectively.\n- **Logical Order** is a convention on how to describe tensor shape and stride. In PyTorch, this convention is NCHW. No matter what the physical order is, tensor shape and stride will always be depicted in the order of NCHW.\n\nFig-1 is the physical memory layout of a tensor with shape of [1, 3, 4, 4] on both Channels First and Channels Last memory format (channels denoted as R, G, B respectively):\n\n

Fig-1 Physical memory layout of Channels First and Channels Last
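As a small illustration of the two aspects above (using nothing beyond the public tensor API), the logical shape stays NCHW while the strides expose the physical order:

```python
import torch

x = torch.rand(1, 3, 4, 4)                          # Channels First (NCHW), the default
x_cl = x.to(memory_format=torch.channels_last)      # same logical shape, NHWC physical layout

print(x.shape, x_cl.shape)   # torch.Size([1, 3, 4, 4]) for both: logical order is always NCHW
print(x.stride())            # (48, 16, 4, 1) -> channels are a slow-moving (outer) dimension
print(x_cl.stride())         # (48, 1, 12, 3) -> channels are the innermost (fast-moving) dimension
print(x_cl.is_contiguous(memory_format=torch.channels_last))  # True
```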
\n\n## Memory Formats Propagation", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-vision-models-with-channels-last-on-cpu/", "category": "pytorch blogs"}} -{"page_content": "The general rule for PyTorch memory format propagation is to preserve the input tensor\u2019s memory format. Which means a Channels First input will generate a Channels First output and a Channels Last input will generate a Channels Last output. \n\nFor Convolution layers, PyTorch uses oneDNN (oneAPI Deep Neural Network Library) by default to achieve optimal performance on Intel CPUs. Since it is physically impossible to achieve highly optimized performance directly with Channels Frist memory format, input and weight are firstly converted to blocked format and then computed. oneDNN may choose different blocked formats according to input shapes, data type and hardware architecture, for vectorization and cache reuse purposes. The blocked format is opaque to PyTorch, so the output needs to be converted back to Channels First. Though blocked format would bring about optimal computing performance, the format conversions may add overhead and therefore offset the performance gain.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-vision-models-with-channels-last-on-cpu/", "category": "pytorch blogs"}} +{"page_content": "## Memory Formats Propagation\n\nThe general rule for PyTorch memory format propagation is to preserve the input tensor\u2019s memory format. Which means a Channels First input will generate a Channels First output and a Channels Last input will generate a Channels Last output. \n\nFor Convolution layers, PyTorch uses oneDNN (oneAPI Deep Neural Network Library) by default to achieve optimal performance on Intel CPUs. Since it is physically impossible to achieve highly optimized performance directly with Channels Frist memory format, input and weight are firstly converted to blocked format and then computed. oneDNN may choose different blocked formats according to input shapes, data type and hardware architecture, for vectorization and cache reuse purposes. The blocked format is opaque to PyTorch, so the output needs to be converted back to Channels First. Though blocked format would bring about optimal computing performance, the format conversions may add overhead and therefore offset the performance gain.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-vision-models-with-channels-last-on-cpu/", "category": "pytorch blogs"}} {"page_content": "On the other hand, oneDNN is optimized for Channels Last memory format to use it for optimal performance directly and PyTorch will simply pass a memory view to oneDNN. Which means the conversion of input and output tensor is saved. Fig-2 indicates memory format propagation behavior of convolution on PyTorch CPU (the solid arrow indicates a memory format conversion, and the dashed arrow indicates a memory view):\n\n

Fig-2 CPU Conv memory format propagation
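A minimal check of this propagation rule, assuming only standard `torch.nn` calls (the oneDNN dispatch described above happens under the hood):

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

x_cf = torch.rand(1, 3, 32, 32)                               # Channels First input
x_cl = x_cf.to(memory_format=torch.channels_last)             # Channels Last input

with torch.no_grad():
    y_cf = conv(x_cf)
    y_cl = conv(x_cl)

# The output keeps the memory format of the input it was given.
print(y_cf.is_contiguous())                                   # True: Channels First output
print(y_cl.is_contiguous(memory_format=torch.channels_last))  # True: Channels Last output
```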
\n\nOn PyTorch, the default memory format is Channels First. In case a particular operator doesn't have support on Channels Last, the NHWC input would be treated as a non-contiguous NCHW and therefore fallback to Channels First, which will consume the previous memory bandwidth on CPU and result in suboptimal performance.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-vision-models-with-channels-last-on-cpu/", "category": "pytorch blogs"}} {"page_content": "Therefore, it is very important to extend the scope of Channels Last support for optimal performance. And we have implemented Channels Last kernels for the commonly use operators in CV domain, applicable for both inference and training, such as:\n\n- Activations (e.g., ReLU, PReLU, etc.)\n- Convolution (e.g., Conv2d)\n- Normalization (e.g., BatchNorm2d, GroupNorm, etc.)\n- Pooling (e.g., AdaptiveAvgPool2d, MaxPool2d, etc.)\n- Shuffle (e.g., ChannelShuffle, PixelShuffle)\n\nRefer to [Operators-with-Channels-Last-support](https://github.com/pytorch/pytorch/wiki/Operators-with-Channels-Last-support) for details.\n\n## Native Level Optimization on Channels Last\n\nAs mentioned above, PyTorch uses oneDNN to achieve optimal performance on Intel CPUs for convolutions. The rest of memory format aware operators are optimized at PyTorch native level, which doesn\u2019t require any third-party library support.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-vision-models-with-channels-last-on-cpu/", "category": "pytorch blogs"}} {"page_content": "- **Cache friendly parallelization scheme:** keep the same parallelization scheme for all the memory format aware operators, this will help increase data locality when passing each layer\u2019s output to the next.\n- **Vectorization on multiple archs:** generally, we can vectorize on the most inner dimension on Channels Last memory format. And each of the vectorized CPU kernels will be generated for both AVX2 and AVX512.\n\nWhile contributing to Channels Last kernels, we tried our best to optimize Channels First counterparts as well. The fact is some operators are physically impossible to achieve optimal performance on Channels First, such as Convolution, Pooling, etc.\n\n## Run Vision Models on Channels Last\n\nThe Channels Last related APIs are documented at [PyTorch memory format tutorial](https://pytorch.org/tutorials/intermediate/memory_format_tutorial.html). Typically, we can convert a 4D tensor from Channels First to Channels Last by:", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-vision-models-with-channels-last-on-cpu/", "category": "pytorch blogs"}} {"page_content": "```python\n# convert x to channels last\n# suppose x\u2019s shape is (N, C, H, W)\n# then x\u2019s stride will be (HWC, 1, WC, C)\nx = x.to(memory_format=torch.channels_last)\n```\n\nTo run models on Channels Last memory format, simply need to convert input and model to Channels Last and then you are ready to go. 
The following is a minimal example showing how to run ResNet50 with TorchVision on Channels Last memory format:\n\n```python\nimport torch\nfrom torchvision.models import resnet50\n\nN, C, H, W = 1, 3, 224, 224\nx = torch.rand(N, C, H, W)\nmodel = resnet50()\nmodel.eval()\n\n# convert input and model to channels last\nx = x.to(memory_format=torch.channels_last)\nmodel = model.to(memory_format=torch.channels_last)\nmodel(x)\n```\n\nThe Channels Last optimization is implemented at native kernel level, which means you may apply other functionalities such as torch.fx and torch script together with Channels Last as well.\n\n## Performance Gains", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-vision-models-with-channels-last-on-cpu/", "category": "pytorch blogs"}} {"page_content": "## Performance Gains\n\nWe benchmarked inference performance of TorchVision models on Intel\u00ae Xeon\u00ae Platinum 8380 CPU @ 2.3 GHz, single instance per socket (batch size = 2 x number of physical cores). Results show that Channels Last has 1.3x to 1.8x performance gain over Channels First.\n\n
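A rough way to reproduce this kind of comparison on your own machine is a simple wall-clock loop; absolute numbers and speed-ups will vary with the CPU, thread settings, and batch size, so treat this only as a sketch:

```python
import time
import torch
from torchvision.models import resnet50

def bench(model, x, iters=50):
    # Plain wall-clock timing; torch.utils.benchmark gives more rigorous measurements.
    with torch.no_grad():
        for _ in range(5):                 # warm-up
            model(x)
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
    return (time.perf_counter() - start) / iters

model = resnet50().eval()
x = torch.rand(32, 3, 224, 224)

t_cf = bench(model, x)                                        # Channels First
t_cl = bench(model.to(memory_format=torch.channels_last),     # Channels Last
             x.to(memory_format=torch.channels_last))
print(f"channels first: {t_cf * 1e3:.1f} ms/iter, channels last: {t_cl * 1e3:.1f} ms/iter")
```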

\n\nThe performance gain primarily comes from two aspects:\n\n- For Convolution layers, Channels Last saved the memory format conversion to blocked format for activations, which improves the overall computation efficiency.\n- For Pooling and Upsampling layers, Channels Last can use vectorized logic along the most inner dimension, e.g., \u201cC\u201d, while Channels First can\u2019t.\n\nFor memory format non aware layers, Channels Last and Channels First has the same performance.\n\n## Conclusion & Future Work", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-vision-models-with-channels-last-on-cpu/", "category": "pytorch blogs"}} -{"page_content": "In this blog we introduced fundamental concepts of Channels Last and demonstrated the performance benefits of CPU using Channels Last on vision models. The current work is limited to 2D models at the current stage, and we will extend the optimization effort to 3D models in near future!\n\n## Acknowledgement\n\nThe results presented in this blog is a joint effort of Meta and Intel PyTorch team. Special thanks to Vitaly Fedyunin and Wei Wei from Meta who spent precious time and gave substantial assistance! Together we made one more step on the path of improving the PyTorch CPU eco system.\n\n## References\n\n- [PyTorch memory format tutorial](https://pytorch.org/tutorials/intermediate/memory_format_tutorial.html)\n- [oneDNN guide on memory formats](https://oneapi-src.github.io/oneDNN/dev_guide_understanding_memory_formats.html)\n- [PyTorch operators with Channels Last support](https://github.com/pytorch/pytorch/wiki/Operators-with-Channels-Last-support)", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-vision-models-with-channels-last-on-cpu/", "category": "pytorch blogs"}} +{"page_content": "## Conclusion & Future Work\n\nIn this blog we introduced fundamental concepts of Channels Last and demonstrated the performance benefits of CPU using Channels Last on vision models. The current work is limited to 2D models at the current stage, and we will extend the optimization effort to 3D models in near future!\n\n## Acknowledgement\n\nThe results presented in this blog is a joint effort of Meta and Intel PyTorch team. Special thanks to Vitaly Fedyunin and Wei Wei from Meta who spent precious time and gave substantial assistance! Together we made one more step on the path of improving the PyTorch CPU eco system.\n\n## References\n\n- [PyTorch memory format tutorial](https://pytorch.org/tutorials/intermediate/memory_format_tutorial.html)\n- [oneDNN guide on memory formats](https://oneapi-src.github.io/oneDNN/dev_guide_understanding_memory_formats.html)\n- [PyTorch operators with Channels Last support](https://github.com/pytorch/pytorch/wiki/Operators-with-Channels-Last-support)", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-vision-models-with-channels-last-on-cpu/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'Announcing the PyTorch Enterprise Support Program'\nauthor: Team PyTorch\n---\n\nToday, we are excited to announce the [PyTorch Enterprise Support Program](http://pytorch.org/enterprise-support-program), a participatory program that enables service providers to develop and offer tailored enterprise-grade support to their customers. 
This new offering, built in collaboration between Facebook and Microsoft, was created in direct response to feedback from PyTorch enterprise users who are developing models in production at scale for mission-critical applications.\n\nThe PyTorch Enterprise Support Program is available to any service provider. It is designed to mutually benefit all program Participants by sharing and improving PyTorch long-term support (LTS), including contributions of hotfixes and other improvements found while working closely with customers and on their systems.", "metadata": {"source": "https://pytorch.org/blog/announcing-pytorch-enterprise/", "category": "pytorch blogs"}} {"page_content": "To benefit the open source community, all hotfixes developed by Participants will be tested and fed back to the LTS releases of PyTorch regularly through PyTorch\u2019s standard pull request process. To participate in the program, a service provider must apply and meet a set of program terms and certification requirements. Once accepted, the service provider becomes a program Participant and can offer a packaged PyTorch Enterprise support service with LTS, prioritized troubleshooting, useful integrations, and more.\n\n
", "metadata": {"source": "https://pytorch.org/blog/announcing-pytorch-enterprise/", "category": "pytorch blogs"}} {"page_content": "As one of the founding members and an inaugural member of the PyTorch Enterprise Support Program, Microsoft is launching [PyTorch Enterprise on Microsoft Azure](https://Aka.ms/PyTorchEnterpriseHeroBlog) to deliver a reliable production experience for PyTorch users. Microsoft will support each PyTorch release for as long as it is current. In addition, it will support selected releases for two years, enabling a stable production experience. Microsoft Premier and Unified Support customers can access prioritized troubleshooting for hotfixes, bugs, and security patches at no additional cost. Microsoft will extensively test PyTorch releases for performance regression. The latest release of PyTorch will be integrated with [Azure Machine Learning](https://azure.microsoft.com/en-us/services/machine-learning/) and other PyTorch add-ons including [ONNX Runtime](https://www.onnxruntime.ai/) for faster inference.", "metadata": {"source": "https://pytorch.org/blog/announcing-pytorch-enterprise/", "category": "pytorch blogs"}} @@ -1633,7 +1637,7 @@ {"page_content": "In part 1 of the series, we will focus on the original implementation of the SSD algorithm as described on the [Single Shot MultiBox Detector paper](https://arxiv.org/abs/1512.02325). We will briefly give a high-level description of how the algorithm works, then go through its main components, highlight key parts of its code, and finally discuss how we trained the released model. Our goal is to cover all the necessary details to reproduce the model including those optimizations which are not covered on the paper but are part on the [original implementation](https://github.com/weiliu89/caffe/tree/ssd).\n\n# How Does SSD Work?\n\nReading the aforementioned paper is highly recommended but here is a quick oversimplified refresher. Our target is to detect the locations of objects in an image along with their categories. Here is the Figure 5 from the [SSD paper](https://arxiv.org/abs/1512.02325) with prediction examples of the model:", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"}} {"page_content": "
\n\nThe SSD algorithm uses a CNN backbone, passes the input image through it and takes the convolutional outputs from different levels of the network. The list of these outputs are called feature maps. These feature maps are then passed through the Classification and Regression heads which are responsible for predicting the class and the location of the boxes.", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"}} {"page_content": "Since the feature maps of each image contain outputs from different levels of the network, their size varies and thus they can capture objects of different dimensions. On top of each, we tile several default boxes which can be thought as our rough prior guesses. For each default box, we predict whether there is an object (along with its class) and its offset (correction over the original location). During training time, we need to first match the ground truth to the default boxes and then we use those matches to estimate our loss. During inference, similar prediction boxes are combined to estimate the final predictions. \n\n# The SSD Network Architecture\n\nIn this section, we will discuss the key components of SSD. Our code follows closely [the paper](https://arxiv.org/abs/1512.02325) and makes use of many of the undocumented optimizations included in [the official implementation](https://github.com/weiliu89/caffe/tree/ssd).\n\n### DefaultBoxGenerator", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"}} -{"page_content": "The [DefaultBoxGenerator class](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/anchor_utils.py#L134) is responsible for generating the default boxes of SSD and operates similarly to the [AnchorGenerator](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/anchor_utils.py#L9) of FasterRCNN (for more info on their differences see pages 4-6 of the paper). It produces a set of predefined boxes of specific width and height which are tiled across the image and serve as the first rough prior guesses of where objects might be located. Here is Figure 1 from the SSD paper with a visualization of ground truths and default boxes:\n\n
", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"}} +{"page_content": "### DefaultBoxGenerator\n\nThe [DefaultBoxGenerator class](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/anchor_utils.py#L134) is responsible for generating the default boxes of SSD and operates similarly to the [AnchorGenerator](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/anchor_utils.py#L9) of FasterRCNN (for more info on their differences see pages 4-6 of the paper). It produces a set of predefined boxes of specific width and height which are tiled across the image and serve as the first rough prior guesses of where objects might be located. Here is Figure 1 from the SSD paper with a visualization of ground truths and default boxes:\n\n
", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"}} {"page_content": "The class is parameterized by a set of hyperparameters that control [their shape](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/anchor_utils.py#L139) and [tiling](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/anchor_utils.py#L140-L149). The implementation will provide [automatically good guesses](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/anchor_utils.py#L162-L171) with the default parameters for those who want to experiment with new backbones/datasets but one can also pass [optimized custom values](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/anchor_utils.py#L144-L147).\n\n### SSDMatcher", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"}} {"page_content": "The [SSDMatcher class](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/_utils.py#L348) extends the standard [Matcher](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/_utils.py#L227) used by FasterRCNN and it is responsible for matching the default boxes to the ground truth. After estimating the [IoUs of all combinations](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/ssd.py#L349), we use the matcher to find for each default box the best [candidate](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/_utils.py#L296) ground truth with overlap higher than the [IoU threshold](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/_utils.py#L350-L351). The SSD version of the matcher has an extra step to ensure that each ground truth is matched with the default box that has the [highest overlap](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/_utils.py#L356-L360). The results of the matcher are used in the loss estimation during the training process of the model.", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"}} {"page_content": "### Classification and Regression Heads\n\nThe [SSDHead class](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/ssd.py#L38) is responsible for initializing the Classification and Regression parts of the network. Here are a few notable details about their code:", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"}} @@ -1645,11 +1649,11 @@ {"page_content": "Here are the two core methods of the implementation:", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"}} {"page_content": "* The ```compute_loss``` method estimates the standard Multi-box loss as described on page 5 of the SSD paper. 
It uses the [smooth L1 loss](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/ssd.py#L244) for regression and the standard [cross-entropy loss](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/ssd.py#L262-L266) with [hard-negative sampling](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/ssd.py#L268-L276) for classification. \n* As in all detection models, the forward method currently has different behaviour depending on whether the model is on training or eval mode. It starts by [resizing & normalizing the input images](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/ssd.py#L309-L310) and then [passes them through the backbone](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/ssd.py#L324-L325) to get the feature maps. The feature maps are then [passed through the head](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/ssd.py#L331-L332) to get the predictions and then the method [generates the default boxes](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/ssd.py#L334-L335). \n * If the model is on [training mode](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/ssd.py#L339-L352), the forward will estimate the [IoUs of the default boxes with the ground truth](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/ssd.py#L349), use the ```SSDmatcher``` to [produce matches](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/ssd.py#L350) and finally [estimate the losses](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/ssd.py#L352) by calling the ```compute_loss method```.\n * If the model is on [eval mode](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/ssd.py#L353-L355), we first select the best detections by keeping only the ones that [pass the score threshold](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/ssd.py#L384), select the [most promising boxes](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/ssd.py#L388-L391) and run NMS to [clean up and select](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/ssd.py#L401-L403) the best predictions. Finally we [postprocess the predictions](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/ssd.py#L355) to resize them to the original image size.", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"}} {"page_content": "# The SSD300 VGG16 Model\n\nThe SSD is a family of models because it can be configured with different backbones and different Head configurations. 
In this section, we will focus on the provided [SSD pre-trained model](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/ssd.py#L522-L523). We will discuss the details of its configuration and the training process used to reproduce the reported results.\n\n### Training process\n\nThe model was trained using the COCO dataset and all of its hyper-parameters and scripts can be found in our [references](https://github.com/pytorch/vision/blob/e35793a1a4000db1f9f99673437c514e24e65451/references/detection/README.md#ssd300-vgg16) folder. Below we provide details on the most notable aspects of the training process.\n\n### Paper Hyperparameters", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"}} -{"page_content": "In order to achieve the best possible results on COCO, we adopted the hyperparameters described on the section 3 of the paper concerning the optimizer configuration, the weight regularization etc. Moreover we found it useful to adopt the optimizations that appear in the [official implementation](https://github.com/weiliu89/caffe/blob/ssd/examples/ssd/ssd_coco.py#L310-L321) concerning the [tiling configuration](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/ssd.py#L579-L581) of the DefaultBox generator. This optimization was not described in the paper but it was crucial for improving the detection precision of smaller objects. \n\n### Data Augmentation", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"}} -{"page_content": "Implementing the [SSD Data Augmentation strategy](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/references/detection/transforms.py#L20-L239) as described on page 6 and page 12 of the paper was critical to reproducing the results. More specifically the use of random \u201cZoom In\u201d and \u201cZoom Out\u201d transformations make the model robust to various input sizes and improve its precision on the small and medium objects. Finally since the VGG16 has quite a few parameters, the photometric distortions [included in the augmentations](https://github.com/pytorch/vision/blob/43d772067fe77965ec8fc49c799de5cea44b8aa2/references/detection/presets.py#L11-L18) have a regularization effect and help avoid the overfitting. \n\n### Weight Initialization & Input Scaling", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"}} +{"page_content": "### Paper Hyperparameters\n\nIn order to achieve the best possible results on COCO, we adopted the hyperparameters described on the section 3 of the paper concerning the optimizer configuration, the weight regularization etc. Moreover we found it useful to adopt the optimizations that appear in the [official implementation](https://github.com/weiliu89/caffe/blob/ssd/examples/ssd/ssd_coco.py#L310-L321) concerning the [tiling configuration](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/ssd.py#L579-L581) of the DefaultBox generator. This optimization was not described in the paper but it was crucial for improving the detection precision of smaller objects. 
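As a sketch only (the values below are illustrative placeholders, not the released recipe, which lives in the referenced `references/detection` scripts), the paper-style optimizer configuration amounts to SGD with momentum 0.9 and weight decay 5e-4 plus a step-wise learning-rate decay:

```python
import torch
import torchvision

# Hypothetical training setup; learning rate and milestones are placeholders for illustration.
model = torchvision.models.detection.ssd300_vgg16(num_classes=91)
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.002, momentum=0.9, weight_decay=5e-4)
lr_scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[40, 50], gamma=0.1)
```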
\n\n### Data Augmentation", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"}} +{"page_content": "### Data Augmentation\n\nImplementing the [SSD Data Augmentation strategy](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/references/detection/transforms.py#L20-L239) as described on page 6 and page 12 of the paper was critical to reproducing the results. More specifically the use of random \u201cZoom In\u201d and \u201cZoom Out\u201d transformations make the model robust to various input sizes and improve its precision on the small and medium objects. Finally since the VGG16 has quite a few parameters, the photometric distortions [included in the augmentations](https://github.com/pytorch/vision/blob/43d772067fe77965ec8fc49c799de5cea44b8aa2/references/detection/presets.py#L11-L18) have a regularization effect and help avoid the overfitting. \n\n### Weight Initialization & Input Scaling", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"}} {"page_content": "Another aspect that we found beneficial was to follow the [weight initialization scheme](https://github.com/intel/caffe/blob/master/models/intel_optimized_models/ssd/VGGNet/coco/SSD_300x300/train.prototxt) proposed by the paper. To do that, we had to adapt our input scaling method by [undoing the 0-1 scaling](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/ssd.py#L583-L587) performed by ```ToTensor()``` and use [pre-trained ImageNet weights](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/ssd.py#L24-L26) fitted with this scaling (shoutout to [Max deGroot](https://github.com/amdegroot) for providing them in his repo). All the weights of new convolutions were [initialized using Xavier](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/ssd.py#L30-L35) and their biases were set to zero. After initialization, the network was [trained end-to-end](https://github.com/pytorch/vision/blob/33db2b3ebfdd2f73a9228f430fa7dd91c3b18078/torchvision/models/detection/ssd.py#L571-L572).", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"}} {"page_content": "### LR Scheme\n\nAs reported on the paper, after applying aggressive data augmentations it\u2019s necessary to train the models for longer. Our experiments confirm this and we had to tweak the Learning rate, batch sizes and overall steps to achieve the best results. Our [proposed learning scheme](https://github.com/pytorch/vision/blob/e35793a1a4000db1f9f99673437c514e24e65451/references/detection/README.md#ssd300-vgg16) is configured to be rather on the safe side, showed signs of plateauing between the steps and thus one is likely to be able to train a similar model by doing only 66% of our epochs.\n\n# Breakdown of Key Accuracy Improvements", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"}} -{"page_content": "It is important to note that implementing a model directly from a paper is an iterative process that circles between coding, training, bug fixing and adapting the configuration until we match the accuracies reported on the paper. Quite often it also involves simplifying the training recipe or enhancing it with more recent methodologies. 
It is definitely not a linear process where incremental accuracy improvements are achieved by improving a single direction at a time but instead involves exploring different hypothesis, making incremental improvements in different aspects and doing a lot of backtracking. \n\nWith that in mind, below we try to summarize the optimizations that affected our accuracy the most. We did this by grouping together the various experiments in 4 main groups and attributing the experiment improvements to the closest match. Note that the Y-axis of the graph starts from 18 instead from 0 to make the difference between optimizations more visible:", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"}} +{"page_content": "# Breakdown of Key Accuracy Improvements\n\nIt is important to note that implementing a model directly from a paper is an iterative process that circles between coding, training, bug fixing and adapting the configuration until we match the accuracies reported on the paper. Quite often it also involves simplifying the training recipe or enhancing it with more recent methodologies. It is definitely not a linear process where incremental accuracy improvements are achieved by improving a single direction at a time but instead involves exploring different hypothesis, making incremental improvements in different aspects and doing a lot of backtracking. \n\nWith that in mind, below we try to summarize the optimizations that affected our accuracy the most. We did this by grouping together the various experiments in 4 main groups and attributing the experiment improvements to the closest match. Note that the Y-axis of the graph starts from 18 instead from 0 to make the difference between optimizations more visible:", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"}} {"page_content": "
\n\n| Model Configuration | mAP delta | mAP | \n| ------------- | ------------- | ------------- |\n| Baseline with \"FasterRCNN-style\" Hyperparams | - | 19.5 | \n| + Paper Hyperparams | 1.6 | 21.1 | \n| + Data Augmentation | 1.8 | 22.9 | \n| + Weight Initialization & Input Scaling | 1 | 23.9 | \n| + LR scheme | 1.2 | 25.1 | \n\nOur final model achieves an mAP of 25.1 and reproduces exactly the COCO results reported on the paper. Here is a [detailed breakdown](https://github.com/pytorch/vision/pull/3403) of the accuracy metrics.\n\n\nWe hope you found the part 1 of the series interesting. On the part 2, we will focus on the implementation of SSDlite and discuss its differences from SSD. Until then, we are looking forward to your feedback.", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: \"PyTorch 1.13 release, including beta versions of functorch and improved support for Apple\u2019s new M1 chips.\"\nauthor: Team PyTorch\nfeatured-img: \"/assets/images/blog-2022-10-25-Pytorch-1.13-Release.png\"\n---\n\nWe are excited to announce the release of PyTorch\u00ae 1.13 ([release note](https://github.com/pytorch/pytorch/releases/tag/v1.13.0))! This includes Stable versions of BetterTransformer. We deprecated CUDA 10.2 and 11.3 and completed migration of CUDA 11.6 and 11.7. Beta includes improved support for Apple M1 chips and functorch, a library that offers composable vmap (vectorization) and autodiff transforms, being included in-tree with the PyTorch release. This release is composed of over 3,749 commits and 467 contributors since 1.12.1. We want to sincerely thank our dedicated community for your contributions.\n\nSummary:", "metadata": {"source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"}} {"page_content": "Summary:\n\n- The [BetterTransformer](#stable-features) feature set supports fastpath execution for common Transformer models during Inference out-of-the-box, without the need to modify the model. Additional improvements include accelerated add+matmul linear algebra kernels for sizes commonly used in Transformer models and Nested Tensors is now enabled by default.\n\n- Timely [deprecating older CUDA versions](#introduction-of-cuda-116-and-117-and-deprecation-of-cuda-102-and-113) allows us to proceed with introducing the latest CUDA version as they are introduced by Nvidia\u00ae, and hence allows support for C++17 in PyTorch and new NVIDIA Open GPU Kernel Modules.\n\n- Previously, [functorch](#beta-features) was released out-of-tree in a separate package. After installing PyTorch, a user will be able to `import functorch` and use functorch without needing to install another package.", "metadata": {"source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"}} @@ -1657,30 +1661,30 @@ {"page_content": "", "metadata": {"source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"}} {"page_content": "Along with 1.13, we are also releasing major updates to the PyTorch libraries, more details can be found in this [blog](https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/).\n\n## Stable Features\n\n### (Stable) BetterTransformer API\n\nThe [BetterTransformer](https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/) feature set, first released in PyTorch 1.12, is stable. 
PyTorch BetterTransformer supports fastpath execution for common Transformer models during Inference out-of-the-box, without the need to modify the model. To complement the improvements in Better Transformer, we have also accelerated add+matmul linear algebra kernels for sizes commonly used in Transformer models.", "metadata": {"source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"}}
{"page_content": "Reflecting the performance benefits for many NLP users, the use of Nested Tensors for Better Transformer is now enabled by default. To ensure compatibility, a mask check is performed to ensure a contiguous mask is supplied. In Transformer Encoder, the mask check for src_key_padding_mask may be suppressed by setting mask_check=False. This accelerates processing for users that can guarantee that only aligned masks are provided. Finally, better error messages are provided to diagnose incorrect inputs, together with improved diagnostics of why fastpath execution cannot be used.\n\nBetter Transformer is directly integrated into the PyTorch TorchText library, enabling TorchText users to transparently and automatically take advantage of BetterTransformer's speed and efficiency. ([Tutorial](https://pytorch.org/tutorials/beginner/bettertransformer_tutorial.html))\n\n
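A minimal sketch of the inference setup this describes; dimensions are arbitrary, and `enable_nested_tensor` (like the `mask_check` flag mentioned above) is a constructor argument of `nn.TransformerEncoder`:

```python
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=6, enable_nested_tensor=True)
encoder.eval()

src = torch.rand(4, 50, 256)                        # (batch, sequence, feature)
padding_mask = torch.zeros(4, 50, dtype=torch.bool)
padding_mask[:, 40:] = True                         # mark trailing positions as padding

with torch.no_grad():
    # With batch_first, eval mode, and no grad, this call is eligible for the fastpath,
    # and the padding mask lets BetterTransformer exploit the Nested Tensor representation.
    out = encoder(src, src_key_padding_mask=padding_mask)
```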

\n\n ", "metadata": {"source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"}} -{"page_content": "

\nFigure: BetterTransformer fastpath execution is now stable and enables sparsity optimization using Nested Tensor representation as default\n

\n\n### Introduction of CUDA 11.6 and 11.7 and deprecation of CUDA 10.2 and 11.3\n\nTimely deprecating older CUDA versions allows us to proceed with introducing the latest CUDA version as they are introduced by Nvidia\u00ae, and hence allows developers to use the latest features of CUDA and benefit from correctness fixes provided by the latest version.\n\nDecommissioning of CUDA 10.2. CUDA 11 is the first CUDA version to support C++17. Hence decommissioning legacy CUDA 10.2 was a major step in adding support for C++17 in PyTorch. It also helps to improve PyTorch code by eliminating legacy CUDA 10.2 specific instructions.", "metadata": {"source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"}} +{"page_content": " \n\n

\nFigure: BetterTransformer fastpath execution is now stable and enables sparsity optimization using Nested Tensor representation as default\n

\n\n### Introduction of CUDA 11.6 and 11.7 and deprecation of CUDA 10.2 and 11.3\n\nTimely deprecating older CUDA versions allows us to proceed with introducing the latest CUDA version as they are introduced by Nvidia\u00ae, and hence allows developers to use the latest features of CUDA and benefit from correctness fixes provided by the latest version.\n\nDecommissioning of CUDA 10.2. CUDA 11 is the first CUDA version to support C++17. Hence decommissioning legacy CUDA 10.2 was a major step in adding support for C++17 in PyTorch. It also helps to improve PyTorch code by eliminating legacy CUDA 10.2 specific instructions.", "metadata": {"source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"}} {"page_content": "Decommissioning of CUDA 11.3 and introduction of CUDA 11.7 brings compatibility support for the new NVIDIA Open GPU Kernel Modules and another significant highlight is the lazy loading support. CUDA 11.7 is shipped with cuDNN 8.5.0 which contains a number of optimizations accelerating transformer-based models, 30% reduction in library size , and various improvements in the runtime fusion engine. Learn more on CUDA 11.7 with our [release notes](https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html).\n\n## Beta Features\n\n### (Beta) functorch\n\nInspired by [Google\u00ae JAX](https://github.com/google/jax), functorch is a library that offers composable vmap (vectorization) and autodiff transforms. It enables advanced autodiff use cases that would otherwise be tricky to express in PyTorch. Examples include:", "metadata": {"source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"}} {"page_content": "\n\n\nWe\u2019re excited to announce that, as a first step towards closer integration with PyTorch, functorch has moved to inside the PyTorch library and no longer requires the installation of a separate functorch package. After installing PyTorch via conda or pip, you\u2019ll be able to `import functorch\u2019 in your program. Learn more with our [detailed instructions](https://pytorch.org/functorch/1.13/install.html), [nightly](https://pytorch.org/functorch/nightly/) and [release notes](https://github.com/pytorch/pytorch/releases).", "metadata": {"source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"}} {"page_content": "### (Beta) Intel\u00ae VTune\u2122 Profiler's Instrumentation and Tracing Technology APIs (ITT) integration\n\nPyTorch users are able to visualize op-level timeline of PyTorch scripts execution in Intel\u00ae VTune\u2122 Profiler when they need to analyze per-op performance with low-level performance metrics on Intel platforms.\n\n```python\nwith torch.autograd.profiler.emit_itt():\n for i in range(10):\n torch.itt.range_push('step_{}'.format(i))\n model(input)\n torch.itt.range_pop()\n```\n\n \nLearn more with our [tutorial](https://pytorch.org/tutorials/recipes/profile_with_itt.html).\n\n### (Beta) NNC: Add BF16 and Channels last support", "metadata": {"source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"}} -{"page_content": "TorchScript graph-mode inference performance on x86 CPU is boosted by adding channels last and BF16 support to NNC. PyTorch users may benefit from channels last optimization on most popular x86 CPUs and benefit from BF16 optimization on Intel Cooper Lake Processor and Sapphire Rapids Processor. 
>2X geomean performance boost is observed on broad vision models with these two optimizations on Intel Cooper Lake Processor.\n\nThe performance benefit can be obtained with existing TorchScript, channels last and BF16 Autocast APIs. See code snippet below. We will migrate the optimizations in NNC to the new PyTorch DL Compiler TorchInductor.\n\n ", "metadata": {"source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"}}
{"page_content": " \n\n```python\nimport torch\nimport torchvision.models as models\n\n# Load a pretrained torchvision model\nmodel = models.resnet50(pretrained=True)\n# Convert the model to channels-last\nmodel = model.to(memory_format=torch.channels_last)\nmodel.eval()\ndata = torch.rand(1, 3, 224, 224)\n# Convert the data to channels-last\ndata = data.to(memory_format=torch.channels_last)\n# Enable autocast to run with BF16\nwith torch.cpu.amp.autocast(), torch.no_grad():\n    # Trace and freeze the model\n    model = torch.jit.trace(model, torch.rand(1, 3, 224, 224))\n    model = torch.jit.freeze(model)\n    # Run the traced model\n    model(data)\n```\n\n### (Beta) Support for M1 Devices\n\nSince v1.12, PyTorch has been offering native builds for Apple\u00ae silicon machines that use Apple's new M1 chip as a prototype feature. In this release, we bring this feature to beta, providing improved support across PyTorch's APIs.", "metadata": {"source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"}}
{"page_content": "We now run tests for all submodules except `torch.distributed` on M1 macOS 12.6 instances. 
With this improved testing, we were able to fix features such as cpp extension and convolution correctness for certain inputs.\n\nTo get started, just install PyTorch v1.13 on your Apple silicon Mac running macOS 12 or later with a native version (arm64) of Python. Learn more with our [release notes](https://github.com/pytorch/pytorch/releases).\n\n## Prototype Features\n\n\n\n### (Prototype) Arm\u00ae Compute Library (ACL) backend support for AWS Graviton\n\nWe achieved substantial improvements for CV and NLP inference on aarch64 cpu with Arm Compute Library (acl) to enable acl backend for pytorch and torch-xla modules. Highlights include:\n ", "metadata": {"source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"}} {"page_content": "- Enabled mkldnn + acl as the default backend for aarch64 torch wheel.\n- Enabled mkldnn matmul operator for aarch64 bf16 device.\n- Brought TensorFlow xla+acl feature into torch-xla. We enhanced the TensorFlow xla with Arm Compute Library runtime for aarch64 cpu. These changes are included in TensorFlow master and then the upcoming TF 2.10. Once the torch-xla repo is updated for the tensorflow commit, it will have compiling support for torch-xla. We observed ~2.5-3x improvement for MLPerf Bert inference compared to the torch 1.12 wheel on Graviton3.\n\n### (Prototype) CUDA Sanitizer", "metadata": {"source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"}} -{"page_content": "When enabled, the sanitizer begins to analyze low-level CUDA operations invoked as a result of the user\u2019s PyTorch code to detect data race errors caused by unsynchronized data access from different CUDA streams. The errors found are then printed along with stack traces of faulty accesses, much like [Thread Sanitizer](https://clang.llvm.org/docs/ThreadSanitizer.html) does. An example of a simple error and the output produced by the sanitizer can be viewed [here](https://gist.github.com/sypneiwski/5989d634f7090913b80012be835e811d). It will be especially useful for machine learning applications, where corrupted data can be easy to miss for a human and the errors may not always manifest themselves; the sanitizer will always be able to detect them.\n\n### (Prototype) Limited Python 3.11 support", "metadata": {"source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"}} -{"page_content": "Binaries for Linux with Python 3.11 support are available to download via pip. Please follow the instructions on the [get started page](https://pytorch.org/get-started/locally/). Please note that Python 3.11 support is only a preview. In particular, features including Distributed, Profiler, FX and JIT might not be fully functional yet.", "metadata": {"source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"}} +{"page_content": "### (Prototype) CUDA Sanitizer\n\nWhen enabled, the sanitizer begins to analyze low-level CUDA operations invoked as a result of the user\u2019s PyTorch code to detect data race errors caused by unsynchronized data access from different CUDA streams. The errors found are then printed along with stack traces of faulty accesses, much like [Thread Sanitizer](https://clang.llvm.org/docs/ThreadSanitizer.html) does. An example of a simple error and the output produced by the sanitizer can be viewed [here](https://gist.github.com/sypneiwski/5989d634f7090913b80012be835e811d). 
It will be especially useful for machine learning applications, where corrupted data can be easy to miss for a human and the errors may not always manifest themselves; the sanitizer will always be able to detect them.\n\n### (Prototype) Limited Python 3.11 support", "metadata": {"source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"}} +{"page_content": "### (Prototype) Limited Python 3.11 support\n\nBinaries for Linux with Python 3.11 support are available to download via pip. Please follow the instructions on the [get started page](https://pytorch.org/get-started/locally/). Please note that Python 3.11 support is only a preview. In particular, features including Distributed, Profiler, FX and JIT might not be fully functional yet.", "metadata": {"source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: \"Celebrate PyTorch 2.0 with New Performance Features for AI Developers\"\nauthor: Intel\n---\n\nCongratulations to the PyTorch Foundation for its release of **PyTorch 2.0**! In this blog, I discuss the four features for which Intel made significant contributions to PyTorch 2.0:\n\n1. TorchInductor\n2. GNN\n3. INT8 Inference Optimization\n4. oneDNN Graph API\n\nWe at Intel are delighted to be part of the PyTorch community and appreciate the collaboration with and feedback from our colleagues at [Meta](http://www.meta.com/) as we co-developed these features.\n\n\nLet\u2019s get started.\n\n\n## 1. TorchInductor CPU FP32 Inference Optimized\n\n\nAs part of the PyTorch 2.0 compilation stack, TorchInductor CPU backend optimization brings notable performance improvements via graph compilation over the PyTorch eager mode.", "metadata": {"source": "https://pytorch.org/blog/celebrate-pytorch-2.0/", "category": "pytorch blogs"}} {"page_content": "The TorchInductor CPU backend is sped up by leveraging the technologies from the [Intel\u00ae Extension for PyTorch](http://github.com/intel/intel-extension-for-pytorch) for Conv/GEMM ops with post-op fusion and weight prepacking, and PyTorch ATen CPU kernels for memory-bound ops with explicit vectorization on top of OpenMP*-based thread parallelization.\n\n\nWith these optimizations on top of the powerful loop fusions in TorchInductor codegen, we achieved up to a **1.7x** FP32 inference performance boost over three representative deep learning benchmarks: TorchBench, HuggingFace, and timm1. Training and low-precision support are under development.\n\n\n### See the Improvements\n\n\nThe performance improvements on various backends are tracked on this [TouchInductor CPU Performance Dashboard](http://github.com/pytorch/pytorch/issues/93531#issuecomment-1457373890).\n\n\n## Improve Graph Neural Network (GNN) in PyG for Inference and Training Performance on CPU", "metadata": {"source": "https://pytorch.org/blog/celebrate-pytorch-2.0/", "category": "pytorch blogs"}} {"page_content": "GNN is a powerful tool to analyze graph structure data. This feature is designed to improve GNN inference and training performance on Intel\u00ae CPUs, including the new 4th Gen Intel\u00ae Xeon\u00ae Scalable processors.\n\n\nPyTorch Geometric (PyG) is a very popular library built upon PyTorch to perform GNN workflows. Currently on CPU, GNN models of PyG run slowly due to the lack of GNN-related sparse matrix multiplication operations (i.e., SpMM_reduce) and the lack of several critical kernel-level optimizations (scatter/gather, etc.) 
tuned for GNN compute.\n\n\nTo address this, optimizations are provided for message passing between adjacent neural network nodes:", "metadata": {"source": "https://pytorch.org/blog/celebrate-pytorch-2.0/", "category": "pytorch blogs"}} {"page_content": "* **scatter_reduce:** performance hotspot in message-passing when the edge index is stored in coordinate format (COO).\n* **gather:** backward computation of scatter_reduce, specially tuned for the GNN compute when the index is an expanded tensor.\n* **torch.sparse.mm with reduce flag:** performance hotspot in message-passing when the edge index is stored in compressed sparse row (CSR). Supported reduce flag for: sum, mean, amax, amin.\n\nEnd-to-end performance benchmark results for both inference and training on 3rd Gen Intel\u00ae Xeon\u00ae Scalable processors 8380 platform and on 4th Gen 8480+ platform are discussed in [Accelerating PyG on Intel CPUs](http://www.pyg.org/ns-newsarticle-accelerating-pyg-on-intel-cpus).\n\n\n## Optimize int8 Inference with Unified Quantization Backend for x86 CPU Platforms", "metadata": {"source": "https://pytorch.org/blog/celebrate-pytorch-2.0/", "category": "pytorch blogs"}} {"page_content": "The new X86 quantization backend is a combination of [FBGEMM](http://github.com/pytorch/FBGEMM) (Facebook General Matrix-Matrix Multiplication) and [oneAPI Deep Neural Network Library (oneDNN](http://spec.oneapi.io/versions/latest/elements/oneDNN/source/index.html)) backends and replaces FBGEMM as the default quantization backend for x86 platforms. The result: better end-to-end int8 inference performance than FBGEMM.\n\n\nUsers access the x86 quantization backend by default for x86 platforms, and the selection between different kernels is automatically done behind the scenes. The rules of selection are based on prior performance testing data done by Intel during feature development. Thus, the x86 backend replaces FBGEMM and may offer better performance, depending on the use case.\n\n\nThe selection rules are:", "metadata": {"source": "https://pytorch.org/blog/celebrate-pytorch-2.0/", "category": "pytorch blogs"}} -{"page_content": "* On platforms without VNNI (e.g., Intel\u00ae Core\u2122 i7 processors), FBGEMM is always used.\n* On platforms with VNNI (e.g., 2nd-4th Gen Intel\u00ae Xeon\u00ae Scalable processors and future platforms):\n * For linear, FBGEMM is always used.\n * For convolution layers, FBGEMM is used for depth-wise convolution whose layers > 100; otherwise, oneDNN is used.\n\nNote that as the kernels continue to evolve.\n\n\nThe selection rules above are subject to change to achieve better performance. Performance metrics for through-put speed-up ratios of unified x86 backend vs. 
pure FBGEMM are discussed in [[RFC] Unified quantization backend for x86 CPU platforms #83888](http://github.com/pytorch/pytorch/issues/83888).\n\n\n## Leverage oneDNN Graph API to Accelerate Inference on CPU", "metadata": {"source": "https://pytorch.org/blog/celebrate-pytorch-2.0/", "category": "pytorch blogs"}} +{"page_content": "The selection rules are:\n\n* On platforms without VNNI (e.g., Intel\u00ae Core\u2122 i7 processors), FBGEMM is always used.\n* On platforms with VNNI (e.g., 2nd-4th Gen Intel\u00ae Xeon\u00ae Scalable processors and future platforms):\n * For linear, FBGEMM is always used.\n * For convolution layers, FBGEMM is used for depth-wise convolution whose layers > 100; otherwise, oneDNN is used.\n\nNote that as the kernels continue to evolve.\n\n\nThe selection rules above are subject to change to achieve better performance. Performance metrics for through-put speed-up ratios of unified x86 backend vs. pure FBGEMM are discussed in [[RFC] Unified quantization backend for x86 CPU platforms #83888](http://github.com/pytorch/pytorch/issues/83888).\n\n\n## Leverage oneDNN Graph API to Accelerate Inference on CPU", "metadata": {"source": "https://pytorch.org/blog/celebrate-pytorch-2.0/", "category": "pytorch blogs"}} {"page_content": "[oneDNN Graph API](http://spec.oneapi.io/onednn-graph/latest/introduction.html) extends [oneDNN](http://spec.oneapi.io/versions/latest/elements/oneDNN/source/index.html) with a flexible graph API to maximize the optimization opportunity for generating efficient code on Intel\u00ae AI hardware. It automatically identifies the graph partitions to be accelerated via fusion. The [fusion patterns](http://github.com/oneapi-src/oneDNN/blob/dev-graph/doc/programming_model/ops_and_patterns.md#fusion-patterns) focus on fusing compute-intensive operations such as convolution, matmul, and their neighbor operations for both inference and training use cases.\n\n\nCurrently, BFloat16 and Float32 datatypes are supported and only inference workloads can be optimized. BF16 is only optimized on machines with Intel\u00ae Advanced Vector Extensions 512 (Intel\u00ae AVX-512) BF16 support.\n\n\nFew or no modifications are needed in PyTorch to support newer oneDNN Graph fusions/optimized kernels. To use oneDNN Graph, users can:", "metadata": {"source": "https://pytorch.org/blog/celebrate-pytorch-2.0/", "category": "pytorch blogs"}} {"page_content": "* Either use the API _torch.jit.enable_onednn_fusion(True)_ before JIT tracing a model, OR \u2026\n* Use its context manager, viz. 
_with torch.jit.fuser(\u201cfuser3\u201d)._\n* For accelerating [BFloat16 inference](http://github.com/pytorch/pytorch/tree/master/torch/csrc/jit/codegen/onednn#example-with-bfloat16), we rely on eager-mode AMP (Automatic Mixed Precision) support in PyTorch and disable JIT mode\u2019s AMP.\n\nSee the [PyTorch performance tuning guide](http://pytorch.org/tutorials/recipes/recipes/tuning_guide.html#use-onednn-graph-with-torchscript-for-inference).\n\n\n## Next Steps\n\n\n### Get the Software\n\n\n[Try out PyTorch 2.0](http://pytorch.org/get-started/locally/) and realize the performance benefits for yourself from these Intel-contributed features.", "metadata": {"source": "https://pytorch.org/blog/celebrate-pytorch-2.0/", "category": "pytorch blogs"}} {"page_content": "We encourage you to check out Intel\u2019s other [AI Tools](https://www.intel.com/content/www/us/en/developer/topic-technology/artificial-intelligence/tools.html) and [Framework](https://www.intel.com/content/www/us/en/developer/tools/frameworks/overview.html) optimizations and learn about the open, standards-based [oneAPI](https://www.intel.com/content/www/us/en/developer/tools/oneapi/overview.html) multiarchitecture, multivendor programming model that forms the foundation of Intel\u2019s AI software portfolio.\n\n\nFor more details about 4th Gen Intel Xeon Scalable processor, visit [AI Platform](https://www.intel.com/content/www/us/en/developer/topic-technology/artificial-intelligence/platform.html) where you can learn about how Intel is empowering developers to run high-performance, efficient end-to-end AI pipelines.\n\n\n### PyTorch Resources", "metadata": {"source": "https://pytorch.org/blog/celebrate-pytorch-2.0/", "category": "pytorch blogs"}} -{"page_content": "* [PyTorch Get Started](http://pytorch.org/get-started/pytorch-2.0/)\n* [Dev Discussions](http://dev-discuss.pytorch.org/t/pytorch-release-2-0-execution-update/1077)\n* [Documentation](http://pytorch.org/docs/2.0/)", "metadata": {"source": "https://pytorch.org/blog/celebrate-pytorch-2.0/", "category": "pytorch blogs"}} +{"page_content": "### PyTorch Resources\n\n* [PyTorch Get Started](http://pytorch.org/get-started/pytorch-2.0/)\n* [Dev Discussions](http://dev-discuss.pytorch.org/t/pytorch-release-2-0-execution-update/1077)\n* [Documentation](http://pytorch.org/docs/2.0/)", "metadata": {"source": "https://pytorch.org/blog/celebrate-pytorch-2.0/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: \"What Every User Should Know About Mixed Precision Training in PyTorch\"\nauthor: Syed Ahmed, Christian Sarofeen, Mike Ruberry, Eddie Yan, Natalia Gimelshein, Michael Carilli, Szymon Migacz, Piotr Bialecki, Paulius Micikevicius, Dusan Stosic, Dong Yang, and Naoya Maruyama\nfeatured-img: ''\n---", "metadata": {"source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"}} {"page_content": "Efficient training of modern neural networks often relies on using lower precision data types. Peak float16 matrix multiplication and convolution performance is 16x faster than peak float32 performance on A100 GPUs. And since the float16 and bfloat16 data types are only half the size of float32 they can double the performance of bandwidth-bound kernels and reduce the memory required to train a network, allowing for larger models, larger batches, or larger inputs. 
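The storage claim is easy to verify directly; a trivial illustration:

```python
import torch

# float16 and bfloat16 take 2 bytes per element while float32 takes 4 --
# the source of the memory and bandwidth savings described above.
for dtype in (torch.float32, torch.float16, torch.bfloat16):
    print(dtype, torch.empty((), dtype=dtype).element_size(), "bytes per element")
```
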
Using a module like [torch.amp](https://pytorch.org/docs/master/amp.html) (short for \u201cAutomated Mixed Precision\u201d) makes it easy to get the speed and memory usage benefits of lower precision data types while preserving convergence behavior.", "metadata": {"source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"}} {"page_content": "Going faster and using less memory is always advantageous \u2013 deep learning practitioners can test more model architectures and hyperparameters, and larger, more powerful models can be trained. Training very large models like those described in [Narayanan et al.](https://arxiv.org/pdf/2104.04473.pdf) and [Brown et al.](https://arxiv.org/pdf/2005.14165.pdf) (which take thousands of GPUs months to train even with expert handwritten optimizations) is infeasible without using mixed precision.\n\nWe\u2019ve talked about mixed precision techniques before ([here](https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/), [here](https://docs.nvidia.com/deeplearning/performance/mixed-precision-training/index.html), and [here](https://developer.nvidia.com/automatic-mixed-precision)), and this blog post is a summary of those techniques and an introduction if you\u2019re new to mixed precision.\n\n## Mixed Precision Training in Practice", "metadata": {"source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"}} -{"page_content": "Mixed precision training techniques \u2013 the use of the lower precision float16 or bfloat16 data types alongside the float32 data type \u2013 are broadly applicable and effective. See Figure 1 for a sampling of models successfully trained with mixed precision, and Figures 2 and 3 for example speedups using torch.amp.\n\n

\n \n

\n\n

\n Figure 1: Sampling of DL Workloads Successfully Trained with float16 (Source).\n

\n\n

\n \n

", "metadata": {"source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"}} +{"page_content": "## Mixed Precision Training in Practice\n\nMixed precision training techniques \u2013 the use of the lower precision float16 or bfloat16 data types alongside the float32 data type \u2013 are broadly applicable and effective. See Figure 1 for a sampling of models successfully trained with mixed precision, and Figures 2 and 3 for example speedups using torch.amp.\n\n

\n \n

\n\n

\n Figure 1: Sampling of DL Workloads Successfully Trained with float16 (Source).\n

\n\n

\n \n

", "metadata": {"source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"}} {"page_content": "

\n Figure 2: Performance of mixed precision training using torch.amp on NVIDIA 8xV100 vs. float32 training on 8xV100 GPU. Bars represent the speedup factor of torch.amp over float32. \n(Higher is better.) (Source).\n

\n\n

\n \n

\n\n

\n Figure 3. Performance of mixed precision training using torch.amp on NVIDIA 8xA100 vs. 8xV100 GPU. Bars represent the speedup factor of A100 over V100.\n(Higher is Better.) (Source).\n

\n\nSee the [NVIDIA Deep Learning Examples repository](https://github.com/NVIDIA/DeepLearningExamples) for more sample mixed precision workloads.", "metadata": {"source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"}} {"page_content": "Similar performance charts can be seen in [3D medical image analysis](https://nvlabs.github.io/eccv2020-mixed-precision-tutorial/files/dong_yang-mixed-precision-training-for-3d-medical-image-analysis.pdf), [gaze estimation](https://nvlabs.github.io/eccv2020-mixed-precision-tutorial/files/shalini_de_mello-mixed-precision-training-for-faze.pdf), [video synthesis](https://nvlabs.github.io/eccv2020-mixed-precision-tutorial/files/tingchun_wang-mixed-precision-vid2vid.pdf), [conditional GANs](https://nvlabs.github.io/eccv2020-mixed-precision-tutorial/files/mingyu_liu-amp-imaginaire.pdf), and [convolutional LSTMs](https://nvlabs.github.io/eccv2020-mixed-precision-tutorial/files/wonmin_byeon-mixed-precision-training-for-convolutional-tensor-train-lstm.pdf). [Huang et al](https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/). showed that mixed precision training is 1.5x to 5.5x faster over float32 on V100 GPUs, and an additional 1.3x to 2.5x faster on A100 GPUs on a variety of networks. On very large networks the need for mixed precision is even more evident. [Narayanan et al](https://arxiv.org/pdf/2104.04473.pdf). reports that it would take 34 days to train GPT-3 175B on 1024 A100 GPUs (with a batch size of 1536), but it\u2019s estimated it would take over a year using float32!", "metadata": {"source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"}} {"page_content": "## Getting Started With Mixed Precision Using torch.amp\n\ntorch.amp, introduced in PyTorch 1.6, makes it easy to leverage mixed precision training using the float16 or bfloat16 dtypes. See this [blog post](https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/), [tutorial](https://pytorch.org/tutorials/recipes/recipes/amp_recipe.html), and [documentation](https://pytorch.org/docs/master/amp.html) for more details. Figure 4 shows an example of applying AMP with grad scaling to a network.\n\n```console\nimport torch\n# Creates once at the beginning of training\nscaler = torch.cuda.amp.GradScaler()\n\nfor data, label in data_iter:\n optimizer.zero_grad()\n # Casts operations to mixed precision\n with torch.amp.autocast(device_type=\u201ccuda\u201d, dtype=torch.float16):\n loss = model(data)\n\n # Scales the loss, and calls backward()\n # to create scaled gradients\n scaler.scale(loss).backward()", "metadata": {"source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"}} @@ -1692,9 +1696,9 @@ {"page_content": "

\nFigure 5: Relative peak throughput of float16 (FP16) vs float32 matrix multiplications on Volta and Ampere GPUs. On Ampere relative peak throughput for the TensorFloat32 (TF32) mode and bfloat16 matrix multiplications are shown, too. The relative peak throughput of low precision data types like float16 and bfloat16 vs. float32 matrix multiplications is expected to grow as new hardware is released.\n

\n\nPyTorch\u2019s torch.amp module makes it easy to get started with mixed precision, and we highly recommend using it to train faster and reduce memory usage. torch.amp supports both float16 and bfloat16 mixed precision.\n\nThere are still some networks that are tricky to train with mixed precision, and for these networks we recommend trying TF32 accelerated matrix multiplications on Ampere and later CUDA hardware. Networks are rarely so precision sensitive that they require full float32 precision for every operation.", "metadata": {"source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"}} {"page_content": "If you have questions or suggestions for torch.amp or mixed precision support in PyTorch then let us know by posting to the [mixed precision category on the PyTorch Forums](https://discuss.pytorch.org/c/mixed-precision/27) or [filing an issue on the PyTorch GitHub page](https://github.com/pytorch/pytorch/issues/new/choose).", "metadata": {"source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'Everything you need to know about TorchVision\u2019s MobileNetV3 implementation'\nauthor: Vasilis Vryniotis and Francisco Massa\n---\n\nIn TorchVision v0.9, we released a series of [new mobile-friendly models](https://pytorch.org/blog/ml-models-torchvision-v0.9/) that can be used for Classification, Object Detection and Semantic Segmentation. In this article, we will dig deep into the code of the models, share notable implementation details, explain how we configured and trained them, and highlight important tradeoffs we made during their tuning. Our goal is to disclose technical details that typically remain undocumented in the original papers and repos of the models.\n\n### Network Architecture", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} -{"page_content": "The implementation of the [MobileNetV3 architecture](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py) follows closely the [original paper](https://arxiv.org/abs/1905.02244). It is customizable and offers different configurations for building Classification, Object Detection and Semantic Segmentation backbones. It was designed to follow a similar structure to MobileNetV2 and the two share [common building blocks](https://github.com/pytorch/vision/blob/cac8a97b0bd14eddeff56f87a890d5cc85776e18/torchvision/models/mobilenetv2.py#L32).", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} +{"page_content": "### Network Architecture\n\nThe implementation of the [MobileNetV3 architecture](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py) follows closely the [original paper](https://arxiv.org/abs/1905.02244). It is customizable and offers different configurations for building Classification, Object Detection and Semantic Segmentation backbones. 
It was designed to follow a similar structure to MobileNetV2 and the two share [common building blocks](https://github.com/pytorch/vision/blob/cac8a97b0bd14eddeff56f87a890d5cc85776e18/torchvision/models/mobilenetv2.py#L32).", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} {"page_content": "Off-the-shelf, we offer the two variants described on the paper: the [Large](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py#L196-L214) and the [Small](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py#L215-L229). Both are constructed using the same code with the only difference being their configuration which describes the number of blocks, their sizes, their activation functions etc.\n\n### Configuration parameters", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} -{"page_content": "Even though one can write a [custom InvertedResidual setting](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py#L105) and pass it to the MobileNetV3 class directly, for the majority of applications we can adapt the existing configs by passing parameters to the [model building methods](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py#L253). Some of the key configuration parameters are the following:", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} +{"page_content": "### Configuration parameters\n\nEven though one can write a [custom InvertedResidual setting](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py#L105) and pass it to the MobileNetV3 class directly, for the majority of applications we can adapt the existing configs by passing parameters to the [model building methods](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py#L253). Some of the key configuration parameters are the following:", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} {"page_content": "- The `width_mult` [parameter](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py#L188) is a multiplier that affects the number of channels of the model. The default value is 1 and by increasing or decreasing it one can change the number of filters of all convolutions, including the ones of the first and last layers. The implementation ensures that the number of filters is always a [multiple of 8](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py#L56-L57). 
This is a hardware optimization trick which allows for faster vectorization of operations.", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} {"page_content": "- The `reduced_tail` [parameter](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py#L188) halves the number of channels on the [last blocks](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py#L210-L214) of the network. This version is used by some Object Detection and Semantic Segmentation models. It\u2019s a speed optimization which is described on the [MobileNetV3 paper](https://arxiv.org/abs/1905.02244) and reportedly leads to a 15% latency reduction without a significant negative effect on accuracy.", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} {"page_content": "- The `dilated` [parameter](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py#L188) affects the [last 3](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py#L210-L212) InvertedResidual blocks of the model and turns their normal depthwise Convolutions to Atrous Convolutions. This is used to control the output stride of these blocks and has a [significant positive effect](https://arxiv.org/abs/1706.05587) on the accuracy of Semantic Segmentation models.\n\n### Implementation details\n\nBelow we provide additional information on some notable implementation details of the architecture.\nThe [MobileNetV3 class](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py#L101) is responsible for building a network out of the provided configuration. Here are some implementation details of the class:", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} @@ -1703,7 +1707,7 @@ {"page_content": "### Classification\n\nIn this section we provide benchmarks of the pre-trained models and details on how they were configured, trained and quantized.\n\n**Benchmarks**\n\nHere is how to initialize the pre-trained models:\n```\nlarge = torchvision.models.mobilenet_v3_large(pretrained=True, width_mult=1.0, reduced_tail=False, dilated=False)\nsmall = torchvision.models.mobilenet_v3_small(pretrained=True)\nquantized = torchvision.models.quantization.mobilenet_v3_large(pretrained=True)\n```\n\nBelow we have the detailed benchmarks between new and selected previous models. 
As we can see MobileNetV3-Large is a viable replacement of ResNet50 for users who are willing to sacrifice a bit of accuracy for a roughly 6x speed-up:", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} {"page_content": "| Model | Acc@1 | Acc@5 | Inference on CPU (sec) | # Params (M) |\n|-----------------------------|--------:|--------:|------------------------:|--------------:|\n| MobileNetV3-Large | 74.042 | 91.340 | 0.0411 | 5.48 |\n| MobileNetV3-Small | 67.668 | 87.402 | 0.0165 | 2.54 |\n| Quantized MobileNetV3-Large | 73.004 | 90.858 | 0.0162 | 2.96 |\n| MobileNetV2 | 71.880 | 90.290 | 0.0608 | 3.50 |\n| ResNet50 | 76.150 | 92.870 | 0.2545 | 25.56 |\n| ResNet18 | 69.760 | 89.080 | 0.1032 | 11.69 |\n\nNote that the inference times are measured on CPU. They are not absolute benchmarks, but they allow for relative comparisons between models.\n\n**Training process**", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} {"page_content": "**Training process**\n\nAll pre-trained models are configured with a width multiplier of 1, have full tails, are non-dilated, and were fitted on ImageNet. Both the Large and Small variants were trained using the same hyper-parameters and scripts which can be found in our [references](https://github.com/pytorch/vision/tree/c2ab0c59f42babf9ad01aa616cd8a901daac86dd/references/classification#mobilenetv3-large--small) folder. Below we provide details on the most notable aspects of the training process.\n\n **Achieving fast and stable training**", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} -{"page_content": "[Configuring RMSProp](https://github.com/pytorch/vision/blob/c2ab0c59f42babf9ad01aa616cd8a901daac86dd/references/classification/train.py#L172-L173) correctly was crucial to achieve fast training with numerical stability. The authors of the paper used TensorFlow in their experiments and in their runs they reported using [quite high](https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet#v3) `rmsprop_epsilon` comparing to the default. Typically this hyper-parameter takes small values as it\u2019s used to avoid zero denominators, but in this specific model choosing the right value seems important to avoid numerical instabilities in the loss.", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} +{"page_content": "**Achieving fast and stable training**\n\n[Configuring RMSProp](https://github.com/pytorch/vision/blob/c2ab0c59f42babf9ad01aa616cd8a901daac86dd/references/classification/train.py#L172-L173) correctly was crucial to achieve fast training with numerical stability. The authors of the paper used TensorFlow in their experiments and in their runs they reported using [quite high](https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet#v3) `rmsprop_epsilon` comparing to the default. 
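In PyTorch terms this simply means constructing the optimizer with a much larger `eps` than the 1e-08 default; a minimal sketch, where the concrete numbers are illustrative rather than the exact published recipe:

```python
import torch
import torchvision

model = torchvision.models.mobilenet_v3_large()
# eps below is orders of magnitude larger than torch.optim.RMSprop's 1e-08 default;
# lr/alpha/momentum/weight_decay values are placeholders for illustration.
optimizer = torch.optim.RMSprop(
    model.parameters(),
    lr=0.064,
    alpha=0.9,
    momentum=0.9,
    weight_decay=1e-5,
    eps=0.0316,
)
```
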
Typically this hyper-parameter takes small values as it\u2019s used to avoid zero denominators, but in this specific model choosing the right value seems important to avoid numerical instabilities in the loss.", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} {"page_content": "Another important detail is that though PyTorch\u2019s and TensorFlow\u2019s RMSProp implementations typically behave similarly, there are [a few differences](https://github.com/pytorch/pytorch/issues/32545) with the most notable in our setup being how the epsilon hyperparameter is handled. More specifically, PyTorch adds the epsilon [outside of the square root calculation](https://github.com/tensorflow/tensorflow/blob/v2.5.0/tensorflow/python/training/rmsprop.py#L25) while TensorFlow [adds it inside](https://github.com/tensorflow/tensorflow/blob/v2.5.0/tensorflow/python/training/rmsprop.py#L25). The result of this implementation detail is that one needs to adjust the epsilon value while porting the hyper parameter of the paper. A reasonable approximation can be taken with the formula `PyTorch_eps = sqrt(TF_eps)`.\n\n**Increasing our accuracy by tuning hyperparameters & improving our training recipe**", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} {"page_content": "After configuring the optimizer to achieve fast and stable training, we turned into optimizing the accuracy of the model. There are a few techniques that helped us achieve this. First of all, to avoid overfitting we augmented out data using the AutoAugment algorithm, followed by RandomErasing. Additionally we tuned parameters such as the weight decay using cross validation. We also found beneficial to perform [weight averaging](https://github.com/pytorch/vision/blob/674e8140042c2a3cbb1eb9ebad1fa49501599130/references/classification/utils.py#L259) across different epoch checkpoints after the end of the training. Finally, though not used in our published training recipe, we found that using Label Smoothing, Stochastic Depth and LR noise injection improve the overall accuracy by over [1.5 points](https://rwightman.github.io/pytorch-image-models/training_hparam_examples/#mobilenetv3-large-100-75766-top-1-92542-top-5).", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} {"page_content": "The graph and table depict a simplified summary of the most important iterations for improving the accuracy of the MobileNetV3 Large variant. Note that the actual number of iterations done while training the model was significantly larger and that the progress in accuracy was not always monotonically increasing. Also note that the Y-axis of the graph starts from 70% instead from 0% to make the difference between iterations more visible:\n\n
\n \n
", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} @@ -1712,17 +1716,17 @@ {"page_content": "| Quantization Status | Acc@1 | Acc@5 |\n|----------------------------|--------:|--------:|\n| Non-quantized | 74.042 | 91.340 |\n| Quantized Aware Training | 73.004 | 90.858 |\n| Post-training Quantization | 71.160 | 89.834 |\n\n### Object Detection\n\nIn this section, we will first provide benchmarks of the released models, and then discuss how the MobileNetV3-Large backbone was used in a Feature Pyramid Network along with the FasterRCNN detector to perform Object Detection. We will also explain how the network was trained and tuned alongside with any tradeoffs we had to make. We will not cover details about how it was used with [SSDlite](https://github.com/pytorch/vision/blob/b94a4014a68d08f37697f4672729571a46f0042d/torchvision/models/detection/ssdlite.py) as this will be discussed on a future article.\n\n**Benchmarks**", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} {"page_content": "**Benchmarks**\n\nHere is how the models are initialized:\n```\nhigh_res = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(pretrained=True) \nlow_res = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_320_fpn(pretrained=True)\n```\n\nBelow are some benchmarks between new and selected previous models. As we can see the high resolution Faster R-CNN with MobileNetV3-Large FPN backbone seems a viable replacement of the equivalent ResNet50 model for those users who are willing to sacrifice few accuracy points for a 5x speed-up:", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} {"page_content": "| Model | mAP | Inference on CPU (sec) | # Params (M) |\n|--------------------------------------------------|------:|------------------------:|--------------:|\n| Faster R-CNN MobileNetV3-Large FPN (High-Res) | 32.8 | 0.8409 | 19.39 |\n| Faster R-CNN MobileNetV3-Large 320 FPN (Low-Res) | 22.8 | 0.1679 | 19.39 |\n| Faster R-CNN ResNet-50 FPN | 37.0 | 4.1514 | 41.76 |\n| RetinaNet ResNet-50 FPN | 36.4 | 4.8825 | 34.01 |\n\n**Implementation details**", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} -{"page_content": "The Detector uses a FPN-style backbone which extracts features from different convolutions of the MobileNetV3 model. [By default](https://github.com/pytorch/vision/blob/eca37cf735064702189ff5d5b1428cbe25ab2bcf/torchvision/models/detection/backbone_utils.py#L165-L166) the pre-trained model uses the output of the 13th InvertedResidual block and the output of the Convolution prior to the pooling layer but the implementation supports using the outputs of [more stages](https://github.com/pytorch/vision/blob/eca37cf735064702189ff5d5b1428cbe25ab2bcf/torchvision/models/detection/backbone_utils.py#L147-L150).", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} +{"page_content": "**Implementation details**\n\nThe Detector uses a FPN-style backbone which extracts features from different convolutions of the MobileNetV3 model. 
[By default](https://github.com/pytorch/vision/blob/eca37cf735064702189ff5d5b1428cbe25ab2bcf/torchvision/models/detection/backbone_utils.py#L165-L166) the pre-trained model uses the output of the 13th InvertedResidual block and the output of the Convolution prior to the pooling layer but the implementation supports using the outputs of [more stages](https://github.com/pytorch/vision/blob/eca37cf735064702189ff5d5b1428cbe25ab2bcf/torchvision/models/detection/backbone_utils.py#L147-L150).", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} {"page_content": "All feature maps extracted from the network have their output projected down to [256 channels](https://github.com/pytorch/vision/blob/eca37cf735064702189ff5d5b1428cbe25ab2bcf/torchvision/models/detection/backbone_utils.py#L160) by the FPN block as this greatly improves the speed of the network. These feature maps provided by the FPN backbone are used by the FasterRCNN detector to provide box and class predictions at [different scales](https://github.com/pytorch/vision/blob/7af30ee9ab64039d04150d118e8b72473184fd6e/torchvision/models/detection/faster_rcnn.py#L382-L389).\n\n**Training & Tuning process**\n\nWe currently offer two pre-trained models capable of doing object detection at different resolutions. Both models were trained on the COCO dataset using the same hyper-parameters and scripts which can be found in our [references](https://github.com/pytorch/vision/tree/e35793a1a4000db1f9f99673437c514e24e65451/references/detection#faster-r-cnn-mobilenetv3-large-fpn) folder.", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} {"page_content": "The [High Resolution detector](https://github.com/pytorch/vision/blob/7af30ee9ab64039d04150d118e8b72473184fd6e/torchvision/models/detection/faster_rcnn.py#L398-L399) was trained with images of 800-1333px, while the mobile-friendly [Low Resolution detector](https://github.com/pytorch/vision/blob/7af30ee9ab64039d04150d118e8b72473184fd6e/torchvision/models/detection/faster_rcnn.py#L398-L399) was trained with images of 320-640px. The reason why we provide two separate sets of pre-trained weights is because training a detector directly on the smaller images leads to a 5 mAP increase in precision comparing to passing small images to the pre-trained high-res model. Both backbones were initialized with weights fitted on ImageNet and the [3 last stages](https://github.com/pytorch/vision/blob/7af30ee9ab64039d04150d118e8b72473184fd6e/torchvision/models/detection/faster_rcnn.py#L377-L378) of their weights where fined-tuned during the training process.", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} {"page_content": "An additional speed optimization can be applied on the mobile-friendly model by [tuning the RPN NMS thresholds](https://github.com/pytorch/vision/blob/7af30ee9ab64039d04150d118e8b72473184fd6e/torchvision/models/detection/faster_rcnn.py#L423-L424). By sacrificing only 0.2 mAP of precision we were able to improve the CPU speed of the model by roughly 45%. The details of the optimization can be seen below:\n\n| Tuning Status | mAP | Inference on CPU (sec) |\n|---------------|------:|------------------------:|\n| Before | 23.0 | 0.2904 |\n| After | 22.8 | 0.1679 |\n\nBelow we provide some examples of visualizing the predictions of the Faster R-CNN MobileNetV3-Large FPN model:\n\n
\n \n
\n\n### Semantic Segmentation", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} -{"page_content": "In this section we will start by providing some benchmarks of the released pre-trained models. Then we will discuss how a MobileNetV3-Large backbone was combined with segmentation heads such as [LR-ASPP](https://arxiv.org/abs/1905.02244), [DeepLabV3](https://arxiv.org/abs/1706.05587) and the [FCN](https://arxiv.org/abs/1411.4038) to conduct Semantic Segmentation. We will also explain how the network was trained and propose a few optional optimization techniques for speed critical applications.\n\n**Benchmarks**\n\nThis is how to initialize the pre-trained models:\n\n```\nlraspp = torchvision.models.segmentation.lraspp_mobilenet_v3_large(pretrained=True) \ndeeplabv3 = torchvision.models.segmentation.deeplabv3_mobilenet_v3_large(pretrained=True)\n```", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} +{"page_content": "### Semantic Segmentation\n\nIn this section we will start by providing some benchmarks of the released pre-trained models. Then we will discuss how a MobileNetV3-Large backbone was combined with segmentation heads such as [LR-ASPP](https://arxiv.org/abs/1905.02244), [DeepLabV3](https://arxiv.org/abs/1706.05587) and the [FCN](https://arxiv.org/abs/1411.4038) to conduct Semantic Segmentation. We will also explain how the network was trained and propose a few optional optimization techniques for speed critical applications.\n\n**Benchmarks**\n\nThis is how to initialize the pre-trained models:\n\n```\nlraspp = torchvision.models.segmentation.lraspp_mobilenet_v3_large(pretrained=True) \ndeeplabv3 = torchvision.models.segmentation.deeplabv3_mobilenet_v3_large(pretrained=True)\n```", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} {"page_content": "Below are the detailed benchmarks between new and selected existing models. As we can see, the DeepLabV3 with a MobileNetV3-Large backbone is a viable replacement of FCN with ResNet50 for the majority of applications as it achieves similar accuracy with a 8.5x speed-up. We also observe that the LR-ASPP network supersedes the equivalent FCN in all metrics:", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} {"page_content": "| Model | mIoU | Global Pixel Acc | Inference on CPU (sec) | # Params (M) |\n|--------------------------------------|------:|------------------:|------------------------:|--------------:|\n| LR-ASPP MobileNetV3-Large | 57.9 | 91.2 | 0.3278 | 3.22 |\n| DeepLabV3 MobileNetV3-Large | 60.3 | 91.2 | 0.5869 | 11.03 |\n| FCN MobileNetV3-Large (not released) | 57.8 | 90.9 | 0.3702 | 5.05 |\n| DeepLabV3 ResNet50 | 66.4 | 92.4 | 6.3531 | 39.64 |\n| FCN ResNet50 | 60.5 | 91.4 | 5.0146 | 32.96 |\n\n### Implementation details\n\nIn this section we will discuss important implementation details of tested segmentation heads. Note that all models described in this section use a dilated MobileNetV3-Large backbone.\n\n**LR-ASPP**", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} {"page_content": "**LR-ASPP**\n\nThe LR-ASPP is the Lite variant of the Reduced Atrous Spatial Pyramid Pooling model proposed by the authors of the MobileNetV3 paper. 
Unlike the other segmentation models in TorchVision, it does not make use of an [auxiliary loss](https://github.com/pytorch/vision/blob/b94a4014a68d08f37697f4672729571a46f0042d/torchvision/models/segmentation/segmentation.py#L185-L186). Instead it uses [low and high-level features](https://github.com/pytorch/vision/blob/b94a4014a68d08f37697f4672729571a46f0042d/torchvision/models/segmentation/segmentation.py#L92-L100) with output strides of 8 and 16 respectively.", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} {"page_content": "Unlike the paper where a 49x49 AveragePooling layer with variable strides is used, [our implementation](https://github.com/pytorch/vision/blob/e2db2eddbb1699a59fbb5ccbec912979048ef3bf/torchvision/models/segmentation/lraspp.py#L53) uses an `AdaptiveAvgPool2d` layer to process the global features. This is because the authors of the paper tailored the head to the Cityscapes dataset while our focus is to provide a general purpose implementation that can work on multiple datasets. Finally our implementation always has a bilinear interpolation [before returning the output](https://github.com/pytorch/vision/blob/e2db2eddbb1699a59fbb5ccbec912979048ef3bf/torchvision/models/segmentation/lraspp.py#L35) to ensure that the sizes of the input and output images match exactly.\n\n**DeepLabV3 & FCN**", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} {"page_content": "**DeepLabV3 & FCN**\n\nThe combination of MobileNetV3 with DeepLabV3 and FCN follows closely the ones of other models and the stage estimation for these methods is identical to LR-ASPP. The only notable difference is that instead of using high and low level features, [we attach](https://github.com/pytorch/vision/blob/b94a4014a68d08f37697f4672729571a46f0042d/torchvision/models/segmentation/segmentation.py#L37-L45) the normal loss to the feature map with output stride 16 and an auxiliary loss on the feature map with output stride 8.\n\nFinally we should note that the FCN version of the model was not released because it was completely superseded by the LR-ASPP both in terms of speed and accuracy. The [pre-trained weights](https://github.com/pytorch/vision/pull/3276/commits/1641d5f4c7d41f534444fab340c598d61a91bd12#diff-ccff7af514d99eeb40416c8b9ec30f032d1a3f450aaa4057958ca39ab174452eL17) are still available and can be used with minimal changes to the code.\n\n### Training & Tuning process", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} -{"page_content": "We currently offer two MobileNetV3 pre-trained models capable of doing semantic segmentation: the LR-ASPP and the DeepLabV3. The backbones of the models were [initialized with ImageNet weights](https://github.com/pytorch/vision/blob/b94a4014a68d08f37697f4672729571a46f0042d/torchvision/models/segmentation/segmentation.py#L89-L90) and trained end-to-end. Both architectures were trained on the COCO dataset using the same scripts with similar hyper-parameters. 
Their details can be found in our [references](https://github.com/pytorch/vision/tree/a78d0d83d0a499fe8480d7a9f493676e746c4699/references/segmentation#deeplabv3_mobilenet_v3_large) folder.", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} +{"page_content": "### Training & Tuning process\n\nWe currently offer two MobileNetV3 pre-trained models capable of doing semantic segmentation: the LR-ASPP and the DeepLabV3. The backbones of the models were [initialized with ImageNet weights](https://github.com/pytorch/vision/blob/b94a4014a68d08f37697f4672729571a46f0042d/torchvision/models/segmentation/segmentation.py#L89-L90) and trained end-to-end. Both architectures were trained on the COCO dataset using the same scripts with similar hyper-parameters. Their details can be found in our [references](https://github.com/pytorch/vision/tree/a78d0d83d0a499fe8480d7a9f493676e746c4699/references/segmentation#deeplabv3_mobilenet_v3_large) folder.", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} {"page_content": "Normally, during inference the images are [resized to 520 pixels](https://github.com/pytorch/vision/blob/a78d0d83d0a499fe8480d7a9f493676e746c4699/references/segmentation/train.py#L30-L33). An optional speed optimization is to construct a Low Res configuration of the model by using the High-Res pre-trained weights and reducing the inference resizing to 320 pixels. This will improve the CPU execution times by roughly 60% while sacrificing a couple of mIoU points. The detailed numbers of this optimization can be found on the table below:", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} {"page_content": "| Low-Res Configuration | mIoU Difference | Speed Improvement | mIoU | Global Pixel Acc | Inference on CPU (sec) |\n|--------------------------------------|-----------------:|-------------------:|------:|------------------:|------------------------:|\n| LR-ASPP MobileNetV3-Large| -2.1 | 65.26% | 55.8 | 90.3 | 0.1139 |\n| DeepLabV3 MobileNetV3-Large | -3.8 | 63.86% | 56.5 | 90.3 | 0.2121 |\n| FCN MobileNetV3-Large (not released) | -3.0 | 57.57% | 54.8 | 90.1 | 0.1571 |\n\nHere are some examples of visualizing the predictions of the LR-ASPP MobileNetV3-Large model:\n\n
\n \n
", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} {"page_content": "We hope that you found this article interesting. We are looking forward to your feedback to see if this is the type of content you would like us to publish more often. If the community finds that such posts are useful, we will be happy to publish more articles that cover the implementation details of newly introduced Machine Learning models.", "metadata": {"source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"}} @@ -1735,22 +1739,22 @@ {"page_content": "Simultaneously, the technical governance of PyTorch has been a loosely structured community model of open-source development \u2014 A set of people maintaining PyTorch by area with their responsibility often tied to their individual identity rather than their employment. While we kept a codified list at the [PyTorch - Maintainers](https://pytorch.org/docs/stable/community/persons_of_interest.html) page, the technical governance was not formalized nor codified. As PyTorch scales as a community, the next step is to structure and codify. The [PyTorch Technical Governance](https://pytorch.org/docs/master/community/governance.html) now supports a hierarchical maintainer structure and clear outlining of processes around day to day work and escalations. This doesn\u2019t change how we run things, but it does add discipline and openness that at our scale feels essential and timely.", "metadata": {"source": "https://pytorch.org/blog/PyTorchfoundation/", "category": "pytorch blogs"}} {"page_content": "It\u2019s been an exciting journey since 2016. I am grateful for the experiences and people I\u2019ve met along the way. PyTorch started with a small group of contributors which have grown and diversified over the years, all bringing in new ideas and innovations that would not have been possible without our community. We want to continue the open-source spirit \u2013 for the community and by the community. Thank you to our contributors, maintainers, users, supporters and new foundation members. We look forward to the next chapter of PyTorch with the PyTorch Foundation.", "metadata": {"source": "https://pytorch.org/blog/PyTorchfoundation/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: \"Fast Beam Search Decoding in PyTorch with TorchAudio and Flashlight Text\"\nauthor: Caroline Chen, Jacob Kahn (@jacob_d_kahn)\nfeatured-img: \"/assets/images/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text-6.png\"\n---\n\nBeam search decoding with industry-leading speed from [Flashlight Text](https://github.com/flashlight/text) (part of the [Flashlight](https://arxiv.org/abs/2201.12465) ML framework) is now available with official support in [TorchAudio](https://pytorch.org/audio/0.12.0/models.decoder.html#ctcdecoder), bringing high-performance beam search and text utilities for speech and text applications built on top of PyTorch. 
The current integration supports CTC-style decoding, but it can be used for *any modeling setting that outputs token-level probability distributions over time steps*.\n\n## A brief beam search refresher", "metadata": {"source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"}} -{"page_content": "In speech and language settings, *beam search* is an efficient, greedy algorithm that can convert sequences of *continuous values* (i.e. probabilities or scores) into *graphs* or *sequences* (i.e. tokens, word-pieces, words) using *optional constraints* on valid sequences (i.e. a lexicon), *optional external scoring* (i.e. an LM which scores valid sequences), and other *score adjustments* for particular sequences.\n\nIn the example that follows, we'll consider \u2014 a token set of {\u03f5, a, b}, where \u03f5 is a special token that we can imagine denotes a space between words or a pause in speech. Graphics here and below are taken from Awni Hannun's excellent [distill.pub writeup](https://distill.pub/2017/ctc/) on CTC and beam search.\n\n

\n \n

", "metadata": {"source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"}} +{"page_content": "## A brief beam search refresher\n\nIn speech and language settings, *beam search* is an efficient, greedy algorithm that can convert sequences of *continuous values* (i.e. probabilities or scores) into *graphs* or *sequences* (i.e. tokens, word-pieces, words) using *optional constraints* on valid sequences (i.e. a lexicon), *optional external scoring* (i.e. an LM which scores valid sequences), and other *score adjustments* for particular sequences.\n\nIn the example that follows, we'll consider \u2014 a token set of {\u03f5, a, b}, where \u03f5 is a special token that we can imagine denotes a space between words or a pause in speech. Graphics here and below are taken from Awni Hannun's excellent [distill.pub writeup](https://distill.pub/2017/ctc/) on CTC and beam search.\n\n

\n \n

", "metadata": {"source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"}} {"page_content": "With a greedy-like approach, beam search considers the next viable token given an existing sequence of tokens \u2014 in the example above, a, b, b is a valid sequence, but a, b, a is not. We *rank* each possible next token at each step of the beam search according to a scoring function. Scoring functions (s) typically looks something like:\n\n

\n \n

", "metadata": {"source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"}} {"page_content": "Where **\u0177** is a potential path/sequence of tokens, **x** is the input *(P(\u0177|x)* represents the model's predictions over time), and \ud835\udefc is a weight on the language model probability *(P(y)* the probability of the sequence under the language model). Some scoring functions add *\ud835\udf37* which adjusts a score based on the length of the predicted sequence **|\u0177|**. This particular scoring function is used in [FAIR's prior work](https://arxiv.org/pdf/1911.08460.pdf) on end-to-end ASR, and there are many variations on scoring functions which can vary across application areas.", "metadata": {"source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"}} {"page_content": "Given a particular sequence, to assess the next viable token in that sequence (perhaps constrained by a set of allowed words or sequences, such as a lexicon of words), the beam search algorithm scores the sequence with each candidate token added, and sorts token candidates based on those scores. For efficiency and since the number of paths is exponential in the token set size, the *top-k* highest-scoring candidates are kept \u2014 *k* represents the *beam size*.\n\n
There are many other nuances with how beam search can progress: similar hypothesis sequences can be \u201cmerged\u201d, for instance.
", "metadata": {"source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"}} {"page_content": "The scoring function can be further augmented to up/down-weight token insertion or long or short words. Scoring with *stronger external language* models, while incurring computational cost, can also significantly improve performance; this is frequently referred to as *LM fusion*. There are many other knobs to tune for decoding \u2014 these are documented in [TorchAudio\u2019s documentation](https://pytorch.org/audio/0.12.0/models.decoder.html#ctcdecoder) and explored further in [TorchAudio\u2019s ASR Inference tutorial](https://pytorch.org/audio/0.12.0/tutorials/asr_inference_with_ctc_decoder_tutorial.html#beam-search-decoder-parameters). Since decoding is quite efficient, parameters can be easily swept and tuned.\n\nBeam search has been used in ASR extensively over the years in far too many works to cite, and in strong, recent results and systems including [wav2vec 2.0](https://proceedings.neurips.cc/paper/2020/file/92d1e1eb1cd6f9fba3227870bb6d7f07-Paper.pdf) and [NVIDIA's NeMo](https://developer.nvidia.com/nvidia-nemo).", "metadata": {"source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"}} {"page_content": "## Why beam search?\n\nBeam search remains a fast competitor to heavier-weight decoding approaches such as [RNN-Transducer](https://arxiv.org/pdf/1211.3711.pdf) that Google has invested in putting [on-device](https://ai.googleblog.com/2019/03/an-all-neural-on-device-speech.html) and has shown strong results with on [common benchmarks](https://arxiv.org/pdf/2010.10504.pdf). Autoregressive text models at scale can benefit from beam search as well. Among other things, beam search gives:", "metadata": {"source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"}} {"page_content": "- A flexible performance/latency tradeoff \u2014 by adjusting beam size and the external LM, users can sacrifice latency for accuracy or pay for more accurate results with a small latency cost. Decoding with no external LM can improve results at very little performance cost.\n- Portability without retraining \u2014 existing neural models can benefit from multiple decoding setups and plug-and-play with external LMs without training or fine-tuning.\n- A compelling complexity/accuracy tradeoff \u2014 adding beam search to an existing modeling pipeline incurs little additional complexity and can improve performance.\n\n## Performance Benchmarks", "metadata": {"source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"}} -{"page_content": "Today's most commonly-used beam search decoding libraries today that support external language model integration include Kensho's [pyctcdecode](https://github.com/kensho-technologies/pyctcdecode), NVIDIA's [NeMo toolkit](https://github.com/NVIDIA/NeMo/tree/stable/scripts/asr_language_modeling). We benchmark the TorchAudio + Flashlight decoder against them with a *wav2vec 2.0* base model trained on 100 hours of audio evaluated on [LibriSpeech](https://www.openslr.org/12) dev-other with the official [KenLM](https://github.com/kpu/kenlm/) 3-gram LM. Benchmarks were run on Intel E5-2698 CPUs on a single thread. 
All computation was in-memory \u2014 KenLM memory mapping was disabled as it wasn't widely supported.", "metadata": {"source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"}} +{"page_content": "## Performance Benchmarks\n\nToday's most commonly-used beam search decoding libraries today that support external language model integration include Kensho's [pyctcdecode](https://github.com/kensho-technologies/pyctcdecode), NVIDIA's [NeMo toolkit](https://github.com/NVIDIA/NeMo/tree/stable/scripts/asr_language_modeling). We benchmark the TorchAudio + Flashlight decoder against them with a *wav2vec 2.0* base model trained on 100 hours of audio evaluated on [LibriSpeech](https://www.openslr.org/12) dev-other with the official [KenLM](https://github.com/kpu/kenlm/) 3-gram LM. Benchmarks were run on Intel E5-2698 CPUs on a single thread. All computation was in-memory \u2014 KenLM memory mapping was disabled as it wasn't widely supported.", "metadata": {"source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"}} {"page_content": "When benchmarking, we measure the *time-to-WER (word error rate)* \u2014 because of subtle differences in the implementation of decoding algorithms and the complex relationships between parameters and decoding speed, some hyperparameters differed across runs. To fairly assess performance, we first sweep for parameters that achieve a baseline WER, minimizing beam size if possible.\n\n
Decoding performance on Librispeech dev-other of a pretrained wav2vec 2.0 model. TorchAudio + Flashlight decoding outperforms by an order of magnitude at low WERs.
", "metadata": {"source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"}} {"page_content": "
Time-to-WER results, deferring to smaller beam size, across decoders. The TorchAudio + Flashlight decoder scales far better with larger beam sizes and at lower WERs.
\n\n## TorchAudio API and Usage\n\nTorchAudio provides a Python API for CTC beam search decoding, with support for the following:\n\n- lexicon and lexicon-free decoding\n- KenLM n-gram language model integration\n- character and word-piece decoding\n- sample pretrained LibriSpeech KenLM models and corresponding lexicon and token files\n- various customizable beam search parameters (beam size, pruning threshold, LM weight...)\n\nTo set up the decoder, use the factory function torchaudio.models.decoder.ctc_decoder\n\n```python\nfrom torchaudio.models.decoder import ctc_decoder, download_pretrained_files\nfiles = download_pretrained_files(\"librispeech-4-gram\")\ndecoder = ctc_decoder(\n lexicon=files.lexicon,\n tokens=files.tokens,\n lm=files.lm,\n nbest=1,\n ... additional optional customizable args ...\n)\n```", "metadata": {"source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"}} {"page_content": "Given emissions of shape *(batch, time, num_tokens)*, the decoder will compute and return a List of batch Lists, each consisting of the nbest hypotheses corresponding to the emissions. Each hypothesis can be further broken down into tokens, words (if a lexicon is provided), score, and timesteps components.\n\n```python\nemissions = acoustic_model(waveforms) # (B, T, N)\nbatch_hypotheses = decoder(emissions) # List[List[CTCHypothesis]]\n\n# transcript for a lexicon decoder\ntranscripts = [\" \".join(hypo[0].words) for hypo in batch_hypotheses]\n\n# transcript for a lexicon free decoder, splitting by sil token\nbatch_tokens = [decoder.idxs_to_tokens(hypo[0].tokens) for hypo in batch_hypotheses]\ntranscripts = [\"\".join(tokens) for tokens in batch_tokens]\n```", "metadata": {"source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"}} {"page_content": "Please refer to the [documentation](https://pytorch.org/audio/stable/models.decoder.html#ctcdecoder) for more API details, and the tutorial ([ASR Inference Decoding](https://pytorch.org/audio/main/tutorials/asr_inference_with_ctc_decoder_tutorial.html)) or sample [inference script](https://github.com/pytorch/audio/tree/main/examples/asr/librispeech_ctc_decoder) for more usage examples.\n\n## Upcoming Improvements\n\n**Full NNLM support** \u2014 decoding with large neural language models (e.g. transformers) remains somewhat unexplored at scale. Already supported in Flashlight, we plan to add support in TorchAudio, allowing users to use custom decoder-compatible LMs. Custom word level language models are already available in the nightly TorchAudio build, and is slated to be released in TorchAudio 0.13.", "metadata": {"source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"}} {"page_content": "**Autoregressive/seq2seq decoding** \u2014 Flashlight Text also supports [sequence-to-sequence (seq2seq) decoding](https://github.com/flashlight/text/blob/main/flashlight/lib/text/decoder/LexiconSeq2SeqDecoder.h) for autoregressive models, which we hope to add bindings for and add to TorchAudio and TorchText with efficient GPU implementations as well.\n\n**Better build support** \u2014 to benefit from improvements in Flashlight Text, TorchAudio will directly submodule Flashlight Text to make upstreaming modifications and improvements easier. 
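{"page_content": "The customizable decoder parameters mentioned above can be adjusted through the same factory function. As a rough sketch (the argument names and values here are illustrative; check the linked CTCDecoder documentation for the exact signature and sensible defaults):\n\n```python\n# hypothetical tuning of the beam search, reusing the files from above\ndecoder = ctc_decoder(\n    lexicon=files.lexicon,\n    tokens=files.tokens,\n    lm=files.lm,\n    nbest=3,           # return the 3 best hypotheses per utterance\n    beam_size=500,     # wider beam: slower decoding, potentially lower WER\n    lm_weight=3.23,    # weight on the external LM score\n    word_score=-0.26,  # per-word insertion adjustment\n)\n```\n\nBecause decoding is fast, these knobs can be swept over a validation set to trade accuracy against latency.", "metadata": {"source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"}}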
This is already in effect in the nightly TorchAudio build, and is slated to be released in TorchAudio 0.13.\n\n## Citation\n\nTo cite the decoder, please use the following:", "metadata": {"source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"}} -{"page_content": "```python\n@inproceedings{kahn2022flashlight,\n title={Flashlight: Enabling innovation in tools for machine learning},\n author={Kahn, Jacob D and Pratap, Vineel and Likhomanenko, Tatiana and Xu, Qiantong and Hannun, Awni and Cai, Jeff and Tomasello, Paden and Lee, Ann and Grave, Edouard and Avidov, Gilad and others},\n booktitle={International Conference on Machine Learning},\n pages={10557--10574},\n year={2022},\n organization={PMLR}\n}\n```\n```python\n@inproceedings{yang2022torchaudio,\n title={Torchaudio: Building blocks for audio and speech processing},\n author={Yang, Yao-Yuan and Hira, Moto and Ni, Zhaoheng and Astafurov, Artyom and Chen, Caroline and Puhrsch, Christian and Pollack, David and Genzel, Dmitriy and Greenberg, Donny and Yang, Edward Z and others},\n booktitle={ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},\n pages={6982--6986},\n year={2022},\n organization={IEEE}\n}\n```", "metadata": {"source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"}} +{"page_content": "To cite the decoder, please use the following:\n\n```python\n@inproceedings{kahn2022flashlight,\n title={Flashlight: Enabling innovation in tools for machine learning},\n author={Kahn, Jacob D and Pratap, Vineel and Likhomanenko, Tatiana and Xu, Qiantong and Hannun, Awni and Cai, Jeff and Tomasello, Paden and Lee, Ann and Grave, Edouard and Avidov, Gilad and others},\n booktitle={International Conference on Machine Learning},\n pages={10557--10574},\n year={2022},\n organization={PMLR}\n}\n```\n```python\n@inproceedings{yang2022torchaudio,\n title={Torchaudio: Building blocks for audio and speech processing},\n author={Yang, Yao-Yuan and Hira, Moto and Ni, Zhaoheng and Astafurov, Artyom and Chen, Caroline and Puhrsch, Christian and Pollack, David and Genzel, Dmitriy and Greenberg, Donny and Yang, Edward Z and others},\n booktitle={ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},\n pages={6982--6986},\n year={2022},\n organization={IEEE}\n}\n```", "metadata": {"source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'New PyTorch Library Releases in PyTorch 1.9, including TorchVision, TorchAudio, and more'\nauthor: Team PyTorch \n---\n\nToday, we are announcing updates to a number of PyTorch libraries, alongside the [PyTorch 1.9 release](https://pytorch.org/blog/pytorch-1.9-released/). The updates include new releases for the domain libraries including TorchVision, TorchText and TorchAudio. These releases, along with the PyTorch 1.9 release, include a number of new features and improvements that will provide a broad set of updates for the PyTorch community.\n\nSome highlights include:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"}} -{"page_content": "* **TorchVision** - Added new SSD and SSDLite models, quantized kernels for object detection, GPU Jpeg decoding, and iOS support. 
See [release notes](https://github.com/pytorch/vision/releases) here.\n* **TorchAudio** - Added wav2vec 2.0 model deployable in non-Python environments (including C++, Android, and iOS). Many performance improvements in lfilter, spectral operations, resampling. Added options for quality control in sampling (i.e. Kaiser window support). Initiated the migration of complex tensors operations. Improved autograd support. See [release notes](https://github.com/pytorch/audio/releases) here.\n* **TorchText** - Added a new high-performance Vocab module that provides common functional APIs for NLP workflows. See [release notes](https://github.com/pytorch/text/releases) here.\n\nWe\u2019d like to thank the community for their support and work on this latest release.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"}} +{"page_content": "Some highlights include:\n\n* **TorchVision** - Added new SSD and SSDLite models, quantized kernels for object detection, GPU Jpeg decoding, and iOS support. See [release notes](https://github.com/pytorch/vision/releases) here.\n* **TorchAudio** - Added wav2vec 2.0 model deployable in non-Python environments (including C++, Android, and iOS). Many performance improvements in lfilter, spectral operations, resampling. Added options for quality control in sampling (i.e. Kaiser window support). Initiated the migration of complex tensors operations. Improved autograd support. See [release notes](https://github.com/pytorch/audio/releases) here.\n* **TorchText** - Added a new high-performance Vocab module that provides common functional APIs for NLP workflows. See [release notes](https://github.com/pytorch/text/releases) here.\n\nWe\u2019d like to thank the community for their support and work on this latest release.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "Features in PyTorch releases are classified as Stable, Beta, and Prototype. You can learn more about the definitions in [this blog post](https://pytorch.org/blog/pytorch-feature-classification-changes/). \n\n# TorchVision 0.10\n\n### (Stable) Quantized kernels for object detection \nThe forward pass of the nms and roi_align operators now support tensors with a quantized dtype, which can help lower the memory footprint of object detection models, particularly on mobile environments. For more details, refer to [the documentation](https://pytorch.org/vision/stable/ops.html#torchvision.ops.roi_align). \n\n### (Stable) Speed optimizations for Tensor transforms \nThe resize and flip transforms have been optimized and its runtime improved by up to 5x on the CPU.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "### (Stable) Documentation improvements \nSignificant improvements were made to the documentation. In particular, a new gallery of examples is available. These examples visually illustrate how each transform acts on an image, and also properly documents and illustrates the output of the segmentation models.\n\nThe example gallery will be extended in the future to provide more comprehensive examples and serve as a reference for common torchvision tasks. 
For more details, refer to [the documentation](https://pytorch.org/vision/stable/auto_examples/index.html).\n\n### (Beta) New models for detection \n[SSD](https://arxiv.org/abs/1512.02325) and [SSDlite](https://arxiv.org/abs/1801.04381) are two popular object detection architectures that are efficient in terms of speed and provide good results for low resolution pictures. In this release, we provide implementations for the original SSD model with VGG16 backbone and for its mobile-friendly variant SSDlite with MobileNetV3-Large backbone.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "The models were pre-trained on COCO train2017 and can be used as follows:\n\n```python\nimport torch\nimport torchvision\n\n# Original SSD variant\nx = [torch.rand(3, 300, 300), torch.rand(3, 500, 400)]\nm_detector = torchvision.models.detection.ssd300_vgg16(pretrained=True)\nm_detector.eval()\npredictions = m_detector(x)\n\n# Mobile-friendly SSDlite variant\nx = [torch.rand(3, 320, 320), torch.rand(3, 500, 400)]\nm_detector = torchvision.models.detection.ssdlite320_mobilenet_v3_large(pretrained=True)\nm_detector.eval()\npredictions = m_detector(x)\n```\n\nThe following accuracies can be obtained on COCO val2017 (full results available in [#3403](https://github.com/pytorch/vision/pull/3403) and [#3757](https://github.com/pytorch/vision/pull/3757)):\n\n\n {:.table.table-striped.table-bordered}\n| Model | mAP | mAP@50 | mAP@75 |\n| ------------- | ------------- | ------------- | ------------- |\n| SSD300 VGG16 | 25.1 | 41.5 | 26.2 | \n| SSDlite320 MobileNetV3-Large | 21.3 | 34.3 | 22.1 |", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"}} @@ -1767,12 +1771,12 @@ {"page_content": "```python\n#creating Vocab from text file\nimport io\nfrom torchtext.vocab import build_vocab_from_iterator\n#generator that yield list of tokens\ndef yield_tokens(file_path):\n with io.open(file_path, encoding = 'utf-8') as f:\n for line in f:\n yield line.strip().split()\n#get Vocab object\nvocab_obj = build_vocab_from_iterator(yield_tokens(file_path), specials=[\"\"])\n\n#creating Vocab through ordered dict\nfrom torchtext.vocab import vocab\nfrom collections import Counter, OrderedDict\ncounter = Counter([\"a\", \"a\", \"b\", \"b\", \"b\"])\nsorted_by_freq_tuples = sorted(counter.items(), key=lambda x: x[1], reverse=True)\nordered_dict = OrderedDict(sorted_by_freq_tuples)\nvocab_obj = vocab(ordered_dict)\n\n#common API usage\n\n#look-up index\nvocab_obj[\"a\"]\n\n#batch look-up indices\nvocab_obj.looup_indices([\"a\",\"b\"])\n#support forward API of PyTorch nn Modules\nvocab_obj([\"a\",\"b\"])\n\n#batch look-up tokens\nvocab_obj.lookup_tokens([0,1])", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "#set default index to return when token not found\nvocab_obj.set_default_index(0)\nvocab_obj[\"out_of_vocabulary\"] #prints 0\n```\n\nFor more details, refer to [the documentation](https://pytorch.org/text/stable/vocab.html). \n\nThanks for reading. If you\u2019re interested in these updates and want to join the PyTorch community, we encourage you to join [the discussion](https://discuss.pytorch.org/) forums and [open GitHub issues](https://github.com/pytorch/pytorch/issues). 
To get the latest news from PyTorch, follow us on [Facebook](https://www.facebook.com/pytorch/), [Twitter](https://twitter.com/PyTorch), [Medium](https://medium.com/pytorch), [YouTube](https://www.youtube.com/pytorch) or [LinkedIn](https://www.linkedin.com/company/pytorch). \n\nCheers!\n\n-Team PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'Everything You Need To Know About Torchvision\u2019s SSDlite Implementation'\nauthor: Vasilis Vryniotis\nfeatured-img: 'assets/images/mAP-of-SSD320-MobileNetV3-Large.png'\n---\n\nIn the [previous article](https://pytorch.org/blog/torchvision-ssd-implementation/), we\u2019ve discussed how the SSD algorithm works, covered its implementation details and presented its training process. If you have not read the previous blog post, I encourage you to check it out before continuing.\n\nIn this part 2 of the series, we will focus on the mobile-friendly variant of SSD called SSDlite. Our plan is to first go through the main components of the algorithm highlighting the parts that differ from the original SSD, then discuss how the released model was trained and finally provide detailed benchmarks for all the new Object Detection models that we explored.\n\n# The SSDlite Network Architecture", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssdlite-implementation/", "category": "pytorch blogs"}} -{"page_content": "The SSDlite is an adaptation of SSD which was first briefly introduced on the [MobileNetV2 paper](https://arxiv.org/abs/1801.04381) and later reused on the [MobileNetV3 paper](https://arxiv.org/abs/1905.02244). Because the main focus of the two papers was to introduce novel CNN architectures, most of the implementation details of SSDlite were not clarified. Our code follows all the details presented on the two papers and where necessary fills the gaps from the [official implementation](https://github.com/tensorflow/models/tree/238922e98dd0e8254b5c0921b241a1f5a151782f/research/object_detection). \n\nAs noted before, the SSD is a family of models because one can configure it with different backbones (such as VGG, MobileNetV3 etc) and different Heads (such as using regular convolutions, separable convolutions etc). Thus many of the SSD components remain the same in SSDlite. Below we discuss only those that are different\n\n## Classification and Regression Heads", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssdlite-implementation/", "category": "pytorch blogs"}} -{"page_content": "Following the Section 6.2 of the MobileNetV2 paper, SSDlite replaces the regular convolutions used on the original Heads with separable convolutions. Consequently, our implementation introduces [new heads](https://github.com/pytorch/vision/blob/b6f733046c9259f354d060cd808241a558d7d596/torchvision/models/detection/ssdlite.py#L65-L95) that use [3x3 Depthwise convolutions and 1x1 projections](https://github.com/pytorch/vision/blob/b6f733046c9259f354d060cd808241a558d7d596/torchvision/models/detection/ssdlite.py#L26-L36). 
Since all other components of the SSD method remain the same, to create an SSDlite model our implementation [initializes the SSDlite head](https://github.com/pytorch/vision/blob/b6f733046c9259f354d060cd808241a558d7d596/torchvision/models/detection/ssdlite.py#L222-L223) and passes it directly to the SSD constructor.\n\n## Backbone Feature Extractor", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssdlite-implementation/", "category": "pytorch blogs"}} +{"page_content": "# The SSDlite Network Architecture\n\nThe SSDlite is an adaptation of SSD which was first briefly introduced on the [MobileNetV2 paper](https://arxiv.org/abs/1801.04381) and later reused on the [MobileNetV3 paper](https://arxiv.org/abs/1905.02244). Because the main focus of the two papers was to introduce novel CNN architectures, most of the implementation details of SSDlite were not clarified. Our code follows all the details presented on the two papers and where necessary fills the gaps from the [official implementation](https://github.com/tensorflow/models/tree/238922e98dd0e8254b5c0921b241a1f5a151782f/research/object_detection). \n\nAs noted before, the SSD is a family of models because one can configure it with different backbones (such as VGG, MobileNetV3 etc) and different Heads (such as using regular convolutions, separable convolutions etc). Thus many of the SSD components remain the same in SSDlite. Below we discuss only those that are different\n\n## Classification and Regression Heads", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssdlite-implementation/", "category": "pytorch blogs"}} +{"page_content": "## Classification and Regression Heads\n\nFollowing the Section 6.2 of the MobileNetV2 paper, SSDlite replaces the regular convolutions used on the original Heads with separable convolutions. Consequently, our implementation introduces [new heads](https://github.com/pytorch/vision/blob/b6f733046c9259f354d060cd808241a558d7d596/torchvision/models/detection/ssdlite.py#L65-L95) that use [3x3 Depthwise convolutions and 1x1 projections](https://github.com/pytorch/vision/blob/b6f733046c9259f354d060cd808241a558d7d596/torchvision/models/detection/ssdlite.py#L26-L36). Since all other components of the SSD method remain the same, to create an SSDlite model our implementation [initializes the SSDlite head](https://github.com/pytorch/vision/blob/b6f733046c9259f354d060cd808241a558d7d596/torchvision/models/detection/ssdlite.py#L222-L223) and passes it directly to the SSD constructor.\n\n## Backbone Feature Extractor", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssdlite-implementation/", "category": "pytorch blogs"}} {"page_content": "Our implementation introduces a new class for building MobileNet [feature extractors](https://github.com/pytorch/vision/blob/b6f733046c9259f354d060cd808241a558d7d596/torchvision/models/detection/ssdlite.py#L98). Following the Section 6.3 of the MobileNetV3 paper, the backbone returns the [output of the expansion layer](https://github.com/pytorch/vision/blob/b6f733046c9259f354d060cd808241a558d7d596/torchvision/models/detection/ssdlite.py#L106) of the Inverted Bottleneck block which has an output stride of 16 and the [output of the layer just before the pooling](https://github.com/pytorch/vision/blob/b6f733046c9259f354d060cd808241a558d7d596/torchvision/models/detection/ssdlite.py#L107) which has an output stride of 32. 
Moreover, all [extra blocks](https://github.com/pytorch/vision/blob/b6f733046c9259f354d060cd808241a558d7d596/torchvision/models/detection/ssdlite.py#L111-L116) of the backbone are replaced with [lightweight equivalents](https://github.com/pytorch/vision/blob/b6f733046c9259f354d060cd808241a558d7d596/torchvision/models/detection/ssdlite.py#L39-L54) which use a 1x1 compression, a separable 3x3 convolution with stride 2 and a 1x1 expansion. Finally to ensure that the heads have enough prediction power even when small [width multipliers](https://github.com/pytorch/vision/blob/b6f733046c9259f354d060cd808241a558d7d596/torchvision/models/detection/ssdlite.py#L99) are used, the [minimum depth](https://github.com/pytorch/vision/blob/b6f733046c9259f354d060cd808241a558d7d596/torchvision/models/detection/ssdlite.py#L110) size of all convolutions is controlled by the ```min_depth``` hyperparameter.", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssdlite-implementation/", "category": "pytorch blogs"}} {"page_content": "# The SSDlite320 MobileNetV3-Large model\n\n
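{"page_content": "As a rough sketch of what such a lightweight extra block looks like (the layer names, channel sizes, and use of ReLU6 below are illustrative assumptions, not torchvision's exact code), the 1x1 compression, separable 3x3 convolution with stride 2, and 1x1 expansion can be expressed as:\n\n```python\nimport torch\nfrom torch import nn\n\ndef lightweight_extra_block(in_ch: int, mid_ch: int, out_ch: int) -> nn.Sequential:\n    # 1x1 compression -> depthwise 3x3 with stride 2 -> 1x1 expansion\n    return nn.Sequential(\n        nn.Conv2d(in_ch, mid_ch, kernel_size=1, bias=False),   # compression\n        nn.BatchNorm2d(mid_ch),\n        nn.ReLU6(inplace=True),\n        nn.Conv2d(mid_ch, mid_ch, kernel_size=3, stride=2, padding=1,\n                  groups=mid_ch, bias=False),                  # depthwise, downsamples by 2\n        nn.BatchNorm2d(mid_ch),\n        nn.ReLU6(inplace=True),\n        nn.Conv2d(mid_ch, out_ch, kernel_size=1, bias=False),  # expansion\n        nn.BatchNorm2d(out_ch),\n        nn.ReLU6(inplace=True),\n    )\n\nblock = lightweight_extra_block(672, 256, 512)\nprint(block(torch.randn(1, 672, 20, 20)).shape)  # torch.Size([1, 512, 10, 10])\n```", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssdlite-implementation/", "category": "pytorch blogs"}}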
\n\nThis section discusses the configuration of the provided [SSDlite pre-trained](https://github.com/pytorch/vision/blob/b6f733046c9259f354d060cd808241a558d7d596/torchvision/models/detection/ssdlite.py#L159-L162) model along with the training processes followed to replicate the paper results as closely as possible. \n\n## Training process\n\nAll of the hyperparameters and scripts used to train the model on the COCO dataset can be found in our [references](https://github.com/pytorch/vision/blob/e35793a1a4000db1f9f99673437c514e24e65451/references/detection/README.md#ssdlite320-mobilenetv3-large) folder. Here we discuss the most notable details of the training process.\n\n### Tuned Hyperparameters", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssdlite-implementation/", "category": "pytorch blogs"}} -{"page_content": "Though the papers don\u2019t provide any information on the hyperparameters used for training the models (such as regularization, learning rate and the batch size), the parameters listed in the [configuration files](https://github.com/tensorflow/models/blob/238922e98dd0e8254b5c0921b241a1f5a151782f/research/object_detection/samples/configs/ssdlite_mobilenet_v3_large_320x320_coco.config) on the official repo were good starting points and using cross validation we adjusted them to their optimal values. All the above gave us a significant boost over the baseline SSD configuration.\n\n### Data Augmentation", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssdlite-implementation/", "category": "pytorch blogs"}} -{"page_content": "Key important difference of SSDlite comparing to SSD is that the backbone of the first has only a fraction of the weights of the latter. This is why in SSDlite, the Data Augmentation focuses more on making the model robust to objects of variable sizes than trying to avoid overfitting. Consequently, SSDlite [uses only a subset](https://github.com/pytorch/vision/blob/43d772067fe77965ec8fc49c799de5cea44b8aa2/references/detection/presets.py#L19-L24) of the SSD transformations and this way it avoids the over-regularization of the model.\n\n### LR Scheme", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssdlite-implementation/", "category": "pytorch blogs"}} +{"page_content": "### Tuned Hyperparameters\n\nThough the papers don\u2019t provide any information on the hyperparameters used for training the models (such as regularization, learning rate and the batch size), the parameters listed in the [configuration files](https://github.com/tensorflow/models/blob/238922e98dd0e8254b5c0921b241a1f5a151782f/research/object_detection/samples/configs/ssdlite_mobilenet_v3_large_320x320_coco.config) on the official repo were good starting points and using cross validation we adjusted them to their optimal values. All the above gave us a significant boost over the baseline SSD configuration.\n\n### Data Augmentation", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssdlite-implementation/", "category": "pytorch blogs"}} +{"page_content": "### Data Augmentation\n\nKey important difference of SSDlite comparing to SSD is that the backbone of the first has only a fraction of the weights of the latter. This is why in SSDlite, the Data Augmentation focuses more on making the model robust to objects of variable sizes than trying to avoid overfitting. 
Consequently, SSDlite [uses only a subset](https://github.com/pytorch/vision/blob/43d772067fe77965ec8fc49c799de5cea44b8aa2/references/detection/presets.py#L19-L24) of the SSD transformations, and in this way it avoids over-regularizing the model.\n\n### LR Scheme", "metadata": {"source": "https://pytorch.org/blog/torchvision-ssdlite-implementation/", "category": "pytorch blogs"}}
Importantly, the speedup comes without a need to install xFormers or any other extra dependencies.", "metadata": {"source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"}} {"page_content": "The table below shows the improvement in runtime between the original implementation with xFormers installed and our optimized version with PyTorch-integrated memory efficient attention (originally developed for and released in the [xFormers](https://github.com/facebookresearch/xformers) library) and PyTorch compilation. The compilation time is excluded.\n\n**Runtime improvement in % compared to original+xFormers**\n\nSee the absolute runtime numbers in section \u201cBenchmarking setup and results summary\u201d", "metadata": {"source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"}} {"page_content": "\n\n \n \n \n \n \n \n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
{:.table.table-striped.table-bordered}\n| GPU | Batch size 1 | Batch size 2 | Batch size 4 |\n| ------------- | ------------- | ------------- | ------------- |\n| P100 (no compilation) | -3.8 | 0.44 | 5.47 |\n| T4 | 2.12 | 10.51 | 14.2 |\n| A10 | -2.34 | 8.99 | 10.57 |\n| V100 | 18.63 | 6.39 | 10.43 |\n| A100 | 38.5 | 20.33 | 12.17 |
\n\n\nOne can notice the following:", "metadata": {"source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"}} -{"page_content": "* The improvements are significant for powerful GPUs like A100 and V100. For those GPUs the improvement is most pronounced for batch size 1\n* For less powerful GPUs we observe smaller speedups (or in two cases slight regressions). The batch size trend is reversed here: improvement is larger for larger batches\n\nIn the following sections we describe the applied optimizations and provide detailed benchmarking data, comparing the generation time with various optimization features on/off.\n\nSpecifically, we benchmark 5 configurations and the plots below compare their absolute performance for different GPUs and batch sizes. For definitions of these configurations see section \u201cBenchmarking setup and results\u201d.\n\n\n\n![Benchmark of denoising diffusion text-to-image generation across GPU architectures, batch size 1](/assets/images/2023-04-11-accelerated-generative-diffusion-models1.png){:style=\"max-height:800px; width:100%\"}", "metadata": {"source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"}} +{"page_content": "One can notice the following:\n\n\n\n* The improvements are significant for powerful GPUs like A100 and V100. For those GPUs the improvement is most pronounced for batch size 1\n* For less powerful GPUs we observe smaller speedups (or in two cases slight regressions). The batch size trend is reversed here: improvement is larger for larger batches\n\nIn the following sections we describe the applied optimizations and provide detailed benchmarking data, comparing the generation time with various optimization features on/off.\n\nSpecifically, we benchmark 5 configurations and the plots below compare their absolute performance for different GPUs and batch sizes. For definitions of these configurations see section \u201cBenchmarking setup and results\u201d.\n\n\n\n![Benchmark of denoising diffusion text-to-image generation across GPU architectures, batch size 1](/assets/images/2023-04-11-accelerated-generative-diffusion-models1.png){:style=\"max-height:800px; width:100%\"}", "metadata": {"source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"}} {"page_content": "![Benchmark of denoising diffusion text-to-image generation across GPU architectures, batch size 2](/assets/images/2023-04-11-accelerated-generative-diffusion-models2.png){:style=\"max-height:800px; width:100%\"} \n\n![Benchmark of denoising diffusion text-to-image generation across GPU architectures, batch size 1](/assets/images/2023-04-11-accelerated-generative-diffusion-models3.png){:style=\"max-height:800px; width:100%\"} \n\n\n\t\t\t\n\n\n## Optimizations \n\nHere we\u2019ll go into more detail about the optimizations introduced into the model code. These optimizations rely on features of PyTorch 2.0 which has been released recently. \n\n\n### Optimized Attention", "metadata": {"source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"}} -{"page_content": "One part of the code which we optimized is the scaled dot-product attention. Attention is known to be a heavy operation: naive implementation materializes the attention matrix, leading to time and memory complexity quadratic in sequence length. 
It is common for diffusion models to use attention (`CrossAttention`) as part of Transformer blocks in multiple parts of the U-Net. Since the U-Net runs at every sampling step, this becomes a critical point to optimize. Instead of a custom attention implementation, one can use `torch.nn.MultiheadAttention`, which in PyTorch 2 has an optimized attention implementation integrated into it. This optimization schematically boils down to the following pseudocode:\n\n\n\n```\nclass CrossAttention(nn.Module):\n def __init__(self, ...):\n # Create matrices: Q, K, V, out_proj\n ...\n def forward(self, x, context=None, mask=None):\n # Compute out = SoftMax(Q*K/sqrt(d))V\n # Return out_proj(out)\n \u2026\n```\n\ngets replaced with", "metadata": {"source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"}}
However, in our tests on A100 the memory efficient attention performed better than flash attention for the particular case of diffusion models, due to the small number of attention heads and small batch size. PyTorch understands this and in this case chooses memory efficient attention over flash attention when both are available (see the logic [here](https://github.com/pytorch/pytorch/blob/d8e795ecd53670682bd3b2e5ff1f378402b147d5/aten/src/ATen/native/transformers/cuda/sdp_utils.h#L33-L71)). For full control over the attention backends (memory-efficient attention, flash attention, \u201cvanilla math\u201d, or any future ones), power users can enable and disable them manually with the help of the context manager [torch.backends.cuda.sdp_kernel](https://pytorch.org/docs/master/backends.html#torch.backends.cuda.sdp_kernel).", "metadata": {"source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"}} {"page_content": "### Compilation\n\nCompilation is a [new feature of PyTorch 2.0](https://pytorch.org/get-started/pytorch-2.0/#user-experience), enabling significant speedups with a very simple user experience. To invoke the default behavior, simply wrap a PyTorch module or a function into `torch.compile`:\n\n\n```\nmodel = torch.compile(model)\n```\n\n\nPyTorch compiler then turns Python code into a set of instructions which can be executed efficiently without Python overhead. The compilation happens dynamically the first time the code is executed. With the default behavior, under the hood PyTorch utilized [TorchDynamo](https://pytorch.org/docs/master/dynamo/index.html) to compile the code and [TorchInductor](https://dev-discuss.pytorch.org/t/torchinductor-a-pytorch-native-compiler-with-define-by-run-ir-and-symbolic-shapes/747) to further optimize it. See [this tutorial](https://pytorch.org/tutorials/intermediate/dynamo_tutorial.html) for more details.", "metadata": {"source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"}} {"page_content": "Although the one-liner above is enough for compilation, certain modifications in the code can squeeze a larger speedup. In particular, one should avoid so-called graph breaks - places in the code which PyTorch can\u2019t compile. As opposed to previous PyTorch compilation approaches (like TorchScript), PyTorch 2 compiler doesn\u2019t break in this case. Instead it falls back on eager execution - so the code runs, but with reduced performance. We introduced a few minor changes to the model code to get rid of graph breaks. This included eliminating functions from libraries not supported by the compiler, such as `inspect.isfunction` and `einops.rearrange`. See this [doc](https://pytorch.org/docs/master/dynamo/faq.html#identifying-the-cause-of-a-graph-break) to learn more about graph breaks and how to eliminate them.", "metadata": {"source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"}} {"page_content": "Theoretically, one can apply `torch.compile `on the whole diffusion sampling loop. However, in practice it is enough to just compile the U-Net. The reason is that `torch.compile` doesn\u2019t yet have a loop analyzer and would recompile the code for each iteration of the sampling loop. 
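{"page_content": "As a minimal sketch of that manual control (assuming a CUDA device and the PyTorch 2.0 context manager referenced above; this is not part of the diffusion model code itself):\n\n```python\nimport torch\nimport torch.nn.functional as F\n\nq = torch.randn(2, 8, 1024, 64, device=\"cuda\", dtype=torch.float16)\nk = torch.randn_like(q)\nv = torch.randn_like(q)\n\n# Restrict scaled_dot_product_attention to the memory-efficient backend\n# inside this context; the flash and math backends are disabled here.\nwith torch.backends.cuda.sdp_kernel(\n    enable_flash=False, enable_math=False, enable_mem_efficient=True\n):\n    out = F.scaled_dot_product_attention(q, k, v)\n```", "metadata": {"source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"}}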
Moreover, compiled sampler code is likely to generate graph breaks - so one would need to adjust it if one wants to get a good performance from the compiled version.\n\nNote that compilation [requires GPU compute capability >= SM 7.0](https://github.com/openai/triton/blob/b5d32896b1f89fc44a82f8df3bb010934c53f4f5/README.md?plain=1#L66-L68) to run in non-eager mode. This covers all GPUs in our benchmarks - T4, V100, A10, A100 - except for P100 (see the [full list](https://developer.nvidia.com/cuda-gpus#compute)). \n\n\n### Other optimizations", "metadata": {"source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"}} -{"page_content": "In addition, we have improved efficiency of GPU memory operations by eliminating some common pitfalls, e.g. creating a tensor on GPU directly rather than creating it on CPU and later moving to GPU. The places where such optimizations were necessary were determined by line-profiling and looking at CPU/GPU traces and [Flame Graphs](https://github.com/brendangregg/FlameGraph).\n\n\n## Benchmarking setup and results summary\n\nWe have two versions of code to compare: _original_ and _optimized_. On top of this, several optimization features (xFormers, PyTorch memory efficient attention, compilation) can be turned on/off. Overall, as mentioned in the introduction, we will be benchmarking 5 configurations:\n\n\n\n* _Original code without xFormers_\n* _Original code with xFormers_\n* _Optimized code with vanilla math attention backend and no compilation_\n* _Optimized code with memory-efficient attention backend and no compilation_\n* _Optimized code with memory-efficient attention backend and compilation_", "metadata": {"source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"}} -{"page_content": "As the _original version_ we took the version of the code which uses PyTorch 1.12 and a custom implementation of attention. The _optimized version_ uses `nn.MultiheadAttention` in `CrossAttention` and PyTorch 2.0.0.dev20230111+cu117. It also has a few other minor optimizations in PyTorch-related code. \n\nThe table below shows runtime of each version of the code in seconds, and the percentage improvement compared to the _original with xFormers. _The compilation time is excluded.\n\n**Runtimes for batch size 1. In parenthesis - relative improvement with respect to the \u201cOriginal with xFormers\u201d row**", "metadata": {"source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"}} +{"page_content": "### Other optimizations\n\nIn addition, we have improved efficiency of GPU memory operations by eliminating some common pitfalls, e.g. creating a tensor on GPU directly rather than creating it on CPU and later moving to GPU. The places where such optimizations were necessary were determined by line-profiling and looking at CPU/GPU traces and [Flame Graphs](https://github.com/brendangregg/FlameGraph).\n\n\n## Benchmarking setup and results summary\n\nWe have two versions of code to compare: _original_ and _optimized_. On top of this, several optimization features (xFormers, PyTorch memory efficient attention, compilation) can be turned on/off. 
Overall, as mentioned in the introduction, we will be benchmarking 5 configurations:", "metadata": {"source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"}} +{"page_content": "* _Original code without xFormers_\n* _Original code with xFormers_\n* _Optimized code with vanilla math attention backend and no compilation_\n* _Optimized code with memory-efficient attention backend and no compilation_\n* _Optimized code with memory-efficient attention backend and compilation_\n\nAs the _original version_ we took the version of the code which uses PyTorch 1.12 and a custom implementation of attention. The _optimized version_ uses `nn.MultiheadAttention` in `CrossAttention` and PyTorch 2.0.0.dev20230111+cu117. It also has a few other minor optimizations in PyTorch-related code. \n\nThe table below shows runtime of each version of the code in seconds, and the percentage improvement compared to the _original with xFormers. _The compilation time is excluded.\n\n**Runtimes for batch size 1. In parenthesis - relative improvement with respect to the \u201cOriginal with xFormers\u201d row**", "metadata": {"source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"}} {"page_content": "\n\n \n \n \n \n \n \n \n \n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
{:.table.table-striped.table-bordered}\n| Configuration | P100 | T4 | A10 | V100 | A100 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| Original without xFormers | 30.4s (-19.3%) | 29.8s (-77.3%) | 13.0s (-83.9%) | 10.9s (-33.1%) | 8.0s (-19.3%) |\n| Original with xFormers | 25.5s (0.0%) | 16.8s (0.0%) | 7.1s (0.0%) | 8.2s (0.0%) | 6.7s (0.0%) |\n| Optimized with vanilla math attention, no compilation | 27.3s (-7.0%) | 19.9s (-18.7%) | 13.2s (-87.2%) | 7.5s (8.7%) | 5.7s (15.1%) |\n| Optimized with mem. efficient attention, no compilation | 26.5s (-3.8%) | 16.8s (0.2%) | 7.1s (-0.8%) | 6.9s (16.0%) | 5.3s (20.6%) |\n| Optimized with mem. efficient attention and compilation | - | 16.4s (2.1%) | 7.2s (-2.3%) | 6.6s (18.6%) | 4.1s (38.5%) |
", "metadata": {"source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"}} {"page_content": "**Runtimes for batch size 2**", "metadata": {"source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"}} {"page_content": "\n\n \n \n \n \n \n \n \n \n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
{:.table.table-striped.table-bordered}\n| Configuration | P100 | T4 | A10 | V100 | A100 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| Original without xFormers | 58.0s (-21.6%) | 57.6s (-84.0%) | 24.4s (-95.2%) | 18.6s (-63.0%) | 12.0s (-50.6%) |\n| Original with xFormers | 47.7s (0.0%) | 31.3s (0.0%) | 12.5s (0.0%) | 11.4s (0.0%) | 8.0s (0.0%) |\n| Optimized with vanilla math attention, no compilation | 49.3s (-3.5%) | 37.9s (-21.0%) | 17.8s (-42.2%) | 12.7s (-10.7%) | 7.8s (1.8%) |\n| Optimized with mem. efficient attention, no compilation | 47.5s (0.4%) | 31.2s (0.5%) | 12.2s (2.6%) | 11.5s (-0.7%) | 7.0s (12.6%) |\n| Optimized with mem. efficient attention and compilation | - | 28.0s (10.5%) | 11.4s (9.0%) | 10.7s (6.4%) | 6.4s (20.3%) |
", "metadata": {"source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"}} @@ -1809,18 +1813,18 @@ {"page_content": "\n\n \n \n \n \n \n \n \n \n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
{:.table.table-striped.table-bordered}\n| Configuration | P100 | T4 | A10 | V100 | A100 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| Original without xFormers | 117.9s (-20.0%) | 112.4s (-81.8%) | 47.2s (-101.7%) | 35.8s (-71.9%) | 22.8s (-78.9%) |\n| Original with xFormers | 98.3s (0.0%) | 61.8s (0.0%) | 23.4s (0.0%) | 20.8s (0.0%) | 12.7s (0.0%) |\n| Optimized with vanilla math attention, no compilation | 101.1s (-2.9%) | 73.0s (-18.0%) | 28.3s (-21.0%) | 23.3s (-11.9%) | 14.5s (-13.9%) |\n| Optimized with mem. efficient attention, no compilation | 92.9s (5.5%) | 61.1s (1.2%) | 23.9s (-1.9%) | 20.8s (-0.1%) | 12.8s (-0.9%) |\n| Optimized with mem. efficient attention and compilation | - | 53.1s (14.2%) | 20.9s (10.6%) | 18.6s (10.4%) | 11.2s (12.2%) |
", "metadata": {"source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"}} {"page_content": "To minimize fluctuations and external influence on the performance of the benchmarked code, we ran each version of the code one after another, and then repeated this sequence 10 times: A, B, C, D, E, A, B, \u2026 So the results of a typical run would look like the one in the picture below.. Note that one shouldn\u2019t rely on comparison of absolute run times between different graphs, but comparison of run times_ inside_ one graph is pretty reliable, thanks to our benchmarking setup.\n\n\n\n\n![Denoising diffusion model generation benchmarks](/assets/images/2023-04-11-accelerated-generative-diffusion-models4.png){:style=\"max-height:700px\"}", "metadata": {"source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"}} {"page_content": "Each run of text-to-image generation script produces several batches, the number of which is regulated by the CLI parameter `--n_iter`. In the benchmarks we used `n_iter = 2`, but introduced an additional \u201cwarm-up\u201d iteration, which doesn\u2019t contribute to the run time. This was necessary for the runs with compilation, because compilation happens the first time the code runs, and so the first iteration is much longer than all subsequent. To make comparison fair, we also introduced this additional \u201cwarm-up\u201d iteration to all other runs. \n\nThe numbers in the table above are for number of iterations 2 (plus a \u201cwarm-up one\u201d), prompt \u201dA photo\u201d, seed 1, PLMS sampler, and autocast turned on.\n\nBenchmarks were done using P100, V100, A100, A10 and T4 GPUs. The T4 benchmarks were done in Google Colab Pro. The A10 benchmarks were done on g5.4xlarge AWS instances with 1 GPU.\n\n\n## Conclusions and next steps", "metadata": {"source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"}} -{"page_content": "We have shown that new features of PyTorch 2 - compiler and optimized attention implementation - give performance improvements exceeding or comparable with what previously required installation of an external dependency (xFormers). PyTorch achieved this, in particular, by integrating memory efficient attention from xFormers into its codebase. This is a significant improvement for user experience, given that xFormers, being a state-of-the-art library, in many scenarios requires custom installation process and long builds.\n\nThere are a few natural directions in which this work can be continued:", "metadata": {"source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"}} +{"page_content": "## Conclusions and next steps\n\nWe have shown that new features of PyTorch 2 - compiler and optimized attention implementation - give performance improvements exceeding or comparable with what previously required installation of an external dependency (xFormers). PyTorch achieved this, in particular, by integrating memory efficient attention from xFormers into its codebase. 
This is a significant improvement for user experience, given that xFormers, being a state-of-the-art library, in many scenarios requires a custom installation process and long builds.\n\nThere are a few natural directions in which this work can be continued:", "metadata": {"source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"}}
\n\n\n## Resources", "metadata": {"source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"}} {"page_content": "* PyTorch 2.0 overview, which has a lot of information on `torch.compile:` [https://pytorch.org/get-started/pytorch-2.0/](https://pytorch.org/get-started/pytorch-2.0/) \n* Tutorial on `torch.compile`: [https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html)\n* General compilation troubleshooting: [https://pytorch.org/docs/master/dynamo/troubleshooting.html](https://pytorch.org/docs/master/dynamo/troubleshooting.html)\n* Details on graph breaks: [https://pytorch.org/docs/master/dynamo/faq.html#identifying-the-cause-of-a-graph-break](https://pytorch.org/docs/master/dynamo/faq.html#identifying-the-cause-of-a-graph-break)\n* Details on guards: [https://pytorch.org/docs/master/dynamo/guards-overview.html](https://pytorch.org/docs/master/dynamo/guards-overview.html)\n* Video deep dive on TorchDynamo [https://www.youtube.com/watch?v=egZB5Uxki0I](https://www.youtube.com/watch?v=egZB5Uxki0I) \n* Tutorial on optimized attention in PyTorch 1.12: [https://pytorch.org/tutorials/beginner/bettertransformer_tutorial.html](https://pytorch.org/tutorials/beginner/bettertransformer_tutorial.html)", "metadata": {"source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"}} {"page_content": "## Acknowledgements\n\nWe would like to thank Geeta Chauhan, Natalia Gimelshein, Patrick Labatut, Bert Maher, Mark Saroufim, Michael Voznesensky and Francisco Massa for their valuable advice and early feedback on the text.\n\nSpecial thanks to Yudong Tao initiating the work on using PyTorch native attention in diffusion models.", "metadata": {"source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: \"New Library Updates in PyTorch 2.0\"\n---\n\n## Summary\n\nWe are bringing a number of improvements to the current PyTorch libraries, alongside the [PyTorch 2.0 release](/blog/pytorch-2.0-release/). These updates demonstrate our focus on developing common and extensible APIs across all domains to make it easier for our community to build ecosystem projects on PyTorch. \n\nAlong with 2.0, we are also releasing a series of beta updates to the PyTorch domain libraries, including those that are in-tree, and separate libraries including TorchAudio, TorchVision, and TorchText. An update for TorchX is also being released as it moves to community supported mode. Please find the list of the latest stable versions and updates below.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-2.0/", "category": "pytorch blogs"}} {"page_content": "**Latest Stable Library Versions (Full List)**\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
| Library | Version |
| --- | --- |
| TorchArrow | 0.1.0 |
| TorchRec | 0.4.0 |
| TorchVision | 0.15 |
| TorchAudio | 2.0 |
| TorchServe | 0.7.1 |
| TorchX | 0.4.0 |
| TorchData | 0.6.0 |
| TorchText | 0.15.0 |
| PyTorch on XLA Devices | 1.14 |
\n\n\n*To see [prior versions](https://pytorch.org/docs/stable/index.html) or (unstable) nightlies, click on versions in the top left menu above \u2018Search Docs\u2019.\n\n\n## TorchAudio \n\n### [Beta] Data augmentation operators", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-2.0/", "category": "pytorch blogs"}} -{"page_content": "The release adds several data augmentation operators under torchaudio.functional and torchaudio.transforms:\n* torchaudio.functional.add_noise\n* torchaudio.functional.convolve\n* torchaudio.functional.deemphasis\n* torchaudio.functional.fftconvolve\n* torchaudio.functional.preemphasis\n* torchaudio.functional.speed\n* torchaudio.transforms.AddNoise\n* torchaudio.transforms.Convolve\n* torchaudio.transforms.Deemphasis\n* torchaudio.transforms.FFTConvolve\n* torchaudio.transforms.Preemphasis\n* torchaudio.transforms.Speed\n* torchaudio.transforms.SpeedPerturbation\n\nThe operators can be used to synthetically diversify training data to improve the generalizability of downstream models.\n\nFor usage details, please refer to the [functional](https://pytorch.org/audio/2.0.0/functional.html) and [transform](https://pytorch.org/audio/2.0.0/transforms.html) documentation and [Audio Data Augmentation](https://pytorch.org/audio/2.0.0/tutorials/audio_data_augmentation_tutorial.html) tutorial.\n\n\n### [Beta] WavLM and XLS-R models", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-2.0/", "category": "pytorch blogs"}} -{"page_content": "The release adds two self-supervised learning models for speech and audio.\n\n* [WavLM](https://ieeexplore.ieee.org/document/9814838) that is robust to noise and reverberation.\n* [XLS-R](https://arxiv.org/abs/2111.09296) that is trained on cross-lingual datasets.\n\nBesides the model architectures, torchaudio also supports corresponding pre-trained pipelines:\n\n* torchaudio.pipelines.WAVLM_BASE\n* torchaudio.pipelines.WAVLM_BASE_PLUS\n* torchaudio.pipelines.WAVLM_LARGE\n* torchaudio.pipelines.WAV2VEC_XLSR_300M\n* torchaudio.pipelines.WAV2VEC_XLSR_1B\n* torchaudio.pipelines.WAV2VEC_XLSR_2B\n\nFor usage details, please refer to the [factory function](https://pytorch.org/audio/2.0.0/generated/torchaudio.models.Wav2Vec2Model.html#factory-functions) and [pre-trained pipelines](https://pytorch.org/audio/2.0.0/pipelines.html#id3) documentation.\n\n\n## TorchRL", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-2.0/", "category": "pytorch blogs"}} +{"page_content": "### [Beta] Data augmentation operators\n\nThe release adds several data augmentation operators under torchaudio.functional and torchaudio.transforms:\n* torchaudio.functional.add_noise\n* torchaudio.functional.convolve\n* torchaudio.functional.deemphasis\n* torchaudio.functional.fftconvolve\n* torchaudio.functional.preemphasis\n* torchaudio.functional.speed\n* torchaudio.transforms.AddNoise\n* torchaudio.transforms.Convolve\n* torchaudio.transforms.Deemphasis\n* torchaudio.transforms.FFTConvolve\n* torchaudio.transforms.Preemphasis\n* torchaudio.transforms.Speed\n* torchaudio.transforms.SpeedPerturbation\n\nThe operators can be used to synthetically diversify training data to improve the generalizability of downstream models.\n\nFor usage details, please refer to the [functional](https://pytorch.org/audio/2.0.0/functional.html) and [transform](https://pytorch.org/audio/2.0.0/transforms.html) documentation and [Audio Data 
Augmentation](https://pytorch.org/audio/2.0.0/tutorials/audio_data_augmentation_tutorial.html) tutorial.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-2.0/", "category": "pytorch blogs"}} +{"page_content": "### [Beta] WavLM and XLS-R models\n\nThe release adds two self-supervised learning models for speech and audio.\n\n* [WavLM](https://ieeexplore.ieee.org/document/9814838) that is robust to noise and reverberation.\n* [XLS-R](https://arxiv.org/abs/2111.09296) that is trained on cross-lingual datasets.\n\nBesides the model architectures, torchaudio also supports corresponding pre-trained pipelines:\n\n* torchaudio.pipelines.WAVLM_BASE\n* torchaudio.pipelines.WAVLM_BASE_PLUS\n* torchaudio.pipelines.WAVLM_LARGE\n* torchaudio.pipelines.WAV2VEC_XLSR_300M\n* torchaudio.pipelines.WAV2VEC_XLSR_1B\n* torchaudio.pipelines.WAV2VEC_XLSR_2B\n\nFor usage details, please refer to the [factory function](https://pytorch.org/audio/2.0.0/generated/torchaudio.models.Wav2Vec2Model.html#factory-functions) and [pre-trained pipelines](https://pytorch.org/audio/2.0.0/pipelines.html#id3) documentation.\n\n\n## TorchRL", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-2.0/", "category": "pytorch blogs"}} {"page_content": "## TorchRL \n\nThe initial release of torchrl includes several features that span across the entire RL domain. TorchRL can already be used in online, offline, multi-agent, multi-task and distributed RL settings, among others. See below:\n\n\n### [Beta] Environment wrappers and transforms\n\ntorchrl.envs includes several wrappers around common environment libraries. This allows users to swap one library with another without effort. These wrappers build an interface between these simulators and torchrl:\n\n* dm_control: \n* Gym\n* Brax\n* EnvPool\n* Jumanji\n* Habitat\n\nIt also comes with many commonly used transforms and vectorized environment utilities that allow for a fast execution across simulation libraries. Please refer to the [documentation](https://pytorch.org/rl/reference/envs.html) for more detail.\n\n\n### [Beta] Datacollectors", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-2.0/", "category": "pytorch blogs"}} -{"page_content": "Data collection in RL is made easy via the usage of single process or multiprocessed/distributed data collectors that execute the policy in the environment over a desired duration and deliver samples according to the user\u2019s needs. These can be found in torchrl.collectors and are documented [here](https://pytorch.org/rl/reference/collectors.html).\n\n\n### [Beta] Objective modules\n\nSeveral objective functions are included in torchrl.objectives, among which: \n\n* A generic PPOLoss class and derived ClipPPOLoss and KLPPOLoss\n* SACLoss and DiscreteSACLoss\n* DDPGLoss\n* DQNLoss\n* REDQLoss\n* A2CLoss\n* TD3Loss\n* ReinforceLoss\n* Dreamer\n\nVectorized value function operators also appear in the library. Check the documentation [here](https://pytorch.org/rl/reference/objectives.html).\n\n\n### [Beta] Models and exploration strategies\n\nWe provide multiple models, modules and exploration strategies. 
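As a rough illustration of how the augmentation operators and pre-trained pipelines listed above fit together, the sketch below mixes noise into a waveform at a chosen SNR and then feeds the result to a WavLM bundle. The waveform and noise tensors are random placeholders; please consult the linked tutorials and API docs for the exact, up-to-date signatures:

```python
import torch
import torchaudio

# Placeholder (channel, time) tensors standing in for real 16 kHz recordings
waveform = torch.randn(1, 16000)
noise = torch.randn(1, 16000)

# Mix noise into the signal at a 10 dB signal-to-noise ratio
snr = torch.tensor([10.0])
noisy = torchaudio.functional.add_noise(waveform, noise, snr)

# Load a pre-trained WavLM pipeline and extract features from the augmented audio
bundle = torchaudio.pipelines.WAVLM_BASE
model = bundle.get_model().eval()
with torch.inference_mode():
    features, _ = model.extract_features(noisy)
print(len(features), features[0].shape)
```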
Get a detailed description in [the doc](https://pytorch.org/rl/reference/modules.html).\n\n\n### [Beta] Composable replay buffer", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-2.0/", "category": "pytorch blogs"}} -{"page_content": "A composable replay buffer class is provided that can be used to store data in multiple contexts including single and multi-agent, on and off-policy and many more.. Components include:\n\n* Storages (list, physical or memory-based contiguous storages)\n* Samplers (Prioritized, sampler without repetition)\n* Writers\n* Possibility to add transforms\n\nReplay buffers and other data utilities are documented [here](https://pytorch.org/rl/reference/data.html).\n\n\n### [Beta] Logging tools and trainer\n\nWe support multiple logging tools including tensorboard, wandb and mlflow.\n\nWe provide a generic Trainer class that allows for easy code recycling and checkpointing.\n\nThese features are documented [here](https://pytorch.org/rl/reference/trainers.html).\n\n\n## TensorDict\n\nTensorDict is a new data carrier for PyTorch.\n\n\n### [Beta] TensorDict: specialized dictionary for PyTorch", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-2.0/", "category": "pytorch blogs"}} +{"page_content": "### [Beta] Datacollectors\n\nData collection in RL is made easy via the usage of single process or multiprocessed/distributed data collectors that execute the policy in the environment over a desired duration and deliver samples according to the user\u2019s needs. These can be found in torchrl.collectors and are documented [here](https://pytorch.org/rl/reference/collectors.html).\n\n\n### [Beta] Objective modules\n\nSeveral objective functions are included in torchrl.objectives, among which: \n\n* A generic PPOLoss class and derived ClipPPOLoss and KLPPOLoss\n* SACLoss and DiscreteSACLoss\n* DDPGLoss\n* DQNLoss\n* REDQLoss\n* A2CLoss\n* TD3Loss\n* ReinforceLoss\n* Dreamer\n\nVectorized value function operators also appear in the library. Check the documentation [here](https://pytorch.org/rl/reference/objectives.html).\n\n\n### [Beta] Models and exploration strategies\n\nWe provide multiple models, modules and exploration strategies. Get a detailed description in [the doc](https://pytorch.org/rl/reference/modules.html).", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-2.0/", "category": "pytorch blogs"}} +{"page_content": "### [Beta] Composable replay buffer\n\nA composable replay buffer class is provided that can be used to store data in multiple contexts including single and multi-agent, on and off-policy and many more.. 
Components include:\n\n* Storages (list, physical or memory-based contiguous storages)\n* Samplers (Prioritized, sampler without repetition)\n* Writers\n* Possibility to add transforms\n\nReplay buffers and other data utilities are documented [here](https://pytorch.org/rl/reference/data.html).\n\n\n### [Beta] Logging tools and trainer\n\nWe support multiple logging tools including tensorboard, wandb and mlflow.\n\nWe provide a generic Trainer class that allows for easy code recycling and checkpointing.\n\nThese features are documented [here](https://pytorch.org/rl/reference/trainers.html).\n\n\n## TensorDict\n\nTensorDict is a new data carrier for PyTorch.\n\n\n### [Beta] TensorDict: specialized dictionary for PyTorch", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-2.0/", "category": "pytorch blogs"}} {"page_content": "TensorDict allows you to execute many common operations across batches of tensors carried by a single container. TensorDict supports many shape and device or storage operations, and can readily be used in distributed settings. Check the [documentation](https://pytorch-labs.github.io/tensordict/) to know more.\n\n\n### [Beta] @tensorclass: a dataclass for PyTorch\n\nLike TensorDict, [tensorclass](https://pytorch-labs.github.io/tensordict/reference/prototype.html) provides the opportunity to write dataclasses with built-in torch features such as shape or device operations. \n\n\n### [Beta] tensordict.nn: specialized modules for TensorDict\n\nThe [tensordict.nn module](https://pytorch-labs.github.io/tensordict/reference/nn.html) provides specialized nn.Module subclasses that make it easy to build arbitrarily complex graphs that can be executed with TensorDict inputs. It is compatible with the latest PyTorch features such as functorch, torch.fx and torch.compile.\n\n\n## TorchRec", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-2.0/", "category": "pytorch blogs"}} {"page_content": "## TorchRec\n\n\n### [Beta] KeyedJaggedTensor All-to-All Redesign and Input Dist Fusion\n\nWe observed performance regression due to a bottleneck in sparse data distribution for models that have multiple, large KJTs to redistribute. \n\nTo combat this we altered the comms pattern to transport the minimum data required in the initial collective to support the collective calls for the actual KJT tensor data. This data sent in the initial collective, \u2018splits\u2019 means more data is transmitted over the comms stream overall, but the CPU is blocked for significantly shorter amounts of time leading to better overall QPS.\n\nFurthermore, we altered the TorchRec train pipeline to group the initial collective calls for the splits together before launching the more expensive KJT tensor collective calls. This fusion minimizes the CPU blocked time as launching each subsequent input distribution is no longer dependent on the previous input distribution.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-2.0/", "category": "pytorch blogs"}} {"page_content": "With this feature, variable batch sizes are now natively supported across ranks. These features are documented [here](https://github.com/pytorch/torchrec/commit/d0d23bef8aef5a79a1061fbc842c97bb68b91463).\n\n\n## TorchVision \n\n\n### [Beta] Extending TorchVision\u2019s Transforms to Object Detection, Segmentation & Video tasks \n\nTorchVision is extending its Transforms API! 
Here is what\u2019s new:\n\n* You can use them not only for Image Classification but also for Object Detection, Instance & Semantic Segmentation and Video Classification.\n* You can use new functional transforms for transforming Videos, Bounding Boxes and Segmentation Masks.\n\nLearn more about these new transforms [from our docs](https://pytorch.org/vision/stable/auto_examples/), and submit any feedback in our [dedicated issue](https://github.com/pytorch/vision/issues/6753).\n\n\n## TorchText \n\n### [Beta] Adding scriptable T5 and Flan-T5 to the TorchText library with incremental decoding support!", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-2.0/", "category": "pytorch blogs"}} @@ -1828,29 +1832,29 @@ {"page_content": "---\nlayout: blog_detail\ntitle: \"Deprecation of CUDA 11.6 and Python 3.7 Support\"\n---\n\nFor the upcoming PyTorch 2.0 feature release (target March 2023), we will target CUDA 11.7 as the stable version and CUDA 11.8 as the experimental version of CUDA and Python >=3.8, <=3.11. \n\nIf you are still using or depending on CUDA 11.6 or Python 3.7 builds, we strongly recommend moving to at least CUDA 11.7 and Python 3.8, as it would be the minimum versions required for PyTorch 2.0.\n\n**Please note that as of Feb 1, CUDA 11.6 and Python 3.7 are no longer included in the nightlies**\n\nPlease refer to the Release Compatibility Matrix for PyTorch releases:", "metadata": {"source": "https://pytorch.org/blog/deprecation-cuda-python-support/", "category": "pytorch blogs"}} {"page_content": "\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
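Returning to the extended TorchVision Transforms API described above, here is a hedged sketch of applying one pipeline jointly to an image and its bounding boxes. It assumes the beta `datapoints` API as it appears in torchvision 0.15; names such as `BoundingBox` and `spatial_size` may differ in later releases:

```python
import torch
from torchvision import datapoints
import torchvision.transforms.v2 as T

# A dummy image tensor plus one bounding box in XYXY format
image = torch.randint(0, 256, (3, 224, 224), dtype=torch.uint8)
boxes = datapoints.BoundingBox(
    torch.tensor([[15, 10, 120, 200]]),
    format=datapoints.BoundingBoxFormat.XYXY,
    spatial_size=(224, 224),
)

# The same pipeline transforms the image and keeps the boxes consistent with it
transform = T.Compose([
    T.RandomHorizontalFlip(p=1.0),
    T.RandomResizedCrop(size=(128, 128), antialias=True),
])
out_image, out_boxes = transform(image, boxes)
print(out_image.shape, out_boxes)
```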
| PyTorch Version | Python | Stable CUDA | Experimental CUDA |
| --- | --- | --- | --- |
| 2.0 | >=3.8, <=3.11 | CUDA 11.7, CUDNN 8.5.0.96 | CUDA 11.8, CUDNN 8.7.0.84 |
| 1.13 | >=3.7, <=3.10 | CUDA 11.6, CUDNN 8.3.2.44 | CUDA 11.7, CUDNN 8.5.0.96 |
| 1.12 | >=3.7, <=3.10 | CUDA 11.3, CUDNN 8.3.2.44 | CUDA 11.6, CUDNN 8.3.2.44 |
\n\n\nAs of 2/1/2023\n\nFor more information on PyTorch releases, updated compatibility matrix and release policies, please see (and bookmark) [Readme](https://github.com/pytorch/pytorch/blob/master/RELEASE.md#release-compatibility-matrix).", "metadata": {"source": "https://pytorch.org/blog/deprecation-cuda-python-support/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'torchvision 0.3: segmentation, detection models, new datasets and more..'\nauthor: Francisco Massa\nredirect_from: /2019/05/23/torchvision03.html\n---\n\nPyTorch domain libraries like torchvision provide convenient access to common datasets and models that can be used to quickly create a state-of-the-art baseline. Moreover, they also provide common abstractions to reduce boilerplate code that users might have to otherwise repeatedly write. The torchvision 0.3 release brings several new features including models for semantic segmentation, object detection, instance segmentation, and person keypoint detection, as well as custom C++ / CUDA ops specific to computer vision.\n\n
\n \n
\n\n\n### New features include:", "metadata": {"source": "https://pytorch.org/blog/torchvision03/", "category": "pytorch blogs"}} -{"page_content": "**Reference training / evaluation scripts:** torchvision now provides, under the references/ folder, scripts for training and evaluation of the following tasks: classification, semantic segmentation, object detection, instance segmentation and person keypoint detection. These serve as a log of how to train a specific model and provide baseline training and evaluation scripts to quickly bootstrap research.\n\n**torchvision ops:** torchvision now contains custom C++ / CUDA operators. Those operators are specific to computer vision, and make it easier to build object detection models. These operators currently do not support PyTorch script mode, but support for it is planned for in the next release. Some of the ops supported include:", "metadata": {"source": "https://pytorch.org/blog/torchvision03/", "category": "pytorch blogs"}} +{"page_content": "### New features include:\n\n**Reference training / evaluation scripts:** torchvision now provides, under the references/ folder, scripts for training and evaluation of the following tasks: classification, semantic segmentation, object detection, instance segmentation and person keypoint detection. These serve as a log of how to train a specific model and provide baseline training and evaluation scripts to quickly bootstrap research.\n\n**torchvision ops:** torchvision now contains custom C++ / CUDA operators. Those operators are specific to computer vision, and make it easier to build object detection models. These operators currently do not support PyTorch script mode, but support for it is planned for in the next release. Some of the ops supported include:", "metadata": {"source": "https://pytorch.org/blog/torchvision03/", "category": "pytorch blogs"}} {"page_content": "* roi_pool (and the module version RoIPool)\n* roi_align (and the module version RoIAlign)\n* nms, for non-maximum suppression of bounding boxes\n* box_iou, for computing the intersection over union metric between two sets of bounding boxes\n* box_area, for computing the area of a set of bounding boxes\n\nHere are a few examples on using torchvision ops:\n\n```python\nimport torch\nimport torchvision\n\n# create 10 random boxes\nboxes = torch.rand(10, 4) * 100\n# they need to be in [x0, y0, x1, y1] format\nboxes[:, 2:] += boxes[:, :2]\n# create a random image\nimage = torch.rand(1, 3, 200, 200)\n# extract regions in `image` defined in `boxes`, rescaling\n# them to have a size of 3x3\npooled_regions = torchvision.ops.roi_align(image, [boxes], output_size=(3, 3))\n# check the size\nprint(pooled_regions.shape)\n# torch.Size([10, 3, 3, 3])\n\n# or compute the intersection over union between\n# all pairs of boxes\nprint(torchvision.ops.box_iou(boxes, boxes).shape)\n# torch.Size([10, 10])\n```", "metadata": {"source": "https://pytorch.org/blog/torchvision03/", "category": "pytorch blogs"}} {"page_content": "**New models and datasets:** torchvision now adds support for object detection, instance segmentation and person keypoint detection models. In addition, several popular datasets have been added. Note: The API is currently experimental and might change in future versions of torchvision. 
New models include:\n\n### Segmentation Models\n\nThe 0.3 release also contains models for dense pixelwise prediction on images.\nIt adds FCN and DeepLabV3 segmentation models, using a ResNet50 and ResNet101 backbones.\nPre-trained weights for ResNet101 backbone are available, and have been trained on a subset of COCO train2017, which contains the same 20 categories as those from Pascal VOC.\n\nThe pre-trained models give the following results on the subset of COCO val2017 which contain the same 20 categories as those present in Pascal VOC:\n\nNetwork | mean IoU | global pixelwise acc\n-- | -- | --\nFCN ResNet101 | 63.7 | 91.9\nDeepLabV3 ResNet101 | 67.4 | 92.4\n\n### Detection Models", "metadata": {"source": "https://pytorch.org/blog/torchvision03/", "category": "pytorch blogs"}} {"page_content": "### Detection Models\n\nNetwork | box AP | mask AP | keypoint AP\n-- | -- | -- | --\nFaster R-CNN ResNet-50 FPN trained on COCO | 37.0 | \u00a0 | \u00a0\nMask R-CNN ResNet-50 FPN trained on COCO | 37.9 | 34.6 | \u00a0\nKeypoint R-CNN ResNet-50 FPN trained on COCO | 54.6 | \u00a0 | 65.0\n\nThe implementations of the models for object detection, instance segmentation and keypoint detection are fast, specially during training.\n\nIn the following table, we use 8 V100 GPUs, with CUDA 10.0 and CUDNN 7.4 to report the results. During training, we use a batch size of 2 per GPU, and during testing a batch size of 1 is used.\n\nFor test time, we report the time for the model evaluation and post-processing (including mask pasting in image), but not the time for computing the precision-recall.\n\nNetwork | train time (s / it) | test time (s / it) | memory (GB)\n-- | -- | -- | --\nFaster R-CNN ResNet-50 FPN | 0.2288 | 0.0590 | 5.2\nMask R-CNN ResNet-50 FPN | 0.2728 | 0.0903 | 5.4\nKeypoint R-CNN ResNet-50 FPN | 0.3789 | 0.1242 | 6.8", "metadata": {"source": "https://pytorch.org/blog/torchvision03/", "category": "pytorch blogs"}} {"page_content": "You can load and use pre-trained detection and segmentation models with a few lines of code\n\n```python\nimport torchvision\n\nmodel = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)\n# set it to evaluation mode, as the model behaves differently\n# during training and during evaluation\nmodel.eval()\n\nimage = PIL.Image.open('/path/to/an/image.jpg')\nimage_tensor = torchvision.transforms.functional.to_tensor(image)\n\n# pass a list of (potentially different sized) tensors\n# to the model, in 0-1 range. 
The model will take care of\n# batching them together and normalizing\noutput = model([image_tensor])\n# output is a list of dict, containing the postprocessed predictions\n```\n\n### Classification Models\n\nThe following classification models were added:\n\n* GoogLeNet (Inception v1)\n* MobileNet V2\n* ShuffleNet v2\n* ResNeXt-50 32x4d and ResNeXt-101 32x8d\n\n### Datasets\n\nThe following datasets were added:", "metadata": {"source": "https://pytorch.org/blog/torchvision03/", "category": "pytorch blogs"}} -{"page_content": "* Caltech101, Caltech256, and CelebA\n* ImageNet dataset (improving on ImageFolder, provides class-strings)\n* Semantic Boundaries Dataset\n* VisionDataset as a base class for all datasets\n\n\nIn addition, we've added more image transforms, general improvements and bug fixes, as well as improved documentation.\n\n**See the full release notes [here](https://github.com/pytorch/vision/releases) as well as this getting started tutorial [on Google Colab here](https://colab.research.google.com/github/pytorch/vision/blob/temp-tutorial/tutorials/torchvision_finetuning_instance_segmentation.ipynb), which describes how to fine tune your own instance segmentation model on a custom dataset.**\n\nCheers!\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/torchvision03/", "category": "pytorch blogs"}} +{"page_content": "### Datasets\n\nThe following datasets were added:\n\n* Caltech101, Caltech256, and CelebA\n* ImageNet dataset (improving on ImageFolder, provides class-strings)\n* Semantic Boundaries Dataset\n* VisionDataset as a base class for all datasets\n\n\nIn addition, we've added more image transforms, general improvements and bug fixes, as well as improved documentation.\n\n**See the full release notes [here](https://github.com/pytorch/vision/releases) as well as this getting started tutorial [on Google Colab here](https://colab.research.google.com/github/pytorch/vision/blob/temp-tutorial/tutorials/torchvision_finetuning_instance_segmentation.ipynb), which describes how to fine tune your own instance segmentation model on a custom dataset.**\n\nCheers!\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/torchvision03/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'Prototype Features Now Available - APIs for Hardware Accelerated Mobile and ARM64 Builds'\nauthor: Team PyTorch\n---\n\nToday, we are announcing four PyTorch prototype features. The first three of these will enable Mobile machine-learning developers to execute models on the full set of hardware (HW) engines making up a system-on-chip (SOC). This gives developers options to optimize their model execution for unique performance, power, and system-level concurrency.\n\nThese features include enabling execution on the following on-device HW engines:\n* DSP and NPUs using the Android Neural Networks API (NNAPI), developed in collaboration with Google\n* GPU execution on Android via Vulkan\n* GPU execution on iOS via Metal\n\nThis release also includes developer efficiency benefits with newly introduced support for ARM64 builds for Linux.", "metadata": {"source": "https://pytorch.org/blog/prototype-features-now-available-apis-for-hardware-accelerated-mobile-and-arm64-builds/", "category": "pytorch blogs"}} {"page_content": "Below, you\u2019ll find brief descriptions of each feature with the links to get you started. These features are available through our [nightly builds](https://pytorch.org/). 
Reach out to us on the [PyTorch Forums](https://discuss.pytorch.org/) for any comment or feedback. We would love to get your feedback on those and hear how you are using them!\n\n## NNAPI Support with Google Android", "metadata": {"source": "https://pytorch.org/blog/prototype-features-now-available-apis-for-hardware-accelerated-mobile-and-arm64-builds/", "category": "pytorch blogs"}} -{"page_content": "The Google Android and PyTorch teams collaborated to enable support for Android\u2019s Neural Networks API (NNAPI) via PyTorch Mobile. Developers can now unlock high-performance execution on Android phones as their machine-learning models will be able to access additional hardware blocks on the phone\u2019s system-on-chip. NNAPI allows Android apps to run computationally intensive neural networks on the most powerful and efficient parts of the chips that power mobile phones, including DSPs (Digital Signal Processors) and NPUs (specialized Neural Processing Units). The API was introduced in Android 8 (Oreo) and significantly expanded in Android 10 and 11 to support a richer set of AI models. With this integration, developers can now seamlessly access NNAPI directly from PyTorch Mobile. This initial release includes fully-functional support for a core set of features and operators, and Google and Facebook will be working to expand capabilities in the coming months.", "metadata": {"source": "https://pytorch.org/blog/prototype-features-now-available-apis-for-hardware-accelerated-mobile-and-arm64-builds/", "category": "pytorch blogs"}} +{"page_content": "## NNAPI Support with Google Android\n\nThe Google Android and PyTorch teams collaborated to enable support for Android\u2019s Neural Networks API (NNAPI) via PyTorch Mobile. Developers can now unlock high-performance execution on Android phones as their machine-learning models will be able to access additional hardware blocks on the phone\u2019s system-on-chip. NNAPI allows Android apps to run computationally intensive neural networks on the most powerful and efficient parts of the chips that power mobile phones, including DSPs (Digital Signal Processors) and NPUs (specialized Neural Processing Units). The API was introduced in Android 8 (Oreo) and significantly expanded in Android 10 and 11 to support a richer set of AI models. With this integration, developers can now seamlessly access NNAPI directly from PyTorch Mobile. This initial release includes fully-functional support for a core set of features and operators, and Google and Facebook will be working to expand capabilities in the coming months.", "metadata": {"source": "https://pytorch.org/blog/prototype-features-now-available-apis-for-hardware-accelerated-mobile-and-arm64-builds/", "category": "pytorch blogs"}} {"page_content": "**Links**\n* [Android Blog: Android Neural Networks API 1.3 and PyTorch Mobile support](https://android-developers.googleblog.com/2020/11/android-neural-networks-api-13.html)\n* [PyTorch Medium Blog: Support for Android NNAPI with PyTorch Mobile](http://bit.ly/android-nnapi-pytorch-mobile-announcement)\n\n## PyTorch Mobile GPU support", "metadata": {"source": "https://pytorch.org/blog/prototype-features-now-available-apis-for-hardware-accelerated-mobile-and-arm64-builds/", "category": "pytorch blogs"}} -{"page_content": "Inferencing on GPU can provide great performance on many models types, especially those utilizing high-precision floating-point math. 
Leveraging the GPU for ML model execution as those found in SOCs from Qualcomm, Mediatek, and Apple allows for CPU-offload, freeing up the Mobile CPU for non-ML use cases. This initial prototype level support provided for on device GPUs is via the Metal API specification for iOS, and the Vulkan API specification for Android. As this feature is in an early stage: performance is not optimized and model coverage is limited. We expect this to improve significantly over the course of 2021 and would like to hear from you which models and devices you would like to see performance improvements on.\n\n**Links**\n* [Prototype source workflows](https://github.com/pytorch/tutorials/tree/master/prototype_source)\n\n## ARM64 Builds for Linux", "metadata": {"source": "https://pytorch.org/blog/prototype-features-now-available-apis-for-hardware-accelerated-mobile-and-arm64-builds/", "category": "pytorch blogs"}} -{"page_content": "We will now provide prototype level PyTorch builds for ARM64 devices on Linux. As we see more ARM usage in our community with platforms such as Raspberry Pis and Graviton(2) instances spanning both at the edge and on servers respectively. This feature is available through our [nightly builds](https://pytorch.org/).\n\nWe value your feedback on these features and look forward to collaborating with you to continuously improve them further!\n\nThank you,\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/prototype-features-now-available-apis-for-hardware-accelerated-mobile-and-arm64-builds/", "category": "pytorch blogs"}} +{"page_content": "## PyTorch Mobile GPU support\n\nInferencing on GPU can provide great performance on many models types, especially those utilizing high-precision floating-point math. Leveraging the GPU for ML model execution as those found in SOCs from Qualcomm, Mediatek, and Apple allows for CPU-offload, freeing up the Mobile CPU for non-ML use cases. This initial prototype level support provided for on device GPUs is via the Metal API specification for iOS, and the Vulkan API specification for Android. As this feature is in an early stage: performance is not optimized and model coverage is limited. We expect this to improve significantly over the course of 2021 and would like to hear from you which models and devices you would like to see performance improvements on.\n\n**Links**\n* [Prototype source workflows](https://github.com/pytorch/tutorials/tree/master/prototype_source)\n\n## ARM64 Builds for Linux", "metadata": {"source": "https://pytorch.org/blog/prototype-features-now-available-apis-for-hardware-accelerated-mobile-and-arm64-builds/", "category": "pytorch blogs"}} +{"page_content": "## ARM64 Builds for Linux\n\nWe will now provide prototype level PyTorch builds for ARM64 devices on Linux. As we see more ARM usage in our community with platforms such as Raspberry Pis and Graviton(2) instances spanning both at the edge and on servers respectively. 
This feature is available through our [nightly builds](https://pytorch.org/).\n\nWe value your feedback on these features and look forward to collaborating with you to continuously improve them further!\n\nThank you,\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/prototype-features-now-available-apis-for-hardware-accelerated-mobile-and-arm64-builds/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: \"PyTorch Enterprise Support Program Update\"\nauthor: Team PyTorch\nfeatured-img: \"\"\n---\n\nOn May 25, 2021, we announced the [PyTorch Enterprise Support Program](https://pytorch.org/blog/announcing-pytorch-enterprise/) (ESP) that enabled providers to develop and offer tailored enterprise-grade support to their customers.\n\nThe program enabled Program certified service providers to develop and offer tailored enterprise-grade support to their customers through contribution of hotfixes and other improvements requested by PyTorch enterprise users who were developing models in production at scale for mission-critical applications. However, as we evaluate community feedback, we found ongoing ESP support was not necessary at this time and will immediately divert these resources to other areas to improve the user experience for the entire community.", "metadata": {"source": "https://pytorch.org/blog/pytorch-enterprise-support-update/", "category": "pytorch blogs"}} {"page_content": "Today, we are removing the PyTorch long-term support (LTS 1.8.2) download link from the \u201cGet Started\u201d page from the \u201c[Start Locally](https://pytorch.org/get-started/locally/)\u201d download option in order to simplify the user experience. One can download PyTorch v1.8.2 in [previous versions](/get-started/previous-versions/#v182-with-lts-support). Please note that it is only supported for Python while it is being deprecated. If there are any updates to ESP/LTS, we will update future blogs.\n\n

\n \n

\n\nPlease reach out to [marketing@pytorch.org](mailto:marketing@pytorch.org) with any questions.", "metadata": {"source": "https://pytorch.org/blog/pytorch-enterprise-support-update/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: 'New Releases: PyTorch 1.2, torchtext 0.4, torchaudio 0.3, and torchvision 0.4'\nauthor: Team PyTorch\nredirect_from: /2019/08/06/pytorch_aug2019_releases.html\n---\n\nSince the release of PyTorch 1.0, we\u2019ve seen the community expand to add new tools, contribute to a growing set of models available in the PyTorch Hub, and continually increase usage in both research and production.\n\nFrom a core perspective, PyTorch has continued to add features to support both research and production usage, including the ability to bridge these two worlds via [TorchScript](https://pytorch.org/docs/stable/jit.html). Today, we are excited to announce that we have four new releases including PyTorch 1.2, torchvision 0.4, torchaudio 0.3, and torchtext 0.4. You can get started now with any of these releases at [pytorch.org](https://pytorch.org/get-started/locally/).\n\n# PyTorch 1.2", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"}} {"page_content": "# PyTorch 1.2\n\nWith PyTorch 1.2, the open source ML framework takes a major step forward for production usage with the addition of an improved and more polished TorchScript environment. These improvements make it even easier to ship production models, expand support for exporting ONNX formatted models, and enhance module level support for Transformers. In addition to these new features, [TensorBoard](https://pytorch.org/docs/stable/tensorboard.html) is now no longer experimental - you can simply type `from torch.utils.tensorboard import SummaryWriter` to get started.\n\n## TorchScript Improvements\n\nSince its release in PyTorch 1.0, TorchScript has provided a path to production for eager PyTorch models. The TorchScript compiler converts PyTorch models to a statically typed graph representation, opening up opportunities for\noptimization and execution in constrained environments where Python is not available. You can incrementally convert your model to TorchScript, mixing compiled code seamlessly with Python.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"}} {"page_content": "PyTorch 1.2 significantly expands TorchScript's support for the subset of Python used in PyTorch models and delivers a new, easier-to-use API for compiling your models to TorchScript. See the [migration guide](https://pytorch.org/docs/master/jit.html#migrating-to-pytorch-1-2-recursive-scripting-api) for details. 
Below is an example usage of the new API:\n\n```python\nimport torch\n\nclass MyModule(torch.nn.Module):\n def __init__(self, N, M):\n super(MyModule, self).__init__()\n self.weight = torch.nn.Parameter(torch.rand(N, M))\n\n def forward(self, input):\n if input.sum() > 0:\n output = self.weight.mv(input)\n else:\n output = self.weight + input\n return output\n\n# Compile the model code to a static representation\nmy_script_module = torch.jit.script(MyModule(3, 4))\n\n# Save the compiled code and model data so it can be loaded elsewhere\nmy_script_module.save(\"my_script_module.pt\")\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"}} {"page_content": "To learn more, see our [Introduction to TorchScript](https://pytorch.org/tutorials/beginner/Intro_to_TorchScript.html) and [Loading a\nPyTorch Model in C++](https://pytorch.org/tutorials/advanced/cpp_export.html) tutorials.\n\n## Expanded ONNX Export", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"}} -{"page_content": "The [ONNX](http://onnx.ai/) community continues to grow with an open [governance structure](https://github.com/onnx/onnx/wiki/Expanded-ONNX-Steering-Committee-Announced!) and additional steering committee members, special interest groups (SIGs), and working groups (WGs). In collaboration with Microsoft, we\u2019ve added full support to export ONNX Opset versions 7(v1.2), 8(v1.3), 9(v1.4) and 10 (v1.5). We\u2019ve have also enhanced the constant folding pass to support Opset 10, the latest available version of ONNX. ScriptModule has also been improved including support for multiple outputs, tensor factories, and tuples as inputs and outputs. Additionally, users are now able to register their own symbolic to export custom ops, and specify the dynamic dimensions of inputs during export. Here is a summary of the all of the major improvements:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"}} +{"page_content": "## Expanded ONNX Export\n\nThe [ONNX](http://onnx.ai/) community continues to grow with an open [governance structure](https://github.com/onnx/onnx/wiki/Expanded-ONNX-Steering-Committee-Announced!) and additional steering committee members, special interest groups (SIGs), and working groups (WGs). In collaboration with Microsoft, we\u2019ve added full support to export ONNX Opset versions 7(v1.2), 8(v1.3), 9(v1.4) and 10 (v1.5). We\u2019ve have also enhanced the constant folding pass to support Opset 10, the latest available version of ONNX. ScriptModule has also been improved including support for multiple outputs, tensor factories, and tuples as inputs and outputs. Additionally, users are now able to register their own symbolic to export custom ops, and specify the dynamic dimensions of inputs during export. 
Here is a summary of the all of the major improvements:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"}} {"page_content": "* Support for multiple Opsets including the ability to export dropout, slice, flip, and interpolate in Opset 10.\n* Improvements to ScriptModule including support for multiple outputs, tensor factories, and tuples as inputs and outputs.\n* More than a dozen additional PyTorch operators supported including the ability to export a custom operator.\n* Many big fixes and test infra improvements.\n\nYou can try out the latest tutorial [here](https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html), contributed by @lara-hdr at Microsoft. A big thank you to the entire Microsoft team for all of their hard work to make this release happen!\n\n## nn.Transformer", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"}} {"page_content": "## nn.Transformer\n\nIn PyTorch 1.2, we now include a standard [nn.Transformer](https://pytorch.org/docs/stable/nn.html?highlight=transformer#torch.nn.Transformer) module, based on the paper \u201c[Attention is All You Need](https://arxiv.org/abs/1706.03762)\u201d. The `nn.Transformer` module relies entirely on an [attention mechanism](https://pytorch.org/docs/stable/nn.html?highlight=nn%20multiheadattention#torch.nn.MultiheadAttention) to draw global dependencies between input and output. The individual components of the `nn.Transformer` module are designed so they can be adopted independently. For example, the [nn.TransformerEncoder](https://pytorch.org/docs/stable/nn.html?highlight=nn%20transformerencoder#torch.nn.TransformerEncoder) can be used by itself, without the larger `nn.Transformer`. The new APIs include:\n\n* `nn.Transformer`\n* `nn.TransformerEncoder` and `nn.TransformerEncoderLayer`\n* `nn.TransformerDecoder` and `nn.TransformerDecoderLayer`", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"}} {"page_content": "
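Because the individual components can be adopted independently, a minimal sketch (with illustrative sizes) might use just the encoder on its own, or the full encoder-decoder model:

```python
import torch
import torch.nn as nn

# Standalone encoder - no decoder required
encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=6)

# Inputs are (sequence_length, batch, d_model) in this 1.2-era API
src = torch.rand(10, 32, 512)
memory = encoder(src)
print(memory.shape)  # torch.Size([10, 32, 512])

# Or the full encoder-decoder nn.Transformer
model = nn.Transformer(d_model=512, nhead=8, num_encoder_layers=6, num_decoder_layers=6)
tgt = torch.rand(20, 32, 512)
out = model(src, tgt)
print(out.shape)  # torch.Size([20, 32, 512])
```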
\n \n
\n\nSee the [Transformer Layers](https://pytorch.org/docs/stable/nn.html#transformer-layers) documentation for more information. See [here](https://github.com/pytorch/pytorch/releases) for the full PyTorch 1.2 release notes.\n\n# Domain API Library Updates", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"}} -{"page_content": "PyTorch domain libraries like torchvision, torchtext, and torchaudio provide convenient access to common datasets, models, and transforms that can be used to quickly create a state-of-the-art baseline. Moreover, they also provide common abstractions to reduce boilerplate code that users might have to otherwise repeatedly write. Since research domains have distinct requirements, an ecosystem of specialized libraries called domain APIs (DAPI) has emerged around PyTorch to simplify the development of new and existing algorithms in a number of fields. We\u2019re excited to release three updated DAPI libraries for text, audio, and vision that compliment the PyTorch 1.2 core release.\n\n## Torchaudio 0.3 with Kaldi Compatibility, New Transforms\n\n
\n \n
", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"}} +{"page_content": "# Domain API Library Updates\n\nPyTorch domain libraries like torchvision, torchtext, and torchaudio provide convenient access to common datasets, models, and transforms that can be used to quickly create a state-of-the-art baseline. Moreover, they also provide common abstractions to reduce boilerplate code that users might have to otherwise repeatedly write. Since research domains have distinct requirements, an ecosystem of specialized libraries called domain APIs (DAPI) has emerged around PyTorch to simplify the development of new and existing algorithms in a number of fields. We\u2019re excited to release three updated DAPI libraries for text, audio, and vision that compliment the PyTorch 1.2 core release.\n\n## Torchaudio 0.3 with Kaldi Compatibility, New Transforms\n\n
\n \n
", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"}} {"page_content": "Torchaudio specializes in machine understanding of audio waveforms. It is an ML library that provides relevant signal processing functionality (but is not a general signal processing library). It leverages PyTorch\u2019s GPU support to provide many tools and transformations for waveforms to make data loading and standardization easier and more readable. For example, it offers data loaders for waveforms using sox, and transformations such as spectrograms, resampling, and mu-law encoding and decoding.\n\nWe are happy to announce the availability of torchaudio 0.3.0, with a focus on standardization and complex numbers, a transformation (resample) and two new functionals (phase_vocoder, ISTFT), Kaldi compatibility, and a new tutorial. Torchaudio was redesigned to be an extension of PyTorch and a part of the domain APIs (DAPI) ecosystem.\n\n### Standardization", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"}} {"page_content": "### Standardization\n\nSignificant effort in solving machine learning problems goes into data preparation. In this new release, we've updated torchaudio's interfaces for its transformations to standardize around the following vocabulary and conventions.\n\nTensors are assumed to have channel as the first dimension and time as the last dimension (when applicable). This makes it consistent with PyTorch's dimensions. For size names, the prefix `n_` is used (e.g. \"a tensor of size (`n_freq`, `n_mel`)\") whereas dimension names do not have this prefix (e.g. \"a tensor of dimension (channel, time)\"). The input of all transforms and functions now assumes channel first. This is done to be consistent with PyTorch, which has channel followed by the number of samples. The channel parameter of all transforms and functions is now deprecated.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"}} {"page_content": "The output of `STFT` is (channel, frequency, time, 2), meaning for each channel, the columns are the Fourier transform of a certain window, so as we travel horizontally we can see each column (the Fourier transformed waveform) change over time. This matches the output of librosa so we no longer need to transpose in our test comparisons with `Spectrogram`, `MelScale`, `MelSpectrogram`, and `MFCC`. Moreover, because of these new conventions, we deprecated `LC2CL` and `BLC2CBL` which were used to transfer from one shape of signal to another.\n\nAs part of this release, we're also introducing support for complex numbers via tensors of dimension (..., 2), and providing `magphase` to convert such a tensor into its magnitude and phase, and similarly `complex_norm` and `angle`.\n\nThe details of the standardization are provided in the [README](https://github.com/pytorch/audio/blob/v0.3.0/README.md#Conventions).\n\n### Functionals, Transformations, and Kaldi Compatibility", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"}} @@ -1867,9 +1871,9 @@ {"page_content": "A few weeks ago, TorchVision v0.11 was released packed with numerous new primitives, models and training recipe improvements which allowed achieving state-of-the-art (SOTA) results. 
The project was dubbed \u201c[TorchVision with Batteries Included](https://github.com/pytorch/vision/issues/3911)\u201d and aimed to modernize our library. We wanted to enable researchers to reproduce papers and conduct research more easily by using common building blocks. Moreover, we aspired to provide the necessary tools to Applied ML practitioners to train their models on their own data using the same SOTA techniques as in research. Finally, we wanted to refresh our pre-trained weights and offer\u00a0better off-the-shelf models to our users, hoping that they would build better applications.", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} {"page_content": "Though there is still much work to be done, we wanted to share with you some exciting results from the above work. We will showcase how one can use the new tools included in TorchVision to achieve state-of-the-art results on a highly competitive and well-studied architecture such as ResNet50 [[1]](https://arxiv.org/abs/1512.03385). We will share the exact recipe used to improve our baseline by over 4.7 accuracy points to reach a final top-1 accuracy of 80.9% and share the journey for deriving the new training process. Moreover, we will show that this recipe generalizes well to other model variants and families. We hope that the above will influence future research for developing stronger generalizable training methodologies and will inspire the community to adopt and contribute to our efforts.\n\n## The Results\n\nUsing our new training recipe found on ResNet50, we\u2019ve refreshed the pre-trained weights of the following models:", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} {"page_content": "| Model | Accuracy@1 | Accuracy@5| \n|----------|:--------:|:----------:|\n| ResNet50 | 80.858 | 95.434| \n|----------|:--------:|:----------:|\n| ResNet101 | 81.886 | 95.780| \n|----------|:--------:|:----------:|\n| ResNet152 | 82.284 | 96.002| \n|----------|:--------:|:----------:|\n| ResNeXt50-32x4d | 81.198 | 95.340| \n\nNote that the accuracy of all models except RetNet50 can be further improved by adjusting their training parameters slightly, but our focus was to have a single robust recipe which performs well for all. \n\n**UPDATE:** We have refreshed the majority of popular classification models of TorchVision, you can find the details on this [blog post](https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/).\n\nThere are currently two ways to use the latest weights of the model.\n\n## Using the Multi-pretrained weight API", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} -{"page_content": "We are currently working on a new prototype mechanism which will extend the model builder methods of TorchVision to [support multiple weights](https://github.com/pytorch/vision/issues/4611). Along with the weights, we store useful [meta-data](https://github.com/pytorch/vision/blob/c5fb79f8fad60511c89957c4970cc2a5cfc8432e/torchvision/prototype/models/resnet.py#L94-L103) (such as the labels, the accuracy, links to recipe etc) and the preprocessing transforms necessary for using the models. 
Example:\n\n```python\n from PIL import Image\n from torchvision import prototype as P\n img = Image.open(\"test/assets/encode_jpeg/grace_hopper_517x606.jpg\")\n \u00a0\n # Initialize model\n weights = P.models.ResNet50_Weights.IMAGENET1K_V2\n model = P.models.resnet50(weights=weights)\n model.eval()", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} +{"page_content": "## Using the Multi-pretrained weight API\n\nWe are currently working on a new prototype mechanism which will extend the model builder methods of TorchVision to [support multiple weights](https://github.com/pytorch/vision/issues/4611). Along with the weights, we store useful [meta-data](https://github.com/pytorch/vision/blob/c5fb79f8fad60511c89957c4970cc2a5cfc8432e/torchvision/prototype/models/resnet.py#L94-L103) (such as the labels, the accuracy, links to recipe etc) and the preprocessing transforms necessary for using the models. Example:\n\n```python\n from PIL import Image\n from torchvision import prototype as P\n img = Image.open(\"test/assets/encode_jpeg/grace_hopper_517x606.jpg\")\n \u00a0\n # Initialize model\n weights = P.models.ResNet50_Weights.IMAGENET1K_V2\n model = P.models.resnet50(weights=weights)\n model.eval()", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} {"page_content": "# Initialize inference transforms\n preprocess = weights.transforms()\n \u00a0\n # Apply inference preprocessing transforms\n batch = preprocess(img).unsqueeze(0)\n prediction = model(batch).squeeze(0).softmax(0)\n \u00a0\n # Make predictions\n label = prediction.argmax().item()\n score = prediction[label].item()\n \u00a0\n # Use meta to get the labels\n category_name = weights.meta['categories'][label]\n print(f\"{category_name}: {100 * score}%\")\n```\n\n## Using the legacy API\n\nThose who don\u2019t want to use a prototype API have the option of accessing the new weights via the legacy API using the following approach:\n\n```python\n from torchvision.models import resnet\n \u00a0\n # Overwrite the URL of the previous weights\n resnet.model_urls[\"resnet50\"] = \"https://download.pytorch.org/models/resnet50-11ad3fa6.pth\"\n \u00a0\n # Initialize the model using the legacy API\n model = resnet.resnet50(pretrained=True)\n \u00a0\n # TODO: Apply preprocessing + call the model\n # ...\n```\n\n## The Training Recipe", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} -{"page_content": "Our goal was to use the newly introduced primitives of TorchVision to derive a new strong training recipe which achieves state-of-the-art results for the vanilla ResNet50 architecture when trained from scratch on ImageNet with no additional external data. Though by using architecture specific tricks\u00a0[[2]](https://arxiv.org/abs/1812.01187) one could further improve the accuracy, we\u2019ve decided not to include them so that the recipe can be used in other architectures. Our recipe\u00a0heavily focuses on simplicity and builds upon work by FAIR [[3]](https://arxiv.org/abs/2103.06877), [[4]](https://arxiv.org/abs/2106.14881), [[5]](https://arxiv.org/abs/1906.06423), [[6]](https://arxiv.org/abs/2012.12877), [[7]](https://arxiv.org/abs/2110.00476).\u00a0Our findings align with the\u00a0parallel study of Wightman et al. 
[[7]](https://arxiv.org/abs/2110.00476), who also report major accuracy improvements by focusing on the training recipes.\n\nWithout further ado, here are the main parameters of our recipe:", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} +{"page_content": "## The Training Recipe\n\nOur goal was to use the newly introduced primitives of TorchVision to derive a new strong training recipe which achieves state-of-the-art results for the vanilla ResNet50 architecture when trained from scratch on ImageNet with no additional external data. Though by using architecture specific tricks\u00a0[[2]](https://arxiv.org/abs/1812.01187) one could further improve the accuracy, we\u2019ve decided not to include them so that the recipe can be used in other architectures. Our recipe\u00a0heavily focuses on simplicity and builds upon work by FAIR [[3]](https://arxiv.org/abs/2103.06877), [[4]](https://arxiv.org/abs/2106.14881), [[5]](https://arxiv.org/abs/1906.06423), [[6]](https://arxiv.org/abs/2012.12877), [[7]](https://arxiv.org/abs/2110.00476).\u00a0Our findings align with the\u00a0parallel study of Wightman et al. [[7]](https://arxiv.org/abs/2110.00476), who also report major accuracy improvements by focusing on the training recipes.\n\nWithout further ado, here are the main parameters of our recipe:", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} {"page_content": "```python\n # Optimizer & LR scheme\n ngpus=8,\n batch_size=128,\u00a0 # per GPU\n\n epochs=600, \n opt='sgd', \u00a0\n momentum=0.9,\n\n lr=0.5, \n lr_scheduler='cosineannealinglr', \n lr_warmup_epochs=5, \n lr_warmup_method='linear', \n lr_warmup_decay=0.01, \n\n\n # Regularization and Augmentation\n weight_decay=2e-05, \n norm_weight_decay=0.0,\n\n label_smoothing=0.1, \n mixup_alpha=0.2, \n cutmix_alpha=1.0, \n auto_augment='ta_wide', \n random_erase=0.1, \n \n ra_sampler=True,\n ra_reps=4,\n\n\n # EMA configuration\n model_ema=True, \n model_ema_steps=32, \n model_ema_decay=0.99998, \n\n\n # Resizing\n interpolation='bilinear', \n val_resize_size=232, \n val_crop_size=224, \n train_crop_size=176,\n```\n\nUsing our standard [training reference script](https://github.com/pytorch/vision/tree/main/references/classification), we can train a ResNet50 using the following command:", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} {"page_content": "```\ntorchrun --nproc_per_node=8 train.py --model resnet50 --batch-size 128 --lr 0.5 \\\n--lr-scheduler cosineannealinglr --lr-warmup-epochs 5 --lr-warmup-method linear \\\n--auto-augment ta_wide --epochs 600 --random-erase 0.1 --weight-decay 0.00002\u00a0\\\n--norm-weight-decay 0.0 --label-smoothing 0.1 --mixup-alpha 0.2 --cutmix-alpha 1.0\u00a0\\\n--train-crop-size 176 --model-ema --val-resize-size 232 --ra-sampler --ra-reps 4\n```\n\n## Methodology\n\nThere are a few principles we kept in mind during our explorations:", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} {"page_content": "1. Training is a stochastic process and the validation metric we try to optimize is a random variable. 
This is due to the random weight initialization scheme employed and the existence of random effects during the training process. This means that we can\u2019t do a single run to assess the effect of a recipe change. The standard practice is doing multiple runs (usually 3 to 5) and studying the summarization stats (such as mean, std, median, max, etc).\n2. There is usually a significant interaction between different parameters, especially for techniques that focus on Regularization and reducing overfitting. Thus changing the value of one can have effects on the optimal configurations of others. To account for that one can either adopt a greedy search approach (which often leads to suboptimal results but tractable experiments) or apply grid search (which leads to better results but is computationally expensive). In this work, we used a mixture of both.\n3. Techniques that are non-deterministic or introduce noise usually require longer training cycles to improve model performance. To keep things tractable, we initially used short training cycles (small number of epochs) to decide which paths can be eliminated early and which should be explored using longer training.\n4. There is a risk of overfitting the validation dataset [[8]](https://arxiv.org/abs/1902.10811) because of the repeated experiments. To mitigate some of the risk, we apply only training optimizations that provide a significant accuracy improvements and use K-fold cross validation to verify optimizations done on the validation set. Moreover we confirm that our recipe ingredients generalize well on other models for which we didn\u2019t optimize the hyper-parameters.", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} @@ -1883,15 +1887,15 @@ {"page_content": "## Random Erasing\n\nAnother data augmentation technique known to help the classification accuracy is Random Erasing [[10]](https://arxiv.org/abs/1708.04896), [[11]](https://arxiv.org/abs/1708.04552). Often paired with Automatic Augmentation methods, it usually yields additional improvements in accuracy due to its regularization effect. In our experiments we tuned only the probability of applying the method via a grid search and found that it\u2019s beneficial to keep its probability at low levels, typically around 10%.\u00a0\n\nHere is the extra parameter introduced on top of the previous:\n\n```\nrandom_erase=0.1,\n```\n\nApplying Random Erasing increases our Acc@1 by further\u00a00.190 points.\n\n## Label Smoothing", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} {"page_content": "## Label Smoothing\n\nA good technique to reduce overfitting is to stop the model from becoming overconfident. This can be achieved by softening the ground truth using Label Smoothing\u00a0[[12]](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Szegedy_Rethinking_the_Inception_CVPR_2016_paper.pdf). There is a single parameter which controls the degree of smoothing (the higher the stronger) that we need to specify. 
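In PyTorch this is exposed directly as the label_smoothing argument of CrossEntropyLoss (the same parameter this recipe relies on below); a minimal illustrative sketch with dummy logits and labels looks as follows:\n\n```python\nimport torch\nimport torch.nn as nn\n\n# label_smoothing moves a small amount of probability mass from the true class to all other classes\ncriterion = nn.CrossEntropyLoss(label_smoothing=0.1)\n\nlogits = torch.randn(8, 1000)            # dummy batch of ImageNet-sized logits\ntargets = torch.randint(0, 1000, (8,))   # dummy integer class labels\nloss = criterion(logits, targets)\n```\n\n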
Though optimizing it via grid search is possible, we found that values around 0.05-0.15 yield similar results, so to avoid overfitting it we used the same value as on the\u00a0paper that introduced it.\n\nBelow we can find the extra config added on this step:\n\n```\nlabel_smoothing=0.1,\n```\n\nWe use PyTorch\u2019s newly introduced\u00a0[CrossEntropyLoss](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html?highlight=label_smoothing) label_smoothing parameter and that increases our accuracy by an additional\u00a00.318 points.\n\n## Mixup and Cutmix", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} {"page_content": "## Mixup and Cutmix\n\nTwo data augmentation techniques often used to produce SOTA results are Mixup and Cutmix\u00a0[[13]](https://arxiv.org/abs/1710.09412), [[14]](https://arxiv.org/abs/1905.04899). They both provide strong regularization effects by softening not only the labels but also the images. In our setup we found it beneficial to apply one of them randomly with equal probability. Each is parameterized with a\u00a0hyperparameter alpha, which controls the shape of the Beta distribution from which the smoothing probability is sampled. We did a very limited grid search, focusing primarily on common values proposed on the papers.\u00a0\n\nBelow you will find the optimal values for the alpha parameters of the two techniques:\n\n```\nmixup_alpha=0.2, \ncutmix_alpha=1.0,\n```\n\nApplying mixup increases our accuracy by\u00a00.118 points and combining it with cutmix improves it by additional 0.278 points.\n\n## Weight Decay tuning", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} -{"page_content": "Our standard recipe uses L2 regularization to reduce overfitting. The Weight Decay parameter controls the degree of the regularization (the larger the stronger) and is applied universally to all learned parameters of the model by default. In this recipe, we apply two optimizations to the standard approach. First we perform grid search to tune the parameter of weight decay and second we disable weight decay for the parameters of the normalization layers.\u00a0\n\nBelow you can find the optimal configuration of weight decay for our recipe:\n\n```\nweight_decay=2e-05, \nnorm_weight_decay=0.0,\n```\n\nThe above update improves our accuracy by a further\u00a00.526 points, providing additional experimental evidence for a known fact that tuning weight decay has significant effects on the performance of the model. Our approach for separating the Normalization parameters from the rest was inspired by\u00a0[ClassyVision\u2019s](https://github.com/facebookresearch/ClassyVision) approach.\n\n## FixRes mitigations", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} -{"page_content": "An important property identified early in our experiments is the fact that the models performed significantly better if the resolution used during validation was increased from the 224x224 of training. 
This effect is studied in detail on the FixRes paper [[5]](https://arxiv.org/abs/1906.06423)\u00a0and two mitigations are proposed: a) one could try to reduce the training resolution so that the accuracy on the validation resolution is maximized or b) one could fine-tune the model on a two-phase training so that it adjusts on the target resolution. Since we didn\u2019t want to introduce a 2-phase training, we went for option a). This means that we reduced the train crop size from 224 and used grid search to find the one that maximizes the validation on resolution of 224x224.\n\nBelow you can see the optimal value used on our recipe:\n\n```\nval_crop_size=224, \ntrain_crop_size=176,\n```\n\nThe above optimization improved our accuracy by an additional 0.160 points and sped up our training by 10%.", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} +{"page_content": "## Weight Decay tuning\n\nOur standard recipe uses L2 regularization to reduce overfitting. The Weight Decay parameter controls the degree of the regularization (the larger the stronger) and is applied universally to all learned parameters of the model by default. In this recipe, we apply two optimizations to the standard approach. First we perform grid search to tune the parameter of weight decay and second we disable weight decay for the parameters of the normalization layers.\u00a0\n\nBelow you can find the optimal configuration of weight decay for our recipe:\n\n```\nweight_decay=2e-05, \nnorm_weight_decay=0.0,\n```\n\nThe above update improves our accuracy by a further\u00a00.526 points, providing additional experimental evidence for a known fact that tuning weight decay has significant effects on the performance of the model. Our approach for separating the Normalization parameters from the rest was inspired by\u00a0[ClassyVision\u2019s](https://github.com/facebookresearch/ClassyVision) approach.\n\n## FixRes mitigations", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} +{"page_content": "## FixRes mitigations\n\nAn important property identified early in our experiments is the fact that the models performed significantly better if the resolution used during validation was increased from the 224x224 of training. This effect is studied in detail on the FixRes paper [[5]](https://arxiv.org/abs/1906.06423)\u00a0and two mitigations are proposed: a) one could try to reduce the training resolution so that the accuracy on the validation resolution is maximized or b) one could fine-tune the model on a two-phase training so that it adjusts on the target resolution. Since we didn\u2019t want to introduce a 2-phase training, we went for option a). 
This means that we reduced the train crop size from 224 and used grid search to find the one that maximizes the validation on resolution of 224x224.\n\nBelow you can see the optimal value used on our recipe:\n\n```\nval_crop_size=224, \ntrain_crop_size=176,\n```\n\nThe above optimization improved our accuracy by an additional 0.160 points and sped up our training by 10%.", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} {"page_content": "It\u2019s worth noting that the FixRes effect still persists, meaning that the model continues to perform better on validation when we increase the resolution. Moreover, further reducing the training crop-size actually hurts the accuracy. This intuitively makes sense because one can only reduce the resolution so much before critical details start disappearing from the picture. Finally, we should note that the above FixRes mitigation seems to benefit models with similar depth to ResNet50. Deeper variants with larger receptive fields seem to be slightly negatively affected (typically by 0.1-0.2 points). Hence we consider this part of the recipe optional. Below we visualize the performance of the best available checkpoints (with the full recipe) for models trained with 176 and 224 resolution:", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} {"page_content": "
\n\"Best\n\"Best\n
\n\n## Exponential Moving Average (EMA)\n\nEMA is a technique that allows one to push the accuracy of a model without increasing its complexity or inference time. It performs an exponential moving average on the model weights and this leads to increased accuracy and more stable models. The averaging happens every few iterations and its decay parameter was tuned via grid search.\u00a0\n\nBelow you can see the optimal values for our recipe:\n\n```\nmodel_ema=True, \nmodel_ema_steps=32, \nmodel_ema_decay=0.99998,\n```", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} {"page_content": "The use of EMA increases our accuracy by\u00a00.254 points comparing to the previous step. Note that TorchVision\u2019s\u00a0[EMA implementation](https://github.com/pytorch/vision/pull/4406) is build on top of PyTorch\u2019s [AveragedModel](https://pytorch.org/docs/stable/optim.html#stochastic-weight-averaging) class with the key difference being that it averages not only the model parameters but also its buffers. Moreover, we have adopted tricks from\u00a0[Pycls](https://github.com/facebookresearch/pycls/tree/main/pycls)\u00a0which allow us to parameterize the decay in a way that doesn\u2019t depend on the number of epochs.\n\n## Inference Resize tuning", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} -{"page_content": "Unlike all other steps of the process which involved training models with different parameters, this optimization was done on top of the final model. During inference, the image is resized to a specific resolution and then a central 224x224 crop is taken from it. The original recipe used a resize size of 256, which caused a similar discrepancy as the one described on the FixRes paper [[5]](https://arxiv.org/abs/1906.06423). By bringing this resize value closer to the target inference resolution, one can improve the accuracy. To select the value we run a short grid search between interval [224, 256] with step of 8. To avoid overfitting, the value was selected using half of the validation set and confirmed using the other half.\n\nBelow you can see the optimal value used on our recipe:\n\n```\nval_resize_size=232,\n```", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} -{"page_content": "The above is an optimization which improved our accuracy by\u00a00.224 points.\u00a0It\u2019s worth noting that the optimal value for ResNet50 works also best for ResNet101, ResNet152 and ResNeXt50, which hints that it generalizes across models:\n\n\n
\n\"ResNet50\n\"ResNet101\n\"Best\n
\n\n## [UPDATE] Repeated Augmentation", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} -{"page_content": "Repeated Augmentation [[15]](https://arxiv.org/abs/1901.09335), [[16]](https://arxiv.org/abs/1902.05509) is another technique which can improve the overall accuracy and has been used by other strong recipes such as those at [[6]](https://arxiv.org/abs/2012.12877), [[7]](https://arxiv.org/abs/2110.00476). Tal Ben-Nun, a community contributor, has [further improved](https://github.com/pytorch/vision/pull/5201) upon our original recipe by proposing training the model with 4 repetitions. His contribution came after the release of this article.\n\nBelow you can see the optimal value used on our recipe:\n\n```\nra_sampler=True,\nra_reps=4,\n```\n\nThe above is the final optimization which improved our accuracy by\u00a00.184 points.\u00a0\n\n## Optimizations that were tested but not adopted", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} -{"page_content": "During the early stages of our research, we experimented with additional techniques, configurations and optimizations. Since our target was to keep our recipe as simple as possible, we decided not to include anything that didn\u2019t provide a significant improvement. Here are a few approaches that we took but didn\u2019t make it to our final recipe:", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} +{"page_content": "## Inference Resize tuning\n\nUnlike all other steps of the process which involved training models with different parameters, this optimization was done on top of the final model. During inference, the image is resized to a specific resolution and then a central 224x224 crop is taken from it. The original recipe used a resize size of 256, which caused a similar discrepancy as the one described on the FixRes paper [[5]](https://arxiv.org/abs/1906.06423). By bringing this resize value closer to the target inference resolution, one can improve the accuracy. To select the value we run a short grid search between interval [224, 256] with step of 8. To avoid overfitting, the value was selected using half of the validation set and confirmed using the other half.\n\nBelow you can see the optimal value used on our recipe:\n\n```\nval_resize_size=232,\n```", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} +{"page_content": "```\nval_resize_size=232,\n```\n\nThe above is an optimization which improved our accuracy by\u00a00.224 points.\u00a0It\u2019s worth noting that the optimal value for ResNet50 works also best for ResNet101, ResNet152 and ResNeXt50, which hints that it generalizes across models:\n\n\n
\n\"ResNet50\n\"ResNet101\n\"Best\n
\n\n## [UPDATE] Repeated Augmentation", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} +{"page_content": "## [UPDATE] Repeated Augmentation\n\nRepeated Augmentation [[15]](https://arxiv.org/abs/1901.09335), [[16]](https://arxiv.org/abs/1902.05509) is another technique which can improve the overall accuracy and has been used by other strong recipes such as those at [[6]](https://arxiv.org/abs/2012.12877), [[7]](https://arxiv.org/abs/2110.00476). Tal Ben-Nun, a community contributor, has [further improved](https://github.com/pytorch/vision/pull/5201) upon our original recipe by proposing training the model with 4 repetitions. His contribution came after the release of this article.\n\nBelow you can see the optimal value used on our recipe:\n\n```\nra_sampler=True,\nra_reps=4,\n```\n\nThe above is the final optimization which improved our accuracy by\u00a00.184 points.\u00a0\n\n## Optimizations that were tested but not adopted", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} +{"page_content": "## Optimizations that were tested but not adopted\n\nDuring the early stages of our research, we experimented with additional techniques, configurations and optimizations. Since our target was to keep our recipe as simple as possible, we decided not to include anything that didn\u2019t provide a significant improvement. Here are a few approaches that we took but didn\u2019t make it to our final recipe:", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} {"page_content": "- **Optimizers:** Using more complex optimizers such as Adam, RMSProp or SGD with Nesterov momentum didn\u2019t\u00a0provide significantly better results than vanilla SGD with momentum.\n- **LR Schedulers:**\u00a0We tried different LR Scheduler schemes such as StepLR and Exponential. Though the latter tends to work better with EMA, it often requires additional hyper-parameters such as defining the minimum LR to work well. Instead, we just use cosine annealing decaying the LR up to zero and choose the checkpoint with the highest accuracy.\n- **Automatic Augmentations:**\u00a0We\u2019ve tried different augmentation strategies such as AutoAugment and RandAugment. None of these outperformed the simpler parameter-free TrivialAugment.\n- **Interpolation:** Using bicubic or nearest interpolation didn\u2019t\u00a0provide significantly better results than bilinear.\n- **Normalization layers:** Using Sync Batch Norm didn\u2019t yield\u00a0significantly better results than using the regular Batch Norm.\n\n## Acknowledgements", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} {"page_content": "## Acknowledgements\n\nWe would like to thank\u00a0Piotr Dollar, Mannat Singh and Hugo Touvron for providing their insights and feedback during the development of the recipe and for their previous research work on which our recipe is based on. Their support was invaluable for achieving the above result. 
Moreover, we would like to thank\u00a0Prabhat Roy, Kai Zhang, Yiwen Song, Joel Schlosser, Ilqar Ramazanli, Francisco Massa, Mannat Singh, Xiaoliang Dai, Samuel Gabriel, Allen Goodman and Tal Ben-Nun for their contributions to the Batteries Included project.\n\n## References", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} {"page_content": "1. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. \u201cDeep Residual Learning for Image Recognition\u201d.\n2. Tong He, Zhi Zhang, Hang Zhang, Zhongyue Zhang, Junyuan Xie, Mu Li. \u201cBag of Tricks for Image Classification with Convolutional Neural Networks\u201d\n3. Piotr Doll\u00e1r, Mannat Singh, Ross Girshick. \u201cFast and Accurate Model Scaling\u201d\n4. Tete Xiao, Mannat Singh, Eric Mintun, Trevor Darrell, Piotr Doll\u00e1r, Ross Girshick. \u201cEarly Convolutions Help Transformers See Better\u201d\n5. Hugo Touvron, Andrea Vedaldi, Matthijs Douze, Herv\u00e9 J\u00e9gou. \u201cFixing the train-test resolution discrepancy\n6. Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Herv\u00e9 J\u00e9gou. \u201cTraining data-efficient image transformers & distillation through attention\u201d\n7. Ross Wightman, Hugo Touvron, Herv\u00e9 J\u00e9gou. \u201cResNet strikes back: An improved training procedure in timm\u201d\n8. Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, Vaishaal Shankar. \u201cDo ImageNet Classifiers Generalize to ImageNet?\u201d\n9. Samuel G. M\u00fcller, Frank Hutter. \u201cTrivialAugment: Tuning-free Yet State-of-the-Art Data Augmentation\u201d\n10. Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, Yi Yang. \u201cRandom Erasing Data Augmentation\u201d\n11. Terrance DeVries, Graham W. Taylor. \u201cImproved Regularization of Convolutional Neural Networks with Cutout\u201d\n12. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, Zbigniew Wojna. \u201cRethinking the Inception Architecture for Computer Vision\u201d\n13. Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz. \u201cmixup: Beyond Empirical Risk Minimization\u201d\n14. Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, Youngjoon Yoo. \u201cCutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features\u201d\n15. Elad Hoffer, Tal Ben-Nun, Itay Hubara, Niv Giladi, Torsten Hoefler, Daniel Soudry. \u201cAugment your batch: better training with larger batches\u201d\n16. Maxim Berman, Herv\u00e9 J\u00e9gou, Andrea Vedaldi, Iasonas Kokkinos, Matthijs Douze. \u201cMultigrain: a unified image embedding for classes and instances\u201d", "metadata": {"source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"}} @@ -1911,7 +1915,8 @@ {"page_content": "\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
\nStable\n Beta\n Prototype\n Performance Improvements\n
\n\nAccelerated PT 2 Transformers\n \n\ntorch.compile\n \n\nDTensor\n \n\nCUDA support for 11.7 & 11.8 (deprecating CUDA 11.6) \n
\n \n\nPyTorch MPS Backend\n \n\nTensorParallel\n \n\nPython 3.8 (deprecating Python 3.7)\n
\n ", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}} {"page_content": "Scaled dot product attention\n \n\n2D Parallel\n \n\nAWS Graviton3\n
\n \n\nfunctorch\n \n\nTorch.compile (dynamic=True)\n \n
\n Dispatchable Collectives\n \n
\n Torch.set_default & torch.device\n \n \n
\n ", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}} {"page_content": "X86 quantization backend\n \n \n
\n \n\nGNN inference and training performance\n \n \n
\n\n\n*To see a full list of public 2.0, 1.13 and 1.12 feature submissions click [here](https://docs.google.com/spreadsheets/d/1H3jazwO8BBCwK8JwLNYspLiHfUrzshEtyqjL-X93I9g/edit#gid=790902532).\n\n\n## Stable Features\n\n\n### [Stable] Accelerated PyTorch 2 Transformers", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}} -{"page_content": "The PyTorch 2.0 release includes a new high-performance implementation of the PyTorch Transformer API. In releasing Accelerated PT2 Transformers, our goal is to make training and deployment of state-of-the-art Transformer models affordable across the industry. This release introduces high-performance support for training and inference using a custom kernel architecture for scaled dot product attention (SPDA), extending the inference \u201cfastpath\u201d architecture, previously known as \"Better Transformer.\"\n\nSimilar to the \u201cfastpath\u201d architecture, custom kernels are fully integrated into the PyTorch Transformer API \u2013 thus, using the native Transformer and MultiHeadAttention API will enable users to:\n\n* transparently see significant speed improvements; \n* support many more use cases including models using Cross-Attention, Transformer Decoders, and for training models; and\n* continue to use fastpath inference for fixed and variable sequence length Transformer Encoder and Self Attention use cases.", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}} +{"page_content": "### [Stable] Accelerated PyTorch 2 Transformers\n\nThe PyTorch 2.0 release includes a new high-performance implementation of the PyTorch Transformer API. In releasing Accelerated PT2 Transformers, our goal is to make training and deployment of state-of-the-art Transformer models affordable across the industry. This release introduces high-performance support for training and inference using a custom kernel architecture for scaled dot product attention (SPDA), extending the inference \u201cfastpath\u201d architecture, previously known as \"Better Transformer.\"\n\nSimilar to the \u201cfastpath\u201d architecture, custom kernels are fully integrated into the PyTorch Transformer API \u2013 thus, using the native Transformer and MultiHeadAttention API will enable users to:", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}} +{"page_content": "* transparently see significant speed improvements; \n* support many more use cases including models using Cross-Attention, Transformer Decoders, and for training models; and\n* continue to use fastpath inference for fixed and variable sequence length Transformer Encoder and Self Attention use cases.", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}} {"page_content": "To take full advantage of different hardware models and Transformer use cases, multiple SDPA custom kernels are supported (see below), with custom kernel selection logic that will pick the highest-performance kernel for a given model and hardware type. In addition to the existing Transformer API, model developers may also use the [scaled dot product attention](#beta-scaled-dot-product-attention-20) kernels directly by calling the new scaled_dot_product_attention() operator. Accelerated PyTorch 2 Transformers are integrated with torch.compile() . 
To use your model while benefiting from the additional acceleration of PT2-compilation (for inference or training), pre-process the model with `model = torch.compile(model)`.\n\nWe have achieved major speedups for training transformer models and in particular large language models with Accelerated PyTorch 2 Transformers using a combination of custom kernels and torch.compile().", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}} {"page_content": "![alt_text](/assets/images/pytorch20post.png \"Accelerated PyTorch 2 speed\"){:width=\"100%\"}\nFigure: Using scaled dot product attention with custom kernels and torch.compile delivers significant speedups for training large language models, such as for [nanoGPT](https://github.com/karpathy/nanoGPT) shown here.\n\n\n\n## Beta Features\n\n\n### [Beta] torch.compile\n\ntorch.compile is the main API for PyTorch 2.0, which wraps your model and returns a compiled model. It is a fully additive (and optional) feature and hence 2.0 is 100% backward compatible by definition.", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}} {"page_content": "Underpinning torch.compile are new technologies \u2013 TorchDynamo, AOTAutograd, PrimTorch and TorchInductor:\n* TorchDynamo captures PyTorch programs safely using Python Frame Evaluation Hooks and is a significant innovation that was a result of 5 years of our R&D into safe graph capture. \n* AOTAutograd overloads PyTorch\u2019s autograd engine as a tracing autodiff for generating ahead-of-time backward traces. \n* PrimTorch canonicalizes ~2000+ PyTorch operators down to a closed set of ~250 primitive operators that developers can target to build a complete PyTorch backend. This substantially lowers the barrier of writing a PyTorch feature or backend. \n* TorchInductor is a deep learning compiler that generates fast code for multiple accelerators and backends. For NVIDIA and AMD GPUs, it uses OpenAI Triton as a key building block. For intel CPUs, we generate C++ code using multithreading, vectorized instructions and offloading appropriate operations to mkldnn when possible.", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}} @@ -1921,7 +1926,7 @@ {"page_content": "In previous versions of PyTorch, you had to rely on third-party implementations and install separate packages to take advantage of memory-optimized algorithms like [FlashAttention](https://github.com/HazyResearch/flash-attention). With PyTorch 2.0, all these implementations are readily available by default.\n\nThese implementations include [FlashAttention](https://arxiv.org/abs/2205.14135) from HazyResearch, Memory-Efficient Attention from the [xFormers](https://github.com/facebookresearch/xformers) project, and a native C++ implementation that is ideal for non-CUDA devices or when high-precision is required.\n\nPyTorch 2.0 will automatically select the optimal implementation for your use case, but you can also toggle them individually for finer-grained control. 
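As a minimal illustrative sketch (arbitrary shapes, assuming a CUDA device with fp16 tensors), the operator can be called directly, and the torch.backends.cuda.sdp_kernel context manager can be used to restrict which implementation is allowed:\n\n```python\nimport torch\nimport torch.nn.functional as F\n\n# (batch, num_heads, seq_len, head_dim) - illustrative sizes only\nquery = torch.randn(2, 8, 1024, 64, device=\"cuda\", dtype=torch.float16)\nkey = torch.randn(2, 8, 1024, 64, device=\"cuda\", dtype=torch.float16)\nvalue = torch.randn(2, 8, 1024, 64, device=\"cuda\", dtype=torch.float16)\n\n# Let PyTorch pick the fastest eligible kernel automatically\nout = F.scaled_dot_product_attention(query, key, value)\n\n# Or pin a specific backend, e.g. allow only the FlashAttention kernel\nwith torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):\n    out = F.scaled_dot_product_attention(query, key, value)\n```\n\n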
Additionally, the scaled dot product attention function can be used to build common transformer architecture components.", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}} {"page_content": "Learn more with the [documentation](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html?highlight=scaled_dot_product#torch.nn.functional.scaled_dot_product_attention) and this [tutorial](https://pytorch.org/tutorials/intermediate/scaled_dot_product_attention_tutorial.html).\n\n\n### [Beta] functorch -> torch.func \n\nInspired by [Google JAX](https://github.com/google/jax), functorch is a library that offers composable vmap (vectorization) and autodiff transforms. It enables advanced autodiff use cases that would otherwise be tricky to express in PyTorch. Examples include:\n* [model ensembling](https://pytorch.org/tutorials/intermediate/ensembling.html)\n* [efficiently computing jacobians and hessians](https://pytorch.org/tutorials/intermediate/jacobians_hessians.html)\n* [computing per-sample-gradients (or other per-sample quantities)](https://pytorch.org/tutorials/intermediate/per_sample_grads.html)", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}} {"page_content": "We\u2019re excited to announce that, as the final step of upstreaming and integrating functorch into PyTorch, the functorch APIs are now available in the torch.func module. Our function transform APIs are identical to before, but we have changed how the interaction with NN modules work. Please see the [docs](https://pytorch.org/docs/master/func.html) and the [migration guide](https://pytorch.org/docs/master/func.migrating.html) for more details.\n\nFurthermore, we have [added support for torch.autograd.Function](https://pytorch.org/docs/master/notes/extending.func.html): one is now able to apply function transformations (e.g. vmap, grad, jvp) over torch.autograd.Function.\n\n\n### [Beta] Dispatchable Collectives", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}} -{"page_content": "Dispatchable collectives is an improvement to the existing init_process_group() API which changes backend to an optional argument. For users, the main advantage of this feature is that it will allow them to write code that can run on both GPU and CPU machines without having to change the backend specification. The dispatchability feature will also make it easier for users to support both GPU and CPU collectives, as they will no longer need to specify the backend manually (e.g. \u201cNCCL\u201d or \u201cGLOO\u201d). Existing backend specifications by users will be honored and will not require change.\n\nUsage example:\n```\nimport torch.distributed.dist\n\u2026\n# old\ndist.init_process_group(backend=\u201dnccl\u201d, ...)\ndist.all_reduce(...) # with CUDA tensors works\ndist.all_reduce(...) # with CPU tensors does not work\n\n# new\ndist.init_process_group(...) # backend is optional\ndist.all_reduce(...) # with CUDA tensors works\ndist.all_reduce(...) # with CPU tensors works\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}} +{"page_content": "### [Beta] Dispatchable Collectives\n\nDispatchable collectives is an improvement to the existing init_process_group() API which changes backend to an optional argument. 
For users, the main advantage of this feature is that it will allow them to write code that can run on both GPU and CPU machines without having to change the backend specification. The dispatchability feature will also make it easier for users to support both GPU and CPU collectives, as they will no longer need to specify the backend manually (e.g. \u201cNCCL\u201d or \u201cGLOO\u201d). Existing backend specifications by users will be honored and will not require change.\n\nUsage example:\n```\nimport torch.distributed.dist\n\u2026\n# old\ndist.init_process_group(backend=\u201dnccl\u201d, ...)\ndist.all_reduce(...) # with CUDA tensors works\ndist.all_reduce(...) # with CPU tensors does not work\n\n# new\ndist.init_process_group(...) # backend is optional\ndist.all_reduce(...) # with CUDA tensors works\ndist.all_reduce(...) # with CPU tensors works\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}} {"page_content": "Learn more [here](https://pytorch.org/docs/master/distributed.html#torch.distributed.init_process_group).\n\n\n### [Beta] torch.set_default_device and torch.device as context manager\n\ntorch.set_default_device allows users to change the default device that factory functions in PyTorch allocate on. For example, if you torch.set_default_device(\u2018cuda\u2019), a call to torch.empty(2) will allocate on CUDA (rather than on CPU). You can also use torch.device as a context manager to change the default device on a local basis. This resolves a long standing feature request from PyTorch\u2019s initial release for a way to do this.\n\nLearn more [here](https://pytorch.org/tutorials/recipes/recipes/changing_default_device.html). \n\n\n### [Beta] \"X86\" as the new default quantization backend for x86 CPU", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}} {"page_content": "The new X86 quantization backend, which utilizes FBGEMM and oneDNN kernel libraries, replaces FBGEMM as the default quantization backend for x86 CPU platforms and offers improved int8 inference performance compared to the original FBGEMM backend, leveraging the strengths of both libraries, with 1.3X \u2013 2X inference performance speedup measured on 40+ deep learning models. The new backend is functionally compatible with the original FBGEMM backend.\n\n\n**Table: Geomean Speedup of X86 Quantization Backend vs. FBGEMM Backend**\n\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
\n 1 core/instance\n 2 cores/instance\n 4 cores/instance\n 1 socket (32 cores)/instance\n
Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz\n 1.76X\n 1.80X\n 2.04X\n 1.34X\n
", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}} {"page_content": "By default, users on x86 platforms will utilize the x86 quantization backend and their PyTorch programs will remain unchanged when using the default backend. Alternatively, users have the option to specify \"X86\" as the quantization backend explicitly. Example code is shown below:\n\n```\nimport torch\nfrom torch.ao.quantization import get_default_qconfig_mappingfrom torch.quantization.quantize_fx\nimport prepare_fx, convert_fx\n \n# get default configuration\nqconfig_mapping = get_default_qconfig_mapping()\n \n# or explicitly specify the backend\n# qengine = 'x86'\n# torch.backends.quantized.engine = qengine\n# qconfig_mapping = get_default_qconfig_mapping(qengine)\n \n# construct fp32 model\nmodel_fp32 = ...\n \n# prepare\nprepared_model = prepare_fx(model_fp32, qconfig_mapping, example_inputs=x)\n \n# calibrate\n...\n \n# convert\nquantized_model = convert_fx(prepared_model)\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}} @@ -1934,10 +1939,10 @@ {"page_content": "```\n# Assuming we have a model of the name 'model'\n \nexample_input = torch.rand(1, 3, 224, 224)\n \n# enable oneDNN Graph\ntorch.jit.enable_onednn_fusion(True)\n# Disable AMP for JIT\ntorch._C._jit_set_autocast_mode(False)\nwith torch.no_grad(), torch.cpu.amp.autocast():\n\tmodel = torch.jit.trace(model, (example_input))\n\tmodel = torch.jit.freeze(model)\n \t# 2 warm-ups (2 for tracing/scripting with an example, 3 without an example)\n\tmodel(example_input)\n\tmodel(example_input)\n \n\t# speedup would be observed in subsequent runs.\n\tmodel(example_input)\n```\n\n\nLearn more [here](https://pytorch.org/tutorials/recipes/recipes/tuning_guide.html#use-onednn-graph-with-torchscript-for-inference).\n\n\n## Prototype Features\n\n### Distributed API\n\n#### [Prototype] DTensor", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}} {"page_content": "PyTorch [DistributedTensor](https://github.com/pytorch/pytorch/blob/master/torch/distributed/_tensor/README.md) (DTensor) is a prototyping effort with distributed tensor primitives to allow easier distributed computation authoring in the SPMD (Single Program Multiple Devices) paradigm. The primitives are simple but powerful when used to express tensor distributions with both sharded and replicated parallelism strategies. PyTorch DTensor empowered PyTorch [Tensor Parallelism](https://pytorch.org/docs/master/distributed.tensor.parallel.html) along with other advanced parallelism explorations. In addition, it also offers a uniform way to save/load state_dict for distributed checkpointing purposes, even when there\u2019re complex tensor distribution strategies such as combining tensor parallelism with parameter sharding in FSDP. More details can be found in this [RFC](https://github.com/pytorch/pytorch/issues/88838) and the [DTensor examples notebook](https://colab.research.google.com/drive/12Pl5fvh0eLPUrcVO7s6yY4n2_RZo8pLR#scrollTo=stYPKb9Beq4e).", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}} {"page_content": "#### [Prototype] TensorParallel\n\nWe now support DTensor based Tensor Parallel which users can distribute their model parameters across different GPU devices. 
We also support Pairwise Parallel which shards two concatenated linear layers in a col-wise and row-wise style separately so that only one collective(all-reduce/reduce-scatter) is needed in the end. More details can be found in this [example](https://github.com/pytorch/examples/blob/main/distributed/tensor_parallelism/example.py).\n\n\n#### [Prototype] 2D Parallel\n\nWe implemented the integration of the aforementioned TP with FullyShardedDataParallel(FSDP) as 2D parallel to further scale large model training. More details can be found in this [slide](https://docs.google.com/presentation/d/17g6WqrO00rP3MsxbRENsPpjrlSkwiA_QB4r93_eB5is/edit?usp=sharing) and [code example](https://github.com/pytorch/pytorch/blob/master/test/distributed/tensor/parallel/test_2d_parallel.py).\n\n\n#### [Prototype] torch.compile(dynamic=True)", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}} -{"page_content": "Experimental support for PT2 compilation with dynamic shapes is available in this release. Inference compilation with inductor for simple models is supported, but there are a lot of limitations:", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}} +{"page_content": "#### [Prototype] torch.compile(dynamic=True)\n\nExperimental support for PT2 compilation with dynamic shapes is available in this release. Inference compilation with inductor for simple models is supported, but there are a lot of limitations:", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}} {"page_content": "* Training available in a future release (This is partially fixed in nightlies!)\n* Minifier available in a future release.\n* It is easy to end up in a situation where the dimension you wanted to be dynamic gets specialized anyway. Some of these issues are fixed in nightlies, others are not.\n* We do not appropriately propagate Inductor guards to the top-level, this is tracked at [#96296](https://github.com/pytorch/pytorch/issues/96296).\n* Data-dependent operations like nonzero still require a graph break.\n* Dynamic does not work with non-standard modes like reduce-overhead or max-autotune.\n* There are many bugs in Inductor compilation. To track known bugs, check the [dynamic shapes](https://github.com/pytorch/pytorch/issues?q=is%3Aopen+is%3Aissue+label%3A%22module%3A+dynamic+shapes%22) label on the PyTorch issue tracker.\n\nFor the latest and greatest news about dynamic shapes support on master, check out [our status reports](https://dev-discuss.pytorch.org/t/state-of-symbolic-shapes-branch/777/43).", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}} {"page_content": "## Highlights/Performance Improvements\n\n\n### [Deprecation of Cuda 11.6 and Python 3.7 support](https://pytorch.org/blog/deprecation-cuda-python-support/) for PyTorch 2.0\n\nIf you are still using or depending on CUDA 11.6 or Python 3.7 builds, we strongly recommend moving to at least CUDA 11.7 and Python 3.8, as it would be the minimum versions required for PyTorch 2.0. 
For more detail, please refer to the [Release Compatibility Matrix for PyTorch](https://github.com/pytorch/pytorch/blob/master/RELEASE.md#release-compatibility-matrix) releases.\n\n\n### Python 3.11 support on Anaconda Platform", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}} -{"page_content": "Due to lack of Python 3.11 support for packages that PyTorch depends on, including NumPy, SciPy, SymPy, Pillow and others on the Anaconda platform. We will not be releasing Conda binaries compiled with Python 3.11 for PyTorch Release 2.0. The Pip packages with Python 3.11 support will be released, hence if you intend to use PyTorch 2.0 with Python 3.11 please use our Pip packages. Please note: Conda packages with Python 3.11 support will be made available on our nightly channel. Also we are planning on releasing Conda Python 3.11 binaries as part of future release once Anaconda provides these key dependencies. More information and instructions on how to download the Pip packages can be found [here](https://dev-discuss.pytorch.org/t/pytorch-2-0-message-concerning-python-3-11-support-on-anaconda-platform/1087).\n\n\n### Optimized PyTorch Inference with AWS Graviton processors", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}} +{"page_content": "### Python 3.11 support on Anaconda Platform\n\nDue to lack of Python 3.11 support for packages that PyTorch depends on, including NumPy, SciPy, SymPy, Pillow and others on the Anaconda platform. We will not be releasing Conda binaries compiled with Python 3.11 for PyTorch Release 2.0. The Pip packages with Python 3.11 support will be released, hence if you intend to use PyTorch 2.0 with Python 3.11 please use our Pip packages. Please note: Conda packages with Python 3.11 support will be made available on our nightly channel. Also we are planning on releasing Conda Python 3.11 binaries as part of future release once Anaconda provides these key dependencies. More information and instructions on how to download the Pip packages can be found [here](https://dev-discuss.pytorch.org/t/pytorch-2-0-message-concerning-python-3-11-support-on-anaconda-platform/1087).\n\n\n### Optimized PyTorch Inference with AWS Graviton processors", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}} {"page_content": "The optimizations focused on three key areas: GEMM kernels, bfloat16 support, primitive caching and the memory allocator. For aarch64 platforms, PyTorch supports Arm Compute Library (ACL) GEMM kernels via Mkldnn(OneDNN) backend. The ACL library provides Neon/SVE GEMM kernels for fp32 and bfloat16 formats. The bfloat16 support on c7g allows efficient deployment of bfloat16 trained, AMP (Automatic Mixed Precision) trained, or even the standard fp32 trained models. The standard fp32 models leverage bfloat16 kernels via OneDNN fast math mode, without any model quantization. Next we implemented primitive caching for conv, matmul and inner product operators. 
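As a short illustration of how the fast math path is typically switched on for an unmodified fp32 model (the environment variable name follows the linked Graviton guide and is an assumption here rather than part of this release), one can set the oneDNN toggle before PyTorch is imported:\n\n```python\nimport os\n\n# Assumption: oneDNN fast-math toggle documented in the AWS Graviton getting-started guide;\n# it must be set before torch (and therefore oneDNN) is initialized\nos.environ[\"DNNL_DEFAULT_FPMATH_MODE\"] = \"BF16\"\n\nimport torch\nimport torch.nn as nn\n\nmodel = nn.Linear(1024, 1024).eval()   # stand-in for a real fp32 model\nx = torch.rand(64, 1024)\nwith torch.no_grad():\n    out = model(x)                     # eligible GEMMs may be dispatched to bfloat16 ACL kernels\n```\n\n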
More information on the updated PyTorch user guide with the upcoming 2.0 release improvements and TorchBench benchmark details can be found [here](https://github.com/aws/aws-graviton-getting-started).", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}} {"page_content": "---\nlayout: blog_detail\ntitle: \"Case Study: PathAI Uses PyTorch to Improve Patient Outcomes with AI-powered Pathology\"\nauthor: Logan Kilpatrick - Sr. Technology Advocate, Harshith Padigela - ML Engineer, Syed Ashar Javed - ML Technical Lead, Robert Egger - Biomedical Data Scientist\nfeatured-img: \"/assets/images/2022-7-15-PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology-1.png\"\n---\n\n\u200b[\u200bPathAI](https://pathai.com) is the leading provider of AI-powered technology tools and services for pathology (the study of disease). Our platform was built to enable substantial improvements to the accuracy of diagnosis and the measurement of therapeutic efficacy for complex diseases, leveraging modern approaches in machine learning like image segmentation, graph neural networks, and multiple instance learning.\n\n

\n \n

", "metadata": {"source": "https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/", "category": "pytorch blogs"}} {"page_content": "Traditional manual pathology is prone to [subjectivity and observer variability](https://www.journal-of-hepatology.eu/article/S0168-8278(20)30399-8/fulltext) that can negatively affect diagnoses and drug development trials. Before we dive into how we use PyTorch to improve our diagnosis workflow, let us first lay out the traditional analog Pathology workflow without machine learning.\n\n## How Traditional Biopharma Works\n\nThere are many avenues that biopharma companies take to discover novel therapeutics or diagnostics. One of those avenues relies heavily on the analysis of pathology slides to answer a variety of questions: how does a particular cellular communication pathway work? Can a specific disease state be linked to the presence or lack of a particular protein? Why did a particular drug in a clinical trial work for some patients but not others? Might there be an association between patient outcomes and a novel biomarker?", "metadata": {"source": "https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/", "category": "pytorch blogs"}}