diff --git "a/blogs_splitted_dataset.jsonl" "b/blogs_splitted_dataset.jsonl" deleted file mode 100644--- "a/blogs_splitted_dataset.jsonl" +++ /dev/null @@ -1,5934 +0,0 @@ -{"text": "---\nlayout: blog_detail\ntitle: \"PyTorch Trace Analysis for the Masses\"\nauthor: Anupam Bhatnagar, Xizhou Feng, Brian Coutinho, Yifan Liu, Sung-Han Lin, Louis Feng, and Yuzhen Huang\n---\n\nWe are excited to announce the public release of Holistic Trace Analysis (HTA), an open source performance analysis and visualization Python library for PyTorch users. HTA takes as input [Kineto traces](https://github.com/pytorch/kineto) collected by the [PyTorch profiler](https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/), which are complex and challenging to interpret, and up-levels the performance information contained in these traces. It was initially developed internally at Meta to understand and debug performance problems for large-scale distributed training jobs on GPUs. The multidisciplinary team has made a number of enhancements to HTA\u2019s features and scaled them to support state-of-the-art ML workloads.", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} -{"text": "ML researchers and systems engineers often struggle to computationally scale up their models because they are not aware of the performance bottlenecks in their workloads. The resources requested for a job (e.g. GPUs, memory) are often misaligned with the resources actually required due to lack of visibility \u201cunder the hood\u201d. To achieve the best performance from the hardware stack, it is imperative to understand the resource utilization and bottlenecks for distributed training workloads.\n\nThe initial HTA implementation was specifically targeted at Deep Learning Based Recommendation Models (DLRM). To make the features in HTA generic and applicable to use cases such as analyzing Vision and NLP models, we decided to refactor the HTA codebase and make the library available to the larger community. This new codebase has implemented several important ideas which lead to significant efficiency and performance improvements.", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} -{"text": "In this blog, we present several features implemented in the open source version of HTA, which can be used as a Python script as well as interactively in a Jupyter notebook. HTA provides the following features:\n\n1. **Breakdown by Dimensions**\n 1. **Temporal**: Breakdown of GPU time in terms of time spent in computation, communication, memory events, and idle time on a single node and across all ranks.\n 1. **Idle Time**: Breakdown of GPU idle time into waiting for the host, waiting for another kernel or attributed to an unknown cause.\n 1. **Kernel**: Find kernels with the longest duration on each rank.\n 1. **Communication Computation Overlap**: Calculate the percentage of time when communication overlaps computation.\n1. **Statistical Analysis**\n 1. **Kernel Duration Distribution**: Distribution of average time taken by longest kernels across different ranks.\n 1. **CUDA Kernel Launch**: Distributions of GPU kernels with very small duration, large duration, and excessive launch time.", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} -{"text": "1. 
**Augmented Counters (Memory bandwidth, Queue length)**: Augmented trace files which provide insights into memory copy bandwidth and number of outstanding operations on each CUDA stream.\n1. **Patterns**\n 1. **Frequent CUDA Kernels**: Find the CUDA kernels most frequently launched by any given PyTorch or user defined operator.\n1. **Trace Comparison**\n 1. **Trace Diff**: A trace comparison tool to identify and visualize the differences between traces.\n\nHTA source code is available to users via [Github](https://github.com/facebookresearch/HolisticTraceAnalysis). Users can request new features or build their own analysis using the core libraries and data structures provided in the codebase in addition to the features mentioned above.\n\n## GPU Training Performance Debugging 101\n\nTo understand the GPU performance in distributed training jobs, we consider how the model operators interact with the GPU devices and how such interactions are reflected in certain measurable metrics.", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} -{"text": "At a high level, we can break down the GPU operations in a model execution into three broad categories, henceforth referred to as kernel types: \n1. **Computation (COMP)** - Compute kernels execute compiled routines for matrix multiplication and similar numeric calculations. They are responsible for all of the number-crunching necessary for model execution. \n1. **Communication (COMM)** - Communication kernels are routines which are responsible for exchanging and synchronizing data between different GPU devices in a distributed training job. The NVIDIA Collective Communication Library (NCCL) is a widely used communication library and all its kernels have the prefix \u201cnccl\u201d. Example NCCL kernels include NCCL_AllGather, NCCL_ReduceScatter, NCCL_AllReduce, etc.", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} -{"text": "1. **Memory (MEM)** - Memory kernels manage the memory allocations/deallocations on the GPU devices and data movement between the memory space on the host and the GPUs. The memory kernels include Memcpy_H2D, Memcpy_D2H, Memcpy_D2D, Memset, etc. Here, H represents the Host and D represents the GPU Device. Thus, H2D, D2H, D2D stands for Host to Device, Device to Host and Device to Device respectively. \n\nBecause a modern GPU device like the NVIDIA A100 GPU is a massively parallel device which is capable of running multiple kernels simultaneously, it is possible to overlap the computation, communication, and memory kernels to reduce the model execution time. One common technique to achieve the overlap is to utilize multiple CUDA streams. A CUDA stream is a sequence of operations that execute on a GPU device in the order in which they are issued by the host code. Different CUDA streams can be interleaved and even run concurrently, thus achieving the effect of kernel overlap.", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} -{"text": "To help understand the above concepts, Figure 1 provides a timeline of the GPU kernels in a sample distributed training job on 8 GPUs for one iteration. In the figure below, each rank represents one GPU and the kernels on each GPU run on 6 CUDA streams. In the right column of the figure, you can see names of the GPU kernels used. In the middle of the figure, you see the overlap between compute and communicate kernels. 
This figure is created using the [plot_timeline example notebook](https://github.com/facebookresearch/HolisticTraceAnalysis/blob/main/examples/plot_timeline.ipynb) available in HTA.\n\n{:width=\"100%\"}\n\n*Figure 1. An example of the execution timeline of GPU Kernels across multiple ranks*\n{: style=\"text-align: center;\"}", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} -{"text": "{: style=\"text-align: center;\"}\n\nThe performance of multiple GPU training jobs is affected by multiple factors. Among these factors, how does a model execution create and orchestrate the GPU kernels plays a critical role. HTA provides insights on how the model execution interacts with the GPU devices and highlights the opportunities for performance improvement.\n\nWith the features we built in HTA, we aim to provide users insights into \u201cwhat is happening under the hood in a distributed GPU training?\u201d We briefly describe these features in the next few paragraphs.\n\n## Features in Holistic Trace Analysis \n\nFor most users, understanding the performance of GPU training jobs is nontrivial. Thus, we built this library to simplify the task of trace analysis and provide the user useful insights by examining the model execution traces. As the first step, we developed features which are important and generic enough so that most users can benefit from this library.", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} -{"text": "**Temporal Breakdown**: We begin by asking whether the GPU is spending time on computation, communication, memory events, or is it idle? To answer this question, the temporal breakdown feature presents a breakdown in terms of these categories. To achieve high training efficiency the code should maximize time used by computation kernels and minimize idle time and non-compute time (time used by communication or memory kernels). This is accomplished by implementing concurrent execution of computation kernels with communication or memory kernels. *Note that, during concurrent execution of computation kernels with communication/memory kernels the time spent by communication/memory kernels is accounted for under compute time.*\n\n{:width=\"100%\"}\n\n*Figure 2: Temporal Breakdown across 8 GPUs*\n{: style=\"text-align: center;\"}", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} -{"text": "{: style=\"text-align: center;\"}\n\n**Kernel Breakdown**: It is natural to ask which kernels are taking the most amount of time. The next feature breaks down the time spent within each kernel type (COMM, COMP, MEM) and sorts them by duration. We present this information for each kernel type and for each rank as a pie chart. See figure 3 below. \n\n{:width=\"100%\"}\n\n*Figure 3: Pie chart of top computation and communication kernels*\n{: style=\"text-align: center;\"}", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} -{"text": "{: style=\"text-align: center;\"}\n\n**Kernel Duration Distribution**: Subsequently, one can also ask - for any given kernel, what is the distribution of the time spent across the ranks? To answer this, HTA generates bar graphs for the average duration of a given kernel across all ranks. Additionally, the error bars in the bar graphs show the minimum and maximum amount of time taken by a given kernel on a given rank. 
Figure 4 below shows a discrepancy between average duration on rank 0 as compared to other ranks. This anomalous behavior on rank 0 guides the user on where to look for possible bugs.\n\n{:width=\"100%\"}\n\n*Figure 4: Average duration of NCCL AllReduce Kernel across 8 ranks*\n{: style=\"text-align: center;\"}", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} -{"text": "{: style=\"text-align: center;\"}\n\n**Communication Computation Overlap**: In distributed training, a significant amount of time is spent in communication and synchronization events among multiple GPU devices. To achieve high GPU efficiency (i.e. TFLOPS/GPU) it is vital to keep the GPU doing actual computation work. In other words, a GPU should not be blocked because of waiting for data from other GPUs. One way to measure the extent to which computation is blocked by data dependencies is to calculate the computation-communication overlap. Higher GPU efficiency is observed if communication events overlap computation events. Lack of communication and computation overlap will lead to the GPU being idle, thus the efficiency would be low. Thus, the communication computation overlap feature calculates the percentage of time communication and computation overlap in a job for each rank and generates a bar graph representation. See figure below. More precisely, we measure the following ratio", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} -{"text": "(time spent in computation while communicating) / (time spent in communication)\n{: style=\"text-align: center;\"}\n\n\n{:width=\"100%\"}\n\n*Figure 5: Communication computation overlap*\n{: style=\"text-align: center;\"}\n\n**Augmented Counters (Queue length, Memory bandwidth)**: To aid in debugging, HTA calculates the memory bandwidth statistics for D2H, H2D and D2D memory copy (memcpy) and memory set (memset) events. Additionally, HTA also computes the number of outstanding CUDA operations on each CUDA stream. We refer to this as queue length. When the queue length on a stream is 1024 or larger new events cannot be scheduled on that stream and the CPU will stall until the GPU events have processed. Additionally, HTA generates a new trace file containing tracks with the memory bandwidth and queue length time series. See Figure 6 below.\n\n{:width=\"100%\"}", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} -{"text": "*Figure 6: Memory Bandwidth and Queue Length*\n{: style=\"text-align: center;\"}\n\nThese primary features give us a peek into the system performance and help answer \u201cwhat is happening in the system?\u201d. As HTA evolves, we hope to address \u201cwhy is X happening?\u201d and also suggest possible solutions to overcome the bottlenecks.\n\n## Installation and Usage\n\n### Installation\n\nFor installing the HTA please refer to the [README](https://github.com/facebookresearch/HolisticTraceAnalysis/blob/main/README.md). In brief, the user is required to clone the [repo](https://github.com/facebookresearch/HolisticTraceAnalysis) and install the necessary Python packages via pip.\n\n### Usage", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} -{"text": "### Usage\n\nThis version of Holistic Trace Analysis is currently in beta and we recommend using HTA in a Jupyter notebook. 
A [demo notebook](https://github.com/facebookresearch/HolisticTraceAnalysis/blob/main/examples/trace_analysis_demo.ipynb) is provided for your convenience. To get started, import the hta package in a Jupyter notebook, create a TraceAnalysis object and off we go in exactly two lines of code.\n\n```python\nfrom hta.trace_analysis import TraceAnalysis\nanalyzer = TraceAnalysis(trace_dir = \u201c/trace/folder/path\u201d)\n```\n\n### Requirements\n\n- All trace files for a training or inference job must be stored in a unique folder.\n- Trace files are in json or gzipped json format.\n\n## FAQ\n\n#### Q. How can I install HTA?\n\nPlease see the [README](https://github.com/facebookresearch/HolisticTraceAnalysis/blob/main/README.md) in the root directory of the repository.\n\n#### Q. Is there any documentation on the features and API in HTA?", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} -{"text": "The documentation and detailed API is available [here](https://hta.readthedocs.io/).\n\n#### Q. Can you implement feature X?\n\nDepending on how widely the feature is needed and the level of effort required to implement it we would consider developing the feature. Please open a [Github Issue](https://github.com/facebookresearch/HolisticTraceAnalysis/issues) and tag it with the feature-request label.\n\n#### Q. Can I modify the code?\n\nPlease do and [send a PR](https://github.com/facebookresearch/HolisticTraceAnalysis/pulls) along the way, if you think it would be useful for others.\n\n#### Q. How can I collect traces in PyTorch?\n\nPlease refer to this tutorial [here](https://pytorch.org/tutorials/intermediate/tensorboard_profiler_tutorial.html#use-profiler-to-record-execution-events).\n\n#### Q. Can HTA be used at production scale?\n\nYes, please see a use case study [here](https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/).", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} -{"text": "---\nlayout: blog_detail\ntitle: 'PyTorch adds new dev tools as it hits production scale'\nauthor: The PyTorch Team\n---\n\n_This is a partial re-post of the original blog post on the Facebook AI Blog. The full post can be [viewed here](https://ai.facebook.com/blog/pytorch-adds-new-dev-tools-as-it-hits-production-scale/)_\n\nSince its release just a few months ago, [PyTorch 1.0](http://pytorch.org/) has been rapidly adopted as a powerful, flexible deep learning platform that enables engineers and researchers to move quickly from research to production. We are highlighting some of the ways the AI engineering and research community is using PyTorch 1.0. We\u2019re also sharing new details about the latest release, PyTorch 1.1, and showcasing some of the new development tools created by the community.", "source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"} -{"text": "Building on the initial launch of PyTorch in 2017, we partnered with the AI community to ship the stable release of PyTorch 1.0 [last December](https://code.fb.com/ai-research/pytorch-developer-ecosystem-expands-1-0-stable-release/). 
Along with enhanced production-oriented capabilities and deep integration with leading cloud platforms, PyTorch 1.0 expands on the open source library\u2019s core features, with the addition of PyTorch JIT (Just in time compilation) that seamlessly transitions between eager mode and graph mode to provide both flexibility and speed.", "source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"} -{"text": "Leading businesses across industries are beginning to use PyTorch to both facilitate their research and then also deploy at large scale for applications such as translation, computer vision, conversational interfaces, pharmaceutical research, factory optimization, and automated driving research. Community adoption of PyTorch has also continued to expand. Stanford, UC Berkeley, Caltech, and other universities are using PyTorch as a fundamental tool for their machine learning (ML) courses; new ecosystem projects have launched to support development on PyTorch; and major cloud platforms have expanded their integration with PyTorch.\n\n## Using PyTorch across industries\n\nMany leading businesses are moving to PyTorch 1.0 to accelerate development and deployment of new AI systems. Here are some examples:\n\n- Airbnb leveraged PyTorch's rich libraries and APIs for conversational AI and deployed a Smart Reply to help the company\u2019s service agents respond more effectively to customers.", "source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"} -{"text": "- [ATOM](https://atomscience.org/) is building a platform to generate and optimize new drug candidates significantly faster and with greater success than conventional processes. Using machine learning frameworks such as PyTorch, ATOM was able to design a variational autoencoder for representing diverse chemical structures and designing new drug candidates.\n- Genentech is utilizing PyTorch\u2019s flexible control structures and dynamic graphs to train deep learning models that will aid in the development of individualized cancer therapy.\n- Microsoft is using PyTorch across its organization to develop ML models at scale and deploy them via the ONNX Runtime. Using PyTorch, Microsoft Cognition has built distributed language models that scale to billions of words and are now in production in offerings such as Cognitive Services.", "source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"} -{"text": "- Toyota Research Institute (TRI) is developing a two-pronged approach toward automated driving with Toyota Guardian and Toyota Chauffeur technologies. The Machine Learning Team at TRI is creating new deep learning algorithms to leverage Toyota's 10 million sales per year data advantage. 
The flexibility of PyTorch has vastly accelerated their pace of exploration and its new production features will enable faster deployment towards their safety critical applications.\n\nFollowing the release of PyTorch 1.0 in December 2018, we\u2019re now announcing the availability of v1.1, which improves performance, adds new model understanding and visualization tools to improve usability, and provides new APIs.\n\nKey features of PyTorch v1.1 include:", "source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"} -{"text": "Key features of PyTorch v1.1 include:\n\n- [TensorBoard](https://www.tensorflow.org/tensorboard): First-class and native support for visualization and model debugging with TensorBoard, a web application suite for inspecting and understanding training runs and graphs. PyTorch now natively supports TensorBoard with a simple \u201cfrom torch.utils.tensorboard import SummaryWriter\u201d command.\n- JIT compiler: Improvements to just-in-time (JIT) compilation. These include various bug fixes as well as expanded capabilities in TorchScript, such as support for dictionaries, user classes, and attributes.\n- New APIs: Support for Boolean tensors and better support for custom recurrent neural networks.", "source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"} -{"text": "- Distributed Training: Improved performance for common models such as CNNs, added support for multi device modules including the ability to split models across GPUs while still using Distributed Data Parallel (DDP) and support for modules where not all parameters are used in every iteration (e.g. control flow, like adaptive softmax, etc). See the latest tutorials [here](https://pytorch.org/tutorials/intermediate/model_parallel_tutorial.html).\n\nWe\u2019ve also continued to partner with the community to foster projects and tools aimed at supporting ML engineers for needs ranging from improved model understanding to auto-tuning using AutoML methods. With the release of Ax and BoTorch (below), we will be sharing some of our core algorithms, including meta-learning for efficiently optimizing hyperparameters from based on historical tasks. We are excited to see this work open-sourced for the community to build on.", "source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"} -{"text": "This ecosystem includes open source projects and tools that have been deployed at production scale, as well as products and services from our partnership with industry leaders who share our vision of an open and collaborative AI community. Here are a few of the latest tools:\n\n- [BoTorch](https://ai.facebook.com/blog/open-sourcing-ax-and-botorch-new-ai-tools-for-adaptive-experimentation/): BoTorch is a research framework built on top of PyTorch to provide Bayesian optimization, a sample-efficient technique for sequential optimization of costly-to-evaluate black-box functions.\n- [Ax](https://ai.facebook.com/blog/open-sourcing-ax-and-botorch-new-ai-tools-for-adaptive-experimentation/): Ax is an ML platform for managing adaptive experiments. 
It enables researchers and engineers to systematically explore large configuration spaces in order to optimize machine learning models, infrastructure, and products.", "source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"} -{"text": "- [PyTorch-BigGraph](https://ai.facebook.com/blog/open-sourcing-pytorch-biggraph-for-faster-embeddings-of-extremely-large-graphs/): PBG is a distributed system for creating embeddings of very large graphs with billions of entities and trillions of edges. It includes support for sharding and negative sampling and it offers sample use cases based on Wikidata embeddings.\n- [Google AI Platform Notebooks](https://cloud.google.com/ai-platform-notebooks/): AI Platform Notebooks is a new, hosted JupyterLab service from Google Cloud Platform. Data scientists can quickly create virtual machines running JupyterLab with the latest version of PyTorch preinstalled. It is also tightly integrated with GCP services such as BigQuery, Cloud Dataproc, Cloud Dataflow, and AI Factory, making it easy to execute the full ML cycle without ever leaving JupyterLab.\n\nWe\u2019re also excited to see many interesting new projects from the broader PyTorch community. Highlights include:", "source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"} -{"text": "- [BigGAN-PyTorch](https://github.com/ajbrock/BigGAN-PyTorch):This is a full PyTorch reimplementation that uses gradient accumulation to provide the benefits of big batches on as few as four GPUs.\n- [GeomLoss](http://www.kernel-operations.io/geomloss/index.html): A Python API that defines PyTorch layers for geometric loss functions between sampled measures, images, and volumes. It includes MMD, Wasserstein, Sinkhorn, and more.\n\n
\nAccelerated GPU training and evaluation speedups over CPU-only (times faster)\n
\n\n\n## Getting Started\n\nTo get started, just install the latest [Preview (Nightly) build](https://pytorch.org/get-started/locally/) on your Apple silicon Mac running macOS 12.3 or later with a native version (arm64) of Python.\n \nYou can also learn more about Metal and MPS on [Apple\u2019s Metal page](https://developer.apple.com/metal/).", "source": "https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/", "category": "pytorch blogs"} -{"text": "\\* _Testing conducted by Apple in April 2022 using production Mac Studio systems with Apple M1 Ultra, 20-core CPU, 64-core GPU 128GB of RAM, and 2TB SSD. Tested with macOS Monterey 12.3, prerelease PyTorch 1.12, ResNet50 (batch size=128), HuggingFace BERT (batch size=64), and VGG16 (batch size=64). Performance tests are conducted using specific computer systems and reflect the approximate performance of Mac Studio._", "source": "https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/", "category": "pytorch blogs"} -{"text": "---\nlayout: blog_detail\ntitle: \"Accelerating Hugging Face and TIMM models with PyTorch 2.0\"\nauthor: Mark Saroufim\nfeatured-img: \"assets/images/pytorch-2.0-feature-img.png\"\n---\n\n`torch.compile()` makes it easy to experiment with different compiler backends to make PyTorch code faster with a single line decorator `torch.compile()`. It works either directly over an nn.Module as a drop-in replacement for `torch.jit.script()` but without requiring you to make any source code changes. We expect this one line code change to provide you with between 30%-2x training time speedups on the vast majority of models that you\u2019re already running.\n\n```python\n\nopt_module = torch.compile(module)\n\n```\n\ntorch.compile supports arbitrary PyTorch code, control flow, mutation and comes with experimental support for dynamic shapes. We\u2019re so excited about this development that we call it PyTorch 2.0.", "source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"} -{"text": "What makes this announcement different for us is we\u2019ve already benchmarked some of the most popular open source PyTorch models and gotten substantial speedups ranging from 30% to 2x [https://github.com/pytorch/torchdynamo/issues/681](https://github.com/pytorch/torchdynamo/issues/681).\n\nThere are no tricks here, we\u2019ve pip installed popular libraries like [https://github.com/huggingface/transformers](https://github.com/huggingface/transformers), [https://github.com/huggingface/accelerate](https://github.com/huggingface/accelerate) and [https://github.com/rwightman/pytorch-image-models](https://github.com/rwightman/pytorch-image-models) and then ran torch.compile() on them and that\u2019s it.\n\nIt\u2019s rare to get both performance and convenience, but this is why the core team finds PyTorch 2.0 so exciting. The Hugging Face team is also excited, in their words:\n\nRoss Wightman the primary maintainer of TIMM: \u201cPT 2.0 works out of the box with majority of timm models for inference and train workloads and no code changes\u201d", "source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"} -{"text": "Sylvain Gugger the primary maintainer of transformers and accelerate: \"With just one line of code to add, PyTorch 2.0 gives a speedup between 1.5x and 2.x in training Transformers models. 
This is the most exciting thing since mixed precision training was introduced!\"\n\nThis tutorial will show you exactly how to replicate those speedups so you can be as excited as to PyTorch 2.0 as we are.\n\n## Requirements and Setup\n\nFor GPU (newer generation GPUs will see drastically better performance)\n\n```\npip3 install numpy --pre torch --force-reinstall --extra-index-url https://download.pytorch.org/whl/nightly/cu117\n\n```\n\nFor CPU\n\n```\npip3 install --pre torch --extra-index-url https://download.pytorch.org/whl/nightly/cpu\n\n```\n\nOptional: Verify Installation\n\n```\ngit clone https://github.com/pytorch/pytorch\ncd tools/dynamo\npython verify_dynamo.py\n```\n\nOptional: Docker installation\n\nWe also provide all the required dependencies in the PyTorch nightly\nbinaries which you can download with\n\n```", "source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"} -{"text": "binaries which you can download with\n\n```\ndocker pull ghcr.io/pytorch/pytorch-nightly\n\n```\n\nAnd for ad hoc experiments just make sure that your container has access\nto all your GPUs\n\n```\ndocker run --gpus all -it ghcr.io/pytorch/pytorch-nightly:latest /bin/bash\n\n```\n\n## Getting started\n\n### a toy exmaple\n\nLet\u2019s start with a simple example and make things more complicated step\nby step. Please note that you\u2019re likely to see more significant speedups the newer your GPU is.\n\n```python\nimport torch\ndef fn(x, y):\n a = torch.sin(x).cuda()\n b = torch.sin(y).cuda()\n return a + b\nnew_fn = torch.compile(fn, backend=\"inductor\")\ninput_tensor = torch.randn(10000).to(device=\"cuda:0\")\na = new_fn(input_tensor, input_tensor)\n```\n\nThis example won\u2019t actually run faster but it\u2019s educational.", "source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"} -{"text": "example that features `torch.cos()` and `torch.sin()` which are examples of pointwise ops as in they operate element by element on a vector. A more famous pointwise op you might actually want to use would be something like `torch.relu()`.\n\nPointwise ops in eager mode are suboptimal because each one would need to read a tensor from memory, make some changes and then write back those changes.\n\nThe single most important optimization that PyTorch 2.0 does for you is fusion.\n\nSo back to our example we can turn 2 reads and 2 writes into 1 read and 1 write which is crucial especially for newer GPUs where the bottleneck is memory bandwidth (how quickly you can send data to a GPU) instead of compute (how quickly your GPU can crunch floating point operations)\n\nThe second most important optimization that PyTorch 2.0 does for you is CUDA graphs\n\nCUDA graphs help eliminate the overhead from launching individual kernels from a python program.", "source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"} -{"text": "torch.compile() supports many different backends but one that we\u2019re particularly excited about is Inductor which generates Triton kernels [https://github.com/openai/triton](https://github.com/openai/triton) which are written in Python yet outperform the vast majority of handwritten CUDA kernels. 
Suppose our example above was called trig.py we can actually inspect the code generated triton kernels by running.\n\n```\nTORCH_COMPILE_DEBUG=1 python trig.py\n```\n\n```python\n\n@pointwise(size_hints=[16384], filename=__file__, meta={'signature': {0: '*fp32', 1: '*fp32', 2: 'i32'}, 'device': 0, 'constants': {}, 'configs': [instance_descriptor(divisible_by_16=(0, 1, 2), equal_to_1=())]})\n@triton.jit\ndef kernel(in_ptr0, out_ptr0, xnumel, XBLOCK : tl.constexpr):\n xnumel = 10000\n xoffset = tl.program_id(0) * XBLOCK\n xindex = xoffset + tl.reshape(tl.arange(0, XBLOCK), [XBLOCK])\n xmask = xindex < xnumel\n x0 = xindex\n tmp0 = tl.load(in_ptr0 + (x0), xmask)\n tmp1 = tl.sin(tmp0)\n tmp2 = tl.sin(tmp1)", "source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"} -{"text": "tmp1 = tl.sin(tmp0)\n tmp2 = tl.sin(tmp1)\n tl.store(out_ptr0 + (x0 + tl.zeros([XBLOCK], tl.int32)), tmp2, xmask)\n\n```\n\nAnd you can verify that fusing the two `sins` did actually occur because the two `sin` operations occur within a single Triton kernel and the temporary variables are held in registers with very fast access.\n\n### a real model\n\nAs a next step let\u2019s try a real model like resnet50 from the PyTorch hub.\n\n```python\nimport torch\nmodel = torch.hub.load('pytorch/vision:v0.10.0', 'resnet18', pretrained=True)\nopt_model = torch.compile(model, backend=\"inductor\")\nmodel(torch.randn(1,3,64,64))\n\n```\n\nIf you actually run you may be surprised that the first run is slow and that\u2019s because the model is being compiled. Subsequent runs will be faster so it's common practice to warm up your model before you start benchmarking it.", "source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"} -{"text": "You may have noticed how we also passed in the name of a compiler explicitly here with \u201cinductor\u201d but it\u2019s not the only available backend, you can run in a REPL `torch._dynamo.list_backends()` to see the full list of available backends. 
For fun you should try out `aot_cudagraphs` or `nvfuser`.\n\n### Hugging Face models\n\nLet\u2019s do something a bit more interesting now, our community frequently\nuses pretrained models from transformers [https://github.com/huggingface/transformers](https://github.com/huggingface/transformers) or TIMM [https://github.com/rwightman/pytorch-image-models](https://github.com/rwightman/pytorch-image-models) and one of our design goals for PyTorch 2.0 was that any new compiler stack needs to work out of the box with the vast majority of models people actually run.\n\nSo we\u2019re going to directly download a pretrained model from the Hugging Face hub and optimize it\n\n```python\n\nimport torch\nfrom transformers import BertTokenizer, BertModel", "source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"} -{"text": "from transformers import BertTokenizer, BertModel\n# Copy pasted from here https://huggingface.co/bert-base-uncased\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\nmodel = BertModel.from_pretrained(\"bert-base-uncased\").to(device=\"cuda:0\")\nmodel = torch.compile(model) # This is the only line of code that we changed\ntext = \"Replace me by any text you'd like.\"\nencoded_input = tokenizer(text, return_tensors='pt').to(device=\"cuda:0\")\noutput = model(**encoded_input)\n\n```\n\nIf you remove the `to(device=\"cuda:0\")` from the model and `encoded_input` then PyTorch 2.0 will generate C++ kernels that will be optimized for running on your CPU. You can inspect both Triton or C++ kernels for BERT, they\u2019re obviously more complex than the trigonometry example we had above but you can similarly skim it and understand if you understand PyTorch.\n\nThe same code also works just fine if used with [https://github.com/huggingface/accelerate](https://github.com/huggingface/accelerate) and DDP", "source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"} -{"text": "Similarly let\u2019s try out a TIMM example\n\n```python\nimport timm\nimport torch\nmodel = timm.create_model('resnext101_32x8d', pretrained=True, num_classes=2)\nopt_model = torch.compile(model, backend=\"inductor\")\nopt_model(torch.randn(64,3,7,7))\n```\n\nOur goal with PyTorch was to build a breadth-first compiler that would speed up the vast majority of actual models people run in open source. The Hugging Face Hub ended up being an extremely valuable benchmarking tool for us, ensuring that any optimization we work on actually helps accelerate models people want to run.\n\nSo please try out PyTorch 2.0, enjoy the free perf and if you\u2019re not seeing it then please open an issue and we will make sure your model is supported [https://github.com/pytorch/torchdynamo/issues](https://github.com/pytorch/torchdynamo/issues)\n\nAfter all, we can\u2019t claim we\u2019re created a breadth-first unless YOUR models actually run faster.", "source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"} -{"text": "---\nlayout: blog_detail\ntitle: 'PyTorch 1.6 now includes Stochastic Weight Averaging'\nauthor: Pavel Izmailov, Andrew Gordon Wilson and Vincent Queneneville-Belair\n---\n\nDo you use stochastic gradient descent (SGD) or Adam? 
Regardless of the procedure you use to train your neural network, you can likely achieve significantly better generalization at virtually no additional cost with a simple new technique now natively supported in PyTorch 1.6, Stochastic Weight Averaging (SWA) [1]. Even if you have already trained your model, it\u2019s easy to realize the benefits of SWA by running SWA for a small number of epochs starting with a pre-trained model. [Again](https://twitter.com/MilesCranmer/status/1282140440892932096) and [again](https://twitter.com/leopd/status/1285969855062192129), researchers are discovering that SWA improves the performance of well-tuned models in a wide array of practical applications with little cost or effort!\n\n\nSWA has a wide range of applications and features:", "source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"} -{"text": "SWA has a wide range of applications and features:\n* SWA significantly improves performance compared to standard training techniques in computer vision (e.g., VGG, ResNets, Wide ResNets and DenseNets on ImageNet and CIFAR benchmarks [1, 2]).\n* SWA provides state-of-the-art performance on key benchmarks in semi-supervised learning and domain adaptation [2].\n* SWA was shown to improve performance in language modeling (e.g., AWD-LSTM on WikiText-2 [4]) and policy-gradient methods in deep reinforcement learning [3].\n* SWAG, an extension of SWA, can approximate Bayesian model averaging in Bayesian deep learning and achieves state-of-the-art uncertainty calibration results in various settings. Moreover, its recent generalization MultiSWAG provides significant additional performance gains and mitigates double-descent [4, 10]. Another approach, Subspace Inference, approximates the Bayesian posterior in a small subspace of the parameter space around the SWA solution [5].", "source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"} -{"text": "* SWA for low precision training, SWALP, can match the performance of full-precision SGD training, even with all numbers quantized down to 8 bits, including gradient accumulators [6].\n* SWA in parallel, SWAP, was shown to greatly speed up the training of neural networks by using large batch sizes and, in particular, set a record by training a neural network to 94% accuracy on CIFAR-10 in 27 seconds [11].\n\n\n\n \n
TPU Accelerator - Num Devices | v4-64
GPT2 Parameter Count | 16B
Layers Wrapped with FSDP | GPT2Block
TFLOPs / Chip | 275
PFLOPs / Step | 50
Hardware Utilization | 39%
", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} -{"text": "
\n\t\tFigure 1: ResNet-50 takes an image of a bird and transforms that into the abstract concept \"bird\". Source: Bird image from ImageNet.\n
", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}
-{"text": "
\n\t\tFigure 2: ResNet-50 transforms the input image in multiple steps. Conceptually, we may access the intermediate transformation of the image after each one of these steps. Source: Bird image from ImageNet.\n
", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} -{"text": "
\n\t\tFigure 3: Graphical representation of the result of symbolically tracing our example of a simple forward method.\n
", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}
-{"text": "
\n\t\tFigure 4: Graphical representation of a residual skip connection. The middle node is like the main branch of a residual block, and the final node represents the sum of the input and output of the main branch.\n
\n\t\tFigure 5: The individual operations within `submodule` (left - within red box) may be consolidated into one node (right - node #2) if we consider the `submodule` as a \"leaf\" node.\n
\n https://github.com/microsoft/torchgeo\n
", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} -{"text": "\n\nFor decades, Earth observation satellites, aircraft, and more recently UAV platforms have been collecting increasing amounts of imagery of the Earth\u2019s surface. With information about seasonal and long-term trends, remotely sensed imagery can be invaluable for solving some of the greatest challenges to humanity, including climate change adaptation, natural disaster monitoring, water resource management, and food security for a growing global population. From a computer vision perspective, this includes applications like land cover mapping (semantic segmentation), deforestation and flood monitoring (change detection), glacial flow (pixel tracking), hurricane tracking and intensity estimation (regression), and building and road detection (object detection, instance segmentation). By leveraging recent advancements in deep learning architectures, cheaper and more powerful GPUs, and petabytes of freely available satellite imagery datasets, we can come closer to solving these important problems.", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} -{"text": "\n \n
\nNational Oceanic and Atmospheric Administration satellite image of Hurricane Katrina, taken on August 28, 2005 (source). Geospatial machine learning libraries like TorchGeo can be used to detect, track, and predict future trajectories of hurricanes and other natural disasters.\n
\n\n# The challenges", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} -{"text": "In traditional computer vision datasets, such as ImageNet, the image files themselves tend to be rather simple and easy to work with. Most images have 3 spectral bands (RGB), are stored in common file formats like PNG or JPEG, and can be easily loaded with popular software libraries like [PIL](https://pillow.readthedocs.io/en/stable/) or [OpenCV](https://opencv.org/). Each image in these datasets is usually small enough to pass directly into a neural network. Furthermore, most of these datasets contain a finite number of well-curated images that are assumed to be independent and identically distributed, making train-val-test splits straightforward. As a result of this relative homogeneity, the same pre-trained models (e.g., CNNs pretrained on ImageNet) have shown to be effective across a wide range of vision tasks using transfer learning methods. Existing libraries, such as [torchvision](https://github.com/pytorch/vision), handle these simple cases well, and have been used to make large advances in vision tasks over the past decade.", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} -{"text": "Remote sensing imagery is not so uniform. Instead of simple RGB images, satellites tend to capture images that are multispectral ([Landsat 8](https://www.usgs.gov/landsat-missions) has 11 spectral bands) or even hyperspectral ([Hyperion](https://www.usgs.gov/centers/eros/science/usgs-eros-archive-earth-observing-one-eo-1-hyperion) has 242 spectral bands). These images capture information at a wider range of wavelengths (400 nm\u201315 \u00b5m), far outside of the visible spectrum. Different satellites also have very different spatial resolutions\u2014[GOES](https://www.goes.noaa.gov/) has a resolution of 4 km/px, [Maxar](https://www.maxar.com/products/satellite-imagery) imagery is 30 cm/px, and drone imagery resolution can be as high as 7 mm/px. These datasets almost always have a temporal component, with satellite revisists that are daily, weekly, or biweekly. Images often have overlap with other images in the dataset, and need to be stitched together based on geographic metadata. These images tend to be very large (e.g., 10K x 10K pixels), so it isn't possible to pass an entire image through a neural network. This data is distributed in hundreds of different raster and vector file formats like GeoTIFF and ESRI Shapefile, requiring specialty libraries like [GDAL](https://gdal.org/) to load.", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} -{"text": "\n \n
\nFrom left to right: Mercator, Albers Equal Area, and Interrupted Goode Homolosine projections (source). Geospatial data is associated with one of many different types of reference systems that project the 3D Earth onto a 2D representation. Combining data from different sources often involves re-projecting to a common reference system in order to ensure that all layers are aligned.\n
", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} -{"text": "\n\nAlthough each image is 2D, the Earth itself is 3D. In order to stitch together images, they first need to be projected onto a 2D representation of the Earth, called a coordinate reference system (CRS). Most people are familiar with equal angle representations like Mercator that distort the size of regions (Greenland looks larger than Africa even though Africa is 15x larger), but there are many other CRSs that are commonly used. Each dataset may use a different CRS, and each image within a single dataset may also be in a unique CRS. In order to use data from multiple layers, they must all share a common CRS, otherwise the data won't be properly aligned. For those who aren't familiar with remote sensing data, this can be a daunting task.\n\n\n \n
", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} -{"text": "
\n\n\nEven if you correctly georeference images during indexing, if you don't project them to a common CRS, you'll end up with rotated images with nodata values around them, and the images won't be pixel-aligned.\n
\n\n# The solution\n\nAt the moment, it can be quite challenging to work with both deep learning models and geospatial data without having expertise in both of these very different fields. To address these challenges, we've built TorchGeo, a PyTorch domain library for working with geospatial data. TorchGeo is designed to make it simple:\n\n1. for machine learning experts to work with geospatial data, and\n2. for remote sensing experts to explore machine learning solutions.", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} -{"text": "TorchGeo is not just a research project, but a production-quality library that uses continuous integration to test every commit with a range of Python versions on a range of platforms (Linux, macOS, Windows). It can be easily installed with any of your favorite package managers, including pip, conda, and [spack](https://spack.io):\n\n```\n$ pip install torchgeo\n```\n\nTorchGeo is designed to have the same API as other PyTorch domain libraries like torchvision, torchtext, and torchaudio. If you already use torchvision in your workflow for computer vision datasets, you can switch to TorchGeo by changing only a few lines of code. All TorchGeo datasets and samplers are compatible with the PyTorch ``DataLoader`` class, meaning that you can take advantage of wrapper libraries like [PyTorch Lightning](https://www.pytorchlightning.ai/) for distributed training. In the following sections, we'll explore possible use cases for TorchGeo to show how simple it is to use.\n\n# Geospatial datasets and samplers\n\n", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} -{"text": "
\nExample application in which we combine A) a scene from Landsat 8 and B) Cropland Data Layer labels, even though these files are in different EPSG projections. We want to sample patches C) and D) from these datasets using a geospatial bounding box as an index.\n
\n\nMany remote sensing applications involve working with [*geospatial datasets*](https://torchgeo.readthedocs.io/en/latest/api/datasets.html#geospatial-datasets) \u2014datasets with geographic metadata. In TorchGeo, we define a ``GeoDataset`` class to represent these kinds of datasets. Instead of being indexed by an integer, each ``GeoDataset`` is indexed by a spatiotemporal bounding box, meaning that two or more datasets covering a different geographic extent can be intelligently combined.", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} -{"text": "In this example, we show how easy it is to work with geospatial data and to sample small image patches from a combination of Landsat and Cropland Data Layer (CDL) data using TorchGeo. First, we assume that the user has Landsat 7 and 8 imagery downloaded. Since Landsat 8 has more spectral bands than Landsat 7, we'll only use the bands that both satellites have in common. We'll create a single dataset including all images from both Landsat 7 and 8 data by taking the union between these two datasets.\n\n```c++\nfrom torch.utils.data import DataLoader\nfrom torchgeo.datasets import CDL, Landsat7, Landsat8, stack_samples\nfrom torchgeo.samplers import RandomGeoSampler\n\nlandsat7 = Landsat7(root=\"...\")\nlandsat8 = Landsat8(root=\"...\", bands=Landsat8.all_bands[1:-2])\nlandsat = landsat7 | landsat8\n```", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} -{"text": "landsat = landsat7 | landsat8\n```\n\nNext, we take the intersection between this dataset and the CDL dataset. We want to take the intersection instead of the union to ensure that we only sample from regions where we have both Landsat and CDL data. Note that we can automatically download and checksum CDL data. Also note that each of these datasets may contain files in different CRSs or resolutions, but TorchGeo automatically ensures that a matching CRS and resolution is used.\n\n```c++\ncdl = CDL(root=\"...\", download=True, checksum=True)\ndataset = landsat & cdl\n```", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} -{"text": "dataset = landsat & cdl\n```\n\nThis dataset can now be used with a PyTorch data loader. Unlike benchmark datasets, geospatial datasets often include very large images. For example, the CDL dataset consists of a single image covering the entire contiguous United States. In order to sample from these datasets using geospatial coordinates, TorchGeo defines a number of [*samplers*](https://torchgeo.readthedocs.io/en/latest/api/samplers.html). In this example, we'll use a random sampler that returns 256 x 256 pixel images and 10,000 samples per epoch. 
We'll also use a custom collation function to combine each sample dictionary into a mini-batch of samples.\n\n```c++\nsampler = RandomGeoSampler(dataset, size=256, length=10000)\ndataloader = DataLoader(dataset, batch_size=128, sampler=sampler, collate_fn=stack_samples)\n```\n\nThis data loader can now be used in your normal training/evaluation pipeline.\n\n```c++\nfor batch in dataloader:\n image = batch[\"image\"]\n mask = batch[\"mask\"]", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} -{"text": "mask = batch[\"mask\"]\n\n # train a model, or make predictions using a pre-trained model\n```\n\nMany applications involve intelligently composing datasets based on geospatial metadata like this. For example, users may want to:\n\n- Combine datasets for multiple image sources and treat them as equivalent (e.g., Landsat 7 and 8)\n- Combine datasets for disparate geospatial locations (e.g., Chesapeake NY and PA)\n\nThese combinations require that all queries are present in *at least one* dataset, and can be created using a ``UnionDataset``. Similarly, users may want to:\n\n- Combine image and target labels and sample from both simultaneously (e.g., Landsat and CDL)\n- Combine datasets for multiple image sources for multimodal learning or data fusion (e.g., Landsat and Sentinel)", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} -{"text": "These combinations require that all queries are present in *both* datasets, and can be created using an ``IntersectionDataset``. TorchGeo automatically composes these datasets for you when you use the intersection (``&``) and union \\(``|``\\) operators.\n\n# Multispectral and geospatial transforms\n\nIn deep learning, it's common to augment and transform the data so that models are robust to variations in the input space. Geospatial data can have variations such as seasonal changes and warping effects, as well as image processing and capture issues like cloud cover and atmospheric distortion. TorchGeo utilizes augmentations and transforms from the [Kornia](https://kornia.github.io/) library, which supports GPU acceleration and supports multispectral imagery with more than 3 channels.", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} -{"text": "Traditional geospatial analyses compute and visualize spectral indices which are combinations of multispectral bands. Spectral indices are designed to highlight areas of interest in a multispectral image relevant to some application, such as vegetation health, areas of man-made change or increasing urbanization, or snow cover. TorchGeo supports numerous [*transforms*](https://torchgeo.readthedocs.io/en/latest/api/transforms.html), which can compute common spectral indices and append them as additional bands to a multispectral image tensor.\n\nBelow, we show a simple example where we compute the Normalized Difference Vegetation Index (NDVI) on a Sentinel-2 image. NDVI measures the presence of vegetation and vegetation health and is computed as the normalized difference between the red and near-infrared (NIR) spectral bands. 
Spectral index transforms operate on sample dictionaries returned from TorchGeo datasets and append the resulting spectral index to the image channel dimension.", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} -{"text": "First, we instantiate a Sentinel-2 dataset and load a sample image. Then, we plot the true color (RGB) representation of this data to see the region we are looking at.\n\n```c++\nimport matplotlib.pyplot as plt\nfrom torchgeo.datasets import Sentinel2\nfrom torchgeo.transforms import AppendNDVI\n\ndataset = Sentinel2(root=\"...\")\nsample = dataset[...]\nfig = dataset.plot(sample)\nplt.show()\n```\n\nNext, we instantiate and compute an NDVI transform, appending this new channel to the end of the image. Sentinel-2 imagery uses index 0 for its red band and index 3 for its NIR band. In order to visualize the data, we also normalize the image. NDVI values can range from -1 to 1, but we want to use the range 0 to 1 for plotting.\n\n```c++\ntransform = AppendNDVI(index_red=0, index_nir=3)\nsample = transform(sample)\nsample[\"image\"][-1] = (sample[\"image\"][-1] + 1) / 2\nplt.imshow(sample[\"image\"][-1], cmap=\"RdYlGn_r\")\nplt.show()\n```\n\n\n \n
\nTrue color (left) and NDVI (right) of the Texas Hill Region, taken on November 16, 2018 by the Sentinel-2 satellite. In the NDVI image, red indicates water bodies, yellow indicates barren soil, light green indicates unhealthy vegetation, and dark green indicates healthy vegetation.\n
\n\n# Benchmark datasets\n\nOne of the driving factors behind progress in computer vision is the existence of standardized benchmark datasets like ImageNet and MNIST. Using these datasets, researchers can directly compare the performance of different models and training procedures to determine which perform the best. In the remote sensing domain, there are many such datasets, but due to the aforementioned difficulties of working with this data and the lack of existing libraries for loading these datasets, many researchers opt to use their own custom datasets.", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} -{"text": "One of the goals of TorchGeo is to provide easy-to-use data loaders for these existing datasets. TorchGeo includes a number of [*benchmark datasets*](https://torchgeo.readthedocs.io/en/latest/api/datasets.html#non-geospatial-datasets) \u2014datasets that include both input images and target labels. This includes datasets for tasks like image classification, regression, semantic segmentation, object detection, instance segmentation, change detection, and more.\n\nIf you've used torchvision before, these types of datasets should be familiar. In this example, we'll create a dataset for the Northwestern Polytechnical University (NWPU) very-high-resolution ten-class (VHR-10) geospatial object detection dataset. This dataset can be automatically downloaded, checksummed, and extracted, just like with torchvision.\n\n```c++\nfrom torch.utils.data import DataLoader\nfrom torchgeo.datasets import VHR10\n\ndataset = VHR10(root=\"...\", download=True, checksum=True)", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} -{"text": "dataloader = DataLoader(dataset, batch_size=128, shuffle=True, num_workers=4)\n\nfor batch in dataloader:\n image = batch[\"image\"]\n label = batch[\"label\"]\n\n # train a model, or make predictions using a pre-trained model\n```\n\nAll TorchGeo datasets are compatible with PyTorch data loaders, making them easy to integrate into existing training workflows. The only difference between a benchmark dataset in TorchGeo and a similar dataset in torchvision is that each dataset returns a dictionary with keys for each PyTorch ``Tensor``.\n\n\n \n
\nExample predictions from a Mask R-CNN model trained on the NWPU VHR-10 dataset. The model predicts sharp bounding boxes and masks for all objects with high confidence scores.\n
\n\n# Reproducibility with PyTorch Lightning", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} -{"text": "\n\n# Reproducibility with PyTorch Lightning\n\nAnother key goal of TorchGeo is reproducibility. For many of these benchmark datasets, there is no predefined train-val-test split, or the predefined split has issues with class imbalance or geographic distribution. As a result, the performance metrics reported in the literature either can't be reproduced, or aren't indicative of how well a pre-trained model would work in a different geographic location.", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} -{"text": "In order to facilitate direct comparisons between results published in the literature and further reduce the boilerplate code needed to run experiments with datasets in TorchGeo, we have created PyTorch Lightning [*datamodules*](https://torchgeo.readthedocs.io/en/latest/api/datamodules.html) with well-defined train-val-test splits and [*trainers*](https://torchgeo.readthedocs.io/en/latest/api/trainers.html) for various tasks like classification, regression, and semantic segmentation. These datamodules show how to incorporate augmentations from the kornia library, include preprocessing transforms (with pre-calculated channel statistics), and let users easily experiment with hyperparameters related to the data itself (as opposed to the modeling process). Training a semantic segmentation model on the Inria Aerial Image Labeling dataset is as easy as a few imports and four lines of code.\n\n```c++\nfrom pytorch_lightning import Trainer\nfrom torchgeo.datamodules import InriaAerialImageLabelingDataModule", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} -{"text": "from torchgeo.trainers import SemanticSegmentationTask\n\ndatamodule = InriaAerialImageLabelingDataModule(root_dir=\"...\", batch_size=64, num_workers=6)\ntask = SemanticSegmentationTask(segmentation_model=\"unet\", encoder_weights=\"imagenet\", learning_rate=0.1)\ntrainer = Trainer(gpus=1, default_root_dir=\"...\")\n\ntrainer.fit(model=task, datamodule=datamodule)\n```\n\n\n \n
\nBuilding segmentations produced by a U-Net model trained on the Inria Aerial Image Labeling dataset. Reproducing these results is as simple as a few imports and four lines of code, making comparison of different models and training techniques simple and easy.\n
", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} -{"text": "\n\nIn our [preprint](https://arxiv.org/abs/2111.08872) we show a set of results that use the aforementioned datamodules and trainers to benchmark simple modeling approaches for several of the datasets in TorchGeo. For example, we find that a simple ResNet-50 can achieve state-of-the-art performance on the [So2Sat](https://ieeexplore.ieee.org/document/9014553) dataset. These types of baseline results are important for evaluating the contribution of different modeling choices when tackling problems with remotely sensed data.\n\n# Future work and contributing\n\nThere is still a lot of remaining work to be done in order to make TorchGeo as easy to use as possible, especially for users without prior deep learning experience. One of the ways in which we plan to achieve this is by expanding our tutorials to include subjects like \"writing a custom dataset\" and \"transfer learning\", or tasks like \"land cover mapping\" and \"object detection\".", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} -{"text": "Another important project we are working on is pre-training models. Most remote sensing researchers work with very small labeled datasets, and could benefit from pre-trained models and transfer learning approaches. TorchGeo is the first deep learning library to provide models pre-trained on multispectral imagery. Our goal is to provide models for different image modalities (optical, SAR, multispectral) and specific platforms (Landsat, Sentinel, MODIS) as well as benchmark results showing their performance with different amounts of training data. Self-supervised learning is a promising method for training such models. Satellite imagery datasets often contain petabytes of imagery, but accurately labeled datasets are much harder to come by. Self-supervised learning methods will allow us to train directly on the raw imagery without needing large labeled datasets.", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} -{"text": "Aside from these larger projects, we're always looking to add new datasets, data augmentation transforms, and sampling strategies. If you're Python savvy and interested in contributing to TorchGeo, we would love to see contributions! TorchGeo is open source under an MIT license, so you can use it in almost any project.\n\nExternal links:\n\n- **Homepage**: [https://github.com/microsoft/torchgeo](https://github.com/microsoft/torchgeo)\n- **Documentation**: [https://torchgeo.readthedocs.io/](https://torchgeo.readthedocs.io/)\n- **PyPI**: [https://pypi.org/project/torchgeo/](https://pypi.org/project/torchgeo/)\n- **Paper**: [https://arxiv.org/abs/2111.08872](https://arxiv.org/abs/2111.08872)\n\nIf you like TorchGeo, give us a star on GitHub! And if you use TorchGeo in your work, please cite our paper.\n\n# Acknowledgments", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} -{"text": "# Acknowledgments\n\n*We would like to thank all TorchGeo contributors for their efforts in creating the library, the Microsoft AI for Good program for support, and the PyTorch Team for their guidance. 
This research is part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (awards OCI-0725070 and ACI-1238993), the State of Illinois, and as of December, 2019, the National Geospatial-Intelligence Agency. Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications. The research was supported in part by NSF grants IIS-1908104, OAC-1934634, and DBI-2021898.*", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} -{"text": "---\nlayout: blog_detail\ntitle: 'What\u2019s New in PyTorch Profiler 1.9?'\nauthor: Sabrina Smai, Program Manager on the AI Framework team at Microsoft\n---\n\nPyTorch Profiler v1.9 has been released! The goal of this new release (previous [PyTorch Profiler release](https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/)) is to provide you with new state-of-the-art tools to help diagnose and fix machine learning performance issues regardless of whether you are working on one or numerous machines. The objective is to target the execution steps that are the most costly in time and/or memory, and visualize the work load distribution between GPUs and CPUs. \n\nHere is a summary of the five major features being released:", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"} -{"text": "1.\t**Distributed Training View**: This helps you understand how much time and memory is consumed in your distributed training job. Many issues occur when you take a training model and split the load into worker nodes to be run in parallel as it can be a black box. The overall model goal is to speed up model training. This distributed training view will help you diagnose and debug issues within individual nodes. \n2.\t**Memory View**: This view allows you to understand your memory usage better. This tool will help you avoid the famously pesky Out of Memory error by showing active memory allocations at various points of your program run. \n3.\t**GPU Utilization Visualization**: This tool helps you make sure that your GPU is being fully utilized. \n4.\t**Cloud Storage Support**: Tensorboard plugin can now read profiling data from Azure Blob Storage, Amazon S3, and Google Cloud Platform.", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"} -{"text": "5.\t**Jump to Source Code**: This feature allows you to visualize stack tracing information and jump directly into the source code. This helps you quickly optimize and iterate on your code based on your profiling results. \n\n## Getting Started with PyTorch Profiling Tool\nPyTorch includes a profiling functionality called \u00ab PyTorch Profiler \u00bb. 
The PyTorch Profiler tutorial can be found [here](https://pytorch.org/tutorials/intermediate/tensorboard_profiler_tutorial.html).\n\nTo instrument your PyTorch code for profiling, first install the TensorBoard plugin:\n\n$ pip install torch-tb-profiler\n\nand then wrap the code to be profiled:\n\n```python\nimport torch.profiler as profiler\nwith profiler.profile(XXXX):\n    ...\n```\n\n**Comments**:\n\n\u2022 For CUDA and CPU profiling, see [below](https://github.com/pytorch/kineto/blob/master/tb_plugin/examples/resnet50_profiler_api.py):\n```\nwith torch.profiler.profile(\n    activities=[\n        torch.profiler.ProfilerActivity.CPU,\n        torch.profiler.ProfilerActivity.CUDA],\n```", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"} -{"text": "        torch.profiler.ProfilerActivity.CUDA],\n```\n\n\u2022\twith profiler.record_function(\"$NAME\"): lets you tag a block of code with a user-defined name\n\n\u2022\tThe profile_memory=True argument of profiler.profile lets you profile the CPU and GPU memory footprint\n\n## Visualizing PyTorch Model Performance using PyTorch Profiler\n\n### Distributed Training \n\nRecent advances in deep learning argue for the value of large datasets and large models, which requires you to scale out model training to more computational resources. Distributed Data Parallel (DDP) and the NVIDIA Collective Communications Library (NCCL) are the widely adopted paradigms in PyTorch for accelerating your deep learning training. \n\nIn this release of PyTorch Profiler, DDP with the NCCL backend is now supported.\n\nFigure: A straggler example
\nOverview details: ResNet50, batch size 4
\nOverview details: ResNet50, batch size 32
\nGIF: Jump to Source using the Visual Studio Code plug-in UI
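
Putting the pieces above together, a more complete profiling setup might look like the following. This is a hedged sketch rather than code from the post; `train_loader` and `train_step` are assumed stand-ins for your own data loader and training step.

```python
import torch
import torch.profiler

with torch.profiler.profile(
    activities=[
        torch.profiler.ProfilerActivity.CPU,
        torch.profiler.ProfilerActivity.CUDA,
    ],
    # Profile 3 active steps after 1 wait step and 1 warmup step.
    schedule=torch.profiler.schedule(wait=1, warmup=1, active=3),
    # Write traces that the torch-tb-profiler TensorBoard plugin can display.
    on_trace_ready=torch.profiler.tensorboard_trace_handler("./log/resnet50"),
    record_shapes=True,
    profile_memory=True,   # enables the Memory View
    with_stack=True,       # enables stack traces / jump-to-source
) as prof:
    for step, batch in enumerate(train_loader):  # train_loader is assumed
        train_step(batch)                        # train_step is assumed
        prof.step()                              # advance the profiler schedule
        if step >= 5:
            break
```
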
\n\n \n
\n Figure 1: Instrumentation build of PyTorch\n
\n\n```python", "source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"} -{"text": "\n\n```python\n# Clone the PyTorch repo\ngit clone https://github.com/pytorch/pytorch.git\ncd pytorch\n\n# Build the model_tracer\nUSE_NUMPY=0 USE_DISTRIBUTED=0 USE_CUDA=0 TRACING_BASED=1 \\\n python setup.py develop\n```\n\nNow this instrumentation build is used to run a model inference with representative inputs. The **model_tracer** binary observes parts of the instrumentation build that were activated during the inference run, and dumps it to a YAML file.\n\n\n \n
\n Figure 2: YAML file generated by running model(s) on an instrumentation build\n
\n\n```python\n# Generate YAML file\n./build/bin/model_tracer \\\n --model_input_path /tmp/path_to_model.ptl \\\n --build_yaml_path /tmp/selected_ops.yaml\n```", "source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"} -{"text": "--build_yaml_path /tmp/selected_ops.yaml\n```\n\nNow we build the PyTorch Runtime again, but this time using the YAML file generated by the tracer. The runtime now only includes those parts that are needed for this model. This is called **\u201cSelectively built PyTorch runtime\u201d** in the diagram below.\n\n```python\n# Clean out cached configuration\nmake clean\n\n# Build PyTorch using Selected Operators (from the YAML file)\n# using the host toolchain, and use this generated library\nBUILD_PYTORCH_MOBILE_WITH_HOST_TOOLCHAIN=1 \\\nUSE_LIGHTWEIGHT_DISPATCH=0 \\\nBUILD_LITE_INTERPRETER=1 \\\nSELECTED_OP_LIST=/tmp/selected_ops.yaml \\\nTRACING_BASED=1 \\\n ./scripts/build_mobile.sh\n```\n\n\n \n
\n Figure 3: Selective Build of PyTorch and model execution on a selectively built PyTorch runtime\n
\n\n### Show me the code!", "source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"} -{"text": "\n\n### Show me the code!\n\nWe\u2019ve put together a [notebook](https://gist.github.com/dhruvbird/65fd800983f362a72d78afe68031568c) to illustrate what the process above looks like in code using a simple PyTorch model. \n\nFor a more hands-on tutorial to deploy this on Android/iOS [this tutorial](https://pytorch.org/tutorials/prototype/tracing_based_selective_build.html) should be helpful.\n\n## Technical FAQs\n\n### Why is Tracing needed for a Selective Build of PyTorch?\n\nIn PyTorch, CPU kernels can call other operators via the [PyTorch Dispatcher](http://blog.ezyang.com/2020/09/lets-talk-about-the-pytorch-dispatcher/). Simply including the set of root operators called directly by the model is not sufficient as there might be many more being called under-the-hood transitively. Running the model on representative inputs and observing the actual list of operators called (aka \u201ctracing\u201d) is the most accurate way of determining what parts of PyTorch are used.", "source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"} -{"text": "Additionally, factors such as which dtypes a kernel should handle are also runtime features that depend on actual input provided to the model. Hence, the tracing mechanism is extremely suitable for this purpose.\n\n### Which features can be selected (in or out) by using Tracing Based Selective Build?\n\nThe following features can be selected for the PyTorch runtime during the tracing based selective build process:\n\n1. [CPU/QuantizedCPU](https://codebrowser.bddppq.com/pytorch/pytorch/build/aten/src/ATen/) kernels for [PyTorch\u2019s ATen Operators](https://pytorch.org/cppdocs/): If a PyTorch Operator is not needed by a model targeted at a selectively built runtime, then the registration of that CPU kernel is omitted in the runtime. This is controlled via [Torchgen code-gen](https://github.com/pytorch/pytorch/blob/master/torchgen/gen.py).", "source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"} -{"text": "2. [Primary Operators](https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/runtime/register_prim_ops.cpp): This is controlled by a macro named [TORCH_SELECTIVE_SCHEMA](https://codebrowser.bddppq.com/pytorch/pytorch/torch/library.h.html) (via templated selective build) that either selects a primary operator or de-selects it based on information in a generated header file.\n3. Code that handles [specific dtypes](https://codebrowser.bddppq.com/pytorch/pytorch/aten/src/ATen/Dispatch.h.html) in CPU kernels: This is performed by generating exception throws in specific case statements in the switch case generated by the macro [AT_PRIVATE_CHECK_SELECTIVE_BUILD](https://codebrowser.bddppq.com/pytorch/pytorch/aten/src/ATen/Dispatch.h.html#_M/AT_PRIVATE_CHECK_SELECTIVE_BUILD).", "source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"} -{"text": "4. Registration of [Custom C++ Classes](https://pytorch.org/tutorials/advanced/torch_script_custom_classes.html) that extend PyTorch: This is controlled by the macro [TORCH_SELECTIVE_CLASS](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/quantized/cpu/fbgemm_utils.cpp#L385-L386), which can be used when registering Custom C++ Classes. 
The [torch::selective_class_<>](https://github.com/pytorch/pytorch/blob/master/torch/custom_class.h#L443-L460) helper is to be used in conjunction with the macro [TORCH_SELECTIVE_CLASS](https://codebrowser.bddppq.com/pytorch/pytorch/torch/library.h.html#_M/TORCH_SELECTIVE_CLASS).\n\n### What is the structure of the YAML file used during the build?\n\nThe YAML file generated after tracing looks like the example below. It encodes all the elements of the \u201cselectable\u201d build feature as specified above.\n\n```python\ninclude_all_non_op_selectives: false\nbuild_features: []\noperators:\n aten::add.Tensor:\n is_used_for_training: false\n is_root_operator: true", "source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"} -{"text": "is_root_operator: true\n include_all_overloads: false\n aten::len.t:\n is_used_for_training: false\n is_root_operator: true\n include_all_overloads: false\nkernel_metadata:\n _local_scalar_dense_cpu:\n - Float\n add_stub:\n - Float\n copy_:\n - Bool\n - Byte\n mul_cpu:\n - Float\ncustom_classes: []\n```\n\n### How exactly is code eliminated from the generated binary?\n\nDepending on the specific scenario, there are 2 main techniques that are used to hint the compiler and linker about unused and unreachable code. This code is then cleaned up by the compiler or linker as unreachable code.\n\n#### [1] Unreferenced functions removed by the Linker\n\nWhen a function that isn\u2019t transitively referenced from any visible function is present in the compiled object files that are being linked together, the linker will remove it (if the right build flags are provided). This is leveraged in 2 scenarios by the selective build system.\n\n##### Kernel Registration in the Dispatcher", "source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"} -{"text": "##### Kernel Registration in the Dispatcher\n\nIf an operator\u2019s kernel isn\u2019t needed, then it isn\u2019t registered with the dispatcher. An unregistered kernel means that the function is unreachable, and it will be removed by the linker.\n\n##### Templated Selective Build\n\nThe general idea here is that a class template specialization is used to select a class that either captures a reference to a function or not (depending on whether it\u2019s used) and the linker can come along and clean out the unreferenced function.\n\nFor example, in the code below, there\u2019s no reference to the function \u201c`fn2`\u201d, so it will be cleaned up by the linker since it\u2019s not referenced anywhere.\n\n```python\n#include\n \n
\n Figure 4: Dead Code Elimination by C++ Compilers\n
", "source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"} -{"text": "\n\nThis property is leveraged in the bodies of PyTorch kernel implementations that have a lot of repeated code to handle multiple dtypes of a Tensor. A [dtype](https://pytorch.org/docs/stable/tensor_attributes.html) is the underlying data-type that the Tensor stores elements of. This can be one of float, double, int64, bool, int8, etc\u2026\n\nAlmost every PyTorch CPU kernel uses a macro of the form AT_DISPATCH_ALL_TYPES* that is used to substitute some code specialized for every dtype that the kernel needs to handle. For example:\n\n```python\nAT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND3(\n kBool, kHalf, kBFloat16, dtype, \"copy_kernel\", [&] {\n cpu_kernel_vec(\n iter,\n [=](scalar_t a) -> scalar_t { return a; },\n [=](Vectorized\n \n
", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} -{"text": "
\n \n
\n Source: https://engineering.fb.com/2021/07/15/open-source/fsdp/\n
\n\nPyTorch provides a native API, [DistributedDataParallel](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html) (DDP) to enable data parallelism which can be used as a module wrapper as showcased below. Please see PyTorch Distributed [documentation](https://pytorch.org/docs/stable/distributed.html#) for more details.\n\n```Python\nfrom torchmultimodal.models.flava.model import flava_model_for_pretraining\nimport torch\nimport torch.distributed as dist\n\nmodel = flava_model_for_pretraining().cuda()\n# Initialize PyTorch Distributed process groups\n# Please see https://pytorch.org/tutorials/intermediate/dist_tuto.html for details\ndist.init_process_group(backend=\u201dnccl\u201d)\n# Wrap model in DDP", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} -{"text": "# Wrap model in DDP\nmodel = torch.nn.parallel.DistributedDataParallel(model, device_ids=[torch.cuda.current_device()])\n```\n\n## Fully Sharded Data Parallel\n\nGPU memory usage of a training application can roughly be broken down into model inputs, intermediate activations (needed for gradient computation), model parameters, gradients, and optimizer states. Scaling a model will typically increase each of these elements. Scaling a model with DDP can eventually result in out-of-memory issues when a single GPU's memory becomes insufficient since it replicates the parameters, gradients, and optimizer states on all workers.", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} -{"text": "To reduce this replication and save GPU memory, we can shard the model parameters, gradients, and optimizer states across all workers with each worker only managing a single shard. This technique was popularized by the [ZeRO-3](https://arxiv.org/abs/1910.02054) approach developed by Microsoft. A PyTorch-native implementation of this approach is available as [FullyShardedDataParallel](https://pytorch.org/docs/stable/fsdp.html) (FSDP) API, released as a beta feature in PyTorch 1.12. During a module\u2019s forward and backward passes, FSDP unshards the model parameters as needed for computation (using all-gather) and reshards them after computation. It synchronizes gradients using the reduce-scatter collective to ensure sharded gradients are globally averaged. The forward and backward pass flow of a model wrapped in FSDP are detailed below:\n\n\n \n
", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} -{"text": "
\n\n\n Source: https://engineering.fb.com/2021/07/15/open-source/fsdp/\n
\n\nTo use FSDP, the submodules of a model need to be wrapped with the API to control when specific submodules are sharded or unsharded. FSDP provides an auto-wrapping API (see the [auto_wrap_policy](https://pytorch.org/docs/stable/fsdp.html#torch.distributed.fsdp.FullyShardedDataParallel) argument) that can be used out of the box as well as several [wrapping policies](https://github.com/pytorch/pytorch/blob/master/torch/distributed/fsdp/wrap.py) and the ability to [write your own policy](https://github.com/pytorch/pytorch/blob/75c0e3a471c19b883feca15fd4ecfabedf746691/torch/distributed/fsdp/fully_sharded_data_parallel.py#L858).", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} -{"text": "The following example demonstrates wrapping the FLAVA model with FSDP. We specify the auto-wrapping policy as `transformer_auto_wrap_policy`. This will wrap individual transformer layers (`TransformerEncoderLayer`), the image transformer (`ImageTransformer`), text encoder (`BERTTextEncoder`) and multimodal encoder (`FLAVATransformerWithoutEmbeddings`) as individual FSDP units. This uses a recursive wrapping approach for efficient memory management. For example, after an individual transformer layer\u2019s forward or backward pass is finished, its parameters are discarded, freeing up memory thereby reducing peak memory usage.", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} -{"text": "FSDP also provides a number of configurable options to tune the performance of applications. For example, in our use case, we illustrate the use of the new `limit_all_gathers` flag, which prevents all-gathering model parameters too early thereby alleviating memory pressure on the application. We encourage users to experiment with this flag which can potentially improve the performance of applications with high active memory usage.\n\n```Python\nimport torch\nfrom torch.distributed.fsdp import FullyShardedDataParallel as FSDP\nfrom torch.distributed.fsdp.wrap import transformer_auto_wrap_policy\nfrom torchmultimodal.models.flava.model import flava_model_for_pretraining\nfrom torchmultimodal.models.flava.text_encoder import BertTextEncoder\nfrom torchmultimodal.models.flava.image_encoder import ImageTransformer\nfrom torchmultimodal.models.flava.transformer import FLAVATransformerWithoutEmbeddings\nfrom torchmultimodal.modules.layers.transformer import TransformerEncoderLayer", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} -{"text": "model = flava_model_for_pretraining().cuda()\ndist.init_process_group(backend=\u201dnccl\u201d)\n\nmodel = FSDP(\n model,\n device_id=torch.cuda.current_device(),\n auto_wrap_policy=partial(\n transformer_auto_wrap_policy,\n transformer_layer_cls={\n TransformerEncoderLayer,\n ImageTransformer,\n BERTTextEncoder,\n FLAVATransformerWithoutEmbeddings\n },\n ),\n limit_all_gathers=True,\n )\n```\n\n## Activation Checkpointing", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} -{"text": ")\n```\n\n## Activation Checkpointing\n\nAs discussed above, intermediate activations, model parameters, gradients, and optimizer states contribute to the overall GPU memory usage. 
FSDP can reduce memory consumption due to the latter three but does not reduce memory consumed by activations. Memory used by activations increases with increase in batch size or number of hidden layers. Activation checkpointing is a technique to decrease this memory usage by recomputing the activations during the backward pass instead of holding them in memory for a specific checkpointed module. For example, we observed ~4x reduction in the peak active memory after forward pass by applying activation checkpointing to the 2.7B parameter model.", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} -{"text": "PyTorch offers a wrapper based activation checkpointing API. In particular, `checkpoint_wrapper` allows users to wrap an individual module with checkpointing, and `apply_activation_checkpointing` allows users to specify a policy with which to wrap modules within an overall module with checkpointing. Both these APIs can be applied to most models as they do not require any modifications to the model definition code. However, if more granular control over checkpointed segments, such as checkpointing specific functions within a module, is required, the functional `torch.utils.checkpoint` [API](https://pytorch.org/docs/stable/checkpoint.html) can be leveraged, although this requires modification to the model code. The application of the activation checkpointing wrapper to individual FLAVA transformer layers (denoted by `TransformerEncoderLayer`) is shown below. For a thorough description of activation checkpointing, please see the description in the [PyTorch documentation](https://pytorch.org/docs/stable/checkpoint.html).", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} -{"text": "```Python\nfrom torchmultimodal.models.flava.model import flava_model_for_pretraining\nfrom torch.distributed.algorithms._checkpoint.checkpoint_wrapper import apply_activation_checkpointing, checkpoint_wrapper, CheckpointImpl\nfrom torchmultimodal.modules.layers.transformer import TransformerEncoderLayer\n\nmodel = flava_model_for_pretraining()\ncheckpoint_tformer_layers_policy = lambda submodule: isinstance(submodule, TransformerEncoderLayer)\n\napply_activation_checkpointing(\n model,\n checkpoint_wrapper_fn=checkpoint_wrapper,\n check_fn=checkpoint_tformer_layers_policy,\n )\n```\nUsed together, wrapping FLAVA transformer layers with activation checkpointing and wrapping the overall model with FSDP as demonstrated above, we are able to scale FLAVA to 10B parameters.\n\n## Experiments", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} -{"text": "## Experiments\n\nWe conduct an empirical study about the impact of the different optimizations from the previous section on system performance. For all our experiments, we use a single node with 8 A100 40GB GPUs and run the pretraining for 1000 iterations. All runs also used PyTorch\u2019s [automatic mixed precision](https://pytorch.org/docs/stable/amp.html) with the bfloat16 data type. [TensorFloat32](https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices) format is also enabled to improve matmul performance on the A100. 
We define throughput as the average number of items (text or image) processed per second (we ignore the first 100 iterations while measuring throughput to account for warmup). We leave training to convergence and its impact on downstream task metrics as an area for future study.", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} -{"text": "Figure 1 plots the throughput for each model configuration and optimization, both with a local batch size of 8 and then with the maximum batch size possible on 1 node. Absence of a data point for a model variant for an optimization indicates that the model could not be trained on a single node.\n\nFigure 2 plots the maximum possible batch size per worker for each optimization. We observe a few things:\n\n1. Scaling model size: DDP is only able to fit the 350M and 900M model on a node. With FSDP, due to memory savings, we are able to train ~3x bigger models compared to DDP (i.e. the 1.8B and 2.7B variants). Combining activation checkpointing (AC) with FSDP enables training even bigger models, on the order of ~10x compared to DDP (i.e. 4.8B and 10B variants)\n2. Throughput:", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} -{"text": "2. Throughput:\n - For smaller model sizes, at a constant batch size of 8, the throughput for DDP is slightly higher than or equal to FSDP, explainable by the additional communication required by FSDP. It is lowest for FSDP and AC combined together. This is because AC re-runs checkpointed forward passes during the backwards pass, trading off additional computation for memory savings. However, in the case of the 2.7B model, FSDP + AC actually has higher throughput compared to FSDP alone. This is because the 2.7B model with FSDP is operating close to the memory limit even at batch size 8 triggering CUDA malloc retries which tend to slow down training. AC helps with reducing the memory pressure and leads to no retries.", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} -{"text": "- For DDP and FSDP + AC, the throughput increases with an increase in batch size for each model. For FSDP alone, this is true for smaller variants. However, with the 1.8B and 2.7B parameter models, we observe throughput degradation when increasing batch size. A potential reason for this, as noted above also, is that at the memory limit, PyTorch\u2019s CUDA memory management may have to retry cudaMalloc calls and/or run expensive defragmentation steps to find free memory blocks to handle the workload\u2019s memory requirements which can result in training slowdown.", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} -{"text": "- For larger models that can only be trained with FSDP (1.8B, 2.7B, 4.8B) the setting with highest throughput achieved is with FSDP + AC scaling to the maximum batch size. For 10B, we observe nearly equal throughput for smaller and maximum batch size. This might be counterintuitive as AC results in increased computation and maxing out batch size potentially leads to expensive defragmentation operations due to operating at CUDA memory limit. 
However, for these large models, the increase in batch size is large enough to mask this overhead.\n\n\n \n
\n Figure 1: Training throughput for different configurations\n
\n\n\n \n
\n Figure 2: Max local batchsize possible for different configurations\n
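
For reference, a minimal sketch of the throughput bookkeeping described above (ignoring the first 100 iterations as warmup); the data loader, step function, and per-batch item count are assumptions, not code from our benchmark harness:

```python
import time
import torch

WARMUP_ITERS = 100
processed, start = 0, None
for it, batch in enumerate(data_loader):      # data_loader is assumed
    train_step(batch)                         # train_step is assumed
    if it + 1 == WARMUP_ITERS:
        torch.cuda.synchronize()              # don't time in-flight warmup work
        start = time.perf_counter()
    elif it + 1 > WARMUP_ITERS:
        processed += batch_num_items(batch)   # text + image items; assumed helper
torch.cuda.synchronize()
throughput = processed / (time.perf_counter() - start)
```
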
\n\n## Conclusion", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} -{"text": "\n\n## Conclusion\n\nAs the world moves towards multimodal foundation models, scaling model parameters and efficient training is becoming an area of focus. The PyTorch ecosystem aims to accelerate innovation in this field by providing different tools to the research community, both for training and scaling multimodal models. With FLAVA, we laid out an example of scaling a model for multimodal understanding. In the future, we plan to add support for other kinds of models like the ones for multimodal generation and demonstrate their scaling factors. We also hope to automate many of these scaling and memory saving techniques (such as sharding and activation checkpointing) to reduce the amount of user experimentation needed to achieve the desired scale and maximum training throughput.\n\n## References\n\n- [Introducing TorchMultimodal - a library for accelerating exploration in Multimodal AI](https://pytorch.org/blog/introducing-torchmultimodal/)", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} -{"text": "- [FLAVA paper](https://deploy-preview-1186--pytorch-dot-org-preview.netlify.app/blog/introducing-torchmultimodal/)\n- [Introducing Pytorch FSDP](https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/)", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} -{"text": "---\nlayout: blog_detail\ntitle: \"A BetterTransformer for Fast Transformer Inference\"\nauthor: Michael Gschwind, Eric Han, Scott Wolchok, Rui Zhu, Christian Puhrsch\nfeatured-img: \"/assets/images/2022-7-12-a-better-transformer-for-fast-transformer-encoder-inference-3.png\"\n---\n\n**tl;dr** Transformers achieve state-of-the-art performance for NLP, and are becoming popular for a myriad of other tasks. They are computationally expensive which has been a blocker to their widespread productionisation. Launching with PyTorch 1.12, BetterTransformer implements a backwards-compatible fast path of `torch.nn.TransformerEncoder` for Transformer Encoder Inference and does not require model authors to modify their models. BetterTransformer improvements can exceed 2x in speedup and throughput for many common execution scenarios. To use BetterTransformer, [install](https://pytorch.org/get-started/locally/) PyTorch 1.12 and start using high-quality, high-performance Transformer models with the PyTorch API today.", "source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"} -{"text": "\n \n
\nDiagram of the Transformer Encoder Architecture (from \"Attention Is All You Need\"). During Inference, the entire module will execute as a single PyTorch-native function.\n
\n\nIn this blog post, we share the following topics \u2014 Performance Improvements, Backwards compatibility, and Taking advantage of the FastPath. Learn more about these topics below. \n\n## Performance Improvements", "source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"} -{"text": "## Performance Improvements\n\nBetterTransformer launches with accelerated native implementations of MultiHeadAttention and TransformerEncoderLayer for CPUs and GPUs. These fast paths are integrated in the standard PyTorch Transformer APIs, and will accelerate [TransformerEncoder](https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoder.html), [TransformerEncoderLayer](https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoderLayer.html) and [MultiHeadAttention](https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html) nn.modules. These new modules implement two types of optimizations: (1) fused kernels combine multiple individual operators normally used to implement Transformers to provide a more efficient implementation, and (2) take advantage of sparsity in the inputs to avoid performing unnecessary operations on padding tokens. Padding tokens frequently account for a large fraction of input batches in many Transformer models used for Natural Language Processing.", "source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"} -{"text": "## Backwards compatibility\n\nAdvantageously, **no model changes are necessary to benefit from the performance boost offered by BetterTransformer.** To benefit from fast path execution, inputs and operating conditions must satisfy some access conditions (see below). While the internal implementation of Transformer APIs has changed, PyTorch 1.12 maintains strict compatibility with Transformer modules shipped in previous versions, enabling PyTorch users to use models created and trained with previous PyTorch releases while benefiting from BetterTransformer improvements.\n\nIn addition to enabling the PyTorch nn.Modules, BetterTransformer provides improvements for PyTorch libraries. Performance benefits will become available through two different enablement paths:", "source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"} -{"text": "1. Transparent acceleration: Current users of PyTorch nn.Modules such as [MultiHeadAttention](https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html) as well as higher-level Transformer components will benefit from the improved performance of the new nn.Modules automatically. An example of this is the [visual transformer (ViT)](https://arxiv.org/abs/2010.11929) implementation used in the torchvision library ([code link](https://github.com/pytorch/vision/blob/87cde716b7f108f3db7b86047596ebfad1b88380/torchvision/models/vision_transformer.py#L103)).", "source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"} -{"text": "2. Torchtext library acceleration: As part of this project, we have optimized Torchtext to build on the PyTorch core API to benefit from BetterTransformer enhancements while maintaining strict and transparent compatibility with previous library versions and models trained with previous Torchtext versions. 
Using PyTorch Transformers in Torchtext also ensures that Torchtext will benefit from expected future enhancements to the PyTorch Transformer implementation.\n\n## Taking advantage of the Fastpath\n\nBetterTransformer is a fastpath for the PyTorch Transformer API. The fastpath is a native, specialized implementation of key Transformer functions for CPU and GPU that applies to common Transformer use cases.", "source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"} -{"text": "To take advantage of input sparsity (i.e. padding) in accelerating your model (see Figure 2), set the keyword argument `enable_nested_tensor=True` when instantiating a [TransformerEncoder](https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoder.html) and pass in the `src_key_padding_mask` argument (which denotes padding tokens) during inference. This requires the padding mask to be contiguous, which is the typical case.", "source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"} -{"text": "Currently, the BetterTransformer speedup only applies to transformer encoder models used in inference. To benefit from fastpath execution, models must be composed of any of the following components: [TransformerEncoder](https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoder.html), [TransformerEncoderLayer](https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoderLayer.html) or [MultiheadAttention](https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html) (MHA). Fastpath execution is also subject to some criteria. Most importantly, the model must be executed in inference mode and operate on input tensors that do not collect gradient tape information (e.g., running with torch.no_grad). The full list of conditions can be found at these links for [nn.MultiHeadAttention](https://github.com/pytorch/pytorch/blob/29189d2ba8e583b2355cd0e9517a1ee742ba12cf/torch/nn/modules/activation.py#L1060) and [nn.TransformerEncoder](https://github.com/pytorch/pytorch/blob/29189d2ba8e583b2355cd0e9517a1ee742ba12cf/torch/nn/modules/transformer.py#L206), respectively. If the criteria are not met, control flows to the legacy PyTorch 1.11 Transformer implementation which has the same API, but lacks the fastpath performance boost.", "source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"} -{"text": "Other transformer models (such as decoder models) which use the PyTorch MultiheadAttention module will benefit from the BetterTransformer fastpath. Planned future work is to expand the end-to-end BetterTransformer fastpath to models based on [TransformerDecoder](https://pytorch.org/docs/stable/generated/torch.nn.TransformerDecoder.html) to support popular seq2seq and decoder-only (e.g., [OPT](https://ai.facebook.com/blog/democratizing-access-to-large-scale-language-models-with-opt-175b/)) model architectures, and to training.\n\n## Speedups\n\nThe following graphs show the performance achieved for the [BERT](https://arxiv.org/abs/1810.04805)-base model with small and large-scale inputs:\n\n\n \n
\nFigure 1: PyTorch 1.12 Improvements with BetterTransformer fastpath execution\n
\n\n", "source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"} -{"text": "
\n\n\n \n
\nFigure 2: PyTorch 1.12 Improvements with BetterTransformer fastpath execution
\nwith sparsity optimization enabled by enable_nested_tensor=True\n
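
As a hedged illustration of the conditions described above (the dimensions, mask layout, and toy inputs are assumptions, not from the post), the sparsity fastpath can be exercised like this:

```python
import torch
import torch.nn as nn

encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
# enable_nested_tensor=True lets the fastpath skip computation on padding tokens.
encoder = nn.TransformerEncoder(encoder_layer, num_layers=6, enable_nested_tensor=True)
encoder.eval()  # the fastpath applies to inference

src = torch.rand(32, 128, 512)                         # (batch, sequence, embedding)
src_key_padding_mask = torch.zeros(32, 128, dtype=torch.bool)
src_key_padding_mask[:, 64:] = True                    # contiguous padding at the end

with torch.no_grad():                                  # no gradient tape -> fastpath eligible
    out = encoder(src, src_key_padding_mask=src_key_padding_mask)
```
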
\n\n
\nC++ stores multi-dimensional data in row-major format.\n
", "source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"} -{"text": "
| Optimization | Performance Bottleneck Addressed |\n|---|---|\n| Combining Input Sparse Features | Host-to-device memory copy |\n| Horizontal fusion | GPU kernel launch overhead |\n| Overlapping Computation with Communication | Embedding all-to-all access time |
\n
\n
\n \n
", "source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"} -{"text": "AMD, along with key PyTorch codebase developers (including those at Meta AI), delivered a set of updates to the ROCm\u2122 open software ecosystem that brings stable support for AMD Instinct\u2122 accelerators as well as many Radeon\u2122 GPUs. This now gives PyTorch developers the ability to build their next great AI solutions leveraging AMD GPU accelerators & ROCm. The support from PyTorch community in identifying gaps, prioritizing key updates, providing feedback for performance optimizing and supporting our journey from \u201cBeta\u201d to \u201cStable\u201d was immensely helpful and we deeply appreciate the strong collaboration between the two teams at AMD and PyTorch. The move for ROCm support from \u201cBeta\u201d to \u201cStable\u201d came in the PyTorch 1.12 release (June 2022) brings the added support to easily run PyTorch on native environment without having to configure custom dockers. This is a sign of confidence about the quality of support and performance of PyTorch using AMD Instinct and ROCm. The results of these collaborative efforts are evident in the performance measured on key industry benchmarks like Microsoft\u2019s SuperBench shown below in Graph 1.", "source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"} -{"text": "
\n\n\u201cWe are excited to see the significant impact of developers at AMD to contribute to and extend features within PyTorch to make AI models run in a more performant, efficient, and scalable way. A great example of this is the thought-leadership around unified memory approaches between the framework and future hardware systems, and we look forward to seeing that feature progress.\u201d
\n- Soumith Chintala, PyTorch lead-maintainer and Director of Engineering, Meta AI\n
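
As a quick, hedged sanity check (not from the original post), one way to confirm that a ROCm build of PyTorch is in use on such a system:

```python
import torch

print(torch.__version__)           # e.g. a 1.12+ build
print(torch.version.hip)           # HIP/ROCm version string on ROCm builds, None on CUDA builds
print(torch.cuda.is_available())   # ROCm reuses the torch.cuda API surface for AMD GPUs
print(torch.cuda.get_device_name(0) if torch.cuda.is_available() else "no GPU visible")
```
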
\n \n
\nFigure 1. FSDP workflow\n
\n\nUsually, model layers are wrapped with FSDP in a nested way, so that only layers in a single FSDP instance need to gather the full parameters to a single device during forward or backward computations. The gathered full parameters will be freed immediately after computation, and the freed memory can be used for the next layer\u2019s computation. In this way, peak GPU memory could be saved and thus training can be scaled to use a larger model size or larger batch size. To further maximize memory efficiency, FSDP can offload the parameters, gradients and optimizer states to CPUs when the instance is not active in the computation.\n\n### Using FSDP in PyTorch\n\nThere are two ways to wrap a model with PyTorch FSDP. Auto wrapping is a drop-in replacement for DDP; manual wrapping needs minimal changes of model definition code with the ability to explore complex sharding strategies.", "source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"} -{"text": "#### Auto Wrapping\n\nModel layers should be wrapped in FSDP in a nested way to save peak memory and enable communication and computation overlapping. The simplest way to do it is auto wrapping, which can serve as a drop-in replacement for DDP without changing the rest of the code.\n\nfsdp_auto_wrap_policy argument allows specifying a callable function to recursively wrap layers with FSDP. default_auto_wrap_policy function provided by the PyTorch FSDP recursively wraps layers with the number of parameters larger than 100M. You can supply your own wrapping policy as needed. The example of writing a customized wrapping policy is shown in the [FSDP API doc](https://pytorch.org/docs/stable/fsdp.html).\n\nIn addition, cpu_offload could be configured optionally to offload wrapped parameters to CPUs when these parameters are not used in computation. This can further improve memory efficiency at the cost of data transfer overhead between host and device.\n\nThe example below shows how FSDP is wrapped using auto wrapping.", "source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"} -{"text": "```python\nfrom torch.distributed.fsdp import (\n FullyShardedDataParallel,\n CPUOffload,\n)\nfrom torch.distributed.fsdp.wrap import (\n default_auto_wrap_policy,\n)\nimport torch.nn as nn\n \nclass model(nn.Module):\n def __init__(self):\n super().__init__()\n self.layer1 = nn.Linear(8, 4)\n self.layer2 = nn.Linear(4, 16)\n self.layer3 = nn.Linear(16, 4)\n \nmodel = DistributedDataParallel(model())\nfsdp_model = FullyShardedDataParallel(\n model(),\n fsdp_auto_wrap_policy=default_auto_wrap_policy,\n cpu_offload=CPUOffload(offload_params=True),\n)\n```\n\n#### Manual Wrapping\n\nManual wrapping can be useful to explore complex sharding strategies by applying `wrap` selectively to some parts of the model. 
Overall settings can be passed to the enable_wrap() context manager.\n\n```python\nfrom torch.distributed.fsdp import (\n FullyShardedDataParallel,\n CPUOffload,\n)\nfrom torch.distributed.fsdp.wrap import (\n enable_wrap,\n wrap,\n)\nimport torch.nn as nn\nfrom typing import Dict", "source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"} -{"text": "import torch.nn as nn\nfrom typing import Dict\n \n \nclass model(nn.Module):\n def __init__(self):\n super().__init__()\n self.layer1 = wrap(nn.Linear(8, 4))\n self.layer2 = nn.Linear(4, 16)\n self.layer3 = wrap(nn.Linear(16, 4))\n \nwrapper_kwargs = Dict(cpu_offload=CPUOffload(offload_params=True))\nwith enable_wrap(wrapper_cls=FullyShardedDataParallel, **wrapper_kwargs):\n fsdp_model = wrap(model())\n```\n\nAfter wrapping the model with FSDP using one of the two above approaches, the model can be trained in a similar way as local training, like this:\n\n```python\noptim = torch.optim.Adam(fsdp_model.parameters(), lr=0.0001)\nfor sample, label in next_batch():\n out = fsdp_model(input)\n loss = criterion(out, label)\n loss.backward()\n optim.step()\n```\n\n### Benchmark Results", "source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"} -{"text": "optim.step()\n```\n\n### Benchmark Results\n\nWe ran extensive scaling tests for 175B and 1T GPT models on AWS clusters using PyTorch FSDP. Each cluster node is an instance with 8 [NVIDIA A100-SXM4-40GB](https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/a100/pdf/nvidia-a100-datasheet-us-nvidia-1758950-r4-web.pdf) GPUs, and inter-nodes are connected via AWS Elastic Fabric Adapter (EFA) with 400 Gbps network bandwidth.\n\nGPT models are implemented using [minGPT](https://github.com/karpathy/minGPT). A randomly generated input dataset is used for benchmarking purposes. All experiments ran with 50K vocabulary size, fp16 precision and [SGD](https://pytorch.org/docs/stable/generated/torch.optim.SGD.html) optimizer.\n\n| Model | Number of layers | Hidden size | Attention heads | Model size, billions of parameters |\n|----------|------------------|-------------|-----------------|------------------------------------|", "source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"} -{"text": "| GPT 175B | 96 | 12288 | 96 | 175 |\n| GPT 1T | 128 | 25600 | 160 | 1008 |\n\nIn addition to using FSDP with parameters CPU offloading in the experiments, the [activation checkpointing feature](https://pytorch.org/docs/stable/checkpoint.html) in PyTorch is also applied in the tests.\n\nThe maximum per-GPU throughput of 159 teraFLOP/s (51% of NVIDIA A100 peak theoretical performance 312 teraFLOP/s/GPU) is achieved with batch size 20 and sequence length 512 on 128 GPUs for the GPT 175B model; further increase of the number of GPUs leads to per-GPU throughput degradation because of growing communication between the nodes.", "source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"} -{"text": "For the GPT 1T model, the maximum per-GPU throughput of 84 teraFLOP/s (27% of the peak teraFLOP/s) is achieved with batch size 4 and sequence length 2048 on 128 GPUs. 
However, further increase of the number of GPUs doesn\u2019t affect the per-GPU throughput too much because we observed that the largest bottleneck in the 1T model training is not from communication but from the slow CUDA cache allocator when peak GPU memory is reaching the limit. The use of A100 80G GPUs with larger memory capacity will mostly resolve this issue and also help scale the batch size to achieve much larger throughput.\n\n\n \n
\n \n
\n \n
\n \n
\n \n
* Trained on the training data of MUSDB-HQ dataset.
** Trained on both training and test sets of MUSDB-HQ and 150 extra songs from an internal database that were specifically produced for Meta.
\n\n
\n\n
| Device | Concurrency | # Requests | # Workers | Batch size | Payload/image | Optimization | Throughput | Latency P99 |\n|---|---|---|---|---|---|---|---|---|\n| CPU | 10 | 1000 | 1 | 1 | small | N/A | 3.45 | 305.3 ms |\n| CPU | 1 | 1000 | 1 | 1 | small | N/A | 3.45 | 291.8 ms |\n| GPU | 10 | 1000 | 1 | 1 | small | N/A | 41.05 | 25.48 ms |", "source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"} -{"text": "| GPU | 1 | 1000 | 1 | 1 | small | N/A | 42.21 | 23.6 ms |\n| GPU | 10 | 1000 | 1 | 4 | small | N/A | 54.78 | 73.62 ms |\n| GPU | 10 | 1000 | 1 | 4 | small | model.half() | 78.62 | 50.69 ms |\n| GPU | 10 | 1000 | 1 | 8 | small | model.half() | 85.29 | 94.4 ms |
\n\n
\n\n
\nFigure 2: Image-classification scaling result.\n
\n\n**Figure 3** shows the *object-detection* scaling result. It plots the epoch time for training [ViTDet](https://arxiv.org/abs/2203.16527) [9] with different ViT backbones on COCO using 128 A100-80GB GPUs.\n\n\n\n
\nFigure 3: Object-detection scaling result.\n
", "source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"} -{"text": "\n\n**Figure 4** shows the *video-understanding* scaling result. It plots the epoch time for training [MViTv2](https://arxiv.org/abs/2112.01526) [10] models on [Kinetics 400](https://www.deepmind.com/open-source/kinetics) [11] using 128 V100 (32 GB) GPUs in FP32.\n\n\n\n
\nFigure 4: Video-understanding scaling result.\n
\n\n**Figure 5** shows the optimization result with the ViT-H model in Figure 2 on 8 A100-40GB GPUs.\nThree versions are used: (1) the baseline uses PyTorch\u2019s DDP [12] with AMP O1, (2) FSDP + AMP-O2 + other optimizations, and (3) FSDP + FP16 + other optimizations. These optimizations altogether speed up the training by up to 2.2x.\n\n\n\n
\nFigure 5: Training speedups from various optimizations.", "source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"} -{"text": "
\n\n## 5. Concluding Remarks\n\nWe have demonstrated the use of PyTorch with FairScale\u2019s FullyShardedDataParallel (FSDP) API in writing large vision transformer models. We discuss our techniques for scaling and optimizing these models on a GPU cluster. We hope that this article can motivate others to develop large-scale ML models with PyTorch and its ecosystem.\n\n## References\n\n[1] [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377)\n\n[2] [Revisiting Weakly Supervised Pre-Training of Visual Perception Models](https://arxiv.org/abs/2201.08371)\n\n[3] [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929v2)\n\n[4] [fairscale.nn.FullyShardedDataParallel](https://fairscale.readthedocs.io/en/stable/api/nn/fsdp.html)\n\n[5] [Pipeline parallelism in PyTorch](https://pytorch.org/docs/stable/pipeline.html)\n\n[6] [Automatic Mixed Precision (AMP) in PyTorch](https://pytorch.org/docs/stable/amp.html#module-torch.amp)", "source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"} -{"text": "[7] [xformers](https://github.com/facebookresearch/xformers)\n\n[8] [The bfloat16 numerical format](https://cloud.google.com/tpu/docs/bfloat16)\n\n[9] [Exploring Plain Vision Transformer Backbones for Object Detection](https://arxiv.org/abs/2203.16527)\n\n[10] [MViTv2: Improved Multiscale Vision Transformers for Classification and Detection](https://arxiv.org/abs/2112.01526)\n\n[11] [https://www.deepmind.com/open-source/kinetics](https://www.deepmind.com/open-source/kinetics)\n\n[12] [Getting Started with Distributed Data Parallel (DDP)](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html)", "source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"} -{"text": "---\nlayout: blog_detail\ntitle: 'Announcing PyTorch Ecosystem Day'\nauthor: Team PyTorch\n---\n\nWe\u2019re proud to announce our first PyTorch Ecosystem Day. The virtual, one-day event will focus completely on our Ecosystem and Industry PyTorch communities!\n\n\nPyTorch is a deep learning framework of choice for academics and companies, all thanks to its rich ecosystem of tools and strong community. As with our developers, our ecosystem partners play a pivotal role in the development and growth of the community.\n\n\n\n
\nFigure 1: Example of an augmented computational graph\n
\n\n
\nFigure 2: Animation that shows the graph creation\n
\n\n
", "source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"} -{"text": "
\n\n
\n\n
\n\n
\n\n
\nFigure 1: the Parameter Number of Transformer Models Increases Dramatically.\n
\n\n
\n
\n\n
\nFigure 3. The process of PipeTransformer\u2019s automated and elastic pipelining to accelerate distributed training of Transformer models\n
\n\n
\nFigure 4: An Animation to Show the Dynamics of PipeTransformer\n
\n\n
\nFigure 5. Overview of PipeTransformer Training System\n
\n\n
\nFigure 6. The partition boundary is in the middle of a skip connection\n
\n\n
\n
\n\n
\n
\n\n
", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}
-{"text": "
\nFigure 7. Pipeline Bubble: , and
denote forward, backward, and the optimizer update of micro-batch
on device
, respectively. The total bubble size in each iteration is
times per micro-batch forward and backward cost.\n
\n\n
\nFigure 8. AutoDP: handling dynamical data-parallel with messaging between double process groups (Process 0-7 belong to machine 0, while process 8-15 belong to machine 1)\n
\n\n
\n
", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} -{"text": "
\n\n
\nFigure 9. Speedup Breakdown (ViT on ImageNet)\n
\n\n
", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}
-{"text": "
\nFigure 10. Tuning in Freezing Algorithm\n
\n\n
\nFigure 11. Optimal chunk number in the elastic pipeline\n
\n\n
\nFigure 12. the timing of caching\n
\n\n
\n\tFigure 1. Benefits of using CUDA graphs\n
", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} -{"text": "
\n\n
\n Figure 2. Looking at a typical neural network, all the kernel launches for NCCL AllReduce can be bundled into a graph to reduce overhead launch time.\n
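As a rough, self-contained illustration of the idea in Figure 2, the hypothetical snippet below (not taken from the Mask R-CNN or DLRM work described in this post) captures a short sequence of GPU kernels into a single CUDA graph with the `torch.cuda.CUDAGraph` API and then replays the whole sequence with one launch.

```python
import torch

# Toy module and shapes chosen only for illustration.
model = torch.nn.Sequential(torch.nn.Linear(512, 512), torch.nn.ReLU()).cuda()
static_input = torch.randn(64, 512, device="cuda")

# Warm up on a side stream before capture, as the capture API expects.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for _ in range(3):
        model(static_input)
torch.cuda.current_stream().wait_stream(s)

g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):                # kernels are recorded, not executed
    static_output = model(static_input)

static_input.copy_(torch.randn(64, 512, device="cuda"))
g.replay()                               # relaunch the captured kernel sequence at once
print(static_output.shape)
```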
", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} -{"text": "
\n\n
\n Figure 3: NSight timeline plot of Mask R-CNN\n
\n\n
\n Figure 4: CUDA graphs optimization\n
\n\t\n\t
\n\tFigure 5. By using a fixed size tensor and a boolean mask as described in the text, we are able to eliminate CPU synchronizations needed for dynamic sized tensors \n
\n\t\n
\n\t\n
\n\tFigure 6: CUDA graphs optimization for the DLRM model.\n
\n \n
\n \n
\nFigure 2: Transcript of a patient-doctor conversation\n
\n\n\n \n
\nFigure 3: Excerpt of an AI-generated medical report. HPI stands for History of present illness.\n
\n\n## Text Summarization with PyTorch and Fairseq", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} -{"text": "## Text Summarization with PyTorch and Fairseq\n\n[PyTorch](https://pytorch.org/) is an open-source machine learning framework developed by Facebook that helps researchers prototype Deep Learning models. The [Fairseq](https://github.com/pytorch/fairseq) toolkit is built on top of PyTorch and focuses on sequence generation tasks, such as Neural Machine Translation (NMT) or Text Summarization. Fairseq features an active community that is continuously providing reference implementations of state-of-the-art models. It contains many built-in components (model architectures, modules, loss functions, and optimizers) and is easily extendable with plugins.", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} -{"text": "Text summarization constitutes a significant challenge in NLP. We need models capable of generating a short version of a document while retaining the key points and avoiding uninformative content. These challenges can be addressed with different approaches. 1). Abstractive text summarization aimed at training models that can generate a summary in narrative form. 2). Extractive methods where the models are trained to select the most important parts from the input text. 3). A combination of the two, where the essential parts from the input are selected and then summarized in an abstractive fashion. Hence, summarization can be accomplished via a single end-to-end network or as a pipeline of extractive and abstractive components. To that end, Fairseq provides all the necessary tools to be successful in our endeavor. It features either end-to-end models such as the classical Transformer, different types of Language Models and pre-trained versions that enable researchers to focus on what matters most\u2014to build state-of-the-art models that generate valuable reports.", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} -{"text": "However, we are not just summarizing the transcribed conversation; we generate high-quality medical reports, which have many considerations.\n\n* Every section of a medical report is different in terms of content, structure, fluency, etc.\n* All medical facts mentioned in the conversation should be present in the report, for example, a particular treatment or dosage.\n* In the healthcare domain, the vocabulary is extensive, and models need to deal with medical terminology.\n* Patient-doctor conversations are usually much longer than the final report.", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} -{"text": "All these challenges require our researchers to run a battery of extensive experiments. Thanks to the flexibility of PyTorch and Fairseq, their productivity has greatly increased. Further, the ecosystem offers an easy path from ideation, implementation, experimentation, and final roll-out to production. 
Using multiple GPUs or CPUs is as simple as providing an additional argument to the tools, and because of the tight Python integration, PyTorch code can be easily debugged.", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} -{"text": "In our continuous effort to contribute to the open-source community, features have been developed at Nuance and pushed to the Fairseq GitHub repository. These try to overcome some of the challenges mentioned such as, facilitating copying of, especially rare or unseen, words from the input to summary, training speedups by improving Tensor Core utilization, and ensuring TorchScript compatibility of different Transformer configurations. Following, we will show an example of how to train a Transformer model with a Pointer Generator mechanism (Transformer-PG), which can copy words from the input.\n\n## How to build a Transformer model with a Pointer Generator mechanism\n\nIn this step-by-step guide, it is assumed the user has already installed PyTorch and Fairseq.\n\n### 1. Create a vocabulary and extend it with source position markers:\n\nThese markers will allow the model to point to any word in the input sequence.\n\n```python\nvocab_size=Model Type | \nPreferred scheme | \nWhy | \n
---|---|---|
LSTM/RNN | \nDynamic Quantization | \nThroughput dominated by compute/memory bandwidth for weights | \n
BERT/Transformer | \nDynamic Quantization | \nThroughput dominated by compute/memory bandwidth for weights | \n
CNN | \nStatic Quantization | \nThroughput limited by memory bandwidth for activations | \n
CNN | \nQuantization Aware Training | \nIn the case where accuracy can't be achieved with static quantization | \n
Model | \nFloat Latency (ms) | ", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"} -{"text": "Float Latency (ms) | \nQuantized Latency (ms) | \nInference Performance Gain | \nDevice | \nNotes | \n
BERT | \n581 | \n313 | \n1.8x | \nXeon-D2191 (1.6GHz) | \nBatch size = 1, Maximum sequence length= 128, Single thread, x86-64, Dynamic quantization | \n|
Resnet-50 | \n214 | \n103 | \n2x | \nXeon-D2191 (1.6GHz) | \nSingle thread, x86-64, Static quantization | \n|
Mobilenet-v2 | \n97 | \n17 | ", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"} -{"text": "17 | \n5.7x | \nSamsung S9 | \nStatic quantization, Floating point numbers are based on Caffe2 run-time and are not optimized | \n
Model | \nTop-1 Accuracy (Float) | \nTop-1 Accuracy (Quantized) | \nQuantization scheme | \n|
Googlenet | \n69.8 | \n69.7 | \nStatic post training quantization | \n|
Inception-v3 | \n77.5 | \n77.1 | \nStatic post training quantization | \n|
ResNet-18 | \n69.8 | \n69.4 | \nStatic post training quantization | \n|
Resnet-50 | \n76.1 | \n75.9 | \nStatic post training quantization | \n|
ResNext-101 32x8d | \n79.3 | \n79 | \nStatic post training quantization | \n|
Mobilenet-v2 | \n71.9 | \n71.6 | \nQuantization Aware Training | \n|
Shufflenet-v2 | \n69.4 | ", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"} -{"text": "69.4 | \n68.4 | \nStatic post training quantization | \n
Model | \nF1 (GLUEMRPC) Float | \nF1 (GLUEMRPC) Quantized | \nQuantization scheme | \n
BERT | \n0.902 | \n0.895 | \nDynamic quantization | \n
\n\n
", "source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"} -{"text": "
\n\n\nFigure 1: Scaling of T5-XL (3B) and T5-XXL (11B) from 1 node to 64 nodes\n
\n\n\n\n
\nFigure 2: TFLOPs/sec usage for T5-XL(3B) and T5-XXL (11B) as we increase number of nodes\n
\n\n## IBM Cloud AI System and Middleware\n\nThe AI infrastructure used for this work is a large-scale AI system on IBM Cloud consisting of nearly 200 nodes, each node with 8 NVIDIA A100 80GB cards, 96 vCPUs, and 1.2TB CPU RAM. The GPU cards within a node are connected via NVLink with a card-to-card bandwidth of 600GBps. Nodes are connected by 2 x 100Gbps Ethernet links with SRIOV based TCP/IP stack, providing a usable bandwidth of 120Gbps.", "source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"} -{"text": "The IBM Cloud AI System has been production-ready since May of 2022 and is configured with the OpenShift container platform to run AI workloads. We also built a software stack for production AI workloads that provide end-to-end tools for training workloads. The middleware leverages Ray for pre and post processing workloads and PyTorch for training of models. We also integrate a Kubernetes native scheduler, MCAD, that manages multiple jobs with job queuing, gang scheduling, prioritization, and quota management. A multi-NIC CNI discovers all available network interfaces and handles them as a single NIC pool enabling optimized use of the network interfaces in Kubernetes. Finally, CodeFlare CLI supports a single pane for observability of the full stack using a desktop CLI (e.g., GPU utilization, application metrics like loss, gradient norm).\n\n\n \n
", "source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"} -{"text": "
\n\n\nFigure 3: Foundation Model Middleware Stack\n
\n\n### Conclusion and Future Work\n\nIn conclusion, we demonstrated how we can achieve remarkable scaling of FSDP APIs over non-InfiniBand networks. We identified the bottleneck that had limited scaling to less than 20% efficiency for 11B parameter model training. After identifying the issue, we were able to correct this with a new rate limiter control to ensure a more optimal balance of reserved memory and communication overlap relative to compute time. With this improvement, we were able to achieve 90% scaling efficiency (a 4.5x improvement), at 256 GPUs and 80% at 512 GPUs for training of the 11B parameter model. In addition, the 3B parameter model scales extremely well with 95% efficiency even as we increase the number of GPUs to 512.", "source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"} -{"text": "This is a first in the industry to achieve such scaling efficiencies for up to 11B parameter models using Kubernetes with vanilla Ethernet and PyTorch native FSDP API\u2019s. This improvement enables users to train huge models on a Hybrid Cloud platform in a cost efficient and sustainable manner.\n\nWe plan on continuing to investigate scaling with decoder only models and increasing the size of these models to 100B+ parameters. From a system design perspective, we are exploring capabilities such as RoCE and GDR that can improve latencies of communications over Ethernet networks.\n\n## Acknowledgements\n\nThis blog was possible because of contributions from both PyTorch Distributed and IBM Research teams.\n\nFrom the PyTorch Distributed team, we would like to thank Less Wright, Hamid Shojanazeri, Geeta Chauhan, Shen Li, Rohan Varma, Yanli Zhao, Andrew Gu, Anjali Sridhar, Chien-Chin Huang, and Bernard Nguyen.", "source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"} -{"text": "From the IBM Research team, we would like to thank Linsong Chu, Sophia Wen, Lixiang (Eric) Luo, Marquita Ellis, Davis Wertheimer, Supriyo Chakraborty, Raghu Ganti, Mudhakar Srivatsa, Seetharami Seelam, Carlos Costa, Abhishek Malvankar, Diana Arroyo, Alaa Youssef, Nick Mitchell.\n\n## Appendix\n\n#### Teraflop computation\n\nThe T5-XXL (11B) architecture has two types of T5 blocks, one is an encoder and the second is a decoder. Following the approach of Megatron-LM, where each matrix multiplication requires 2m\u00d7k\u00d7n FLOPs, where the first matrix is of size m\u00d7k and the second is k\u00d7n. The encoder block consists of self-attention and feed forward layers, whereas the decoder block consists of self-attention, cross-attention, and feed forward layers.", "source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"} -{"text": "The attention (both self and cross) block consists of a QKV projection, which requires 6Bsh2 operations, an attention matrix computation requiring 2Bs2h operations, an attention over values which needs 2Bs2h computations, and the post-attention linear projection requires 2Bsh2 operations. Finally, the feed forward layer requires 15Bsh2 operations. \n\nThe total for an encoder block is 23Bsh2+4Bs2h, whereas for a decoder block, it comes to 31Bsh2+8Bs2h. 
With a total of 24 encoder and 24 decoder blocks and 2 forward passes (as we discard the activations) and one backward pass (equivalent to two forward passes), the final FLOPs computation comes to be 96\u00d7(54Bsh2+ 12Bs2h) + 6BshV. Here, B is the batch size per GPU, s is sequence length, h is hidden state size, and V is vocabulary size. \nWe repeat a similar computation for T5-XL (3B) architecture, which is slightly different.", "source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"} -{"text": "---\nlayout: blog_detail\ntitle: \"Empowering PyTorch on Intel\u00ae Xeon\u00ae Scalable processors with Bfloat16\"\nauthor: Mingfei Ma (Intel), Vitaly Fedyunin (Meta), Wei Wei (Meta)\nfeatured-img: '\\assets\\images\\empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16.png'\n---\n\n## Overview\n\nRecent years, the growing complexity of AI models have been posing requirements on hardware for more and more compute capability. Reduced precision numeric format has been proposed to address this problem. Bfloat16 is a custom 16-bit floating point format for AI which consists of one sign bit, eight exponent bits, and seven mantissa bits. With the same dynamic range as float32, bfloat16 doesn\u2019t require a special handling such as loss scaling. Therefore, bfloat16 is a drop-in replacement for float32 when running deep neural networks for both inference and training.", "source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"} -{"text": "The 3rd Gen Intel\u00ae Xeon\u00ae Scalable processor (codenamed Cooper Lake), is the first general purpose x86 CPU with native bfloat16 support. Three new bfloat16 instructions were introduced in Intel\u00ae Advanced Vector Extensions-512 (Intel\u00ae AVX-512): VCVTNE2PS2BF16, VCVTNEPS2BF16, and VDPBF16PS. The first two instructions perform conversion from float32 to bfloat16, and the last one performs a dot product of bfloat16 pairs. Bfloat16 theoretical compute throughput is doubled over float32 on Cooper Lake. On the next generation of Intel\u00ae Xeon\u00ae Scalable Processors, bfloat16 compute throughput will be further enhanced through Advanced Matrix Extensions (Intel\u00ae AMX) instruction set extension.", "source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"} -{"text": "Intel and Meta previously collaborated to enable bfloat16 on PyTorch, and the related work was published in an earlier [blog](https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Intel-and-Facebook-Accelerate-PyTorch-Performance-with-3rd-Gen/post/1335659) during launch of Cooper Lake. 
In that blog, we introduced the hardware advancement for native bfloat16 support and showcased a performance boost of 1.4x to 1.6x of bfloat16 over float32 from DLRM, ResNet-50 and ResNext-101-32x4d.\n\nIn this blog, we will introduce the latest software enhancement on bfloat16 in PyTorch 1.12, which would apply to much broader scope of user scenarios and showcase even higher performance boost.\n\n## Native Level Optimization on Bfloat16", "source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"} -{"text": "## Native Level Optimization on Bfloat16\n\nOn PyTorch CPU bfloat16 path, the compute intensive operators, e.g., convolution, linear and bmm, use oneDNN (oneAPI Deep Neural Network Library) to achieve optimal performance on Intel CPUs with AVX512_BF16 or AMX support. The other operators, such as tensor operators and neural network operators, are optimized at PyTorch native level. We have enlarged bfloat16 kernel level optimizations to majority of operators on dense tensors, both inference and training applicable (sparse tensor bfloat16 support will be covered in future work), specifically:", "source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"} -{"text": "- **Bfloat16 vectorization**: Bfloat16 is stored as unsigned 16-bit integer, which requires it to be casted to float32 for arithmetic operations such as add, mul, etc. Specifically, each bfloat16 vector will be converted to two float32 vectors, processed accordingly and then converted back. While for non-arithmetic operations such as cat, copy, etc., it is a straight memory copy and no data type conversion will be involved.\n- **Bfloat16 reduction**: Reduction on bfloat16 data uses float32 as accumulation type to guarantee numerical stability, e.g., sum, BatchNorm2d, MaxPool2d, etc.\n- **Channels Last optimization**: For vision models, Channels Last is the preferable memory format over Channels First from performance perspective. We have implemented fully optimized CPU kernels for all the commonly used CV modules on channels last memory format, taking care of both float32 and bfloat16.\n\n## Run Bfloat16 with Auto Mixed Precision", "source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"} -{"text": "## Run Bfloat16 with Auto Mixed Precision\n\nTo run model on bfloat16, typically user can either explicitly convert the data and model to bfloat16, for example:\n\n```console\n# with explicit conversion\ninput = input.to(dtype=torch.bfloat16)\nmodel = model.to(dtype=torch.bfloat16)\n```\n\nor utilize torch.amp (Automatic Mixed Precision) package. The autocast instance serves as context managers or decorators that allow regions of your script to run in mixed precision, for example:\n\n```console\n# with AMP\nwith torch.autocast(device_type=\"cpu\", dtype=torch.bfloat16):\n output = model(input)\n```\n\nGenerally, the explicit conversion approach and AMP approach have similar performance. 
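As a quick, hypothetical check (not part of the original benchmarks), a single linear layer makes the AMP behavior visible: the activation comes out in bfloat16 while the parameter stays in float32.

```python
import torch
import torch.nn as nn

model = nn.Linear(64, 64)        # parameters are created in float32
x = torch.randn(8, 64)

with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = model(x)

print(y.dtype)              # torch.bfloat16 -> activation in reduced precision
print(model.weight.dtype)   # torch.float32  -> parameters kept in full precision
```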
Even though, we recommend run bfloat16 models with AMP, because:\n\n- **Better user experience with automatic fallback**: If your script includes operators that don\u2019t have bfloat16 support, autocast will implicitly convert them back to float32 while the explicit converted model will give a runtime error.", "source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"} -{"text": "- **Mixed data type for activation and parameters**: Unlike the explicit conversion which converts all the model parameters to bfloat16, AMP mode will run in mixed data type. To be specific, input/output will be kept in bfloat16 while parameters, e.g., weight/bias, will be kept in float32. The mixed data type of activation and parameters will help improve performance while maintaining the accuracy.\n\n## Performance Gains\n\nWe benchmarked inference performance of TorchVision models on Intel\u00ae Xeon\u00ae Platinum 8380H CPU @ 2.90GHz (codenamed Cooper Lake), single instance per socket (batch size = 2 x number of physical cores). Results show that bfloat16 has 1.4x to 2.2x performance gain over float32.\n\n\n \n
\n \n
", "source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"} -{"text": "
\n \n
\nFigure 1: Performance gains of 8 training scenarios from HuggingFace\u2019s Transformer repository. First performance boost in the dark green is due to replacing the optimizer with an NVIDIA Apex fused AdamW optimizer. The light green is due to adding nvFuser. Models were run with batch size and sequence lengths of [64, 128], [8, 512], [2, 1024], [64, 128], [8, 512], [8, src_seql=512, tgt_seql=128], [8, src_seql=1024, tgt_seql=128], and [8, 512] respectively. All networks were run with Automatic Mixed Precision (AMP) enabled with dtype=float16.\n
", "source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"} -{"text": "\n\nWhile these speedups are significant, it\u2019s important to understand that nvFuser doesn\u2019t (yet) automate everything about running networks quickly. For HuggingFace Transformers, for example, it was important to use the AdamW fused optimizer from [NVIDIA\u2019s Apex repository](https://github.com/NVIDIA/apex) as the optimizer otherwise consumed a large portion of runtime. Using the fused AdamW optimizer to make the network faster exposes the next major performance bottleneck \u2014 memory bound operations. These operations are optimized by nvFuser, providing another large performance boost. With the fused optimizer and nvFuser enabled, the training speed of these networks improved between 1.12x to 1.5x.", "source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"} -{"text": "HuggingFace Transformer models were run with [the torch.amp module](https://pytorch.org/docs/stable/amp.html). (\u201camp\u201d stands for Automated Mixed Precision, see the [\u201cWhat Every User Should Know about Mixed Precision in PyTorch\u201d](https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/) blog post for details.) An option to use nvFuser was added to HuggingFace\u2019sTrainer. If you have [TorchDynamo installed](https://github.com/pytorch/torchdynamo#requirements-and-setup) you can activate it to enable nvFuser in HuggingFace by passing *torchdynamo = \u2018nvfuser\u2019* to the Trainer class.\nnvFuser has great support for normalization kernels and related fusions frequently found in Natural Language Processing (NLP) models, and it is recommended users try nvFuser in their NLP workloads.\n\n## PyTorch Image Models (TIMM) Benchmarks", "source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"} -{"text": "## PyTorch Image Models (TIMM) Benchmarks\nnvFuser, can also significantly reduce the training time of TIMM networks, up to over 1.3x vs. eager PyTorch, and up to 1.44x vs. eager PyTorch when combined with the torch.amp module. Figure 1 shows nvFuser\u2019s speedup without torch.amp, and when torch.amp is used with the NHWC (\u201cchannels last\u201d) and NCHW (\u201cchannels first\u201d) formats. nvFuser is integrated in TIMM through FuncTorch tracing directly (without TorchDynamo) and can be used by adding the [--aot-autograd command line argument](https://github.com/rwightman/pytorch-image-models/commit/ca991c1fa57373286b9876aa63370fd19f5d6032) when running the TIMM benchmark or training script.\n\n\n \n
", "source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"} -{"text": "
\n\n\nFigure 1: The Y-axis is the performance gain nvFuser provides over not using nvFuser. A value of 1.0 means no change in perf, 2.0 would mean nvFuser is twice as fast, 0.5 would mean nvFuser takes twice the time to run. Square markers are with float16 Automatic Mixed Precision (AMP) and channels first contiguous inputs, circle markers are float32 inputs, and triangles are with float16 AMP and channels last contiguous inputs. Missing data points are due to an error being encountered when tracing.\n
", "source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"} -{"text": "When running with float32 precision nvFuser provides a 1.12x geometric mean (\u201cgeomean\u201d) speedup on TIMM networks, and when running with torch.amp and \u201cchannels first\u201d it provides a 1.14x geomean speedup. However, nvFuser currently doesn\u2019t speedup torch.amp and \u201cchannels last\u201d training (a .9x geomean regression), so we recommend not using it in those cases. We are actively working on improving \u201cchannels last\u201d performance now, and soon we will have two additional optimization strategies (grid persistent optimizations for channels-last normalizations and fast transposes) which we expect will provide speedups comparable to \u201cchannels first\u201d in PyTorch version 1.13 and later. Many of nvFuser\u2019s optimizations can also help in inference cases. However, in PyTorch when running inference on small batch sizes, the performance is typically limited by CPU overhead, which nvFuser can\u2019t completely remove or fix. Therefore, typically the most important optimization for inference is to enable [CUDA Graphs](https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/) when possible. Once CUDA Graphs is enabled, then it can also be beneficial to also enable fusion through nvFuser. Performance of inference is shown in Figure 2 and Figure 3. Inference is only run with float16 AMP as it is uncommon to run inference workloads in full float32 precision.", "source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"} -{"text": "\n \n
\n \n
\nFigure 2: Performance gains of enabling CUDA Graphs, and CUDA Graphs with nvFuser compared to the performance of native PyTorch without CUDA Graphs and nvFuser across TIMM models with float16 AMP, channels first inputs, and a batch size of 1 and 8 respectively. There is a geomean speedup of 2.74x with CUDA Graphs and 2.71x with CUDA Graphs + nvFuser respectively. nvFuser provides a maximum regression of 0.68x and a maximum performance gain of 2.74x (relative to CUDA Graphs without nvFuser). Performance gain is measured relative to the average time per iteration PyTorch takes without CUDA Graphs and without nvFuser. Models are sorted by how much additional performance nvFuser is providing.\n
\n\n", "source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"} -{"text": "
\n\n\n \n
\n \n
\nFigure 3: Performance gains of enabling CUDA Graphs, and CUDA Graphs with nvFuser compared to the performance of native PyTorch without CUDA Graphs and nvFuser across TIMM models with AMP, channels last inputs, and a batch size of 1 and 8 respectively. There is a geomean speedup of 2.29x with CUDA Graphs and 2.95x with CUDA Graphs + nvFuser respectively. nvFuser provides a maximum regression of 0.86x and a maximum performance gain of 3.82x (relative to CUDA Graphs without nvFuser). Performance gain is measured relative to the average time per iteration PyTorch takes without CUDA Graphs and without nvFuser. Models are sorted by how much additional performance nvFuser is providing.\n
", "source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"} -{"text": "\n\nSo far nvFuser performance has not been tuned for inference workloads so its performance benefit is not consistent across all cases. However, there are still many models that benefit significantly from nvFuser during inference and we encourage users to try nvFuser in inference workloads to see if you would benefit today. Performance of nvFuser in inference workloads will improve in the future and if you\u2019re interested in nvFuser in inference workloads please reach out to us on the PyTorch forums.\n\n## Getting Started - Accelerate Your Scripts with nvFuser", "source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"} -{"text": "We\u2019ve created [a tutorial](https://pytorch.org/tutorials/intermediate/nvfuser_intro_tutorial.html) demonstrating how to take advantage of nvFuser to accelerate part of a standard transformer block, and how nvFuser can be used to define fast and novel operations. There are still some rough edges in nvFuser that we\u2019re working hard on improving as we\u2019ve outlined in this blog post. However we\u2019ve also demonstrated some great improvements for training speed on multiple networks in HuggingFace and TIMM and we expect there are opportunities in your networks where nvFuser can help today, and many more opportunities it will help in the future.\nIf you would like to learn more about nvFuser we recommend watching our presentations from NVIDIA\u2019s GTC conference [GTC 2022](https://www.nvidia.com/en-us/on-demand/session/gtcspring22-s41958/) and [GTC 2021](https://www.nvidia.com/en-us/on-demand/session/gtcspring21-s31952/).", "source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"} -{"text": "---\nlayout: blog_detail\ntitle: 'Introducing PyTorch Profiler - the new and improved performance tool'\nauthor: Maxim Lukiyanov - Principal PM at Microsoft, Guoliang Hua - Principal Engineering Manager at Microsoft, Geeta Chauhan - Partner Engineering Lead at Facebook, Gisle Dankel - Tech Lead at Facebook\n---\n\nAlong with [PyTorch 1.8.1 release](https://github.com/pytorch/pytorch/releases/tag/v1.8.1), we are excited to announce PyTorch Profiler \u2013 the new and improved performance debugging profiler for PyTorch. Developed as part of a collaboration between Microsoft and Facebook, the PyTorch Profiler is an open-source tool that enables accurate and efficient performance analysis and troubleshooting for large-scale deep learning models.", "source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"} -{"text": "Analyzing and improving large-scale deep learning model performance is an ongoing challenge that grows in importance as the model sizes increase. For a long time, PyTorch users had a hard time solving this challenge due to the lack of available tools. There were standard performance debugging tools that provide GPU hardware level information but missed PyTorch-specific context of operations. In order to recover missed information, users needed to combine multiple tools together or manually add minimum correlation information to make sense of the data. 
There was also the autograd profiler (```torch.autograd.profiler```) which can capture information about PyTorch operations but does not capture detailed GPU hardware-level information and cannot provide support for visualization.", "source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"} -{"text": "The new PyTorch Profiler (```torch.profiler```) is a tool that brings both types of information together and then builds experience that realizes the full potential of that information. This new profiler collects both GPU hardware and PyTorch related information, correlates them, performs automatic detection of bottlenecks in the model, and generates recommendations on how to resolve these bottlenecks. All of this information from the profiler is visualized for the user in TensorBoard. The new Profiler API is natively supported in PyTorch and delivers the simplest experience available to date where users can profile their models without installing any additional packages and see results immediately in TensorBoard with the new PyTorch Profiler plugin. Below is the screenshot of PyTorch Profiler - automatic bottleneck detection. \n\n\n \n
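A minimal usage sketch of the new API (the toy model, inputs, and log directory below are made-up placeholders, not the workload in the screenshot) looks like this:

```python
import torch
from torch.profiler import profile, schedule, tensorboard_trace_handler, ProfilerActivity

model = torch.nn.Linear(512, 512).cuda()
inputs = [torch.randn(64, 512, device="cuda") for _ in range(10)]

with profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    schedule=schedule(wait=1, warmup=1, active=3),            # sample a few steps
    on_trace_ready=tensorboard_trace_handler("./log/example_run"),
    record_shapes=True,
    with_stack=True,
) as prof:
    for batch in inputs:
        model(batch)
        prof.step()   # tell the profiler that one step has finished
```

The resulting trace can then be opened in TensorBoard with the PyTorch Profiler plugin.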
", "source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"} -{"text": "
\n \n
\n \n
\n \n
\n \n
\n \n
\n \n
\n \n
\n \n
\n \n
\n Fig 1. PyTorch <3 Quantization\n
\n \n
Fig 2. Clipping ranges (in purple) for affine and symmetric schemes\n
\n \n
Fig 3. Per-Channel uses one set of qparams for each channel. Per-tensor uses the same qparams for the entire tensor.\n
", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"} -{"text": "
\n \n
\n Fig 4. Steps in Post-Training Static Quantization\n
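The following hedged sketch walks through those steps on a toy module (the model, qconfig choice, and random calibration data are illustrative assumptions, not a recipe from this post): fuse, insert observers, calibrate, convert.

```python
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()      # fp32 -> int8 boundary
        self.conv = nn.Conv2d(3, 16, 3)
        self.relu = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()  # int8 -> fp32 boundary
    def forward(self, x):
        return self.dequant(self.relu(self.conv(self.quant(x))))

model = M().eval()
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")   # x86 backend
torch.quantization.fuse_modules(model, [["conv", "relu"]], inplace=True)
torch.quantization.prepare(model, inplace=True)     # insert observers
with torch.no_grad():                               # calibrate on representative data
    for _ in range(8):
        model(torch.randn(1, 3, 32, 32))
torch.quantization.convert(model, inplace=True)     # quantize weights and activations
print(model)
```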
\n \n
\n Fig 5. Steps in Quantization-Aware Training", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}
-{"text": "Fig 5. Steps in Quantization-Aware Training\n
\n \n
\n Fig 6. Comparison of PTQ and QAT convergence [3]\n
\n \n
Fig 7. Fake Quantization in the forward and backward pass \n
Image source: https://developer.nvidia.com/blog/achieving-fp32-accuracy-for-int8-inference-using-quantization-aware-training-with-tensorrt\n
\n \n
\n Fig 8. Comparing model weights and activations\n
\n \n
\n Fig 9. Suggested quantization workflow\n
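For the dynamic-quantization branch of that workflow, a minimal sketch (toy model and shapes assumed purely for illustration) is essentially a one-liner around `torch.quantization.quantize_dynamic`: weights are quantized ahead of time, activations are quantized on the fly.

```python
import torch
import torch.nn as nn

float_model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).eval()

quantized_model = torch.quantization.quantize_dynamic(
    float_model,
    {nn.Linear},          # layer types to quantize dynamically
    dtype=torch.qint8,
)

x = torch.randn(4, 128)
print(quantized_model(x).shape)   # same interface, int8 weights under the hood
```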
\n \n
\nFigure 1: Correspondence of `backward`/`grad` arguments in the graphs.\n
\n\n# Going Inside the Autograd Engine\n\n## Refreshing Concepts: Nodes and Edges\n\nAs we saw in [2](https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/)\nThe computational graph comprises `Node` and `Edge` objects. Please read that post if you haven\u2019t done it yet.\n\n### Nodes", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} -{"text": "### Nodes\n\n`Node` objects are defined in [`torch/csrc/autograd/function.h`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/function.h#L105-L176), and they provide an overload of `operator()` for the associated function and a list of edges to do the graph traversal. Note that `Node` is a base class that autograd functions inherit from and override the `apply` method to execute the backward function.\n```c++\nstruct TORCH_API Node : std::enable_shared_from_this\n \n
", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} -{"text": "
\n\n\nFigure 2: Example of the Topological Number calculation\n
\n\n### Edges\n\nThe [`Edge`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/edge.h#L14-L39) object links `Node`s together, and its implementation is straightforward.\n\n```c++\nstruct Edge {\n ...\n /// The function this `Edge` points to.\n std::shared_ptr\n \n
\nFigure 3: Number of dependencies for each node\n
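A small Python-level illustration of these structures (not part of the C++ walk-through above): every `grad_fn` is a `Node`, and `next_functions` exposes its outgoing `Edge`s as `(node, input_nr)` pairs, which is also the information the dependency counts in Figure 3 are built from.

```python
import torch

x = torch.tensor([0.5, 0.75], requires_grad=True)
y = (x[0] * x[1]).log()

print(y.grad_fn)                    # the Log backward node
print(y.grad_fn.next_functions)     # one edge pointing at the Mul backward node
print(y.grad_fn.next_functions[0][0].next_functions)  # edges into the two Select backward nodes
```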
", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} -{"text": "\n\nFinally, the [`init_to_execute`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L1281-L1383) call, this is the one that populates the `GraphTask::exec_info_` map in case that `inputs` were specified in the python `backward` call. It iterates the graph again, starting from the root, and records in the `exec_info_` map the intermediate nodes needed to calculate only the given `inputs` gradients.\n\n```c++\n // Queue the root\n if (skip_dummy_node) {\n InputBuffer input_buffer(roots.at(0).function->num_inputs());\n auto input = inputs.at(0);\n\n\n input_buffer.add(roots.at(0).input_nr,\n std::move(input),\n input_stream,\n opt_next_stream);\n\n execute_with_graph_task(graph_task, graph_root, std::move(input_buffer));\n } else {\n execute_with_graph_task(graph_task, graph_root, InputBuffer(variable_list()));\n }", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} -{"text": "}\n // Avoid a refcount bump for the Future, since we check for refcount in\n // DistEngine (see TORCH_INTERNAL_ASSERT(futureGrads.use_count() == 1)\n // in dist_engine.cpp).\n auto& fut = graph_task->future_result_;\n fut->wait();\n return fut->value().toTensorVector();\n}\n\n```\n\nAnd now, we are ready to start the actual execution by creating the `InputBuffer`. In case we only have one root variable, we begin by copying the value of the `inputs` tensor (this is the `gradients` passed to python `backward`) in position 0 of the input_buffer. This is a small optimization that avoids running the `RootNode` for no reason. Also, if the rest of the graph is not on the cpu, we directly start on that worker while the `RootNode` is always placed on the cpu ready queue. Details of the workers and ready queues are explained in the section below.\n\nOn the other hand, if we have multiple roots, the `GraphRoot` object also holds the inputs, so it is enough to pass it an empty `InputBuffer`.", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} -{"text": "## Graph Traversal and Node Execution\n### Devices, Threads and Queues\n\nBefore diving into the actual execution, we need to see how the engine is structured.\n\nFirst of all, the engine is multithreaded with one thread per device. For example, the caller thread is associated with the CPU while additional threads are created and associated with each GPU or other devices available in the system. Each thread tracks its device using thread-local storage in the [`worker_device`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L69) variable. In addition, the threads have a queue of tasks to be executed also located in thread-local storage, the [`local_ready_queue`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L103-L104). This is where work is queued for this thread to execute in the `thread_main` function that is explained later.", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} -{"text": "You will wonder how the device where a task should be executed is decided. 
The `InputBuffer` class has a [`device()`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/input_buffer.cpp#L173-L189) function that returns the first non-cpu device of all its tensors.\nThis function is used together with [`Engine::ready_queue`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L1181-L1190) to select the queue to queue a task.\n\n```c++\nauto Engine::ready_queue(std::shared_ptr\n \n
\nFigure 4: Correspondence between forward and backward functions inputs and outputs\n
", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} -{"text": "\n\nWe now iterate through these edges and check if the associated functions are ready to be executed.\n\n```c++\n // Check if the next function is ready to be computed\n bool is_ready = false;\n auto& dependencies = graph_task->dependencies_;\n auto it = dependencies.find(next.function.get());\n\n if (it == dependencies.end()) {\n auto name = next.function->name();\n throw std::runtime_error(std::string(\"dependency not found for \") + name);\n } else if (--it->second == 0) {\n dependencies.erase(it);\n is_ready = true;\n }\n\n auto& not_ready = graph_task->not_ready_;\n auto not_ready_it = not_ready.find(next.function.get());\n\n```\n\nFor this, we check the `graph_task->dependencies_` map. We decrement the counter, and if it reaches 0, we mark the function pointed by the edge ready to be executed. Following, we prepare the input buffers of the tasks indicated by the next edges.\n\n```c++\n if (not_ready_it == not_ready.end()) {\n if (!exec_info_.empty()) {", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} -{"text": "if (!exec_info_.empty()) {\n // Skip functions that aren't supposed to be executed\n }\n\n // Creates an InputBuffer and moves the output to the corresponding input position\n InputBuffer input_buffer(next.function->num_inputs());\n input_buffer.add(next.input_nr,\n std::move(output),\n opt_parent_stream,\n opt_next_stream);\n\n if (is_ready) {\n auto queue = ready_queue(cpu_ready_queue, input_buffer.device());\n queue->push(\n NodeTask(graph_task, next.function, std::move(input_buffer)));\n } else {\n not_ready.emplace(next.function.get(), std::move(input_buffer));\n }\n\n```", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} -{"text": "}\n\n```\n\nHere, we look for the task in the `graph_task->not_ready_` map. If it is not present, we create a new `InputBuffer` object and set the current output in the `input_nr` position of the buffer associated with the edge. If the task is ready to be executed, we enqueue it in the appropriate device `ready_queue` and complete the execution. However, if the task is not ready and we have seen it before, it is present in the `not_ready_map_`.\n\n```c++\n } else {\n // The function already has a buffer\n auto &input_buffer = not_ready_it->second;\n // Accumulates into buffer\n input_buffer.add(next.input_nr,\n std::move(output),\n opt_parent_stream,\n opt_next_stream);\n if (is_ready) {\n auto queue = ready_queue(cpu_ready_queue, input_buffer.device());\n queue->push(NodeTask(graph_task, next.function, std::move(input_buffer)));\n not_ready.erase(not_ready_it);\n }\n }\n }\n}\n\n```", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} -{"text": "}\n }\n }\n}\n\n```\n\nIn this case, we accumulate the output in the existing `input_buffer` instead of creating a new one. Once all the tasks are processed, the worker thread exits the loop and complete.\nAll this process is summarized in the animation in Figure 5. We see how a thread peeks at the tasks in the ready queue and decrements the next nodes' dependencies, unlocking them for execution.\n\n\n \n
\nFigure 5: Animation of the execution of the computational graph\n
\n\n## Flow with Reentrant Backward\n\nAs we saw above, the reentrant backward problem is when the currently executed function does a nested call to `backward`. When this happens, the thread running this function goes all the way down to `execute_with_graph_task` as in the non-reentrant case, but here is when things are different.\n\n```c++\nc10::intrusive_ptrArchitecture\n | \nResNet18 (45MB)\n | \n
Workers\n | \n64, 128, 256\n | \n
Backend\n | \nNCCL\n | \n
GPU\n | \nTesla T4, 16 GB memory\n | \n
Batch size\n | \n32 x ## of workers\n | \n
Straggler Duration\n | \n1 sec\n | \n
Straggler Rate\n | \n1% on 128 and 256 GPUs, 2% on 64 GPUs\n | \n
\n \n
epochs | \nMixed Precision Top 1(%) | ", "source": "https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/", "category": "pytorch blogs"} -{"text": "TF32 Top1(%) | \n
90 | \n76.93 | \n76.85 | \n
epochs | \nMixed Precision Top 1(%) | \nFP32 Top1(%) | \n
50 | \n76.25 | \n76.26 | \n
90 | \n77.09 | \n77.01 | \n
250 | \n78.42 | \n78.30 | \n
\n", "source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"} -{"text": "Researchers from **Uber**, **Northeastern** and **Stanford** came together to form an active probabilistic programming community around their packages [Pyro](http://pyro.ai/) and [ProbTorch](https://github.com/probtorch/probtorch). They are actively developing the torch.distributions core package. This community is so active and fast-moving, we had our first pytorch-probabilistic-programming meetup at NIPS 2017 with Fritz Obermeyer, Noah Goodman, Jan-Willem van de Meent, Brooks Paige, Dustin Tran and 22 additional attendees discussing how to make the world bayesian.\n\nWe're releasing @PyTorch-QRNN, 2-17x faster than NVIDIA's cuDNN LSTM.
— Smerity (@Smerity) October 9, 2017
Speed thanks to 50 lines of CUDA via CuPy.https://t.co/KaWhN4yDZd pic.twitter.com/yoLYj3pMI0
\n", "source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"} -{"text": "I've been using PyTorch a few months now and I've never felt better. I have more energy. My skin is clearer. My eye sight has improved.
— Andrej Karpathy (@karpathy) May 26, 2017
\n\n\nTalk to your doctor to find out if PyTorch is right for you.
— Sean Robertson (@sprobertson) May 26, 2017
\n", "source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"} -{"text": "PyTorch gave me so much life that my skin got cleared, my grades are up, my bills are paid and my crops are watered.
— Adam Will \u00f0\ufe0f\u200d\u00f0 (@adam_will_do_it) May 26, 2017
\n", "source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"} -{"text": "---\nlayout: blog_detail\ntitle: 'PyTorch for AMD ROCm\u2122 Platform now available as Python package'\nauthor: Niles Burbank \u2013 Director PM at AMD, Mayank Daga \u2013 Director, Deep Learning Software at AMD\n---\n\nWith the PyTorch 1.8 release, we are delighted to announce a new installation option for users of\nPyTorch on the ROCm\u2122 open software platform. An installable Python package is now hosted on\npytorch.org, along with instructions for local installation in the same simple, selectable format as\nPyTorch packages for CPU-only configurations and other GPU platforms. PyTorch on ROCm includes full\ncapability for mixed-precision and large-scale training using AMD\u2019s MIOpen & RCCL libraries. This\nprovides a new option for data scientists, researchers, students, and others in the community to get\nstarted with accelerated PyTorch using AMD GPUs.\n\nSo have I! But my hair is also shiner and I've lost weight. @PyTorch for the win. https://t.co/qgU4oIOB4K
— Mariya (@thinkmariya) May 26, 2017
\n \n
\n Accelerated GPU training and evaluation speedups over CPU-only (times faster)\n
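A hedged sketch of opting into the fastpath (the sizes and padding pattern below are arbitrary examples, not benchmark settings): build the encoder with `enable_nested_tensor=True`, keep it in eval mode, and run inference with a padding mask so the sparsity optimization can apply.

```python
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=6, enable_nested_tensor=True)
encoder.eval()

src = torch.rand(32, 50, 256)                        # (batch, sequence, feature)
padding_mask = torch.zeros(32, 50, dtype=torch.bool)
padding_mask[:, 40:] = True                          # last 10 tokens are padding

with torch.inference_mode():                         # inference-only fastpath
    out = encoder(src, src_key_padding_mask=padding_mask)
print(out.shape)   # torch.Size([32, 50, 256])
```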
\n\nAlongside the new MPS device support, the M1 binaries for Core and Domain libraries that have been available for the last few releases are now an official prototype feature. These binaries can be used to run PyTorch natively on Apple Silicon.", "source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"} -{"text": "### (Prototype) BetterTransformer: Fastpath execution for Transformer Encoder Inference\n\nPyTorch now supports CPU and GPU fastpath implementations (\u201cBetterTransformer\u201d) for several Transformer Encoder modules including TransformerEncoder, TransformerEncoderLayer, and MultiHeadAttention (MHA). The BetterTransformer fastpath architecture Better Transformer is consistently faster \u2013 2x for many common execution scenarios, depending on model and input characteristics. The new BetterTransformer-enabled modules are API compatible with previous releases of the PyTorch Transformer API and will accelerate existing models if they meet fastpath execution requirements, as well as read models trained with previous versions of PyTorch. PyTorch 1.12 includes: \n- BetterTransformer integration for Torchtext\u2019s pretrained RoBERTa and XLM-R models\n- Torchtext which builds on the PyTorch Transformer API", "source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"} -{"text": "- Fastpath execution for improved performance by reducing execution overheads with fused kernels which combines multiple operators into a single kernel\n- Option to achieve additional speedups by taking advantage of data sparsity during the processing of padding tokens in natural-language processing (by setting enable_nested_tensor=True when creating a TransformerEncoder)\n- Diagnostics to help users understand why fastpath execution did not occur\n \n\n\n \n
\n \n
\n\nIn December, we announced PyTorch Live, a toolkit for building AI-powered mobile prototypes in minutes. The initial release included a command-line interface to set up a development environment and an SDK for building AI-powered experiences in React Native. Today, we're excited to share that PyTorch Live will now be known as PlayTorch. This new release provides an improved and simplified developer experience. PlayTorch development is independent from the PyTorch project and the PlayTorch code repository is moving into the Meta Research GitHub organization.\n\n## A New Workflow: The PlayTorch App", "source": "https://pytorch.org/blog/introducing-the-playtorch-app/", "category": "pytorch blogs"} -{"text": "## A New Workflow: The PlayTorch App\n\nThe PlayTorch team is excited to announce that we have partnered with [Expo](https://expo.dev) to change the way AI powered mobile experiences are built. Our new release simplifies the process of building mobile AI experiences by eliminating the need for a complicated development environment. You will now be able to build cross platform AI powered prototypes from the very browser you are using to read this blog.\n\nIn order to make this happen, we are releasing the [PlayTorch app](https://playtorch.dev/) which is able to run AI-powered experiences built in the [Expo Snack](https://snack.expo.dev/@playtorch/playtorch-starter?supportedPlatforms=my-device) web based code editor.\n\n\n \n
\n \n
\n \n
\n \n
\n \n
\n \n
\n \n
Figure 1: Computational graph of f(x, y) = log(x*y)
\nFigure 2: Computational graph extended after executing the logarithm
\nFigure 3: Computational graph extended after executing the logarithm
\nFigure 4: Computational graph extended for reverse auto differentiation
\n>>> import torch\n
>>> x = torch.tensor([0.5, 0.75], requires_grad=True)\n
>>> y = torch.log(x[0] * x[1]) * torch.sin(x[1])\n
>>> y.backward(torch.tensor(1.0))\n
>>> x.grad\n tensor([1.3633,\n 0.1912])", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} -{"text": "tensor([1.3633,\n 0.1912])\n
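As a hypothetical cross-check (not part of the original example), the same partial derivatives of f(x0, x1) = log(x0*x1)*sin(x1) can be written out by hand and compared against `x.grad`:

```python
import torch

x = torch.tensor([0.5, 0.75], requires_grad=True)
y = torch.log(x[0] * x[1]) * torch.sin(x[1])
y.backward()

dx0 = torch.sin(x[1]) / x[0]                                             # df/dx0
dx1 = torch.sin(x[1]) / x[1] + torch.log(x[0] * x[1]) * torch.cos(x[1])  # df/dx1
print(x.grad)                   # tensor([1.3633, 0.1912])
print(torch.stack([dx0, dx1]))  # matches the autograd result
```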
Figure 5: Computational Graph extended with the backward pass
\nFigure 6: How the chain rule is applied in backward differentiation
\nRecommended: shows why the backprop is formally expressed with the Jacobian
\n\n \n
\n \n
\n \n
\n \n
\n\n
\n\n
Optimization \n | \nT5 Model \n | \nThroughput Improvement \n | \n
Mixed Precision\n | \n3 B\n | \n5x\n | \n
11 B\n | \n10x\n | \n|
Activation Checkpointing (AC)\n | \n3 B\n | \n10x\n | \n
11 B\n | \n100x\n | \n|
Transformer Wrapping Policy\n | \n3 B\n | \n2x\n | \n3 B\n | \n2x\n | \n \n
11 B\n | \nUnable to run the experiment without the Transformer wrapping policy.\n | \n|
Full Shard Strategy\n | \n3 B\n | \n1.5x\n | \n
11 B\n | \nNot able to run with Zero2\n | \n
\n\n
", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} -{"text": "
\n\n
\n\n
\n\n
\n\n
\n\n
\n\n
\n\n
\n\n
\n\n
\n\n
AWS ParallelCluster is an open source, cluster management tool that makes it easy for you to deploy and manage High Performance Computing (HPC) clusters on AWS. AWS ParallelCluster uses yaml configuration files to provision all the necessary resources. It also supports multiple instance types, job submission queues, shared file systems like Amazon EFS (NFS) or Amazon FSx for Lustre, and job schedulers like AWS Batch and Slurm.
\n\n\n\n
\n \n
\nFigure 1: A simplified illustration of the Meta\u2019s AI performance profiling (MAIProf) infrastructure.\n
", "source": "https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/", "category": "pytorch blogs"} -{"text": "\n\nFigure 1 gives a simplified illustration of the AI performance profiling infrastructure at Meta. ML research and performance engineers submit through the User Portal a profiling request for a training job to the Profiling Service, which subsequently broadcasts the request to all the GPU hosts running the training job. When the Monitoring Daemon on a GPU host receives the profiling request, it will notify the Kineto GPU tracer (built on top of NVIDIA\u2019s libcupti) inside the PyTorch program corresponding to the training job. As a result, Kineto traces will be collected and uploaded to the Object Store asynchronously (in more details: there is one Kineto trace collected for each individual GPU, each is treated and stored as a blob; an example will be given in Section 2). Meanwhile, MAIProf also collects a variety of aggregated performance metrics: the Monitoring Daemon on every GPU host continuously reads performance counters from NVIDIA\u2019s DCGM/NVML and logs them to a Time Series DB.", "source": "https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/", "category": "pytorch blogs"} -{"text": "Once both trace and metrics collections are completed, the Profiling Service will automatically download traces from the Object Store for trace analysis and performance metrics from the Time Series DB for metric analysis. Finally, an overall profiling report with detailed and insightful analysis is delivered to the user.\n\nTo serve production uses, we deliberately made the following design choices for MAIProf:\n\n- **No source-code change required in the PyTorch models**: profiling is triggered by sampling the execution of an unmodified model for a user-specified amount of time.\n- **Provide a holistic view of performance**: MAIProf performs system-wide analysis that cover both CPU and GPU. Under the hood, it invokes various CPU tools (e.g., Python tracer, Autograd Observer) and GPU tools (e.g., Kineto, DCGM) and correlates their results.", "source": "https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/", "category": "pytorch blogs"} -{"text": "- **Provide multiple tools that target a wide range of AI partitioners**: At Meta, there are engineers with different backgrounds who may need to tune their AI workload performance. Some of them are AI experts while others are general software engineers. Therefore, MAIProf provides a variety of tools for different levels of performance debugging, from high-level automatic trace comprehension to low-level trace analysis.\n- **Support distributed GPU profiling**: MAIProf can collect profiling data from multiple hosts, each with multiple GPUs. It then shows a combined view/analysis of the entire system.\n- **Highly scalable**: MAIProf is built as a service on top of existing infrastructures in Meta data centers such as a scalable storage system called Manifold. Its profiling capability can be easily scaled by adding more machines in the service pool with the increase of workloads.\n\n## 2. Case Study: Optimizing a Protection PyTorch Model", "source": "https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/", "category": "pytorch blogs"} -{"text": "To be concrete, we use a case study on a protection PyTorch model used in production. 
First, we discuss our steps for identifying the performance bottlenecks in the model with MAIProf. Then we describe the corresponding optimizations applied and their impacts.\n\n### 2.1 Performance Bottlenecks\n\n#### Step 1: \n\nInspect the CPU and GPU utilization on the same timeline, as shown in Figure 2.\n\n\n \n
\nFigure 2: CPU usage over time (the top) vs. GPU usage over time (the bottom).\n
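MAIProf reads these utilization counters from NVIDIA's DCGM/NVML through its Monitoring Daemon. Outside that infrastructure, a rough sketch of the same idea is to poll NVML yourself; this assumes the third-party `pynvml` (nvidia-ml-py) package and an NVIDIA GPU, and is not Meta's actual tooling:

```python
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)    # GPU 0
samples = []
for _ in range(30):                              # one sample per second
    util = pynvml.nvmlDeviceGetUtilizationRates(handle)
    samples.append(util.gpu)                     # GPU (SM) utilization in percent
    time.sleep(1.0)
pynvml.nvmlShutdown()
print(f"mean GPU utilization: {sum(samples) / len(samples):.1f}%")
```

Plotting the samples over time gives a timeline similar in spirit to the bottom half of Figure 2.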
\n\nThe first performance anomaly we noticed in Figure 2 is the pattern: *\u201cGPU-idle, GPU-active, GPU-idle, GPU-active \u2026\u201d* throughout the training. Overall, the GPU is idle for more than half of the training time (this is bad for performance because the GPU is a higher-performance device and so we want it to be utilized as much as possible).\n\n#### Step 2:", "source": "https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/", "category": "pytorch blogs"} -{"text": "#### Step 2:\n\nCollect a Python function call trace on the CPU with MAIProf while the GPU is idle, which is shown in Figure 3.\n\n\n \n
\nFigure 3: A Python call trace.\n
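The Python tracer shown here is part of MAIProf. As a hedged stand-in outside Meta, the standard-library cProfile gives a comparable CPU-side view when run over a single training iteration; `train_step` below is a hypothetical placeholder for your own loop body:

```python
import cProfile
import pstats

def train_step():
    ...  # placeholder: data loading + one forward/backward pass of your model

profiler = cProfile.Profile()
profiler.enable()
train_step()
profiler.disable()

# print the 20 most expensive calls by cumulative time
pstats.Stats(profiler).sort_stats("cumulative").print_stats(20)
```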
", "source": "https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/", "category": "pytorch blogs"} -{"text": "Figure 3: A Python call trace.\n\n\nThe Python trace shows that most of the CPU time is spent inside a Python function `sharded_iterrows()`. From the source code of the model, we learned that this function processes a big feature table in parallel. The number of worker threads used is controlled by a configurable parameter (`num_worker_threads`). Also, after investigating how the feature table is generated, we understood the performance anomaly: the training dataset is too large to fit in the CPU memory all at once; it needs to be broken into multiple sub-datasets, each has sufficient data for running 10 epochs. Consequently, a new sub-dataset needs to be read from the disk to memory every 10 epochs, during which the GPU is totally idle.\n\n#### Step 3:\n\nCollect GPU performance metrics, which is shown in Figure 4.\n\n\n \n
", "source": "https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/", "category": "pytorch blogs"} -{"text": "
\n\n\nFigure 4: GPU performance metrics in MAIProf.\n
\n\nWe made the following observations from Figure 4:\n\n- The streaming multiprocessor (SM) runs the model\u2019s CUDA kernels. Its utilization [1] is 9.1%, indicating that the parallel compute units on the GPU are not well utilized.\n- Tensor Core utilization is 0, meaning that Tensor Core (the mixed-precision compute unit on GPU) [2] is not used at all.\n- Max GPU memory utilization is 47.13%, indicating that half of the GPU memory is left unused.\n\n#### Step 4:\n\nCollect a GPU trace (aka Kineto trace) of the training loop as shown in Figure 5.\n\n\n \n
\nFigure 5: A GPU trace (aka Kineto trace) of the training loop.\n
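MAIProf collects these Kineto traces automatically on its GPU hosts. As a sketch of the equivalent step with open-source tooling, the PyTorch profiler (which uses Kineto underneath) can export a trace viewable in chrome://tracing or Perfetto; the toy model below simply stands in for a real training iteration:

```python
import torch
from torch.profiler import profile, ProfilerActivity

model = torch.nn.Linear(512, 512)
x = torch.randn(64, 512)

activities = [ProfilerActivity.CPU]
if torch.cuda.is_available():                 # also record GPU kernels when a GPU is present
    model, x = model.cuda(), x.cuda()
    activities.append(ProfilerActivity.CUDA)

with profile(activities=activities) as prof:
    for _ in range(10):
        model(x).sum().backward()

prof.export_chrome_trace("trace.json")        # inspect the timeline in chrome://tracing
```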
", "source": "https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/", "category": "pytorch blogs"} -{"text": "\n\nSince commonly used PyTorch functions are already annotated, their names are automatically shown on the trace. With them, we can roughly divide the trace into the four phases in a training iteration: (1) data loading, (2) forward pass, (3) backward pass, (4) gradient optimization (note: In Figure 5, the \u201coptimizer\u201d phase is from the previous batch while the other three phases are from the current batch).\n\n### 2.2 Optimizations\n\nWe performed four simple optimizations that target the bottlenecks identified above, each requiring only a change in a config parameter or at most a few source lines. They are listed in Figure 6.\n\n| Optimization | Amount of changes | Bottlenecks addressed |\n| ------------ | ----------------- | --------------------- |\n|Tune `num_worker_threads` by trying a few possible values within the number of CPU cores on each host. | 1 source line | GPU totally idle time |\n| Double the batch sizes | 2 config parameters | GPU memory under-utilization |", "source": "https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/", "category": "pytorch blogs"} -{"text": "| Use [automatic mixed precision](https://pytorch.org/tutorials/recipes/recipes/amp_recipe.html) in PyTorch | 13 source lines | Zero Tensor Core utilization |\n| Use [mulitensor optimizer](https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html#torch.optim.AdamW) in PyTorch | 1 source line | Many small GPU kernels in the optimizer |\n\n\nFigure 6: Four simple optimizations applied.\n
\n\n## 3. Concluding Remarks\n\nPerformance tuning for PyTorch in production environments is increasingly important. A capable performance-debugging tool is a key to this process. We demonstrate with a case study on a production model that MAIProf is a powerful infrastructure for identifying optimization opportunities.", "source": "https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/", "category": "pytorch blogs"} -{"text": "At Meta, MAIProf has been used by 100s of engineers, from performance novices to experts, to identify many more types of bottlenecks. These include slow data loading, small and/or slow GPU kernels, distributed training issues such as load imbalance and excessive communication. MAIProf covers major classes of models, including recommendation, vision, and natural language processing. In summary, it is now an indispensable tool for tuning the performance of production PyTorch workloads.\n\n## References\n\n[1] [https://docs.nvidia.com/gameworks/content/developertools/desktop/analysis/report/ cudaexperiments/kernellevel/achievedoccupancy.htm](https://docs.nvidia.com/gameworks/content/developertools/desktop/analysis/report/cudaexperiments/kernellevel/achievedoccupancy.htm)\n\n[2] [https://www.nvidia.com/en-us/data-center/tensor-cores/](https://www.nvidia.com/en-us/data-center/tensor-cores/)", "source": "https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/", "category": "pytorch blogs"} -{"text": "---\nlayout: blog_detail\ntitle: 'The torch.fft module: Accelerated Fast Fourier Transforms with Autograd in PyTorch'\nauthor: Mike Ruberry, Peter Bell, and Joe Spisak \n---\n\nThe Fast Fourier Transform (FFT) calculates the Discrete Fourier Transform in O(n log n) time. It is foundational to a wide variety of numerical algorithms and signal processing techniques since it makes working in signals\u2019 \u201cfrequency domains\u201d as tractable as working in their spatial or temporal domains.\n\nAs part of PyTorch\u2019s goal to support hardware-accelerated deep learning and scientific computing, we have invested in improving our FFT support, and with PyTorch 1.8, we are releasing the ``torch.fft`` module. This module implements the same functions as NumPy\u2019s ``np.fft`` module, but with support for accelerators, like GPUs, and autograd. \n\n## Getting started", "source": "https://pytorch.org/blog/the-torch.fft-module-accelerated-fast-fourier-transforms-with-autograd-in-pyTorch/", "category": "pytorch blogs"} -{"text": "## Getting started\n\nGetting started with the new ``torch.fft`` module is easy whether you are familiar with NumPy\u2019s ``np.fft`` module or not. 
While complete documentation for each function in the module can be found [here](https://pytorch.org/docs/1.8.0/fft.html), a breakdown of what it offers is:\n\n* ``fft``, which computes a complex FFT over a single dimension, and ``ifft``, its inverse\n* the more general ``fftn`` and ``ifftn``, which support multiple dimensions\n* The \u201creal\u201d FFT functions, ``rfft``, ``irfft``, ``rfftn``, ``irfftn``, designed to work with signals that are real-valued in their time domains\n* The \"Hermitian\" FFT functions, ``hfft`` and ``ihfft``, designed to work with signals that are real-valued in their frequency domains\n* Helper functions, like ``fftfreq``, ``rfftfreq``, ``fftshift``, ``ifftshift``, that make it easier to manipulate signals", "source": "https://pytorch.org/blog/the-torch.fft-module-accelerated-fast-fourier-transforms-with-autograd-in-pyTorch/", "category": "pytorch blogs"} -{"text": "We think these functions provide a straightforward interface for FFT functionality, as vetted by the NumPy community, although we are always interested in feedback and suggestions!\n\nTo better illustrate how easy it is to move from NumPy\u2019s ``np.fft`` module to PyTorch\u2019s ``torch.fft`` module, let\u2019s look at a NumPy implementation of a simple low-pass filter that removes high-frequency variance from a 2-dimensional image, a form of noise reduction or blurring:\n\n```python\nimport numpy as np\nimport numpy.fft as fft\n\ndef lowpass_np(input, limit):\n pass1 = np.abs(fft.rfftfreq(input.shape[-1])) < limit\n pass2 = np.abs(fft.fftfreq(input.shape[-2])) < limit\n kernel = np.outer(pass2, pass1)\n \n fft_input = fft.rfft2(input)\n return fft.irfft2(fft_input * kernel, s=input.shape[-2:])\n```\n\nNow let\u2019s see the same filter implemented in PyTorch:\n\n```python\nimport torch\nimport torch.fft as fft\n\ndef lowpass_torch(input, limit):\n pass1 = torch.abs(fft.rfftfreq(input.shape[-1])) < limit", "source": "https://pytorch.org/blog/the-torch.fft-module-accelerated-fast-fourier-transforms-with-autograd-in-pyTorch/", "category": "pytorch blogs"} -{"text": "pass2 = torch.abs(fft.fftfreq(input.shape[-2])) < limit\n kernel = torch.outer(pass2, pass1)\n \n fft_input = fft.rfft2(input)\n return fft.irfft2(fft_input * kernel, s=input.shape[-2:])\n```\n\nNot only do current uses of NumPy\u2019s ``np.fft`` module translate directly to ``torch.fft``, the ``torch.fft`` operations also support tensors on accelerators, like GPUs and autograd. This makes it possible to (among other things) develop new neural network modules using the FFT.\n\n\n## Performance\n\nThe ``torch.fft`` module is not only easy to use \u2014 it is also fast! PyTorch natively supports Intel\u2019s MKL-FFT library on Intel CPUs, and NVIDIA\u2019s cuFFT library on CUDA devices, and we have carefully optimized how we use those libraries to maximize performance. While your own results will depend on your CPU and CUDA hardware, computing Fast Fourier Transforms on CUDA devices can be many times faster than computing it on the CPU, especially for larger signals.", "source": "https://pytorch.org/blog/the-torch.fft-module-accelerated-fast-fourier-transforms-with-autograd-in-pyTorch/", "category": "pytorch blogs"} -{"text": "In the future, we may add support for additional math libraries to support more hardware. 
See below for where you can request additional hardware support.\n\n## Updating from older PyTorch versions\n\nSome PyTorch users might know that older versions of PyTorch also offered FFT functionality with the ``torch.fft()`` function. Unfortunately, this function had to be removed because its name conflicted with the new module\u2019s name, and we think the new functionality is the best way to use the Fast Fourier Transform in PyTorch. In particular, ``torch.fft()`` was developed before PyTorch supported complex tensors, while the ``torch.fft`` module was designed to work with them.\n\nPyTorch also has a \u201cShort Time Fourier Transform\u201d, ``torch.stft``, and its inverse ``torch.istft``. These functions are being kept but updated to support complex tensors. \n\n## Future", "source": "https://pytorch.org/blog/the-torch.fft-module-accelerated-fast-fourier-transforms-with-autograd-in-pyTorch/", "category": "pytorch blogs"} -{"text": "## Future\n\nAs mentioned, PyTorch 1.8 offers the torch.fft module, which makes it easy to use the Fast Fourier Transform (FFT) on accelerators and with support for autograd. We encourage you to try it out!\n\nWhile this module has been modeled after NumPy\u2019s ``np.fft`` module so far, we are not stopping there. We are eager to hear from you, our community, on what FFT-related functionality you need, and we encourage you to create posts on our forums at [https://discuss.pytorch.org/](https://discuss.pytorch.org/), or [file issues on our Github](https://github.com/pytorch/pytorch/issues/new?assignees=&labels=&template=feature-request.md) with your feedback and requests. Early adopters have already started asking about Discrete Cosine Transforms and support for more hardware platforms, for example, and we are investigating those features now.\n\nWe look forward to hearing from you and seeing what the community does with PyTorch\u2019s new FFT functionality!", "source": "https://pytorch.org/blog/the-torch.fft-module-accelerated-fast-fourier-transforms-with-autograd-in-pyTorch/", "category": "pytorch blogs"} -{"text": "---\nlayout: blog_detail\ntitle: \"PyTorch Internals Part II - The Build System\"\nauthor: \"Trevor Killeen\"\ndate: 2017-06-27 12:00:00 -0500\nredirect_from: /2017/06/27/Internals2.html\n---\n\nIn the first [post]({{ site.baseurl }}{% link _posts/2017-5-11-a-tour-of-pytorch-internals-1.md %}) I explained how we generate a `torch.Tensor` object that you can use in your Python interpreter. Next, I will explore the build system for PyTorch. The PyTorch codebase has a variety of components:\n\n - The core Torch libraries: TH, THC, THNN, THCUNN\n - Vendor libraries: CuDNN, NCCL\n - Python Extension libraries\n - Additional third-party libraries: NumPy, MKL, LAPACK\n\nHow does a simple invocation of `python setup.py install` do the work that allows you to call `import torch` and use the PyTorch library in your code?", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} -{"text": "The first part of this document will explain the build process from and end-user point of view. This will explain how we take the components above to build the library. The second part of the document will be important for PyTorch developers. It will document ways to improve your iteration speed by building only a subset of the code that you are working on.\n\n### Setuptools and PyTorch's setup( ) function\n\nPython uses [Setuptools](https://setuptools.readthedocs.io/en/latest/index.html) to build the library. 
Setuptools is an extension to the original distutils system from the core Python library. The core component of Setuptools is the `setup.py` file which contains all the information needed to build the project. The most important function is the `setup()` function which serves as the main entry point. Let's take a look at the one in PyTorch:\n\n```python\nsetup(name=\"torch\", version=version,\n description=\"Tensors and Dynamic neural networks in Python with strong GPU acceleration\",", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} -{"text": "ext_modules=extensions,\n cmdclass={\n 'build': build,\n 'build_py': build_py,\n 'build_ext': build_ext,\n 'build_deps': build_deps,\n 'build_module': build_module,\n 'develop': develop,\n 'install': install,\n 'clean': clean,\n },\n packages=packages,\n package_data={'torch': [\n 'lib/*.so*', 'lib/*.dylib*',\n 'lib/torch_shm_manager',\n 'lib/*.h',\n 'lib/include/TH/*.h', 'lib/include/TH/generic/*.h',\n 'lib/include/THC/*.h', 'lib/include/THC/generic/*.h']},\n install_requires=['pyyaml'],\n )\n```\n\nThe function is composed entirely of keyword arguments, which serve two purposes:\n\n- Metadata (e.g. name, description, version)\n- The contents of the package\n\nWe are concerned with #2. Let's break down the individual components:", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} -{"text": "- **ext_modules**: Python modules are either \"pure\" modules, containing only Python code, or \"extension\" modules written in the low-level language of the Python implementation. Here we are listing the extension modules in the build, including the main `torch._C` library that contains our Python Tensor\n - **cmdclass**: When using the `setup.py` script from the command line, the user must specify one or more \"commands\", code snippets that perform a specific action. For example, the \"install\" command builds and installs the package. This mapping routes specific commands to functions in `setup.py` that implement them\n - **packages**: The list of packages in the project. These are \"pure\" - i.e. they only contain Python code. These are defined elsewhere in `setup.py`\n - **package_data**: Additional files that need to be installed into a package: in this case the header files and shared libraries that the build will generate must be included in our installation", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} -{"text": "- **install_requires**: In order to build PyTorch, we need pyyaml. Setuptools will handle making sure that pyyaml will be available, downloading and installing it if necessary\n\nWe will consider these components in more detail, but for now it is instructive to look at the end product of an installation -- i.e. what Setuptools does after building the code.\n\n### site_packages\n\nThird party packages are by default installed into the `lib/\n \n
\nFig-1 Physical memory layout of Channels First and Channels Last\n
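To see what the two layouts mean at the tensor level: converting to Channels Last keeps the logical NCHW shape but reorders the strides so that the channel dimension becomes the fastest-moving one. A quick check:

```python
import torch

x = torch.rand(2, 3, 4, 5)                                 # NCHW, contiguous
y = x.to(memory_format=torch.channels_last)                # same shape, NHWC physical layout

print(x.stride())                                          # (60, 20, 5, 1)
print(y.stride())                                          # (60, 1, 15, 3) -> C is innermost
print(y.is_contiguous(memory_format=torch.channels_last))  # True
```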
\n\n## Memory Formats Propagation", "source": "https://pytorch.org/blog/accelerating-pytorch-vision-models-with-channels-last-on-cpu/", "category": "pytorch blogs"} -{"text": "\n\n## Memory Formats Propagation\n\nThe general rule for PyTorch memory format propagation is to preserve the input tensor\u2019s memory format: a Channels First input will generate a Channels First output and a Channels Last input will generate a Channels Last output. \n\nFor Convolution layers, PyTorch uses oneDNN (oneAPI Deep Neural Network Library) by default to achieve optimal performance on Intel CPUs. Since it is physically impossible to achieve highly optimized performance directly with the Channels First memory format, the input and weight are first converted to a blocked format and then computed. oneDNN may choose different blocked formats according to input shapes, data type and hardware architecture, for vectorization and cache reuse purposes. The blocked format is opaque to PyTorch, so the output needs to be converted back to Channels First. Though the blocked format brings optimal computing performance, the format conversions may add overhead and therefore offset the performance gain.
\nFig-2 CPU Conv memory format propagation\n
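The propagation rule is easy to verify directly: feeding a Channels Last tensor to a convolution produces a Channels Last output, with no manual conversion needed.

```python
import torch

conv = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)
x = torch.rand(1, 3, 32, 32).to(memory_format=torch.channels_last)

out = conv(x)
print(out.is_contiguous(memory_format=torch.channels_last))  # True: the input format is preserved
```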
\n\nIn PyTorch, the default memory format is Channels First. In case a particular operator doesn't have support for Channels Last, the NHWC input would be treated as non-contiguous NCHW and would therefore fall back to Channels First, which wastes memory bandwidth on the CPU and results in suboptimal performance.", "source": "https://pytorch.org/blog/accelerating-pytorch-vision-models-with-channels-last-on-cpu/", "category": "pytorch blogs"} -{"text": "Therefore, it is very important to extend the scope of Channels Last support for optimal performance. We have implemented Channels Last kernels for the commonly used operators in the CV domain, applicable to both inference and training, such as:\n\n- Activations (e.g., ReLU, PReLU, etc.)\n- Convolution (e.g., Conv2d)\n- Normalization (e.g., BatchNorm2d, GroupNorm, etc.)\n- Pooling (e.g., AdaptiveAvgPool2d, MaxPool2d, etc.)\n- Shuffle (e.g., ChannelShuffle, PixelShuffle)\n\nRefer to [Operators-with-Channels-Last-support](https://github.com/pytorch/pytorch/wiki/Operators-with-Channels-Last-support) for details.\n\n## Native Level Optimization on Channels Last\n\nAs mentioned above, PyTorch uses oneDNN to achieve optimal performance on Intel CPUs for convolutions. The rest of the memory format aware operators are optimized at the PyTorch native level, which doesn\u2019t require any third-party library support.
The following is a minimal example showing how to run ResNet50 with TorchVision on Channels Last memory format:\n\n```python\nimport torch\nfrom torchvision.models import resnet50\n\nN, C, H, W = 1, 3, 224, 224\nx = torch.rand(N, C, H, W)\nmodel = resnet50()\nmodel.eval()\n\n# convert input and model to channels last\nx = x.to(memory_format=torch.channels_last)\nmodel = model.to(memory_format=torch.channels_last)\nmodel(x)\n```\n\nThe Channels Last optimization is implemented at native kernel level, which means you may apply other functionalities such as torch.fx and torch script together with Channels Last as well.\n\n## Performance Gains", "source": "https://pytorch.org/blog/accelerating-pytorch-vision-models-with-channels-last-on-cpu/", "category": "pytorch blogs"} -{"text": "## Performance Gains\n\nWe benchmarked inference performance of TorchVision models on Intel\u00ae Xeon\u00ae Platinum 8380 CPU @ 2.3 GHz, single instance per socket (batch size = 2 x number of physical cores). Results show that Channels Last has 1.3x to 1.8x performance gain over Channels First.\n\n\n \n
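Exact numbers depend heavily on the CPU, thread settings and batch size described above, but a rough sketch for comparing the two formats on your own machine looks like this:

```python
import time
import torch
from torchvision.models import resnet50

def bench(model, x, iters=50):
    with torch.no_grad():
        for _ in range(5):                 # warm-up
            model(x)
        start = time.time()
        for _ in range(iters):
            model(x)
    return (time.time() - start) / iters

model = resnet50().eval()
x = torch.rand(32, 3, 224, 224)
t_cf = bench(model, x)                     # Channels First baseline

model_cl = model.to(memory_format=torch.channels_last)
x_cl = x.to(memory_format=torch.channels_last)
t_cl = bench(model_cl, x_cl)               # Channels Last

print(f"channels first: {t_cf * 1000:.1f} ms/iter, channels last: {t_cl * 1000:.1f} ms/iter")
```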
| Stable | Beta | Prototype |
|---|---|---|
| Better Transformer | Enable Intel\u00ae VTune\u2122 Profiler\u2019s Instrumentation and Tracing Technology APIs | Arm\u00ae Compute Library backend support for AWS Graviton |
| CUDA 10.2 and 11.3 CI/CD Deprecation | Extend NNC to support channels last and bf16 | CUDA Sanitizer |
|  | Functorch now in PyTorch Core Library |  |
", "source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"} -{"text": "
|  | Beta Support for M1 devices |  |
\n\n
", "source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"} -{"text": "
\n\n \n\n\nFigure: BetterTransformer fastpath execution is now stable and enables sparsity optimization using Nested Tensor representation as default\n
\n\n### Introduction of CUDA 11.6 and 11.7 and deprecation of CUDA 10.2 and 11.3\n\nTimely deprecating older CUDA versions allows us to proceed with introducing the latest CUDA version as they are introduced by Nvidia\u00ae, and hence allows developers to use the latest features of CUDA and benefit from correctness fixes provided by the latest version.\n\nDecommissioning of CUDA 10.2. CUDA 11 is the first CUDA version to support C++17. Hence decommissioning legacy CUDA 10.2 was a major step in adding support for C++17 in PyTorch. It also helps to improve PyTorch code by eliminating legacy CUDA 10.2 specific instructions.", "source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"} -{"text": "Decommissioning of CUDA 11.3 and introduction of CUDA 11.7 brings compatibility support for the new NVIDIA Open GPU Kernel Modules and another significant highlight is the lazy loading support. CUDA 11.7 is shipped with cuDNN 8.5.0 which contains a number of optimizations accelerating transformer-based models, 30% reduction in library size , and various improvements in the runtime fusion engine. Learn more on CUDA 11.7 with our [release notes](https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html).\n\n## Beta Features\n\n### (Beta) functorch\n\nInspired by [Google\u00ae JAX](https://github.com/google/jax), functorch is a library that offers composable vmap (vectorization) and autodiff transforms. It enables advanced autodiff use cases that would otherwise be tricky to express in PyTorch. Examples include:\n\n\n \n
\n Figure 1: Sampling of DL Workloads Successfully Trained with float16 (Source).\n
\n\n\n \n
\n Figure 2: Performance of mixed precision training using torch.amp on NVIDIA 8xV100 vs. float32 training on 8xV100 GPU. Bars represent the speedup factor of torch.amp over float32.", "source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"} -{"text": "(Higher is better.) (Source).\n
\n\n\n \n
\n Figure 3. Performance of mixed precision training using torch.amp on NVIDIA 8xA100 vs. 8xV100 GPU. Bars represent the speedup factor of A100 over V100.\n(Higher is Better.) (Source).\n
\n\nSee the [NVIDIA Deep Learning Examples repository](https://github.com/NVIDIA/DeepLearningExamples) for more sample mixed precision workloads.", "source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"} -{"text": "Similar performance charts can be seen in [3D medical image analysis](https://nvlabs.github.io/eccv2020-mixed-precision-tutorial/files/dong_yang-mixed-precision-training-for-3d-medical-image-analysis.pdf), [gaze estimation](https://nvlabs.github.io/eccv2020-mixed-precision-tutorial/files/shalini_de_mello-mixed-precision-training-for-faze.pdf), [video synthesis](https://nvlabs.github.io/eccv2020-mixed-precision-tutorial/files/tingchun_wang-mixed-precision-vid2vid.pdf), [conditional GANs](https://nvlabs.github.io/eccv2020-mixed-precision-tutorial/files/mingyu_liu-amp-imaginaire.pdf), and [convolutional LSTMs](https://nvlabs.github.io/eccv2020-mixed-precision-tutorial/files/wonmin_byeon-mixed-precision-training-for-convolutional-tensor-train-lstm.pdf). [Huang et al](https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/). showed that mixed precision training is 1.5x to 5.5x faster over float32 on V100 GPUs, and an additional 1.3x to 2.5x faster on A100 GPUs on a variety of networks. On very large networks the need for mixed precision is even more evident. [Narayanan et al](https://arxiv.org/pdf/2104.04473.pdf). reports that it would take 34 days to train GPT-3 175B on 1024 A100 GPUs (with a batch size of 1536), but it\u2019s estimated it would take over a year using float32!", "source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"} -{"text": "## Getting Started With Mixed Precision Using torch.amp\n\ntorch.amp, introduced in PyTorch 1.6, makes it easy to leverage mixed precision training using the float16 or bfloat16 dtypes. See this [blog post](https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/), [tutorial](https://pytorch.org/tutorials/recipes/recipes/amp_recipe.html), and [documentation](https://pytorch.org/docs/master/amp.html) for more details. Figure 4 shows an example of applying AMP with grad scaling to a network.\n\n```console\nimport torch\n# Creates once at the beginning of training\nscaler = torch.cuda.amp.GradScaler()\n\nfor data, label in data_iter:\n optimizer.zero_grad()\n # Casts operations to mixed precision\n with torch.amp.autocast(device_type=\u201ccuda\u201d, dtype=torch.float16):\n loss = model(data)\n\n # Scales the loss, and calls backward()\n # to create scaled gradients\n scaler.scale(loss).backward()\n\n # Unscales gradients and calls\n # or skips optimizer.step()", "source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"} -{"text": "# or skips optimizer.step()\n scaler.step(optimizer)\n\n # Updates the scale for next iteration\n scaler.update()\n```\n\n\n Figure 4: AMP recipe\n
\n\n### Picking The Right Approach\n\nOut-of-the-box mixed precision training with either float16 or bfloat16 is effective at speeding up the convergence of many deep learning models, but some models may require more careful numerical accuracy management. Here are some options:\n\n- Full float32 precision. Floating point tensors and modules are created in float32 precision by default in PyTorch, but this is a historic artifact not representative of training most modern deep learning networks. It\u2019s rare that networks need this much numerical accuracy.", "source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"} -{"text": "- Enabling TensorFloat32 (TF32) mode. On Ampere and later CUDA devices matrix multiplications and convolutions can use the TensorFloat32 (TF32) mode for faster but slightly less accurate computations. See the [Accelerating AI Training with NVIDIA TF32 Tensor Cores](https://developer.nvidia.com/blog/accelerating-ai-training-with-tf32-tensor-cores/) blog post for more details. By default PyTorch enables TF32 mode for convolutions but not matrix multiplications, and unless a network requires full float32 precision we recommend enabling this setting for matrix multiplications, too (see the documentation [here](https://pytorch.org/docs/master/generated/torch.set_float32_matmul_precision.html?highlight=precision#torch.set_float32_matmul_precision) for how to do so). It can significantly speed up computations with typically negligible loss of numerical accuracy.", "source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"} -{"text": "- Using torch.amp with bfloat16 or float16. Both these low precision floating point data types are usually comparably fast, but some networks may only converge with one vs the other. If a network requires more precision it may need to use float16, and if a network requires more dynamic range it may need to use bfloat16, whose dynamic range is equal to that of float32. If overflows are observed, for example, then we suggest trying bfloat16.\n\nThere are even more advanced options than those presented here, like using torch.amp\u2019s autocasting for only parts of a model, or managing mixed precision directly. These topics are largely beyond the scope of this blog post, but see the \u201cBest Practices\u201d section below.\n\n### Best Practices\n\nWe strongly recommend using mixed precision with torch.amp or the TF32 mode (on Ampere and later CUDA devices) whenever possible when training a network. If one of those approaches doesn\u2019t work, however, we recommend the following:", "source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"} -{"text": "- High Performance Computing (HPC) applications, regression tasks, and generative networks may simply require full float32 IEEE precision to converge as expected.\n- Try selectively applying torch.amp. In particular we recommend first disabling it on regions performing operations from the torch.linalg module or when doing pre- or post-processing. These operations are often especially sensitive. Note that TF32 mode is a global switch and can\u2019t be used selectively on regions of a network. 
Enable TF32 first to check if a network\u2019s operators are sensitive to the mode, otherwise disable it.\n- If you encounter type mismatches while using torch.amp we don\u2019t suggest inserting manual casts to start. This error is indicative of something being off with the network, and it\u2019s usually worth investigating first.", "source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"} -{"text": "- Figure out by experimentation if your network is sensitive to range and/or precision of a format. For example [fine-tuning bfloat16-pretrained models in float16](https://github.com/huggingface/transformers/pull/10956) can easily run into range issues in float16 because of the potentially large range from training in bfloat16, so users should stick with bfloat16 fine-tuning if the model was trained in bfloat16.\n- The performance gain of mixed precision training can depend on multiple factors (e.g. compute-bound vs memory-bound problems) and users should use the [tuning guide](https://pytorch.org/tutorials/recipes/recipes/tuning_guide.html) to remove other bottlenecks in their training scripts. Although having similar theoretical performance benefits, BF16 and FP16 can have different speeds in practice. It\u2019s recommended to try the mentioned formats and use the one with best speed while maintaining the desired numeric behavior.", "source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"} -{"text": "For more details, refer to the [AMP Tutorial](https://pytorch.org/tutorials/recipes/recipes/amp_recipe.html), [Training Neural Networks with Tensor Cores](https://nvlabs.github.io/eccv2020-mixed-precision-tutorial/.), and see the post \u201c[More In-Depth Details of Floating Point Precision](https://dev-discuss.pytorch.org/t/more-in-depth-details-of-floating-point-precision/654)\" on PyTorch Dev Discussion.\n\n## Conclusion\n\nMixed precision training is an essential tool for training deep learning models on modern hardware, and it will become even more important in the future as the performance gap between lower precision operations and float32 continues to grow on newer hardware, as reflected in Figure 5.\n\n\n \n
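As a compact sketch of the options above (assuming an Ampere-class GPU for the bfloat16 part):

```python
import torch

# TF32 is a global switch: by default it is on for cuDNN convolutions but off for matmuls
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

# bfloat16 shares float32's dynamic range, so autocasting to it usually needs no GradScaler
if torch.cuda.is_available() and torch.cuda.is_bf16_supported():
    model = torch.nn.Linear(1024, 1024).cuda()
    x = torch.randn(8, 1024, device="cuda")
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        y = model(x)
    print(y.dtype)  # torch.bfloat16
```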
", "source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"} -{"text": "
\n\n\nFigure 5: Relative peak throughput of float16 (FP16) vs float32 matrix multiplications on Volta and Ampere GPUs. On Ampere relative peak throughput for the TensorFloat32 (TF32) mode and bfloat16 matrix multiplications are shown, too. The relative peak throughput of low precision data types like float16 and bfloat16 vs. float32 matrix multiplications is expected to grow as new hardware is released.\n
\n\nPyTorch\u2019s torch.amp module makes it easy to get started with mixed precision, and we highly recommend using it to train faster and reduce memory usage. torch.amp supports both float16 and bfloat16 mixed precision.\n\nThere are still some networks that are tricky to train with mixed precision, and for these networks we recommend trying TF32 accelerated matrix multiplications on Ampere and later CUDA hardware. Networks are rarely so precision sensitive that they require full float32 precision for every operation.", "source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"} -{"text": "If you have questions or suggestions for torch.amp or mixed precision support in PyTorch then let us know by posting to the [mixed precision category on the PyTorch Forums](https://discuss.pytorch.org/c/mixed-precision/27) or [filing an issue on the PyTorch GitHub page](https://github.com/pytorch/pytorch/issues/new/choose).", "source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"} -{"text": "---\nlayout: blog_detail\ntitle: 'Everything you need to know about TorchVision\u2019s MobileNetV3 implementation'\nauthor: Vasilis Vryniotis and Francisco Massa\n---\n\nIn TorchVision v0.9, we released a series of [new mobile-friendly models](https://pytorch.org/blog/ml-models-torchvision-v0.9/) that can be used for Classification, Object Detection and Semantic Segmentation. In this article, we will dig deep into the code of the models, share notable implementation details, explain how we configured and trained them, and highlight important tradeoffs we made during their tuning. Our goal is to disclose technical details that typically remain undocumented in the original papers and repos of the models.\n\n### Network Architecture", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} -{"text": "### Network Architecture\n\nThe implementation of the [MobileNetV3 architecture](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py) follows closely the [original paper](https://arxiv.org/abs/1905.02244). It is customizable and offers different configurations for building Classification, Object Detection and Semantic Segmentation backbones. It was designed to follow a similar structure to MobileNetV2 and the two share [common building blocks](https://github.com/pytorch/vision/blob/cac8a97b0bd14eddeff56f87a890d5cc85776e18/torchvision/models/mobilenetv2.py#L32).", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} -{"text": "Off-the-shelf, we offer the two variants described on the paper: the [Large](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py#L196-L214) and the [Small](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py#L215-L229). 
Both are constructed using the same code with the only difference being their configuration which describes the number of blocks, their sizes, their activation functions etc.\n\n### Configuration parameters", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} -{"text": "### Configuration parameters\n\nEven though one can write a [custom InvertedResidual setting](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py#L105) and pass it to the MobileNetV3 class directly, for the majority of applications we can adapt the existing configs by passing parameters to the [model building methods](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py#L253). Some of the key configuration parameters are the following:", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} -{"text": "- The `width_mult` [parameter](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py#L188) is a multiplier that affects the number of channels of the model. The default value is 1 and by increasing or decreasing it one can change the number of filters of all convolutions, including the ones of the first and last layers. The implementation ensures that the number of filters is always a [multiple of 8](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py#L56-L57). This is a hardware optimization trick which allows for faster vectorization of operations.", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} -{"text": "- The `reduced_tail` [parameter](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py#L188) halves the number of channels on the [last blocks](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py#L210-L214) of the network. This version is used by some Object Detection and Semantic Segmentation models. It\u2019s a speed optimization which is described on the [MobileNetV3 paper](https://arxiv.org/abs/1905.02244) and reportedly leads to a 15% latency reduction without a significant negative effect on accuracy.", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} -{"text": "- The `dilated` [parameter](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py#L188) affects the [last 3](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py#L210-L212) InvertedResidual blocks of the model and turns their normal depthwise Convolutions to Atrous Convolutions. This is used to control the output stride of these blocks and has a [significant positive effect](https://arxiv.org/abs/1706.05587) on the accuracy of Semantic Segmentation models.\n\n### Implementation details\n\nBelow we provide additional information on some notable implementation details of the architecture.\nThe [MobileNetV3 class](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py#L101) is responsible for building a network out of the provided configuration. 
Here are some implementation details of the class:", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} -{"text": "- The last convolution block expands the output of the last InvertedResidual block by a [factor of 6](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py#L149). The implementation is aligned with the Large and Small configurations described on the paper and can adapt to different values of the multiplier parameter.\n\n- Similarly to other models such as MobileNetV2, a dropout layer is placed just before the final Linear layer of the classifier.\n\nThe [InvertedResidual class](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py#L60) is the main building block of the network. Here are some notable implementation details of the block along with its visualization which comes from Figure 4 of the paper:", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} -{"text": "- There is no [expansion step](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py#L73-L76) if the input channels and the expanded channels are the same. This happens on the first convolution block of the network.\n\n- There is always a [projection step](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py#L86-L88) even when the expanded channels are the same as the output channels.\n\n- The activation method of the depthwise block is placed [before](https://github.com/pytorch/vision/blob/11bf27e37190b320216c349e39b085fb33aefed1/torchvision/models/mobilenetv3.py#L82-L84) the Squeeze-and-Excite layer as this improves marginally the accuracy.\n\n\n \n
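To make the bullet points above concrete, here is a condensed sketch of such a block. It is for illustration only; the real torchvision implementation linked above differs in details (for example, it uses a hard-sigmoid in the Squeeze-and-Excite layer and configurable activation/normalization layers).

```python
import torch
from torch import nn

class SqueezeExcite(nn.Module):
    def __init__(self, channels, squeeze_factor=4):
        super().__init__()
        squeezed = channels // squeeze_factor
        self.fc1 = nn.Conv2d(channels, squeezed, 1)
        self.fc2 = nn.Conv2d(squeezed, channels, 1)

    def forward(self, x):
        scale = x.mean((2, 3), keepdim=True)           # global average pool
        scale = torch.relu(self.fc1(scale))
        return x * torch.sigmoid(self.fc2(scale))      # the real block uses hardsigmoid

class InvertedResidualSketch(nn.Module):
    def __init__(self, in_ch, exp_ch, out_ch, kernel=3, stride=1, use_se=True):
        super().__init__()
        layers = []
        if exp_ch != in_ch:                            # no expansion step when input == expanded channels
            layers += [nn.Conv2d(in_ch, exp_ch, 1, bias=False),
                       nn.BatchNorm2d(exp_ch), nn.Hardswish()]
        # depthwise convolution; its activation comes before the Squeeze-and-Excite layer
        layers += [nn.Conv2d(exp_ch, exp_ch, kernel, stride, kernel // 2,
                             groups=exp_ch, bias=False),
                   nn.BatchNorm2d(exp_ch), nn.Hardswish()]
        if use_se:
            layers.append(SqueezeExcite(exp_ch))
        # the projection step is always present, even when exp_ch == out_ch
        layers += [nn.Conv2d(exp_ch, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch)]
        self.block = nn.Sequential(*layers)
        self.use_residual = stride == 1 and in_ch == out_ch

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_residual else out

block = InvertedResidualSketch(in_ch=16, exp_ch=64, out_ch=16)
print(block(torch.rand(1, 16, 32, 32)).shape)  # torch.Size([1, 16, 32, 32])
```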
\n \n
\n \n
There are many other nuances with how beam search can progress: similar hypothesis sequences can be \u201cmerged\u201d, for instance.\n
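As a heavily simplified illustration of that progression (ignoring CTC's repeated-token collapse rule and any language-model scoring), the sketch below expands hypotheses frame by frame, merges those that collapse to the same string, and prunes back to the beam size:

```python
def toy_beam_search(frames, beam_size=3):
    """frames: per-time-step dicts mapping token -> probability ('-' plays the role of a blank)."""
    beams = {"": 1.0}  # partial transcript -> accumulated probability
    for frame in frames:
        merged = {}
        for hyp, p_hyp in beams.items():
            for token, p_tok in frame.items():
                new_hyp = hyp if token == "-" else hyp + token
                # hypotheses that collapse to the same string are merged by summing their scores
                merged[new_hyp] = merged.get(new_hyp, 0.0) + p_hyp * p_tok
        # prune: keep only the `beam_size` highest-scoring hypotheses
        beams = dict(sorted(merged.items(), key=lambda kv: kv[1], reverse=True)[:beam_size])
    return beams

frames = [
    {"a": 0.6, "b": 0.3, "-": 0.1},
    {"a": 0.2, "b": 0.7, "-": 0.1},
]
print(toy_beam_search(frames))  # {'ab': 0.42, 'bb': 0.21, 'aa': 0.12}
```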
", "source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"} -{"text": "\n\nThe scoring function can be further augmented to up/down-weight token insertion or long or short words. Scoring with *stronger external language* models, while incurring computational cost, can also significantly improve performance; this is frequently referred to as *LM fusion*. There are many other knobs to tune for decoding \u2014 these are documented in [TorchAudio\u2019s documentation](https://pytorch.org/audio/0.12.0/models.decoder.html#ctcdecoder) and explored further in [TorchAudio\u2019s ASR Inference tutorial](https://pytorch.org/audio/0.12.0/tutorials/asr_inference_with_ctc_decoder_tutorial.html#beam-search-decoder-parameters). Since decoding is quite efficient, parameters can be easily swept and tuned.", "source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"} -{"text": "Beam search has been used in ASR extensively over the years in far too many works to cite, and in strong, recent results and systems including [wav2vec 2.0](https://proceedings.neurips.cc/paper/2020/file/92d1e1eb1cd6f9fba3227870bb6d7f07-Paper.pdf) and [NVIDIA's NeMo](https://developer.nvidia.com/nvidia-nemo).\n\n## Why beam search?\n\nBeam search remains a fast competitor to heavier-weight decoding approaches such as [RNN-Transducer](https://arxiv.org/pdf/1211.3711.pdf) that Google has invested in putting [on-device](https://ai.googleblog.com/2019/03/an-all-neural-on-device-speech.html) and has shown strong results with on [common benchmarks](https://arxiv.org/pdf/2010.10504.pdf). Autoregressive text models at scale can benefit from beam search as well. Among other things, beam search gives:", "source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"} -{"text": "- A flexible performance/latency tradeoff \u2014 by adjusting beam size and the external LM, users can sacrifice latency for accuracy or pay for more accurate results with a small latency cost. Decoding with no external LM can improve results at very little performance cost.\n- Portability without retraining \u2014 existing neural models can benefit from multiple decoding setups and plug-and-play with external LMs without training or fine-tuning.\n- A compelling complexity/accuracy tradeoff \u2014 adding beam search to an existing modeling pipeline incurs little additional complexity and can improve performance.\n\n## Performance Benchmarks", "source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"} -{"text": "## Performance Benchmarks\n\nToday's most commonly-used beam search decoding libraries today that support external language model integration include Kensho's [pyctcdecode](https://github.com/kensho-technologies/pyctcdecode), NVIDIA's [NeMo toolkit](https://github.com/NVIDIA/NeMo/tree/stable/scripts/asr_language_modeling). We benchmark the TorchAudio + Flashlight decoder against them with a *wav2vec 2.0* base model trained on 100 hours of audio evaluated on [LibriSpeech](https://www.openslr.org/12) dev-other with the official [KenLM](https://github.com/kpu/kenlm/) 3-gram LM. Benchmarks were run on Intel E5-2698 CPUs on a single thread. 
All computation was in-memory \u2014 KenLM memory mapping was disabled as it wasn't widely supported.", "source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"} -{"text": "When benchmarking, we measure the *time-to-WER (word error rate)* \u2014 because of subtle differences in the implementation of decoding algorithms and the complex relationships between parameters and decoding speed, some hyperparameters differed across runs. To fairly assess performance, we first sweep for parameters that achieve a baseline WER, minimizing beam size if possible.\n\n\n \n
\nDecoding performance on Librispeech dev-other of a pretrained wav2vec 2.0 model. TorchAudio + Flashlight decoding outperforms by an order of magnitude at low WERs.\n
\n\n\n \n
", "source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"} -{"text": "
\n\n\nTime-to-WER results, deferring to smaller beam size, across decoders. The TorchAudio + Flashlight decoder scales far better with larger beam sizes and at lower WERs.\n
\n\n## TorchAudio API and Usage\n\nTorchAudio provides a Python API for CTC beam search decoding, with support for the following:\n\n- lexicon and lexicon-free decoding\n- KenLM n-gram language model integration\n- character and word-piece decoding\n- sample pretrained LibriSpeech KenLM models and corresponding lexicon and token files\n- various customizable beam search parameters (beam size, pruning threshold, LM weight...)\n\nTo set up the decoder, use the factory function torchaudio.models.decoder.ctc_decoder\n\n```python\nfrom torchaudio.models.decoder import ctc_decoder, download_pretrained_files\nfiles = download_pretrained_files(\"librispeech-4-gram\")\ndecoder = ctc_decoder(\n lexicon=files.lexicon,\n tokens=files.tokens,\n lm=files.lm,\n nbest=1,\n ... additional optional customizable args ...\n)\n```", "source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"} -{"text": ")\n```\n\nGiven emissions of shape *(batch, time, num_tokens)*, the decoder will compute and return a List of batch Lists, each consisting of the nbest hypotheses corresponding to the emissions. Each hypothesis can be further broken down into tokens, words (if a lexicon is provided), score, and timesteps components.\n\n```python\nemissions = acoustic_model(waveforms) # (B, T, N)\nbatch_hypotheses = decoder(emissions) # List[List[CTCHypothesis]]\n\n# transcript for a lexicon decoder\ntranscripts = [\" \".join(hypo[0].words) for hypo in batch_hypotheses]\n\n# transcript for a lexicon free decoder, splitting by sil token\nbatch_tokens = [decoder.idxs_to_tokens(hypo[0].tokens) for hypo in batch_hypotheses]\ntranscripts = [\"\".join(tokens) for tokens in batch_tokens]\n```", "source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"} -{"text": "```\n\nPlease refer to the [documentation](https://pytorch.org/audio/stable/models.decoder.html#ctcdecoder) for more API details, and the tutorial ([ASR Inference Decoding](https://pytorch.org/audio/main/tutorials/asr_inference_with_ctc_decoder_tutorial.html)) or sample [inference script](https://github.com/pytorch/audio/tree/main/examples/asr/librispeech_ctc_decoder) for more usage examples.\n\n## Upcoming Improvements\n\n**Full NNLM support** \u2014 decoding with large neural language models (e.g. transformers) remains somewhat unexplored at scale. Already supported in Flashlight, we plan to add support in TorchAudio, allowing users to use custom decoder-compatible LMs. Custom word level language models are already available in the nightly TorchAudio build, and is slated to be released in TorchAudio 0.13.", "source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"} -{"text": "**Autoregressive/seq2seq decoding** \u2014 Flashlight Text also supports [sequence-to-sequence (seq2seq) decoding](https://github.com/flashlight/text/blob/main/flashlight/lib/text/decoder/LexiconSeq2SeqDecoder.h) for autoregressive models, which we hope to add bindings for and add to TorchAudio and TorchText with efficient GPU implementations as well.\n\n**Better build support** \u2014 to benefit from improvements in Flashlight Text, TorchAudio will directly submodule Flashlight Text to make upstreaming modifications and improvements easier. 
This is already in effect in the nightly TorchAudio build, and is slated to be released in TorchAudio 0.13.\n\n## Citation\n\nTo cite the decoder, please use the following:\n\n```python\n@inproceedings{kahn2022flashlight,\n title={Flashlight: Enabling innovation in tools for machine learning},", "source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"} -{"text": "author={Kahn, Jacob D and Pratap, Vineel and Likhomanenko, Tatiana and Xu, Qiantong and Hannun, Awni and Cai, Jeff and Tomasello, Paden and Lee, Ann and Grave, Edouard and Avidov, Gilad and others},\n booktitle={International Conference on Machine Learning},\n pages={10557--10574},\n year={2022},\n organization={PMLR}\n}\n```\n```python\n@inproceedings{yang2022torchaudio,\n title={Torchaudio: Building blocks for audio and speech processing},\n author={Yang, Yao-Yuan and Hira, Moto and Ni, Zhaoheng and Astafurov, Artyom and Chen, Caroline and Puhrsch, Christian and Pollack, David and Genzel, Dmitriy and Greenberg, Donny and Yang, Edward Z and others},\n booktitle={ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},\n pages={6982--6986},\n year={2022},\n organization={IEEE}\n}\n```", "source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"} -{"text": "---\nlayout: blog_detail\ntitle: 'New PyTorch Library Releases in PyTorch 1.9, including TorchVision, TorchAudio, and more'\nauthor: Team PyTorch \n---\n\nToday, we are announcing updates to a number of PyTorch libraries, alongside the [PyTorch 1.9 release](https://pytorch.org/blog/pytorch-1.9-released/). The updates include new releases for the domain libraries including TorchVision, TorchText and TorchAudio. These releases, along with the PyTorch 1.9 release, include a number of new features and improvements that will provide a broad set of updates for the PyTorch community.\n\nSome highlights include:\n\n* **TorchVision** - Added new SSD and SSDLite models, quantized kernels for object detection, GPU Jpeg decoding, and iOS support. See [release notes](https://github.com/pytorch/vision/releases) here.", "source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"} -{"text": "* **TorchAudio** - Added wav2vec 2.0 model deployable in non-Python environments (including C++, Android, and iOS). Many performance improvements in lfilter, spectral operations, resampling. Added options for quality control in sampling (i.e. Kaiser window support). Initiated the migration of complex tensors operations. Improved autograd support. See [release notes](https://github.com/pytorch/audio/releases) here.\n* **TorchText** - Added a new high-performance Vocab module that provides common functional APIs for NLP workflows. See [release notes](https://github.com/pytorch/text/releases) here.\n\nWe\u2019d like to thank the community for their support and work on this latest release.\n\nFeatures in PyTorch releases are classified as Stable, Beta, and Prototype. You can learn more about the definitions in [this blog post](https://pytorch.org/blog/pytorch-feature-classification-changes/). 
\n\n# TorchVision 0.10\n\n### (Stable) Quantized kernels for object detection", "source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"} -{"text": "The forward pass of the nms and roi_align operators now support tensors with a quantized dtype, which can help lower the memory footprint of object detection models, particularly on mobile environments. For more details, refer to [the documentation](https://pytorch.org/vision/stable/ops.html#torchvision.ops.roi_align). \n\n### (Stable) Speed optimizations for Tensor transforms \nThe resize and flip transforms have been optimized and its runtime improved by up to 5x on the CPU. \n\n### (Stable) Documentation improvements \nSignificant improvements were made to the documentation. In particular, a new gallery of examples is available. These examples visually illustrate how each transform acts on an image, and also properly documents and illustrates the output of the segmentation models.", "source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"} -{"text": "The example gallery will be extended in the future to provide more comprehensive examples and serve as a reference for common torchvision tasks. For more details, refer to [the documentation](https://pytorch.org/vision/stable/auto_examples/index.html).\n\n### (Beta) New models for detection \n[SSD](https://arxiv.org/abs/1512.02325) and [SSDlite](https://arxiv.org/abs/1801.04381) are two popular object detection architectures that are efficient in terms of speed and provide good results for low resolution pictures. In this release, we provide implementations for the original SSD model with VGG16 backbone and for its mobile-friendly variant SSDlite with MobileNetV3-Large backbone.\n\nThe models were pre-trained on COCO train2017 and can be used as follows:\n\n```python\nimport torch\nimport torchvision\n\n# Original SSD variant\nx = [torch.rand(3, 300, 300), torch.rand(3, 500, 400)]\nm_detector = torchvision.models.detection.ssd300_vgg16(pretrained=True)\nm_detector.eval()\npredictions = m_detector(x)", "source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"} -{"text": "m_detector.eval()\npredictions = m_detector(x)\n\n# Mobile-friendly SSDlite variant\nx = [torch.rand(3, 320, 320), torch.rand(3, 500, 400)]\nm_detector = torchvision.models.detection.ssdlite320_mobilenet_v3_large(pretrained=True)\nm_detector.eval()\npredictions = m_detector(x)\n```\n\nThe following accuracies can be obtained on COCO val2017 (full results available in [#3403](https://github.com/pytorch/vision/pull/3403) and [#3757](https://github.com/pytorch/vision/pull/3757)):\n\n\n {:.table.table-striped.table-bordered}\n| Model | mAP | mAP@50 | mAP@75 |\n| ------------- | ------------- | ------------- | ------------- |\n| SSD300 VGG16 | 25.1 | 41.5 | 26.2 | \n| SSDlite320 MobileNetV3-Large | 21.3 | 34.3 | 22.1 |\n\n\nFor more details, refer to [the documentation](https://pytorch.org/vision/stable/models.html#id37).\n\n### (Beta) JPEG decoding on the GPU", "source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"} -{"text": "### (Beta) JPEG decoding on the GPU \nDecoding jpegs is now possible on GPUs with the use of [nvjpeg](https://developer.nvidia.com/nvjpeg), which should be readily available in your CUDA setup. The decoding time of a single image should be about 2 to 3 times faster than with libjpeg on CPU. 
While the resulting tensor will be stored on the GPU device, the input raw tensor still needs to reside on the host (CPU), because the first stages of the decoding process take place on the host:\n\n```python\nfrom torchvision.io.image import read_file, decode_jpeg\n\ndata = read_file('path_to_image.jpg')  # raw data is on CPU\nimg = decode_jpeg(data, device='cuda')  # decoded image is on the GPU\n```\nFor more details, see [the documentation](https://pytorch.org/vision/stable/io.html#torchvision.io.decode_jpeg).\n\n### (Beta) iOS support", "source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"} -{"text": "### (Beta) iOS support \nTorchVision 0.10 now provides pre-compiled iOS binaries for its C++ operators, which means you can run Faster R-CNN and Mask R-CNN on iOS. An example app showing how to build a program that leverages those ops can be found [here](https://github.com/pytorch/vision/tree/master/ios/VisionTestApp). \n\n# TorchAudio 0.9.0\n\n### (Stable) Complex Tensor Migration \nTorchAudio has functions that handle complex-valued tensors. These functions follow a convention of using an extra dimension to represent real and imaginary parts. In PyTorch 1.6, the native complex type was introduced. As its API is stabilizing, torchaudio has started migrating to the native complex type.", "source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"} -{"text": "In this release, we added support for native complex tensors, and you can opt in to using them. Using the native complex types, we have verified that the affected functions continue to support autograd and TorchScript; moreover, switching to native complex types improves their performance. For more details, refer to [pytorch/audio#1337](https://github.com/pytorch/audio/issues/1337). \n\n### (Stable) Filtering Improvement", "source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"} -{"text": "### (Stable) Filtering Improvement \nIn release 0.8, we added a C++ implementation of the core part of ```lfilter``` for CPU, which improved performance. In this release, we optimized some internal operations of the CPU implementation for a further performance improvement. We also added autograd support to both CPU and GPU. Now ```lfilter``` and all the ```biquad``` filters (```biquad```, ```band_biquad```, ```bass_biquad```, ```treble_biquad```, ```allpass_biquad```, ```lowpass_biquad```, ```highpass_biquad```, ```bandpass_biquad```, ```equalizer_biquad``` and ```bandreject_biquad```) benefit from the performance improvement and support autograd. We also moved the implementation of overdrive to C++ for better performance. For more details, refer to [the documentation](https://pytorch.org/audio/0.9.0/functional.html#lfilter).\n \n### (Stable) Improved Autograd Support", "source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"} -{"text": "### (Stable) Improved Autograd Support \nAlong with the work on Complex Tensor Migration and Filtering Improvement, we also added autograd tests to transforms. `lfilter`, `biquad` and its variants, and most transforms are now guaranteed to support autograd. For more details, refer to [the release note](https://github.com/pytorch/audio/releases).\n\n### (Stable) Improved Windows Support \nTorchaudio implements some operations in C++ for reasons such as performance and integration with third-party libraries. 
These C++ components were previously only available on Linux and macOS. In this release, we have added support for Windows. With this, the efficient filtering implementation mentioned above is also available on Windows.\n\nHowever, please note that not all the C++ components are available for Windows: the \u201csox_io\u201d backend and ```torchaudio.functional.compute_kaldi_pitch``` are not supported. \n\n### (Stable) I/O Functions Migration", "source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"} -{"text": "### (Stable) I/O Functions Migration \nSince the 0.6 release, we have continuously improved I/O functionality. Specifically, in 0.8 we changed the default backend from \u201csox\u201d to \u201csox_io\u201d and applied the same switch to the API of the \u201csoundfile\u201d backend. The 0.9 release concludes this migration by removing the deprecated backends. For more details, please refer to [#903](https://github.com/pytorch/audio/issues/903). \n\n### (Beta) Wav2Vec2.0 Model", "source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"} -{"text": "### (Beta) Wav2Vec2.0 Model\nWe have added the model architectures from [Wav2Vec2.0](https://arxiv.org/abs/2006.11477). You can import fine-tuned model parameters published on [fairseq](https://github.com/pytorch/fairseq/tree/master/examples/wav2vec) and the [Hugging Face Hub](https://huggingface.co/models?filter=wav2vec2). Our model definition supports TorchScript, and it is possible to deploy the model to non-Python environments, such as C++, [Android](https://github.com/pytorch/android-demo-app/tree/master/SpeechRecognition) and [iOS](https://github.com/pytorch/ios-demo-app/tree/master/SpeechRecognition). \n\nThe following code snippets illustrate such a use case. Please check out our [C++ example directory](https://github.com/pytorch/audio/tree/master/examples/libtorchaudio) for the complete example. Currently, it is designed for running inference. If you would like more support for training, please file a feature request.\n\n```python\n# Import fine-tuned model from Hugging Face Hub\nfrom transformers import Wav2Vec2ForCTC", "source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"} -{"text": "from transformers import Wav2Vec2ForCTC\nfrom torchaudio.models.wav2vec2.utils import import_huggingface_model\n\noriginal = Wav2Vec2ForCTC.from_pretrained(\"facebook/wav2vec2-base-960h\")\nimported = import_huggingface_model(original)\n```\n\n```python\n# Import fine-tuned model from fairseq\nimport fairseq\nfrom torchaudio.models.wav2vec2.utils import import_fairseq_model\n\noriginal, _, _ = fairseq.checkpoint_utils.load_model_ensemble_and_task(\n    [\"wav2vec_small_960h.pt\"], arg_overrides={'data': \"<data_dir>\"})  # \"<data_dir>\" is a placeholder for the fairseq dictionary directory\nimported = import_fairseq_model(original[0].w2v_encoder)  # convert the fairseq encoder to torchaudio's Wav2Vec2Model\n```", "source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"}
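As a rough sketch of what using the imported model looks like (the dummy 16 kHz waveform, batch shape, and output file name below are assumptions for illustration, not from the original post), inference and TorchScript export can be done as follows:

```python
# Sketch only: run CTC inference with the imported torchaudio Wav2Vec2 model
# and script it for non-Python deployment. The input here is random noise, not speech.
import torch

imported.eval()
with torch.inference_mode():
    waveform = torch.randn(1, 16000)      # assumed: 1 second of 16 kHz mono audio
    emissions, _ = imported(waveform)     # frame-level logits over the CTC label set

scripted = torch.jit.script(imported)     # TorchScript for C++/Android/iOS
scripted.save("wav2vec2_ctc.pt")          # assumed output path
```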
-{"text": "{:.table.table-striped.table-bordered}\n| GPU | Batch size 1 | Batch size 2 | Batch size 4 |\n| ------------- | ------------- | ------------- | ------------- |\n| P100 (no compilation) | -3.8 | 0.44 | 5.47 |\n| T4 | 2.12 | 10.51 | 14.2 |\n| A10 | -2.34 | 8.99 | 10.57 |\n| V100 | 18.63 | 6.39 | 10.43 |\n| A100 | 38.5 | 20.33 | 12.17 |", "source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"}
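A note on reading the summary table above (an editorial cross-check, not text from the original post): the percentages line up with the relative improvement of the best optimized configuration over the "Original with xFormers" rows of the per-batch-size runtime tables below. For example:

```python
# A100, batch size 1, values taken from the tables in this section (seconds).
t_baseline = 6.7    # Original with xFormers
t_optimized = 4.1   # Optimized with mem. efficient attention and compilation

improvement = 100 * (t_baseline - t_optimized) / t_baseline
print(f"{improvement:.1f}%")  # ~38.8%, matching the reported 38.5 up to rounding of the displayed runtimes
```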
-{"text": "Runtime in seconds for batch size 1 (in parentheses: relative improvement over the Original with xFormers baseline):\n\n{:.table.table-striped.table-bordered}\n| Configuration | P100 | T4 | A10 | V100 | A100 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| Original without xFormers | 30.4s (-19.3%) | 29.8s (-77.3%) | 13.0s (-83.9%) | 10.9s (-33.1%) | 8.0s (-19.3%) |\n| Original with xFormers | 25.5s (0.0%) | 16.8s (0.0%) | 7.1s (0.0%) | 8.2s (0.0%) | 6.7s (0.0%) |\n| Optimized with vanilla math attention, no compilation | 27.3s (-7.0%) | 19.9s (-18.7%) | 13.2s (-87.2%) | 7.5s (8.7%) | 5.7s (15.1%) |\n| Optimized with mem. efficient attention, no compilation | 26.5s (-3.8%) | 16.8s (0.2%) | 7.1s (-0.8%) | 6.9s (16.0%) | 5.3s (20.6%) |\n| Optimized with mem. efficient attention and compilation | - | 16.4s (2.1%) | 7.2s (-2.3%) | 6.6s (18.6%) | 4.1s (38.5%) |", "source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"}
-{"text": "Runtime in seconds for batch size 2 (in parentheses: relative improvement over the Original with xFormers baseline):\n\n{:.table.table-striped.table-bordered}\n| Configuration | P100 | T4 | A10 | V100 | A100 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| Original without xFormers | 58.0s (-21.6%) | 57.6s (-84.0%) | 24.4s (-95.2%) | 18.6s (-63.0%) | 12.0s (-50.6%) |\n| Original with xFormers | 47.7s (0.0%) | 31.3s (0.0%) | 12.5s (0.0%) | 11.4s (0.0%) | 8.0s (0.0%) |\n| Optimized with vanilla math attention, no compilation | 49.3s (-3.5%) | 37.9s (-21.0%) | 17.8s (-42.2%) | 12.7s (-10.7%) | 7.8s (1.8%) |\n| Optimized with mem. efficient attention, no compilation | 47.5s (0.4%) | 31.2s (0.5%) | 12.2s (2.6%) | 11.5s (-0.7%) | 7.0s (12.6%) |\n| Optimized with mem. efficient attention and compilation | - | 28.0s (10.5%) | 11.4s (9.0%) | 10.7s (6.4%) | 6.4s (20.3%) |", "source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"}
-{"text": "Runtime in seconds for batch size 4 (in parentheses: relative improvement over the Original with xFormers baseline):\n\n{:.table.table-striped.table-bordered}\n| Configuration | P100 | T4 | A10 | V100 | A100 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| Original without xFormers | 117.9s (-20.0%) | 112.4s (-81.8%) | 47.2s (-101.7%) | 35.8s (-71.9%) | 22.8s (-78.9%) |\n| Original with xFormers | 98.3s (0.0%) | 61.8s (0.0%) | 23.4s (0.0%) | 20.8s (0.0%) | 12.7s (0.0%) |\n| Optimized with vanilla math attention, no compilation | 101.1s (-2.9%) | 73.0s (-18.0%) | 28.3s (-21.0%) | 23.3s (-11.9%) | 14.5s (-13.9%) |\n| Optimized with mem. efficient attention, no compilation | 92.9s (5.5%) | 61.1s (1.2%) | 23.9s (-1.9%) | 20.8s (-0.1%) | 12.8s (-0.9%) |\n| Optimized with mem. efficient attention and compilation | - | 53.1s (14.2%) | 20.9s (10.6%) | 18.6s (10.4%) | 11.2s (12.2%) |", "source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"}
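The row labels above refer to PyTorch 2.0's memory-efficient scaled dot product attention and to torch.compile. The benchmark scripts themselves are not reproduced here; the following is only a sketch of what enabling that combination looks like, with a toy attention module and assumed shapes/dtype:

```python
import torch
import torch.nn.functional as F

class ToyAttention(torch.nn.Module):
    def forward(self, q, k, v):
        # PyTorch 2.0 fused attention entry point
        return F.scaled_dot_product_attention(q, k, v)

model = ToyAttention().cuda()
model = torch.compile(model)  # corresponds to the "... and compilation" rows

# (batch, heads, sequence, head_dim) in fp16 -- illustrative sizes only
q = k = v = torch.randn(1, 8, 4096, 64, device="cuda", dtype=torch.float16)

# Restrict SDPA to the memory-efficient kernel, as in the "mem. efficient attention" rows.
with torch.backends.cuda.sdp_kernel(enable_flash=False, enable_math=False, enable_mem_efficient=True):
    out = model(q, k, v)
```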
-{"text": "Library releases accompanying PyTorch 2.0:\n\n* TorchArrow 0.1.0\n* TorchRec 0.4.0\n* TorchVision 0.15\n* TorchAudio 2.0\n* TorchServe 0.7.1\n* TorchX 0.4.0\n* TorchData 0.6.0\n* TorchText 0.15.0\n* PyTorch on XLA Devices 1.14\n\n{:.table.table-striped.table-bordered}\n| PyTorch Version | Python | Stable CUDA | Experimental CUDA |\n| ------------- | ------------- | ------------- | ------------- |\n| 2.0 | >=3.8, <=3.11 | CUDA 11.7, CUDNN 8.5.0.96 | CUDA 11.8, CUDNN 8.7.0.84 |\n| 1.13 | >=3.7, <=3.10 | CUDA 11.6, CUDNN 8.3.2.44 | CUDA 11.7, CUDNN 8.5.0.96 |\n| 1.12 | >=3.7, <=3.10 | CUDA 11.3, CUDNN 8.3.2.44 | CUDA 11.6, CUDNN 8.3.2.44 |", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}
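A small sketch for checking a local installation against the compatibility matrix above (standard torch runtime queries, nothing release-specific):

```python
import torch

print(torch.__version__)               # e.g. "2.0.0"
print(torch.version.cuda)              # CUDA version the wheel was built with, e.g. "11.7"
print(torch.backends.cudnn.version())  # e.g. 8500 for cuDNN 8.5.x
print(torch.cuda.is_available())       # True if a compatible driver and GPU are present
```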
-{"text": "{:.table.table-striped.table-bordered}\n| Stable | Beta | Prototype | Performance Improvements |\n| ------------- | ------------- | ------------- | ------------- |\n| Accelerated PT 2 Transformers | torch.compile | DTensor | CUDA support for 11.7 & 11.8 (deprecating CUDA 11.6) |\n|  | PyTorch MPS Backend | TensorParallel | Python 3.8 (deprecating Python 3.7) |\n|  | Scaled dot product attention | 2D Parallel | AWS Graviton3 |\n|  | functorch | Torch.compile (dynamic=True) |  |\n|  | Dispatchable Collectives |  |  |\n|  | Torch.set_default & torch.device |  |  |\n|  | X86 quantization backend |  |  |\n|  | GNN inference and training performance |  |  |", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}
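Most of the items in the table above are covered in their own sections of the post; as a quick reference, the headline Beta feature, torch.compile, is a one-line opt-in. A minimal sketch:

```python
import torch

def fn(x):
    return torch.sin(x) + torch.cos(x)

compiled_fn = torch.compile(fn)    # compiles on first call, then reuses the optimized code
print(compiled_fn(torch.randn(8)))
```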
-{"text": "{:.table.table-striped.table-bordered}\n| CPU | 1 core/instance | 2 cores/instance | 4 cores/instance | 1 socket (32 cores)/instance |\n| ------------- | ------------- | ------------- | ------------- | ------------- |\n| Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz | 1.76X | 1.80X | 2.04X | 1.34X |", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}
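If the Xeon speedups above refer to the x86 quantization backend listed under Beta (the table itself does not say, so this is an assumption), opting into that backend is an engine selection plus the matching qconfig mapping. A hedged sketch, with calibration and model preparation omitted:

```python
import torch
from torch.ao.quantization import get_default_qconfig_mapping

torch.backends.quantized.engine = "x86"               # select the unified x86 CPU backend
qconfig_mapping = get_default_qconfig_mapping("x86")  # default per-operator qconfigs for it
print(torch.backends.quantized.supported_engines)     # sanity check: "x86" should be listed
```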
-{"text": "{:.table.table-striped.table-bordered}\n| Model-Dataset | Option | Speedup Ratio |\n| ------------- | ------------- | ------------- |\n| GCN-Reddit (inference) | 512-2-64-dense | 1.22x |\n|  | 1024-3-128-dense | 1.25x |\n|  | 512-2-64-sparse | 1.31x |\n|  | 1024-3-128-sparse | 1.68x |\n| GraphSage-ogbn-products (inference) | 512-2-64-dense | 1.22x |\n|  | 1024-3-128-dense | 1.15x |\n|  | 512-2-64-sparse | 1.20x |\n|  | 1024-3-128-sparse | 1.33x |\n|  | full-batch-sparse | 4.07x |\n| GCN-PROTEINS (training) | 3-32 | 1.67x |\n| GCN-REDDIT-BINARY (training) | 3-32 | 1.67x |\n| GCN-Reddit (training) | 512-2-64-dense | 1.20x |\n|  | 1024-3-128-dense | 1.12x |", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}
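The GCN and GraphSage entries above appear to be PyTorch Geometric-style workloads (an assumption; the option strings such as 512-2-64-dense are benchmark configurations that are not decoded here). A minimal sketch of the kind of GCN layer being exercised, with illustrative sizes:

```python
import torch
from torch_geometric.nn import GCNConv  # assumes PyTorch Geometric is installed

conv = GCNConv(in_channels=64, out_channels=64)
x = torch.randn(1000, 64)                        # node features
edge_index = torch.randint(0, 1000, (2, 5000))   # random COO edge list

with torch.inference_mode():
    out = conv(x, edge_index)                    # one message-passing layer
print(out.shape)                                 # torch.Size([1000, 64])
```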