We found very significant performance improvements over vanilla attention across the board, without even using torch.compile(). An out of the box installation of PyTorch 2.0 and diffusers yields about 50% speedup on A100 and between 35% and 50% on 4090 GPUs, depending on batch size. Performance improvements are more pronounced for modern CUDA architectures such as Ada (4090) or Ampere (A100), but they remain very significant for older architectures that are still heavily used in cloud services. In addition to faster speeds, the accelerated transformers implementation in PyTorch 2.0 allows much larger batch sizes to be used. A single 40GB A100 GPU runs out of memory with a batch size of 10, and 24 GB high-end consumer cards such as 3090 and 4090 cannot generate 8 images at once. Using PyTorch 2.0 and diffusers we could achieve batch sizes of 48 for 3090 and 4090, and 64 for A100. This is of great significance for cloud services and applications, as they can efficiently process more images at a time.
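For orientation, here is a minimal sketch (not from the post) contrasting a plain attention computation with the fused torch.nn.functional.scaled_dot_product_attention kernel that the accelerated attention in PyTorch 2.0 builds on; the tensor shapes and dtypes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# illustrative shapes: (batch, heads, sequence, head_dim)
q = torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16)

# "vanilla" attention: materializes the full (seq x seq) attention matrix
attn = torch.softmax(q @ k.transpose(-2, -1) / (64 ** 0.5), dim=-1)
out_vanilla = attn @ v

# accelerated implementation: dispatches to a fused, memory-efficient kernel
out_sdpa = F.scaled_dot_product_attention(q, k, v)
```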
https://pytorch.org/blog/accelerated-diffusers-pt-20/
pytorch blogs
When compared with PyTorch 1.13.1 + xFormers, the new accelerated transformers implementation is still faster and requires no additional packages or dependencies. In this case we found moderate speedups of up to 2% on datacenter cards such as A100 or T4, but performance was great on the two last generations of consumer cards: up to 20% speed improvement on 3090 and between 10% and 45% on 4090, depending on batch size. When torch.compile() is used, we get an additional performance boost of typically between 2% and 3% over the previous improvements. As compilation takes some time, this is better geared towards user-facing inference services or training.

Results in float16
https://pytorch.org/blog/accelerated-diffusers-pt-20/
pytorch blogs
When we consider float16 inference, the performance improvements of the accelerated transformers implementation in PyTorch 2.0 are between 20% and 28% over standard attention, across all the GPUs we tested, except for the 4090, which belongs to the more modern Ada architecture. This GPU benefits from a dramatic performance improvement when using PyTorch 2.0 nightlies. With respect to optimized SDPA vs xFormers, results are usually on par for most GPUs, except again for the 4090. Adding torch.compile() to the mix boosts performance a few more percentage points across the board.
https://pytorch.org/blog/accelerated-diffusers-pt-20/
pytorch blogs
Conclusions

PyTorch 2.0 comes with multiple features to optimize the crucial components of the foundational transformer block, and they can be further improved with the use of torch.compile. These optimizations lead to significant memory and time improvements for diffusion models, and remove the need for third-party library installations. To take advantage of these speed and memory improvements all you have to do is upgrade to PyTorch 2.0 and use diffusers >= 0.13.0; a minimal usage sketch follows below. For more examples and in-detail benchmark numbers, please also have a look at the Diffusers with PyTorch 2.0 docs.

Acknowledgement

The authors are grateful to the PyTorch team for creating such excellent software.
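In practice, that upgrade path looks roughly like this hedged sketch (not copied from the post): the model id and prompt are placeholders, and compiling the UNet is optional.

```python
import torch
from diffusers import StableDiffusionPipeline

# PyTorch 2.0 + diffusers >= 0.13.0 use the accelerated attention automatically
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # placeholder model id
    torch_dtype=torch.float16,
).to("cuda")

# optional: compile the UNet for an extra speedup (compilation itself takes time)
pipe.unet = torch.compile(pipe.unet)

image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```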
https://pytorch.org/blog/accelerated-diffusers-pt-20/
pytorch blogs
layout: blog_detail
title: "Get Started with PyTorch 2.0 Summary and Overview"
author: Team PyTorch
featured-img: "assets/images/Pytorch_2_0_Animation_AdobeExpress.gif"

Introducing PyTorch 2.0, our first steps toward the next generation 2-series release of PyTorch. Over the last few years we have innovated and iterated from PyTorch 1.0 to the most recent 1.13 and moved to the newly formed PyTorch Foundation, part of the Linux Foundation. To complement the PyTorch 2.0 announcement and conference, we have also posted a comprehensive introduction and technical overview within the Get Started menu at https://pytorch.org/get-started/pytorch-2.0. We also wanted to ensure you had all the information to quickly leverage PyTorch 2.0 in your models, so we added the technical requirements, tutorial, user experience, Hugging Face benchmarks and FAQs to get you started today!
https://pytorch.org/blog/getting-started-with-pytorch-2.0/
pytorch blogs
Finally, we are launching a new “Ask the Engineers: 2.0 Live Q&A” series that allows you to go deeper on a range of topics with PyTorch subject matter experts. We hope this content is helpful for the entire community, across all levels of users and contributors. https://pytorch.org/get-started/pytorch-2.0
https://pytorch.org/blog/getting-started-with-pytorch-2.0/
pytorch blogs
layout: blog_detail
title: 'New Library Releases in PyTorch 1.10, including TorchX, TorchAudio, TorchVision'
author: Team PyTorch

Today, we are announcing a number of new features and improvements to PyTorch libraries, alongside the PyTorch 1.10 release. Some highlights include:

TorchX - a new SDK for quickly building and deploying ML applications from research & development to production.

TorchAudio - Added text-to-speech pipeline, self-supervised model support, multi-channel support and MVDR beamforming module, RNN transducer (RNNT) loss function, and batch and filterbank support to the lfilter function. See the TorchAudio release notes here.
https://pytorch.org/blog/pytorch-1.10-new-library-releases/
pytorch blogs
TorchVision - Added new RegNet and EfficientNet models, FX-based feature extraction added to utilities, two new Automatic Augmentation techniques: RandAugment and TrivialAugment, and updated training recipes. See the TorchVision release notes here.

Introducing TorchX

TorchX is a new SDK for quickly building and deploying ML applications from research & development to production. It offers various builtin components that encode MLOps best practices and make advanced features like distributed training and hyperparameter optimization accessible to all.
https://pytorch.org/blog/pytorch-1.10-new-library-releases/
pytorch blogs
Users can get started with TorchX 0.1 with no added setup cost since it supports popular ML schedulers and pipeline orchestrators that are already widely adopted and deployed in production. No two production environments are the same. To accommodate various use cases, TorchX's core APIs allow tons of customization at well-defined extension points so that even the most unique applications can be serviced without customizing the whole vertical stack. Read the documentation for more details and try out this feature using this quickstart tutorial.

TorchAudio 0.10
https://pytorch.org/blog/pytorch-1.10-new-library-releases/
pytorch blogs
[Beta] Text-to-speech pipeline

TorchAudio now adds the Tacotron2 model and pretrained weights. It is now possible to build a text-to-speech pipeline with existing vocoder implementations like WaveRNN and Griffin-Lim. Building a TTS pipeline requires matching data processing and pretrained weights, which are often non-trivial to users. So TorchAudio introduces a bundle API so that constructing pipelines for specific pretrained weights is easy. The following example illustrates this.

```python
import torchaudio

bundle = torchaudio.pipelines.TACOTRON2_WAVERNN_CHAR_LJSPEECH

# Build text processor, Tacotron2 and vocoder (WaveRNN) model
processor = bundle.get_text_processor()
tacotron2 = bundle.get_tacotron2()
# Downloading: 100%|███████████████████████████████| 107M/107M [00:01<00:00, 87.9MB/s]
vocoder = bundle.get_vocoder()
# Downloading: 100%|███████████████████████████████| 16.7M/16.7M [00:00<00:00, 78.1MB/s]
```
https://pytorch.org/blog/pytorch-1.10-new-library-releases/
pytorch blogs
text = "Hello World!" Encode text input, lengths = processor(text) Generate (mel-scale) spectrogram specgram, lengths, _ = tacotron2.infer(input, lengths) Convert spectrogram to waveform waveforms, lengths = vocoder(specgram, lengths) Save audio torchaudio.save('hello-world.wav', waveforms, vocoder.sample_rate) ``` For the details of this API please refer to the documentation. You can also try this from the tutorial. (Beta) Self-Supervised Model Support
https://pytorch.org/blog/pytorch-1.10-new-library-releases/
pytorch blogs
(Beta) Self-Supervised Model Support

TorchAudio added the HuBERT model architecture and pre-trained weight support for wav2vec 2.0 and HuBERT. HuBERT and wav2vec 2.0 are novel approaches to audio representation learning, and they yield high accuracy when fine-tuned on downstream tasks. These models can serve as baselines in future research; therefore, TorchAudio provides a simple way to run them. Similar to the TTS pipeline, the pretrained weights and associated information, such as expected sample rates and output class labels (for fine-tuned weights), are put together as a bundle, so that they can be used to build pipelines. The following example illustrates this.

```python
import torchaudio

bundle = torchaudio.pipelines.HUBERT_ASR_LARGE

# Build the model and load pretrained weight.
model = bundle.get_model()
# Downloading: 100%|███████████████████████████████| 1.18G/1.18G [00:17<00:00, 73.8MB/s]

# Check the corresponding labels of the output.
labels = bundle.get_labels()
```
https://pytorch.org/blog/pytorch-1.10-new-library-releases/
pytorch blogs
```python
print(labels)
# ('<s>', '<pad>', '</s>', '<unk>', '|', 'E', 'T', 'A', 'O', 'N', 'I', 'H', 'S', 'R', 'D', 'L', 'U', 'M', 'W', 'C', 'F', 'G', 'Y', 'P', 'B', 'V', 'K', "'", 'X', 'J', 'Q', 'Z')

# Infer the label probability distribution
waveform, sample_rate = torchaudio.load('hello-world.wav')
emissions, _ = model(waveform)

# Pass emission to (hypothetical) decoder
transcripts = ctc_decode(emissions, labels)
print(transcripts[0])
# HELLO WORLD
```

Please refer to the documentation for more details and try out this feature using this tutorial.
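The ctc_decode function above is hypothetical; a possible greedy implementation (an illustrative sketch, assuming index 0 is the CTC blank token and '|' is the word delimiter) could look like this:

```python
import torch

def ctc_decode(emissions: torch.Tensor, labels, blank: int = 0):
    """Greedy CTC decoding: best label per frame, collapse repeats, drop blanks."""
    transcripts = []
    for emission in emissions:                       # iterate over the batch
        indices = torch.argmax(emission, dim=-1)     # most likely label per frame
        indices = torch.unique_consecutive(indices)  # collapse repeated frames
        chars = [labels[i] for i in indices if i != blank]
        transcripts.append("".join(chars).replace("|", " ").strip())
    return transcripts
```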
https://pytorch.org/blog/pytorch-1.10-new-library-releases/
pytorch blogs
(Beta) Multi-channel support and MVDR beamforming

Far-field speech recognition is a more challenging task compared to near-field recognition. Multi-channel methods such as beamforming help reduce noise and enhance the target speech. TorchAudio now adds support for differentiable Minimum Variance Distortionless Response (MVDR) beamforming on multi-channel audio using Time-Frequency masks. Researchers can easily assemble it with any multi-channel ASR pipeline. Three solutions are available (ref_channel, stv_evd, stv_power), and both single-channel and multi-channel masks are supported (multi-channel masks are averaged inside the method). An online option recursively updates the parameters for streaming audio. We also provide a tutorial on how to apply MVDR beamforming to multi-channel audio in the example directory.

```python
from torchaudio.transforms import MVDR, Spectrogram, InverseSpectrogram

# Load the multi-channel noisy audio
waveform_mix, sr = torchaudio.load('mix.wav')
```
https://pytorch.org/blog/pytorch-1.10-new-library-releases/
pytorch blogs
```python
# Initialize the stft and istft modules
stft = Spectrogram(n_fft=1024, hop_length=256, return_complex=True, power=None)
istft = InverseSpectrogram(n_fft=1024, hop_length=256)

# Get the noisy spectrogram
specgram_mix = stft(waveform_mix)

# Get the Time-Frequency mask via machine learning models
mask = model(waveform_mix)

# Initialize the MVDR module
mvdr = MVDR(ref_channel=0, solution="ref_channel", multi_mask=False)

# Apply MVDR beamforming
specgram_enhanced = mvdr(specgram_mix, mask)

# Get the enhanced waveform via iSTFT
waveform_enhanced = istft(specgram_enhanced, length=waveform_mix.shape[-1])
```

Please refer to the documentation for more details and try out this feature using the MVDR tutorial.
https://pytorch.org/blog/pytorch-1.10-new-library-releases/
pytorch blogs
(Beta) RNN Transducer Loss

The RNN transducer (RNNT) loss is part of the RNN transducer pipeline, which is a popular architecture for speech recognition tasks. Recently it has gotten attention for being used in a streaming setting, and has also achieved state-of-the-art WER for the LibriSpeech benchmark. TorchAudio's loss function supports float16 and float32 logits, has autograd and torchscript support, and can be run on both CPU and GPU; the GPU path has a custom CUDA kernel implementation for improved performance. The implementation is consistent with the original loss function in Sequence Transduction with Recurrent Neural Networks, but relies on code from Alignment Restricted Streaming Recurrent Neural Network Transducer. Special thanks to Jay Mahadeokar and Ching-Feng Yeh for their code contributions and guidance. Please refer to the documentation for more details.
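As a quick illustration, here is a minimal sketch of calling the loss via torchaudio.transforms.RNNTLoss; the tensor shapes, blank index and hyperparameters below are illustrative assumptions, not values from the post.

```python
import torch
from torchaudio.transforms import RNNTLoss

batch, max_frames, max_targets, num_classes = 2, 50, 10, 29  # illustrative sizes

# Joint network output: (batch, max_frames, max_targets + 1, num_classes)
logits = torch.randn(batch, max_frames, max_targets + 1, num_classes, requires_grad=True)
targets = torch.randint(1, num_classes, (batch, max_targets), dtype=torch.int32)
logit_lengths = torch.full((batch,), max_frames, dtype=torch.int32)
target_lengths = torch.full((batch,), max_targets, dtype=torch.int32)

rnnt_loss = RNNTLoss(blank=0, reduction="mean")
loss = rnnt_loss(logits, targets, logit_lengths, target_lengths)
loss.backward()  # autograd is supported
```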
https://pytorch.org/blog/pytorch-1.10-new-library-releases/
pytorch blogs
(Beta) Batch support and filter bank support

torchaudio.functional.lfilter now supports batch processing and multiple filters.

(Prototype) Emformer Module

Automatic speech recognition (ASR) research and productization have increasingly focused on on-device applications. Towards supporting such efforts, TorchAudio now includes Emformer, a memory-efficient transformer architecture that has achieved state-of-the-art results on LibriSpeech in low-latency streaming scenarios, as a prototype feature. Please refer to the documentation for more details.

GPU Build

GPU builds that support custom CUDA kernels in TorchAudio, like the one being used for RNN transducer loss, have been added. Following this change, TorchAudio's binary distribution now includes CPU-only versions and CUDA-enabled versions. To use CUDA-enabled binaries, PyTorch also needs to be compatible with CUDA.
https://pytorch.org/blog/pytorch-1.10-new-library-releases/
pytorch blogs
TorchVision 0.11

(Stable) New Models

RegNet and EfficientNet are two popular architectures that can be scaled to different computational budgets. In this release we include 22 pre-trained weights for their classification variants. The models were trained on ImageNet and the accuracies of the pre-trained models obtained on ImageNet val can be found below (see #4403, #4530 and #4293 for more details). The models can be used as follows:

```python
import torch
from torchvision import models

x = torch.rand(1, 3, 224, 224)

regnet = models.regnet_y_400mf(pretrained=True)
regnet.eval()
predictions = regnet(x)

efficientnet = models.efficientnet_b0(pretrained=True)
efficientnet.eval()
predictions = efficientnet(x)
```
https://pytorch.org/blog/pytorch-1.10-new-library-releases/
pytorch blogs
See the full list of new models on the [torchvision.models](https://pytorch.org/vision/master/models.html) documentation page. We would like to thank Ross Wightman and Luke Melas-Kyriazi for contributing the weights of the EfficientNet variants.

### (Beta) FX-based Feature Extraction

A new Feature Extraction method has been added to our utilities. It uses [torch.fx](https://pytorch.org/docs/stable/fx.html) and enables us to retrieve the outputs of intermediate layers of a network, which is useful for feature extraction and visualization. Here is an example of how to use the new utility:

```python
import torch
from torchvision.models import resnet50
from torchvision.models.feature_extraction import create_feature_extractor

x = torch.rand(1, 3, 224, 224)

model = resnet50()

return_nodes = {
    "layer4.2.relu_2": "layer4"
}
model2 = create_feature_extractor(model, return_nodes=return_nodes)
intermediate_outputs = model2(x)

print(intermediate_outputs['layer4'].shape)
```
https://pytorch.org/blog/pytorch-1.10-new-library-releases/
pytorch blogs
We would like to thank Alexander Soare for developing this utility.

(Stable) New Data Augmentations

Two new Automatic Augmentation techniques were added: RandAugment and Trivial Augment. They apply a series of transformations to the original data to enhance it and boost the performance of the models. The new techniques build on top of the previously added AutoAugment and focus on simplifying the approach, reducing the search space for the optimal policy, and improving the performance gain in terms of accuracy. These techniques enable users to reproduce recipes to achieve state-of-the-art performance on the offered models. Additionally, they enable users to apply these techniques in order to do transfer learning and achieve optimal accuracy on new datasets.
https://pytorch.org/blog/pytorch-1.10-new-library-releases/
pytorch blogs
Both methods can be used as drop-in replacements for the AutoAugment technique as seen below:

```python
from torchvision import transforms

t = transforms.RandAugment()
# t = transforms.TrivialAugmentWide()
transformed = t(image)

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.RandAugment(),  # transforms.TrivialAugmentWide()
    transforms.ToTensor()])
```

Read the automatic augmentation transforms for more details. We would like to thank Samuel G. Müller for contributing to Trivial Augment and for his help on refactoring the AA package.
https://pytorch.org/blog/pytorch-1.10-new-library-releases/
pytorch blogs
Updated Training Recipes

We have updated our training reference scripts to add support for Exponential Moving Average, Label Smoothing, Learning-Rate Warmup, Mixup, Cutmix and other SOTA primitives. The above enabled us to improve the classification Acc@1 of some pre-trained models by over 4 points. A major update of the existing pre-trained weights is expected in the next release.

Thanks for reading. If you're interested in these updates and want to join the PyTorch community, we encourage you to join the discussion forums and open GitHub issues. To get the latest news from PyTorch, follow us on Twitter, Medium, YouTube and LinkedIn.

Cheers!
Team PyTorch
https://pytorch.org/blog/pytorch-1.10-new-library-releases/
pytorch blogs
layout: blog_detail title: "Introducing TorchMultimodal - a library for accelerating exploration in Multimodal AI" author: Kartikay Khandelwal, Ankita De featured-img: "assets/images/torch-multimodal-feature-image.png" We are announcing TorchMultimodal Beta, a PyTorch domain library for training SoTA multi-task multimodal models at scale. The library provides composable building blocks (modules, transforms, loss functions) to accelerate model development, SoTA model architectures (FLAVA, MDETR, Omnivore) from published research, training and evaluation scripts, as well as notebooks for exploring these models. The library is under active development, and we’d love to hear your feedback! You can find more details on how to get started here. Why TorchMultimodal?
https://pytorch.org/blog/introducing-torchmultimodal/
pytorch blogs
Why TorchMultimodal?

Interest is rising around AI models that understand multiple input types (text, images, videos and audio signals), and optionally use this understanding to generate different forms of outputs (sentences, pictures, videos). Recent work from FAIR such as FLAVA, Omnivore and data2vec has shown that multimodal models for understanding are competitive with unimodal counterparts, and in some cases are establishing the new state-of-the-art. Generative models such as Make-a-video and Make-a-scene are redefining what modern AI systems can do.
https://pytorch.org/blog/introducing-torchmultimodal/
pytorch blogs
As interest in multimodal AI has grown, researchers are looking for tools and libraries to quickly experiment with ideas and build on top of the latest research in the field. While the PyTorch ecosystem has a rich repository of libraries and frameworks, it's not always obvious how components from these libraries interoperate with each other, or how they can be stitched together to build SoTA multimodal models. TorchMultimodal solves this problem by providing:

Composable and easy-to-use building blocks which researchers can use to accelerate model development and experimentation in their own workflows. These are designed to be modular, and can be easily extended to handle new modalities.

End-to-end examples for training and evaluating the latest models from research. These should serve as starting points for ongoing/future research, as well as examples for using advanced features such as integrating with FSDP and activation checkpointing for scaling up model and batch sizes.
https://pytorch.org/blog/introducing-torchmultimodal/
pytorch blogs
Introducing TorchMultimodal

TorchMultimodal is a PyTorch domain library for training multi-task multimodal models at scale. In the repository, we provide:

Building Blocks. A collection of modular and composable building blocks like models, fusion layers, loss functions, datasets and utilities. Some examples include:

Contrastive Loss with Temperature. Commonly used function for training models like CLIP and FLAVA. We also include variants such as ImageTextContrastiveLoss used in models like ALBEF.
https://pytorch.org/blog/introducing-torchmultimodal/
pytorch blogs
Codebook layers, which compress high-dimensional data by nearest-neighbor lookup in an embedding space and are a vital component of VQVAEs (provided as a model in the repository).

Shifted-window Attention. Window-based multi-head self-attention, a vital component of encoders like Swin 3D Transformers.

Components for CLIP. A popular model published by OpenAI which has proven to be extremely effective at learning text and image representations.
https://pytorch.org/blog/introducing-torchmultimodal/
pytorch blogs
Multimodal GPT. An abstraction that extends OpenAI's GPT architecture for multimodal generation when combined with the generation utility.

MultiHeadAttention. A critical component for attention-based models with support for fast auto-regressive decoding.
https://pytorch.org/blog/introducing-torchmultimodal/
pytorch blogs
Examples. A collection of examples that show how to combine these building blocks with components and common infrastructure (Lightning, TorchMetrics) from across the PyTorch Ecosystem to replicate state-of-the-art models published in the literature. We currently provide five examples, which include:

FLAVA [paper]. Official code for the paper accepted at CVPR, including a tutorial on finetuning FLAVA.
https://pytorch.org/blog/introducing-torchmultimodal/
pytorch blogs
MDETR [paper]. Collaboration with authors from NYU to provide an example which alleviates interoperability pain points in the PyTorch ecosystem, including a notebook on using MDETR for phrase grounding and visual question answering.

Omnivore [paper]. First example in TorchMultimodal of a model which deals with Video and 3D data, including a notebook for exploring the model.
https://pytorch.org/blog/introducing-torchmultimodal/
pytorch blogs
MUGEN [paper]. Foundational work for auto-regressive generation and retrieval, including demos for text-video generation and retrieval with a large-scale synthetic dataset enriched from OpenAI coinrun.

ALBEF [paper]. Code for the model, including a notebook for using this model for Visual Question Answering.

The following code snippet showcases an example usage of several TorchMultimodal components related to CLIP:
https://pytorch.org/blog/introducing-torchmultimodal/
pytorch blogs
```python
# instantiate clip transform
clip_transform = CLIPTransform()

# pass the transform to your dataset. Here we use coco captions
dataset = CocoCaptions(root= ..., annFile=..., transforms=clip_transform)
dataloader = DataLoader(dataset, batch_size=16)

# instantiate model. Here we use clip with vit-L as the image encoder
model = clip_vit_l14()

# define loss and other things needed for training
clip_loss = ContrastiveLossWithTemperature()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
epochs = 1

# write your train loop
for _ in range(epochs):
    for batch_idx, batch in enumerate(dataloader):
        image, text = batch
        image_embeddings, text_embeddings = model(image, text)
        loss = clip_loss(image_embeddings, text_embeddings)
        loss.backward()
        optimizer.step()
```
https://pytorch.org/blog/introducing-torchmultimodal/
pytorch blogs
Apart from the code, we are also releasing a tutorial for fine-tuning multimodal foundation models, and a blog post (with code pointers) on how to scale up such models using techniques from PyTorch Distributed (FSDP and activation checkpointing). We hope such examples and tutorials will serve to demystify a number of advanced features available in the PyTorch ecosystem.

What's Next?

While this is an exciting launch, there's a lot more to come. The library is under development and we are working on adding some of the exciting developments in the space of diffusion models, and examples to showcase common trends from research. As you explore and use the library, we'd love to hear any feedback you might have! You can find more details on how to get started here.
https://pytorch.org/blog/introducing-torchmultimodal/
pytorch blogs
Team

The primary contributors and developers of TorchMultimodal include Ankita De, Evan Smothers, Kartikay Khandelwal, Lan Gong, Laurence Rouesnel, Nahiyan Malik, Rafi Ayub and Yosua Michael Maranatha.
https://pytorch.org/blog/introducing-torchmultimodal/
pytorch blogs
layout: blog_detail
title: 'Announcing the Winners of the 2021 PyTorch Annual Hackathon'
author: Team PyTorch
featured-img: 'assets/images/social_hackathon21.png'

More than 1,900 people worked hard in this year's PyTorch Annual Hackathon to create unique tools and applications for PyTorch developers and researchers.

Notice: None of the projects submitted to the hackathon are associated with or offered by Meta Platforms, Inc.

This year, participants could enter their projects into the following three categories:

* PyTorch Developer Tools: a tool or library for improving productivity and efficiency for PyTorch researchers and developers.
* Web and Mobile Applications Powered by PyTorch: a web or mobile interface and/or an embedded device built using PyTorch.
https://pytorch.org/blog/announcing-the-winners-of-the-2021-pytorch-annual-hackathon/
pytorch blogs
* PyTorch Responsible AI Development Tools: a tool, library, or web/mobile app to support researchers and developers in creating responsible AI that factors in fairness, security, privacy, and more throughout its entire development process.

The virtual hackathon ran from September 8 through November 2, 2021, with more than 1,900 registered participants from 110 countries, submitting a total of 65 projects. Entrants were judged on their idea's quality, originality, potential impact, and how well they implemented it. All projects can be viewed here. Meet the winners of each category below!

PYTORCH DEVELOPER TOOLS

First Place: RaNNC

RaNNC is a middleware to automate hybrid model/data parallelism for training very large-scale neural networks, capable of training 100-billion-parameter models without any manual tuning.
https://pytorch.org/blog/announcing-the-winners-of-the-2021-pytorch-annual-hackathon/
pytorch blogs
Second Place: XiTorch

XiTorch provides first- and higher-order gradients of functional routines, such as optimization, rootfinding, and ODE solvers. It also contains operations for implicit linear operators (e.g. a large matrix that is expressed only by its matrix-vector multiplication), such as symmetric eigendecomposition, linear solve, and singular value decomposition.

Third Place: TorchLiberator

TorchLiberator automates model surgery, finding the maximum correspondence between weights in two networks.

Honorable Mentions

PADL manages your entire PyTorch workflow with a single Python abstraction and a beautiful functional API, so there's no more complex configuration or juggling of preprocessing, postprocessing and forward passes.
https://pytorch.org/blog/announcing-the-winners-of-the-2021-pytorch-annual-hackathon/
pytorch blogs
PyTree is a PyTorch package for recursive neural networks that provides highly generic recursive neural network implementations as well as efficient batching methods.

IndicLP makes it easier for developers and researchers to build applications and models in Indian languages, thus making NLP a more diverse field.

WEB/MOBILE APPLICATIONS POWERED BY PYTORCH

First Place: PyTorch Driving Guardian

PyTorch Driving Guardian is a tool that monitors driver alertness, emotional state, and potential blind spots on the road.

Second Place: Kronia

Kronia is an Android mobile app built to maximize the harvest outputs for farmers.

Third Place: Heyoh camera for Mac
https://pytorch.org/blog/announcing-the-winners-of-the-2021-pytorch-annual-hackathon/
pytorch blogs
Heyoh is a Mac virtual camera for Zoom and Meets that augments live video by recognizing hand gestures and smiles and shows animated effects to other video participants.

Honorable Mentions

Mamma AI is a tool that helps doctors with the breast cancer identification process by identifying areas likely to have cancer using ultrasonic and x-ray images.

AgingClock is a tool that predicts biological age, first with methylation genome data, then blood test data, and eventually with multimodal omics and lifestyle data.

Iris is an open-source photos platform, an alternative to Google Photos, with features such as listing photos, detecting categories, detecting and classifying faces from photos, and detecting and clustering by location and objects in photos.
https://pytorch.org/blog/announcing-the-winners-of-the-2021-pytorch-annual-hackathon/
pytorch blogs
PYTORCH RESPONSIBLE AI DEVELOPMENT TOOLS

First Place: FairWell

FairWell aims to address model bias against specific groups of people by allowing data scientists to evaluate their dataset and model predictions and take steps to make their datasets more inclusive and their models less biased.

Second Place: promp2slip

Promp2slip is a library that tests the ethics of language models by using natural adversarial texts.

Third Place: Phorch

Phorch adversarially attacks data using FIGA (Feature Importance Guided Attack) and creates three different attack sets of data based on certain parameters. These are then used to implement adversarial training as a defense against FIGA, using a neural net architecture in PyTorch.
https://pytorch.org/blog/announcing-the-winners-of-the-2021-pytorch-annual-hackathon/
pytorch blogs
Honorable Mentions

Greenops helps measure the footprint of deep learning models during training, testing and evaluation to reduce energy consumption and carbon footprint.

Xaitk-saliency is an open-source, explainable AI toolkit for visual saliency algorithm interfaces and implementations, built for analytic and autonomy applications.

Thank you,
Team PyTorch
https://pytorch.org/blog/announcing-the-winners-of-the-2021-pytorch-annual-hackathon/
pytorch blogs
layout: blog_detail
title: 'PyTorch 1.8 Release, including Compiler and Distributed Training updates, and New Mobile Tutorials'
author: Team PyTorch

We are excited to announce the availability of PyTorch 1.8. This release is composed of more than 3,000 commits since 1.7. It includes major updates and new features for compilation, code optimization, frontend APIs for scientific computing, and AMD ROCm support through binaries that are available via pytorch.org. It also provides improved features for large-scale training for pipeline and model parallelism, and gradient compression. A few of the highlights include:

1. Support for doing Python-to-Python functional transformations via torch.fx;
2. Added or stabilized APIs to support FFTs (torch.fft), Linear Algebra functions (torch.linalg), added support for autograd for complex tensors and updates to improve performance for calculating hessians and jacobians; and
https://pytorch.org/blog/pytorch-1.8-released/
pytorch blogs
3. Significant updates and improvements to distributed training including: improved NCCL reliability; pipeline parallelism support; RPC profiling; and support for communication hooks adding gradient compression.

See the full release notes here. Along with 1.8, we are also releasing major updates to PyTorch libraries including TorchCSPRNG, TorchVision, TorchText and TorchAudio. For more on the library releases, see the post here. As previously noted, features in PyTorch releases are classified as Stable, Beta and Prototype. You can learn more about the definitions in the post here.
https://pytorch.org/blog/pytorch-1.8-released/
pytorch blogs
New and Updated APIs

The PyTorch 1.8 release brings a host of new and updated API surfaces, ranging from additional APIs for NumPy compatibility to support for ways to improve and scale your code for performance at both inference and training time. Here is a brief summary of the major features coming in this release:

[Stable] Torch.fft support for high performance NumPy style FFTs

As part of PyTorch's goal to support scientific computing, we have invested in improving our FFT support and with PyTorch 1.8, we are releasing the torch.fft module. This module implements the same functions as NumPy's np.fft module, but with support for hardware acceleration and autograd.
* See this blog post for more details
* Documentation

[Beta] Support for NumPy style linear algebra functions via torch.linalg
https://pytorch.org/blog/pytorch-1.8-released/
pytorch blogs
The torch.linalg module, modeled after NumPy's np.linalg module, brings NumPy-style support for common linear algebra operations including Cholesky decompositions, determinants, eigenvalues and many others.
* Documentation

[Beta] Python code Transformations with FX

FX allows you to write transformations of the form transform(input_module : nn.Module) -> nn.Module, where you can feed in a Module instance and get a transformed Module instance out of it.
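To make the transform(module) -> module idea concrete, here is a small, self-contained sketch (not taken from the post) that uses torch.fx to capture a module and swap every torch.add call for torch.mul; the toy module M and the substitution itself are illustrative assumptions.

```python
import torch
import torch.fx

class M(torch.nn.Module):
    def forward(self, x, y):
        return torch.add(x, y)

def transform(input_module: torch.nn.Module) -> torch.nn.Module:
    traced = torch.fx.symbolic_trace(input_module)    # program capture into a Graph
    for node in traced.graph.nodes:
        if node.op == "call_function" and node.target == torch.add:
            node.target = torch.mul                   # rewrite the operation
    traced.graph.lint()                               # sanity-check the rewritten graph
    traced.recompile()                                # regenerate the Python code
    return traced

transformed = transform(M())
print(transformed(torch.ones(2), torch.full((2,), 3.0)))  # tensor([3., 3.])
```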
https://pytorch.org/blog/pytorch-1.8-released/
pytorch blogs
This kind of functionality is applicable in many scenarios. For example, the FX-based Graph Mode Quantization product is releasing as a prototype contemporaneously with FX. Graph Mode Quantization automates the process of quantizing a neural net and does so by leveraging FX’s program capture, analysis and transformation facilities. We are also developing many other transformation products with FX and we are excited to share this powerful toolkit with the community. Because FX transforms consume and produce nn.Module instances, they can be used within many existing PyTorch workflows. This includes workflows that, for example, train in Python then deploy via TorchScript.
https://pytorch.org/blog/pytorch-1.8-released/
pytorch blogs
You can read more about FX in the official documentation. You can also find several examples of program transformations implemented using torch.fx here. We are constantly improving FX and invite you to share any feedback you have about the toolkit on the forums or issue tracker. We’d like to acknowledge TorchScript tracing, Apache MXNet hybridize, and more recently JAX as influences for program acquisition via tracing. We’d also like to acknowledge Caffe2, JAX, and TensorFlow as inspiration for the value of simple, directed dataflow graph program representations and transformations over those representations.
https://pytorch.org/blog/pytorch-1.8-released/
pytorch blogs
Distributed Training

The PyTorch 1.8 release added a number of new features as well as improvements to reliability and usability. Concretely: stable-level async error/timeout handling was added to improve NCCL reliability, and RPC-based profiling is now stable. Additionally, we have added support for pipeline parallelism as well as gradient compression through the use of communication hooks in DDP. Details are below:

[Beta] Pipeline Parallelism

As machine learning models continue to grow in size, traditional Distributed DataParallel (DDP) training no longer scales as these models don't fit on a single GPU device. The new pipeline parallelism feature provides an easy-to-use PyTorch API to leverage pipeline parallelism as part of your training loop.
* RFC
https://pytorch.org/blog/pytorch-1.8-released/
pytorch blogs
* Documentation

[Beta] DDP Communication Hook

The DDP communication hook is a generic interface to control how to communicate gradients across workers by overriding the vanilla allreduce in DistributedDataParallel. A few built-in communication hooks are provided including PowerSGD, and users can easily apply any of these hooks to optimize communication. Additionally, the communication hook interface can also support user-defined communication strategies for more advanced use cases.
* RFC
* Documentation
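As a rough sketch of how a built-in hook is applied (illustrative only: the model, process-group setup and the PowerSGD hyperparameters are assumptions, and the module path is the one used by recent PyTorch releases):

```python
import torch
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed.algorithms.ddp_comm_hooks import powerSGD_hook as powerSGD

# assumes torch.distributed.init_process_group(...) has already run on each worker;
# MyModel and local_rank are placeholders
model = DDP(MyModel().cuda(local_rank), device_ids=[local_rank])

state = powerSGD.PowerSGDState(
    process_group=None,           # use the default process group
    matrix_approximation_rank=1,  # lower rank = more compression, less accuracy
    start_powerSGD_iter=1000,     # warm up with vanilla allreduce first
)
model.register_comm_hook(state, powerSGD.powerSGD_hook)
```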
https://pytorch.org/blog/pytorch-1.8-released/
pytorch blogs
Additional Prototype Features for Distributed Training

In addition to the major stable and beta distributed training features in this release, we also have a number of prototype features available in our nightlies to try out and provide feedback. We have linked in the draft docs below for reference:

(Prototype) ZeroRedundancyOptimizer - Based on and in partnership with the Microsoft DeepSpeed team, this feature helps reduce per-process memory footprint by sharding optimizer states across all participating processes in the ProcessGroup gang. Refer to this documentation for more details.

(Prototype) Process Group NCCL Send/Recv - The NCCL send/recv API was introduced in v2.7 and this feature adds support for it in NCCL process groups. This feature will provide an option for users to implement collective operations at the Python layer instead of the C++ layer. Refer to this documentation and code examples to learn more.
https://pytorch.org/blog/pytorch-1.8-released/
pytorch blogs
(Prototype) CUDA-support in RPC using TensorPipe - This feature should bring consequent speed improvements for users of PyTorch RPC with multiple-GPU machines, as TensorPipe will automatically leverage NVLink when available, and avoid costly copies to and from host memory when exchanging GPU tensors between processes. When not on the same machine, TensorPipe will fall back to copying the tensor to host memory and sending it as a regular CPU tensor. This will also improve the user experience as users will be able to treat GPU tensors like regular CPU tensors in their code. Refer to this documentation for more details.
https://pytorch.org/blog/pytorch-1.8-released/
pytorch blogs
(Prototype) Remote Module - This feature allows users to operate a module on a remote worker like using a local module, where the RPCs are transparent to the user. In the past, this functionality was implemented in an ad-hoc way and overall this feature will improve the usability of model parallelism on PyTorch. Refer to this documentation for more details.

PyTorch Mobile

Support for PyTorch Mobile is expanding with a new set of tutorials to help new users launch models on-device quicker and give existing users a tool to get more out of our framework. These include:
* Image segmentation DeepLabV3 on iOS
* Image segmentation DeepLabV3 on Android
https://pytorch.org/blog/pytorch-1.8-released/
pytorch blogs
Our new demo apps also include examples of image segmentation, object detection, neural machine translation, question answering, and vision transformers. They are available on both iOS and Android:
* iOS demo app
* Android demo app

In addition to performance improvements on CPU for MobileNetV3 and other models, we also revamped our Android GPU backend prototype for broader model coverage and faster inferencing:
* Android tutorial

Lastly, we are launching the PyTorch Mobile Lite Interpreter as a prototype feature in this release. The Lite Interpreter allows users to reduce the runtime binary size. Please try these out and send us your feedback on the PyTorch Forums. All our latest updates can be found on the PyTorch Mobile page.
https://pytorch.org/blog/pytorch-1.8-released/
pytorch blogs
[Prototype] PyTorch Mobile Lite Interpreter

PyTorch Lite Interpreter is a streamlined version of the PyTorch runtime that can execute PyTorch programs on resource-constrained devices, with a reduced binary size footprint. This prototype feature reduces binary sizes by up to 70% compared to the current on-device runtime.
* iOS/Android Tutorial

Performance Optimization

In 1.8, we are releasing support for benchmark utils to enable users to better monitor performance. We are also opening up a new automated quantization API. See the details below:

(Beta) Benchmark utils

Benchmark utils allows users to take accurate performance measurements, and provides composable tools to help with both benchmark formulation and post processing. This is expected to be helpful for contributors to PyTorch to quickly understand how their contributions are impacting PyTorch performance.

Example:
https://pytorch.org/blog/pytorch-1.8-released/
pytorch blogs
```python
from torch.utils.benchmark import Timer

results = []
for num_threads in [1, 2, 4]:
    timer = Timer(
        stmt="torch.add(x, y, out=out)",
        setup="""
            n = 1024
            x = torch.ones((n, n))
            y = torch.ones((n, 1))
            out = torch.empty((n, n))
        """,
        num_threads=num_threads,
    )
    results.append(timer.blocked_autorange(min_run_time=5))
    print(
        f"{num_threads} thread{'s' if num_threads > 1 else ' ':<4}"
        f"{results[-1].median * 1e6:>4.0f} us " +
        (f"({results[0].median / results[-1].median:.1f}x)" if num_threads > 1 else '')
    )

# 1 thread     376 us
# 2 threads    189 us (2.0x)
# 4 threads     99 us (3.8x)
```

* Documentation
* Tutorial
https://pytorch.org/blog/pytorch-1.8-released/
pytorch blogs
(Prototype) FX Graph Mode Quantization

FX Graph Mode Quantization is the new automated quantization API in PyTorch. It improves upon Eager Mode Quantization by adding support for functionals and automating the quantization process, although people might need to refactor the model to make it compatible with FX Graph Mode Quantization (symbolically traceable with torch.fx).
* Documentation
* Tutorials:
  * (Prototype) FX Graph Mode Post Training Dynamic Quantization
  * (Prototype) FX Graph Mode Post Training Static Quantization
  * (Prototype) FX Graph Mode Quantization User Guide
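As a rough illustration of the workflow, here is a hedged sketch based on the 1.8-era torch.quantization.quantize_fx API (the signatures and module paths have changed in later releases); the toy model and calibration loop are assumptions, not taken from the post.

```python
import torch
from torch.quantization import get_default_qconfig
from torch.quantization.quantize_fx import prepare_fx, convert_fx

# toy float model (illustrative assumption)
float_model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU()).eval()

# quantize the whole model with the default post-training static config
qconfig_dict = {"": get_default_qconfig("fbgemm")}

prepared = prepare_fx(float_model, qconfig_dict)  # insert observers via FX program capture

# calibrate with representative data
with torch.no_grad():
    for _ in range(10):
        prepared(torch.randn(4, 16))

quantized = convert_fx(prepared)                  # convert the observed model to a quantized one
print(quantized(torch.randn(4, 16)).shape)
```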
https://pytorch.org/blog/pytorch-1.8-released/
pytorch blogs
Hardware Support

[Beta] Ability to Extend the PyTorch Dispatcher for a new backend in C++

In PyTorch 1.8, you can now create new out-of-tree devices that live outside the pytorch/pytorch repo. The tutorial linked below shows how to register your device and keep it in sync with native PyTorch devices.
* Tutorial

[Beta] AMD GPU Binaries Now Available

Starting in PyTorch 1.8, we have added support for ROCm wheels, providing easy onboarding to using AMD GPUs. You can simply go to the standard PyTorch installation selector, choose ROCm as an installation option, and execute the provided command.

Thanks for reading, and if you are excited about these updates and want to participate in the future of PyTorch, we encourage you to join the discussion forums and open GitHub issues.

Cheers!
Team PyTorch
https://pytorch.org/blog/pytorch-1.8-released/
pytorch blogs
layout: blog_detail
title: 'Efficient PyTorch I/O library for Large Datasets, Many Files, Many GPUs'
author: Alex Aizman, Gavin Maltby, Thomas Breuel

Data sets are growing bigger every day and GPUs are getting faster. This means there are more data sets for deep learning researchers and engineers to train and validate their models. Many datasets for research in still image recognition are becoming available with 10 million or more images, including OpenImages and Places. Eight million YouTube videos (YouTube 8M) consume about 300 TB in 720p, used for research in object recognition, video analytics, and action recognition. The Tobacco Corpus consists of about 20 million scanned HD pages, useful for OCR and text analytics research.
https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/
pytorch blogs
Although the most commonly encountered big data sets right now involve images and videos, big datasets occur in many other domains and involve many other kinds of data types: web pages, financial transactions, network traces, brain scans, etc. However, working with such large amounts of data presents a number of challenges:

Dataset Size: datasets often exceed the capacity of node-local disk storage, requiring distributed storage systems and efficient network access.

Number of Files: datasets often consist of billions of files with uniformly random access patterns, something that often overwhelms both local and network file systems.

Data Rates: training jobs on large datasets often use many GPUs, requiring aggregate I/O bandwidths to the dataset of many GBytes/s; these can only be satisfied by massively parallel I/O systems.

Shuffling and Augmentation: training data needs to be shuffled and augmented prior to training.
https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/
pytorch blogs
Scalability: users often want to develop and test on small datasets and then rapidly scale up to large datasets. Traditional local and network file systems, and even object storage servers, are not designed for these kinds of applications. The WebDataset I/O library for PyTorch, together with the optional AIStore server and Tensorcom RDMA libraries, provide an efficient, simple, and standards-based solution to all these problems. The library is simple enough for day-to-day use, is based on mature open source standards, and is easy to migrate to from existing file-based datasets.
https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/
pytorch blogs
Using WebDataset is simple and requires little effort, and it will let you scale up the same code from running local experiments to using hundreds of GPUs on clusters or in the cloud with linearly scalable performance. Even on small problems and on your desktop, it can speed up I/O tenfold and simplify data management and processing of large datasets. The rest of this blog post tells you how to get started with WebDataset and how it works.

The WebDataset Library

The WebDataset library provides a simple solution to the challenges listed above. Currently, it is available as a separate library (github.com/tmbdev/webdataset), but it is on track for being incorporated into PyTorch (see RFC 38419). The WebDataset implementation is small (about 1500 LOC) and has no external dependencies.
https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/
pytorch blogs
Instead of inventing a new format, WebDataset represents large datasets as collections of POSIX tar archive files consisting of the original data files. The WebDataset library can use such tar archives directly for training, without the need for unpacking or local storage. WebDataset scales perfectly from small, local datasets to petascale datasets and training on hundreds of GPUs and allows data to be stored on local disk, on web servers, or dedicated file servers. For container-based training, WebDataset eliminates the need for volume plugins or node-local storage. As an additional benefit, datasets need not be unpacked prior to training, simplifying the distribution and use of research data.
https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/
pytorch blogs
WebDataset implements PyTorch's IterableDataset interface and can be used like existing DataLoader-based code. Since data is stored as files inside an archive, existing loading and data augmentation code usually requires minimal modification. The WebDataset library is a complete solution for working with large datasets and distributed training in PyTorch (and also works with TensorFlow, Keras, and DALI via their Python APIs). Since POSIX tar archives are a standard, widely supported format, it is easy to write other tools for manipulating datasets in this format. E.g., the tarp command is written in Go and can shuffle and process training datasets.
https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/
pytorch blogs
Benefits

The use of sharded, sequentially readable formats is essential for very large datasets. In addition, it has benefits in many other environments. WebDataset provides a solution that scales well from small problems on a desktop machine to very large deep learning problems in clusters or in the cloud. The following table summarizes some of the benefits in different environments.

{:.table.table-striped.table-bordered}
| Environment | Benefits of WebDataset |
| ------------- | ------------- |
| Local Cluster with AIStore | AIStore can be deployed easily as K8s containers and offers linear scalability and near 100% utilization of network and I/O bandwidth. Suitable for petascale deep learning. |
| Cloud Computing | WebDataset deep learning jobs can be trained directly against datasets stored in cloud buckets; no volume plugins required. Local and cloud jobs work identically. Suitable for petascale learning. |
https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/
pytorch blogs
| Local Cluster with existing distributed FS or object store | WebDataset's large sequential reads improve performance with existing distributed stores and eliminate the need for dedicated volume plugins. |
| Educational Environments | WebDatasets can be stored on existing web servers and web caches, and can be accessed directly by students by URL. |
| Training on Workstations from Local Drives | Jobs can start training as the data still downloads. Data doesn't need to be unpacked for training. Ten-fold improvements in I/O performance on hard drives over random access file-based datasets. |
| All Environments | Datasets are represented in an archival format and contain metadata such as file types. Data is compressed in native formats (JPEG, MP4, etc.). Data management, ETL-style jobs, and data transformations and I/O are simplified and easily parallelized. |

We will be adding more examples giving benchmarks and showing how to use WebDataset in these environments over the coming months.
https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/
pytorch blogs
High-Performance

For high-performance computation on local clusters, the companion open-source AIStore server provides full disk-to-GPU I/O bandwidth, subject only to hardware constraints. This Bigdata 2019 Paper contains detailed benchmarks and performance measurements. In addition to benchmarks, research projects at NVIDIA and Microsoft have used WebDataset for petascale datasets and billions of training samples. Below is a benchmark of AIStore with WebDataset clients using 12 server nodes with 10 rotational drives each.
https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/
pytorch blogs
The left axis shows the aggregate bandwidth from the cluster, while the right scale shows the measured per-drive I/O bandwidth. WebDataset and AIStore scale linearly to about 300 clients, at which point they are increasingly limited by the maximum I/O bandwidth available from the rotational drives (about 150 MBytes/s per drive). For comparison, HDFS is shown. HDFS uses a similar approach to AIStore/WebDataset and also exhibits linear scaling up to about 192 clients; at that point, it hits a performance limit of about 120 MBytes/s per drive, and it failed when using more than 1024 clients. Unlike HDFS, the WebDataset-based code just uses standard URLs and HTTP to access data and works identically with local files, with files stored on web servers, and with AIStore. For comparison, NFS in similar experiments delivers about 10-20 MBytes/s per drive.
https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/
pytorch blogs
Storing Datasets in Tar Archives

The format used for WebDataset is standard POSIX tar archives, the same archives used for backup and data distribution. In order to use the format to store training samples for deep learning, we adopt some simple naming conventions:
* datasets are POSIX tar archives
* each training sample consists of adjacent files with the same basename
* shards are numbered consecutively

For example, ImageNet is stored in 1282 separate 100 Mbyte shards with names imagenet-train-000000.tar to imagenet-train-001281.tar; the contents of the first shard are:

```
-r--r--r-- bigdata/bigdata      3 2020-05-08 21:23 n03991062_24866.cls
-r--r--r-- bigdata/bigdata 108611 2020-05-08 21:23 n03991062_24866.jpg
-r--r--r-- bigdata/bigdata      3 2020-05-08 21:23 n07749582_9506.cls
-r--r--r-- bigdata/bigdata 129044 2020-05-08 21:23 n07749582_9506.jpg
-r--r--r-- bigdata/bigdata      3 2020-05-08 21:23 n03425413_23604.cls
```
https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/
pytorch blogs
```
-r--r--r-- bigdata/bigdata 106255 2020-05-08 21:23 n03425413_23604.jpg
-r--r--r-- bigdata/bigdata      3 2020-05-08 21:23 n02795169_27274.cls
```

WebDataset datasets can be used directly from local disk, from web servers (hence the name), from cloud storage and object stores, just by changing a URL. WebDataset datasets can be used for training without unpacking, and training can even be carried out on streaming data, with no local storage. Shuffling during training is important for many deep learning applications, and WebDataset performs shuffling both at the shard level and at the sample level. Splitting of data across multiple workers is performed at the shard level using a user-provided shard_selection function that defaults to a function that splits based on get_worker_info. (WebDataset can be combined with the tensorcom library to offload decompression/data augmentation and provide RDMA and direct-to-GPU loading; see below.)
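To make the naming convention concrete, here is a small sketch (not from the post) that writes one shard with Python's standard tarfile module; the sample data and file names are placeholders.

```python
import io
import tarfile

# hypothetical samples: (basename, jpeg bytes, class label bytes)
samples = [
    ("n03991062_24866", b"<jpeg bytes>", b"3"),
    ("n07749582_9506", b"<jpeg bytes>", b"0"),
]

# each training sample is a group of adjacent files sharing a basename;
# shards are numbered consecutively (here: shard 000000)
with tarfile.open("imagenet-train-000000.tar", "w") as tar:
    for basename, jpg_bytes, cls_bytes in samples:
        for suffix, payload in ((".jpg", jpg_bytes), (".cls", cls_bytes)):
            info = tarfile.TarInfo(name=basename + suffix)
            info.size = len(payload)
            tar.addfile(info, io.BytesIO(payload))
```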
https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/
pytorch blogs
Code Sample

Here are some code snippets illustrating the use of WebDataset in a typical PyTorch deep learning application (you can find a full example at http://github.com/tmbdev/pytorch-imagenet-wds).

```python
import webdataset as wds
import ...

sharedurl = "/imagenet/imagenet-train-{000000..001281}.tar"

normalize = transforms.Normalize(
    mean=[0.485, 0.456, 0.406],
    std=[0.229, 0.224, 0.225])

preproc = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    normalize,
])

dataset = (
    wds.Dataset(sharedurl)
    .shuffle(1000)
    .decode("pil")
    .rename(image="jpg;png", data="json")
    .map_dict(image=preproc)
    .to_tuple("image", "data")
)

loader = torch.utils.data.DataLoader(dataset, batch_size=64, num_workers=8)

for inputs, targets in loader:
    ...
```
https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/
pytorch blogs
This code is nearly identical to the file-based I/O pipeline found in the PyTorch ImageNet example: it creates a preprocessing/augmentation pipeline, instantiates a dataset using that pipeline and a data source location, and then constructs a DataLoader instance from the dataset. WebDataset uses a fluent API for configuration that internally builds up a processing pipeline. In this example, WebDataset is used with the PyTorch DataLoader class, which replicates Dataset instances across multiple worker processes and performs both parallel I/O and parallel data augmentation.

Without any added processing stages, WebDataset instances themselves just iterate through each training sample as a dictionary:

```python
# load from a web server using a separate client process
sharedurl = "pipe:curl -s http://server/imagenet/imagenet-train-{000000..001281}.tar"

dataset = wds.Dataset(sharedurl)

for sample in dataset:
    # sample["jpg"] contains the raw image data
    # sample["cls"] contains the class
    ...
```
https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/
pytorch blogs
sample["jpg"] contains the raw image data # sample["cls"] contains the class ... ``` For a general introduction to how we handle large scale training with WebDataset, see these YouTube videos. Related Software AIStore is an open-source object store capable of full-bandwidth disk-to-GPU data delivery (meaning that if you have 1000 rotational drives with 200 MB/s read speed, AIStore actually delivers an aggregate bandwidth of 200 GB/s to the GPUs). AIStore is fully compatible with WebDataset as a client, and in addition understands the WebDataset format, permitting it to perform shuffling, sorting, ETL, and some map-reduce operations directly in the storage system. AIStore can be thought of as a remix of a distributed object store, a network file system, a distributed database, and a GPU-accelerated map-reduce implementation.
https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/
pytorch blogs
tarp is a small command-line program for splitting, merging, shuffling, and processing tar archives and WebDataset datasets.
tensorcom is a library supporting distributed data augmentation and RDMA to GPU.
pytorch-imagenet-wds contains an example of how to use WebDataset with ImageNet, based on the PyTorch ImageNet example.
Bigdata 2019 Paper with Benchmarks

Check out the library and provide your feedback for RFC 38419.
https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/
pytorch blogs
layout: blog_detail
title: 'PyTorch 0.4.0 Migration Guide'
redirect_from: /2018/04/22/0_4_0-migration-guide.html

Welcome to the migration guide for PyTorch 0.4.0. In this release we introduced many exciting new features and critical bug fixes, with the goal of providing users a better and cleaner interface. In this guide, we will cover the most important changes in migrating existing code from previous versions:

Tensors and Variables have merged
Support for 0-dimensional (scalar) Tensors
Deprecation of the volatile flag
dtypes, devices, and NumPy-style Tensor creation functions
Writing device-agnostic code
New edge-case constraints on names of submodules, parameters, and buffers in nn.Module

Merging Tensor and Variable classes
https://pytorch.org/blog/pytorch-0_4_0-migration-guide/
pytorch blogs
torch.Tensor and torch.autograd.Variable are now the same class. More precisely, torch.Tensor is capable of tracking history and behaves like the old Variable; Variable wrapping continues to work as before but returns an object of type torch.Tensor. This means that you don't need the Variable wrapper everywhere in your code anymore.

The type() of a Tensor has changed

Note also that the type() of a Tensor no longer reflects the data type. Use isinstance() or x.type() instead:
```python
>>> x = torch.DoubleTensor([1, 1, 1])
>>> print(type(x))  # was torch.DoubleTensor
"<class 'torch.Tensor'>"
>>> print(x.type())  # OK: 'torch.DoubleTensor'
'torch.DoubleTensor'
>>> print(isinstance(x, torch.DoubleTensor))  # OK: True
True
```
https://pytorch.org/blog/pytorch-0_4_0-migration-guide/
pytorch blogs
When does autograd start tracking history now?

requires_grad, the central flag for autograd, is now an attribute on Tensors. The same rules previously used for Variables apply to Tensors; autograd starts tracking history when any input Tensor of an operation has requires_grad=True. For example,
```python
>>> x = torch.ones(1)  # create a tensor with requires_grad=False (default)
>>> x.requires_grad
False
>>> y = torch.ones(1)  # another tensor with requires_grad=False
>>> z = x + y
>>> # both inputs have requires_grad=False. so does the output
>>> z.requires_grad
False
>>> # then autograd won't track this computation. let's verify!
>>> z.backward()
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
>>> # now create a tensor with requires_grad=True
>>> w = torch.ones(1, requires_grad=True)
https://pytorch.org/blog/pytorch-0_4_0-migration-guide/
pytorch blogs
>>> w.requires_grad
True
>>> # add to the previous result that has requires_grad=False
>>> total = w + z
>>> # the total sum now requires grad!
>>> total.requires_grad
True
>>> # autograd can compute the gradients as well
>>> total.backward()
>>> w.grad
tensor([ 1.])
>>> # and no computation is wasted to compute gradients for x, y and z, which don't require grad
>>> z.grad == x.grad == y.grad == None
True
```

#### Manipulating `requires_grad` flag

Other than directly setting the attribute, you can change this flag `in-place` using [`my_tensor.requires_grad_()`](https://pytorch.org/docs/0.4.0/tensors.html#torch.Tensor.requires_grad_), or, as in the above example, at creation time by passing it in as an argument (default is `False`), e.g.,
```python
>>> existing_tensor.requires_grad_()
>>> existing_tensor.requires_grad
True
>>> my_tensor = torch.zeros(3, 4, requires_grad=True)
>>> my_tensor.requires_grad
True
```
https://pytorch.org/blog/pytorch-0_4_0-migration-guide/
pytorch blogs
What about .data?

.data was the primary way to get the underlying Tensor from a Variable. After this merge, calling y = x.data still has similar semantics. So y will be a Tensor that shares the same data with x, is unrelated to the computation history of x, and has requires_grad=False.

However, .data can be unsafe in some cases. Any changes on x.data wouldn't be tracked by autograd, and the computed gradients would be incorrect if x is needed in a backward pass. A safer alternative is to use x.detach(), which also returns a Tensor that shares data with x and has requires_grad=False, but will have its in-place changes reported by autograd if x is needed in backward.

Here is an example of the difference between .data and x.detach() (and why we recommend using detach in general). If you use Tensor.detach(), the gradient computation is guaranteed to be correct.
```python
https://pytorch.org/blog/pytorch-0_4_0-migration-guide/
pytorch blogs
>>> a = torch.tensor([1, 2, 3.], requires_grad=True)
>>> out = a.sigmoid()
>>> c = out.detach()
>>> c.zero_()
tensor([ 0., 0., 0.])

>>> out  # modified by c.zero_() !!
tensor([ 0., 0., 0.])

>>> out.sum().backward()  # requires the original value of out, but that was overwritten by c.zero_()
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
```

However, using Tensor.data can be unsafe and can easily result in incorrect gradients when a tensor is required for gradient computation but modified in-place.

```python
>>> a = torch.tensor([1, 2, 3.], requires_grad=True)
>>> out = a.sigmoid()
>>> c = out.data
>>> c.zero_()
tensor([ 0., 0., 0.])

>>> out  # out was modified by c.zero_()
tensor([ 0., 0., 0.])

>>> out.sum().backward()
>>> a.grad  # the result is very, very wrong because `out` changed!
tensor([ 0., 0., 0.])
```
https://pytorch.org/blog/pytorch-0_4_0-migration-guide/
pytorch blogs
Support for 0-dimensional (scalar) Tensors

Previously, indexing into a Tensor vector (1-dimensional tensor) gave a Python number but indexing into a Variable vector gave (inconsistently!) a vector of size (1,)! Similar behavior existed with reduction functions, e.g. tensor.sum() would return a Python number, but variable.sum() would return a vector of size (1,).

Fortunately, this release introduces proper scalar (0-dimensional tensor) support in PyTorch! Scalars can be created using the new torch.tensor function (which will be explained in more detail later; for now just think of it as the PyTorch equivalent of numpy.array). Now you can do things like:
```python
>>> torch.tensor(3.1416)         # create a scalar directly
tensor(3.1416)
>>> torch.tensor(3.1416).size()  # scalar is 0-dimensional
torch.Size([])
>>> torch.tensor([3]).size()     # compare to a vector of size 1
torch.Size([1])
>>>
>>> vector = torch.arange(2, 6)  # this is a vector
>>> vector
tensor([ 2., 3., 4., 5.])
https://pytorch.org/blog/pytorch-0_4_0-migration-guide/
pytorch blogs
>>> vector.size()
torch.Size([4])
>>> vector[3]                    # indexing into a vector gives a scalar
tensor(5.)
>>> vector[3].item()             # .item() gives the value as a Python number
5.0
>>> mysum = torch.tensor([2, 3]).sum()
>>> mysum
tensor(5)
>>> mysum.size()
torch.Size([])
```

Accumulating losses

Consider the widely used pattern total_loss += loss.data[0]. Before 0.4.0, loss was a Variable wrapping a tensor of size (1,), but in 0.4.0 loss is now a scalar and has 0 dimensions. Indexing into a scalar doesn't make sense (it gives a warning now, but will be a hard error in 0.5.0). Use loss.item() to get the Python number from a scalar.
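As an illustrative sketch (not from the original guide; model, criterion, optimizer, and train_loader are placeholder names), the recommended accumulation pattern looks like this:
```python
total_loss = 0.0
for input, target in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(input), target)
    loss.backward()
    optimizer.step()
    # loss.item() returns a plain Python number, detached from the autograd graph,
    # so total_loss does not keep the graph (and its memory) alive
    total_loss += loss.item()
```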
https://pytorch.org/blog/pytorch-0_4_0-migration-guide/
pytorch blogs
Note that if you don't convert to a Python number when accumulating losses, you may find increased memory usage in your program. This is because the right-hand side of the above expression used to be a Python float, while it is now a zero-dim Tensor. The total loss is thus accumulating Tensors and their gradient history, which may keep around large autograd graphs for much longer than necessary.

Deprecation of volatile flag

The volatile flag is now deprecated and has no effect. Previously, any computation that involves a Variable with volatile=True wouldn't be tracked by autograd. This has now been replaced by a set of more flexible context managers including torch.no_grad(), torch.set_grad_enabled(grad_mode), and others.
```python
>>> x = torch.zeros(1, requires_grad=True)
>>> with torch.no_grad():
...     y = x * 2
>>> y.requires_grad
False
>>>
>>> is_train = False
https://pytorch.org/blog/pytorch-0_4_0-migration-guide/
pytorch blogs
>>> with torch.set_grad_enabled(is_train):
...     y = x * 2
>>> y.requires_grad
False
>>>
>>> torch.set_grad_enabled(True)  # this can also be used as a function
>>> y = x * 2
>>> y.requires_grad
True
>>>
>>> torch.set_grad_enabled(False)
>>> y = x * 2
>>> y.requires_grad
False
```

dtypes, devices and NumPy-style creation functions

In previous versions of PyTorch, we used to specify data type (e.g. float vs double), device type (cpu vs cuda) and layout (dense vs sparse) together as a "tensor type". For example, torch.cuda.sparse.DoubleTensor was the Tensor type representing the double data type, living on CUDA devices, and with COO sparse tensor layout.
https://pytorch.org/blog/pytorch-0_4_0-migration-guide/
pytorch blogs
In this release, we introduce torch.dtype, torch.device and torch.layout classes to allow better management of these properties via NumPy-style creation functions.

torch.dtype

Below is a complete list of available torch.dtypes (data types) and their corresponding tensor types.

| Data type | torch.dtype | Tensor types |
|---|---|---|
| 32-bit floating point | torch.float32 or torch.float | torch.*.FloatTensor |
| 64-bit floating point | torch.float64 or torch.double | torch.*.DoubleTensor |
| 16-bit floating point | torch.float16 or torch.half | torch.*.HalfTensor |
https://pytorch.org/blog/pytorch-0_4_0-migration-guide/
pytorch blogs
| 8-bit integer (unsigned) | torch.uint8 | torch.*.ByteTensor |
| 8-bit integer (signed) | torch.int8 | torch.*.CharTensor |
| 16-bit integer (signed) | torch.int16 or torch.short | torch.*.ShortTensor |
| 32-bit integer (signed) | torch.int32 or torch.int | torch.*.IntTensor |
| 64-bit integer (signed) | torch.int64 or torch.long | torch.*.LongTensor |

The dtype of a tensor can be accessed via its dtype attribute.

torch.device

A torch.device contains a device type ('cpu' or 'cuda') and an optional device ordinal (id) for the device type. It can be initialized with torch.device('{device_type}') or torch.device('{device_type}:{device_ordinal}').
https://pytorch.org/blog/pytorch-0_4_0-migration-guide/
pytorch blogs
If the device ordinal is not present, this represents the current device for the device type; e.g., torch.device('cuda') is equivalent to torch.device('cuda:X'), where X is the result of torch.cuda.current_device(). The device of a tensor can be accessed via its device attribute.

torch.layout

torch.layout represents the data layout of a Tensor. Currently torch.strided (dense tensors, the default) and torch.sparse_coo (sparse tensors with COO format) are supported. The layout of a tensor can be accessed via its layout attribute.
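For illustration, a short snippet (not from the original guide; the exact repr strings may differ slightly between builds) inspecting these three attributes on a freshly created tensor:
```python
>>> x = torch.randn(2, 2)   # default dense float tensor on CPU
>>> x.dtype
torch.float32
>>> x.device
device(type='cpu')
>>> x.layout
torch.strided
```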
https://pytorch.org/blog/pytorch-0_4_0-migration-guide/
pytorch blogs
Creating Tensors

Methods that create a Tensor now also take in dtype, device, layout, and requires_grad options to specify the desired attributes on the returned Tensor. For example,
```python
>>> device = torch.device("cuda:1")
>>> x = torch.randn(3, 3, dtype=torch.float64, device=device)
tensor([[-0.6344,  0.8562, -1.2758],
        [ 0.8414,  1.7962,  1.0589],
        [-0.1369, -1.0462, -0.4373]], dtype=torch.float64, device='cuda:1')
>>> x.requires_grad  # default is False
False
>>> x = torch.zeros(3, requires_grad=True)
>>> x.requires_grad
True
```

torch.tensor(data, ...)
https://pytorch.org/blog/pytorch-0_4_0-migration-guide/
pytorch blogs
torch.tensor is one of the newly added tensor creation methods. It takes in array-like data of all kinds and copies the contained values into a new Tensor. As mentioned earlier, torch.tensor is the PyTorch equivalent of NumPy's numpy.array constructor. Unlike the torch.*Tensor methods, you can also create zero-dimensional Tensors (aka scalars) this way (a single Python number is treated as a size in the torch.*Tensor methods). Moreover, if a dtype argument isn't given, it will infer a suitable dtype from the data. It is the recommended way to create a tensor from existing data like a Python list. For example,
```python
>>> cuda = torch.device("cuda")
>>> torch.tensor([[1], [2], [3]], dtype=torch.half, device=cuda)
tensor([[ 1],
        [ 2],
        [ 3]], device='cuda:0')
>>> torch.tensor(1)  # scalar
https://pytorch.org/blog/pytorch-0_4_0-migration-guide/
pytorch blogs
tensor(1)
>>> torch.tensor([1, 2.3]).dtype  # type inference
torch.float32
>>> torch.tensor([1, 2]).dtype    # type inference
torch.int64
```

We've also added more tensor creation methods. Some of them have torch.*_like and/or tensor.new_* variants. torch.*_like takes in an input Tensor instead of a shape. It returns a Tensor with the same attributes as the input Tensor by default, unless otherwise specified:
```python
>>> x = torch.randn(3, dtype=torch.float64)
>>> torch.zeros_like(x)
tensor([ 0., 0., 0.], dtype=torch.float64)
>>> torch.zeros_like(x, dtype=torch.int)
tensor([ 0, 0, 0], dtype=torch.int32)
```

tensor.new_* can also create Tensors with the same attributes as tensor, but it always takes in a shape argument:
```python
>>> x = torch.randn(3, dtype=torch.float64)
>>> x.new_ones(2)
tensor([ 1., 1.], dtype=torch.float64)
>>> x.new_ones(4, dtype=torch.int)
tensor([ 1, 1, 1, 1], dtype=torch.int32)
```
https://pytorch.org/blog/pytorch-0_4_0-migration-guide/
pytorch blogs
To specify the desired shape, you can either use a tuple (e.g., torch.zeros((2, 3))) or variable arguments (e.g., torch.zeros(2, 3)) in most cases.

| Name | Returned Tensor | torch.*_like variant | tensor.new_* variant |
|---|---|---|---|
| torch.empty | uninitialized memory | ✔ | ✔ |
| torch.zeros | all zeros | ✔ | ✔ |
| torch.ones | all ones | ✔ | ✔ |
| torch.full | filled with a given value | ✔ | ✔ |
| torch.rand | i.i.d. continuous Uniform[0, 1) | ✔ | |
| torch.randn | i.i.d. Normal(0, 1) | ✔ | |
https://pytorch.org/blog/pytorch-0_4_0-migration-guide/
pytorch blogs
| torch.randint | i.i.d. discrete Uniform in given range | ✔ | |
| torch.randperm | random permutation of {0, 1, ..., n - 1} | | |
| torch.tensor | copied from existing data (list, NumPy ndarray, etc.) | | ✔ |
| torch.from_numpy* | from NumPy ndarray (sharing storage without copying) | | |
| torch.arange, torch.range, and torch.linspace | uniformly spaced values in a given range | | |
| torch.logspace | logarithmically spaced values in a given range | | |
| torch.eye | identity matrix | | |
https://pytorch.org/blog/pytorch-0_4_0-migration-guide/
pytorch blogs
*: torch.from_numpy only takes in a NumPy ndarray as its input argument.

Writing device-agnostic code

Previous versions of PyTorch made it difficult to write code that was device agnostic (i.e. that could run on both CUDA-enabled and CPU-only machines without modification). PyTorch 0.4.0 makes this easier in two ways:

The device attribute of a Tensor gives the torch.device for all Tensors (get_device only works for CUDA tensors)
The to method of Tensors and Modules can be used to easily move objects to different devices (instead of having to call cpu() or cuda() based on the context)

We recommend the following pattern:
```python
# at beginning of the script
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

...

# then whenever you get a new Tensor or Module
# this won't copy if they are already on the desired device
https://pytorch.org/blog/pytorch-0_4_0-migration-guide/
pytorch blogs
input = data.to(device)
model = MyModule(...).to(device)
```

New edge-case constraints on names of submodules, parameters, and buffers in nn.Module

A name that is an empty string or contains "." is no longer permitted in module.add_module(name, value), module.add_parameter(name, value) or module.add_buffer(name, value), because such names may cause data to be lost in the state_dict. If you are loading a checkpoint for modules containing such names, please update the module definition and patch the state_dict before loading it.

Code Samples (Putting it all together)

To get a flavor of the overall recommended changes in 0.4.0, let's look at a quick example for a common code pattern in both 0.3.1 and 0.4.0:

0.3.1 (old):
```python
model = MyRNN()
if use_cuda:
    model = model.cuda()

# train
total_loss = 0
for input, target in train_loader:
    input, target = Variable(input), Variable(target)
    hidden = Variable(torch.zeros(*h_shape))  # init hidden
    if use_cuda:
https://pytorch.org/blog/pytorch-0_4_0-migration-guide/
pytorch blogs
        input, target, hidden = input.cuda(), target.cuda(), hidden.cuda()
    ...  # get loss and optimize
    total_loss += loss.data[0]

# evaluate
for input, target in test_loader:
    input = Variable(input, volatile=True)
    if use_cuda:
        ...
    ...
```

0.4.0 (new):
```python
# torch.device object used throughout this script
device = torch.device("cuda" if use_cuda else "cpu")

model = MyRNN().to(device)

# train
total_loss = 0
for input, target in train_loader:
    input, target = input.to(device), target.to(device)
    hidden = input.new_zeros(*h_shape)  # has the same device & dtype as input
    ...  # get loss and optimize
    total_loss += loss.item()           # get Python number from 1-element Tensor

# evaluate
with torch.no_grad():                   # operations inside don't track history
    for input, target in test_loader:
        ...
```
https://pytorch.org/blog/pytorch-0_4_0-migration-guide/
pytorch blogs
Thank you for reading! Please refer to our documentation and release notes for more details. Happy PyTorch-ing!
https://pytorch.org/blog/pytorch-0_4_0-migration-guide/
pytorch blogs
layout: blog_detail
title: "Efficient Large-Scale Training with Pytorch FSDP and AWS"
author: Less Wright, Hamid Shojanazeri, Geeta Chauhan
featured-img: "assets/images/largeblog_index_1.png"

Cutting-edge AI models are becoming extremely large. The cost and overhead of training these models are increasing rapidly, and involve large amounts of engineering and guesswork to find the right training regime. FSDP reduces these costs significantly by enabling you to train much larger models with the same amount of resources. FSDP lowers the memory footprint on your GPUs, and is usable via a lightweight configuration that requires substantially less effort, typically just a few lines of code.
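As a purely illustrative sketch (not taken from this post; MyLargeModel and the launcher setup are placeholders), wrapping an existing module in FSDP can look like this:
```python
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

# assumes the process-group environment (RANK, WORLD_SIZE, etc.) is set up by a
# launcher such as torchrun
dist.init_process_group(backend="nccl")
local_rank = dist.get_rank() % torch.cuda.device_count()
torch.cuda.set_device(local_rank)

model = MyLargeModel().cuda()   # placeholder for your own model definition
model = FSDP(model)             # parameters are now sharded across all ranks

# training then proceeds as with a local model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
```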
https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/
pytorch blogs
The main performance gains in FSDP come from maximizing the overlap between network communication and model computation, and from eliminating the memory redundancy inherent in traditional data parallel training (DDP). PyTorch FSDP can train models approximately 4x larger on the same server resources as DDP, and 20x larger if we combine activation checkpointing and activation offloading.

As of PyTorch 1.12, FSDP is in beta status, and has added a number of new features that can be tuned to further accelerate your model training. In this series of blog posts, we will explain multiple performance optimizations you can run with FSDP to boost your distributed training speed and model sizes within the context of your available server resources. We use the HuggingFace T5 3B, 11B and DeepVit models, in fine-tuning mode, as the running examples throughout the series.
https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/
pytorch blogs
As a preview of some of the optimizations discussed in this series, we show the before-and-after performance, scaled in FLOPS, below (note that these results can vary based on your server resources and model architecture).

*T5 3B performance measured on AWS A100 and A10 servers. Original with no optimizations and Tuned with the applied optimization.*

*T5 11B performance measured on A100 servers. Original with no optimizations and Tuned with the applied optimization.*
https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/
pytorch blogs
In this first post, we will provide a quick overview of FSDP and how it can make training large-scale AI models more efficient. We will briefly highlight the multiple performance options available, and dive deeper into the details on these in upcoming posts. We will then conclude with an overview of how to leverage AWS ParallelCluster for large-scale training with FSDP.

| Optimization | T5 Model | Throughput Improvement |
|---|---|---|
| Mixed Precision | 3 B | 5x |
| | 11 B | 10x |
| Activation Checkpointing (AC) | 3 B | 10x |
| | 11 B | 100x |
| Transformer Wrapping Policy | 3 B | 2x |
https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/
pytorch blogs
| | 11 B | Unable to run the experiment without the Transformer wrapping policy. |
| Full Shard Strategy | 3 B | 1.5x |
| | 11 B | Not able to run with Zero2 |

*Performance optimization gains on T5 models over non-optimized.*
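As a preview of what these options look like in code, here is a hedged sketch (not from this post) that assumes a HuggingFace T5 model whose transformer block class is T5Block; the exact arguments and trade-offs are covered in the follow-up posts:
```python
import functools
import torch
from torch.distributed.fsdp import (
    FullyShardedDataParallel as FSDP,
    MixedPrecision,
    ShardingStrategy,
)
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
from transformers.models.t5.modeling_t5 import T5Block

# BFloat16 mixed precision for parameters, gradient reduction, and buffers
bf16_policy = MixedPrecision(
    param_dtype=torch.bfloat16,
    reduce_dtype=torch.bfloat16,
    buffer_dtype=torch.bfloat16,
)

# wrap each T5 transformer block as its own FSDP unit
t5_wrap_policy = functools.partial(
    transformer_auto_wrap_policy,
    transformer_layer_cls={T5Block},
)

model = FSDP(
    model,                                    # an already constructed T5 model
    auto_wrap_policy=t5_wrap_policy,
    mixed_precision=bf16_policy,
    sharding_strategy=ShardingStrategy.FULL_SHARD,
    device_id=torch.cuda.current_device(),
)
```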
https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/
pytorch blogs