title | authors | abstract | pdf | arXiv | bibtex | url | detail_url | tags | supp | |
---|---|---|---|---|---|---|---|---|---|---|
Light Field Super-Resolution With Zero-Shot Learning | Zhen Cheng, Zhiwei Xiong, Chang Chen, Dong Liu, Zheng-Jun Zha | Deep learning provides a new avenue for light field super-resolution (SR). However, the domain gap caused by drastically different light field acquisition conditions poses a main obstacle in practice. To fill this gap, we propose a zero-shot learning framework for light field SR, which learns a mapping to super-resolve the reference view with examples extracted solely from the input low-resolution light field itself. Given highly limited training data under the zero-shot setting, however, we observe that it is difficult to train an end-to-end network successfully. Instead, we divide this challenging task into three sub-tasks, i.e., pre-upsampling, view alignment, and multi-view aggregation, and then conquer them separately with simple yet efficient CNNs. Moreover, the proposed framework can be readily extended to finetune the pre-trained model on a source dataset to better adapt to the target input, which further boosts the performance of light field SR in the wild. Experimental results validate that our method not only outperforms classic non-learning-based methods, but also generalizes better to unseen light fields than state-of-the-art deep-learning-based methods when the domain gap is large. | https://openaccess.thecvf.com/content/CVPR2021/papers/Cheng_Light_Field_Super-Resolution_With_Zero-Shot_Learning_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Cheng_Light_Field_Super-Resolution_With_Zero-Shot_Learning_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Cheng_Light_Field_Super-Resolution_With_Zero-Shot_Learning_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Cheng_Light_Field_Super-Resolution_CVPR_2021_supplemental.pdf | null |
Spherical Confidence Learning for Face Recognition | Shen Li, Jianqing Xu, Xiaqing Xu, Pengcheng Shen, Shaoxin Li, Bryan Hooi | An emerging line of research has found that spherical spaces better match the underlying geometry of facial images, as evidenced by the state-of-the-art facial recognition methods which benefit empirically from spherical representations. Yet, these approaches rely on deterministic embeddings and hence suffer from the feature ambiguity dilemma, whereby ambiguous or noisy images are mapped into poorly learned regions of representation space, leading to inaccuracies. Probabilistic Face Embeddings (PFE) is the first attempt to address this dilemma. However, we theoretically and empirically identify two main failures of PFE when it is applied to the aforementioned spherical deterministic embeddings. To address these issues, in this paper, we propose a novel framework for face confidence learning in spherical space. Mathematically, we extend the von Mises-Fisher density to its r-radius counterpart and derive a new optimization objective in closed form. Theoretically, the proposed probabilistic framework provably allows for better interpretability, leading to principled feature comparison and pooling. Extensive experimental results on multiple challenging benchmarks confirm our hypothesis and theory, and showcase the advantages of our framework over prior probabilistic methods and spherical deterministic embeddings in various face recognition tasks. | https://openaccess.thecvf.com/content/CVPR2021/papers/Li_Spherical_Confidence_Learning_for_Face_Recognition_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Li_Spherical_Confidence_Learning_for_Face_Recognition_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Li_Spherical_Confidence_Learning_for_Face_Recognition_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Li_Spherical_Confidence_Learning_CVPR_2021_supplemental.pdf | null |
Three Ways To Improve Semantic Segmentation With Self-Supervised Depth Estimation | Lukas Hoyer, Dengxin Dai, Yuhua Chen, Adrian Koring, Suman Saha, Luc Van Gool | Training deep networks for semantic segmentation requires large amounts of labeled training data, which presents a major challenge in practice, as labeling segmentation masks is a highly labor-intensive process. To address this issue, we present a framework for semi-supervised semantic segmentation, which is enhanced by self-supervised monocular depth estimation from unlabeled image sequences. In particular, we propose three key contributions: (1) We transfer knowledge from features learned during self-supervised depth estimation to semantic segmentation, (2) we implement a strong data augmentation by blending images and labels using the geometry of the scene, and (3) we utilize the depth feature diversity as well as the level of difficulty of learning depth in a student-teacher framework to select the most useful samples to be annotated for semantic segmentation. We validate the proposed model on the Cityscapes dataset, where all three modules demonstrate significant performance gains, and we achieve state-of-the-art results for semi-supervised semantic segmentation. The implementation is available at https://github.com/lhoyer/improving_segmentation_with_selfsupervised_depth. | https://openaccess.thecvf.com/content/CVPR2021/papers/Hoyer_Three_Ways_To_Improve_Semantic_Segmentation_With_Self-Supervised_Depth_Estimation_CVPR_2021_paper.pdf | http://arxiv.org/abs/2012.10782 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Hoyer_Three_Ways_To_Improve_Semantic_Segmentation_With_Self-Supervised_Depth_Estimation_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Hoyer_Three_Ways_To_Improve_Semantic_Segmentation_With_Self-Supervised_Depth_Estimation_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Hoyer_Three_Ways_To_CVPR_2021_supplemental.pdf | null |
Cross-Modal Contrastive Learning for Text-to-Image Generation | Han Zhang, Jing Yu Koh, Jason Baldridge, Honglak Lee, Yinfei Yang | The output of text-to-image synthesis systems should be coherent, clear, photo-realistic scenes with high semantic fidelity to their conditioned text descriptions. Our Cross-Modal Contrastive Generative Adversarial Network (XMC-GAN) addresses this challenge by maximizing the mutual information between image and text. It does this via multiple contrastive losses which capture inter-modality and intra-modality correspondences. XMC-GAN uses an attentional self-modulation generator, which enforces strong text-image correspondence, and a contrastive discriminator, which acts as a critic as well as a feature encoder for contrastive learning. The quality of XMC-GAN's output is a major step up from previous models, as we show on three challenging datasets. On MS-COCO, not only does XMC-GAN improve state-of-the-art FID from 24.70 to 9.33, but--more importantly--people prefer XMC-GAN by 77.3% for image quality and 74.1% for image-text alignment, compared to three other recent models. XMC-GAN also generalizes to the challenging Localized Narratives dataset (which has longer, more detailed descriptions), improving state-of-the-art FID from 48.70 to 14.12. Lastly, we train and evaluate XMC-GAN on the challenging Open Images data, establishing a strong benchmark FID score of 26.91. | https://openaccess.thecvf.com/content/CVPR2021/papers/Zhang_Cross-Modal_Contrastive_Learning_for_Text-to-Image_Generation_CVPR_2021_paper.pdf | http://arxiv.org/abs/2101.04702 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_Cross-Modal_Contrastive_Learning_for_Text-to-Image_Generation_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_Cross-Modal_Contrastive_Learning_for_Text-to-Image_Generation_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Zhang_Cross-Modal_Contrastive_Learning_CVPR_2021_supplemental.pdf | null |
Lifting 2D StyleGAN for 3D-Aware Face Generation | Yichun Shi, Divyansh Aggarwal, Anil K. Jain | We propose a framework, called LiftedGAN, that disentangles and lifts a pre-trained StyleGAN2 for 3D-aware face generation. Our model is "3D-aware" in the sense that it is able to (1) disentangle the latent space of StyleGAN2 into texture, shape, viewpoint, and lighting, and (2) generate 3D components for rendering synthetic images. Unlike most previous methods, our method is completely self-supervised, i.e., it requires neither manual annotation nor a 3DMM model for training. Instead, it learns to generate images as well as their 3D components by distilling the prior knowledge in StyleGAN2 with a differentiable renderer. The proposed model is able to output both the 3D shape and texture, allowing explicit pose and lighting control over generated images. Qualitative and quantitative results show the superiority of our approach over existing 3D-controllable GAN methods in content controllability while generating realistic, high-quality images. | https://openaccess.thecvf.com/content/CVPR2021/papers/Shi_Lifting_2D_StyleGAN_for_3D-Aware_Face_Generation_CVPR_2021_paper.pdf | http://arxiv.org/abs/2011.13126 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Shi_Lifting_2D_StyleGAN_for_3D-Aware_Face_Generation_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Shi_Lifting_2D_StyleGAN_for_3D-Aware_Face_Generation_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Shi_Lifting_2D_StyleGAN_CVPR_2021_supplemental.pdf | null |
iMiGUE: An Identity-Free Video Dataset for Micro-Gesture Understanding and Emotion Analysis | Xin Liu, Henglin Shi, Haoyu Chen, Zitong Yu, Xiaobai Li, Guoying Zhao | We introduce a new dataset for emotional artificial intelligence research: an identity-free video dataset for micro-gesture understanding and emotion analysis (iMiGUE). Different from existing public datasets, iMiGUE focuses on nonverbal body gestures without using any identity information, while most existing research on emotion analysis concerns sensitive biometric data, like face and speech. Most importantly, iMiGUE focuses on micro-gestures, i.e., unintentional behaviors driven by inner feelings, which differ from the ordinary gestures in other gesture datasets, which are mostly performed intentionally for illustrative purposes. Furthermore, iMiGUE is designed to evaluate the ability of models to analyze emotional states by integrating information from recognized micro-gestures, rather than just recognizing prototypes in the sequences separately (or in isolation). This is because the real need for emotion AI is to understand the emotional states behind gestures in a holistic way. Moreover, to counter the challenge of the imbalanced sample distribution of this dataset, an unsupervised learning method is proposed to capture latent representations from the micro-gesture sequences themselves. We systematically investigate representative methods on this dataset, and comprehensive experimental results reveal several interesting insights from iMiGUE, e.g., that micro-gesture-based analysis can promote emotion understanding. We confirm that the new iMiGUE dataset could advance studies of micro-gestures and emotion AI. | https://openaccess.thecvf.com/content/CVPR2021/papers/Liu_iMiGUE_An_Identity-Free_Video_Dataset_for_Micro-Gesture_Understanding_and_Emotion_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Liu_iMiGUE_An_Identity-Free_Video_Dataset_for_Micro-Gesture_Understanding_and_Emotion_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Liu_iMiGUE_An_Identity-Free_Video_Dataset_for_Micro-Gesture_Understanding_and_Emotion_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Liu_iMiGUE_An_Identity-Free_CVPR_2021_supplemental.pdf | null |
MeGA-CDA: Memory Guided Attention for Category-Aware Unsupervised Domain Adaptive Object Detection | Vibashan VS, Vikram Gupta, Poojan Oza, Vishwanath A. Sindagi, Vishal M. Patel | Existing approaches for unsupervised domain adaptive object detection perform feature alignment via adversarial training. While these methods achieve reasonable improvements in performance, they typically perform category-agnostic domain alignment, thereby resulting in negative transfer of features. To overcome this issue, in this work, we attempt to incorporate category information into the domain adaptation process by proposing Memory Guided Attention for Category-Aware Domain Adaptation (MeGA-CDA). The proposed method consists of employing category-wise discriminators to ensure category-aware feature alignment for learning domain-invariant discriminative features. However, since the category information is not available for the target samples, we propose to generate memory-guided category-specific attention maps which are then used to route the features appropriately to the corresponding category discriminator. The proposed method is evaluated on several benchmark datasets and is shown to outperform existing approaches. | https://openaccess.thecvf.com/content/CVPR2021/papers/VS_MeGA-CDA_Memory_Guided_Attention_for_Category-Aware_Unsupervised_Domain_Adaptive_Object_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/VS_MeGA-CDA_Memory_Guided_Attention_for_Category-Aware_Unsupervised_Domain_Adaptive_Object_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/VS_MeGA-CDA_Memory_Guided_Attention_for_Category-Aware_Unsupervised_Domain_Adaptive_Object_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/VS_MeGA-CDA_Memory_Guided_CVPR_2021_supplemental.pdf | null |
Nutrition5k: Towards Automatic Nutritional Understanding of Generic Food | Quin Thames, Arjun Karpur, Wade Norris, Fangting Xia, Liviu Panait, Tobias Weyand, Jack Sim | Understanding the nutritional content of food from visual data is a challenging computer vision problem, with the potential to have a positive and widespread impact on public health. Studies in this area are limited to existing datasets in the field that lack sufficient diversity or labels required for training models with nutritional understanding capability. We introduce Nutrition5k, a novel dataset of 5k diverse, real world food dishes with corresponding video streams, depth images, component weights, and high accuracy nutritional content annotation. We demonstrate the potential of this dataset by training a computer vision algorithm capable of predicting the caloric and macronutrient values of a complex, real world dish at an accuracy that outperforms professional nutritionists. Further we present a baseline for incorporating depth sensor data to improve nutrition predictions. We release Nutrition5k in the hope that it will accelerate innovation in the space of nutritional understanding. The dataset is available at https://github.com/google-research-datasets/Nutrition5k. | https://openaccess.thecvf.com/content/CVPR2021/papers/Thames_Nutrition5k_Towards_Automatic_Nutritional_Understanding_of_Generic_Food_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.03375 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Thames_Nutrition5k_Towards_Automatic_Nutritional_Understanding_of_Generic_Food_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Thames_Nutrition5k_Towards_Automatic_Nutritional_Understanding_of_Generic_Food_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Thames_Nutrition5k_Towards_Automatic_CVPR_2021_supplemental.pdf | null |
Extreme Low-Light Environment-Driven Image Denoising Over Permanently Shadowed Lunar Regions With a Physical Noise Model | Ben Moseley, Valentin Bickel, Ignacio G. Lopez-Francos, Loveneesh Rana | Recently, learning-based approaches have achieved impressive results in the field of low-light image denoising. Some state of the art approaches employ a rich physical model to generate realistic training data. However, the performance of these approaches ultimately depends on the realism of the physical model, and many works only concentrate on everyday photography. In this work we present a denoising approach for extremely low-light images of permanently shadowed regions (PSRs) on the lunar surface, taken by the Narrow Angle Camera on board the Lunar Reconnaissance Orbiter satellite. Our approach extends existing learning-based approaches by combining a physical noise model of the camera with real noise samples and training image scene selection based on 3D ray tracing to generate realistic training data. We also condition our denoising model on the camera's environmental metadata at the time of image capture (such as the camera's temperature and age), showing that this improves performance. Our quantitative and qualitative results show that our method strongly outperforms the existing calibration routine for the camera and other baselines. Our results could significantly impact lunar science and exploration, for example by aiding the identification of surface water-ice and reducing uncertainty in rover and human traverse planning into PSRs. | https://openaccess.thecvf.com/content/CVPR2021/papers/Moseley_Extreme_Low-Light_Environment-Driven_Image_Denoising_Over_Permanently_Shadowed_Lunar_Regions_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Moseley_Extreme_Low-Light_Environment-Driven_Image_Denoising_Over_Permanently_Shadowed_Lunar_Regions_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Moseley_Extreme_Low-Light_Environment-Driven_Image_Denoising_Over_Permanently_Shadowed_Lunar_Regions_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Moseley_Extreme_Low-Light_Environment-Driven_CVPR_2021_supplemental.pdf | null |
Unsupervised Discovery of the Long-Tail in Instance Segmentation Using Hierarchical Self-Supervision | Zhenzhen Weng, Mehmet Giray Ogut, Shai Limonchik, Serena Yeung | Instance segmentation is an active topic in computer vision that is usually solved by using supervised learning approaches over very large datasets composed of object level masks. Obtaining such a dataset for any new domain can be very expensive and time-consuming. In addition, models trained on certain annotated categories do not generalize well to unseen objects. The goal of this paper is to propose a method that can perform unsupervised discovery of long-tail categories in instance segmentation, through learning instance embeddings of masked regions. Leveraging rich relationship and hierarchical structure between objects in the images, we propose self-supervised losses for learning mask embeddings. Trained on COCO dataset without additional annotations of the long-tail objects, our model is able to discover novel and more fine-grained objects than the common categories in COCO. We show that the model achieves competitive quantitative results on LVIS as compared to the supervised and partially supervised methods. | https://openaccess.thecvf.com/content/CVPR2021/papers/Weng_Unsupervised_Discovery_of_the_Long-Tail_in_Instance_Segmentation_Using_Hierarchical_CVPR_2021_paper.pdf | http://arxiv.org/abs/2104.01257 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Weng_Unsupervised_Discovery_of_the_Long-Tail_in_Instance_Segmentation_Using_Hierarchical_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Weng_Unsupervised_Discovery_of_the_Long-Tail_in_Instance_Segmentation_Using_Hierarchical_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Weng_Unsupervised_Discovery_of_CVPR_2021_supplemental.pdf | null |
How Privacy-Preserving Are Line Clouds? Recovering Scene Details From 3D Lines | Kunal Chelani, Fredrik Kahl, Torsten Sattler | Visual localization is the problem of estimating the camera pose of a given image with respect to a known scene. Visual localization algorithms are a fundamental building block in advanced computer vision applications, including Mixed and Virtual Reality systems. Many algorithms used in practice represent the scene through a Structure-from-Motion (SfM) point cloud, where each 3D point is associated with one or more local image features, and use 2D-3D matches between a query image and the 3D points for camera pose estimation. As recently shown, image details can be accurately recovered from SfM point clouds by translating renderings of the sparse point clouds to images. To address the resulting potential privacy risks for user-generated content, it was recently proposed to lift point clouds to line clouds by replacing 3D points with randomly oriented 3D lines passing through these points. The resulting representation is unintelligible to humans and effectively prevents point cloud-to-image translation. This paper shows that a significant amount of information about the 3D scene geometry is preserved in these line clouds, allowing us to (approximately) recover the 3D point positions and thus to (approximately) recover image content. Our approach is based on the observation that the closest points between lines can yield a good approximation to the original 3D points. Code is available at https://github.com/kunalchelani/Line2Point. | https://openaccess.thecvf.com/content/CVPR2021/papers/Chelani_How_Privacy-Preserving_Are_Line_Clouds_Recovering_Scene_Details_From_3D_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.05086 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Chelani_How_Privacy-Preserving_Are_Line_Clouds_Recovering_Scene_Details_From_3D_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Chelani_How_Privacy-Preserving_Are_Line_Clouds_Recovering_Scene_Details_From_3D_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Chelani_How_Privacy-Preserving_Are_CVPR_2021_supplemental.pdf | null |
Multi-View 3D Reconstruction of a Texture-Less Smooth Surface of Unknown Generic Reflectance | Ziang Cheng, Hongdong Li, Yuta Asano, Yinqiang Zheng, Imari Sato | Recovering the 3D geometry of a purely texture-less object with generally unknown surface reflectance (e.g., non-Lambertian) is regarded as a challenging task in multi-view reconstruction. The major obstacle revolves around establishing cross-view correspondences where photometric constancy is violated. This paper proposes a simple and practical solution to overcome this challenge based on a co-located camera-light scanner device. Unlike existing solutions, we do not explicitly solve for correspondence. Instead, we argue that the problem is generally well-posed by multi-view geometrical and photometric constraints, and can be solved from a small number of input views. We formulate the reconstruction task as a joint energy minimization over the surface geometry and reflectance. Although this energy is highly non-convex, we develop an optimization algorithm that robustly recovers globally optimal shape and reflectance even from a random initialization. Extensive experiments on both simulated and real data have validated our method, and possible future extensions are discussed. | https://openaccess.thecvf.com/content/CVPR2021/papers/Cheng_Multi-View_3D_Reconstruction_of_a_Texture-Less_Smooth_Surface_of_Unknown_CVPR_2021_paper.pdf | http://arxiv.org/abs/2105.11599 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Cheng_Multi-View_3D_Reconstruction_of_a_Texture-Less_Smooth_Surface_of_Unknown_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Cheng_Multi-View_3D_Reconstruction_of_a_Texture-Less_Smooth_Surface_of_Unknown_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Cheng_Multi-View_3D_Reconstruction_CVPR_2021_supplemental.pdf | null |
Rectification-Based Knowledge Retention for Continual Learning | Pravendra Singh, Pratik Mazumder, Piyush Rai, Vinay P. Namboodiri | Deep learning models suffer from catastrophic forgetting when trained in an incremental learning setting. In this work, we propose a novel approach to address the task incremental learning problem, which involves training a model on new tasks that arrive in an incremental manner. The task incremental learning problem becomes even more challenging when the test set contains classes that are not part of the train set, i.e., a task incremental generalized zero-shot learning problem. Our approach can be used in both the zero-shot and non zero-shot task incremental learning settings. Our proposed method uses weight rectifications and affine transformations in order to adapt the model to different tasks that arrive sequentially. Specifically, we adapt the network weights to work for new tasks by "rectifying" the weights learned from the previous task. We learn these weight rectifications using very few parameters. We additionally learn affine transformations on the outputs generated by the network in order to better adapt them for the new task. We perform experiments on several datasets in both zero-shot and non zero-shot task incremental learning settings and empirically show that our approach achieves state-of-the-art results. Specifically, our approach outperforms the state-of-the-art non zero-shot task incremental learning method by over 5% on the CIFAR-100 dataset. Our approach also significantly outperforms the state-of-the-art task incremental generalized zero-shot learning method by absolute margins of 6.91% and 6.33% for the AWA1 and CUB datasets, respectively. We validate our approach using various ablation studies. | https://openaccess.thecvf.com/content/CVPR2021/papers/Singh_Rectification-Based_Knowledge_Retention_for_Continual_Learning_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.16597 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Singh_Rectification-Based_Knowledge_Retention_for_Continual_Learning_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Singh_Rectification-Based_Knowledge_Retention_for_Continual_Learning_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Singh_Rectification-Based_Knowledge_Retention_CVPR_2021_supplemental.pdf | null |
Scale-Aware Automatic Augmentation for Object Detection | Yukang Chen, Yanwei Li, Tao Kong, Lu Qi, Ruihang Chu, Lei Li, Jiaya Jia | We propose Scale-aware AutoAug to learn data augmentation policies for object detection. We define a new scale-aware search space, where both image- and box-level augmentations are designed for maintaining scale invariance. Upon this search space, we propose a new search metric, termed Pareto Scale Balance, to facilitate search with high efficiency. In experiments, Scale-aware AutoAug yields significant and consistent improvement on various object detectors (e.g., RetinaNet, Faster R-CNN, Mask R-CNN, and FCOS), even compared with strong multi-scale training baselines. Our searched augmentation policies are transferable to other datasets and box-level tasks beyond object detection (e.g., instance segmentation and keypoint estimation) to improve performance. The search cost is much less than previous automated augmentation approaches for object detection. It is notable that our searched policies have meaningful patterns, which intuitively provide valuable insight for human data augmentation design. | https://openaccess.thecvf.com/content/CVPR2021/papers/Chen_Scale-Aware_Automatic_Augmentation_for_Object_Detection_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.17220 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Chen_Scale-Aware_Automatic_Augmentation_for_Object_Detection_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Chen_Scale-Aware_Automatic_Augmentation_for_Object_Detection_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Chen_Scale-Aware_Automatic_Augmentation_CVPR_2021_supplemental.pdf | null |
Towards Robust Classification Model by Counterfactual and Invariant Data Generation | Chun-Hao Chang, George Alexandru Adam, Anna Goldenberg | Despite the success of machine learning applications in science, industry, and society in general, many approaches are known to be non-robust, often relying on spurious correlations to make predictions. Spuriousness occurs when some features correlate with labels but are not causal; relying on such features prevents models from generalizing to unseen environments where such correlations break. In this work, we focus on image classification and propose two data generation processes to reduce spuriousness. Given human annotations of the subset of the features responsible (causal) for the labels (e.g. bounding boxes), we modify this causal set to generate a surrogate image that no longer has the same label (i.e. a counterfactual image). We also alter non-causal features to generate images still recognized as the original labels, which helps to learn a model invariant to these features. In several challenging datasets, our data generations outperform state-of-the-art methods in accuracy when spurious correlations break, and increase the saliency focus on causal features providing better explanations. | https://openaccess.thecvf.com/content/CVPR2021/papers/Chang_Towards_Robust_Classification_Model_by_Counterfactual_and_Invariant_Data_Generation_CVPR_2021_paper.pdf | http://arxiv.org/abs/2106.01127 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Chang_Towards_Robust_Classification_Model_by_Counterfactual_and_Invariant_Data_Generation_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Chang_Towards_Robust_Classification_Model_by_Counterfactual_and_Invariant_Data_Generation_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Chang_Towards_Robust_Classification_CVPR_2021_supplemental.pdf | null |
Fully Convolutional Networks for Panoptic Segmentation | Yanwei Li, Hengshuang Zhao, Xiaojuan Qi, Liwei Wang, Zeming Li, Jian Sun, Jiaya Jia | In this paper, we present a conceptually simple, strong, and efficient framework for panoptic segmentation, called Panoptic FCN. Our approach aims to represent and predict foreground things and background stuff in a unified fully convolutional pipeline. In particular, Panoptic FCN encodes each object instance or stuff category into a specific kernel weight with the proposed kernel generator and produces the prediction by convolving the high-resolution feature directly. With this approach, instance-aware and semantically consistent properties for things and stuff can be respectively satisfied in a simple generate-kernel-then-segment workflow. Without extra boxes for localization or instance separation, the proposed approach outperforms previous box-based and -free models with high efficiency on COCO, Cityscapes, and Mapillary Vistas datasets with single scale input. Our code is made publicly available at https://github.com/Jia-Research-Lab/PanopticFCN. | https://openaccess.thecvf.com/content/CVPR2021/papers/Li_Fully_Convolutional_Networks_for_Panoptic_Segmentation_CVPR_2021_paper.pdf | http://arxiv.org/abs/2012.00720 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Li_Fully_Convolutional_Networks_for_Panoptic_Segmentation_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Li_Fully_Convolutional_Networks_for_Panoptic_Segmentation_CVPR_2021_paper.html | CVPR 2021 | null | null |
Benchmarking Representation Learning for Natural World Image Collections | Grant Van Horn, Elijah Cole, Sara Beery, Kimberly Wilber, Serge Belongie, Oisin Mac Aodha | Recent progress in self-supervised learning has resulted in models that are capable of extracting rich representations from image collections without requiring any explicit label supervision. However, to date the vast majority of these approaches have restricted themselves to training on standard benchmark datasets such as ImageNet. We argue that fine-grained visual categorization problems, such as plant and animal species classification, provide an informative testbed for self-supervised learning. In order to facilitate progress in this area we present two new natural world visual classification datasets, iNat2021 and NeWT. The former consists of 2.7M images from 10k different species uploaded by users of the citizen science application iNaturalist. We designed the latter, NeWT, in collaboration with domain experts with the aim of benchmarking the performance of representation learning algorithms on a suite of challenging natural world binary classification tasks that go beyond standard species classification. These two new datasets allow us to explore questions related to large-scale representation and transfer learning in the context of fine-grained categories. We provide a comprehensive analysis of feature extractors trained with and without supervision on ImageNet and iNat2021, shedding light on the strengths and weaknesses of different learned features across a diverse set of tasks. We find that features produced by standard supervised methods still outperform those produced by self-supervised approaches such as SimCLR. However, improved self-supervised learning methods are constantly being released and the iNat2021 and NeWT datasets are a valuable resource for tracking their progress. | https://openaccess.thecvf.com/content/CVPR2021/papers/Van_Horn_Benchmarking_Representation_Learning_for_Natural_World_Image_Collections_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.16483 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Van_Horn_Benchmarking_Representation_Learning_for_Natural_World_Image_Collections_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Van_Horn_Benchmarking_Representation_Learning_for_Natural_World_Image_Collections_CVPR_2021_paper.html | CVPR 2021 | null | null |
PGT: A Progressive Method for Training Models on Long Videos | Bo Pang, Gao Peng, Yizhuo Li, Cewu Lu | Convolutional video models have an order of magnitude larger computational complexity than their counterpart image-level models. Constrained by computational resources, there is no model or training method that can train long video sequences end-to-end. Currently, the main-stream method is to split a raw video into clips, leading to incomplete fragmentary temporal information flow. Inspired by natural language processing techniques dealing with long sentences, we propose to treat videos as serial fragments satisfying Markov property, and train it as a whole by progressively propagating information through the temporal dimension in multiple steps. This progressive training (PGT) method is able to train long videos end-to-end with limited resources and ensures the effective transmission of information. As a general and robust training method, we empirically demonstrate that it yields significant performance improvements on different models and datasets. As an illustrative example, the proposed method improves SlowOnly network by 3.7 mAP on Charades and 1.9 top-1 accuracy on Kinetics with negligible parameter and computation overhead. The code is attached in supplementary files and will be published with this paper. | https://openaccess.thecvf.com/content/CVPR2021/papers/Pang_PGT_A_Progressive_Method_for_Training_Models_on_Long_Videos_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.11313 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Pang_PGT_A_Progressive_Method_for_Training_Models_on_Long_Videos_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Pang_PGT_A_Progressive_Method_for_Training_Models_on_Long_Videos_CVPR_2021_paper.html | CVPR 2021 | null | null |
Prioritized Architecture Sampling With Monto-Carlo Tree Search | Xiu Su, Tao Huang, Yanxi Li, Shan You, Fei Wang, Chen Qian, Changshui Zhang, Chang Xu | One-shot neural architecture search (NAS) methods significantly reduce the search cost by considering the whole search space as one network, which only needs to be trained once. However, current methods select each operation independently without considering previous layers. Besides, the historical information obtained with huge computation costs is usually used only once and then discarded. In this paper, we introduce a sampling strategy based on Monte Carlo tree search (MCTS) with the search space modeled as a Monte Carlo tree (MCT), which captures the dependency among layers. Furthermore, intermediate results are stored in the MCT for future decisions and a better exploration-exploitation balance. Concretely, MCT is updated using the training loss as a reward to the architecture performance; for accurately evaluating the numerous nodes, we propose node communication and hierarchical node selection methods in the training and search stages, respectively, making better uses of the operation rewards and hierarchical information. Moreover, for a fair comparison of different NAS methods, we construct an open-source NAS benchmark of a macro search space evaluated on CIFAR-10, namely NAS-Bench-Macro. Extensive experiments on NAS-Bench-Macro and ImageNet demonstrate that our method significantly improves search efficiency and performance. For example, by only searching 20 architectures, our obtained architecture achieves 78.0% top-1 accuracy with 442M FLOPs on ImageNet. Code (Benchmark) is available at: https://github.com/xiusu/NAS-Bench-Macro. | https://openaccess.thecvf.com/content/CVPR2021/papers/Su_Prioritized_Architecture_Sampling_With_Monto-Carlo_Tree_Search_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.11922 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Su_Prioritized_Architecture_Sampling_With_Monto-Carlo_Tree_Search_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Su_Prioritized_Architecture_Sampling_With_Monto-Carlo_Tree_Search_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Su_Prioritized_Architecture_Sampling_CVPR_2021_supplemental.pdf | null |
HumanGPS: Geodesic PreServing Feature for Dense Human Correspondences | Feitong Tan, Danhang Tang, Mingsong Dou, Kaiwen Guo, Rohit Pandey, Cem Keskin, Ruofei Du, Deqing Sun, Sofien Bouaziz, Sean Fanello, Ping Tan, Yinda Zhang | In this paper, we address the problem of building pixel-wise dense correspondences between human images under arbitrary camera viewpoints and body poses. Previous methods either assume small motions or rely on discriminative descriptors extracted from local patches, which cannot handle large motion or visually ambiguous body parts, e.g., the left vs. right hand. In contrast, we propose a deep learning framework that maps each pixel to a feature space, where the feature distances reflect the geodesic distances among pixels as if they were projected onto the surface of 3D human scans. To this end, we introduce novel loss functions to push features apart according to their geodesic distances on the surface inside and across images. Without any semantic annotation, the features automatically learn to differentiate visually similar parts and align different subjects into a unified feature space. Extensive experiments show that the learned features can produce accurate correspondences between images, with remarkable generalization capabilities in both intra- and inter-subject settings. We demonstrate the effectiveness of our method on a variety of applications such as optical flow, non-rigid tracking, occlusion detection, and human dense pose regression. | https://openaccess.thecvf.com/content/CVPR2021/papers/Tan_HumanGPS_Geodesic_PreServing_Feature_for_Dense_Human_Correspondences_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.15573 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Tan_HumanGPS_Geodesic_PreServing_Feature_for_Dense_Human_Correspondences_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Tan_HumanGPS_Geodesic_PreServing_Feature_for_Dense_Human_Correspondences_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Tan_HumanGPS_Geodesic_PreServing_CVPR_2021_supplemental.zip | null |
Read Like Humans: Autonomous, Bidirectional and Iterative Language Modeling for Scene Text Recognition | Shancheng Fang, Hongtao Xie, Yuxin Wang, Zhendong Mao, Yongdong Zhang | Linguistic knowledge is of great benefit to scene text recognition. However, how to effectively model linguistic rules in end-to-end deep networks remains a research challenge. In this paper, we argue that the limited capacity of language models comes from: 1) implicit language modeling; 2) unidirectional feature representation; and 3) a language model with noisy input. Correspondingly, we propose an autonomous, bidirectional and iterative ABINet for scene text recognition. Firstly, the autonomous design blocks the gradient flow between the vision and language models to enforce explicit language modeling. Secondly, a novel bidirectional cloze network (BCN) is proposed as the language model, based on bidirectional feature representation. Thirdly, we propose an iterative correction scheme for the language model, which effectively alleviates the impact of noisy input. Additionally, based on the ensemble of iterative predictions, we propose a self-training method which can learn from unlabeled images effectively. Extensive experiments indicate that ABINet has superiority on low-quality images and achieves state-of-the-art results on several mainstream benchmarks. Besides, the ABINet trained with ensemble self-training shows promising improvement in realizing human-level recognition. | https://openaccess.thecvf.com/content/CVPR2021/papers/Fang_Read_Like_Humans_Autonomous_Bidirectional_and_Iterative_Language_Modeling_for_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.06495 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Fang_Read_Like_Humans_Autonomous_Bidirectional_and_Iterative_Language_Modeling_for_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Fang_Read_Like_Humans_Autonomous_Bidirectional_and_Iterative_Language_Modeling_for_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Fang_Read_Like_Humans_CVPR_2021_supplemental.pdf | null |
Generic Perceptual Loss for Modeling Structured Output Dependencies | Yifan Liu, Hao Chen, Yu Chen, Wei Yin, Chunhua Shen | The perceptual loss has been widely used as an effective loss term in image synthesis tasks including image super-resolution [16], and style transfer [14]. It was believed that the success lies in the high-level perceptual feature representations extracted from CNNs pretrained with a large set of images. Here we reveal that what matters is the network structure instead of the trained weights. Without any learning, the structure of a deep network is sufficient to capture the dependencies between multiple levels of variable statistics using multiple layers of CNNs. This insight removes the requirements of pre-training and a particular network structure (commonly, VGG) that are previously assumed for the perceptual loss, thus enabling a significantly wider range of applications. To this end, we demonstrate that a randomly-weighted deep CNN can be used to model the structured dependencies of outputs. On a few dense per-pixel prediction tasks such as semantic segmentation, depth estimation, and instance segmentation, we show improved results of using the extended randomized perceptual loss, compared to the baselines using pixel-wise loss alone. We hope that this simple, extended perceptual loss may serve as a generic structured-output loss that is applicable to most structured output learning tasks. | https://openaccess.thecvf.com/content/CVPR2021/papers/Liu_Generic_Perceptual_Loss_for_Modeling_Structured_Output_Dependencies_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.10571 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Liu_Generic_Perceptual_Loss_for_Modeling_Structured_Output_Dependencies_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Liu_Generic_Perceptual_Loss_for_Modeling_Structured_Output_Dependencies_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Liu_Generic_Perceptual_Loss_CVPR_2021_supplemental.pdf | null |
Style-Based Point Generator With Adversarial Rendering for Point Cloud Completion | Chulin Xie, Chuxin Wang, Bo Zhang, Hao Yang, Dong Chen, Fang Wen | In this paper, we propose a novel Style-based Point Generator with Adversarial Rendering (SpareNet) for point cloud completion. Firstly, we present the channel-attentive EdgeConv to fully exploit the local structures as well as the global shape in point features. Secondly, we observe that the concatenation manner used by vanilla foldings limits its potential of generating a complex and faithful shape. Inspired by the success of StyleGAN, we regard the shape feature as a style code that modulates the normalization layers during the folding, which considerably enhances its capability. Thirdly, we realize that existing point supervisions, e.g., Chamfer Distance or Earth Mover's Distance, cannot faithfully reflect the perceptual quality of the reconstructed points. To address this, we propose to project the completed points to depth maps with a differentiable renderer and apply adversarial training to advocate the perceptual realism under different viewpoints. Comprehensive experiments on ShapeNet and KITTI prove the effectiveness of our method, which achieves state-of-the-art quantitative performance while offering superior visual quality. | https://openaccess.thecvf.com/content/CVPR2021/papers/Xie_Style-Based_Point_Generator_With_Adversarial_Rendering_for_Point_Cloud_Completion_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.02535 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Xie_Style-Based_Point_Generator_With_Adversarial_Rendering_for_Point_Cloud_Completion_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Xie_Style-Based_Point_Generator_With_Adversarial_Rendering_for_Point_Cloud_Completion_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Xie_Style-Based_Point_Generator_CVPR_2021_supplemental.pdf | null |
Neural Architecture Search With Random Labels | Xuanyang Zhang, Pengfei Hou, Xiangyu Zhang, Jian Sun | In this paper, we investigate a new variant of the neural architecture search (NAS) paradigm -- searching with random labels (RLNAS). The task sounds counter-intuitive for most existing NAS algorithms, since random labels provide little information about the performance of each candidate architecture. Instead, we propose a novel NAS framework based on the ease-of-convergence hypothesis, which requires only random labels during searching. The algorithm involves two steps: first, we train a SuperNet using random labels; second, from the SuperNet we extract the sub-network whose weights change most significantly during the training. Extensive experiments are evaluated on multiple datasets (e.g. NAS-Bench-201 and ImageNet) and multiple search spaces (e.g. DARTS-like and MobileNet-like). Very surprisingly, RLNAS achieves comparable or even better results compared with state-of-the-art NAS methods such as PC-DARTS and Single Path One-Shot, even though the counterparts utilize full ground truth labels for searching. We hope our finding could inspire new understanding of the essence of NAS. | https://openaccess.thecvf.com/content/CVPR2021/papers/Zhang_Neural_Architecture_Search_With_Random_Labels_CVPR_2021_paper.pdf | http://arxiv.org/abs/2101.11834 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_Neural_Architecture_Search_With_Random_Labels_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_Neural_Architecture_Search_With_Random_Labels_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Zhang_Neural_Architecture_Search_CVPR_2021_supplemental.pdf | null |
Towards Long-Form Video Understanding | Chao-Yuan Wu, Philipp Krahenbuhl | Our world offers a never-ending stream of visual stimuli, yet today's vision systems only accurately recognize patterns within a few seconds. These systems understand the present, but fail to contextualize it in past or future events. In this paper, we study long-form video understanding. We introduce a framework for modeling long-form videos and develop evaluation protocols on large-scale datasets. We show that existing state-of-the-art short-term models are limited for long-form tasks. A novel object-centric transformer-based video recognition architecture performs significantly better on 7 diverse tasks. It also outperforms comparable state-of-the-art on the AVA dataset. | https://openaccess.thecvf.com/content/CVPR2021/papers/Wu_Towards_Long-Form_Video_Understanding_CVPR_2021_paper.pdf | http://arxiv.org/abs/2106.11310 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Wu_Towards_Long-Form_Video_Understanding_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Wu_Towards_Long-Form_Video_Understanding_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Wu_Towards_Long-Form_Video_CVPR_2021_supplemental.pdf | null |
Shape and Material Capture at Home | Daniel Lichy, Jiaye Wu, Soumyadip Sengupta, David W. Jacobs | In this paper, we present a technique for estimating the geometry and reflectance of objects using only a camera, flashlight, and optionally a tripod. We propose a simple data capture technique in which the user goes around the object, illuminating it with a flashlight and capturing only a few images. Our main technical contribution is the introduction of a recursive neural architecture, which can predict geometry and reflectance at 2^kx2^k resolution given an input image at 2^kx2^k and estimated geometry and reflectance from the previous step at 2^(k-1)x2^(k-1). This recursive architecture, termed RecNet, is trained with 256x256 resolution but can easily operate on 1024x1024 images during inference. We show that our method produces more accurate surface normal and albedo, especially in regions of specular highlights and cast shadows, compared to previous approaches, given three or fewer input images. | https://openaccess.thecvf.com/content/CVPR2021/papers/Lichy_Shape_and_Material_Capture_at_Home_CVPR_2021_paper.pdf | http://arxiv.org/abs/2104.06397 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Lichy_Shape_and_Material_Capture_at_Home_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Lichy_Shape_and_Material_Capture_at_Home_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Lichy_Shape_and_Material_CVPR_2021_supplemental.pdf | null |
Deep Polarization Imaging for 3D Shape and SVBRDF Acquisition | Valentin Deschaintre, Yiming Lin, Abhijeet Ghosh | We present a novel method for efficient acquisition of shape and spatially varying reflectance of 3D objects using polarization cues. Unlike previous works that have exploited polarization to estimate material or object appearance under certain constraints (known shape or multiview acquisition), we lift such restrictions by coupling polarization imaging with deep learning to achieve high quality estimate of 3D object shape (surface normals and depth) and SVBRDF using single-view polarization imaging under frontal flash illumination. In addition to acquired polarization images, we provide our deep network with strong novel cues related to shape and reflectance, in the form of a normalized Stokes map and an estimate of diffuse color. We additionally describe modifications to network architecture and training loss which provide further qualitative improvements. We demonstrate our approach to achieve superior results compared to recent works employing deep learning in conjunction with flash illumination. | https://openaccess.thecvf.com/content/CVPR2021/papers/Deschaintre_Deep_Polarization_Imaging_for_3D_Shape_and_SVBRDF_Acquisition_CVPR_2021_paper.pdf | http://arxiv.org/abs/2105.02875 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Deschaintre_Deep_Polarization_Imaging_for_3D_Shape_and_SVBRDF_Acquisition_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Deschaintre_Deep_Polarization_Imaging_for_3D_Shape_and_SVBRDF_Acquisition_CVPR_2021_paper.html | CVPR 2021 | null | null |
Convolutional Neural Network Pruning With Structural Redundancy Reduction | Zi Wang, Chengcheng Li, Xiangyang Wang | Convolutional neural network (CNN) pruning has become one of the most successful network compression approaches in recent years. Existing works on network pruning usually focus on removing the least important filters in the network to achieve compact architectures. In this study, we claim that identifying structural redundancy plays a more essential role than finding unimportant filters, theoretically and empirically. We first statistically model the network pruning problem from a redundancy reduction perspective and find that pruning in the layer(s) with the most structural redundancy outperforms pruning the least important filters across all layers. Based on this finding, we then propose a network pruning approach that identifies the structural redundancy of a CNN and prunes filters in the selected layer(s) with the most redundancy. Experiments on various benchmark network architectures and datasets show that our proposed approach significantly outperforms the previous state-of-the-art. | https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_Convolutional_Neural_Network_Pruning_With_Structural_Redundancy_Reduction_CVPR_2021_paper.pdf | http://arxiv.org/abs/2104.03438 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Wang_Convolutional_Neural_Network_Pruning_With_Structural_Redundancy_Reduction_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Wang_Convolutional_Neural_Network_Pruning_With_Structural_Redundancy_Reduction_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Wang_Convolutional_Neural_Network_CVPR_2021_supplemental.pdf | null |
T-vMF Similarity for Regularizing Intra-Class Feature Distribution | Takumi Kobayashi | Deep convolutional neural networks (CNNs) leverage large-scale training datasets to produce remarkable performance on various image classification tasks. It is, however, difficult to effectively train CNNs in realistic learning situations such as class imbalance, small-scale data, and label noise. Regularizing CNNs works well for learning with such deteriorated training datasets by mitigating overfitting. In this work, we propose a method to effectively impose regularization on feature representation learning. By focusing on the angle between a feature and a classifier, which is embedded in the cosine similarity at the classification layer, we formulate a novel similarity beyond the cosine, based on the von Mises-Fisher distribution of directional statistics. In contrast to the cosine similarity, our similarity is compact while having a heavy tail, which contributes to regularizing the intra-class feature distribution and improving generalization performance. Through experiments on realistic learning situations such as imbalance, small-scale data, and noisy labels, we demonstrate the effectiveness of the proposed method for training CNNs, in comparison to other regularization methods. Code is available at https://github.com/tk1980/tvMF. | https://openaccess.thecvf.com/content/CVPR2021/papers/Kobayashi_T-vMF_Similarity_for_Regularizing_Intra-Class_Feature_Distribution_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Kobayashi_T-vMF_Similarity_for_Regularizing_Intra-Class_Feature_Distribution_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Kobayashi_T-vMF_Similarity_for_Regularizing_Intra-Class_Feature_Distribution_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Kobayashi_T-vMF_Similarity_for_CVPR_2021_supplemental.pdf | null |
Surrogate Gradient Field for Latent Space Manipulation | Minjun Li, Yanghua Jin, Huachun Zhu | Generative adversarial networks (GANs) can generate high-quality images from sampled latent codes. Recent works attempt to edit an image by manipulating its underlying latent code, but rarely go beyond the basic task of attribute adjustment. We propose the first method that enables manipulation with multidimensional conditions such as keypoints and captions. Specifically, we design an algorithm that searches for a new latent code that satisfies the target condition based on the Surrogate Gradient Field (SGF) induced by an auxiliary mapping network. For quantitative comparison, we propose a metric to evaluate the disentanglement of manipulation methods. Thorough experimental analysis on the facial attribute adjustment task shows that our method outperforms state-of-the-art methods in disentanglement. We further apply our method to tasks of various condition modalities to demonstrate that our method can alter complex image properties such as keypoints and captions. | https://openaccess.thecvf.com/content/CVPR2021/papers/Li_Surrogate_Gradient_Field_for_Latent_Space_Manipulation_CVPR_2021_paper.pdf | http://arxiv.org/abs/2104.09065 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Li_Surrogate_Gradient_Field_for_Latent_Space_Manipulation_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Li_Surrogate_Gradient_Field_for_Latent_Space_Manipulation_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Li_Surrogate_Gradient_Field_CVPR_2021_supplemental.pdf | null |
SCF-Net: Learning Spatial Contextual Features for Large-Scale Point Cloud Segmentation | Siqi Fan, Qiulei Dong, Fenghua Zhu, Yisheng Lv, Peijun Ye, Fei-Yue Wang | How to learn effective features from large-scale point clouds for semantic segmentation has attracted increasing attention in recent years. Addressing this problem, we propose a learnable module that learns Spatial Contextual Features from large-scale point clouds, called SCF in this paper. The proposed module mainly consists of three blocks: the local polar representation block, the dual-distance attentive pooling block, and the global contextual feature block. For each 3D point, the local polar representation block first constructs a spatial representation that is invariant to rotation about the z-axis; the dual-distance attentive pooling block then utilizes the representations of its neighbors to learn more discriminative local features according to both the geometric and feature distances among them; and finally, the global contextual feature block learns a global context for each 3D point by utilizing its spatial location and the volume ratio of the neighborhood to the global point cloud. The proposed module can be easily embedded into various network architectures for point cloud segmentation, naturally resulting in a new 3D semantic segmentation network with an encoder-decoder architecture, called SCF-Net in this work. Extensive experimental results on two public datasets demonstrate that the proposed SCF-Net performs better than several state-of-the-art methods in most cases. | https://openaccess.thecvf.com/content/CVPR2021/papers/Fan_SCF-Net_Learning_Spatial_Contextual_Features_for_Large-Scale_Point_Cloud_Segmentation_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Fan_SCF-Net_Learning_Spatial_Contextual_Features_for_Large-Scale_Point_Cloud_Segmentation_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Fan_SCF-Net_Learning_Spatial_Contextual_Features_for_Large-Scale_Point_Cloud_Segmentation_CVPR_2021_paper.html | CVPR 2021 | null | null |
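As a rough illustration of the local polar representation block described above, the sketch below builds a z-rotation-invariant encoding of a point's neighborhood (planar radius, relative azimuth, height offset). The function name, the choice of the centroid direction as the angular reference, and the feature set are assumptions for illustration, not the exact SCF formulation.

```python
import numpy as np

def local_polar_representation(center, neighbors):
    """Encode a neighborhood relative to `center` (shape (3,)) in a way that is
    invariant to rotations about the z-axis (an illustrative sketch, not the
    exact SCF block). neighbors: (K, 3) array of neighbor coordinates.
    Returns (K, 3) features: planar radius, relative azimuth, height offset."""
    offsets = neighbors - center                        # (K, 3)
    r_xy = np.linalg.norm(offsets[:, :2], axis=1)       # planar distance
    dz = offsets[:, 2]                                  # height offset
    azimuth = np.arctan2(offsets[:, 1], offsets[:, 0])  # absolute angle around z
    # reference direction: angle of the neighborhood centroid; subtracting it
    # removes the dependence on the global orientation around the z-axis
    centroid = offsets[:, :2].mean(axis=0)
    ref = np.arctan2(centroid[1], centroid[0])
    rel_azimuth = (azimuth - ref + np.pi) % (2 * np.pi) - np.pi
    return np.stack([r_xy, rel_azimuth, dz], axis=1)

# toy check: the representation is unchanged by a rotation about z
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
c = np.zeros(3)
nbrs = np.random.default_rng(1).normal(size=(8, 3))
a = local_polar_representation(c, nbrs)
b = local_polar_representation(c @ R.T, nbrs @ R.T)
print(np.allclose(a, b))  # True
```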
UnsupervisedR&R: Unsupervised Point Cloud Registration via Differentiable Rendering | Mohamed El Banani, Luya Gao, Justin Johnson | Aligning partial views of a scene into a single whole is essential to understanding one's environment and is a key component of numerous robotics tasks such as SLAM and SfM. Recent approaches have proposed end-to-end systems that can outperform traditional methods by leveraging pose supervision. However, with the rising prevalence of cameras with depth sensors, we can expect a new stream of raw RGB-D data without the annotations needed for supervision. We propose UnsupervisedR&R: an end-to-end unsupervised approach to learning point cloud registration from raw RGB-D video. The key idea is to leverage differentiable alignment and rendering to enforce photometric and geometric consistency between frames. We evaluate our approach on indoor scene datasets and find that we outperform existing traditional approaches with classical and learned descriptors while being competitive with supervised geometric point cloud registration approaches. | https://openaccess.thecvf.com/content/CVPR2021/papers/Banani_UnsupervisedRR_Unsupervised_Point_Cloud_Registration_via_Differentiable_Rendering_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Banani_UnsupervisedRR_Unsupervised_Point_Cloud_Registration_via_Differentiable_Rendering_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Banani_UnsupervisedRR_Unsupervised_Point_Cloud_Registration_via_Differentiable_Rendering_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Banani_UnsupervisedRR_Unsupervised_Point_CVPR_2021_supplemental.pdf | null |
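The abstract above hinges on photometric and geometric consistency between a frame rendered from the aligned point cloud and the observed frame. The sketch below shows one plausible form of those two terms; the differentiable renderer that would produce `rendered_rgb` and `rendered_depth` is abstracted away, and the exact penalties and weights used by UnsupervisedR&R may differ.

```python
import numpy as np

def consistency_losses(rendered_rgb, target_rgb, rendered_depth, target_depth, valid):
    """Photometric (L1 on colour) and geometric (L1 on depth) consistency terms
    between a rendering of the aligned point cloud and the observed target
    frame (illustrative form only).

    rendered_rgb, target_rgb: (H, W, 3) arrays in [0, 1].
    rendered_depth, target_depth: (H, W) depth maps.
    valid: (H, W) boolean mask of pixels covered by the rendering."""
    n = valid.sum() + 1e-8
    photo = (np.abs(rendered_rgb - target_rgb).sum(axis=-1) * valid).sum() / n
    geo = (np.abs(rendered_depth - target_depth) * valid).sum() / n
    return float(photo), float(geo)

H, W = 6, 8
rng = np.random.default_rng(0)
print(consistency_losses(rng.random((H, W, 3)), rng.random((H, W, 3)),
                         rng.random((H, W)), rng.random((H, W)),
                         np.ones((H, W), dtype=bool)))
```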
ZeroScatter: Domain Transfer for Long Distance Imaging and Vision Through Scattering Media | Zheng Shi, Ethan Tseng, Mario Bijelic, Werner Ritter, Felix Heide | Adverse weather conditions, including snow, rain, and fog, pose a major challenge for both human and computer vision. Handling these environmental conditions is essential for safe decision making, especially in autonomous vehicles, robotics, and drones. Most of today's supervised imaging and vision approaches, however, rely on training data collected in the real world that is biased towards good weather conditions, with dense fog, snow, and heavy rain as outliers in these datasets. Without training data, let alone paired data, existing autonomous vehicles often limit themselves to good conditions and stop when dense fog or snow is detected. In this work, we tackle the lack of supervised training data by combining synthetic and indirect supervision. We present ZeroScatter, a domain transfer method for converting RGB-only captures taken in adverse weather into clear daytime scenes. ZeroScatter exploits model-based, temporal, multi-view, multi-modal, and adversarial cues in a joint fashion, allowing us to train on unpaired, biased data. We assess the proposed method on in-the-wild captures, and it outperforms existing monocular descattering approaches by 2.8 dB PSNR on controlled fog chamber measurements. | https://openaccess.thecvf.com/content/CVPR2021/papers/Shi_ZeroScatter_Domain_Transfer_for_Long_Distance_Imaging_and_Vision_Through_CVPR_2021_paper.pdf | http://arxiv.org/abs/2102.05847 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Shi_ZeroScatter_Domain_Transfer_for_Long_Distance_Imaging_and_Vision_Through_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Shi_ZeroScatter_Domain_Transfer_for_Long_Distance_Imaging_and_Vision_Through_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Shi_ZeroScatter_Domain_Transfer_CVPR_2021_supplemental.zip | null |
Defending Multimodal Fusion Models Against Single-Source Adversaries | Karren Yang, Wan-Yi Lin, Manash Barman, Filipe Condessa, Zico Kolter | Beyond achieving high performance across many vision tasks, multimodal models are expected to be robust to single-source faults due to the availability of redundant information between modalities. In this paper, we investigate the robustness of multimodal neural networks against worst-case (i.e., adversarial) perturbations on a single modality. We first show that standard multimodal fusion models are vulnerable to single-source adversaries: an attack on any single modality can overcome the correct information from multiple unperturbed modalities and cause the model to fail. This surprising vulnerability holds across diverse multimodal tasks and necessitates a solution. Motivated by this finding, we propose an adversarially robust fusion strategy that trains the model to compare information coming from all the input sources, detect inconsistencies in the perturbed modality compared to the other modalities, and only allow information from the unperturbed modalities to pass through. Our approach significantly improves on state-of-the-art methods in single-source robustness, achieving gains of 7.8-25.2% on action recognition, 19.7-48.2% on object detection, and 1.6-6.7% on sentiment analysis, without degrading performance on unperturbed (i.e., clean) data. | https://openaccess.thecvf.com/content/CVPR2021/papers/Yang_Defending_Multimodal_Fusion_Models_Against_Single-Source_Adversaries_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Yang_Defending_Multimodal_Fusion_Models_Against_Single-Source_Adversaries_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Yang_Defending_Multimodal_Fusion_Models_Against_Single-Source_Adversaries_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Yang_Defending_Multimodal_Fusion_CVPR_2021_supplemental.pdf | null |
Generalized Domain Adaptation | Yu Mitsuzumi, Go Irie, Daiki Ikami, Takashi Shibata | Many variants of unsupervised domain adaptation (UDA) problems have been proposed and solved individually. Its side effect is that a method that works for one variant is often ineffective for or not even applicable to another, which has prevented practical applications. In this paper, we give a general representation of UDA problems, named Generalized Domain Adaptation (GDA). GDA covers the major variants as special cases, which allows us to organize them in a comprehensive framework. Moreover, this generalization leads to a new challenging setting where existing methods fail, such as when domain labels are unknown, and class labels are only partially given to each domain. We propose a novel approach to the new setting. The key to our approach is self-supervised class-destructive learning, which enables the learning of class-invariant representations and domain-adversarial classifiers without using any domain labels. Extensive experiments using three benchmark datasets demonstrate that our method outperforms the state-of-the-art UDA methods in the new setting and that it is competitive in existing UDA variations as well. | https://openaccess.thecvf.com/content/CVPR2021/papers/Mitsuzumi_Generalized_Domain_Adaptation_CVPR_2021_paper.pdf | http://arxiv.org/abs/2106.01656 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Mitsuzumi_Generalized_Domain_Adaptation_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Mitsuzumi_Generalized_Domain_Adaptation_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Mitsuzumi_Generalized_Domain_Adaptation_CVPR_2021_supplemental.pdf | null |
AGORA: Avatars in Geography Optimized for Regression Analysis | Priyanka Patel, Chun-Hao P. Huang, Joachim Tesch, David T. Hoffmann, Shashank Tripathi, Michael J. Black | While the accuracy of 3D human pose estimation from images has steadily improved on benchmark datasets, the best methods still fail in many real-world scenarios. This suggests that there is a domain gap between current datasets and common scenes containing people. To obtain ground-truth 3D pose, current datasets limit the complexity of clothing, environmental conditions, number of subjects, and occlusion. Moreover, current datasets evaluate sparse 3D joint locations corresponding to the major joints of the body, ignoring the hand pose and the face shape. To evaluate the current state-of-the-art methods on more challenging images, and to drive the field to address new problems, we introduce AGORA, a synthetic dataset with high realism and highly accurate ground truth. Here we use 4240 commercially-available, high-quality, textured human scans in diverse poses and natural clothing; this includes 257 scans of children. We create reference 3D poses and body shapes by fitting the SMPL-X body model (with face and hands) to the 3D scans, taking into account clothing. We create around 14K training and 3K test images by rendering between 5 and 15 people per image using either image-based lighting or rendered 3D environments, taking care to make the images physically plausible and photoreal. In total, AGORA consists of 173K individual person crops. We evaluate existing state-of-the-art methods for 3D human pose estimation on this dataset and find that most methods perform poorly on images of children. Hence, we extend the SMPL-X model to better capture the shape of children. Additionally, we fine-tune methods on AGORA and show improved performance on both AGORA and 3DPW, confirming the realism of the dataset. We provide all the registered 3D reference training data, rendered images, and a web-based evaluation site at https://agora.is.tue.mpg.de/. | https://openaccess.thecvf.com/content/CVPR2021/papers/Patel_AGORA_Avatars_in_Geography_Optimized_for_Regression_Analysis_CVPR_2021_paper.pdf | http://arxiv.org/abs/2104.14643 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Patel_AGORA_Avatars_in_Geography_Optimized_for_Regression_Analysis_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Patel_AGORA_Avatars_in_Geography_Optimized_for_Regression_Analysis_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Patel_AGORA_Avatars_in_CVPR_2021_supplemental.pdf | null |
Exploring and Distilling Posterior and Prior Knowledge for Radiology Report Generation | Fenglin Liu, Xian Wu, Shen Ge, Wei Fan, Yuexian Zou | Automatically generating radiology reports can improve current clinical practice in diagnostic radiology. On one hand, it can relieve radiologists from the heavy burden of report writing; on the other hand, it can remind radiologists of abnormalities and help avoid misdiagnosis and missed diagnoses. Yet, this task remains a challenging job for data-driven neural networks, due to serious visual and textual data biases. To this end, we propose a Posterior-and-Prior Knowledge Exploring-and-Distilling approach (PPKED) to imitate the working patterns of radiologists, who first examine the abnormal regions and assign disease topic tags to them, and then rely on years of accumulated medical knowledge and working experience to write reports. Thus, PPKED includes three modules: the Posterior Knowledge Explorer (PoKE), the Prior Knowledge Explorer (PrKE) and the Multi-domain Knowledge Distiller (MKD). In detail, PoKE explores posterior knowledge, which provides explicit abnormal visual regions to alleviate visual data bias; PrKE explores prior knowledge from the prior medical knowledge graph (medical knowledge) and prior radiology reports (working experience) to alleviate textual data bias. The explored knowledge is distilled by the MKD to generate the final reports. Evaluated on the MIMIC-CXR and IU-Xray datasets, our method is able to outperform previous state-of-the-art models on these two datasets. | https://openaccess.thecvf.com/content/CVPR2021/papers/Liu_Exploring_and_Distilling_Posterior_and_Prior_Knowledge_for_Radiology_Report_CVPR_2021_paper.pdf | http://arxiv.org/abs/2106.06963 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Liu_Exploring_and_Distilling_Posterior_and_Prior_Knowledge_for_Radiology_Report_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Liu_Exploring_and_Distilling_Posterior_and_Prior_Knowledge_for_Radiology_Report_CVPR_2021_paper.html | CVPR 2021 | null | null |
Rotation Coordinate Descent for Fast Globally Optimal Rotation Averaging | Alvaro Parra, Shin-Fang Chng, Tat-Jun Chin, Anders Eriksson, Ian Reid | Under mild conditions on the noise level of the measurements, rotation averaging satisfies strong duality, which enables global solutions to be obtained via semidefinite programming (SDP) relaxation. However, generic solvers for SDP are rather slow in practice, even on rotation averaging instances of moderate size, thus developing specialised algorithms is vital. In this paper, we present a fast algorithm that achieves global optimality called rotation coordinate descent (RCD). Unlike block coordinate descent (BCD) which solves SDP by updating the semidefinite matrix in a row-by-row fashion, RCD directly maintains and updates all valid rotations throughout the iterations. This obviates the need to store a large dense semidefinite matrix. We mathematically prove the convergence of our algorithm and empirically show its superior efficiency over state-of-the-art global methods on a variety of problem configurations. Maintaining valid rotations also facilitates incorporating local optimisation routines for further speed-ups. Moreover, our algorithm is simple to implement. | https://openaccess.thecvf.com/content/CVPR2021/papers/Parra_Rotation_Coordinate_Descent_for_Fast_Globally_Optimal_Rotation_Averaging_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.08292 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Parra_Rotation_Coordinate_Descent_for_Fast_Globally_Optimal_Rotation_Averaging_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Parra_Rotation_Coordinate_Descent_for_Fast_Globally_Optimal_Rotation_Averaging_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Parra_Rotation_Coordinate_Descent_CVPR_2021_supplemental.pdf | null |
Extreme Rotation Estimation Using Dense Correlation Volumes | Ruojin Cai, Bharath Hariharan, Noah Snavely, Hadar Averbuch-Elor | We present a technique for estimating the relative 3D rotation of an RGB image pair in an extreme setting, where the images have little or no overlap. We observe that, even when images do not overlap, there may be rich hidden cues as to their geometric relationship, such as light source directions, vanishing points, and symmetries present in the scene. We propose a network design that can automatically learn such implicit cues by comparing all pairs of points between the two input images. Our method therefore constructs dense feature correlation volumes and processes these to predict relative 3D rotations. Our predictions are formed over a fine-grained discretization of rotations, bypassing difficulties associated with regressing 3D rotations. We demonstrate our approach on a large variety of extreme RGB image pairs, including indoor and outdoor images captured under different lighting conditions and geographic locations. Our evaluation shows that our model can successfully estimate relative rotations among non-overlapping images without compromising performance over overlapping image pairs. | https://openaccess.thecvf.com/content/CVPR2021/papers/Cai_Extreme_Rotation_Estimation_Using_Dense_Correlation_Volumes_CVPR_2021_paper.pdf | http://arxiv.org/abs/2104.13530 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Cai_Extreme_Rotation_Estimation_Using_Dense_Correlation_Volumes_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Cai_Extreme_Rotation_Estimation_Using_Dense_Correlation_Volumes_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Cai_Extreme_Rotation_Estimation_CVPR_2021_supplemental.pdf | null |
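A dense correlation volume of the kind mentioned above can be formed by taking dot products between every pair of feature locations in the two images. The sketch below shows this generic construction with numpy's einsum; the feature extractor and the rotation-prediction head that the paper builds on top of it are omitted.

```python
import numpy as np

def dense_correlation_volume(feat1, feat2):
    """All-pairs feature correlation between two feature maps.

    feat1, feat2: (C, H, W) feature maps from the two input images.
    Returns a 4D volume of shape (H, W, H, W) where entry [i, j, k, l] is the
    (scaled) dot product between feat1[:, i, j] and feat2[:, k, l]. Such a
    volume is the generic ingredient behind correlation-based relative pose
    estimation; a network would then process it to predict a discretized
    3D rotation."""
    C = feat1.shape[0]
    return np.einsum('cij,ckl->ijkl', feat1, feat2) / np.sqrt(C)

f1 = np.random.default_rng(0).normal(size=(8, 6, 6))
f2 = np.random.default_rng(1).normal(size=(8, 6, 6))
print(dense_correlation_volume(f1, f2).shape)  # (6, 6, 6, 6)
```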
Capsule Network Is Not More Robust Than Convolutional Network | Jindong Gu, Volker Tresp, Han Hu | The Capsule Network is widely believed to be more robust than Convolutional Networks. However, comprehensive comparisons between these two networks are lacking, and it is also unknown which components of CapsNet affect its robustness. In this paper, we first carefully examine the special designs in CapsNet that differ from those of a ConvNet commonly used for image classification. The examination reveals five major new/different components in CapsNet: a transformation process, a dynamic routing layer, a squashing function, a margin loss instead of the cross-entropy loss, and an additional class-conditional reconstruction loss for regularization. Along with these major differences, we comprehensively ablate their behavior on three kinds of robustness, including affine transformation, overlapping digits, and semantic representation. The study reveals that some designs that are thought critical to CapsNet can actually harm its robustness, i.e., the dynamic routing layer and the transformation process, while others are beneficial for robustness. Based on these findings, we propose enhanced ConvNets simply by introducing the essential components behind the CapsNet's success. The proposed simple ConvNets can achieve better robustness than the CapsNet. | https://openaccess.thecvf.com/content/CVPR2021/papers/Gu_Capsule_Network_Is_Not_More_Robust_Than_Convolutional_Network_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.15459 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Gu_Capsule_Network_Is_Not_More_Robust_Than_Convolutional_Network_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Gu_Capsule_Network_Is_Not_More_Robust_Than_Convolutional_Network_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Gu_Capsule_Network_Is_CVPR_2021_supplemental.pdf | null |
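One of the CapsNet components ablated above is the squashing function. For reference, the sketch below implements the standard squashing non-linearity from the original CapsNet paper (Sabour et al., 2017), which shrinks short capsule vectors toward zero and long ones toward unit length while preserving orientation.

```python
import numpy as np

def squash(s, eps=1e-9):
    """Squashing non-linearity applied to capsule pre-activations s (..., D):
    output norm approaches 0 for short vectors and 1 for long vectors."""
    sq_norm = np.sum(s ** 2, axis=-1, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

v = squash(np.array([[0.1, 0.0], [10.0, 0.0]]))
print(np.linalg.norm(v, axis=-1))  # short capsule ~0.01, long capsule ~0.99
```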
BASAR:Black-Box Attack on Skeletal Action Recognition | Yunfeng Diao, Tianjia Shao, Yong-Liang Yang, Kun Zhou, He Wang | Skeletal motion plays a vital role in human activity recognition as either an independent data source or a complement. The robustness of skeleton-based activity recognizers has recently been questioned, as they are vulnerable to adversarial attacks when full knowledge of the recognizer is accessible to the attacker. However, this white-box requirement is overly restrictive in most scenarios and the attack is not truly threatening. In this paper, we show that such threats do exist under black-box settings too. To this end, we propose the first black-box adversarial attack method, BASAR. Through BASAR, we show that adversarial attack is not only a genuine threat but can also be extremely deceitful, because on-manifold adversarial samples are rather common in skeletal motions, in contrast to the common belief that adversarial samples only exist off-manifold. Through exhaustive evaluation and comparison, we show that BASAR can deliver successful attacks across models, data, and attack modes. Through rigorous perceptual studies, we show that it achieves effective yet imperceptible attacks. By analyzing the attack on different activity recognizers, BASAR helps identify the potential causes of their vulnerability and provides insights on which classifiers are likely to be more robust against attack. | https://openaccess.thecvf.com/content/CVPR2021/papers/Diao_BASARBlack-Box_Attack_on_Skeletal_Action_Recognition_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.05266 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Diao_BASARBlack-Box_Attack_on_Skeletal_Action_Recognition_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Diao_BASARBlack-Box_Attack_on_Skeletal_Action_Recognition_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Diao_BASARBlack-Box_Attack_on_CVPR_2021_supplemental.zip | null |
Self-Supervised Learning on 3D Point Clouds by Learning Discrete Generative Models | Benjamin Eckart, Wentao Yuan, Chao Liu, Jan Kautz | While recent pre-training tasks on 2D images have proven very successful for transfer learning, pre-training for 3D data remains challenging. In this work, we introduce a general method for 3D self-supervised representation learning that 1) remains agnostic to the underlying neural network architecture, and 2) specifically leverages the geometric nature of 3D point cloud data. The proposed task softly segments 3D points into a discrete number of geometric partitions. A self-supervised loss is formed under the interpretation that these soft partitions implicitly parameterize a latent Gaussian Mixture Model (GMM), and that this generative model establishes a data likelihood function. Our pretext task can therefore be viewed in terms of an encoder-decoder paradigm that squeezes learned representations through an implicitly defined parametric discrete generative model bottleneck. We show that any existing neural network architecture designed for supervised point cloud segmentation can be repurposed for the proposed unsupervised pretext task. By maximizing data likelihood with respect to the soft partitions formed by the unsupervised point-wise segmentation network, learned representations are encouraged to contain compositionally rich geometric information. In tests, we show that our method naturally induces semantic separation in feature space, resulting in state-of-the-art performance on downstream applications like model classification and semantic segmentation. | https://openaccess.thecvf.com/content/CVPR2021/papers/Eckart_Self-Supervised_Learning_on_3D_Point_Clouds_by_Learning_Discrete_Generative_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Eckart_Self-Supervised_Learning_on_3D_Point_Clouds_by_Learning_Discrete_Generative_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Eckart_Self-Supervised_Learning_on_3D_Point_Clouds_by_Learning_Discrete_Generative_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Eckart_Self-Supervised_Learning_on_CVPR_2021_supplemental.pdf | null |
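The abstract above interprets soft point partitions as parameterizing a latent Gaussian Mixture Model whose data likelihood defines the self-supervised loss. The sketch below renders that interpretation in numpy with a spherical-covariance simplification; variable names and the exact parameterization are assumptions, not the paper's implementation.

```python
import numpy as np

def gmm_from_soft_partitions(points, gamma, eps=1e-6):
    """Given points (N, 3) and soft partition scores gamma (N, K) that sum to 1
    per point, form GMM parameters (mixing weights, means, spherical variances)
    as in a single EM M-step, then return the negative data log-likelihood, a
    quantity one could minimize as a pretext loss (illustrative only)."""
    N, K = gamma.shape
    Nk = gamma.sum(axis=0) + eps                           # (K,) soft counts
    pi = Nk / N                                            # mixing weights
    mu = (gamma.T @ points) / Nk[:, None]                  # (K, 3) means
    d2 = ((points[:, None, :] - mu[None]) ** 2).sum(-1)    # (N, K) squared dists
    var = (gamma * d2).sum(axis=0) / (3.0 * Nk) + eps      # spherical variances
    log_comp = (np.log(pi)[None]
                - 1.5 * np.log(2.0 * np.pi * var)[None]
                - 0.5 * d2 / var[None])                    # log pi_k N(x | k)
    log_lik = np.logaddexp.reduce(log_comp, axis=1).mean()
    return -log_lik

pts = np.random.default_rng(0).normal(size=(256, 3))
g = np.random.default_rng(1).random((256, 4))
g = g / g.sum(axis=1, keepdims=True)
print(gmm_from_soft_partitions(pts, g))
```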
Iso-Points: Optimizing Neural Implicit Surfaces With Hybrid Representations | Wang Yifan, Shihao Wu, Cengiz Oztireli, Olga Sorkine-Hornung | Neural implicit functions have emerged as a powerful representation for surfaces in 3D. Such a function can encode a high quality surface with intricate details into the parameters of a deep neural network. However, optimizing the parameters for accurate and robust reconstructions remains a challenge, especially when the input data is noisy or incomplete. In this work, we develop a hybrid neural surface representation that allows us to impose geometry-aware sampling and regularization, which significantly improves the fidelity of reconstructions. We propose to use iso-points as an explicit representation for a neural implicit function. These points are computed and updated on-the-fly during training to capture important geometric features and impose geometric constraints on the optimization. We demonstrate that our method can be adopted to improve state-of-the-art techniques for reconstructing neural implicit surfaces from multi-view images or point clouds. Quantitative and qualitative evaluations show that, compared with existing sampling and optimization methods, our approach allows faster convergence, better generalization, and accurate recovery of details and topology. | https://openaccess.thecvf.com/content/CVPR2021/papers/Yifan_Iso-Points_Optimizing_Neural_Implicit_Surfaces_With_Hybrid_Representations_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Yifan_Iso-Points_Optimizing_Neural_Implicit_Surfaces_With_Hybrid_Representations_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Yifan_Iso-Points_Optimizing_Neural_Implicit_Surfaces_With_Hybrid_Representations_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Yifan_Iso-Points_Optimizing_Neural_CVPR_2021_supplemental.pdf | null |
Dense Relation Distillation With Context-Aware Aggregation for Few-Shot Object Detection | Hanzhe Hu, Shuai Bai, Aoxue Li, Jinshi Cui, Liwei Wang | Conventional deep learning based methods for object detection require a large amount of bounding box annotations for training, and such high-quality annotated data are expensive to obtain. Few-shot object detection, which learns to adapt to novel classes with only a few annotated examples, is very challenging since the fine-grained features of a novel object can easily be overlooked with only a few samples available. In this work, aiming to fully exploit features of annotated novel objects and capture fine-grained features of the query object, we propose Dense Relation Distillation with Context-aware Aggregation (DCNet) to tackle the few-shot detection problem. Built on a meta-learning based framework, the Dense Relation Distillation module targets fully exploiting support features, where support features and the query feature are densely matched, covering all spatial locations in a feed-forward fashion. The abundant usage of the guidance information endows the model with the capability to handle common challenges such as appearance changes and occlusions. Moreover, to better capture scale-aware features, the Context-aware Aggregation module adaptively harnesses features from different scales for a more comprehensive feature representation. Extensive experiments illustrate that our proposed approach achieves state-of-the-art results on the PASCAL VOC and MS COCO datasets. Code will be made available at https://github.com/hzhupku/DCNet. | https://openaccess.thecvf.com/content/CVPR2021/papers/Hu_Dense_Relation_Distillation_With_Context-Aware_Aggregation_for_Few-Shot_Object_Detection_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.17115 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Hu_Dense_Relation_Distillation_With_Context-Aware_Aggregation_for_Few-Shot_Object_Detection_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Hu_Dense_Relation_Distillation_With_Context-Aware_Aggregation_for_Few-Shot_Object_Detection_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Hu_Dense_Relation_Distillation_CVPR_2021_supplemental.pdf | null |
End-to-End Human Object Interaction Detection With HOI Transformer | Cheng Zou, Bohan Wang, Yue Hu, Junqi Liu, Qian Wu, Yu Zhao, Boxun Li, Chenguang Zhang, Chi Zhang, Yichen Wei, Jian Sun | We propose HOI Transformer to tackle human object interaction (HOI) detection in an end-to-end manner. Current approaches either decouple the HOI task into separate stages of object detection and interaction classification or introduce a surrogate interaction problem. In contrast, our method, named HOI Transformer, streamlines the HOI pipeline by eliminating the need for many hand-designed components. HOI Transformer reasons about the relations of objects and humans from global image context and directly predicts HOI instances in parallel. A quintuple matching loss is introduced to enforce HOI predictions in a unified way. Our method is conceptually much simpler and demonstrates improved accuracy. Without bells and whistles, HOI Transformer achieves 26.61% AP on HICO-DET and 52.9% AProle on V-COCO, surpassing previous methods with the advantage of being much simpler. We hope our approach will serve as a simple and effective alternative for HOI tasks. Code is available at https://github.com/bbepoch/HoiTransformer. | https://openaccess.thecvf.com/content/CVPR2021/papers/Zou_End-to-End_Human_Object_Interaction_Detection_With_HOI_Transformer_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.04503 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Zou_End-to-End_Human_Object_Interaction_Detection_With_HOI_Transformer_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Zou_End-to-End_Human_Object_Interaction_Detection_With_HOI_Transformer_CVPR_2021_paper.html | CVPR 2021 | null | null |
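Set-prediction losses such as the quintuple matching loss mentioned above rely on a one-to-one assignment between predictions and ground truth. The sketch below shows the generic bipartite matching step with scipy's Hungarian solver; the cost matrix entries (combining human-box, object-box, object-class, and interaction terms) are left abstract and are an assumption about how such a cost would be assembled.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_predictions_to_targets(cost_matrix):
    """One-to-one matching between predicted HOI instances (rows) and ground
    truth (columns), as used by set-prediction losses. cost_matrix[i, j] is an
    application-specific matching cost, e.g. a weighted sum of box and class
    terms for the human, the object, and the interaction."""
    rows, cols = linear_sum_assignment(cost_matrix)
    return list(zip(rows.tolist(), cols.tolist()))

cost = np.array([[0.2, 0.9, 0.5],
                 [0.8, 0.1, 0.7]])
print(match_predictions_to_targets(cost))  # [(0, 0), (1, 1)]
```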
How Does Topology Influence Gradient Propagation and Model Performance of Deep Networks With DenseNet-Type Skip Connections? | Kartikeya Bhardwaj, Guihong Li, Radu Marculescu | DenseNets introduce concatenation-type skip connections that achieve state-of-the-art accuracy in several computer vision tasks. In this paper, we reveal that the topology of the concatenation-type skip connections is closely related to the gradient propagation which, in turn, enables a predictable behavior of DNNs' test performance. To this end, we introduce a new metric called NN-Mass to quantify how effectively information flows through DNNs. Moreover, we empirically show that NN-Mass also works for other types of skip connections, e.g., for ResNets, Wide-ResNets (WRNs), and MobileNets, which contain addition-type skip connections (i.e., residuals or inverted residuals). As such, for both DenseNet-like CNNs and ResNets/WRNs/MobileNets, our theoretically grounded NN-Mass can identify models with similar accuracy, despite having significantly different size/compute requirements. Detailed experiments on both synthetic and real datasets (e.g., MNIST, CIFAR-10, CIFAR-100, ImageNet) provide extensive evidence for our insights. Finally, the closed-form equation of our NN-Mass enables us to design significantly compressed DenseNets (for CIFAR-10) and MobileNets (for ImageNet) directly at initialization without time-consuming training and/or searching. | https://openaccess.thecvf.com/content/CVPR2021/papers/Bhardwaj_How_Does_Topology_Influence_Gradient_Propagation_and_Model_Performance_of_CVPR_2021_paper.pdf | http://arxiv.org/abs/1910.00780 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Bhardwaj_How_Does_Topology_Influence_Gradient_Propagation_and_Model_Performance_of_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Bhardwaj_How_Does_Topology_Influence_Gradient_Propagation_and_Model_Performance_of_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Bhardwaj_How_Does_Topology_CVPR_2021_supplemental.pdf | null |
Multi-Shot Temporal Event Localization: A Benchmark | Xiaolong Liu, Yao Hu, Song Bai, Fei Ding, Xiang Bai, Philip H. S. Torr | Current developments in temporal event or action localization usually target actions captured by a single camera. However, extensive events or actions in the wild may be captured as a sequence of shots by multiple cameras at different positions. In this paper, we propose a new and challenging task called multi-shot temporal event localization, and accordingly, collect a large-scale dataset called MUlti-Shot EventS (MUSES). MUSES has 31,477 event instances for a total of 716 video hours. The core nature of MUSES is the frequent shot cuts, for an average of 19 shots per instance and 176 shots per video, which induces large intra-instance variations. Our comprehensive evaluations show that the state-of-the-art method in temporal action localization only achieves an mAP of 13.1% at IoU=0.5. As a minor contribution, we present a simple baseline approach for handling the intra-instance variations, which reports an mAP of 18.9% on MUSES and 56.9% on THUMOS14 at IoU=0.5. To facilitate research in this direction, we release the dataset and the project code at https://songbai.site/muses/. | https://openaccess.thecvf.com/content/CVPR2021/papers/Liu_Multi-Shot_Temporal_Event_Localization_A_Benchmark_CVPR_2021_paper.pdf | http://arxiv.org/abs/2012.09434 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Liu_Multi-Shot_Temporal_Event_Localization_A_Benchmark_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Liu_Multi-Shot_Temporal_Event_Localization_A_Benchmark_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Liu_Multi-Shot_Temporal_Event_CVPR_2021_supplemental.pdf | null |
We Are More Than Our Joints: Predicting How 3D Bodies Move | Yan Zhang, Michael J. Black, Siyu Tang | A key step towards understanding human behavior is the prediction of 3D human motion. Successful solutions have many applications in human tracking, HCI, and graphics. Most previous work focuses on predicting a time series of future 3D joint locations given a sequence of 3D joints from the past. This Euclidean formulation generally works better than predicting pose in terms of joint rotations. Body joint locations, however, do not fully constrain 3D human pose, leaving degrees of freedom (like rotation about a limb) undefined. Note that 3D joints can be viewed as a sparse point cloud. Thus the problem of human motion prediction can be seen as a problem of point cloud prediction. With this observation, we instead predict a sparse set of locations on the body surface that correspond to motion capture markers. Given such markers, we fit a parametric body model to recover the 3D body of the person. These sparse surface markers also carry detailed information about human movement that is not present in the joints, increasing the naturalness of the predicted motions. Using the AMASS dataset, we train MOJO (More than Our JOints), which is a novel variational autoencoder with a latent DCT space that generates motions from latent frequencies. MOJO preserves the full temporal resolution of the input motion, and sampling from the latent frequencies explicitly introduces high-frequency components into the generated motion. We note that motion prediction methods accumulate errors over time, resulting in joints or markers that diverge from true human bodies. To address this, we fit the SMPL-X body model to the predictions at each time step, projecting the solution back onto the space of valid bodies, before propagating the new markers in time. Quantitative and qualitative experiments show that our approach produces state-of-the-art results and realistic 3D body animations. The code is available for research purposes at https://yz-cnsdqz.github.io/MOJO/MOJO.html. | https://openaccess.thecvf.com/content/CVPR2021/papers/Zhang_We_Are_More_Than_Our_Joints_Predicting_How_3D_Bodies_CVPR_2021_paper.pdf | http://arxiv.org/abs/2012.00619 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_We_Are_More_Than_Our_Joints_Predicting_How_3D_Bodies_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_We_Are_More_Than_Our_Joints_Predicting_How_3D_Bodies_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Zhang_We_Are_More_CVPR_2021_supplemental.pdf | null |
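MOJO's latent DCT space, as described above, builds on representing temporal signals by their frequency coefficients. The sketch below shows the basic operation on marker trajectories using scipy's DCT; it is only an analogue of operating in a latent frequency space and not the paper's VAE.

```python
import numpy as np
from scipy.fft import dct, idct

def to_frequencies(markers, keep=None):
    """markers: (T, M, 3) marker trajectories over T frames. Apply a DCT along
    time so the motion is represented by frequency coefficients; optionally
    keep only the first `keep` frequencies (illustrative)."""
    coeffs = dct(markers, axis=0, norm='ortho')   # (T, M, 3)
    if keep is not None:
        coeffs[keep:] = 0.0                       # drop high frequencies
    return coeffs

def from_frequencies(coeffs):
    """Inverse transform back to the time domain."""
    return idct(coeffs, axis=0, norm='ortho')

x = np.random.default_rng(0).normal(size=(60, 67, 3))  # 60 frames, 67 markers
rec = from_frequencies(to_frequencies(x))
print(np.allclose(x, rec))  # True: lossless when all frequencies are kept
```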
Spatially-Adaptive Pixelwise Networks for Fast Image Translation | Tamar Rott Shaham, Michael Gharbi, Richard Zhang, Eli Shechtman, Tomer Michaeli | We introduce a new generator architecture, aimed at fast and efficient high-resolution image-to-image translation. We design the generator to be an extremely lightweight function of the full-resolution image. In fact, we use pixel-wise networks; that is, each pixel is processed independently of others, through a composition of simple affine transformations and nonlinearities. We take three important steps to equip such a seemingly simple function with adequate expressivity. First, the parameters of the pixel-wise networks are spatially varying so they can represent a broader function class than simple 1x1 convolutions. Second, these parameters are predicted by a fast convolutional network that processes an aggressively low-resolution representation of the input. Third, we augment the input image with a sinusoidal encoding of spatial coordinates, which provides an effective inductive bias for generating realistic novel high-frequency image content. As a result, our model is up to 18x faster than state-of-the-art baselines. We achieve this speedup while generating comparable visual quality across different image resolutions and translation domains. | https://openaccess.thecvf.com/content/CVPR2021/papers/Shaham_Spatially-Adaptive_Pixelwise_Networks_for_Fast_Image_Translation_CVPR_2021_paper.pdf | http://arxiv.org/abs/2012.02992 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Shaham_Spatially-Adaptive_Pixelwise_Networks_for_Fast_Image_Translation_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Shaham_Spatially-Adaptive_Pixelwise_Networks_for_Fast_Image_Translation_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Shaham_Spatially-Adaptive_Pixelwise_Networks_CVPR_2021_supplemental.pdf | null |
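The sinusoidal encoding of spatial coordinates mentioned above can be sketched as follows: each pixel's normalized (x, y) position is mapped through sines and cosines at several frequencies. The frequency schedule and feature layout below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def sinusoidal_coordinate_encoding(height, width, num_freqs=6):
    """Encode pixel coordinates with sines and cosines at several frequencies,
    the kind of positional input that lets pixel-wise MLPs produce
    high-frequency content. Returns an array of shape (H, W, 4 * num_freqs)."""
    ys = np.linspace(-1.0, 1.0, height)
    xs = np.linspace(-1.0, 1.0, width)
    yy, xx = np.meshgrid(ys, xs, indexing='ij')   # (H, W) coordinate grids
    feats = []
    for k in range(num_freqs):
        freq = (2.0 ** k) * np.pi                 # octave-spaced frequencies
        for grid in (xx, yy):
            feats.append(np.sin(freq * grid))
            feats.append(np.cos(freq * grid))
    return np.stack(feats, axis=-1)

enc = sinusoidal_coordinate_encoding(256, 256, num_freqs=6)
print(enc.shape)  # (256, 256, 24)
```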
PointFlow: Flowing Semantics Through Points for Aerial Image Segmentation | Xiangtai Li, Hao He, Xia Li, Duo Li, Guangliang Cheng, Jianping Shi, Lubin Weng, Yunhai Tong, Zhouchen Lin | Aerial image segmentation is a particular semantic segmentation problem with several challenging characteristics that general semantic segmentation does not have. There are two critical issues: one is an extremely imbalanced foreground-background distribution, and the other is the presence of many small objects along with a complex background. Such problems make recent dense affinity context modeling perform poorly even compared with baselines, due to over-introduced background context. To handle these problems, we propose a point-wise affinity propagation module based on the FPN framework, named PointFlow. Rather than dense affinity learning, a sparse affinity map is generated upon selected points between the adjacent features, which reduces the noise introduced by the background while keeping efficiency. In particular, we design a dual point matcher to select points from the salient area and object boundaries, respectively. The former samples salient points while the latter samples points from the object boundaries. Experimental results on three different aerial segmentation datasets suggest that the proposed method is more effective and efficient than state-of-the-art general semantic segmentation methods. Notably, our method achieves the best speed-accuracy trade-off on the three aerial benchmarks. Further experiments on three general semantic segmentation datasets prove the generality of our method. Both code and models will be available for further research. | https://openaccess.thecvf.com/content/CVPR2021/papers/Li_PointFlow_Flowing_Semantics_Through_Points_for_Aerial_Image_Segmentation_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.06564 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Li_PointFlow_Flowing_Semantics_Through_Points_for_Aerial_Image_Segmentation_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Li_PointFlow_Flowing_Semantics_Through_Points_for_Aerial_Image_Segmentation_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Li_PointFlow_Flowing_Semantics_CVPR_2021_supplemental.pdf | null |
Deep Stable Learning for Out-of-Distribution Generalization | Xingxuan Zhang, Peng Cui, Renzhe Xu, Linjun Zhou, Yue He, Zheyan Shen | Approaches based on deep neural networks have achieved striking performance when testing data and training data share a similar distribution, but can fail significantly otherwise. Therefore, eliminating the impact of distribution shifts between training and testing data is crucial for building performance-promising deep models. Conventional methods assume either the known heterogeneity of training data (e.g. domain labels) or the approximately equal capacities of different domains. In this paper, we consider a more challenging case where neither of the above assumptions holds. We propose to address this problem by removing the dependencies between features via learning weights for training samples, which helps deep models get rid of spurious correlations and, in turn, concentrate more on the true connection between discriminative features and labels. Through extensive experiments on distribution generalization benchmarks including PACS, VLCS, MNIST-M, and NICO, we clearly demonstrate the effectiveness of our method compared with state-of-the-art counterparts. | https://openaccess.thecvf.com/content/CVPR2021/papers/Zhang_Deep_Stable_Learning_for_Out-of-Distribution_Generalization_CVPR_2021_paper.pdf | http://arxiv.org/abs/2104.07876 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_Deep_Stable_Learning_for_Out-of-Distribution_Generalization_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_Deep_Stable_Learning_for_Out-of-Distribution_Generalization_CVPR_2021_paper.html | CVPR 2021 | null | null |
Continual Learning via Bit-Level Information Preserving | Yujun Shi, Li Yuan, Yunpeng Chen, Jiashi Feng | Continual learning tackles the setting of learning different tasks sequentially. Despite many previous solutions, most of them still suffer from significant forgetting or expensive memory costs. In this work, targeting these problems, we first study the continual learning process through the lens of information theory and observe that forgetting of a model stems from the loss of information gain on its parameters from previous tasks when learning a new task. From this viewpoint, we then propose a novel continual learning approach called Bit-Level Information Preserving (BLIP) that preserves the information gain on model parameters through updating the parameters at the bit level, which can be conveniently implemented with parameter quantization. More specifically, BLIP first trains a neural network with weight quantization on the new incoming task and then estimates the information gain on each parameter provided by the task data to determine the bits to be frozen to prevent forgetting. We conduct extensive experiments ranging from classification tasks to reinforcement learning tasks, and the results show that our method produces results better than or on par with the previous state of the art. Indeed, BLIP achieves close to zero forgetting while only requiring constant memory overhead throughout continual learning. | https://openaccess.thecvf.com/content/CVPR2021/papers/Shi_Continual_Learning_via_Bit-Level_Information_Preserving_CVPR_2021_paper.pdf | http://arxiv.org/abs/2105.04444 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Shi_Continual_Learning_via_Bit-Level_Information_Preserving_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Shi_Continual_Learning_via_Bit-Level_Information_Preserving_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Shi_Continual_Learning_via_CVPR_2021_supplemental.pdf | null |
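As a loose illustration of the bit-level idea described above, the sketch below quantizes parameter magnitudes to a fixed number of bits and freezes the most significant bits across tasks with simple bit masks. The bit widths, the omission of sign handling, and the externally supplied number of frozen bits are all assumptions; in BLIP that number would come from an information-gain estimate, and the training loop is not shown.

```python
import numpy as np

def quantize_magnitude(weights, num_bits=10, w_max=1.0):
    """Uniformly quantize |weights| (clipped to [0, w_max]) to integers with
    `num_bits` bits. Sign handling is omitted for brevity; this is purely an
    illustrative stand-in for quantized parameters."""
    levels = 2 ** num_bits - 1
    return np.clip(np.round(np.abs(weights) / w_max * levels), 0, levels).astype(np.int64)

def freeze_high_bits(mag_old, mag_new, frozen_bits, num_bits=10):
    """Keep the `frozen_bits` most significant bits from the previous task's
    parameters and let only the remaining low-order bits change on the new
    task; here the number of frozen bits is simply given."""
    low = num_bits - frozen_bits
    high_mask = ((1 << num_bits) - 1) ^ ((1 << low) - 1)
    low_mask = (1 << low) - 1
    return (mag_old & high_mask) | (mag_new & low_mask)

rng = np.random.default_rng(0)
m_prev = quantize_magnitude(rng.uniform(-1, 1, 5))
m_curr = quantize_magnitude(rng.uniform(-1, 1, 5))
print(m_prev, m_curr, freeze_high_bits(m_prev, m_curr, frozen_bits=6))
```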
Vectorization and Rasterization: Self-Supervised Learning for Sketch and Handwriting | Ayan Kumar Bhunia, Pinaki Nath Chowdhury, Yongxin Yang, Timothy M. Hospedales, Tao Xiang, Yi-Zhe Song | Self-supervised learning has gained prominence due to its efficacy at learning powerful representations from unlabelled data that achieve excellent performance on many challenging downstream tasks. However, supervision-free pre-text tasks are challenging to design and usually modality specific. Although there is a rich literature of self-supervised methods for either spatial (such as images) or temporal data (sound or text) modalities, a common pre-text task that benefits both modalities is largely missing. In this paper, we are interested in defining a self-supervised pre-text task for sketches and handwriting data. This data is uniquely characterised by its existence in dual modalities of rasterized images and vector coordinate sequences. We address and exploit this dual representation by proposing two novel cross-modal translation pre-text tasks for self-supervised feature learning: Vectorization and Rasterization. Vectorization learns to map image space to vector coordinates and rasterization maps vector coordinates to image space. We show that our learned encoder modules benefit both raster-based and vector-based downstream approaches to analysing hand-drawn data. Empirical evidence shows that our novel pre-text tasks surpass existing single and multi-modal self-supervision methods. | https://openaccess.thecvf.com/content/CVPR2021/papers/Bhunia_Vectorization_and_Rasterization_Self-Supervised_Learning_for_Sketch_and_Handwriting_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.13716 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Bhunia_Vectorization_and_Rasterization_Self-Supervised_Learning_for_Sketch_and_Handwriting_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Bhunia_Vectorization_and_Rasterization_Self-Supervised_Learning_for_Sketch_and_Handwriting_CVPR_2021_paper.html | CVPR 2021 | null | null |
Generating Diverse Structure for Image Inpainting With Hierarchical VQ-VAE | Jialun Peng, Dong Liu, Songcen Xu, Houqiang Li | Given an incomplete image without additional constraint, image inpainting naturally allows for multiple solutions as long as they appear plausible. Recently, multiple-solution inpainting methods have been proposed and shown the potential of generating diverse results. However, these methods have difficulty in ensuring the quality of each solution, e.g., they produce distorted structures and/or blurry textures. We propose a two-stage model for diverse inpainting, where the first stage generates multiple coarse results each of which has a different structure, and the second stage refines each coarse result separately by augmenting texture. The proposed model is inspired by the hierarchical vector quantized variational auto-encoder (VQ-VAE), whose hierarchical architecture disentangles structural and textural information. In addition, the vector quantization in VQ-VAE enables autoregressive modeling of the discrete distribution over the structural information. Sampling from the distribution can easily generate diverse and high-quality structures, making up the first stage of our model. In the second stage, we propose a structural attention module inside the texture generation network, where the module utilizes the structural information to capture distant correlations. We further reuse the VQ-VAE to calculate two feature losses, which help improve structure coherence and texture realism, respectively. Experimental results on CelebA-HQ, Places2, and ImageNet datasets show that our method not only enhances the diversity of the inpainting solutions but also improves the visual quality of the generated multiple images. Code and models are available at: https://github.com/USTC-JialunPeng/Diverse-Structure-Inpainting. | https://openaccess.thecvf.com/content/CVPR2021/papers/Peng_Generating_Diverse_Structure_for_Image_Inpainting_With_Hierarchical_VQ-VAE_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.10022 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Peng_Generating_Diverse_Structure_for_Image_Inpainting_With_Hierarchical_VQ-VAE_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Peng_Generating_Diverse_Structure_for_Image_Inpainting_With_Hierarchical_VQ-VAE_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Peng_Generating_Diverse_Structure_CVPR_2021_supplemental.pdf | null |
Refine Myself by Teaching Myself: Feature Refinement via Self-Knowledge Distillation | Mingi Ji, Seungjae Shin, Seunghyun Hwang, Gibeom Park, Il-Chul Moon | Knowledge distillation is a method of transferring the knowledge from a pretrained complex teacher model to a student model, so that a smaller network can replace a large teacher network at the deployment stage. To reduce the necessity of training a large teacher model, recent literature has introduced self-knowledge distillation, which progressively trains a student network to distill its own knowledge without a pretrained teacher network. While self-knowledge distillation is largely divided into data augmentation based approaches and auxiliary network based approaches, the data augmentation approach loses local information in the augmentation process, which hinders its applicability to diverse vision tasks such as semantic segmentation. Moreover, these knowledge distillation approaches do not utilize the refined feature maps that are prevalent in the object detection and semantic segmentation communities. This paper proposes a novel self-knowledge distillation method, Feature Refinement via Self-Knowledge Distillation (FRSKD), which utilizes an auxiliary self-teacher network to transfer refined knowledge to the classifier network. Our proposed method, FRSKD, can utilize both soft-label and feature-map distillation for self-knowledge distillation. Therefore, FRSKD can be applied to classification and semantic segmentation, which emphasize preserving local information. We demonstrate the effectiveness of FRSKD by enumerating its performance improvements on diverse tasks and benchmark datasets. The implemented code will be open-sourced. | https://openaccess.thecvf.com/content/CVPR2021/papers/Ji_Refine_Myself_by_Teaching_Myself_Feature_Refinement_via_Self-Knowledge_Distillation_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.08273 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Ji_Refine_Myself_by_Teaching_Myself_Feature_Refinement_via_Self-Knowledge_Distillation_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Ji_Refine_Myself_by_Teaching_Myself_Feature_Refinement_via_Self-Knowledge_Distillation_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Ji_Refine_Myself_by_CVPR_2021_supplemental.zip | null |
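The soft-label and feature-map distillation terms mentioned above can be written down generically as a temperature-scaled KL divergence and an L2 feature distance. The numpy sketch below shows these standard forms; in FRSKD the teacher signals come from the auxiliary self-teacher network, and the paper's exact loss weighting is not reproduced here.

```python
import numpy as np

def softmax(z, T=1.0):
    """Numerically stable softmax with temperature T."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def soft_label_kd_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between temperature-softened teacher and student
    predictions -- the standard soft-label distillation term."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    return float((T * T) * np.mean(kl))

def feature_kd_loss(student_feat, refined_feat):
    """L2 distance between the classifier's feature map and a refined feature
    map (illustrative form of the feature-map distillation term)."""
    return float(np.mean((student_feat - refined_feat) ** 2))

rng = np.random.default_rng(0)
print(soft_label_kd_loss(rng.normal(size=(8, 10)), rng.normal(size=(8, 10))),
      feature_kd_loss(rng.normal(size=(8, 64, 4, 4)), rng.normal(size=(8, 64, 4, 4))))
```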
Self-Supervised Visibility Learning for Novel View Synthesis | Yujiao Shi, Hongdong Li, Xin Yu | We address the problem of novel view synthesis (NVS) from a few sparse source view images. Conventional image-based rendering methods estimate scene geometry and synthesize novel views in two separate steps. However, erroneous geometry estimation will decrease NVS performance as view synthesis highly depends on the quality of estimated scene geometry. In this paper, we propose an end-to-end NVS framework to eliminate the error propagation issue. To be specific, we construct a volume under the target view and design a source-view visibility estimation (SVE) module to determine the visibility of the target-view voxels in each source view. Next, we aggregate the visibility of all source views to achieve a consensus volume. Each voxel in the consensus volume indicates a surface existence probability. Then, we present a soft ray-casting (SRC) mechanism to find the front-most surface in the target view (i.e. depth). Specifically, our SRC traverses the consensus volume along viewing rays and then estimates a depth probability distribution. We then warp and aggregate source view pixels to synthesize a novel view based on the estimated source-view visibility and target-view depth. Finally, our network is trained in an end-to-end self-supervised fashion, thus significantly alleviating error accumulation in view synthesis. Experimental results demonstrate that our method generates novel views of higher quality than the state of the art. | https://openaccess.thecvf.com/content/CVPR2021/papers/Shi_Self-Supervised_Visibility_Learning_for_Novel_View_Synthesis_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.15407 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Shi_Self-Supervised_Visibility_Learning_for_Novel_View_Synthesis_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Shi_Self-Supervised_Visibility_Learning_for_Novel_View_Synthesis_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Shi_Self-Supervised_Visibility_Learning_CVPR_2021_supplemental.pdf | null |
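The soft ray-casting step described above turns per-voxel surface-existence probabilities along a viewing ray into a depth distribution. The sketch below uses the standard front-to-back compositing rule to do so; the exact SRC formulation in the paper may differ, so treat this as an illustrative stand-in.

```python
import numpy as np

def soft_ray_casting(surface_probs, depths):
    """Given per-sample surface-existence probabilities along a viewing ray
    (ordered near to far) and their depths, form a depth probability
    distribution and its expectation, i.e. a soft estimate of the front-most
    surface along the ray.

    surface_probs, depths: (N,) arrays ordered from near to far."""
    # probability the ray has not yet hit a surface before each sample
    transmittance = np.concatenate([[1.0], np.cumprod(1.0 - surface_probs)[:-1]])
    weights = transmittance * surface_probs        # prob. first surface is at sample i
    weights = weights / (weights.sum() + 1e-8)     # normalized depth distribution
    expected_depth = float(np.sum(weights * depths))
    return weights, expected_depth

p = np.array([0.05, 0.1, 0.7, 0.9])
d = np.array([1.0, 1.5, 2.0, 2.5])
print(soft_ray_casting(p, d))
```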
End-to-End Human Pose and Mesh Reconstruction with Transformers | Kevin Lin, Lijuan Wang, Zicheng Liu | We present a new method, called MEsh TRansfOrmer (METRO), to reconstruct 3D human pose and mesh vertices from a single image. Our method uses a transformer encoder to jointly model vertex-vertex and vertex-joint interactions, and outputs 3D joint coordinates and mesh vertices simultaneously. Compared to existing techniques that regress pose and shape parameters, METRO does not rely on any parametric mesh models like SMPL, thus it can be easily extended to other objects such as hands. We further relax the mesh topology and allow the transformer self-attention mechanism to freely attend between any two vertices, making it possible to learn non-local relationships among mesh vertices and joints. With the proposed masked vertex modeling, our method is more robust and effective in handling challenging situations like partial occlusions. METRO generates new state-of-the-art results for human mesh reconstruction on the public Human3.6M and 3DPW datasets. Moreover, we demonstrate the generalizability of METRO to 3D hand reconstruction in the wild, outperforming existing state-of-the-art methods on FreiHAND dataset. | https://openaccess.thecvf.com/content/CVPR2021/papers/Lin_End-to-End_Human_Pose_and_Mesh_Reconstruction_with_Transformers_CVPR_2021_paper.pdf | http://arxiv.org/abs/2012.09760 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Lin_End-to-End_Human_Pose_and_Mesh_Reconstruction_with_Transformers_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Lin_End-to-End_Human_Pose_and_Mesh_Reconstruction_with_Transformers_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Lin_End-to-End_Human_Pose_CVPR_2021_supplemental.pdf | null |
CapsuleRRT: Relationships-Aware Regression Tracking via Capsules | Ding Ma, Xiangqian Wu | Regression tracking has gained more and more attention thanks to its easy-to-implement characteristics, while existing regression trackers rarely consider the relationships between the object parts and the complete object. This ultimately results in drift from the target object when some of its parts are missing. Recently, Capsule Network (CapsNet) has shown promising results for image classification, benefiting from its part-object relationship mechanism, while CapsNet is known for its high computational demand even when carrying out simple tasks. Therefore, a primitive adaptation of CapsNet to regression tracking does not make sense, since it would seriously affect the speed of a tracker. To solve these problems, we first explore the spatial-temporal relationships endowed by CapsNet for regression tracking. The entire regression framework, dubbed CapsuleRRT, consists of three parts. One is S-Caps, which captures the spatial relationships between the parts and the object. Meanwhile, a T-Caps module is designed to exploit the temporal relationships within the target. The response of the target is obtained by STCaps Learning. Further, a prior-guided capsule routing algorithm is proposed to generate more accurate capsule assignments for subsequent frames. Apart from this, the heavy computation burden in CapsNet is addressed with a knowledge distillation pose matrix compression strategy that exploits a tighter and more discriminative representation with few samples. Extensive experimental results show that CapsuleRRT performs favorably against state-of-the-art methods in terms of accuracy and speed. | https://openaccess.thecvf.com/content/CVPR2021/papers/Ma_CapsuleRRT_Relationships-Aware_Regression_Tracking_via_Capsules_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Ma_CapsuleRRT_Relationships-Aware_Regression_Tracking_via_Capsules_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Ma_CapsuleRRT_Relationships-Aware_Regression_Tracking_via_Capsules_CVPR_2021_paper.html | CVPR 2021 | null | null |
Test-Time Fast Adaptation for Dynamic Scene Deblurring via Meta-Auxiliary Learning | Zhixiang Chi, Yang Wang, Yuanhao Yu, Jin Tang | In this paper, we tackle the problem of dynamic scene deblurring. Most existing deep end-to-end learning approaches adopt the same generic model for all unseen test images. These solutions are sub-optimal, as they fail to utilize the internal information within a specific image. On the other hand, a self-supervised approach, SelfDeblur, enables internal training on a test image from scratch, but it does not fully take advantage of large external datasets. In this work, we propose a novel self-supervised meta-auxiliary learning approach to improve the performance of deblurring by integrating both external and internal learning. Concretely, we build a self-supervised auxiliary reconstruction task which shares a portion of the network with the primary deblurring task. The two tasks are jointly trained on an external dataset. Furthermore, we propose a meta-auxiliary training scheme to further optimize the pre-trained model as a base learner that is suitable for fast adaptation at test time. During training, the performance of both tasks is coupled. Therefore, we are able to exploit the internal information at test time via the auxiliary task to enhance the performance of deblurring. Extensive experimental results across evaluation datasets demonstrate the effectiveness of test-time adaptation of the proposed method. | https://openaccess.thecvf.com/content/CVPR2021/papers/Chi_Test-Time_Fast_Adaptation_for_Dynamic_Scene_Deblurring_via_Meta-Auxiliary_Learning_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Chi_Test-Time_Fast_Adaptation_for_Dynamic_Scene_Deblurring_via_Meta-Auxiliary_Learning_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Chi_Test-Time_Fast_Adaptation_for_Dynamic_Scene_Deblurring_via_Meta-Auxiliary_Learning_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Chi_Test-Time_Fast_Adaptation_CVPR_2021_supplemental.pdf | null |
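As a rough illustration of the test-time adaptation idea in this abstract (not the paper's meta-auxiliary training procedure itself), the sketch below fine-tunes a copy of a pre-trained two-headed model on a self-supervised reconstruction loss computed on the test image before running the primary deblurring head. The toy network, the `task` keyword, and the L1 auxiliary loss are assumptions made for the example.

```python
import copy
import torch
import torch.nn as nn

class TwoHeadNet(nn.Module):
    """Toy model with a shared trunk, a primary (deblur) head and an auxiliary
    (reconstruction) head, standing in for the shared architecture described above."""
    def __init__(self, ch=16):
        super().__init__()
        self.trunk = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
                                   nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.primary_head = nn.Conv2d(ch, 3, 3, padding=1)
        self.aux_head = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, x, task="primary"):
        feat = self.trunk(x)
        return self.primary_head(feat) if task == "primary" else self.aux_head(feat)

def test_time_adapt(model, blurry, steps=5, lr=1e-5):
    """Fine-tune a copy of the pre-trained model on the self-supervised auxiliary
    task (reconstructing the blurry input), then run the primary task."""
    adapted = copy.deepcopy(model)          # keep the pre-trained weights intact
    opt = torch.optim.Adam(adapted.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.l1_loss(adapted(blurry, task="aux"), blurry)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return adapted(blurry, task="primary")

out = test_time_adapt(TwoHeadNet(), torch.rand(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 3, 64, 64])
```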
Anycost GANs for Interactive Image Synthesis and Editing | Ji Lin, Richard Zhang, Frieder Ganz, Song Han, Jun-Yan Zhu | Generative adversarial networks (GANs) have enabled photorealistic image synthesis and editing. However, due to the high computational cost of large-scale generators (e.g., StyleGAN2), it usually takes seconds to see the results of a single edit on edge devices, prohibiting an interactive user experience. In this paper, inspired by quick preview features in modern rendering software, we propose Anycost GAN for interactive natural image editing. We train the Anycost GAN to support elastic resolutions and channels for faster image generation at versatile speeds. Running subsets of the full generator produces outputs that are perceptually similar to the full generator, making them a good proxy for a quick preview. By using sampling-based multi-resolution training, adaptive-channel training, and a generator-conditioned discriminator, the anycost generator can be evaluated at various configurations while achieving better image quality compared to separately trained models. Furthermore, we develop new encoder training and latent code optimization techniques to encourage consistency between the different sub-generators during image projection. Anycost GAN can be executed at various cost budgets (up to 10x computation reduction) and adapt to a wide range of hardware and latency requirements. When deployed on desktop CPUs and edge devices, our model can provide perceptually similar previews at 6-12x speedup, enabling interactive image editing. The code and demo are publicly available. | https://openaccess.thecvf.com/content/CVPR2021/papers/Lin_Anycost_GANs_for_Interactive_Image_Synthesis_and_Editing_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.03243 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Lin_Anycost_GANs_for_Interactive_Image_Synthesis_and_Editing_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Lin_Anycost_GANs_for_Interactive_Image_Synthesis_and_Editing_CVPR_2021_paper.html | CVPR 2021 | null | null |
TrafficSim: Learning To Simulate Realistic Multi-Agent Behaviors | Simon Suo, Sebastian Regalado, Sergio Casas, Raquel Urtasun | Simulation has the potential to massively scale evaluation of self-driving systems, enabling rapid development as well as safe deployment. Bridging the gap between simulation and the real world requires realistic multi-agent behaviors. Existing simulation environments rely on heuristic-based models that directly encode traffic rules, which cannot capture irregular maneuvers (e.g., nudging, U-turns) and complex interactions (e.g., yielding, merging). In contrast, we leverage real-world data to learn directly from human demonstration, and thus capture more naturalistic driving behaviors. To this end, we propose TrafficSim, a multi-agent behavior model for realistic traffic simulation. In particular, we parameterize the policy with an implicit latent variable model that generates socially-consistent plans for all actors in the scene jointly. To learn a robust policy amenable to long-horizon simulation, we unroll the policy in training and optimize through the fully differentiable simulation across time. Our learning objective incorporates both human demonstrations and common sense. We show TrafficSim generates significantly more realistic traffic scenarios as compared to a diverse set of baselines. Notably, we can exploit trajectories generated by TrafficSim as effective data augmentation for training a better motion planner. | https://openaccess.thecvf.com/content/CVPR2021/papers/Suo_TrafficSim_Learning_To_Simulate_Realistic_Multi-Agent_Behaviors_CVPR_2021_paper.pdf | http://arxiv.org/abs/2101.06557 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Suo_TrafficSim_Learning_To_Simulate_Realistic_Multi-Agent_Behaviors_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Suo_TrafficSim_Learning_To_Simulate_Realistic_Multi-Agent_Behaviors_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Suo_TrafficSim_Learning_To_CVPR_2021_supplemental.zip | null |
Monocular 3D Multi-Person Pose Estimation by Integrating Top-Down and Bottom-Up Networks | Yu Cheng, Bo Wang, Bo Yang, Robby T. Tan | In monocular video 3D multi-person pose estimation, inter-person occlusion and close interactions can cause human detection to be erroneous and human-joints grouping to be unreliable. Existing top-down methods rely on human detection and thus suffer from these problems. Existing bottom-up methods do not use human detection, but they process all persons at once at the same scale, causing them to be sensitive to multiple-persons scale variations. To address these challenges, we propose the integration of top-down and bottom-up approaches to exploit their strengths. Our top-down network estimates human joints from all persons instead of one in an image patch, making it robust to possible erroneous bounding boxes. Our bottom-up network incorporates human-detection based normalized heatmaps, allowing the network to be more robust in handling scale variations. Finally, the estimated 3D poses from the top-down and bottom-up networks are fed into our integration network for final 3D poses. Besides the integration of top-down and bottom-up networks, unlike existing pose discriminators that are designed solely for single person, and consequently cannot assess natural inter-person interactions, we propose a two-person pose discriminator that enforces natural two-person interactions. Lastly, we also apply a semi-supervised method to overcome the 3D ground-truth data scarcity. Our quantitative and qualitative evaluations show the effectiveness of our method compared to the state-of-the-art baselines. | https://openaccess.thecvf.com/content/CVPR2021/papers/Cheng_Monocular_3D_Multi-Person_Pose_Estimation_by_Integrating_Top-Down_and_Bottom-Up_CVPR_2021_paper.pdf | http://arxiv.org/abs/2104.01797 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Cheng_Monocular_3D_Multi-Person_Pose_Estimation_by_Integrating_Top-Down_and_Bottom-Up_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Cheng_Monocular_3D_Multi-Person_Pose_Estimation_by_Integrating_Top-Down_and_Bottom-Up_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Cheng_Monocular_3D_Multi-Person_CVPR_2021_supplemental.pdf | null |
Space-Time Distillation for Video Super-Resolution | Zeyu Xiao, Xueyang Fu, Jie Huang, Zhen Cheng, Zhiwei Xiong | Compact video super-resolution (VSR) networks can be easily deployed on resource-limited devices, e.g., smartphones and wearable devices, but have considerable performance gaps compared with complicated VSR networks that require a large amount of computing resources. In this paper, we aim to improve the performance of compact VSR networks without changing their original architectures, through a knowledge distillation approach that transfers knowledge from a complicated VSR network to a compact one. Specifically, we propose a space-time distillation (STD) scheme to exploit both spatial and temporal knowledge in the VSR task. For space distillation, we extract spatial attention maps that hint at the high-frequency video content from both networks, which are further used for transferring spatial modeling ability. For time distillation, we narrow the performance gap between compact models and complicated models by distilling the feature similarity of the temporal memory cells, which is encoded from the sequence of feature maps generated in the training clips using ConvLSTM. During the training process, STD can be easily incorporated into any network without changing the original network architecture. Experimental results on standard benchmarks demonstrate that, in resource-constrained situations, the proposed method notably improves the performance of existing VSR networks without increasing the inference time. | https://openaccess.thecvf.com/content/CVPR2021/papers/Xiao_Space-Time_Distillation_for_Video_Super-Resolution_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Xiao_Space-Time_Distillation_for_Video_Super-Resolution_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Xiao_Space-Time_Distillation_for_Video_Super-Resolution_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Xiao_Space-Time_Distillation_for_CVPR_2021_supplemental.pdf | null |
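The space-distillation component can be pictured with a standard attention-transfer style loss: channel-pooled spatial attention maps are extracted from teacher and student features and matched with an L2 loss. The sketch below follows that generic recipe and is not claimed to match the paper's exact formulation; shapes and the pooling exponent are illustrative.

```python
import torch
import torch.nn.functional as F

def spatial_attention(feat, p=2, eps=1e-6):
    """Collapse a feature map (B, C, H, W) into a normalized spatial attention
    map (B, H*W) by pooling the p-th power of activations over channels."""
    att = feat.abs().pow(p).mean(dim=1).flatten(1)
    return att / (att.norm(dim=1, keepdim=True) + eps)

def space_distillation_loss(student_feat, teacher_feat):
    """L2 distance between student and teacher spatial attention maps."""
    return F.mse_loss(spatial_attention(student_feat),
                      spatial_attention(teacher_feat))

s = torch.rand(2, 32, 16, 16)
t = torch.rand(2, 64, 16, 16)   # channel counts may differ; attention maps still align
print(space_distillation_loss(s, t).item())
```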
Robust Audio-Visual Instance Discrimination | Pedro Morgado, Ishan Misra, Nuno Vasconcelos | We present a self-supervised learning method to learn audio and video representations. Prior work uses the natural correspondence between audio and video to define a standard cross-modal instance discrimination task, where a model is trained to match representations from the two modalities. However, the standard approach introduces two sources of training noise. First, audio-visual correspondences often produce faulty positives since the audio and video signals can be uninformative of each other. To limit the detrimental impact of faulty positives, we optimize a weighted contrastive learning loss, which down-weighs their contribution to the overall loss. Second, since self-supervised contrastive learning relies on random sampling of negative instances, instances that are semantically similar to the base instance can be used as faulty negatives. To alleviate the impact of faulty negatives, we propose to optimize an instance discrimination loss with a soft target distribution that estimates relationships between instances. We validate our contributions through extensive experiments on action recognition tasks and show that they address the problems of audio-visual instance discrimination and improve transfer learning performance. | https://openaccess.thecvf.com/content/CVPR2021/papers/Morgado_Robust_Audio-Visual_Instance_Discrimination_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.15916 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Morgado_Robust_Audio-Visual_Instance_Discrimination_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Morgado_Robust_Audio-Visual_Instance_Discrimination_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Morgado_Robust_Audio-Visual_Instance_CVPR_2021_supplemental.pdf | null |
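A minimal way to picture the weighted contrastive objective in this abstract is an InfoNCE-style cross-modal loss in which each instance's positive term is scaled by a weight that can be lowered for suspected faulty positives. The sketch below implements only this down-weighting idea; the soft-target treatment of faulty negatives and the way the weights are estimated are omitted, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def weighted_xmodal_nce(audio_emb, video_emb, pos_weights, temperature=0.07):
    """Cross-modal instance discrimination with per-instance weights that can
    down-weight suspected faulty positives.

    audio_emb, video_emb: (N, D) L2-normalized embeddings of the same N clips.
    pos_weights: (N,) values in [0, 1]; small values reduce a pair's contribution."""
    logits = audio_emb @ video_emb.t() / temperature          # (N, N) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    return (pos_weights * per_sample).sum() / pos_weights.sum().clamp(min=1e-8)

a = F.normalize(torch.randn(8, 128), dim=1)
v = F.normalize(torch.randn(8, 128), dim=1)
w = torch.rand(8)                      # hypothetical reliability weights
print(weighted_xmodal_nce(a, v, w).item())
```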
High-Fidelity and Arbitrary Face Editing | Yue Gao, Fangyun Wei, Jianmin Bao, Shuyang Gu, Dong Chen, Fang Wen, Zhouhui Lian | Cycle consistency is widely used for face editing. However, we observe that the generator tends to find a tricky way to hide information from the original image to satisfy the constraint of cycle consistency, making it impossible to maintain the rich details (e.g., wrinkles and moles) of nonediting areas. In this work, we propose a simple yet effective method named HifaFace to address the above-mentioned problem from two perspectives. First, we relieve the pressure of the generator to synthesize rich details by directly feeding the high-frequency information of the input image into the end of the generator. Second, we adopt an additional discriminator to encourage the generator to synthesize rich details. Specifically, we apply wavelet transformation to transform the image into multi-frequency domains, among which the high-frequency parts can be used to recover the rich details. We also notice that a fine-grained and wider-range control for the attribute is of great importance for face editing. To achieve this goal, we propose a novel attribute regression loss. Powered by the proposed framework, we achieve high-fidelity and arbitrary face editing, outperforming other state-of-the-art approaches. | https://openaccess.thecvf.com/content/CVPR2021/papers/Gao_High-Fidelity_and_Arbitrary_Face_Editing_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.15814 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Gao_High-Fidelity_and_Arbitrary_Face_Editing_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Gao_High-Fidelity_and_Arbitrary_Face_Editing_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Gao_High-Fidelity_and_Arbitrary_CVPR_2021_supplemental.pdf | null |
Explicit Knowledge Incorporation for Visual Reasoning | Yifeng Zhang, Ming Jiang, Qi Zhao | Existing explainable and explicit visual reasoning methods only perform reasoning based on visual evidence but do not take into account knowledge beyond what is in the visual scene. To addresses the knowledge gap between visual reasoning methods and the semantic complexity of real-world images, we present the first explicit visual reasoning method that incorporates external knowledge and models high-order relational attention for improved generalizability and explainability. Specifically, we propose a knowledge incorporation network that explicitly creates and includes new graph nodes for entities and predicates from external knowledge bases to enrich the semantics of the scene graph used in explicit reasoning. We then create a novel Graph-Relate module to perform high-order relational attention on the enriched scene graph. By explicitly introducing structured external knowledge and high-order relational attention, our method demonstrates significant generalizability and explainability over the state-of-the-art visual reasoning approaches on the GQA and VQAv2 datasets. | https://openaccess.thecvf.com/content/CVPR2021/papers/Zhang_Explicit_Knowledge_Incorporation_for_Visual_Reasoning_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_Explicit_Knowledge_Incorporation_for_Visual_Reasoning_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_Explicit_Knowledge_Incorporation_for_Visual_Reasoning_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Zhang_Explicit_Knowledge_Incorporation_CVPR_2021_supplemental.pdf | null |
Progressive Unsupervised Learning for Visual Object Tracking | Qiangqiang Wu, Jia Wan, Antoni B. Chan | In this paper, we propose a progressive unsupervised learning (PUL) framework, which entirely removes the need for annotated training videos in visual tracking. Specifically, we first learn a background discrimination (BD) model that effectively distinguishes an object from background in a contrastive learning way. We then employ the BD model to progressively mine temporal corresponding patches (i.e., patches connected by a track) in sequential frames. As the BD model is imperfect and thus the mined patch pairs are noisy, we propose a noise-robust loss function to more effectively learn temporal correspondences from this noisy data. We use the proposed noise robust loss to train backbone networks of Siamese trackers. Without online fine-tuning or adaptation, our unsupervised real-time Siamese trackers can outperform state-of-the-art unsupervised deep trackers and achieve competitive results to the supervised baselines. | https://openaccess.thecvf.com/content/CVPR2021/papers/Wu_Progressive_Unsupervised_Learning_for_Visual_Object_Tracking_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Wu_Progressive_Unsupervised_Learning_for_Visual_Object_Tracking_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Wu_Progressive_Unsupervised_Learning_for_Visual_Object_Tracking_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Wu_Progressive_Unsupervised_Learning_CVPR_2021_supplemental.pdf | null |
IoU Attack: Towards Temporally Coherent Black-Box Adversarial Attack for Visual Object Tracking | Shuai Jia, Yibing Song, Chao Ma, Xiaokang Yang | Adversarial attack arises due to the vulnerability of deep neural networks to perceive input samples injected with imperceptible perturbations. Recently, adversarial attack has been applied to visual object tracking to evaluate the robustness of deep trackers. Assuming that the model structures of deep trackers are known, a variety of white-box attack approaches to visual tracking have demonstrated promising results. However, the model knowledge about deep trackers is usually unavailable in real applications. In this paper, we propose a decision-based black-box attack method for visual object tracking. In contrast to existing black-box adversarial attack methods that deal with static images for image classification, we propose IoU attack that sequentially generates perturbations based on the predicted IoU scores from both current and historical frames. By decreasing the IoU scores, the proposed attack method degrades the accuracy of temporal coherent bounding boxes (i.e., object motions) accordingly. In addition, we transfer the learned perturbations to the next few frames to initialize temporal motion attacks. We validate the proposed IoU attack on state-of-the-art deep trackers (i.e., detection based, correlation filter based, and long-term trackers). Extensive experiments on the benchmark datasets indicate the effectiveness of the proposed IoU attack method. The source code is available at https://github.com/VISION-SJTU/IoUattack. | https://openaccess.thecvf.com/content/CVPR2021/papers/Jia_IoU_Attack_Towards_Temporally_Coherent_Black-Box_Adversarial_Attack_for_Visual_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.14938 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Jia_IoU_Attack_Towards_Temporally_Coherent_Black-Box_Adversarial_Attack_for_Visual_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Jia_IoU_Attack_Towards_Temporally_Coherent_Black-Box_Adversarial_Attack_for_Visual_CVPR_2021_paper.html | CVPR 2021 | null | null |
Deep Graph Matching Under Quadratic Constraint | Quankai Gao, Fudong Wang, Nan Xue, Jin-Gang Yu, Gui-Song Xia | Recently, deep learning based methods have demonstrated promising results on the graph matching problem, by relying on the descriptive capability of deep features extracted on graph nodes. However, one main limitation with existing deep graph matching (DGM) methods lies in their ignorance of explicit constraint of graph structures, which may lead the model to be trapped into local minimum in training. In this paper, we propose to explicitly formulate pairwise graph structures as a quadratic constraint incorporated into the DGM framework. The quadratic constraint minimizes the pairwise structural discrepancy between graphs, which can reduce the ambiguities brought by only using the extracted CNN features. Moreover, we present a differentiable implementation to the quadratic constrained-optimization such that it is compatible with the unconstrained deep learning optimizer. To give more precise and proper supervision, a well-designed false matching loss against class imbalance is proposed, which can better penalize the false negatives and false positives with less overfitting. Exhaustive experiments demonstrate that our method achieves competitive performance on real-world datasets. The code is available at: https://github.com/Zerg-Overmind/QC-DGM. | https://openaccess.thecvf.com/content/CVPR2021/papers/Gao_Deep_Graph_Matching_Under_Quadratic_Constraint_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.06643 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Gao_Deep_Graph_Matching_Under_Quadratic_Constraint_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Gao_Deep_Graph_Matching_Under_Quadratic_Constraint_CVPR_2021_paper.html | CVPR 2021 | null | null |
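The quadratic constraint can be read as penalizing the pairwise structural discrepancy between the two graphs under the predicted correspondence. A common way to write such a term, shown below purely as an illustration of the idea (the paper's exact formulation and relaxation may differ), is the Frobenius discrepancy between one adjacency matrix and the permuted other.

```python
import torch

def quadratic_discrepancy(A1, A2, X):
    """Pairwise structural discrepancy between two graphs under a (relaxed)
    correspondence matrix X, measured as || A1 - X A2 X^T ||_F^2.

    A1: (n1, n1) and A2: (n2, n2) adjacency (or affinity) matrices,
    X:  (n1, n2) soft assignment matrix (e.g., output of a matching layer)."""
    return torch.norm(A1 - X @ A2 @ X.t(), p="fro") ** 2

A1 = torch.rand(5, 5); A1 = (A1 + A1.t()) / 2   # toy symmetric affinities
A2 = torch.rand(6, 6); A2 = (A2 + A2.t()) / 2
X = torch.softmax(torch.rand(5, 6), dim=1)      # toy soft matching
print(quadratic_discrepancy(A1, A2, X).item())
```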
Multi-Label Activity Recognition Using Activity-Specific Features and Activity Correlations | Yanyi Zhang, Xinyu Li, Ivan Marsic | Multi-label activity recognition is designed for recognizing multiple activities that are performed simultaneously or sequentially in each video. Most recent activity recognition networks focus on single-activities, that assume only one activity in each video. These networks extract shared features for all the activities, which are not designed for multi-label activities. We introduce an approach to multi-label activity recognition that extracts independent feature descriptors for each activity and learns activity correlations. This structure can be trained end-to-end and plugged into any existing network structures for video classification. Our method outperformed state-of-the-art approaches on four multi-label activity recognition datasets. To better understand the activity-specific features that the system generated, we visualized these activity-specific features in the Charades dataset. The code will be released later. | https://openaccess.thecvf.com/content/CVPR2021/papers/Zhang_Multi-Label_Activity_Recognition_Using_Activity-Specific_Features_and_Activity_Correlations_CVPR_2021_paper.pdf | http://arxiv.org/abs/2009.07420 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_Multi-Label_Activity_Recognition_Using_Activity-Specific_Features_and_Activity_Correlations_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_Multi-Label_Activity_Recognition_Using_Activity-Specific_Features_and_Activity_Correlations_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Zhang_Multi-Label_Activity_Recognition_CVPR_2021_supplemental.pdf | null |
Learning High Fidelity Depths of Dressed Humans by Watching Social Media Dance Videos | Yasamin Jafarian, Hyun Soo Park | A key challenge of learning the geometry of dressed humans lies in the limited availability of the ground truth data (e.g., 3D scanned models), which results in the performance degradation of 3D human reconstruction when applying to real world imagery. We address this challenge by leveraging a new data resource: a number of social media dance videos that span diverse appearance, clothing styles, performances, and identities. Each video depicts dynamic movements of the body and clothes of a single person while lacking the 3D ground truth geometry. To utilize these videos, we present a new method to use the local transformation that warps the predicted local geometry of the person from an image to that of the other image at a different time instant. With the transformation, the predicted geometry can be self-supervised by the warped geometry from the other image. In addition, we jointly learn the depth along with the surface normals, which are highly responsive to local texture, wrinkle, and shade by maximizing their geometric consistency. Our method is end-to-end trainable, resulting in high fidelity depth estimation that predicts fine geometry faithful to the input real image. We demonstrate that our method outperforms the state-of-the-art human depth estimation and human shape recovery approaches on both real and rendered images. | https://openaccess.thecvf.com/content/CVPR2021/papers/Jafarian_Learning_High_Fidelity_Depths_of_Dressed_Humans_by_Watching_Social_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.03319 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Jafarian_Learning_High_Fidelity_Depths_of_Dressed_Humans_by_Watching_Social_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Jafarian_Learning_High_Fidelity_Depths_of_Dressed_Humans_by_Watching_Social_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Jafarian_Learning_High_Fidelity_CVPR_2021_supplemental.zip | null |
Unpaired Image-to-Image Translation via Latent Energy Transport | Yang Zhao, Changyou Chen | Image-to-image translation aims to preserve source contents while translating to discriminative target styles between two visual domains. Most works apply adversarial learning in the ambient image space, which could be computationally expensive and challenging to train. In this paper, we propose to deploy an energy-based model (EBM) in the latent space of a pretrained autoencoder for this task. The pretrained autoencoder serves as both a latent code extractor and an image reconstruction worker. Our model, LETIT, is based on the assumption that two domains share the same latent space, where latent representation is implicitly decomposed as a content code and a domain-specific style code. Instead of explicitly extracting the two codes and applying adaptive instance normalization to combine them, our latent EBM can implicitly learn to transport the source style code to the target style code while preserving the content code, an advantage over existing image translation methods. This simplified solution is also more efficient in the one-sided unpaired image translation setting. Qualitative and quantitative comparisons demonstrate superior translation quality and faithfulness for content preservation. Our model is the first to be applicable to 1024x1024-resolution unpaired image translation to the best of our knowledge. Code is available at https://github.com/YangNaruto/latent-energy-transport. | https://openaccess.thecvf.com/content/CVPR2021/papers/Zhao_Unpaired_Image-to-Image_Translation_via_Latent_Energy_Transport_CVPR_2021_paper.pdf | http://arxiv.org/abs/2012.00649 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Zhao_Unpaired_Image-to-Image_Translation_via_Latent_Energy_Transport_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Zhao_Unpaired_Image-to-Image_Translation_via_Latent_Energy_Transport_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Zhao_Unpaired_Image-to-Image_Translation_CVPR_2021_supplemental.pdf | null |
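One way to picture "transporting" a source-domain latent code toward the target domain with a latent EBM is short-run Langevin dynamics on the energy function, after which the pretrained decoder reconstructs the translated image. The sketch below shows only the Langevin update on a toy quadratic energy; the step sizes, the energy network, and the decoder are all assumptions made for illustration and are not the paper's settings.

```python
import torch

def langevin_transport(energy_fn, z_source, steps=60, step_size=0.01, noise_std=0.005):
    """Move latent codes toward low-energy (target-domain) regions with
    Langevin dynamics: z <- z - 0.5 * eta * grad E(z) + noise."""
    z = z_source.clone().detach().requires_grad_(True)
    for _ in range(steps):
        energy = energy_fn(z).sum()
        grad, = torch.autograd.grad(energy, z)
        z = (z - 0.5 * step_size * grad
             + noise_std * torch.randn_like(z)).detach().requires_grad_(True)
    return z.detach()

# Toy energy pulling codes toward the origin, standing in for a learned EBM.
z0 = torch.randn(4, 16) * 3
z1 = langevin_transport(lambda z: 0.5 * (z ** 2).sum(dim=1), z0)
print(z0.norm(dim=1), z1.norm(dim=1))   # norms shrink toward the low-energy region
```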
VLN BERT: A Recurrent Vision-and-Language BERT for Navigation | Yicong Hong, Qi Wu, Yuankai Qi, Cristian Rodriguez-Opazo, Stephen Gould | Accuracy of many visiolinguistic tasks has benefited significantly from the application of vision-and-language (V&L) BERT. However, its application for the task of vision-and-language navigation (VLN) remains limited. One reason for this is the difficulty adapting the BERT architecture to the partially observable Markov decision process present in VLN, requiring history-dependent attention and decision making. In this paper we propose a recurrent BERT model that is time-aware for use in VLN. Specifically, we equip the BERT model with a recurrent function that maintains cross-modal state information for the agent. Through extensive experiments on R2R and REVERIE we demonstrate that our model can replace more complex encoder-decoder models to achieve state-of-the-art results. Moreover, our approach can be generalised to other transformer-based architectures, supports pre-training, and is capable of solving navigation and referring expression tasks simultaneously. | https://openaccess.thecvf.com/content/CVPR2021/papers/Hong_VLN_BERT_A_Recurrent_Vision-and-Language_BERT_for_Navigation_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Hong_VLN_BERT_A_Recurrent_Vision-and-Language_BERT_for_Navigation_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Hong_VLN_BERT_A_Recurrent_Vision-and-Language_BERT_for_Navigation_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Hong_VLN_BERT_A_CVPR_2021_supplemental.pdf | null |
Content-Aware GAN Compression | Yuchen Liu, Zhixin Shu, Yijun Li, Zhe Lin, Federico Perazzi, Sun-Yuan Kung | Generative adversarial networks (GANs), e.g., StyleGAN2, play a vital role in various image generation and synthesis tasks, yet their notoriously high computational cost hinders their efficient deployment on edge devices. Directly applying generic compression approaches yields poor results on GANs, which motivates a number of recent GAN compression works. While prior works mainly accelerate conditional GANs, e.g., pix2pix and CycleGAN, compressing state-of-the-art unconditional GANs has rarely been explored and is more challenging. In this paper, we propose novel approaches for unconditional GAN compression. We first introduce effective channel pruning and knowledge distillation schemes specialized for unconditional GANs. We then propose a novel content-aware method to guide the processes of both pruning and distillation. With content-awareness, we can effectively prune channels that are unimportant to the contents of interest, e.g., human faces, and focus our distillation on these regions, which significantly enhances the distillation quality. On StyleGAN2 and SN-GAN, we achieve a substantial improvement over the state-of-the-art compression method. Notably, we reduce the FLOPs of StyleGAN2 by 11x with visually negligible image quality loss compared to the full-size model. More interestingly, when applied to various image manipulation tasks, our compressed model forms a smoother and better disentangled latent manifold, making it more effective for image editing. | https://openaccess.thecvf.com/content/CVPR2021/papers/Liu_Content-Aware_GAN_Compression_CVPR_2021_paper.pdf | http://arxiv.org/abs/2104.02244 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Liu_Content-Aware_GAN_Compression_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Liu_Content-Aware_GAN_Compression_CVPR_2021_paper.html | CVPR 2021 | null | null |
FBI-Denoiser: Fast Blind Image Denoiser for Poisson-Gaussian Noise | Jaeseok Byun, Sungmin Cha, Taesup Moon | We consider the challenging blind denoising problem for Poisson-Gaussian noise, in which no additional information about clean images or noise level parameters is available. Particularly, when only "single" noisy images are available for training a denoiser, the denoising performance of existing methods was not satisfactory. Recently, the blind pixelwise affine image denoiser (BP-AIDE) was proposed and significantly improved the performance in the above setting, to the extent that it is competitive with denoisers which utilized additional information. However, BP-AIDE seriously suffered from slow inference time due to the inefficiency of noise level estimation procedure and that of the blind-spot network (BSN) architecture it used. To that end, we propose Fast Blind Image Denoiser (FBI-Denoiser) for Poisson-Gaussian noise, which consists of two neural network models; 1) PGE-Net that estimates Poisson-Gaussian noise parameters 2000 times faster than the conventional methods and 2) FBI-Net that realizes a much more efficient BSN for pixelwise affine denoiser in terms of the number of parameters and inference speed. Consequently, we show that our FBI-Denoiser blindly trained solely based on single noisy images can achieve the state-of-the-art performance on several real-world noisy image benchmark datasets with much faster inference time (X 10), compared to BP-AIDE. | https://openaccess.thecvf.com/content/CVPR2021/papers/Byun_FBI-Denoiser_Fast_Blind_Image_Denoiser_for_Poisson-Gaussian_Noise_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Byun_FBI-Denoiser_Fast_Blind_Image_Denoiser_for_Poisson-Gaussian_Noise_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Byun_FBI-Denoiser_Fast_Blind_Image_Denoiser_for_Poisson-Gaussian_Noise_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Byun_FBI-Denoiser_Fast_Blind_CVPR_2021_supplemental.pdf | null |
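For context, the Poisson-Gaussian model referred to above treats the observation as signal-dependent shot noise plus signal-independent read noise, so the variance is approximately a*x + b for parameters (a, b) that an estimator such as PGE-Net is described as predicting. The simulator below is a standard, generic version of that noise model with illustrative parameter values, not the paper's code.

```python
import torch

def add_poisson_gaussian_noise(clean, alpha=0.01, sigma=0.02):
    """Simulate Poisson-Gaussian noise: y = alpha * Poisson(x / alpha) + N(0, sigma^2),
    so the noise variance is approximately alpha * x + sigma^2 (signal-dependent).
    alpha and sigma play the role of the (a, b) noise parameters; values are toy."""
    clean = clean.clamp(0.0, 1.0)
    shot = alpha * torch.poisson(clean / alpha)     # signal-dependent component
    read = sigma * torch.randn_like(clean)          # signal-independent component
    return shot + read

x = torch.rand(1, 1, 32, 32)      # toy clean image in [0, 1]
y = add_poisson_gaussian_noise(x)
print((y - x).var().item())       # roughly alpha * mean(x) + sigma^2
```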
Hijack-GAN: Unintended-Use of Pretrained, Black-Box GANs | Hui-Po Wang, Ning Yu, Mario Fritz | While Generative Adversarial Networks (GANs) show increasing performance and the level of realism is becoming indistinguishable from natural images, this also comes with high demands on data and computation. We show that state-of-the-art GAN models -- such as they are being publicly released by researchers and industry -- can be used for a range of applications beyond unconditional image generation. We achieve this by an iterative scheme that also allows gaining control over the image generation process despite the highly non-linear latent spaces of the latest GAN models. We demonstrate that this opens up the possibility to re-use state-of-the-art, difficult to train, pre-trained GANs with a high level of control even if only black-box access is granted. Our work also raises concerns and awareness that the use cases of a published GAN model may well reach beyond the creators' intention, which needs to be taken into account before a full public release. Code is available at https://github.com/a514514772/hijackgan. | https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_Hijack-GAN_Unintended-Use_of_Pretrained_Black-Box_GANs_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Wang_Hijack-GAN_Unintended-Use_of_Pretrained_Black-Box_GANs_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Wang_Hijack-GAN_Unintended-Use_of_Pretrained_Black-Box_GANs_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Wang_Hijack-GAN_Unintended-Use_of_CVPR_2021_supplemental.pdf | null |
LiDAR R-CNN: An Efficient and Universal 3D Object Detector | Zhichao Li, Feng Wang, Naiyan Wang | LiDAR-based 3D detection in point clouds is essential in the perception system of autonomous driving. In this paper, we present LiDAR R-CNN, a second-stage detector that can generally improve any existing 3D detector. To fulfill the real-time and high precision requirement in practice, we resort to a point-based approach rather than the popular voxel-based approach. However, we find an overlooked issue in previous work: Naively applying point-based methods like PointNet could make the learned features ignore the size of proposals. To this end, we analyze this problem in detail and propose several methods to remedy it, which bring significant performance improvement. Comprehensive experimental results on real-world datasets like Waymo Open Dataset (WOD) and KITTI dataset with various popular detectors demonstrate the universality and superiority of our LiDAR R-CNN. In particular, based on one variant of PointPillars, our method could achieve new state-of-the-art results with minor cost. Codes will be released at https://github.com/tusimple/LiDAR_RCNN. | https://openaccess.thecvf.com/content/CVPR2021/papers/Li_LiDAR_R-CNN_An_Efficient_and_Universal_3D_Object_Detector_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Li_LiDAR_R-CNN_An_Efficient_and_Universal_3D_Object_Detector_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Li_LiDAR_R-CNN_An_Efficient_and_Universal_3D_Object_Detector_CVPR_2021_paper.html | CVPR 2021 | null | null |
Line Segment Detection Using Transformers Without Edges | Yifan Xu, Weijian Xu, David Cheung, Zhuowen Tu | In this paper, we present a joint end-to-end line segment detection algorithm using Transformers that is free of post-processing and heuristics-guided intermediate processing (edge/junction/region detection). Our method, named LinE segment TRansformers (LETR), takes advantage of integrated tokenized queries, a self-attention mechanism, and an encoding-decoding strategy within Transformers by skipping standard heuristic designs for the edge element detection and perceptual grouping processes. We equip Transformers with a multi-scale encoder/decoder strategy to perform fine-grained line segment detection under a direct endpoint distance loss. This loss term is particularly suitable for detecting geometric structures such as line segments that are not conveniently represented by the standard bounding box representations. The Transformers learn to gradually refine line segments through layers of self-attention. In our experiments, we show state-of-the-art results on the Wireframe and YorkUrban benchmarks. | https://openaccess.thecvf.com/content/CVPR2021/papers/Xu_Line_Segment_Detection_Using_Transformers_Without_Edges_CVPR_2021_paper.pdf | http://arxiv.org/abs/2101.01909 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Xu_Line_Segment_Detection_Using_Transformers_Without_Edges_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Xu_Line_Segment_Detection_Using_Transformers_Without_Edges_CVPR_2021_paper.html | CVPR 2021 | null | null |
Region-Aware Adaptive Instance Normalization for Image Harmonization | Jun Ling, Han Xue, Li Song, Rong Xie, Xiao Gu | Image composition plays a common but important role in photo editing. To acquire photo-realistic composite images, one must adjust the appearance and visual style of the foreground to be compatible with the background. Existing deep learning methods for harmonizing composite images directly learn an image mapping network from the composite image to the real one, without explicitly exploring visual style consistency between the background and the foreground images. To ensure the visual style consistency between the foreground and the background, in this paper, we treat image harmonization as a style transfer problem. In particular, we propose a simple yet effective Region-aware Adaptive Instance Normalization (RAIN) module, which explicitly formulates the visual style from the background and adaptively applies it to the foreground. With our settings, our RAIN module can be used as a drop-in module for existing image harmonization networks and is able to bring significant improvements. Extensive experiments on the existing image harmonization benchmark datasets show the superior capability of the proposed method. Code is available at https://github.com/junleen/RainNet . | https://openaccess.thecvf.com/content/CVPR2021/papers/Ling_Region-Aware_Adaptive_Instance_Normalization_for_Image_Harmonization_CVPR_2021_paper.pdf | http://arxiv.org/abs/2106.02853 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Ling_Region-Aware_Adaptive_Instance_Normalization_for_Image_Harmonization_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Ling_Region-Aware_Adaptive_Instance_Normalization_for_Image_Harmonization_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Ling_Region-Aware_Adaptive_Instance_CVPR_2021_supplemental.pdf | null |
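The core of a region-aware instance normalization step can be sketched as: compute channel-wise statistics separately over the foreground and background regions, normalize the foreground features, and re-style them with the background statistics while leaving the background untouched. The code below is a simplified, statistics-only version of that idea (the paper's RAIN module additionally involves learned components), with illustrative names and shapes.

```python
import torch

def region_aware_adain(feat, mask, eps=1e-5):
    """Normalize foreground features (mask == 1) and re-style them with
    channel-wise statistics computed from the background region (mask == 0).

    feat: (B, C, H, W) features, mask: (B, 1, H, W) with 1 = foreground."""
    fg, bg = mask, 1.0 - mask

    def masked_stats(x, m):
        area = m.sum(dim=(2, 3), keepdim=True).clamp(min=1.0)
        mean = (x * m).sum(dim=(2, 3), keepdim=True) / area
        var = ((x - mean) ** 2 * m).sum(dim=(2, 3), keepdim=True) / area
        return mean, (var + eps).sqrt()

    fg_mean, fg_std = masked_stats(feat, fg)
    bg_mean, bg_std = masked_stats(feat, bg)
    normalized_fg = (feat - fg_mean) / fg_std * bg_std + bg_mean
    return feat * bg + normalized_fg * fg   # background left unchanged

f = torch.rand(1, 8, 32, 32)
m = torch.zeros(1, 1, 32, 32); m[..., 8:24, 8:24] = 1.0
print(region_aware_adain(f, m).shape)  # torch.Size([1, 8, 32, 32])
```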
Learning Tensor Low-Rank Prior for Hyperspectral Image Reconstruction | Shipeng Zhang, Lizhi Wang, Lei Zhang, Hua Huang | Snapshot hyperspectral imaging has been developed to capture the spectral information of dynamic scenes. In this paper, we propose a deep neural network by learning the tensor low-rank prior of hyperspectral images (HSI) in the feature domain to promote the reconstruction quality. Our method is inspired by the canonical-polyadic (CP) decomposition theory, where a low-rank tensor can be expressed as a weighted summation of several rank-1 component tensors. Specifically, we first learn the tensor low-rank prior of the image features with two steps: (a) we generate rank-1 tensors with discriminative components to collect the contextual information from both spatial and channel dimensions of the image features; (b) we aggregate those rank-1 tensors into a low-rank tensor as a 3D attention map to exploit the global correlation and refine the image features. Then, we integrate the learned tensor low-rank prior into an iterative optimization algorithm to obtain an end-to-end HSI reconstruction. Experiments on both synthetic and real data demonstrate the superiority of our method. | https://openaccess.thecvf.com/content/CVPR2021/papers/Zhang_Learning_Tensor_Low-Rank_Prior_for_Hyperspectral_Image_Reconstruction_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_Learning_Tensor_Low-Rank_Prior_for_Hyperspectral_Image_Reconstruction_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_Learning_Tensor_Low-Rank_Prior_for_Hyperspectral_Image_Reconstruction_CVPR_2021_paper.html | CVPR 2021 | null | null |
Unsupervised Learning of Depth and Depth-of-Field Effect From Natural Images With Aperture Rendering Generative Adversarial Networks | Takuhiro Kaneko | Understanding the 3D world from 2D projected natural images is a fundamental challenge in computer vision and graphics. Recently, an unsupervised learning approach has garnered considerable attention owing to its advantages in data collection. However, to mitigate training limitations, typical methods need to impose assumptions for viewpoint distribution (e.g., a dataset containing various viewpoint images) or object shape (e.g., symmetric objects). These assumptions often restrict applications; for instance, the application to non-rigid objects or images captured from similar viewpoints (e.g., flower or bird images) remains a challenge. To complement these approaches, we propose aperture rendering generative adversarial networks (AR-GANs), which equip aperture rendering on top of GANs, and adopt focus cues to learn the depth and depth-of-field (DoF) effect of unlabeled natural images. To address the ambiguities triggered by the unsupervised setting (i.e., ambiguities between smooth texture and out-of-focus blurs, and between foreground and background blurs), we develop DoF mixture learning, which enables the generator to learn real image distribution while generating diverse DoF images. In addition, we devise a center focus prior to guide the learning direction. In the experiments, we demonstrate the effectiveness of AR-GANs on various datasets, such as flower, bird, and face images, demonstrate their portability by incorporating them into other 3D representation learning GANs, and validate their applicability in shallow DoF rendering. | https://openaccess.thecvf.com/content/CVPR2021/papers/Kaneko_Unsupervised_Learning_of_Depth_and_Depth-of-Field_Effect_From_Natural_Images_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Kaneko_Unsupervised_Learning_of_Depth_and_Depth-of-Field_Effect_From_Natural_Images_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Kaneko_Unsupervised_Learning_of_Depth_and_Depth-of-Field_Effect_From_Natural_Images_CVPR_2021_paper.html | CVPR 2021 | null | null |
Sign-Agnostic Implicit Learning of Surface Self-Similarities for Shape Modeling and Reconstruction From Raw Point Clouds | Wenbin Zhao, Jiabao Lei, Yuxin Wen, Jianguo Zhang, Kui Jia | Shape modeling and reconstruction from raw point clouds of objects stand as a fundamental challenge in vision and graphics research. Classical methods consider analytic shape priors; however, their performance is degraded when the scanned points deviate from the ideal conditions of cleanness and completeness. Important progress has been recently made by data-driven approaches, which learn global and/or local models of implicit surface representations from auxiliary sets of training shapes. Motivated by a universal phenomenon that self-similar shape patterns of local surface patches repeat across the entire surface of an object, we aim to push forward the data-driven strategies and propose to learn a local implicit surface network for a shared, adaptive modeling of the entire surface for a direct surface reconstruction from raw point cloud; we also enhance the leveraging of surface self-similarities by improving correlations among the optimized latent codes of individual surface patches. Given that orientations of raw points could be unavailable or noisy, we extend sign-agnostic learning into our local implicit model, which enables our recovery of signed implicit fields of local surfaces from the unsigned inputs. We term our framework Sign-Agnostic Implicit Learning of Surface Self-Similarities (SAIL-S3). With a global post-optimization of local sign flipping, SAIL-S3 is able to directly model raw, un-oriented point clouds and reconstruct high-quality object surfaces. Experiments show its superiority over existing methods. | https://openaccess.thecvf.com/content/CVPR2021/papers/Zhao_Sign-Agnostic_Implicit_Learning_of_Surface_Self-Similarities_for_Shape_Modeling_and_CVPR_2021_paper.pdf | http://arxiv.org/abs/2012.07498 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Zhao_Sign-Agnostic_Implicit_Learning_of_Surface_Self-Similarities_for_Shape_Modeling_and_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Zhao_Sign-Agnostic_Implicit_Learning_of_Surface_Self-Similarities_for_Shape_Modeling_and_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Zhao_Sign-Agnostic_Implicit_Learning_CVPR_2021_supplemental.pdf | null |
Towards More Flexible and Accurate Object Tracking With Natural Language: Algorithms and Benchmark | Xiao Wang, Xiujun Shu, Zhipeng Zhang, Bo Jiang, Yaowei Wang, Yonghong Tian, Feng Wu | Tracking by natural language specification is an emerging research topic that aims at locating the target object in the video sequence based on its language description. Compared with traditional bounding box (BBox) based tracking, this setting guides object tracking with high-level semantic information, addresses the ambiguity of BBox, and links local and global search organically together. Those benefits may bring more flexible, robust and accurate tracking performance in practical scenarios. However, existing natural language initialized trackers are developed and compared on benchmark datasets proposed for tracking-by-BBox, which cannot reflect the true power of tracking-by-language. In this work, we propose a new benchmark specifically dedicated to tracking-by-language, including a large-scale dataset and strong, diverse baseline methods. Specifically, we collect 2k video sequences (containing a total of 1,244,340 frames and 663 words) and split them 1300/700 for training/testing, respectively. We densely annotate one sentence in English and corresponding bounding boxes of the target object for each video. We also introduce two new challenges into TNL2K for the object tracking task, i.e., adversarial samples and modality switch. A strong baseline method based on an adaptive local-global-search scheme is proposed for future works to compare against. We believe this benchmark will greatly boost related research on natural language guided tracking. | https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_Towards_More_Flexible_and_Accurate_Object_Tracking_With_Natural_Language_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.16746 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Wang_Towards_More_Flexible_and_Accurate_Object_Tracking_With_Natural_Language_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Wang_Towards_More_Flexible_and_Accurate_Object_Tracking_With_Natural_Language_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Wang_Towards_More_Flexible_CVPR_2021_supplemental.pdf | null |
On Learning the Geodesic Path for Incremental Learning | Christian Simon, Piotr Koniusz, Mehrtash Harandi | Neural networks notoriously suffer from the problem of catastrophic forgetting, the phenomenon of forgetting the past knowledge when acquiring new knowledge. Overcoming catastrophic forgetting is of significant importance to emulate the process of "incremental learning", where the model is capable of learning from sequential experience in an efficient and robust way. State-of-the-art techniques for incremental learning make use of knowledge distillation towards preventing catastrophic forgetting. Therein, one updates the network while ensuring that the network's responses to previously seen concepts remain stable throughout updates. This in practice is done by minimizing the dissimilarity between current and previous responses of the network one way or another. Our work contributes a novel method to the arsenal of distillation techniques. In contrast to previous state of the art, we propose to firstly construct low-dimensional manifolds for previous and current responses and minimize the dissimilarity between the responses along the geodesic connecting the manifolds. This induces a more formidable knowledge distillation with smooth properties which preserves the past knowledge more efficiently as observed by our comprehensive empirical study. | https://openaccess.thecvf.com/content/CVPR2021/papers/Simon_On_Learning_the_Geodesic_Path_for_Incremental_Learning_CVPR_2021_paper.pdf | http://arxiv.org/abs/2104.08572 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Simon_On_Learning_the_Geodesic_Path_for_Incremental_Learning_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Simon_On_Learning_the_Geodesic_Path_for_Incremental_Learning_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Simon_On_Learning_the_CVPR_2021_supplemental.pdf | null |
The Lottery Tickets Hypothesis for Supervised and Self-Supervised Pre-Training in Computer Vision Models | Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Michael Carbin, Zhangyang Wang | The computer vision world has been regaining enthusiasm for various pre-trained models, including both classical ImageNet supervised pre-training and recently emerged self-supervised pre-training such as simCLR and MoCo. Pre-trained weights often boost a wide range of downstream tasks including classification, detection, and segmentation. The latest studies suggest that pre-training benefits from gigantic model capacity. We are hereby curious and ask: after pre-training, does a pre-trained model indeed have to stay large for its downstream transferability? In this paper, we examine supervised and self-supervised pre-trained models through the lens of the lottery ticket hypothesis (LTH). LTH identifies highly sparse matching subnetworks that can be trained in isolation from (nearly) scratch yet still reach the full models' performance. We extend the scope of LTH and question whether matching subnetworks still exist in pre-trained computer vision models that enjoy the same downstream transfer performance. Our extensive experiments convey an overall positive message: from all pre-trained weights obtained by ImageNet classification, simCLR, and MoCo, we are consistently able to locate such matching subnetworks at 59.04% to 96.48% sparsity that transfer universally to multiple downstream tasks, whose performance sees no degradation compared to using full pre-trained weights. Further analyses reveal that subnetworks found from different pre-training tend to yield diverse mask structures and perturbation sensitivities. We conclude that the core LTH observations remain generally relevant in the pre-training paradigm of computer vision, but more delicate discussions are needed in some cases. Codes and pre-trained models will be made available at: https://github.com/VITA-Group/CV_LTH_Pre-training. | https://openaccess.thecvf.com/content/CVPR2021/papers/Chen_The_Lottery_Tickets_Hypothesis_for_Supervised_and_Self-Supervised_Pre-Training_in_CVPR_2021_paper.pdf | http://arxiv.org/abs/2012.06908 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Chen_The_Lottery_Tickets_Hypothesis_for_Supervised_and_Self-Supervised_Pre-Training_in_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Chen_The_Lottery_Tickets_Hypothesis_for_Supervised_and_Self-Supervised_Pre-Training_in_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Chen_The_Lottery_Tickets_CVPR_2021_supplemental.pdf | null |
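Matching subnetworks of this kind are typically found with iterative magnitude pruning: train, keep the largest-magnitude weights globally, rewind the survivors to their (near-)initial values, and retrain. The sketch below shows only the global magnitude-pruning step that produces a binary mask at a desired sparsity; the rewinding/retraining loop and any transfer-specific details from the paper are omitted, and the helper name is illustrative.

```python
import torch
import torch.nn as nn

def magnitude_prune_masks(model, sparsity, old_masks=None):
    """One round of global magnitude pruning: keep the largest-magnitude weights
    across all Linear/Conv layers, respecting masks from previous rounds."""
    scores, masks = [], {}
    for name, module in model.named_modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            w = module.weight.detach().abs()
            m_old = old_masks[name] if old_masks else torch.ones_like(w)
            scores.append((w * m_old).flatten())   # pruned weights score 0
            masks[name] = m_old
    all_scores = torch.cat(scores)
    k = int((1.0 - sparsity) * all_scores.numel())          # weights to keep
    threshold = torch.topk(all_scores, k, largest=True).values.min()
    for name, module in model.named_modules():
        if name in masks:
            keep = (module.weight.detach().abs() >= threshold).float()
            masks[name] = keep * masks[name]
    return masks

net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
m = magnitude_prune_masks(net, sparsity=0.8)
# Per-layer fraction of weights kept; globally about 20% survive.
print({k: round(v.mean().item(), 3) for k, v in m.items()})
```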
Iterative Shrinking for Referring Expression Grounding Using Deep Reinforcement Learning | Mingjie Sun, Jimin Xiao, Eng Gee Lim | In this paper, we are tackling the proposal-free referring expression grounding task, aiming at localizing the target object according to a query sentence, without relying on off-the-shelf object proposals. Existing proposal-free methods employ a query-image matching branch to select the highest-score point in the image feature map as the target box center, with its width and height predicted by another branch. Such methods, however, fail to utilize the contextual relation between the target and reference objects, and lack interpretability in their reasoning procedure. To solve these problems, we propose an iterative shrinking mechanism to localize the target, where the shrinking direction is decided by a reinforcement learning agent, with all contents within the current image patch comprehensively considered. Besides, the sequential shrinking process demonstrates the reasoning about how the target is iteratively found. Experiments show that the proposed method boosts the accuracy by 4.32% against the previous state-of-the-art (SOTA) method on the RefCOCOg dataset, where query sentences are long and complex, with many targets referred by other reference objects. | https://openaccess.thecvf.com/content/CVPR2021/papers/Sun_Iterative_Shrinking_for_Referring_Expression_Grounding_Using_Deep_Reinforcement_Learning_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.05187 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Sun_Iterative_Shrinking_for_Referring_Expression_Grounding_Using_Deep_Reinforcement_Learning_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Sun_Iterative_Shrinking_for_Referring_Expression_Grounding_Using_Deep_Reinforcement_Learning_CVPR_2021_paper.html | CVPR 2021 | null | null |
Simulating Unknown Target Models for Query-Efficient Black-Box Attacks | Chen Ma, Li Chen, Jun-Hai Yong | Many adversarial attacks have been proposed to investigate the security issues of deep neural networks. In the black-box setting, current model stealing attacks train a substitute model to counterfeit the functionality of the target model. However, the training requires querying the target model. Consequently, the query complexity remains high, and such attacks can be defended against easily. This study aims to train a generalized substitute model called "Simulator", which can mimic the functionality of any unknown target model. To this end, we build the training data in the form of multiple tasks by collecting query sequences generated during attacks on various existing networks. The learning process uses a mean-squared-error-based knowledge-distillation loss during meta-learning to minimize the difference between the Simulator and the sampled networks. The meta-gradients of this loss are then computed and accumulated from multiple tasks to update the Simulator and subsequently improve generalization. When attacking a target model that is unseen in training, the trained Simulator can accurately simulate its functionality using its limited feedback. As a result, a large fraction of queries can be transferred to the Simulator, thereby reducing query complexity. Results of the comprehensive experiments conducted using the CIFAR-10, CIFAR-100, and TinyImageNet datasets demonstrate that the proposed approach reduces query complexity by several orders of magnitude compared to the baseline method. The implementation source code is released online at https://github.com/machanic/SimulatorAttack. | https://openaccess.thecvf.com/content/CVPR2021/papers/Ma_Simulating_Unknown_Target_Models_for_Query-Efficient_Black-Box_Attacks_CVPR_2021_paper.pdf | http://arxiv.org/abs/2009.00960 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Ma_Simulating_Unknown_Target_Models_for_Query-Efficient_Black-Box_Attacks_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Ma_Simulating_Unknown_Target_Models_for_Query-Efficient_Black-Box_Attacks_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Ma_Simulating_Unknown_Target_CVPR_2021_supplemental.pdf | null |
Diffusion Probabilistic Models for 3D Point Cloud Generation | Shitong Luo, Wei Hu | We present a probabilistic model for point cloud generation, which is fundamental for various 3D vision tasks such as shape completion, upsampling, synthesis and data augmentation. Inspired by the diffusion process in non-equilibrium thermodynamics, we view points in point clouds as particles in a thermodynamic system in contact with a heat bath, which diffuse from the original distribution to a noise distribution. Point cloud generation thus amounts to learning the reverse diffusion process that transforms the noise distribution to the distribution of a desired shape. Specifically, we propose to model the reverse diffusion process for point clouds as a Markov chain conditioned on a shape latent. We derive the variational bound in closed form for training and provide implementations of the model. Experimental results demonstrate that our model achieves competitive performance in point cloud generation and auto-encoding. The code is available at https://github.com/luost26/diffusion-point-cloud | https://openaccess.thecvf.com/content/CVPR2021/papers/Luo_Diffusion_Probabilistic_Models_for_3D_Point_Cloud_Generation_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.01458 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Luo_Diffusion_Probabilistic_Models_for_3D_Point_Cloud_Generation_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Luo_Diffusion_Probabilistic_Models_for_3D_Point_Cloud_Generation_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Luo_Diffusion_Probabilistic_Models_CVPR_2021_supplemental.pdf | null |
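For orientation, the generic DDPM-style reverse (denoising) step that such a model learns can be written in a few lines. The linear beta schedule, the `eps_model` noise predictor, and the omission of the shape-latent conditioning below are all simplifying assumptions for illustration, not the paper's exact implementation (which is in the linked repository).

```python
import torch

betas = torch.linspace(1e-4, 0.02, 100)        # assumed linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def reverse_step(x_t, t, eps_model):
    """One denoising step p(x_{t-1} | x_t) for a point cloud x_t of shape (N, 3)."""
    eps = eps_model(x_t, t)                     # predicted noise, same shape as x_t
    coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
    mean = (x_t - coef * eps) / torch.sqrt(alphas[t])
    noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return mean + torch.sqrt(betas[t]) * noise  # sample x_{t-1}

# Generation starts from x_T ~ N(0, I) of shape (N, 3) and applies reverse_step for
# t = T-1, ..., 0; the paper additionally conditions eps_model on a shape latent.
```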
Dual Pixel Exploration: Simultaneous Depth Estimation and Image Restoration | Liyuan Pan, Shah Chowdhury, Richard Hartley, Miaomiao Liu, Hongguang Zhang, Hongdong Li | The dual-pixel (DP) hardware works by splitting each pixel in half and creating an image pair in a single snapshot. Several works estimate depth/inverse depth by treating the DP pair as a stereo pair. However, dual-pixel disparity only occurs in image regions with defocus blur. The heavy defocus blur in DP pairs affects the performance of matching-based depth estimation approaches. Instead of removing the blur effect blindly, we study the formation of the DP pair, which links the blur and the depth information. In this paper, we propose a mathematical DP model which can benefit depth estimation by the blur. These explorations motivate us to propose an end-to-end DDDNet (DP-based Depth and Deblur Network) to jointly estimate the depth and restore the image. Moreover, we define a reblur loss, which reflects the relationship of the DP image formation process with depth information, to regularise our depth estimate in training. To meet the requirement of a large amount of data for learning, we propose the first DP image simulator which allows us to create datasets with DP pairs from any existing RGBD dataset. As a side contribution, we collect a real dataset for further research. Extensive experimental evaluation on both synthetic and real datasets shows that our approach achieves competitive performance compared to state-of-the-art approaches. | https://openaccess.thecvf.com/content/CVPR2021/papers/Pan_Dual_Pixel_Exploration_Simultaneous_Depth_Estimation_and_Image_Restoration_CVPR_2021_paper.pdf | http://arxiv.org/abs/2012.00301 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Pan_Dual_Pixel_Exploration_Simultaneous_Depth_Estimation_and_Image_Restoration_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Pan_Dual_Pixel_Exploration_Simultaneous_Depth_Estimation_and_Image_Restoration_CVPR_2021_paper.html | CVPR 2021 | null | null |
Guided Integrated Gradients: An Adaptive Path Method for Removing Noise | Andrei Kapishnikov, Subhashini Venugopalan, Besim Avci, Ben Wedin, Michael Terry, Tolga Bolukbasi | Integrated Gradients (IG) is a commonly used feature attribution method for deep neural networks. While IG has many desirable properties, the method often produces spurious/noisy pixel attributions in regions that are not related to the predicted class when applied to visual models. While this has been previously noted, most existing solutions are aimed at addressing the symptoms by explicitly reducing the noise in the resulting attributions. In this work, we show that one of the causes of the problem is the accumulation of noise along the IG path. To minimize the effect of this source of noise, we propose adapting the attribution path itself -- conditioning the path not just on the image but also on the model being explained. We introduce Adaptive Path Methods (APMs) as a generalization of path methods, and Guided IG as a specific instance of an APM. Empirically, Guided IG creates saliency maps better aligned with the model's prediction and the input image that is being explained. We show through qualitative and quantitative experiments that Guided IG outperforms other, related methods in nearly every experiment. | https://openaccess.thecvf.com/content/CVPR2021/papers/Kapishnikov_Guided_Integrated_Gradients_An_Adaptive_Path_Method_for_Removing_Noise_CVPR_2021_paper.pdf | http://arxiv.org/abs/2106.09788 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Kapishnikov_Guided_Integrated_Gradients_An_Adaptive_Path_Method_for_Removing_Noise_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Kapishnikov_Guided_Integrated_Gradients_An_Adaptive_Path_Method_for_Removing_Noise_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Kapishnikov_Guided_Integrated_Gradients_CVPR_2021_supplemental.zip | null |
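For context, vanilla Integrated Gradients accumulates gradients along a straight-line path from a baseline to the input; Guided IG replaces that fixed path with one conditioned on the model, which is not reproduced here. A minimal sketch of the baseline IG integral, assuming a PyTorch classifier and a single image-shaped input, is:

```python
import torch

def integrated_gradients(model, x, baseline, target, steps=64):
    """IG_i ~= (x_i - x'_i) * mean over steps of dF_target/dx_i along the straight path."""
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1, 1, 1)
    path = (baseline + alphas * (x - baseline)).detach().requires_grad_(True)
    scores = model(path)[:, target].sum()       # sum over path points; grads stay per-point
    grads = torch.autograd.grad(scores, path)[0]
    return (x - baseline) * grads.mean(dim=0)   # Riemann approximation of the path integral
```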
Spatiotemporal Registration for Event-Based Visual Odometry | Daqi Liu, Alvaro Parra, Tat-Jun Chin | A useful application of event sensing is visual odometry, especially in settings that require high-temporal resolution. The state-of-the-art method of contrast maximisation recovers the motion from a batch of events by maximising the contrast of the image of warped events. However, the cost scales with image resolution and the temporal resolution can be limited by the need for large batch sizes to yield sufficient structure in the contrast image (see the supplementary material for a demonstration program). In this work, we propose spatiotemporal registration as a compelling technique for event-based rotational motion estimation. We theoretically justify the approach and establish its fundamental and practical advantages over contrast maximisation. In particular, spatiotemporal registration also produces feature tracks as a by-product, which directly supports an efficient visual odometry pipeline with graph-based optimisation for motion averaging. The simplicity of our visual odometry pipeline allows it to process more than 1M events/second. We also contribute a new event dataset for visual odometry, where motion sequences with large velocity variations were acquired using a high-precision robot arm. Our dataset will be published after the reviewing period. | https://openaccess.thecvf.com/content/CVPR2021/papers/Liu_Spatiotemporal_Registration_for_Event-Based_Visual_Odometry_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.05955 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Liu_Spatiotemporal_Registration_for_Event-Based_Visual_Odometry_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Liu_Spatiotemporal_Registration_for_Event-Based_Visual_Odometry_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Liu_Spatiotemporal_Registration_for_CVPR_2021_supplemental.pdf | null |
Temporal Action Segmentation From Timestamp Supervision | Zhe Li, Yazan Abu Farha, Jurgen Gall | Temporal action segmentation approaches have been very successful recently. However, annotating videos with frame-wise labels to train such models is very expensive and time consuming. While weakly supervised methods trained using only ordered action lists require less annotation effort, the performance is still worse than fully supervised approaches. In this paper, we propose to use timestamp supervision for the temporal action segmentation task. Timestamps require a comparable annotation effort to weakly supervised approaches, and yet provide a stronger supervisory signal. To demonstrate the effectiveness of timestamp supervision, we propose an approach to train a segmentation model using only timestamp annotations. Our approach uses the model output and the annotated timestamps to generate frame-wise labels by detecting the action changes. We further introduce a confidence loss that forces the predicted probabilities to monotonically decrease as the distance to the timestamps increases. This ensures that all frames of an action, and not only the most distinctive ones, are learned during training. The evaluation on four datasets shows that models trained with timestamp annotations achieve comparable performance to the fully supervised approaches. | https://openaccess.thecvf.com/content/CVPR2021/papers/Li_Temporal_Action_Segmentation_From_Timestamp_Supervision_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.06669 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Li_Temporal_Action_Segmentation_From_Timestamp_Supervision_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Li_Temporal_Action_Segmentation_From_Timestamp_Supervision_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Li_Temporal_Action_Segmentation_CVPR_2021_supplemental.pdf | null |
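The exact confidence loss is defined in the paper; the sketch below only illustrates the underlying idea of penalising any increase in the predicted probability of the annotated action as frames move away from its timestamp. The tensor shapes, the hinge-style formulation, and the function name are assumptions.

```python
import torch
import torch.nn.functional as F

def confidence_loss(probs, timestamp, action):
    """probs: (T, C) frame-wise class probabilities; timestamp: frame index; action: class id."""
    p = probs[:, action]
    # Moving left, away from the timestamp, the probability should not increase:
    # penalise p[t-1] > p[t] for t <= timestamp.
    left = F.relu(p[:timestamp] - p[1:timestamp + 1]).mean() if timestamp > 0 else 0.0
    # Moving right, away from the timestamp, the probability should not increase:
    # penalise p[t+1] > p[t] for t >= timestamp.
    right = F.relu(p[timestamp + 1:] - p[timestamp:-1]).mean() if timestamp < p.numel() - 1 else 0.0
    return left + right
```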
Data-Free Model Extraction | Jean-Baptiste Truong, Pratyush Maini, Robert J. Walls, Nicolas Papernot | Current model extraction attacks assume that the adversary has access to a surrogate dataset with characteristics similar to the proprietary data used to train the victim model. This requirement precludes the use of existing model extraction techniques on valuable models, such as those trained on rare or hard-to-acquire datasets. In contrast, we propose data-free model extraction methods that do not require a surrogate dataset. Our approach adapts techniques from the area of data-free knowledge transfer for model extraction. As part of our study, we identify that the choice of loss is critical to ensuring that the extracted model is an accurate replica of the victim model. Furthermore, we address difficulties arising from the adversary's limited access to the victim model in a black-box setting. For example, we recover the model's logits from its probability predictions to approximate gradients. We find that the proposed data-free model extraction approach achieves high accuracy with reasonable query complexity -- 0.99x and 0.92x the victim model accuracy on the SVHN and CIFAR-10 datasets given 2M and 20M queries, respectively. | https://openaccess.thecvf.com/content/CVPR2021/papers/Truong_Data-Free_Model_Extraction_CVPR_2021_paper.pdf | http://arxiv.org/abs/2011.14779 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Truong_Data-Free_Model_Extraction_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Truong_Data-Free_Model_Extraction_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Truong_Data-Free_Model_Extraction_CVPR_2021_supplemental.pdf | null |
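One concrete step mentioned above, recovering logits from probability predictions, follows from the shift-invariance of softmax: log-probabilities equal the logits up to a per-sample additive constant. A minimal sketch of that step (the zero-mean convention for fixing the constant and the epsilon clamp are assumptions):

```python
import torch

def probs_to_logits(probs, eps=1e-12):
    """probs: (B, C) softmax outputs; returns logits up to a per-row additive constant."""
    logp = torch.log(probs.clamp_min(eps))          # log softmax(z) = z - logsumexp(z)
    return logp - logp.mean(dim=1, keepdim=True)    # pick the zero-mean representative
```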
PointAugmenting: Cross-Modal Augmentation for 3D Object Detection | Chunwei Wang, Chao Ma, Ming Zhu, Xiaokang Yang | Camera and LiDAR are two complementary sensors for 3D object detection in the autonomous driving context. Camera provides rich texture and color cues while LiDAR specializes in relative distance sensing. The challenge of 3D object detection lies in effectively fusing 2D camera images with 3D LiDAR points. In this paper, we present a novel cross-modal 3D object detection algorithm, named PointAugmenting. On one hand, PointAugmenting decorates point clouds with corresponding point-wise CNN features extracted by pretrained 2D detection models, and then performs 3D object detection over the decorated point clouds. In comparison with highly abstract semantic segmentation scores to decorate point clouds, CNN features from detection networks adapt to object appearance variations, achieving significant improvement. On the other hand, PointAugmenting benefits from a novel cross-modal data augmentation algorithm, which consistently pastes virtual objects into images and point clouds during network training. Extensive experiments on the large-scale nuScenes and Waymo datasets demonstrate the effectiveness and efficiency of our PointAugmenting. Notably, PointAugmenting outperforms the LiDAR-only baseline detector by +6.5% mAP and achieves the new state-of-the-art results on the nuScenes leaderboard to date. | https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_PointAugmenting_Cross-Modal_Augmentation_for_3D_Object_Detection_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Wang_PointAugmenting_Cross-Modal_Augmentation_for_3D_Object_Detection_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Wang_PointAugmenting_Cross-Modal_Augmentation_for_3D_Object_Detection_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Wang_PointAugmenting_Cross-Modal_Augmentation_CVPR_2021_supplemental.zip | null |
Learning Feature Aggregation for Deep 3D Morphable Models | Zhixiang Chen, Tae-Kyun Kim | 3D morphable models are widely used for the shape representation of an object class in computer vision and graphics applications. In this work, we focus on deep 3D morphable models that directly apply deep learning on 3D mesh data with a hierarchical structure to capture information at multiple scales. While great efforts have been made to design the convolution operator, how to best aggregate vertex features across hierarchical levels deserves further attention. In contrast to resorting to mesh decimation, we propose an attention based module to learn mapping matrices for better feature aggregation across hierarchical levels. Specifically, the mapping matrices are generated by a compatibility function of the keys and queries. The keys and queries are trainable variables, learned by optimizing the target objective, and shared by all data samples of the same object class. Our proposed module can be used as a train-only drop-in replacement for the feature aggregation in existing architectures for both downsampling and upsampling. Our experiments show that through the end-to-end training of the mapping matrices, we achieve state-of-the-art results on a variety of 3D shape datasets in comparison to existing morphable models. | https://openaccess.thecvf.com/content/CVPR2021/papers/Chen_Learning_Feature_Aggregation_for_Deep_3D_Morphable_Models_CVPR_2021_paper.pdf | http://arxiv.org/abs/2105.02173 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Chen_Learning_Feature_Aggregation_for_Deep_3D_Morphable_Models_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Chen_Learning_Feature_Aggregation_for_Deep_3D_Morphable_Models_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Chen_Learning_Feature_Aggregation_CVPR_2021_supplemental.pdf | null |
There Is More Than Meets the Eye: Self-Supervised Multi-Object Detection and Tracking With Sound by Distilling Multimodal Knowledge | Francisco Rivera Valverde, Juana Valeria Hurtado, Abhinav Valada | Attributes of sound inherent to objects can provide valuable cues to learn rich representations for object detection and tracking. Furthermore, the co-occurrence of audiovisual events in videos can be exploited to localize objects over the image field by solely monitoring the sound in the environment. Thus far, this has only been feasible in scenarios where the camera is static and for single object detection. Moreover, the robustness of these methods has been limited as they primarily rely on RGB images which are highly susceptible to illumination and weather changes. In this work, we present the novel self-supervised MM-DistillNet framework consisting of multiple teachers that leverage diverse modalities including RGB, depth, and thermal images, to simultaneously exploit complementary cues and distill knowledge into a single audio student network. We propose the new MTA loss function that facilitates the distillation of information from multimodal teachers in a self-supervised manner. Additionally, we propose a novel self-supervised pretext task for the audio student that enables us to not rely on labor-intensive manual annotations. We introduce a large-scale multimodal dataset with over 113,000 time-synchronized frames of RGB, depth, thermal, and audio modalities. Extensive experiments demonstrate that our approach outperforms state-of-the-art methods while being able to detect multiple objects using only sound during inference and even while moving. | https://openaccess.thecvf.com/content/CVPR2021/papers/Valverde_There_Is_More_Than_Meets_the_Eye_Self-Supervised_Multi-Object_Detection_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.01353 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Valverde_There_Is_More_Than_Meets_the_Eye_Self-Supervised_Multi-Object_Detection_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Valverde_There_Is_More_Than_Meets_the_Eye_Self-Supervised_Multi-Object_Detection_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Valverde_There_Is_More_CVPR_2021_supplemental.pdf | null |
DeRF: Decomposed Radiance Fields | Daniel Rebain, Wei Jiang, Soroosh Yazdani, Ke Li, Kwang Moo Yi, Andrea Tagliasacchi | With the advent of Neural Radiance Fields (NeRF), neural networks can now render novel views of a 3D scene with quality that fools the human eye. Yet, generating these images is very computationally intensive, limiting their applicability in practical scenarios. In this paper, we propose a technique based on spatial decomposition capable of mitigating this issue. Our key observation is that there are diminishing returns in employing larger (deeper and/or wider) networks. Hence, we propose to spatially decompose a scene and dedicate smaller networks for each decomposed part. When working together, these networks can render the whole scene. This allows for near-constant inference time regardless of the number of decomposed parts. Moreover, we show that a Voronoi spatial decomposition is preferable for this purpose, as it is provably compatible with the Painter's Algorithm for efficient and GPU-friendly rendering. Our experiments show that for real-world scenes, our method provides up to 3x more efficient inference than NeRF (with the same rendering quality), or an improvement of up to 1.0 dB in PSNR (for the same inference cost). | https://openaccess.thecvf.com/content/CVPR2021/papers/Rebain_DeRF_Decomposed_Radiance_Fields_CVPR_2021_paper.pdf | http://arxiv.org/abs/2011.12490 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Rebain_DeRF_Decomposed_Radiance_Fields_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Rebain_DeRF_Decomposed_Radiance_Fields_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Rebain_DeRF_Decomposed_Radiance_CVPR_2021_supplemental.zip | null |
Group-aware Label Transfer for Domain Adaptive Person Re-identification | Kecheng Zheng, Wu Liu, Lingxiao He, Tao Mei, Jiebo Luo, Zheng-Jun Zha | Unsupervised Domain Adaptive (UDA) person re-identification (ReID) aims at adapting the model trained on a labeled source-domain dataset to a target-domain dataset without any further annotations. Most successful UDA-ReID approaches combine clustering-based pseudo-label prediction with representation learning and perform the two steps in an alternating fashion. However, offline interaction between these two steps may allow noisy pseudo labels to substantially hinder the capability of the model. In this paper, we propose a Group-aware Label Transfer (GLT) algorithm, which enables the online interaction and mutual promotion of pseudo-label prediction and representation learning. Specifically, a label transfer algorithm simultaneously uses pseudo labels to train the model while refining the pseudo labels as an online clustering algorithm. It treats the online label refinery problem as an optimal transport problem, which explores the minimum cost for assigning M samples to N pseudo labels. More importantly, we introduce a group-aware strategy to assign implicit attribute group IDs to samples. The combination of the online label refining algorithm and the group-aware strategy can better correct the noisy pseudo label in an online fashion and narrow down the search space of the target identity. The effectiveness of the proposed GLT is demonstrated by the experimental results (Rank-1 accuracy) for Market1501→DukeMTMC (82.0%) and DukeMTMC→Market1501 (92.2%), remarkably closing the gap between unsupervised and supervised performance on person re-identification. | https://openaccess.thecvf.com/content/CVPR2021/papers/Zheng_Group-aware_Label_Transfer_for_Domain_Adaptive_Person_Re-identification_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.12366 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Zheng_Group-aware_Label_Transfer_for_Domain_Adaptive_Person_Re-identification_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Zheng_Group-aware_Label_Transfer_for_Domain_Adaptive_Person_Re-identification_CVPR_2021_paper.html | CVPR 2021 | null | null |
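The optimal-transport view of assigning M samples to N pseudo labels can be approximated with a standard Sinkhorn-Knopp iteration; the cost matrix, the uniform marginals, and the regularisation strength below are illustrative assumptions rather than the paper's exact solver.

```python
import torch

def sinkhorn_assign(cost, n_iters=50, epsilon=0.05):
    """cost: (M, N) sample-to-centroid distances; returns a hard pseudo-label per sample."""
    M, N = cost.shape
    K = torch.exp(-cost / epsilon)                  # Gibbs kernel of the entropic OT problem
    r = torch.full((M,), 1.0 / M)                   # uniform marginal over samples
    c = torch.full((N,), 1.0 / N)                   # uniform marginal over pseudo labels
    u, v = torch.ones(M) / M, torch.ones(N) / N
    for _ in range(n_iters):
        u = r / (K @ v)                             # alternate scaling to match the marginals
        v = c / (K.t() @ u)
    plan = u.unsqueeze(1) * K * v.unsqueeze(0)      # approximate minimum-cost transport plan
    return plan.argmax(dim=1)                       # refined pseudo label for each sample
```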
MR Image Super-Resolution With Squeeze and Excitation Reasoning Attention Network | Yulun Zhang, Kai Li, Kunpeng Li, Yun Fu | High-quality high-resolution (HR) magnetic resonance (MR) images afford more detailed information for reliable diagnosis and quantitative image analyses. Deep convolutional neural networks (CNNs) have shown promising ability for MR image super-resolution (SR) given low-resolution (LR) MR images. The LR MR images usually share some visual characteristics: repeating patterns, relatively simpler structures, and less informative background. Most previous CNN-based SR methods treat the spatial pixels (including the background) equally. They also fail to sense the entire space of the input, which is critical for high-quality MR image SR. To address those problems, we propose squeeze and excitation reasoning attention networks (SERAN) for accurate MR image SR. We propose to squeeze attention from global spatial information of the input and obtain global descriptors. Such global descriptors enhance the network's ability to focus on more informative regions and structures in MR images. We further build relationship among those global descriptors and propose primitive relationship reasoning attention. The global descriptors are further refined with the learned attention. To fully make use of the aggregated information, we adaptively recalibrate feature responses with learned adaptive attention vectors. These attention vectors select a subset of global descriptors to complement each spatial location for accurate details and texture reconstruction. We propose squeeze and excitation attention with residual scaling, which not only stabilizes the training but also makes it flexible to other basic networks. Extensive experiments show the effectiveness of our proposed SERAN, which clearly surpasses state-of-the-art methods on benchmarks quantitatively and qualitatively. | https://openaccess.thecvf.com/content/CVPR2021/papers/Zhang_MR_Image_Super-Resolution_With_Squeeze_and_Excitation_Reasoning_Attention_Network_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_MR_Image_Super-Resolution_With_Squeeze_and_Excitation_Reasoning_Attention_Network_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_MR_Image_Super-Resolution_With_Squeeze_and_Excitation_Reasoning_Attention_Network_CVPR_2021_paper.html | CVPR 2021 | null | null |
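For reference, the squeeze-and-excitation primitive that this attention design builds on can be sketched as global pooling followed by a small bottleneck that rescales channels; SERAN's reasoning attention over global descriptors is more elaborate, so treat this only as the underlying mechanism, with the module name and reduction ratio as assumptions.

```python
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                           # x: (B, C, H, W) feature map
        s = x.mean(dim=(2, 3))                      # squeeze: global average pooling
        w = self.fc(s).unsqueeze(-1).unsqueeze(-1)  # excite: per-channel weights in (0, 1)
        return x * w                                # recalibrate feature responses
```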
BABEL: Bodies, Action and Behavior With English Labels | Abhinanda R. Punnakkal, Arjun Chandrasekaran, Nikos Athanasiou, Alejandra Quiros-Ramirez, Michael J. Black | Understanding the semantics of human movement -- the what, how and why of the movement -- is an important problem that requires datasets of human actions with semantic labels. Existing datasets take one of two approaches. Large-scale video datasets contain many action labels but do not contain ground-truth 3D human motion. Alternatively, motion-capture (mocap) datasets have precise body motions but are limited to a small number of actions. To address this, we present BABEL, a large dataset with language labels describing the actions being performed in mocap sequences. BABEL consists of language labels for over 43 hours of mocap sequences from AMASS, containing over 250 unique actions. Each action label in BABEL is precisely aligned with the duration of the corresponding action in the mocap sequence. BABEL also allows overlap of multiple actions, which may each span different durations. This results in a total of over 66,000 action segments. The dense annotations can be leveraged for tasks like action recognition, temporal localization, motion synthesis, etc. To demonstrate the value of BABEL as a benchmark, we evaluate the performance of models on 3D action recognition. We demonstrate that BABEL poses interesting learning challenges that are applicable to real-world scenarios, and can serve as a useful benchmark for progress in 3D action recognition. The dataset, baseline methods, and evaluation code are available and supported for academic research purposes at https://babel.is.tue.mpg.de/. | https://openaccess.thecvf.com/content/CVPR2021/papers/Punnakkal_BABEL_Bodies_Action_and_Behavior_With_English_Labels_CVPR_2021_paper.pdf | http://arxiv.org/abs/2106.09696 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Punnakkal_BABEL_Bodies_Action_and_Behavior_With_English_Labels_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Punnakkal_BABEL_Bodies_Action_and_Behavior_With_English_Labels_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Punnakkal_BABEL_Bodies_Action_CVPR_2021_supplemental.zip | null |