Dataset columns: title (string), authors (string), abstract (string), pdf (string), arXiv (string), bibtex (string), url (string), detail_url (string), tags (string), supp (string), plus one additional unnamed string column.
L2M-GAN: Learning To Manipulate Latent Space Semantics for Facial Attribute Editing
Guoxing Yang, Nanyi Fei, Mingyu Ding, Guangzhen Liu, Zhiwu Lu, Tao Xiang
A deep facial attribute editing model strives to meet two requirements: (1) attribute correctness -- the target attribute should correctly appear on the edited face image; (2) irrelevance preservation -- any irrelevant information (e.g., identity) should not be changed after editing. Meeting both requirements challenges the state-of-the-art works, which resort to either spatial attention or latent space factorization. Specifically, the former assume that each attribute has well-defined local support regions; they are often more effective for editing a local attribute than a global one. The latter factorize the latent space of a fixed pretrained GAN into different attribute-relevant parts, but they cannot be trained end-to-end with the GAN, leading to sub-optimal solutions. To overcome these limitations, we propose a novel latent space factorization model, called L2M-GAN, which is learned end-to-end and effective for editing both local and global attributes. The key novel components are: (1) A latent space vector of the GAN is factorized into attribute-relevant and attribute-irrelevant codes with an orthogonality constraint imposed to ensure disentanglement. (2) An attribute-relevant code transformer is learned to manipulate the attribute value; crucially, the transformed code is subject to the same orthogonality constraint. By forcing both the original attribute-relevant latent code and the edited code to be disentangled from any attribute-irrelevant code, our model strikes the perfect balance between attribute correctness and irrelevance preservation. Extensive experiments on CelebA-HQ show that our L2M-GAN achieves significant improvements over the state of the art.
https://openaccess.thecvf.com/content/CVPR2021/papers/Yang_L2M-GAN_Learning_To_Manipulate_Latent_Space_Semantics_for_Facial_Attribute_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Yang_L2M-GAN_Learning_To_Manipulate_Latent_Space_Semantics_for_Facial_Attribute_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Yang_L2M-GAN_Learning_To_Manipulate_Latent_Space_Semantics_for_Facial_Attribute_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Yang_L2M-GAN_Learning_To_CVPR_2021_supplemental.pdf
null
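The orthogonality constraint that L2M-GAN places on the attribute-relevant and attribute-irrelevant codes can be made concrete with a small sketch. This is not the authors' implementation; the cosine-based penalty and the tensor shapes are assumptions chosen only to illustrate the idea of penalizing any overlap between the two codes.

```python
import torch
import torch.nn.functional as F

def orthogonality_loss(attr_code: torch.Tensor, irrel_code: torch.Tensor) -> torch.Tensor:
    """Penalize alignment between attribute-relevant and attribute-irrelevant codes.

    Both inputs are (batch, dim) latent vectors; the squared cosine similarity
    is zero exactly when the two codes are orthogonal, i.e. disentangled.
    """
    cos = F.cosine_similarity(attr_code, irrel_code, dim=1)
    return (cos ** 2).mean()

# Toy usage: the same penalty would be applied both to the original
# attribute-relevant code and to the transformed (edited) code.
z_attr = torch.randn(4, 512, requires_grad=True)
z_irrel = torch.randn(4, 512)
print(float(orthogonality_loss(z_attr, z_irrel)))
```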
IIRC: Incremental Implicitly-Refined Classification
Mohamed Abdelsalam, Mojtaba Faramarzi, Shagun Sodhani, Sarath Chandar
We introduce the 'Incremental Implicitly-Refined Classification (IIRC)' setup, an extension to the class incremental learning setup where the incoming batches of classes have two granularity levels, i.e., each sample could have a high-level (coarse) label like 'bear' and a low-level (fine) label like 'polar bear'. Only one label is provided at a time, and the model has to figure out the other label if it has already learned it. This setup is more aligned with real-life scenarios, where a learner usually interacts with the same family of entities multiple times and discovers more granularity about them, while still trying not to forget previous knowledge. Moreover, this setup enables evaluating models for some important lifelong learning challenges that cannot be easily addressed under the existing setups. These challenges can be motivated by the example: "if a model was trained on the class 'bear' in one task and on 'polar bear' in another task, will it forget the concept of 'bear', will it rightfully infer that a 'polar bear' is still a 'bear', and will it wrongfully associate the label of 'polar bear' with other breeds of 'bear'?". We develop a standardized benchmark that enables evaluating models on the IIRC setup. We evaluate several state-of-the-art lifelong learning algorithms and highlight their strengths and limitations. For example, distillation-based methods perform relatively well but are prone to incorrectly predicting too many labels per image. We hope that the proposed setup, along with the benchmark, will provide a meaningful problem setting for practitioners.
https://openaccess.thecvf.com/content/CVPR2021/papers/Abdelsalam_IIRC_Incremental_Implicitly-Refined_Classification_CVPR_2021_paper.pdf
http://arxiv.org/abs/2012.12477
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Abdelsalam_IIRC_Incremental_Implicitly-Refined_Classification_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Abdelsalam_IIRC_Incremental_Implicitly-Refined_Classification_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Abdelsalam_IIRC_Incremental_Implicitly-Refined_CVPR_2021_supplemental.pdf
null
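The evaluation logic implied by the IIRC abstract above -- a sample's target is every label the model has already encountered, at both granularities -- can be sketched in a few lines. The fine-to-coarse mapping and the exact-match check below are illustrative assumptions, not the official benchmark code.

```python
# Toy fine-to-coarse hierarchy; the real benchmark defines its own mapping.
FINE_TO_COARSE = {"polar bear": "bear", "black bear": "bear", "bus": "vehicle"}

def expected_labels(fine_label: str, seen_labels: set) -> set:
    """All labels the model should predict for a sample with this fine label."""
    targets = set()
    if fine_label in seen_labels:
        targets.add(fine_label)
    coarse = FINE_TO_COARSE.get(fine_label)
    if coarse in seen_labels:
        targets.add(coarse)
    return targets

def exact_match(predicted: set, fine_label: str, seen_labels: set) -> bool:
    return predicted == expected_labels(fine_label, seen_labels)

seen = {"bear", "polar bear"}
print(expected_labels("polar bear", seen))               # {'bear', 'polar bear'}
print(exact_match({"polar bear"}, "polar bear", seen))   # False: 'bear' was forgotten
```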
Learning To Fuse Asymmetric Feature Maps in Siamese Trackers
Wencheng Han, Xingping Dong, Fahad Shahbaz Khan, Ling Shao, Jianbing Shen
Recently, Siamese-based trackers have achieved promising performance in visual tracking. Most recent Siamese-based trackers typically employ a depth-wise cross-correlation (DW-XCorr) to obtain multi-channel correlation information from the two feature maps (target and search region). However, DW-XCorr has several limitations within Siamese-based tracking: it can easily be fooled by distractors, has fewer activated channels, and provides weak discrimination of object boundaries. Further, DW-XCorr is a handcrafted parameter-free module and cannot fully benefit from offline learning on large-scale data. We propose a learnable module, called the asymmetric convolution (ACM), which learns to better capture the semantic correlation information in offline training on large-scale data. Different from DW-XCorr and its predecessor (XCorr), which regard a single feature map as the convolution kernel, our ACM decomposes the convolution operation on a concatenated feature map into two mathematically equivalent operations, thereby avoiding the need for the feature maps to be of the same size (width and height) during concatenation. Our ACM can incorporate useful prior information, such as bounding-box size, with standard visual features. Furthermore, ACM can easily be integrated into existing Siamese trackers based on DW-XCorr or XCorr. To demonstrate its generalization ability, we integrate ACM into three representative trackers: SiamFC, SiamRPN++, and SiamBAN. Our experiments reveal the benefits of the proposed ACM, which outperforms existing methods on six tracking benchmarks. On the LaSOT test set, our ACM-based tracker obtains a significant improvement of 5.8% in terms of success (AUC) over the baseline.
https://openaccess.thecvf.com/content/CVPR2021/papers/Han_Learning_To_Fuse_Asymmetric_Feature_Maps_in_Siamese_Trackers_CVPR_2021_paper.pdf
http://arxiv.org/abs/2012.02776
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Han_Learning_To_Fuse_Asymmetric_Feature_Maps_in_Siamese_Trackers_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Han_Learning_To_Fuse_Asymmetric_Feature_Maps_in_Siamese_Trackers_CVPR_2021_paper.html
CVPR 2021
null
null
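The decomposition at the heart of the asymmetric convolution (ACM) described above rests on a simple identity: a convolution over channel-concatenated features equals the sum of two convolutions whose kernels are the channel-wise split of the original one. The check below uses same-sized inputs only so that the concatenated baseline exists; the shapes and names are illustrative, not the paper's code.

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 8, 16, 16)   # e.g. target-branch features
y = torch.randn(1, 8, 16, 16)   # e.g. search-branch features (same size only for this check)
w = torch.randn(4, 16, 3, 3)    # one kernel over the 16 concatenated channels

full = F.conv2d(torch.cat([x, y], dim=1), w, padding=1)
split = F.conv2d(x, w[:, :8], padding=1) + F.conv2d(y, w[:, 8:], padding=1)
print(torch.allclose(full, split, atol=1e-5))  # True: the two forms are equivalent
```

Because the split form never performs the concatenation, the two branches are free to have different spatial sizes, which is the property the ACM exploits.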
Generalizing to the Open World: Deep Visual Odometry With Online Adaptation
Shunkai Li, Xin Wu, Yingdian Cao, Hongbin Zha
Although learning-based visual odometry (VO) has shown impressive results in recent years, the pretrained networks may easily collapse in unseen environments. The large domain gap between training and testing data makes them difficult to generalize to new scenes. In this paper, we propose an online adaptation framework for deep VO with the assistance of scene-agnostic geometric computations and Bayesian inference. In contrast to learning-based pose estimation, our method solves pose from optical flow and depth, while the single-view depth estimation is continuously improved with new observations by online learned uncertainties. Meanwhile, an online learned photometric uncertainty is used for further depth and pose optimization by a differentiable Gauss-Newton layer. Our method enables fast adaptation of deep VO networks to unseen environments in a self-supervised manner. Extensive experiments, including Cityscapes to KITTI and outdoor KITTI to indoor TUM, demonstrate that our method achieves state-of-the-art generalization ability among self-supervised VO methods.
https://openaccess.thecvf.com/content/CVPR2021/papers/Li_Generalizing_to_the_Open_World_Deep_Visual_Odometry_With_Online_CVPR_2021_paper.pdf
http://arxiv.org/abs/2103.15279
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Li_Generalizing_to_the_Open_World_Deep_Visual_Odometry_With_Online_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Li_Generalizing_to_the_Open_World_Deep_Visual_Odometry_With_Online_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Li_Generalizing_to_the_CVPR_2021_supplemental.pdf
null
PQA: Perceptual Question Answering
Yonggang Qi, Kai Zhang, Aneeshan Sain, Yi-Zhe Song
Perceptual organization remains one of the very few established theories on the human visual system. It underpinned many pre-deep seminal works on segmentation and detection, yet research has seen a rapid decline since the preferential shift to learning deep models. Of the limited attempts, most aimed at interpreting complex visual scenes using perceptual organizational rules. This has however been proven to be sub-optimal, since models were unable to effectively capture the visual complexity in real-world imagery. In this paper, we rejuvenate the study of perceptual organization, by advocating two positional changes: (i) we examine purposefully generated synthetic data, instead of complex real imagery, and (ii) we ask machines to synthesize novel perceptually-valid patterns, instead of explaining existing data. Our overall answer lies with the introduction of a novel visual challenge -- the challenge of perceptual question answering (PQA). Upon observing example perceptual question-answer pairs, the goal for PQA is to solve similar questions by generating answers entirely from scratch (see Figure 1). Our first contribution is therefore the first dataset of perceptual question-answer pairs, each generated specifically for a particular Gestalt principle. We then borrow insights from human psychology to design an agent that casts perceptual organization as a self-attention problem, where a proposed grid-to-grid mapping network directly generates answer patterns from scratch. Experiments show our agent to outperform a selection of naive and strong baselines. A human study however indicates that ours uses astronomically more data to learn when compared to an average human, necessitating future research (with or without our dataset).
https://openaccess.thecvf.com/content/CVPR2021/papers/Qi_PQA_Perceptual_Question_Answering_CVPR_2021_paper.pdf
http://arxiv.org/abs/2104.03589
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Qi_PQA_Perceptual_Question_Answering_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Qi_PQA_Perceptual_Question_Answering_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Qi_PQA_Perceptual_Question_CVPR_2021_supplemental.pdf
null
Adversarial Laser Beam: Effective Physical-World Attack to DNNs in a Blink
Ranjie Duan, Xiaofeng Mao, A. K. Qin, Yuefeng Chen, Shaokai Ye, Yuan He, Yun Yang
Though it is well known that the performance of deep neural networks (DNNs) degrades under certain light conditions, there exists no study on the threats of light beams emitted from some physical source acting as an adversarial attacker on DNNs in a real-world scenario. In this work, we show that DNNs are easily fooled by simply using a laser beam. To this end, we propose a novel attack method called Adversarial Laser Beam (AdvLB), which enables manipulation of a laser beam's physical parameters to perform an adversarial attack. Experiments demonstrate the effectiveness of our proposed approach in both digital and physical settings. We further empirically analyze the evaluation results and reveal that the proposed laser beam attack may lead to some interesting prediction errors of the state-of-the-art DNNs. We envisage that the proposed AdvLB method enriches the current family of adversarial attacks and builds the foundation for future robustness studies on light.
https://openaccess.thecvf.com/content/CVPR2021/papers/Duan_Adversarial_Laser_Beam_Effective_Physical-World_Attack_to_DNNs_in_a_CVPR_2021_paper.pdf
http://arxiv.org/abs/2103.06504
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Duan_Adversarial_Laser_Beam_Effective_Physical-World_Attack_to_DNNs_in_a_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Duan_Adversarial_Laser_Beam_Effective_Physical-World_Attack_to_DNNs_in_a_CVPR_2021_paper.html
CVPR 2021
null
null
Robust Point Cloud Registration Framework Based on Deep Graph Matching
Kexue Fu, Shaolei Liu, Xiaoyuan Luo, Manning Wang
3D point cloud registration is a fundamental problem in computer vision and robotics. Recently, learning-based point cloud registration methods have made great progress. However, these methods are sensitive to outliers, which lead to more incorrect correspondences. In this paper, we propose a novel deep graph matching-based framework for point cloud registration. Specifically, we first transform point clouds into graphs and extract deep features for each point. Then, we develop a module based on deep graph matching to calculate a soft correspondence matrix. By using graph matching, not only the local geometry of each point but also its structure and topology in a larger range are considered in establishing correspondences, so that more correct correspondences are found. We train the network with a loss directly defined on the correspondences, and in the test stage the soft correspondences are transformed into hard one-to-one correspondences so that registration can be performed by singular value decomposition. Furthermore, we introduce a transformer-based method to generate edges for graph construction, which further improves the quality of the correspondences. Extensive experiments on registering clean, noisy, partial-to-partial and unseen category point clouds show that the proposed method achieves state-of-the-art performance. The code will be made publicly available at https://github.com/fukexue/RGM.
https://openaccess.thecvf.com/content/CVPR2021/papers/Fu_Robust_Point_Cloud_Registration_Framework_Based_on_Deep_Graph_Matching_CVPR_2021_paper.pdf
http://arxiv.org/abs/2103.04256
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Fu_Robust_Point_Cloud_Registration_Framework_Based_on_Deep_Graph_Matching_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Fu_Robust_Point_Cloud_Registration_Framework_Based_on_Deep_Graph_Matching_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Fu_Robust_Point_Cloud_CVPR_2021_supplemental.pdf
null
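The final SVD step mentioned in the abstract above is the classical Kabsch/Procrustes solution for a rigid transform given one-to-one correspondences. The sketch below shows only that closed-form step in NumPy and is not the authors' pipeline.

```python
import numpy as np

def rigid_from_correspondences(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R and translation t with R @ src_i + t ~= dst_i.

    src, dst: (N, 3) arrays of corresponding points.
    """
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Sanity check on a random rigid transform.
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([0.5, -1.0, 2.0])
R, t = rigid_from_correspondences(src, dst)
print(np.allclose(R, R_true, atol=1e-6))
```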
Dense Contrastive Learning for Self-Supervised Visual Pre-Training
Xinlong Wang, Rufeng Zhang, Chunhua Shen, Tao Kong, Lei Li
To date, most existing self-supervised learning methods are designed and optimized for image classification. These pre-trained models can be sub-optimal for dense prediction tasks due to the discrepancy between image-level prediction and pixel-level prediction. To fill this gap, we aim to design an effective, dense self-supervised learning method that directly works at the level of pixels (or local features) by taking into account the correspondence between local features. We present dense contrastive learning, which implements self-supervised learning by optimizing a pairwise contrastive (dis)similarity loss at the pixel level between two views of input images. Compared to the baseline method MoCo-v2, our method introduces negligible computation overhead (only <1% slower), but demonstrates consistently superior performance when transferring to downstream dense prediction tasks including object detection, semantic segmentation and instance segmentation; and outperforms the state-of-the-art methods by a large margin. Specifically, over the strong MoCo-v2 baseline, our method achieves significant improvements of 2.0% AP on PASCAL VOC object detection, 1.1% AP on COCO object detection, 0.9% AP on COCO instance segmentation, 3.0% mIoU on PASCAL VOC semantic segmentation and 1.8% mIoU on Cityscapes semantic segmentation. Code and models are available at: https://git.io/DenseCL
https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_Dense_Contrastive_Learning_for_Self-Supervised_Visual_Pre-Training_CVPR_2021_paper.pdf
http://arxiv.org/abs/2011.09157
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Wang_Dense_Contrastive_Learning_for_Self-Supervised_Visual_Pre-Training_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Wang_Dense_Contrastive_Learning_for_Self-Supervised_Visual_Pre-Training_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Wang_Dense_Contrastive_Learning_CVPR_2021_supplemental.pdf
null
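A stripped-down version of a pixel-level contrastive loss in the spirit of the abstract above. For brevity it assumes the positive for each location is the feature at the same grid position in the other view; DenseCL extracts correspondences differently, so treat this purely as an illustration.

```python
import torch
import torch.nn.functional as F

def dense_info_nce(f1: torch.Tensor, f2: torch.Tensor, tau: float = 0.2) -> torch.Tensor:
    """f1, f2: (B, C, H, W) dense projections of two views of the same images."""
    b, c, h, w = f1.shape
    q = F.normalize(f1.flatten(2).transpose(1, 2), dim=-1)   # (B, HW, C)
    k = F.normalize(f2.flatten(2).transpose(1, 2), dim=-1)   # (B, HW, C)
    logits = torch.bmm(q, k.transpose(1, 2)) / tau            # (B, HW, HW)
    targets = torch.arange(h * w, device=f1.device).expand(b, -1)
    return F.cross_entropy(logits.reshape(-1, h * w), targets.reshape(-1))

loss = dense_info_nce(torch.randn(2, 128, 7, 7), torch.randn(2, 128, 7, 7))
print(float(loss))
```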
Birds of a Feather: Capturing Avian Shape Models From Images
Yufu Wang, Nikos Kolotouros, Kostas Daniilidis, Marc Badger
Animals are diverse in shape, but building a deformable shape model for a new species is not always possible due to the lack of 3D data. We present a method to capture new species using an articulated template and images of that species. In this work, we focus mainly on birds. Although birds represent almost twice the number of species as mammals, no accurate shape model is available. To capture a novel species, we first fit the articulated template to each training sample. By disentangling pose and shape, we learn a shape space that captures variation both among species and within each species from image evidence. We learn models of multiple species from the CUB dataset, and contribute new species-specific and multi-species shape models that are useful for downstream reconstruction tasks. Using a low-dimensional embedding, we show that our learned 3D shape space better reflects the phylogenetic relationships among birds than learned perceptual features.
https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_Birds_of_a_Feather_Capturing_Avian_Shape_Models_From_Images_CVPR_2021_paper.pdf
http://arxiv.org/abs/2105.09396
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Wang_Birds_of_a_Feather_Capturing_Avian_Shape_Models_From_Images_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Wang_Birds_of_a_Feather_Capturing_Avian_Shape_Models_From_Images_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Wang_Birds_of_a_CVPR_2021_supplemental.pdf
null
Learning Temporal Consistency for Low Light Video Enhancement From Single Images
Fan Zhang, Yu Li, Shaodi You, Ying Fu
Single image low light enhancement is an important task with many practical applications. Most existing methods adopt a single image approach. Although their performance is satisfying on a static single image, we found that they suffer from serious temporal instability when handling low light videos. We notice that the problem arises because existing data-driven methods are trained from single image pairs where no temporal information is available. Unfortunately, training on real temporally consistent data is also problematic because it is impossible to collect pixel-wise paired low and normal light videos under controlled environments at large scale and with sufficient diversity, with noise of identical statistics. In this paper, we propose a novel method to enforce temporal stability in low light video enhancement with only static images. The key idea is to learn and infer a motion field (optical flow) from a single image and synthesize short-range video sequences. Our strategy is general and can extend to large scale datasets directly. Based on this idea, we propose our method, which can infer motion priors for single image low light video enhancement and enforce temporal consistency. Rigorous experiments and a user study demonstrate the state-of-the-art performance of our proposed method. Our code and model will be publicly available at https://github.com/zkawfanx/StableLLVE.
https://openaccess.thecvf.com/content/CVPR2021/papers/Zhang_Learning_Temporal_Consistency_for_Low_Light_Video_Enhancement_From_Single_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_Learning_Temporal_Consistency_for_Low_Light_Video_Enhancement_From_Single_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_Learning_Temporal_Consistency_for_Low_Light_Video_Enhancement_From_Single_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Zhang_Learning_Temporal_Consistency_CVPR_2021_supplemental.pdf
null
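The temporal-consistency idea in the abstract above can be sketched as: synthesize a neighboring frame by warping with a flow field, then require that enhancing the warped frame matches warping the enhanced frame. The flow convention, the backward-warping helper, and the L1 loss below are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def warp(img: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Backward-warp img (B, C, H, W) with flow (B, 2, H, W) given in pixels."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0).to(img)  # (1, 2, H, W)
    coords = grid + flow
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    norm_grid = torch.stack((coords_x, coords_y), dim=-1)             # (B, H, W, 2)
    return F.grid_sample(img, norm_grid, align_corners=True)

def consistency_loss(enhance, frame, flow):
    next_frame = warp(frame, flow)               # synthesized neighboring frame
    return F.l1_loss(enhance(next_frame), warp(enhance(frame), flow))

enhance = torch.nn.Conv2d(3, 3, 3, padding=1)    # stand-in for the enhancement net
loss = consistency_loss(enhance, torch.rand(1, 3, 64, 64), torch.zeros(1, 2, 64, 64))
print(float(loss))
```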
Brain Image Synthesis With Unsupervised Multivariate Canonical CSCl4Net
Yawen Huang, Feng Zheng, Danyang Wang, Weilin Huang, Matthew R. Scott, Ling Shao
Recent advances in neuroscience have highlighted the effectiveness of multi-modal medical data for investigating certain pathologies and understanding human cognition. However, obtaining full sets of different modalities is limited by various factors, such as long acquisition times, high examination costs and artifact suppression. In addition, the complexity, high dimensionality and heterogeneity of neuroimaging data remains another key challenge in leveraging existing randomized scans effectively, as data of the same modality is often measured differently by different machines. There is a clear need to go beyond the traditional imaging-dependent process and synthesize anatomically specific target-modality data from a source input. In this paper, we propose to learn dedicated features that cross both inter- and intra-modal variations using a novel CSCl_4Net. Through an initial unification of intra-modal data in the feature maps and multivariate canonical adaptation, CSCl_4Net facilitates feature-level mutual transformation. The positive definite Riemannian manifold-penalized data fidelity term further enables CSCl_4Net to reconstruct missing measurements according to transformed features. Finally, the l_4-norm maximization boils down to a computationally efficient optimization problem. Extensive experiments validate the ability and robustness of our CSCl_4Net compared to the state-of-the-art methods on multiple datasets.
https://openaccess.thecvf.com/content/CVPR2021/papers/Huang_Brain_Image_Synthesis_With_Unsupervised_Multivariate_Canonical_CSCl4Net_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Huang_Brain_Image_Synthesis_With_Unsupervised_Multivariate_Canonical_CSCl4Net_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Huang_Brain_Image_Synthesis_With_Unsupervised_Multivariate_Canonical_CSCl4Net_CVPR_2021_paper.html
CVPR 2021
null
null
Inverse Simulation: Reconstructing Dynamic Geometry of Clothed Humans via Optimal Control
Jingfan Guo, Jie Li, Rahul Narain, Hyun Soo Park
This paper studies the problem of inverse cloth simulation---to estimate the shape and time-varying poses of the underlying body that generate physically plausible cloth motion matching the point cloud measurements of clothed humans. A key innovation is to represent the dynamics of the cloth geometry using a dynamical system that is controlled by the body states (shape and pose). This allows us to express the cloth motion as a resultant of external (skin friction and gravity) and internal (elasticity) forces. Inspired by the theory of optimal control, we optimize the body states such that the simulated cloth motion matches the point cloud measurements, and the analytic gradient of the simulator is back-propagated to update the body states. We propose a cloth relaxation scheme to initialize the cloth state, which ensures physical validity. Our method produces physically plausible and temporally smooth cloth and body movements that are faithful to the measurements, and shows superior performance compared to existing methods. As a byproduct, the stress and strain applied to the body and clothes can be recovered.
https://openaccess.thecvf.com/content/CVPR2021/papers/Guo_Inverse_Simulation_Reconstructing_Dynamic_Geometry_of_Clothed_Humans_via_Optimal_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Guo_Inverse_Simulation_Reconstructing_Dynamic_Geometry_of_Clothed_Humans_via_Optimal_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Guo_Inverse_Simulation_Reconstructing_Dynamic_Geometry_of_Clothed_Humans_via_Optimal_CVPR_2021_paper.html
CVPR 2021
null
null
Rotation Equivariant Siamese Networks for Tracking
Deepak K. Gupta, Devanshu Arya, Efstratios Gavves
Rotation is among the long prevailing, yet still unresolved, hard challenges encountered in visual object tracking. The existing deep learning-based tracking algorithms use regular CNNs that are inherently translation equivariant, but not designed to tackle rotations. In this paper, we first demonstrate that in the presence of rotation instances in videos, the performance of existing trackers is severely affected. To circumvent the adverse effect of rotations, we present rotation-equivariant Siamese networks (RE-SiamNets), built through the use of group-equivariant convolutional layers comprising steerable filters. RE-SiamNets allow estimating the change in orientation of the object in an unsupervised manner, thereby facilitating their use in relative 2D pose estimation as well. We further show that this change in orientation can be used to impose an additional motion constraint in Siamese tracking by restricting the change in orientation between two consecutive frames. For benchmarking, we present the Rotation Tracking Benchmark (RTB), a dataset comprising a set of videos with rotation instances. Through experiments on two popular Siamese architectures, we show that RE-SiamNets handle the problem of rotation very well and outperform their regular counterparts. Further, RE-SiamNets can accurately estimate the relative change in pose of the target in an unsupervised fashion, namely the in-plane rotation the target has sustained with respect to the reference frame.
https://openaccess.thecvf.com/content/CVPR2021/papers/Gupta_Rotation_Equivariant_Siamese_Networks_for_Tracking_CVPR_2021_paper.pdf
http://arxiv.org/abs/2012.13078
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Gupta_Rotation_Equivariant_Siamese_Networks_for_Tracking_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Gupta_Rotation_Equivariant_Siamese_Networks_for_Tracking_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Gupta_Rotation_Equivariant_Siamese_CVPR_2021_supplemental.pdf
null
Learning Decision Trees Recurrently Through Communication
Stephan Alaniz, Diego Marcos, Bernt Schiele, Zeynep Akata
Integrated interpretability without sacrificing the prediction accuracy of decision making algorithms has the potential of greatly improving their value to the user. Instead of assigning a label to an image directly, we propose to learn iterative binary sub-decisions, inducing sparsity and transparency in the decision making process. The key aspect of our model is its ability to build a decision tree whose structure is encoded into the memory representation of a Recurrent Neural Network jointly learned by two models communicating through message passing. In addition, our model assigns a semantic meaning to each decision in the form of binary attributes, providing concise, semantic and relevant rationalizations to the user. On three benchmark image classification datasets, including the large-scale ImageNet, our model generates human interpretable binary decision sequences explaining the predictions of the network while maintaining state-of-the-art accuracy.
https://openaccess.thecvf.com/content/CVPR2021/papers/Alaniz_Learning_Decision_Trees_Recurrently_Through_Communication_CVPR_2021_paper.pdf
http://arxiv.org/abs/1902.01780
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Alaniz_Learning_Decision_Trees_Recurrently_Through_Communication_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Alaniz_Learning_Decision_Trees_Recurrently_Through_Communication_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Alaniz_Learning_Decision_Trees_CVPR_2021_supplemental.pdf
null
PatchmatchNet: Learned Multi-View Patchmatch Stereo
Fangjinhua Wang, Silvano Galliani, Christoph Vogel, Pablo Speciale, Marc Pollefeys
We present PatchmatchNet, a novel and learnable cascade formulation of Patchmatch for high-resolution multi-view stereo. With high computation speed and low memory requirement, PatchmatchNet can process higher resolution imagery and is more suited to run on resource limited devices than competitors that employ 3D cost volume regularization. For the first time we introduce an iterative multi-scale Patchmatch in an end-to-end trainable architecture and improve the Patchmatch core algorithm with a novel and learned adaptive propagation and evaluation scheme for each iteration. Extensive experiments show a very competitive performance and generalization for our method on DTU, Tanks & Temples and ETH3D, but at a significantly higher efficiency than all existing top-performing models: at least two and a half times faster than state-of-the-art methods while using half as much memory. Code is available at https://github.com/FangjinhuaWang/PatchmatchNet.
https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_PatchmatchNet_Learned_Multi-View_Patchmatch_Stereo_CVPR_2021_paper.pdf
http://arxiv.org/abs/2012.01411
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Wang_PatchmatchNet_Learned_Multi-View_Patchmatch_Stereo_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Wang_PatchmatchNet_Learned_Multi-View_Patchmatch_Stereo_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Wang_PatchmatchNet_Learned_Multi-View_CVPR_2021_supplemental.pdf
null
Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation
Astuti Sharma, Tarun Kalluri, Manmohan Chandraker
Domain adaptation deals with training models using large scale labeled data from a specific source domain and then adapting the knowledge to certain target domains that have few or no labels. Many prior works learn domain agnostic feature representations for this purpose using a global distribution alignment objective which does not take into account the finer class specific structure in the source and target domains. We address this issue in our work and propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA. We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process. ILA-DA simultaneously accounts for intra-class clustering as well as inter-class separation among the categories, resulting in less noisy classifier boundaries, improved transferability and increased accuracy. We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets and provide insights into the proposed alignment approach. Code will be made publicly available at https://github.com/astuti/ILA-DA.
https://openaccess.thecvf.com/content/CVPR2021/papers/Sharma_Instance_Level_Affinity-Based_Transfer_for_Unsupervised_Domain_Adaptation_CVPR_2021_paper.pdf
http://arxiv.org/abs/2104.01286
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Sharma_Instance_Level_Affinity-Based_Transfer_for_Unsupervised_Domain_Adaptation_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Sharma_Instance_Level_Affinity-Based_Transfer_for_Unsupervised_Domain_Adaptation_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Sharma_Instance_Level_Affinity-Based_CVPR_2021_supplemental.pdf
null
COMPLETER: Incomplete Multi-View Clustering via Contrastive Prediction
Yijie Lin, Yuanbiao Gou, Zitao Liu, Boyun Li, Jiancheng Lv, Xi Peng
In this paper, we study two challenging problems in incomplete multi-view clustering analysis, namely, i) how to learn an informative and consistent representation among different views without the help of labels and ii) how to recover the missing views from data. To this end, we propose a novel objective that incorporates representation learning and data recovery into a unified framework from the view of information theory. To be specific, the informative and consistent representation is learned by maximizing the mutual information across different views through contrastive learning, and the missing views are recovered by minimizing the conditional entropy of different views through dual prediction. To the best of our knowledge, this could be the first work to provide a theoretical framework that unifies the consistent representation learning and cross-view data recovery. Extensive experimental results show the proposed method remarkably outperforms 10 competitive multi-view clustering methods on four challenging datasets. The code is available at https://pengxi.me.
https://openaccess.thecvf.com/content/CVPR2021/papers/Lin_COMPLETER_Incomplete_Multi-View_Clustering_via_Contrastive_Prediction_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Lin_COMPLETER_Incomplete_Multi-View_Clustering_via_Contrastive_Prediction_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Lin_COMPLETER_Incomplete_Multi-View_Clustering_via_Contrastive_Prediction_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Lin_COMPLETER_Incomplete_Multi-View_CVPR_2021_supplemental.pdf
null
Image-to-Image Translation via Hierarchical Style Disentanglement
Xinyang Li, Shengchuan Zhang, Jie Hu, Liujuan Cao, Xiaopeng Hong, Xudong Mao, Feiyue Huang, Yongjian Wu, Rongrong Ji
Recently, image-to-image translation has made significant progress in achieving both multi-label (i.e., translation conditioned on different labels) and multi-style (i.e., generation with diverse styles) tasks. However, due to the unexplored independence and exclusiveness in the labels, existing endeavors suffer from uncontrolled manipulations of the translation results. In this paper, we propose Hierarchical Style Disentanglement (HiSD) to address this issue. Specifically, we organize the labels into a hierarchical tree structure, in which independent tags, exclusive attributes, and disentangled styles are allocated from top to bottom. Correspondingly, a new translation process is designed to adapt to the above structure, in which the styles are identified for controllable translations. Both qualitative and quantitative results on the CelebA-HQ dataset verify the ability of the proposed HiSD. We hope our method will serve as a solid baseline and provide fresh insights with the hierarchically organized annotations for future research in image-to-image translation. The code will be released.
https://openaccess.thecvf.com/content/CVPR2021/papers/Li_Image-to-Image_Translation_via_Hierarchical_Style_Disentanglement_CVPR_2021_paper.pdf
http://arxiv.org/abs/2103.01456
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Li_Image-to-Image_Translation_via_Hierarchical_Style_Disentanglement_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Li_Image-to-Image_Translation_via_Hierarchical_Style_Disentanglement_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Li_Image-to-Image_Translation_via_CVPR_2021_supplemental.pdf
null
What Can Style Transfer and Paintings Do for Model Robustness?
Hubert Lin, Mitchell van Zuijlen, Sylvia C. Pont, Maarten W.A. Wijntjes, Kavita Bala
A common strategy for improving model robustness is through data augmentations. Data augmentations encourage models to learn desired invariances, such as invariance to horizontal flipping or small changes in color. Recent work has shown that arbitrary style transfer can be used as a form of data augmentation to encourage invariance to textures by creating painting-like images from photographs. However, a stylized photograph is not quite the same as an artist-created painting. Artists depict perceptually meaningful cues in paintings so that humans can recognize salient components in scenes, an emphasis which is not enforced in style transfer. Therefore, we study how style transfer and paintings differ in their impact on model robustness. First, we investigate the role of paintings as style images for stylization-based data augmentation. We find that style transfer functions well even without paintings as style images. Second, we show that learning from paintings as a form of perceptual data augmentation can improve model robustness. Finally, we investigate the invariances learned from stylization and from paintings, and show that models learn different invariances from these differing forms of data. Our results provide insights into how stylization improves model robustness, and provide evidence that artist-created paintings can be a valuable source of data for model robustness.
https://openaccess.thecvf.com/content/CVPR2021/papers/Lin_What_Can_Style_Transfer_and_Paintings_Do_for_Model_Robustness_CVPR_2021_paper.pdf
http://arxiv.org/abs/2011.14477
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Lin_What_Can_Style_Transfer_and_Paintings_Do_for_Model_Robustness_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Lin_What_Can_Style_Transfer_and_Paintings_Do_for_Model_Robustness_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Lin_What_Can_Style_CVPR_2021_supplemental.pdf
null
Taming Transformers for High-Resolution Image Synthesis
Patrick Esser, Robin Rombach, Bjorn Ommer
Designed to learn long-range interactions on sequential data, transformers continue to show state-of-the-art results on a wide variety of tasks. In contrast to CNNs, they contain no inductive bias that prioritizes local interactions. This makes them expressive, but also computationally infeasible for long sequences, such as high-resolution images. We demonstrate how combining the effectiveness of the inductive bias of CNNs with the expressivity of transformers enables them to model and thereby synthesize high-resolution images. We show how to (i) use CNNs to learn a context-rich vocabulary of image constituents, and in turn (ii) utilize transformers to efficiently model their composition within high-resolution images. Our approach is readily applied to conditional synthesis tasks, where both non-spatial information, such as object classes, and spatial information, such as segmentations, can control the generated image. In particular, we present the first results on semantically-guided synthesis of megapixel images with transformers.
https://openaccess.thecvf.com/content/CVPR2021/papers/Esser_Taming_Transformers_for_High-Resolution_Image_Synthesis_CVPR_2021_paper.pdf
http://arxiv.org/abs/2012.09841
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Esser_Taming_Transformers_for_High-Resolution_Image_Synthesis_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Esser_Taming_Transformers_for_High-Resolution_Image_Synthesis_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Esser_Taming_Transformers_for_CVPR_2021_supplemental.pdf
null
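The "context-rich vocabulary of image constituents" described above is obtained by vector quantization: each spatial feature is snapped to its nearest codebook entry, and the transformer then models the resulting index sequence. The toy quantizer below illustrates just that lookup; the codebook size, dimensions, and the absence of any training losses are simplifications.

```python
import torch

def quantize(features: torch.Tensor, codebook: torch.Tensor):
    """features: (B, C, H, W); codebook: (K, C). Returns index map and quantized map."""
    b, c, h, w = features.shape
    flat = features.permute(0, 2, 3, 1).reshape(-1, c)        # (B*H*W, C)
    dists = torch.cdist(flat, codebook)                        # (B*H*W, K)
    idx = dists.argmin(dim=1)
    quantized = codebook[idx].reshape(b, h, w, c).permute(0, 3, 1, 2)
    return idx.reshape(b, h, w), quantized

codes = torch.randn(512, 64)                                   # K=512 entries of dim 64
tokens, zq = quantize(torch.randn(2, 64, 16, 16), codes)
print(tokens.shape, zq.shape)                                  # (2, 16, 16) (2, 64, 16, 16)
```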
Learning the Predictability of the Future
Didac Suris, Ruoshi Liu, Carl Vondrick
We introduce a framework for learning from unlabeled video what is predictable in the future. Instead of committing up front to features to predict, our approach learns from data which features are predictable. Based on the observation that hyperbolic geometry naturally and compactly encodes hierarchical structure, we propose a predictive model in hyperbolic space. When the model is most confident, it will predict at a concrete level of the hierarchy, but when the model is not confident, it learns to automatically select a higher level of abstraction. Experiments on two established datasets show the key role of hierarchical representations for action prediction. Although our representation is trained with unlabeled video, visualizations show that action hierarchies emerge in the representation.
https://openaccess.thecvf.com/content/CVPR2021/papers/Suris_Learning_the_Predictability_of_the_Future_CVPR_2021_paper.pdf
http://arxiv.org/abs/2101.01600
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Suris_Learning_the_Predictability_of_the_Future_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Suris_Learning_the_Predictability_of_the_Future_CVPR_2021_paper.html
CVPR 2021
null
null
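The hyperbolic-geometry ingredient mentioned above boils down to working with distances in the Poincaré ball, where points near the origin act as abstractions of points near the boundary. The standard Poincaré distance is shown below for reference; it is not the paper's model.

```python
import numpy as np

def poincare_distance(u: np.ndarray, v: np.ndarray) -> float:
    """Geodesic distance in the Poincare ball (requires ||u||, ||v|| < 1)."""
    diff = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return float(np.arccosh(1.0 + 2.0 * diff / denom))

near_origin = np.array([0.05, 0.0])       # abstract / low-confidence region
near_boundary = np.array([0.95, 0.0])     # concrete / high-confidence region
print(poincare_distance(near_origin, near_boundary))
```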
Multiple Instance Captioning: Learning Representations From Histopathology Textbooks and Articles
Jevgenij Gamper, Nasir Rajpoot
We present ARCH, a computational pathology (CP) multiple instance captioning dataset to facilitate dense supervision of CP tasks. Existing CP datasets focus on narrow tasks; ARCH on the other hand contains dense diagnostic and morphological descriptions for a range of stains, tissue types and pathologies. Using intrinsic dimensionality estimation, we show that ARCH is the only CP dataset to (ARCH-)rival its computer vision analog MS-COCO Captions. We conjecture that an encoder pre-trained on dense image captions learns transferable representations for most CP tasks. We support the conjecture with evidence that ARCH representation transfers to a variety of pathology sub-tasks better than ImageNet features or representations obtained via self-supervised or multi-task learning on pathology images alone. We release our best model and invite other researchers to test it on their CP tasks.
https://openaccess.thecvf.com/content/CVPR2021/papers/Gamper_Multiple_Instance_Captioning_Learning_Representations_From_Histopathology_Textbooks_and_Articles_CVPR_2021_paper.pdf
http://arxiv.org/abs/2103.05121
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Gamper_Multiple_Instance_Captioning_Learning_Representations_From_Histopathology_Textbooks_and_Articles_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Gamper_Multiple_Instance_Captioning_Learning_Representations_From_Histopathology_Textbooks_and_Articles_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Gamper_Multiple_Instance_Captioning_CVPR_2021_supplemental.pdf
null
Beyond Max-Margin: Class Margin Equilibrium for Few-Shot Object Detection
Bohao Li, Boyu Yang, Chang Liu, Feng Liu, Rongrong Ji, Qixiang Ye
Few-shot object detection has made encouraging progress by reconstructing novel class objects using the feature representation learned upon a set of base classes. However, an implicit contradiction between reconstruction and classification is unfortunately ignored. On the one hand, to precisely reconstruct novel classes, the distributions of base classes should be close to those of novel classes (min-margin). On the other hand, to perform accurate classification, the distributions of any two classes must be far away from each other (max-margin). In this paper, we propose a class margin equilibrium (CME) approach, with the aim to optimize both feature space partition and novel class reconstruction in a systematic way. CME first converts the few-shot detection problem to a few-shot classification problem by using a fully connected layer to decouple localization features. CME then reserves adequate margin space for novel classes by introducing a simple yet effective class margin loss during feature learning. Finally, CME pursues margin equilibrium by disturbing the features of novel class instances in an adversarial min-max fashion. Experiments on Pascal VOC and MS-COCO datasets show that CME improves two baseline detectors (up to 5% on average), achieving new state-of-the-art performance.
https://openaccess.thecvf.com/content/CVPR2021/papers/Li_Beyond_Max-Margin_Class_Margin_Equilibrium_for_Few-Shot_Object_Detection_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Li_Beyond_Max-Margin_Class_Margin_Equilibrium_for_Few-Shot_Object_Detection_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Li_Beyond_Max-Margin_Class_Margin_Equilibrium_for_Few-Shot_Object_Detection_CVPR_2021_paper.html
CVPR 2021
null
null
Consistent Instance False Positive Improves Fairness in Face Recognition
Xingkun Xu, Yuge Huang, Pengcheng Shen, Shaoxin Li, Jilin Li, Feiyue Huang, Yong Li, Zhen Cui
Demographic bias is a significant challenge in practical face recognition systems. Several methods have been proposed to reduce the bias, which rely on accurate demographic annotations. However, such annotations are usually not available in real scenarios. Moreover, these methods are explicitly designed for a specific demographic group divided by a predefined attribute, which is typically not general across different demographic groups divided by various attributes, such as race, gender, and age. In this paper, we propose a false positive rate penalty loss, which mitigates face recognition bias by increasing the consistency of the instance false positive rate (FPR). Specifically, we first define the instance FPR as the ratio between the number of non-target similarities above a unified threshold and the total number of non-target similarities. The unified threshold is estimated for a given total FPR. Then, we introduce an additional false positive penalty term into the softmax-based losses to promote the consistency of instance FPRs. Compared with previous debiasing methods, our method requires no demographic annotations and can mitigate the bias across demographic groups divided by various kinds of attributes, which do not need to be predefined in training. Extensive experimental results on popular benchmarks demonstrate the superiority of our method over state-of-the-art competitors.
https://openaccess.thecvf.com/content/CVPR2021/papers/Xu_Consistent_Instance_False_Positive_Improves_Fairness_in_Face_Recognition_CVPR_2021_paper.pdf
http://arxiv.org/abs/2106.05519
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Xu_Consistent_Instance_False_Positive_Improves_Fairness_in_Face_Recognition_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Xu_Consistent_Instance_False_Positive_Improves_Fairness_in_Face_Recognition_CVPR_2021_paper.html
CVPR 2021
null
null
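The instance FPR defined in the abstract above -- the fraction of an instance's non-target similarities that exceed a unified threshold chosen for a target total FPR -- is simple enough to write out directly. The quantile-based threshold and the absolute-deviation penalty below are illustrative choices; the paper folds the penalty into a softmax-based loss.

```python
import numpy as np

def instance_fpr(non_target_sims: np.ndarray, threshold: float) -> float:
    """Fraction of an instance's non-target similarities above the threshold."""
    return float((non_target_sims > threshold).mean())

def consistency_penalty(per_instance_sims, total_fpr: float = 1e-3) -> float:
    """Penalize deviation of each instance FPR from the batch mean FPR."""
    all_sims = np.concatenate(per_instance_sims)
    threshold = np.quantile(all_sims, 1.0 - total_fpr)   # unified threshold for a target total FPR
    fprs = np.array([instance_fpr(s, threshold) for s in per_instance_sims])
    return float(np.abs(fprs - fprs.mean()).mean())

rng = np.random.default_rng(1)
sims = [rng.uniform(-1, 1, size=500) for _ in range(8)]
print(consistency_penalty(sims))
```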
Learning Dynamic Network Using a Reuse Gate Function in Semi-Supervised Video Object Segmentation
Hyojin Park, Jayeon Yoo, Seohyeong Jeong, Ganesh Venkatesh, Nojun Kwak
Current state-of-the-art approaches for Semi-supervised Video Object Segmentation (Semi-VOS) propagate information from previous frames to generate a segmentation mask for the current frame. This results in high-quality segmentation across challenging scenarios such as changes in appearance and occlusion. But it also leads to unnecessary computations for stationary or slow-moving objects where the change across frames is minimal. In this work, we exploit this observation by using temporal information to quickly identify frames with minimal change and skip the heavyweight mask generation step. To realize this efficiency, we propose a novel dynamic network that estimates change across frames and decides which path -- computing a full network or reusing the previous frame's features -- to choose depending on the expected similarity. Experimental results show that our approach significantly improves inference speed without much accuracy degradation on challenging Semi-VOS datasets -- DAVIS 16, DAVIS 17, and YouTube-VOS. Furthermore, our approach can be applied to multiple Semi-VOS methods, demonstrating its generality. The code is available at https://github.com/HYOJINPARK/Reuse VOS.
https://openaccess.thecvf.com/content/CVPR2021/papers/Park_Learning_Dynamic_Network_Using_a_Reuse_Gate_Function_in_Semi-Supervised_CVPR_2021_paper.pdf
http://arxiv.org/abs/2012.11655
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Park_Learning_Dynamic_Network_Using_a_Reuse_Gate_Function_in_Semi-Supervised_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Park_Learning_Dynamic_Network_Using_a_Reuse_Gate_Function_in_Semi-Supervised_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Park_Learning_Dynamic_Network_CVPR_2021_supplemental.pdf
null
RaScaNet: Learning Tiny Models by Raster-Scanning Images
Jaehyoung Yoo, Dongwook Lee, Changyong Son, Sangil Jung, ByungIn Yoo, Changkyu Choi, Jae-Joon Han, Bohyung Han
Deploying deep convolutional neural networks on ultra-low power systems is challenging due to the extremely limited resources. In particular, memory becomes a bottleneck as these systems put a hard limit on the size of on-chip memory. Because the peak memory explosion in the lower layers is critical even in tiny models, the size of the input image must be reduced at the cost of accuracy. To overcome this drawback, we propose a novel Raster-Scanning Network, named RaScaNet, inspired by raster-scanning in image sensors. RaScaNet reads only a few rows of pixels at a time using a convolutional neural network and then sequentially learns the representation of the whole image using a recurrent neural network. The proposed method operates on an ultra-low power system without input size reduction; it requires 15.9-24.3x smaller peak memory and 5.3-12.9x smaller weight memory than the state-of-the-art tiny models. Moreover, RaScaNet fully exploits the on-chip SRAM and cache memory of the system, as the sum of the peak memory and the weight memory does not exceed 60 KB, improving the power efficiency of the system. In our experiments, we demonstrate the binary classification performance of RaScaNet on the Visual Wake Words and Pascal VOC datasets.
https://openaccess.thecvf.com/content/CVPR2021/papers/Yoo_RaScaNet_Learning_Tiny_Models_by_Raster-Scanning_Images_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Yoo_RaScaNet_Learning_Tiny_Models_by_Raster-Scanning_Images_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Yoo_RaScaNet_Learning_Tiny_Models_by_Raster-Scanning_Images_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Yoo_RaScaNet_Learning_Tiny_CVPR_2021_supplemental.pdf
null
AGQA: A Benchmark for Compositional Spatio-Temporal Reasoning
Madeleine Grunde-McLaughlin, Ranjay Krishna, Maneesh Agrawala
Visual events are a composition of temporal actions involving actors spatially interacting with objects. When developing computer vision models that can reason about compositional spatio-temporal events, we need benchmarks that can analyze progress and uncover shortcomings. Existing video question answering benchmarks are useful, but they often conflate multiple sources of error into one accuracy metric and have strong biases that models can exploit, making it difficult to pinpoint model weaknesses. We present Action Genome Question Answering (AGQA), a new benchmark for compositional spatio-temporal reasoning. AGQA contains 192M unbalanced question answer pairs for 9.6K videos. We also provide a balanced subset of 3.9M question answer pairs, 3 orders of magnitude larger than existing benchmarks, that minimizes bias by balancing the answer distributions and types of question structures. Although human evaluators marked 86.02% of our question-answer pairs as correct, the best model achieves only 47.74% accuracy. In addition, AGQA introduces multiple training/test splits to test for various reasoning abilities, including generalization to novel compositions, to indirect references, and to more compositional steps. Using AGQA, we evaluate modern visual reasoning systems, demonstrating that the best models barely perform better than non-visual baselines exploiting linguistic biases and that none of the existing models generalize to novel compositions unseen during training.
https://openaccess.thecvf.com/content/CVPR2021/papers/Grunde-McLaughlin_AGQA_A_Benchmark_for_Compositional_Spatio-Temporal_Reasoning_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Grunde-McLaughlin_AGQA_A_Benchmark_for_Compositional_Spatio-Temporal_Reasoning_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Grunde-McLaughlin_AGQA_A_Benchmark_for_Compositional_Spatio-Temporal_Reasoning_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Grunde-McLaughlin_AGQA_A_Benchmark_CVPR_2021_supplemental.pdf
null
Exploring intermediate representation for monocular vehicle pose estimation
Shichao Li, Zengqiang Yan, Hongyang Li, Kwang-Ting Cheng
We present a new learning-based framework to recover vehicle pose in SO(3) from a single RGB image. In contrast to previous works that map local appearance to observation angles, we explore a progressive approach by extracting meaningful Intermediate Geometrical Representations (IGRs) to estimate egocentric vehicle orientation. This approach features a deep model that transforms perceived intensities to IGRs, which are mapped to a 3D representation encoding object orientation in the camera coordinate system. The core problems are what IGRs to use and how to learn them more effectively. We answer the former question by designing IGRs based on an interpolated cuboid that can readily be derived from primitive 3D annotations. The latter question motivates us to incorporate geometry knowledge with a new loss function based on a projective invariant. This loss function allows unlabeled data to be used in the training stage to improve representation learning. Without additional labels, our system outperforms previous monocular RGB-based methods for joint vehicle detection and pose estimation on the KITTI benchmark, achieving performance even comparable to stereo methods. Code and pre-trained models are available at this HTTPS URL.
https://openaccess.thecvf.com/content/CVPR2021/papers/Li_Exploring_intermediate_representation_for_monocular_vehicle_pose_estimation_CVPR_2021_paper.pdf
http://arxiv.org/abs/2011.08464
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Li_Exploring_intermediate_representation_for_monocular_vehicle_pose_estimation_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Li_Exploring_intermediate_representation_for_monocular_vehicle_pose_estimation_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Li_Exploring_intermediate_representation_CVPR_2021_supplemental.zip
null
Shallow Feature Matters for Weakly Supervised Object Localization
Jun Wei, Qin Wang, Zhen Li, Sheng Wang, S. Kevin Zhou, Shuguang Cui
Weakly supervised object localization (WSOL) aims to localize objects by only utilizing image-level labels. Class activation maps (CAMs) are the commonly used features to achieve WSOL. However, previous CAM-based methods did not take full advantage of the shallow features, despite their importance for WSOL, because shallow features are easily buried in background noise through conventional fusion. In this paper, we propose a simple but effective Shallow feature-aware Pseudo supervised Object Localization (SPOL) model for accurate WSOL, which makes the utmost of low-level features embedded in shallow layers. In practice, our SPOL model first generates the CAMs through a novel element-wise multiplication of shallow and deep feature maps, which filters the background noise and generates sharper boundaries robustly. Besides, we further propose a general class-agnostic segmentation model to achieve an accurate object mask, by only using the initial CAMs as the pseudo label without any extra annotation. Eventually, a bounding box extractor is applied to the object mask to locate the target. Experiments verify that our SPOL outperforms the state-of-the-art on both CUB-200 and ImageNet-1K benchmarks, achieving 93.44% and 67.15% (i.e., 3.93% and 2.13% improvement) Top-5 localization accuracy, respectively.
https://openaccess.thecvf.com/content/CVPR2021/papers/Wei_Shallow_Feature_Matters_for_Weakly_Supervised_Object_Localization_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Wei_Shallow_Feature_Matters_for_Weakly_Supervised_Object_Localization_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Wei_Shallow_Feature_Matters_for_Weakly_Supervised_Object_Localization_CVPR_2021_paper.html
CVPR 2021
null
null
Capturing Omni-Range Context for Omnidirectional Segmentation
Kailun Yang, Jiaming Zhang, Simon Reiss, Xinxin Hu, Rainer Stiefelhagen
Convolutional Networks (ConvNets) excel at semantic segmentation and have become a vital component for perception in autonomous driving. Enabling an all-encompassing view of street-scenes, omnidirectional cameras present themselves as a perfect fit in such systems. Most segmentation models for parsing urban environments operate on common, narrow Field of View (FoV) images. When these models are transferred from the domain they were designed for to 360-degree perception, their performance drops dramatically, e.g., by an absolute 30.0% (mIoU) on established test-beds. To bridge the gap in terms of FoV and structural distribution between the imaging domains, we introduce Efficient Concurrent Attention Networks (ECANets), directly capturing the inherent long-range dependencies in omnidirectional imagery. In addition to the learned attention-based contextual priors that can stretch across 360-degree images, we upgrade model training by leveraging multi-source and omni-supervised learning, taking advantage of both densely labeled and unlabeled data originating from multiple datasets. To foster progress in panoramic image segmentation, we put forward and extensively evaluate models on Wild PAnoramic Semantic Segmentation (WildPASS), a dataset designed to capture diverse scenes from all around the globe. Our novel model, training regimen and multi-source prediction fusion elevate the performance (mIoU) to new state-of-the-art results on the public PASS (60.2%) and the fresh WildPASS (69.0%) benchmarks.
https://openaccess.thecvf.com/content/CVPR2021/papers/Yang_Capturing_Omni-Range_Context_for_Omnidirectional_Segmentation_CVPR_2021_paper.pdf
http://arxiv.org/abs/2103.05687
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Yang_Capturing_Omni-Range_Context_for_Omnidirectional_Segmentation_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Yang_Capturing_Omni-Range_Context_for_Omnidirectional_Segmentation_CVPR_2021_paper.html
CVPR 2021
null
null
PLADE-Net: Towards Pixel-Level Accuracy for Self-Supervised Single-View Depth Estimation With Neural Positional Encoding and Distilled Matting Loss
Juan Luis Gonzalez, Munchurl Kim
In this paper, we propose a self-supervised single-view pixel-level accurate depth estimation network, called PLADE-Net. The PLADE-Net is the first work that shows unprecedented accuracy levels, exceeding 95% in terms of the \delta^1 metric on the challenging KITTI dataset. Our PLADE-Net is based on a new network architecture with neural positional encoding and a novel loss function that borrows from the closed-form solution of the matting Laplacian to learn pixel-level accurate depth estimation from stereo images. Neural positional encoding allows our PLADE-Net to obtain more consistent depth estimates by letting the network reason about location-specific image properties such as lens and projection distortions. Our novel distilled matting Laplacian loss allows our network to predict sharp depths at object boundaries and more consistent depths in highly homogeneous regions. Our proposed method outperforms all previous self-supervised single-view depth estimation methods by a large margin on the challenging KITTI dataset, with unprecedented levels of accuracy. Furthermore, our PLADE-Net, naively extended for stereo inputs, outperforms the most recent self-supervised stereo methods, even without any advanced blocks like 1D correlations, 3D convolutions, or spatial pyramid pooling. We present extensive ablation studies and experiments that support our method's effectiveness on the KITTI, CityScapes, and Make3D datasets.
https://openaccess.thecvf.com/content/CVPR2021/papers/Gonzalez_PLADE-Net_Towards_Pixel-Level_Accuracy_for_Self-Supervised_Single-View_Depth_Estimation_With_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Gonzalez_PLADE-Net_Towards_Pixel-Level_Accuracy_for_Self-Supervised_Single-View_Depth_Estimation_With_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Gonzalez_PLADE-Net_Towards_Pixel-Level_Accuracy_for_Self-Supervised_Single-View_Depth_Estimation_With_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Gonzalez_PLADE-Net_Towards_Pixel-Level_CVPR_2021_supplemental.pdf
null
Reciprocal Landmark Detection and Tracking With Extremely Few Annotations
Jianzhe Lin, Ghazal Sahebzamani, Christina Luong, Fatemeh Taheri Dezaki, Mohammad Jafari, Purang Abolmaesumi, Teresa Tsang
Localization of anatomical landmarks to perform two-dimensional measurements in echocardiography is part of the routine clinical workflow in cardiac disease diagnosis. Automatic localization of those landmarks is highly desirable to improve workflow and reduce interobserver variability. Training a machine learning framework to perform such localization is hindered by the sparse nature of gold standard labels; only a few percent of cardiac cine series frames are normally manually labeled for clinical use. In this paper, we propose a new end-to-end reciprocal detection and tracking model that is specifically designed to handle the sparse nature of echocardiography labels. The model is trained using a few annotated frames across the entire cardiac cine sequence to generate consistent detection and tracking of landmarks, and adversarial training is proposed for the model to take advantage of these annotated frames. The superiority of the proposed reciprocal model is demonstrated using a series of experiments.
https://openaccess.thecvf.com/content/CVPR2021/papers/Lin_Reciprocal_Landmark_Detection_and_Tracking_With_Extremely_Few_Annotations_CVPR_2021_paper.pdf
http://arxiv.org/abs/2101.11224
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Lin_Reciprocal_Landmark_Detection_and_Tracking_With_Extremely_Few_Annotations_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Lin_Reciprocal_Landmark_Detection_and_Tracking_With_Extremely_Few_Annotations_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Lin_Reciprocal_Landmark_Detection_CVPR_2021_supplemental.pdf
null
Practical Single-Image Super-Resolution Using Look-Up Table
Younghyun Jo, Seon Joo Kim
A number of super-resolution (SR) algorithms, from interpolation to deep neural networks (DNNs), have emerged to restore or create missing details of the input low-resolution image. As mobile devices and display hardware develop, the demand for practical SR technology has increased. Current state-of-the-art SR methods are based on DNNs for better quality. However, they are feasible only when executed on a parallel computing module (e.g., GPUs), and have been difficult to apply to general uses such as end-user software, smartphones, and televisions. To this end, we propose an efficient and practical approach to SR by adopting a look-up table (LUT). We train a deep SR network with a small receptive field and transfer the output values of the learned deep model to the LUT. At test time, we retrieve the precomputed HR output values from the LUT for query LR input pixels. The proposed method can be performed very quickly because it does not require a large number of floating point operations. Experimental results show the efficiency and the effectiveness of our method. In particular, our method runs faster while showing better quality compared to bicubic interpolation.
https://openaccess.thecvf.com/content/CVPR2021/papers/Jo_Practical_Single-Image_Super-Resolution_Using_Look-Up_Table_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Jo_Practical_Single-Image_Super-Resolution_Using_Look-Up_Table_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Jo_Practical_Single-Image_Super-Resolution_Using_Look-Up_Table_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Jo_Practical_Single-Image_Super-Resolution_CVPR_2021_supplemental.pdf
null
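The look-up-table idea described in the abstract above can be illustrated with a minimal Python sketch. Everything here is a simplifying assumption made for readability, not the released implementation: a toy stand-in replaces the trained SR network, and a 2x2 receptive field, 16 quantization levels and a nearest-neighbour lookup replace the paper's finer sampling and interpolated lookups. It only shows how precomputing outputs for every quantized input patch turns inference into table indexing.

```python
import numpy as np

LEVELS = 16   # assumed quantization levels per input pixel
SCALE = 2     # assumed upscaling factor

def tiny_sr_net(patch):
    """Stand-in for a trained SR network with a 2x2 receptive field.
    Maps a 2x2 patch with values in [0, 1] to a SCALE x SCALE output patch."""
    return np.full((SCALE, SCALE), patch.mean())

# Precompute the LUT by enumerating every quantized 2x2 input patch.
grid = np.linspace(0.0, 1.0, LEVELS)
lut = np.zeros((LEVELS,) * 4 + (SCALE, SCALE), dtype=np.float32)
for i0 in range(LEVELS):
    for i1 in range(LEVELS):
        for i2 in range(LEVELS):
            for i3 in range(LEVELS):
                patch = np.array([[grid[i0], grid[i1]],
                                  [grid[i2], grid[i3]]])
                lut[i0, i1, i2, i3] = tiny_sr_net(patch)

def upscale(img):
    """Upscale by looking up each 2x2 patch in the LUT (nearest level, no interpolation)."""
    h, w = img.shape
    out = np.zeros((h * SCALE, w * SCALE), dtype=np.float32)
    idx = np.clip(np.round(img * (LEVELS - 1)).astype(int), 0, LEVELS - 1)
    for y in range(h):
        for x in range(w):
            i0 = idx[y, x]
            i1 = idx[y, min(x + 1, w - 1)]
            i2 = idx[min(y + 1, h - 1), x]
            i3 = idx[min(y + 1, h - 1), min(x + 1, w - 1)]
            out[y * SCALE:(y + 1) * SCALE, x * SCALE:(x + 1) * SCALE] = lut[i0, i1, i2, i3]
    return out

low_res = np.random.rand(8, 8).astype(np.float32)
print(upscale(low_res).shape)   # (16, 16)
```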
Removing the Background by Adding the Background: Towards Background Robust Self-Supervised Video Representation Learning
Jinpeng Wang, Yuting Gao, Ke Li, Yiqi Lin, Andy J. Ma, Hao Cheng, Pai Peng, Feiyue Huang, Rongrong Ji, Xing Sun
Self-supervised learning has shown great potential in improving the video representation ability of deep neural networks by getting supervision from the data itself. However, some of the current methods tend to cheat from the background, i.e., the prediction is highly dependent on the video background instead of the motion, making the model vulnerable to background changes. To mitigate the model's reliance on the background, we propose to remove the background impact by adding the background. That is, given a video, we randomly select a static frame and add it to every other frame to construct a distracting video sample. Then we force the model to pull the feature of the distracting video and the feature of the original video closer, so that the model is explicitly restricted to resist the background influence, focusing more on the motion changes. We term our method Background Erasing (BE). It is worth noting that the implementation of our method is simple and neat and can be added to most of the SOTA methods without much effort. Specifically, BE brings 16.4% and 19.1% improvements with MoCo on the severely biased datasets UCF101 and HMDB51, and 14.5% improvement on the less biased dataset Diving48.
https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_Removing_the_Background_by_Adding_the_Background_Towards_Background_Robust_CVPR_2021_paper.pdf
http://arxiv.org/abs/2009.05769
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Wang_Removing_the_Background_by_Adding_the_Background_Towards_Background_Robust_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Wang_Removing_the_Background_by_Adding_the_Background_Towards_Background_Robust_CVPR_2021_paper.html
CVPR 2021
null
null
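As a rough illustration of the Background Erasing augmentation described above, the sketch below blends one randomly chosen frame of a clip into every frame, producing a "distracting" clip with identical motion but an emphasized static background. The blending weight alpha is an assumed parameter, not a value from the paper, and the feature-pulling loss applied afterwards is omitted.

```python
import numpy as np

def background_erasing(clip, alpha=0.3, rng=None):
    """Build a 'distracting' clip by blending one randomly selected frame into
    every frame; motion is preserved while the static background is emphasized.
    alpha is an assumed blending weight, not taken from the paper."""
    rng = rng or np.random.default_rng()
    static_frame = clip[rng.integers(clip.shape[0])]
    distracting = (1.0 - alpha) * clip + alpha * static_frame[None]
    return distracting.astype(clip.dtype)

clip = np.random.rand(16, 112, 112, 3).astype(np.float32)   # T, H, W, C
distracting_clip = background_erasing(clip)
print(distracting_clip.shape)   # (16, 112, 112, 3)
```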
GDR-Net: Geometry-Guided Direct Regression Network for Monocular 6D Object Pose Estimation
Gu Wang, Fabian Manhardt, Federico Tombari, Xiangyang Ji
6D pose estimation from a single RGB image is a fundamental task in computer vision. The current top-performing deep learning-based methods rely on an indirect strategy, i.e., first establishing 2D-3D correspondences between the coordinates in the image plane and object coordinate system, and then applying a variant of the PnP/RANSAC algorithm. However, this two-stage pipeline is not end-to-end trainable, and thus is hard to employ for many tasks requiring differentiable poses. On the other hand, methods based on direct regression are currently inferior to geometry-based methods. In this work, we perform an in-depth investigation on both direct and indirect methods, and propose a simple yet effective Geometry-guided Direct Regression Network (GDR-Net) to learn the 6D pose in an end-to-end manner from dense correspondence-based intermediate geometric representations. Extensive experiments show that our approach remarkably outperforms state-of-the-art methods on the LM, LM-O and YCB-V datasets. Code is available at https://git.io/GDR-Net.
https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_GDR-Net_Geometry-Guided_Direct_Regression_Network_for_Monocular_6D_Object_Pose_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Wang_GDR-Net_Geometry-Guided_Direct_Regression_Network_for_Monocular_6D_Object_Pose_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Wang_GDR-Net_Geometry-Guided_Direct_Regression_Network_for_Monocular_6D_Object_Pose_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Wang_GDR-Net_Geometry-Guided_Direct_CVPR_2021_supplemental.pdf
null
Point Cloud Upsampling via Disentangled Refinement
Ruihui Li, Xianzhi Li, Pheng-Ann Heng, Chi-Wing Fu
Point clouds produced by 3D scanning are often sparse, non-uniform, and noisy. Recent upsampling approaches aim to generate a dense point set, while achieving both distribution uniformity and proximity-to-surface, and possibly amending small holes, all in a single network. After revisiting the task, we propose to disentangle the task based on its multi-objective nature and formulate two cascaded sub-networks, a dense generator and a spatial refiner. The dense generator infers a coarse but dense output that roughly describes the underlying surface, while the spatial refiner further fine-tunes the coarse output by adjusting the location of each point. Specifically, we design a pair of local and global refinement units in the spatial refiner to evolve a coarse feature map. Also, in the spatial refiner, we regress a per-point offset vector to further adjust the coarse outputs in fine scale. Extensive qualitative and quantitative results on both synthetic and real-scanned datasets demonstrate the superiority of our method over the state-of-the-arts.
https://openaccess.thecvf.com/content/CVPR2021/papers/Li_Point_Cloud_Upsampling_via_Disentangled_Refinement_CVPR_2021_paper.pdf
http://arxiv.org/abs/2106.04779
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Li_Point_Cloud_Upsampling_via_Disentangled_Refinement_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Li_Point_Cloud_Upsampling_via_Disentangled_Refinement_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Li_Point_Cloud_Upsampling_CVPR_2021_supplemental.pdf
null
Feature-Level Collaboration: Joint Unsupervised Learning of Optical Flow, Stereo Depth and Camera Motion
Cheng Chi, Qingjie Wang, Tianyu Hao, Peng Guo, Xin Yang
Precise estimation of optical flow, stereo depth and camera motion is important for real-world 3D scene understanding and visual perception. Since the three tasks are tightly coupled by inherent 3D geometric constraints, current studies have demonstrated that the three tasks can be improved through jointly optimizing geometric loss functions of several individual networks. In this paper, we show that effective feature-level collaboration of the networks for the three respective tasks can achieve much greater performance improvement for all three tasks than loss-level joint optimization alone. Specifically, we propose a single network to combine and improve the three tasks. The network extracts the features of two consecutive stereo images, and simultaneously estimates optical flow, stereo depth and camera motion. The whole network mainly contains four parts: (I) a feature-sharing encoder to extract features of input images, which can enhance the features' representation ability; (II) a pooled decoder to estimate both optical flow and stereo depth; (III) a camera pose estimation module which fuses optical flow and stereo depth information; (IV) a cost volume complement module to improve the performance of optical flow in static and occluded regions. Our method achieves state-of-the-art performance among joint unsupervised methods, including optical flow and stereo depth estimation on the KITTI 2012 and 2015 benchmarks, and camera motion estimation on the KITTI VO dataset.
https://openaccess.thecvf.com/content/CVPR2021/papers/Chi_Feature-Level_Collaboration_Joint_Unsupervised_Learning_of_Optical_Flow_Stereo_Depth_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Chi_Feature-Level_Collaboration_Joint_Unsupervised_Learning_of_Optical_Flow_Stereo_Depth_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Chi_Feature-Level_Collaboration_Joint_Unsupervised_Learning_of_Optical_Flow_Stereo_Depth_CVPR_2021_paper.html
CVPR 2021
null
null
A Generalized Loss Function for Crowd Counting and Localization
Jia Wan, Ziquan Liu, Antoni B. Chan
Previous work shows that a better density map representation can improve the performance of crowd counting. In this paper, we investigate learning the density map representation through an unbalanced optimal transport problem, and propose a generalized loss function to learn density maps for crowd counting and localization. We prove that pixel-wise L2 loss and Bayesian loss are special cases and suboptimal solutions to our proposed loss function. A perspective-guided transport cost function is further proposed to better handle the perspective transformation in crowd images. Since the predicted density will be pushed toward annotation positions, the density map prediction will be sparse and can naturally be used for localization. Finally, the proposed loss outperforms other losses on four large-scale datasets for counting, and achieves the best localization performance on NWPU-Crowd and UCF-QNRF.
https://openaccess.thecvf.com/content/CVPR2021/papers/Wan_A_Generalized_Loss_Function_for_Crowd_Counting_and_Localization_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Wan_A_Generalized_Loss_Function_for_Crowd_Counting_and_Localization_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Wan_A_Generalized_Loss_Function_for_Crowd_Counting_and_Localization_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Wan_A_Generalized_Loss_CVPR_2021_supplemental.pdf
null
Learning Fine-Grained Segmentation of 3D Shapes Without Part Labels
Xiaogang Wang, Xun Sun, Xinyu Cao, Kai Xu, Bin Zhou
Existing learning-based approaches to 3D shape segmentation usually formulate it as a semantic labeling problem, assuming that all parts of training shapes are annotated with a given set of labels. This assumption, however, is unrealistic for training fine-grained segmentation on large datasets since the annotation of fine-grained parts is extremely tedious. In this paper, we approach the problem with deep clustering, where the key idea is to learn part priors from a dataset with fine-grained segmentation but no part annotations. Given point sampled 3D shapes, we model the clustering priors of points with a similarity matrix and achieve part-based segmentation through minimizing a novel low rank loss. Further, since fine-grained parts can be very tiny, a 3D shape has to be densely sampled to ensure the tiny parts are well captured and segmented. To handle densely sampled point sets, we adopt a divide-and-conquer scheme. We first partition the large point set into a number of blocks. Each block is segmented using a deep-clustering-based part prior network (PriorNet) trained in a category-agnostic manner. We then train MergeNet, a graph convolution network, to merge the segments of all blocks to form the final segmentation result. Our method is evaluated with a challenging benchmark of fine-grained segmentation, showing significant advantage over the state-of-the-art ones.
https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_Learning_Fine-Grained_Segmentation_of_3D_Shapes_Without_Part_Labels_CVPR_2021_paper.pdf
http://arxiv.org/abs/2103.13030
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Wang_Learning_Fine-Grained_Segmentation_of_3D_Shapes_Without_Part_Labels_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Wang_Learning_Fine-Grained_Segmentation_of_3D_Shapes_Without_Part_Labels_CVPR_2021_paper.html
CVPR 2021
null
null
Fine-Grained Shape-Appearance Mutual Learning for Cloth-Changing Person Re-Identification
Peixian Hong, Tao Wu, Ancong Wu, Xintong Han, Wei-Shi Zheng
Recently, person re-identification (Re-ID) has achieved great progress. However, current methods largely depend on color appearance, which is not reliable when a person changes clothes. Cloth-changing Re-ID is challenging since pedestrian images with clothes change exhibit large intra-class variation and small inter-class variation. Some significant features for identification are embedded in unobvious body shape differences across pedestrians. To explore such body shape cues for cloth-changing Re-ID, we propose a Fine-grained Shape-Appearance Mutual learning framework (FSAM), a two-stream framework that learns fine-grained discriminative body shape knowledge in a shape stream and transfers it to an appearance stream to complement the cloth-unrelated knowledge in the appearance features. Specifically, in the shape stream, FSAM learns a fine-grained discriminative mask with the guidance of identities and extracts fine-grained body shape features with a pose-specific multi-branch network. To complement cloth-unrelated shape knowledge in the appearance stream, dense interactive mutual learning is performed across low-level and high-level features to transfer knowledge from the shape stream to the appearance stream, which enables the appearance stream to be deployed independently without extra computation for mask estimation. We evaluated our method on benchmark cloth-changing Re-ID datasets and achieved state-of-the-art performance.
https://openaccess.thecvf.com/content/CVPR2021/papers/Hong_Fine-Grained_Shape-Appearance_Mutual_Learning_for_Cloth-Changing_Person_Re-Identification_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Hong_Fine-Grained_Shape-Appearance_Mutual_Learning_for_Cloth-Changing_Person_Re-Identification_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Hong_Fine-Grained_Shape-Appearance_Mutual_Learning_for_Cloth-Changing_Person_Re-Identification_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Hong_Fine-Grained_Shape-Appearance_Mutual_CVPR_2021_supplemental.pdf
null
DeepSurfels: Learning Online Appearance Fusion
Marko Mihajlovic, Silvan Weder, Marc Pollefeys, Martin R. Oswald
We present DeepSurfels, a novel hybrid scene representation for geometry and appearance information. DeepSurfels combines explicit and neural building blocks to jointly encode geometry and appearance information. In contrast to established representations, DeepSurfels better represents high-frequency textures, is well-suited for online updates of appearance information, and can be easily combined with machine learning methods. We further present an end-to-end trainable online appearance fusion pipeline that fuses information from RGB images into the proposed scene representation and is trained using self-supervision imposed by the reprojection error with respect to the input images. Our method compares favorably to classical texture mapping approaches as well as recent learning-based techniques. Moreover, we demonstrate lower runtime, improved generalization capabilities, and better scalability to larger scenes compared to existing methods.
https://openaccess.thecvf.com/content/CVPR2021/papers/Mihajlovic_DeepSurfels_Learning_Online_Appearance_Fusion_CVPR_2021_paper.pdf
http://arxiv.org/abs/2012.14240
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Mihajlovic_DeepSurfels_Learning_Online_Appearance_Fusion_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Mihajlovic_DeepSurfels_Learning_Online_Appearance_Fusion_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Mihajlovic_DeepSurfels_Learning_Online_CVPR_2021_supplemental.pdf
null
Joint Negative and Positive Learning for Noisy Labels
Youngdong Kim, Juseung Yun, Hyounguk Shon, Junmo Kim
Training Convolutional Neural Networks (CNNs) on data with noisy labels is known to be a challenge. Based on the fact that directly providing the label to the data (Positive Learning; PL) risks allowing CNNs to memorize contaminated labels in the case of noisy data, the indirect learning approach that uses complementary labels (Negative Learning for Noisy Labels; NLNL) has proven to be highly effective in preventing overfitting to noisy data, as it reduces the risk of providing faulty targets. NLNL further employs a three-stage pipeline to improve convergence. As a result, filtering noisy data through the NLNL pipeline is cumbersome, increasing the training cost. In this study, we propose a novel improvement of NLNL, named Joint Negative and Positive Learning (JNPL), that unifies the filtering pipeline into a single stage. JNPL trains the CNN via two losses, NL+ and PL+, which improve upon the NL and PL loss functions, respectively. We analyze the fundamental issue of the NL loss function and develop a new NL+ loss function whose gradient enhances convergence on noisy data. Furthermore, the PL+ loss function is designed to enable faster convergence to expected-to-be-clean data. We show that NL+ and PL+ train the CNN simultaneously, significantly simplifying the pipeline and allowing greater ease of practical use compared to NLNL. With a simple semi-supervised training technique, our method achieves state-of-the-art accuracy for noisy data classification based on its superior filtering ability.
https://openaccess.thecvf.com/content/CVPR2021/papers/Kim_Joint_Negative_and_Positive_Learning_for_Noisy_Labels_CVPR_2021_paper.pdf
http://arxiv.org/abs/2104.06574
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Kim_Joint_Negative_and_Positive_Learning_for_Noisy_Labels_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Kim_Joint_Negative_and_Positive_Learning_for_Noisy_Labels_CVPR_2021_paper.html
CVPR 2021
null
null
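For context on the negative-learning idea that JNPL builds on, the snippet below sketches the classical NL loss from the earlier NLNL work: given a complementary label ("this sample does not belong to class y_bar"), it minimizes -log(1 - p_y_bar). The improved NL+ and PL+ losses proposed in this paper are not reproduced here, and all arrays are random placeholders.

```python
import numpy as np

def negative_learning_loss(logits, complementary_labels):
    """Classical negative-learning loss: for a complementary label y_bar
    ("this sample does NOT belong to class y_bar"), minimize -log(1 - p_y_bar).
    JNPL's NL+ and PL+ losses refine this baseline and are not shown here."""
    logits = logits - logits.max(axis=1, keepdims=True)          # stable softmax
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    p_comp = probs[np.arange(len(probs)), complementary_labels]  # p_y_bar per sample
    return float(-np.log(1.0 - p_comp + 1e-7).mean())

rng = np.random.default_rng(0)
logits = rng.standard_normal((4, 10))           # 4 samples, 10 classes
complementary = rng.integers(0, 10, size=4)     # "not this class" labels
print(negative_learning_loss(logits, complementary))
```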
Generalizing Face Forgery Detection With High-Frequency Features
Yuchen Luo, Yong Zhang, Junchi Yan, Wei Liu
Current face forgery detection methods achieve high accuracy under the within-database scenario where training and testing forgeries are synthesized by the same algorithm. However, few of them gain satisfying performance under the cross-database scenario where training and testing forgeries are synthesized by different algorithms. In this paper, we find that current CNN-based detectors tend to overfit to method-specific color textures and thus fail to generalize. Observing that image noises remove color textures and expose discrepancies between authentic and tampered regions, we propose to utilize the high-frequency noises for face forgery detection. We carefully devise three functional modules to take full advantage of the high-frequency features. The first is the multi-scale high-frequency feature extraction module that extracts high-frequency noises at multiple scales and composes a novel modality. The second is the residual-guided spatial attention module that guides the low-level RGB feature extractor to concentrate more on forgery traces from a new perspective. The last is the cross-modality attention module that leverages the correlation between the two complementary modalities to promote feature learning for each other. Comprehensive evaluations on several benchmark databases corroborate the superior generalization performance of our proposed method.
https://openaccess.thecvf.com/content/CVPR2021/papers/Luo_Generalizing_Face_Forgery_Detection_With_High-Frequency_Features_CVPR_2021_paper.pdf
http://arxiv.org/abs/2103.12376
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Luo_Generalizing_Face_Forgery_Detection_With_High-Frequency_Features_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Luo_Generalizing_Face_Forgery_Detection_With_High-Frequency_Features_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Luo_Generalizing_Face_Forgery_CVPR_2021_supplemental.pdf
null
The Heterogeneity Hypothesis: Finding Layer-Wise Differentiated Network Architectures
Yawei Li, Wen Li, Martin Danelljan, Kai Zhang, Shuhang Gu, Luc Van Gool, Radu Timofte
In this paper, we tackle the problem of convolutional neural network design. Instead of focusing on the design of the overall architecture, we investigate a design space that is usually overlooked, i.e. adjusting the channel configurations of predefined networks. We find that this adjustment can be achieved by shrinking widened baseline networks and leads to superior performance. Based on that, we articulate the "heterogeneity hypothesis": with the same training protocol, there exists a layer-wise differentiated network architecture (LW-DNA) that can outperform the original network with regular channel configurations but with a lower level of model complexity. The LW-DNA models are identified without extra computational cost or training time compared with the original network. This constraint leads to controlled experiments which direct the focus to the importance of layer-wise specific channel configurations. LW-DNA models come with advantages related to overfitting, i.e. the relative relationship between model complexity and dataset size. Experiments are conducted on various networks and datasets for image classification, visual tracking and image restoration. The resultant LW-DNA models consistently outperform the baseline models. Code is available at https://github.com/ofsoundof/Heterogeneity_Hypothesis.git.
https://openaccess.thecvf.com/content/CVPR2021/papers/Li_The_Heterogeneity_Hypothesis_Finding_Layer-Wise_Differentiated_Network_Architectures_CVPR_2021_paper.pdf
http://arxiv.org/abs/2006.16242
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Li_The_Heterogeneity_Hypothesis_Finding_Layer-Wise_Differentiated_Network_Architectures_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Li_The_Heterogeneity_Hypothesis_Finding_Layer-Wise_Differentiated_Network_Architectures_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Li_The_Heterogeneity_Hypothesis_CVPR_2021_supplemental.pdf
null
Robust Neural Routing Through Space Partitions for Camera Relocalization in Dynamic Indoor Environments
Siyan Dong, Qingnan Fan, He Wang, Ji Shi, Li Yi, Thomas Funkhouser, Baoquan Chen, Leonidas J. Guibas
Localizing the camera in a known indoor environment is a key building block for scene mapping, robot navigation, AR, etc. Recent advances estimate the camera pose via optimization over the 2D/3D-3D correspondences established between the coordinates in 2D/3D camera space and 3D world space. Such a mapping is estimated with either a convolutional neural network or a decision tree using only the static input image sequence, which makes these approaches vulnerable to dynamic indoor environments that are quite common yet challenging in the real world. To address the aforementioned issues, in this paper, we propose a novel outlier-aware neural tree which bridges the two worlds, deep learning and decision tree approaches. It builds on three important blocks: (a) a hierarchical space partition over the indoor scene to construct the decision tree; (b) a neural routing function, implemented as a deep classification network, employed for better 3D scene understanding; and (c) an outlier rejection module used to filter out dynamic points during the hierarchical routing process. Our proposed algorithm is evaluated on the RIO-10 benchmark developed for camera relocalization in dynamic indoor environments. It achieves robust neural routing through space partitions and outperforms the state-of-the-art approaches by around 30% on camera pose accuracy, while running comparably fast for evaluation.
https://openaccess.thecvf.com/content/CVPR2021/papers/Dong_Robust_Neural_Routing_Through_Space_Partitions_for_Camera_Relocalization_in_CVPR_2021_paper.pdf
http://arxiv.org/abs/2012.04746
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Dong_Robust_Neural_Routing_Through_Space_Partitions_for_Camera_Relocalization_in_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Dong_Robust_Neural_Routing_Through_Space_Partitions_for_Camera_Relocalization_in_CVPR_2021_paper.html
CVPR 2021
null
null
Facial Action Unit Detection With Transformers
Geethu Miriam Jacob, Bjorn Stenger
The Facial Action Coding System is a taxonomy for fine-grained facial expression analysis. This paper proposes a method for detecting Facial Action Units (FAU), which define particular face muscle activity, from an input image. FAU detection is formulated as a multi-task learning problem, where image features and attention maps are input to a branch for each action unit to extract discriminative feature embeddings, using a new loss function, the Center Contrastive (CC) loss. We employ a new FAU correlation network, based on a transformer encoder architecture, to capture the relationships between different action units for the wide range of expressions in the training data. The resulting features are shown to yield high classification performance. We validate our design choices, including the use of CC loss and Tversky loss functions, in ablative experiments. We show that the proposed method outperforms state-of-the-art techniques on two public datasets, BP4D and DISFA, with an absolute improvement of the F1-score of over 2% on each.
https://openaccess.thecvf.com/content/CVPR2021/papers/Jacob_Facial_Action_Unit_Detection_With_Transformers_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Jacob_Facial_Action_Unit_Detection_With_Transformers_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Jacob_Facial_Action_Unit_Detection_With_Transformers_CVPR_2021_paper.html
CVPR 2021
null
null
Exploiting Aliasing for Manga Restoration
Minshan Xie, Menghan Xia, Tien-Tsin Wong
As a popular entertainment art form, manga enriches the line drawings details with bitonal screentones. However, manga resources over the Internet usually show screentone artifacts because of inappropriate scanning/rescaling resolution. In this paper, we propose an innovative two-stage method to restore quality bitonal manga from degraded ones. Our key observation is that the aliasing induced by downsampling bitonal screentones can be utilized as informative clues to infer the original resolution and screentones. First, we predict the target resolution from the degraded manga via the Scale Estimation Network (SE-Net) with spatial voting scheme. Then, at the target resolution, we restore the region-wise bitonal screentones via the Manga Restoration Network (MR-Net) discriminatively, depending on the degradation degree. Specifically, the original screentones are directly restored in pattern-identifiable regions, and visually plausible screentones are synthesized in pattern-agnostic regions. Quantitative evaluation on synthetic data and visual assessment on real-world cases illustrate the effectiveness of our method.
https://openaccess.thecvf.com/content/CVPR2021/papers/Xie_Exploiting_Aliasing_for_Manga_Restoration_CVPR_2021_paper.pdf
http://arxiv.org/abs/2105.06830
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Xie_Exploiting_Aliasing_for_Manga_Restoration_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Xie_Exploiting_Aliasing_for_Manga_Restoration_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Xie_Exploiting_Aliasing_for_CVPR_2021_supplemental.pdf
null
Discovering Hidden Physics Behind Transport Dynamics
Peirong Liu, Lin Tian, Yubo Zhang, Stephen Aylward, Yueh Lee, Marc Niethammer
Transport processes are ubiquitous. They are, for example, at the heart of optical flow approaches; or of perfusion imaging, where blood transport is assessed, most commonly by injecting a tracer. An advection-diffusion equation is widely used to describe these transport phenomena. Our goal is estimating the underlying physics of advection-diffusion equations, expressed as velocity and diffusion tensor fields. We propose a learning framework (YETI) building on an auto-encoder structure between 2D and 3D image time-series, which incorporates the advection-diffusion model. To help with identifiability, we develop an advection-diffusion simulator which allows pre-training of our model by supervised learning using the velocity and diffusion tensor fields. Instead of directly learning these velocity and diffusion tensor fields, we introduce representations that assure incompressible flow and symmetric positive semi-definite diffusion fields and demonstrate the additional benefits of these representations on improving estimation accuracy. We further use transfer learning to apply YETI on a public brain magnetic resonance (MR) perfusion dataset of stroke patients and show its ability to successfully distinguish stroke lesions from normal brain regions via the estimated velocity and diffusion tensor fields.
https://openaccess.thecvf.com/content/CVPR2021/papers/Liu_Discovering_Hidden_Physics_Behind_Transport_Dynamics_CVPR_2021_paper.pdf
http://arxiv.org/abs/2011.12222
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Liu_Discovering_Hidden_Physics_Behind_Transport_Dynamics_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Liu_Discovering_Hidden_Physics_Behind_Transport_Dynamics_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Liu_Discovering_Hidden_Physics_CVPR_2021_supplemental.pdf
null
Cross-View Gait Recognition With Deep Universal Linear Embeddings
Shaoxiong Zhang, Yunhong Wang, Annan Li
Gait is considered an attractive biometric identifier for its non-invasive and non-cooperative features compared with other biometric identifiers such as fingerprint and iris. At present, cross-view gait recognition methods always establish representations from various deep convolutional networks for recognition and ignore the potential dynamical information of the gait sequences. Assuming that pedestrians have different walking patterns, gait recognition can be performed by calculating their dynamical features from each view. This paper introduces the Koopman operator theory to gait recognition, which can find an embedding space for a global linear approximation of a nonlinear dynamical system. Furthermore, a novel framework based on a convolutional variational autoencoder and deep Koopman embedding is proposed to approximate the Koopman operators, which are used as dynamical features from the linearized embedding space for cross-view gait recognition. This gives solid physical interpretability to a gait recognition system. Experiments on a large public dataset, OU-MVLP, prove the effectiveness of the proposed method.
https://openaccess.thecvf.com/content/CVPR2021/papers/Zhang_Cross-View_Gait_Recognition_With_Deep_Universal_Linear_Embeddings_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_Cross-View_Gait_Recognition_With_Deep_Universal_Linear_Embeddings_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_Cross-View_Gait_Recognition_With_Deep_Universal_Linear_Embeddings_CVPR_2021_paper.html
CVPR 2021
null
null
Tuning IR-Cut Filter for Illumination-Aware Spectral Reconstruction From RGB
Bo Sun, Junchi Yan, Xiao Zhou, Yinqiang Zheng
Reconstructing spectral signals from multi-channel observations, in particular trichromatic RGBs, has recently emerged as a promising alternative to traditional scanning-based spectral imagers. It has been proven that the reconstruction accuracy relies heavily on the spectral response of the RGB camera in use. To improve accuracy, data-driven algorithms have been proposed to retrieve the best response curves of existing RGB cameras, or even to design brand new three-channel response curves. Instead, this paper explores the filter-array based color imaging mechanism of existing RGB cameras, and proposes to design the IR-cut filter properly for improved spectral recovery, which stands out as an in-between solution with a better trade-off between reconstruction accuracy and implementation complexity. We further propose a deep learning based spectral reconstruction method, which allows the illumination spectrum to be recovered as well. Experimental results with both synthetic and real images under daylight illumination show the benefits of our IR-cut filter tuning method and our illumination-aware spectral reconstruction method.
https://openaccess.thecvf.com/content/CVPR2021/papers/Sun_Tuning_IR-Cut_Filter_for_Illumination-Aware_Spectral_Reconstruction_From_RGB_CVPR_2021_paper.pdf
http://arxiv.org/abs/2103.14708
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Sun_Tuning_IR-Cut_Filter_for_Illumination-Aware_Spectral_Reconstruction_From_RGB_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Sun_Tuning_IR-Cut_Filter_for_Illumination-Aware_Spectral_Reconstruction_From_RGB_CVPR_2021_paper.html
CVPR 2021
null
null
Relative Order Analysis and Optimization for Unsupervised Deep Metric Learning
Shichao Kan, Yigang Cen, Yang Li, Vladimir Mladenovic, Zhihai He
In unsupervised learning of image features without labels, especially on datasets with fine-grained object classes, it is often very difficult to tell if a given image belongs to one specific object class or another, even for human eyes. However, we can reliably tell if image C is more similar to image A than image B. In this work, we propose to explore how this relative order can be used to learn discriminative features with an unsupervised metric learning method. Instead of resorting to clustering or self-supervision to create pseudo labels for an absolute decision, which often suffers from high label error rates, we construct reliable relative orders for groups of image samples and learn a deep neural network to predict these relative orders. During training, this relative order prediction network and the feature embedding network are tightly coupled, providing mutual constraints to each other to improve overall metric learning performance in a cooperative manner. During testing, the predicted relative orders are used as constraints to optimize the generated features and refine their feature distance-based image retrieval results using a constrained optimization procedure. Our experimental results demonstrate that the proposed relative orders for unsupervised learning (ROUL) method is able to significantly improve the performance of unsupervised deep metric learning.
https://openaccess.thecvf.com/content/CVPR2021/papers/Kan_Relative_Order_Analysis_and_Optimization_for_Unsupervised_Deep_Metric_Learning_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Kan_Relative_Order_Analysis_and_Optimization_for_Unsupervised_Deep_Metric_Learning_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Kan_Relative_Order_Analysis_and_Optimization_for_Unsupervised_Deep_Metric_Learning_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Kan_Relative_Order_Analysis_CVPR_2021_supplemental.pdf
null
Anchor-Free Person Search
Yichao Yan, Jinpeng Li, Jie Qin, Song Bai, Shengcai Liao, Li Liu, Fan Zhu, Ling Shao
Person search aims to simultaneously localize and identify a query person from realistic, uncropped images, which can be regarded as the unified task of pedestrian detection and person re-identification (re-id). Most existing works employ two-stage detectors like Faster-RCNN, yielding encouraging accuracy but with high computational overhead. In this work, we present the Feature-Aligned Person Search Network (AlignPS), the first anchor-free framework to efficiently tackle this challenging task. AlignPS explicitly addresses the major challenges, which we summarize as the misalignment issues in different levels (i.e., scale, region, and task), when accommodating an anchor-free detector for this task. More specifically, we propose an aligned feature aggregation module to generate more discriminative and robust feature embeddings by following a "re-id first" principle. Such a simple design directly improves the baseline anchor-free model on CUHK-SYSU by more than 20% in mAP. Moreover, AlignPS outperforms state-of-the-art two-stage methods, with a higher speed. The code is available at https://github.com/daodaofr/AlignPS.
https://openaccess.thecvf.com/content/CVPR2021/papers/Yan_Anchor-Free_Person_Search_CVPR_2021_paper.pdf
http://arxiv.org/abs/2103.11617
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Yan_Anchor-Free_Person_Search_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Yan_Anchor-Free_Person_Search_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Yan_Anchor-Free_Person_Search_CVPR_2021_supplemental.pdf
null
Are Labels Always Necessary for Classifier Accuracy Evaluation?
Weijian Deng, Liang Zheng
To calculate the model accuracy on a computer vision task, e.g., object recognition, we usually require a test set comprising test samples and their ground truth labels. Whilst standard usage cases satisfy this requirement, many real-world scenarios involve unlabeled test data, rendering common model evaluation methods infeasible. We investigate this important and under-explored problem, Automatic model Evaluation (AutoEval). Specifically, given a labeled training set and a classifier, we aim to estimate the classification accuracy on unlabeled test datasets. We construct a meta-dataset: a dataset composed of datasets generated from the original images via various transformations such as rotation, background substitution, foreground scaling, etc. As the classification accuracy of the model on each sample (dataset) is known from the original dataset labels, our task can be solved via regression. Using the feature statistics to represent the distribution of a sample dataset, we can train regression models (e.g., a regression neural network) to predict model performance. Using the synthetic meta-dataset and real-world datasets for training and testing, respectively, we report a reasonable and promising prediction of the model accuracy. We also provide insights into the application scope, limitations, and potential future directions of AutoEval.
https://openaccess.thecvf.com/content/CVPR2021/papers/Deng_Are_Labels_Always_Necessary_for_Classifier_Accuracy_Evaluation_CVPR_2021_paper.pdf
http://arxiv.org/abs/2007.02915
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Deng_Are_Labels_Always_Necessary_for_Classifier_Accuracy_Evaluation_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Deng_Are_Labels_Always_Necessary_for_Classifier_Accuracy_Evaluation_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Deng_Are_Labels_Always_CVPR_2021_supplemental.pdf
null
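A toy sketch of the regression idea behind AutoEval follows. The dataset representation (per-dimension mean and standard deviation of classifier features) and the plain least-squares regressor are simplifying assumptions made for illustration; the paper studies richer distribution statistics and stronger regressors, and the random arrays below stand in for real classifier features and measured accuracies.

```python
import numpy as np

rng = np.random.default_rng(0)

def dataset_statistics(features):
    """Summarize a sample dataset by simple feature statistics (an assumed choice)."""
    return np.concatenate([features.mean(axis=0), features.std(axis=0)])

# Meta-dataset: statistics of each transformed dataset paired with the accuracy
# measured on it (both are synthetic placeholders here).
meta_X = np.stack([dataset_statistics(rng.standard_normal((200, 64)) + 0.01 * i)
                   for i in range(50)])
meta_y = rng.uniform(0.5, 0.95, size=50)

# Fit a linear regressor from dataset statistics to classifier accuracy.
A = np.hstack([meta_X, np.ones((len(meta_X), 1))])   # add a bias column
coef, *_ = np.linalg.lstsq(A, meta_y, rcond=None)

# Deployment: estimate accuracy on an unlabeled test set from its statistics alone.
unlabeled_features = rng.standard_normal((300, 64))
stats = np.append(dataset_statistics(unlabeled_features), 1.0)
print("predicted accuracy:", float(stats @ coef))
```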
Self-Supervised Motion Learning From Static Images
Ziyuan Huang, Shiwei Zhang, Jianwen Jiang, Mingqian Tang, Rong Jin, Marcelo H. Ang
Motions are reflected in videos as the movement of pixels, and actions are essentially patterns of inconsistent motions between the foreground and the background. To distinguish actions well, especially those with complicated spatio-temporal interactions, correctly locating the prominent motion areas is of crucial importance. However, most motion information in existing videos is difficult to label, and training a model with good motion representations under supervision would thus require a large amount of human labour for annotation. In this paper, we address this problem by self-supervised learning. Specifically, we propose to learn Motion from Static Images (MoSI). The model learns to encode motion information by classifying pseudo motions generated by MoSI. We furthermore introduce a static mask in pseudo motions to create local motion patterns, which forces the model to additionally locate notable motion areas for the correct classification. We demonstrate that MoSI can discover regions with large motion even without fine-tuning on the downstream datasets. As a result, the learned motion representations boost the performance of tasks requiring understanding of complex scenes and motions, i.e., action recognition. Extensive experiments show the consistent and transferable improvements achieved by MoSI. Code will be released soon.
https://openaccess.thecvf.com/content/CVPR2021/papers/Huang_Self-Supervised_Motion_Learning_From_Static_Images_CVPR_2021_paper.pdf
http://arxiv.org/abs/2104.00240
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Huang_Self-Supervised_Motion_Learning_From_Static_Images_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Huang_Self-Supervised_Motion_Learning_From_Static_Images_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Huang_Self-Supervised_Motion_Learning_CVPR_2021_supplemental.pdf
null
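To make the pseudo-motion construction concrete, the sketch below slides a fixed-size crop across a static image along one of four directions, with the direction index serving as the self-supervised label. The crop size, step and four-way label set are illustrative assumptions, and the static-mask variant mentioned in the abstract is omitted.

```python
import numpy as np

def pseudo_motion_clip(image, direction, num_frames=8, step=4):
    """Generate a pseudo-motion clip from a static image by sliding a fixed-size
    crop in one of four directions; the direction index is the training label.
    Crop size, step and the four-way label set are illustrative assumptions."""
    h, w, _ = image.shape
    crop = min(h, w) - step * (num_frames - 1)
    dy, dx = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}[direction]
    # Start at the edge so the crop can travel the full distance within the image.
    y0 = 0 if dy > 0 else (h - crop if dy < 0 else (h - crop) // 2)
    x0 = 0 if dx > 0 else (w - crop if dx < 0 else (w - crop) // 2)
    frames = [image[y0 + dy * step * t: y0 + dy * step * t + crop,
                    x0 + dx * step * t: x0 + dx * step * t + crop]
              for t in range(num_frames)]
    return np.stack(frames)

image = np.random.rand(128, 128, 3).astype(np.float32)
clip = pseudo_motion_clip(image, "right")
print(clip.shape)   # (8, 100, 100, 3)
```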
AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling
Dilin Wang, Meng Li, Chengyue Gong, Vikas Chandra
Neural architecture search (NAS) has shown great promise in designing state-of-the-art (SOTA) models that are both accurate and efficient. Recently, two-stage NAS, e.g., BigNAS, decouples the model training and searching process and achieves remarkable search efficiency and accuracy. Two-stage NAS requires sampling from the search space during training, which directly impacts the accuracy of the final searched models. While uniform sampling has been widely used for its simplicity, it is agnostic of the model performance Pareto front, which is the main focus of the search process, and thus misses opportunities to further improve the model accuracy. In this work, we propose AttentiveNAS, which focuses on improving the sampling strategy to achieve a better performance Pareto front. We also propose algorithms to efficiently and effectively identify the networks on the Pareto front during training. Without extra re-training or post-processing, we can simultaneously obtain a large number of networks across a wide range of FLOPs. Our discovered model family, AttentiveNAS models, achieves top-1 accuracy from 77.3% to 80.7% on ImageNet, and outperforms SOTA models, including BigNAS, Once-for-All networks and FBNetV3. We also achieve ImageNet accuracy of 80.1% with only 491 MFLOPs. Our training code and pretrained models are available at https://github.com/facebookresearch/AttentiveNAS.
https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_AttentiveNAS_Improving_Neural_Architecture_Search_via_Attentive_Sampling_CVPR_2021_paper.pdf
http://arxiv.org/abs/2011.09011
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Wang_AttentiveNAS_Improving_Neural_Architecture_Search_via_Attentive_Sampling_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Wang_AttentiveNAS_Improving_Neural_Architecture_Search_via_Attentive_Sampling_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Wang_AttentiveNAS_Improving_Neural_CVPR_2021_supplemental.pdf
null
StablePose: Learning 6D Object Poses From Geometrically Stable Patches
Yifei Shi, Junwen Huang, Xin Xu, Yifan Zhang, Kai Xu
We introduce the concept of geometric stability to the problem of 6D object pose estimation and propose to learn pose inference based on geometrically stable patches extracted from observed 3D point clouds. According to the theory of geometric stability analysis, a minimal set of three planar/cylindrical patches is geometrically stable and determines the full 6DoFs of the object pose. We train a deep neural network to regress the 6D object pose based on geometrically stable patch groups via learning both intra-patch geometric features and inter-patch contextual features. A subnetwork is jointly trained to predict per-patch poses. This auxiliary task is a relaxation of the group pose prediction: a single patch cannot determine the full 6DoFs but is able to improve pose accuracy in its corresponding DoFs. Working with patch groups makes our method generalize well to random occlusion and unseen instances. The method is easily amenable to resolving symmetry ambiguities. Our method achieves state-of-the-art results on public benchmarks compared not only to depth-only but also to RGB-D methods. It also performs well in category-level pose estimation.
https://openaccess.thecvf.com/content/CVPR2021/papers/Shi_StablePose_Learning_6D_Object_Poses_From_Geometrically_Stable_Patches_CVPR_2021_paper.pdf
http://arxiv.org/abs/2102.09334
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Shi_StablePose_Learning_6D_Object_Poses_From_Geometrically_Stable_Patches_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Shi_StablePose_Learning_6D_Object_Poses_From_Geometrically_Stable_Patches_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Shi_StablePose_Learning_6D_CVPR_2021_supplemental.pdf
null
Towards Evaluating and Training Verifiably Robust Neural Networks
Zhaoyang Lyu, Minghao Guo, Tong Wu, Guodong Xu, Kehuan Zhang, Dahua Lin
Recent works have shown that interval bound propagation (IBP) can be used to train verifiably robust neural networks. Researchers observe an intriguing phenomenon on these IBP trained networks: CROWN, a bounding method based on tight linear relaxation, often gives very loose bounds on these networks. We also observe that most neurons become dead during the IBP training process, which could hurt the representation capability of the network. In this paper, we study the relationship between IBP and CROWN, and prove that CROWN is always tighter than IBP when choosing appropriate bounding lines. We further propose a relaxed version of CROWN, linear bound propagation (LBP), that can be used to verify large networks and obtain lower verified errors than IBP. We also design a new activation function, the parameterized ramp function (ParamRamp), which has more diversity of neuron status than ReLU. We conduct extensive experiments on MNIST, CIFAR-10 and Tiny-ImageNet with the ParamRamp activation and achieve state-of-the-art verified robustness. Code is available at https://github.com/ZhaoyangLyu/VerifiablyRobustNN.
https://openaccess.thecvf.com/content/CVPR2021/papers/Lyu_Towards_Evaluating_and_Training_Verifiably_Robust_Neural_Networks_CVPR_2021_paper.pdf
http://arxiv.org/abs/2104.00447
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Lyu_Towards_Evaluating_and_Training_Verifiably_Robust_Neural_Networks_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Lyu_Towards_Evaluating_and_Training_Verifiably_Robust_Neural_Networks_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Lyu_Towards_Evaluating_and_CVPR_2021_supplemental.pdf
null
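For reference, the sketch below shows plain interval bound propagation (IBP) through an affine layer and a ReLU, the bounding scheme this paper starts from; the tighter CROWN/LBP bounds and the ParamRamp activation it proposes are not reproduced here, and the toy weights are random placeholders.

```python
import numpy as np

def ibp_affine(lower, upper, W, b):
    """Propagate interval bounds [lower, upper] through x -> W @ x + b."""
    center = (upper + lower) / 2.0
    radius = (upper - lower) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius
    return new_center - new_radius, new_center + new_radius

def ibp_relu(lower, upper):
    """ReLU is monotone, so interval bounds pass through elementwise."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# Bound a toy 2-layer network over an L-infinity ball of radius eps around x.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 4)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((3, 8)), rng.standard_normal(3)
x, eps = rng.standard_normal(4), 0.1

lo, hi = x - eps, x + eps
lo, hi = ibp_relu(*ibp_affine(lo, hi, W1, b1))
lo, hi = ibp_affine(lo, hi, W2, b2)
print("output lower bounds:", lo)
print("output upper bounds:", hi)
```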
Interpolation-Based Semi-Supervised Learning for Object Detection
Jisoo Jeong, Vikas Verma, Minsung Hyun, Juho Kannala, Nojun Kwak
Although the data labeling cost for object detection tasks is substantially higher than that for classification tasks, semi-supervised learning methods for object detection have not been studied much. In this paper, we propose an Interpolation-based Semi-supervised learning method for object Detection (ISD), which considers and solves the problems caused by applying conventional Interpolation Regularization (IR) directly to object detection. We divide the output of the model into two types according to the objectness scores of both original patches that are mixed in IR. Then, we apply a separate loss suitable for each type in an unsupervised manner. The proposed losses dramatically improve the performance of semi-supervised learning as well as supervised learning. In the supervised learning setting, our method improves the baseline methods by a significant margin. In the semi-supervised learning setting, our algorithm improves the performance on benchmark datasets (PASCAL VOC and MSCOCO) with a benchmark architecture (SSD).
https://openaccess.thecvf.com/content/CVPR2021/papers/Jeong_Interpolation-Based_Semi-Supervised_Learning_for_Object_Detection_CVPR_2021_paper.pdf
http://arxiv.org/abs/2006.02158
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Jeong_Interpolation-Based_Semi-Supervised_Learning_for_Object_Detection_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Jeong_Interpolation-Based_Semi-Supervised_Learning_for_Object_Detection_CVPR_2021_paper.html
CVPR 2021
null
null
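To make the interpolation regularization (IR) mentioned in the ISD entry above concrete, here is a generic mixup-style interpolation of two unlabeled image batches with a consistency target. The objectness-dependent loss split that is the paper's actual contribution is not reproduced; `alpha` and the function names are illustrative assumptions.

```python
import numpy as np
import torch

def interpolate_batches(x1, x2, alpha=1.0):
    """Mix two image batches with a Beta-distributed coefficient (mixup-style IR)."""
    lam = float(np.random.beta(alpha, alpha))
    return lam * x1 + (1.0 - lam) * x2, lam

def consistency_loss(model, x1, x2, alpha=1.0):
    """Predictions on the mixed input should match the interpolation of the
    (detached) predictions on the two originals."""
    x_mix, lam = interpolate_batches(x1, x2, alpha)
    with torch.no_grad():
        target = lam * model(x1) + (1.0 - lam) * model(x2)
    return torch.mean((model(x_mix) - target) ** 2)
```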
Teachers Do More Than Teach: Compressing Image-to-Image Models
Qing Jin, Jian Ren, Oliver J. Woodford, Jiazhuo Wang, Geng Yuan, Yanzhi Wang, Sergey Tulyakov
Generative Adversarial Networks (GANs) have achieved huge success in generating high-fidelity images; however, they suffer from low efficiency due to tremendous computational cost and bulky memory usage. Recent efforts on compressing GANs show noticeable progress in obtaining smaller generators, but at the cost of sacrificing image quality or requiring a time-consuming searching process. In this work, we aim to address these issues by introducing a teacher network that provides a search space in which efficient network architectures can be found, in addition to performing knowledge distillation. First, we revisit the search space of generative models, introducing an inception-based residual block into generators. Second, to achieve the target computation cost, we propose a one-step pruning algorithm that searches a student architecture from the teacher model and substantially reduces searching cost. It requires no L1 sparsity regularization and its associated hyper-parameters, simplifying the training procedure. Finally, we propose to distill knowledge by maximizing feature similarity between teacher and student via an index named Global Centered Kernel Alignment (GCKA). Our compressed networks achieve better image fidelity (FID, mIoU) than the original models with much-reduced computational cost, e.g., MACs. A brief illustrative sketch of centered kernel alignment follows this entry.
https://openaccess.thecvf.com/content/CVPR2021/papers/Jin_Teachers_Do_More_Than_Teach_Compressing_Image-to-Image_Models_CVPR_2021_paper.pdf
http://arxiv.org/abs/2103.03467
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Jin_Teachers_Do_More_Than_Teach_Compressing_Image-to-Image_Models_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Jin_Teachers_Do_More_Than_Teach_Compressing_Image-to-Image_Models_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Jin_Teachers_Do_More_CVPR_2021_supplemental.pdf
null
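The distillation objective in the entry above (Teachers Do More Than Teach) maximizes a centered kernel alignment index (GCKA). As a reference point, the sketch below is standard linear CKA between two feature matrices; the paper's "global centered" variant may differ in how features are gathered, so treat this as a sketch rather than the paper's loss.

```python
import torch

def linear_cka(x, y):
    """Linear centered kernel alignment between feature matrices x: (n, d1), y: (n, d2).
    Returns a similarity in [0, 1]; maximizing it pulls student features toward the teacher's."""
    x = x - x.mean(dim=0, keepdim=True)
    y = y - y.mean(dim=0, keepdim=True)
    cross = torch.norm(x.t() @ y) ** 2        # ||X^T Y||_F^2
    self_x = torch.norm(x.t() @ x)
    self_y = torch.norm(y.t() @ y)
    return cross / (self_x * self_y)
```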
Seeing in Extra Darkness Using a Deep-Red Flash
Jinhui Xiong, Jian Wang, Wolfgang Heidrich, Shree Nayar
We propose a new flash technique for low-light imaging, using deep-red light as an illuminating source. Our main observation is that in a dim environment, the human eye mainly uses rods for the perception of light, which are not sensitive to wavelengths longer than 620nm, yet the camera sensor still has a spectral response. We propose a novel modulation strategy when training a modern CNN model for guided image filtering, fusing a noisy RGB frame and a flash frame. This fusion network is further extended for video reconstruction. We have built a prototype with minor hardware adjustments and tested the new flash technique on a variety of static and dynamic scenes. The experimental results demonstrate that our method produces compelling reconstructions, even in extra dim conditions.
https://openaccess.thecvf.com/content/CVPR2021/papers/Xiong_Seeing_in_Extra_Darkness_Using_a_Deep-Red_Flash_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Xiong_Seeing_in_Extra_Darkness_Using_a_Deep-Red_Flash_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Xiong_Seeing_in_Extra_Darkness_Using_a_Deep-Red_Flash_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Xiong_Seeing_in_Extra_CVPR_2021_supplemental.pdf
null
PSD: Principled Synthetic-to-Real Dehazing Guided by Physical Priors
Zeyuan Chen, Yangchao Wang, Yang Yang, Dong Liu
Deep learning-based methods have achieved remarkable performance for image dehazing. However, previous studies mostly focus on training models with synthetic hazy images, which incurs a performance drop when the models are applied to real-world hazy images. We propose a Principled Synthetic-to-real Dehazing (PSD) framework to improve the generalization performance of dehazing. Starting from a dehazing model backbone that is pre-trained on synthetic data, PSD exploits real hazy images to fine-tune the model in an unsupervised fashion. For the fine-tuning, we leverage several well-grounded physical priors and combine them into a prior loss committee. PSD can adopt most existing dehazing models as its backbone, and the combination of multiple physical priors boosts dehazing significantly. Through extensive experiments, we demonstrate that our PSD framework establishes the new state-of-the-art performance for real-world dehazing, in terms of visual quality assessed by no-reference quality metrics as well as subjective evaluation and downstream task performance indicators.
https://openaccess.thecvf.com/content/CVPR2021/papers/Chen_PSD_Principled_Synthetic-to-Real_Dehazing_Guided_by_Physical_Priors_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Chen_PSD_Principled_Synthetic-to-Real_Dehazing_Guided_by_Physical_Priors_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Chen_PSD_Principled_Synthetic-to-Real_Dehazing_Guided_by_Physical_Priors_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Chen_PSD_Principled_Synthetic-to-Real_CVPR_2021_supplemental.pdf
null
3D Spatial Recognition Without Spatially Labeled 3D
Zhongzheng Ren, Ishan Misra, Alexander G. Schwing, Rohit Girdhar
We introduce WyPR, a Weakly-supervised framework for Point cloud Recognition, requiring only scene-level class tags as supervision. WyPR jointly addresses three core 3D recognition tasks: point-level semantic segmentation, 3D proposal generation, and 3D object detection, coupling their predictions through self and cross-task consistency losses. We show that in conjunction with standard multiple-instance learning objectives, WyPR can detect and segment objects in point cloud without access to any spatial labels at training time. We demonstrate its efficacy using the ScanNet and S3DIS datasets, outperforming prior state of the art on weakly-supervised segmentation by more than 6% mIoU. In addition, we set up the first benchmark for weakly-supervised 3D object detection on both datasets, where WyPR outperforms standard approaches and establishes strong baselines for future work.
https://openaccess.thecvf.com/content/CVPR2021/papers/Ren_3D_Spatial_Recognition_Without_Spatially_Labeled_3D_CVPR_2021_paper.pdf
http://arxiv.org/abs/2105.06461
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Ren_3D_Spatial_Recognition_Without_Spatially_Labeled_3D_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Ren_3D_Spatial_Recognition_Without_Spatially_Labeled_3D_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Ren_3D_Spatial_Recognition_CVPR_2021_supplemental.pdf
null
Robust Reference-Based Super-Resolution via C2-Matching
Yuming Jiang, Kelvin C.K. Chan, Xintao Wang, Chen Change Loy, Ziwei Liu
Reference-based Super-Resolution (Ref-SR) has recently emerged as a promising paradigm to enhance a low-resolution (LR) input image by introducing an additional high-resolution (HR) reference image. Existing Ref-SR methods mostly rely on implicit correspondence matching to borrow HR textures from reference images to compensate for the information loss in input images. However, performing local transfer is difficult because of two gaps between input and reference images: the transformation gap (e.g., scale and rotation) and the resolution gap (e.g., HR and LR). To tackle these challenges, we propose C2-Matching in this work, which produces explicit robust matching across transformation and resolution. 1) For the transformation gap, we propose a contrastive correspondence network, which learns transformation-robust correspondences using augmented views of the input image. 2) For the resolution gap, we adopt a teacher-student correlation distillation, which distills knowledge from the easier HR-HR matching to guide the more ambiguous LR-HR matching. 3) Finally, we design a dynamic aggregation module to address the potential misalignment issue. In addition, to faithfully evaluate the performance of Ref-SR under a realistic setting, we contribute the Webly-Referenced SR (WR-SR) dataset, mimicking the practical usage scenario. Extensive experiments demonstrate that our proposed C2-Matching significantly outperforms current state-of-the-art methods by over 1 dB on the standard CUFED5 benchmark. Notably, it also shows great generalizability on the WR-SR dataset as well as robustness across large scale and rotation transformations.
https://openaccess.thecvf.com/content/CVPR2021/papers/Jiang_Robust_Reference-Based_Super-Resolution_via_C2-Matching_CVPR_2021_paper.pdf
http://arxiv.org/abs/2106.01863
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Jiang_Robust_Reference-Based_Super-Resolution_via_C2-Matching_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Jiang_Robust_Reference-Based_Super-Resolution_via_C2-Matching_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Jiang_Robust_Reference-Based_Super-Resolution_CVPR_2021_supplemental.pdf
null
Temporal-Relational CrossTransformers for Few-Shot Action Recognition
Toby Perrett, Alessandro Masullo, Tilo Burghardt, Majid Mirmehdi, Dima Damen
We propose a novel approach to few-shot action recognition, finding temporally-corresponding frame tuples between the query and videos in the support set. Distinct from previous few-shot works, we construct class prototypes using the CrossTransformer attention mechanism to observe relevant sub-sequences of all support videos, rather than using class averages or single best matches. Video representations are formed from ordered tuples of varying numbers of frames, which allows sub-sequences of actions at different speeds and temporal offsets to be compared. Our proposed Temporal-Relational CrossTransformers (TRX) achieve state-of-the-art results on few-shot splits of Kinetics, Something-Something V2 (SSv2), HMDB51 and UCF101. Importantly, our method outperforms prior work on SSv2 by a wide margin (12%) due to its ability to model temporal relations. A detailed ablation showcases the importance of matching to multiple support set videos and learning higher-order relational CrossTransformers. A brief illustrative sketch of frame-tuple construction follows this entry.
https://openaccess.thecvf.com/content/CVPR2021/papers/Perrett_Temporal-Relational_CrossTransformers_for_Few-Shot_Action_Recognition_CVPR_2021_paper.pdf
http://arxiv.org/abs/2101.06184
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Perrett_Temporal-Relational_CrossTransformers_for_Few-Shot_Action_Recognition_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Perrett_Temporal-Relational_CrossTransformers_for_Few-Shot_Action_Recognition_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Perrett_Temporal-Relational_CrossTransformers_for_CVPR_2021_supplemental.pdf
null
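The entry above (Temporal-Relational CrossTransformers) builds video representations from ordered frame tuples. Below is a minimal sketch of that tuple construction for one video, assuming per-frame features are already extracted; the tuple cardinality and the simple concatenation-based representation are simplifying assumptions, not the paper's full pipeline.

```python
from itertools import combinations
import torch

def frame_tuples(frame_feats, cardinality=2):
    """frame_feats: (T, D) per-frame features. Returns (num_tuples, cardinality * D),
    one row per ordered tuple of frame indices i1 < i2 < ... < i_cardinality."""
    T = frame_feats.shape[0]
    reps = [torch.cat([frame_feats[i] for i in idx])
            for idx in combinations(range(T), cardinality)]
    return torch.stack(reps)

# Example: 8 frames with 256-d features -> C(8, 2) = 28 pair representations.
tuples = frame_tuples(torch.randn(8, 256), cardinality=2)
assert tuples.shape == (28, 512)
```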
Understanding Failures of Deep Networks via Robust Feature Extraction
Sahil Singla, Besmira Nushi, Shital Shah, Ece Kamar, Eric Horvitz
Traditional evaluation metrics for learned models that report aggregate scores over a test set are insufficient for surfacing important and informative patterns of failure over features and instances. We introduce and study a method aimed at characterizing and explaining failures by identifying visual attributes whose presence or absence results in poor performance. In distinction to previous work that relies upon crowdsourced labels for visual attributes, we leverage the representation of a separate robust model to extract interpretable features and then harness these features to identify failure modes. We further propose a visualization method aimed at enabling humans to understand the meaning encoded in such features and we test the comprehensibility of the features. An evaluation of the methods on the ImageNet dataset demonstrates that: (i) the proposed workflow is effective for discovering important failure modes, (ii) the visualization techniques help humans to understand the extracted features, and (iii) the extracted insights can assist engineers with error analysis and debugging.
https://openaccess.thecvf.com/content/CVPR2021/papers/Singla_Understanding_Failures_of_Deep_Networks_via_Robust_Feature_Extraction_CVPR_2021_paper.pdf
http://arxiv.org/abs/2012.01750
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Singla_Understanding_Failures_of_Deep_Networks_via_Robust_Feature_Extraction_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Singla_Understanding_Failures_of_Deep_Networks_via_Robust_Feature_Extraction_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Singla_Understanding_Failures_of_CVPR_2021_supplemental.pdf
null
Relation-aware Instance Refinement for Weakly Supervised Visual Grounding
Yongfei Liu, Bo Wan, Lin Ma, Xuming He
Visual grounding, which aims to build a correspondence between visual objects and their language entities, plays a key role in cross-modal scene understanding. One promising and scalable strategy for learning visual grounding is to utilize weak supervision from only image-caption pairs. Previous methods typically rely on matching query phrases directly to a precomputed, fixed object candidate pool, which leads to inaccurate localization and ambiguous matching due to the lack of semantic relation constraints. In this paper, we propose a novel context-aware weakly-supervised learning method that incorporates coarse-to-fine object refinement and entity relation modeling into a two-stage deep network, capable of producing more accurate object representation and matching. To effectively train our network, we introduce a self-taught regression loss for the proposal locations and a classification loss based on parsed entity relations. Extensive experiments on two public benchmarks, Flickr30K Entities and ReferItGame, demonstrate the efficacy of our weakly-supervised grounding framework. The results show that we outperform previous methods by a considerable margin, achieving 59.27% top-1 accuracy on Flickr30K Entities and 37.68% on ReferItGame, respectively.
https://openaccess.thecvf.com/content/CVPR2021/papers/Liu_Relation-aware_Instance_Refinement_for_Weakly_Supervised_Visual_Grounding_CVPR_2021_paper.pdf
http://arxiv.org/abs/2103.12989
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Liu_Relation-aware_Instance_Refinement_for_Weakly_Supervised_Visual_Grounding_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Liu_Relation-aware_Instance_Refinement_for_Weakly_Supervised_Visual_Grounding_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Liu_Relation-aware_Instance_Refinement_CVPR_2021_supplemental.pdf
null
Spatially-Invariant Style-Codes Controlled Makeup Transfer
Han Deng, Chu Han, Hongmin Cai, Guoqiang Han, Shengfeng He
Transferring makeup from a misaligned reference image is challenging. Previous methods overcome this barrier by computing pixel-wise correspondences between two images, which is inaccurate and computationally expensive. In this paper, we take a different perspective and break down the makeup transfer problem into a two-step extraction-assignment process. To this end, we propose a Style-based Controllable GAN model that consists of three components, corresponding to target style-code encoding, face identity feature extraction, and makeup fusion, respectively. In particular, a Part-specific Style Encoder encodes the component-wise makeup style of the reference image into a style-code in an intermediate latent space W. The style-code discards spatial information and is therefore invariant to spatial misalignment. On the other hand, the style-code embeds component-wise information, enabling flexible partial makeup editing from multiple references. This style-code, together with source identity features, is integrated into a Makeup Fusion Decoder equipped with multiple AdaIN layers to generate the final result. Our proposed method demonstrates great flexibility on makeup transfer by supporting makeup removal, shade-controllable makeup transfer, and part-specific makeup transfer, even with large spatial misalignment. Extensive experiments demonstrate the superiority of our approach over state-of-the-art methods. Code is available at https://github.com/makeuptransfer/SCGAN. A brief illustrative sketch of adaptive instance normalization follows this entry.
https://openaccess.thecvf.com/content/CVPR2021/papers/Deng_Spatially-Invariant_Style-Codes_Controlled_Makeup_Transfer_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Deng_Spatially-Invariant_Style-Codes_Controlled_Makeup_Transfer_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Deng_Spatially-Invariant_Style-Codes_Controlled_Makeup_Transfer_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Deng_Spatially-Invariant_Style-Codes_Controlled_CVPR_2021_supplemental.zip
null
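The Makeup Fusion Decoder in the entry above (Spatially-Invariant Style-Codes Controlled Makeup Transfer) injects the style-code through AdaIN layers. The sketch below is the standard AdaIN operation, with the per-channel style statistics assumed to be predicted from the style-code by some learned mapping that is not shown here.

```python
import torch

def adain(content, style_mean, style_std, eps=1e-5):
    """Adaptive instance normalization.
    content: (N, C, H, W); style_mean, style_std: (N, C) derived from a style-code."""
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    normalized = (content - c_mean) / c_std
    return normalized * style_std[:, :, None, None] + style_mean[:, :, None, None]
```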
Adaptive Image Transformer for One-Shot Object Detection
Ding-Jie Chen, He-Yen Hsieh, Tyng-Luh Liu
One-shot object detection tackles a challenging task that aims at identifying within a target image all object instances of the same class, implied by a query image patch. The main difficulty lies in the situation that the class label of the query patch and its respective examples are not available in the training data. Our main idea leverages the concept of language translation to boost metric-learning-based detection methods. Specifically, we emulate the language translation process to adaptively translate the feature of each object proposal to better correlate the given query feature for discriminating the class-similarity among the proposal-query pairs. To this end, we propose the Adaptive Image Transformer (AIT) module that deploys an attention-based encoder-decoder architecture to simultaneously explore intra-coder and inter-coder (i.e., each proposal-query pair) attention. The adaptive nature of our design turns out to be flexible and effective in addressing the one-shot learning scenario. With the informative attention cues, the proposed model excels in predicting the class-similarity between the target image proposals and the query image patch. Though conceptually simple, our model significantly outperforms a state-of-the-art technique, improving the unseen-class object classification from 63.8 mAP and 22.0 AP50 to 72.2 mAP and 24.3 AP50 on the PASCAL-VOC and MS-COCO benchmark datasets, respectively.
https://openaccess.thecvf.com/content/CVPR2021/papers/Chen_Adaptive_Image_Transformer_for_One-Shot_Object_Detection_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Chen_Adaptive_Image_Transformer_for_One-Shot_Object_Detection_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Chen_Adaptive_Image_Transformer_for_One-Shot_Object_Detection_CVPR_2021_paper.html
CVPR 2021
null
null
Bilateral Grid Learning for Stereo Matching Networks
Bin Xu, Yuhua Xu, Xiaoli Yang, Wei Jia, Yulan Guo
Real-time performance of stereo matching networks is important for many applications, such as autonomous driving, robot navigation and augmented reality (AR). Although significant progress has been made in stereo matching networks in recent years, it is still challenging to balance real-time performance and accuracy. In this paper, we present a novel edge-preserving cost volume upsampling module based on the slicing operation in the learned bilateral grid. The slicing layer is parameter-free, which allows us to efficiently obtain a high-quality, high-resolution cost volume from a low-resolution cost volume under the guidance of the learned guidance map. The proposed cost volume upsampling module can be seamlessly embedded into many existing stereo matching networks, such as GCNet, PSMNet, and GANet. The resulting networks are accelerated several times while maintaining comparable accuracy. Furthermore, we design a real-time network (named BGNet) based on this module, which outperforms existing published real-time deep stereo matching networks, as well as some complex networks, on the KITTI stereo datasets. The code is available at https://github.com/YuhuaXu/BGNet. A brief illustrative sketch of bilateral-grid slicing follows this entry.
https://openaccess.thecvf.com/content/CVPR2021/papers/Xu_Bilateral_Grid_Learning_for_Stereo_Matching_Networks_CVPR_2021_paper.pdf
http://arxiv.org/abs/2101.01601
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Xu_Bilateral_Grid_Learning_for_Stereo_Matching_Networks_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Xu_Bilateral_Grid_Learning_for_Stereo_Matching_Networks_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Xu_Bilateral_Grid_Learning_CVPR_2021_supplemental.pdf
null
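The slicing operation mentioned in the BGNet entry above is a parameter-free trilinear lookup into a low-resolution bilateral grid, indexed by spatial position and a guidance value. The sketch below shows generic slicing with `torch.nn.functional.grid_sample`; the exact grid layout and guidance normalization used in the paper may differ, so this is only an assumption-laden illustration of the idea.

```python
import torch
import torch.nn.functional as F

def slice_bilateral_grid(grid, guidance):
    """grid: (N, C, D, Hg, Wg) low-res bilateral grid, with D indexed by guidance intensity.
    guidance: (N, 1, H, W) full-resolution guidance map in [0, 1].
    Returns a full-resolution tensor (N, C, H, W) via trilinear slicing."""
    N, _, H, W = guidance.shape
    ys = torch.linspace(-1.0, 1.0, H, device=guidance.device).view(H, 1).expand(H, W)
    xs = torch.linspace(-1.0, 1.0, W, device=guidance.device).view(1, W).expand(H, W)
    gx = xs.unsqueeze(0).expand(N, H, W)
    gy = ys.unsqueeze(0).expand(N, H, W)
    gz = guidance.squeeze(1) * 2.0 - 1.0                    # map [0, 1] -> [-1, 1]
    # grid_sample expects (x, y, z) with x indexing Wg, y indexing Hg, z indexing D.
    coords = torch.stack([gx, gy, gz], dim=-1).unsqueeze(1)  # (N, 1, H, W, 3)
    out = F.grid_sample(grid, coords, mode="bilinear", align_corners=True)
    return out.squeeze(2)
```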
A Multi-Task Network for Joint Specular Highlight Detection and Removal
Gang Fu, Qing Zhang, Lei Zhu, Ping Li, Chunxia Xiao
Specular highlight detection and removal are fundamental and challenging tasks. Although recent methods achieve promising results on the two tasks by supervised training on synthetic training data, they are typically solely designed for highlight detection or removal, and their performance usually deteriorates significantly on real-world images. In this paper, we present a novel network that aims to detect and remove highlights from natural images. To remove the domain gap between synthetic training samples and real test images, and support the investigation of learning-based approaches, we first introduce a dataset of 16K real images, each of which has the corresponding highlight detection and removal images. Using the presented dataset, we develop a multi-task network for joint highlight detection and removal, based on a new specular highlight image formation model. Experiments on the benchmark datasets and our new dataset show that our approach clearly outperforms the state-of-the-art methods for both highlight detection and removal.
https://openaccess.thecvf.com/content/CVPR2021/papers/Fu_A_Multi-Task_Network_for_Joint_Specular_Highlight_Detection_and_Removal_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Fu_A_Multi-Task_Network_for_Joint_Specular_Highlight_Detection_and_Removal_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Fu_A_Multi-Task_Network_for_Joint_Specular_Highlight_Detection_and_Removal_CVPR_2021_paper.html
CVPR 2021
null
null
A Deep Emulator for Secondary Motion of 3D Characters
Mianlun Zheng, Yi Zhou, Duygu Ceylan, Jernej Barbic
Fast and light-weight methods for animating 3D characters are desirable in various applications such as computer games. We present a learning-based approach to enhance skinning-based animations of 3D characters with vivid secondary motion effects. We represent each local patch of a character simulation mesh as a graph network where the edges implicitly encode the internal forces between the neighboring vertices. We then train a neural network that emulates the ordinary differential equations of the character dynamics, predicting new vertex positions from the current accelerations, velocities and positions. Being a local method, our network is independent of the mesh topology and generalizes to arbitrarily shaped 3D character meshes at test time. We further represent per-vertex constraints and material properties such as stiffness, enabling us to easily adjust the dynamics in different parts of the mesh. We evaluate our method on various character meshes and complex motion sequences. Our method can be over 30 times more efficient than ground-truth physically based simulation, and outperforms alternative solutions that provide fast approximations.
https://openaccess.thecvf.com/content/CVPR2021/papers/Zheng_A_Deep_Emulator_for_Secondary_Motion_of_3D_Characters_CVPR_2021_paper.pdf
http://arxiv.org/abs/2103.01261
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Zheng_A_Deep_Emulator_for_Secondary_Motion_of_3D_Characters_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Zheng_A_Deep_Emulator_for_Secondary_Motion_of_3D_Characters_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Zheng_A_Deep_Emulator_CVPR_2021_supplemental.pdf
null
Omni-Supervised Point Cloud Segmentation via Gradual Receptive Field Component Reasoning
Jingyu Gong, Jiachen Xu, Xin Tan, Haichuan Song, Yanyun Qu, Yuan Xie, Lizhuang Ma
Hidden features in neural networks usually fail to learn informative representations for 3D segmentation, as supervision is only given on the output prediction; this can be addressed by omni-scale supervision on intermediate layers. In this paper, we bring the first omni-scale supervision method to point cloud segmentation via the proposed gradual Receptive Field Component Reasoning (RFCR), where target Receptive Field Component Codes (RFCCs) are designed to record the categories within the receptive fields of hidden units in the encoder. The target RFCCs then supervise the decoder to gradually infer the RFCCs in a coarse-to-fine category reasoning manner, and finally obtain the semantic labels. Because many hidden features are inactive with tiny magnitude and make minor contributions to RFCC prediction, we propose a Feature Densification with a centrifugal potential to obtain more unambiguous features, which is in effect equivalent to entropy regularization over features. More active features can further unleash the potential of our omni-supervision method. We embed our method into four prevailing backbones and test on three challenging benchmarks. Our method significantly improves the backbones on all three datasets. Specifically, our method brings new state-of-the-art performance on S3DIS as well as Semantic3D and ranks 1st on the ScanNet benchmark among all point-based methods. Code is publicly available at https://github.com/azuki-miho/RFCR.
https://openaccess.thecvf.com/content/CVPR2021/papers/Gong_Omni-Supervised_Point_Cloud_Segmentation_via_Gradual_Receptive_Field_Component_Reasoning_CVPR_2021_paper.pdf
http://arxiv.org/abs/2105.10203
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Gong_Omni-Supervised_Point_Cloud_Segmentation_via_Gradual_Receptive_Field_Component_Reasoning_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Gong_Omni-Supervised_Point_Cloud_Segmentation_via_Gradual_Receptive_Field_Component_Reasoning_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Gong_Omni-Supervised_Point_Cloud_CVPR_2021_supplemental.pdf
null
All Labels Are Not Created Equal: Enhancing Semi-Supervision via Label Grouping and Co-Training
Islam Nassar, Samitha Herath, Ehsan Abbasnejad, Wray Buntine, Gholamreza Haffari
Pseudo-labeling is a key component in semi-supervised learning (SSL). It relies on iteratively using the model to generate artificial labels for the unlabeled data to train against. A common property among its various methods is that they rely only on the model's prediction to make labeling decisions, without considering any prior knowledge about the visual similarity among the classes. In this paper, we demonstrate that this degrades the quality of pseudo-labeling as it poorly represents visually similar classes in the pool of pseudo-labeled data. We propose SemCo, a method which leverages label semantics and co-training to address this problem. We train two classifiers with two different views of the class labels: one classifier uses the one-hot view of the labels and disregards any potential similarity among the classes, while the other uses a distributed view of the labels and groups potentially similar classes together. We then co-train the two classifiers to learn based on their disagreements. We show that our method achieves state-of-the-art performance across various SSL tasks, including a 5.6% accuracy improvement on the Mini-ImageNet dataset with 1000 labeled examples. We also show that our method requires a smaller batch size and fewer training iterations to reach its best performance. We make our code available at https://github.com/islam-nassar/semco. A brief illustrative sketch of confidence-thresholded pseudo-labeling follows this entry.
https://openaccess.thecvf.com/content/CVPR2021/papers/Nassar_All_Labels_Are_Not_Created_Equal_Enhancing_Semi-Supervision_via_Label_CVPR_2021_paper.pdf
http://arxiv.org/abs/2104.05248
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Nassar_All_Labels_Are_Not_Created_Equal_Enhancing_Semi-Supervision_via_Label_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Nassar_All_Labels_Are_Not_Created_Equal_Enhancing_Semi-Supervision_via_Label_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Nassar_All_Labels_Are_CVPR_2021_supplemental.pdf
null
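For context on the pseudo-labeling baseline that the SemCo entry above builds on, a minimal FixMatch-style confidence-thresholded pseudo-label loss is sketched below; the label-semantics grouping and co-training that are the paper's actual contribution are not shown, and the threshold value is an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def pseudo_label_loss(logits_weak, logits_strong, threshold=0.95):
    """Confident predictions on the weakly augmented view supervise the strong view."""
    probs = F.softmax(logits_weak.detach(), dim=1)
    conf, targets = probs.max(dim=1)
    mask = (conf >= threshold).float()
    per_sample = F.cross_entropy(logits_strong, targets, reduction="none")
    return (per_sample * mask).mean()
```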
PMP-Net: Point Cloud Completion by Learning Multi-Step Point Moving Paths
Xin Wen, Peng Xiang, Zhizhong Han, Yan-Pei Cao, Pengfei Wan, Wen Zheng, Yu-Shen Liu
The task of point cloud completion aims to predict the missing part of an incomplete 3D shape. A widely used strategy is to generate a complete point cloud from the incomplete one. However, the unordered nature of point clouds degrades the generation of high-quality 3D shapes, as the detailed topology and structure of discrete points are hard to capture by a generative process that uses only a latent code. In this paper, we address the above problem by reconsidering the completion task from a new perspective, where we formulate the prediction as a point cloud deformation process. Specifically, we design a novel neural network, named PMP-Net, to mimic the behavior of an earth mover. It moves each point of the incomplete input to complete the point cloud, such that the total distance of the point moving paths (PMP) is the shortest. Therefore, PMP-Net predicts a unique point moving path for each point according to the constraint of total point moving distance. As a result, the network learns a strict and unique correspondence at the point level, and thus improves the quality of the predicted complete shape. We conduct comprehensive experiments on the Completion3D and PCN datasets, which demonstrate our advantages over the state-of-the-art point cloud completion methods. Code will be available at https://github.com/diviswen/PMP-Net. A brief illustrative sketch of a displacement-based completion loss follows this entry.
https://openaccess.thecvf.com/content/CVPR2021/papers/Wen_PMP-Net_Point_Cloud_Completion_by_Learning_Multi-Step_Point_Moving_Paths_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Wen_PMP-Net_Point_Cloud_Completion_by_Learning_Multi-Step_Point_Moving_Paths_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Wen_PMP-Net_Point_Cloud_Completion_by_Learning_Multi-Step_Point_Moving_Paths_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Wen_PMP-Net_Point_Cloud_CVPR_2021_supplemental.pdf
null
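The PMP-Net entry above casts completion as moving input points along paths whose total length is penalized. Below is a minimal sketch of such a displacement-based objective, assuming the network predicts per-point offsets; the multi-step path decomposition of the paper is collapsed into a single step here, and `w_path` is an illustrative weight rather than the paper's setting.

```python
import torch

def chamfer(a, b):
    """Symmetric Chamfer distance between point sets a: (N, 3) and b: (M, 3)."""
    d = torch.cdist(a, b)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def displacement_completion_loss(partial, offsets, target, w_path=0.1):
    """Move each input point by its predicted offset; penalize total path length."""
    completed = partial + offsets
    return chamfer(completed, target) + w_path * offsets.norm(dim=1).mean()
```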
Gradient-Based Algorithms for Machine Teaching
Pei Wang, Kabir Nagrecha, Nuno Vasconcelos
The problem of machine teaching is considered. A new formulation is proposed under the assumption of an optimal student, where optimality is defined in the usual machine learning sense of empirical risk minimization. This is a sensible assumption for machine learning students and for human students in crowdsourcing platforms, who tend to perform at least as well as machine learning systems. It is shown that, if allowed unbounded effort, the optimal student always learns the optimal predictor for a classification task. Hence, the role of the optimal teacher is to select the teaching set that minimizes student effort. This is formulated as a problem of functional optimization where, at each teaching iteration, the teacher seeks to align the steepest descent directions of the risk of (1) the teaching set and (2) the entire example population. The optimal teacher, denoted MaxGrad, is then shown to maximize the gradient of the risk on the set of new examples selected per iteration. MaxGrad teaching algorithms are finally provided for both binary and multiclass tasks, and shown to have some similarities with boosting algorithms. Experimental evaluations demonstrate the effectiveness of MaxGrad, which outperforms previous algorithms on the classification task, for both machine learning and human students from MTurk, by a substantial margin.
https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_Gradient-Based_Algorithms_for_Machine_Teaching_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Wang_Gradient-Based_Algorithms_for_Machine_Teaching_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Wang_Gradient-Based_Algorithms_for_Machine_Teaching_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Wang_Gradient-Based_Algorithms_for_CVPR_2021_supplemental.pdf
null
MetaSCI: Scalable and Adaptive Reconstruction for Video Compressive Sensing
Zhengjue Wang, Hao Zhang, Ziheng Cheng, Bo Chen, Xin Yuan
To capture high-speed videos using a two-dimensional detector, video snapshot compressive imaging (SCI) is a promising system, where the video frames are coded by different masks and then compressed to a snapshot measurement. Following this, efficient algorithms are desired to reconstruct the high-speed frames, where the state-of-the-art results are achieved by deep learning networks. However, these networks are usually trained for specific small-scale masks and often have high demands of training time and GPU memory, which are hence not flexible to i) a new mask with the same size and ii) a larger-scale mask. We address these challenges by developing a Meta Modulated Convolutional Network for SCI reconstruction, dubbed MetaSCI. MetaSCI is composed of a shared backbone for different masks, and light-weight meta-modulation parameters to evolve to different modulation parameters for each mask, thus having the properties of fast adaptation to new masks (or systems) and ready to scale to large data. Extensive simulation and real data results demonstrate the superior performance of our proposed approach.
https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_MetaSCI_Scalable_and_Adaptive_Reconstruction_for_Video_Compressive_Sensing_CVPR_2021_paper.pdf
http://arxiv.org/abs/2103.01786
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Wang_MetaSCI_Scalable_and_Adaptive_Reconstruction_for_Video_Compressive_Sensing_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Wang_MetaSCI_Scalable_and_Adaptive_Reconstruction_for_Video_Compressive_Sensing_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Wang_MetaSCI_Scalable_and_CVPR_2021_supplemental.pdf
null
Removing Raindrops and Rain Streaks in One Go
Ruijie Quan, Xin Yu, Yuanzhi Liang, Yi Yang
Existing rain-removal algorithms often tackle either rain streak removal or raindrop removal, and thus may fail to handle real-world rainy scenes. Besides, the lack of real-world deraining datasets comprising different types of rain and their corresponding rain-free ground truth also impedes deraining algorithm development. In this paper, we aim to address real-world deraining problems from two aspects. First, we propose a complementary cascaded network architecture, namely CCN, to remove rain streaks and raindrops in a unified framework. Specifically, our CCN removes raindrops and rain streaks in a complementary fashion, i.e., raindrop removal followed by rain streak removal and vice versa, and then fuses the results via an attention-based fusion module. Considering the significant shape and structure differences between rain streaks and raindrops, it is difficult to manually design a sophisticated network to remove them effectively. Thus, we employ neural architecture search to adaptively find optimal architectures within our specified deraining search space. Second, we present a new real-world rain dataset, namely RainDS, to foster the development of deraining algorithms in practical scenarios. RainDS consists of rain images of different types and their corresponding rain-free ground truth, including rain streak only, raindrop only, and both of them. Extensive experimental results on both existing benchmarks and RainDS demonstrate that our method outperforms the state-of-the-art.
https://openaccess.thecvf.com/content/CVPR2021/papers/Quan_Removing_Raindrops_and_Rain_Streaks_in_One_Go_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Quan_Removing_Raindrops_and_Rain_Streaks_in_One_Go_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Quan_Removing_Raindrops_and_Rain_Streaks_in_One_Go_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Quan_Removing_Raindrops_and_CVPR_2021_supplemental.pdf
null
Action Unit Memory Network for Weakly Supervised Temporal Action Localization
Wang Luo, Tianzhu Zhang, Wenfei Yang, Jingen Liu, Tao Mei, Feng Wu, Yongdong Zhang
Weakly supervised temporal action localization aims to detect and localize actions in untrimmed videos with only video-level labels during training. However, without frame-level annotations, it is challenging to achieve localization completeness and relieve background interference. In this paper, we present an Action Unit Memory Network (AUMN) for weakly supervised temporal action localization, which can mitigate the above two challenges by learning an action unit memory bank. In the proposed AUMN, two attention modules are designed to update the memory bank adaptively and learn action-unit-specific classifiers. Furthermore, three effective mechanisms (diversity, homogeneity and sparsity) are designed to guide the updating of the memory network. To the best of our knowledge, this is the first work to explicitly model action units with a memory network. Extensive experimental results on two standard benchmarks (THUMOS14 and ActivityNet) demonstrate that our AUMN performs favorably against state-of-the-art methods. Specifically, the average mAP over IoU thresholds from 0.1 to 0.5 on the THUMOS14 dataset is significantly improved from 47.0% to 52.1%. A brief illustrative sketch of an attention-based memory read follows this entry.
https://openaccess.thecvf.com/content/CVPR2021/papers/Luo_Action_Unit_Memory_Network_for_Weakly_Supervised_Temporal_Action_Localization_CVPR_2021_paper.pdf
http://arxiv.org/abs/2104.14135
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Luo_Action_Unit_Memory_Network_for_Weakly_Supervised_Temporal_Action_Localization_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Luo_Action_Unit_Memory_Network_for_Weakly_Supervised_Temporal_Action_Localization_CVPR_2021_paper.html
CVPR 2021
null
null
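The AUMN entry above reads action-unit templates out of a memory bank with attention. A generic scaled dot-product memory read is sketched below; the bank update rules and the diversity/homogeneity/sparsity mechanisms described in the abstract are not reproduced, and the shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def read_memory(query, memory_keys, memory_values):
    """query: (N, D) segment features; memory_keys, memory_values: (K, D).
    Returns (N, D): an attention-weighted mixture of memory slots per query."""
    scale = memory_keys.shape[1] ** 0.5
    attn = F.softmax(query @ memory_keys.t() / scale, dim=1)  # (N, K)
    return attn @ memory_values
```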
IMAGINE: Image Synthesis by Image-Guided Model Inversion
Pei Wang, Yijun Li, Krishna Kumar Singh, Jingwan Lu, Nuno Vasconcelos
Synthesizing variations of a specific reference image with semantically valid content is an important task for personalized generation as well as for data augmentation. In this work, we propose an inversion-based method, denoted IMAge-Guided model INvErsion (IMAGINE), to generate high-quality and diverse images from only a single training sample. We mainly leverage the knowledge of image semantics from a pre-trained classifier and achieve plausible generations via matching multi-level feature representations in the classifier, combined with adversarial training with an external discriminator. IMAGINE enables the synthesis procedure to simultaneously 1) enforce semantic specificity constraints during the synthesis, 2) produce realistic images without introducing generator training, 3) allow fine controls over the synthesized image, and 4) be model-compact. With extensive experimental results, we demonstrate qualitatively and quantitatively that IMAGINE performs favorably against state-of-the-art GAN-based and inversion-based methods, across three different image domains, i.e., objects, scenes and textures.
https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_IMAGINE_Image_Synthesis_by_Image-Guided_Model_Inversion_CVPR_2021_paper.pdf
http://arxiv.org/abs/2104.05895
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Wang_IMAGINE_Image_Synthesis_by_Image-Guided_Model_Inversion_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Wang_IMAGINE_Image_Synthesis_by_Image-Guided_Model_Inversion_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Wang_IMAGINE_Image_Synthesis_CVPR_2021_supplemental.pdf
null
Neural Scene Graphs for Dynamic Scenes
Julian Ost, Fahim Mannan, Nils Thuerey, Julian Knodt, Felix Heide
Recent implicit neural rendering methods have demonstrated that it is possible to learn accurate view synthesis for complex scenes by predicting their volumetric density and color supervised solely by a set of RGB images. However, existing methods are restricted to learning efficient representations of static scenes that encode all scene objects into a single neural network, and they lack the ability to represent dynamic scenes and decompose scenes into individual objects. In this work, we present the first neural rendering method that represents multi-object dynamic scenes as scene graphs. We propose a learned scene graph representation, which encodes object transformations and radiance, allowing us to efficiently render novel arrangements and views of the scene. To this end, we learn implicitly encoded scenes, combined with a jointly learned latent representation to describe similar objects with a single implicit function. We assess the proposed method on synthetic and real automotive data, validating that our approach learns dynamic scenes -- only by observing a video of this scene -- and allows for rendering novel photo-realistic views of novel scene compositions with unseen sets of objects at unseen poses.
https://openaccess.thecvf.com/content/CVPR2021/papers/Ost_Neural_Scene_Graphs_for_Dynamic_Scenes_CVPR_2021_paper.pdf
http://arxiv.org/abs/2011.10379
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Ost_Neural_Scene_Graphs_for_Dynamic_Scenes_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Ost_Neural_Scene_Graphs_for_Dynamic_Scenes_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Ost_Neural_Scene_Graphs_CVPR_2021_supplemental.zip
null
RSTNet: Captioning With Adaptive Attention on Visual and Non-Visual Words
Xuying Zhang, Xiaoshuai Sun, Yunpeng Luo, Jiayi Ji, Yiyi Zhou, Yongjian Wu, Feiyue Huang, Rongrong Ji
Recent progress on visual question answering has explored the merits of grid features for vision-language tasks. Meanwhile, transformer-based models have shown remarkable performance in various sequence prediction problems. However, the spatial information loss of grid features caused by the flattening operation, as well as the defect of the transformer model in distinguishing visual words from non-visual words, are still left unexplored. In this paper, we first propose a Grid-Augmented (GA) module, in which relative geometry features between grids are incorporated to enhance visual representations. Then, we build a BERT-based language model to extract language context and propose an Adaptive-Attention (AA) module on top of a transformer decoder to adaptively measure the contribution of visual and language cues before making decisions for word prediction. To prove the generality of our proposals, we apply the two modules to the vanilla transformer model to build our Relationship-Sensitive Transformer (RSTNet) for the image captioning task. The proposed model is tested on the MSCOCO benchmark, where it achieves new state-of-the-art results on both the Karpathy test split and the online test server. Source code is available on GitHub.
https://openaccess.thecvf.com/content/CVPR2021/papers/Zhang_RSTNet_Captioning_With_Adaptive_Attention_on_Visual_and_Non-Visual_Words_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_RSTNet_Captioning_With_Adaptive_Attention_on_Visual_and_Non-Visual_Words_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_RSTNet_Captioning_With_Adaptive_Attention_on_Visual_and_Non-Visual_Words_CVPR_2021_paper.html
CVPR 2021
null
null
Time Lens: Event-Based Video Frame Interpolation
Stepan Tulyakov, Daniel Gehrig, Stamatios Georgoulis, Julius Erbach, Mathias Gehrig, Yuanyou Li, Davide Scaramuzza
State-of-the-art frame interpolation methods generate intermediate frames by inferring object motions in the image from consecutive key-frames. In the absence of additional information, first-order approximations, i.e. optical flow, must be used, but this choice restricts the types of motions that can be modeled, leading to errors in highly dynamic scenarios. Event cameras are novel sensors that address this limitation by providing auxiliary visual information in the blind-time between frames. They asynchronously measure per-pixel brightness changes and do this with high temporal resolution and low latency. Event-based frame interpolation methods typically adopt a synthesis-based approach, where predicted frame residuals are directly applied to the key-frames. However, while these approaches can capture non-linear motions, they suffer from ghosting and perform poorly in low-texture regions with few events. Thus, synthesis-based and flow-based approaches are complementary. In this work, we introduce Time Lens, a novel method that leverages the advantages of both. We extensively evaluate our method on three synthetic and two real benchmarks where we show an up to 5.21 dB improvement in terms of PSNR over state-of-the-art frame-based and event-based methods. Finally, we release a new large-scale dataset in highly dynamic scenarios, aimed at pushing the limits of existing methods.
https://openaccess.thecvf.com/content/CVPR2021/papers/Tulyakov_Time_Lens_Event-Based_Video_Frame_Interpolation_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Tulyakov_Time_Lens_Event-Based_Video_Frame_Interpolation_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Tulyakov_Time_Lens_Event-Based_Video_Frame_Interpolation_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Tulyakov_Time_Lens_Event-Based_CVPR_2021_supplemental.zip
null
FedDG: Federated Domain Generalization on Medical Image Segmentation via Episodic Learning in Continuous Frequency Space
Quande Liu, Cheng Chen, Jing Qin, Qi Dou, Pheng-Ann Heng
Federated learning allows distributed medical institutions to collaboratively learn a shared prediction model with privacy protection. However, at clinical deployment, models trained in federated learning can still suffer from performance drops when applied to completely unseen hospitals outside the federation. In this paper, we point out and solve a novel problem setting of federated domain generalization, which aims to learn a federated model from multiple distributed source domains such that it can directly generalize to unseen target domains. We present a novel approach, named Episodic Learning in Continuous Frequency Space (ELCFS), for this problem by enabling each client to exploit multi-source data distributions under the challenging constraint of data decentralization. Our approach transmits the distribution information across clients in a privacy-protecting way through an effective continuous frequency space interpolation mechanism. With the transferred multi-source distributions, we further carefully design a boundary-oriented episodic learning paradigm to expose the local learning to domain distribution shifts and particularly meet the challenges of model generalization in the medical image segmentation scenario. The effectiveness of our method is demonstrated with superior performance over state-of-the-arts and in-depth ablation experiments on two medical image segmentation tasks. The code is available at "https://github.com/liuquande/FedDG-ELCFS". A brief illustrative sketch of frequency-space amplitude interpolation follows this entry.
https://openaccess.thecvf.com/content/CVPR2021/papers/Liu_FedDG_Federated_Domain_Generalization_on_Medical_Image_Segmentation_via_Episodic_CVPR_2021_paper.pdf
http://arxiv.org/abs/2103.06030
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Liu_FedDG_Federated_Domain_Generalization_on_Medical_Image_Segmentation_via_Episodic_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Liu_FedDG_Federated_Domain_Generalization_on_Medical_Image_Segmentation_via_Episodic_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Liu_FedDG_Federated_Domain_CVPR_2021_supplemental.pdf
null
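The continuous frequency space interpolation in the FedDG entry above exchanges amplitude spectra across clients. The sketch below shows the general Fourier-amplitude mixing idea on a single grayscale image, assuming the other client's (unshifted) amplitude spectrum is available locally; `ratio` controls the low-frequency band and, like the function name, is an illustrative assumption rather than the paper's exact mechanism.

```python
import numpy as np

def mix_amplitude(img_local, amp_other, lam=0.5, ratio=0.1):
    """Interpolate the low-frequency amplitude of img_local (H, W) toward another
    client's amplitude spectrum amp_other (H, W), keeping the local phase."""
    fft_local = np.fft.fft2(img_local)
    amp_local, pha_local = np.abs(fft_local), np.angle(fft_local)
    amp_l = np.fft.fftshift(amp_local)
    amp_o = np.fft.fftshift(amp_other)
    h, w = img_local.shape
    bh, bw = int(h * ratio / 2), int(w * ratio / 2)
    ch, cw = h // 2, w // 2
    # Interpolate only the centred low-frequency band.
    amp_l[ch - bh:ch + bh, cw - bw:cw + bw] = (
        (1.0 - lam) * amp_l[ch - bh:ch + bh, cw - bw:cw + bw]
        + lam * amp_o[ch - bh:ch + bh, cw - bw:cw + bw])
    amp_mixed = np.fft.ifftshift(amp_l)
    return np.real(np.fft.ifft2(amp_mixed * np.exp(1j * pha_local)))
```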
Anomaly Detection in Video via Self-Supervised and Multi-Task Learning
Mariana-Iuliana Georgescu, Antonio Barbalau, Radu Tudor Ionescu, Fahad Shahbaz Khan, Marius Popescu, Mubarak Shah
Anomaly detection in video is a challenging computer vision problem. Due to the lack of anomalous events at training time, anomaly detection requires the design of learning methods without full supervision. In this paper, we approach anomalous event detection in video through self-supervised and multi-task learning at the object level. We first utilize a pre-trained detector to detect objects. Then, we train a 3D convolutional neural network to produce discriminative anomaly-specific information by jointly learning multiple proxy tasks: three self-supervised and one based on knowledge distillation. The self-supervised tasks are: (i) discrimination of forward/backward moving objects (arrow of time), (ii) discrimination of objects in consecutive/intermittent frames (motion irregularity) and (iii) reconstruction of object-specific appearance information. The knowledge distillation task takes into account both classification and detection information, generating large prediction discrepancies between teacher and student models when anomalies occur. To the best of our knowledge, we are the first to approach anomalous event detection in video as a multi-task learning problem, integrating multiple self-supervised and knowledge distillation proxy tasks in a single architecture. Our lightweight architecture outperforms the state-of-the-art methods on three benchmarks: Avenue, ShanghaiTech and UCSD Ped2. Additionally, we perform an ablation study demonstrating the importance of integrating self-supervised learning and normality-specific distillation in a multi-task learning setting.
https://openaccess.thecvf.com/content/CVPR2021/papers/Georgescu_Anomaly_Detection_in_Video_via_Self-Supervised_and_Multi-Task_Learning_CVPR_2021_paper.pdf
http://arxiv.org/abs/2011.07491
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Georgescu_Anomaly_Detection_in_Video_via_Self-Supervised_and_Multi-Task_Learning_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Georgescu_Anomaly_Detection_in_Video_via_Self-Supervised_and_Multi-Task_Learning_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Georgescu_Anomaly_Detection_in_CVPR_2021_supplemental.pdf
null
Multiresolution Knowledge Distillation for Anomaly Detection
Mohammadreza Salehi, Niousha Sadjadi, Soroosh Baselizadeh, Mohammad H. Rohban, Hamid R. Rabiee
Unsupervised representation learning has proved to be a critical component of anomaly detection/localization in images. The challenges in learning such a representation are two-fold. Firstly, the sample size is often not large enough to learn a rich, generalizable representation through conventional techniques. Secondly, while only normal samples are available at training, the learned features should be discriminative of normal and anomalous samples. Here, we propose to use the "distillation" of features at various layers of an expert network, which is pre-trained on ImageNet, into a simpler cloner network to tackle both issues. We detect and localize anomalies using the discrepancy between the expert and cloner networks' intermediate activation values given an input sample. We show that considering multiple intermediate hints in distillation leads to better exploitation of the expert's knowledge and a more distinctive discrepancy between the two networks, compared to utilizing only the last layer activation values. Notably, previous methods either fail in precise anomaly localization or need expensive region-based training. In contrast, with no need for any special or intensive training procedure, we incorporate interpretability algorithms in our novel framework to localize anomalous regions. Despite the striking difference between some test datasets and ImageNet, we achieve competitive or significantly superior results compared to SOTA on MNIST, F-MNIST, CIFAR-10, MVTecAD, Retinal-OCT, and two other medical datasets on both anomaly detection and localization. A brief illustrative sketch of a layer-wise discrepancy score follows this entry.
https://openaccess.thecvf.com/content/CVPR2021/papers/Salehi_Multiresolution_Knowledge_Distillation_for_Anomaly_Detection_CVPR_2021_paper.pdf
http://arxiv.org/abs/2011.11108
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Salehi_Multiresolution_Knowledge_Distillation_for_Anomaly_Detection_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Salehi_Multiresolution_Knowledge_Distillation_for_Anomaly_Detection_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Salehi_Multiresolution_Knowledge_Distillation_CVPR_2021_supplemental.pdf
null
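The detection rule in the Multiresolution Knowledge Distillation entry above scores an input by the expert/cloner discrepancy across several intermediate layers. A minimal per-layer cosine-distance aggregation is sketched below; the paper's exact training loss and its gradient-based localization are not shown.

```python
import torch
import torch.nn.functional as F

def anomaly_score(expert_feats, cloner_feats):
    """expert_feats, cloner_feats: lists of matching feature maps (B, C_i, H_i, W_i).
    Returns a per-sample score: the summed cosine distance over layers."""
    score = 0.0
    for e, c in zip(expert_feats, cloner_feats):
        e = F.normalize(e.flatten(1), dim=1)
        c = F.normalize(c.flatten(1), dim=1)
        score = score + (1.0 - (e * c).sum(dim=1))
    return score
```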
Joint Learning of 3D Shape Retrieval and Deformation
Mikaela Angelina Uy, Vladimir G. Kim, Minhyuk Sung, Noam Aigerman, Siddhartha Chaudhuri, Leonidas J. Guibas
We propose a novel technique for producing high-quality 3D models that match a given target object image or scan. Our method is based on retrieving an existing shape from a database of 3D models and then deforming its parts to match the target shape. Unlike previous approaches that independently focus on either shape retrieval or deformation, we propose a joint learning procedure that simultaneously trains the neural deformation module along with the embedding space used by the retrieval module. This enables our network to learn a deformation-aware embedding space, so that retrieved models are more amenable to match the target after an appropriate deformation. In fact, we use the embedding space to guide the shape pairs used to train the deformation module, so that it invests its capacity in learning deformations between meaningful shape pairs. Furthermore, our novel part-aware deformation module can work with inconsistent and diverse part-structures on the source shapes. We demonstrate the benefits of our joint training not only on our novel framework, but also on other state-of-the-art neural deformation modules proposed in recent years. Lastly, we also show that our jointly-trained method outperforms various non-joint baselines.
https://openaccess.thecvf.com/content/CVPR2021/papers/Uy_Joint_Learning_of_3D_Shape_Retrieval_and_Deformation_CVPR_2021_paper.pdf
http://arxiv.org/abs/2101.07889
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Uy_Joint_Learning_of_3D_Shape_Retrieval_and_Deformation_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Uy_Joint_Learning_of_3D_Shape_Retrieval_and_Deformation_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Uy_Joint_Learning_of_CVPR_2021_supplemental.pdf
null
Learning Spatially-Variant MAP Models for Non-Blind Image Deblurring
Jiangxin Dong, Stefan Roth, Bernt Schiele
The classical maximum a-posteriori (MAP) framework for non-blind image deblurring requires defining suitable data and regularization terms, whose interplay yields the desired clear image through optimization. The vast majority of prior work focuses on advancing one of these two crucial ingredients, while keeping the other one standard. Considering the indispensable roles and interplay of both data and regularization terms, we propose a simple and effective approach to jointly learn these two terms, embedding deep neural networks within the constraints of the MAP framework, trained in an end-to-end manner. The neural networks not only yield suitable image-adaptive features for both terms, but actually predict per-pixel spatially-variant features instead of the commonly used spatially-uniform ones. The resulting spatially-variant data and regularization terms particularly improve the restoration of fine-scale structures and detail. Quantitative and qualitative results underline the effectiveness of our approach, substantially outperforming the current state of the art.
https://openaccess.thecvf.com/content/CVPR2021/papers/Dong_Learning_Spatially-Variant_MAP_Models_for_Non-Blind_Image_Deblurring_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Dong_Learning_Spatially-Variant_MAP_Models_for_Non-Blind_Image_Deblurring_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Dong_Learning_Spatially-Variant_MAP_Models_for_Non-Blind_Image_Deblurring_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Dong_Learning_Spatially-Variant_MAP_CVPR_2021_supplemental.pdf
null
FCPose: Fully Convolutional Multi-Person Pose Estimation With Dynamic Instance-Aware Convolutions
Weian Mao, Zhi Tian, Xinlong Wang, Chunhua Shen
We propose a fully convolutional multi-person pose estimation framework using dynamic instance-aware convolutions, termed FCPose. Different from existing methods, which often require ROI (Region of Interest) operations and/or grouping post-processing, FCPose eliminates the ROIs and grouping post-processing with dynamic instance-aware keypoint estimation heads. The dynamic keypoint heads are conditioned on each instance (person), and can encode the instance concept in the dynamically-generated weights of their filters. Moreover, with the strong representation capacity of dynamic convolutions, the keypoint heads in FCPose are designed to be very compact, resulting in fast inference and an almost constant inference time regardless of the number of persons in the image. For example, on the COCO dataset, a real-time version of FCPose using the DLA-34 backbone infers about 4.5 times faster than Mask R-CNN (ResNet-101) (41.67 FPS vs. 9.26 FPS) while achieving improved performance (64.8% AP vs. 64.3% AP). FCPose also offers a better speed/accuracy trade-off than other state-of-the-art methods. Our experimental results show that FCPose is a simple yet effective multi-person pose estimation framework. Code is available at: https://git.io/AdelaiDet
https://openaccess.thecvf.com/content/CVPR2021/papers/Mao_FCPose_Fully_Convolutional_Multi-Person_Pose_Estimation_With_Dynamic_Instance-Aware_Convolutions_CVPR_2021_paper.pdf
http://arxiv.org/abs/2105.14185
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Mao_FCPose_Fully_Convolutional_Multi-Person_Pose_Estimation_With_Dynamic_Instance-Aware_Convolutions_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Mao_FCPose_Fully_Convolutional_Multi-Person_Pose_Estimation_With_Dynamic_Instance-Aware_Convolutions_CVPR_2021_paper.html
CVPR 2021
null
null
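The dynamic, instance-conditioned keypoint heads described in the FCPose entry above can be illustrated with a short sketch. This is not the authors' implementation (their code lives in the AdelaiDet repository linked in the abstract); the two-layer 1x1 head, channel counts, and flat parameter layout below are illustrative assumptions only.

```python
# Minimal sketch of dynamic, instance-conditioned convolution heads (assumed layout).
import torch
import torch.nn.functional as F

def dynamic_keypoint_head(feats, inst_params, num_keypoints=17, channels=8):
    """Apply per-instance, dynamically generated 1x1 conv filters.

    feats:       (C, H, W) shared keypoint feature map.
    inst_params: (N, P) flat filter/bias parameters predicted by a controller
                 for each of N detected persons, where P equals
                 C*channels + channels + channels*num_keypoints + num_keypoints
                 for this illustrative two-layer head.
    """
    C, H, W = feats.shape
    outputs = []
    for p in inst_params:                      # one compact head per instance
        i = 0                                  # split the flat vector into weights/biases
        w1 = p[i:i + C * channels].view(channels, C, 1, 1); i += C * channels
        b1 = p[i:i + channels]; i += channels
        w2 = p[i:i + channels * num_keypoints].view(num_keypoints, channels, 1, 1)
        i += channels * num_keypoints
        b2 = p[i:i + num_keypoints]
        x = F.relu(F.conv2d(feats[None], w1, b1))      # (1, channels, H, W)
        heatmaps = F.conv2d(x, w2, b2)                 # (1, K, H, W) keypoint heatmaps
        outputs.append(heatmaps[0])
    return torch.stack(outputs)                        # (N, K, H, W)
```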
BoxInst: High-Performance Instance Segmentation With Box Annotations
Zhi Tian, Chunhua Shen, Xinlong Wang, Hao Chen
We present a high-performance method that can achieve mask-level instance segmentation with only bounding-box annotations for training. While this setting has been studied in the literature, here we show significantly stronger performance with a simple design (e.g., dramatically improving the previous best reported mask AP of 21.1% to 31.6% on the COCO dataset). Our core idea is to redesign the loss of learning masks in instance segmentation, with no modification to the segmentation network itself. The new loss functions can supervise the mask training without relying on mask annotations. This is made possible with two loss terms, namely, 1) a surrogate term that minimizes the discrepancy between the projections of the ground-truth box and the predicted mask; 2) a pairwise loss that can exploit the prior that proximal pixels with similar colors are very likely to have the same category label. Experiments demonstrate that the redesigned mask loss can yield surprisingly high-quality instance masks with only box annotations. For example, without using any mask annotations, with a ResNet-101 backbone and 3x training schedule, we achieve 33.2% mask AP on the COCO test-dev split (vs. 39.1% of the fully supervised counterpart). Our experimental results on COCO and Pascal VOC indicate that our method dramatically narrows the performance gap between weakly and fully supervised instance segmentation. Code is available at https://git.io/AdelaiDet
https://openaccess.thecvf.com/content/CVPR2021/papers/Tian_BoxInst_High-Performance_Instance_Segmentation_With_Box_Annotations_CVPR_2021_paper.pdf
http://arxiv.org/abs/2012.02310
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Tian_BoxInst_High-Performance_Instance_Segmentation_With_Box_Annotations_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Tian_BoxInst_High-Performance_Instance_Segmentation_With_Box_Annotations_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Tian_BoxInst_High-Performance_Instance_CVPR_2021_supplemental.pdf
null
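A hedged sketch of the first loss term described in the BoxInst entry above: the predicted mask, max-projected onto each image axis, should agree with the axis projections of the ground-truth box. The dice formulation and all names are illustrative assumptions; the released AdelaiDet code is the reference for the exact losses, and the pairwise color-similarity term is omitted here.

```python
# Box-projection term sketch: compare axis projections of the soft mask and the GT box.
import torch

def dice_loss(pred, target, eps=1e-5):
    inter = (pred * target).sum()
    return 1.0 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def box_projection_loss(mask_logits, box):
    """mask_logits: (H, W) predicted mask scores for one instance.
    box: (x1, y1, x2, y2) ground-truth box in pixel coordinates."""
    prob = mask_logits.sigmoid()
    H, W = prob.shape
    x1, y1, x2, y2 = box
    # axis projections of the ground-truth box (binary 1D masks)
    gt_x = torch.zeros(W); gt_x[int(x1):int(x2) + 1] = 1.0
    gt_y = torch.zeros(H); gt_y[int(y1):int(y2) + 1] = 1.0
    # max-projection of the predicted mask onto the same axes
    proj_x = prob.max(dim=0).values   # (W,)
    proj_y = prob.max(dim=1).values   # (H,)
    return dice_loss(proj_x, gt_x) + dice_loss(proj_y, gt_y)
```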
Modeling Multi-Label Action Dependencies for Temporal Action Localization
Praveen Tirupattur, Kevin Duarte, Yogesh S Rawat, Mubarak Shah
Real-world videos contain many complex actions with inherent relationships between action classes. In this work, we propose an attention-based architecture that models these action relationships for the task of temporal action localization in untrimmed videos. As opposed to previous works which leverage video-level co-occurrence of actions, we distinguish the relationships between actions that occur at the same time-step and actions that occur at different time-steps (i.e. those which precede or follow each other). We define these distinct relationships as action dependencies. We propose to improve action localization performance by modeling these action dependencies in a novel attention-based Multi-Label Action Dependency (MLAD) layer. The MLAD layer consists of two branches: a Co-occurrence Dependency Branch and a Temporal Dependency Branch to model co-occurrence action dependencies and temporal action dependencies, respectively. We observe that existing metrics used for multi-label classification do not explicitly measure how well action dependencies are modeled; therefore, we propose novel metrics which consider both co-occurrence and temporal dependencies between action classes. Through empirical evaluation and extensive analysis, we show improved performance over state-of-the-art methods on multi-label action localization benchmarks (MultiTHUMOS and Charades) in terms of f-mAP and our proposed metric.
https://openaccess.thecvf.com/content/CVPR2021/papers/Tirupattur_Modeling_Multi-Label_Action_Dependencies_for_Temporal_Action_Localization_CVPR_2021_paper.pdf
http://arxiv.org/abs/2103.03027
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Tirupattur_Modeling_Multi-Label_Action_Dependencies_for_Temporal_Action_Localization_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Tirupattur_Modeling_Multi-Label_Action_Dependencies_for_Temporal_Action_Localization_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Tirupattur_Modeling_Multi-Label_Action_CVPR_2021_supplemental.zip
null
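As a rough illustration of the two dependency branches described in the MLAD entry above, the sketch below applies one attention over action classes within a time-step (co-occurrence) and one over time-steps within a class (temporal) to per-class features. The dimensions, the fusion by addition, and the module layout are assumptions, not the paper's exact design.

```python
# Two-branch dependency sketch over per-class, per-time-step features (assumed layout).
import torch
import torch.nn as nn

class ActionDependencyLayer(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.cooccur = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        # x: (T, C, D) features for T time-steps and C action classes
        # co-occurrence branch: classes attend to each other within a time-step
        co, _ = self.cooccur(x, x, x)                    # batch dimension = T
        # temporal branch: each class attends over time-steps
        xt = x.transpose(0, 1)                           # (C, T, D)
        tm, _ = self.temporal(xt, xt, xt)
        return co + tm.transpose(0, 1)                   # fuse both branches

# Usage sketch: layer = ActionDependencyLayer(dim=256); out = layer(torch.randn(64, 20, 256))
```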
HCRF-Flow: Scene Flow From Point Clouds With Continuous High-Order CRFs and Position-Aware Flow Embedding
Ruibo Li, Guosheng Lin, Tong He, Fayao Liu, Chunhua Shen
Scene flow in 3D point clouds plays an important role in understanding dynamic environments. Although significant advances have been made by deep neural networks, the performance is far from satisfactory as only per-point translational motion is considered, neglecting the constraints of the rigid motion in local regions. To address the issue, we propose to introduce motion consistency to force smoothness among neighboring points. In addition, constraints on the rigidity of the local transformation are also added by sharing unique rigid motion parameters for all points within each local region. To this end, a high-order CRF-based relation module (Con-HCRFs) is deployed to explore both point-wise smoothness and region-wise rigidity. To empower the CRFs with a discriminative unary term, we also introduce a position-aware flow estimation module to be incorporated into the Con-HCRFs. Comprehensive experiments on FlyingThings3D and KITTI show that our proposed framework (HCRF-Flow) achieves state-of-the-art performance and substantially outperforms previous approaches.
https://openaccess.thecvf.com/content/CVPR2021/papers/Li_HCRF-Flow_Scene_Flow_From_Point_Clouds_With_Continuous_High-Order_CRFs_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Li_HCRF-Flow_Scene_Flow_From_Point_Clouds_With_Continuous_High-Order_CRFs_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Li_HCRF-Flow_Scene_Flow_From_Point_Clouds_With_Continuous_High-Order_CRFs_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Li_HCRF-Flow_Scene_Flow_CVPR_2021_supplemental.pdf
null
Lite-HRNet: A Lightweight High-Resolution Network
Changqian Yu, Bin Xiao, Changxin Gao, Lu Yuan, Lei Zhang, Nong Sang, Jingdong Wang
We present an efficient high-resolution network, Lite-HRNet, for human pose estimation. We start by simply applying the efficient shuffle block in ShuffleNet to HRNet (high-resolution network), yielding stronger performance over popular lightweight networks, such as MobileNet, ShuffleNet, and Small HRNet. We find that the heavily-used pointwise (1x1) convolutions in shuffle blocks become the computational bottleneck. We introduce a lightweight unit, conditional channel weighting, to replace the costly pointwise (1x1) convolutions in shuffle blocks. The complexity of channel weighting is linear w.r.t. the number of channels and lower than the quadratic time complexity of pointwise convolutions. Our solution learns the weights from all the channels and over the multiple resolutions that are readily available in the parallel branches of HRNet. It uses the weights as the bridge to exchange information across channels and resolutions, compensating for the role played by the pointwise (1x1) convolution. Lite-HRNet demonstrates superior results on human pose estimation over popular lightweight networks. Moreover, Lite-HRNet can be easily applied to the semantic segmentation task in the same lightweight manner. The code and models are publicly available at https://github.com/HRNet/Lite-HRNet.
https://openaccess.thecvf.com/content/CVPR2021/papers/Yu_Lite-HRNet_A_Lightweight_High-Resolution_Network_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Yu_Lite-HRNet_A_Lightweight_High-Resolution_Network_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Yu_Lite-HRNet_A_Lightweight_High-Resolution_Network_CVPR_2021_paper.html
CVPR 2021
null
null
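The conditional channel weighting idea summarized in the Lite-HRNet entry above can be sketched as follows. This is a simplified stand-in, not the released module: per-channel weights are computed from spatially pooled features of all parallel resolution branches and used to rescale each branch in place of a pointwise convolution; the bottleneck ratio and pooling choice are assumptions.

```python
# Simplified cross-resolution channel weighting (assumed design, not the released code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossResolutionChannelWeighting(nn.Module):
    def __init__(self, channels_per_branch):
        super().__init__()
        total = sum(channels_per_branch)
        self.channels = channels_per_branch
        # tiny bottleneck mapping pooled features to per-channel weights
        self.fc = nn.Sequential(
            nn.Conv2d(total, max(total // 4, 1), 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(max(total // 4, 1), total, 1),
            nn.Sigmoid(),
        )

    def forward(self, branches):
        # branches: list of tensors (N, C_i, H_i, W_i) at different resolutions
        pooled = [F.adaptive_avg_pool2d(x, 1) for x in branches]   # (N, C_i, 1, 1)
        weights = self.fc(torch.cat(pooled, dim=1))                # (N, sum C_i, 1, 1)
        weights = torch.split(weights, self.channels, dim=1)
        return [x * w for x, w in zip(branches, weights)]

# Usage sketch: CrossResolutionChannelWeighting([40, 80])([torch.randn(2, 40, 64, 48), torch.randn(2, 80, 32, 24)])
```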
Self-Supervised Video Representation Learning by Context and Motion Decoupling
Lianghua Huang, Yu Liu, Bin Wang, Pan Pan, Yinghui Xu, Rong Jin
A key challenge in self-supervised video representation learning is how to effectively capture motion information besides context bias. While most existing works implicitly achieve this with video-specific pretext tasks (e.g., predicting clip orders, time arrows, and paces), we develop a method that explicitly decouples motion supervision from context bias through a carefully designed pretext task. Specifically, we take the key frames and motion vectors in compressed videos (e.g., in H.264 format) as the supervision sources for context and motion, respectively, which can be efficiently extracted at over 500 fps on a CPU. Then we design two pretext tasks that are jointly optimized: a context matching task, where a pairwise contrastive loss is cast between video clip and key frame features; and a motion prediction task, where clip features, passed through an encoder-decoder network, are used to estimate motion features in the near future. These two tasks use a shared video backbone and separate MLP heads. Experiments show that our approach improves the quality of the learned video representation over previous works, where we obtain absolute gains of 16.0% and 11.1% in video retrieval recall on UCF101 and HMDB51, respectively. Moreover, we find motion prediction to be a strong regularization for video networks, where using it as an auxiliary task improves the accuracy of action recognition by a margin of 7.4%-13.8%.
https://openaccess.thecvf.com/content/CVPR2021/papers/Huang_Self-Supervised_Video_Representation_Learning_by_Context_and_Motion_Decoupling_CVPR_2021_paper.pdf
http://arxiv.org/abs/2104.00862
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Huang_Self-Supervised_Video_Representation_Learning_by_Context_and_Motion_Decoupling_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Huang_Self-Supervised_Video_Representation_Learning_by_Context_and_Motion_Decoupling_CVPR_2021_paper.html
CVPR 2021
null
null
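The "context matching" term described above is a pairwise contrastive loss between clip and key-frame features; a minimal InfoNCE-style sketch is given below. The temperature, normalization, and symmetric form are assumptions rather than the paper's settings, and the motion-prediction task is omitted.

```python
# InfoNCE-style context matching between clip and key-frame features (assumed settings).
import torch
import torch.nn.functional as F

def context_matching_loss(clip_feats, keyframe_feats, temperature=0.07):
    """clip_feats, keyframe_feats: (N, D) features of N videos in a batch."""
    clip = F.normalize(clip_feats, dim=1)
    key = F.normalize(keyframe_feats, dim=1)
    logits = clip @ key.t() / temperature        # (N, N) similarity matrix
    targets = torch.arange(clip.size(0), device=clip.device)
    # symmetric contrastive loss: clip -> key frame and key frame -> clip
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```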
ReAgent: Point Cloud Registration Using Imitation and Reinforcement Learning
Dominik Bauer, Timothy Patten, Markus Vincze
Point cloud registration is a common step in many 3D computer vision tasks such as object pose estimation, where a 3D model is aligned to an observation. Classical registration methods generalize well to novel domains but fail when given a noisy observation or a bad initialization. Learning-based methods, in contrast, are more robust but lack generalization capacity. We propose to consider iterative point cloud registration as a reinforcement learning task and, to this end, present a novel registration agent (ReAgent). We employ imitation learning to initialize its discrete registration policy based on a steady expert policy. Integration with policy optimization, based on our proposed alignment reward, further improves the agent's registration performance. We compare our approach to classical and learning-based registration methods on both ModelNet40 (synthetic) and ScanObjectNN (real data) and show that ReAgent achieves state-of-the-art accuracy. The lightweight architecture of the agent, moreover, enables reduced inference time compared to related approaches.
https://openaccess.thecvf.com/content/CVPR2021/papers/Bauer_ReAgent_Point_Cloud_Registration_Using_Imitation_and_Reinforcement_Learning_CVPR_2021_paper.pdf
http://arxiv.org/abs/2103.15231
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Bauer_ReAgent_Point_Cloud_Registration_Using_Imitation_and_Reinforcement_Learning_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Bauer_ReAgent_Point_Cloud_Registration_Using_Imitation_and_Reinforcement_Learning_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Bauer_ReAgent_Point_Cloud_CVPR_2021_supplemental.pdf
null
Uncertainty Guided Collaborative Training for Weakly Supervised Temporal Action Detection
Wenfei Yang, Tianzhu Zhang, Xiaoyuan Yu, Tian Qi, Yongdong Zhang, Feng Wu
Weakly supervised temporal action detection aims to localize temporal boundaries of actions and identify their categories simultaneously, with only video-level category labels available during training. Among existing methods, attention-based methods have achieved superior performance by separating action and non-action segments. However, without segment-level ground-truth supervision, the quality of the attention weights hinders the performance of these methods. To alleviate this problem, we propose a novel Uncertainty Guided Collaborative Training (UGCT) strategy, which mainly includes two key designs: (1) an online pseudo label generation module, in which the RGB and FLOW streams work collaboratively to learn from each other; (2) an uncertainty aware learning module, which can mitigate the noise in the generated pseudo labels. These two designs work together to promote the model performance effectively and efficiently. Experimental results on three state-of-the-art attention-based methods demonstrate that the proposed training strategy can significantly improve their performance, e.g., by more than 4% for all three methods in terms of mAP@IoU=0.5 on the THUMOS14 dataset.
https://openaccess.thecvf.com/content/CVPR2021/papers/Yang_Uncertainty_Guided_Collaborative_Training_for_Weakly_Supervised_Temporal_Action_Detection_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Yang_Uncertainty_Guided_Collaborative_Training_for_Weakly_Supervised_Temporal_Action_Detection_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Yang_Uncertainty_Guided_Collaborative_Training_for_Weakly_Supervised_Temporal_Action_Detection_CVPR_2021_paper.html
CVPR 2021
null
null
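The uncertainty aware learning module summarized in the UGCT entry above can be illustrated, under assumptions, with a generic uncertainty-weighted pseudo-label loss in the style of learned aleatoric uncertainty: the cross-entropy on pseudo labels produced by the other stream is down-weighted where the predicted uncertainty is high, with a log-variance penalty preventing the degenerate all-uncertain solution. This is a stand-in, not the paper's exact formulation.

```python
# Generic uncertainty-weighted pseudo-label loss (assumed formulation, for illustration).
import torch
import torch.nn.functional as F

def uncertainty_weighted_loss(logits, pseudo_labels, log_var):
    """logits:        (T, C) per-segment class scores of one stream.
    pseudo_labels: (T,) segment labels generated by the other stream.
    log_var:       (T,) predicted log-uncertainty per segment."""
    ce = F.cross_entropy(logits, pseudo_labels, reduction="none")   # (T,)
    weights = torch.exp(-log_var)                                   # low weight where uncertain
    return (weights * ce + log_var).mean()                          # penalty avoids trivial solution
```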
Dynamic Probabilistic Graph Convolution for Facial Action Unit Intensity Estimation
Tengfei Song, Zijun Cui, Yuru Wang, Wenming Zheng, Qiang Ji
Deep learning methods have been widely applied to automatic facial action unit (AU) intensity estimation and have achieved state-of-the-art performance. These methods, however, are mostly appearance-based and fail to exploit the underlying structural information among the AUs. In this paper, we propose a novel dynamic probabilistic graph convolution (DPG) model to simultaneously exploit AU appearances, AU dynamics, and their semantic structural dependencies for AU intensity estimation. First, we propose to use a Bayesian network to capture the inherent dependencies among the AUs. Second, we introduce a probabilistic graph convolution that allows performing graph convolution on the distribution of Bayesian network structures to extract AU structural features. Finally, we introduce a dynamic deep model based on LSTM to combine AU appearance features, AU dynamic features, and AU structural features for improved AU intensity estimation. In experiments, our method achieves performance comparable to, and even better than, state-of-the-art methods on two benchmark facial AU intensity estimation databases, i.e., FERA 2015 and DISFA.
https://openaccess.thecvf.com/content/CVPR2021/papers/Song_Dynamic_Probabilistic_Graph_Convolution_for_Facial_Action_Unit_Intensity_Estimation_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Song_Dynamic_Probabilistic_Graph_Convolution_for_Facial_Action_Unit_Intensity_Estimation_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Song_Dynamic_Probabilistic_Graph_Convolution_for_Facial_Action_Unit_Intensity_Estimation_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Song_Dynamic_Probabilistic_Graph_CVPR_2021_supplemental.pdf
null
Few-Shot Segmentation Without Meta-Learning: A Good Transductive Inference Is All You Need?
Malik Boudiaf, Hoel Kervadec, Ziko Imtiaz Masud, Pablo Piantanida, Ismail Ben Ayed, Jose Dolz
We show that the way inference is performed in few-shot segmentation tasks has a substantial effect on performance, an aspect often overlooked in the literature in favor of the meta-learning paradigm. We introduce a transductive inference for a given query image, leveraging the statistics of its unlabeled pixels, by optimizing a new loss containing three complementary terms: i) the cross-entropy on the labeled support pixels; ii) the Shannon entropy of the posteriors on the unlabeled query image pixels; and iii) a global KL-divergence regularizer based on the proportion of the predicted foreground. As our inference uses a simple linear classifier on the extracted features, its computational load is comparable to inductive inference and it can be used on top of any base training. Foregoing episodic training and using only standard cross-entropy training on the base classes, our inference yields competitive performance on standard benchmarks in the 1-shot scenario. As the number of available shots increases, the gap in performance widens: on PASCAL-5i, our method brings improvements of about 5% and 6% over the state of the art in the 5- and 10-shot scenarios, respectively. Furthermore, we introduce a new setting that includes domain shifts, where the base and novel classes are drawn from different datasets. Our method achieves the best performance in this more realistic setting. Our code is freely available online: https://github.com/mboudiaf/RePRI-for-Few-Shot-Segmentation.
https://openaccess.thecvf.com/content/CVPR2021/papers/Boudiaf_Few-Shot_Segmentation_Without_Meta-Learning_A_Good_Transductive_Inference_Is_All_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Boudiaf_Few-Shot_Segmentation_Without_Meta-Learning_A_Good_Transductive_Inference_Is_All_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Boudiaf_Few-Shot_Segmentation_Without_Meta-Learning_A_Good_Transductive_Inference_Is_All_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Boudiaf_Few-Shot_Segmentation_Without_CVPR_2021_supplemental.pdf
null
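The three-term transductive objective listed in the abstract above (cross-entropy on support pixels, Shannon entropy on query pixels, and a KL regularizer on the predicted foreground proportion) can be sketched as below. The foreground prior `pi`, the uniform term weighting, and the binary foreground/background setup are simplifying assumptions; the linked repository is the authoritative implementation.

```python
# Sketch of a three-term transductive loss for binary few-shot segmentation (assumed weighting).
import torch
import torch.nn.functional as F

def transductive_loss(support_logits, support_labels, query_logits, pi=0.5, eps=1e-8):
    """support_logits: (Ns, 2, H, W) logits on labeled support images.
    support_labels: (Ns, H, W) binary ground-truth masks.
    query_logits:   (Nq, 2, H, W) logits on the unlabeled query image(s)."""
    # (i) cross-entropy on labeled support pixels
    ce = F.cross_entropy(support_logits, support_labels.long())
    # (ii) Shannon entropy of the query posteriors
    q = query_logits.softmax(dim=1)
    entropy = -(q * (q + eps).log()).sum(dim=1).mean()
    # (iii) KL divergence between predicted foreground proportion and a prior pi
    prop = q[:, 1].mean()
    prior = torch.tensor([1 - pi, pi])
    pred = torch.stack([1 - prop, prop])
    kl = (pred * ((pred + eps) / (prior + eps)).log()).sum()
    return ce + entropy + kl
```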
Spatial-Temporal Correlation and Topology Learning for Person Re-Identification in Videos
Jiawei Liu, Zheng-Jun Zha, Wei Wu, Kecheng Zheng, Qibin Sun
Video-based person re-identification aims to match pedestrians from video sequences across non-overlapping camera views. The key factor for video person re-identification is to effectively exploit both spatial and temporal clues from video sequences. In this work, we propose a novel Spatial-Temporal Correlation and Topology Learning framework (CTL) to pursue discriminative and robust representations by modeling cross-scale spatial-temporal correlation. Specifically, CTL utilizes a CNN backbone and a key-points estimator to extract semantic local features from the human body at multiple granularities as graph nodes. It explores a context-reinforced topology to construct multi-scale graphs by considering both global contextual information and the physical connections of the human body. Moreover, a 3D graph convolution and a cross-scale graph convolution are designed, which facilitate direct cross-spacetime and cross-scale information propagation for capturing hierarchical spatial-temporal dependencies and structural information. By jointly performing the two convolutions, CTL effectively mines comprehensive clues that are complementary to appearance information, enhancing representational capacity. Extensive experiments on two video benchmarks demonstrate the effectiveness of the proposed method and its state-of-the-art performance.
https://openaccess.thecvf.com/content/CVPR2021/papers/Liu_Spatial-Temporal_Correlation_and_Topology_Learning_for_Person_Re-Identification_in_Videos_CVPR_2021_paper.pdf
http://arxiv.org/abs/2104.08241
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Liu_Spatial-Temporal_Correlation_and_Topology_Learning_for_Person_Re-Identification_in_Videos_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Liu_Spatial-Temporal_Correlation_and_Topology_Learning_for_Person_Re-Identification_in_Videos_CVPR_2021_paper.html
CVPR 2021
null
null
SPSG: Self-Supervised Photometric Scene Generation From RGB-D Scans
Angela Dai, Yawar Siddiqui, Justus Thies, Julien Valentin, Matthias Niessner
We present SPSG, a novel approach to generate high-quality, colored 3D models of scenes from RGB-D scan observations by learning to infer unobserved scene geometry and color in a self-supervised fashion. Our self-supervised approach learns to jointly inpaint geometry and color by correlating an incomplete RGB-D scan with a more complete version of that scan. Notably, rather than relying on 3D reconstruction losses to inform our 3D geometry and color reconstruction, we propose adversarial and perceptual losses operating on 2D renderings in order to achieve high-resolution, high-quality colored reconstructions of scenes. This exploits the high-resolution, self-consistent signal from individual raw RGB-D frames, in contrast to fused 3D reconstructions of the frames, which exhibit inconsistencies from view-dependent effects, such as color balancing or pose inconsistencies. Thus, by informing our 3D scene generation directly through 2D signals, we produce high-quality colored reconstructions of 3D scenes, outperforming the state of the art on both synthetic and real data.
https://openaccess.thecvf.com/content/CVPR2021/papers/Dai_SPSG_Self-Supervised_Photometric_Scene_Generation_From_RGB-D_Scans_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Dai_SPSG_Self-Supervised_Photometric_Scene_Generation_From_RGB-D_Scans_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Dai_SPSG_Self-Supervised_Photometric_Scene_Generation_From_RGB-D_Scans_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Dai_SPSG_Self-Supervised_Photometric_CVPR_2021_supplemental.zip
null
Neural Auto-Exposure for High-Dynamic Range Object Detection
Emmanuel Onzon, Fahim Mannan, Felix Heide
Real-world scenes have a dynamic range of up to 280 dB that today's imaging sensors cannot directly capture. Existing live vision pipelines tackle this fundamental challenge by relying on high dynamic range (HDR) sensors that try to recover HDR images from multiple captures with different exposures. While HDR sensors substantially increase the dynamic range, they are not without disadvantages, including severe artifacts for dynamic scenes, reduced fill-factor, lower resolution, and high sensor cost. At the same time, traditional auto-exposure methods for low-dynamic range sensors have advanced as proprietary methods relying on image statistics separated from downstream vision algorithms. In this work, we revisit auto-exposure control as an alternative to HDR sensors. We propose a neural network for exposure selection that is trained jointly, end-to-end with an object detector and an image signal processing (ISP) pipeline. To this end, we use an HDR dataset for automotive object detection and an HDR training procedure. We validate that the proposed neural auto-exposure control, which is tailored to object detection, outperforms conventional auto-exposure methods by more than 6 points in mean average precision (mAP).
https://openaccess.thecvf.com/content/CVPR2021/papers/Onzon_Neural_Auto-Exposure_for_High-Dynamic_Range_Object_Detection_CVPR_2021_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2021/html/Onzon_Neural_Auto-Exposure_for_High-Dynamic_Range_Object_Detection_CVPR_2021_paper.html
https://openaccess.thecvf.com/content/CVPR2021/html/Onzon_Neural_Auto-Exposure_for_High-Dynamic_Range_Object_Detection_CVPR_2021_paper.html
CVPR 2021
https://openaccess.thecvf.com/content/CVPR2021/supplemental/Onzon_Neural_Auto-Exposure_for_CVPR_2021_supplemental.pdf
null