title | authors | abstract | pdf | arXiv | bibtex | url | detail_url | tags | supp | |
---|---|---|---|---|---|---|---|---|---|---|
End-to-End Learning for Joint Image Demosaicing, Denoising and Super-Resolution | Wenzhu Xing, Karen Egiazarian | Image denoising, demosaicing and super-resolution are key problems of image restoration that have been well studied in recent decades. Often, in practice, one has to solve these problems simultaneously. The problem of finding a joint solution to multiple image restoration tasks has only recently begun to attract increased attention from researchers. In this paper, we propose an end-to-end solution for joint demosaicing, denoising and super-resolution based on a specially designed deep convolutional neural network (CNN). We systematically study different methods of solving this problem and compare them with the proposed method. Extensive experiments carried out on large image datasets demonstrate that our method outperforms the state-of-the-art both quantitatively and qualitatively. Finally, we apply various loss functions in the proposed scheme and demonstrate that using the mean absolute error as the loss function yields superior results compared to the other choices. | https://openaccess.thecvf.com/content/CVPR2021/papers/Xing_End-to-End_Learning_for_Joint_Image_Demosaicing_Denoising_and_Super-Resolution_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Xing_End-to-End_Learning_for_Joint_Image_Demosaicing_Denoising_and_Super-Resolution_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Xing_End-to-End_Learning_for_Joint_Image_Demosaicing_Denoising_and_Super-Resolution_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Xing_End-to-End_Learning_for_CVPR_2021_supplemental.pdf | null |
Keep Your Eyes on the Lane: Real-Time Attention-Guided Lane Detection | Lucas Tabelini, Rodrigo Berriel, Thiago M. Paixao, Claudine Badue, Alberto F. De Souza, Thiago Oliveira-Santos | Modern lane detection methods have achieved remarkable performances in complex real-world scenarios, but many have issues maintaining real-time efficiency, which is important for autonomous vehicles. In this work, we propose LaneATT: an anchor-based deep lane detection model, which, akin to other generic deep object detectors, uses the anchors for the feature pooling step. Since lanes follow a regular pattern and are highly correlated, we hypothesize that in some cases global information may be crucial to infer their positions, especially in conditions such as occlusion, missing lane markers, and others. Thus, this work proposes a novel anchor-based attention mechanism that aggregates global information. The model was evaluated extensively on three of the most widely used datasets in the literature. The results show that our method outperforms the current state-of-the-art methods showing both higher efficacy and efficiency. Moreover, an ablation study is performed along with a discussion on efficiency trade-off options that are useful in practice. Code and models are available at https://github.com/lucastabelini/LaneATT. | https://openaccess.thecvf.com/content/CVPR2021/papers/Tabelini_Keep_Your_Eyes_on_the_Lane_Real-Time_Attention-Guided_Lane_Detection_CVPR_2021_paper.pdf | http://arxiv.org/abs/2010.12035 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Tabelini_Keep_Your_Eyes_on_the_Lane_Real-Time_Attention-Guided_Lane_Detection_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Tabelini_Keep_Your_Eyes_on_the_Lane_Real-Time_Attention-Guided_Lane_Detection_CVPR_2021_paper.html | CVPR 2021 | null | null |
Lesion-Aware Transformers for Diabetic Retinopathy Grading | Rui Sun, Yihao Li, Tianzhu Zhang, Zhendong Mao, Feng Wu, Yongdong Zhang | Diabetic retinopathy (DR) is the leading cause of permanent blindness in the working-age population. Automatic DR diagnosis, including DR grading and lesion discovery, can assist ophthalmologists in designing tailored treatments for patients. However, most existing methods treat DR grading and lesion discovery as two independent tasks, which requires lesion annotations as learning guidance and limits actual deployment. To alleviate this problem, we propose a novel lesion-aware transformer (LAT) for DR grading and lesion discovery jointly in a unified deep model via an encoder-decoder structure including a pixel relation based encoder and a lesion filter based decoder. The proposed LAT enjoys several merits. First, to the best of our knowledge, this is the first work to formulate lesion discovery as a weakly supervised lesion localization problem via a transformer decoder. Second, to learn lesion filters well with only image-level labels, we design two effective mechanisms including lesion region importance and lesion region diversity for identifying diverse lesion regions. Extensive experimental results on three challenging benchmarks including Messidor-1, Messidor-2 and EyePACS demonstrate that the proposed LAT performs favorably against state-of-the-art DR grading and lesion discovery methods. | https://openaccess.thecvf.com/content/CVPR2021/papers/Sun_Lesion-Aware_Transformers_for_Diabetic_Retinopathy_Grading_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Sun_Lesion-Aware_Transformers_for_Diabetic_Retinopathy_Grading_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Sun_Lesion-Aware_Transformers_for_Diabetic_Retinopathy_Grading_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Sun_Lesion-Aware_Transformers_for_CVPR_2021_supplemental.pdf | null |
Involution: Inverting the Inherence of Convolution for Visual Recognition | Duo Li, Jie Hu, Changhu Wang, Xiangtai Li, Qi She, Lei Zhu, Tong Zhang, Qifeng Chen | Convolution has been the core ingredient of modern neural networks, triggering the surge of deep learning in vision. In this work, we rethink the inherent principles of standard convolution for vision tasks, specifically spatial-agnostic and channel-specific. Instead, we present a novel atomic operation for deep neural networks by inverting the aforementioned design principles of convolution, coined as involution. We additionally demystify the recent popular self-attention operator and subsume it into our involution family as an over-complicated instantiation. The proposed involution operator could be leveraged as fundamental bricks to build the new generation of neural networks for visual recognition, powering different deep learning models on several prevalent benchmarks, including ImageNet classification, COCO detection and segmentation, together with Cityscapes segmentation. Our involution-based models improve the performance of convolutional baselines using ResNet-50 by up to 1.6% top-1 accuracy, 2.5% and 2.4% bounding box AP, and 4.7% mean IoU absolutely while compressing the computational cost to 66%, 65%, 72%, and 57% on the above benchmarks, respectively. Code and pre-trained models for all the tasks are available at https://github.com/d-li14/involution. | https://openaccess.thecvf.com/content/CVPR2021/papers/Li_Involution_Inverting_the_Inherence_of_Convolution_for_Visual_Recognition_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.06255 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Li_Involution_Inverting_the_Inherence_of_Convolution_for_Visual_Recognition_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Li_Involution_Inverting_the_Inherence_of_Convolution_for_Visual_Recognition_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Li_Involution_Inverting_the_CVPR_2021_supplemental.pdf | null |
QPIC: Query-Based Pairwise Human-Object Interaction Detection With Image-Wide Contextual Information | Masato Tamura, Hiroki Ohashi, Tomoaki Yoshinaga | We propose a simple, intuitive yet powerful method for human-object interaction (HOI) detection. HOIs are so diverse in spatial distribution in an image that existing CNN-based methods face the following three major drawbacks: they cannot leverage image-wide features due to CNN's locality, they rely on a manually defined location-of-interest for the feature aggregation, which sometimes does not cover contextually important regions, and they cannot help but mix up the features for multiple HOI instances if they are located closely. To overcome these drawbacks, we propose a transformer-based feature extractor, in which an attention mechanism and query-based detection play key roles. The attention mechanism is effective in aggregating contextually important information image-wide, while the queries, which we design in such a way that each query captures at most one human-object pair, can avoid mixing up the features from multiple instances. This transformer-based feature extractor produces such effective embeddings that the subsequent detection heads can be fairly simple and intuitive. Extensive analysis reveals that the proposed method successfully extracts contextually important features, and thus outperforms existing methods by large margins (5.37 mAP on HICO-DET, and 5.6 mAP on V-COCO). The source code is available at https://github.com/hitachi-rd-cv/qpic. | https://openaccess.thecvf.com/content/CVPR2021/papers/Tamura_QPIC_Query-Based_Pairwise_Human-Object_Interaction_Detection_With_Image-Wide_Contextual_Information_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.05399 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Tamura_QPIC_Query-Based_Pairwise_Human-Object_Interaction_Detection_With_Image-Wide_Contextual_Information_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Tamura_QPIC_Query-Based_Pairwise_Human-Object_Interaction_Detection_With_Image-Wide_Contextual_Information_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Tamura_QPIC_Query-Based_Pairwise_CVPR_2021_supplemental.pdf | null |
Home Action Genome: Cooperative Compositional Action Understanding | Nishant Rai, Haofeng Chen, Jingwei Ji, Rishi Desai, Kazuki Kozuka, Shun Ishizaka, Ehsan Adeli, Juan Carlos Niebles | Existing research on action recognition treats activities as monolithic events occurring in videos. Recently, the benefits of formulating actions as a combination of atomic-actions have shown promise in improving action understanding with the emergence of datasets containing such annotations, allowing us to learn representations capturing this information. However, there remains a lack of studies that extend action composition and leverage multiple viewpoints and multiple modalities of data for representation learning. To promote research in this direction, we introduce Home Action Genome (HOMAGE): a multi-view action dataset with multiple modalities and view-points supplemented with hierarchical activity and atomic action labels together with dense scene composition labels. Leveraging rich multi-modal and multi-view settings, we propose Cooperative Compositional Action Understanding (CCAU), a cooperative learning framework for hierarchical action recognition that is aware of compositional action elements. CCAU shows consistent performance improvements across all modalities. Furthermore, we demonstrate the utility of co-learning compositions in few-shot action recognition by achieving 28.6% mAP with just a single sample. | https://openaccess.thecvf.com/content/CVPR2021/papers/Rai_Home_Action_Genome_Cooperative_Compositional_Action_Understanding_CVPR_2021_paper.pdf | http://arxiv.org/abs/2105.05226 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Rai_Home_Action_Genome_Cooperative_Compositional_Action_Understanding_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Rai_Home_Action_Genome_Cooperative_Compositional_Action_Understanding_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Rai_Home_Action_Genome_CVPR_2021_supplemental.zip | null |
Deep Lesion Tracker: Monitoring Lesions in 4D Longitudinal Imaging Studies | Jinzheng Cai, Youbao Tang, Ke Yan, Adam P. Harrison, Jing Xiao, Gigin Lin, Le Lu | Monitoring treatment response in longitudinal studies plays an important role in clinical practice. Accurately identifying lesions across serial imaging follow-up is the core of the monitoring procedure. Typically, this incorporates both image and anatomical considerations. However, matching lesions manually is labor-intensive and time-consuming. In this work, we present deep lesion tracker (DLT), a deep learning approach that uses both appearance- and anatomical-based signals. To incorporate anatomical constraints, we propose an anatomical signal encoder, which prevents lesions from being matched with visually similar but spurious regions. In addition, we present a new formulation for Siamese networks that avoids the heavy computational loads of 3D cross-correlation. To present our network with a greater variety of images, we also propose a self-supervised learning strategy to train trackers with unpaired images, overcoming barriers to data collection. To train and evaluate our tracker, we introduce and release the first lesion tracking benchmark, consisting of 3891 lesion pairs from the public DeepLesion database. The proposed method, DLT, locates lesion centers with a mean error distance of 7mm. This is 5% better than a leading registration algorithm while running 14 times faster with whole CT volumes. We demonstrate even greater improvements over detector or similarity-learning alternatives. DLT also generalizes well on an external clinical test set of 100 longitudinal studies, achieving 88% accuracy. Finally, we plug DLT into an automatic tumor monitoring workflow where it leads to an accuracy of 85% in assessing lesion treatment responses, which is only 0.46% lower than the accuracy of manual inputs. | https://openaccess.thecvf.com/content/CVPR2021/papers/Cai_Deep_Lesion_Tracker_Monitoring_Lesions_in_4D_Longitudinal_Imaging_Studies_CVPR_2021_paper.pdf | http://arxiv.org/abs/2012.04872 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Cai_Deep_Lesion_Tracker_Monitoring_Lesions_in_4D_Longitudinal_Imaging_Studies_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Cai_Deep_Lesion_Tracker_Monitoring_Lesions_in_4D_Longitudinal_Imaging_Studies_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Cai_Deep_Lesion_Tracker_CVPR_2021_supplemental.pdf | null |
Learning To Warp for Style Transfer | Xiao-Chang Liu, Yong-Liang Yang, Peter Hall | Since its inception in 2015, Style Transfer has focused on texturing a content image using an art exemplar. Recently, the geometric changes that artists make have been acknowledged as an important component of style. Our contribution is to propose a neural network that, uniquely, learns a mapping from a 4D array of inter-feature distances to a non-parametric 2D warp field. The system is generic in not being limited by semantic class: a single learned model suffices, and all examples in this paper are output from one model. Our approach combines the benefits of the high speed of Liu et al. with the non-parametric warping of Kim et al. Furthermore, our system extends the normal NST paradigm: although it can be used with a single exemplar, we also allow two style exemplars, one for texture and another for geometry. This supports far greater flexibility in use cases than single exemplars can provide. | https://openaccess.thecvf.com/content/CVPR2021/papers/Liu_Learning_To_Warp_for_Style_Transfer_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Liu_Learning_To_Warp_for_Style_Transfer_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Liu_Learning_To_Warp_for_Style_Transfer_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Liu_Learning_To_Warp_CVPR_2021_supplemental.pdf | null |
Towards Extremely Compact RNNs for Video Recognition With Fully Decomposed Hierarchical Tucker Structure | Miao Yin, Siyu Liao, Xiao-Yang Liu, Xiaodong Wang, Bo Yuan | Recurrent Neural Networks (RNNs) have been widely used in sequence analysis and modeling. However, when processing high-dimensional data, RNNs typically require very large model sizes, thereby bringing a series of deployment challenges. Although various prior works have been proposed to reduce RNN model sizes, executing RNN models in resource-restricted environments is still a very challenging problem. In this paper, we propose to develop extremely compact RNN models with a fully decomposed hierarchical Tucker (FDHT) structure. The HT decomposition not only provides much higher storage cost reduction than other tensor decomposition approaches but also brings better accuracy for the compact RNN models. Meanwhile, unlike existing tensor decomposition-based methods that can only decompose the input-to-hidden layer of RNNs, our proposed full-decomposition approach enables comprehensive compression of entire RNN models while maintaining very high accuracy. Our experimental results on several popular video recognition datasets show that our proposed fully decomposed hierarchical Tucker-based LSTM (FDHT-LSTM) is extremely compact and highly efficient. To the best of our knowledge, FDHT-LSTM, for the first time, consistently achieves very high accuracy with only a few thousand parameters (3,132 to 8,808) on different datasets. Compared with state-of-the-art compressed RNN models, such as TT-LSTM, TR-LSTM and BT-LSTM, our FDHT-LSTM simultaneously enjoys both orders-of-magnitude (3,985x to 10,711x) fewer parameters and significant accuracy improvements (0.6% to 12.7%). | https://openaccess.thecvf.com/content/CVPR2021/papers/Yin_Towards_Extremely_Compact_RNNs_for_Video_Recognition_With_Fully_Decomposed_CVPR_2021_paper.pdf | http://arxiv.org/abs/2104.05758 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Yin_Towards_Extremely_Compact_RNNs_for_Video_Recognition_With_Fully_Decomposed_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Yin_Towards_Extremely_Compact_RNNs_for_Video_Recognition_With_Fully_Decomposed_CVPR_2021_paper.html | CVPR 2021 | null | null |
Self-Supervised Multi-Frame Monocular Scene Flow | Junhwa Hur, Stefan Roth | Estimating 3D scene flow from a sequence of monocular images has been gaining increased attention due to the simple, economical capture setup. Owing to the severe ill-posedness of the problem, the accuracy of current methods has been limited, especially that of efficient, real-time approaches. In this paper, we introduce a multi-frame monocular scene flow network based on self-supervised learning, improving the accuracy over previous networks while retaining real-time efficiency. Based on an advanced two-frame baseline with a split-decoder design, we propose (i) a multi-frame model using a triple frame input and convolutional LSTM connections, (ii) an occlusion-aware census loss for better accuracy, and (iii) a gradient detaching strategy to improve training stability. On the KITTI dataset, we observe state-of-the-art accuracy among monocular scene flow methods based on self-supervised learning. | https://openaccess.thecvf.com/content/CVPR2021/papers/Hur_Self-Supervised_Multi-Frame_Monocular_Scene_Flow_CVPR_2021_paper.pdf | http://arxiv.org/abs/2105.02216 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Hur_Self-Supervised_Multi-Frame_Monocular_Scene_Flow_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Hur_Self-Supervised_Multi-Frame_Monocular_Scene_Flow_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Hur_Self-Supervised_Multi-Frame_Monocular_CVPR_2021_supplemental.pdf | null |
Enriching ImageNet With Human Similarity Judgments and Psychological Embeddings | Brett D. Roads, Bradley C. Love | Advances in supervised learning approaches to object recognition flourished in part because of the availability of high-quality datasets and associated benchmarks. However, these benchmarks---such as ILSVRC---are relatively task-specific, focusing predominately on predicting class labels. We introduce a publicly-available dataset that embodies the task-general capabilities of human perception and reasoning. The Human Similarity Judgments extension to ImageNet (ImageNet-HSJ) is composed of a large set of human similarity judgments that supplements the existing ILSVRC validation set. The new dataset supports a range of task and performance metrics, including evaluation of unsupervised algorithms. We demonstrate two methods of assessment: using the similarity judgments directly and using a psychological embedding trained on the similarity judgments. This embedding space contains an order of magnitude more points (i.e., images) than previous efforts based on human judgments. We were able to scale to the full 50,000 image ILSVRC validation set through a selective sampling process that used variational Bayesian inference and model ensembles to sample aspects of the embedding space that were most uncertain. To demonstrate the utility of ImageNet-HSJ, we used the similarity ratings and the embedding space to evaluate how well several popular models conform to human similarity judgments. One finding is that more complex models that perform better on task-specific benchmarks do not better conform to human semantic judgments. In addition to the human similarity judgments, pre-trained psychological embeddings and code for inferring variational embeddings are made publicly available. ImageNet-HSJ supports the appraisal of internal representations and the development of more human-like models. | https://openaccess.thecvf.com/content/CVPR2021/papers/Roads_Enriching_ImageNet_With_Human_Similarity_Judgments_and_Psychological_Embeddings_CVPR_2021_paper.pdf | http://arxiv.org/abs/2011.11015 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Roads_Enriching_ImageNet_With_Human_Similarity_Judgments_and_Psychological_Embeddings_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Roads_Enriching_ImageNet_With_Human_Similarity_Judgments_and_Psychological_Embeddings_CVPR_2021_paper.html | CVPR 2021 | null | null |
What's in the Image? Explorable Decoding of Compressed Images | Yuval Bahat, Tomer Michaeli | The ever-growing amounts of visual contents captured on a daily basis necessitate the use of lossy compression methods in order to save storage space and transmission bandwidth. While extensive research efforts are devoted to improving compression techniques, every method inevitably discards information. Especially at low bit rates, this information often corresponds to semantically meaningful visual cues, so that decompression involves significant ambiguity. In spite of this fact, existing decompression algorithms typically produce only a single output, and do not allow the viewer to explore the set of images that map to the given compressed code. In this work we propose the first image decompression method to facilitate user-exploration of the diverse set of natural images that could have given rise to the compressed input code, thus granting users the ability to determine what could and what could not have been there in the original scene. Specifically, we develop a novel deep-network based decoder architecture for the ubiquitous JPEG standard, which allows traversing the set of decompressed images that are consistent with the compressed JPEG file. To allow for simple user interaction, we develop a graphical user interface comprising several intuitive exploration tools, including an automatic tool for examining specific solutions of interest. We exemplify our framework on graphical, medical and forensic use cases, demonstrating its wide range of potential applications. | https://openaccess.thecvf.com/content/CVPR2021/papers/Bahat_Whats_in_the_Image_Explorable_Decoding_of_Compressed_Images_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Bahat_Whats_in_the_Image_Explorable_Decoding_of_Compressed_Images_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Bahat_Whats_in_the_Image_Explorable_Decoding_of_Compressed_Images_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Bahat_Whats_in_the_CVPR_2021_supplemental.pdf | null |
Context Modeling in 3D Human Pose Estimation: A Unified Perspective | Xiaoxuan Ma, Jiajun Su, Chunyu Wang, Hai Ci, Yizhou Wang | Estimating 3D human pose from a single image suffers from severe ambiguity since multiple 3D joint configurations may have the same 2D projection. The state-of-the-art methods often rely on context modeling methods such as pictorial structure model (PSM) or graph neural network (GNN) to reduce ambiguity. However, there is no study that rigorously compares them side by side. So we first present a general formula for context modeling in which both PSM and GNN are its special cases. By comparing the two methods, we found that the end-to-end training scheme in GNN and the limb length constraints in PSM are two complementary factors to improve results. To combine their advantages, we propose ContextPose based on attention mechanism that allows enforcing soft limb length constraints in a deep network. The approach effectively reduces the chance of getting absurd 3D pose estimates with incorrect limb lengths and achieves state-of-the-art results on two benchmark datasets. More importantly, the introduction of limb length constraints into deep networks enables the approach to achieve much better generalization performance. | https://openaccess.thecvf.com/content/CVPR2021/papers/Ma_Context_Modeling_in_3D_Human_Pose_Estimation_A_Unified_Perspective_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.15507 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Ma_Context_Modeling_in_3D_Human_Pose_Estimation_A_Unified_Perspective_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Ma_Context_Modeling_in_3D_Human_Pose_Estimation_A_Unified_Perspective_CVPR_2021_paper.html | CVPR 2021 | null | null |
Less Is More: ClipBERT for Video-and-Language Learning via Sparse Sampling | Jie Lei, Linjie Li, Luowei Zhou, Zhe Gan, Tamara L. Berg, Mohit Bansal, Jingjing Liu | The canonical approach to video-and-language learning (e.g., video question answering) dictates a neural model to learn from offline-extracted dense video features from vision models and text features from language models. These feature extractors are trained independently and usually on tasks different from the target domains, rendering these fixed features sub-optimal for downstream tasks. Moreover, due to the high computational overload of dense video features, it is often difficult (or infeasible) to plug feature extractors directly into existing approaches for easy finetuning. To provide a remedy to this dilemma, we propose a generic framework CLIPBERT that enables affordable end-to-end learning for video-and-language tasks, by employing sparse sampling, where only a single or a few sparsely sampled short clips from a video are used at each training step. Experiments on text-to-video retrieval and video question answering on six datasets demonstrate that CLIPBERT outperforms (or is on par with) existing methods that exploit full-length videos, suggesting that end-to-end learning with just a few sparsely sampled clips is often more accurate than using densely extracted offline features from full-length videos, proving the proverbial less-is-more principle. Videos in the datasets are from considerably different domains and lengths, ranging from 3-second generic-domain GIF videos to 180-second YouTube human activity videos, showing the generalization ability of our approach. Comprehensive ablation studies and thorough analyses are provided to dissect what factors lead to this success. | https://openaccess.thecvf.com/content/CVPR2021/papers/Lei_Less_Is_More_ClipBERT_for_Video-and-Language_Learning_via_Sparse_Sampling_CVPR_2021_paper.pdf | http://arxiv.org/abs/2102.06183 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Lei_Less_Is_More_ClipBERT_for_Video-and-Language_Learning_via_Sparse_Sampling_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Lei_Less_Is_More_ClipBERT_for_Video-and-Language_Learning_via_Sparse_Sampling_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Lei_Less_Is_More_CVPR_2021_supplemental.pdf | null |
Consensus Maximisation Using Influences of Monotone Boolean Functions | Ruwan Tennakoon, David Suter, Erchuan Zhang, Tat-Jun Chin, Alireza Bab-Hadiashar | Consensus maximisation (MaxCon), widely used for robust fitting in computer vision, aims to find the largest subset of data that fits the model within some tolerance level. In this paper, we outline the connection between the MaxCon problem and the abstract problem of finding the maximum upper zero of a Monotone Boolean Function (MBF) defined over the Boolean Cube. Then, we link the concept of influences (in an MBF) to the concept of outlier (in MaxCon) and show that influences of points belonging to the largest structure in data would be the smallest under certain conditions. Based on this observation, we present an iterative algorithm to perform consensus maximisation. Results for both synthetic and real visual data experiments show that the MBF-based algorithm is capable of generating a near optimal solution relatively quickly. This is particularly important where there is a large number of outliers (gross or pseudo) in the observed data. | https://openaccess.thecvf.com/content/CVPR2021/papers/Tennakoon_Consensus_Maximisation_Using_Influences_of_Monotone_Boolean_Functions_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.04200 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Tennakoon_Consensus_Maximisation_Using_Influences_of_Monotone_Boolean_Functions_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Tennakoon_Consensus_Maximisation_Using_Influences_of_Monotone_Boolean_Functions_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Tennakoon_Consensus_Maximisation_Using_CVPR_2021_supplemental.pdf | null |
Meta-Mining Discriminative Samples for Kinship Verification | Wanhua Li, Shiwei Wang, Jiwen Lu, Jianjiang Feng, Jie Zhou | Kinship verification aims to find out whether there is a kin relation for a given pair of facial images. Kinship verification databases are born with unbalanced data. For a database with N positive kinship pairs, we naturally obtain N(N-1) negative pairs. How to fully utilize the limited positive pairs and mine discriminative information from sufficient negative samples for kinship verification remains an open issue. To address this problem, we propose a Discriminative Sample Meta-Mining (DSMM) approach in this paper. Unlike existing methods that usually construct a balanced dataset with fixed negative pairs, we propose to utilize all possible pairs and automatically learn discriminative information from data. Specifically, we sample an unbalanced train batch and a balanced meta-train batch for each iteration. Then we learn a meta-miner with the meta-gradient on the balanced meta-train batch. In the end, the samples in the unbalanced train batch are re-weighted by the learned meta-miner to optimize the kinship models. Experimental results on the widely used KinFaceW-I, KinFaceW-II, TSKinFace, and Cornell Kinship datasets demonstrate the effectiveness of the proposed approach. | https://openaccess.thecvf.com/content/CVPR2021/papers/Li_Meta-Mining_Discriminative_Samples_for_Kinship_Verification_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.15108 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Li_Meta-Mining_Discriminative_Samples_for_Kinship_Verification_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Li_Meta-Mining_Discriminative_Samples_for_Kinship_Verification_CVPR_2021_paper.html | CVPR 2021 | null | null |
AQD: Towards Accurate Quantized Object Detection | Peng Chen, Jing Liu, Bohan Zhuang, Mingkui Tan, Chunhua Shen | Network quantization allows inference to be conducted using low-precision arithmetic for improved inference efficiency of deep neural networks on edge devices. However, designing aggressively low-bit (e.g., 2-bit) quantization schemes on complex tasks, such as object detection, still remains challenging in terms of severe performance degradation and unverifiable efficiency on common hardware. In this paper, we propose an Accurate Quantized object Detection solution, termed AQD, to fully get rid of floating-point computation. To this end, we target using fixed-point operations in all kinds of layers, including the convolutional layers, normalization layers, and skip connections, allowing the inference to be executed using integer-only arithmetic. To demonstrate the improved latency-vs-accuracy trade-off, we apply the proposed methods on RetinaNet and FCOS. In particular, experimental results on MS-COCO dataset show that our AQD achieves comparable or even better performance compared with the full-precision counterpart under extremely low-bit schemes, which is of great practical value. Source code and models are available at: https://github.com/aim-uofa/model-quantization | https://openaccess.thecvf.com/content/CVPR2021/papers/Chen_AQD_Towards_Accurate_Quantized_Object_Detection_CVPR_2021_paper.pdf | http://arxiv.org/abs/2007.06919 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Chen_AQD_Towards_Accurate_Quantized_Object_Detection_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Chen_AQD_Towards_Accurate_Quantized_Object_Detection_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Chen_AQD_Towards_Accurate_CVPR_2021_supplemental.pdf | null |
Learning Cross-Modal Retrieval With Noisy Labels | Peng Hu, Xi Peng, Hongyuan Zhu, Liangli Zhen, Jie Lin | Recently, cross-modal retrieval is emerging with the help of deep multimodal learning. However, even for unimodal data, collecting large-scale well-annotated data is expensive and time-consuming, not to mention the additional challenges posed by multiple modalities. Although crowd-sourced annotation, e.g., Amazon's Mechanical Turk, can be utilized to mitigate the labeling cost, it inevitably introduces label noise from non-expert annotators. To tackle this challenge, this paper presents a general Multimodal Robust Learning framework (MRL) for learning with multimodal noisy labels to mitigate noisy samples and correlate distinct modalities simultaneously. To be specific, we propose a Robust Clustering loss (RC) to make the deep networks focus on clean samples instead of noisy ones. Besides, a simple yet effective multimodal loss function, called Multimodal Contrastive loss (MC), is proposed to maximize the mutual information between different modalities, thus alleviating the interference of noisy samples and cross-modal discrepancy. Extensive experiments are conducted on four widely-used multimodal datasets to demonstrate the effectiveness of the proposed approach by comparing it to 14 state-of-the-art methods. | https://openaccess.thecvf.com/content/CVPR2021/papers/Hu_Learning_Cross-Modal_Retrieval_With_Noisy_Labels_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Hu_Learning_Cross-Modal_Retrieval_With_Noisy_Labels_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Hu_Learning_Cross-Modal_Retrieval_With_Noisy_Labels_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Hu_Learning_Cross-Modal_Retrieval_CVPR_2021_supplemental.pdf | null |
LOHO: Latent Optimization of Hairstyles via Orthogonalization | Rohit Saha, Brendan Duke, Florian Shkurti, Graham W. Taylor, Parham Aarabi | Hairstyle transfer is challenging due to hair structure differences in the source and target hair. Therefore, we propose Latent Optimization of Hairstyles via Orthogonalization (LOHO), an optimization-based approach using GAN inversion to infill missing hair structure details in latent space during hairstyle transfer. Our approach decomposes hair into three attributes: perceptual structure, appearance, and style, and includes tailored losses to model each of these attributes independently. Furthermore, we propose two-stage optimization and gradient orthogonalization to enable disentangled latent space optimization of our hair attributes. Using LOHO for latent space manipulation, users can synthesize novel photorealistic images by manipulating hair attributes either individually or jointly, transferring the desired attributes from reference hairstyles. LOHO achieves a superior FID compared with the current state-of-the-art (SOTA) for hairstyle transfer. Additionally, LOHO preserves the subject's identity comparably well according to PSNR and SSIM when compared to SOTA image embedding pipelines. | https://openaccess.thecvf.com/content/CVPR2021/papers/Saha_LOHO_Latent_Optimization_of_Hairstyles_via_Orthogonalization_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.03891 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Saha_LOHO_Latent_Optimization_of_Hairstyles_via_Orthogonalization_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Saha_LOHO_Latent_Optimization_of_Hairstyles_via_Orthogonalization_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Saha_LOHO_Latent_Optimization_CVPR_2021_supplemental.pdf | null |
Single-Shot Freestyle Dance Reenactment | Oran Gafni, Oron Ashual, Lior Wolf | The task of motion transfer between a source dancer and a target person is a special case of the pose transfer problem, in which the target person changes their pose in accordance with the motions of the dancer. In this work, we propose a novel method that can reanimate a single image by arbitrary video sequences, unseen during training. The method combines three networks: (i) a segmentation-mapping network, (ii) a realistic frame-rendering network, and (iii) a face refinement network. By separating this task into three stages, we are able to attain a novel sequence of realistic frames, capturing natural motion and appearance. Our method obtains significantly better visual quality than previous methods and is able to animate diverse body types and appearances, which are captured in challenging poses. | https://openaccess.thecvf.com/content/CVPR2021/papers/Gafni_Single-Shot_Freestyle_Dance_Reenactment_CVPR_2021_paper.pdf | http://arxiv.org/abs/2012.01158 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Gafni_Single-Shot_Freestyle_Dance_Reenactment_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Gafni_Single-Shot_Freestyle_Dance_Reenactment_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Gafni_Single-Shot_Freestyle_Dance_CVPR_2021_supplemental.pdf | null |
A Quasiconvex Formulation for Radial Cameras | Carl Olsson, Viktor Larsson, Fredrik Kahl | In this paper we study structure from motion problems for 1D radial cameras. Under this model the projection of a 3D point is a line in the image plane going through the principal point, which makes the model invariant to radial distortion and changes in focal length. It can therefore effectively be applied to uncalibrated image collections without the need for explicit estimation of camera intrinsics. We show that the reprojection errors of 1D radial cameras are examples of quasiconvex functions. This opens up the possibility to solve a general class of relevant reconstruction problems globally optimally using tools from convex optimization. In fact, our resulting algorithm is based on solving a series of LP problems. We perform an extensive experimental evaluation, on both synthetic and real data, showing that a whole class of multiview geometry problems across a range of different cameras models with varying and unknown intrinsic calibration can be reliably and accurately solved within the same framework. | https://openaccess.thecvf.com/content/CVPR2021/papers/Olsson_A_Quasiconvex_Formulation_for_Radial_Cameras_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Olsson_A_Quasiconvex_Formulation_for_Radial_Cameras_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Olsson_A_Quasiconvex_Formulation_for_Radial_Cameras_CVPR_2021_paper.html | CVPR 2021 | null | null |
Self-Supervised Learning of Depth Inference for Multi-View Stereo | Jiayu Yang, Jose M. Alvarez, Miaomiao Liu | Recent supervised multi-view depth estimation networks have achieved promising results. Similar to all supervised approaches, these networks require ground-truth data during training. However, collecting a large amount of multi-view depth data is very challenging. Here, we propose a self-supervised learning framework for multi-view stereo that exploits pseudo labels from the input data. We start by learning to estimate depth maps as initial pseudo labels under an unsupervised learning framework relying on image reconstruction loss as supervision. We then refine the initial pseudo labels using a carefully designed pipeline leveraging depth information inferred from a higher resolution image and neighboring views. We use these high-quality pseudo labels as the supervision signal to train the network and iteratively improve its performance by self-training. Extensive experiments on the DTU dataset show that our proposed self-supervised learning framework outperforms existing unsupervised multi-view stereo networks by a large margin and performs on par with its supervised counterpart. Code is available at https://github.com/JiayuYANG/Self-supervised-CVP-MVSNet | https://openaccess.thecvf.com/content/CVPR2021/papers/Yang_Self-Supervised_Learning_of_Depth_Inference_for_Multi-View_Stereo_CVPR_2021_paper.pdf | http://arxiv.org/abs/2104.02972 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Yang_Self-Supervised_Learning_of_Depth_Inference_for_Multi-View_Stereo_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Yang_Self-Supervised_Learning_of_Depth_Inference_for_Multi-View_Stereo_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Yang_Self-Supervised_Learning_of_CVPR_2021_supplemental.pdf | null |
BRepNet: A Topological Message Passing System for Solid Models | Joseph G. Lambourne, Karl D.D. Willis, Pradeep Kumar Jayaraman, Aditya Sanghi, Peter Meltzer, Hooman Shayani | Boundary representation (B-rep) models are the standard way 3D shapes are described in Computer-Aided Design (CAD) applications. They combine lightweight parametric curves and surfaces with topological information which connects the geometric entities to describe manifolds. In this paper we introduce BRepNet, a neural network architecture designed to operate directly on B-rep data structures, avoiding the need to approximate the model as meshes or point clouds. BRepNet defines convolutional kernels with respect to oriented coedges in the data structure. In the neighborhood of each coedge, a small collection of faces, edges and coedges can be identified and patterns in the feature vectors from these entities detected by specific learnable parameters. In addition, to encourage further deep learning research with B-reps, we publish the Fusion 360 Gallery segmentation dataset, a collection of over 35,000 B-rep models annotated with information about the modeling operations which created each face. We demonstrate that BRepNet can segment these models with higher accuracy than methods working on meshes and point clouds. | https://openaccess.thecvf.com/content/CVPR2021/papers/Lambourne_BRepNet_A_Topological_Message_Passing_System_for_Solid_Models_CVPR_2021_paper.pdf | http://arxiv.org/abs/2104.00706 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Lambourne_BRepNet_A_Topological_Message_Passing_System_for_Solid_Models_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Lambourne_BRepNet_A_Topological_Message_Passing_System_for_Solid_Models_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Lambourne_BRepNet_A_Topological_CVPR_2021_supplemental.zip | null |
Learning To Predict Visual Attributes in the Wild | Khoi Pham, Kushal Kafle, Zhe Lin, Zhihong Ding, Scott Cohen, Quan Tran, Abhinav Shrivastava | Visual attributes constitute a large portion of information contained in a scene. Objects can be described using a wide variety of attributes which portray their visual appearance (color, texture), geometry (shape, size, posture), and other intrinsic properties (state, action). Existing work is mostly limited to the study of attribute prediction in specific domains. In this paper, we introduce a large-scale in-the-wild visual attribute prediction dataset consisting of over 927K attribute annotations for over 260K object instances. Formally, object attribute prediction is a multi-label classification problem where all attributes that apply to an object must be predicted. Our dataset poses significant challenges to existing methods due to the large number of attributes, label sparsity, data imbalance, and object occlusion. To this end, we propose several techniques that systematically tackle these challenges, including a base model that utilizes both low- and high-level CNN features with multi-hop attention, reweighting and resampling techniques, a novel negative label expansion scheme, and a novel supervised attribute-aware contrastive learning algorithm. Using these techniques, we achieve improvements of nearly 3.7 mAP and 5.7 overall F1 points over the current state of the art. Further details about the VAW dataset can be found at https://vawdataset.com/. | https://openaccess.thecvf.com/content/CVPR2021/papers/Pham_Learning_To_Predict_Visual_Attributes_in_the_Wild_CVPR_2021_paper.pdf | http://arxiv.org/abs/2106.09707 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Pham_Learning_To_Predict_Visual_Attributes_in_the_Wild_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Pham_Learning_To_Predict_Visual_Attributes_in_the_Wild_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Pham_Learning_To_Predict_CVPR_2021_supplemental.pdf | null |
Animating Pictures With Eulerian Motion Fields | Aleksander Holynski, Brian L. Curless, Steven M. Seitz, Richard Szeliski | In this paper, we demonstrate a fully automatic method for converting a still image into a realistic animated looping video. We target scenes with continuous fluid motion, such as flowing water and billowing smoke. Our method relies on the observation that this type of natural motion can be convincingly reproduced from a static Eulerian motion description, i.e. a single, temporally constant flow field that defines the immediate motion of a particle at a given 2D location. We use an image-to-image translation network to encode motion priors of natural scenes collected from online videos, so that for a new photo, we can synthesize a corresponding motion field. The image is then animated using the generated motion through a deep warping technique: pixels are encoded as deep features, those features are warped via Eulerian motion, and the resulting warped feature maps are decoded as images. In order to produce continuous, seamlessly looping video textures, we propose a novel video looping technique that flows features both forward and backward in time and then blends the results. We demonstrate the effectiveness and robustness of our method by applying it to a large collection of examples including beaches, waterfalls, and flowing rivers. | https://openaccess.thecvf.com/content/CVPR2021/papers/Holynski_Animating_Pictures_With_Eulerian_Motion_Fields_CVPR_2021_paper.pdf | http://arxiv.org/abs/2011.15128 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Holynski_Animating_Pictures_With_Eulerian_Motion_Fields_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Holynski_Animating_Pictures_With_Eulerian_Motion_Fields_CVPR_2021_paper.html | CVPR 2021 | null | null |
Generalized Focal Loss V2: Learning Reliable Localization Quality Estimation for Dense Object Detection | Xiang Li, Wenhai Wang, Xiaolin Hu, Jun Li, Jinhui Tang, Jian Yang | Localization Quality Estimation (LQE) is crucial and popular in the recent advancement of dense object detectors since it can provide accurate ranking scores that benefit the Non-Maximum Suppression processing and improve detection performance. As a common practice, most existing methods predict LQE scores through vanilla convolutional features shared with object classification or bounding box regression. In this paper, we explore a completely novel and different perspective to perform LQE -- based on the learned distributions of the four parameters of the bounding box. The bounding box distributions are inspired and introduced as "General Distribution" in GFLV1, which describes the uncertainty of the predicted bounding boxes well. Such a property makes the distribution statistics of a bounding box highly correlated to its real localization quality. Specifically, a bounding box distribution with a sharp peak usually corresponds to high localization quality, and vice versa. By leveraging the close correlation between distribution statistics and the real localization quality, we develop a considerably lightweight Distribution-Guided Quality Predictor (DGQP) for reliable LQE based on GFLV1, thus producing GFLV2. To our best knowledge, it is the first attempt in object detection to use a highly relevant, statistical representation to facilitate LQE. Extensive experiments demonstrate the effectiveness of our method. Notably, GFLV2 (ResNet-101) achieves 46.2 AP at 14.6 FPS, surpassing the previous state-of-the-art ATSS baseline (43.6 AP at 14.6 FPS) by absolute 2.6 AP on COCO test-dev, without sacrificing the efficiency both in training and inference. | https://openaccess.thecvf.com/content/CVPR2021/papers/Li_Generalized_Focal_Loss_V2_Learning_Reliable_Localization_Quality_Estimation_for_CVPR_2021_paper.pdf | http://arxiv.org/abs/2011.12885 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Li_Generalized_Focal_Loss_V2_Learning_Reliable_Localization_Quality_Estimation_for_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Li_Generalized_Focal_Loss_V2_Learning_Reliable_Localization_Quality_Estimation_for_CVPR_2021_paper.html | CVPR 2021 | null | null |
Cross-Domain Adaptive Clustering for Semi-Supervised Domain Adaptation | Jichang Li, Guanbin Li, Yemin Shi, Yizhou Yu | In semi-supervised domain adaptation, a few labeled samples per class in the target domain guide features of the remaining target samples to aggregate around them. However, the trained model cannot produce a highly discriminative feature representation for the target domain because the training data is dominated by labeled samples from the source domain. This could lead to disconnection between the labeled and unlabeled target samples as well as misalignment between unlabeled target samples and the source domain. In this paper, we propose a novel approach called Cross-domain Adaptive Clustering to address this problem. To achieve both inter-domain and intra-domain adaptation, we first introduce an adversarial adaptive clustering loss to group features of unlabeled target data into clusters and perform cluster-wise feature alignment across the source and target domains. We further apply pseudo labeling to unlabeled samples in the target domain and retain pseudo-labels with high confidence. Pseudo labeling expands the number of "labeled" samples in each class in the target domain, and thus produces a more robust and powerful cluster core for each class to facilitate adversarial learning. Extensive experiments on benchmark datasets, including DomainNet, Office-Home and Office, demonstrate that our proposed approach achieves the state-of-the-art performance in semi-supervised domain adaptation. | https://openaccess.thecvf.com/content/CVPR2021/papers/Li_Cross-Domain_Adaptive_Clustering_for_Semi-Supervised_Domain_Adaptation_CVPR_2021_paper.pdf | http://arxiv.org/abs/2104.09415 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Li_Cross-Domain_Adaptive_Clustering_for_Semi-Supervised_Domain_Adaptation_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Li_Cross-Domain_Adaptive_Clustering_for_Semi-Supervised_Domain_Adaptation_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Li_Cross-Domain_Adaptive_Clustering_CVPR_2021_supplemental.pdf | null |
ST3D: Self-Training for Unsupervised Domain Adaptation on 3D Object Detection | Jihan Yang, Shaoshuai Shi, Zhe Wang, Hongsheng Li, Xiaojuan Qi | We present a new domain adaptive self-training pipeline, named ST3D, for unsupervised domain adaptation on 3D object detection from point clouds. First, we pre-train the 3D detector on the source domain with our proposed random object scaling strategy for mitigating the negative effects of source domain bias. Then, the detector is iteratively improved on the target domain by alternatively conducting two steps, which are the pseudo label updating with the developed quality-aware triplet memory bank and the model training with curriculum data augmentation. These specific designs for 3D object detection enable the detector to be trained with consistent and high-quality pseudo labels and to avoid overfitting to the large number of easy examples in pseudo labeled data. Our ST3D achieves state-of-the-art performance on all evaluated datasets and even surpasses fully supervised results on KITTI 3D object detection benchmark. Code will be available at https://github.com/CVMI-Lab/ST3D. | https://openaccess.thecvf.com/content/CVPR2021/papers/Yang_ST3D_Self-Training_for_Unsupervised_Domain_Adaptation_on_3D_Object_Detection_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.05346 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Yang_ST3D_Self-Training_for_Unsupervised_Domain_Adaptation_on_3D_Object_Detection_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Yang_ST3D_Self-Training_for_Unsupervised_Domain_Adaptation_on_3D_Object_Detection_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Yang_ST3D_Self-Training_for_CVPR_2021_supplemental.pdf | null |
HITNet: Hierarchical Iterative Tile Refinement Network for Real-time Stereo Matching | Vladimir Tankovich, Christian Hane, Yinda Zhang, Adarsh Kowdle, Sean Fanello, Sofien Bouaziz | This paper presents HITNet, a novel neural network architecture for real-time stereo matching. Contrary to many recent neural network approaches that operate on a full costvolume and rely on 3D convolutions, our approach does not explicitly build a volume and instead relies on a fast multi-resolution initialization step, differentiable 2D geometric propagation and warping mechanisms to infer disparity hypotheses. To achieve a high level of accuracy, our network not only geometrically reasons about disparities but also infers slanted plane hypotheses allowing to more accurately perform geometric warping and upsampling operations. Our architecture is inherently multi-resolution allowing the propagation of information across different levels. Multiple experiments prove the effectiveness of the proposed approach at a fraction of the computation required by the state-of-the-art methods. At the time of writing, HITNet ranks 1st-3rd on all the metrics published on the ETH3D website for two view stereo, ranks 1st on most of the metrics amongst all the end-to-end learning approaches on Middleburyv3, ranks 1st on the popular KITTI 2012 and 2015 benchmarks among the published methods faster than 100ms. | https://openaccess.thecvf.com/content/CVPR2021/papers/Tankovich_HITNet_Hierarchical_Iterative_Tile_Refinement_Network_for_Real-time_Stereo_Matching_CVPR_2021_paper.pdf | http://arxiv.org/abs/2007.12140 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Tankovich_HITNet_Hierarchical_Iterative_Tile_Refinement_Network_for_Real-time_Stereo_Matching_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Tankovich_HITNet_Hierarchical_Iterative_Tile_Refinement_Network_for_Real-time_Stereo_Matching_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Tankovich_HITNet_Hierarchical_Iterative_CVPR_2021_supplemental.pdf | null |
VaB-AL: Incorporating Class Imbalance and Difficulty With Variational Bayes for Active Learning | Jongwon Choi, Kwang Moo Yi, Jihoon Kim, Jinho Choo, Byoungjip Kim, Jinyeop Chang, Youngjune Gwon, Hyung Jin Chang | Active Learning for discriminative models has largely been studied with the focus on individual samples, with less emphasis on how classes are distributed or which classes are hard to deal with. In this work, we show that this is harmful. We propose a method based on Bayes' rule that can naturally incorporate class imbalance into the Active Learning framework. We derive that three terms should be considered together when estimating the probability of a classifier making a mistake for a given sample: i) probability of mislabelling a class, ii) likelihood of the data given a predicted class, and iii) the prior probability on the abundance of a predicted class. Implementing these terms requires a generative model and an intractable likelihood estimation. Therefore, we train a Variational Auto Encoder (VAE) for this purpose. To further tie the VAE with the classifier and facilitate VAE training, we use the classifiers' deep feature representations as input to the VAE. By considering all three probabilities, among them especially the data imbalance, we can substantially improve the potential of existing methods under a limited data budget. We show that our method can be applied to classification tasks on multiple different datasets -- including one that is a real-world dataset with heavy data imbalance -- significantly outperforming the state of the art. | https://openaccess.thecvf.com/content/CVPR2021/papers/Choi_VaB-AL_Incorporating_Class_Imbalance_and_Difficulty_With_Variational_Bayes_for_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Choi_VaB-AL_Incorporating_Class_Imbalance_and_Difficulty_With_Variational_Bayes_for_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Choi_VaB-AL_Incorporating_Class_Imbalance_and_Difficulty_With_Variational_Bayes_for_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Choi_VaB-AL_Incorporating_Class_CVPR_2021_supplemental.pdf | null |
Exploiting & Refining Depth Distributions With Triangulation Light Curtains | Yaadhav Raaj, Siddharth Ancha, Robert Tamburo, David Held, Srinivasa G. Narasimhan | Active sensing through the use of Adaptive Depth Sensors is a nascent field, with potential in areas such as Advanced driver-assistance systems (ADAS). These sensors do, however, require dynamically driving a laser / light source to a specific location to capture information, with one such class of sensor being the Triangulation Light Curtains (LC). In this work, we introduce a novel approach that exploits prior depth distributions from RGB cameras to drive a Light Curtain's laser line to regions of uncertainty to get new measurements. These measurements are utilized such that depth uncertainty is reduced and errors get corrected recursively. We show real-world experiments that validate our approach in outdoor and driving settings, and demonstrate qualitative and quantitative improvements in depth RMSE when RGB cameras are used in tandem with a Light Curtain. | https://openaccess.thecvf.com/content/CVPR2021/papers/Raaj_Exploiting__Refining_Depth_Distributions_With_Triangulation_Light_Curtains_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Raaj_Exploiting__Refining_Depth_Distributions_With_Triangulation_Light_Curtains_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Raaj_Exploiting__Refining_Depth_Distributions_With_Triangulation_Light_Curtains_CVPR_2021_paper.html | CVPR 2021 | null | null |
DG-Font: Deformable Generative Networks for Unsupervised Font Generation | Yangchen Xie, Xinyuan Chen, Li Sun, Yue Lu | Font generation is a challenging problem, especially for writing systems that consist of a large number of characters, and it has attracted a lot of attention in recent years. However, existing methods for font generation often rely on supervised learning. They require a large amount of paired data, which is labor-intensive and expensive to collect. Besides, common image-to-image translation models often define style as the set of textures and colors, which cannot be directly applied to font generation. To address these problems, we propose novel deformable generative networks for unsupervised font generation (DG-Font). We introduce a feature deformation skip connection (FDSC) which predicts pairs of displacement maps and employs the predicted maps to apply deformable convolution to the low-level feature maps from the content encoder. The outputs of FDSC are fed into a mixer to generate the final results. Taking advantage of FDSC, the mixer outputs a high-quality character with a complete structure. To further improve the quality of generated images, we use three deformable convolution layers in the content encoder to learn style-invariant feature representations. Experiments demonstrate that our model generates characters of higher quality than state-of-the-art methods. The source code is available at https://github.com/ecnuycxie/DG-Font. | https://openaccess.thecvf.com/content/CVPR2021/papers/Xie_DG-Font_Deformable_Generative_Networks_for_Unsupervised_Font_Generation_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Xie_DG-Font_Deformable_Generative_Networks_for_Unsupervised_Font_Generation_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Xie_DG-Font_Deformable_Generative_Networks_for_Unsupervised_Font_Generation_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Xie_DG-Font_Deformable_Generative_CVPR_2021_supplemental.pdf | null |
Deep Multi-Task Learning for Joint Localization, Perception, and Prediction | John Phillips, Julieta Martinez, Ioan Andrei Barsan, Sergio Casas, Abbas Sadat, Raquel Urtasun | Over the last few years, we have witnessed tremendous progress on many subtasks of autonomous driving including perception, motion forecasting, and motion planning. However, these systems often assume that the car is accurately localized against a high-definition map. In this paper we question this assumption, and investigate the issues that arise in state-of-the-art autonomy stacks under localization error. Based on our observations, we design a system that jointly performs perception, prediction, and localization. Our architecture is able to reuse computation between the three tasks, and is thus able to correct localization errors efficiently. We show experiments on a large-scale autonomy dataset, demonstrating the efficiency and accuracy of our proposed approach. | https://openaccess.thecvf.com/content/CVPR2021/papers/Phillips_Deep_Multi-Task_Learning_for_Joint_Localization_Perception_and_Prediction_CVPR_2021_paper.pdf | http://arxiv.org/abs/2101.06720 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Phillips_Deep_Multi-Task_Learning_for_Joint_Localization_Perception_and_Prediction_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Phillips_Deep_Multi-Task_Learning_for_Joint_Localization_Perception_and_Prediction_CVPR_2021_paper.html | CVPR 2021 | null | null |
Deeply Shape-Guided Cascade for Instance Segmentation | Hao Ding, Siyuan Qiao, Alan Yuille, Wei Shen | The key to a successful cascade architecture for precise instance segmentation is to fully leverage the relationship between bounding box detection and mask segmentation across multiple stages. Although modern instance segmentation cascades achieve leading performance, they mainly make use of a unidirectional relationship, i.e., mask segmentation can benefit from iteratively refined bounding box detection. In this paper, we investigate an alternative direction, i.e., how to take advantage of precise mask segmentation for bounding box detection in a cascade architecture. We propose a Deeply Shape-guided Cascade (DSC) for instance segmentation, which iteratively imposes the shape guidance extracted from the mask prediction at the previous stage on bounding box detection at the current stage. It forms a bi-directional relationship between the two tasks by introducing three key components: (1) Initial shape guidance: A mask-supervised Region Proposal Network (mRPN) with the ability to generate class-agnostic masks; (2) Explicit shape guidance: A mask-guided region-of-interest (RoI) feature extractor, which employs the mask segmentation at the previous stage to focus feature extraction at the current stage within a region well aligned with the shape of the instance-of-interest rather than a rectangular RoI; (3) Implicit shape guidance: A feature fusion operation which feeds intermediate mask features at the previous stage to the bounding box head at the current stage. Experimental results show that DSC outperforms the state-of-the-art instance segmentation cascade, Hybrid Task Cascade (HTC), by a large margin and achieves 51.8 box AP and 45.5 mask AP on COCO test-dev. The code is released at: https://github.com/hding2455/DSC. | https://openaccess.thecvf.com/content/CVPR2021/papers/Ding_Deeply_Shape-Guided_Cascade_for_Instance_Segmentation_CVPR_2021_paper.pdf | http://arxiv.org/abs/1911.11263 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Ding_Deeply_Shape-Guided_Cascade_for_Instance_Segmentation_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Ding_Deeply_Shape-Guided_Cascade_for_Instance_Segmentation_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Ding_Deeply_Shape-Guided_Cascade_CVPR_2021_supplemental.pdf | null |
MetricOpt: Learning To Optimize Black-Box Evaluation Metrics | Chen Huang, Shuangfei Zhai, Pengsheng Guo, Josh Susskind | We study the problem of directly optimizing arbitrary non-differentiable task evaluation metrics such as misclassification rate and recall. Our method, named MetricOpt, operates in a black-box setting where the computational details of the target metric are unknown. We achieve this by learning a differentiable value function, which maps compact task-specific model parameters to metric observations. The learned value function is easily pluggable into existing optimizers like SGD and Adam, and is effective for rapidly finetuning a pre-trained model. This leads to consistent improvements since the value function provides effective metric supervision during finetuning, and helps to correct the potential bias of loss-only supervision. MetricOpt achieves state-of-the-art performance on a variety of metrics for (image) classification, image retrieval and object detection. Solid benefits are found over competing methods, which often involve complex loss design or adaptation. MetricOpt also generalizes well to new tasks and model architectures. | https://openaccess.thecvf.com/content/CVPR2021/papers/Huang_MetricOpt_Learning_To_Optimize_Black-Box_Evaluation_Metrics_CVPR_2021_paper.pdf | http://arxiv.org/abs/2104.10631 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Huang_MetricOpt_Learning_To_Optimize_Black-Box_Evaluation_Metrics_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Huang_MetricOpt_Learning_To_Optimize_Black-Box_Evaluation_Metrics_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Huang_MetricOpt_Learning_To_CVPR_2021_supplemental.pdf | null |
Multispectral Photometric Stereo for Spatially-Varying Spectral Reflectances: A Well Posed Problem? | Heng Guo, Fumio Okura, Boxin Shi, Takuya Funatomi, Yasuhiro Mukaigawa, Yasuyuki Matsushita | Multispectral photometric stereo (MPS) aims at recovering the surface normal of a scene from a single-shot multispectral image, which is known as an ill-posed problem. To make the problem well-posed, existing MPS methods rely on restrictive assumptions, such as a shape prior or surfaces being monochromatic with a uniform albedo. This paper alleviates the restrictive assumptions in existing methods. We show that the problem becomes well-posed for a surface with a uniform chromaticity but spatially-varying albedos based on our new formulation. Specifically, if at least three (or two) scene points share the same chromaticity, the proposed method uniquely recovers their surface normals and spectral reflectance with the illumination of more than or equal to four (or five) spectral lights. Besides, our method can be made robust by having many (i.e., 4 or more) spectral bands using robust estimation techniques for conventional photometric stereo. Experiments on both synthetic and real-world scenes demonstrate the effectiveness of our method. Our data and results can be found at https://github.com/GH-HOME/MultispectralPS.git. | https://openaccess.thecvf.com/content/CVPR2021/papers/Guo_Multispectral_Photometric_Stereo_for_Spatially-Varying_Spectral_Reflectances_A_Well_Posed_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Guo_Multispectral_Photometric_Stereo_for_Spatially-Varying_Spectral_Reflectances_A_Well_Posed_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Guo_Multispectral_Photometric_Stereo_for_Spatially-Varying_Spectral_Reflectances_A_Well_Posed_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Guo_Multispectral_Photometric_Stereo_CVPR_2021_supplemental.pdf | null |
Fashion IQ: A New Dataset Towards Retrieving Images by Natural Language Feedback | Hui Wu, Yupeng Gao, Xiaoxiao Guo, Ziad Al-Halah, Steven Rennie, Kristen Grauman, Rogerio Feris | Conversational interfaces for the detail-oriented retail fashion domain are more natural, expressive, and user friendly than classical keyword-based search interfaces. In this paper, we introduce the Fashion IQ dataset to support and advance research on interactive fashion image retrieval. Fashion IQ is the first fashion dataset to provide human-generated captions that distinguish similar pairs of garment images together with side-information consisting of real-world product descriptions and derived visual attribute labels for these images. We provide a detailed analysis of the characteristics of the Fashion IQ data, and present a transformer-based user simulator and interactive image retriever that can seamlessly integrate visual attributes with image features, user feedback, and dialog history, leading to improved performance over the state of the art in dialog-based image retrieval. We believe that our dataset will encourage further work on developing more natural and real-world applicable conversational shopping assistants. | https://openaccess.thecvf.com/content/CVPR2021/papers/Wu_Fashion_IQ_A_New_Dataset_Towards_Retrieving_Images_by_Natural_CVPR_2021_paper.pdf | http://arxiv.org/abs/1905.12794 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Wu_Fashion_IQ_A_New_Dataset_Towards_Retrieving_Images_by_Natural_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Wu_Fashion_IQ_A_New_Dataset_Towards_Retrieving_Images_by_Natural_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Wu_Fashion_IQ_A_CVPR_2021_supplemental.pdf | null |
Few-Shot Human Motion Transfer by Personalized Geometry and Texture Modeling | Zhichao Huang, Xintong Han, Jia Xu, Tong Zhang | We present a new method for few-shot human motion transfer that achieves realistic human image generation with only a small number of appearance inputs. Despite recent advances in single person motion transfer, prior methods often require a large number of training images and long training times. One promising direction is to perform few-shot human motion transfer, which only needs a few source images for appearance transfer. However, it is particularly challenging to obtain satisfactory transfer results. In this paper, we address this issue by rendering a human texture map to a surface geometry (represented as a UV map), which is personalized to the source person. Our geometry generator combines the shape information from source images, and the pose information from 2D keypoints to synthesize the personalized UV map. A texture generator then generates the texture map conditioned on the texture of source images to fill out invisible parts. Furthermore, we may fine-tune the texture map on the manifold of the texture generator from a few source images at test time, which improves the quality of the texture map without over-fitting or artifacts. Extensive experiments show the proposed method outperforms state-of-the-art methods both qualitatively and quantitatively. Our code is available at https://github.com/HuangZhiChao95/FewShotMotionTransfer. | https://openaccess.thecvf.com/content/CVPR2021/papers/Huang_Few-Shot_Human_Motion_Transfer_by_Personalized_Geometry_and_Texture_Modeling_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.14338 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Huang_Few-Shot_Human_Motion_Transfer_by_Personalized_Geometry_and_Texture_Modeling_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Huang_Few-Shot_Human_Motion_Transfer_by_Personalized_Geometry_and_Texture_Modeling_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Huang_Few-Shot_Human_Motion_CVPR_2021_supplemental.pdf | null |
HDMapGen: A Hierarchical Graph Generative Model of High Definition Maps | Lu Mi, Hang Zhao, Charlie Nash, Xiaohan Jin, Jiyang Gao, Chen Sun, Cordelia Schmid, Nir Shavit, Yuning Chai, Dragomir Anguelov | High Definition (HD) maps are maps with precise definitions of road lanes with rich semantics of the traffic rules. They are critical for several key stages in an autonomous driving system, including motion forecasting and planning. However, there is only a small number of real-world road topologies and geometries available, which significantly limits our ability to test whether the self-driving stack generalizes to new, unseen scenarios. To address this issue, we introduce a new challenging task to generate HD maps. In this work, we explore several autoregressive models using different data representations, including sequence, plain graph, and hierarchical graph. We propose HDMapGen, a hierarchical graph generation model capable of producing high-quality and diverse HD maps through a coarse-to-fine approach. Experiments on the Argoverse dataset and an in-house dataset show that HDMapGen significantly outperforms baseline methods. Additionally, we demonstrate that HDMapGen achieves high efficiency and scalability. | https://openaccess.thecvf.com/content/CVPR2021/papers/Mi_HDMapGen_A_Hierarchical_Graph_Generative_Model_of_High_Definition_Maps_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Mi_HDMapGen_A_Hierarchical_Graph_Generative_Model_of_High_Definition_Maps_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Mi_HDMapGen_A_Hierarchical_Graph_Generative_Model_of_High_Definition_Maps_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Mi_HDMapGen_A_Hierarchical_CVPR_2021_supplemental.pdf | null |
GeoSim: Realistic Video Simulation via Geometry-Aware Composition for Self-Driving | Yun Chen, Frieda Rong, Shivam Duggal, Shenlong Wang, Xinchen Yan, Sivabalan Manivasagam, Shangjie Xue, Ersin Yumer, Raquel Urtasun | Scalable sensor simulation is an important yet challenging open problem for safety-critical domains such as self-driving. Current works in image simulation either fail to be photorealistic or do not model the 3D environment and the dynamic objects within, losing high-level control and physical realism. In this paper, we present GeoSim, a geometry-aware image composition process which synthesizes novel urban driving scenarios by augmenting existing images with dynamic objects extracted from other scenes and rendered at novel poses. Towards this goal, we first build a diverse bank of 3D objects with both realistic geometry and appearance from sensor data. During simulation, we perform a novel geometry-aware simulation-by-composition procedure which 1) proposes plausible and realistic object placements into a given scene, 2) renders novel views of dynamic objects from the asset bank, and 3) composes and blends the rendered image segments. The resulting synthetic images are realistic, traffic-aware, and geometrically consistent, allowing our approach to scale to complex use cases. We demonstrate two such important applications: long-range realistic video simulation across multiple camera sensors, and synthetic data generation for data augmentation on downstream segmentation tasks. Please check https://tmux.top/publication/geosim/ for high-resolution video results. | https://openaccess.thecvf.com/content/CVPR2021/papers/Chen_GeoSim_Realistic_Video_Simulation_via_Geometry-Aware_Composition_for_Self-Driving_CVPR_2021_paper.pdf | http://arxiv.org/abs/2101.06543 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Chen_GeoSim_Realistic_Video_Simulation_via_Geometry-Aware_Composition_for_Self-Driving_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Chen_GeoSim_Realistic_Video_Simulation_via_Geometry-Aware_Composition_for_Self-Driving_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Chen_GeoSim_Realistic_Video_CVPR_2021_supplemental.zip | null |
AlphaMatch: Improving Consistency for Semi-Supervised Learning With Alpha-Divergence | Chengyue Gong, Dilin Wang, Qiang Liu | Semi-supervised learning (SSL) is a key approach toward more data-efficient machine learning by jointly leveraging both labeled and unlabeled data. We propose AlphaMatch, an efficient SSL method that leverages data augmentations by efficiently enforcing label consistency between the data points and the augmented data derived from them. Our key technical contribution lies in: 1) using alpha-divergence to prioritize the regularization on data with high confidence, achieving a similar effect as FixMatch but in a more flexible fashion, and 2) proposing an optimization-based, EM-like algorithm to enforce the consistency, which enjoys better convergence than iterative regularization procedures used in recent SSL methods such as FixMatch, UDA, and MixMatch. AlphaMatch is simple and easy to implement, and consistently outperforms prior arts on standard benchmarks, e.g. CIFAR-10, SVHN, CIFAR-100, STL-10. Specifically, we achieve 91.3% test accuracy on CIFAR-10 with just 4 labelled data per class, substantially improving over the previous best 88.7% accuracy achieved by FixMatch. | https://openaccess.thecvf.com/content/CVPR2021/papers/Gong_AlphaMatch_Improving_Consistency_for_Semi-Supervised_Learning_With_Alpha-Divergence_CVPR_2021_paper.pdf | http://arxiv.org/abs/2011.11779 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Gong_AlphaMatch_Improving_Consistency_for_Semi-Supervised_Learning_With_Alpha-Divergence_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Gong_AlphaMatch_Improving_Consistency_for_Semi-Supervised_Learning_With_Alpha-Divergence_CVPR_2021_paper.html | CVPR 2021 | null | null |
Unbalanced Feature Transport for Exemplar-Based Image Translation | Fangneng Zhan, Yingchen Yu, Kaiwen Cui, Gongjie Zhang, Shijian Lu, Jianxiong Pan, Changgong Zhang, Feiying Ma, Xuansong Xie, Chunyan Miao | Despite the great success of GANs in image translation with different conditioned inputs such as semantic segmentation maps and edge maps, generating high-fidelity images with reference styles from exemplars remains a grand challenge in conditional image-to-image translation. This paper presents a general image translation framework that incorporates optimal transport for feature alignment between conditional inputs and style exemplars in translation. The introduction of optimal transport mitigates the constraint of many-to-one feature matching significantly while building up semantic correspondences between conditional inputs and exemplars. We design a novel unbalanced optimal transport to address the transport between features with deviational distributions, which widely exist between conditional inputs and exemplars. In addition, we design a semantic-aware normalization scheme that successfully injects style and semantic features of exemplars into the image translation process. Extensive experiments over multiple image translation tasks show that our proposed technique achieves superior image translation qualitatively and quantitatively as compared with the state-of-the-art. | https://openaccess.thecvf.com/content/CVPR2021/papers/Zhan_Unbalanced_Feature_Transport_for_Exemplar-Based_Image_Translation_CVPR_2021_paper.pdf | http://arxiv.org/abs/2106.10482 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Zhan_Unbalanced_Feature_Transport_for_Exemplar-Based_Image_Translation_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Zhan_Unbalanced_Feature_Transport_for_Exemplar-Based_Image_Translation_CVPR_2021_paper.html | CVPR 2021 | null | null |
Self-Generated Defocus Blur Detection via Dual Adversarial Discriminators | Wenda Zhao, Cai Shang, Huchuan Lu | Although existing fully-supervised defocus blur detection (DBD) models significantly improve performance, training such deep models requires abundant pixel-level manual annotation, which is highly time-consuming and error-prone. Addressing this issue, this paper makes an effort to train a deep DBD model without using any pixel-level annotation. The core insight is that a defocus blur region/focused clear area can be arbitrarily pasted to a given realistic full blurred image/full clear image without affecting the judgment of the full blurred image/full clear image. Specifically, we train a generator G in an adversarial manner against dual discriminators Dc and Db. G learns to produce a DBD mask that generates a composite clear image and a composite blurred image by copying the focused area and unfocused region from the corresponding source image to another full clear image and full blurred image. Then, Dc and Db cannot distinguish them from a realistic full clear image and a full blurred image simultaneously, achieving self-generated DBD in an implicit manner that defines what a defocus blur area is. Besides, we propose a bilateral triplet-excavating constraint to avoid the degenerate problem caused by the case where one discriminator defeats the other. Comprehensive experiments on two widely-used DBD datasets demonstrate the superiority of the proposed approach. Source codes are available at: https://github.com/shangcai1/SG. | https://openaccess.thecvf.com/content/CVPR2021/papers/Zhao_Self-Generated_Defocus_Blur_Detection_via_Dual_Adversarial_Discriminators_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Zhao_Self-Generated_Defocus_Blur_Detection_via_Dual_Adversarial_Discriminators_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Zhao_Self-Generated_Defocus_Blur_Detection_via_Dual_Adversarial_Discriminators_CVPR_2021_paper.html | CVPR 2021 | null | null |
View Generalization for Single Image Textured 3D Models | Anand Bhattad, Aysegul Dundar, Guilin Liu, Andrew Tao, Bryan Catanzaro | Humans can easily infer the underlying 3D geometry and texture of an object only from a single 2D image. Current computer vision methods can do this, too, but suffer from view generalization problems -- the models inferred tend to make poor predictions of appearance in novel views. As for generalization problems in machine learning, the difficulty is balancing single-view accuracy (cf. training error; bias) with novel view accuracy (cf. test error; variance). We describe a class of models whose geometric rigidity is easily controlled to manage this tradeoff. We describe a cycle consistency loss that improves view generalization (roughly, a model from a generated view should predict the original view well). View generalization of textures requires that models share texture information, so a car seen from the back still has headlights because other cars have headlights. We describe a cycle consistency loss that encourages model textures to be aligned, so as to encourage sharing. We compare our method against the state-of-the-art method and show both qualitative and quantitative improvements. | https://openaccess.thecvf.com/content/CVPR2021/papers/Bhattad_View_Generalization_for_Single_Image_Textured_3D_Models_CVPR_2021_paper.pdf | http://arxiv.org/abs/2106.06533 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Bhattad_View_Generalization_for_Single_Image_Textured_3D_Models_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Bhattad_View_Generalization_for_Single_Image_Textured_3D_Models_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Bhattad_View_Generalization_for_CVPR_2021_supplemental.pdf | null |
Your "Flamingo" is My "Bird": Fine-Grained, or Not | Dongliang Chang, Kaiyue Pang, Yixiao Zheng, Zhanyu Ma, Yi-Zhe Song, Jun Guo | Whether what you see in Figure 1 is a "flamingo" or a "bird", is the question we ask in this paper. While fine-grained visual classification (FGVC) strives to arrive at the former, for the majority of us non-experts just "bird" would probably suffice. The real question is therefore -- how can we tailor for different fine-grained definitions under divergent levels of expertise. For that, we re-envisage the traditional setting of FGVC, from single-label classification, to that of top-down traversal of a pre-defined coarse-to-fine label hierarchy -- so that our answer becomes "bird"="Phoenicopteriformes"="Phoenicopteridae"="flamingo". To approach this new problem, we first conduct a comprehensive human study where we confirm that most participants prefer multi-granularity labels, regardless whether they consider themselves experts. We then discover the key intuition that: coarse-level label prediction exacerbates fine-grained feature learning, yet fine-level feature betters the learning of coarse-level classifier. This discovery enables us to design a very simple albeit surprisingly effective solution to our new problem, where we (i) leverage level-specific classification heads to disentangle coarse-level features with fine-grained ones, and (ii) allow finer-grained features to participate in coarser-grained label predictions, which in turn helps with better disentanglement. Experiments show that our method achieves superior performance in the new FGVC setting, and performs better than state-of-the-art on traditional single-label FGVC problem as well. Thanks to its simplicity, our method can be easily implemented on top of any existing FGVC frameworks and is parameter-free. Codes are available at: https://github.com/PRIS-CV/Fine-Grained-or-Not | https://openaccess.thecvf.com/content/CVPR2021/papers/Chang_Your_Flamingo_is_My_Bird_Fine-Grained_or_Not_CVPR_2021_paper.pdf | http://arxiv.org/abs/2011.09040 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Chang_Your_Flamingo_is_My_Bird_Fine-Grained_or_Not_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Chang_Your_Flamingo_is_My_Bird_Fine-Grained_or_Not_CVPR_2021_paper.html | CVPR 2021 | null | null |
Anchor-Constrained Viterbi for Set-Supervised Action Segmentation | Jun Li, Sinisa Todorovic | This paper is about action segmentation under weak supervision in training, where the ground truth provides only a set of actions present, but neither their temporal ordering nor when they occur in a training video. We use a Hidden Markov Model (HMM) grounded on a multilayer perceptron (MLP) to label video frames, and thus generate a pseudo-ground truth for the subsequent pseudo-supervised training. In testing, a Monte Carlo sampling of action sets seen in training is used to generate candidate temporal sequences of actions, and select the maximum posterior sequence. Our key contribution is a new anchor-constrained Viterbi algorithm (ACV) for generating the pseudo-ground truth, where anchors are salient action parts estimated for each action from a given ground-truth set. Our evaluation on the tasks of action segmentation and alignment on the benchmark Breakfast, MPII Cooking2, Hollywood Extended datasets demonstrates our superior performance relative to that of prior work. | https://openaccess.thecvf.com/content/CVPR2021/papers/Li_Anchor-Constrained_Viterbi_for_Set-Supervised_Action_Segmentation_CVPR_2021_paper.pdf | http://arxiv.org/abs/2104.02113 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Li_Anchor-Constrained_Viterbi_for_Set-Supervised_Action_Segmentation_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Li_Anchor-Constrained_Viterbi_for_Set-Supervised_Action_Segmentation_CVPR_2021_paper.html | CVPR 2021 | null | null |
SOON: Scenario Oriented Object Navigation With Graph-Based Exploration | Fengda Zhu, Xiwen Liang, Yi Zhu, Qizhi Yu, Xiaojun Chang, Xiaodan Liang | The ability to navigate like a human towards a language-guided target from anywhere in a 3D embodied environment is one of the 'holy grail' goals of intelligent robots. Most visual navigation benchmarks, however, focus on navigating toward a target from a fixed starting point, guided by an elaborate set of step-by-step instructions. This approach deviates from real-world problems in which a human only describes what the object and its surroundings look like and asks the robot to start navigating from anywhere. Accordingly, in this paper, we introduce a Scenario Oriented Object Navigation (SOON) task. In this task, an agent is required to navigate from an arbitrary position in a 3D embodied environment to localize a target following a scene description. To give a promising direction to solve this task, we propose a novel graph-based exploration (GBE) method, which models the navigation state as a graph and introduces a novel graph-based exploration approach to learn knowledge from the graph and stabilize training by learning sub-optimal trajectories. We also propose a new large-scale benchmark named From Anywhere to Object (FAO) dataset. To avoid target ambiguity, the descriptions in FAO provide rich semantic scene information, including object attributes, object relationships, region descriptions, and nearby region descriptions. Our experiments reveal that the proposed GBE outperforms various state-of-the-art methods on both the FAO and R2R datasets. The ablation studies on FAO validate the quality of the dataset. | https://openaccess.thecvf.com/content/CVPR2021/papers/Zhu_SOON_Scenario_Oriented_Object_Navigation_With_Graph-Based_Exploration_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.17138 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Zhu_SOON_Scenario_Oriented_Object_Navigation_With_Graph-Based_Exploration_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Zhu_SOON_Scenario_Oriented_Object_Navigation_With_Graph-Based_Exploration_CVPR_2021_paper.html | CVPR 2021 | null | null |
Learning Scalable l∞-Constrained Near-Lossless Image Compression via Joint Lossy Image and Residual Compression | Yuanchao Bai, Xianming Liu, Wangmeng Zuo, Yaowei Wang, Xiangyang Ji | We propose a novel joint lossy image and residual compression framework for learning l_infinity-constrained near-lossless image compression. Specifically, we obtain a lossy reconstruction of the raw image through lossy image compression and uniformly quantize the corresponding residual to satisfy a given tight l_infinity error bound. When the error bound is zero, i.e., lossless image compression, we formulate the joint optimization problem of compressing both the lossy image and the original residual in terms of variational auto-encoders and solve it with end-to-end training. To achieve scalable compression with the error bound larger than zero, we derive the probability model of the quantized residual by quantizing the learned probability model of the original residual, instead of training multiple networks. We further correct the bias of the derived probability model caused by the context mismatch between training and inference. Finally, the quantized residual is encoded according to the bias-corrected probability model and is concatenated with the bitstream of the compressed lossy image. Experimental results demonstrate that our near-lossless codec achieves the state-of-the-art performance for lossless and near-lossless image compression, and achieves competitive PSNR with a much smaller l_infinity error compared with lossy image codecs at high bit rates. | https://openaccess.thecvf.com/content/CVPR2021/papers/Bai_Learning_Scalable_lY-Constrained_Near-Lossless_Image_Compression_via_Joint_Lossy_Image_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Bai_Learning_Scalable_lY-Constrained_Near-Lossless_Image_Compression_via_Joint_Lossy_Image_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Bai_Learning_Scalable_lY-Constrained_Near-Lossless_Image_Compression_via_Joint_Lossy_Image_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Bai_Learning_Scalable_lY-Constrained_CVPR_2021_supplemental.pdf | null |
Minimally Invasive Surgery for Sparse Neural Networks in Contrastive Manner | Chong Yu | With the development of deep learning, neural networks tend to be deeper and larger to achieve good performance. Trained models are more compute-intensive and memory-intensive, which leads to big challenges in memory bandwidth, storage, latency, and throughput. In this paper, we propose the neural network compression method named minimally invasive surgery. Different from traditional model compression and knowledge distillation methods, the proposed method follows the minimally invasive surgery principle. It learns the principal features from a pair of dense and compressed models in a contrastive manner. It also optimizes the neural networks to meet the specific hardware acceleration requirements. Through qualitative, quantitative, and ablation experiments, the proposed method shows compelling performance, acceleration, and generalization across various tasks. | https://openaccess.thecvf.com/content/CVPR2021/papers/Yu_Minimally_Invasive_Surgery_for_Sparse_Neural_Networks_in_Contrastive_Manner_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Yu_Minimally_Invasive_Surgery_for_Sparse_Neural_Networks_in_Contrastive_Manner_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Yu_Minimally_Invasive_Surgery_for_Sparse_Neural_Networks_in_Contrastive_Manner_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Yu_Minimally_Invasive_Surgery_CVPR_2021_supplemental.pdf | null |
XProtoNet: Diagnosis in Chest Radiography With Global and Local Explanations | Eunji Kim, Siwon Kim, Minji Seo, Sungroh Yoon | Automated diagnosis using deep neural networks in chest radiography can help radiologists detect life-threatening diseases. However, existing methods only provide predictions without accurate explanations, undermining the trustworthiness of the diagnostic methods. Here, we present XProtoNet, a globally and locally interpretable diagnosis framework for chest radiography. XProtoNet learns representative patterns of each disease from X-ray images, which are prototypes, and makes a diagnosis on a given X-ray image based on the patterns. It predicts the area where a sign of the disease is likely to appear and compares the features in the predicted area with the prototypes. It can provide a global explanation, the prototype, and a local explanation, how the prototype contributes to the prediction of a single image. Despite the constraint for interpretability, XProtoNet achieves state-of-the-art classification performance on the public NIH chest X-ray dataset. | https://openaccess.thecvf.com/content/CVPR2021/papers/Kim_XProtoNet_Diagnosis_in_Chest_Radiography_With_Global_and_Local_Explanations_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.10663 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Kim_XProtoNet_Diagnosis_in_Chest_Radiography_With_Global_and_Local_Explanations_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Kim_XProtoNet_Diagnosis_in_Chest_Radiography_With_Global_and_Local_Explanations_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Kim_XProtoNet_Diagnosis_in_CVPR_2021_supplemental.pdf | null |
Learning Scene Structure Guidance via Cross-Task Knowledge Transfer for Single Depth Super-Resolution | Baoli Sun, Xinchen Ye, Baopu Li, Haojie Li, Zhihui Wang, Rui Xu | Existing color-guided depth super-resolution (DSR) approaches require paired RGB-D data as training examples where the RGB image is used as structural guidance to recover the degraded depth map due to their geometrical similarity. However, the paired data may be limited or expensive to be collected in actual testing environment. Therefore, we explore for the first time to learn the cross-modal knowledge at training stage, where both RGB and depth modalities are available, but test on the target dataset, where only single depth modality exists. Our key idea is to distill the knowledge of scene structural guidance from color modality to the single DSR task without changing its network architecture. Specifically, we propose an auxiliary depth estimation (DE) task that takes color image as input to estimate a depth map, and train both DSR task and DE task collaboratively to boost the performance of DSR. A cross-task distillation module is designed to realize bilateral cross-task knowledge transfer. Moreover, to address the problem of RGB-D structure inconsistency and boost the structure perception, we advance a structure prediction (SP) task that provides extra structure regularization to help both DSR and DE networks learn more informative structure representations for depth recovery. Extensive experiments demonstrate that our scheme achieves superior performance in comparison with other DSR methods. | https://openaccess.thecvf.com/content/CVPR2021/papers/Sun_Learning_Scene_Structure_Guidance_via_Cross-Task_Knowledge_Transfer_for_Single_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.12955 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Sun_Learning_Scene_Structure_Guidance_via_Cross-Task_Knowledge_Transfer_for_Single_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Sun_Learning_Scene_Structure_Guidance_via_Cross-Task_Knowledge_Transfer_for_Single_CVPR_2021_paper.html | CVPR 2021 | null | null |
Visual Navigation With Spatial Attention | Bar Mayo, Tamir Hazan, Ayellet Tal | This work focuses on object goal visual navigation, aiming at finding the location of an object from a given class, where in each step the agent is provided with an egocentric RGB image of the scene. We propose to learn the agent's policy using a reinforcement learning algorithm. Our key contribution is a novel attention probability model for visual navigation tasks. This attention encodes semantic information about observed objects, as well as spatial information about their place. This combination of the "what" and the "where" allows the agent to navigate toward the sought-after object effectively. The attention model is shown to improve the agent's policy and to achieve state-of-the-art results on commonly-used datasets. | https://openaccess.thecvf.com/content/CVPR2021/papers/Mayo_Visual_Navigation_With_Spatial_Attention_CVPR_2021_paper.pdf | http://arxiv.org/abs/2104.09807 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Mayo_Visual_Navigation_With_Spatial_Attention_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Mayo_Visual_Navigation_With_Spatial_Attention_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Mayo_Visual_Navigation_With_CVPR_2021_supplemental.pdf | null |
Model-Based 3D Hand Reconstruction via Self-Supervised Learning | Yujin Chen, Zhigang Tu, Di Kang, Linchao Bao, Ying Zhang, Xuefei Zhe, Ruizhi Chen, Junsong Yuan | Reconstructing a 3D hand from a single-view RGB image is challenging due to various hand configurations and depth ambiguity. To reliably reconstruct a 3D hand from a monocular image, most state-of-the-art methods heavily rely on 3D annotations at the training stage, but obtaining 3D annotations is expensive. To alleviate reliance on labeled training data, we propose S2HAND, a self-supervised 3D hand reconstruction network that can jointly estimate pose, shape, texture, and the camera viewpoint. Specifically, we obtain geometric cues from the input image through easily accessible 2D detected keypoints. To learn an accurate hand reconstruction model from these noisy geometric cues, we utilize the consistency between 2D and 3D representations and propose a set of novel losses to rationalize outputs of the neural network. For the first time, we demonstrate the feasibility of training an accurate 3D hand reconstruction network without relying on manual annotations. Our experiments show that the proposed method achieves comparable performance with recent fully-supervised methods while using fewer supervision data. | https://openaccess.thecvf.com/content/CVPR2021/papers/Chen_Model-Based_3D_Hand_Reconstruction_via_Self-Supervised_Learning_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.11703 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Chen_Model-Based_3D_Hand_Reconstruction_via_Self-Supervised_Learning_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Chen_Model-Based_3D_Hand_Reconstruction_via_Self-Supervised_Learning_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Chen_Model-Based_3D_Hand_CVPR_2021_supplemental.pdf | null |
Robust Reflection Removal With Reflection-Free Flash-Only Cues | Chenyang Lei, Qifeng Chen | We propose a simple yet effective reflection-free cue for robust reflection removal from a pair of flash and ambient (no-flash) images. The reflection-free cue exploits a flash-only image obtained by subtracting the ambient image from the corresponding flash image in raw data space. The flash-only image is equivalent to an image taken in a dark environment with only a flash on. We observe that this flash-only image is visually reflection-free, and thus it can provide robust cues to infer the reflection in the ambient image. Since the flash-only image usually has artifacts, we further propose a dedicated model that not only utilizes the reflection-free cue but also avoids introducing artifacts, which helps accurately estimate reflection and transmission. Our experiments on real-world images with various types of reflection demonstrate the effectiveness of our model with reflection-free flash-only cues: our model outperforms state-of-the-art reflection removal approaches by more than 5.23dB in PSNR, 0.04 in SSIM, and 0.068 in LPIPS. Our source code and dataset are publicly available at github.com/ChenyangLEI/flash-reflection-removal. | https://openaccess.thecvf.com/content/CVPR2021/papers/Lei_Robust_Reflection_Removal_With_Reflection-Free_Flash-Only_Cues_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.04273 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Lei_Robust_Reflection_Removal_With_Reflection-Free_Flash-Only_Cues_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Lei_Robust_Reflection_Removal_With_Reflection-Free_Flash-Only_Cues_CVPR_2021_paper.html | CVPR 2021 | null | null |
Real-Time Selfie Video Stabilization | Jiyang Yu, Ravi Ramamoorthi, Keli Cheng, Michel Sarkis, Ning Bi | We propose a novel real-time selfie video stabilization method. Our method is completely automatic and runs at 26 fps. We use a 1D linear convolutional network to directly infer the rigid moving least squares warping which implicitly balances between the global rigidity and local flexibility. Our network structure is specifically designed to stabilize the background and foreground at the same time, while providing optional control of stabilization focus (relative importance of foreground vs. background) to the users. To train our network, we collect a selfie video dataset with 1005 videos, which is significantly larger than previous selfie video datasets. We also propose a grid approximation to the rigid moving least squares that enables the real-time frame warping. Our method is fully automatic and produces visually and quantitatively better results than previous real-time general video stabilization methods. Compared to previous offline selfie video methods, our approach produces comparable quality with a speed improvement of orders of magnitude. Our code and selfie video dataset is available at https://github.com/jiy173/selfievideostabilization. | https://openaccess.thecvf.com/content/CVPR2021/papers/Yu_Real-Time_Selfie_Video_Stabilization_CVPR_2021_paper.pdf | http://arxiv.org/abs/2009.02007 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Yu_Real-Time_Selfie_Video_Stabilization_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Yu_Real-Time_Selfie_Video_Stabilization_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Yu_Real-Time_Selfie_Video_CVPR_2021_supplemental.zip | null |
3D Human Action Representation Learning via Cross-View Consistency Pursuit | Linguo Li, Minsi Wang, Bingbing Ni, Hang Wang, Jiancheng Yang, Wenjun Zhang | In this work, we propose a Cross-view Contrastive Learning framework for unsupervised 3D skeleton-based action representation (CrosSCLR), by leveraging multi-view complementary supervision signal. CrosSCLR consists of both single-view contrastive learning (SkeletonCLR) and cross-view consistent knowledge mining (CVC-KM) modules, integrated in a collaborative learning manner. It is noted that CVC-KM works in such a way that high-confidence positive/negative samples and their distributions are exchanged among views according to their embedding similarity, ensuring cross-view consistency in terms of contrastive context, i.e., similar distributions. Extensive experiments show that CrosSCLR achieves remarkable action recognition results on NTU-60 and NTU-120 datasets under unsupervised settings, with observed higher-quality action representations. Our code is available at https://github.com/LinguoLi/CrosSCLR. | https://openaccess.thecvf.com/content/CVPR2021/papers/Li_3D_Human_Action_Representation_Learning_via_Cross-View_Consistency_Pursuit_CVPR_2021_paper.pdf | http://arxiv.org/abs/2104.14466 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Li_3D_Human_Action_Representation_Learning_via_Cross-View_Consistency_Pursuit_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Li_3D_Human_Action_Representation_Learning_via_Cross-View_Consistency_Pursuit_CVPR_2021_paper.html | CVPR 2021 | null | null |
Differentiable SLAM-Net: Learning Particle SLAM for Visual Navigation | Peter Karkus, Shaojun Cai, David Hsu | Simultaneous localization and mapping (SLAM) remains challenging for a number of downstream applications, such as visual robot navigation, because of rapid turns, featureless walls, and poor camera quality. We introduce the Differentiable SLAM Network (SLAM-net) along with a navigation architecture to enable planar robot navigation in previously unseen indoor environments. SLAM-net encodes a particle filter based SLAM algorithm in a differentiable computation graph, and learns task-oriented neural network components by backpropagating through the SLAM algorithm. Because it can optimize all model components jointly for the end-objective, SLAM-net learns to be robust in challenging conditions. We run experiments in the Habitat platform with different real-world RGB and RGB-D datasets. SLAM-net significantly outperforms the widely adopted ORB-SLAM in noisy conditions. Our navigation architecture with SLAM-net improves the state-of-the-art for the Habitat Challenge 2020 PointNav task by a large margin (37% to 64% success). | https://openaccess.thecvf.com/content/CVPR2021/papers/Karkus_Differentiable_SLAM-Net_Learning_Particle_SLAM_for_Visual_Navigation_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Karkus_Differentiable_SLAM-Net_Learning_Particle_SLAM_for_Visual_Navigation_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Karkus_Differentiable_SLAM-Net_Learning_Particle_SLAM_for_Visual_Navigation_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Karkus_Differentiable_SLAM-Net_Learning_CVPR_2021_supplemental.pdf | null |
Learning Goals From Failure | Dave Epstein, Carl Vondrick | We introduce a framework that predicts the goals behind observable human action in video. Motivated by evidence in developmental psychology, we leverage video of unintentional action to learn video representations of goals without direct supervision. Our approach models videos as contextual trajectories that represent both low-level motion and high-level action features. Experiments and visualizations show our trained model is able to predict the underlying goals in video of unintentional action. We also propose a method to "automatically correct" unintentional action by leveraging gradient signals of our model to adjust latent trajectories. Although the model is trained with minimal supervision, it is competitive with or outperforms baselines trained on large (supervised) datasets of successfully executed goals, showing that observing unintentional action is crucial to learning about goals in video. | https://openaccess.thecvf.com/content/CVPR2021/papers/Epstein_Learning_Goals_From_Failure_CVPR_2021_paper.pdf | http://arxiv.org/abs/2006.15657 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Epstein_Learning_Goals_From_Failure_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Epstein_Learning_Goals_From_Failure_CVPR_2021_paper.html | CVPR 2021 | null | null |
Rank-One Prior: Toward Real-Time Scene Recovery | Jun Liu, Wen Liu, Jianing Sun, Tieyong Zeng | Scene recovery is a fundamental imaging task for several practical applications, e.g., video surveillance and autonomous vehicles, etc. To improve visual quality under different weather/imaging conditions, we propose a real-time light correction method to recover the degraded scenes in the cases of sandstorms, underwater, and haze. The heart of our work is that we propose an intensity projection strategy to estimate the transmission. This strategy is motivated by a straightforward rank-one transmission prior. The complexity of transmission estimation is O(N), where N is the size of a single image. Then we can recover the scene in real-time. Comprehensive experiments on different types of weather/imaging conditions illustrate that our method outperforms several competitive state-of-the-art imaging methods in terms of efficiency and robustness. | https://openaccess.thecvf.com/content/CVPR2021/papers/Liu_Rank-One_Prior_Toward_Real-Time_Scene_Recovery_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.17126 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Liu_Rank-One_Prior_Toward_Real-Time_Scene_Recovery_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Liu_Rank-One_Prior_Toward_Real-Time_Scene_Recovery_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Liu_Rank-One_Prior_Toward_CVPR_2021_supplemental.pdf | null |
Body2Hands: Learning To Infer 3D Hands From Conversational Gesture Body Dynamics | Evonne Ng, Shiry Ginosar, Trevor Darrell, Hanbyul Joo | We propose a novel learned deep prior of body motion for 3D hand shape synthesis and estimation in the domain of conversational gestures. Our model builds upon the insight that body motion and hand gestures are strongly correlated in non-verbal communication settings. We formulate the learning of this prior as a prediction task of 3D hand shape over time given body motion input alone. Trained with 3D pose estimations obtained from a large-scale dataset of internet videos, our hand prediction model produces convincing 3D hand gestures given only the 3D motion of the speaker's arms as input. We demonstrate the efficacy of our method on hand gesture synthesis from body motion input, and as a strong body prior for single-view image-based 3D hand pose estimation. We demonstrate that our method outperforms previous state-of-the-art approaches and can generalize beyond the monologue-based training data to multi-person conversations. | https://openaccess.thecvf.com/content/CVPR2021/papers/Ng_Body2Hands_Learning_To_Infer_3D_Hands_From_Conversational_Gesture_Body_CVPR_2021_paper.pdf | http://arxiv.org/abs/2007.12287 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Ng_Body2Hands_Learning_To_Infer_3D_Hands_From_Conversational_Gesture_Body_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Ng_Body2Hands_Learning_To_Infer_3D_Hands_From_Conversational_Gesture_Body_CVPR_2021_paper.html | CVPR 2021 | null | null |
Linear Semantics in Generative Adversarial Networks | Jianjin Xu, Changxi Zheng | Generative Adversarial Networks (GANs) are able to generate high-quality images, but it remains difficult to explicitly specify the semantics of synthesized images. In this work, we aim to better understand the semantic representation of GANs, and thereby enable semantic control in GAN's generation process. Interestingly, we find that a well-trained GAN encodes image semantics in its internal feature maps in a surprisingly simple way: a linear transformation of feature maps suffices to extract the generated image semantics. To verify this simplicity, we conduct extensive experiments on various GANs and datasets; and thanks to this simplicity, we are able to learn a semantic segmentation model for a trained GAN from a small number (e.g., 8) of labeled images. Last but not least, leveraging our finding, we propose two few-shot image editing approaches, namely Semantic-Conditional Sampling and Semantic Image Editing. Given a trained GAN and as few as eight semantic annotations, the user is able to generate diverse images subject to a user-provided semantic layout, and control the synthesized image semantics. We have made the code publicly available. | https://openaccess.thecvf.com/content/CVPR2021/papers/Xu_Linear_Semantics_in_Generative_Adversarial_Networks_CVPR_2021_paper.pdf | http://arxiv.org/abs/2104.00487 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Xu_Linear_Semantics_in_Generative_Adversarial_Networks_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Xu_Linear_Semantics_in_Generative_Adversarial_Networks_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Xu_Linear_Semantics_in_CVPR_2021_supplemental.pdf | null |
Mesoscopic Photogrammetry With an Unstabilized Phone Camera | Kevin C. Zhou, Colin Cooke, Jaehee Park, Ruobing Qian, Roarke Horstmeyer, Joseph A. Izatt, Sina Farsiu | We present a feature-free photogrammetric technique that enables quantitative 3D mesoscopic (mm-scale height variation) imaging with tens-of-micron accuracy from sequences of images acquired by a smartphone at close range (several cm) under freehand motion without additional hardware. Our end-to-end, pixel-intensity-based approach jointly registers and stitches all the images by estimating a coaligned height map, which acts as a pixel-wise radial deformation field that orthorectifies each camera image to allow plane-plus-parallax registration. The height maps themselves are reparameterized as the output of an untrained encoder-decoder convolutional neural network (CNN) with the raw camera images as the input, which effectively removes many reconstruction artifacts. Our method also jointly estimates both the camera's dynamic 6D pose and its distortion using a nonparametric model, the latter of which is especially important in mesoscopic applications when using cameras not designed for imaging at short working distances, such as smartphone cameras. We also propose strategies for reducing computation time and memory, applicable to other multi-frame registration problems. Finally, we demonstrate our method using sequences of multi-megapixel images captured by an unstabilized smartphone on a variety of samples (e.g., painting brushstrokes, circuit board, seeds). | https://openaccess.thecvf.com/content/CVPR2021/papers/Zhou_Mesoscopic_Photogrammetry_With_an_Unstabilized_Phone_Camera_CVPR_2021_paper.pdf | http://arxiv.org/abs/2012.06044 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Zhou_Mesoscopic_Photogrammetry_With_an_Unstabilized_Phone_Camera_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Zhou_Mesoscopic_Photogrammetry_With_an_Unstabilized_Phone_Camera_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Zhou_Mesoscopic_Photogrammetry_With_CVPR_2021_supplemental.pdf | null |
Joint Generative and Contrastive Learning for Unsupervised Person Re-Identification | Hao Chen, Yaohui Wang, Benoit Lagadec, Antitza Dantcheva, Francois Bremond | Recent self-supervised contrastive learning provides an effective approach for unsupervised person re-identification (ReID) by learning invariance from different views (transformed versions) of an input. In this paper, we incorporate a Generative Adversarial Network (GAN) and a contrastive learning module into one joint training framework. While the GAN provides online data augmentation for contrastive learning, the contrastive module learns view-invariant features for generation. In this context, we propose a mesh-based view generator. Specifically, mesh projections serve as references towards generating novel views of a person. In addition, we propose a view-invariant loss to facilitate contrastive learning between original and generated views. Deviating from previous GAN-based unsupervised ReID methods involving domain adaptation, we do not rely on a labeled source dataset, which makes our method more flexible. Extensive experimental results show that our method significantly outperforms state-of-the-art methods under both fully unsupervised and unsupervised domain adaptive settings on several large-scale ReID datasets. | https://openaccess.thecvf.com/content/CVPR2021/papers/Chen_Joint_Generative_and_Contrastive_Learning_for_Unsupervised_Person_Re-Identification_CVPR_2021_paper.pdf | http://arxiv.org/abs/2012.09071 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Chen_Joint_Generative_and_Contrastive_Learning_for_Unsupervised_Person_Re-Identification_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Chen_Joint_Generative_and_Contrastive_Learning_for_Unsupervised_Person_Re-Identification_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Chen_Joint_Generative_and_CVPR_2021_supplemental.pdf | null |
Wide-Baseline Multi-Camera Calibration Using Person Re-Identification | Yan Xu, Yu-Jhe Li, Xinshuo Weng, Kris Kitani | We address the problem of estimating the 3D pose of a network of cameras for large-environment wide-baseline scenarios, e.g., cameras for construction sites, sports stadiums, and public spaces. This task is challenging since detecting and matching the same 3D keypoint observed from two very different camera views is difficult, making standard structure-from-motion (SfM) pipelines inapplicable. In such circumstances, treating people in the scene as "keypoints" and associating them across different camera views can be an alternative method for obtaining correspondences. Based on this intuition, we propose a method that uses ideas from person re-identification (re-ID) for wide-baseline camera calibration. Our method first employs a re-ID method to associate human bounding boxes across cameras, then converts bounding box correspondences to point correspondences, and finally solves for camera pose using multi-view geometry and bundle adjustment. Since our method does not require specialized calibration targets except for visible people, it applies to situations where frequent calibration updates are required. We perform extensive experiments on datasets captured from scenes of different sizes, camera settings (indoor and outdoor), and human activities (walking, playing basketball, construction). Experiment results show that our method achieves similar performance to standard SfM methods relying on manually labeled point correspondences. | https://openaccess.thecvf.com/content/CVPR2021/papers/Xu_Wide-Baseline_Multi-Camera_Calibration_Using_Person_Re-Identification_CVPR_2021_paper.pdf | http://arxiv.org/abs/2104.08568 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Xu_Wide-Baseline_Multi-Camera_Calibration_Using_Person_Re-Identification_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Xu_Wide-Baseline_Multi-Camera_Calibration_Using_Person_Re-Identification_CVPR_2021_paper.html | CVPR 2021 | null | null |
ATSO: Asynchronous Teacher-Student Optimization for Semi-Supervised Image Segmentation | Xinyue Huo, Lingxi Xie, Jianzhong He, Zijie Yang, Wengang Zhou, Houqiang Li, Qi Tian | Semi-supervised learning is a useful tool for image segmentation, mainly due to its ability to extract knowledge from unlabeled data to assist learning from labeled data. This paper focuses on a popular pipeline known as self-learning, in which we point out a weakness named lazy mimicking: the inertia by which a model retains its own predictions and thus resists updates. To alleviate this issue, we propose the Asynchronous Teacher-Student Optimization (ATSO) algorithm, which (i) breaks up continual learning from teacher to student and (ii) partitions the unlabeled training data into two subsets, alternately using one subset to fine-tune the model, which then updates the labels on the other. We show the ability of ATSO on medical and natural image segmentation. In both scenarios, our method reports competitive performance, on par with the state of the art, whether using partially labeled data from the same dataset or transferring the trained model to an unlabeled dataset. | https://openaccess.thecvf.com/content/CVPR2021/papers/Huo_ATSO_Asynchronous_Teacher-Student_Optimization_for_Semi-Supervised_Image_Segmentation_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Huo_ATSO_Asynchronous_Teacher-Student_Optimization_for_Semi-Supervised_Image_Segmentation_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Huo_ATSO_Asynchronous_Teacher-Student_Optimization_for_Semi-Supervised_Image_Segmentation_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Huo_ATSO_Asynchronous_Teacher-Student_CVPR_2021_supplemental.pdf | null |
Panoramic Image Reflection Removal | Yuchen Hong, Qian Zheng, Lingran Zhao, Xudong Jiang, Alex C. Kot, Boxin Shi | This paper studies the problem of panoramic image reflection removal, aiming at relieving the content ambiguity between reflection and transmission scenes. Although a partial view of the reflection scene is included in the panoramic image, it cannot be utilized directly due to its misalignment with the reflection-contaminated image. We propose a two-step approach to solve this problem, by first accomplishing geometric and photometric alignment for the reflection scene via a coarse-to-fine strategy, and then restoring the transmission scene via a recovery network. The proposed method is trained with a synthetic dataset and verified quantitatively with a real panoramic image dataset. The effectiveness of the proposed method is validated by the significant performance advantage over single image-based reflection removal methods and generalization capacity to limited-FoV scenarios captured with conventional cameras or mobile phones. | https://openaccess.thecvf.com/content/CVPR2021/papers/Hong_Panoramic_Image_Reflection_Removal_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Hong_Panoramic_Image_Reflection_Removal_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Hong_Panoramic_Image_Reflection_Removal_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Hong_Panoramic_Image_Reflection_CVPR_2021_supplemental.zip | null |
OTCE: A Transferability Metric for Cross-Domain Cross-Task Representations | Yang Tan, Yang Li, Shao-Lun Huang | Transfer learning across heterogeneous data distributions (a.k.a. domains) and distinct tasks is a more general and challenging problem than conventional transfer learning, where either domains or tasks are assumed to be the same. While neural network based feature transfer is widely used in transfer learning applications, finding the optimal transfer strategy still requires time-consuming experiments and domain knowledge. We propose a transferability metric called Optimal Transport based Conditional Entropy (OTCE), to analytically predict the transfer performance for supervised classification tasks in such cross-domain and cross-task feature transfer settings. Our OTCE score characterizes transferability as a combination of domain difference and task difference, and explicitly evaluates them from data in a unified framework. Specifically, we use optimal transport to estimate domain difference and the optimal coupling between source and target distributions, which is then used to derive the conditional entropy of the target task (task difference). Experiments on DomainNet, the largest cross-domain dataset, and on Office31 demonstrate that OTCE achieves an average 21% gain in correlation with the ground-truth transfer accuracy compared to state-of-the-art methods. We also investigate two applications of the OTCE score including source model selection and multi-source feature fusion. | https://openaccess.thecvf.com/content/CVPR2021/papers/Tan_OTCE_A_Transferability_Metric_for_Cross-Domain_Cross-Task_Representations_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.13843 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Tan_OTCE_A_Transferability_Metric_for_Cross-Domain_Cross-Task_Representations_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Tan_OTCE_A_Transferability_Metric_for_Cross-Domain_Cross-Task_Representations_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Tan_OTCE_A_Transferability_CVPR_2021_supplemental.pdf | null |
Diverse Semantic Image Synthesis via Probability Distribution Modeling | Zhentao Tan, Menglei Chai, Dongdong Chen, Jing Liao, Qi Chu, Bin Liu, Gang Hua, Nenghai Yu | Semantic image synthesis, translating semantic layouts to photo-realistic images, is a one-to-many mapping problem. Though impressive progress has recently been made, diverse semantic synthesis that can efficiently produce semantic-level multimodal results still remains a challenge. In this paper, we propose a novel diverse semantic image synthesis framework from the perspective of semantic class distributions, which naturally supports diverse generation at semantic or even instance level. We achieve this by modeling class-level conditional modulation parameters as continuous probability distributions instead of discrete values, and sampling per-instance modulation parameters through instance-adaptive stochastic sampling that is consistent across the network. Moreover, we propose prior noise remapping, through linear perturbation parameters encoded from paired references, to facilitate supervised training and exemplar-based instance style control at test time. Extensive experiments on multiple datasets show that our method can achieve superior diversity and comparable quality compared to state-of-the-art methods. Code will be available at https://github.com/tzt101/INADE.git | https://openaccess.thecvf.com/content/CVPR2021/papers/Tan_Diverse_Semantic_Image_Synthesis_via_Probability_Distribution_Modeling_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.06878 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Tan_Diverse_Semantic_Image_Synthesis_via_Probability_Distribution_Modeling_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Tan_Diverse_Semantic_Image_Synthesis_via_Probability_Distribution_Modeling_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Tan_Diverse_Semantic_Image_CVPR_2021_supplemental.pdf | null |
NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections | Ricardo Martin-Brualla, Noha Radwan, Mehdi S. M. Sajjadi, Jonathan T. Barron, Alexey Dosovitskiy, Daniel Duckworth | We present a learning-based method for synthesizing novel views of complex scenes using only unstructured collections of in-the-wild photographs. We build on Neural Radiance Fields (NeRF), which uses the weights of a multi-layer perceptron to model the density and color of a scene as a function of 3D coordinates. While NeRF works well on images of static subjects captured under controlled settings, it is incapable of modeling many ubiquitous, real-world phenomena in uncontrolled images, such as variable illumination or transient occluders. We introduce a series of extensions to NeRF to address these issues, thereby enabling accurate reconstructions from unstructured image collections taken from the internet. We apply our system, dubbed NeRF-W, to internet photo collections of famous landmarks, and demonstrate temporally consistent novel view renderings that are significantly closer to photorealism than the prior state of the art. | https://openaccess.thecvf.com/content/CVPR2021/papers/Martin-Brualla_NeRF_in_the_Wild_Neural_Radiance_Fields_for_Unconstrained_Photo_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Martin-Brualla_NeRF_in_the_Wild_Neural_Radiance_Fields_for_Unconstrained_Photo_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Martin-Brualla_NeRF_in_the_Wild_Neural_Radiance_Fields_for_Unconstrained_Photo_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Martin-Brualla_NeRF_in_the_CVPR_2021_supplemental.pdf | null |
Learning by Watching | Jimuyang Zhang, Eshed Ohn-Bar | When in a new situation or geographical location, human drivers have an extraordinary ability to watch others and learn maneuvers that they themselves may have never performed. In contrast, existing techniques for learning to drive preclude such a possibility as they assume direct access to an instrumented ego-vehicle with fully known observations and expert driver actions. However, such measurements cannot be directly accessed for the non-ego vehicles when learning by watching others. Therefore, in an application where data is regarded as a highly valuable asset, current approaches completely discard the vast portion of the training data that can be potentially obtained through indirect observation of surrounding vehicles. Motivated by this key insight, we propose the Learning by Watching (LbW) framework which enables learning a driving policy without requiring full knowledge of either the state or the expert actions. To increase its data, i.e., with new perspectives and maneuvers, LbW makes use of the demonstrations of other vehicles in a given scene by (1) transforming the ego-vehicle's observations to their points of view, and (2) inferring their expert actions. Our LbW agent learns more robust driving policies while enabling data-efficient learning, including quick adaptation of the policy to rare and novel scenarios. In particular, LbW drives robustly even with a fraction of available driving data required by existing methods, achieving an average success rate of 92% on the original CARLA benchmark with only 30 minutes of total driving data and 82% with only 10 minutes. | https://openaccess.thecvf.com/content/CVPR2021/papers/Zhang_Learning_by_Watching_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_Learning_by_Watching_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_Learning_by_Watching_CVPR_2021_paper.html | CVPR 2021 | null | null |
Pseudo Facial Generation With Extreme Poses for Face Recognition | Guoli Wang, Jiaqi Ma, Qian Zhang, Jiwen Lu, Jie Zhou | Although face recognition has achieved great success in recent years, it is still challenging to recognize facial images with extreme poses. Traditional methods treat this as a domain gap problem. Many of them address it by generating fake frontal faces from extreme ones, but these approaches struggle to preserve identity information and suffer from high computational consumption and uncontrolled disturbances. Our experimental analysis shows a dramatic precision drop with extreme poses. Meanwhile, such extreme poses exhibit only minor visual differences after small rotations. Derived from this insight, we attempt to relieve this huge precision drop by making minor changes to the input images without modifying existing discriminators. A novel lightweight pseudo facial generation method is proposed to relieve the problem of extreme poses without generating any frontal facial image. It can depict the facial contour information and make appropriate modifications to preserve the critical identity information. Specifically, the proposed method reconstructs pseudo profile faces by minimizing the pixel-wise differences with the original profile faces while simultaneously maintaining identity-consistent information from their corresponding frontal faces. The proposed framework can improve existing discriminators and obtains significant improvements on several benchmark datasets. | https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_Pseudo_Facial_Generation_With_Extreme_Poses_for_Face_Recognition_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Wang_Pseudo_Facial_Generation_With_Extreme_Poses_for_Face_Recognition_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Wang_Pseudo_Facial_Generation_With_Extreme_Poses_for_Face_Recognition_CVPR_2021_paper.html | CVPR 2021 | null | null |
Inverting Generative Adversarial Renderer for Face Reconstruction | Jingtan Piao, Keqiang Sun, Quan Wang, Kwan-Yee Lin, Hongsheng Li | Given a monocular face image as input, 3D face geometry reconstruction aims to recover a corresponding 3D face mesh. Recently, both optimization-based and learning-based face reconstruction methods have taken advantage of the emerging differentiable renderer and shown promising results. However, the differentiable renderer, mainly based on graphics rules, simplifies the real-world mechanisms of illumination, reflection, etc., and thus cannot produce realistic images. This brings a lot of domain-shift noise to the optimization or training process. In this work, we introduce a novel Generative Adversarial Renderer (GAR) and propose to tailor its inverted version to the general fitting pipeline to tackle the above problem. Specifically, the carefully designed neural renderer takes a face normal map and a latent code representing other factors as inputs and renders a realistic face image. Since the GAR learns to model the complicated real-world image instead of relying on simplified graphics rules, it is capable of producing realistic images, which essentially inhibits the domain-shift noise in training and optimization. Equipped with the elaborated GAR, we further propose a novel approach to predict 3D face parameters, in which we first obtain fine initial parameters via Renderer Inverting and then refine them with gradient-based optimizers. Extensive experiments have been conducted to demonstrate the effectiveness of the proposed generative adversarial renderer and the novel optimization-based face reconstruction framework. Our method achieves state-of-the-art performance on multiple face reconstruction datasets. | https://openaccess.thecvf.com/content/CVPR2021/papers/Piao_Inverting_Generative_Adversarial_Renderer_for_Face_Reconstruction_CVPR_2021_paper.pdf | http://arxiv.org/abs/2105.02431 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Piao_Inverting_Generative_Adversarial_Renderer_for_Face_Reconstruction_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Piao_Inverting_Generative_Adversarial_Renderer_for_Face_Reconstruction_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Piao_Inverting_Generative_Adversarial_CVPR_2021_supplemental.pdf | null |
Efficient Object Embedding for Spliced Image Retrieval | Bor-Chun Chen, Zuxuan Wu, Larry S. Davis, Ser-Nam Lim | Detecting spliced images is one of the emerging challenges in computer vision. Unlike prior methods that focus on detecting low-level artifacts generated during the manipulation process, we use an image retrieval approach to tackle this problem. When given a spliced query image, our goal is to retrieve the original image from a database of authentic images. To achieve this goal, we propose representing an image by its constituent objects based on the intuition that the finest granularity of manipulations is oftentimes at the object-level. We introduce a framework, object embeddings for spliced image retrieval (OE-SIR), that utilizes modern object detectors to localize object regions. Each region is then embedded and collectively used to represent the image. Further, we propose a student-teacher training paradigm for learning discriminative embeddings within object regions to avoid expensive multiple forward passes. Detailed analysis of the efficacy of different feature embedding models is also provided in this study. Extensive experimental results show that the OE-SIR achieves state-of-the-art performance in spliced image retrieval. | https://openaccess.thecvf.com/content/CVPR2021/papers/Chen_Efficient_Object_Embedding_for_Spliced_Image_Retrieval_CVPR_2021_paper.pdf | http://arxiv.org/abs/1905.11903 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Chen_Efficient_Object_Embedding_for_Spliced_Image_Retrieval_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Chen_Efficient_Object_Embedding_for_Spliced_Image_Retrieval_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Chen_Efficient_Object_Embedding_CVPR_2021_supplemental.pdf | null |
GrooMeD-NMS: Grouped Mathematically Differentiable NMS for Monocular 3D Object Detection | Abhinav Kumar, Garrick Brazil, Xiaoming Liu | Modern 3D object detectors have immensely benefited from the end-to-end learning idea. However, most of them use a post-processing algorithm called Non-Maximal Suppression (NMS) only during inference. While there were attempts to include NMS in the training pipeline for tasks such as 2D object detection, they have been less widely adopted due to a non-mathematical expression of the NMS. In this paper, we present and integrate GrooMeD-NMS -- a novel Grouped Mathematically Differentiable NMS for monocular 3D object detection, such that the network is trained end-to-end with a loss on the boxes after NMS. We first formulate NMS as a matrix operation and then group and mask the boxes in an unsupervised manner to obtain a simple closed-form expression of the NMS. GrooMeD-NMS addresses the mismatch between training and inference pipelines and, therefore, forces the network to select the best 3D box in a differentiable manner. As a result, GrooMeD-NMS achieves state-of-the-art monocular 3D object detection results on the KITTI benchmark dataset performing comparably to monocular video-based methods. | https://openaccess.thecvf.com/content/CVPR2021/papers/Kumar_GrooMeD-NMS_Grouped_Mathematically_Differentiable_NMS_for_Monocular_3D_Object_Detection_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Kumar_GrooMeD-NMS_Grouped_Mathematically_Differentiable_NMS_for_Monocular_3D_Object_Detection_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Kumar_GrooMeD-NMS_Grouped_Mathematically_Differentiable_NMS_for_Monocular_3D_Object_Detection_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Kumar_GrooMeD-NMS_Grouped_Mathematically_CVPR_2021_supplemental.zip | null |
Flow Guided Transformable Bottleneck Networks for Motion Retargeting | Jian Ren, Menglei Chai, Oliver J. Woodford, Kyle Olszewski, Sergey Tulyakov | Human motion retargeting aims to transfer the motion of one person in a driving video or set of images to another person. Existing efforts leverage a long training video from each target person to train a subject-specific motion transfer model. However, the scalability of such methods is limited, as each model can only generate videos for the given target subject, and such training videos are labor-intensive to acquire and process. Few-shot motion transfer techniques, which only require one or a few images from a target, have recently drawn considerable attention. Methods addressing this task generally use either 2D or explicit 3D representations to transfer motion, and in doing so, sacrifice either accurate geometric modeling or the flexibility of an end-to-end learned representation. Inspired by the Transformable Bottleneck Network, which renders novel views and manipulations of rigid objects, we propose an approach based on an implicit volumetric representation of the image content, which can then be spatially manipulated using volumetric flow fields. We address the challenging question of how to aggregate information across different body poses, learning flow fields that allow for combining content from the appropriate regions of input images of highly non-rigid human subjects performing complex motions into a single implicit volumetric representation. This allows us to learn our 3D representation solely from videos of moving people. Armed with both 3D object understanding and end-to-end learned rendering, this categorically novel representation delivers state-of-the-art image generation quality, as shown by our quantitative and qualitative evaluations. | https://openaccess.thecvf.com/content/CVPR2021/papers/Ren_Flow_Guided_Transformable_Bottleneck_Networks_for_Motion_Retargeting_CVPR_2021_paper.pdf | http://arxiv.org/abs/2106.07771 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Ren_Flow_Guided_Transformable_Bottleneck_Networks_for_Motion_Retargeting_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Ren_Flow_Guided_Transformable_Bottleneck_Networks_for_Motion_Retargeting_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Ren_Flow_Guided_Transformable_CVPR_2021_supplemental.zip | null |
Projecting Your View Attentively: Monocular Road Scene Layout Estimation via Cross-View Transformation | Weixiang Yang, Qi Li, Wenxi Liu, Yuanlong Yu, Yuexin Ma, Shengfeng He, Jia Pan | HD map reconstruction is crucial for autonomous driving. LiDAR-based methods are limited by expensive sensors and time-consuming computation. Camera-based methods usually need to separately perform road segmentation and view transformation, which often causes distortion and the absence of content. To push the limits of the technology, we present a novel framework that enables reconstructing a local map formed by road layout and vehicle occupancy in the bird's-eye view given a front-view monocular image only. In particular, we propose a cross-view transformation module, which takes the constraint of cycle consistency between views into account and makes full use of their correlation to strengthen the view transformation and scene understanding. Considering the relationship between vehicles and roads, we also design a context-aware discriminator to further refine the results. Experiments on public benchmarks show that our method achieves the state-of-the-art performance in the tasks of road layout estimation and vehicle occupancy estimation. Especially for the latter task, our model outperforms all competitors by a large margin. Furthermore, our model runs at 35 FPS on a single GPU, which is efficient and applicable for real-time panorama HD map reconstruction. | https://openaccess.thecvf.com/content/CVPR2021/papers/Yang_Projecting_Your_View_Attentively_Monocular_Road_Scene_Layout_Estimation_via_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Yang_Projecting_Your_View_Attentively_Monocular_Road_Scene_Layout_Estimation_via_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Yang_Projecting_Your_View_Attentively_Monocular_Road_Scene_Layout_Estimation_via_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Yang_Projecting_Your_View_CVPR_2021_supplemental.zip | null |
Deep Analysis of CNN-Based Spatio-Temporal Representations for Action Recognition | Chun-Fu Richard Chen, Rameswar Panda, Kandan Ramakrishnan, Rogerio Feris, John Cohn, Aude Oliva, Quanfu Fan | In recent years, a number of approaches based on 2D or 3D convolutional neural networks (CNN) have emerged for video action recognition, achieving state-of-the-art results on several large-scale benchmark datasets. In this paper, we carry out an in-depth comparative analysis to better understand the differences between these approaches and the progress made by them. To this end, we develop a unified framework for both 2D-CNN and 3D-CNN action models, which enables us to remove bells and whistles and provides a common ground for fair comparison. We then conduct a large-scale analysis involving over 300 action recognition models. Our comprehensive analysis reveals that a) a significant leap is made in efficiency for action recognition, but not in accuracy; b) 2D-CNN and 3D-CNN models behave similarly in terms of spatio-temporal representation abilities and transferability. Our codes are available at https://github.com/IBM/action-recognition-pytorch. | https://openaccess.thecvf.com/content/CVPR2021/papers/Chen_Deep_Analysis_of_CNN-Based_Spatio-Temporal_Representations_for_Action_Recognition_CVPR_2021_paper.pdf | http://arxiv.org/abs/2010.11757 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Chen_Deep_Analysis_of_CNN-Based_Spatio-Temporal_Representations_for_Action_Recognition_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Chen_Deep_Analysis_of_CNN-Based_Spatio-Temporal_Representations_for_Action_Recognition_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Chen_Deep_Analysis_of_CVPR_2021_supplemental.pdf | null |
Generalizable Person Re-Identification With Relevance-Aware Mixture of Experts | Yongxing Dai, Xiaotong Li, Jun Liu, Zekun Tong, Ling-Yu Duan | Domain generalizable (DG) person re-identification (ReID) is a challenging problem because we cannot access any unseen target domain data during training. Almost all the existing DG ReID methods follow the same pipeline where they use a hybrid dataset from multiple source domains for training, and then directly apply the trained model to the unseen target domains for testing. These methods often neglect individual source domains' discriminative characteristics and their relevance w.r.t. the unseen target domains, even though both can be leveraged to help the model's generalization. To handle the above two issues, we propose a novel method called the relevance-aware mixture of experts (RaMoE), using an effective voting-based mixture mechanism to dynamically leverage source domains' diverse characteristics to improve the model's generalization. Specifically, we propose a decorrelation loss to make the source domain networks (experts) keep the diversity and discriminability of individual domains' characteristics. Besides, we design a voting network to adaptively integrate all the experts' features into the more generalizable aggregated features with domain relevance. Considering the target domains' invisibility during training, we propose a novel learning-to-learn algorithm combined with our relation alignment loss to update the voting network. Extensive experiments demonstrate that our proposed RaMoE outperforms the state-of-the-art methods. | https://openaccess.thecvf.com/content/CVPR2021/papers/Dai_Generalizable_Person_Re-Identification_With_Relevance-Aware_Mixture_of_Experts_CVPR_2021_paper.pdf | http://arxiv.org/abs/2105.09156 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Dai_Generalizable_Person_Re-Identification_With_Relevance-Aware_Mixture_of_Experts_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Dai_Generalizable_Person_Re-Identification_With_Relevance-Aware_Mixture_of_Experts_CVPR_2021_paper.html | CVPR 2021 | null | null |
Part-Aware Panoptic Segmentation | Daan de Geus, Panagiotis Meletis, Chenyang Lu, Xiaoxiao Wen, Gijs Dubbelman | In this work, we introduce the new scene understanding task of Part-aware Panoptic Segmentation (PPS), which aims to understand a scene at multiple levels of abstraction, and unifies the tasks of scene parsing and part parsing. For this novel task, we provide consistent annotations on two commonly used datasets: Cityscapes and Pascal VOC. Moreover, we present a single metric to evaluate PPS, called Part-aware Panoptic Quality (PartPQ). For this new task, using the metric and annotations, we set multiple baselines by merging results of existing state-of-the-art methods for panoptic segmentation and part segmentation. Finally, we conduct several experiments that evaluate the importance of the different levels of abstraction in this single task. | https://openaccess.thecvf.com/content/CVPR2021/papers/de_Geus_Part-Aware_Panoptic_Segmentation_CVPR_2021_paper.pdf | http://arxiv.org/abs/2106.06351 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/de_Geus_Part-Aware_Panoptic_Segmentation_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/de_Geus_Part-Aware_Panoptic_Segmentation_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/de_Geus_Part-Aware_Panoptic_Segmentation_CVPR_2021_supplemental.pdf | null |
Unsupervised Degradation Representation Learning for Blind Super-Resolution | Longguang Wang, Yingqian Wang, Xiaoyu Dong, Qingyu Xu, Jungang Yang, Wei An, Yulan Guo | Most existing CNN-based super-resolution (SR) methods are developed based on an assumption that the degradation is fixed and known (e.g., bicubic downsampling). However, these methods suffer a severe performance drop when the real degradation is different from their assumption. To handle various unknown degradations in real-world applications, previous methods rely on degradation estimation to reconstruct the SR image. Nevertheless, degradation estimation methods are usually time-consuming and may lead to SR failure due to large estimation errors. In this paper, we propose an unsupervised degradation representation learning scheme for blind SR without explicit degradation estimation. Specifically, we learn abstract representations to distinguish various degradations in the representation space rather than explicit estimation in the pixel space. Moreover, we introduce a Degradation-Aware SR (DASR) network with flexible adaption to various degradations based on the learned representations. It is demonstrated that our degradation representation learning scheme can extract discriminative representations to obtain accurate degradation information. Experiments on both synthetic and real images show that our network achieves state-of-the-art performance for the blind SR task. Code is available at: https://github.com/LongguangWang/DASR. | https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_Unsupervised_Degradation_Representation_Learning_for_Blind_Super-Resolution_CVPR_2021_paper.pdf | http://arxiv.org/abs/2104.00416 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Wang_Unsupervised_Degradation_Representation_Learning_for_Blind_Super-Resolution_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Wang_Unsupervised_Degradation_Representation_Learning_for_Blind_Super-Resolution_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Wang_Unsupervised_Degradation_Representation_CVPR_2021_supplemental.pdf | null |
Convolutional Hough Matching Networks | Juhong Min, Minsu Cho | Despite advances in feature representation, leveraging geometric relations is crucial for establishing reliable visual correspondences under large variations of images. In this work we introduce a Hough transform perspective on convolutional matching and propose an effective geometric matching algorithm, dubbed Convolutional Hough Matching (CHM). The method distributes similarities of candidate matches over a geometric transformation space and evaluates them in a convolutional manner. We cast it into a trainable neural layer with a semi-isotropic high-dimensional kernel, which learns non-rigid matching with a small number of interpretable parameters. To validate the effect, we develop a neural network with CHM layers that perform convolutional matching in the space of translation and scaling. Our method sets a new state of the art on standard benchmarks for semantic visual correspondence, proving its strong robustness to challenging intra-class variations. | https://openaccess.thecvf.com/content/CVPR2021/papers/Min_Convolutional_Hough_Matching_Networks_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.16831 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Min_Convolutional_Hough_Matching_Networks_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Min_Convolutional_Hough_Matching_Networks_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Min_Convolutional_Hough_Matching_CVPR_2021_supplemental.pdf | null |
Hierarchical and Partially Observable Goal-Driven Policy Learning With Goals Relational Graph | Xin Ye, Yezhou Yang | We present a novel two-layer hierarchical reinforcement learning approach equipped with a Goals Relational Graph (GRG) for tackling the partially observable goal-driven task, such as goal-driven visual navigation. Our GRG captures the underlying relations of all goals in the goal space through a Dirichlet-categorical process that facilitates: 1) the high-level network raising a sub-goal towards achieving a designated final goal; 2) the low-level network progressing towards an optimal policy; and 3) the overall system generalizing to unseen environments and goals. We evaluate our approach with two settings of partially observable goal-driven tasks -- a grid-world domain and a robotic object search task. Our experimental results show that our approach exhibits superior generalization performance on both unseen environments and new goals. | https://openaccess.thecvf.com/content/CVPR2021/papers/Ye_Hierarchical_and_Partially_Observable_Goal-Driven_Policy_Learning_With_Goals_Relational_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.01350 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Ye_Hierarchical_and_Partially_Observable_Goal-Driven_Policy_Learning_With_Goals_Relational_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Ye_Hierarchical_and_Partially_Observable_Goal-Driven_Policy_Learning_With_Goals_Relational_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Ye_Hierarchical_and_Partially_CVPR_2021_supplemental.pdf | null |
Point 4D Transformer Networks for Spatio-Temporal Modeling in Point Cloud Videos | Hehe Fan, Yi Yang, Mohan Kankanhalli | Point cloud videos exhibit irregularities and lack of order along the spatial dimension where points emerge inconsistently across different frames. To capture the dynamics in point cloud videos, point tracking is usually employed. However, as points may flow in and out across frames, computing accurate point trajectories is extremely difficult. Moreover, tracking usually relies on point colors and thus may fail to handle colorless point clouds. In this paper, to avoid point tracking, we propose a novel Point 4D Transformer (P4Transformer) network to model raw point cloud videos. Specifically, P4Transformer consists of (i) a point 4D convolution to embed the spatio-temporal local structures presented in a point cloud video and (ii) a transformer to capture the appearance and motion information across the entire video by performing self-attention on the embedded local features. In this fashion, related or similar local areas are merged with attention weight rather than by explicit tracking. Extensive experiments, including 3D action recognition and 4D semantic segmentation, on four benchmarks demonstrate the effectiveness of our P4Transformer for point cloud video modeling. | https://openaccess.thecvf.com/content/CVPR2021/papers/Fan_Point_4D_Transformer_Networks_for_Spatio-Temporal_Modeling_in_Point_Cloud_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Fan_Point_4D_Transformer_Networks_for_Spatio-Temporal_Modeling_in_Point_Cloud_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Fan_Point_4D_Transformer_Networks_for_Spatio-Temporal_Modeling_in_Point_Cloud_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Fan_Point_4D_Transformer_CVPR_2021_supplemental.pdf | null |
CoCoNets: Continuous Contrastive 3D Scene Representations | Shamit Lal, Mihir Prabhudesai, Ishita Mediratta, Adam W. Harley, Katerina Fragkiadaki | This paper explores self-supervised learning of amodal 3D feature representations from RGB and RGB-D posed images and videos, agnostic to object and scene semantic content, and evaluates the resulting scene representations in the downstream tasks of visual correspondence, object tracking, and object detection. The model infers a latent 3D representation of the scene in the form of 3D feature points, where each continuous world 3D point is mapped to its corresponding feature vector. The model is trained for contrastive view prediction by rendering 3D feature clouds in queried viewpoints and matching against the 3D feature point cloud predicted from the query view. Notably, the representation can be queried for any 3D location, even if it is not visible from the input view. Our model brings together three powerful ideas of recent exciting research work: 3D feature grids as a neural bottleneck for view prediction, implicit functions for handling resolution limitations of 3D grids, and contrastive learning for unsupervised training of feature representations. We show the resulting 3D visual feature representations effectively scale across objects and scenes, imagine information occluded or missing from the input viewpoints, track objects over time, align semantically related objects in 3D, and improve 3D object detection. We outperform many existing state-of-the-art methods for 3D feature learning and view prediction, which are either limited by 3D grid spatial resolution, do not attempt to build amodal 3D representations, or do not handle combinatorial scene variability due to their non-convolutional bottlenecks. | https://openaccess.thecvf.com/content/CVPR2021/papers/Lal_CoCoNets_Continuous_Contrastive_3D_Scene_Representations_CVPR_2021_paper.pdf | http://arxiv.org/abs/2104.03851 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Lal_CoCoNets_Continuous_Contrastive_3D_Scene_Representations_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Lal_CoCoNets_Continuous_Contrastive_3D_Scene_Representations_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Lal_CoCoNets_Continuous_Contrastive_CVPR_2021_supplemental.zip | null |
Distribution Alignment: A Unified Framework for Long-Tail Visual Recognition | Songyang Zhang, Zeming Li, Shipeng Yan, Xuming He, Jian Sun | Despite the success of deep neural networks, it remains challenging to effectively build a system for long-tail visual recognition tasks. To address this problem, we first investigate the performance bottleneck of the two-stage learning framework via an ablative study. Motivated by our discovery, we develop a unified distribution alignment strategy for long-tail visual recognition. Particularly, we first propose an adaptive calibration strategy for each data point to calibrate its classification scores. Then we introduce a generalized re-weight method to incorporate the class prior, which provides a flexible and unified solution to cope with diverse scenarios of various visual recognition tasks. We validate our method by extensive experiments on four tasks, including image classification, semantic segmentation, object detection, and instance segmentation. Our approach achieves the state-of-the-art results across all four recognition tasks with a simple and unified framework. | https://openaccess.thecvf.com/content/CVPR2021/papers/Zhang_Distribution_Alignment_A_Unified_Framework_for_Long-Tail_Visual_Recognition_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.16370 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_Distribution_Alignment_A_Unified_Framework_for_Long-Tail_Visual_Recognition_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_Distribution_Alignment_A_Unified_Framework_for_Long-Tail_Visual_Recognition_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Zhang_Distribution_Alignment_A_CVPR_2021_supplemental.pdf | null |
Dynamic Class Queue for Large Scale Face Recognition in the Wild | Bi Li, Teng Xi, Gang Zhang, Haocheng Feng, Junyu Han, Jingtuo Liu, Errui Ding, Wenyu Liu | Learning discriminative representation using large-scale face datasets in the wild is crucial for real-world applications, yet it remains challenging. The difficulties lie in many aspects, and this work focuses on the computing resource constraint and the long-tailed class distribution. Recently, classification-based representation learning with deep neural networks and well-designed losses has demonstrated good recognition performance. However, the computing and memory cost linearly scales up to the number of identities (classes) in the training set, and the learning process suffers from unbalanced classes. In this work, we propose a dynamic class queue (DCQ) to tackle these two problems. Specifically, at each training iteration, a subset of classes is dynamically selected for recognition and their class weights are dynamically generated on the fly and stored in a queue. Since only a subset of classes is selected for each iteration, the computing requirement is reduced. By using a single server without model parallelism, we empirically verify on large-scale datasets that 10% of the classes are sufficient to achieve performance similar to using all classes. Moreover, the class weights are dynamically generated in a few-shot manner and therefore suitable for tail classes with only a few instances. We show clear improvement over a strong baseline in the largest public dataset Megaface Challenge2 (MF2) which has 672K identities and over 88% of them have fewer than 10 instances. Code is available at https://github.com/bilylee/DCQ | https://openaccess.thecvf.com/content/CVPR2021/papers/Li_Dynamic_Class_Queue_for_Large_Scale_Face_Recognition_in_the_CVPR_2021_paper.pdf | http://arxiv.org/abs/2105.11113 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Li_Dynamic_Class_Queue_for_Large_Scale_Face_Recognition_in_the_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Li_Dynamic_Class_Queue_for_Large_Scale_Face_Recognition_in_the_CVPR_2021_paper.html | CVPR 2021 | null | null |
3D-MAN: 3D Multi-Frame Attention Network for Object Detection | Zetong Yang, Yin Zhou, Zhifeng Chen, Jiquan Ngiam | 3D object detection is an important module in autonomous driving and robotics. However, many existing methods focus on using single frames to perform 3D detection, and do not fully utilize information from multiple frames. In this paper, we present 3D-MAN: a 3D multi-frame attention network that effectively aggregates features from multiple perspectives and achieves state-of-the-art performance on Waymo Open Dataset. 3D-MAN first uses a novel fast single-frame detector to produce box proposals. The box proposals and their corresponding feature maps are then stored in a memory bank. We design a multi-view alignment and aggregation module, using attention networks, to extract and aggregate the temporal features stored in the memory bank. This effectively combines the features coming from different perspectives of the scene. We demonstrate the effectiveness of our approach on the large-scale complex Waymo Open Dataset, achieving state-of-the-art results compared to published single-frame and multi-frame methods. | https://openaccess.thecvf.com/content/CVPR2021/papers/Yang_3D-MAN_3D_Multi-Frame_Attention_Network_for_Object_Detection_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Yang_3D-MAN_3D_Multi-Frame_Attention_Network_for_Object_Detection_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Yang_3D-MAN_3D_Multi-Frame_Attention_Network_for_Object_Detection_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Yang_3D-MAN_3D_Multi-Frame_CVPR_2021_supplemental.pdf | null |
Cross-Modal Center Loss for 3D Cross-Modal Retrieval | Longlong Jing, Elahe Vahdani, Jiaxing Tan, Yingli Tian | Cross-modal retrieval aims to learn discriminative and modal-invariant features for data from different modalities. Unlike the existing methods which usually learn from the features extracted by offline networks, in this paper, we propose an approach to jointly train the components of the cross-modal retrieval framework with metadata, enabling the network to find optimal features. The proposed end-to-end framework is updated with three loss functions: 1) a novel cross-modal center loss to eliminate cross-modal discrepancy, 2) cross-entropy loss to maximize inter-class variations, and 3) mean-square-error loss to reduce modality variations. In particular, our proposed cross-modal center loss minimizes the distances of features from objects belonging to the same class across all modalities. Extensive experiments have been conducted on the retrieval tasks across multi-modalities including 2D image, 3D point cloud and mesh data. The proposed framework significantly outperforms the state-of-the-art methods for both cross-modal and in-domain retrieval for 3D objects on the ModelNet10 and ModelNet40 datasets. | https://openaccess.thecvf.com/content/CVPR2021/papers/Jing_Cross-Modal_Center_Loss_for_3D_Cross-Modal_Retrieval_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Jing_Cross-Modal_Center_Loss_for_3D_Cross-Modal_Retrieval_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Jing_Cross-Modal_Center_Loss_for_3D_Cross-Modal_Retrieval_CVPR_2021_paper.html | CVPR 2021 | null | null |
Learning View Selection for 3D Scenes | Yifan Sun, Qixing Huang, Dun-Yu Hsiao, Li Guan, Gang Hua | Efficient 3D space sampling to represent an underlying 3D object/scene is essential for 3D vision, robotics, and beyond. A standard approach is to explicitly sample a dense collection of views and formulate it as a view selection problem, or, more generally, a set cover problem. In this paper, we introduce a novel approach that avoids dense view sampling. The key idea is to learn a view prediction network and a trainable aggregation module that takes the predicted views as input and outputs an approximation of their generic scores (e.g., surface coverage, viewing angle from surface normals). This methodology allows us to turn the set cover problem (or multi-view representation optimization) into a continuous optimization problem. We then explain how to effectively solve the induced optimization problem using continuation, i.e., aggregating a hierarchy of smoothed scoring modules. Experimental results show that our approach arrives at similar or better solutions with about 10x speedup in running time compared with standard methods. | https://openaccess.thecvf.com/content/CVPR2021/papers/Sun_Learning_View_Selection_for_3D_Scenes_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Sun_Learning_View_Selection_for_3D_Scenes_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Sun_Learning_View_Selection_for_3D_Scenes_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Sun_Learning_View_Selection_CVPR_2021_supplemental.pdf | null |
FESTA: Flow Estimation via Spatial-Temporal Attention for Scene Point Clouds | Haiyan Wang, Jiahao Pang, Muhammad A. Lodhi, Yingli Tian, Dong Tian | Scene flow depicts the dynamics of a 3D scene, which is critical for various applications such as autonomous driving, robot navigation, AR/VR, etc. Conventionally, scene flow is estimated from dense/regular RGB video frames. With the development of depth-sensing technologies, precise 3D measurements are available via point clouds which have sparked new research in 3D scene flow. Nevertheless, it remains challenging to extract scene flow from point clouds due to the sparsity and irregularity in typical point cloud sampling patterns. One major issue related to irregular sampling is identified as the randomness during point set abstraction/feature extraction---an elementary process in many flow estimation scenarios. A novel Spatial Abstraction with Attention (SA^2) layer is accordingly proposed to alleviate the unstable abstraction problem. Moreover, a Temporal Abstraction with Attention (TA^2) layer is proposed to rectify attention in the temporal domain, leading to benefits with motions scaled in a larger range. Extensive analysis and experiments verified the motivation and significant performance gains of our method, dubbed Flow Estimation via Spatial-Temporal Attention (FESTA), when compared to several state-of-the-art benchmarks of scene flow estimation. | https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_FESTA_Flow_Estimation_via_Spatial-Temporal_Attention_for_Scene_Point_Clouds_CVPR_2021_paper.pdf | http://arxiv.org/abs/2104.00798 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Wang_FESTA_Flow_Estimation_via_Spatial-Temporal_Attention_for_Scene_Point_Clouds_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Wang_FESTA_Flow_Estimation_via_Spatial-Temporal_Attention_for_Scene_Point_Clouds_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Wang_FESTA_Flow_Estimation_CVPR_2021_supplemental.pdf | null |
Semi-Supervised Action Recognition With Temporal Contrastive Learning | Ankit Singh, Omprakash Chakraborty, Ashutosh Varshney, Rameswar Panda, Rogerio Feris, Kate Saenko, Abir Das | Learning to recognize actions from only a handful of labeled videos is a challenging problem due to the scarcity of tediously collected activity labels. We approach this problem by learning a two-pathway temporal contrastive model using unlabeled videos at two different speeds, leveraging the fact that changing video speed does not change an action. Specifically, we propose to maximize the similarity between encoded representations of the same video at two different speeds as well as minimize the similarity between different videos played at different speeds. This way we use the rich supervisory information in terms of 'time' that is present in an otherwise unsupervised pool of videos. With this simple yet effective strategy of manipulating video playback rates, we considerably outperform video extensions of sophisticated state-of-the-art semi-supervised image recognition methods across multiple diverse benchmark datasets and network architectures. Interestingly, our proposed approach benefits from out-of-domain unlabeled videos, showing generalization and robustness. We also perform rigorous ablations and analysis to validate our approach. Project page: https://cvir.github.io/TCL/. | https://openaccess.thecvf.com/content/CVPR2021/papers/Singh_Semi-Supervised_Action_Recognition_With_Temporal_Contrastive_Learning_CVPR_2021_paper.pdf | http://arxiv.org/abs/2102.02751 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Singh_Semi-Supervised_Action_Recognition_With_Temporal_Contrastive_Learning_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Singh_Semi-Supervised_Action_Recognition_With_Temporal_Contrastive_Learning_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Singh_Semi-Supervised_Action_Recognition_CVPR_2021_supplemental.pdf | null |
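The speed-invariance idea in the entry above can be illustrated with an InfoNCE-style objective in which the same clip encoded at two playback rates forms a positive pair and other clips in the batch serve as negatives. The sketch below assumes PyTorch; the function name and temperature are hypothetical and not taken from the TCL code.

```python
# Illustrative InfoNCE-style loss over two playback speeds of the same clips.
# Row i of z_fast and z_slow come from the same video played at different rates;
# all other rows in the batch act as negatives. Not the authors' released code.
import torch
import torch.nn.functional as F


def speed_contrastive_loss(z_fast: torch.Tensor, z_slow: torch.Tensor,
                           temperature: float = 0.1) -> torch.Tensor:
    z_fast = F.normalize(z_fast, dim=1)          # (B, D)
    z_slow = F.normalize(z_slow, dim=1)          # (B, D)
    logits = z_fast @ z_slow.t() / temperature   # (B, B) cosine similarities
    targets = torch.arange(z_fast.size(0))       # positives sit on the diagonal
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    B, D = 16, 128
    loss = speed_contrastive_loss(torch.randn(B, D), torch.randn(B, D))
    print(loss.item())
```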
SG-Net: Spatial Granularity Network for One-Stage Video Instance Segmentation | Dongfang Liu, Yiming Cui, Wenbo Tan, Yingjie Chen | Video instance segmentation (VIS) is a new and critical task in computer vision. To date, top-performing VIS methods extend the two-stage Mask R-CNN by adding a tracking branch, leaving plenty of room for improvement. In contrast, we approach the VIS task from a new perspective and propose a one-stage spatial granularity network (SG-Net). SG-Net demonstrates four advantages: 1) Our task heads (detection, segmentation, and tracking) are crafted interdependently so they can effectively share features and enjoy the joint optimization; 2) Each of our task predictions avoids using proposal-based RoI features, resulting in much reduced runtime complexity per instance; 3) Our mask prediction is dynamically performed on the sub-regions of each detected instance, leading to high-quality masks of fine granularity; 4) Our tracking head models objects' centerness movements for tracking, which effectively enhances the tracking robustness to different object appearances. In evaluation, we present state-of-the-art comparisons on the YouTube-VIS dataset. Extensive experiments demonstrate that our compact one-stage method can achieve improved performance in both accuracy and inference speed. We hope our SG-Net could serve as a simple yet strong baseline for the VIS task. Code will be available. | https://openaccess.thecvf.com/content/CVPR2021/papers/Liu_SG-Net_Spatial_Granularity_Network_for_One-Stage_Video_Instance_Segmentation_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Liu_SG-Net_Spatial_Granularity_Network_for_One-Stage_Video_Instance_Segmentation_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Liu_SG-Net_Spatial_Granularity_Network_for_One-Stage_Video_Instance_Segmentation_CVPR_2021_paper.html | CVPR 2021 | null | null |
Learned Initializations for Optimizing Coordinate-Based Neural Representations | Matthew Tancik, Ben Mildenhall, Terrance Wang, Divi Schmidt, Pratul P. Srinivasan, Jonathan T. Barron, Ren Ng | Coordinate-based neural representations have shown significant promise as an alternative to discrete, array-based representations for complex low dimensional signals. However, optimizing a coordinate-based network from randomly initialized weights for each new signal is inefficient. We propose applying standard meta-learning algorithms to learn the initial weight parameters for these fully-connected networks based on the underlying class of signals being represented (e.g., images of faces or 3D models of chairs). Despite requiring only a minor change in implementation, using these learned initial weights enables faster convergence during optimization and can serve as a strong prior over the signal class being modeled, resulting in better generalization when only partial observations of a given signal are available. We explore these benefits across a variety of tasks, including representing 2D images, reconstructing CT scans, and recovering 3D shapes and scenes from 2D image observations. | https://openaccess.thecvf.com/content/CVPR2021/papers/Tancik_Learned_Initializations_for_Optimizing_Coordinate-Based_Neural_Representations_CVPR_2021_paper.pdf | http://arxiv.org/abs/2012.02189 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Tancik_Learned_Initializations_for_Optimizing_Coordinate-Based_Neural_Representations_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Tancik_Learned_Initializations_for_Optimizing_Coordinate-Based_Neural_Representations_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Tancik_Learned_Initializations_for_CVPR_2021_supplemental.pdf | null |
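One way to realize the learned-initialization idea above is a Reptile-style outer loop: fit a small coordinate MLP to one signal for a few inner steps, then nudge a shared initialization toward the adapted weights. The architecture, data, and hyper-parameters below are stand-ins, not the paper's exact MAML/Reptile setup.

```python
# Sketch of Reptile-style meta-learning of an initialization for a coordinate MLP
# that maps (x, y) -> RGB. Each "task" is fitting one signal for a few inner steps;
# the meta-update moves the shared initialization toward the adapted weights.
import copy
import torch
import torch.nn as nn


def make_coord_mlp() -> nn.Module:
    return nn.Sequential(nn.Linear(2, 256), nn.ReLU(),
                         nn.Linear(256, 256), nn.ReLU(),
                         nn.Linear(256, 3))


def reptile_step(meta_model, coords, target, inner_steps=8, inner_lr=1e-2, meta_lr=0.1):
    # Inner loop: adapt a copy of the current initialization to one signal.
    model = copy.deepcopy(meta_model)
    opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
    for _ in range(inner_steps):
        opt.zero_grad()
        loss = ((model(coords) - target) ** 2).mean()
        loss.backward()
        opt.step()
    # Outer update: move the meta-initialization toward the adapted weights.
    with torch.no_grad():
        for p_meta, p_task in zip(meta_model.parameters(), model.parameters()):
            p_meta += meta_lr * (p_task - p_meta)


if __name__ == "__main__":
    meta_model = make_coord_mlp()
    coords = torch.rand(1024, 2)     # random (x, y) samples of one "image"
    target = torch.rand(1024, 3)     # their RGB values (stand-in data)
    reptile_step(meta_model, coords, target)
```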
Actor-Context-Actor Relation Network for Spatio-Temporal Action Localization | Junting Pan, Siyu Chen, Mike Zheng Shou, Yu Liu, Jing Shao, Hongsheng Li | Localizing persons and recognizing their actions from videos is a challenging task towards high-level video understanding. Recent advances have been achieved by modeling direct pairwise relations between entities. In this paper, we take one step further and not only model direct relations between pairs but also take into account indirect higher-order relations established upon multiple elements. We propose to explicitly model the Actor-Context-Actor Relation, which is the relation between two actors based on their interactions with the context. To this end, we design an Actor-Context-Actor Relation Network (ACAR-Net) which builds upon a novel High-order Relation Reasoning Operator and an Actor-Context Feature Bank to enable indirect relation reasoning for spatio-temporal action localization. Experiments on AVA and UCF101-24 datasets show the advantages of modeling actor-context-actor relations, and visualization of attention maps further verifies that our model is capable of finding relevant higher-order relations to support action detection. Notably, our method ranks first in the AVA-Kinetics action localization task of ActivityNet Challenge 2020, outperforming other entries by a significant margin (+6.71 mAP). The code is available online. | https://openaccess.thecvf.com/content/CVPR2021/papers/Pan_Actor-Context-Actor_Relation_Network_for_Spatio-Temporal_Action_Localization_CVPR_2021_paper.pdf | http://arxiv.org/abs/2006.07976 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Pan_Actor-Context-Actor_Relation_Network_for_Spatio-Temporal_Action_Localization_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Pan_Actor-Context-Actor_Relation_Network_for_Spatio-Temporal_Action_Localization_CVPR_2021_paper.html | CVPR 2021 | null | null |
Cross-View Cross-Scene Multi-View Crowd Counting | Qi Zhang, Wei Lin, Antoni B. Chan | Multi-view crowd counting has been previously proposed to utilize multi-cameras to extend the field-of-view of a single camera, capturing more people in the scene, and improve counting performance for occluded people or those in low resolution. However, the current multi-view paradigm trains and tests on the same single scene and camera-views, which limits its practical application. In this paper, we propose a cross-view cross-scene (CVCS) multi-view crowd counting paradigm, where the training and testing occur on different scenes with arbitrary camera layouts. To dynamically handle the challenge of optimal view fusion under scene and camera layout change and non-correspondence noise due to camera calibration errors or erroneous features, we propose a CVCS model that attentively selects and fuses multiple views together using camera layout geometry, and a noise view regularization method to train the model to handle non-correspondence errors. We also generate a large synthetic multi-camera crowd counting dataset with a large number of scenes and camera views to capture many possible variations, which avoids the difficulty of collecting and annotating such a large real dataset. We then test our trained CVCS model on real multi-view counting datasets, by using unsupervised domain transfer. The proposed CVCS model trained on synthetic data outperforms the same model trained only on real data, and achieves promising performance compared to fully supervised methods that train and test on the same single scene. | https://openaccess.thecvf.com/content/CVPR2021/papers/Zhang_Cross-View_Cross-Scene_Multi-View_Crowd_Counting_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_Cross-View_Cross-Scene_Multi-View_Crowd_Counting_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_Cross-View_Cross-Scene_Multi-View_Crowd_Counting_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Zhang_Cross-View_Cross-Scene_Multi-View_CVPR_2021_supplemental.pdf | null |
Semantic Segmentation With Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization | Daiqing Li, Junlin Yang, Karsten Kreis, Antonio Torralba, Sanja Fidler | Training deep networks with limited labeled data while achieving a strong generalization ability is key in the quest to reduce human annotation efforts. This is the goal of semi-supervised learning, which exploits more widely available unlabeled data to complement small labeled data sets. In this paper, we propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels. Concretely, we learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images supplemented with only few labeled ones. We build our architecture on top of StyleGAN2, augmented with a label synthesis branch. Image labeling at test time is achieved by first embedding the target image into the joint latent space via an encoder network and test-time optimization, and then generating the label from the inferred embedding. We evaluate our approach in two important domains: medical image segmentation and part-based face segmentation. We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization, such as transferring from CT to MRI in medical imaging, and photographs of real faces to paintings, sculptures, and even cartoons and animal faces. | https://openaccess.thecvf.com/content/CVPR2021/papers/Li_Semantic_Segmentation_With_Generative_Models_Semi-Supervised_Learning_and_Strong_Out-of-Domain_CVPR_2021_paper.pdf | http://arxiv.org/abs/2104.05833 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Li_Semantic_Segmentation_With_Generative_Models_Semi-Supervised_Learning_and_Strong_Out-of-Domain_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Li_Semantic_Segmentation_With_Generative_Models_Semi-Supervised_Learning_and_Strong_Out-of-Domain_CVPR_2021_paper.html | CVPR 2021 | null | null |
Depth-Aware Mirror Segmentation | Haiyang Mei, Bo Dong, Wen Dong, Pieter Peers, Xin Yang, Qiang Zhang, Xiaopeng Wei | We present a novel mirror segmentation method that leverages depth estimates from ToF-based cameras as an additional cue to disambiguate challenging cases where the contrast or relation in RGB colors between the mirror reflection and the surrounding scene is subtle. A key observation is that ToF depth estimates do not report the true depth of the mirror surface, but instead return the total length of the reflected light paths, thereby creating obvious depth discontinuities at the mirror boundaries. To exploit depth information in mirror segmentation, we first construct a large-scale RGB-D mirror segmentation dataset, which we subsequently employ to train a novel depth-aware mirror segmentation framework. Our mirror segmentation framework first locates the mirrors based on color and depth discontinuities and correlations. Next, our model further refines the mirror boundaries through contextual contrast taking into account both color and depth information. We extensively validate our depth-aware mirror segmentation method and demonstrate that our model outperforms state-of-the-art RGB and RGB-D based methods for mirror segmentation. Experimental results also show that depth is a powerful cue for mirror segmentation. | https://openaccess.thecvf.com/content/CVPR2021/papers/Mei_Depth-Aware_Mirror_Segmentation_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Mei_Depth-Aware_Mirror_Segmentation_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Mei_Depth-Aware_Mirror_Segmentation_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Mei_Depth-Aware_Mirror_Segmentation_CVPR_2021_supplemental.pdf | null |
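The depth cue described above can be illustrated with a tiny NumPy example: because a ToF sensor reports the longer reflected light path inside a mirror, the depth-gradient magnitude spikes at the mirror boundary. The synthetic depth map and threshold below are purely illustrative.

```python
# Minimal illustration of the depth cue: ToF depth jumps at mirror boundaries,
# so a large depth-gradient magnitude flags candidate mirror edges.
import numpy as np

depth = np.full((64, 64), 2.0, dtype=np.float32)   # wall roughly 2 m away
depth[16:48, 16:48] = 5.0                          # mirror region: reflected path is longer

gy, gx = np.gradient(depth)
edge_strength = np.hypot(gx, gy)
candidate_boundary = edge_strength > 0.5           # hand-picked, illustrative threshold

print(candidate_boundary.sum(), "pixels flagged as depth discontinuities")
```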
You Only Look One-Level Feature | Qiang Chen, Yingming Wang, Tong Yang, Xiangyu Zhang, Jian Cheng, Jian Sun | This paper revisits feature pyramid networks (FPN) for one-stage detectors and points out that the success of FPN is due to its divide-and-conquer solution to the optimization problem in object detection rather than multi-scale feature fusion. From the perspective of optimization, we introduce an alternative way to address the problem instead of adopting the complex feature pyramids -- utilizing only one-level feature for detection. Based on the simple and efficient solution, we present You Only Look One-level Feature (YOLOF). In our method, two key components, Dilated Encoder and Uniform Matching, are proposed and bring considerable improvements. Extensive experiments on the COCO benchmark prove the effectiveness of the proposed model. Our YOLOF achieves comparable results with its feature pyramid counterpart RetinaNet while being 2.5 times faster. Without transformer layers, YOLOF can match the performance of DETR in a single-level feature manner with 7 times fewer training epochs. | https://openaccess.thecvf.com/content/CVPR2021/papers/Chen_You_Only_Look_One-Level_Feature_CVPR_2021_paper.pdf | http://arxiv.org/abs/2103.09460 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Chen_You_Only_Look_One-Level_Feature_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Chen_You_Only_Look_One-Level_Feature_CVPR_2021_paper.html | CVPR 2021 | null | null |
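The single-level "Dilated Encoder" mentioned above can be approximated, at sketch level, by stacking residual blocks whose 3x3 convolutions use increasing dilation rates to enlarge the receptive field of one feature map. Channel widths and dilation rates below are assumptions, not YOLOF's exact configuration.

```python
# Rough sketch of a residual block with a dilated 3x3 convolution, the kind of
# component a single-level "dilated encoder" could stack to widen the receptive
# field of one backbone feature map. Sizes and rates here are illustrative.
import torch
import torch.nn as nn


class DilatedResidualBlock(nn.Module):
    def __init__(self, channels: int, mid_channels: int, dilation: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, mid_channels, 1), nn.BatchNorm2d(mid_channels), nn.ReLU(),
            nn.Conv2d(mid_channels, mid_channels, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(mid_channels), nn.ReLU(),
            nn.Conv2d(mid_channels, channels, 1), nn.BatchNorm2d(channels), nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.block(x)   # residual connection keeps the single-level map


if __name__ == "__main__":
    encoder = nn.Sequential(*[DilatedResidualBlock(512, 128, d) for d in (2, 4, 6, 8)])
    feat = torch.randn(1, 512, 32, 32)    # single-level backbone feature map
    print(encoder(feat).shape)            # spatial size and channels are preserved
```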
Multi-Perspective LSTM for Joint Visual Representation Learning | Alireza Sepas-Moghaddam, Fernando Pereira, Paulo Lobato Correia, Ali Etemad | We present a novel LSTM cell architecture capable of learning both intra- and inter-perspective relationships available in visual sequences captured from multiple perspectives. Our architecture adopts a novel recurrent joint learning strategy that uses additional gates and memories at the cell level. We demonstrate that by using the proposed cell to create a network, more effective and richer visual representations are learned for recognition tasks. We validate the performance of our proposed architecture in the context of two multi-perspective visual recognition tasks namely lip reading and face recognition. Three relevant datasets are considered and the results are compared against fusion strategies, other existing multi-input LSTM architectures, and alternative recognition solutions. The experiments show the superior performance of our solution over the considered benchmarks, both in terms of recognition accuracy and complexity. We make our code publicly available at: https://github.com/arsm/MPLSTM | https://openaccess.thecvf.com/content/CVPR2021/papers/Sepas-Moghaddam_Multi-Perspective_LSTM_for_Joint_Visual_Representation_Learning_CVPR_2021_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Sepas-Moghaddam_Multi-Perspective_LSTM_for_Joint_Visual_Representation_Learning_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Sepas-Moghaddam_Multi-Perspective_LSTM_for_Joint_Visual_Representation_Learning_CVPR_2021_paper.html | CVPR 2021 | null | null |
Towards Improving the Consistency, Efficiency, and Flexibility of Differentiable Neural Architecture Search | Yibo Yang, Shan You, Hongyang Li, Fei Wang, Chen Qian, Zhouchen Lin | Most differentiable neural architecture search methods construct a super-net for search and derive a target-net as its sub-graph for evaluation. There exists a significant gap between the architectures in search and evaluation. As a result, current methods suffer from an inconsistent, inefficient, and inflexible search process. In this paper, we introduce EnTranNAS that is composed of Engine-cells and Transit-cells. The Engine-cell is differentiable for architecture search, while the Transit-cell only transits a sub-graph by architecture derivation. Consequently, the gap between the architectures in search and evaluation is significantly reduced. Our method also spares much memory and computation cost, which speeds up the search process. A feature sharing strategy is introduced for more balanced optimization and more efficient search. Furthermore, we develop an architecture derivation method to replace the traditional one that is based on a hand-crafted rule. Our method enables differentiable sparsification, and keeps the derived architecture equivalent to that of Engine-cell, which further improves the consistency between search and evaluation. More importantly, it supports the search for topology where a node can be connected to prior nodes with any number of connections, so that the searched architectures could be more flexible. Our search on CIFAR-10 has an error rate of 2.22% with only 0.07 GPU-day. We can also directly perform the search on ImageNet with topology learnable and achieve a top-1 error rate of 23.8% in 2.1 GPU-day. | https://openaccess.thecvf.com/content/CVPR2021/papers/Yang_Towards_Improving_the_Consistency_Efficiency_and_Flexibility_of_Differentiable_Neural_CVPR_2021_paper.pdf | http://arxiv.org/abs/2101.11342 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2021/html/Yang_Towards_Improving_the_Consistency_Efficiency_and_Flexibility_of_Differentiable_Neural_CVPR_2021_paper.html | https://openaccess.thecvf.com/content/CVPR2021/html/Yang_Towards_Improving_the_Consistency_Efficiency_and_Flexibility_of_Differentiable_Neural_CVPR_2021_paper.html | CVPR 2021 | https://openaccess.thecvf.com/content/CVPR2021/supplemental/Yang_Towards_Improving_the_CVPR_2021_supplemental.zip | null |