title | authors | abstract | pdf | arXiv | bibtex | url | detail_url | tags | supp | |
---|---|---|---|---|---|---|---|---|---|---|
HumanNorm: Learning Normal Diffusion Model for High-quality and Realistic 3D Human Generation | Xin Huang, Ruizhi Shao, Qi Zhang, Hongwen Zhang, Ying Feng, Yebin Liu, Qing Wang | Recent text-to-3D methods employing diffusion models have made significant advancements in 3D human generation. However, these approaches face challenges due to the limitations of text-to-image diffusion models, which lack an understanding of 3D structures. Consequently, these methods struggle to achieve high-quality human generation, resulting in smooth geometry and cartoon-like appearances. In this paper, we propose HumanNorm, a novel approach for high-quality and realistic 3D human generation. The main idea is to enhance the model's 2D perception of 3D geometry by learning a normal-adapted diffusion model and a normal-aligned diffusion model. The normal-adapted diffusion model can generate high-fidelity normal maps corresponding to user prompts with view-dependent and body-aware text. The normal-aligned diffusion model learns to generate color images aligned with the normal maps, thereby transforming physical geometry details into realistic appearance. Leveraging the proposed normal diffusion model, we devise a progressive geometry generation strategy and a multi-step Score Distillation Sampling (SDS) loss to enhance the performance of 3D human generation. Comprehensive experiments substantiate HumanNorm's ability to generate 3D humans with intricate geometry and realistic appearances. HumanNorm outperforms existing text-to-3D methods in both geometry and texture quality. The project page of HumanNorm is https://humannorm.github.io/. | https://openaccess.thecvf.com/content/CVPR2024/papers/Huang_HumanNorm_Learning_Normal_Diffusion_Model_for_High-quality_and_Realistic_3D_CVPR_2024_paper.pdf | http://arxiv.org/abs/2310.01406 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Huang_HumanNorm_Learning_Normal_Diffusion_Model_for_High-quality_and_Realistic_3D_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Huang_HumanNorm_Learning_Normal_Diffusion_Model_for_High-quality_and_Realistic_3D_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Huang_HumanNorm_Learning_Normal_CVPR_2024_supplemental.pdf | null |
Unleashing Unlabeled Data: A Paradigm for Cross-View Geo-Localization | Guopeng Li, Ming Qian, Gui-Song Xia | This paper investigates the effective utilization of unlabeled data for large-area cross-view geo-localization (CVGL), encompassing both unsupervised and semi-supervised settings. Common approaches to CVGL rely on ground-satellite image pairs and employ label-driven supervised training. However, the cost of collecting precise cross-view image pairs hinders the deployment of CVGL in real-life scenarios. Without such pairs, CVGL becomes more challenging, as it must handle the significant imaging and spatial gaps between ground and satellite images. To this end, we propose an unsupervised framework, including a cross-view projection to guide the model in retrieving initial pseudo-labels and a fast re-ranking mechanism to refine the pseudo-labels by leveraging the fact that "the perfectly paired ground-satellite image is located in a unique and identical scene". The framework exhibits competitive performance compared with supervised works on three open-source benchmarks. Our code and models will be released on https://github.com/liguopeng0923/UCVGL. | https://openaccess.thecvf.com/content/CVPR2024/papers/Li_Unleashing_Unlabeled_Data_A_Paradigm_for_Cross-View_Geo-Localization_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.14198 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Li_Unleashing_Unlabeled_Data_A_Paradigm_for_Cross-View_Geo-Localization_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Li_Unleashing_Unlabeled_Data_A_Paradigm_for_Cross-View_Geo-Localization_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_Unleashing_Unlabeled_Data_CVPR_2024_supplemental.pdf | null |
Global Latent Neural Rendering | Thomas Tanay, Matteo Maggioni | A recent trend among generalizable novel view synthesis methods is to learn a rendering operator acting over single camera rays. This approach is promising because it removes the need for explicit volumetric rendering, but it effectively treats target images as collections of independent pixels. Here, we propose to learn a global rendering operator acting over all camera rays jointly. We show that the right representation to enable such rendering is a 5-dimensional plane sweep volume consisting of the projection of the input images on a set of planes facing the target camera. Based on this understanding, we introduce our Convolutional Global Latent Renderer (ConvGLR), an efficient convolutional architecture that performs the rendering operation globally in a low-resolution latent space. Experiments on various datasets under sparse and generalizable setups show that our approach consistently outperforms existing methods by significant margins. | https://openaccess.thecvf.com/content/CVPR2024/papers/Tanay_Global_Latent_Neural_Rendering_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.08338 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Tanay_Global_Latent_Neural_Rendering_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Tanay_Global_Latent_Neural_Rendering_CVPR_2024_paper.html | CVPR 2024 | null | null |
PanoOcc: Unified Occupancy Representation for Camera-based 3D Panoptic Segmentation | Yuqi Wang, Yuntao Chen, Xingyu Liao, Lue Fan, Zhaoxiang Zhang | Comprehensive modeling of the surrounding 3D world is crucial for the success of autonomous driving. However, existing perception tasks like object detection, road structure segmentation, depth & elevation estimation, and open-set object localization each only focus on a small facet of the holistic 3D scene understanding task. This divide-and-conquer strategy simplifies the algorithm development process but comes at the cost of losing an end-to-end unified solution to the problem. In this work, we address this limitation by studying camera-based 3D panoptic segmentation, aiming to achieve a unified occupancy representation for camera-only 3D scene understanding. To achieve this, we introduce a novel method called PanoOcc, which utilizes voxel queries to aggregate spatiotemporal information from multi-frame and multi-view images in a coarse-to-fine scheme, integrating feature learning and scene representation into a unified occupancy representation. We have conducted extensive ablation studies to validate the effectiveness and efficiency of the proposed method. Our approach achieves new state-of-the-art results for camera-based semantic segmentation and panoptic segmentation on the nuScenes dataset. Furthermore, our method can be easily extended to dense occupancy prediction and has demonstrated promising performance on the Occ3D benchmark. The code will be made available at https://github.com/Robertwyq/PanoOcc. | https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_PanoOcc_Unified_Occupancy_Representation_for_Camera-based_3D_Panoptic_Segmentation_CVPR_2024_paper.pdf | http://arxiv.org/abs/2306.10013 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Wang_PanoOcc_Unified_Occupancy_Representation_for_Camera-based_3D_Panoptic_Segmentation_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Wang_PanoOcc_Unified_Occupancy_Representation_for_Camera-based_3D_Panoptic_Segmentation_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wang_PanoOcc_Unified_Occupancy_CVPR_2024_supplemental.pdf | null |
Sparse Views Near Light: A Practical Paradigm for Uncalibrated Point-light Photometric Stereo | Mohammed Brahimi, Bjoern Haefner, Zhenzhang Ye, Bastian Goldluecke, Daniel Cremers | Neural approaches have shown significant progress on camera-based reconstruction. But they require either a fairly dense sampling of the viewing sphere or pre-training on an existing dataset, thereby limiting their generalizability. In contrast, photometric stereo (PS) approaches have shown great potential for achieving high-quality reconstruction under sparse viewpoints. Yet they are impractical because they typically require tedious laboratory conditions, are restricted to dark rooms, and are often multi-staged, making them subject to accumulated errors. To address these shortcomings, we propose an end-to-end uncalibrated multi-view PS framework for reconstructing high-resolution shapes acquired from sparse viewpoints in a real-world environment. We relax the dark room assumption and allow a combination of static ambient lighting and dynamic near LED lighting, thereby enabling easy data capture outside the lab. Experimental validation confirms that it outperforms existing baseline approaches in the regime of sparse viewpoints by a large margin. This allows us to bring high-accuracy 3D reconstruction from the dark room to the real world while maintaining a reasonable data capture complexity. | https://openaccess.thecvf.com/content/CVPR2024/papers/Brahimi_Sparse_Views_Near_Light_A_Practical_Paradigm_for_Uncalibrated_Point-light_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.00098 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Brahimi_Sparse_Views_Near_Light_A_Practical_Paradigm_for_Uncalibrated_Point-light_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Brahimi_Sparse_Views_Near_Light_A_Practical_Paradigm_for_Uncalibrated_Point-light_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Brahimi_Sparse_Views_Near_CVPR_2024_supplemental.pdf | null |
Meta-Point Learning and Refining for Category-Agnostic Pose Estimation | Junjie Chen, Jiebin Yan, Yuming Fang, Li Niu | Category-agnostic pose estimation (CAPE) aims to predict keypoints for arbitrary classes given a few support images annotated with keypoints. Existing methods only rely on the features extracted at support keypoints to predict or refine the keypoints on the query image, but a few support feature vectors are local and inadequate for CAPE. Considering that humans can quickly perceive potential keypoints of arbitrary objects, we propose a novel framework for CAPE based on such potential keypoints (named meta-points). Specifically, we maintain learnable embeddings to capture the inherent information of various keypoints, which interact with image feature maps to produce meta-points without any support. The produced meta-points could serve as meaningful potential keypoints for CAPE. Due to the inevitable gap between inherency and annotation, we finally utilize the identities and details offered by support keypoints to assign and refine meta-points into desired keypoints in the query image. In addition, we propose a progressive deformable point decoder and a slacked regression loss for better prediction and supervision. Our novel framework not only reveals the inherency of keypoints but also outperforms existing methods for CAPE. Comprehensive experiments and in-depth studies on the large-scale MP-100 dataset demonstrate the effectiveness of our framework. | https://openaccess.thecvf.com/content/CVPR2024/papers/Chen_Meta-Point_Learning_and_Refining_for_Category-Agnostic_Pose_Estimation_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.13647 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Chen_Meta-Point_Learning_and_Refining_for_Category-Agnostic_Pose_Estimation_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Chen_Meta-Point_Learning_and_Refining_for_Category-Agnostic_Pose_Estimation_CVPR_2024_paper.html | CVPR 2024 | null | null |
Cross-view and Cross-pose Completion for 3D Human Understanding | Matthieu Armando, Salma Galaaoui, Fabien Baradel, Thomas Lucas, Vincent Leroy, Romain Brégier, Philippe Weinzaepfel, Grégory Rogez | Human perception and understanding is a major domain of computer vision which, like many other vision subdomains, recently stands to gain from the use of large models pre-trained on large datasets. We hypothesize that the most common pre-training strategy of relying on general-purpose, object-centric image datasets such as ImageNet is limited by an important domain shift. On the other hand, collecting domain-specific ground truth, such as 2D or 3D labels, does not scale well. Therefore, we propose a pre-training approach based on self-supervised learning that works on human-centric data using only images. Our method uses pairs of images of humans: the first is partially masked, and the model is trained to reconstruct the masked parts given the visible ones and a second image. It relies on both stereoscopic (cross-view) pairs and temporal (cross-pose) pairs taken from videos in order to learn priors about 3D as well as human motion. We pre-train a model for body-centric tasks and one for hand-centric tasks. With a generic transformer architecture, these models outperform existing self-supervised pre-training methods on a wide set of human-centric downstream tasks and obtain state-of-the-art performance, for instance, when fine-tuning for model-based and model-free human mesh recovery. | https://openaccess.thecvf.com/content/CVPR2024/papers/Armando_Cross-view_and_Cross-pose_Completion_for_3D_Human_Understanding_CVPR_2024_paper.pdf | http://arxiv.org/abs/2311.09104 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Armando_Cross-view_and_Cross-pose_Completion_for_3D_Human_Understanding_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Armando_Cross-view_and_Cross-pose_Completion_for_3D_Human_Understanding_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Armando_Cross-view_and_Cross-pose_CVPR_2024_supplemental.zip | null |
Batch Normalization Alleviates the Spectral Bias in Coordinate Networks | Zhicheng Cai, Hao Zhu, Qiu Shen, Xinran Wang, Xun Cao | Representing signals using coordinate networks has recently come to dominate the area of inverse problems and is widely applied in various scientific computing tasks. Still, there exists an issue of spectral bias in coordinate networks, limiting their capacity to learn high-frequency components. This problem is caused by the pathological distribution of the eigenvalues of the neural tangent kernel (NTK) of coordinate networks. We find that this pathological distribution can be improved using classical batch normalization (BN), a common deep learning technique that is rarely used in coordinate networks. BN greatly reduces the maximum and variance of the NTK's eigenvalues while only slightly modifying the mean value. Considering that the maximum eigenvalue is much larger than most of the others, this variance change shifts the eigenvalue distribution from a lower range to a higher one; therefore, the spectral bias can be alleviated (see Fig. 1). This observation is substantiated by the significant improvements obtained by applying BN-based coordinate networks to various tasks, including image compression, computed tomography reconstruction, shape representation, magnetic resonance imaging and novel view synthesis. | https://openaccess.thecvf.com/content/CVPR2024/papers/Cai_Batch_Normalization_Alleviates_the_Spectral_Bias_in_Coordinate_Networks_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Cai_Batch_Normalization_Alleviates_the_Spectral_Bias_in_Coordinate_Networks_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Cai_Batch_Normalization_Alleviates_the_Spectral_Bias_in_Coordinate_Networks_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Cai_Batch_Normalization_Alleviates_CVPR_2024_supplemental.pdf | null |
Efficient Scene Recovery Using Luminous Flux Prior | Zhongyu Li, Lei Zhang | Scene recovery, the restoration of images degraded by adverse weather conditions, presents significant challenges for existing methods. Physical models, constrained by their inherent assumptions, often fail when these assumptions are not met; deep learning models, while powerful, are limited by the diversity of their training datasets, leading to poor generalization and high computational demands. To address these limitations, we propose the Luminous Flux Prior (LFP) to recover degraded images under diverse adverse weather without learning. Luminous flux, a physical measure that reflects image brightness, has a rate of change that demonstrates a significant correlation with transmission. Consequently, we leverage this rate of change in luminous flux as prior knowledge to estimate transmission, which in turn assists in image recovery. This approach reduces dependency on physical parameters and enhances adaptability to various weather. Experimental validation under diverse conditions, such as sandstorms, underwater environments and haze, attests to the robustness of LFP in restoring clear images. With a time complexity of O(N log N), LFP enables real-time recovery, making it suitable for devices with limited computational resources. | https://openaccess.thecvf.com/content/CVPR2024/papers/Li_Efficient_Scene_Recovery_Using_Luminous_Flux_Prior_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Li_Efficient_Scene_Recovery_Using_Luminous_Flux_Prior_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Li_Efficient_Scene_Recovery_Using_Luminous_Flux_Prior_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_Efficient_Scene_Recovery_CVPR_2024_supplemental.pdf | null |
LQMFormer: Language-aware Query Mask Transformer for Referring Image Segmentation | Nisarg A. Shah, Vibashan VS, Vishal M. Patel | Referring Image Segmentation (RIS) aims to segment objects from an image based on a language description. Recent advancements have introduced transformer-based methods that leverage cross-modal dependencies, significantly enhancing performance in referring segmentation tasks. These methods are designed such that each query predicts a different mask. However, RIS inherently requires a single-mask prediction, leading to a phenomenon known as Query Collapse, where all queries yield the same mask prediction. This reduces the generalization capability of the RIS model for complex or novel scenarios. To address this issue, we propose a Multi-modal Query Feature Fusion technique characterized by two innovative designs: (1) Gaussian-enhanced Multi-Modal Fusion, a novel visual grounding mechanism that enhances the overall representation by extracting rich local visual information and global visual-linguistic relationships, and (2) a Dynamic Query Module that produces a diverse set of queries through a scoring network, where the network selectively focuses on queries for objects referred to in the language description. Moreover, we show that including an auxiliary loss to increase the distance between mask representations of different queries further enhances performance and mitigates query collapse. Extensive experiments conducted on four benchmark datasets validate the effectiveness of our framework. | https://openaccess.thecvf.com/content/CVPR2024/papers/Shah_LQMFormer_Language-aware_Query_Mask_Transformer_for_Referring_Image_Segmentation_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Shah_LQMFormer_Language-aware_Query_Mask_Transformer_for_Referring_Image_Segmentation_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Shah_LQMFormer_Language-aware_Query_Mask_Transformer_for_Referring_Image_Segmentation_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Shah_LQMFormer_Language-aware_Query_CVPR_2024_supplemental.pdf | null |
Customize your NeRF: Adaptive Source Driven 3D Scene Editing via Local-Global Iterative Training | Runze He, Shaofei Huang, Xuecheng Nie, Tianrui Hui, Luoqi Liu, Jiao Dai, Jizhong Han, Guanbin Li, Si Liu | In this paper, we target the adaptive source driven 3D scene editing task by proposing a CustomNeRF model that unifies a text description or a reference image as the editing prompt. However, obtaining desired editing results that conform with the editing prompt is nontrivial, since there exist two significant challenges: accurate editing of only foreground regions, and multi-view consistency given a single-view reference image. To tackle the first challenge, we propose a Local-Global Iterative Editing (LGIE) training scheme that alternates between foreground region editing and full-image editing, aimed at foreground-only manipulation while preserving the background. For the second challenge, we also design a class-guided regularization that exploits class priors within the generation model to alleviate the inconsistency problem among different views in image-driven editing. Extensive experiments show that our CustomNeRF produces precise editing results under various real scenes for both text- and image-driven settings. The code is available at: https://github.com/hrz2000/CustomNeRF. | https://openaccess.thecvf.com/content/CVPR2024/papers/He_Customize_your_NeRF_Adaptive_Source_Driven_3D_Scene_Editing_via_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.01663 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/He_Customize_your_NeRF_Adaptive_Source_Driven_3D_Scene_Editing_via_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/He_Customize_your_NeRF_Adaptive_Source_Driven_3D_Scene_Editing_via_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/He_Customize_your_NeRF_CVPR_2024_supplemental.pdf | null |
SplaTAM: Splat Track & Map 3D Gaussians for Dense RGB-D SLAM | Nikhil Keetha, Jay Karhade, Krishna Murthy Jatavallabhula, Gengshan Yang, Sebastian Scherer, Deva Ramanan, Jonathon Luiten | Dense simultaneous localization and mapping (SLAM) is crucial for robotics and augmented reality applications. However, current methods are often hampered by the non-volumetric or implicit way they represent a scene. This work introduces SplaTAM, an approach that, for the first time, leverages explicit volumetric representations, i.e. 3D Gaussians, to enable high-fidelity reconstruction from a single unposed RGB-D camera, surpassing the capabilities of existing methods. SplaTAM employs a simple online tracking and mapping system tailored to the underlying Gaussian representation. It utilizes a silhouette mask to elegantly capture the presence of scene density. This combination enables several benefits over prior representations, including fast rendering and dense optimization, quickly determining if areas have been previously mapped, and structured map expansion by adding more Gaussians. Extensive experiments show that SplaTAM achieves up to 2x superior performance in camera pose estimation, map construction and novel-view synthesis over existing methods, paving the way for more immersive high-fidelity SLAM applications. | https://openaccess.thecvf.com/content/CVPR2024/papers/Keetha_SplaTAM_Splat_Track__Map_3D_Gaussians_for_Dense_RGB-D_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Keetha_SplaTAM_Splat_Track__Map_3D_Gaussians_for_Dense_RGB-D_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Keetha_SplaTAM_Splat_Track__Map_3D_Gaussians_for_Dense_RGB-D_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Keetha_SplaTAM_Splat_Track_CVPR_2024_supplemental.pdf | null |
Instance-based Max-margin for Practical Few-shot Recognition | Minghao Fu, Ke Zhu | In order to better mimic the human few-shot learning (FSL) ability and to make FSL closer to real-world applications, this paper proposes a practical FSL (pFSL) setting. pFSL is based on unsupervised pre-trained models (analogous to human prior knowledge) and recognizes many novel classes simultaneously. Compared to traditional FSL, pFSL is simpler in its formulation, easier to evaluate, more challenging and more practical. To cope with the rarity of training examples, this paper proposes IbM2, an instance-based max-margin method that not only suits the new pFSL setting but also works well in traditional FSL scenarios. Based on the Gaussian Annulus Theorem, IbM2 converts random noise applied to the instances into a mechanism to achieve maximum margin in the many-way pFSL (or traditional FSL) recognition task. Experiments with various self-supervised pre-training methods and diverse many- or few-way FSL tasks show that IbM2 almost always leads to improvements compared to its respective baseline methods, and in most cases the improvements are significant. With both the new pFSL setting and the novel IbM2 method, this paper shows that practical few-shot learning is both viable and promising. | https://openaccess.thecvf.com/content/CVPR2024/papers/Fu_Instance-based_Max-margin_for_Practical_Few-shot_Recognition_CVPR_2024_paper.pdf | http://arxiv.org/abs/2305.17368 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Fu_Instance-based_Max-margin_for_Practical_Few-shot_Recognition_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Fu_Instance-based_Max-margin_for_Practical_Few-shot_Recognition_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Fu_Instance-based_Max-margin_for_CVPR_2024_supplemental.pdf | null |
Spherical Mask: Coarse-to-Fine 3D Point Cloud Instance Segmentation with Spherical Representation | Sangyun Shin, Kaichen Zhou, Madhu Vankadari, Andrew Markham, Niki Trigoni | Coarse-to-fine 3D instance segmentation methods show weak performance compared to recent grouping-based, kernel-based and transformer-based methods. We argue that this is due to two limitations: 1) instance size overestimation by the axis-aligned bounding box (AABB), and 2) false negative error accumulation from the inaccurate box to the refinement phase. In this work, we introduce Spherical Mask, a novel coarse-to-fine approach based on a spherical representation, overcoming those two limitations with several benefits. Specifically, our coarse detection estimates each instance with a 3D polygon using center and radial distance predictions, which avoids excessive size estimation by the AABB. To cut the error propagation found in existing coarse-to-fine approaches, we virtually migrate points based on the polygon, allowing all foreground points, including false negatives, to be refined. During inference, the proposal and point migration modules run in parallel and are assembled to form binary masks of instances. We also introduce two margin-based losses for the point migration to enforce corrections for false positives/negatives and cohesion of foreground points, significantly improving the performance. Experimental results on three datasets, ScanNetV2, S3DIS and STPLS3D, show that our proposed method outperforms existing works, demonstrating the effectiveness of the new instance representation with spherical coordinates. The code is available at: https://github.com/yunshin/SphericalMask | https://openaccess.thecvf.com/content/CVPR2024/papers/Shin_Spherical_Mask_Coarse-to-Fine_3D_Point_Cloud_Instance_Segmentation_with_Spherical_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.11269 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Shin_Spherical_Mask_Coarse-to-Fine_3D_Point_Cloud_Instance_Segmentation_with_Spherical_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Shin_Spherical_Mask_Coarse-to-Fine_3D_Point_Cloud_Instance_Segmentation_with_Spherical_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Shin_Spherical_Mask_Coarse-to-Fine_CVPR_2024_supplemental.pdf | null |
Omni-Q: Omni-Directional Scene Understanding for Unsupervised Visual Grounding | Sai Wang, Yutian Lin, Yu Wu | Unsupervised visual grounding methods alleviate the issue of expensive manual annotation of image-query pairs by generating pseudo-queries. However, existing methods are prone to confusing the spatial relationships between objects and rely on designing complex prompt modules to generate query texts, which severely impedes the ability to generate accurate and comprehensive queries due to ambiguous spatial relationships and manually defined fixed templates. To tackle these challenges, we propose an omni-directional language query generation approach for unsupervised visual grounding, named Omni-Q. Specifically, we develop a 3D spatial relation module to extend the 2D spatial representation to 3D, thereby utilizing 3D location information to accurately determine the spatial position among objects. Besides, we introduce a spatial graph module, leveraging the power of graph structures to establish accurate and diverse object relationships and thus enhance the flexibility of query generation. Extensive experiments on five public benchmark datasets demonstrate that our method significantly outperforms existing state-of-the-art unsupervised methods by up to 16.17%. In addition, when applied in the supervised setting, our method can freely save up to 60% of human annotations without a loss of performance. | https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_Omni-Q_Omni-Directional_Scene_Understanding_for_Unsupervised_Visual_Grounding_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Wang_Omni-Q_Omni-Directional_Scene_Understanding_for_Unsupervised_Visual_Grounding_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Wang_Omni-Q_Omni-Directional_Scene_Understanding_for_Unsupervised_Visual_Grounding_CVPR_2024_paper.html | CVPR 2024 | null | null |
VISTA-LLAMA: Reducing Hallucination in Video Language Models via Equal Distance to Visual Tokens | Fan Ma, Xiaojie Jin, Heng Wang, Yuchen Xian, Jiashi Feng, Yi Yang | Recent advances in large video-language models have displayed promising outcomes in video comprehension. Current approaches straightforwardly convert video into language tokens and employ large language models for multi-modal tasks. However, this method often leads to the generation of irrelevant content, commonly known as "hallucination", as the length of the text increases and the impact of the video diminishes. To address this problem, we propose Vista-LLaMA, a novel framework that maintains a consistent distance between all visual tokens and any language tokens, irrespective of the generated text length. Vista-LLaMA omits relative position encoding when determining attention weights between visual and text tokens, while retaining the position encoding between text tokens. This amplifies the effect of visual tokens on text generation, especially when the relative distance between visual and text tokens is longer. The proposed attention mechanism significantly reduces the chance of producing irrelevant text related to the video content. Furthermore, we present a sequential visual projector that projects the current video frame into tokens of the language space with the assistance of the previous frame. This approach not only captures the temporal relationship within the video but also allows fewer visual tokens to encompass the entire video. Our approach significantly outperforms various previous methods (e.g. Video-ChatGPT, MovieChat) on four challenging open-ended video question answering benchmarks. We reach an accuracy of 60.7 on the zero-shot NExT-QA and 60.5 on the zero-shot MSRVTT-QA, setting a new state-of-the-art performance. | https://openaccess.thecvf.com/content/CVPR2024/papers/Ma_VISTA-LLAMA_Reducing_Hallucination_in_Video_Language_Models_via_Equal_Distance_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Ma_VISTA-LLAMA_Reducing_Hallucination_in_Video_Language_Models_via_Equal_Distance_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Ma_VISTA-LLAMA_Reducing_Hallucination_in_Video_Language_Models_via_Equal_Distance_CVPR_2024_paper.html | CVPR 2024 | null | null |
FSRT: Facial Scene Representation Transformer for Face Reenactment from Factorized Appearance Head-pose and Facial Expression Features | Andre Rochow, Max Schwarz, Sven Behnke | The task of face reenactment is to transfer the head motion and facial expressions from a driving video to the appearance of a source image which may be of a different person (cross-reenactment). Most existing methods are CNN-based and estimate optical flow from the source image to the current driving frame which is then inpainted and refined to produce the output animation. We propose a transformer-based encoder for computing a set-latent representation of the source image(s). We then predict the output color of a query pixel using a transformer-based decoder which is conditioned with keypoints and a facial expression vector extracted from the driving frame. Latent representations of the source person are learned in a self-supervised manner and factorize their appearance, head pose, and facial expressions. Thus they are perfectly suited for cross-reenactment. In contrast to most related work our method naturally extends to multiple source images and can thus adapt to person-specific facial dynamics. We also propose data augmentation and regularization schemes that are necessary to prevent overfitting and support generalizability of the learned representations. We evaluated our approach in a randomized user study. The results indicate superior performance compared to the state-of-the-art in terms of motion transfer quality and temporal consistency. | https://openaccess.thecvf.com/content/CVPR2024/papers/Rochow_FSRT_Facial_Scene_Representation_Transformer_for_Face_Reenactment_from_Factorized_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.09736 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Rochow_FSRT_Facial_Scene_Representation_Transformer_for_Face_Reenactment_from_Factorized_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Rochow_FSRT_Facial_Scene_Representation_Transformer_for_Face_Reenactment_from_Factorized_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Rochow_FSRT_Facial_Scene_CVPR_2024_supplemental.pdf | null |
Efficient Multitask Dense Predictor via Binarization | Yuzhang Shang, Dan Xu, Gaowen Liu, Ramana Rao Kompella, Yan Yan | Multi-task learning for dense prediction has emerged as a pivotal area in computer vision enabling simultaneous processing of diverse yet interrelated pixel-wise prediction tasks. However the substantial computational demands of state-of-the-art (SoTA) models often limit their widespread deployment. This paper addresses this challenge by introducing network binarization to compress resource-intensive multi-task dense predictors. Specifically our goal is to significantly accelerate multi-task dense prediction models via Binary Neural Networks (BNNs) while maintaining and even improving model performance at the same time. To reach this goal we propose a Binary Multi-task Dense Predictor Bi-MTDP and several variants of Bi-MTDP in which a multi-task dense predictor is constructed via specified binarized modules. Our systematic analysis of this predictor reveals that the performance drop from binarization is primarily caused by severe information degradation. To address this issue we introduce a deep information bottleneck layer that enforces representations for downstream tasks to satisfy a Gaussian distribution in forward propagation. Moreover we introduce a knowledge distillation mechanism to correct the direction of information flow in backward propagation. Intriguingly one variant of Bi-MTDP outperforms full-precision (FP) multi-task dense prediction SoTAs ARTC (CNN-based) and InvPT (ViT-based). This result indicates that Bi-MTDP is not merely a naive trade-off between performance and efficiency but is rather a benefit of the redundant information flow thanks to the multi-task architecture. | https://openaccess.thecvf.com/content/CVPR2024/papers/Shang_Efficient_Multitask_Dense_Predictor_via_Binarization_CVPR_2024_paper.pdf | http://arxiv.org/abs/2405.14136 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Shang_Efficient_Multitask_Dense_Predictor_via_Binarization_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Shang_Efficient_Multitask_Dense_Predictor_via_Binarization_CVPR_2024_paper.html | CVPR 2024 | null | null |
TetraSphere: A Neural Descriptor for O(3)-Invariant Point Cloud Analysis | Pavlo Melnyk, Andreas Robinson, Michael Felsberg, Mårten Wadenbäck | In many practical applications 3D point cloud analysis requires rotation invariance. In this paper we present a learnable descriptor invariant under 3D rotations and reflections i.e. the O(3) actions utilizing the recently introduced steerable 3D spherical neurons and vector neurons. Specifically we propose an embedding of the 3D spherical neurons into 4D vector neurons which enables end-to-end training of the model. In our approach we perform TetraTransform---an equivariant embedding of the 3D input into 4D constructed from the steerable neurons---and extract deeper O(3)-equivariant features using vector neurons. This integration of the TetraTransform into the VN-DGCNN framework termed TetraSphere negligibly increases the number of parameters by less than 0.0002%. TetraSphere sets a new state-of-the-art performance in classifying randomly rotated real-world object scans of the challenging subsets of ScanObjectNN. Additionally TetraSphere outperforms all equivariant methods on randomly rotated synthetic data: classifying objects from ModelNet40 and segmenting parts of the ShapeNet shapes. Thus our results reveal the practical value of steerable 3D spherical neurons for learning in 3D Euclidean space. The code is available at https://github.com/pavlo-melnyk/tetrasphere. | https://openaccess.thecvf.com/content/CVPR2024/papers/Melnyk_TetraSphere_A_Neural_Descriptor_for_O3-Invariant_Point_Cloud_Analysis_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Melnyk_TetraSphere_A_Neural_Descriptor_for_O3-Invariant_Point_Cloud_Analysis_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Melnyk_TetraSphere_A_Neural_Descriptor_for_O3-Invariant_Point_Cloud_Analysis_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Melnyk_TetraSphere_A_Neural_CVPR_2024_supplemental.pdf | null |
ZeroRF: Fast Sparse View 360deg Reconstruction with Zero Pretraining | Ruoxi Shi, Xinyue Wei, Cheng Wang, Hao Su | We present ZeroRF a novel per-scene optimization method addressing the challenge of sparse view 360deg reconstruction in neural field representations. Current breakthroughs like Neural Radiance Fields (NeRF) have demonstrated high-fidelity image synthesis but struggle with sparse input views. Existing methods such as Generalizable NeRFs and per-scene optimization approaches face limitations in data dependency computational cost and generalization across diverse scenarios. To overcome these challenges we propose ZeroRF whose key idea is to integrate a tailored Deep Image Prior into a factorized NeRF representation. Unlike traditional methods ZeroRF parametrizes feature grids with a neural network generator enabling efficient sparse view 360deg reconstruction without any pretraining or additional regularization. Extensive experiments showcase ZeroRF's versatility and superiority in terms of both quality and speed achieving state-of-the-art results on benchmark datasets. ZeroRF's significance extends to applications in 3D content generation and editing. Project page: https://sarahweiii.github.io/zerorf/ | https://openaccess.thecvf.com/content/CVPR2024/papers/Shi_ZeroRF_Fast_Sparse_View_360deg_Reconstruction_with_Zero_Pretraining_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Shi_ZeroRF_Fast_Sparse_View_360deg_Reconstruction_with_Zero_Pretraining_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Shi_ZeroRF_Fast_Sparse_View_360deg_Reconstruction_with_Zero_Pretraining_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Shi_ZeroRF_Fast_Sparse_CVPR_2024_supplemental.pdf | null |
RCooper: A Real-world Large-scale Dataset for Roadside Cooperative Perception | Ruiyang Hao, Siqi Fan, Yingru Dai, Zhenlin Zhang, Chenxi Li, Yuntian Wang, Haibao Yu, Wenxian Yang, Jirui Yuan, Zaiqing Nie | The value of roadside perception which could extend the boundaries of autonomous driving and traffic management has gradually become more prominent and acknowledged in recent years. However existing roadside perception approaches only focus on the single-infrastructure sensor system which cannot realize a comprehensive understanding of a traffic area because of the limited sensing range and blind spots. To achieve high-quality roadside perception we need Roadside Cooperative Perception (RCooper) to realize practical area-coverage roadside perception for restricted traffic areas. RCooper has its own domain-specific challenges but further exploration is hindered due to the lack of datasets. We hence release the first real-world large-scale RCooper dataset to spur research on practical roadside cooperative perception including detection and tracking. The manually annotated dataset comprises 50k images and 30k point clouds including two representative traffic scenes (i.e. intersection and corridor). The constructed benchmarks prove the effectiveness of roadside cooperative perception and indicate directions for further research. Codes and dataset can be accessed at: https://github.com/AIR-THU/DAIR-RCooper. | https://openaccess.thecvf.com/content/CVPR2024/papers/Hao_RCooper_A_Real-world_Large-scale_Dataset_for_Roadside_Cooperative_Perception_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.10145 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Hao_RCooper_A_Real-world_Large-scale_Dataset_for_Roadside_Cooperative_Perception_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Hao_RCooper_A_Real-world_Large-scale_Dataset_for_Roadside_Cooperative_Perception_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Hao_RCooper_A_Real-world_CVPR_2024_supplemental.pdf | null |
TutteNet: Injective 3D Deformations by Composition of 2D Mesh Deformations | Bo Sun, Thibault Groueix, Chen Song, Qixing Huang, Noam Aigerman | This work proposes a novel representation of injective deformations of 3D space which overcomes existing limitations of injective methods namely inaccuracy, lack of robustness, and incompatibility with general learning and optimization frameworks. Our core idea is to reduce the problem to a "deep" composition of multiple 2D mesh-based piecewise-linear maps. Namely we build differentiable layers that produce mesh deformations through Tutte's embedding (guaranteed to be injective in 2D) and compose these layers over different planes to create complex 3D injective deformations of the 3D volume. We show our method provides the ability to efficiently and accurately optimize and learn complex deformations outperforming other injective approaches. As a main application we produce complex and artifact-free NeRF and SDF deformations. | https://openaccess.thecvf.com/content/CVPR2024/papers/Sun_TutteNet_Injective_3D_Deformations_by_Composition_of_2D_Mesh_Deformations_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Sun_TutteNet_Injective_3D_Deformations_by_Composition_of_2D_Mesh_Deformations_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Sun_TutteNet_Injective_3D_Deformations_by_Composition_of_2D_Mesh_Deformations_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Sun_TutteNet_Injective_3D_CVPR_2024_supplemental.zip | null |
WANDR: Intention-guided Human Motion Generation | Markos Diomataris, Nikos Athanasiou, Omid Taheri, Xi Wang, Otmar Hilliges, Michael J. Black | Synthesizing natural human motions that enable a 3D human avatar to walk and reach for arbitrary goals in 3D space remains an unsolved problem with many applications. Existing methods (data-driven or using reinforcement learning) are limited in terms of generalization and motion naturalness. A primary obstacle is the scarcity of training data that combines locomotion with goal reaching. To address this we introduce WANDR a data-driven model that takes an avatar's initial pose and a goal's 3D position and generates natural human motions that place the end effector (wrist) on the goal location. To solve this we introduce novel intention features that drive rich goal-oriented movement. Intention guides the agent to the goal and interactively adapts the generation to novel situations without needing to define sub-goals or the entire motion path. Crucially intention allows training on datasets that have goal-oriented motions as well as those that do not. WANDR is a conditional Variational Auto-Encoder (c-VAE) which we train using the AMASS and CIRCLE datasets. We evaluate our method extensively and demonstrate its ability to generate natural and long-term motions that reach 3D goals and generalize to unseen goal locations. Our models and code are available for research purposes at wandr.is.tue.mpg.de | https://openaccess.thecvf.com/content/CVPR2024/papers/Diomataris_WANDR_Intention-guided_Human_Motion_Generation_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.15383 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Diomataris_WANDR_Intention-guided_Human_Motion_Generation_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Diomataris_WANDR_Intention-guided_Human_Motion_Generation_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Diomataris_WANDR_Intention-guided_Human_CVPR_2024_supplemental.zip | null |
Jointly Training and Pruning CNNs via Learnable Agent Guidance and Alignment | Alireza Ganjdanesh, Shangqian Gao, Heng Huang | Structural model pruning is a prominent approach used for reducing the computational cost of Convolutional Neural Networks (CNNs) before their deployment on resource-constrained devices. Yet the majority of proposed ideas require a pretrained model before pruning which is costly to secure. In this paper we propose a novel structural pruning approach to jointly learn the weights and structurally prune architectures of CNN models. The core element of our method is a Reinforcement Learning (RL) agent whose actions determine the pruning ratios of the CNN model's layers and the resulting model's accuracy serves as its reward. We conduct the joint training and pruning by iteratively training the model's weights and the agent's policy and we regularize the model's weights to align with the selected structure by the agent. The evolving model's weights result in a dynamic reward function for the agent which prevents using prominent episodic RL methods with a stationary environment assumption for our purpose. We address this challenge by designing a mechanism to model the complex changing dynamics of the reward function and provide a representation of it to the RL agent. To do so we take a learnable embedding for each training epoch and employ a recurrent model to calculate a representation of the changing environment. We train the recurrent model and embeddings using a decoder model to reconstruct observed rewards. Such a design empowers our agent to effectively leverage episodic observations along with the environment representations to learn a proper policy to determine performant sub-networks of the CNN model. Our extensive experiments on CIFAR-10 and ImageNet using ResNets and MobileNets demonstrate the effectiveness of our method. | https://openaccess.thecvf.com/content/CVPR2024/papers/Ganjdanesh_Jointly_Training_and_Pruning_CNNs_via_Learnable_Agent_Guidance_and_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.19490 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Ganjdanesh_Jointly_Training_and_Pruning_CNNs_via_Learnable_Agent_Guidance_and_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Ganjdanesh_Jointly_Training_and_Pruning_CNNs_via_Learnable_Agent_Guidance_and_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ganjdanesh_Jointly_Training_and_CVPR_2024_supplemental.pdf | null |
Estimating Noisy Class Posterior with Part-level Labels for Noisy Label Learning | Rui Zhao, Bin Shi, Jianfei Ruan, Tianze Pan, Bo Dong | In noisy label learning estimating noisy class posteriors plays a fundamental role for developing consistent classifiers as it forms the basis for estimating clean class posteriors and the transition matrix. Existing methods typically learn noisy class posteriors by training a classification model with noisy labels. However when labels are incorrect these models may be misled to overemphasize the feature parts that do not reflect the instance characteristics resulting in significant errors in estimating noisy class posteriors. To address this issue this paper proposes to augment the supervised information with part-level labels encouraging the model to focus on and integrate richer information from various parts. Specifically our method first partitions features into distinct parts by cropping instances yielding part-level labels associated with these various parts. Subsequently we introduce a novel single-to-multiple transition matrix to model the relationship between the noisy and part-level labels which incorporates part-level labels into a classifier-consistent framework. Utilizing this framework with part-level labels we can learn the noisy class posteriors more precisely by guiding the model to integrate information from various parts ultimately improving the classification performance. Our method is theoretically sound while experiments show that it is empirically effective in synthetic and real-world noisy benchmarks. | https://openaccess.thecvf.com/content/CVPR2024/papers/Zhao_Estimating_Noisy_Class_Posterior_with_Part-level_Labels_for_Noisy_Label_CVPR_2024_paper.pdf | http://arxiv.org/abs/2405.05714 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Zhao_Estimating_Noisy_Class_Posterior_with_Part-level_Labels_for_Noisy_Label_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Zhao_Estimating_Noisy_Class_Posterior_with_Part-level_Labels_for_Noisy_Label_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhao_Estimating_Noisy_Class_CVPR_2024_supplemental.pdf | null |
Leveraging Vision-Language Models for Improving Domain Generalization in Image Classification | Sravanti Addepalli, Ashish Ramayee Asokan, Lakshay Sharma, R. Venkatesh Babu | Vision-Language Models (VLMs) such as CLIP are trained on large amounts of image-text pairs resulting in remarkable generalization across several data distributions. However in several cases their expensive training and data collection/curation costs are not justified by the end application. This motivates a vendor-client paradigm where a vendor trains a large-scale VLM and grants only input-output access to clients on a pay-per-query basis in a black-box setting. The client aims to minimize inference cost by distilling the VLM to a student model using the limited available task-specific data and further deploying this student model in the downstream application. While naive distillation largely improves the In-Domain (ID) accuracy of the student it fails to transfer the superior out-of-distribution (OOD) generalization of the VLM teacher using the limited available labeled images. To mitigate this we propose Vision-Language to Vision - Align Distill Predict (VL2V-ADiP) which first aligns the vision and language modalities of the teacher model with the vision modality of a pre-trained student model and further distills the aligned VLM representations to the student. This maximally retains the pre-trained features of the student while also incorporating the rich representations of the VLM image encoder and the superior generalization of the text embeddings. The proposed approach achieves state-of-the-art results on the standard Domain Generalization benchmarks in a black-box teacher setting as well as a white-box setting where the weights of the VLM are accessible. | https://openaccess.thecvf.com/content/CVPR2024/papers/Addepalli_Leveraging_Vision-Language_Models_for_Improving_Domain_Generalization_in_Image_Classification_CVPR_2024_paper.pdf | http://arxiv.org/abs/2310.08255 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Addepalli_Leveraging_Vision-Language_Models_for_Improving_Domain_Generalization_in_Image_Classification_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Addepalli_Leveraging_Vision-Language_Models_for_Improving_Domain_Generalization_in_Image_Classification_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Addepalli_Leveraging_Vision-Language_Models_CVPR_2024_supplemental.pdf | null |
Diffusion-EDFs: Bi-equivariant Denoising Generative Modeling on SE(3) for Visual Robotic Manipulation | Hyunwoo Ryu, Jiwoo Kim, Hyunseok An, Junwoo Chang, Joohwan Seo, Taehan Kim, Yubin Kim, Chaewon Hwang, Jongeun Choi, Roberto Horowitz | Diffusion generative modeling has become a promising approach for learning robotic manipulation tasks from stochastic human demonstrations. In this paper we present Diffusion-EDFs a novel SE(3)-equivariant diffusion-based approach for visual robotic manipulation tasks. We show that our proposed method achieves remarkable data efficiency requiring only 5 to 10 human demonstrations for effective end-to-end training in less than an hour. Furthermore our benchmark experiments demonstrate that our approach has superior generalizability and robustness compared to state-of-the-art methods. Lastly we validate our methods with real hardware experiments. | https://openaccess.thecvf.com/content/CVPR2024/papers/Ryu_Diffusion-EDFs_Bi-equivariant_Denoising_Generative_Modeling_on_SE3_for_Visual_Robotic_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Ryu_Diffusion-EDFs_Bi-equivariant_Denoising_Generative_Modeling_on_SE3_for_Visual_Robotic_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Ryu_Diffusion-EDFs_Bi-equivariant_Denoising_Generative_Modeling_on_SE3_for_Visual_Robotic_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ryu_Diffusion-EDFs_Bi-equivariant_Denoising_CVPR_2024_supplemental.pdf | null |
Prompt Learning via Meta-Regularization | Jinyoung Park, Juyeon Ko, Hyunwoo J. Kim | Pre-trained vision-language models have shown impressive success on various computer vision tasks with their zero-shot generalizability. Recently prompt learning approaches have been explored to efficiently and effectively adapt the vision-language models to a variety of downstream tasks. However most existing prompt learning methods suffer from task overfitting since the general knowledge of the pre-trained vision language models is forgotten while the prompts are finetuned on a small data set from a specific target task. To address this issue we propose a Prompt Meta-Regularization (ProMetaR) to improve the generalizability of prompt learning for vision-language models. Specifically ProMetaR meta-learns both the regularizer and the soft prompts to harness the task-specific knowledge from the downstream tasks and task-agnostic general knowledge from the vision-language models. Further ProMetaR augments the task to generate multiple virtual tasks to alleviate the meta-overfitting. In addition we provide the analysis to comprehend how ProMetaR improves the generalizability of prompt tuning from the perspective of gradient alignment. Our extensive experiments demonstrate that our ProMetaR improves the generalizability of conventional prompt learning methods under base-to-base/base-to-new and domain generalization settings. The code of ProMetaR is available at https://github.com/mlvlab/ProMetaR. | https://openaccess.thecvf.com/content/CVPR2024/papers/Park_Prompt_Learning_via_Meta-Regularization_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.00851 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Park_Prompt_Learning_via_Meta-Regularization_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Park_Prompt_Learning_via_Meta-Regularization_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Park_Prompt_Learning_via_CVPR_2024_supplemental.pdf | null |
Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Compositional Understanding | Le Zhang, Rabiul Awal, Aishwarya Agrawal | Vision-Language Models (VLMs) such as CLIP exhibit strong image-text comprehension abilities facilitating advances in several downstream tasks such as zero-shot image classification image-text retrieval and text-to-image generation. However the compositional reasoning abilities of existing VLMs remain subpar. The root of this limitation lies in the inadequate alignment between the images and captions in the pretraining datasets. Additionally the current contrastive learning objective fails to focus on fine-grained grounding components like relations actions and attributes resulting in "bag-of-words" representations. We introduce a simple and effective method to improve compositional reasoning in VLMs. Our method better leverages available datasets by refining and expanding the standard image-text contrastive learning framework. Our approach does not require specific annotations and does not incur extra parameters. When integrated with CLIP our technique yields notable improvement over state-of-the-art baselines across five vision-language compositional benchmarks. | https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_Contrasting_Intra-Modal_and_Ranking_Cross-Modal_Hard_Negatives_to_Enhance_Visio-Linguistic_CVPR_2024_paper.pdf | http://arxiv.org/abs/2306.08832 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Contrasting_Intra-Modal_and_Ranking_Cross-Modal_Hard_Negatives_to_Enhance_Visio-Linguistic_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Contrasting_Intra-Modal_and_Ranking_Cross-Modal_Hard_Negatives_to_Enhance_Visio-Linguistic_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhang_Contrasting_Intra-Modal_and_CVPR_2024_supplemental.pdf | null |
CMA: A Chromaticity Map Adapter for Robust Detection of Screen-Recapture Document Images | Changsheng Chen, Liangwei Lin, Yongqi Chen, Bin Li, Jishen Zeng, Jiwu Huang | The rebroadcasting of screen-recaptured document images introduces a significant risk to the confidential documents processed in government departments and commercial companies. However detecting recaptured document images subjected to distortions from online social networks (OSNs) is challenging since the common forensics cues such as the moiré pattern are weakened during transmission. In this work we first devise a pixel-level distortion model of the screen-recaptured document image to identify the robust features of color artifacts. Then we extract a chromaticity map from the recaptured image to highlight the presence of color artifacts even under low-quality samples. Based on this understanding we design a chromaticity map adapter (CMA) to efficiently extract the chromaticity map and feed it into the transformer backbone as multi-modal prompt tokens. To evaluate the performance of the proposed method we collect a recaptured office document image dataset with over 10K diverse samples. Experimental results demonstrate that the proposed CMA method outperforms a SOTA approach (with RGB modality only) reducing the average EER from 26.82% to 16.78%. Robustness evaluation shows that our method achieves 0.8688 and 0.7554 AUCs under samples with JPEG compression (QF=70) and resolution as low as 534x503 pixels. | https://openaccess.thecvf.com/content/CVPR2024/papers/Chen_CMA_A_Chromaticity_Map_Adapter_for_Robust_Detection_of_Screen-Recapture_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Chen_CMA_A_Chromaticity_Map_Adapter_for_Robust_Detection_of_Screen-Recapture_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Chen_CMA_A_Chromaticity_Map_Adapter_for_Robust_Detection_of_Screen-Recapture_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Chen_CMA_A_Chromaticity_CVPR_2024_supplemental.pdf | null |
Embodied Multi-Modal Agent trained by an LLM from a Parallel TextWorld | Yijun Yang, Tianyi Zhou, Kanxue Li, Dapeng Tao, Lusong Li, Li Shen, Xiaodong He, Jing Jiang, Yuhui Shi | While large language models (LLMs) excel in a simulated world of texts they struggle to interact with the more realistic world without perceptions of other modalities such as visual or audio signals. Although vision-language models (VLMs) integrate LLM modules (1) aligned with static image features and (2) may possess prior knowledge of world dynamics (as demonstrated in the text world) they have not been trained in an embodied visual world and thus cannot align with its dynamics. On the other hand training an embodied agent in a noisy visual world without expert guidance is often challenging and inefficient. In this paper we train a VLM agent living in a visual world using an LLM agent excelling in a parallel text world. Specifically we distill LLM's reflection outcomes (improved actions by analyzing mistakes) in a text world's tasks to finetune the VLM on the same tasks of the visual world resulting in an Embodied Multi-Modal Agent (EMMA) quickly adapting to the visual world dynamics. Such cross-modality imitation learning between the two parallel worlds is achieved by a novel DAgger-DPO algorithm enabling EMMA to generalize to a broad scope of new tasks without any further guidance from the LLM expert. Extensive evaluations on the ALFWorld benchmark's diverse tasks highlight EMMA's superior performance to SOTA VLM-based agents e.g. 20%-70% improvement in the success rate. | https://openaccess.thecvf.com/content/CVPR2024/papers/Yang_Embodied_Multi-Modal_Agent_trained_by_an_LLM_from_a_Parallel_CVPR_2024_paper.pdf | http://arxiv.org/abs/2311.16714 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Yang_Embodied_Multi-Modal_Agent_trained_by_an_LLM_from_a_Parallel_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Yang_Embodied_Multi-Modal_Agent_trained_by_an_LLM_from_a_Parallel_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yang_Embodied_Multi-Modal_Agent_CVPR_2024_supplemental.pdf | null |
VA3: Virtually Assured Amplification Attack on Probabilistic Copyright Protection for Text-to-Image Generative Models | Xiang Li, Qianli Shen, Kenji Kawaguchi | The booming use of text-to-image generative models has raised concerns about their high risk of producing copyright-infringing content. While probabilistic copyright protection methods provide a probabilistic guarantee against such infringement, in this paper we introduce Virtually Assured Amplification Attack (VA3), a novel online attack framework that exposes the vulnerabilities of these protection mechanisms. The proposed framework significantly amplifies the probability of generating infringing content over sustained interactions with generative models, with a non-trivial lower bound on the success probability of each engagement. Our theoretical and experimental results demonstrate the effectiveness of our approach under various scenarios. These findings highlight the potential risk of implementing probabilistic copyright protection in practical applications of text-to-image generative models. Code is available at https://github.com/South7X/VA3. | https://openaccess.thecvf.com/content/CVPR2024/papers/Li_VA3_Virtually_Assured_Amplification_Attack_on_Probabilistic_Copyright_Protection_for_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.00057 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Li_VA3_Virtually_Assured_Amplification_Attack_on_Probabilistic_Copyright_Protection_for_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Li_VA3_Virtually_Assured_Amplification_Attack_on_Probabilistic_Copyright_Protection_for_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_VA3_Virtually_Assured_CVPR_2024_supplemental.pdf | null |
Point-VOS: Pointing Up Video Object Segmentation | Sabarinath Mahadevan, Idil Esen Zulfikar, Paul Voigtlaender, Bastian Leibe | Current state-of-the-art Video Object Segmentation (VOS) methods rely on dense per-object mask annotations both during training and testing. This requires time-consuming and costly video annotation mechanisms. We propose a novel Point-VOS task with a spatio-temporally sparse point-wise annotation scheme that substantially reduces the annotation effort. We apply our annotation scheme to two large-scale video datasets with text descriptions and annotate over 19M points across 133K objects in 32K videos. Based on our annotations we propose a new Point-VOS benchmark and a corresponding point-based training mechanism which we use to establish strong baseline results. We show that existing VOS methods can easily be adapted to leverage our point annotations during training and can achieve results close to the fully-supervised performance when trained on pseudo-masks generated from these points. In addition we show that our data can be used to improve models that connect vision and language by evaluating it on the Video Narrative Grounding (VNG) task. We will make our code and annotations available at https://pointvos.github.io. | https://openaccess.thecvf.com/content/CVPR2024/papers/Mahadevan_Point-VOS_Pointing_Up_Video_Object_Segmentation_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Mahadevan_Point-VOS_Pointing_Up_Video_Object_Segmentation_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Mahadevan_Point-VOS_Pointing_Up_Video_Object_Segmentation_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Mahadevan_Point-VOS_Pointing_Up_CVPR_2024_supplemental.pdf | null |
Intriguing Properties of Diffusion Models: An Empirical Study of the Natural Attack Capability in Text-to-Image Generative Models | Takami Sato, Justin Yue, Nanze Chen, Ningfei Wang, Qi Alfred Chen | Denoising probabilistic diffusion models have shown breakthrough performance in generating more photo-realistic images and human-level illustrations than prior models such as GANs. This high image-generation capability has stimulated the creation of many downstream applications in various areas. However, we find that this technology is actually a double-edged sword: we identify a new type of attack, called the Natural Denoising Diffusion (NDD) attack, based on the finding that state-of-the-art deep neural network (DNN) models still hold their prediction even if we intentionally remove their robust features, which are essential to the human visual system (HVS), through text prompts. The NDD attack shows a significantly high capability to generate low-cost, model-agnostic, and transferable adversarial attacks by exploiting the natural attack capability in diffusion models. To systematically evaluate the risk of the NDD attack, we perform a large-scale empirical study with our newly created dataset, the Natural Denoising Diffusion Attack (NDDA) dataset. We evaluate the natural attack capability by answering 6 research questions. Through a user study, we find that it can achieve an 88% detection rate while being stealthy to 93% of human subjects; we also find that the non-robust features embedded by diffusion models contribute to the natural attack capability. To confirm the model-agnostic and transferable attack capability, we perform the NDD attack against the Tesla Model 3 and find that 73% of the physically printed attacks can be detected as stop signs. Our hope is that the study and dataset can help our community be aware of the risks in diffusion models and facilitate further research toward robust DNN models. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Sato_Intriguing_Properties_of_Diffusion_Models_An_Empirical_Study_of_the_CVPR_2024_paper.pdf | http://arxiv.org/abs/2308.15692 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Sato_Intriguing_Properties_of_Diffusion_Models_An_Empirical_Study_of_the_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Sato_Intriguing_Properties_of_Diffusion_Models_An_Empirical_Study_of_the_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Sato_Intriguing_Properties_of_CVPR_2024_supplemental.pdf | null |
GroupContrast: Semantic-aware Self-supervised Representation Learning for 3D Understanding | Chengyao Wang, Li Jiang, Xiaoyang Wu, Zhuotao Tian, Bohao Peng, Hengshuang Zhao, Jiaya Jia | Self-supervised 3D representation learning aims to learn effective representations from large-scale unlabeled point clouds. Most existing approaches adopt point discrimination as the pretext task, which assigns matched points in two distinct views as positive pairs and unmatched points as negative pairs. However, this approach often results in semantically identical points having dissimilar representations, leading to a high number of false negatives and introducing a semantic conflict problem. To address this issue, we propose GroupContrast, a novel approach that combines segment grouping and semantic-aware contrastive learning. Segment grouping partitions points into semantically meaningful regions, which enhances semantic coherence and provides semantic guidance for the subsequent contrastive representation learning. Semantic-aware contrastive learning augments the semantic information extracted from segment grouping and helps to alleviate the issue of semantic conflict. We conducted extensive experiments on multiple 3D scene understanding tasks. The results demonstrate that GroupContrast learns semantically meaningful representations and achieves promising transfer learning performance. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_GroupContrast_Semantic-aware_Self-supervised_Representation_Learning_for_3D_Understanding_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.09639 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Wang_GroupContrast_Semantic-aware_Self-supervised_Representation_Learning_for_3D_Understanding_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Wang_GroupContrast_Semantic-aware_Self-supervised_Representation_Learning_for_3D_Understanding_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wang_GroupContrast_Semantic-aware_Self-supervised_CVPR_2024_supplemental.pdf | null |
HouseCat6D - A Large-Scale Multi-Modal Category Level 6D Object Perception Dataset with Household Objects in Realistic Scenarios | HyunJun Jung, Shun-Cheng Wu, Patrick Ruhkamp, Guangyao Zhai, Hannah Schieber, Giulia Rizzoli, Pengyuan Wang, Hongcheng Zhao, Lorenzo Garattoni, Sven Meier, Daniel Roth, Nassir Navab, Benjamin Busam | Estimating 6D object poses is a major challenge in 3D computer vision. Building on successful instance-level approaches, research is shifting towards category-level pose estimation for practical applications. Current category-level datasets, however, fall short in annotation quality and pose variety. Addressing this, we introduce HouseCat6D, a new category-level 6D pose dataset. It features 1) multi-modality with Polarimetric RGB and Depth (RGBD+P), 2) 194 diverse objects across 10 household categories, including two photometrically challenging ones, and 3) high-quality pose annotations with an error range of only 1.35 mm to 1.74 mm. The dataset also includes 4) 41 large-scale scenes with comprehensive viewpoint and occlusion coverage, 5) a checkerboard-free environment, and 6) dense 6D parallel-jaw robotic grasp annotations. Additionally, we present benchmark results for leading category-level pose estimation networks. | https://openaccess.thecvf.com/content/CVPR2024/papers/Jung_HouseCat6D_-_A_Large-Scale_Multi-Modal_Category_Level_6D_Object_Perception_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Jung_HouseCat6D_-_A_Large-Scale_Multi-Modal_Category_Level_6D_Object_Perception_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Jung_HouseCat6D_-_A_Large-Scale_Multi-Modal_Category_Level_6D_Object_Perception_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Jung_HouseCat6D_-_A_CVPR_2024_supplemental.zip | null |
Privacy-Preserving Face Recognition Using Trainable Feature Subtraction | Yuxi Mi, Zhizhou Zhong, Yuge Huang, Jiazhen Ji, Jianqing Xu, Jun Wang, Shaoming Wang, Shouhong Ding, Shuigeng Zhou | The widespread adoption of face recognition has led to increasing privacy concerns as unauthorized access to face images can expose sensitive personal information. This paper explores face image protection against viewing and recovery attacks. Inspired by image compression we propose creating a visually uninformative face image through feature subtraction between an original face and its model-produced regeneration. Recognizable identity features within the image are encouraged by co-training a recognition model on its high-dimensional feature representation. To enhance privacy the high-dimensional representation is crafted through random channel shuffling resulting in randomized recognizable images devoid of attacker-leverageable texture details. We distill our methodologies into a novel privacy-preserving face recognition method MinusFace. Experiments demonstrate its high recognition accuracy and effective privacy protection. Its code is available at https://github.com/Tencent/TFace. | https://openaccess.thecvf.com/content/CVPR2024/papers/Mi_Privacy-Preserving_Face_Recognition_Using_Trainable_Feature_Subtraction_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.12457 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Mi_Privacy-Preserving_Face_Recognition_Using_Trainable_Feature_Subtraction_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Mi_Privacy-Preserving_Face_Recognition_Using_Trainable_Feature_Subtraction_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Mi_Privacy-Preserving_Face_Recognition_CVPR_2024_supplemental.pdf | null |
Towards Co-Evaluation of Cameras HDR and Algorithms for Industrial-Grade 6DoF Pose Estimation | Agastya Kalra, Guy Stoppi, Dmitrii Marin, Vage Taamazyan, Aarrushi Shandilya, Rishav Agarwal, Anton Boykov, Tze Hao Chong, Michael Stark | 6DoF pose estimation has been gaining increased importance in vision for over a decade; however, it does not yet meet the reliability and accuracy standards for mass deployment in industrial robotics. To this effect, we present the Industrial Plenoptic Dataset (IPD): the first dataset for the co-evaluation of cameras, HDR, and algorithms targeted at reliable, high-accuracy industrial automation. Specifically, we capture 2300 physical scenes of 20 industrial parts covering a 1m x 1m x 0.5m working volume, resulting in over 100,000 distinct object views. Each scene is captured with 13 well-calibrated multi-modal cameras, including polarization and high-resolution structured light. In terms of lighting, we capture each scene at 4 exposures and in 3 challenging lighting conditions ranging from 100 lux to 100,000 lux. We also present, validate, and analyze robot consistency, an evaluation method targeted at scalable, high-accuracy evaluation. We hope that vision systems that succeed on this dataset will have direct industry impact. The dataset and evaluation code are available at https://github.com/intrinsic-ai/ipd. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Kalra_Towards_Co-Evaluation_of_Cameras_HDR_and_Algorithms_for_Industrial-Grade_6DoF_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Kalra_Towards_Co-Evaluation_of_Cameras_HDR_and_Algorithms_for_Industrial-Grade_6DoF_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Kalra_Towards_Co-Evaluation_of_Cameras_HDR_and_Algorithms_for_Industrial-Grade_6DoF_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Kalra_Towards_Co-Evaluation_of_CVPR_2024_supplemental.pdf | null |
Learning Visual Prompt for Gait Recognition | Kang Ma, Ying Fu, Chunshui Cao, Saihui Hou, Yongzhen Huang, Dezhi Zheng | Gait, a prevalent and complex form of human motion, plays a significant role in the field of long-range pedestrian retrieval due to the unique characteristics inherent in individual motion patterns. However, gait recognition in real-world scenarios is challenging due to the limitations of capturing comprehensive cross-viewing and cross-clothing data. Additionally, distractors such as occlusions, directional changes, and lingering movements further complicate the problem. The widespread application of deep learning techniques has led to the development of various potential gait recognition methods. However, these methods utilize convolutional networks to extract shared information across different views and attire conditions. Once trained, the parameters and non-linear functions become constrained to fixed patterns, limiting their adaptability to various distractors in real-world scenarios. In this paper, we present a unified gait recognition framework to extract global motion patterns and develop a novel dynamic transformer to generate representative gait features. Specifically, we develop a trainable part-based prompt pool with numerous key-value pairs that can dynamically select prompt templates to incorporate into the gait sequence, thereby providing task-relevant shared knowledge. Furthermore, we specifically design dynamic attention to extract robust motion patterns and address the length generalization issue. Extensive experiments on four widely recognized gait datasets, i.e., Gait3D, GREW, OUMVLP, and CASIA-B, reveal that the proposed method yields substantial improvements compared to current state-of-the-art approaches. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Ma_Learning_Visual_Prompt_for_Gait_Recognition_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Ma_Learning_Visual_Prompt_for_Gait_Recognition_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Ma_Learning_Visual_Prompt_for_Gait_Recognition_CVPR_2024_paper.html | CVPR 2024 | null | null |
MLP Can Be A Good Transformer Learner | Sihao Lin, Pumeng Lyu, Dongrui Liu, Tao Tang, Xiaodan Liang, Andy Song, Xiaojun Chang | The self-attention mechanism is the key component of the Transformer but is often criticized for its computation demands. Previous token pruning works motivate their methods from the view of computation redundancy but still need to load the full network and incur the same memory costs. This paper introduces a novel strategy that simplifies vision transformers and reduces computational load through the selective removal of non-essential attention layers, guided by entropy considerations. We identify that, for the attention layers in the bottom blocks, their subsequent MLP layers, i.e., two feed-forward layers, can elicit the same entropy quantity. Meanwhile, the accompanying MLPs are under-exploited, since they exhibit smaller feature entropy compared to the MLPs in the top blocks. Therefore, we propose to integrate the uninformative attention layers into their subsequent counterparts by degenerating them into identity mappings, yielding only MLPs in certain transformer blocks. Experimental results on ImageNet-1k show that the proposed method can remove 40% of the attention layers of DeiT-B, improving throughput and memory bound without compromising performance. | https://openaccess.thecvf.com/content/CVPR2024/papers/Lin_MLP_Can_Be_A_Good_Transformer_Learner_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.05657 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Lin_MLP_Can_Be_A_Good_Transformer_Learner_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Lin_MLP_Can_Be_A_Good_Transformer_Learner_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Lin_MLP_Can_Be_CVPR_2024_supplemental.pdf | null |
GraphDreamer: Compositional 3D Scene Synthesis from Scene Graphs | Gege Gao, Weiyang Liu, Anpei Chen, Andreas Geiger, Bernhard Schölkopf | As pretrained text-to-image diffusion models become increasingly powerful, recent efforts have been made to distill knowledge from these pretrained models for optimizing a text-guided 3D model. Most of the existing methods generate a holistic 3D model from a plain text input. This can be problematic when the text describes a complex scene with multiple objects, because the vectorized text embeddings are inherently unable to capture a complex description with multiple entities and relationships. Holistic 3D modeling of the entire scene further prevents accurate grounding of text entities and concepts. To address this limitation, we propose GraphDreamer, a novel framework to generate compositional 3D scenes from scene graphs, where objects are represented as nodes and their interactions as edges. By exploiting node and edge information in scene graphs, our method makes better use of the pretrained text-to-image diffusion model and is able to fully disentangle different objects without image-level supervision. To facilitate the modeling of object-wise relationships, we use signed distance fields as the representation and impose a constraint to avoid inter-penetration of objects. To avoid manual scene graph creation, we design a text prompt for ChatGPT to generate scene graphs based on text inputs. We conduct both qualitative and quantitative experiments to validate the effectiveness of GraphDreamer in generating high-fidelity compositional 3D scenes with disentangled object entities. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Gao_GraphDreamer_Compositional_3D_Scene_Synthesis_from_Scene_Graphs_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.00093 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Gao_GraphDreamer_Compositional_3D_Scene_Synthesis_from_Scene_Graphs_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Gao_GraphDreamer_Compositional_3D_Scene_Synthesis_from_Scene_Graphs_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Gao_GraphDreamer_Compositional_3D_CVPR_2024_supplemental.pdf | null |
Visual-Augmented Dynamic Semantic Prototype for Generative Zero-Shot Learning | Wenjin Hou, Shiming Chen, Shuhuang Chen, Ziming Hong, Yan Wang, Xuetao Feng, Salman Khan, Fahad Shahbaz Khan, Xinge You | Generative zero-shot learning (ZSL) learns a generator to synthesize visual samples for unseen classes, which is an effective way to advance ZSL. However, existing generative methods rely on the conditions of Gaussian noise and the predefined semantic prototype, which limit the generator to being optimized only on specific seen classes rather than characterizing each visual instance, resulting in poor generalization (e.g., overfitting to seen classes). To address this issue, we propose a novel Visual-Augmented Dynamic Semantic prototype method (termed VADS) to boost the generator to learn accurate semantic-visual mapping by fully exploiting visual-augmented knowledge in the semantic conditions. In detail, VADS consists of two modules: (1) a Visual-aware Domain Knowledge Learning module (VDKL) learns the local bias and global prior of the visual features (referred to as domain visual knowledge), which replace pure Gaussian noise to provide richer prior noise information; (2) a Vision-Oriented Semantic Updation module (VOSU) updates the semantic prototype according to the visual representations of the samples. Ultimately, we concatenate their outputs as a dynamic semantic prototype, which serves as the condition of the generator. Extensive experiments demonstrate that our VADS achieves superior CZSL and GZSL performance on three prominent datasets and outperforms other state-of-the-art methods, with average increases of 6.4%, 5.9%, and 4.2% on SUN, CUB, and AWA2, respectively. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Hou_Visual-Augmented_Dynamic_Semantic_Prototype_for_Generative_Zero-Shot_Learning_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.14808 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Hou_Visual-Augmented_Dynamic_Semantic_Prototype_for_Generative_Zero-Shot_Learning_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Hou_Visual-Augmented_Dynamic_Semantic_Prototype_for_Generative_Zero-Shot_Learning_CVPR_2024_paper.html | CVPR 2024 | null | null |
Dynamic Prompt Optimizing for Text-to-Image Generation | Wenyi Mo, Tianyu Zhang, Yalong Bai, Bing Su, Ji-Rong Wen, Qing Yang | Text-to-image generative models, specifically those based on diffusion models like Imagen and Stable Diffusion, have made substantial advancements. Recently, there has been a surge of interest in the delicate refinement of text prompts. Users assign weights or alter the injection time steps of certain words in the text prompts to improve the quality of generated images. However, the success of fine-control prompts depends on the accuracy of the text prompts and the careful selection of weights and time steps, which requires significant manual intervention. To address this, we introduce the Prompt Auto-Editing (PAE) method. Besides refining the original prompts for image generation, we further employ an online reinforcement learning strategy to explore the weights and injection time steps of each word, leading to dynamic fine-control prompts. The reward function during training encourages the model to consider aesthetic score, semantic consistency, and user preferences. Experimental results demonstrate that our proposed method effectively improves the original prompts, generating visually more appealing images while maintaining semantic alignment. Code is available at https://github.com/Mowenyii/PAE. | https://openaccess.thecvf.com/content/CVPR2024/papers/Mo_Dynamic_Prompt_Optimizing_for_Text-to-Image_Generation_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.04095 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Mo_Dynamic_Prompt_Optimizing_for_Text-to-Image_Generation_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Mo_Dynamic_Prompt_Optimizing_for_Text-to-Image_Generation_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Mo_Dynamic_Prompt_Optimizing_CVPR_2024_supplemental.pdf | null |
SC-GS: Sparse-Controlled Gaussian Splatting for Editable Dynamic Scenes | Yi-Hua Huang, Yang-Tian Sun, Ziyi Yang, Xiaoyang Lyu, Yan-Pei Cao, Xiaojuan Qi | Novel view synthesis for dynamic scenes is still a challenging problem in computer vision and graphics. Recently, Gaussian splatting has emerged as a robust technique to represent static scenes and enable high-quality, real-time novel view synthesis. Building upon this technique, we propose a new representation that explicitly decomposes the motion and appearance of dynamic scenes into sparse control points and dense Gaussians, respectively. Our key idea is to use sparse control points, significantly fewer in number than the Gaussians, to learn compact 6-DoF transformation bases, which can be locally interpolated through learned interpolation weights to yield the motion field of 3D Gaussians. We employ a deformation MLP to predict time-varying 6-DoF transformations for each control point, which reduces learning complexities, enhances learning abilities, and facilitates obtaining temporally and spatially coherent motion patterns. Then, we jointly learn the 3D Gaussians, the canonical space locations of control points, and the deformation MLP to reconstruct the appearance, geometry, and dynamics of 3D scenes. During learning, the location and number of control points are adaptively adjusted to accommodate varying motion complexities in different regions, and an ARAP loss following the principle of as-rigid-as-possible is developed to enforce spatial continuity and local rigidity of learned motions. Finally, thanks to the explicit sparse motion representation and its decomposition from appearance, our method can enable user-controlled motion editing while retaining high-fidelity appearances. Extensive experiments demonstrate that our approach outperforms existing approaches on novel view synthesis with a high rendering speed and enables novel appearance-preserved motion editing applications. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Huang_SC-GS_Sparse-Controlled_Gaussian_Splatting_for_Editable_Dynamic_Scenes_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Huang_SC-GS_Sparse-Controlled_Gaussian_Splatting_for_Editable_Dynamic_Scenes_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Huang_SC-GS_Sparse-Controlled_Gaussian_Splatting_for_Editable_Dynamic_Scenes_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Huang_SC-GS_Sparse-Controlled_Gaussian_CVPR_2024_supplemental.mp4 | null |
360Loc: A Dataset and Benchmark for Omnidirectional Visual Localization with Cross-device Queries | Huajian Huang, Changkun Liu, Yipeng Zhu, Hui Cheng, Tristan Braud, Sai-Kit Yeung | Portable 360° cameras are becoming a cheap and efficient tool to establish large visual databases. By capturing omnidirectional views of a scene, these cameras could expedite building environment models that are essential for visual localization. However, such an advantage is often overlooked due to the lack of valuable datasets. This paper introduces a new benchmark dataset, 360Loc, composed of 360° images with ground truth poses for visual localization. We present a practical implementation of 360° mapping, combining 360° images with lidar data to generate the ground truth 6DoF poses. 360Loc is the first dataset and benchmark that explores the challenge of cross-device visual positioning, involving 360° reference frames and query frames from pinhole, ultra-wide FoV fisheye, and 360° cameras. We propose a virtual camera approach to generate lower-FoV query frames from 360° images, which ensures a fair comparison of performance among different query types in visual localization tasks. We also extend this virtual camera approach to feature matching-based and pose regression-based methods to alleviate the performance loss caused by the cross-device domain gap, and evaluate its effectiveness against state-of-the-art baselines. We demonstrate that omnidirectional visual localization is more robust in challenging large-scale scenes with symmetries and repetitive structures. These results provide new insights into 360-camera mapping and omnidirectional visual localization with cross-device queries. Project page and dataset: https://huajianup.github.io/research/360Loc/. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Huang_360Loc_A_Dataset_and_Benchmark_for_Omnidirectional_Visual_Localization_with_CVPR_2024_paper.pdf | http://arxiv.org/abs/2311.17389 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Huang_360Loc_A_Dataset_and_Benchmark_for_Omnidirectional_Visual_Localization_with_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Huang_360Loc_A_Dataset_and_Benchmark_for_Omnidirectional_Visual_Localization_with_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Huang_360Loc_A_Dataset_CVPR_2024_supplemental.pdf | null |
Domain Gap Embeddings for Generative Dataset Augmentation | Yinong Oliver Wang, Younjoon Chung, Chen Henry Wu, Fernando De la Torre | The performance of deep learning models is intrinsically tied to the quality, volume, and relevance of their training data. Gathering ample data for production scenarios often demands significant time and resources. Among various strategies, data augmentation circumvents exhaustive data collection by generating new data points from existing ones. However, traditional augmentation techniques can be less effective amidst a shift between training and testing distributions. This paper explores the potential of synthetic data by leveraging large pre-trained models for data augmentation, especially when confronted with distribution shifts. Although recent advancements in generative models have enabled several prior works in cross-distribution data generation, they require model fine-tuning and a complex setup. To bypass these shortcomings, we introduce Domain Gap Embeddings (DoGE), a plug-and-play semantic data augmentation framework in a cross-distribution few-shot setting. Our method extracts disparities between the source and desired data distributions in a latent form and subsequently steers a generative process to supplement the training set with endless, diverse synthetic samples. Our evaluations, conducted on a subpopulation shift and three domain adaptation scenarios under a few-shot paradigm, reveal that our versatile method improves performance across tasks without needing hands-on intervention or intricate fine-tuning. DoGE paves the way to effortlessly generate realistic, controllable synthetic datasets following the test distributions, bolstering real-world efficacy for downstream task models. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_Domain_Gap_Embeddings_for_Generative_Dataset_Augmentation_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Wang_Domain_Gap_Embeddings_for_Generative_Dataset_Augmentation_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Wang_Domain_Gap_Embeddings_for_Generative_Dataset_Augmentation_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wang_Domain_Gap_Embeddings_CVPR_2024_supplemental.pdf | null |
Geometrically-driven Aggregation for Zero-shot 3D Point Cloud Understanding | Guofeng Mei, Luigi Riz, Yiming Wang, Fabio Poiesi | Zero-shot 3D point cloud understanding can be achieved via 2D Vision-Language Models (VLMs). Existing strategies directly map VLM representations from 2D pixels of rendered or captured views to 3D points overlooking the inherent and expressible point cloud geometric structure. Geometrically similar or close regions can be exploited for bolstering point cloud understanding as they are likely to share semantic information. To this end we introduce the first training-free aggregation technique that leverages the point cloud's 3D geometric structure to improve the quality of the transferred VLM representation. Our approach operates iteratively performing local-to-global aggregation based on geometric and semantic point-level reasoning. We benchmark our approach on three downstream tasks including classification, part segmentation, and semantic segmentation with a variety of datasets representing both synthetic/real-world and indoor/outdoor scenarios. Our approach achieves new state-of-the-art results in all benchmarks. | https://openaccess.thecvf.com/content/CVPR2024/papers/Mei_Geometrically-driven_Aggregation_for_Zero-shot_3D_Point_Cloud_Understanding_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.02244 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Mei_Geometrically-driven_Aggregation_for_Zero-shot_3D_Point_Cloud_Understanding_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Mei_Geometrically-driven_Aggregation_for_Zero-shot_3D_Point_Cloud_Understanding_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Mei_Geometrically-driven_Aggregation_for_CVPR_2024_supplemental.pdf | null |
Learning to Rank Patches for Unbiased Image Redundancy Reduction | Yang Luo, Zhineng Chen, Peng Zhou, Zuxuan Wu, Xieping Gao, Yu-Gang Jiang | Images suffer from heavy spatial redundancy because pixels in neighboring regions are spatially correlated. Existing approaches strive to overcome this limitation by reducing less meaningful image regions. However current leading methods rely on supervisory signals. They may compel models to preserve content that aligns with labeled categories and discard content belonging to unlabeled categories. This categorical inductive bias makes these methods less effective in real-world scenarios. To address this issue we propose a self-supervised framework for image redundancy reduction called Learning to Rank Patches (LTRP). We observe that image reconstruction of masked image modeling models is sensitive to the removal of visible patches when the masking ratio is high (e.g. 90%). Building upon it we implement LTRP via two steps: inferring the semantic density score of each patch by quantifying variation between reconstructions with and without this patch and learning to rank the patches with the pseudo score. The entire process is self-supervised thus getting out of the dilemma of categorical inductive bias. We design extensive experiments on different datasets and tasks. The results demonstrate that LTRP outperforms both supervised and other self-supervised methods due to the fair assessment of image content. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Luo_Learning_to_Rank_Patches_for_Unbiased_Image_Redundancy_Reduction_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.00680 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Luo_Learning_to_Rank_Patches_for_Unbiased_Image_Redundancy_Reduction_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Luo_Learning_to_Rank_Patches_for_Unbiased_Image_Redundancy_Reduction_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Luo_Learning_to_Rank_CVPR_2024_supplemental.pdf | null |
Going Beyond Multi-Task Dense Prediction with Synergy Embedding Models | Huimin Huang, Yawen Huang, Lanfen Lin, Ruofeng Tong, Yen-Wei Chen, Hao Zheng, Yuexiang Li, Yefeng Zheng | Multi-task visual scene understanding aims to leverage the relationships among a set of correlated tasks which are solved simultaneously by embedding them within a unified network. However most existing methods give rise to two primary concerns from a task-level perspective: (1) the lack of task-independent correspondences for distinct tasks and (2) the neglect of explicit task-consensual dependencies among various tasks. To address these issues we propose a novel synergy embedding models (SEM) which goes beyond multi-task dense prediction by leveraging two innovative designs: the intra-task hierarchy-adaptive module and the inter-task EM-interactive module. Specifically the constructed intra-task module incorporates hierarchy-adaptive keys from multiple stages enabling the efficient learning of specialized visual patterns with an optimal trade-off. In addition the developed inter-task module learns interactions from a compact set of mutual bases among various tasks benefiting from the expectation maximization (EM) algorithm. Extensive empirical evidence from two public benchmarks NYUD-v2 and PASCAL-Context demonstrates that SEM consistently outperforms state-of-the-art approaches across a range of metrics. | https://openaccess.thecvf.com/content/CVPR2024/papers/Huang_Going_Beyond_Multi-Task_Dense_Prediction_with_Synergy_Embedding_Models_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Huang_Going_Beyond_Multi-Task_Dense_Prediction_with_Synergy_Embedding_Models_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Huang_Going_Beyond_Multi-Task_Dense_Prediction_with_Synergy_Embedding_Models_CVPR_2024_paper.html | CVPR 2024 | null | null |
Disentangled Pre-training for Human-Object Interaction Detection | Zhuolong Li, Xingao Li, Changxing Ding, Xiangmin Xu | Detecting human-object interaction (HOI) has long been limited by the amount of supervised data available. Recent approaches address this issue by pre-training according to pseudo-labels which align object regions with HOI triplets parsed from image captions. However pseudo-labeling is tricky and noisy making HOI pre-training a complex process. Therefore we propose an efficient disentangled pre-training method for HOI detection (DP-HOI) to address this problem. First DP-HOI utilizes object detection and action recognition datasets to pre-train the detection and interaction decoder layers respectively. Then we arrange these decoder layers so that the pre-training architecture is consistent with the downstream HOI detection task. This facilitates efficient knowledge transfer. Specifically the detection decoder identifies reliable human instances in each action recognition dataset image generates one corresponding query and feeds it into the interaction decoder for verb classification. Next we combine the human instance verb predictions in the same image and impose image-level supervision. The DP-HOI structure can be easily adapted to the HOI detection task enabling effective model parameter initialization. Therefore it significantly enhances the performance of existing HOI detection models on a broad range of rare categories. The code and pre-trained weights are available at https://github.com/xingaoli/DP-HOI. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Li_Disentangled_Pre-training_for_Human-Object_Interaction_Detection_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.01725 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Li_Disentangled_Pre-training_for_Human-Object_Interaction_Detection_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Li_Disentangled_Pre-training_for_Human-Object_Interaction_Detection_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_Disentangled_Pre-training_for_CVPR_2024_supplemental.pdf | null |
Light the Night: A Multi-Condition Diffusion Framework for Unpaired Low-Light Enhancement in Autonomous Driving | Jinlong Li, Baolu Li, Zhengzhong Tu, Xinyu Liu, Qing Guo, Felix Juefei-Xu, Runsheng Xu, Hongkai Yu | Vision-centric perception systems for autonomous driving have gained considerable attention recently due to their cost-effectiveness and scalability especially compared to LiDAR-based systems. However these systems often struggle in low-light conditions potentially compromising their performance and safety. To address this our paper introduces LightDiff a domain-tailored framework designed to enhance the low-light image quality for autonomous driving applications. Specifically we employ a multi-condition controlled diffusion model. LightDiff works without any human-collected paired data leveraging a dynamic data degradation process instead. It incorporates a novel multi-condition adapter that adaptively controls the input weights from different modalities including depth maps, RGB images, and text captions to effectively illuminate dark scenes while maintaining context consistency. Furthermore to align the enhanced images with the detection model's knowledge LightDiff employs perception-specific scores as rewards to guide the diffusion training process through reinforcement learning. Extensive experiments on the nuScenes dataset demonstrate that LightDiff can significantly improve the performance of several state-of-the-art 3D detectors in night-time conditions while achieving high visual quality scores highlighting its potential to safeguard autonomous driving. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Li_Light_the_Night_A_Multi-Condition_Diffusion_Framework_for_Unpaired_Low-Light_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.04804 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Li_Light_the_Night_A_Multi-Condition_Diffusion_Framework_for_Unpaired_Low-Light_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Li_Light_the_Night_A_Multi-Condition_Diffusion_Framework_for_Unpaired_Low-Light_CVPR_2024_paper.html | CVPR 2024 | null | null |
MetaCloak: Preventing Unauthorized Subject-driven Text-to-image Diffusion-based Synthesis via Meta-learning | Yixin Liu, Chenrui Fan, Yutong Dai, Xun Chen, Pan Zhou, Lichao Sun | Text-to-image diffusion models allow seamless generation of personalized images from scant reference photos. Yet these tools in the wrong hands can fabricate misleading or harmful content endangering individuals. To address this problem existing poisoning-based approaches perturb user images in an imperceptible way to render them "unlearnable" from malicious uses. We identify two limitations of these defending approaches: i) sub-optimal due to the hand-crafted heuristics for solving the intractable bilevel optimization and ii) lack of robustness against simple data transformations like Gaussian filtering. To solve these challenges we propose MetaCloak which solves the bi-level poisoning problem with a meta-learning framework with an additional transformation sampling process to craft transferable and robust perturbation. Specifically we employ a pool of surrogate diffusion models to craft transferable and model-agnostic perturbation. Furthermore by incorporating an additional transformation process we design a simple denoising-error maximization loss that is sufficient for causing transformation-robust semantic distortion and degradation in a personalized generation. Extensive experiments on the VGGFace2 and CelebA-HQ datasets show that MetaCloak outperforms existing approaches. Notably MetaCloak can successfully fool online training services like Replicate in a black-box manner demonstrating the effectiveness of MetaCloak in real-world scenarios. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Liu_MetaCloak_Preventing_Unauthorized_Subject-driven_Text-to-image_Diffusion-based_Synthesis_via_Meta-learning_CVPR_2024_paper.pdf | http://arxiv.org/abs/2311.13127 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Liu_MetaCloak_Preventing_Unauthorized_Subject-driven_Text-to-image_Diffusion-based_Synthesis_via_Meta-learning_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Liu_MetaCloak_Preventing_Unauthorized_Subject-driven_Text-to-image_Diffusion-based_Synthesis_via_Meta-learning_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Liu_MetaCloak_Preventing_Unauthorized_CVPR_2024_supplemental.pdf | null |
Neural Modes: Self-supervised Learning of Nonlinear Modal Subspaces | Jiahong Wang, Yinwei Du, Stelian Coros, Bernhard Thomaszewski | We propose a self-supervised approach for learning physics-based subspaces for real-time simulation. Existing learning-based methods construct subspaces by approximating pre-defined simulation data in a purely geometric way. However this approach tends to produce high-energy configurations leads to entangled latent space dimensions and generalizes poorly beyond the training set. To overcome these limitations we propose a self-supervised approach that directly minimizes the system's mechanical energy during training. We show that our method leads to learned subspaces that reflect physical equilibrium constraints resolve overfitting issues of previous methods and offer interpretable latent space parameters. | https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_Neural_Modes_Self-supervised_Learning_of_Nonlinear_Modal_Subspaces_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.17620 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Wang_Neural_Modes_Self-supervised_Learning_of_Nonlinear_Modal_Subspaces_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Wang_Neural_Modes_Self-supervised_Learning_of_Nonlinear_Modal_Subspaces_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wang_Neural_Modes_Self-supervised_CVPR_2024_supplemental.pdf | null |
How to Train Neural Field Representations: A Comprehensive Study and Benchmark | Samuele Papa, Riccardo Valperga, David Knigge, Miltiadis Kofinas, Phillip Lippe, Jan-Jakob Sonke, Efstratios Gavves | Neural fields (NeFs) have recently emerged as a versatile method for modeling signals of various modalities including images shapes and scenes. Subsequently a number of works have explored the use of NeFs as representations for downstream tasks e.g. classifying an image based on the parameters of a NeF that has been fit to it. However the impact of the NeF hyperparameters on their quality as downstream representation is scarcely understood and remains largely unexplored. This is in part caused by the large amount of time required to fit datasets of neural fields. In this work we propose a JAX-based library that leverages parallelization to enable fast optimization of large-scale NeF datasets resulting in a significant speed-up. With this library we perform a comprehensive study that investigates the effects of different hyperparameters on fitting NeFs for downstream tasks. In particular we explore the use of a shared initialization the effects of overtraining and the expressiveness of the network architectures used. Our study provides valuable insights on how to train NeFs and offers guidance for optimizing their effectiveness in downstream applications. Finally based on the proposed library and our analysis we propose Neural Field Arena a benchmark consisting of neural field variants of popular vision datasets including MNIST, CIFAR, variants of ImageNet, and ShapeNetv2. Our library and the Neural Field Arena will be open-sourced to introduce standardized benchmarking and promote further research on neural fields. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Papa_How_to_Train_Neural_Field_Representations_A_Comprehensive_Study_and_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.10531 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Papa_How_to_Train_Neural_Field_Representations_A_Comprehensive_Study_and_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Papa_How_to_Train_Neural_Field_Representations_A_Comprehensive_Study_and_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Papa_How_to_Train_CVPR_2024_supplemental.pdf | null |
Delving into the Trajectory Long-tail Distribution for Muti-object Tracking | Sijia Chen, En Yu, Jinyang Li, Wenbing Tao | Multiple Object Tracking (MOT) is a critical area within computer vision with a broad spectrum of practical implementations. Current research has primarily focused on the development of tracking algorithms and enhancement of post-processing techniques. Yet there has been a lack of thorough examination concerning the nature of tracking data itself. In this study we pioneer an exploration into the distribution patterns of tracking data and identify a pronounced long-tail distribution issue within existing MOT datasets. We note a significant imbalance in the distribution of trajectory lengths across different pedestrians a phenomenon we refer to as "pedestrians trajectory long-tail distribution". Addressing this challenge we introduce a bespoke strategy designed to mitigate the effects of this skewed distribution. Specifically we propose two data augmentation strategies including Stationary Camera View Data Augmentation (SVA) and Dynamic Camera View Data Augmentation (DVA) designed for viewpoint states and the Group Softmax (GS) module for Re-ID. SVA is to backtrack and predict the pedestrian trajectory of tail classes and DVA is to use a diffusion model to change the background of the scene. GS divides the pedestrians into unrelated groups and performs softmax operation on each group individually. Our proposed strategies can be integrated into numerous existing tracking systems and extensive experimentation validates the efficacy of our method in reducing the influence of long-tail distribution on multi-object tracking performance. The code is available at https://github.com/chen-si-jia/Trajectory-Long-tail-Distribution-for-MOT. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Chen_Delving_into_the_Trajectory_Long-tail_Distribution_for_Muti-object_Tracking_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.04700 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Chen_Delving_into_the_Trajectory_Long-tail_Distribution_for_Muti-object_Tracking_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Chen_Delving_into_the_Trajectory_Long-tail_Distribution_for_Muti-object_Tracking_CVPR_2024_paper.html | CVPR 2024 | null | null |
Tri-Modal Motion Retrieval by Learning a Joint Embedding Space | Kangning Yin, Shihao Zou, Yuxuan Ge, Zheng Tian | Text-to-motion tasks have been the focus of recent advancements in the human motion domain. However the performance of text-to-motion tasks has not reached its potential primarily due to the lack of motion datasets and the pronounced gap between the text and motion modalities. To mitigate this challenge we introduce VLMA a novel Video-Language-Motion Alignment method. This approach leverages human-centric videos as an intermediary modality effectively bridging the divide between text and motion. By employing contrastive learning we construct a cohesive embedding space across the three modalities. Furthermore we incorporate a motion reconstruction branch ensuring that the resulting motion remains closely aligned with its original trajectory. Experimental evaluations on the HumanML3D and KIT-ML datasets demonstrate the superiority of our method in comparison to existing approaches. Furthermore we introduce a novel task termed video-to-motion retrieval designed to facilitate the seamless extraction of corresponding 3D motions from an RGB video. Supplementary experiments demonstrate that our model is extensible to real-world human-centric videos offering a valuable complement to the pose estimation task. | https://openaccess.thecvf.com/content/CVPR2024/papers/Yin_Tri-Modal_Motion_Retrieval_by_Learning_a_Joint_Embedding_Space_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.00691 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Yin_Tri-Modal_Motion_Retrieval_by_Learning_a_Joint_Embedding_Space_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Yin_Tri-Modal_Motion_Retrieval_by_Learning_a_Joint_Embedding_Space_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yin_Tri-Modal_Motion_Retrieval_CVPR_2024_supplemental.pdf | null |
Seg2Reg: Differentiable 2D Segmentation to 1D Regression Rendering for 360 Room Layout Reconstruction | Cheng Sun, Wei-En Tai, Yu-Lin Shih, Kuan-Wei Chen, Yong-Jing Syu, Kent Selwyn The, Yu-Chiang Frank Wang, Hwann-Tzong Chen | State-of-the-art single-view 360 room layout reconstruction methods formulate the problem as a high-level 1D (per-column) regression task. On the other hand traditional low-level 2D layout segmentation is simpler to learn and can represent occluded regions but it requires complex post-processing for the targeting layout polygon and sacrifices accuracy. We present Seg2Reg to render 1D layout depth regression from the 2D segmentation map in a differentiable and occlusion-aware way marrying the merits of both sides. Specifically our model predicts floor-plan density for the input equirectangular 360 image. Formulating the 2D layout representation as a density field enables us to employ 'flattened' volume rendering to form 1D layout depth regression. In addition we propose a novel 3D warping augmentation on layout to improve generalization. Finally we re-implement recent room layout reconstruction methods into our codebase for benchmarking and explore modern backbones and training techniques to serve as the strong baseline. The code is at https://PanoLayoutStudio.github.io. | https://openaccess.thecvf.com/content/CVPR2024/papers/Sun_Seg2Reg_Differentiable_2D_Segmentation_to_1D_Regression_Rendering_for_360_CVPR_2024_paper.pdf | http://arxiv.org/abs/2311.18695 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Sun_Seg2Reg_Differentiable_2D_Segmentation_to_1D_Regression_Rendering_for_360_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Sun_Seg2Reg_Differentiable_2D_Segmentation_to_1D_Regression_Rendering_for_360_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Sun_Seg2Reg_Differentiable_2D_CVPR_2024_supplemental.pdf | null |
Strong Transferable Adversarial Attacks via Ensembled Asymptotically Normal Distribution Learning | Zhengwei Fang, Rui Wang, Tao Huang, Liping Jing | Strong adversarial examples are crucial for evaluating and enhancing the robustness of deep neural networks. However the performance of popular attacks is usually sensitive, for instance, to minor image transformations stemming from limited information -- typically only one input example, a handful of white-box source models, and undefined defense strategies. Hence the crafted adversarial examples are prone to overfit the source model which hampers their transferability to unknown architectures. In this paper we propose an approach named Multiple Asymptotically Normal Distribution Attacks (MultiANDA) which explicitly characterizes adversarial perturbations from a learned distribution. Specifically we approximate the posterior distribution over the perturbations by taking advantage of the asymptotic normality property of stochastic gradient ascent (SGA) then employ the deep ensemble strategy as an effective proxy for Bayesian marginalization in this process aiming to estimate a mixture of Gaussians that facilitates a more thorough exploration of the potential optimization space. The approximated posterior essentially describes the stationary distribution of SGA iterations which captures the geometric information around the local optimum. Thus MultiANDA allows drawing an unlimited number of adversarial perturbations for each input and reliably maintains the transferability. Our proposed method outperforms ten state-of-the-art black-box attacks on deep learning models with or without defenses through extensive experiments on seven normally trained and seven defense models. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Fang_Strong_Transferable_Adversarial_Attacks_via_Ensembled_Asymptotically_Normal_Distribution_Learning_CVPR_2024_paper.pdf | http://arxiv.org/abs/2209.11964 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Fang_Strong_Transferable_Adversarial_Attacks_via_Ensembled_Asymptotically_Normal_Distribution_Learning_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Fang_Strong_Transferable_Adversarial_Attacks_via_Ensembled_Asymptotically_Normal_Distribution_Learning_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Fang_Strong_Transferable_Adversarial_CVPR_2024_supplemental.pdf | null |
Spanning Training Progress: Temporal Dual-Depth Scoring (TDDS) for Enhanced Dataset Pruning | Xin Zhang, Jiawei Du, Yunsong Li, Weiying Xie, Joey Tianyi Zhou | Dataset pruning aims to construct a coreset capable of achieving performance comparable to the original full dataset. Most existing dataset pruning methods rely on snapshot-based criteria to identify representative samples often resulting in poor generalization across various pruning and cross-architecture scenarios. Recent studies have addressed this issue by expanding the scope of training dynamics considered including factors such as forgetting event and probability change typically using an averaging approach. However these works struggle to integrate a broader range of training dynamics without overlooking well-generalized samples which may not be sufficiently highlighted in an averaging manner. In this study we propose a novel dataset pruning method termed Temporal Dual-Depth Scoring (TDDS) to tackle this problem. TDDS utilizes a dual-depth strategy to achieve a balance between incorporating extensive training dynamics and identifying representative samples for dataset pruning. In the first depth we estimate the series of each sample's individual contributions spanning the training progress ensuring comprehensive integration of training dynamics. In the second depth we focus on the variability of the sample-wise contributions identified in the first depth to highlight well-generalized samples. Extensive experiments conducted on CIFAR and ImageNet datasets verify the superiority of TDDS over previous SOTA methods. Specifically on CIFAR-100 our method achieves 54.51% accuracy with only 10% training data surpassing baseline methods by more than 12.69%. Our codes are available at https://github.com/zhangxin-xd/Dataset-Pruning-TDDS. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_Spanning_Training_Progress_Temporal_Dual-Depth_Scoring_TDDS_for_Enhanced_Dataset_CVPR_2024_paper.pdf | http://arxiv.org/abs/2311.13613 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Spanning_Training_Progress_Temporal_Dual-Depth_Scoring_TDDS_for_Enhanced_Dataset_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Spanning_Training_Progress_Temporal_Dual-Depth_Scoring_TDDS_for_Enhanced_Dataset_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhang_Spanning_Training_Progress_CVPR_2024_supplemental.pdf | null |
UniMix: Towards Domain Adaptive and Generalizable LiDAR Semantic Segmentation in Adverse Weather | Haimei Zhao, Jing Zhang, Zhuo Chen, Shanshan Zhao, Dacheng Tao | LiDAR semantic segmentation (LSS) is a critical task in autonomous driving and has achieved promising progress. However prior LSS methods are conventionally investigated and evaluated on datasets within the same domain in clear weather. The robustness of LSS models in unseen scenes and all weather conditions is crucial for ensuring safety and reliability in real applications. To this end we propose UniMix a universal method that enhances the adaptability and generalizability of LSS models. UniMix first leverages physically valid adverse weather simulation to construct a Bridge Domain which serves to bridge the domain gap between the clear weather scenes and the adverse weather scenes. Then a Universal Mixing operator is defined regarding spatial intensity and semantic distributions to create the intermediate domain with mixed samples from given domains. Integrating the proposed two techniques into a teacher-student framework UniMix efficiently mitigates the domain gap and enables LSS models to learn weather-robust and domain-invariant representations. We devote UniMix to two main setups: 1) unsupervised domain adaptation, adapting the model from the clear weather source domain to the adverse weather target domain; 2) domain generalization, learning a model that generalizes well to unseen scenes in adverse weather. Extensive experiments validate the effectiveness of UniMix across different tasks and datasets all achieving superior performance over state-of-the-art methods. The code will be released. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Zhao_UniMix_Towards_Domain_Adaptive_and_Generalizable_LiDAR_Semantic_Segmentation_in_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.05145 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Zhao_UniMix_Towards_Domain_Adaptive_and_Generalizable_LiDAR_Semantic_Segmentation_in_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Zhao_UniMix_Towards_Domain_Adaptive_and_Generalizable_LiDAR_Semantic_Segmentation_in_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhao_UniMix_Towards_Domain_CVPR_2024_supplemental.pdf | null |
Visual Delta Generator with Large Multi-modal Models for Semi-supervised Composed Image Retrieval | Young Kyun Jang, Donghyun Kim, Zihang Meng, Dat Huynh, Ser-Nam Lim | Composed Image Retrieval (CIR) is a task that retrieves images similar to a query based on a provided textual modification. Current techniques rely on supervised learning for CIR models using labeled triplets of the <reference image, text, target image>. These specific triplets are not as commonly available as simple image-text pairs limiting the widespread use of CIR and its scalability. On the other hand zero-shot CIR can be relatively easily trained with image-caption pairs without considering the image-to-image relation but this approach tends to yield lower accuracy. We propose a new semi-supervised CIR approach where we search for a reference and its related target images in auxiliary data and learn our large language model-based Visual Delta Generator (VDG) to generate text describing the visual difference (i.e. visual delta) between the two. VDG equipped with fluent language knowledge and being model agnostic can generate pseudo triplets to boost the performance of CIR models. Our approach significantly improves the existing supervised learning approaches and achieves state-of-the-art results on the CIR benchmarks. | https://openaccess.thecvf.com/content/CVPR2024/papers/Jang_Visual_Delta_Generator_with_Large_Multi-modal_Models_for_Semi-supervised_Composed_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.15516 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Jang_Visual_Delta_Generator_with_Large_Multi-modal_Models_for_Semi-supervised_Composed_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Jang_Visual_Delta_Generator_with_Large_Multi-modal_Models_for_Semi-supervised_Composed_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Jang_Visual_Delta_Generator_CVPR_2024_supplemental.pdf | null |
Selective Interpretable and Motion Consistent Privacy Attribute Obfuscation for Action Recognition | Filip Ilic, He Zhao, Thomas Pock, Richard P. Wildes | Concerns for the privacy of individuals captured in public imagery have led to privacy-preserving action recognition. Existing approaches often suffer from issues arising through obfuscation being applied globally and from a lack of interpretability. Global obfuscation hides privacy-sensitive regions, but also contextual regions important for action recognition. Lack of interpretability erodes trust in these new technologies. We highlight the limitations of current paradigms and propose a solution: human-selected privacy templates that yield interpretability by design, and an obfuscation scheme that selectively hides attributes and also induces temporal consistency, which is important in action recognition. Our approach is architecture agnostic and directly modifies input imagery, while existing approaches generally require architecture training. Our approach offers more flexibility, as no retraining is required, and outperforms alternatives on three widely used datasets. | https://openaccess.thecvf.com/content/CVPR2024/papers/Ilic_Selective_Interpretable_and_Motion_Consistent_Privacy_Attribute_Obfuscation_for_Action_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.12710 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Ilic_Selective_Interpretable_and_Motion_Consistent_Privacy_Attribute_Obfuscation_for_Action_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Ilic_Selective_Interpretable_and_Motion_Consistent_Privacy_Attribute_Obfuscation_for_Action_CVPR_2024_paper.html | CVPR 2024 | null | null |
HiPose: Hierarchical Binary Surface Encoding and Correspondence Pruning for RGB-D 6DoF Object Pose Estimation | Yongliang Lin, Yongzhi Su, Praveen Nathan, Sandeep Inuganti, Yan Di, Martin Sundermeyer, Fabian Manhardt, Didier Stricker, Jason Rambach, Yu Zhang | In this work we present a novel dense-correspondence method for 6DoF object pose estimation from a single RGB-D image. While many existing data-driven methods achieve impressive performance, they tend to be time-consuming due to their reliance on rendering-based refinement approaches. To circumvent this limitation, we present HiPose, which establishes 3D-3D correspondences in a coarse-to-fine manner with a hierarchical binary surface encoding. Unlike previous dense-correspondence methods, we estimate the correspondence surface by employing point-to-surface matching and iteratively constricting the surface until it becomes a correspondence point, while gradually removing outliers. Extensive experiments on the public benchmarks LM-O, YCB-V, and T-Less demonstrate that our method surpasses all refinement-free methods and is even on par with expensive refinement-based approaches. Crucially, our approach is computationally efficient and enables real-time critical applications with high accuracy requirements. | https://openaccess.thecvf.com/content/CVPR2024/papers/Lin_HiPose_Hierarchical_Binary_Surface_Encoding_and_Correspondence_Pruning_for_RGB-D_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Lin_HiPose_Hierarchical_Binary_Surface_Encoding_and_Correspondence_Pruning_for_RGB-D_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Lin_HiPose_Hierarchical_Binary_Surface_Encoding_and_Correspondence_Pruning_for_RGB-D_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Lin_HiPose_Hierarchical_Binary_CVPR_2024_supplemental.pdf | null |
DiffForensics: Leveraging Diffusion Prior to Image Forgery Detection and Localization | Zeqin Yu, Jiangqun Ni, Yuzhen Lin, Haoyi Deng, Bin Li | As manipulating images may lead to misinterpretation of the visual content, addressing the image forgery detection and localization (IFDL) problem has drawn serious public concern. In this work we propose a simple assumption that an effective forensic method should focus on the mesoscopic properties of images. Based on this assumption, a novel two-stage self-supervised framework leveraging the diffusion model for the IFDL task, i.e., DiffForensics, is proposed in this paper. DiffForensics begins with a self-supervised denoising diffusion paradigm equipped with an encoder-decoder structure, freezing the pre-trained encoder (e.g., on ADE-20K) to inherit macroscopic features for general image characteristics, while encouraging the decoder to learn microscopic feature representations of images, enforcing the whole model to focus on mesoscopic representations. The pre-trained model, as a prior, is then further fine-tuned for the IFDL task with the customized Edge Cue Enhancement Module (ECEM), which progressively highlights the boundary features within the manipulated regions, thereby refining tampered-area localization with better precision. Extensive experiments on several challenging public datasets demonstrate the effectiveness of the proposed method compared with other state-of-the-art methods. The proposed DiffForensics could significantly improve the model's capabilities for both accurate tamper detection and precise tamper localization while concurrently elevating its generalization and robustness. | https://openaccess.thecvf.com/content/CVPR2024/papers/Yu_DiffForensics_Leveraging_Diffusion_Prior_to_Image_Forgery_Detection_and_Localization_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Yu_DiffForensics_Leveraging_Diffusion_Prior_to_Image_Forgery_Detection_and_Localization_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Yu_DiffForensics_Leveraging_Diffusion_Prior_to_Image_Forgery_Detection_and_Localization_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yu_DiffForensics_Leveraging_Diffusion_CVPR_2024_supplemental.pdf | null |
CoSeR: Bridging Image and Language for Cognitive Super-Resolution | Haoze Sun, Wenbo Li, Jianzhuang Liu, Haoyu Chen, Renjing Pei, Xueyi Zou, Youliang Yan, Yujiu Yang | Existing super-resolution (SR) models primarily focus on restoring local texture details, often neglecting the global semantic information within the scene. This oversight can lead to the omission of crucial semantic details or the introduction of inaccurate textures during the recovery process. In our work, we introduce the Cognitive Super-Resolution (CoSeR) framework, empowering SR models with the capacity to comprehend low-resolution images. We achieve this by marrying image appearance and language understanding to generate a cognitive embedding, which not only activates prior information from large text-to-image diffusion models but also facilitates the generation of high-quality reference images to optimize the SR process. To further improve image fidelity, we propose a novel condition injection scheme called "All-in-Attention", consolidating all conditional information into a single module. Consequently, our method successfully restores semantically correct and photorealistic details, demonstrating state-of-the-art performance across multiple benchmarks. Project page: https://coser-main.github.io/ | https://openaccess.thecvf.com/content/CVPR2024/papers/Sun_CoSeR_Bridging_Image_and_Language_for_Cognitive_Super-Resolution_CVPR_2024_paper.pdf | http://arxiv.org/abs/2311.16512 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Sun_CoSeR_Bridging_Image_and_Language_for_Cognitive_Super-Resolution_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Sun_CoSeR_Bridging_Image_and_Language_for_Cognitive_Super-Resolution_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Sun_CoSeR_Bridging_Image_CVPR_2024_supplemental.pdf | null |
Geometry-aware Reconstruction and Fusion-refined Rendering for Generalizable Neural Radiance Fields | Tianqi Liu, Xinyi Ye, Min Shi, Zihao Huang, Zhiyu Pan, Zhan Peng, Zhiguo Cao | Generalizable NeRF aims to synthesize novel views for unseen scenes. Common practices involve constructing variance-based cost volumes for geometry reconstruction and encoding 3D descriptors for decoding novel views. However, existing methods show limited generalization ability in challenging conditions due to inaccurate geometry, sub-optimal descriptors, and decoding strategies. We address these issues point by point. First, we find the variance-based cost volume exhibits failure patterns, as the features of pixels corresponding to the same point can be inconsistent across different views due to occlusions or reflections. We introduce an Adaptive Cost Aggregation (ACA) approach to amplify the contribution of consistent pixel pairs and suppress inconsistent ones. Unlike previous methods that solely fuse 2D features into descriptors, our approach introduces a Spatial-View Aggregator (SVA) to incorporate 3D context into descriptors through spatial and inter-view interaction. When decoding the descriptors, we observe that the two existing decoding strategies excel in different areas, which are complementary. A Consistency-Aware Fusion (CAF) strategy is proposed to leverage the advantages of both. We incorporate the above ACA, SVA, and CAF into a coarse-to-fine framework termed Geometry-aware Reconstruction and Fusion-refined Rendering (GeFu). GeFu attains state-of-the-art performance across multiple datasets. | https://openaccess.thecvf.com/content/CVPR2024/papers/Liu_Geometry-aware_Reconstruction_and_Fusion-refined_Rendering_for_Generalizable_Neural_Radiance_Fields_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.17528 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Liu_Geometry-aware_Reconstruction_and_Fusion-refined_Rendering_for_Generalizable_Neural_Radiance_Fields_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Liu_Geometry-aware_Reconstruction_and_Fusion-refined_Rendering_for_Generalizable_Neural_Radiance_Fields_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Liu_Geometry-aware_Reconstruction_and_CVPR_2024_supplemental.pdf | null |
Boosting Self-Supervision for Single-View Scene Completion via Knowledge Distillation | null | null | null | null | null | https://openaccess.thecvf.com/content/CVPR2024/html/Han_Boosting_Self-Supervision_for_Single-View_Scene_Completion_via_Knowledge_Distillation_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Han_Boosting_Self-Supervision_for_Single-View_Scene_Completion_via_Knowledge_Distillation_CVPR_2024_paper.html | CVPR 2024 | null | null |
PromptKD: Unsupervised Prompt Distillation for Vision-Language Models | Zheng Li, Xiang Li, Xinyi Fu, Xin Zhang, Weiqiang Wang, Shuo Chen, Jian Yang | Prompt learning has emerged as a valuable technique for enhancing vision-language models (VLMs) such as CLIP for downstream tasks in specific domains. Existing work mainly focuses on designing various learning forms of prompts, neglecting the potential of prompts as effective distillers for learning from larger teacher models. In this paper, we introduce an unsupervised domain prompt distillation framework, which aims to transfer the knowledge of a larger teacher model to a lightweight target model through prompt-driven imitation using unlabeled domain images. Specifically, our framework consists of two distinct stages. In the initial stage, we pre-train a large CLIP teacher model using domain (few-shot) labels. After pre-training, we leverage the unique decoupled-modality characteristics of CLIP by pre-computing and storing the text features as class vectors, only once, through the teacher text encoder. In the subsequent stage, the stored class vectors are shared across the teacher and student image encoders for calculating the predicted logits. Further, we align the logits of both the teacher and student models via KL divergence, encouraging the student image encoder to generate probability distributions similar to the teacher's through the learnable prompts. The proposed prompt distillation process eliminates the reliance on labeled data, enabling the algorithm to leverage a vast amount of unlabeled images within the domain. Finally, the well-trained student image encoder and pre-stored text features (class vectors) are utilized for inference. To the best of our knowledge, we are the first to (1) perform unsupervised domain-specific prompt-driven knowledge distillation for CLIP and (2) establish a practical pre-storing mechanism of text features as shared class vectors between teacher and student. Extensive experiments on 11 datasets demonstrate the effectiveness of our method. Code is publicly available at https://github.com/zhengli97/PromptKD. | https://openaccess.thecvf.com/content/CVPR2024/papers/Li_PromptKD_Unsupervised_Prompt_Distillation_for_Vision-Language_Models_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.02781 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Li_PromptKD_Unsupervised_Prompt_Distillation_for_Vision-Language_Models_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Li_PromptKD_Unsupervised_Prompt_Distillation_for_Vision-Language_Models_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_PromptKD_Unsupervised_Prompt_CVPR_2024_supplemental.pdf | null |
VideoBooth: Diffusion-based Video Generation with Image Prompts | Yuming Jiang, Tianxing Wu, Shuai Yang, Chenyang Si, Dahua Lin, Yu Qiao, Chen Change Loy, Ziwei Liu | Text-driven video generation has witnessed rapid progress. However, merely using text prompts is not enough to depict the desired subject appearance that accurately aligns with users' intents, especially for customized content creation. In this paper, we study the task of video generation with image prompts, which provide more accurate and direct content control beyond text prompts. Specifically, we propose a feed-forward framework, VideoBooth, with two dedicated designs: 1) We propose to embed image prompts in a coarse-to-fine manner. Coarse visual embeddings from the image encoder provide high-level encodings of image prompts, while fine visual embeddings from the proposed attention injection module provide multi-scale and detailed encodings of image prompts. These two complementary embeddings can faithfully capture the desired appearance. 2) In the attention injection module, at the fine level, multi-scale image prompts are fed into different cross-frame attention layers as additional keys and values. This extra spatial information refines the details in the first frame and is then propagated to the remaining frames, which maintains temporal consistency. Extensive experiments demonstrate that VideoBooth achieves state-of-the-art performance in generating customized high-quality videos with subjects specified in image prompts. Notably, VideoBooth is a generalizable framework where a single model works for a wide range of image prompts with only feed-forward passes. | https://openaccess.thecvf.com/content/CVPR2024/papers/Jiang_VideoBooth_Diffusion-based_Video_Generation_with_Image_Prompts_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.00777 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Jiang_VideoBooth_Diffusion-based_Video_Generation_with_Image_Prompts_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Jiang_VideoBooth_Diffusion-based_Video_Generation_with_Image_Prompts_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Jiang_VideoBooth_Diffusion-based_Video_CVPR_2024_supplemental.pdf | null |
Robust Overfitting Does Matter: Test-Time Adversarial Purification With FGSM | Linyu Tang, Lei Zhang | Numerous studies have demonstrated the susceptibility of deep neural networks (DNNs) to subtle adversarial perturbations, prompting the development of many advanced adversarial defense methods aimed at mitigating adversarial attacks. Current defense strategies usually train DNNs for a specific adversarial attack method and can achieve good robustness in defense against this type of adversarial attack. Nevertheless, when subjected to evaluations involving unfamiliar attack modalities, empirical evidence reveals a pronounced deterioration in the robustness of DNNs. Meanwhile, there is a trade-off between the classification accuracy of clean examples and adversarial examples. Most defense methods often sacrifice the accuracy of clean examples in order to improve the adversarial robustness of DNNs. To alleviate these problems and enhance the overall robust generalization of DNNs, we propose the Test-Time Pixel-Level Adversarial Purification (TPAP) method. This approach is based on the robust overfitting characteristic of DNNs to the fast gradient sign method (FGSM) on training and test datasets. It utilizes FGSM for adversarial purification, processing images at testing time to purify unknown adversarial perturbations from pixels in a "counter changes with changelessness" manner, thereby enhancing the defense capability of DNNs against various unknown adversarial attacks. Extensive experimental results show that our method can effectively improve the overall robust generalization of DNNs, notably over previous methods. Code is available at https://github.com/tly18/TPAP. | https://openaccess.thecvf.com/content/CVPR2024/papers/Tang_Robust_Overfitting_Does_Matter_Test-Time_Adversarial_Purification_With_FGSM_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.11448 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Tang_Robust_Overfitting_Does_Matter_Test-Time_Adversarial_Purification_With_FGSM_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Tang_Robust_Overfitting_Does_Matter_Test-Time_Adversarial_Purification_With_FGSM_CVPR_2024_paper.html | CVPR 2024 | null | null |
Sparse Global Matching for Video Frame Interpolation with Large Motion | Chunxu Liu, Guozhen Zhang, Rui Zhao, Limin Wang | Large motion poses a critical challenge in the Video Frame Interpolation (VFI) task. Existing methods are often constrained by limited receptive fields, resulting in sub-optimal performance when handling scenarios with large motion. In this paper, we introduce a new pipeline for VFI, which can effectively integrate global-level information to alleviate issues associated with large motion. Specifically, we first estimate a pair of initial intermediate flows using a high-resolution feature map for extracting local details. Then, we incorporate a sparse global matching branch to compensate for flow estimation, which consists of identifying flaws in initial flows and generating sparse flow compensation with a global receptive field. Finally, we adaptively merge the initial flow estimation with global flow compensation, yielding a more accurate intermediate flow. To evaluate the effectiveness of our method in handling large motion, we carefully curate a more challenging subset from commonly used benchmarks. Our method demonstrates state-of-the-art performance on these VFI subsets with large motion. | https://openaccess.thecvf.com/content/CVPR2024/papers/Liu_Sparse_Global_Matching_for_Video_Frame_Interpolation_with_Large_Motion_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.06913 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Liu_Sparse_Global_Matching_for_Video_Frame_Interpolation_with_Large_Motion_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Liu_Sparse_Global_Matching_for_Video_Frame_Interpolation_with_Large_Motion_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Liu_Sparse_Global_Matching_CVPR_2024_supplemental.pdf | null |
ExtDM: Distribution Extrapolation Diffusion Model for Video Prediction | Zhicheng Zhang, Junyao Hu, Wentao Cheng, Danda Paudel, Jufeng Yang | Video prediction is a challenging task due to its inherent uncertainty, especially when forecasting a long period. To model the temporal dynamics, advanced methods benefit from the recent success of diffusion models and repeatedly refine the predicted future frames with a 3D spatiotemporal U-Net. However, there exists a gap between the present and the future, and the repeated usage of the U-Net brings a heavy computation burden. To address this, we propose a diffusion-based video prediction method that predicts future frames by extrapolating the present distribution of features, namely ExtDM. Specifically, our method consists of three components: (i) a motion autoencoder conducts a bijective transformation between video frames and motion cues; (ii) a layered distribution adaptor module extrapolates the present features under the guidance of a Gaussian distribution; (iii) a 3D U-Net architecture specialized for jointly fusing guidance and features along the temporal dimension via spatiotemporal-window attention. Extensive experiments on five popular benchmarks covering short- and long-term video prediction verify the effectiveness of ExtDM. | https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_ExtDM_Distribution_Extrapolation_Diffusion_Model_for_Video_Prediction_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_ExtDM_Distribution_Extrapolation_Diffusion_Model_for_Video_Prediction_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_ExtDM_Distribution_Extrapolation_Diffusion_Model_for_Video_Prediction_CVPR_2024_paper.html | CVPR 2024 | null | null |
Modality-Collaborative Test-Time Adaptation for Action Recognition | Baochen Xiong, Xiaoshan Yang, Yaguang Song, Yaowei Wang, Changsheng Xu | Video-based Unsupervised Domain Adaptation (VUDA) methods improve the generalization of video models, enabling them to be applied to action recognition tasks in different environments. However, these methods require continuous access to source data during the adaptation process, which is impractical in real scenarios where the source videos are not available due to concerns over transmission efficiency or privacy issues. To address this problem, in this paper we propose to solve the Multimodal Video Test-Time Adaptation (MVTTA) task. Existing image-based TTA methods cannot be directly applied to this task because videos exhibit domain shift in both the multimodal and temporal dimensions, which brings difficulties to adaptation. To address the above challenges, we propose a Modality-Collaborative Test-Time Adaptation (MC-TTA) network. We maintain teacher and student memory banks, respectively, for generating pseudo-prototypes and target-prototypes. In the teacher model, we propose a Self-assembled Source-friendly Feature Reconstruction (SSFR) module to encourage the teacher memory bank to store features that are more likely to be consistent with the source distribution. Through multimodal prototype alignment and cross-modal relative consistency, our method can effectively alleviate domain shift in videos. We evaluate the proposed model on four public video datasets. The results show that our model outperforms existing state-of-the-art methods. | https://openaccess.thecvf.com/content/CVPR2024/papers/Xiong_Modality-Collaborative_Test-Time_Adaptation_for_Action_Recognition_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Xiong_Modality-Collaborative_Test-Time_Adaptation_for_Action_Recognition_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Xiong_Modality-Collaborative_Test-Time_Adaptation_for_Action_Recognition_CVPR_2024_paper.html | CVPR 2024 | null | null |
SCULPT: Shape-Conditioned Unpaired Learning of Pose-dependent Clothed and Textured Human Meshes | Soubhik Sanyal, Partha Ghosh, Jinlong Yang, Michael J. Black, Justus Thies, Timo Bolkart | We present SCULPT, a novel 3D generative model for clothed and textured 3D meshes of humans. Specifically, we devise a deep neural network that learns to represent the geometry and appearance distribution of clothed human bodies. Training such a model is challenging, as datasets of textured 3D meshes for humans are limited in size and accessibility. Our key observation is that there exist medium-sized 3D scan datasets like CAPE, as well as large-scale 2D image datasets of clothed humans, and that multiple appearances can be mapped to a single geometry. To effectively learn from the two data modalities, we propose an unpaired learning procedure for pose-dependent clothed and textured human meshes. Specifically, we learn a pose-dependent geometry space from 3D scan data. We represent this as per-vertex displacements w.r.t. the SMPL model. Next, we train a geometry-conditioned texture generator in an unsupervised way using the 2D image data. We use intermediate activations of the learned geometry model to condition our texture generator. To alleviate entanglement between pose and clothing type, and between pose and clothing appearance, we condition both the texture and geometry generators with attribute labels, such as clothing types for the geometry generator and clothing colors for the texture generator. We automatically generated these conditioning labels for the 2D images based on the visual question-answering model BLIP and CLIP. We validate our method on the SCULPT dataset and compare to state-of-the-art 3D generative models for clothed human bodies. Our code and data can be found at https://sculpt.is.tue.mpg.de. | https://openaccess.thecvf.com/content/CVPR2024/papers/Sanyal_SCULPT_Shape-Conditioned_Unpaired_Learning_of_Pose-dependent_Clothed_and_Textured_Human_CVPR_2024_paper.pdf | http://arxiv.org/abs/2308.10638 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Sanyal_SCULPT_Shape-Conditioned_Unpaired_Learning_of_Pose-dependent_Clothed_and_Textured_Human_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Sanyal_SCULPT_Shape-Conditioned_Unpaired_Learning_of_Pose-dependent_Clothed_and_Textured_Human_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Sanyal_SCULPT_Shape-Conditioned_Unpaired_CVPR_2024_supplemental.zip | null |
Point Segment and Count: A Generalized Framework for Object Counting | Zhizhong Huang, Mingliang Dai, Yi Zhang, Junping Zhang, Hongming Shan | Class-agnostic object counting aims to count all objects in an image with respect to example boxes or class names, a.k.a. few-shot and zero-shot counting. In this paper, we propose a generalized framework for both few-shot and zero-shot object counting based on detection. Our framework combines the superior advantages of two foundation models without compromising their zero-shot capability: (i) SAM to segment all possible objects as mask proposals and (ii) CLIP to classify proposals to obtain accurate object counts. However, this strategy meets the obstacles of efficiency overhead and of small, crowded objects that cannot be localized and distinguished. To address these issues, our framework, termed PseCo, follows three steps: point, segment, and count. Specifically, we first propose a class-agnostic object localization to provide accurate but minimal point prompts for SAM, which consequently not only reduces computation costs but also avoids missing small objects. Furthermore, we propose a generalized object classification that leverages CLIP image/text embeddings as the classifier, following a hierarchical knowledge distillation to obtain discriminative classifications among hierarchical mask proposals. Extensive experimental results on FSC-147, COCO, and LVIS demonstrate that PseCo achieves state-of-the-art performance in both few-shot/zero-shot object counting/detection. | https://openaccess.thecvf.com/content/CVPR2024/papers/Huang_Point_Segment_and_Count_A_Generalized_Framework_for_Object_Counting_CVPR_2024_paper.pdf | http://arxiv.org/abs/2311.12386 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Huang_Point_Segment_and_Count_A_Generalized_Framework_for_Object_Counting_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Huang_Point_Segment_and_Count_A_Generalized_Framework_for_Object_Counting_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Huang_Point_Segment_and_CVPR_2024_supplemental.pdf | null |
Small Steps and Level Sets: Fitting Neural Surface Models with Point Guidance | Chamin Hewa Koneputugodage, Yizhak Ben-Shabat, Dylan Campbell, Stephen Gould | A neural signed distance function (SDF) is a convenient shape representation for many tasks, such as surface reconstruction, editing, and generation. However, neural SDFs are difficult to fit to raw point clouds, such as those sampled from the surface of a shape by a scanner. A major issue occurs when the shape's geometry is very different from the structural biases implicit in the network's initialization. In this case, we observe that the standard loss formulation does not guide the network towards the correct SDF values. We circumvent this problem by introducing guiding points and use them to steer the optimization towards the true shape via small incremental changes, for which the loss formulation has a good descent direction. We show that this point-guided homotopy-based optimization scheme facilitates a deformation from an easy problem to the difficult reconstruction problem. We also propose a metric to quantify the difference in surface geometry between a target shape and an initial surface, which helps indicate whether the standard loss formulation is guiding towards the target shape. Our method outperforms previous state-of-the-art approaches, with large improvements on shapes identified by this metric as particularly challenging. | https://openaccess.thecvf.com/content/CVPR2024/papers/Koneputugodage_Small_Steps_and_Level_Sets_Fitting_Neural_Surface_Models_with_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Koneputugodage_Small_Steps_and_Level_Sets_Fitting_Neural_Surface_Models_with_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Koneputugodage_Small_Steps_and_Level_Sets_Fitting_Neural_Surface_Models_with_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Koneputugodage_Small_Steps_and_CVPR_2024_supplemental.pdf | null |
Domain-Agnostic Mutual Prompting for Unsupervised Domain Adaptation | Zhekai Du, Xinyao Li, Fengling Li, Ke Lu, Lei Zhu, Jingjing Li | Conventional Unsupervised Domain Adaptation (UDA) strives to minimize the distribution discrepancy between domains, which neglects to harness rich semantics from data and struggles to handle complex domain shifts. A promising technique is to leverage the knowledge of large-scale pre-trained vision-language models for more guided adaptation. Despite some endeavors, current methods often learn textual prompts to embed domain semantics for source and target domains separately and perform classification within each domain, limiting cross-domain knowledge transfer. Moreover, prompting only the language branch lacks the flexibility to adapt both modalities dynamically. To bridge this gap, we propose Domain-Agnostic Mutual Prompting (DAMP) to exploit domain-invariant semantics by mutually aligning visual and textual embeddings. Specifically, the image contextual information is utilized to prompt the language branch in a domain-agnostic and instance-conditioned way. Meanwhile, visual prompts are imposed based on the domain-agnostic textual prompt to elicit domain-invariant visual embeddings. These two branches of prompts are learned mutually with a cross-attention module and regularized with a semantic-consistency loss and an instance-discrimination contrastive loss. Experiments on three UDA benchmarks demonstrate the superiority of DAMP over state-of-the-art approaches. | https://openaccess.thecvf.com/content/CVPR2024/papers/Du_Domain-Agnostic_Mutual_Prompting_for_Unsupervised_Domain_Adaptation_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.02899 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Du_Domain-Agnostic_Mutual_Prompting_for_Unsupervised_Domain_Adaptation_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Du_Domain-Agnostic_Mutual_Prompting_for_Unsupervised_Domain_Adaptation_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Du_Domain-Agnostic_Mutual_Prompting_CVPR_2024_supplemental.pdf | null |
PTT: Point-Trajectory Transformer for Efficient Temporal 3D Object Detection | Kuan-Chih Huang, Weijie Lyu, Ming-Hsuan Yang, Yi-Hsuan Tsai | Recent temporal LiDAR-based 3D object detectors achieve promising performance with the two-stage proposal-based approach. They generate 3D box candidates from a first-stage dense detector, followed by different temporal aggregation methods. However, these approaches require per-frame objects or whole point clouds, posing challenges related to memory bank utilization. Moreover, point clouds and trajectory features are combined solely by concatenation, which may neglect effective interactions between them. In this paper, we propose a point-trajectory transformer with long short-term memory for efficient temporal 3D object detection. To this end, we only utilize the point clouds of current-frame objects and their historical trajectories as input to minimize the memory bank storage requirement. Furthermore, we introduce modules to encode trajectory features, focusing on long short-term and future-aware perspectives, and then effectively aggregate them with point cloud features. We conduct extensive experiments on the large-scale Waymo dataset to demonstrate that our approach performs well against state-of-the-art methods. Code and models will be made publicly available at https://github.com/kuanchihhuang/PTT. | https://openaccess.thecvf.com/content/CVPR2024/papers/Huang_PTT_Point-Trajectory_Transformer_for_Efficient_Temporal_3D_Object_Detection_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.08371 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Huang_PTT_Point-Trajectory_Transformer_for_Efficient_Temporal_3D_Object_Detection_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Huang_PTT_Point-Trajectory_Transformer_for_Efficient_Temporal_3D_Object_Detection_CVPR_2024_paper.html | CVPR 2024 | null | null |
Generative Proxemics: A Prior for 3D Social Interaction from Images | Lea Müller, Vickie Ye, Georgios Pavlakos, Michael Black, Angjoo Kanazawa | Social interaction is a fundamental aspect of human behavior and communication. The way individuals position themselves in relation to others, also known as proxemics, conveys social cues and affects the dynamics of social interaction. Reconstructing such interaction from images presents challenges because of mutual occlusion and the limited availability of large training datasets. To address this, we present a novel approach that learns a prior over the 3D proxemics of two people in close social interaction and demonstrate its use for single-view 3D reconstruction. We start by creating 3D training data of interacting people using image datasets with contact annotations. We then model the proxemics using a novel denoising diffusion model called BUDDI that learns the joint distribution over the poses of two people in close social interaction. Sampling from our generative proxemics model produces realistic 3D human interactions, which we validate through a perceptual study. We use BUDDI to reconstruct two people in close proximity from an image without any contact annotation, via an optimization approach that uses the diffusion model as a prior. Our approach recovers accurate 3D social interactions from noisy initial estimates, outperforming state-of-the-art methods. Our code, data, and model are available at: muelea.github.io/buddi. | https://openaccess.thecvf.com/content/CVPR2024/papers/Muller_Generative_Proxemics_A_Prior_for_3D_Social_Interaction_from_Images_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Muller_Generative_Proxemics_A_Prior_for_3D_Social_Interaction_from_Images_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Muller_Generative_Proxemics_A_Prior_for_3D_Social_Interaction_from_Images_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Muller_Generative_Proxemics_A_CVPR_2024_supplemental.pdf | null |
A Simple and Effective Point-based Network for Event Camera 6-DOFs Pose Relocalization | Hongwei Ren, Jiadong Zhu, Yue Zhou, Haotian Fu, Yulong Huang, Bojun Cheng | Event cameras exhibit remarkable attributes such as high dynamic range, asynchronicity, and low latency, making them highly suitable for vision tasks that involve high-speed motion in challenging lighting conditions. These cameras implicitly capture movement and depth information in events, making them appealing sensors for Camera Pose Relocalization (CPR) tasks. Nevertheless, existing event-based CPR networks neglect the pivotal fine-grained temporal information in events, resulting in unsatisfactory performance. Moreover, their energy-efficiency advantages are further compromised by the use of excessively complex models, hindering efficient deployment on edge devices. In this paper, we introduce PEPNet, a simple and effective point-based network designed to regress six degrees of freedom (6-DOFs) event camera poses. We rethink the relationship between the event camera and CPR tasks, leveraging the raw point cloud directly as network input to harness the high temporal resolution and inherent sparsity of events. PEPNet is adept at abstracting spatial and implicit temporal features through a hierarchical structure, and explicit temporal features through an Attentive Bi-directional Long Short-Term Memory (A-Bi-LSTM). By employing a carefully crafted lightweight design, PEPNet delivers state-of-the-art (SOTA) performance on both indoor and outdoor datasets with meager computational resources. Specifically, PEPNet attains significant 38% and 33% performance improvements on the random-split IJRR and M3ED datasets, respectively. Moreover, the lightweight version, PEPNet_tiny, achieves results comparable to the SOTA while employing a mere 0.5% of the parameters. | https://openaccess.thecvf.com/content/CVPR2024/papers/Ren_A_Simple_and_Effective_Point-based_Network_for_Event_Camera_6-DOFs_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Ren_A_Simple_and_Effective_Point-based_Network_for_Event_Camera_6-DOFs_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Ren_A_Simple_and_Effective_Point-based_Network_for_Event_Camera_6-DOFs_CVPR_2024_paper.html | CVPR 2024 | null | null |
Semantic-Aware Multi-Label Adversarial Attacks | Hassan Mahmood, Ehsan Elhamifar | Despite its importance, generating attacks for multi-label learning (MLL) models has received much less attention than multi-class recognition. Attacking an MLL model by optimizing a loss on the target set of labels often has the undesired consequence of changing the predictions for other labels. On the other hand, adding a loss on the remaining labels to keep them fixed leads to highly negatively correlated gradient directions, reducing the attack's effectiveness. In this paper, we develop a framework for crafting effective and semantic-aware adversarial attacks for MLL. First, to obtain an attack that leads to semantically consistent predictions across all labels, we find a minimal superset of the target labels, referred to as the consistent target set. To do so, we develop an efficient search algorithm over a knowledge graph which encodes label dependencies. Next, we propose an optimization that searches for an attack that modifies the predictions of labels in the consistent target set while ensuring other labels are not affected. This leads to an efficient algorithm that projects the gradient of the consistent-target-set loss onto the orthogonal direction of the gradient of the loss on the other labels. Our framework can generate attacks for different target set sizes and for MLL with thousands of labels (as in OpenImages). Finally, through extensive experiments on three datasets and several MLL models, we show that our method generates both successful and semantically consistent attacks. | https://openaccess.thecvf.com/content/CVPR2024/papers/Mahmood_Semantic-Aware_Multi-Label_Adversarial_Attacks_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Mahmood_Semantic-Aware_Multi-Label_Adversarial_Attacks_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Mahmood_Semantic-Aware_Multi-Label_Adversarial_Attacks_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Mahmood_Semantic-Aware_Multi-Label_Adversarial_CVPR_2024_supplemental.pdf | null |
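The projection step described in the abstract above, taking the gradient of the consistent-target-set loss orthogonal to the gradient of the loss on the remaining labels, is a standard vector projection; a minimal sketch follows, with function and variable names that are illustrative rather than taken from the paper's code:

```python
import numpy as np

def project_orthogonal(g_target, g_other):
    """Remove from g_target its component along g_other, so an attack
    step along the result leaves the loss on the other labels unchanged
    to first order."""
    g_other_norm_sq = np.dot(g_other, g_other)
    if g_other_norm_sq == 0.0:
        return g_target  # nothing to project against
    return g_target - (np.dot(g_target, g_other) / g_other_norm_sq) * g_other

# Toy example: the projected step is orthogonal to g_other.
g_t = np.array([1.0, 2.0, 0.0])   # gradient of target-set loss
g_o = np.array([0.0, 1.0, 0.0])   # gradient of loss on other labels
g_proj = project_orthogonal(g_t, g_o)
print(g_proj)  # [1. 0. 0.]
```

In an iterative attack, the perturbation would be updated along `g_proj` instead of `g_t` at each step.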
EasyDrag: Efficient Point-based Manipulation on Diffusion Models | Xingzhong Hou, Boxiao Liu, Yi Zhang, Jihao Liu, Yu Liu, Haihang You | Generative models are gaining increasing popularity, and the demand for precisely generated images is on the rise. However, generating an image that perfectly aligns with users' expectations is extremely challenging. The shapes of objects, the poses of animals, the structures of landscapes, and more may not match the user's desires, and this applies to real images as well. This is where point-based image editing becomes essential. An excellent image editing method needs to meet the following criteria: user-friendly interaction, high performance, and good generalization capability. Due to the limitations of StyleGAN, DragGAN exhibits limited robustness across diverse scenarios, while DragDiffusion lacks user-friendliness due to the necessity of LoRA fine-tuning and masks. In this paper, we introduce a novel interactive point-based image editing framework, called EasyDrag, that leverages pretrained diffusion models to achieve high-quality editing outcomes and user-friendliness. Extensive experimentation demonstrates that our approach surpasses DragDiffusion in terms of both image quality and editing precision for point-based image manipulation tasks. | https://openaccess.thecvf.com/content/CVPR2024/papers/Hou_EasyDrag_Efficient_Point-based_Manipulation_on_Diffusion_Models_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Hou_EasyDrag_Efficient_Point-based_Manipulation_on_Diffusion_Models_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Hou_EasyDrag_Efficient_Point-based_Manipulation_on_Diffusion_Models_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Hou_EasyDrag_Efficient_Point-based_CVPR_2024_supplemental.pdf | null |
Region-Based Representations Revisited | Michal Shlapentokh-Rothman, Ansel Blume, Yao Xiao, Yuqun Wu, Sethuraman TV, Heyi Tao, Jae Yong Lee, Wilfredo Torres, Yu-Xiong Wang, Derek Hoiem | We investigate whether region-based representations are effective for recognition. Regions were once a mainstay in recognition approaches, but pixel- and patch-based features are now used almost exclusively. We show that recent class-agnostic segmenters like SAM can be effectively combined with strong unsupervised representations like DINOv2 and used for a wide variety of tasks, including semantic segmentation, object-based image retrieval, and multi-image analysis. Once the masks and features are extracted, these representations, even with linear decoders, enable competitive performance, making them well suited to applications that require custom queries. The compactness of the representation also makes it well suited to video analysis and other problems requiring inference across many images. | https://openaccess.thecvf.com/content/CVPR2024/papers/Shlapentokh-Rothman_Region-Based_Representations_Revisited_CVPR_2024_paper.pdf | http://arxiv.org/abs/2402.02352 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Shlapentokh-Rothman_Region-Based_Representations_Revisited_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Shlapentokh-Rothman_Region-Based_Representations_Revisited_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Shlapentokh-Rothman_Region-Based_Representations_Revisited_CVPR_2024_supplemental.pdf | null |
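The region-based recipe in the abstract above, pooling dense patch features over class-agnostic masks and classifying with a linear decoder, can be sketched as follows. This is a generic illustration, not the authors' code; the masks and features are stand-ins for SAM-style masks and DINOv2-style patch features, and all names are illustrative:

```python
import numpy as np

def region_feature(patch_feats, mask):
    """Average-pool a dense feature map (H, W, D) over a boolean region
    mask (H, W) to obtain one D-dimensional region descriptor."""
    return patch_feats[mask].mean(axis=0)

# Toy example: a 4x4 feature map with 2-dim features and one region mask.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 4, 2))
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True              # hypothetical segmenter-produced region
f = region_feature(feats, mask)  # one compact vector per region

# A linear decoder on region descriptors is just a matrix product.
W = rng.normal(size=(3, 2))      # 3 classes, illustrative weights
logits = W @ f
```

The compactness claim follows from this shape: each region, however many pixels it covers, is reduced to a single D-dimensional vector.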
GenH2R: Learning Generalizable Human-to-Robot Handover via Scalable Simulation Demonstration and Imitation | Zifan Wang, Junyu Chen, Ziqing Chen, Pengwei Xie, Rui Chen, Li Yi | This paper presents GenH2R, a framework for learning generalizable vision-based human-to-robot (H2R) handover skills. The goal is to equip robots with the ability to reliably receive objects with unseen geometry handed over by humans along various complex trajectories. We acquire such generalizability by learning H2R handover at scale, with a comprehensive solution including procedural simulation asset creation, automated demonstration generation, and effective imitation learning. We leverage large-scale 3D model repositories, dexterous grasp generation methods, and curve-based 3D animation to create an H2R handover simulation environment named GenH2R-Sim, surpassing the number of scenes in existing simulators by three orders of magnitude. We further introduce a distillation-friendly demonstration generation method that automatically generates a million high-quality demonstrations suitable for learning. Finally, we present a 4D imitation learning method, augmented by a future forecasting objective, to distill demonstrations into a visuo-motor handover policy. Experimental evaluations in both simulators and the real world demonstrate significant improvements (at least +10% success rate) over baselines in all cases. | https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_GenH2R_Learning_Generalizable_Human-to-Robot_Handover_via_Scalable_Simulation_Demonstration_and_CVPR_2024_paper.pdf | http://arxiv.org/abs/2401.00929 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Wang_GenH2R_Learning_Generalizable_Human-to-Robot_Handover_via_Scalable_Simulation_Demonstration_and_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Wang_GenH2R_Learning_Generalizable_Human-to-Robot_Handover_via_Scalable_Simulation_Demonstration_and_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wang_GenH2R_Learning_Generalizable_CVPR_2024_supplemental.pdf | null |
Modality-Agnostic Structural Image Representation Learning for Deformable Multi-Modality Medical Image Registration | Tony C. W. Mok, Zi Li, Yunhao Bai, Jianpeng Zhang, Wei Liu, Yan-Jie Zhou, Ke Yan, Dakai Jin, Yu Shi, Xiaoli Yin, Le Lu, Ling Zhang | Establishing dense anatomical correspondence across distinct imaging modalities is a foundational yet challenging procedure for numerous medical image analysis studies and image-guided radiotherapy. Existing multi-modality image registration algorithms rely on statistics-based similarity measures or local structural image representations. However, the former is sensitive to locally varying noise, while the latter is not discriminative enough to cope with complex anatomical structures in multimodal scans, causing ambiguity in determining the anatomical correspondence across scans with different modalities. In this paper, we propose a modality-agnostic structural representation learning method which leverages Deep Neighbourhood Self-similarity (DNS) and anatomy-aware contrastive learning to learn discriminative and contrast-invariant deep structural image representations (DSIR) without the need for anatomical delineations or pre-aligned training images. We evaluate our method on multi-phase CT, abdomen MR-CT, and brain MR T1w-T2w registration. Comprehensive results demonstrate that our method is superior to conventional local structural representations and statistics-based similarity measures in terms of discriminability and accuracy. | https://openaccess.thecvf.com/content/CVPR2024/papers/Mok_Modality-Agnostic_Structural_Image_Representation_Learning_for_Deformable_Multi-Modality_Medical_Image_CVPR_2024_paper.pdf | http://arxiv.org/abs/2402.18933 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Mok_Modality-Agnostic_Structural_Image_Representation_Learning_for_Deformable_Multi-Modality_Medical_Image_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Mok_Modality-Agnostic_Structural_Image_Representation_Learning_for_Deformable_Multi-Modality_Medical_Image_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Mok_Modality-Agnostic_Structural_Image_CVPR_2024_supplemental.pdf | null |
Any-Shift Prompting for Generalization over Distributions | Zehao Xiao, Jiayi Shen, Mohammad Mahdi Derakhshani, Shengcai Liao, Cees G. M. Snoek | Image-language models with prompt learning have shown remarkable advances in numerous downstream vision tasks. Nevertheless, conventional prompt learning methods overfit the training distribution and lose generalization ability on test distributions. To improve generalization across various distribution shifts, we propose any-shift prompting: a general probabilistic inference framework that considers the relationship between training and test distributions during prompt learning. We explicitly connect training and test distributions in the latent space by constructing training and test prompts in a hierarchical architecture. Within this framework, the test prompt exploits the distribution relationships to guide the generalization of the CLIP image-language model from training to any test distribution. To effectively encode the distribution information and their relationships, we further introduce a transformer inference network with a pseudo-shift training mechanism. The network generates the tailored test prompt with both training and test information in a single feedforward pass, avoiding extra training costs at test time. Extensive experiments on twenty-three datasets demonstrate the effectiveness of any-shift prompting for generalization over various distribution shifts. | https://openaccess.thecvf.com/content/CVPR2024/papers/Xiao_Any-Shift_Prompting_for_Generalization_over_Distributions_CVPR_2024_paper.pdf | http://arxiv.org/abs/2402.10099 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Xiao_Any-Shift_Prompting_for_Generalization_over_Distributions_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Xiao_Any-Shift_Prompting_for_Generalization_over_Distributions_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Xiao_Any-Shift_Prompting_for_CVPR_2024_supplemental.pdf | null |
InterHandGen: Two-Hand Interaction Generation via Cascaded Reverse Diffusion | Jihyun Lee, Shunsuke Saito, Giljoo Nam, Minhyuk Sung, Tae-Kyun Kim | We present InterHandGen, a novel framework that learns the generative prior of two-hand interaction. Sampling from our model yields plausible and diverse two-hand shapes in close interaction, with or without an object. Our prior can be incorporated into any optimization or learning method to reduce ambiguity in an ill-posed setup. Our key observation is that directly modeling the joint distribution of multiple instances imposes high learning complexity due to its combinatorial nature. Thus, we propose to decompose the modeling of the joint distribution into the modeling of factored unconditional and conditional single-instance distributions. In particular, we introduce a diffusion model that learns the single-hand distribution, both unconditionally and conditioned on the other hand, via conditioning dropout. For sampling, we combine anti-penetration and classifier-free guidance to enable plausible generation. Furthermore, we establish a rigorous evaluation protocol for two-hand synthesis, where our method significantly outperforms baseline generative models in terms of plausibility and diversity. We also demonstrate that our diffusion prior can boost the performance of two-hand reconstruction from monocular in-the-wild images, achieving new state-of-the-art accuracy. | https://openaccess.thecvf.com/content/CVPR2024/papers/Lee_InterHandGen_Two-Hand_Interaction_Generation_via_Cascaded_Reverse_Diffusion_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.17422 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Lee_InterHandGen_Two-Hand_Interaction_Generation_via_Cascaded_Reverse_Diffusion_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Lee_InterHandGen_Two-Hand_Interaction_Generation_via_Cascaded_Reverse_Diffusion_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Lee_InterHandGen_Two-Hand_Interaction_CVPR_2024_supplemental.pdf | null |
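Conditioning dropout and classifier-free guidance, which the abstract above relies on, are standard diffusion-model mechanics; a minimal sketch of both follows. This is the generic technique, not the paper's implementation, and the anti-penetration guidance term is omitted; all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
NULL_COND = None  # in practice, a learned "no condition" embedding

def drop_condition(cond, p_drop=0.1):
    """Conditioning dropout at training time: with probability p_drop,
    train the denoiser without the condition (e.g. the other hand), so
    one network models both the unconditional and conditional
    single-instance distributions."""
    return NULL_COND if rng.random() < p_drop else cond

def guided_eps(eps_uncond, eps_cond, w=2.0):
    """Classifier-free guidance at sampling time: extrapolate from the
    unconditional noise prediction toward the conditional one."""
    return eps_uncond + w * (eps_cond - eps_uncond)

# Toy check with stand-in noise predictions.
e_u = np.array([0.0, 0.0])
e_c = np.array([1.0, -1.0])
print(guided_eps(e_u, e_c, w=2.0))  # [ 2. -2.]
```

At each reverse-diffusion step, the denoiser is queried twice (with and without the condition) and the guided prediction is used for the update.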
CPR-Coach: Recognizing Composite Error Actions based on Single-class Training | Shunli Wang, Shuaibing Wang, Dingkang Yang, Mingcheng Li, Haopeng Kuang, Xiao Zhao, Liuzhen Su, Peng Zhai, Lihua Zhang | Fine-grained medical action analysis plays a vital role in improving medical skill training efficiency, but it faces a shortage of both data and algorithms. Cardiopulmonary Resuscitation (CPR) is an essential skill in emergency treatment. Currently, the assessment of CPR skills mainly depends on dummies and trainers, leading to high training costs and low efficiency. For the first time, this paper constructs a vision-based system to perform error action recognition and skill assessment in CPR. Specifically, we define 13 types of single-error actions and 74 types of composite error actions during external cardiac compression and then develop a video dataset named CPR-Coach. Taking CPR-Coach as a benchmark, this paper investigates and compares the performance of existing action recognition models based on different data modalities. To solve the unavoidable "Single-class Training & Multi-class Testing" problem, we propose a human-cognition-inspired framework named ImagineNet to improve the model's multi-error recognition performance under restricted supervision. Extensive comparison and actual deployment experiments verify the effectiveness of the framework. We hope this work brings new inspiration to both the computer vision and medical skills training communities. The dataset and code are publicly available at https://github.com/Shunli-Wang/CPR-Coach. | https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_CPR-Coach_Recognizing_Composite_Error_Actions_based_on_Single-class_Training_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Wang_CPR-Coach_Recognizing_Composite_Error_Actions_based_on_Single-class_Training_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Wang_CPR-Coach_Recognizing_Composite_Error_Actions_based_on_Single-class_Training_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wang_CPR-Coach_Recognizing_Composite_CVPR_2024_supplemental.pdf | null |
Video2Game: Real-time Interactive Realistic and Browser-Compatible Environment from a Single Video | Hongchi Xia, Zhi-Hao Lin, Wei-Chiu Ma, Shenlong Wang | Creating high-quality and interactive virtual environments, such as games and simulators, often involves complex and costly manual modeling processes. In this paper, we present Video2Game, a novel approach that automatically converts videos of real-world scenes into realistic and interactive game environments. At the heart of our system are three core components: (i) a neural radiance fields (NeRF) module that effectively captures the geometry and visual appearance of the scene; (ii) a mesh module that distills the knowledge from the NeRF for faster rendering; and (iii) a physics module that models the interactions and physical dynamics among the objects. By following this carefully designed pipeline, one can construct an interactive and actionable digital replica of the real world. We benchmark our system on both indoor and large-scale outdoor scenes. We show that we can not only produce highly realistic renderings in real time but also build interactive games on top. | https://openaccess.thecvf.com/content/CVPR2024/papers/Xia_Video2Game_Real-time_Interactive_Realistic_and_Browser-Compatible_Environment_from_a_Single_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.09833 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Xia_Video2Game_Real-time_Interactive_Realistic_and_Browser-Compatible_Environment_from_a_Single_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Xia_Video2Game_Real-time_Interactive_Realistic_and_Browser-Compatible_Environment_from_a_Single_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Xia_Video2Game_Real-time_Interactive_CVPR_2024_supplemental.zip | null |
Tackling the Singularities at the Endpoints of Time Intervals in Diffusion Models | Pengze Zhang, Hubery Yin, Chen Li, Xiaohua Xie | Most diffusion models assume that the reverse process adheres to a Gaussian distribution. However, this approximation has not been rigorously validated, especially at the singularities where t=0 and t=1. Improperly dealing with such singularities leads to an average-brightness issue in applications and limits the generation of images with extreme brightness or darkness. We primarily focus on tackling the singularities from both theoretical and practical perspectives. We first establish error bounds for the reverse process approximation and showcase its Gaussian characteristics at the singularity time steps. Based on this theoretical insight, we confirm that the singularity at t=1 is conditionally removable, while the singularity at t=0 is an inherent property. Building on these conclusions, we propose a novel plug-and-play method, SingDiffusion, to address the initial singular time-step sampling, which not only effectively resolves the average-brightness issue for a wide range of diffusion models without extra training effort but also enhances their generation capability, achieving notably lower FID scores. | https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_Tackling_the_Singularities_at_the_Endpoints_of_Time_Intervals_in_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.08381 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Tackling_the_Singularities_at_the_Endpoints_of_Time_Intervals_in_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Tackling_the_Singularities_at_the_Endpoints_of_Time_Intervals_in_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhang_Tackling_the_Singularities_CVPR_2024_supplemental.pdf | null |
MatSynth: A Modern PBR Materials Dataset | Giuseppe Vecchio, Valentin Deschaintre | We introduce MatSynth, a dataset of 4000+ CC0 ultra-high-resolution PBR materials. Materials are crucial components of virtual relightable assets, defining the interaction of light at the surface of geometries. Given their importance, significant research effort has been dedicated to their representation, creation, and acquisition. However, in the past 6 years, most research in material acquisition or generation has relied either on the same unique dataset or on huge company-owned libraries of procedural materials. With this dataset, we propose a significantly larger, more diverse, and higher-resolution set of materials than previously publicly available. We carefully discuss the data collection process and demonstrate the benefits of this dataset for material acquisition and generation applications. The complete data further contains metadata with each material's origin, license, category, tags, creation method, and, when available, descriptions and physical size, as well as 3M+ renderings of the augmented materials at 1K resolution under various environment lightings. The MatSynth dataset is released through the project page at: https://www.gvecchio.com/matsynth. | https://openaccess.thecvf.com/content/CVPR2024/papers/Vecchio_MatSynth_A_Modern_PBR_Materials_Dataset_CVPR_2024_paper.pdf | http://arxiv.org/abs/2401.06056 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Vecchio_MatSynth_A_Modern_PBR_Materials_Dataset_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Vecchio_MatSynth_A_Modern_PBR_Materials_Dataset_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Vecchio_MatSynth_A_Modern_CVPR_2024_supplemental.zip | null |
CHAIN: Enhancing Generalization in Data-Efficient GANs via lipsCHitz continuity constrAIned Normalization | Yao Ni, Piotr Koniusz | Generative Adversarial Networks (GANs) have significantly advanced image generation, but their performance heavily depends on abundant training data. In scenarios with limited data, GANs often struggle with discriminator overfitting and unstable training. Batch Normalization (BN), despite being known for enhancing generalization and training stability, has rarely been used in the discriminator of data-efficient GANs. Our work addresses this gap by identifying a critical flaw in BN: the tendency for gradient explosion during the centering and scaling steps. To tackle this issue, we present CHAIN (lipsCHitz continuity constrAIned Normalization), which replaces the conventional centering step with zero-mean regularization and integrates a Lipschitz continuity constraint into the scaling step. CHAIN further enhances GAN training by adaptively interpolating the normalized and unnormalized features, effectively avoiding discriminator overfitting. Our theoretical analyses firmly establish CHAIN's effectiveness in reducing gradients in latent features and weights, improving stability and generalization in GAN training. Empirical evidence supports our theory. CHAIN achieves state-of-the-art results in data-limited scenarios on CIFAR-10/100, ImageNet, five low-shot, and seven high-resolution few-shot image datasets. | https://openaccess.thecvf.com/content/CVPR2024/papers/Ni_CHAIN_Enhancing_Generalization_in_Data-Efficient_GANs_via_lipsCHitz_continuity_constrAIned_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.00521 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Ni_CHAIN_Enhancing_Generalization_in_Data-Efficient_GANs_via_lipsCHitz_continuity_constrAIned_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Ni_CHAIN_Enhancing_Generalization_in_Data-Efficient_GANs_via_lipsCHitz_continuity_constrAIned_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ni_CHAIN_Enhancing_Generalization_CVPR_2024_supplemental.pdf | null |
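Two of the ideas named in the abstract above, dropping the centering step in favor of a zero-mean penalty, and interpolating normalized with unnormalized features, can be sketched in a few lines. This is a loose illustration under stated assumptions, NOT the paper's exact layer (in particular, the Lipschitz constraint on the scaling step is not reproduced here), and all names are illustrative:

```python
import numpy as np

def chain_like_norm(x, gamma, p=0.5, eps=1e-5, reg_weight=0.01):
    """Sketch of a centering-free normalization: (i) scale features
    without subtracting the mean, and instead return a zero-mean
    penalty on the per-feature batch means to be added to the loss;
    (ii) interpolate normalized and unnormalized features with
    weight p. x has shape (batch, features)."""
    mu = x.mean(axis=0)                       # per-feature batch mean
    var = x.var(axis=0)
    x_norm = gamma * x / np.sqrt(var + eps)   # scaling without centering
    out = p * x_norm + (1.0 - p) * x          # adaptive interpolation
    zero_mean_penalty = reg_weight * np.mean(mu ** 2)
    return out, zero_mean_penalty
```

In training, `zero_mean_penalty` would be added to the discriminator loss so the feature means are pushed toward zero instead of being subtracted outright.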
RTracker: Recoverable Tracking via PN Tree Structured Memory | Yuqing Huang, Xin Li, Zikun Zhou, Yaowei Wang, Zhenyu He, Ming-Hsuan Yang | Existing tracking methods mainly focus on learning better target representations or developing more robust prediction models to improve tracking performance. While tracking performance has significantly improved, the target-loss issue occurs frequently due to tracking failures, complete occlusion, or out-of-view situations. However, considerably less attention is paid to the self-recovery issue of tracking methods, which is crucial for practical applications. To this end, we propose a recoverable tracking framework, RTracker, that uses a tree-structured memory to dynamically associate a tracker and a detector to enable self-recovery ability. Specifically, we propose a Positive-Negative (PN) tree-structured memory to chronologically store and maintain positive and negative target samples. Upon the PN tree memory, we develop corresponding walking rules for determining the state of the target and define a set of control flows to unite the tracker and the detector in different tracking scenarios. Our core idea is to use the support samples of the positive and negative target categories to establish a relative distance-based criterion for a reliable assessment of target loss. The favorable performance in comparison against state-of-the-art methods on numerous challenging benchmarks demonstrates the effectiveness of the proposed algorithm. All source code and trained models will be released at https://github.com/NorahGreen/RTracker. | https://openaccess.thecvf.com/content/CVPR2024/papers/Huang_RTracker_Recoverable_Tracking_via_PN_Tree_Structured_Memory_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.19242 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Huang_RTracker_Recoverable_Tracking_via_PN_Tree_Structured_Memory_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Huang_RTracker_Recoverable_Tracking_via_PN_Tree_Structured_Memory_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Huang_RTracker_Recoverable_Tracking_CVPR_2024_supplemental.zip | null |
High-Quality Facial Geometry and Appearance Capture at Home | Yuxuan Han, Junfeng Lyu, Feng Xu | Facial geometry and appearance capture have demonstrated tremendous success in 3D scanning real humans in studios. Recent works propose to democratize this technique while keeping the results high quality. However, they are still inconvenient for daily usage. In addition, they focus on an easier problem of only capturing facial skin. This paper proposes a novel method for high-quality face capture, featuring an easy-to-use system and the capability to model the complete face with skin, mouth interior, hair, and eyes. We reconstruct facial geometry and appearance from a single co-located smartphone flashlight sequence captured in a dim room where the flashlight is the dominant light source (e.g. rooms with curtains or at night). To model the complete face, we propose a novel hybrid representation to effectively model both eyes and other facial regions, along with novel techniques to learn it from images. We apply a combined lighting model to compactly represent real illuminations and exploit a morphable face albedo model as a reflectance prior to disentangle diffuse and specular. Experiments show that our method can capture high-quality 3D relightable scans. Our code will be released. | https://openaccess.thecvf.com/content/CVPR2024/papers/Han_High-Quality_Facial_Geometry_and_Appearance_Capture_at_Home_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.03442 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Han_High-Quality_Facial_Geometry_and_Appearance_Capture_at_Home_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Han_High-Quality_Facial_Geometry_and_Appearance_Capture_at_Home_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Han_High-Quality_Facial_Geometry_CVPR_2024_supplemental.pdf | null |
DualAD: Disentangling the Dynamic and Static World for End-to-End Driving | Simon Doll, Niklas Hanselmann, Lukas Schneider, Richard Schulz, Marius Cordts, Markus Enzweiler, Hendrik P. A. Lensch | State-of-the-art approaches for autonomous driving integrate multiple sub-tasks of the overall driving task into a single pipeline that can be trained in an end-to-end fashion by passing latent representations between the different modules. In contrast to previous approaches that rely on a unified grid to represent the belief state of the scene, we propose dedicated representations to disentangle dynamic agents and static scene elements. This allows us to explicitly compensate for the effect of both ego and object motion between consecutive time steps and to flexibly propagate the belief state through time. Furthermore, dynamic objects can not only attend to the input camera images but also directly benefit from the inferred static scene structure via a novel dynamic-static cross-attention. Extensive experiments on the challenging nuScenes benchmark demonstrate the benefits of the proposed dual-stream design, especially for modelling highly dynamic agents in the scene, and highlight the improved temporal consistency of our approach. Our method, titled DualAD, not only outperforms independently trained single-task networks but also improves over previous state-of-the-art end-to-end models by a large margin on all tasks along the functional chain of driving. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Doll_DualAD_Disentangling_the_Dynamic_and_Static_World_for_End-to-End_Driving_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Doll_DualAD_Disentangling_the_Dynamic_and_Static_World_for_End-to-End_Driving_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Doll_DualAD_Disentangling_the_Dynamic_and_Static_World_for_End-to-End_Driving_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Doll_DualAD_Disentangling_the_CVPR_2024_supplemental.zip | null |
OTE: Exploring Accurate Scene Text Recognition Using One Token | Jianjun Xu, Yuxin Wang, Hongtao Xie, Yongdong Zhang | In this paper, we propose a novel framework to fully exploit the potential of a single vector for scene text recognition (STR). Different from previous sequence-to-sequence methods that rely on a sequence of visual tokens to represent scene text images, we prove that just one token is enough to characterize the entire text image and achieve accurate text recognition. Based on this insight, we introduce a new paradigm for STR called One Token rEcognizer (OTE). Specifically, we implement an image-to-vector encoder to extract the fine-grained global semantics, eliminating the need for sequential features. Furthermore, an elegant yet potent vector-to-sequence decoder is designed to adaptively diffuse global semantics to corresponding character locations, enabling both autoregressive and non-autoregressive decoding schemes. By executing decoding within a high-level representational space, our vector-to-sequence (V2S) approach avoids the alignment issues between visual tokens and character embeddings prevalent in traditional sequence-to-sequence methods. Remarkably, due to introducing character-wise fine-grained information, such global tokens also boost the performance of scene text retrieval tasks. Extensive experiments on synthetic and real datasets demonstrate the effectiveness of our method by achieving new state-of-the-art results on various public STR benchmarks. Our code is available at https://github.com/Xu-Jianjun/OTE. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Xu_OTE_Exploring_Accurate_Scene_Text_Recognition_Using_One_Token_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Xu_OTE_Exploring_Accurate_Scene_Text_Recognition_Using_One_Token_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Xu_OTE_Exploring_Accurate_Scene_Text_Recognition_Using_One_Token_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Xu_OTE_Exploring_Accurate_CVPR_2024_supplemental.pdf | null |
MULDE: Multiscale Log-Density Estimation via Denoising Score Matching for Video Anomaly Detection | Jakub Micorek, Horst Possegger, Dominik Narnhofer, Horst Bischof, Mateusz Kozinski | We propose a novel approach to video anomaly detection: we treat feature vectors extracted from videos as realizations of a random variable with a fixed distribution and model this distribution with a neural network. This lets us estimate the likelihood of test videos and detect video anomalies by thresholding the likelihood estimates. We train our video anomaly detector using a modification of denoising score matching, a method that injects training data with noise to facilitate modeling its distribution. To eliminate hyperparameter selection, we model the distribution of noisy video features across a range of noise levels and introduce a regularizer that tends to align the models for different levels of noise. At test time, we combine anomaly indications at multiple noise scales with a Gaussian mixture model. Running our video anomaly detector induces minimal delays, as inference requires merely extracting the features and forward-propagating them through a shallow neural network and a Gaussian mixture model. Our experiments on five popular video anomaly detection benchmarks demonstrate state-of-the-art performance both in the object-centric and in the frame-centric setup. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Micorek_MULDE_Multiscale_Log-Density_Estimation_via_Denoising_Score_Matching_for_Video_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.14497 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Micorek_MULDE_Multiscale_Log-Density_Estimation_via_Denoising_Score_Matching_for_Video_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Micorek_MULDE_Multiscale_Log-Density_Estimation_via_Denoising_Score_Matching_for_Video_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Micorek_MULDE_Multiscale_Log-Density_CVPR_2024_supplemental.pdf | null |
Your Image is My Video: Reshaping the Receptive Field via Image-To-Video Differentiable AutoAugmentation and Fusion | Sofia Casarin, Cynthia I. Ugwu, Sergio Escalera, Oswald Lanz | The landscape of deep learning research is moving towards innovative strategies to harness the true potential of data. Traditionally, emphasis has been on scaling model architectures, resulting in large and complex neural networks which can be difficult to train with limited computational resources. However, independently of the model size, data quality (i.e. amount and variability) is still a major factor that affects model generalization. In this work, we propose a novel technique to exploit available data through the use of automatic data augmentation for the tasks of image classification and semantic segmentation. We introduce the first Differentiable Augmentation Search method (DAS) to generate variations of images that can be processed as videos. Compared to previous approaches, DAS is extremely fast and flexible, allowing the search on very large search spaces in less than a GPU day. Our intuition is that the increased receptive field in the temporal dimension provided by DAS could also benefit the spatial receptive field. More specifically, we leverage DAS to guide the reshaping of the spatial receptive field by selecting task-dependent transformations. As a result, compared to standard augmentation alternatives, we improve in terms of accuracy on the ImageNet, CIFAR-10, CIFAR-100, Tiny-ImageNet, Pascal-VOC-2012, and CityScapes datasets when plugging in our DAS over different lightweight video backbones. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Casarin_Your_Image_is_My_Video_Reshaping_the_Receptive_Field_via_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.15194 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Casarin_Your_Image_is_My_Video_Reshaping_the_Receptive_Field_via_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Casarin_Your_Image_is_My_Video_Reshaping_the_Receptive_Field_via_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Casarin_Your_Image_is_CVPR_2024_supplemental.pdf | null |
PTQ4SAM: Post-Training Quantization for Segment Anything | Chengtao Lv, Hong Chen, Jinyang Guo, Yifu Ding, Xianglong Liu | Segment Anything Model (SAM) has achieved impressive performance in many computer vision tasks. However, as a large-scale model, the immense memory and computation costs hinder its practical deployment. In this paper, we propose a post-training quantization (PTQ) framework for Segment Anything Model, namely PTQ4SAM. First, we investigate the inherent bottleneck of SAM quantization attributed to the bimodal distribution in post-Key-Linear activations. We analyze its characteristics from both per-tensor and per-channel perspectives and propose a Bimodal Integration strategy, which utilizes a mathematically equivalent sign operation to transform the bimodal distribution into a relatively easy-to-quantize normal distribution offline. Second, SAM encompasses diverse attention mechanisms (i.e. self-attention and two-way cross-attention), resulting in substantial variations in the post-Softmax distributions. Therefore, we introduce an Adaptive Granularity Quantization for Softmax through searching the optimal power-of-two base, which is hardware-friendly. Extensive experimental results across various vision tasks (instance segmentation, semantic segmentation, and object detection), datasets, and model variants show the superiority of PTQ4SAM. For example, when quantizing SAM-L to 6-bit, we achieve near-lossless accuracy for instance segmentation (about a 0.5% drop) with a theoretical 3.9x acceleration. The code is available at https://github.com/chengtao-lv/PTQ4SAM. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Lv_PTQ4SAM_Post-Training_Quantization_for_Segment_Anything_CVPR_2024_paper.pdf | http://arxiv.org/abs/2405.03144 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Lv_PTQ4SAM_Post-Training_Quantization_for_Segment_Anything_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Lv_PTQ4SAM_Post-Training_Quantization_for_Segment_Anything_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Lv_PTQ4SAM_Post-Training_Quantization_CVPR_2024_supplemental.pdf | null |
Improving Bird's Eye View Semantic Segmentation by Task Decomposition | Tianhao Zhao, Yongcan Chen, Yu Wu, Tianyang Liu, Bo Du, Peilun Xiao, Shi Qiu, Hongda Yang, Guozhen Li, Yi Yang, Yutian Lin | Semantic segmentation in bird's eye view (BEV) plays a crucial role in autonomous driving. Previous methods usually follow an end-to-end pipeline, directly predicting the BEV segmentation map from monocular RGB inputs. However, a challenge arises because the RGB inputs and BEV targets come from distinct perspectives, making the direct point-to-point prediction hard to optimize. In this paper, we decompose the original BEV segmentation task into two stages, namely BEV map reconstruction and RGB-BEV feature alignment. In the first stage, we train a BEV autoencoder to reconstruct the BEV segmentation maps given a corrupted noisy latent representation, which urges the decoder to learn fundamental knowledge of typical BEV patterns. The second stage involves mapping RGB input images into the BEV latent space of the first stage, directly optimizing the correlations between the two views at the feature level. Our approach simplifies the complexity of combining perception and generation into distinct steps, equipping the model to handle intricate and challenging scenes effectively. Besides, we propose to transform the BEV segmentation map from the Cartesian to the polar coordinate system to establish the column-wise correspondence between RGB images and BEV maps. Moreover, our method requires neither multi-scale features nor camera intrinsic parameters for depth estimation, and saves computational overhead. Extensive experiments on nuScenes and Argoverse show the effectiveness and efficiency of our method. Code is available at https://github.com/happytianhao/TaDe. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Zhao_Improving_Birds_Eye_View_Semantic_Segmentation_by_Task_Decomposition_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Zhao_Improving_Birds_Eye_View_Semantic_Segmentation_by_Task_Decomposition_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Zhao_Improving_Birds_Eye_View_Semantic_Segmentation_by_Task_Decomposition_CVPR_2024_paper.html | CVPR 2024 | null | null |