title: string
authors: string
abstract: string
pdf: string
arXiv: string
bibtex: string
url: string
detail_url: string
tags: string
supp: string
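The columns above describe each record that follows: title, authors, abstract, and links to the paper PDF, arXiv page, detail page, tags, and supplementary material, with missing values shown as null. As a minimal, illustrative sketch of how such rows might be consumed (not part of the dataset itself), the Python snippet below defines a typed row with these columns and filters rows that carry both an arXiv link and supplementary material. The example row reuses values from the first record below; the bibtex value and the way the full row list is loaded are left open because they are not shown in this listing.

# Minimal sketch: one row per paper, using the column names listed above.
# The bibtex value is set to None because it does not appear in this listing.
from typing import List, Optional, TypedDict


class PaperRow(TypedDict):
    title: str
    authors: str
    abstract: str
    pdf: Optional[str]
    arXiv: Optional[str]
    bibtex: Optional[str]
    url: Optional[str]
    detail_url: Optional[str]
    tags: Optional[str]
    supp: Optional[str]


def with_arxiv_and_supp(rows: List[PaperRow]) -> List[PaperRow]:
    """Keep CVPR 2024 rows that provide both an arXiv link and supplementary material."""
    return [
        row for row in rows
        if row["tags"] == "CVPR 2024" and row["arXiv"] and row["supp"]
    ]


example_row: PaperRow = {
    "title": "The More You See in 2D the More You Perceive in 3D",
    "authors": "Xinyang Han, Zelin Gao, Angjoo Kanazawa, Shubham Goel, Yossi Gandelsman",
    "abstract": "Humans can infer 3D structure from 2D images ...",
    "pdf": "https://openaccess.thecvf.com/content/CVPR2024/papers/Han_The_More_You_See_in_2D_the_More_You_Perceive_CVPR_2024_paper.pdf",
    "arXiv": "http://arxiv.org/abs/2404.03652",
    "bibtex": None,  # not shown in this listing
    "url": "https://openaccess.thecvf.com",
    "detail_url": "https://openaccess.thecvf.com/content/CVPR2024/html/Han_The_More_You_See_in_2D_the_More_You_Perceive_CVPR_2024_paper.html",
    "tags": "CVPR 2024",
    "supp": "https://openaccess.thecvf.com/content/CVPR2024/supplemental/Han_The_More_You_CVPR_2024_supplemental.pdf",
}

print([row["title"] for row in with_arxiv_and_supp([example_row])])

In a real pipeline, the rows would be read from however the dataset is actually distributed (for example, a CSV or JSON export) rather than constructed by hand.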
The More You See in 2D the More You Perceive in 3D
Xinyang Han, Zelin Gao, Angjoo Kanazawa, Shubham Goel, Yossi Gandelsman
Humans can infer 3D structure from 2D images of an object based on past experience, and improve their 3D understanding as they see more images. Inspired by this behavior, we introduce SAP3D, a system for 3D reconstruction and novel view synthesis from an arbitrary number of unposed images. Given a few unposed images of an object, we adapt a pre-trained view-conditioned diffusion model together with the camera poses of the images via test-time fine-tuning. The adapted diffusion model and the obtained camera poses are then utilized as instance-specific priors for 3D reconstruction and novel view synthesis. We show that as the number of input images increases, the performance of our approach improves, bridging the gap between optimization-based prior-less 3D reconstruction methods and single-image-to-3D diffusion-based methods. We demonstrate our system on real images as well as standard synthetic benchmarks. Our ablation studies confirm that this adaptation behavior is key for more accurate 3D understanding.
https://openaccess.thecvf.com/content/CVPR2024/papers/Han_The_More_You_See_in_2D_the_More_You_Perceive_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.03652
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Han_The_More_You_See_in_2D_the_More_You_Perceive_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Han_The_More_You_See_in_2D_the_More_You_Perceive_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Han_The_More_You_CVPR_2024_supplemental.pdf
null
GLiDR: Topologically Regularized Graph Generative Network for Sparse LiDAR Point Clouds
Prashant Kumar, Kshitij Madhav Bhat, Vedang Bhupesh Shenvi Nadkarni, Prem Kalra
Sparse LiDAR point clouds cause severe loss of detail of static structures and reduce the density of static points available for navigation. Reduced density can be detrimental to navigation under several scenarios. We observe that, despite high sparsity, in most cases the global topology of LiDAR outlining the static structures can be inferred. We utilize this property to obtain a backbone skeleton of a LiDAR scan in the form of a single connected component that is a proxy to its global topology. We utilize the backbone to augment new points along static structures to overcome sparsity. Newly introduced points could correspond to existing static structures or to static points that were earlier obstructed by dynamic objects. To the best of our knowledge, we are the first to use such a strategy for sparse LiDAR point clouds. Existing solutions close to our approach fail to identify and preserve the global static LiDAR topology and generate sub-optimal points. We propose GLiDR, a Graph Generative network that is topologically regularized using 0-dimensional Persistent Homology (PH) constraints. This enables GLiDR to introduce newer static points along a topologically consistent global static LiDAR backbone. GLiDR generates precise static points using 32x sparser dynamic scans and performs better than the baselines across three datasets. GLiDR generates a valuable byproduct: an accurate binary segmentation mask of static and dynamic objects that is helpful for navigation planning and safety in constrained environments. The newly introduced static points allow GLiDR to outperform LiDAR-based navigation using SLAM in several settings.
https://openaccess.thecvf.com/content/CVPR2024/papers/Kumar_GLiDR_Topologically_Regularized_Graph_Generative_Network_for_Sparse_LiDAR_Point_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.00068
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Kumar_GLiDR_Topologically_Regularized_Graph_Generative_Network_for_Sparse_LiDAR_Point_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Kumar_GLiDR_Topologically_Regularized_Graph_Generative_Network_for_Sparse_LiDAR_Point_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Kumar_GLiDR_Topologically_Regularized_CVPR_2024_supplemental.zip
null
Separate and Conquer: Decoupling Co-occurrence via Decomposition and Representation for Weakly Supervised Semantic Segmentation
Zhiwei Yang, Kexue Fu, Minghong Duan, Linhao Qu, Shuo Wang, Zhijian Song
Weakly supervised semantic segmentation (WSSS) with image-level labels aims to achieve segmentation tasks without dense annotations. However, attributed to the frequent coupling of co-occurring objects and the limited supervision from image-level labels, the challenging co-occurrence problem is widely present and leads to false activation of objects in WSSS. In this work, we devise a 'Separate and Conquer' scheme, SeCo, to tackle this issue from dimensions of image space and feature space. In the image space, we propose to 'separate' the co-occurring objects with image decomposition by subdividing images into patches. Importantly, we assign each patch a category tag from Class Activation Maps (CAMs), which spatially helps remove the co-context bias and guide the subsequent representation. In the feature space, we propose to 'conquer' the false activation by enhancing semantic representation with multi-granularity knowledge contrast. To this end, a dual-teacher-single-student architecture is designed and tag-guided contrast is conducted, which guarantee the correctness of knowledge and further facilitate the discrepancy among co-contexts. We streamline the multi-staged WSSS pipeline end-to-end and tackle this issue without external supervision. Extensive experiments are conducted, validating the efficiency of our method and the superiority over previous single-staged and even multi-staged competitors on PASCAL VOC and MS COCO. Code is available at https://github.com/zwyang6/SeCo.git.
https://openaccess.thecvf.com/content/CVPR2024/papers/Yang_Separate_and_Conquer_Decoupling_Co-occurrence_via_Decomposition_and_Representation_for_CVPR_2024_paper.pdf
http://arxiv.org/abs/2402.18467
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Yang_Separate_and_Conquer_Decoupling_Co-occurrence_via_Decomposition_and_Representation_for_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Yang_Separate_and_Conquer_Decoupling_Co-occurrence_via_Decomposition_and_Representation_for_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yang_Separate_and_Conquer_CVPR_2024_supplemental.pdf
null
BiPer: Binary Neural Networks using a Periodic Function
Edwin Vargas, Claudia V. Correa, Carlos Hinojosa, Henry Arguello
Quantized neural networks employ reduced precision representations for both weights and activations. This quantization process significantly reduces the memory requirements and computational complexity of the network. Binary Neural Networks (BNNs) are the extreme quantization case, representing values with just one bit. Since the sign function is typically used to map real values to binary values, smooth approximations are introduced to mimic the gradients during error backpropagation. Thus, the mismatch between the forward and backward models corrupts the direction of the gradient, causing training inconsistency problems and performance degradation. In contrast to current BNN approaches, we propose to employ a binary periodic (BiPer) function during binarization. Specifically, we use a square wave for the forward pass to obtain the binary values and employ the trigonometric sine function with the same period of the square wave as a differentiable surrogate during the backward pass. We demonstrate that this approach can control the quantization error by using the frequency of the periodic function and improves network performance. Extensive experiments validate the effectiveness of BiPer in benchmark datasets and network architectures, with improvements of up to 1% and 0.69% with respect to state-of-the-art methods in the classification task over CIFAR-10 and ImageNet, respectively. Our code is publicly available at https://github.com/edmav4/BiPer.
https://openaccess.thecvf.com/content/CVPR2024/papers/Vargas_BiPer_Binary_Neural_Networks_using_a_Periodic_Function_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.01278
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Vargas_BiPer_Binary_Neural_Networks_using_a_Periodic_Function_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Vargas_BiPer_Binary_Neural_Networks_using_a_Periodic_Function_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Vargas_BiPer_Binary_Neural_CVPR_2024_supplemental.pdf
null
Unifying Automatic and Interactive Matting with Pretrained ViTs
Zixuan Ye, Wenze Liu, He Guo, Yujia Liang, Chaoyi Hong, Hao Lu, Zhiguo Cao
Automatic and interactive matting largely improve image matting by respectively alleviating the need for auxiliary input and enabling object selection. Due to their different settings on whether prompts exist, they suffer from weakness in either instance completeness or region details. Also, when dealing with different scenarios, directly switching between the two matting models introduces inconvenience and a higher workload. Therefore, we wonder whether we can alleviate the limitations of both settings while achieving unification to facilitate more convenient use. Our key idea is to offer saliency guidance for the automatic mode to enable its attention to detailed regions, and also to refine the instance completeness in the interactive mode by replacing the binary mask guidance with a more probabilistic form. With different guidance for each mode, we can achieve unification through adaptable guidance, defined as saliency information in automatic mode and the user cue in interactive mode. It is instantiated as the candidate feature in our method, an automatic switch for the class token in pretrained ViTs and the average feature of user prompts, controlled by the existence of user prompts. Then we use the candidate feature to generate a probabilistic similarity map as the guidance to alleviate the over-reliance on the binary mask. Extensive experiments show that our method can adapt well to both automatic and interactive scenarios with a more lightweight framework. Code is available at https://github.com/coconuthust/SmartMatting.
https://openaccess.thecvf.com/content/CVPR2024/papers/Ye_Unifying_Automatic_and_Interactive_Matting_with_Pretrained_ViTs_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Ye_Unifying_Automatic_and_Interactive_Matting_with_Pretrained_ViTs_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Ye_Unifying_Automatic_and_Interactive_Matting_with_Pretrained_ViTs_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ye_Unifying_Automatic_and_CVPR_2024_supplemental.pdf
null
Segment Any Event Streams via Weighted Adaptation of Pivotal Tokens
Zhiwen Chen, Zhiyu Zhu, Yifan Zhang, Junhui Hou, Guangming Shi, Jinjian Wu
In this paper, we delve into the nuanced challenge of tailoring the Segment Anything Models (SAMs) for integration with event data, with the overarching objective of attaining robust and universal object segmentation within the event-centric domain. One pivotal issue at the heart of this endeavor is the precise alignment and calibration of embeddings derived from event-centric data such that they harmoniously coincide with those originating from RGB imagery. Capitalizing on the vast repositories of datasets with paired events and RGB images, our proposition is to harness and extrapolate the profound knowledge encapsulated within the pre-trained SAM framework. As a cornerstone to achieving this, we introduce a multi-scale feature distillation methodology. This methodology rigorously optimizes the alignment of token embeddings originating from event data with their RGB image counterparts, thereby preserving and enhancing the robustness of the overall architecture. Considering the distinct significance that token embeddings from intermediate layers hold for higher-level embeddings, our strategy is centered on accurately calibrating the pivotal token embeddings. This targeted calibration is aimed at effectively managing the discrepancies in high-level embeddings originating from both the event and image domains. Extensive experiments on different datasets demonstrate the effectiveness of the proposed distillation method. Code is available at https://github.com/happychenpipi/EventSAM.
https://openaccess.thecvf.com/content/CVPR2024/papers/Chen_Segment_Any_Event_Streams_via_Weighted_Adaptation_of_Pivotal_Tokens_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Chen_Segment_Any_Event_Streams_via_Weighted_Adaptation_of_Pivotal_Tokens_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Chen_Segment_Any_Event_Streams_via_Weighted_Adaptation_of_Pivotal_Tokens_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Chen_Segment_Any_Event_CVPR_2024_supplemental.pdf
null
AnyDoor: Zero-shot Object-level Image Customization
Xi Chen, Lianghua Huang, Yu Liu, Yujun Shen, Deli Zhao, Hengshuang Zhao
This work presents AnyDoor, a diffusion-based image generator with the power to teleport target objects to new scenes at user-specified locations with desired shapes. Instead of tuning parameters for each object, our model is trained only once and effortlessly generalizes to diverse object-scene combinations at the inference stage. Such a challenging zero-shot setting requires an adequate characterization of a certain object. To this end, we complement the commonly used identity feature with detail features, which are carefully designed to maintain appearance details yet allow versatile local variations (e.g., lighting, orientation, posture, etc.), supporting the object in favorably blending with different surroundings. We further propose to borrow knowledge from video datasets, where we can observe various forms (i.e., along the time axis) of a single object, leading to stronger model generalizability and robustness. Extensive experiments demonstrate the superiority of our approach over existing alternatives as well as its great potential in real-world applications such as virtual try-on, shape editing, and object swapping.
https://openaccess.thecvf.com/content/CVPR2024/papers/Chen_AnyDoor_Zero-shot_Object-level_Image_Customization_CVPR_2024_paper.pdf
http://arxiv.org/abs/2307.09481
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Chen_AnyDoor_Zero-shot_Object-level_Image_Customization_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Chen_AnyDoor_Zero-shot_Object-level_Image_Customization_CVPR_2024_paper.html
CVPR 2024
null
null
Commonsense Prototype for Outdoor Unsupervised 3D Object Detection
Hai Wu, Shijia Zhao, Xun Huang, Chenglu Wen, Xin Li, Cheng Wang
The prevalent approaches of unsupervised 3D object detection follow cluster-based pseudo-label generation and iterative self-training processes. However, the challenge arises due to the sparsity of LiDAR scans, which leads to pseudo-labels with erroneous size and position, resulting in subpar detection performance. To tackle this problem, this paper introduces a Commonsense Prototype-based Detector, termed CPD, for unsupervised 3D object detection. CPD first constructs Commonsense Prototype (CProto) characterized by high-quality bounding box and dense points, based on commonsense intuition. Subsequently, CPD refines the low-quality pseudo-labels by leveraging the size prior from CProto. Furthermore, CPD enhances the detection accuracy of sparsely scanned objects by the geometric knowledge from CProto. CPD outperforms state-of-the-art unsupervised 3D detectors on the Waymo Open Dataset (WOD), PandaSet, and KITTI datasets by a large margin. Besides, by training CPD on WOD and testing on KITTI, CPD attains 90.85% and 81.01% 3D Average Precision on easy and moderate car classes, respectively. These achievements position CPD in close proximity to fully supervised detectors, highlighting the significance of our method. The code will be available at https://github.com/hailanyi/CPD.
https://openaccess.thecvf.com/content/CVPR2024/papers/Wu_Commonsense_Prototype_for_Outdoor_Unsupervised_3D_Object_Detection_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.16493
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Wu_Commonsense_Prototype_for_Outdoor_Unsupervised_3D_Object_Detection_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Wu_Commonsense_Prototype_for_Outdoor_Unsupervised_3D_Object_Detection_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wu_Commonsense_Prototype_for_CVPR_2024_supplemental.pdf
null
Lookahead Exploration with Neural Radiance Representation for Continuous Vision-Language Navigation
Zihan Wang, Xiangyang Li, Jiahao Yang, Yeqi Liu, Junjie Hu, Ming Jiang, Shuqiang Jiang
Vision-and-language navigation (VLN) enables the agent to navigate to a remote location following the natural language instruction in 3D environments. At each navigation step, the agent selects from possible candidate locations and then makes the move. For better navigation planning, the lookahead exploration strategy aims to effectively evaluate the agent's next action by accurately anticipating the future environment of candidate locations. To this end, some existing works predict RGB images for future environments, while this strategy suffers from image distortion and high computational cost. To address these issues, we propose the pre-trained hierarchical neural radiance representation model (HNR) to produce multi-level semantic features for future environments, which are more robust and efficient than pixel-wise RGB reconstruction. Furthermore, with the predicted future environmental representations, our lookahead VLN model is able to construct the navigable future path tree and select the optimal path via efficient parallel evaluation. Extensive experiments on the VLN-CE datasets confirm the effectiveness of our method.
https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_Lookahead_Exploration_with_Neural_Radiance_Representation_for_Continuous_Vision-Language_Navigation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.01943
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Wang_Lookahead_Exploration_with_Neural_Radiance_Representation_for_Continuous_Vision-Language_Navigation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Wang_Lookahead_Exploration_with_Neural_Radiance_Representation_for_Continuous_Vision-Language_Navigation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wang_Lookahead_Exploration_with_CVPR_2024_supplemental.pdf
null
Clustering Propagation for Universal Medical Image Segmentation
Yuhang Ding, Liulei Li, Wenguan Wang, Yi Yang
Prominent solutions for medical image segmentation are typically tailored for automatic or interactive setups, posing challenges in facilitating progress achieved in one task to another. This also necessitates separate models for each task, duplicating both training time and parameters. To address the above issues, we introduce S2VNet, a universal framework that leverages Slice-to-Volume propagation to unify automatic/interactive segmentation within a single model and one training session. Inspired by clustering-based segmentation techniques, S2VNet makes full use of the slice-wise structure of volumetric data by initializing cluster centers from the cluster results of the previous slice. This enables knowledge acquired from prior slices to assist in the segmentation of the current slice, further efficiently bridging the communication between remote slices using mere 2D networks. Moreover, such a framework readily accommodates interactive segmentation with no architectural change, simply by initializing centroids from user inputs. S2VNet distinguishes itself by swift inference speeds and reduced memory consumption compared to prevailing 3D solutions. It can also handle multi-class interactions, with each of them serving to initialize different centroids. Experiments on three benchmarks demonstrate that S2VNet surpasses task-specified solutions on both automatic and interactive setups.
https://openaccess.thecvf.com/content/CVPR2024/papers/Ding_Clustering_Propagation_for_Universal_Medical_Image_Segmentation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.16646
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Ding_Clustering_Propagation_for_Universal_Medical_Image_Segmentation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Ding_Clustering_Propagation_for_Universal_Medical_Image_Segmentation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ding_Clustering_Propagation_for_CVPR_2024_supplemental.pdf
null
MoPE-CLIP: Structured Pruning for Efficient Vision-Language Models with Module-wise Pruning Error Metric
Haokun Lin, Haoli Bai, Zhili Liu, Lu Hou, Muyi Sun, Linqi Song, Ying Wei, Zhenan Sun
Vision-language pre-trained models have achieved impressive performance on various downstream tasks. However, their large model sizes hinder their utilization on platforms with limited computational resources. We find that directly using smaller pre-trained models and applying magnitude-based pruning on CLIP models leads to inflexibility and inferior performance. Recent efforts for VLP compression either adopt uni-modal compression metrics, resulting in limited performance, or involve costly mask-search processes with learnable masks. In this paper, we first propose the Module-wise Pruning Error (MoPE) metric, accurately assessing CLIP module importance by performance decline on cross-modal tasks. Using the MoPE metric, we introduce a unified pruning framework applicable to both pre-training and task-specific fine-tuning compression stages. For pre-training, MoPE-CLIP effectively leverages knowledge from the teacher model, significantly reducing pre-training costs while maintaining strong zero-shot capabilities. For fine-tuning, consecutive pruning from width to depth yields highly competitive task-specific models. Extensive experiments in two stages demonstrate the effectiveness of the MoPE metric, and MoPE-CLIP outperforms previous state-of-the-art VLP compression methods.
https://openaccess.thecvf.com/content/CVPR2024/papers/Lin_MoPE-CLIP_Structured_Pruning_for_Efficient_Vision-Language_Models_with_Module-wise_Pruning_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Lin_MoPE-CLIP_Structured_Pruning_for_Efficient_Vision-Language_Models_with_Module-wise_Pruning_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Lin_MoPE-CLIP_Structured_Pruning_for_Efficient_Vision-Language_Models_with_Module-wise_Pruning_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Lin_MoPE-CLIP_Structured_Pruning_CVPR_2024_supplemental.pdf
null
Learning Vision from Models Rivals Learning Vision from Data
Yonglong Tian, Lijie Fan, Kaifeng Chen, Dina Katabi, Dilip Krishnan, Phillip Isola
We introduce SynCLR, a novel approach for learning visual representations exclusively from synthetic images, without any real data. We synthesize a large dataset of image captions using LLMs, then use an off-the-shelf text-to-image model to generate multiple images corresponding to each synthetic caption. We perform visual representation learning on these synthetic images via contrastive learning, treating images sharing the same caption as positive pairs. The resulting representations demonstrate remarkable transferability, competing favorably with other general-purpose visual representation learners such as CLIP and DINO v2 in image classification tasks. Furthermore, in dense prediction tasks such as semantic segmentation, SynCLR outperforms previous self-supervised methods by a significant margin, e.g., improving over MAE and iBOT by 5.0 and 3.1 mIoU on ADE20k for ViT-B/16.
https://openaccess.thecvf.com/content/CVPR2024/papers/Tian_Learning_Vision_from_Models_Rivals_Learning_Vision_from_Data_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.17742
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Tian_Learning_Vision_from_Models_Rivals_Learning_Vision_from_Data_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Tian_Learning_Vision_from_Models_Rivals_Learning_Vision_from_Data_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Tian_Learning_Vision_from_CVPR_2024_supplemental.pdf
null
Leveraging Frame Affinity for sRGB-to-RAW Video De-rendering
Chen Zhang, Wencheng Han, Yang Zhou, Jianbing Shen, Cheng-zhong Xu, Wentao Liu
Unprocessed RAW video has shown distinct advantages over sRGB video in video editing and computer vision tasks. However, capturing RAW video is challenging due to limitations in bandwidth and storage. Various methods have been proposed to address similar issues in single-image RAW capture through de-rendering. These methods utilize both the metadata and the sRGB image to perform sRGB-to-RAW de-rendering and recover high-quality single-frame RAW data. However, metadata-based methods always require additional computation for online metadata generation, imposing a severe burden on mobile camera devices for high-frame-rate RAW video capture. To address this issue, we propose a framework that utilizes frame affinity to achieve high-quality sRGB-to-RAW video reconstruction. Our approach consists of two main steps. The first step, temporal affinity prior extraction, uses motion information between adjacent frames to obtain a reference RAW image. The second step, spatial feature fusion and mapping, learns a pixel-level mapping function using scene-specific and position-specific features provided by the previous frame. Our method can be easily applied to current mobile camera equipment without complicated adaptations or added burden. To demonstrate the effectiveness of our approach, we introduce the first RAW Video De-rendering Benchmark. In this benchmark, our method outperforms state-of-the-art RAW image reconstruction methods even without image-level metadata.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_Leveraging_Frame_Affinity_for_sRGB-to-RAW_Video_De-rendering_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Leveraging_Frame_Affinity_for_sRGB-to-RAW_Video_De-rendering_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Leveraging_Frame_Affinity_for_sRGB-to-RAW_Video_De-rendering_CVPR_2024_paper.html
CVPR 2024
null
null
Adapting Short-Term Transformers for Action Detection in Untrimmed Videos
Min Yang, Huan Gao, Ping Guo, Limin Wang
Vision Transformer (ViT) has shown high potential in video recognition, owing to its flexible design, adaptable self-attention mechanisms, and the efficacy of masked pre-training. Yet, it remains unclear how to adapt these pre-trained short-term ViTs for temporal action detection (TAD) in untrimmed videos. The existing works treat them as off-the-shelf feature extractors for each short-trimmed snippet without capturing the fine-grained relation among different snippets in a broader temporal context. To mitigate this issue, this paper focuses on designing a new mechanism for adapting these pre-trained ViT models as a unified long-form video transformer to fully unleash its modeling power in capturing inter-snippet relation, while still keeping low computation overhead and memory consumption for efficient TAD. To this end, we design effective cross-snippet propagation modules to gradually exchange short-term video information among different snippets from two levels. For inner-backbone information propagation, we introduce a cross-snippet propagation strategy to enable multi-snippet temporal feature interaction inside the backbone. For post-backbone information propagation, we propose temporal transformer layers for further clip-level modeling. With the plain ViT-B pre-trained with VideoMAE, our end-to-end temporal action detector (ViT-TAD) yields a very competitive performance to previous temporal action detectors, reaching up to 69.5 average mAP on THUMOS14, 37.40 average mAP on ActivityNet-1.3, and 17.20 average mAP on FineAction.
https://openaccess.thecvf.com/content/CVPR2024/papers/Yang_Adapting_Short-Term_Transformers_for_Action_Detection_in_Untrimmed_Videos_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.01897
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Yang_Adapting_Short-Term_Transformers_for_Action_Detection_in_Untrimmed_Videos_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Yang_Adapting_Short-Term_Transformers_for_Action_Detection_in_Untrimmed_Videos_CVPR_2024_paper.html
CVPR 2024
null
null
The Mirrored Influence Hypothesis: Efficient Data Influence Estimation by Harnessing Forward Passes
Myeongseob Ko, Feiyang Kang, Weiyan Shi, Ming Jin, Zhou Yu, Ruoxi Jia
Large-scale black-box models have become ubiquitous across numerous applications. Understanding the influence of individual training data sources on predictions made by these models is crucial for improving their trustworthiness. Current influence estimation techniques involve computing gradients for every training point or repeated training on different subsets. These approaches face obvious computational challenges when scaled up to large datasets and models. In this paper, we introduce and explore the Mirrored Influence Hypothesis, highlighting a reciprocal nature of influence between training and test data. Specifically, it suggests that evaluating the influence of training data on test predictions can be reformulated as an equivalent yet inverse problem: assessing how the predictions for training samples would be altered if the model were trained on specific test samples. Through both empirical and theoretical validations, we demonstrate the wide applicability of our hypothesis. Inspired by this, we introduce a new method for estimating the influence of training data, which requires calculating gradients for specific test samples, paired with a forward pass for each training point. This approach can capitalize on the common asymmetry in scenarios where the number of test samples under concurrent examination is much smaller than the scale of the training dataset, thus gaining a significant improvement in efficiency compared to existing approaches. We demonstrate the applicability of our method across a range of scenarios, including data attribution in diffusion models, data leakage detection, analysis of memorization, mislabeled data detection, and tracing behavior in language models.
https://openaccess.thecvf.com/content/CVPR2024/papers/Ko_The_Mirrored_Influence_Hypothesis_Efficient_Data_Influence_Estimation_by_Harnessing_CVPR_2024_paper.pdf
http://arxiv.org/abs/2402.08922
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Ko_The_Mirrored_Influence_Hypothesis_Efficient_Data_Influence_Estimation_by_Harnessing_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Ko_The_Mirrored_Influence_Hypothesis_Efficient_Data_Influence_Estimation_by_Harnessing_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ko_The_Mirrored_Influence_CVPR_2024_supplemental.pdf
null
SOAC: Spatio-Temporal Overlap-Aware Multi-Sensor Calibration using Neural Radiance Fields
Quentin Herau, Nathan Piasco, Moussab Bennehar, Luis Roldao, Dzmitry Tsishkou, Cyrille Migniot, Pascal Vasseur, Cédric Demonceaux
In rapidly-evolving domains such as autonomous driving, the use of multiple sensors with different modalities is crucial to ensure high operational precision and stability. To correctly exploit the information provided by each sensor in a single common frame, it is essential for these sensors to be accurately calibrated. In this paper, we leverage the ability of Neural Radiance Fields (NeRF) to represent different sensor modalities in a common volumetric representation to achieve robust and accurate spatio-temporal sensor calibration. By designing a partitioning approach based on the visible part of the scene for each sensor, we formulate the calibration problem using only the overlapping areas. This strategy results in a more robust and accurate calibration that is less prone to failure. We demonstrate that our approach works on outdoor urban scenes by validating it on multiple established driving datasets. Results show that our method is able to achieve better accuracy and robustness compared to existing methods.
https://openaccess.thecvf.com/content/CVPR2024/papers/Herau_SOAC_Spatio-Temporal_Overlap-Aware_Multi-Sensor_Calibration_using_Neural_Radiance_Fields_CVPR_2024_paper.pdf
http://arxiv.org/abs/2311.15803
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Herau_SOAC_Spatio-Temporal_Overlap-Aware_Multi-Sensor_Calibration_using_Neural_Radiance_Fields_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Herau_SOAC_Spatio-Temporal_Overlap-Aware_Multi-Sensor_Calibration_using_Neural_Radiance_Fields_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Herau_SOAC_Spatio-Temporal_Overlap-Aware_CVPR_2024_supplemental.pdf
null
G^3-LQ: Marrying Hyperbolic Alignment with Explicit Semantic-Geometric Modeling for 3D Visual Grounding
Yuan Wang, Yali Li, Shengjin Wang
Grounding referred objects in 3D scenes is a burgeoning vision-language task, pivotal for propelling Embodied AI, as it endeavors to connect the 3D physical world with free-form descriptions. Compared to the 2D counterparts, challenges posed by the variability of 3D visual grounding remain relatively unsolved in existing studies: 1) the underlying geometric and complex spatial relationships in the 3D scene; 2) the inherent complexity of 3D grounded language; 3) the inconsistencies between text and geometric features. To tackle these issues, we propose G^3-LQ, a DEtection TRansformer-based model tailored for the 3D visual grounding task. G^3-LQ explicitly models Geometric-aware visual representations and Generates fine-Grained Language-guided object Queries in an overarching framework, which comprises two dedicated modules. Specifically, the Position Adaptive Geometric Exploring (PAGE) unearths underlying information of 3D objects from the perspectives of geometric details and spatial relationships. The Fine-grained Language-guided Query Selection (Flan-QS) delves into the syntactic structure of texts and generates object queries that exhibit higher relevance towards fine-grained text features. Finally, a pioneering Poincare Semantic Alignment (PSA) loss establishes semantic-geometry consistencies by modeling non-linear vision-text feature mappings and aligning them on a hyperbolic prototype, the Poincare ball. Extensive experiments verify the superiority of our G^3-LQ method, trumping the state-of-the-arts by a considerable margin.
https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_G3-LQ_Marrying_Hyperbolic_Alignment_with_Explicit_Semantic-Geometric_Modeling_for_3D_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Wang_G3-LQ_Marrying_Hyperbolic_Alignment_with_Explicit_Semantic-Geometric_Modeling_for_3D_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Wang_G3-LQ_Marrying_Hyperbolic_Alignment_with_Explicit_Semantic-Geometric_Modeling_for_3D_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wang_G3-LQ_Marrying_Hyperbolic_CVPR_2024_supplemental.pdf
null
Garment Recovery with Shape and Deformation Priors
Ren Li, Corentin Dumery, Benoît Guillard, Pascal Fua
While modeling people wearing tight-fitting clothing has made great strides in recent years, loose-fitting clothing remains a challenge. We propose a method that delivers realistic garment models from real-world images, regardless of garment shape or deformation. To this end, we introduce a fitting approach that utilizes shape and deformation priors learned from synthetic data to accurately capture garment shapes and deformations, including large ones. Not only does our approach recover the garment geometry accurately, it also yields models that can be directly used by downstream applications such as animation and simulation.
https://openaccess.thecvf.com/content/CVPR2024/papers/Li_Garment_Recovery_with_Shape_and_Deformation_Priors_CVPR_2024_paper.pdf
http://arxiv.org/abs/2311.10356
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Li_Garment_Recovery_with_Shape_and_Deformation_Priors_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Li_Garment_Recovery_with_Shape_and_Deformation_Priors_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_Garment_Recovery_with_CVPR_2024_supplemental.pdf
null
Psychometry: An Omnifit Model for Image Reconstruction from Human Brain Activity
Ruijie Quan, Wenguan Wang, Zhibo Tian, Fan Ma, Yi Yang
Reconstructing the viewed images from human brain activity bridges human and computer vision through the Brain-Computer Interface. The inherent variability in brain function between individuals leads existing literature to focus on acquiring separate models for each individual using their respective brain signal data, ignoring commonalities between these data. In this article, we devise Psychometry, an omnifit model for reconstructing images from functional Magnetic Resonance Imaging (fMRI) obtained from different subjects. Psychometry incorporates an omni mixture-of-experts (Omni MoE) module where all the experts work together to capture the inter-subject commonalities, while each expert associated with subject-specific parameters copes with the individual differences. Moreover, Psychometry is equipped with a retrieval-enhanced inference strategy, termed Ecphory, which aims to enhance the learned fMRI representation via retrieving from prestored subject-specific memories. These designs collectively render Psychometry omnifit and efficient, enabling it to capture both inter-subject commonality and individual specificity across subjects. As a result, the enhanced fMRI representations serve as conditional signals to guide a generation model to reconstruct high-quality and realistic images, establishing Psychometry as state-of-the-art in terms of both high-level and low-level metrics.
https://openaccess.thecvf.com/content/CVPR2024/papers/Quan_Psychometry_An_Omnifit_Model_for_Image_Reconstruction_from_Human_Brain_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.20022
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Quan_Psychometry_An_Omnifit_Model_for_Image_Reconstruction_from_Human_Brain_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Quan_Psychometry_An_Omnifit_Model_for_Image_Reconstruction_from_Human_Brain_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Quan_Psychometry_An_Omnifit_CVPR_2024_supplemental.pdf
null
Exploring Regional Clues in CLIP for Zero-Shot Semantic Segmentation
Yi Zhang, Meng-Hao Guo, Miao Wang, Shi-Min Hu
CLIP has demonstrated marked progress in visual recognition due to its powerful pre-training on large-scale image-text pairs. However, a critical challenge still remains: how to transfer image-level knowledge into pixel-level understanding tasks such as semantic segmentation. In this paper, to solve the mentioned challenge, we analyze the gap between the capability of the CLIP model and the requirement of the zero-shot semantic segmentation task. Based on our analysis and observations, we propose a novel method for zero-shot semantic segmentation, dubbed CLIP-RC (CLIP with Regional Clues), bringing two main insights. On the one hand, a region-level bridge is necessary to provide fine-grained semantics. On the other hand, overfitting should be mitigated during the training stage. Benefiting from the above discoveries, CLIP-RC achieves state-of-the-art performance on various zero-shot semantic segmentation benchmarks, including PASCAL VOC, PASCAL Context, and COCO-Stuff 164K. Code will be available at https://github.com/Jittor/JSeg.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_Exploring_Regional_Clues_in_CLIP_for_Zero-Shot_Semantic_Segmentation_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Exploring_Regional_Clues_in_CLIP_for_Zero-Shot_Semantic_Segmentation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Exploring_Regional_Clues_in_CLIP_for_Zero-Shot_Semantic_Segmentation_CVPR_2024_paper.html
CVPR 2024
null
null
Move as You Say Interact as You Can: Language-guided Human Motion Generation with Scene Affordance
Zan Wang, Yixin Chen, Baoxiong Jia, Puhao Li, Jinlu Zhang, Jingze Zhang, Tengyu Liu, Yixin Zhu, Wei Liang, Siyuan Huang
Despite significant advancements in text-to-motion synthesis, generating language-guided human motion within 3D environments poses substantial challenges. These challenges stem primarily from (i) the absence of powerful generative models capable of jointly modeling natural language, 3D scenes, and human motion, and (ii) the generative models' intensive data requirements, contrasted with the scarcity of comprehensive, high-quality language-scene-motion datasets. To tackle these issues, we introduce a novel two-stage framework that employs scene affordance as an intermediate representation, effectively linking 3D scene grounding and conditional motion generation. Our framework comprises an Affordance Diffusion Model (ADM) for predicting an explicit affordance map and an Affordance-to-Motion Diffusion Model (AMDM) for generating plausible human motions. By leveraging scene affordance maps, our method overcomes the difficulty in generating human motion under multimodal condition signals, especially when training with limited data lacking extensive language-scene-motion pairs. Our extensive experiments demonstrate that our approach consistently outperforms all baselines on established benchmarks, including HumanML3D and HUMANISE. Additionally, we validate our model's exceptional generalization capabilities on a specially curated evaluation set featuring previously unseen descriptions and scenes.
https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_Move_as_You_Say_Interact_as_You_Can_Language-guided_Human_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.18036
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Wang_Move_as_You_Say_Interact_as_You_Can_Language-guided_Human_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Wang_Move_as_You_Say_Interact_as_You_Can_Language-guided_Human_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wang_Move_as_You_CVPR_2024_supplemental.pdf
null
Choose What You Need: Disentangled Representation Learning for Scene Text Recognition Removal and Editing
Boqiang Zhang, Hongtao Xie, Zuan Gao, Yuxin Wang
Scene text images contain not only style information (font, background) but also content information (character texture). Different scene text tasks need different information, but previous representation learning methods use tightly coupled features for all tasks, resulting in sub-optimal performance. We propose a Disentangled Representation Learning framework (DARLING) aimed at disentangling these two types of features for improved adaptability in better addressing various downstream tasks (choose what you really need). Specifically, we synthesize a dataset of image pairs with identical style but different content. Based on the dataset, we decouple the two types of features by the supervision design. Clearly, we directly split the visual representation into style and content features; the content features are supervised by a text recognition loss, while an alignment loss aligns the style features in the image pairs. Then, style features are employed in reconstructing the counterpart image via an image decoder with a prompt that indicates the counterpart's content. Such an operation effectively decouples the features based on their distinctive properties. To the best of our knowledge, this is the first time in the field of scene text that disentangles the inherent properties of the text images. Our method achieves state-of-the-art performance in Scene Text Recognition, Removal, and Editing.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_Choose_What_You_Need_Disentangled_Representation_Learning_for_Scene_Text_CVPR_2024_paper.pdf
http://arxiv.org/abs/2405.04377
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Choose_What_You_Need_Disentangled_Representation_Learning_for_Scene_Text_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Choose_What_You_Need_Disentangled_Representation_Learning_for_Scene_Text_CVPR_2024_paper.html
CVPR 2024
null
null
Generalizable Face Landmarking Guided by Conditional Face Warping
Jiayi Liang, Haotian Liu, Hongteng Xu, Dixin Luo
As a significant step for human face modeling, editing, and generation, face landmarking aims at extracting facial keypoints from images. A generalizable face landmarker is required in practice because real-world facial images, e.g., the avatars in animations and games, are often stylized in various ways. However, achieving generalizable face landmarking is challenging due to the diversity of facial styles and the scarcity of labeled stylized faces. In this study, we propose a simple but effective paradigm to learn a generalizable face landmarker based on labeled real human faces and unlabeled stylized faces. Our method learns the face landmarker as the key module of a conditional face warper. Given a pair of real and stylized facial images, the conditional face warper predicts a warping field from the real face to the stylized one, in which the face landmarker predicts the ending points of the warping field and provides us with high-quality pseudo landmarks for the corresponding stylized facial images. Applying an alternating optimization strategy, we learn the face landmarker to minimize i) the discrepancy between the stylized faces and the warped real ones and ii) the prediction errors of both real and pseudo landmarks. Experiments on various datasets show that our method outperforms existing state-of-the-art domain adaptation methods in face landmarking tasks, leading to a face landmarker with better generalizability. Code is available at https://plustwo0.github.io/project-face-landmarker.
https://openaccess.thecvf.com/content/CVPR2024/papers/Liang_Generalizable_Face_Landmarking_Guided_by_Conditional_Face_Warping_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.12322
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Liang_Generalizable_Face_Landmarking_Guided_by_Conditional_Face_Warping_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Liang_Generalizable_Face_Landmarking_Guided_by_Conditional_Face_Warping_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Liang_Generalizable_Face_Landmarking_CVPR_2024_supplemental.pdf
null
Sat2Scene: 3D Urban Scene Generation from Satellite Images with Diffusion
Zuoyue Li, Zhenqiang Li, Zhaopeng Cui, Marc Pollefeys, Martin R. Oswald
Directly generating scenes from satellite imagery offers exciting possibilities for integration into applications like games and map services. However, challenges arise from significant view changes and scene scale. Previous efforts mainly focused on image or video generation, lacking exploration into the adaptability of scene generation for arbitrary views. Existing 3D generation works either operate at the object level or struggle to utilize the geometry obtained from satellite imagery. To overcome these limitations, we propose a novel architecture for direct 3D scene generation by introducing diffusion models into 3D sparse representations and combining them with neural rendering techniques. Specifically, our approach first generates texture colors at the point level for a given geometry using a 3D diffusion model, which is then transformed into a scene representation in a feed-forward manner. The representation can be utilized to render arbitrary views, which would excel in both single-frame quality and inter-frame consistency. Experiments on two city-scale datasets show that our model demonstrates proficiency in generating photo-realistic street-view image sequences and cross-view urban scenes from satellite imagery.
https://openaccess.thecvf.com/content/CVPR2024/papers/Li_Sat2Scene_3D_Urban_Scene_Generation_from_Satellite_Images_with_Diffusion_CVPR_2024_paper.pdf
http://arxiv.org/abs/2401.10786
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Li_Sat2Scene_3D_Urban_Scene_Generation_from_Satellite_Images_with_Diffusion_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Li_Sat2Scene_3D_Urban_Scene_Generation_from_Satellite_Images_with_Diffusion_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_Sat2Scene_3D_Urban_CVPR_2024_supplemental.pdf
null
Control4D: Efficient 4D Portrait Editing with Text
Ruizhi Shao, Jingxiang Sun, Cheng Peng, Zerong Zheng, Boyao Zhou, Hongwen Zhang, Yebin Liu
We introduce Control4D, an innovative framework for editing dynamic 4D portraits using text instructions. Our method addresses the prevalent challenges in 4D editing, notably the inefficiencies of existing 4D representations and the inconsistent editing effect caused by diffusion-based editors. We first propose GaussianPlanes, a novel 4D representation that makes Gaussian Splatting more structured by applying plane-based decomposition in 3D space and time. This enhances both efficiency and robustness in 4D editing. Furthermore, we propose to leverage a 4D generator to learn a more continuous generation space from inconsistent edited images produced by the diffusion-based editor, which effectively improves the consistency and quality of 4D editing. Comprehensive evaluation demonstrates the superiority of Control4D, including significantly reduced training time, high-quality rendering, and spatial-temporal consistency in 4D portrait editing.
https://openaccess.thecvf.com/content/CVPR2024/papers/Shao_Control4D_Efficient_4D_Portrait_Editing_with_Text_CVPR_2024_paper.pdf
http://arxiv.org/abs/2305.20082
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Shao_Control4D_Efficient_4D_Portrait_Editing_with_Text_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Shao_Control4D_Efficient_4D_Portrait_Editing_with_Text_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Shao_Control4D_Efficient_4D_CVPR_2024_supplemental.pdf
null
Symphonize 3D Semantic Scene Completion with Contextual Instance Queries
Haoyi Jiang, Tianheng Cheng, Naiyu Gao, Haoyang Zhang, Tianwei Lin, Wenyu Liu, Xinggang Wang
3D Semantic Scene Completion (SSC) has emerged as a nascent and pivotal undertaking in autonomous driving, aiming to predict the voxel occupancy within volumetric scenes. However, prevailing methodologies primarily focus on voxel-wise feature aggregation while neglecting instance semantics and scene context. In this paper, we present a novel paradigm termed Symphonies (Scene-from-Insts) that delves into the integration of instance queries to orchestrate 2D-to-3D reconstruction and 3D scene modeling. Leveraging our proposed Serial Instance-Propagated Attentions, Symphonies dynamically encodes instance-centric semantics, facilitating intricate interactions between the image and volumetric domains. Simultaneously, Symphonies fosters holistic scene comprehension by capturing context through the efficient fusion of instance queries, alleviating geometric ambiguities such as occlusion and perspective errors through contextual scene reasoning. Experimental results demonstrate that Symphonies achieves state-of-the-art performance on the challenging SemanticKITTI and SSCBench-KITTI-360 benchmarks, yielding remarkable mIoU scores of 15.04 and 18.58, respectively. These results showcase the promising advancements of our paradigm. The code for our method is available at https://github.com/hustvl/Symphonies.
https://openaccess.thecvf.com/content/CVPR2024/papers/Jiang_Symphonize_3D_Semantic_Scene_Completion_with_Contextual_Instance_Queries_CVPR_2024_paper.pdf
http://arxiv.org/abs/2306.15670
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Jiang_Symphonize_3D_Semantic_Scene_Completion_with_Contextual_Instance_Queries_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Jiang_Symphonize_3D_Semantic_Scene_Completion_with_Contextual_Instance_Queries_CVPR_2024_paper.html
CVPR 2024
null
null
Loopy-SLAM: Dense Neural SLAM with Loop Closures
Lorenzo Liso, Erik Sandström, Vladimir Yugay, Luc Van Gool, Martin R. Oswald
Neural RGBD SLAM techniques have shown promise in dense Simultaneous Localization And Mapping (SLAM), yet face challenges such as error accumulation during camera tracking, resulting in distorted maps. In response, we introduce Loopy-SLAM, which globally optimizes poses and the dense 3D model. We use frame-to-model tracking with a data-driven point-based submap generation method and trigger loop closures online by performing global place recognition. Robust pose graph optimization is used to rigidly align the local submaps. As our representation is point-based, map corrections can be performed efficiently without the need to store the entire history of input frames used for mapping, as typically required by methods employing a grid-based mapping structure. Evaluation on the synthetic Replica and real-world TUM-RGBD and ScanNet datasets demonstrates competitive or superior performance in tracking, mapping, and rendering accuracy when compared to existing dense neural RGBD SLAM methods. Project page: notchla.github.io/Loopy-SLAM.
https://openaccess.thecvf.com/content/CVPR2024/papers/Liso_Loopy-SLAM_Dense_Neural_SLAM_with_Loop_Closures_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Liso_Loopy-SLAM_Dense_Neural_SLAM_with_Loop_Closures_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Liso_Loopy-SLAM_Dense_Neural_SLAM_with_Loop_Closures_CVPR_2024_paper.html
CVPR 2024
null
null
CLIPtone: Unsupervised Learning for Text-based Image Tone Adjustment
Hyeongmin Lee, Kyoungkook Kang, Jungseul Ok, Sunghyun Cho
Recent image tone adjustment (or enhancement) approaches have predominantly adopted supervised learning for learning human-centric perceptual assessment. However, these approaches are constrained by intrinsic challenges of supervised learning. Primarily, the requirement for expertly-curated or retouched images escalates the data acquisition expenses. Moreover, their coverage of target styles is confined to stylistic variants inferred from the training data. To surmount the above challenges, we propose an unsupervised learning-based approach for text-based image tone adjustment, CLIPtone, that extends an existing image enhancement method to accommodate natural language descriptions. Specifically, we design a hyper-network to adaptively modulate the pretrained parameters of a backbone model based on a text description. To assess whether an adjusted image aligns with its text description without a ground-truth image, we utilize CLIP, which is trained on a vast set of language-image pairs and thus encompasses the knowledge of human perception. The major advantages of our approach are threefold: (i) minimal data collection expenses, (ii) support for a range of adjustments, and (iii) the ability to handle novel text descriptions unseen in training. The efficacy of the proposed method is demonstrated through comprehensive experiments, including a user study.
https://openaccess.thecvf.com/content/CVPR2024/papers/Lee_CLIPtone_Unsupervised_Learning_for_Text-based_Image_Tone_Adjustment_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.01123
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Lee_CLIPtone_Unsupervised_Learning_for_Text-based_Image_Tone_Adjustment_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Lee_CLIPtone_Unsupervised_Learning_for_Text-based_Image_Tone_Adjustment_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Lee_CLIPtone_Unsupervised_Learning_CVPR_2024_supplemental.pdf
null
ToonerGAN: Reinforcing GANs for Obfuscating Automated Facial Indexing
Kartik Thakral, Shashikant Prasad, Stuti Aswani, Mayank Vatsa, Richa Singh
The rapid evolution of automatic facial indexing technologies increases the risk of compromising personal and sensitive information. To address the issue we propose creating cartoon avatars or 'toon avatars' designed to effectively obscure identity features. The primary objective is to deceive current AI systems preventing them from accurately identifying individuals while making minimal modifications to their facial features. Moreover we aim to ensure that a human observer can still recognize the person depicted in these altered avatar images. To achieve this we introduce 'ToonerGAN' a novel approach that utilizes Generative Adversarial Networks (GANs) to craft personalized cartoon avatars. The ToonerGAN framework consists of a style module and a de-identification module that work together to produce high-resolution realistic cartoon images. For the efficient training of our network we have developed an extensive dataset named 'ToonSet' comprising approximately 23000 facial images and their cartoon renditions. Through comprehensive experiments and benchmarking against existing datasets including CelebA-HQ our method demonstrates superior performance in obfuscating identity while preserving the utility of data. Additionally a user-centric study to explore the effectiveness of ToonerGAN has yielded some compelling observations.
https://openaccess.thecvf.com/content/CVPR2024/papers/Thakral_ToonerGAN_Reinforcing_GANs_for_Obfuscating_Automated_Facial_Indexing_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Thakral_ToonerGAN_Reinforcing_GANs_for_Obfuscating_Automated_Facial_Indexing_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Thakral_ToonerGAN_Reinforcing_GANs_for_Obfuscating_Automated_Facial_Indexing_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Thakral_ToonerGAN_Reinforcing_GANs_CVPR_2024_supplemental.pdf
null
Content-Adaptive Non-Local Convolution for Remote Sensing Pansharpening
Yule Duan, Xiao Wu, Haoyu Deng, Liang-Jian Deng
Currently machine learning-based methods for remote sensing pansharpening have progressed rapidly. However existing pansharpening methods often do not fully exploit differentiating regional information in non-local spaces thereby limiting the effectiveness of the methods and resulting in redundant learning parameters. In this paper we introduce a so-called content-adaptive non-local convolution (CANConv) a novel method tailored for remote sensing image pansharpening. Specifically CANConv employs adaptive convolution ensuring spatial adaptability and incorporates non-local self-similarity through the similarity relationship partition (SRP) and the partition-wise adaptive convolution (PWAC) sub-modules. Furthermore we also propose a corresponding network architecture called CANNet which mainly utilizes the multi-scale self-similarity. Extensive experiments demonstrate the superior performance of CANConv compared with recent promising fusion methods. Besides we substantiate the method's effectiveness through visualization ablation experiments and comparison with existing methods on multiple test sets. The source code is publicly available at https://github.com/duanyll/CANConv.
https://openaccess.thecvf.com/content/CVPR2024/papers/Duan_Content-Adaptive_Non-Local_Convolution_for_Remote_Sensing_Pansharpening_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.07543
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Duan_Content-Adaptive_Non-Local_Convolution_for_Remote_Sensing_Pansharpening_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Duan_Content-Adaptive_Non-Local_Convolution_for_Remote_Sensing_Pansharpening_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Duan_Content-Adaptive_Non-Local_Convolution_CVPR_2024_supplemental.pdf
null
Codebook Transfer with Part-of-Speech for Vector-Quantized Image Modeling
Baoquan Zhang, Huaibin Wang, Chuyao Luo, Xutao Li, Guotao Liang, Yunming Ye, Xiaochen Qi, Yao He
Vector-Quantized Image Modeling (VQIM) is a fundamental research problem in image synthesis which aims to represent an image with a discrete token sequence. Existing studies effectively address this problem by learning a discrete codebook from scratch and in a code-independent manner to quantize continuous representations into discrete tokens. However learning a codebook from scratch and in a code-independent manner is highly challenging which may be a key reason for codebook collapse i.e. some code vectors are rarely optimized without regard to the relationship between codes and good codebook priors and eventually die off. In this paper inspired by pretrained language models we find that these language models have actually pretrained a superior codebook on large text corpora but such information is rarely exploited in VQIM. To this end we propose a novel codebook transfer framework with part-of-speech called VQCT which aims to transfer a well-trained codebook from pretrained language models to VQIM for robust codebook learning. Specifically we first introduce a pretrained codebook from language models and part-of-speech knowledge as priors. Then we construct a vision-related codebook with these priors for achieving codebook transfer. Finally a novel codebook transfer network is designed to exploit abundant semantic relationships between codes contained in pretrained codebooks for robust VQIM codebook learning. Experimental results on four datasets show that our VQCT method achieves superior VQIM performance over previous state-of-the-art methods.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_Codebook_Transfer_with_Part-of-Speech_for_Vector-Quantized_Image_Modeling_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.10071
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Codebook_Transfer_with_Part-of-Speech_for_Vector-Quantized_Image_Modeling_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Codebook_Transfer_with_Part-of-Speech_for_Vector-Quantized_Image_Modeling_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhang_Codebook_Transfer_with_CVPR_2024_supplemental.pdf
null
Learning Inclusion Matching for Animation Paint Bucket Colorization
Yuekun Dai, Shangchen Zhou, Qinyue Li, Chongyi Li, Chen Change Loy
Colorizing line art is a pivotal task in the production of hand-drawn cel animation. This typically involves digital painters using a paint bucket tool to manually color each segment enclosed by lines based on RGB values predetermined by a color designer. This frame-by-frame process is both arduous and time-intensive. Current automated methods mainly focus on segment matching. This technique migrates colors from a reference to the target frame by aligning features within line-enclosed segments across frames. However issues like occlusion and wrinkles in animations often disrupt these direct correspondences leading to mismatches. In this work we introduce a new learning-based inclusion matching pipeline which directs the network to comprehend the inclusion relationships between segments rather than relying solely on direct visual correspondences. Our method features a two-stage pipeline that integrates a coarse color warping module with an inclusion matching module enabling more nuanced and accurate colorization. To facilitate the training of our network we also develop a unique dataset referred to as PaintBucket-Character. This dataset includes rendered line arts alongside their colorized counterparts featuring various 3D characters. Extensive experiments demonstrate the effectiveness and superiority of our method over existing techniques.
https://openaccess.thecvf.com/content/CVPR2024/papers/Dai_Learning_Inclusion_Matching_for_Animation_Paint_Bucket_Colorization_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.18342
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Dai_Learning_Inclusion_Matching_for_Animation_Paint_Bucket_Colorization_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Dai_Learning_Inclusion_Matching_for_Animation_Paint_Bucket_Colorization_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Dai_Learning_Inclusion_Matching_CVPR_2024_supplemental.pdf
null
Editable Scene Simulation for Autonomous Driving via Collaborative LLM-Agents
Yuxi Wei, Zi Wang, Yifan Lu, Chenxin Xu, Changxing Liu, Hao Zhao, Siheng Chen, Yanfeng Wang
Scene simulation in autonomous driving has gained significant attention because of its huge potential for generating customized data. However existing editable scene simulation approaches face limitations in terms of user interaction efficiency multi-camera photo-realistic rendering and external digital assets integration. To address these challenges this paper introduces ChatSim the first system that enables editable photo-realistic 3D driving scene simulations via natural language commands with external digital assets. To enable editing with high command flexibility ChatSim leverages a large language model (LLM) agent collaboration framework. To generate photo-realistic outcomes ChatSim employs a novel multi-camera neural radiance field method. Furthermore to unleash the potential of extensive high-quality digital assets ChatSim employs a novel multi-camera lighting estimation method to achieve scene-consistent assets' rendering. Our experiments on Waymo Open Dataset demonstrate that ChatSim can handle complex language commands and generate corresponding photo-realistic scene videos. Code can be accessed at: https://github.com/yifanlu0227/ChatSim.
https://openaccess.thecvf.com/content/CVPR2024/papers/Wei_Editable_Scene_Simulation_for_Autonomous_Driving_via_Collaborative_LLM-Agents_CVPR_2024_paper.pdf
http://arxiv.org/abs/2402.05746
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Wei_Editable_Scene_Simulation_for_Autonomous_Driving_via_Collaborative_LLM-Agents_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Wei_Editable_Scene_Simulation_for_Autonomous_Driving_via_Collaborative_LLM-Agents_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wei_Editable_Scene_Simulation_CVPR_2024_supplemental.pdf
null
SAM-6D: Segment Anything Model Meets Zero-Shot 6D Object Pose Estimation
Jiehong Lin, Lihua Liu, Dekun Lu, Kui Jia
Zero-shot 6D object pose estimation involves the detection of novel objects with their 6D poses in cluttered scenes presenting significant challenges for model generalizability. Fortunately the recent Segment Anything Model (SAM) has showcased remarkable zero-shot transfer performance which provides a promising solution to tackle this task. Motivated by this we introduce SAM-6D a novel framework designed to realize the task through two steps including instance segmentation and pose estimation. Given the target objects SAM-6D employs two dedicated sub-networks namely Instance Segmentation Model (ISM) and Pose Estimation Model (PEM) to perform these steps on cluttered RGB-D images. ISM takes SAM as an advanced starting point to generate all possible object proposals and selectively preserves valid ones through meticulously crafted object matching scores in terms of semantics appearance and geometry. By treating pose estimation as a partial-to-partial point matching problem PEM performs a two-stage point matching process featuring a novel design of background tokens to construct dense 3D-3D correspondence ultimately yielding the pose estimates. Without bells and whistles SAM-6D outperforms the existing methods on the seven core datasets of the BOP Benchmark for both instance segmentation and pose estimation of novel objects.
https://openaccess.thecvf.com/content/CVPR2024/papers/Lin_SAM-6D_Segment_Anything_Model_Meets_Zero-Shot_6D_Object_Pose_Estimation_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Lin_SAM-6D_Segment_Anything_Model_Meets_Zero-Shot_6D_Object_Pose_Estimation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Lin_SAM-6D_Segment_Anything_Model_Meets_Zero-Shot_6D_Object_Pose_Estimation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Lin_SAM-6D_Segment_Anything_CVPR_2024_supplemental.pdf
null
InceptionNeXt: When Inception Meets ConvNeXt
Weihao Yu, Pan Zhou, Shuicheng Yan, Xinchao Wang
Inspired by the long-range modeling ability of ViTs large-kernel convolutions are widely studied and adopted recently to enlarge the receptive field and improve model performance like the remarkable work ConvNeXt which employs 7x7 depthwise convolution. Although such a depthwise operator only consumes a few FLOPs it largely harms the model efficiency on powerful computing devices due to the high memory access costs. For example ConvNeXt-T has FLOPs similar to ResNet-50 but only achieves 60% of its throughput when trained on A100 GPUs with full precision. Although reducing the kernel size of ConvNeXt can improve speed it results in significant performance degradation which poses a challenging problem: How to speed up large-kernel-based CNN models while preserving their performance. To tackle this issue inspired by Inceptions we propose to decompose large-kernel depthwise convolution into four parallel branches along the channel dimension i.e. a small square kernel two orthogonal band kernels and an identity mapping. With this new Inception depthwise convolution we build a series of networks namely InceptionNeXt which not only enjoy high throughputs but also maintain competitive performance. For instance InceptionNeXt-T achieves 1.6x higher training throughput than ConvNeXt-T as well as attains 0.2% top-1 accuracy improvement on ImageNet-1K. We anticipate InceptionNeXt can serve as an economical baseline for future architecture design to reduce carbon footprint.
https://openaccess.thecvf.com/content/CVPR2024/papers/Yu_InceptionNeXt_When_Inception_Meets_ConvNeXt_CVPR_2024_paper.pdf
http://arxiv.org/abs/2303.16900
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Yu_InceptionNeXt_When_Inception_Meets_ConvNeXt_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Yu_InceptionNeXt_When_Inception_Meets_ConvNeXt_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yu_InceptionNeXt_When_Inception_CVPR_2024_supplemental.pdf
null
SnAG: Scalable and Accurate Video Grounding
Fangzhou Mu, Sicheng Mo, Yin Li
Temporal grounding of text descriptions in videos is a central problem in vision-language learning and video understanding. Existing methods often prioritize accuracy over scalability --- they have been optimized for grounding only a few text queries within short videos and fail to scale up to long videos with hundreds of queries. In this paper we study the effect of cross-modal fusion on the scalability of video grounding models. Our analysis establishes late fusion as a more cost-effective fusion scheme for long-form videos with many text queries. Moreover it leads us to a novel video-centric sampling scheme for efficient training. Based on these findings we present SnAG a simple baseline for scalable and accurate video grounding. Without bells and whistles SnAG is 43% more accurate and 1.5x faster than CONE a state of the art for long-form video grounding on the challenging MAD dataset while achieving highly competitive results on short videos.
https://openaccess.thecvf.com/content/CVPR2024/papers/Mu_SnAG_Scalable_and_Accurate_Video_Grounding_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.02257
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Mu_SnAG_Scalable_and_Accurate_Video_Grounding_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Mu_SnAG_Scalable_and_Accurate_Video_Grounding_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Mu_SnAG_Scalable_and_CVPR_2024_supplemental.pdf
null
SPOT: Self-Training with Patch-Order Permutation for Object-Centric Learning with Autoregressive Transformers
Ioannis Kakogeorgiou, Spyros Gidaris, Konstantinos Karantzalos, Nikos Komodakis
Unsupervised object-centric learning aims to decompose scenes into interpretable object entities termed slots. Slot-based auto-encoders stand out as a prominent method for this task. Within them crucial aspects include guiding the encoder to generate object-specific slots and ensuring the decoder utilizes them during reconstruction. This work introduces two novel techniques (i) an attention-based self-training approach which distills superior slot-based attention masks from the decoder to the encoder enhancing object segmentation and (ii) an innovative patch-order permutation strategy for autoregressive transformers that strengthens the role of slot vectors in reconstruction. The effectiveness of these strategies is showcased experimentally. The combined approach significantly surpasses prior slot-based autoencoder methods in unsupervised object segmentation especially with complex real-world images. We provide the implementation code at https://github.com/gkakogeorgiou/spot .
https://openaccess.thecvf.com/content/CVPR2024/papers/Kakogeorgiou_SPOT_Self-Training_with_Patch-Order_Permutation_for_Object-Centric_Learning_with_Autoregressive_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.00648
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Kakogeorgiou_SPOT_Self-Training_with_Patch-Order_Permutation_for_Object-Centric_Learning_with_Autoregressive_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Kakogeorgiou_SPOT_Self-Training_with_Patch-Order_Permutation_for_Object-Centric_Learning_with_Autoregressive_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Kakogeorgiou_SPOT_Self-Training_with_CVPR_2024_supplemental.pdf
null
LiveHPS: LiDAR-based Scene-level Human Pose and Shape Estimation in Free Environment
Yiming Ren, Xiao Han, Chengfeng Zhao, Jingya Wang, Lan Xu, Jingyi Yu, Yuexin Ma
For human-centric large-scale scenes fine-grained modeling for 3D human global pose and shape is significant for scene understanding and can benefit many real-world applications. In this paper we present LiveHPS a novel single-LiDAR-based approach for scene-level human pose and shape estimation without any limitation of light conditions and wearable devices. In particular we design a distillation mechanism to mitigate the distribution-varying effect of LiDAR point clouds and exploit the temporal-spatial geometric and dynamic information existing in consecutive frames to solve the occlusion and noise disturbance. LiveHPS with its efficient configuration and high-quality output is well-suited for real-world applications. Moreover we propose a huge human motion dataset named FreeMotion which is collected in various scenarios with diverse human poses shapes and translations. It consists of multi-modal and multi-view acquisition data from calibrated and synchronized LiDARs cameras and IMUs. Extensive experiments on our new dataset and other public datasets demonstrate the SOTA performance and robustness of our approach. We will release our code and dataset soon.
https://openaccess.thecvf.com/content/CVPR2024/papers/Ren_LiveHPS_LiDAR-based_Scene-level_Human_Pose_and_Shape_Estimation_in_Free_CVPR_2024_paper.pdf
http://arxiv.org/abs/2402.17171
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Ren_LiveHPS_LiDAR-based_Scene-level_Human_Pose_and_Shape_Estimation_in_Free_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Ren_LiveHPS_LiDAR-based_Scene-level_Human_Pose_and_Shape_Estimation_in_Free_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ren_LiveHPS_LiDAR-based_Scene-level_CVPR_2024_supplemental.pdf
null
Segment Every Out-of-Distribution Object
Wenjie Zhao, Jia Li, Xin Dong, Yu Xiang, Yunhui Guo
Semantic segmentation models while effective for in-distribution categories face challenges in real-world deployment due to encountering out-of-distribution (OoD) objects. Detecting these OoD objects is crucial for safety-critical applications. Existing methods rely on anomaly scores but choosing a suitable threshold for generating masks presents difficulties and can lead to fragmentation and inaccuracy. This paper introduces a method to convert anomaly Score To segmentation Mask called S2M a simple and effective framework for OoD detection in semantic segmentation. Unlike assigning anomaly scores to pixels S2M directly segments the entire OoD object. By transforming anomaly scores into prompts for a promptable segmentation model S2M eliminates the need for threshold selection. Extensive experiments demonstrate that S2M outperforms the state-of-the-art by approximately 20% in IoU and 40% in mean F1 score on average across various benchmarks including Fishyscapes Segment-Me-If-You-Can and RoadAnomaly datasets.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhao_Segment_Every_Out-of-Distribution_Object_CVPR_2024_paper.pdf
http://arxiv.org/abs/2311.16516
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhao_Segment_Every_Out-of-Distribution_Object_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhao_Segment_Every_Out-of-Distribution_Object_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhao_Segment_Every_Out-of-Distribution_CVPR_2024_supplemental.pdf
null
Building Vision-Language Models on Solid Foundations with Masked Distillation
Sepehr Sameni, Kushal Kafle, Hao Tan, Simon Jenni
Recent advancements in Vision-Language Models (VLMs) have marked a significant leap in bridging the gap between computer vision and natural language processing. However traditional VLMs trained through contrastive learning on limited and noisy image-text pairs often lack the spatial and linguistic understanding to generalize well to dense vision tasks or less common languages. Our approach Solid Foundation CLIP (SF-CLIP) circumvents this issue by implicitly building on the solid visual and language understanding of foundational models trained on vast amounts of unimodal data. SF-CLIP integrates contrastive image-text pretraining with a masked knowledge distillation from large foundational text and vision models. This methodology guides our VLM in developing robust text and image representations. As a result SF-CLIP shows exceptional zero-shot classification accuracy and enhanced image and text retrieval capabilities setting a new state of the art for ViT-B/16 trained on YFCC15M and CC12M. Moreover the dense per-patch supervision enhances our zero-shot and linear probe performance in semantic segmentation tasks. A remarkable aspect of our model is its multilingual proficiency evidenced by strong retrieval results in multiple languages despite being trained predominantly on English data. We achieve all of these improvements without sacrificing the training efficiency through our selective application of masked distillation and the inheritance of teacher word embeddings.
https://openaccess.thecvf.com/content/CVPR2024/papers/Sameni_Building_Vision-Language_Models_on_Solid_Foundations_with_Masked_Distillation_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Sameni_Building_Vision-Language_Models_on_Solid_Foundations_with_Masked_Distillation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Sameni_Building_Vision-Language_Models_on_Solid_Foundations_with_Masked_Distillation_CVPR_2024_paper.html
CVPR 2024
null
null
Wavelet-based Fourier Information Interaction with Frequency Diffusion Adjustment for Underwater Image Restoration
Chen Zhao, Weiling Cai, Chenyu Dong, Chengwei Hu
Underwater images are subject to intricate and diverse degradation inevitably affecting the effectiveness of underwater visual tasks. However most approaches primarily operate in the raw pixel space of images which limits the exploration of the frequency characteristics of underwater images leading to an inadequate utilization of deep models' representational capabilities in producing high-quality images. In this paper we introduce a novel Underwater Image Enhancement (UIE) framework named WF-Diff designed to fully leverage the characteristics of frequency domain information and diffusion models. WF-Diff consists of two detachable networks: Wavelet-based Fourier information interaction network (WFI2-net) and Frequency Residual Diffusion Adjustment Module (FRDAM). With our full exploration of the frequency domain information WFI2-net aims to achieve preliminary enhancement of frequency information in the wavelet space. Our proposed FRDAM can further refine the high- and low-frequency information of the initial enhanced images which can be viewed as a plug-and-play universal module to adjust the detail of the underwater images. With the above techniques our algorithm can show SOTA performance on real-world underwater image datasets and achieves competitive performance in visual quality.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhao_Wavelet-based_Fourier_Information_Interaction_with_Frequency_Diffusion_Adjustment_for_Underwater_CVPR_2024_paper.pdf
http://arxiv.org/abs/2311.16845
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhao_Wavelet-based_Fourier_Information_Interaction_with_Frequency_Diffusion_Adjustment_for_Underwater_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhao_Wavelet-based_Fourier_Information_Interaction_with_Frequency_Diffusion_Adjustment_for_Underwater_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhao_Wavelet-based_Fourier_Information_CVPR_2024_supplemental.pdf
null
CroSel: Cross Selection of Confident Pseudo Labels for Partial-Label Learning
Shiyu Tian, Hongxin Wei, Yiqun Wang, Lei Feng
Partial-label learning (PLL) is an important weakly supervised learning problem which allows each training example to have a candidate label set instead of a single ground-truth label. Identification-based methods have been widely explored to tackle label ambiguity issues in PLL which regard the true label as a latent variable to be identified. However identifying the true labels accurately and completely remains challenging causing noise in pseudo labels during model training. In this paper we propose a new method called CroSel which leverages historical predictions from the model to identify true labels for most training examples. First we introduce a cross selection strategy which enables two deep models to select true labels of partially labeled data for each other. Besides we propose a novel consistency regularization term called co-mix to avoid sample waste and tiny noise caused by false selection. In this way CroSel can pick out the true labels of most examples with high precision. Extensive experiments demonstrate the superiority of CroSel which consistently outperforms previous state-of-the-art methods on benchmark datasets. Additionally our method achieves over 90% accuracy and quantity for selecting true labels on CIFAR-type datasets under various settings.
https://openaccess.thecvf.com/content/CVPR2024/papers/Tian_CroSel_Cross_Selection_of_Confident_Pseudo_Labels_for_Partial-Label_Learning_CVPR_2024_paper.pdf
http://arxiv.org/abs/2303.10365
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Tian_CroSel_Cross_Selection_of_Confident_Pseudo_Labels_for_Partial-Label_Learning_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Tian_CroSel_Cross_Selection_of_Confident_Pseudo_Labels_for_Partial-Label_Learning_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Tian_CroSel_Cross_Selection_CVPR_2024_supplemental.zip
null
PoNQ: a Neural QEM-based Mesh Representation
Nissim Maruani, Maks Ovsjanikov, Pierre Alliez, Mathieu Desbrun
Although polygon meshes have been a standard representation in geometry processing their irregular and combinatorial nature hinders their suitability for learning-based applications. In this work we introduce a novel learnable mesh representation through a set of local 3D sample Points and their associated Normals and Quadric error metrics (QEM) w.r.t. the underlying shape which we denote PoNQ. A global mesh is directly derived from PoNQ by efficiently leveraging the knowledge of the local quadric errors. Besides marking the first use of QEM within a neural shape representation our contribution guarantees both topological and geometrical properties by ensuring that a PoNQ mesh does not self-intersect and is always the boundary of a volume. Notably our representation does not rely on a regular grid is supervised directly by the target surface alone and also handles open surfaces with boundaries and/or sharp features. We demonstrate the efficacy of PoNQ through a learning-based mesh prediction from SDF grids and show that our method surpasses recent state-of-the-art techniques in terms of both surface and edge-based metrics.
https://openaccess.thecvf.com/content/CVPR2024/papers/Maruani_PoNQ_a_Neural_QEM-based_Mesh_Representation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.12870
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Maruani_PoNQ_a_Neural_QEM-based_Mesh_Representation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Maruani_PoNQ_a_Neural_QEM-based_Mesh_Representation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Maruani_PoNQ_a_Neural_CVPR_2024_supplemental.pdf
null
ModaVerse: Efficiently Transforming Modalities with LLMs
Xinyu Wang, Bohan Zhuang, Qi Wu
Humans possess the capability to comprehend diverse modalities and seamlessly transfer information between them. In this work we introduce ModaVerse a Multi-modal Large Language Model (MLLM) capable of comprehending and transforming content across various modalities including images videos and audio. Predominant MLLM frameworks have largely relied on aligning latent spaces of textual and non-textual features. This alignment process which synchronizes a language model trained on textual data with encoders and decoders trained on multi-modal data often necessitates extensive training of several projection layers in multiple stages. Inspired by LLM-as-agent methodologies we propose a novel Input/Output (I/O) alignment mechanism that operates directly at the level of natural language. It aligns the LLM's output with the input of generative models avoiding the complexities associated with latent feature alignments and simplifying the multiple training stages of existing MLLMs into a single efficient process. By conducting experiments on several benchmarks we demonstrate that our approach attains comparable performance with the state of the art while achieving considerable efficiencies in data usage.
https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_ModaVerse_Efficiently_Transforming_Modalities_with_LLMs_CVPR_2024_paper.pdf
http://arxiv.org/abs/2401.06395
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Wang_ModaVerse_Efficiently_Transforming_Modalities_with_LLMs_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Wang_ModaVerse_Efficiently_Transforming_Modalities_with_LLMs_CVPR_2024_paper.html
CVPR 2024
null
null
TransLoc4D: Transformer-based 4D Radar Place Recognition
Guohao Peng, Heshan Li, Yangyang Zhao, Jun Zhang, Zhenyu Wu, Pengyu Zheng, Danwei Wang
Place Recognition is crucial for unmanned vehicles in terms of localization and mapping. Recent years have witnessed numerous explorations in the field where 2D cameras and 3D LiDARs are mostly employed. Despite their admirable performance they may encounter challenges in adverse weather such as rain and fog. Promisingly 4D millimeter-wave Radar emerges as an alternative as its longer wavelength makes it virtually immune to interference from tiny particles of fog and rain. Therefore in this work we propose a novel 4D Radar place recognition model TransLoc4D based on sparse convolution and Transformer structures. Specifically a Minkloc4D backbone is first proposed to leverage the geometric intensity and velocity information from 4D Radar scans. While mainstream 3D LiDAR solutions merely capture geometric structures of point clouds Minkloc4D explores the intensity and velocity properties of 4D Radar scans and demonstrates their effectiveness. After feature extraction a Transformer layer is introduced to enhance local features where linear self-attention captures the long-range dependency of the point cloud alleviating its sparsity and noise. To validate TransLoc4D we construct two datasets and set up benchmarks for 4D radar place recognition. Experiments show TransLoc4D is feasible and can robustly deal with dynamic and adverse environments.
https://openaccess.thecvf.com/content/CVPR2024/papers/Peng_TransLoc4D_Transformer-based_4D_Radar_Place_Recognition_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Peng_TransLoc4D_Transformer-based_4D_Radar_Place_Recognition_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Peng_TransLoc4D_Transformer-based_4D_Radar_Place_Recognition_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Peng_TransLoc4D_Transformer-based_4D_CVPR_2024_supplemental.pdf
null
Frequency-aware Event-based Video Deblurring for Real-World Motion Blur
Taewoo Kim, Hoonhee Cho, Kuk-Jin Yoon
Video deblurring aims to restore sharp frames from blurred video clips. Despite notable progress in video deblurring works it is still a challenging problem because of the loss of motion information during the duration of the exposure time. Since event cameras can capture clear motion information asynchronously with high temporal resolution several works exploit the event camera for deblurring as they can provide abundant motion information. However despite these approaches few works actively exploit the long-range temporal dependency of videos. To tackle these deficiencies we present an event-based video deblurring framework by actively utilizing temporal information from videos. To be specific we first introduce a frequency-based cross-modal feature enhancement module. Second we propose event-guided video alignment modules by considering the valuable characteristics of the event and videos. In addition we designed a hybrid camera system to collect the first real-world event-based video deblurring dataset. For the first time we build a dataset containing synchronized high-resolution real-world blurred videos and corresponding sharp videos and event streams. Experimental results validate that our frameworks significantly outperform the state-of-the-art frame-based and event-based deblurring works in the various datasets. The project pages are available at https://sites.google.com/view/fevd-cvpr2024.
https://openaccess.thecvf.com/content/CVPR2024/papers/Kim_Frequency-aware_Event-based_Video_Deblurring_for_Real-World_Motion_Blur_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Kim_Frequency-aware_Event-based_Video_Deblurring_for_Real-World_Motion_Blur_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Kim_Frequency-aware_Event-based_Video_Deblurring_for_Real-World_Motion_Blur_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Kim_Frequency-aware_Event-based_Video_CVPR_2024_supplemental.zip
null
Multiscale Vision Transformers Meet Bipartite Matching for Efficient Single-stage Action Localization
Ioanna Ntinou, Enrique Sanchez, Georgios Tzimiropoulos
Action Localization is a challenging problem that combines detection and recognition tasks which are often addressed separately. State-of-the-art methods rely on off-the-shelf bounding box detections pre-computed at high resolution and propose transformer models that focus on the classification task alone. Such two-stage solutions are prohibitive for real-time deployment. On the other hand single-stage methods target both tasks by devoting part of the network (generally the backbone) to sharing the majority of the workload compromising performance for speed. These methods build on adding a DETR head with learnable queries that after cross- and self-attention can be sent to corresponding MLPs for detecting a person's bounding box and action. However DETR-like architectures are challenging to train and can incur significant complexity. In this paper we observe that a straight bipartite matching loss can be applied to the output tokens of a vision transformer. This results in a backbone + MLP architecture that can do both tasks without the need of an extra encoder-decoder head and learnable queries. We show that a single MViTv2-S architecture trained with bipartite matching to perform both tasks surpasses the same MViTv2-S when trained with RoI align on pre-computed bounding boxes. With a careful design of token pooling and the proposed training pipeline our Bipartite-Matching Vision Transformer model BMViT achieves +3 mAP on AVA2.2 w.r.t. the two-stage MViTv2-S counterpart. Code is available at https://github.com/IoannaNti/BMViT
https://openaccess.thecvf.com/content/CVPR2024/papers/Ntinou_Multiscale_Vision_Transformers_Meet_Bipartite_Matching_for_Efficient_Single-stage_Action_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.17686
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Ntinou_Multiscale_Vision_Transformers_Meet_Bipartite_Matching_for_Efficient_Single-stage_Action_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Ntinou_Multiscale_Vision_Transformers_Meet_Bipartite_Matching_for_Efficient_Single-stage_Action_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ntinou_Multiscale_Vision_Transformers_CVPR_2024_supplemental.pdf
null
Boosting Order-Preserving and Transferability for Neural Architecture Search: a Joint Architecture Refined Search and Fine-tuning Approach
Beichen Zhang, Xiaoxing Wang, Xiaohan Qin, Junchi Yan
Supernet is a core component in many recent Neural Architecture Search (NAS) methods. It not only helps embody the search space but also provides a (relative) estimation of the final performance of candidate architectures. Thus it is critical that the top architectures ranked by a supernet should be consistent with those ranked by true performance which is known as the order-preserving ability. In this work we analyze the order-preserving ability on the whole search space (global) and a sub-space of top architectures (local) and empirically show that the local order-preserving ability of current two-stage NAS methods still needs to be improved. To rectify this we propose a novel concept of Supernet Shifting a refined search strategy combining architecture searching with supernet fine-tuning. Specifically apart from evaluation the training loss is also accumulated during searching and the supernet is updated every iteration. Since superior architectures are sampled more frequently in evolutionary searching the supernet is encouraged to focus on top architectures thus improving local order-preserving ability. Besides a pre-trained supernet is often not reusable for one-shot methods. We show that Supernet Shifting can transfer a supernet to a new dataset. Specifically the last classifier layer will be unset and trained through evolutionary searching. Comprehensive experiments show that our method has better order-preserving ability and can find a dominating architecture. Moreover the pre-trained supernet can be easily transferred into a new dataset with no loss of performance.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_Boosting_Order-Preserving_and_Transferability_for_Neural_Architecture_Search_a_Joint_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.11380
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Boosting_Order-Preserving_and_Transferability_for_Neural_Architecture_Search_a_Joint_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Boosting_Order-Preserving_and_Transferability_for_Neural_Architecture_Search_a_Joint_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhang_Boosting_Order-Preserving_and_CVPR_2024_supplemental.pdf
null
Dr. Bokeh: DiffeRentiable Occlusion-aware Bokeh Rendering
Yichen Sheng, Zixun Yu, Lu Ling, Zhiwen Cao, Xuaner Zhang, Xin Lu, Ke Xian, Haiting Lin, Bedrich Benes
Bokeh is widely used in photography to draw attention to the subject while effectively isolating distractions in the background. Computational methods can simulate bokeh effects without relying on a physical camera lens but the inaccurate lens modeling in existing filtering-based methods leads to artifacts that need post-processing or learning-based methods to fix. We propose Dr.Bokeh a novel rendering method that addresses the issue by directly correcting the defect that violates the physics in the current filtering-based bokeh rendering equation. Dr.Bokeh first preprocesses the input RGBD to obtain a layered scene representation. Dr.Bokeh then takes the layered representation and user-defined lens parameters to render photo-realistic lens blur based on the novel occlusion-aware bokeh rendering method. Experiments show that the non-learning based renderer Dr.Bokeh outperforms state-of-the-art bokeh rendering algorithms in terms of photo-realism. In addition extensive quantitative and qualitative evaluations show the more accurate lens model further pushes the limit of a closely related field depth-from-defocus.
https://openaccess.thecvf.com/content/CVPR2024/papers/Sheng_Dr._Bokeh_DiffeRentiable_Occlusion-aware_Bokeh_Rendering_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Sheng_Dr._Bokeh_DiffeRentiable_Occlusion-aware_Bokeh_Rendering_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Sheng_Dr._Bokeh_DiffeRentiable_Occlusion-aware_Bokeh_Rendering_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Sheng_Dr._Bokeh_DiffeRentiable_CVPR_2024_supplemental.pdf
null
Unsegment Anything by Simulating Deformation
Jiahao Lu, Xingyi Yang, Xinchao Wang
Foundation segmentation models while powerful pose a significant risk: they enable users to effortlessly extract any objects from any digital content with a single click potentially leading to copyright infringement or malicious misuse. To mitigate this risk we introduce a new task "Anything Unsegmentable" to grant any image "the right to be unsegmented". The ambitious pursuit of the task is to achieve highly transferable adversarial attack against all prompt-based segmentation models regardless of model parameterizations and prompts. We highlight the non-transferable and heterogeneous nature of prompt-specific adversarial noises. Our approach focuses on disrupting image encoder features to achieve prompt-agnostic attacks. Intriguingly targeted feature attacks exhibit better transferability compared to untargeted ones suggesting the optimal update direction aligns with the image manifold. Based on the observations we design a novel attack named Unsegment Anything by Simulating Deformation (UAD). Our attack optimizes a differentiable deformation function to create a target deformed image which alters structural information while preserving achievable feature distance by adversarial example. Extensive experiments verify the effectiveness of our approach compromising a variety of promptable segmentation models with different architectures and prompt interfaces.
https://openaccess.thecvf.com/content/CVPR2024/papers/Lu_Unsegment_Anything_by_Simulating_Deformation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.02585
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Lu_Unsegment_Anything_by_Simulating_Deformation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Lu_Unsegment_Anything_by_Simulating_Deformation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Lu_Unsegment_Anything_by_CVPR_2024_supplemental.pdf
null
Transductive Zero-Shot and Few-Shot CLIP
Ségolène Martin, Yunshi Huang, Fereshteh Shakeri, Jean-Christophe Pesquet, Ismail Ben Ayed
Transductive inference has been widely investigated in few-shot image classification but completely overlooked in the recent fast growing literature on adapting vision-language models like CLIP. This paper addresses the transductive zero-shot and few-shot CLIP classification challenge in which inference is performed jointly across a mini-batch of unlabeled query samples rather than treating each instance independently. We initially construct informative vision-text probability features leading to a classification problem on the unit simplex set. Inspired by Expectation-Maximization (EM) our optimization-based classifying objective models the data probability distribution for each class using a Dirichlet law. The minimization problem is then tackled with a novel block Majorization-Minimization algorithm which simultaneously estimates the distribution parameters and class assignments. Extensive numerical experiments on 11 datasets underscore the benefits and efficacy of our batch inference approach. On zero-shot tasks with test batches of 75 samples our approach yields near 20% improvement in ImageNet accuracy over CLIP's zero-shot performance. Additionally we outperform state-of-the-art methods in the few-shot setting. Code is available at https://github.com/SegoleneMartin/transductive-CLIP.
https://openaccess.thecvf.com/content/CVPR2024/papers/Martin_Transductive_Zero-Shot_and_Few-Shot_CLIP_CVPR_2024_paper.pdf
http://arxiv.org/abs/2405.18437
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Martin_Transductive_Zero-Shot_and_Few-Shot_CLIP_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Martin_Transductive_Zero-Shot_and_Few-Shot_CLIP_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Martin_Transductive_Zero-Shot_and_CVPR_2024_supplemental.pdf
null
Deep Single Image Camera Calibration by Heatmap Regression to Recover Fisheye Images Under Manhattan World Assumption
Nobuhiko Wakai, Satoshi Sato, Yasunori Ishii, Takayoshi Yamashita
A Manhattan world lying along cuboid buildings is useful for camera angle estimation. However accurate and robust angle estimation from fisheye images in the Manhattan world has remained an open challenge because general scene images tend to lack constraints such as lines arcs and vanishing points. To achieve higher accuracy and robustness we propose a learning-based calibration method that uses heatmap regression which is similar to pose estimation using keypoints to detect the directions of labeled image coordinates. Simultaneously our two estimators recover the rotation and remove fisheye distortion by remapping from a general scene image. Without considering vanishing-point constraints we find that additional points for learning-based methods can be defined. To compensate for the lack of vanishing points in images we introduce auxiliary diagonal points that have the optimal 3D arrangement of spatial uniformity. Extensive experiments demonstrated that our method outperforms conventional methods on large-scale datasets and with off-the-shelf cameras.
https://openaccess.thecvf.com/content/CVPR2024/papers/Wakai_Deep_Single_Image_Camera_Calibration_by_Heatmap_Regression_to_Recover_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Wakai_Deep_Single_Image_Camera_Calibration_by_Heatmap_Regression_to_Recover_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Wakai_Deep_Single_Image_Camera_Calibration_by_Heatmap_Regression_to_Recover_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wakai_Deep_Single_Image_CVPR_2024_supplemental.pdf
null
ID-Blau: Image Deblurring by Implicit Diffusion-based reBLurring AUgmentation
Jia-Hao Wu, Fu-Jen Tsai, Yan-Tsung Peng, Chung-Chi Tsai, Chia-Wen Lin, Yen-Yu Lin
Image deblurring aims to remove undesired blurs from an image captured in a dynamic scene. Much research has been dedicated to improving deblurring performance through model architectural designs. However there is little work on data augmentation for image deblurring. Since continuous motion causes blurred artifacts during image exposure we aspire to develop a groundbreaking blur augmentation method to generate diverse blurred images by simulating motion trajectories in a continuous space. This paper proposes Implicit Diffusion-based reBLurring AUgmentation (ID-Blau) utilizing a sharp image paired with a controllable blur condition map to produce a corresponding blurred image. We parameterize the blur patterns of a blurred image with their orientations and magnitudes as a pixel-wise blur condition map to simulate motion trajectories and implicitly represent them in a continuous space. By sampling diverse blur conditions ID-Blau can generate various blurred images unseen in the training set. Experimental results demonstrate that ID-Blau can produce realistic blurred images for training and thus significantly improve performance for state-of-the-art deblurring models. The source code is available at https://github.com/plusgood-steven/ID-Blau.
https://openaccess.thecvf.com/content/CVPR2024/papers/Wu_ID-Blau_Image_Deblurring_by_Implicit_Diffusion-based_reBLurring_AUgmentation_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Wu_ID-Blau_Image_Deblurring_by_Implicit_Diffusion-based_reBLurring_AUgmentation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Wu_ID-Blau_Image_Deblurring_by_Implicit_Diffusion-based_reBLurring_AUgmentation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wu_ID-Blau_Image_Deblurring_CVPR_2024_supplemental.pdf
null
LAENeRF: Local Appearance Editing for Neural Radiance Fields
Lukas Radl, Michael Steiner, Andreas Kurz, Markus Steinberger
Due to the omnipresence of Neural Radiance Fields (NeRFs) the interest towards editable implicit 3D representations has surged over the last years. However editing implicit or hybrid representations as used for NeRFs is difficult due to the entanglement of appearance and geometry encoded in the model parameters. Despite these challenges recent research has shown first promising steps towards photorealistic and non-photorealistic appearance edits. The main open issues of related work include limited interactivity a lack of support for local edits and large memory requirements rendering them less useful in practice. We address these limitations with LAENeRF a unified framework for photorealistic and non-photorealistic appearance editing of NeRFs. To tackle local editing we leverage a voxel grid as starting point for region selection. We learn a mapping from expected ray terminations to final output color which can optionally be supervised by a style loss resulting in a framework which can perform photorealistic and non-photorealistic appearance editing of selected regions. Relying on a single point per ray for our mapping we limit memory requirements and enable fast optimization. To guarantee interactivity we compose the output color using a set of learned modifiable base colors composed with additive layer mixing. Compared to concurrent work LAENeRF enables recoloring and stylization while keeping processing time low. Furthermore we demonstrate that our approach surpasses baseline methods both quantitatively and qualitatively.
https://openaccess.thecvf.com/content/CVPR2024/papers/Radl_LAENeRF_Local_Appearance_Editing_for_Neural_Radiance_Fields_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.09913
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Radl_LAENeRF_Local_Appearance_Editing_for_Neural_Radiance_Fields_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Radl_LAENeRF_Local_Appearance_Editing_for_Neural_Radiance_Fields_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Radl_LAENeRF_Local_Appearance_CVPR_2024_supplemental.pdf
null
CSTA: CNN-based Spatiotemporal Attention for Video Summarization
Jaewon Son, Jaehun Park, Kwangsu Kim
Video summarization aims to generate a concise representation of a video capturing its essential content and key moments while reducing its overall length. Although several methods employ attention mechanisms to handle long-term dependencies they often fail to capture the visual significance inherent in frames. To address this limitation we propose a CNN-based SpatioTemporal Attention (CSTA) method that stacks each feature of frames from a single video to form image-like frame representations and applies 2D CNN to these frame features. Our methodology relies on CNN to comprehend the inter and intra-frame relations and to find crucial attributes in videos by exploiting its ability to learn absolute positions within images. In contrast to previous work compromising efficiency by designing additional modules to focus on spatial importance CSTA requires minimal computational overhead as it uses CNN as a sliding window. Extensive experiments on two benchmark datasets (SumMe and TVSum) demonstrate that our proposed approach achieves state-of-the-art performance with fewer MACs compared to previous methods. Codes are available at https://github.com/thswodnjs3/CSTA.
https://openaccess.thecvf.com/content/CVPR2024/papers/Son_CSTA_CNN-based_Spatiotemporal_Attention_for_Video_Summarization_CVPR_2024_paper.pdf
http://arxiv.org/abs/2405.11905
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Son_CSTA_CNN-based_Spatiotemporal_Attention_for_Video_Summarization_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Son_CSTA_CNN-based_Spatiotemporal_Attention_for_Video_Summarization_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Son_CSTA_CNN-based_Spatiotemporal_CVPR_2024_supplemental.pdf
null
Adversarial Score Distillation: When score distillation meets GAN
Min Wei, Jingkai Zhou, Junyao Sun, Xuesong Zhang
Existing score distillation methods are sensitive to classifier-free guidance (CFG) scale manifested as over-smoothness or instability at small CFG scales and over-saturation at large ones. To explain and analyze these issues we revisit the derivation of Score Distillation Sampling (SDS) and decipher existing score distillation with the Wasserstein Generative Adversarial Network (WGAN) paradigm. With the WGAN paradigm we find that existing score distillation either employs a fixed sub-optimal discriminator or conducts incomplete discriminator optimization resulting in the scale-sensitive issue. We propose the Adversarial Score Distillation (ASD) which maintains an optimizable discriminator and updates it using the complete optimization objective. Experiments show that the proposed ASD performs favorably in 2D distillation and text-to-3D tasks against existing methods. Furthermore to explore the generalization ability of our paradigm we extend ASD to the image editing task which achieves competitive results. The project page and code are at https://github.com/2y7c3/ASD
https://openaccess.thecvf.com/content/CVPR2024/papers/Wei_Adversarial_Score_Distillation_When_score_distillation_meets_GAN_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.00739
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Wei_Adversarial_Score_Distillation_When_score_distillation_meets_GAN_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Wei_Adversarial_Score_Distillation_When_score_distillation_meets_GAN_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wei_Adversarial_Score_Distillation_CVPR_2024_supplemental.pdf
null
Decentralized Directed Collaboration for Personalized Federated Learning
Yingqi Liu, Yifan Shi, Qinglun Li, Baoyuan Wu, Xueqian Wang, Li Shen
Personalized Federated Learning (PFL) aims to find the best personalized model for each client. To avoid the central failure and communication bottleneck of server-based FL we concentrate on Decentralized Personalized Federated Learning (DPFL) which performs distributed model training in a Peer-to-Peer (P2P) manner. Most personalization works in DPFL are based on undirected and symmetric topologies; however heterogeneity in data computation and communication resources results in large variances across the personalized models which leads undirected aggregation to suboptimal personalized performance and unguaranteed convergence. To address these issues we propose a directed collaboration DPFL framework that incorporates stochastic gradient push and partial model personalization called Decentralized Federated Partial Gradient Push (DFedPGP). It personalizes the linear classifier in the modern deep model to customize the local solution and learns a consensus representation in a fully decentralized manner. Clients only share gradients with a subset of neighbors based on the directed and asymmetric topologies which guarantees flexible choices for resource efficiency and better convergence. Theoretically we show that the proposed DFedPGP achieves a superior convergence rate of O(1/√T) in the general non-convex setting and that tighter connectivity among clients speeds up convergence. The proposed method achieves state-of-the-art (SOTA) accuracy in both data and computation heterogeneity scenarios demonstrating the efficiency of the directed collaboration and partial gradient push. (An illustrative sketch follows this entry.)
https://openaccess.thecvf.com/content/CVPR2024/papers/Liu_Decentralized_Directed_Collaboration_for_Personalized_Federated_Learning_CVPR_2024_paper.pdf
http://arxiv.org/abs/2405.17876
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Liu_Decentralized_Directed_Collaboration_for_Personalized_Federated_Learning_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Liu_Decentralized_Directed_Collaboration_for_Personalized_Federated_Learning_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Liu_Decentralized_Directed_Collaboration_CVPR_2024_supplemental.pdf
null
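A toy sketch of the directed-collaboration ingredient summarized above: stochastic gradient push over a directed column-stochastic mixing matrix, with the personalized linear classifier assumed to stay local (not shown). The ring topology, step sizes, and stand-in gradients are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 5, 4

# Directed, column-stochastic mixing matrix: entry P[j, i] is the weight client i
# pushes to out-neighbour j (a simple ring topology, purely for illustration).
P = np.zeros((n_clients, n_clients))
for i in range(n_clients):
    P[i, i] = 0.5
    P[(i + 1) % n_clients, i] = 0.5

shared = rng.normal(size=(n_clients, dim))   # shared representation parameters
weights = np.ones(n_clients)                 # push-sum correction weights

for _ in range(200):
    grads = 0.1 * shared                     # stand-in for local gradients
    shared = shared - 0.05 * grads           # local SGD step on the shared part
    # Stochastic gradient push: mix both the parameters and the weights
    shared = P @ shared
    weights = P @ weights

consensus = shared / weights[:, None]        # de-biased (push-sum) estimates
print(np.round(consensus, 3))                # rows converge toward a common value
```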
Vector Graphics Generation via Mutually Impulsed Dual-domain Diffusion
Zhongyin Zhao, Ye Chen, Zhangli Hu, Xuanhong Chen, Bingbing Ni
Intelligent generation of vector graphics has very promising applications in the fields of advertising and logo design artistic painting animation production etc. However current mainstream vector image generation methods lack the encoding of image appearance information that is associated with the original vector representation and therefore lose valid supervision signal from the strong correlation between the discrete vector parameter (drawing instruction) sequence and the target shape/structure of the corresponding pixel image. On the one hand the generation process based on pure vector domain completely ignores the similarity measurement between shape parameter (and their combination) and the paired pixel image appearance pattern; on the other hand two-stage methods (i.e. generation-and-vectorization) based on pixel diffusion followed by differentiable image-to-vector translation suffer from wrong error-correction signal caused by approximate gradients. To address the above issues we propose a novel generation framework based on dual-domain (vector-pixel) diffusion with cross-modality impulse signals from each other. First in each diffusion step the current representation extracted from the other domain is used as a condition variable to constrain the subsequent sampling operation yielding shape-aware new parameterizations; second independent supervision signals from both domains avoid the gradient error accumulation problem caused by cross-domain representation conversion. Extensive experimental results on popular benchmarks including font and icon datasets demonstrate the great advantages of our proposed framework in terms of generated shape quality.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhao_Vector_Graphics_Generation_via_Mutually_Impulsed_Dual-domain_Diffusion_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhao_Vector_Graphics_Generation_via_Mutually_Impulsed_Dual-domain_Diffusion_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhao_Vector_Graphics_Generation_via_Mutually_Impulsed_Dual-domain_Diffusion_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhao_Vector_Graphics_Generation_CVPR_2024_supplemental.pdf
null
PEM: Prototype-based Efficient MaskFormer for Image Segmentation
Niccolò Cavagnero, Gabriele Rosi, Claudia Cuttano, Francesca Pistilli, Marco Ciccone, Giuseppe Averta, Fabio Cermelli
Recent transformer-based architectures have shown impressive results in the field of image segmentation. Thanks to their flexibility they obtain outstanding performance in multiple segmentation tasks such as semantic and panoptic segmentation under a single unified framework. To achieve such impressive performance these architectures employ intensive operations and require substantial computational resources which are often not available especially on edge devices. To fill this gap we propose Prototype-based Efficient MaskFormer (PEM) an efficient transformer-based architecture that can operate in multiple segmentation tasks. PEM proposes a novel prototype-based cross-attention which leverages the redundancy of visual features to restrict the computation and improve the efficiency without harming the performance. In addition PEM introduces an efficient multi-scale feature pyramid network capable of extracting features with high semantic content in an efficient way thanks to the combination of deformable convolutions and context-based self-modulation. We benchmark the proposed PEM architecture on two tasks semantic and panoptic segmentation evaluated on two different datasets Cityscapes and ADE20K. PEM demonstrates outstanding performance on every task and dataset outperforming task-specific architectures while being comparable to and even better than computationally expensive baselines. Code is available at https://github.com/NiccoloCavagnero/PEM.
https://openaccess.thecvf.com/content/CVPR2024/papers/Cavagnero_PEM_Prototype-based_Efficient_MaskFormer_for_Image_Segmentation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2402.19422
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Cavagnero_PEM_Prototype-based_Efficient_MaskFormer_for_Image_Segmentation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Cavagnero_PEM_Prototype-based_Efficient_MaskFormer_for_Image_Segmentation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Cavagnero_PEM_Prototype-based_Efficient_CVPR_2024_supplemental.pdf
null
Referring Expression Counting
Siyang Dai, Jun Liu, Ngai-Man Cheung
Existing counting tasks are limited to the class level which does not account for fine-grained details within the class. In real applications counting often requires in-context or referring human input to specify the target objects. Taking urban analysis as an example fine-grained information such as traffic flow in different directions or pedestrians and vehicles waiting or moving at different sides of a junction is more beneficial. Current settings of both class-specific and class-agnostic counting treat objects of the same class indifferently which poses limitations in real use cases. To this end we propose a new task named Referring Expression Counting (REC) which aims to count objects with different attributes within the same class. To evaluate the REC task we create a novel dataset named REC-8K which contains 8011 images and 17122 referring expressions. Experiments on REC-8K show that our proposed method achieves state-of-the-art performance compared with several text-based counting methods and an open-set object detection model. We also outperform prior models on the class-agnostic counting (CAC) benchmark [36] in the zero-shot setting and perform on par with the few-shot methods. Code and dataset are available at https://github.com/sydai/referring-expression-counting.
https://openaccess.thecvf.com/content/CVPR2024/papers/Dai_Referring_Expression_Counting_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Dai_Referring_Expression_Counting_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Dai_Referring_Expression_Counting_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Dai_Referring_Expression_Counting_CVPR_2024_supplemental.pdf
null
ScoreHypo: Probabilistic Human Mesh Estimation with Hypothesis Scoring
Yuan Xu, Xiaoxuan Ma, Jiajun Su, Wentao Zhu, Yu Qiao, Yizhou Wang
Monocular 3D human mesh estimation is an ill-posed problem characterized by inherent ambiguity and occlusion. While recent probabilistic methods propose generating multiple solutions little attention is paid to obtaining high-quality estimates from them. To address this limitation we introduce ScoreHypo a versatile framework that first leverages our novel HypoNet to generate multiple hypotheses and then employs a meticulously designed scorer ScoreNet to evaluate and select high-quality estimates. ScoreHypo formulates the estimation process as a reverse denoising process where HypoNet produces a diverse set of plausible estimates that effectively align with the image cues. Subsequently ScoreNet is employed to rigorously evaluate and rank these estimates based on their quality and finally identify superior ones. Experimental results demonstrate that HypoNet outperforms existing state-of-the-art probabilistic methods as a multi-hypothesis mesh estimator. Moreover the estimates selected by ScoreNet significantly outperform random generation or simple averaging. Notably the trained ScoreNet exhibits generalizability as it can effectively score existing methods and significantly reduce their errors by more than 15%. Code and models are available at https://xy02-05.github.io/ScoreHypo. (An illustrative sketch follows this entry.)
https://openaccess.thecvf.com/content/CVPR2024/papers/Xu_ScoreHypo_Probabilistic_Human_Mesh_Estimation_with_Hypothesis_Scoring_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Xu_ScoreHypo_Probabilistic_Human_Mesh_Estimation_with_Hypothesis_Scoring_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Xu_ScoreHypo_Probabilistic_Human_Mesh_Estimation_with_Hypothesis_Scoring_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Xu_ScoreHypo_Probabilistic_Human_CVPR_2024_supplemental.pdf
null
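A minimal sketch of the generate-then-score pattern described above, with hypothetical stand-ins for HypoNet and ScoreNet; the shapes and the dummy scoring rule are assumptions for illustration only, not the paper's networks.

```python
import torch

def select_best_hypothesis(hypo_net, score_net, image_feat, n_hypotheses=16):
    """Sample several mesh hypotheses conditioned on image features,
    score each one, and keep the highest-scoring estimate."""
    hypotheses = torch.stack([hypo_net(image_feat) for _ in range(n_hypotheses)])
    scores = score_net(hypotheses, image_feat)        # (N,) quality scores
    best = torch.argmax(scores)
    return hypotheses[best], scores[best]

# Dummy stand-ins so the sketch runs end to end.
mesh_dim, feat_dim = 6890 * 3, 256
hypo_net = lambda f: torch.randn(mesh_dim) + 0.01 * f.mean()   # fake sampler
score_net = lambda h, f: -h.pow(2).mean(dim=1)                 # prefer low-norm meshes
best_mesh, best_score = select_best_hypothesis(hypo_net, score_net, torch.randn(feat_dim))
print(best_mesh.shape, float(best_score))
```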
GES : Generalized Exponential Splatting for Efficient Radiance Field Rendering
Abdullah Hamdi, Luke Melas-Kyriazi, Jinjie Mai, Guocheng Qian, Ruoshi Liu, Carl Vondrick, Bernard Ghanem, Andrea Vedaldi
Advancements in 3D Gaussian Splatting have significantly accelerated 3D reconstruction and generation. However it may require a large number of Gaussians which creates a substantial memory footprint. This paper introduces GES (Generalized Exponential Splatting) a novel representation that employs the Generalized Exponential Function (GEF) to model 3D scenes requiring far fewer particles to represent a scene and thus significantly outperforming Gaussian Splatting methods in efficiency with a plug-and-play replacement ability for Gaussian-based utilities. GES is validated theoretically and empirically in both a principled 1D setup and realistic 3D scenes. It is shown to represent signals with sharp edges more accurately which are typically challenging for Gaussians due to their inherent low-pass characteristics. Our empirical analysis demonstrates that GEF outperforms Gaussians in fitting naturally occurring signals (e.g. squares triangles parabolic signals) thereby reducing the need for extensive splitting operations that increase the memory footprint of Gaussian Splatting. With the aid of a frequency-modulated loss GES achieves competitive performance in novel-view synthesis benchmarks while requiring less than half the memory storage of Gaussian Splatting and increasing the rendering speed by up to 39%. The code is available on the project website https://abdullahamdi.com/ges . (An illustrative sketch of the GEF follows this entry.)
https://openaccess.thecvf.com/content/CVPR2024/papers/Hamdi_GES__Generalized_Exponential_Splatting_for_Efficient_Radiance_Field_Rendering_CVPR_2024_paper.pdf
http://arxiv.org/abs/2402.10128
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Hamdi_GES__Generalized_Exponential_Splatting_for_Efficient_Radiance_Field_Rendering_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Hamdi_GES__Generalized_Exponential_Splatting_for_Efficient_Radiance_Field_Rendering_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Hamdi_GES__Generalized_CVPR_2024_supplemental.pdf
null
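A minimal sketch of the Generalized Exponential Function underlying GES, assuming the common form amplitude * exp(-(|x - mu| / alpha)^beta), where beta = 2 recovers a Gaussian-shaped profile; parameter names are illustrative.

```python
import numpy as np

def generalized_exponential(x, mu=0.0, alpha=1.0, beta=2.0, amplitude=1.0):
    """Generalized Exponential Function: amplitude * exp(-(|x - mu| / alpha)**beta).
    beta = 2 gives a Gaussian-like bump; larger beta gives a flatter top and a
    sharper falloff, which is the property GES exploits for sharp edges."""
    return amplitude * np.exp(-np.abs((x - mu) / alpha) ** beta)

x = np.linspace(-3, 3, 7)
print(np.round(generalized_exponential(x, beta=2.0), 3))  # Gaussian-like profile
print(np.round(generalized_exponential(x, beta=8.0), 3))  # near-box profile (sharper edges)
```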
Learning to Predict Activity Progress by Self-Supervised Video Alignment
Gerard Donahue, Ehsan Elhamifar
In this paper we tackle the problem of self-supervised video alignment and activity progress prediction using in-the-wild videos. Our proposed self-supervised representation learning method carefully addresses different action orderings redundant actions and background frames to generate improved video representations compared to previous methods. Our model generalizes temporal cycle-consistency learning to allow for more flexibility in determining cycle-consistent neighbors. More specifically to handle repeated actions we propose a multi-neighbor cycle consistency and a multi-cycle-back regression loss by finding multiple soft nearest neighbors using a Gaussian Mixture Model. To handle background and redundant frames we introduce a context-dependent drop function in our framework discouraging the alignment of droppable frames. Furthermore to learn from videos of multiple activities jointly we propose a multi-head cross-task network allowing us to embed a video and estimate progress without knowing its activity label. Experiments on multiple datasets show that our method outperforms the state-of-the-art for video alignment and progress prediction.
https://openaccess.thecvf.com/content/CVPR2024/papers/Donahue_Learning_to_Predict_Activity_Progress_by_Self-Supervised_Video_Alignment_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Donahue_Learning_to_Predict_Activity_Progress_by_Self-Supervised_Video_Alignment_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Donahue_Learning_to_Predict_Activity_Progress_by_Self-Supervised_Video_Alignment_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Donahue_Learning_to_Predict_CVPR_2024_supplemental.pdf
null
VicTR: Video-conditioned Text Representations for Activity Recognition
Kumara Kahatapitiya, Anurag Arnab, Arsha Nagrani, Michael S. Ryoo
Vision-Language models (VLMs) have excelled in the image domain, especially in zero-shot settings, thanks to the availability of vast pretraining data (i.e. paired image-text samples). However for videos such paired data is not as abundant. Therefore video-VLMs are usually designed by adapting pretrained image-VLMs to the video domain instead of training from scratch. All such recipes rely on augmenting visual embeddings with temporal information (i.e. image --> video) often keeping text embeddings unchanged or even discarding them. In this paper we argue the contrary: better video-VLMs can be designed by focusing more on augmenting text rather than visual information. More specifically we introduce Video-conditioned Text Representations (VicTR): a form of text embeddings optimized w.r.t. visual embeddings creating a more-flexible contrastive latent space. Our model can further make use of freely-available semantic information in the form of visually-grounded auxiliary text (e.g. object or scene information). We evaluate our model on few-shot and zero-shot (HMDB-51 UCF-101) short-form (Kinetics-400) and long-form (Charades) activity recognition benchmarks showing strong performance among video-VLMs.
https://openaccess.thecvf.com/content/CVPR2024/papers/Kahatapitiya_VicTR_Video-conditioned_Text_Representations_for_Activity_Recognition_CVPR_2024_paper.pdf
http://arxiv.org/abs/2304.02560
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Kahatapitiya_VicTR_Video-conditioned_Text_Representations_for_Activity_Recognition_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Kahatapitiya_VicTR_Video-conditioned_Text_Representations_for_Activity_Recognition_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Kahatapitiya_VicTR_Video-conditioned_Text_CVPR_2024_supplemental.pdf
null
Label-Efficient Group Robustness via Out-of-Distribution Concept Curation
Yiwei Yang, Anthony Z. Liu, Robert Wolfe, Aylin Caliskan, Bill Howe
Deep neural networks are prone to capture correlations between spurious attributes and class labels leading to low accuracy on some combinations of class labels and spurious attribute values. When a spurious attribute represents a protected class these low-accuracy groups can manifest discriminatory bias. Existing methods attempting to improve worst-group accuracy assume the training data validation data or both are reliably labeled by the spurious attribute. But a model may be perceived to be biased towards a concept that is not represented by pre-existing labels on the training data. In these situations the spurious attribute must be defined with external information. We propose Concept Correction a framework that represents a concept as a curated set of images from any source then labels each training sample by its similarity to the concept set to control spurious correlations. For example concept sets representing gender can be used to measure and control gender bias even without explicit labels. We demonstrate and evaluate an instance of the framework as Concept DRO which uses concept sets to estimate group labels then uses these labels to train with a state-of-the-art distributionally robust optimization objective. We show that Concept DRO outperforms existing methods that do not require labels of spurious attributes by up to 33.1% on three image classification datasets and is competitive with the best methods that assume access to labels. We consider how the size and quality of the concept set influences performance and find that even smaller manually curated sets of noisy AI-generated images are effective at controlling spurious correlations suggesting that high-quality reusable concept sets are easy to create and effective in reducing bias. (An illustrative sketch follows this entry.)
https://openaccess.thecvf.com/content/CVPR2024/papers/Yang_Label-Efficient_Group_Robustness_via_Out-of-Distribution_Concept_Curation_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Yang_Label-Efficient_Group_Robustness_via_Out-of-Distribution_Concept_Curation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Yang_Label-Efficient_Group_Robustness_via_Out-of-Distribution_Concept_Curation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yang_Label-Efficient_Group_Robustness_CVPR_2024_supplemental.pdf
null
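A toy sketch of the concept-curation idea summarized above: training samples are grouped by their similarity to a curated concept set, and a worst-group objective is then optimized over those groups. The encoder producing the embeddings, the similarity threshold, and the simple worst-group loss are assumptions standing in for the paper's Concept DRO pipeline.

```python
import torch
import torch.nn.functional as F

def concept_group_labels(sample_embs, concept_embs, threshold=0.1):
    """Label each training sample by cosine similarity to a curated concept set
    (e.g. images depicting a protected attribute). Encoder and threshold are
    illustrative assumptions."""
    sims = F.normalize(sample_embs, dim=1) @ F.normalize(concept_embs, dim=1).T
    return (sims.max(dim=1).values > threshold).long()     # 1 = concept present

def worst_group_loss(per_sample_loss, group_labels, n_groups=2):
    """Group-DRO-style objective: optimize the worst-performing group."""
    group_losses = []
    for g in range(n_groups):
        mask = group_labels == g
        if mask.any():                                      # skip empty groups
            group_losses.append(per_sample_loss[mask].mean())
    return torch.stack(group_losses).max()

embs = torch.randn(32, 128)        # stand-ins for frozen-encoder features
concepts = torch.randn(8, 128)     # the curated concept set
groups = concept_group_labels(embs, concepts)
losses = torch.rand(32)
print(groups.bincount(minlength=2), worst_group_loss(losses, groups))
```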
MMCert: Provable Defense against Adversarial Attacks to Multi-modal Models
Yanting Wang, Hongye Fu, Wei Zou, Jinyuan Jia
Different from a unimodal model whose input is from a single modality the input (called multi-modal input) of a multi-modal model is from multiple modalities such as image 3D points audio text etc. Similar to unimodal models many existing studies show that a multi-modal model is also vulnerable to adversarial perturbation where an attacker could add small perturbation to all modalities of a multi-modal input such that the multi-modal model makes incorrect predictions for it. Existing certified defenses are mostly designed for unimodal models which achieve sub-optimal certified robustness guarantees when extended to multi-modal models as shown in our experimental results. In our work we propose MMCert the first certified defense against adversarial attacks to a multi-modal model. We derive a lower bound on the performance of our MMCert under arbitrary adversarial attacks with bounded perturbations to both modalities (e.g. in the context of auto-driving we bound the number of changed pixels in both RGB image and depth image). We evaluate our MMCert using two benchmark datasets: one for the multi-modal road segmentation task and the other for the multi-modal emotion recognition task. Moreover we compare our MMCert with a state-of-the-art certified defense extended from unimodal models. Our experimental results show that our MMCert outperforms the baseline.
https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_MMCert_Provable_Defense_against_Adversarial_Attacks_to_Multi-modal_Models_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.19080
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Wang_MMCert_Provable_Defense_against_Adversarial_Attacks_to_Multi-modal_Models_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Wang_MMCert_Provable_Defense_against_Adversarial_Attacks_to_Multi-modal_Models_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wang_MMCert_Provable_Defense_CVPR_2024_supplemental.pdf
null
3DToonify: Creating Your High-Fidelity 3D Stylized Avatar Easily from 2D Portrait Images
Yifang Men, Hanxi Liu, Yuan Yao, Miaomiao Cui, Xuansong Xie, Zhouhui Lian
Visual content creation has aroused a surge of interest given its applications in mobile photography and AR/VR. Portrait style transfer and 3D recovery from monocular images as two representative tasks have so far evolved independently. In this paper we make a connection between the two and tackle the challenging task of 3D portrait stylization - modeling high-fidelity 3D stylized avatars from captured 2D portrait images. However naively combining the techniques from the two isolated areas may suffer from either inadequate stylization or absence of 3D assets. To this end we propose 3DToonify a new framework that introduces a progressive training scheme to achieve 3D style adaption on spatial neural representation (SNR). SNR is constructed with implicit fields and they are dynamically optimized by the progressive training scheme which consists of three stages: guided prior learning deformable geometry adaption and explicit texture adaption. In this way stylized geometry and texture are learned in SNR in an explicit and structured way with only a single stylized exemplar needed. Moreover our method obtains style-adaptive underlying structures (i.e. deformable geometry and exaggerated texture) and view-consistent stylized avatar rendering from arbitrary novel viewpoints. Both qualitative and quantitative experiments have been conducted to demonstrate the effectiveness and superiority of our method for automatically generating exemplar-guided 3D stylized avatars.
https://openaccess.thecvf.com/content/CVPR2024/papers/Men_3DToonify_Creating_Your_High-Fidelity_3D_Stylized_Avatar_Easily_from_2D_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Men_3DToonify_Creating_Your_High-Fidelity_3D_Stylized_Avatar_Easily_from_2D_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Men_3DToonify_Creating_Your_High-Fidelity_3D_Stylized_Avatar_Easily_from_2D_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Men_3DToonify_Creating_Your_CVPR_2024_supplemental.pdf
null
NAYER: Noisy Layer Data Generation for Efficient and Effective Data-free Knowledge Distillation
Minh-Tuan Tran, Trung Le, Xuan-May Le, Mehrtash Harandi, Quan Hung Tran, Dinh Phung
Data-Free Knowledge Distillation (DFKD) has made significant recent strides by transferring knowledge from a teacher neural network to a student neural network without accessing the original data. Nonetheless existing approaches encounter a significant challenge when attempting to generate samples from random noise inputs which inherently lack meaningful information. Consequently these models struggle to effectively map this noise to the ground-truth sample distribution resulting in prolonged training times and low-quality outputs. In this paper we propose a novel Noisy Layer Generation method (NAYER) which relocates the random source from the input to a noisy layer and utilizes the meaningful constant label-text embedding (LTE) as the input. The LTE is generated using the language model once and then stored in memory for all subsequent training processes. The significance of LTE lies in its ability to contain substantial meaningful inter-class information enabling the generation of high-quality samples with only a few training steps. Simultaneously the noisy layer plays a key role in addressing the issue of diversity in sample generation by preventing the model from overemphasizing the constrained label information. By reinitializing the noisy layer in each iteration we aim to facilitate the generation of diverse samples while still retaining the method's efficiency thanks to the ease of learning provided by LTE. Experiments carried out on multiple datasets demonstrate that our NAYER not only outperforms the state-of-the-art methods but also achieves speeds 5 to 15 times faster than previous approaches. The code is available at https://github.com/tmtuan1307/nayer. (An illustrative sketch follows this entry.)
https://openaccess.thecvf.com/content/CVPR2024/papers/Tran_NAYER_Noisy_Layer_Data_Generation_for_Efficient_and_Effective_Data-free_CVPR_2024_paper.pdf
http://arxiv.org/abs/2310.00258
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Tran_NAYER_Noisy_Layer_Data_Generation_for_Efficient_and_Effective_Data-free_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Tran_NAYER_Noisy_Layer_Data_Generation_for_Efficient_and_Effective_Data-free_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Tran_NAYER_Noisy_Layer_CVPR_2024_supplemental.pdf
null
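A minimal sketch of the noisy-layer idea described above: a constant label-text embedding is fed through a small layer whose weights are re-drawn every iteration, moving the randomness from the input into the layer. Dimensions, the LTE source, and the backbone are illustrative assumptions, not the released generator.

```python
import torch
import torch.nn as nn

class NoisyLayerGenerator(nn.Module):
    """Sketch: the random source is a re-initialized 'noisy layer' applied to a
    constant label-text embedding (LTE) rather than pure noise at the input."""

    def __init__(self, lte_dim=384, hidden=256, out_dim=3 * 32 * 32):
        super().__init__()
        self.noisy_layer = nn.Linear(lte_dim, hidden)   # re-initialized each round
        self.backbone = nn.Sequential(
            nn.ReLU(), nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, out_dim)
        )

    def reset_noisy_layer(self):
        # Diversity comes from re-drawing this layer's weights every iteration.
        nn.init.normal_(self.noisy_layer.weight, std=0.1)
        nn.init.zeros_(self.noisy_layer.bias)

    def forward(self, lte):
        return self.backbone(self.noisy_layer(lte))

lte = torch.randn(10, 384)        # stand-in for stored label-text embeddings
gen = NoisyLayerGenerator()
for _ in range(3):                # each iteration: new noisy layer, same LTE input
    gen.reset_noisy_layer()
    fake = gen(lte)
print(fake.shape)                 # torch.Size([10, 3072])
```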
OmniVec2 - A Novel Transformer based Network for Large Scale Multimodal and Multitask Learning
Siddharth Srivastava, Gaurav Sharma
We present a novel multimodal multitask network and associated training algorithm. The method is capable of ingesting data from approximately 12 different modalities namely image video audio text depth point cloud time series tabular graph X-ray infrared IMU and hyperspectral. The proposed approach utilizes modality specialized tokenizers a shared transformer architecture and cross-attention mechanisms to project the data from different modalities into a unified embedding space. It addresses multimodal and multitask scenarios by incorporating modality-specific task heads for different tasks in respective modalities. We propose a novel pretraining strategy with iterative modality switching to initialize the network and a training algorithm which trades off fully joint training over all modalities with training on pairs of modalities at a time. We provide comprehensive evaluation across 25 datasets from 12 modalities and show state of the art performances demonstrating the effectiveness of the proposed architecture pretraining strategy and adapted multitask training.
https://openaccess.thecvf.com/content/CVPR2024/papers/Srivastava_OmniVec2_-_A_Novel_Transformer_based_Network_for_Large_Scale_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Srivastava_OmniVec2_-_A_Novel_Transformer_based_Network_for_Large_Scale_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Srivastava_OmniVec2_-_A_Novel_Transformer_based_Network_for_Large_Scale_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Srivastava_OmniVec2_-_A_CVPR_2024_supplemental.pdf
null
Investigating Compositional Challenges in Vision-Language Models for Visual Grounding
Yunan Zeng, Yan Huang, Jinjin Zhang, Zequn Jie, Zhenhua Chai, Liang Wang
Pre-trained vision-language models (VLMs) have achieved high performance on various downstream tasks which have been widely used for visual grounding tasks in a weakly supervised manner. However despite the performance gains contributed by large vision and language pre-training we find that state-of-the-art VLMs struggle with compositional reasoning on grounding tasks. To demonstrate this we propose Attribute Relation and Priority grounding (ARPGrounding) benchmark to test VLMs' compositional reasoning ability on visual grounding tasks. ARPGrounding contains 11425 samples and evaluates the compositional understanding of VLMs in three dimensions: 1) attribute denoting comprehension of objects' properties; 2) relation indicating an understanding of relation between objects; 3) priority reflecting an awareness of the part of speech associated with nouns. Using the ARPGrounding benchmark we evaluate several mainstream VLMs. We empirically find that these models perform quite well on conventional visual grounding datasets achieving performance comparable to or surpassing state-of-the-art methods but showing strong deficiencies in compositional reasoning. Furthermore we propose a composition-aware fine-tuning pipeline demonstrating the potential to leverage cost-effective image-text annotations for enhancing the compositional understanding of VLMs in grounding tasks.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zeng_Investigating_Compositional_Challenges_in_Vision-Language_Models_for_Visual_Grounding_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zeng_Investigating_Compositional_Challenges_in_Vision-Language_Models_for_Visual_Grounding_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zeng_Investigating_Compositional_Challenges_in_Vision-Language_Models_for_Visual_Grounding_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zeng_Investigating_Compositional_Challenges_CVPR_2024_supplemental.pdf
null
6D-Diff: A Keypoint Diffusion Framework for 6D Object Pose Estimation
Li Xu, Haoxuan Qu, Yujun Cai, Jun Liu
Estimating the 6D object pose from a single RGB image often involves noise and indeterminacy due to challenges such as occlusions and cluttered backgrounds. Meanwhile diffusion models have shown appealing performance in generating high-quality images from random noise with high indeterminacy through step-by-step denoising. Inspired by their denoising capability we propose a novel diffusion-based framework (6D-Diff) to handle the noise and indeterminacy in object pose estimation for better performance. In our framework to establish accurate 2D-3D correspondence we formulate 2D keypoints detection as a reverse diffusion (denoising) process. To facilitate such a denoising process we design a Mixture-of-Cauchy-based forward diffusion process and condition the reverse process on the object appearance features. Extensive experiments on the LM-O and YCB-V datasets demonstrate the effectiveness of our framework.
https://openaccess.thecvf.com/content/CVPR2024/papers/Xu_6D-Diff_A_Keypoint_Diffusion_Framework_for_6D_Object_Pose_Estimation_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Xu_6D-Diff_A_Keypoint_Diffusion_Framework_for_6D_Object_Pose_Estimation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Xu_6D-Diff_A_Keypoint_Diffusion_Framework_for_6D_Object_Pose_Estimation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Xu_6D-Diff_A_Keypoint_CVPR_2024_supplemental.pdf
null
Generative Region-Language Pretraining for Open-Ended Object Detection
Chuang Lin, Yi Jiang, Lizhen Qu, Zehuan Yuan, Jianfei Cai
In recent research significant attention has been devoted to the open-vocabulary object detection task aiming to generalize beyond the limited number of classes labeled during training and detect objects described by arbitrary category names at inference. Compared with conventional object detection open-vocabulary object detection largely extends the object detection categories. However it relies on calculating the similarity between image regions and a set of arbitrary category names with a pretrained vision-and-language model. This implies that despite its open-set nature the task still needs the predefined object categories during the inference stage. This raises the question: What if we do not have exact knowledge of object categories during inference? In this paper we refer to this new setting as generative open-ended object detection which is a more general and practical problem. To address it we formulate object detection as a generative problem and propose a simple framework named GenerateU which can detect dense objects and generate their names in a free-form way. In particular we employ Deformable DETR as a region proposal generator with a language model translating visual regions to object names. To assess the free-form object detection task we introduce an evaluation method designed to quantitatively measure the performance of generative outcomes. Extensive experiments demonstrate strong zero-shot detection performance of our GenerateU. For example on the LVIS dataset our GenerateU achieves comparable results to the open-vocabulary object detection method GLIP even though the category names are not seen by GenerateU during inference. Code is available at: https://github.com/FoundationVision/GenerateU.
https://openaccess.thecvf.com/content/CVPR2024/papers/Lin_Generative_Region-Language_Pretraining_for_Open-Ended_Object_Detection_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.10191
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Lin_Generative_Region-Language_Pretraining_for_Open-Ended_Object_Detection_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Lin_Generative_Region-Language_Pretraining_for_Open-Ended_Object_Detection_CVPR_2024_paper.html
CVPR 2024
null
null
Enhancing Post-training Quantization Calibration through Contrastive Learning
Yuzhang Shang, Gaowen Liu, Ramana Rao Kompella, Yan Yan
Post-training quantization (PTQ) converts a pre-trained full-precision (FP) model into a quantized model in a training-free manner. Determining suitable quantization parameters such as scaling factors and weight rounding is the primary strategy for mitigating the impact of quantization noise (calibration) and restoring the performance of the quantized models. However the existing activation calibration methods have never considered information degradation between pre- (FP) and post-quantized activations. In this study we introduce a well-defined distributional metric from information theory mutual information into PTQ calibration. We aim to calibrate the quantized activations by maximizing the mutual information between the pre- and post-quantized activations. To realize this goal we establish a contrastive learning (CL) framework for the PTQ calibration where the quantization parameters are optimized through a self-supervised proxy task. Specifically by leveraging CL during the PTQ process we can benefit from pulling the positive pairs of quantized and FP activations collected from the same input samples while pushing negative pairs from different samples. Thanks to the ingeniously designed critic function we avoid the unwanted but often-encountered collapse solution in CL especially in calibration scenarios where the amount of calibration data is limited. Additionally we provide a theoretical guarantee that minimizing our designed loss is equivalent to maximizing the desired mutual information. Consequently the quantized activations retain more information which ultimately enhances the performance of the quantized network. Experimental results show that our method can effectively serve as an add-on module to existing SoTA PTQ methods. (An illustrative sketch follows this entry.)
https://openaccess.thecvf.com/content/CVPR2024/papers/Shang_Enhancing_Post-training_Quantization_Calibration_through_Contrastive_Learning_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Shang_Enhancing_Post-training_Quantization_Calibration_through_Contrastive_Learning_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Shang_Enhancing_Post-training_Quantization_Calibration_through_Contrastive_Learning_CVPR_2024_paper.html
CVPR 2024
null
null
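A minimal sketch of calibrating a quantization parameter with a contrastive proxy, in the spirit of the entry above: pre- and post-quantization activations of the same sample form positive pairs in an InfoNCE-style loss, while other samples in the batch act as negatives. The uniform fake quantizer, temperature, and shapes are assumptions, not the paper's exact critic function.

```python
import torch
import torch.nn.functional as F

def calibration_infonce(fp_act, q_act, temperature=0.1):
    """InfoNCE-style proxy loss between full-precision and quantized activations."""
    fp = F.normalize(fp_act.flatten(1), dim=1)    # (B, D)
    q = F.normalize(q_act.flatten(1), dim=1)      # (B, D)
    logits = q @ fp.T / temperature               # (B, B) similarities
    targets = torch.arange(fp.size(0))            # positives on the diagonal
    return F.cross_entropy(logits, targets)

def fake_quantize(x, scale):
    # Uniform 8-bit quantizer; `scale` plays the role of a calibration parameter.
    return (x / scale).round().clamp(-128, 127) * scale

acts = torch.randn(16, 64, 8, 8)                  # full-precision activations
scale = torch.tensor(0.05, requires_grad=True)
loss = calibration_infonce(acts, fake_quantize(acts, scale))
loss.backward()                                   # gradient reaches the scale factor
print(float(loss), float(scale.grad))
```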
Efficient Model Stealing Defense with Noise Transition Matrix
Dong-Dong Wu, Chilin Fu, Weichang Wu, Wenwen Xia, Xiaolu Zhang, Jun Zhou, Min-Ling Zhang
With the escalating complexity and investment cost of training deep neural networks safeguarding them from unauthorized usage and intellectual property theft has become imperative. In particular the rampant misuse of prediction APIs to replicate models without access to the original data or architecture poses grave security threats. Diverse defense strategies have emerged to address these vulnerabilities yet these defenses either incur heavy inference overheads or assume idealized attack scenarios. To address these challenges we revisit the utilization of the noise transition matrix as an efficient perturbation technique which injects noise into predicted posteriors in a linear manner and integrates seamlessly into existing systems with minimal overhead for model stealing defense. Provably with such perturbed posteriors the attacker's cloning process degrades into learning from noisy data. To optimize the noise transition matrix we propose a novel bi-level optimization training framework which preserves fidelity to the victim model's predictions while training against the surrogate model adversarially. Comprehensive experimental results demonstrate that our method effectively thwarts model stealing attacks and achieves minimal utility trade-offs outperforming existing state-of-the-art defenses. (An illustrative sketch follows this entry.)
https://openaccess.thecvf.com/content/CVPR2024/papers/Wu_Efficient_Model_Stealing_Defense_with_Noise_Transition_Matrix_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Wu_Efficient_Model_Stealing_Defense_with_Noise_Transition_Matrix_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Wu_Efficient_Model_Stealing_Defense_with_Noise_Transition_Matrix_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wu_Efficient_Model_Stealing_CVPR_2024_supplemental.pdf
null
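A minimal sketch of the linear posterior perturbation described above: the victim's predicted posteriors are multiplied by a row-stochastic noise transition matrix before being returned to the API caller. The fixed symmetric matrix below is an illustrative assumption; the paper instead learns the matrix via bi-level optimization.

```python
import numpy as np

def perturb_posteriors(probs, transition):
    """Linear posterior perturbation: each returned row is probs @ T, where T is a
    row-stochastic noise transition matrix. From the attacker's point of view,
    training a clone on these outputs amounts to learning from noisy labels."""
    return probs @ transition

rng = np.random.default_rng(0)
n_classes = 4

# Simple illustrative transition matrix: keep 80% mass on the predicted class,
# spread the remaining 20% uniformly over the others.
T = np.full((n_classes, n_classes), 0.2 / (n_classes - 1))
np.fill_diagonal(T, 0.8)

clean = rng.dirichlet(np.ones(n_classes), size=3)      # victim model posteriors
noisy = perturb_posteriors(clean, T)
print(clean.argmax(1), noisy.argmax(1))                # argmax preserved by this symmetric T
print(np.round(noisy.sum(1), 6))                       # rows remain valid distributions
```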
MeshPose: Unifying DensePose and 3D Body Mesh Reconstruction
Eric-Tuan Le, Antonis Kakolyris, Petros Koutras, Himmy Tam, Efstratios Skordos, George Papandreou, Riza Alp Güler, Iasonas Kokkinos
DensePose provides a pixel-accurate association of images with 3D mesh coordinates but does not provide a 3D mesh while Human Mesh Reconstruction (HMR) systems have high 2D reprojection error as measured by DensePose localization metrics. In this work we introduce MeshPose to jointly tackle DensePose and HMR. For this we first introduce new losses that allow us to use weak DensePose supervision to accurately localize in 2D a subset of the mesh vertices ('VertexPose'). We then lift these vertices to 3D yielding a low-poly body mesh ('MeshPose'). Our system is trained in an end-to-end manner and is the first HMR method to attain competitive DensePose accuracy while also being lightweight and amenable to efficient inference making it suitable for real-time AR applications.
https://openaccess.thecvf.com/content/CVPR2024/papers/Le_MeshPose_Unifying_DensePose_and_3D_Body_Mesh_Reconstruction_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Le_MeshPose_Unifying_DensePose_and_3D_Body_Mesh_Reconstruction_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Le_MeshPose_Unifying_DensePose_and_3D_Body_Mesh_Reconstruction_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Le_MeshPose_Unifying_DensePose_CVPR_2024_supplemental.zip
null
Unsupervised Salient Instance Detection
Xin Tian, Ke Xu, Rynson Lau
The significant amount of manual efforts in annotating pixel-level labels has triggered the advancement of unsupervised saliency learning. However without supervision signals state-of-the-art methods can only infer region-level saliency. In this paper we propose to explore the unsupervised salient instance detection (USID) problem for a more fine-grained visual understanding. Our key observation is that self-supervised transformer features may exhibit local similarities as well as different levels of contrast to other regions which provide informative cues to identify salient instances. Hence we propose SCoCo a novel network that models saliency coherence and contrast for USID. SCoCo includes two novel modules: (1) a global background adaptation (GBA) module with a scene-level contrastive loss to extract salient regions from the scene by searching the adaptive "saliency threshold" in the self-supervised transformer features and (2) a locality-aware similarity (LAS) module with an instance-level contrastive loss to group salient regions into instances by modeling the in-region saliency coherence and cross-region saliency contrasts. Extensive experiments show that SCoCo outperforms state-of-the-art weakly-supervised SID methods and carefully designed unsupervised baselines and has comparable performance to fully-supervised SID methods.
https://openaccess.thecvf.com/content/CVPR2024/papers/Tian_Unsupervised_Salient_Instance_Detection_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Tian_Unsupervised_Salient_Instance_Detection_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Tian_Unsupervised_Salient_Instance_Detection_CVPR_2024_paper.html
CVPR 2024
null
null
Enhancing Visual Document Understanding with Contrastive Learning in Large Visual-Language Models
Xin Li, Yunfei Wu, Xinghua Jiang, Zhihao Guo, Mingming Gong, Haoyu Cao, Yinsong Liu, Deqiang Jiang, Xing Sun
Recently the advent of Large Visual-Language Models (LVLMs) has received increasing attention across various domains particularly in the field of visual document understanding (VDU). Different from conventional vision-language tasks VDU is specifically concerned with text-rich scenarios containing abundant document elements. Nevertheless the importance of fine-grained features remains largely unexplored within the community of LVLMs leading to suboptimal performance in text-rich scenarios. In this paper we refer to this as the fine-grained feature collapse issue. With the aim of filling this gap we propose a contrastive learning framework termed Document Object COntrastive learning (DoCo) specifically tailored for the downstream tasks of VDU. DoCo leverages an auxiliary multimodal encoder to obtain the features of document objects and align them to the visual features generated by the vision encoder of the LVLM which enhances visual representation in text-rich scenarios. This indicates that contrastive learning between holistic visual representations and the multimodal fine-grained features of document objects can assist the vision encoder in acquiring more effective visual cues thereby enhancing the comprehension of text-rich documents in LVLMs. We also demonstrate that the proposed DoCo serves as a plug-and-play pre-training method which can be employed in the pre-training of various LVLMs without inducing any increase in computational complexity during the inference process. Extensive experimental results on multiple benchmarks of VDU reveal that LVLMs equipped with our proposed DoCo can achieve superior performance and mitigate the gap between VDU and generic vision-language tasks.
https://openaccess.thecvf.com/content/CVPR2024/papers/Li_Enhancing_Visual_Document_Understanding_with_Contrastive_Learning_in_Large_Visual-Language_CVPR_2024_paper.pdf
http://arxiv.org/abs/2402.19014
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Li_Enhancing_Visual_Document_Understanding_with_Contrastive_Learning_in_Large_Visual-Language_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Li_Enhancing_Visual_Document_Understanding_with_Contrastive_Learning_in_Large_Visual-Language_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_Enhancing_Visual_Document_CVPR_2024_supplemental.pdf
null
Move Anything with Layered Scene Diffusion
Jiawei Ren, Mengmeng Xu, Jui-Chieh Wu, Ziwei Liu, Tao Xiang, Antoine Toisoul
Diffusion models generate images with an unprecedented level of quality but how can we freely rearrange image layouts? Recent works generate controllable scenes via learning spatially disentangled latent codes but these methods do not apply to diffusion models due to their fixed forward process. In this work we propose SceneDiffusion to optimize a layered scene representation during the diffusion sampling process. Our key insight is that spatial disentanglement can be obtained by jointly denoising scene renderings at different spatial layouts. Our generated scenes support a wide range of spatial editing operations including moving resizing cloning and layer-wise appearance editing operations including object restyling and replacing. Moreover a scene can be generated conditioned on a reference image thus enabling object moving for in-the-wild images. Notably this approach is training-free compatible with general text-to-image diffusion models and responsive in less than a second.
https://openaccess.thecvf.com/content/CVPR2024/papers/Ren_Move_Anything_with_Layered_Scene_Diffusion_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.07178
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Ren_Move_Anything_with_Layered_Scene_Diffusion_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Ren_Move_Anything_with_Layered_Scene_Diffusion_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ren_Move_Anything_with_CVPR_2024_supplemental.pdf
null
GS-SLAM: Dense Visual SLAM with 3D Gaussian Splatting
Chi Yan, Delin Qu, Dan Xu, Bin Zhao, Zhigang Wang, Dong Wang, Xuelong Li
In this paper we introduce GS-SLAM that first utilizes 3D Gaussian representation in the Simultaneous Localization and Mapping (SLAM) system. It facilitates a better balance between efficiency and accuracy. Compared to recent SLAM methods employing neural implicit representations our method utilizes a real-time differentiable splatting rendering pipeline that offers significant speedup to map optimization and RGB-D rendering. Specifically we propose an adaptive expansion strategy that adds new or deletes noisy 3D Gaussians in order to efficiently reconstruct new observed scene geometry and improve the mapping of previously observed areas. This strategy is essential to extend 3D Gaussian representation to reconstruct the whole scene rather than synthesize a static object in existing methods. Moreover in the pose tracking process an effective coarse-to-fine technique is designed to select reliable 3D Gaussian representations to optimize camera pose resulting in runtime reduction and robust estimation. Our method achieves competitive performance compared with existing state-of-the-art real-time methods on the Replica TUM-RGBD datasets. Project page: https://gs-slam.github.io/
https://openaccess.thecvf.com/content/CVPR2024/papers/Yan_GS-SLAM_Dense_Visual_SLAM_with_3D_Gaussian_Splatting_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Yan_GS-SLAM_Dense_Visual_SLAM_with_3D_Gaussian_Splatting_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Yan_GS-SLAM_Dense_Visual_SLAM_with_3D_Gaussian_Splatting_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yan_GS-SLAM_Dense_Visual_CVPR_2024_supplemental.pdf
null
Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering
Tao Lu, Mulin Yu, Linning Xu, Yuanbo Xiangli, Limin Wang, Dahua Lin, Bo Dai
Neural rendering methods have significantly advanced photo-realistic 3D scene rendering in various academic and industrial applications. The recent 3D Gaussian Splatting method has achieved state-of-the-art rendering quality and speed combining the benefits of both primitive-based representations and volumetric representations. However it often leads to heavily redundant Gaussians that try to fit every training view neglecting the underlying scene geometry. Consequently the resulting model becomes less robust to significant view changes texture-less areas and lighting effects. We introduce Scaffold-GS which uses anchor points to distribute local 3D Gaussians and predicts their attributes on-the-fly based on viewing direction and distance within the view frustum. Anchor growing and pruning strategies are developed based on the importance of neural Gaussians to reliably improve the scene coverage. We show that our method effectively reduces redundant Gaussians while delivering high-quality rendering. We also demonstrate an enhanced capability to accommodate scenes with varying levels-of-detail and view-dependent observations without sacrificing the rendering speed. Project page: https://city-super.github.io/scaffold-gs. (An illustrative sketch follows this entry.)
https://openaccess.thecvf.com/content/CVPR2024/papers/Lu_Scaffold-GS_Structured_3D_Gaussians_for_View-Adaptive_Rendering_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Lu_Scaffold-GS_Structured_3D_Gaussians_for_View-Adaptive_Rendering_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Lu_Scaffold-GS_Structured_3D_Gaussians_for_View-Adaptive_Rendering_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Lu_Scaffold-GS_Structured_3D_CVPR_2024_supplemental.pdf
null
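A toy sketch of the anchor-based idea summarized above: anchors carry learnable features and spawn k neural Gaussians whose view-dependent attributes are predicted on the fly from viewing direction and distance. All dimensions, heads, and activations are illustrative assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn

class AnchorGaussians(nn.Module):
    """Sketch: anchors store features and predict attributes of their k spawned
    Gaussians from the viewing direction and distance."""

    def __init__(self, n_anchors=1000, feat_dim=32, k=10):
        super().__init__()
        self.k = k
        self.anchor_xyz = nn.Parameter(torch.randn(n_anchors, 3))
        self.anchor_feat = nn.Parameter(torch.randn(n_anchors, feat_dim))
        self.offsets = nn.Parameter(0.01 * torch.randn(n_anchors, k, 3))
        self.opacity_head = nn.Sequential(nn.Linear(feat_dim + 4, 64), nn.ReLU(), nn.Linear(64, k))
        self.color_head = nn.Sequential(nn.Linear(feat_dim + 4, 64), nn.ReLU(), nn.Linear(64, 3 * k))

    def forward(self, cam_pos):
        rel = self.anchor_xyz - cam_pos                     # (N, 3) camera-to-anchor offsets
        dist = rel.norm(dim=1, keepdim=True)                # (N, 1) viewing distance
        view = torch.cat([rel / dist, dist], dim=1)         # (N, 4) direction + distance
        cond = torch.cat([self.anchor_feat, view], dim=1)   # (N, feat_dim + 4)
        xyz = self.anchor_xyz.unsqueeze(1) + self.offsets   # (N, k, 3) Gaussian centres
        opacity = torch.sigmoid(self.opacity_head(cond))    # (N, k) view-dependent opacity
        color = torch.sigmoid(self.color_head(cond)).view(-1, self.k, 3)
        return xyz, opacity, color

xyz, opacity, color = AnchorGaussians()(torch.tensor([0.0, 0.0, -3.0]))
print(xyz.shape, opacity.shape, color.shape)
```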
Data Valuation and Detections in Federated Learning
Wenqian Li, Shuran Fu, Fengrui Zhang, Yan Pang
Federated Learning (FL) enables collaborative model training while preserving the privacy of raw data. A challenge in this framework is the fair and efficient valuation of data which is crucial for incentivizing clients to contribute high-quality data in the FL task. In scenarios involving numerous data clients within FL it is often the case that only a subset of clients and datasets are pertinent to a specific learning task while others might have either a negative or negligible impact on the model training process. This paper introduces a novel privacy-preserving method for evaluating client contributions and selecting relevant datasets without a pre-specified training algorithm in an FL task. Our proposed approach FedBary utilizes the Wasserstein distance within the federated context offering a new solution for data valuation in the FL framework. This method ensures transparent data valuation and efficient computation of the Wasserstein barycenter and reduces the dependence on validation datasets. Through extensive empirical experiments and theoretical analyses we demonstrate the advantages of this data valuation method as a promising avenue for FL research. (An illustrative sketch follows this entry.)
https://openaccess.thecvf.com/content/CVPR2024/papers/Li_Data_Valuation_and_Detections_in_Federated_Learning_CVPR_2024_paper.pdf
http://arxiv.org/abs/2311.05304
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Li_Data_Valuation_and_Detections_in_Federated_Learning_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Li_Data_Valuation_and_Detections_in_Federated_Learning_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_Data_Valuation_and_CVPR_2024_supplemental.pdf
null
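A rough sketch of Wasserstein-based data valuation in the spirit of the entry above, using a sliced (1D-projection) approximation in place of the paper's barycenter computation; the reference distribution, projection count, and client setups are illustrative assumptions.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def sliced_wasserstein(x, y, n_proj=64, rng=None):
    """Cheap 1D-projection approximation of the Wasserstein distance between two
    point clouds (a stand-in for the barycenter-based computation in the paper)."""
    rng = rng or np.random.default_rng(0)
    dists = []
    for _ in range(n_proj):
        direction = rng.normal(size=x.shape[1])
        direction /= np.linalg.norm(direction)
        dists.append(wasserstein_distance(x @ direction, y @ direction))
    return float(np.mean(dists))

rng = np.random.default_rng(1)
reference = rng.normal(size=(500, 10))                    # proxy for the task distribution
clients = {
    "relevant": rng.normal(size=(200, 10)),               # matches the reference
    "shifted": rng.normal(loc=2.0, size=(200, 10)),       # distribution shift
    "noise": rng.uniform(-5, 5, size=(200, 10)),          # largely irrelevant data
}
for name, data in clients.items():
    print(name, round(sliced_wasserstein(data, reference), 3))  # smaller = more valuable
```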
Classes Are Not Equal: An Empirical Study on Image Recognition Fairness
Jiequan Cui, Beier Zhu, Xin Wen, Xiaojuan Qi, Bei Yu, Hanwang Zhang
In this paper we present an empirical study on image recognition unfairness i.e. extreme class accuracy disparity on balanced data like ImageNet. We demonstrate that classes are not equal and unfairness is prevalent for image classification models across various datasets network architectures and model capacities. Moreover several intriguing properties of fairness are identified. First the unfairness lies in problematic representation rather than classifier bias distinguished from long-tailed recognition. Second with the proposed concept of Model Prediction Bias we investigate the origins of problematic representation during training optimization. Our findings reveal that models tend to exhibit greater prediction biases for classes that are more challenging to recognize which means that samples from more of the other classes are confused with the harder classes. These False Positives (FPs) then dominate the learning during optimization leading to poor accuracy on the harder classes. Further we conclude that data augmentation and representation learning algorithms improve overall performance by promoting fairness to some degree in image classification.
https://openaccess.thecvf.com/content/CVPR2024/papers/Cui_Classes_Are_Not_Equal_An_Empirical_Study_on_Image_Recognition_CVPR_2024_paper.pdf
http://arxiv.org/abs/2402.18133
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Cui_Classes_Are_Not_Equal_An_Empirical_Study_on_Image_Recognition_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Cui_Classes_Are_Not_Equal_An_Empirical_Study_on_Image_Recognition_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Cui_Classes_Are_Not_CVPR_2024_supplemental.pdf
null
Human Gaussian Splatting: Real-time Rendering of Animatable Avatars
Arthur Moreau, Jifei Song, Helisa Dhamo, Richard Shaw, Yiren Zhou, Eduardo Pérez-Pellitero
This work addresses the problem of real-time rendering of photorealistic human body avatars learned from multi-view videos. While the classical approaches to model and render virtual humans generally use a textured mesh recent research has developed neural body representations that achieve impressive visual quality. However these models are difficult to render in real-time and their quality degrades when the character is animated with body poses different from the training observations. We propose an animatable human model based on 3D Gaussian Splatting which has recently emerged as a very efficient alternative to neural radiance fields. The body is represented by a set of Gaussian primitives in a canonical space which is deformed with a coarse-to-fine approach that combines forward skinning and local non-rigid refinement. We describe how to learn our Human Gaussian Splatting (HuGS) model in an end-to-end fashion from multi-view observations and evaluate it against the state-of-the-art approaches for novel pose synthesis of clothed bodies. Our method achieves a 1.5 dB PSNR improvement over the state-of-the-art on the THuman4 dataset while being able to render in real-time (≈80 fps at 512x512 resolution).
https://openaccess.thecvf.com/content/CVPR2024/papers/Moreau_Human_Gaussian_Splatting_Real-time_Rendering_of_Animatable_Avatars_CVPR_2024_paper.pdf
http://arxiv.org/abs/2311.17113
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Moreau_Human_Gaussian_Splatting_Real-time_Rendering_of_Animatable_Avatars_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Moreau_Human_Gaussian_Splatting_Real-time_Rendering_of_Animatable_Avatars_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Moreau_Human_Gaussian_Splatting_CVPR_2024_supplemental.zip
null
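The HuGS entry above deforms canonical Gaussian primitives with forward skinning before local non-rigid refinement. Below is a minimal sketch of the forward-skinning step only, assuming canonical Gaussian centers, per-Gaussian skinning weights, and per-bone rigid transforms; the names and shapes are assumptions, not the released model.

```python
import numpy as np

def lbs_deform_centers(centers, weights, bone_transforms):
    """Linear blend skinning of canonical Gaussian centers.
    centers:          (N, 3) canonical-space means
    weights:          (N, B) per-Gaussian skinning weights (rows sum to 1)
    bone_transforms:  (B, 4, 4) canonical-to-posed bone transforms
    returns           (N, 3) posed-space means
    """
    # Blend the bone transforms per Gaussian: (N, 4, 4)
    blended = np.einsum("nb,bij->nij", weights, bone_transforms)
    # Apply each blended transform to its homogeneous center.
    homo = np.concatenate([centers, np.ones((centers.shape[0], 1))], axis=1)  # (N, 4)
    posed = np.einsum("nij,nj->ni", blended, homo)
    return posed[:, :3]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    centers = rng.normal(size=(5, 3))
    w = rng.random((5, 2)); w /= w.sum(axis=1, keepdims=True)
    T = np.stack([np.eye(4), np.eye(4)])
    T[1, :3, 3] = [0.0, 0.1, 0.0]     # second bone translated slightly
    print(lbs_deform_centers(centers, w, T))
```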
Multi-Scale 3D Gaussian Splatting for Anti-Aliased Rendering
Zhiwen Yan, Weng Fei Low, Yu Chen, Gim Hee Lee
3D Gaussians have recently emerged as a highly efficient representation for 3D reconstruction and rendering. Despite their high rendering quality and speed at high resolutions, both deteriorate drastically when the scene is rendered at lower resolutions or from faraway camera positions. During low-resolution or far-away rendering, the pixel size of the image can fall below the Nyquist frequency relative to the screen-space size of each splatted 3D Gaussian, leading to aliasing effects. Rendering is also drastically slowed down by the sequential alpha blending of more splatted Gaussians per pixel. To address these issues, we propose a multi-scale 3D Gaussian splatting algorithm that maintains Gaussians at different scales to represent the same scene: higher-resolution images are rendered with more small Gaussians, and lower-resolution images are rendered with fewer, larger Gaussians. With similar training time, our algorithm achieves 13%-66% PSNR and 160%-2400% rendering-speed improvements at 4x-128x scale rendering on the Mip-NeRF360 dataset compared to single-scale 3D Gaussian splatting. (A level-selection sketch follows this entry.)
https://openaccess.thecvf.com/content/CVPR2024/papers/Yan_Multi-Scale_3D_Gaussian_Splatting_for_Anti-Aliased_Rendering_CVPR_2024_paper.pdf
http://arxiv.org/abs/2311.17089
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Yan_Multi-Scale_3D_Gaussian_Splatting_for_Anti-Aliased_Rendering_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Yan_Multi-Scale_3D_Gaussian_Splatting_for_Anti-Aliased_Rendering_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yan_Multi-Scale_3D_Gaussian_CVPR_2024_supplemental.zip
null
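The multi-scale splatting idea above keeps several copies of the scene at different Gaussian scales and renders coarser levels for lower resolutions. Below is a minimal sketch of one possible level-selection rule, assuming power-of-two levels; the paper's actual selection and aggregation criteria may differ.

```python
import math

def select_scale_level(base_resolution: int, render_resolution: int, num_levels: int) -> int:
    """Pick a Gaussian scale level for the target render resolution.
    Level 0 holds the finest (smallest) Gaussians for full-resolution rendering;
    each subsequent level holds fewer, larger Gaussians for coarser renders.
    This is an illustrative rule of thumb, not the paper's exact criterion."""
    if render_resolution <= 0:
        raise ValueError("render_resolution must be positive")
    downscale = max(base_resolution / render_resolution, 1.0)
    level = int(math.floor(math.log2(downscale)))   # 1x -> 0, 2x -> 1, 4x -> 2, ...
    return min(level, num_levels - 1)

if __name__ == "__main__":
    for res in (1600, 800, 400, 100, 25):
        print(res, "->", select_scale_level(1600, res, num_levels=4))
```

In practice the choice could also factor in camera distance; the resolution ratio is used here purely for illustration.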
A Bayesian Approach to OOD Robustness in Image Classification
Prakhar Kaushik, Adam Kortylewski, Alan Yuille
An important and unsolved problem in computer vision is to ensure that algorithms are robust to changes in image domains. We address this problem in the scenario where we have access to images from the target domains but no annotations. Motivated by the challenges of the OOD-CV benchmark, where we encounter real-world Out-of-Domain (OOD) nuisances and occlusion, we introduce a novel Bayesian approach to OOD robustness for object classification. Our work extends Compositional Neural Networks (CompNets), which have been shown to be robust to occlusion but degrade badly when tested on OOD data. We exploit the fact that CompNets contain a generative head defined over feature vectors represented by von Mises-Fisher (vMF) kernels, which correspond roughly to object parts and can be learned without supervision. We observe that some vMF kernels are similar between different domains while others are not. This enables us to learn a transitional dictionary of vMF kernels that are intermediate between the source and target domains and to train the generative model on this dictionary using the annotations on the source domain, followed by iterative refinement. This approach, termed Unsupervised Generative Transition (UGT), performs very well in OOD scenarios even when occlusion is present. UGT is evaluated on different OOD benchmarks, including the OOD-CV dataset, several popular datasets (e.g., ImageNet-C), artificial image corruptions (including added occluders), and synthetic-to-real domain transfer, and does well in all scenarios, outperforming SOTA alternatives. (A simplified transitional-dictionary sketch follows this entry.)
https://openaccess.thecvf.com/content/CVPR2024/papers/Kaushik_A_Bayesian_Approach_to_OOD_Robustness_in_Image_Classification_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.07277
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Kaushik_A_Bayesian_Approach_to_OOD_Robustness_in_Image_Classification_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Kaushik_A_Bayesian_Approach_to_OOD_Robustness_in_Image_Classification_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Kaushik_A_Bayesian_Approach_CVPR_2024_supplemental.pdf
null
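The UGT entry above learns a transitional dictionary of vMF kernels shared between source and target domains. The sketch below reduces that idea to matching unit mean directions by cosine similarity and averaging the matched pairs; the threshold, matching rule, and function name are assumptions, not the paper's procedure.

```python
import numpy as np

def transitional_dictionary(src_mu, tgt_mu, sim_thresh=0.7):
    """Build a simple transitional set of unit mean directions from two vMF
    kernel dictionaries (rows are unit vectors). Kernels whose best cross-domain
    cosine similarity exceeds sim_thresh are treated as shared and averaged;
    the rest are kept from both domains. An illustrative simplification only."""
    sims = src_mu @ tgt_mu.T                  # cosine similarity between unit vectors
    best_tgt = sims.argmax(axis=1)
    shared, src_only = [], []
    matched_tgt = set()
    for i, j in enumerate(best_tgt):
        if sims[i, j] >= sim_thresh:
            mu = src_mu[i] + tgt_mu[j]
            shared.append(mu / np.linalg.norm(mu))   # renormalise the averaged direction
            matched_tgt.add(j)
        else:
            src_only.append(src_mu[i])
    tgt_only = [tgt_mu[j] for j in range(tgt_mu.shape[0]) if j not in matched_tgt]
    return np.array(shared + src_only + tgt_only)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    def unit(x): return x / np.linalg.norm(x, axis=1, keepdims=True)
    src = unit(rng.normal(size=(6, 8)))
    tgt = unit(src + 0.1 * rng.normal(size=(6, 8)))   # mostly similar kernels
    print(transitional_dictionary(src, tgt).shape)
```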
Unified-IO 2: Scaling Autoregressive Multimodal Models with Vision Language Audio and Action
Jiasen Lu, Christopher Clark, Sangho Lee, Zichen Zhang, Savya Khosla, Ryan Marten, Derek Hoiem, Aniruddha Kembhavi
We present Unified-IO 2, a multimodal and multi-skill unified model capable of following novel instructions. Unified-IO 2 can use text, images, audio, and/or videos as input and can generate text, image, or audio outputs, which is accomplished in a unified way by tokenizing these different inputs and outputs into a shared semantic space that can then be processed by a single encoder-decoder transformer model. Unified-IO 2 is trained from scratch on a custom-built multimodal pre-training corpus and then learns an expansive set of skills through fine-tuning on over 120 datasets, including datasets for segmentation, object detection, image editing, audio localization, video tracking, embodied AI, and 3D detection. To facilitate instruction-following, we add prompts and other data augmentations to these tasks to allow Unified-IO 2 to generalize these skills to new tasks zero-shot. Unified-IO 2 is the first model to be trained on such a diverse and wide-reaching set of skills and unify three separate generation capabilities. Unified-IO 2 achieves state-of-the-art performance on the multi-task GRIT benchmark and achieves strong results on 30 diverse datasets, including SEED-Bench image and video understanding, TIFA image generation, VQA 2.0, ScienceQA, VIMA robotic manipulation, VGG-Sound, and Kinetics-Sounds, and can perform unseen tasks and generate free-form responses. We release our model and code to facilitate future work.
https://openaccess.thecvf.com/content/CVPR2024/papers/Lu_Unified-IO_2_Scaling_Autoregressive_Multimodal_Models_with_Vision_Language_Audio_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.17172
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Lu_Unified-IO_2_Scaling_Autoregressive_Multimodal_Models_with_Vision_Language_Audio_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Lu_Unified-IO_2_Scaling_Autoregressive_Multimodal_Models_with_Vision_Language_Audio_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Lu_Unified-IO_2_Scaling_CVPR_2024_supplemental.pdf
null
Joint Reconstruction of 3D Human and Object via Contact-Based Refinement Transformer
Hyeongjin Nam, Daniel Sungho Jung, Gyeongsik Moon, Kyoung Mu Lee
Human-object contact serves as a strong cue for understanding how humans physically interact with objects. Nevertheless, utilizing human-object contact information for the joint reconstruction of 3D human and object from a single image has not been widely explored. In this work we present a novel joint 3D human-object reconstruction method (CONTHO) that effectively exploits contact information between humans and objects. There are two core designs in our system: 1) 3D-guided contact estimation and 2) contact-based 3D human and object refinement. First, for accurate human-object contact estimation, CONTHO initially reconstructs 3D humans and objects and utilizes them as explicit 3D guidance for contact estimation. Second, to refine the initial reconstructions of the 3D human and object, we propose a novel contact-based refinement Transformer that effectively aggregates human features and object features based on the estimated human-object contact. The proposed contact-based refinement prevents the learning of erroneous correlations between human and object, which enables accurate 3D reconstruction. As a result, CONTHO achieves state-of-the-art performance in both human-object contact estimation and joint reconstruction of 3D human and object. The code is available at https://github.com/dqj5182/CONTHO_RELEASE.
https://openaccess.thecvf.com/content/CVPR2024/papers/Nam_Joint_Reconstruction_of_3D_Human_and_Object_via_Contact-Based_Refinement_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.04819
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Nam_Joint_Reconstruction_of_3D_Human_and_Object_via_Contact-Based_Refinement_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Nam_Joint_Reconstruction_of_3D_Human_and_Object_via_Contact-Based_Refinement_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Nam_Joint_Reconstruction_of_CVPR_2024_supplemental.pdf
null
TIM: A Time Interval Machine for Audio-Visual Action Recognition
Jacob Chalk, Jaesung Huh, Evangelos Kazakos, Andrew Zisserman, Dima Damen
Diverse actions give rise to rich audio-visual signals in long videos. Recent works showcase that the two modalities of audio and video exhibit different temporal extents of events and distinct labels. We address the interplay between the two modalities in long videos by explicitly modelling the temporal extents of audio and visual events. We propose the Time Interval Machine (TIM), where a modality-specific time interval poses as a query to a transformer encoder that ingests a long video input. The encoder then attends to the specified interval, as well as the surrounding context in both modalities, in order to recognise the ongoing action. We test TIM on three long audio-visual video datasets: EPIC-KITCHENS, Perception Test, and AVE, reporting state-of-the-art (SOTA) results for recognition. On EPIC-KITCHENS, we beat the previous SOTA, which utilises LLMs and significantly larger pre-training, by 2.9% in top-1 action recognition accuracy. Additionally, we show that TIM can be adapted for action detection using dense multi-scale interval queries, outperforming SOTA on EPIC-KITCHENS-100 for most metrics and showing strong performance on the Perception Test. Our ablations show the critical role of integrating the two modalities and modelling their time intervals in achieving this performance. Code and models at: https://github.com/JacobChalk/TIM. (An interval-query encoding sketch follows this entry.)
https://openaccess.thecvf.com/content/CVPR2024/papers/Chalk_TIM_A_Time_Interval_Machine_for_Audio-Visual_Action_Recognition_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.05559
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Chalk_TIM_A_Time_Interval_Machine_for_Audio-Visual_Action_Recognition_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Chalk_TIM_A_Time_Interval_Machine_for_Audio-Visual_Action_Recognition_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Chalk_TIM_A_Time_CVPR_2024_supplemental.zip
null
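TIM, in the entry above, turns a modality-specific time interval into a query for a transformer encoder. Below is a hedged sketch of one way such a query could be formed, combining sinusoidal encodings of the interval endpoints with a modality embedding; the encoding choice, dimensions, and function names are assumptions rather than the released implementation.

```python
import numpy as np

def sinusoidal(t: float, dim: int) -> np.ndarray:
    """Standard sinusoidal encoding of a scalar time value (dim must be even)."""
    half = dim // 2
    freqs = 1.0 / (10000 ** (np.arange(half) / half))
    angles = t * freqs
    return np.concatenate([np.sin(angles), np.cos(angles)])

def interval_query(start_s: float, end_s: float, modality: str, dim: int = 64) -> np.ndarray:
    """Encode a modality-specific time interval as one query vector: sinusoidal
    encodings of the start and end times plus a modality embedding (a fixed
    random vector here, standing in for a learned embedding)."""
    seed = {"audio": 0, "visual": 1}.get(modality, 7)
    modality_emb = np.random.default_rng(seed).normal(scale=0.02, size=2 * dim)
    return np.concatenate([sinusoidal(start_s, dim), sinusoidal(end_s, dim)]) + modality_emb

if __name__ == "__main__":
    q_audio = interval_query(3.2, 4.1, "audio")
    q_visual = interval_query(2.0, 6.5, "visual")
    print(q_audio.shape, q_visual.shape)   # (128,) (128,)
```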
The Devil is in the Details: StyleFeatureEditor for Detail-Rich StyleGAN Inversion and High Quality Image Editing
Denis Bobkov, Vadim Titov, Aibek Alanov, Dmitry Vetrov
The task of manipulating real image attributes through StyleGAN inversion has been extensively researched. This process involves searching for latent variables of a well-trained StyleGAN generator that can synthesize a real image, modifying these latent variables, and then synthesizing an image with the desired edits. A balance must be struck between the quality of the reconstruction and the ability to edit. Earlier studies utilized the low-dimensional W-space for latent search, which facilitated effective editing but struggled with reconstructing intricate details. More recent research has turned to the high-dimensional feature space F, which successfully inverts the input image but loses much of the detail during editing. In this paper we introduce StyleFeatureEditor, a novel method that enables editing in both w-latents and F-latents. This technique not only allows for the reconstruction of finer image details but also ensures their preservation during editing. We also present a new training pipeline specifically designed to train our model to accurately edit F-latents. Our method is compared with state-of-the-art encoding approaches, demonstrating that our model excels in terms of reconstruction quality and is capable of editing even challenging out-of-domain examples.
https://openaccess.thecvf.com/content/CVPR2024/papers/Bobkov_The_Devil_is_in_the_Details_StyleFeatureEditor_for_Detail-Rich_StyleGAN_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Bobkov_The_Devil_is_in_the_Details_StyleFeatureEditor_for_Detail-Rich_StyleGAN_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Bobkov_The_Devil_is_in_the_Details_StyleFeatureEditor_for_Detail-Rich_StyleGAN_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Bobkov_The_Devil_is_CVPR_2024_supplemental.pdf
null
Unbiased Estimator for Distorted Conics in Camera Calibration
Chaehyeon Song, Jaeho Shin, Myung-Hwan Jeon, Jongwoo Lim, Ayoung Kim
In the literature, points and conics have been major features for camera geometric calibration. Although conics are more informative features than points, the loss of the conic property under distortion has critically limited the utility of conic features in camera calibration. Many existing approaches addressed conic-based calibration by ignoring distortion or introducing 3D spherical targets to circumvent this limitation. In this paper we present a novel formulation for conic-based calibration using moments. Our derivation is based on the mathematical finding that the first moment can be estimated without bias even under distortion. This allows us to track moment changes during projection and distortion, ensuring the preservation of the first moment of the distorted conic. With an unbiased estimator, the circular patterns can be accurately detected at the sub-pixel level and can now be fully exploited for an entire calibration pipeline, resulting in significantly improved calibration. The entire code is readily available from https://github.com/ChaehyeonSong/discocal. (A small moment-based centroid sketch follows this entry.)
https://openaccess.thecvf.com/content/CVPR2024/papers/Song_Unbiased_Estimator_for_Distorted_Conics_in_Camera_Calibration_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.04583
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Song_Unbiased_Estimator_for_Distorted_Conics_in_Camera_Calibration_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Song_Unbiased_Estimator_for_Distorted_Conics_in_Camera_Calibration_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Song_Unbiased_Estimator_for_CVPR_2024_supplemental.pdf
null
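The calibration entry above builds on the behaviour of a conic's first moment under distortion. The toy below only illustrates the underlying issue numerically: the centroid of a distorted circular region generally differs from the distortion of the original centre, which is the kind of bias a moment-based estimator is designed to handle. The sampling density and distortion model are arbitrary choices for illustration, not the paper's derivation.

```python
import numpy as np

def region_first_moment(points: np.ndarray) -> np.ndarray:
    """First moment (centroid) of a region represented by densely sampled
    interior points: simply the mean of the point coordinates."""
    return points.mean(axis=0)

def radial_distort(pts: np.ndarray, k1: float) -> np.ndarray:
    """Apply a simple one-parameter radial distortion x' = x * (1 + k1 * r^2)."""
    r2 = np.sum(pts ** 2, axis=1, keepdims=True)
    return pts * (1.0 + k1 * r2)

if __name__ == "__main__":
    # Densely sample the interior of a circular pattern of radius 0.2 centred at (0.3, 0.1).
    rng = np.random.default_rng(3)
    n = 200_000
    r = 0.2 * np.sqrt(rng.random(n))
    theta = 2 * np.pi * rng.random(n)
    circle = np.stack([0.3 + r * np.cos(theta), 0.1 + r * np.sin(theta)], axis=1)

    distorted = radial_distort(circle, k1=-0.3)
    print("centroid of distorted region :", region_first_moment(distorted))
    print("distorted original centre    :", radial_distort(np.array([[0.3, 0.1]]), k1=-0.3)[0])
    # The two differ, which is why naively treating the distorted centre as the
    # observed blob centroid is biased; the paper addresses this analytically via moments.
```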
MultiPhys: Multi-Person Physics-aware 3D Motion Estimation
Nicolas Ugrinovic, Boxiao Pan, Georgios Pavlakos, Despoina Paschalidou, Bokui Shen, Jordi Sanchez-Riera, Francesc Moreno-Noguer, Leonidas Guibas
We introduce MultiPhys, a method designed for recovering multi-person motion from monocular videos. Our focus lies in capturing coherent spatial placement between pairs of individuals across varying degrees of engagement. MultiPhys, being physically aware, exhibits robustness to jittering and occlusions and effectively eliminates penetration issues between the two individuals. We devise a pipeline in which the motion estimated by a kinematic-based method is fed into a physics simulator in an autoregressive manner. We introduce distinct components that enable our model to harness the simulator's properties without compromising the accuracy of the kinematic estimates. This results in final motion estimates that are both kinematically coherent and physically compliant. Extensive evaluations on three challenging datasets characterized by substantial inter-person interaction show that our method significantly reduces errors associated with penetration and foot skating, while performing competitively with the state of the art on motion accuracy and smoothness.
https://openaccess.thecvf.com/content/CVPR2024/papers/Ugrinovic_MultiPhys_Multi-Person_Physics-aware_3D_Motion_Estimation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.11987
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Ugrinovic_MultiPhys_Multi-Person_Physics-aware_3D_Motion_Estimation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Ugrinovic_MultiPhys_Multi-Person_Physics-aware_3D_Motion_Estimation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ugrinovic_MultiPhys_Multi-Person_Physics-aware_CVPR_2024_supplemental.pdf
null
Multi-Level Neural Scene Graphs for Dynamic Urban Environments
Tobias Fischer, Lorenzo Porzi, Samuel Rota Bulo, Marc Pollefeys, Peter Kontschieder
We estimate the radiance field of large-scale dynamic areas from multiple vehicle captures under varying environmental conditions. Previous works in this domain are either restricted to static environments, do not scale to more than a single short video, or struggle to separately represent dynamic object instances. To this end, we present a novel decomposable radiance field approach for dynamic urban environments. We propose a multi-level neural scene graph representation that scales to thousands of images from dozens of sequences with hundreds of fast-moving objects. To enable efficient training and rendering of our representation, we develop a fast composite ray sampling and rendering scheme. To test our approach in urban driving scenarios, we introduce a new novel view synthesis benchmark. We show that our approach outperforms prior art by a significant margin on both established and our proposed benchmark, while being faster in training and rendering.
https://openaccess.thecvf.com/content/CVPR2024/papers/Fischer_Multi-Level_Neural_Scene_Graphs_for_Dynamic_Urban_Environments_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.00168
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Fischer_Multi-Level_Neural_Scene_Graphs_for_Dynamic_Urban_Environments_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Fischer_Multi-Level_Neural_Scene_Graphs_for_Dynamic_Urban_Environments_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Fischer_Multi-Level_Neural_Scene_CVPR_2024_supplemental.zip
null
Would Deep Generative Models Amplify Bias in Future Models?
Tianwei Chen, Yusuke Hirota, Mayu Otani, Noa Garcia, Yuta Nakashima
We investigate the impact of deep generative models on potential social biases in upcoming computer vision models. As the internet witnesses an increasing influx of AI-generated images, concerns arise regarding inherent biases that may accompany them, potentially leading to the dissemination of harmful content. This paper explores whether a detrimental feedback loop, resulting in bias amplification, would occur if generated images were used as the training data for future models. We conduct simulations by progressively substituting original images in the COCO and CC3M datasets with images generated through Stable Diffusion. The modified datasets are used to train OpenCLIP and image captioning models, which we evaluate in terms of quality and bias. Contrary to expectations, our findings indicate that introducing generated images during training does not uniformly amplify bias. Instead, instances of bias mitigation across specific tasks are observed. We further explore the factors that may influence these phenomena, such as artifacts in image generation (e.g., blurry faces) or pre-existing biases in the original datasets. (A simple substitution-schedule sketch follows this entry.)
https://openaccess.thecvf.com/content/CVPR2024/papers/Chen_Would_Deep_Generative_Models_Amplify_Bias_in_Future_Models_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.03242
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Chen_Would_Deep_Generative_Models_Amplify_Bias_in_Future_Models_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Chen_Would_Deep_Generative_Models_Amplify_Bias_in_Future_Models_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Chen_Would_Deep_Generative_CVPR_2024_supplemental.pdf
null
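The study above progressively substitutes real training images with Stable Diffusion generations. Below is a minimal sketch of such a substitution schedule, assuming index-paired real/generated file lists; the actual pairing and training protocol are described in the paper.

```python
import random
from typing import List

def build_mixed_split(real: List[str], generated: List[str], replace_frac: float, seed: int = 0) -> List[str]:
    """Return a training list in which a fraction `replace_frac` of the real
    images is replaced by generated counterparts (paired by index here for
    simplicity). Illustrative only."""
    assert 0.0 <= replace_frac <= 1.0 and len(real) == len(generated)
    rng = random.Random(seed)
    idx = list(range(len(real)))
    rng.shuffle(idx)
    replaced = set(idx[:int(round(replace_frac * len(real)))])
    return [generated[i] if i in replaced else real[i] for i in range(len(real))]

if __name__ == "__main__":
    real = [f"coco/real_{i}.jpg" for i in range(10)]       # hypothetical paths
    gen = [f"sd/gen_{i}.jpg" for i in range(10)]            # hypothetical paths
    for frac in (0.0, 0.2, 0.5, 1.0):
        split = build_mixed_split(real, gen, frac)
        print(frac, sum(p.startswith("sd/") for p in split), "generated images")
```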
Bayes' Rays: Uncertainty Quantification for Neural Radiance Fields
Lily Goli, Cody Reading, Silvia Sellán, Alec Jacobson, Andrea Tagliasacchi
Neural Radiance Fields (NeRFs) have shown promise in applications like view synthesis and depth estimation, but learning from multiview images faces inherent uncertainties. Current methods to quantify them are either heuristic or computationally demanding. We introduce BayesRays, a post-hoc framework to evaluate uncertainty in any pretrained NeRF without modifying the training process. Our method establishes a volumetric uncertainty field using spatial perturbations and a Bayesian Laplace approximation. We derive our algorithm statistically and show its superior performance in key metrics and applications. Additional results available at: https://bayesrays.github.io/ (A toy Laplace-approximation sketch follows this entry.)
https://openaccess.thecvf.com/content/CVPR2024/papers/Goli_Bayes_Rays_Uncertainty_Quantification_for_Neural_Radiance_Fields_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Goli_Bayes_Rays_Uncertainty_Quantification_for_Neural_Radiance_Fields_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Goli_Bayes_Rays_Uncertainty_Quantification_for_Neural_Radiance_Fields_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Goli_Bayes_Rays_Uncertainty_CVPR_2024_supplemental.pdf
null
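BayesRays, above, relies on a Laplace approximation over a perturbation field. The sketch below shows a diagonal Laplace approximation on a toy least-squares problem only, so the link to NeRF geometry is omitted; the noise and prior variances are made-up values.

```python
import numpy as np

def diagonal_laplace_variance(residual_jacobian: np.ndarray, noise_var: float, prior_var: float) -> np.ndarray:
    """Diagonal Laplace approximation for a Gaussian likelihood with Jacobian J:
    the Hessian of the negative log-posterior is approximated (Gauss-Newton) by
    J^T J / noise_var + I / prior_var, and each parameter's posterior variance is
    approximated by the inverse of the corresponding Hessian diagonal entry."""
    hess_diag = (residual_jacobian ** 2).sum(axis=0) / noise_var + 1.0 / prior_var
    return 1.0 / hess_diag

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    # Toy setup: 100 observations, 5 parameters; the last parameter is barely observed.
    J = rng.normal(size=(100, 5))
    J[:, -1] *= 0.01
    var = diagonal_laplace_variance(J, noise_var=0.1, prior_var=1.0)
    print(np.round(var, 4))
```

Parameters that are weakly constrained by the data keep a variance close to the prior, which is the qualitative signal a post-hoc uncertainty field is after.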
NIVeL: Neural Implicit Vector Layers for Text-to-Vector Generation
Vikas Thamizharasan, Difan Liu, Matthew Fisher, Nanxuan Zhao, Evangelos Kalogerakis, Michal Lukac
The success of denoising diffusion models in representing rich data distributions over 2D raster images has prompted research on extending them to other data representations, such as vector graphics. Unfortunately, due to their variable structure and the scarcity of vector training data, directly applying diffusion models to this domain remains a challenging problem. Using workarounds like optimization via Score Distillation Sampling (SDS) is also fraught with difficulty, as vector representations are non-trivial to directly optimize and tend to result in implausible geometries such as redundant or self-intersecting shapes. NIVeL addresses these challenges by reinterpreting the problem on an alternative intermediate domain that preserves the desirable properties of vector graphics, mainly sparsity of representation and resolution-independence. This alternative domain is based on neural implicit fields expressed in a set of decomposable, editable layers. Based on our experiments, NIVeL produces text-to-vector graphics results of significantly better quality than the state of the art.
https://openaccess.thecvf.com/content/CVPR2024/papers/Thamizharasan_NIVeL_Neural_Implicit_Vector_Layers_for_Text-to-Vector_Generation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2405.15217
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Thamizharasan_NIVeL_Neural_Implicit_Vector_Layers_for_Text-to-Vector_Generation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Thamizharasan_NIVeL_Neural_Implicit_Vector_Layers_for_Text-to-Vector_Generation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Thamizharasan_NIVeL_Neural_Implicit_CVPR_2024_supplemental.pdf
null
Driving-Video Dehazing with Non-Aligned Regularization for Safety Assistance
Junkai Fan, Jiangwei Weng, Kun Wang, Yijun Yang, Jianjun Qian, Jun Li, Jian Yang
Real driving-video dehazing poses a significant challenge due to the inherent difficulty of acquiring precisely aligned hazy/clear video pairs for effective model training, especially in dynamic driving scenarios with unpredictable weather conditions. In this paper we propose a pioneering approach that addresses this challenge through a non-aligned regularization strategy. Our core concept involves identifying clear frames that closely match hazy frames, which serve as references to supervise a video dehazing network. Our approach comprises two key components: reference matching and video dehazing. First, we introduce a non-aligned reference frame matching module, leveraging an adaptive sliding window to match high-quality reference frames from clear videos. Second, the video dehazing network incorporates flow-guided cosine attention sampler and deformable cosine attention fusion modules to enhance spatial multi-frame alignment and fuse their improved information. To validate our approach, we collect GoProHazy, a dataset captured effortlessly with GoPro cameras in diverse rural and urban road environments. Extensive experiments demonstrate the superiority of the proposed method over current state-of-the-art methods in the challenging task of real driving-video dehazing. Project page.
https://openaccess.thecvf.com/content/CVPR2024/papers/Fan_Driving-Video_Dehazing_with_Non-Aligned_Regularization_for_Safety_Assistance_CVPR_2024_paper.pdf
http://arxiv.org/abs/2405.09996
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Fan_Driving-Video_Dehazing_with_Non-Aligned_Regularization_for_Safety_Assistance_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Fan_Driving-Video_Dehazing_with_Non-Aligned_Regularization_for_Safety_Assistance_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Fan_Driving-Video_Dehazing_with_CVPR_2024_supplemental.pdf
null
Is Vanilla MLP in Neural Radiance Field Enough for Few-shot View Synthesis?
Hanxin Zhu, Tianyu He, Xin Li, Bingchen Li, Zhibo Chen
Neural Radiance Field (NeRF) has achieved superior performance for novel view synthesis by modeling the scene with a Multi-Layer Perceptron (MLP) and a volume rendering procedure; however, when fewer known views are given (i.e., few-shot view synthesis), the model is prone to overfitting the given views. To handle this issue, previous efforts have been made towards leveraging learned priors or introducing additional regularizations. In contrast, in this paper we, for the first time, provide an orthogonal method from the perspective of network structure. Given the observation that trivially reducing the number of model parameters alleviates the overfitting issue but at the cost of missing details, we propose the multi-input MLP (mi-MLP), which incorporates the inputs (i.e., location and viewing direction) of the vanilla MLP into each layer to prevent overfitting without harming detailed synthesis. To further reduce artifacts, we propose to model colors and volume density separately and present two regularization terms. Extensive experiments on multiple datasets demonstrate that: 1) although the proposed mi-MLP is easy to implement, it is surprisingly effective, boosting the PSNR of the baseline from 14.73 to 24.23; and 2) the overall framework achieves state-of-the-art results on a wide range of benchmarks. We will release the code upon publication. (A minimal mi-MLP layer sketch follows this entry.)
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhu_Is_Vanilla_MLP_in_Neural_Radiance_Field_Enough_for_Few-shot_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.06092
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhu_Is_Vanilla_MLP_in_Neural_Radiance_Field_Enough_for_Few-shot_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhu_Is_Vanilla_MLP_in_Neural_Radiance_Field_Enough_for_Few-shot_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhu_Is_Vanilla_MLP_CVPR_2024_supplemental.pdf
null
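The mi-MLP idea above is concrete enough to sketch: feed the raw inputs into every layer instead of only the first. The module below is a minimal PyTorch illustration with assumed layer widths and a generic RGB-plus-density head, not the authors' released architecture.

```python
import torch
import torch.nn as nn

class MiMLP(nn.Module):
    """Minimal sketch of a multi-input MLP: the raw input (e.g. encoded position
    and view direction) is concatenated to the hidden features at every layer,
    instead of only at the first layer as in a vanilla NeRF MLP."""

    def __init__(self, in_dim: int, hidden: int = 64, depth: int = 4, out_dim: int = 4):
        super().__init__()
        layers = []
        feat = 0
        for _ in range(depth):
            layers.append(nn.Linear(feat + in_dim, hidden))  # every layer re-sees the input
            feat = hidden
        self.layers = nn.ModuleList(layers)
        self.head = nn.Linear(feat + in_dim, out_dim)        # e.g. RGB + density
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = x.new_zeros(x.shape[0], 0)                       # start with empty features
        for layer in self.layers:
            h = self.act(layer(torch.cat([h, x], dim=-1)))
        return self.head(torch.cat([h, x], dim=-1))

if __name__ == "__main__":
    model = MiMLP(in_dim=6)                                  # e.g. 3D position + 3D direction
    out = model(torch.randn(8, 6))
    print(out.shape)                                         # torch.Size([8, 4])
```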
CVT-xRF: Contrastive In-Voxel Transformer for 3D Consistent Radiance Fields from Sparse Inputs
Yingji Zhong, Lanqing Hong, Zhenguo Li, Dan Xu
Neural Radiance Fields (NeRF) have shown impressive capabilities for photorealistic novel view synthesis when trained on dense inputs. However, when trained on sparse inputs, NeRF typically encounters issues of incorrect density or color predictions, mainly due to insufficient coverage of the scene causing partial and sparse supervision, thus leading to significant performance degradation. While existing works mainly consider ray-level consistency to construct 2D learning regularization based on rendered color, depth, or semantics on image planes, in this paper we propose a novel approach that models 3D spatial field consistency to improve NeRF's performance with sparse inputs. Specifically, we first adopt a voxel-based ray sampling strategy to ensure that the sampled rays intersect with a certain voxel in 3D space. We then randomly sample additional points within the voxel and apply a Transformer to infer the properties of other points on each ray, which are then incorporated into the volume rendering. By backpropagating through the rendering loss, we enhance the consistency among neighboring points. Additionally, we propose to use a contrastive loss on the encoder output of the Transformer to further improve consistency within each voxel. Experiments demonstrate that our method yields significant improvement over different radiance fields in the sparse-inputs setting and achieves comparable performance with current works. The project page for this paper is available at https://zhongyingji.github.io/CVT-xRF. (A brute-force voxel-aware ray-selection sketch follows this entry.)
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhong_CVT-xRF_Contrastive_In-Voxel_Transformer_for_3D_Consistent_Radiance_Fields_from_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhong_CVT-xRF_Contrastive_In-Voxel_Transformer_for_3D_Consistent_Radiance_Fields_from_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhong_CVT-xRF_Contrastive_In-Voxel_Transformer_for_3D_Consistent_Radiance_Fields_from_CVPR_2024_paper.html
CVPR 2024
null
null
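CVT-xRF, above, starts from a voxel-based ray sampling strategy that guarantees sampled rays intersect occupied voxels. Below is a brute-force sketch of that selection step, assuming a known binary occupancy grid; the paper's actual sampler is more involved.

```python
import numpy as np

def rays_hitting_occupied_voxels(origins, dirs, occupancy, voxel_size, grid_min, n_samples=64, t_max=4.0):
    """Return a boolean mask over rays marking those whose samples fall inside at
    least one occupied voxel: sample points along each ray, convert them to voxel
    indices, and look up the occupancy grid."""
    t = np.linspace(0.0, t_max, n_samples)                              # (S,)
    pts = origins[:, None, :] + t[None, :, None] * dirs[:, None, :]     # (R, S, 3)
    idx = np.floor((pts - grid_min) / voxel_size).astype(int)           # voxel indices
    res = np.array(occupancy.shape)
    inside = np.all((idx >= 0) & (idx < res), axis=-1)                  # samples inside the grid
    idx = np.clip(idx, 0, res - 1)
    occ = occupancy[idx[..., 0], idx[..., 1], idx[..., 2]] & inside
    return occ.any(axis=1)

if __name__ == "__main__":
    occupancy = np.zeros((16, 16, 16), dtype=bool)
    occupancy[8, 8, 8] = True                                           # one occupied voxel
    origins = np.zeros((2, 3))
    dirs = np.array([[1.0, 1.0, 1.0], [1.0, 0.0, 0.0]])
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    mask = rays_hitting_occupied_voxels(origins, dirs, occupancy, voxel_size=0.125, grid_min=np.zeros(3))
    print(mask)   # first ray passes through voxel (8, 8, 8), second does not
```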
OAKINK2: A Dataset of Bimanual Hands-Object Manipulation in Complex Task Completion
Xinyu Zhan, Lixin Yang, Yifei Zhao, Kangrui Mao, Hanlin Xu, Zenan Lin, Kailin Li, Cewu Lu
We present OAKINK2, a dataset of bimanual object manipulation tasks for complex daily activities. To organize complex manipulation tasks into a structured representation, OAKINK2 introduces three levels of abstraction: Affordance, Primitive Task, and Complex Task. OAKINK2 adopts an object-centric perspective for decoding complex tasks, treating them as a sequence of object affordance fulfillments. The first level, Affordance, outlines the functionalities that objects in the scene can afford; the second level, Primitive Task, describes the minimal interaction units through which humans interact with objects to achieve their affordances; and the third level, Complex Task, illustrates how Primitive Tasks are composed and interdependent. The OAKINK2 dataset provides multi-view image streams and precise pose annotations for the human body, hands, and various interacting objects. This extensive collection supports applications such as interaction reconstruction and motion synthesis. Based on the three-level abstraction of OAKINK2, we explore a task-oriented framework for Complex Task Completion (CTC). CTC aims to generate a sequence of bimanual manipulations to achieve task objectives. Within the CTC framework, we employ Large Language Models (LLMs) to decompose complex task objectives into sequences of Primitive Tasks and develop a Motion Fulfillment Model that generates bimanual hand motion for each Primitive Task. The OAKINK2 dataset and models are available at https://oakink.net/v2.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhan_OAKINK2_A_Dataset_of_Bimanual_Hands-Object_Manipulation_in_Complex_Task_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.19417
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhan_OAKINK2_A_Dataset_of_Bimanual_Hands-Object_Manipulation_in_Complex_Task_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhan_OAKINK2_A_Dataset_of_Bimanual_Hands-Object_Manipulation_in_Complex_Task_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhan_OAKINK2_A_Dataset_CVPR_2024_supplemental.pdf
null
CogAgent: A Visual Language Model for GUI Agents
Wenyi Hong, Weihan Wang, Qingsong Lv, Jiazheng Xu, Wenmeng Yu, Junhui Ji, Yan Wang, Zihan Wang, Yuxiao Dong, Ming Ding, Jie Tang
People are spending an enormous amount of time on digital devices through graphical user interfaces (GUIs), e.g., computer or smartphone screens. Large language models (LLMs) such as ChatGPT can assist people in tasks like writing emails but struggle to understand and interact with GUIs, thus limiting their potential to increase automation levels. In this paper we introduce CogAgent, an 18-billion-parameter visual language model (VLM) specializing in GUI understanding and navigation. By utilizing both low-resolution and high-resolution image encoders, CogAgent supports input at a resolution of 1120x1120, enabling it to recognize tiny page elements and text. As a generalist visual language model, CogAgent achieves the state of the art on five text-rich and four general VQA benchmarks, including VQAv2, OK-VQA, Text-VQA, ST-VQA, ChartQA, infoVQA, DocVQA, MM-Vet, and POPE. CogAgent, using only screenshots as input, outperforms LLM-based methods that consume extracted HTML text on both PC and Android GUI navigation tasks (Mind2Web and AITW), advancing the state of the art. The model and code are available at https://github.com/THUDM/CogVLM. (A small dual-resolution fusion sketch follows this entry.)
https://openaccess.thecvf.com/content/CVPR2024/papers/Hong_CogAgent_A_Visual_Language_Model_for_GUI_Agents_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.08914
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Hong_CogAgent_A_Visual_Language_Model_for_GUI_Agents_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Hong_CogAgent_A_Visual_Language_Model_for_GUI_Agents_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Hong_CogAgent_A_Visual_CVPR_2024_supplemental.pdf
null
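CogAgent, above, pairs a low-resolution encoder with a high-resolution one so the model can read tiny GUI elements. The sketch below shows a generic cross-attention layer that injects high-resolution image tokens into the main token sequence; the dimensions, structure, and naming are assumptions and do not reproduce the released architecture.

```python
import torch
import torch.nn as nn

class HighResCrossAttention(nn.Module):
    """Sketch of fusing high-resolution image tokens into a main (text plus
    low-resolution image) sequence via cross-attention with a residual
    connection and layer norm."""

    def __init__(self, dim: int = 256, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, main_seq: torch.Tensor, highres_tokens: torch.Tensor) -> torch.Tensor:
        # Queries come from the main sequence; keys/values from high-res image tokens.
        attended, _ = self.attn(main_seq, highres_tokens, highres_tokens)
        return self.norm(main_seq + attended)

if __name__ == "__main__":
    layer = HighResCrossAttention()
    main_seq = torch.randn(1, 32, 256)        # e.g. text + low-res image tokens
    hires = torch.randn(1, 400, 256)          # e.g. tokens from a high-res crop encoder
    print(layer(main_seq, hires).shape)       # torch.Size([1, 32, 256])
```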