Fields (all string): title, authors, abstract, pdf, arXiv, bibtex, url, detail_url, tags, supp, plus one unnamed trailing field
UnO: Unsupervised Occupancy Fields for Perception and Forecasting
null
null
null
null
null
https://openaccess.thecvf.com/content/CVPR2024/html/Agro_UnO_Unsupervised_Occupancy_Fields_for_Perception_and_Forecasting_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Agro_UnO_Unsupervised_Occupancy_Fields_for_Perception_and_Forecasting_CVPR_2024_paper.html
CVPR 2024
null
null
SpatialVLM: Endowing Vision-Language Models with Spatial Reasoning Capabilities
Boyuan Chen, Zhuo Xu, Sean Kirmani, Brian Ichter, Dorsa Sadigh, Leonidas Guibas, Fei Xia
Understanding and reasoning about spatial relationships is crucial for Visual Question Answering (VQA) and robotics. Vision Language Models (VLMs) have shown impressive performance in some VQA benchmarks but struggle with 3D spatial reasoning, such as recognizing distances or size differences between physical objects. This limitation may stem from a lack of 3D spatial knowledge in their training data. To address this, we propose training VLMs with extensive spatial reasoning data from the internet. Our approach includes developing an automatic 3D spatial VQA data generation framework capable of creating 2 billion VQA examples from 10 million real-world images. We explore various factors in the training process, such as data quality, training pipeline, and VLM architecture. Our work introduces the first Internet-scale 3D spatial reasoning dataset in metric space. By co-training a VLM with this dataset, we significantly improve its performance in both qualitative and quantitative spatial VQA. Additionally, this enhanced VLM enables new applications in chain-of-thought spatial reasoning and robotics, particularly in quantitative estimation.
https://openaccess.thecvf.com/content/CVPR2024/papers/Chen_SpatialVLM_Endowing_Vision-Language_Models_with_Spatial_Reasoning_Capabilities_CVPR_2024_paper.pdf
http://arxiv.org/abs/2401.12168
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Chen_SpatialVLM_Endowing_Vision-Language_Models_with_Spatial_Reasoning_Capabilities_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Chen_SpatialVLM_Endowing_Vision-Language_Models_with_Spatial_Reasoning_Capabilities_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Chen_SpatialVLM_Endowing_Vision-Language_CVPR_2024_supplemental.pdf
null
InstructDiffusion: A Generalist Modeling Interface for Vision Tasks
Zigang Geng, Binxin Yang, Tiankai Hang, Chen Li, Shuyang Gu, Ting Zhang, Jianmin Bao, Zheng Zhang, Houqiang Li, Han Hu, Dong Chen, Baining Guo
We present InstructDiffusion, a unified and generic framework for aligning computer vision tasks with human instructions. Unlike existing approaches that integrate prior knowledge and pre-define the output space (e.g. categories and coordinates) for each vision task, we cast diverse vision tasks into a human-intuitive image-manipulating process whose output space is a flexible and interactive pixel space. Concretely, the model is built upon the diffusion process and is trained to predict pixels according to user instructions, such as encircling the man's left shoulder in red or applying a blue mask to the left car. InstructDiffusion can handle a variety of vision tasks, including understanding tasks (such as segmentation and keypoint detection) and generative tasks (such as editing and enhancement), and outperforms prior methods on novel datasets. This represents a solid step towards a generalist modeling interface for vision tasks, advancing artificial general intelligence in the field of computer vision.
https://openaccess.thecvf.com/content/CVPR2024/papers/Geng_InstructDiffusion_A_Generalist_Modeling_Interface_for_Vision_Tasks_CVPR_2024_paper.pdf
http://arxiv.org/abs/2309.03895
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Geng_InstructDiffusion_A_Generalist_Modeling_Interface_for_Vision_Tasks_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Geng_InstructDiffusion_A_Generalist_Modeling_Interface_for_Vision_Tasks_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Geng_InstructDiffusion_A_Generalist_CVPR_2024_supplemental.pdf
null
DreamVideo: Composing Your Dream Videos with Customized Subject and Motion
Yujie Wei, Shiwei Zhang, Zhiwu Qing, Hangjie Yuan, Zhiheng Liu, Yu Liu, Yingya Zhang, Jingren Zhou, Hongming Shan
Customized generation using diffusion models has made impressive progress in image generation but remains unsatisfactory in the challenging video generation task, as it requires controllability of both subjects and motions. To that end, we present DreamVideo, a novel approach to generating personalized videos from a few static images of the desired subject and a few videos of the target motion. DreamVideo decouples this task into two stages, subject learning and motion learning, by leveraging a pre-trained video diffusion model. Subject learning aims to accurately capture the fine appearance of the subject from the provided images, which is achieved by combining textual inversion and fine-tuning of our carefully designed identity adapter. In motion learning, we architect a motion adapter and fine-tune it on the given videos to effectively model the target motion pattern. Combining these two lightweight and efficient adapters allows for flexible customization of any subject with any motion. Extensive experimental results demonstrate the superior performance of DreamVideo over state-of-the-art methods for customized video generation. Our project page is at https://dreamvideo-t2v.github.io.
https://openaccess.thecvf.com/content/CVPR2024/papers/Wei_DreamVideo_Composing_Your_Dream_Videos_with_Customized_Subject_and_Motion_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.04433
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Wei_DreamVideo_Composing_Your_Dream_Videos_with_Customized_Subject_and_Motion_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Wei_DreamVideo_Composing_Your_Dream_Videos_with_Customized_Subject_and_Motion_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wei_DreamVideo_Composing_Your_CVPR_2024_supplemental.pdf
null
Gated Fields: Learning Scene Reconstruction from Gated Videos
Andrea Ramazzina, Stefanie Walz, Pragyan Dahal, Mario Bijelic, Felix Heide
Reconstructing outdoor 3D scenes from temporal observations is a challenge for which recent work on neural fields has offered a new avenue. However, existing methods that recover scene properties such as geometry, appearance, or radiance solely from RGB captures often fail when handling poorly-lit or texture-deficient regions. Similarly, recovering scenes with scanning lidar sensors is also difficult due to their low angular sampling rate, which makes expansive real-world scenes hard to recover. Tackling these gaps, we introduce Gated Fields, a neural scene reconstruction method that utilizes active gated video sequences. To this end, we propose a neural rendering approach that seamlessly incorporates time-gated capture and illumination. Our method exploits the intrinsic depth cues in the gated videos, achieving precise and dense geometry reconstruction irrespective of ambient illumination conditions. We validate the method across day and night scenarios and find that Gated Fields compares favorably to RGB and LiDAR reconstruction methods.
https://openaccess.thecvf.com/content/CVPR2024/papers/Ramazzina_Gated_Fields_Learning_Scene_Reconstruction_from_Gated_Videos_CVPR_2024_paper.pdf
http://arxiv.org/abs/2405.19819
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Ramazzina_Gated_Fields_Learning_Scene_Reconstruction_from_Gated_Videos_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Ramazzina_Gated_Fields_Learning_Scene_Reconstruction_from_Gated_Videos_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ramazzina_Gated_Fields_Learning_CVPR_2024_supplemental.pdf
null
RadarDistill: Boosting Radar-based Object Detection Performance via Knowledge Distillation from LiDAR Features
Geonho Bang, Kwangjin Choi, Jisong Kim, Dongsuk Kum, Jun Won Choi
The inherently noisy and sparse characteristics of radar data pose challenges in finding effective representations for 3D object detection. In this paper, we propose RadarDistill, a novel knowledge distillation (KD) method that improves the representation of radar data by leveraging LiDAR data. RadarDistill successfully transfers desirable characteristics of LiDAR features into radar features using three key components: Cross-Modality Alignment (CMA), Activation-based Feature Distillation (AFD), and Proposal-based Feature Distillation (PFD). CMA enhances the density of radar features by employing multiple layers of dilation operations, effectively addressing the challenge of inefficient knowledge transfer from LiDAR to radar. AFD selectively transfers knowledge based on regions of the LiDAR features, with a specific focus on areas where activation intensity exceeds a predefined threshold. PFD similarly guides the radar network to selectively mimic features from the LiDAR network within the object proposals. Our comparative analyses conducted on the nuScenes dataset demonstrate that RadarDistill achieves state-of-the-art (SOTA) performance for the radar-only object detection task, recording 20.5% mAP and 43.7% NDS. RadarDistill also significantly improves the performance of the camera-radar fusion model.
https://openaccess.thecvf.com/content/CVPR2024/papers/Bang_RadarDistill_Boosting_Radar-based_Object_Detection_Performance_via_Knowledge_Distillation_from_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.05061
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Bang_RadarDistill_Boosting_Radar-based_Object_Detection_Performance_via_Knowledge_Distillation_from_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Bang_RadarDistill_Boosting_Radar-based_Object_Detection_Performance_via_Knowledge_Distillation_from_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Bang_RadarDistill_Boosting_Radar-based_CVPR_2024_supplemental.pdf
null
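The AFD component described in the RadarDistill abstract above can be sketched as a masked feature-matching loss: distill only where the LiDAR (teacher) activations are strong. The MSE form and the threshold value here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def activation_masked_distill(radar_feat, lidar_feat, threshold=0.5):
    """Activation-based feature distillation (sketch): MSE between radar
    and LiDAR feature maps, computed only where the LiDAR activation
    magnitude exceeds a threshold, so the student mimics informative
    regions. The threshold and loss form are assumptions."""
    mask = np.abs(lidar_feat) > threshold
    if not mask.any():
        return 0.0  # no active teacher regions: nothing to distill
    return float(np.mean((radar_feat[mask] - lidar_feat[mask]) ** 2))
```

In a real pipeline this term would be added to the detection loss and backpropagated into the radar branch only.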
Probabilistic Sampling of Balanced K-Means using Adiabatic Quantum Computing
Jan-Nico Zaech, Martin Danelljan, Tolga Birdal, Luc Van Gool
Adiabatic quantum computing (AQC) is a promising approach for discrete and often NP-hard optimization problems. Current AQCs make it possible to implement problems of research interest, which has sparked the development of quantum representations for many computer vision tasks. Despite requiring multiple measurements from the noisy AQC, current approaches only utilize the best measurement, discarding the information contained in the remaining ones. In this work, we explore the potential of using this information for probabilistic balanced k-means clustering. Instead of discarding non-optimal solutions, we propose to use them to compute calibrated posterior probabilities with little additional compute cost. This allows us to identify ambiguous solutions and data points, which we demonstrate on a D-Wave AQC on synthetic tasks and real visual data.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zaech_Probabilistic_Sampling_of_Balanced_K-Means_using_Adiabatic_Quantum_Computing_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zaech_Probabilistic_Sampling_of_Balanced_K-Means_using_Adiabatic_Quantum_Computing_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zaech_Probabilistic_Sampling_of_Balanced_K-Means_using_Adiabatic_Quantum_Computing_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zaech_Probabilistic_Sampling_of_CVPR_2024_supplemental.pdf
null
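The measurement-reuse idea in the abstract above can be sketched as a Boltzmann-style weighting of all annealer samples instead of keeping only the best one. The exponential weighting and the temperature parameter are illustrative assumptions, not the paper's exact calibration procedure.

```python
import numpy as np

def posterior_from_measurements(energies, assignments, temperature=1.0):
    """Aggregate many measured clusterings into per-point cluster
    probabilities: each measurement is weighted by a softmax over its
    (negated) energy, so lower-energy solutions count more. A sketch,
    not the paper's exact calibration."""
    energies = np.asarray(energies, dtype=float)
    w = np.exp(-(energies - energies.min()) / temperature)
    w /= w.sum()
    assignments = np.asarray(assignments)      # (n_measurements, n_points)
    k = int(assignments.max()) + 1
    n_points = assignments.shape[1]
    post = np.zeros((n_points, k))
    for weight, assign in zip(w, assignments):
        post[np.arange(n_points), assign] += weight
    return post                                # rows sum to 1
```

Points whose posterior mass is spread over several clusters are exactly the "ambiguous data points" the abstract refers to.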
UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory
Haiwen Diao, Bo Wan, Ying Zhang, Xu Jia, Huchuan Lu, Long Chen
Parameter-efficient transfer learning (PETL), i.e. fine-tuning a small portion of parameters, is an effective strategy for adapting pre-trained models to downstream domains. To further reduce the memory demand, recent PETL works focus on the more valuable memory-efficient characteristic. In this paper, we argue that the scalability, adaptability, and generalizability of state-of-the-art methods are hindered by their structural dependency on, and pertinency to, specific pre-trained backbones. To this end, we propose a new memory-efficient PETL strategy, Universal Parallel Tuning (UniPT), to mitigate these weaknesses. Specifically, we facilitate the transfer process via a lightweight and learnable parallel network, which consists of: 1) a parallel interaction module that decouples the sequential connections and processes the intermediate activations detachedly from the pre-trained network; 2) a confidence aggregation module that adaptively learns optimal strategies for integrating cross-layer features. We evaluate UniPT with different backbones (e.g. T5, VSE∞, CLIP4Clip, CLIP-ViL, and MDETR) on various vision-and-language and pure NLP tasks. Extensive ablations on 18 datasets validate that UniPT not only dramatically reduces memory consumption and outperforms the best competitor, but also achieves competitive performance over other plain PETL methods with lower training memory overhead. Our code is publicly available at: https://github.com/Paranioar/UniPT.
https://openaccess.thecvf.com/content/CVPR2024/papers/Diao_UniPT_Universal_Parallel_Tuning_for_Transfer_Learning_with_Efficient_Parameter_CVPR_2024_paper.pdf
http://arxiv.org/abs/2308.14316
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Diao_UniPT_Universal_Parallel_Tuning_for_Transfer_Learning_with_Efficient_Parameter_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Diao_UniPT_Universal_Parallel_Tuning_for_Transfer_Learning_with_Efficient_Parameter_CVPR_2024_paper.html
CVPR 2024
null
null
Composed Video Retrieval via Enriched Context and Discriminative Embeddings
Omkar Thawakar, Muzammal Naseer, Rao Muhammad Anwer, Salman Khan, Michael Felsberg, Mubarak Shah, Fahad Shahbaz Khan
Composed video retrieval (CoVR) is a challenging problem in computer vision which has recently highlighted the integration of modification text with visual queries for more sophisticated video search in large databases. Existing works predominantly rely on visual queries combined with modification text to distinguish relevant videos. However, such a strategy struggles to fully preserve the rich query-specific context in retrieved target videos and only represents the target video using visual embedding. We introduce a novel CoVR framework that leverages detailed language descriptions to explicitly encode query-specific contextual information and learns discriminative embeddings of vision only, text only, and vision-text for better alignment to accurately retrieve matched target videos. Our proposed framework can be flexibly employed for both composed video (CoVR) and composed image (CoIR) retrieval tasks. Experiments on three datasets show that our approach obtains state-of-the-art performance for both CoVR and zero-shot CoIR tasks, achieving gains as high as around 7% in terms of recall@K=1 score. Our code and detailed language descriptions for the WebVid-CoVR dataset are available at https://github.com/OmkarThawakar/composed-video-retrieval.
https://openaccess.thecvf.com/content/CVPR2024/papers/Thawakar_Composed_Video_Retrieval_via_Enriched_Context_and_Discriminative_Embeddings_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.16997
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Thawakar_Composed_Video_Retrieval_via_Enriched_Context_and_Discriminative_Embeddings_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Thawakar_Composed_Video_Retrieval_via_Enriched_Context_and_Discriminative_Embeddings_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Thawakar_Composed_Video_Retrieval_CVPR_2024_supplemental.pdf
null
Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model
Kai Yang, Jian Tao, Jiafei Lyu, Chunjiang Ge, Jiaxin Chen, Weihan Shen, Xiaolong Zhu, Xiu Li
Using reinforcement learning with human feedback (RLHF) has shown significant promise in fine-tuning diffusion models. Previous methods start by training a reward model that aligns with human preferences and then leverage RL techniques to fine-tune the underlying models. However, crafting an efficient reward model demands extensive datasets, an optimal architecture, and manual hyperparameter tuning, making the process both time- and cost-intensive. The direct preference optimization (DPO) method, effective in fine-tuning large language models, eliminates the necessity for a reward model. However, the extensive GPU memory requirement of the diffusion model's denoising process hinders the direct application of the DPO method. To address this issue, we introduce the Direct Preference for Denoising Diffusion Policy Optimization (D3PO) method to directly fine-tune diffusion models. Our theoretical analysis demonstrates that although D3PO omits training a reward model, it effectively functions as the optimal reward model trained using human feedback data to guide the learning process. This approach requires no training of a reward model, proving to be more direct and cost-effective while minimizing computational overhead. In experiments, our method uses the relative scale of objectives as a proxy for human preference, delivering comparable results to methods using ground-truth rewards. Moreover, D3PO demonstrates the ability to reduce image distortion rates and generate safer images, overcoming challenges posed by the lack of robust reward models. Our code is publicly available at https://github.com/yk7333/D3PO.
https://openaccess.thecvf.com/content/CVPR2024/papers/Yang_Using_Human_Feedback_to_Fine-tune_Diffusion_Models_without_Any_Reward_CVPR_2024_paper.pdf
http://arxiv.org/abs/2311.13231
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Yang_Using_Human_Feedback_to_Fine-tune_Diffusion_Models_without_Any_Reward_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Yang_Using_Human_Feedback_to_Fine-tune_Diffusion_Models_without_Any_Reward_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yang_Using_Human_Feedback_CVPR_2024_supplemental.pdf
null
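The DPO-style objective that D3PO builds on can be sketched for a single preference pair: push the model's (reference-adjusted) log-probability of the human-preferred sample above that of the rejected one. The β scale and the use of summed trajectory log-probabilities are assumptions here, not the paper's exact per-denoising-step formulation.

```python
import math

def dpo_preference_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO-style loss for one preference pair (sketch).
    logp_w / logp_l: log-probabilities of the preferred / rejected
    samples under the current model; ref_logp_* under a frozen
    reference model."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # -log(sigmoid(margin)) written via log1p for numerical stability
    return math.log1p(math.exp(-margin))
```

When the two samples are equally likely, the loss is log 2; it decreases as the preferred sample gains relative probability.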
Perceptual Assessment and Optimization of HDR Image Rendering
Peibei Cao, Rafal K. Mantiuk, Kede Ma
High dynamic range (HDR) rendering has the ability to faithfully reproduce the wide luminance ranges in natural scenes, but how to accurately assess the rendering quality is relatively underexplored. Existing quality models are mostly designed for low dynamic range (LDR) images and do not align well with human perception of HDR image quality. To fill this gap, we propose a family of HDR quality metrics in which the key step is employing a simple inverse display model to decompose an HDR image into a stack of LDR images with varying exposures. Subsequently, these decomposed images are assessed through well-established LDR quality metrics. Our HDR quality models present three distinct benefits. First, they directly inherit the recent advancements of LDR quality metrics. Second, they do not rely on human perceptual data of HDR image quality for re-calibration. Third, they facilitate the alignment and prioritization of specific luminance ranges for more accurate and detailed quality assessment. Experimental results show that our HDR quality metrics consistently outperform existing models in terms of quality assessment on four HDR image quality datasets and perceptual optimization of HDR novel view synthesis.
https://openaccess.thecvf.com/content/CVPR2024/papers/Cao_Perceptual_Assessment_and_Optimization_of_HDR_Image_Rendering_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Cao_Perceptual_Assessment_and_Optimization_of_HDR_Image_Rendering_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Cao_Perceptual_Assessment_and_Optimization_of_HDR_Image_Rendering_CVPR_2024_paper.html
CVPR 2024
null
null
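The decompose-then-assess recipe in the abstract above can be sketched as follows, with a simple gamma curve standing in for the inverse display model and PSNR standing in for the LDR quality metric. Both stand-ins, and the chosen exposure values, are illustrative assumptions rather than the paper's components.

```python
import numpy as np

def exposure_stack(hdr, exposures, gamma=2.2):
    """Decompose linear HDR luminance into simulated LDR exposures via
    scaling, clipping, and a gamma display curve (an illustrative
    stand-in for the paper's inverse display model)."""
    return [np.clip(hdr * e, 0.0, 1.0) ** (1.0 / gamma) for e in exposures]

def psnr(a, b, eps=1e-12):
    """Peak signal-to-noise ratio for images in [0, 1]."""
    mse = np.mean((a - b) ** 2)
    return 10.0 * np.log10(1.0 / (mse + eps))

def hdr_quality(ref_hdr, test_hdr, exposures=(0.25, 1.0, 4.0)):
    """Average an LDR metric (PSNR here) over the decomposed stack."""
    return float(np.mean([psnr(r, t) for r, t in
                          zip(exposure_stack(ref_hdr, exposures),
                              exposure_stack(test_hdr, exposures))]))
```

Weighting the per-exposure scores instead of averaging them is what lets such a metric prioritize specific luminance ranges, as the abstract notes.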
Multiview Aerial Visual RECognition (MAVREC): Can Multi-view Improve Aerial Visual Perception?
Aritra Dutta, Srijan Das, Jacob Nielsen, Rajatsubhra Chakraborty, Mubarak Shah
Despite the commercial abundance of UAVs, aerial data acquisition remains challenging, and the existing Asia- and North America-centric open-source UAV datasets are small-scale or low-resolution and lack diversity in scene contextuality. Additionally, the color content of the scenes, the solar zenith angle, and the population density of different geographies influence data diversity. These factors conjointly render suboptimal the aerial-visual perception of deep neural network (DNN) models trained primarily on ground-view data, including open-world foundational models. To pave the way for a transformative era of aerial detection, we present Multiview Aerial Visual RECognition (MAVREC), a video dataset in which we record synchronized scenes from different perspectives: a ground camera and a drone-mounted camera. MAVREC consists of around 2.5 hours of industry-standard 2.7K-resolution video sequences, more than 0.5 million frames, and 1.1 million annotated bounding boxes. This makes MAVREC the largest ground- and aerial-view dataset, and the fourth largest among all drone-based datasets across all modalities and tasks. Through extensive benchmarking on MAVREC, we find that augmenting object detectors with ground-view images from the corresponding geographical location is a superior pre-training strategy for aerial detection. Building on this strategy, we benchmark MAVREC with a curriculum-based semi-supervised object detection approach that leverages labeled (ground and aerial) and unlabeled (aerial-only) images to enhance aerial detection.
https://openaccess.thecvf.com/content/CVPR2024/papers/Dutta_Multiview_Aerial_Visual_RECognition_MAVREC_Can_Multi-view_Improve_Aerial_Visual_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.04548
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Dutta_Multiview_Aerial_Visual_RECognition_MAVREC_Can_Multi-view_Improve_Aerial_Visual_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Dutta_Multiview_Aerial_Visual_RECognition_MAVREC_Can_Multi-view_Improve_Aerial_Visual_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Dutta_Multiview_Aerial_Visual_CVPR_2024_supplemental.pdf
null
Diffusion-driven GAN Inversion for Multi-Modal Face Image Generation
Jihyun Kim, Changjae Oh, Hoseok Do, Soohyun Kim, Kwanghoon Sohn
We present a new multi-modal face image generation method that converts a text prompt and a visual input, such as a semantic mask or scribble map, into a photo-realistic face image. To do this, we combine the strengths of generative adversarial networks (GANs) and diffusion models (DMs) by employing the multi-modal features of the DM in the latent space of pre-trained GANs. We present a simple mapping and a style modulation network to link the two models and convert meaningful representations in feature maps and attention maps into latent codes. With GAN inversion, the estimated latent codes can be used to generate 2D or 3D-aware facial images. We further present a multi-step training strategy that reflects textual and structural representations in the generated image. Our proposed network produces realistic 2D, multi-view, and stylized face images, which align well with the inputs. We validate our method using pre-trained 2D and 3D GANs, and our results outperform existing methods. Our project page is available at https://github.com/1211sh/Diffusiondriven_GAN-Inversion/.
https://openaccess.thecvf.com/content/CVPR2024/papers/Kim_Diffusion-driven_GAN_Inversion_for_Multi-Modal_Face_Image_Generation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2405.04356
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Kim_Diffusion-driven_GAN_Inversion_for_Multi-Modal_Face_Image_Generation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Kim_Diffusion-driven_GAN_Inversion_for_Multi-Modal_Face_Image_Generation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Kim_Diffusion-driven_GAN_Inversion_CVPR_2024_supplemental.pdf
null
Low-Rank Knowledge Decomposition for Medical Foundation Models
Yuhang Zhou, Haolin Li, Siyuan Du, Jiangchao Yao, Ya Zhang, Yanfeng Wang
The popularity of large-scale pre-training has promoted the development of medical foundation models. However, some studies have shown that although foundation models exhibit strong general feature extraction capabilities, their performance on specific tasks is still inferior to task-specific methods. In this paper, we explore a new perspective called "Knowledge Decomposition" to improve performance on specific medical tasks: it deconstructs the foundation model into multiple lightweight expert models, each dedicated to a particular task, with the goal of improving specialization while concurrently mitigating resource expenditure. To accomplish this objective, we design a novel framework named Low-Rank Knowledge Decomposition (LoRKD), which explicitly separates gradients by incorporating low-rank expert modules and an efficient knowledge separation convolution. Extensive experimental results demonstrate that the decomposed models perform well in terms of performance and transferability, even surpassing the original foundation models. Source code is available at: https://github.com/MediaBrain-SJTU/LoRKD
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhou_Low-Rank_Knowledge_Decomposition_for_Medical_Foundation_Models_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.17184
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhou_Low-Rank_Knowledge_Decomposition_for_Medical_Foundation_Models_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhou_Low-Rank_Knowledge_Decomposition_for_Medical_Foundation_Models_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhou_Low-Rank_Knowledge_Decomposition_CVPR_2024_supplemental.pdf
null
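The low-rank expert idea in the LoRKD abstract above can be sketched as a frozen shared weight plus a task-specific low-rank correction, in the spirit of LoRA-style adapters. The additive form, zero-initialization, and dimensions are illustrative assumptions, not the paper's exact module.

```python
import numpy as np

class LowRankExpert:
    """A task-specific low-rank module on top of shared foundation
    weights (sketch): the expert adds b @ a, a rank-r correction, so it
    trains far fewer parameters than the full weight matrix."""
    def __init__(self, shared_w, rank, seed=0):
        rng = np.random.default_rng(seed)
        d_out, d_in = shared_w.shape
        self.shared_w = shared_w                    # frozen shared weights
        self.a = rng.normal(0.0, 0.02, (rank, d_in))
        self.b = np.zeros((d_out, rank))            # zero-init: expert starts
                                                    # identical to the shared model
    def __call__(self, x):
        return x @ (self.shared_w + self.b @ self.a).T
```

With rank r, the expert holds r * (d_in + d_out) trainable values instead of d_in * d_out, which is what makes each decomposed expert lightweight.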
SaCo Loss: Sample-wise Affinity Consistency for Vision-Language Pre-training
Sitong Wu, Haoru Tan, Zhuotao Tian, Yukang Chen, Xiaojuan Qi, Jiaya Jia
Vision-language pre-training (VLP) aims to learn joint representations of the vision and language modalities. The contrastive paradigm is currently dominant in this field. However, we observe a notable misalignment phenomenon: the affinity between samples shows an obvious disparity across different modalities, which we call the "Affinity Inconsistency Problem". Our intuition is that, for a well-aligned model, two images that look similar to each other should have the same level of similarity as the corresponding texts that describe them. In this paper, we first investigate the reason for this inconsistency. We discover that the lack of consideration for sample-wise affinity consistency across modalities in existing training objectives is the central cause. To address this problem, we propose a novel loss function, named Sample-wise affinity Consistency (SaCo) loss, which is designed to enhance such consistency by minimizing the distance between the image-embedding similarity and the text-embedding similarity of any two samples. SaCo loss can be easily incorporated into existing vision-language models as an additional loss, due to its complementarity with most training objectives. In addition, considering that pre-training from scratch is computationally expensive, we also provide a more efficient way to continuously pre-train a converged model by integrating our loss. Experimentally, the model trained with our SaCo loss significantly outperforms the baseline on a variety of vision and language tasks.
https://openaccess.thecvf.com/content/CVPR2024/papers/Wu_SaCo_Loss_Sample-wise_Affinity_Consistency_for_Vision-Language_Pre-training_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Wu_SaCo_Loss_Sample-wise_Affinity_Consistency_for_Vision-Language_Pre-training_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Wu_SaCo_Loss_Sample-wise_Affinity_Consistency_for_Vision-Language_Pre-training_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wu_SaCo_Loss_Sample-wise_CVPR_2024_supplemental.pdf
null
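The core SaCo idea stated in the abstract above (make the image-image similarity of any two samples match their text-text similarity) can be sketched directly; the squared-difference distance over off-diagonal pairs is an assumption about the exact form of the loss.

```python
import numpy as np

def saco_loss(img_emb, txt_emb):
    """Sample-wise affinity consistency (sketch): penalize the gap
    between the image-image and text-text cosine-similarity matrices
    of a batch. The squared-difference distance is an assumption."""
    def cos_sim(x):
        x = x / np.linalg.norm(x, axis=1, keepdims=True)
        return x @ x.T
    s_img, s_txt = cos_sim(img_emb), cos_sim(txt_emb)
    n = s_img.shape[0]
    mask = ~np.eye(n, dtype=bool)   # compare off-diagonal (cross-sample) pairs only
    return float(np.mean((s_img[mask] - s_txt[mask]) ** 2))
```

Because the term only compares affinity structures, it can be added on top of a standard contrastive objective without replacing it, matching the "additional loss" usage in the abstract.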
Steganographic Passport: An Owner and User Verifiable Credential for Deep Model IP Protection Without Retraining
Qi Cui, Ruohan Meng, Chaohui Xu, Chip-Hong Chang
Ensuring the legal usage of deep models is crucial to promoting trustable, accountable, and responsible artificial intelligence innovation. Current passport-based methods, which obfuscate model functionality for license-to-use and ownership verification, suffer from capacity and quality constraints, as they require retraining the owner model for new users. They are also vulnerable to advanced Expanded Residual Block ambiguity attacks. We propose Steganographic Passport, which uses an invertible steganographic network to decouple license-to-use from ownership verification by hiding the user's identity images in the owner-side passport and recovering them from the respective user-side passports. An irreversible and collision-resistant hash function is used to avoid exposing the owner-side passport from the derived user-side passports and to increase the uniqueness of the model signature. To safeguard both the passport and the model's weights against advanced ambiguity attacks, activation-level obfuscation is proposed for the verification branch of the owner's model. By jointly training the verification and deployment branches, their weights become tightly coupled. The proposed method supports agile licensing of deep models by providing strong ownership proof and license accountability without requiring separate model retraining for the admission of every new user. Experimental results show that our Steganographic Passport outperforms other passport-based deep model protection methods in robustness against various known attacks.
https://openaccess.thecvf.com/content/CVPR2024/papers/Cui_Steganographic_Passport_An_Owner_and_User_Verifiable_Credential_for_Deep_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.02889
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Cui_Steganographic_Passport_An_Owner_and_User_Verifiable_Credential_for_Deep_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Cui_Steganographic_Passport_An_Owner_and_User_Verifiable_Credential_for_Deep_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Cui_Steganographic_Passport_An_CVPR_2024_supplemental.pdf
null
Stable Neighbor Denoising for Source-free Domain Adaptive Segmentation
Dong Zhao, Shuang Wang, Qi Zang, Licheng Jiao, Nicu Sebe, Zhun Zhong
We study source-free unsupervised domain adaptation (SFUDA) for semantic segmentation, which aims to adapt a source-trained model to the target domain without accessing the source data. Many works have been proposed to address this challenging problem, among which uncertainty-based self-training is a predominant approach. However, without comprehensive denoising mechanisms, they still largely fall into biased estimates when dealing with different domains and confirmation bias. In this paper, we observe that pseudo-label noise is mainly contained in unstable samples, in which the predictions of most pixels undergo significant variations during self-training. Inspired by this, we propose a novel mechanism to denoise unstable samples with stable ones. Specifically, we introduce the Stable Neighbor Denoising (SND) approach, which effectively discovers highly correlated stable and unstable samples by nearest-neighbor retrieval and guides the reliable optimization of unstable samples by bi-level learning. Moreover, we compensate for the stable set with object-level object paste, which can further eliminate the bias caused by less-learned classes. Our SND enjoys two advantages. First, SND does not require a specific segmentor structure, endowing it with universality. Second, SND simultaneously addresses the issues of class, domain, and confirmation biases during adaptation, ensuring its effectiveness. Extensive experiments show that SND consistently outperforms state-of-the-art methods in various SFUDA semantic segmentation settings. In addition, SND can be easily integrated with other approaches, obtaining further improvements. The source code will be publicly available.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhao_Stable_Neighbor_Denoising_for_Source-free_Domain_Adaptive_Segmentation_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhao_Stable_Neighbor_Denoising_for_Source-free_Domain_Adaptive_Segmentation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhao_Stable_Neighbor_Denoising_for_Source-free_Domain_Adaptive_Segmentation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhao_Stable_Neighbor_Denoising_CVPR_2024_supplemental.pdf
null
SynSP: Synergy of Smoothness and Precision in Pose Sequences Refinement
Tao Wang, Lei Jin, Zheng Wang, Jianshu Li, Liang Li, Fang Zhao, Yu Cheng, Li Yuan, Li Zhou, Junliang Xing, Jian Zhao
Predicting human pose sequences via existing pose estimators often encounters various estimation errors. Motion refinement methods aim to optimize the predicted human pose sequences from pose estimators while ensuring minimal computational overhead and latency. Prior investigations have primarily concentrated on striking a balance between the two objectives, i.e., smoothness and precision, while optimizing the predicted pose sequences. However, it has come to our attention that the tension between these two objectives can provide additional quality cues about the predicted pose sequences. These cues, in turn, are able to aid the network in optimizing lower-quality poses. To leverage this quality information, we propose a motion refinement network, termed SynSP, to achieve a Synergy of Smoothness and Precision in the sequence refinement tasks. Moreover, SynSP can also address multi-view poses of one person simultaneously, fixing inaccuracies in predicted poses through heightened attention to similar poses from other views, thereby amplifying the resultant quality cues and overall performance. Compared with previous methods, SynSP benefits from both pose quality and multi-view information with a much shorter input sequence length, achieving state-of-the-art results on four challenging datasets involving 2D, 3D, and SMPL pose representations in both single-view and multi-view scenes. GitHub code: https://github.com/InvertedForest/SynSP.
https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_SynSP_Synergy_of_Smoothness_and_Precision_in_Pose_Sequences_Refinement_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Wang_SynSP_Synergy_of_Smoothness_and_Precision_in_Pose_Sequences_Refinement_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Wang_SynSP_Synergy_of_Smoothness_and_Precision_in_Pose_Sequences_Refinement_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wang_SynSP_Synergy_of_CVPR_2024_supplemental.pdf
null
En3D: An Enhanced Generative Model for Sculpting 3D Humans from 2D Synthetic Data
Yifang Men, Biwen Lei, Yuan Yao, Miaomiao Cui, Zhouhui Lian, Xuansong Xie
We present En3D, an enhanced generative scheme for sculpting high-quality 3D human avatars. Unlike previous works that rely on scarce 3D datasets or limited 2D collections with imbalanced viewing angles and imprecise pose priors, our approach aims to develop a zero-shot 3D generative scheme capable of producing visually realistic, geometrically accurate, and content-wise diverse 3D humans without relying on pre-existing 3D or 2D assets. To address this challenge, we introduce a meticulously crafted workflow that implements accurate physical modeling to learn the enhanced 3D generative model from synthetic 2D data. During inference, we integrate optimization modules to bridge the gap between realistic appearances and coarse 3D shapes. Specifically, En3D comprises three modules: a 3D generator that accurately models generalizable 3D humans with realistic appearance from synthesized balanced, diverse, and structured human images; a geometry sculptor that enhances shape quality using multi-view normal constraints for intricate human structure; and a texturing module that disentangles explicit texture maps with fidelity and editability, leveraging semantical UV partitioning and a differentiable rasterizer. Experimental results show that our approach significantly outperforms prior works in terms of image quality, geometry accuracy, and content diversity. We also showcase the applicability of our generated avatars for animation and editing, as well as the scalability of our approach for content-style free adaptation.
https://openaccess.thecvf.com/content/CVPR2024/papers/Men_En3D_An_Enhanced_Generative_Model_for_Sculpting_3D_Humans_from_CVPR_2024_paper.pdf
http://arxiv.org/abs/2401.01173
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Men_En3D_An_Enhanced_Generative_Model_for_Sculpting_3D_Humans_from_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Men_En3D_An_Enhanced_Generative_Model_for_Sculpting_3D_Humans_from_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Men_En3D_An_Enhanced_CVPR_2024_supplemental.pdf
null
Neural Visibility Field for Uncertainty-Driven Active Mapping
Shangjie Xue, Jesse Dill, Pranay Mathur, Frank Dellaert, Panagiotis Tsiotra, Danfei Xu
This paper presents Neural Visibility Field (NVF), a novel uncertainty quantification method for Neural Radiance Fields (NeRF) applied to active mapping. Our key insight is that regions not visible in the training views lead to inherently unreliable color predictions by NeRF in those regions, resulting in increased uncertainty in the synthesized views. To address this, we propose to use Bayesian Networks to composite position-based field uncertainty into ray-based uncertainty in camera observations. Consequently, NVF naturally assigns higher uncertainty to unobserved regions, aiding robots in selecting the most informative next viewpoints. Extensive evaluations show that NVF excels not only in uncertainty quantification but also in scene reconstruction for active mapping, outperforming existing methods. More details can be found at https://sites.google.com/view/nvf-cvpr24/.
https://openaccess.thecvf.com/content/CVPR2024/papers/Xue_Neural_Visibility_Field_for_Uncertainty-Driven_Active_Mapping_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Xue_Neural_Visibility_Field_for_Uncertainty-Driven_Active_Mapping_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Xue_Neural_Visibility_Field_for_Uncertainty-Driven_Active_Mapping_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Xue_Neural_Visibility_Field_CVPR_2024_supplemental.pdf
null
Tri-Perspective View Decomposition for Geometry-Aware Depth Completion
Zhiqiang Yan, Yuankai Lin, Kun Wang, Yupeng Zheng, Yufei Wang, Zhenyu Zhang, Jun Li, Jian Yang
Depth completion is a vital task for autonomous driving, as it involves reconstructing the precise 3D geometry of a scene from sparse and noisy depth measurements. However, most existing methods either rely only on 2D depth representations or directly incorporate raw 3D point clouds for compensation, which are still insufficient to capture the fine-grained 3D geometry of the scene. To address this challenge, we introduce Tri-Perspective View Decomposition (TPVD), a novel framework that can explicitly model 3D geometry. In particular, (1) TPVD ingeniously decomposes the original point cloud into three 2D views, one of which corresponds to the sparse depth input. (2) We design TPV Fusion to update the 2D TPV features through recurrent 2D-3D-2D aggregation, where a Distance-Aware Spherical Convolution (DASC) is applied. (3) By adaptively choosing TPV affinitive neighbors, the newly proposed Geometric Spatial Propagation Network (GSPN) further improves the geometric consistency. As a result, our TPVD outperforms existing methods on KITTI, NYUv2, and SUN RGBD. Furthermore, we build a novel depth completion dataset named TOFDC, which is acquired by the time-of-flight (TOF) sensor and the color camera on smartphones.
https://openaccess.thecvf.com/content/CVPR2024/papers/Yan_Tri-Perspective_View_Decomposition_for_Geometry-Aware_Depth_Completion_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.15008
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Yan_Tri-Perspective_View_Decomposition_for_Geometry-Aware_Depth_Completion_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Yan_Tri-Perspective_View_Decomposition_for_Geometry-Aware_Depth_Completion_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yan_Tri-Perspective_View_Decomposition_CVPR_2024_supplemental.pdf
null
Boosting Adversarial Training via Fisher-Rao Norm-based Regularization
Xiangyu Yin, Wenjie Ruan
Adversarial training is extensively utilized to improve the adversarial robustness of deep neural networks. Yet, mitigating the degradation of standard generalization performance in adversarially trained models remains an open problem. This paper attempts to resolve this issue through the lens of model complexity. First, we leverage the Fisher-Rao norm, a geometrically invariant metric for model complexity, to establish non-trivial bounds of the Cross-Entropy-Loss-based Rademacher complexity for a ReLU-activated Multi-Layer Perceptron. Building upon this observation, we propose a novel regularization framework called Logit-Oriented Adversarial Training (LOAT), which can mitigate the trade-off between robustness and accuracy while imposing only a negligible increase in computational overhead. Our extensive experiments demonstrate that the proposed regularization strategy can boost the performance of the prevalent adversarial training algorithms, including PGD-AT, TRADES, TRADES (LSE), MART, and DM-AT, across various network architectures. Our code will be available at https://github.com/TrustAI/LOAT.
https://openaccess.thecvf.com/content/CVPR2024/papers/Yin_Boosting_Adversarial_Training_via_Fisher-Rao_Norm-based_Regularization_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.17520
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Yin_Boosting_Adversarial_Training_via_Fisher-Rao_Norm-based_Regularization_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Yin_Boosting_Adversarial_Training_via_Fisher-Rao_Norm-based_Regularization_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yin_Boosting_Adversarial_Training_CVPR_2024_supplemental.pdf
null
Learned Representation-Guided Diffusion Models for Large-Image Generation
Alexandros Graikos, Srikar Yellapragada, Minh-Quan Le, Saarthak Kapse, Prateek Prasanna, Joel Saltz, Dimitris Samaras
To synthesize high-fidelity samples, diffusion models typically require auxiliary data to guide the generation process. However, it is impractical to procure the painstaking patch-level annotation effort required in specialized domains like histopathology and satellite imagery; it is often performed by domain experts and involves hundreds of millions of patches. Modern-day self-supervised learning (SSL) representations encode rich semantic and visual information. In this paper, we posit that such representations are expressive enough to act as proxies to fine-grained human labels. We introduce a novel approach that trains diffusion models conditioned on embeddings from SSL. Our diffusion models successfully project these features back to high-quality histopathology and remote sensing images. In addition, we construct larger images by assembling spatially consistent patches inferred from SSL embeddings, preserving long-range dependencies. Augmenting real data by generating variations of real images improves downstream classifier accuracy for patch-level and larger image-scale classification tasks. Our models are effective even on datasets not encountered during training, demonstrating their robustness and generalizability. Generating images from learned embeddings is agnostic to the source of the embeddings. The SSL embeddings used to generate a large image can either be extracted from a reference image or sampled from an auxiliary model conditioned on any related modality (e.g., class labels, text, genomic data). As proof of concept, we introduce the text-to-large-image synthesis paradigm, where we successfully synthesize large pathology and satellite images out of text descriptions.
https://openaccess.thecvf.com/content/CVPR2024/papers/Graikos_Learned_Representation-Guided_Diffusion_Models_for_Large-Image_Generation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.07330
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Graikos_Learned_Representation-Guided_Diffusion_Models_for_Large-Image_Generation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Graikos_Learned_Representation-Guided_Diffusion_Models_for_Large-Image_Generation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Graikos_Learned_Representation-Guided_Diffusion_CVPR_2024_supplemental.pdf
null
DAVE - A Detect-and-Verify Paradigm for Low-Shot Counting
Jer Pelhan, Alan Lukežič, Vitjan Zavrtanik, Matej Kristan
Low-shot counters estimate the number of objects corresponding to a selected category based on only few or no exemplars annotated in the image. The current state-of-the-art estimates the total counts as the sum over the object location density map, but does not provide object locations and sizes, which are crucial for many applications. This is addressed by detection-based counters, which however fall behind in the total count accuracy. Furthermore, both approaches tend to overestimate the counts in the presence of other object classes due to many false positives. We propose DAVE, a low-shot counter based on a detect-and-verify paradigm that avoids the aforementioned issues by first generating a high-recall detection set and then verifying the detections to identify and remove the outliers. This jointly increases the recall and precision, leading to accurate counts. DAVE outperforms the top density-based counters by ~20% in the total count MAE, outperforms the most recent detection-based counter by ~20% in detection quality, and sets a new state-of-the-art in zero-shot as well as text-prompt-based counting.
https://openaccess.thecvf.com/content/CVPR2024/papers/Pelhan_DAVE_-_A_Detect-and-Verify_Paradigm_for_Low-Shot_Counting_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Pelhan_DAVE_-_A_Detect-and-Verify_Paradigm_for_Low-Shot_Counting_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Pelhan_DAVE_-_A_Detect-and-Verify_Paradigm_for_Low-Shot_Counting_CVPR_2024_paper.html
CVPR 2024
null
null
Ranni: Taming Text-to-Image Diffusion for Accurate Instruction Following
Yutong Feng, Biao Gong, Di Chen, Yujun Shen, Yu Liu, Jingren Zhou
Existing text-to-image (T2I) diffusion models usually struggle in interpreting complex prompts, especially those with quantity, object-attribute binding, and multi-subject descriptions. In this work, we introduce a semantic panel as the middleware in decoding texts to images, supporting the generator to better follow instructions. The panel is obtained by arranging the visual concepts parsed from the input text with the aid of large language models, and is then injected into the denoising network as a detailed control signal to complement the text condition. To facilitate text-to-panel learning, we come up with a carefully designed semantic formatting protocol, accompanied by a fully-automatic data preparation pipeline. Thanks to such a design, our approach, which we call Ranni, manages to enhance a pre-trained T2I generator regarding its textual controllability. More importantly, the introduction of the generative middleware brings a more convenient form of interaction (i.e., directly adjusting the elements in the panel or using language instructions) and further allows users to finely customize their generation, based on which we develop a practical system and showcase its potential in continuous generation and chatting-based editing.
https://openaccess.thecvf.com/content/CVPR2024/papers/Feng_Ranni_Taming_Text-to-Image_Diffusion_for_Accurate_Instruction_Following_CVPR_2024_paper.pdf
http://arxiv.org/abs/2311.17002
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Feng_Ranni_Taming_Text-to-Image_Diffusion_for_Accurate_Instruction_Following_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Feng_Ranni_Taming_Text-to-Image_Diffusion_for_Accurate_Instruction_Following_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Feng_Ranni_Taming_Text-to-Image_CVPR_2024_supplemental.pdf
null
Relaxed Contrastive Learning for Federated Learning
Seonguk Seo, Jinkyu Kim, Geeho Kim, Bohyung Han
We propose a novel contrastive learning framework to effectively address the challenges of data heterogeneity in federated learning. We first analyze the inconsistency of gradient updates across clients during local training and establish its dependence on the distribution of feature representations, leading to the derivation of the supervised contrastive learning (SCL) objective to mitigate local deviations. In addition, we show that a naive integration of SCL into federated learning incurs representation collapse, resulting in slow convergence and limited performance gains. To address this issue, we introduce a relaxed contrastive learning loss that imposes a divergence penalty on excessively similar sample pairs within each class. This strategy prevents collapsed representations and enhances feature transferability, facilitating collaborative training and leading to significant performance improvements. Our framework outperforms all existing federated learning approaches by significant margins on the standard benchmarks, as demonstrated by extensive experimental results. The source code is available at our project page (https://github.com/skynbe/FedRCL).
https://openaccess.thecvf.com/content/CVPR2024/papers/Seo_Relaxed_Contrastive_Learning_for_Federated_Learning_CVPR_2024_paper.pdf
http://arxiv.org/abs/2401.04928
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Seo_Relaxed_Contrastive_Learning_for_Federated_Learning_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Seo_Relaxed_Contrastive_Learning_for_Federated_Learning_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Seo_Relaxed_Contrastive_Learning_CVPR_2024_supplemental.pdf
null
Direct2.5: Diverse Text-to-3D Generation via Multi-view 2.5D Diffusion
Yuanxun Lu, Jingyang Zhang, Shiwei Li, Tian Fang, David McKinnon, Yanghai Tsin, Long Quan, Xun Cao, Yao Yao
Recent advances in generative AI have unveiled significant potential for the creation of 3D content. However, current methods either apply a pre-trained 2D diffusion model with the time-consuming score distillation sampling (SDS), or a direct 3D diffusion model trained on limited 3D data, losing generation diversity. In this work, we approach the problem by employing a multi-view 2.5D diffusion fine-tuned from a pre-trained 2D diffusion model. The multi-view 2.5D diffusion directly models the structural distribution of 3D data, while still maintaining the strong generalization ability of the original 2D diffusion model, filling the gap between 2D diffusion-based and direct 3D diffusion-based methods for 3D content generation. During inference, multi-view normal maps are generated using the 2.5D diffusion, and a novel differentiable rasterization scheme is introduced to fuse the almost consistent multi-view normal maps into a consistent 3D model. We further design a normal-conditioned multi-view image generation module for fast appearance generation given the 3D geometry. Our method is a one-pass diffusion process and does not require any SDS optimization as post-processing. We demonstrate through extensive experiments that our direct 2.5D generation with the specially designed fusion scheme can achieve diverse, mode-seeking-free, and high-fidelity 3D content generation in only 10 seconds.
https://openaccess.thecvf.com/content/CVPR2024/papers/Lu_Direct2.5_Diverse_Text-to-3D_Generation_via_Multi-view_2.5D_Diffusion_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Lu_Direct2.5_Diverse_Text-to-3D_Generation_via_Multi-view_2.5D_Diffusion_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Lu_Direct2.5_Diverse_Text-to-3D_Generation_via_Multi-view_2.5D_Diffusion_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Lu_Direct2.5_Diverse_Text-to-3D_CVPR_2024_supplemental.pdf
null
Efficient LoFTR: Semi-Dense Local Feature Matching with Sparse-Like Speed
Yifan Wang, Xingyi He, Sida Peng, Dongli Tan, Xiaowei Zhou
We present a novel method for efficiently producing semi-dense matches across images. The previous detector-free matcher LoFTR has shown remarkable matching capability in handling large viewpoint changes and texture-poor scenarios, but suffers from low efficiency. We revisit its design choices and derive multiple improvements for both efficiency and accuracy. One key observation is that performing the transformer over the entire feature map is redundant due to shared local information; therefore, we propose an aggregated attention mechanism with adaptive token selection for efficiency. Furthermore, we find spatial variance exists in LoFTR's fine correlation module, which is adverse to matching accuracy. A novel two-stage correlation layer is proposed to achieve accurate subpixel correspondences for accuracy improvement. Our efficiency-optimized model is ~2.5x faster than LoFTR and can even surpass the state-of-the-art efficient sparse matching pipeline SuperPoint + LightGlue. Moreover, extensive experiments show that our method can achieve higher accuracy compared with competitive semi-dense matchers, with considerable efficiency benefits. This opens up exciting prospects for large-scale or latency-sensitive applications such as image retrieval and 3D reconstruction. Project page: https://zju3dv.github.io/efficientloftr/.
https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_Efficient_LoFTR_Semi-Dense_Local_Feature_Matching_with_Sparse-Like_Speed_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.04765
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Wang_Efficient_LoFTR_Semi-Dense_Local_Feature_Matching_with_Sparse-Like_Speed_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Wang_Efficient_LoFTR_Semi-Dense_Local_Feature_Matching_with_Sparse-Like_Speed_CVPR_2024_paper.html
CVPR 2024
null
null
Contextual Augmented Global Contrast for Multimodal Intent Recognition
Kaili Sun, Zhiwen Xie, Mang Ye, Huyin Zhang
Multimodal intent recognition (MIR) aims to perceive the human intent polarity via language, visual, and acoustic modalities. The inherent intent ambiguity makes it challenging to recognize in multimodal scenarios. Existing MIR methods tend to model each video independently, ignoring global contextual information across videos. This learning manner inevitably introduces perception biases, exacerbated by the inconsistencies of the multimodal representation, amplifying the intent uncertainty. This challenge motivates us to explore effective global context modeling. Thus, we propose a context-augmented global contrast (CAGC) method to capture rich global context features by mining both intra- and cross-video context interactions for MIR. Concretely, we design a context-augmented transformer module to extract global context dependencies across videos. To further alleviate error accumulation and interference, we develop a cross-video bank that retrieves effective video sources by considering both intentional tendency and video similarity. Furthermore, we introduce a global context-guided contrastive learning scheme designed to mitigate inconsistencies arising from global context and individual modalities in different feature spaces. This scheme incorporates global cues as supervision to capture a robust multimodal intent representation. Experiments demonstrate that CAGC obtains superior performance over state-of-the-art MIR methods. We also generalize our approach to a closely related task, multimodal sentiment analysis, achieving comparable performance.
https://openaccess.thecvf.com/content/CVPR2024/papers/Sun_Contextual_Augmented_Global_Contrast_for_Multimodal_Intent_Recognition_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Sun_Contextual_Augmented_Global_Contrast_for_Multimodal_Intent_Recognition_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Sun_Contextual_Augmented_Global_Contrast_for_Multimodal_Intent_Recognition_CVPR_2024_paper.html
CVPR 2024
null
null
Pre-trained Model Guided Fine-Tuning for Zero-Shot Adversarial Robustness
Sibo Wang, Jie Zhang, Zheng Yuan, Shiguang Shan
Large-scale pre-trained vision-language models like CLIP have demonstrated impressive performance across various tasks and exhibit remarkable zero-shot generalization capability, while they are also vulnerable to imperceptible adversarial examples. Existing works typically employ adversarial training (fine-tuning) as a defense method against adversarial examples. However, direct application to the CLIP model may result in overfitting, compromising the model's capacity for generalization. In this paper, we propose the Pre-trained Model Guided Adversarial Fine-Tuning (PMG-AFT) method, which leverages supervision from the original pre-trained model by carefully designing an auxiliary branch to enhance the model's zero-shot adversarial robustness. Specifically, PMG-AFT minimizes the distance between the features of adversarial examples in the target model and those in the pre-trained model, aiming to preserve the generalization features already captured by the pre-trained model. Extensive experiments on 15 zero-shot datasets demonstrate that PMG-AFT significantly outperforms the state-of-the-art method, improving the top-1 robust accuracy by an average of 4.99%. Furthermore, our approach consistently improves clean accuracy by an average of 8.72%.
https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_Pre-trained_Model_Guided_Fine-Tuning_for_Zero-Shot_Adversarial_Robustness_CVPR_2024_paper.pdf
http://arxiv.org/abs/2401.04350
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Wang_Pre-trained_Model_Guided_Fine-Tuning_for_Zero-Shot_Adversarial_Robustness_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Wang_Pre-trained_Model_Guided_Fine-Tuning_for_Zero-Shot_Adversarial_Robustness_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wang_Pre-trained_Model_Guided_CVPR_2024_supplemental.pdf
null
MatFuse: Controllable Material Generation with Diffusion Models
Giuseppe Vecchio, Renato Sortino, Simone Palazzo, Concetto Spampinato
Creating high-quality materials in computer graphics is a challenging and time-consuming task which requires great expertise. To simplify this process, we introduce MatFuse, a unified approach that harnesses the generative power of diffusion models for the creation and editing of 3D materials. Our method integrates multiple sources of conditioning, including color palettes, sketches, text, and pictures, enhancing creative possibilities and granting fine-grained control over material synthesis. Additionally, MatFuse enables map-level material editing capabilities through latent manipulation by means of a multi-encoder compression model, which learns a disentangled latent representation for each map. We demonstrate the effectiveness of MatFuse under multiple conditioning settings and explore the potential of material editing. Finally, we assess the quality of the generated materials both quantitatively, in terms of CLIP-IQA and FID scores, and qualitatively, by conducting a user study. Source code for training MatFuse and supplemental materials are publicly available at https://gvecchio.com/matfuse.
https://openaccess.thecvf.com/content/CVPR2024/papers/Vecchio_MatFuse_Controllable_Material_Generation_with_Diffusion_Models_CVPR_2024_paper.pdf
http://arxiv.org/abs/2308.11408
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Vecchio_MatFuse_Controllable_Material_Generation_with_Diffusion_Models_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Vecchio_MatFuse_Controllable_Material_Generation_with_Diffusion_Models_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Vecchio_MatFuse_Controllable_Material_CVPR_2024_supplemental.zip
null
CoGS: Controllable Gaussian Splatting
Heng Yu, Joel Julin, Zoltán A. Milacski, Koichiro Niinuma, László A. Jeni
Capturing and re-animating the 3D structure of articulated objects presents significant barriers. On one hand, methods requiring extensively calibrated multi-view setups are prohibitively complex and resource-intensive, limiting their practical applicability. On the other hand, while single-camera Neural Radiance Fields (NeRFs) offer a more streamlined approach, they have excessive training and rendering costs. 3D Gaussian Splatting would be a suitable alternative but for two reasons: firstly, existing methods for 3D dynamic Gaussians require synchronized multi-view cameras, and secondly, they lack controllability in dynamic scenarios. We present CoGS, a method for Controllable Gaussian Splatting that enables the direct manipulation of scene elements, offering real-time control of dynamic scenes without the prerequisite of pre-computing control signals. We evaluated CoGS using both synthetic and real-world datasets that include dynamic objects differing in degree of difficulty. In our evaluations, CoGS consistently outperformed existing dynamic and controllable neural representations in terms of visual fidelity.
https://openaccess.thecvf.com/content/CVPR2024/papers/Yu_CoGS_Controllable_Gaussian_Splatting_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.05664
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Yu_CoGS_Controllable_Gaussian_Splatting_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Yu_CoGS_Controllable_Gaussian_Splatting_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yu_CoGS_Controllable_Gaussian_CVPR_2024_supplemental.pdf
null
Partial-to-Partial Shape Matching with Geometric Consistency
Viktoria Ehm, Maolin Gao, Paul Roetzer, Marvin Eisenberger, Daniel Cremers, Florian Bernard
Finding correspondences between 3D shapes is an important and long-standing problem in computer vision, graphics, and beyond. A prominent challenge is the partial-to-partial shape matching setting, which occurs when the shapes to match are only observed incompletely (e.g., from 3D scanning). Although partial-to-partial matching is a highly relevant setting in practice, it is rarely explored. Our work bridges the gap between existing (rather artificial) 3D full shape matching and partial-to-partial real-world settings by exploiting geometric consistency as a strong constraint. We demonstrate that it is indeed possible to solve this challenging problem in a variety of settings. For the first time, we achieve geometric consistency for partial-to-partial matching, which is realized by a novel integer non-linear program formalism building on triangle product spaces, along with a new pruning algorithm based on linear integer programming. Further, we generate a new inter-class dataset for partial-to-partial shape matching. We show that our method outperforms current SOTA methods on both an established intra-class dataset and our novel inter-class dataset.
https://openaccess.thecvf.com/content/CVPR2024/papers/Ehm_Partial-to-Partial_Shape_Matching_with_Geometric_Consistency_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.12209
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Ehm_Partial-to-Partial_Shape_Matching_with_Geometric_Consistency_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Ehm_Partial-to-Partial_Shape_Matching_with_Geometric_Consistency_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ehm_Partial-to-Partial_Shape_Matching_CVPR_2024_supplemental.pdf
null
Descriptor and Word Soups: Overcoming the Parameter Efficiency Accuracy Tradeoff for Out-of-Distribution Few-shot Learning
Christopher Liao, Theodoros Tsiligkaridis, Brian Kulis
Over the past year, a large body of multimodal research has emerged around zero-shot evaluation using GPT descriptors. These studies boost the zero-shot accuracy of pretrained VL models with an ensemble of label-specific text generated by GPT. A recent study, WaffleCLIP, demonstrated that similar zero-shot accuracy can be achieved with an ensemble of random descriptors. However, both zero-shot methods are un-trainable and consequently sub-optimal when some few-shot out-of-distribution (OOD) training data is available. Inspired by these prior works, we present two more flexible methods called descriptor and word soups, which do not require an LLM at test time and can leverage training data to increase OOD target accuracy. Descriptor soup greedily selects a small set of textual descriptors using generic few-shot training data, then calculates robust class embeddings using the selected descriptors. Word soup greedily assembles a chain of words in a similar manner. Compared to existing few-shot soft prompt tuning methods, word soup requires fewer parameters by construction and less GPU memory, since it does not require backpropagation. Both soups outperform current published few-shot methods, even when combined with SoTA zero-shot methods, on cross-dataset and domain generalization benchmarks. Compared with SoTA prompt and descriptor ensembling methods such as ProDA and WaffleCLIP, word soup achieves higher OOD accuracy with fewer ensemble members. Please check out our code: https://github.com/Chris210634/word_soups
https://openaccess.thecvf.com/content/CVPR2024/papers/Liao_Descriptor_and_Word_Soups_Overcoming_the_Parameter_Efficiency_Accuracy_Tradeoff_CVPR_2024_paper.pdf
http://arxiv.org/abs/2311.13612
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Liao_Descriptor_and_Word_Soups_Overcoming_the_Parameter_Efficiency_Accuracy_Tradeoff_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Liao_Descriptor_and_Word_Soups_Overcoming_the_Parameter_Efficiency_Accuracy_Tradeoff_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Liao_Descriptor_and_Word_CVPR_2024_supplemental.pdf
null
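The greedy selection at the heart of descriptor soup can be sketched in a few lines. This is a toy illustration on made-up data; `select_soup`, `nearest_centroid_accuracy`, and the array shapes are our assumptions for exposition, not the paper's actual code:

```python
import numpy as np

def nearest_centroid_accuracy(class_embeds, image_embeds, labels):
    """Classify each image by its most similar (cosine) class embedding."""
    c = class_embeds / np.linalg.norm(class_embeds, axis=-1, keepdims=True)
    x = image_embeds / np.linalg.norm(image_embeds, axis=-1, keepdims=True)
    preds = (x @ c.T).argmax(axis=1)
    return (preds == labels).mean()

def select_soup(desc_embeds, image_embeds, labels, k=3):
    """Greedily pick descriptors whose averaged class embeddings
    maximize few-shot accuracy (descriptor-soup-style selection).

    desc_embeds: (n_desc, n_class, dim) text embeddings, one per
                 descriptor/class pair.
    """
    selected = []
    best_acc = -1.0
    for _ in range(k):
        best_d = None
        for d in range(desc_embeds.shape[0]):
            if d in selected:
                continue
            trial = selected + [d]
            # Robust class embedding: mean over the selected descriptors.
            class_embeds = desc_embeds[trial].mean(axis=0)
            acc = nearest_centroid_accuracy(class_embeds, image_embeds, labels)
            if acc > best_acc:
                best_acc, best_d = acc, d
        if best_d is None:  # no remaining descriptor improves accuracy
            break
        selected.append(best_d)
    return selected, best_acc
```

Because selection only compares precomputed embeddings, no backpropagation is needed, which is the source of the parameter and memory savings the abstract mentions.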
Harnessing the Power of MLLMs for Transferable Text-to-Image Person ReID
Wentan Tan, Changxing Ding, Jiayu Jiang, Fei Wang, Yibing Zhan, Dapeng Tao
Text-to-image person re-identification (ReID) retrieves pedestrian images according to textual descriptions. Manually annotating textual descriptions is time-consuming, restricting the scale of existing datasets and therefore the generalization ability of ReID models. As a result, we study the transferable text-to-image ReID problem, where we train a model on our proposed large-scale database and directly deploy it to various datasets for evaluation. We obtain substantial training data via Multi-modal Large Language Models (MLLMs). Moreover, we identify and address two key challenges in utilizing the obtained textual descriptions. First, an MLLM tends to generate descriptions with similar structures, causing the model to overfit specific sentence patterns. Thus, we propose a novel method that uses MLLMs to caption images according to various templates. These templates are obtained using a multi-turn dialogue with a Large Language Model (LLM). Therefore, we can build a large-scale dataset with diverse textual descriptions. Second, an MLLM may produce incorrect descriptions. Hence, we introduce a novel method that automatically identifies words in a description that do not correspond with the image. This method is based on the similarity between one text and all patch token embeddings in the image. Then, we mask these words with a larger probability in the subsequent training epoch, alleviating the impact of noisy textual descriptions. The experimental results demonstrate that our methods significantly boost the direct transfer text-to-image ReID performance. Benefiting from the pre-trained model weights, we also achieve state-of-the-art performance in the traditional evaluation settings.
https://openaccess.thecvf.com/content/CVPR2024/papers/Tan_Harnessing_the_Power_of_MLLMs_for_Transferable_Text-to-Image_Person_ReID_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Tan_Harnessing_the_Power_of_MLLMs_for_Transferable_Text-to-Image_Person_ReID_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Tan_Harnessing_the_Power_of_MLLMs_for_Transferable_Text-to-Image_Person_ReID_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Tan_Harnessing_the_Power_CVPR_2024_supplemental.pdf
null
360+x: A Panoptic Multi-modal Scene Understanding Dataset
Hao Chen, Yuqi Hou, Chenyuan Qu, Irene Testini, Xiaohan Hong, Jianbo Jiao
Human perception of the world is shaped by a multitude of viewpoints and modalities. While many existing datasets focus on scene understanding from a certain perspective (e.g. egocentric or third-person views), our dataset offers a panoptic perspective (i.e. multiple viewpoints with multiple data modalities). Specifically, we encapsulate third-person panoramic and front views, as well as egocentric monocular/binocular views, with rich modalities including video, multi-channel audio, directional binaural delay, location data, and textual scene descriptions within each captured scene, presenting a comprehensive observation of the world. To the best of our knowledge, this is the first database that covers multiple viewpoints with multiple data modalities to mimic how daily information is accessed in the real world. Through our benchmark analysis, we present five different scene understanding tasks on the proposed 360+x dataset to evaluate the impact and benefit of each data modality and perspective in panoptic scene understanding. We hope this unique dataset can broaden the scope of comprehensive scene understanding and encourage the community to approach these problems from more diverse perspectives.
https://openaccess.thecvf.com/content/CVPR2024/papers/Chen_360x_A_Panoptic_Multi-modal_Scene_Understanding_Dataset_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Chen_360x_A_Panoptic_Multi-modal_Scene_Understanding_Dataset_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Chen_360x_A_Panoptic_Multi-modal_Scene_Understanding_Dataset_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Chen_360x_A_Panoptic_CVPR_2024_supplemental.pdf
null
Weakly Supervised Video Individual Counting
Xinyan Liu, Guorong Li, Yuankai Qi, Ziheng Yan, Zhenjun Han, Anton van den Hengel, Ming-Hsuan Yang, Qingming Huang
Video Individual Counting (VIC) aims to predict the number of unique individuals in a single video. Existing methods learn representations based on trajectory labels for individuals, which are annotation-expensive. To provide a more realistic reflection of the underlying practical challenge, we introduce a weakly supervised VIC task, wherein trajectory labels are not provided. Instead, two types of labels are provided to indicate traffic entering the field of view (inflow) and leaving the field of view (outflow). We also propose the first solution as a baseline that formulates the task as a weakly supervised contrastive learning problem under group-level matching. In doing so, we devise an end-to-end trainable soft contrastive loss to drive the network to distinguish inflow, outflow, and the remaining individuals. To facilitate future study in this direction, we generate annotations from the existing VIC datasets SenseCrowd and CroHD, and also build a new dataset, UAVVIC. Extensive results show that our baseline weakly supervised method outperforms supervised methods, and thus little information is lost in the transition to the more practically relevant weakly supervised task. The code and trained model can be found at CGNet.
https://openaccess.thecvf.com/content/CVPR2024/papers/Liu_Weakly_Supervised_Video_Individual_Counting_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Liu_Weakly_Supervised_Video_Individual_Counting_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Liu_Weakly_Supervised_Video_Individual_Counting_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Liu_Weakly_Supervised_Video_CVPR_2024_supplemental.zip
null
Gaussian Shading: Provable Performance-Lossless Image Watermarking for Diffusion Models
Zijin Yang, Kai Zeng, Kejiang Chen, Han Fang, Weiming Zhang, Nenghai Yu
Ethical concerns surrounding copyright protection and inappropriate content generation pose challenges for the practical implementation of diffusion models. One effective solution involves watermarking the generated images. However, existing methods often compromise the model performance or require additional training, which is undesirable for operators and users. To address this issue, we propose Gaussian Shading, a diffusion model watermarking technique that is both performance-lossless and training-free, while serving the dual purpose of copyright protection and tracing of offending content. Our watermark embedding is free of model parameter modifications and thus is plug-and-play. We map the watermark to latent representations following a standard Gaussian distribution, which is indistinguishable from latent representations obtained from the non-watermarked diffusion model. Therefore, we can achieve watermark embedding with lossless performance, for which we also provide theoretical proof. Furthermore, since the watermark is intricately linked with image semantics, it exhibits resilience to lossy processing and erasure attempts. The watermark can be extracted by Denoising Diffusion Implicit Models (DDIM) inversion and inverse sampling. We evaluate Gaussian Shading on multiple versions of Stable Diffusion, and the results demonstrate that Gaussian Shading not only is performance-lossless but also outperforms existing methods in terms of robustness.
https://openaccess.thecvf.com/content/CVPR2024/papers/Yang_Gaussian_Shading_Provable_Performance-Lossless_Image_Watermarking_for_Diffusion_Models_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.04956
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Yang_Gaussian_Shading_Provable_Performance-Lossless_Image_Watermarking_for_Diffusion_Models_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Yang_Gaussian_Shading_Provable_Performance-Lossless_Image_Watermarking_for_Diffusion_Models_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yang_Gaussian_Shading_Provable_CVPR_2024_supplemental.pdf
null
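The core embedding idea of Gaussian Shading, mapping watermark bits into latents whose marginal remains a standard Gaussian, can be illustrated with a simplified one-bit-per-latent sketch. This is our own toy variant for exposition: bit b selects one half of the Gaussian and the sample is drawn from that truncated half via inverse-CDF sampling; the actual method encrypts the watermark so the bits look uniform (making the latents exactly N(0, 1)) and recovers them through DDIM inversion rather than directly from the latents:

```python
import random
from statistics import NormalDist

_ND = NormalDist()  # standard normal distribution

def embed_bits(bits):
    """Map each watermark bit to a latent sample.

    Bit b restricts the sample to the quantile interval [b/2, (b+1)/2),
    i.e. the negative or positive half of the standard Gaussian. When the
    bits are uniformly distributed, the latents are standard normal.
    """
    latents = []
    for b in bits:
        u = (b + random.random()) / 2   # uniform over the bit's interval
        u = max(u, 1e-12)               # inv_cdf requires 0 < u < 1
        latents.append(_ND.inv_cdf(u))
    return latents

def extract_bits(latents):
    """Recover each bit from the sign of its latent."""
    return [1 if z >= 0 else 0 for z in latents]
```

The extraction is exact in this sketch because the sign partition is lossless; robustness to image-space distortions in the real method comes from redundancy and from tying the watermark to the sampling process itself.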
Generalized Event Cameras
Varun Sundar, Matthew Dutson, Andrei Ardelean, Claudio Bruschini, Edoardo Charbon, Mohit Gupta
Event cameras capture the world at high time resolution and with minimal bandwidth requirements. However, event streams, which only encode changes in brightness, do not contain sufficient scene information to support a wide variety of downstream tasks. In this work, we design generalized event cameras that inherently preserve scene intensity in a bandwidth-efficient manner. We generalize event cameras in terms of when an event is generated and what information is transmitted. To implement our designs, we turn to single-photon sensors that provide digital access to individual photon detections; this modality gives us the flexibility to realize a rich space of generalized event cameras. Our single-photon event cameras are capable of high-speed, high-fidelity imaging at low readout rates. Consequently, these event cameras can support plug-and-play downstream inference without capturing new event datasets or designing specialized event-vision models. As a practical implication, our designs, which involve lightweight and near-sensor-compatible computations, provide a way to use single-photon sensors without exorbitant bandwidth costs.
https://openaccess.thecvf.com/content/CVPR2024/papers/Sundar_Generalized_Event_Cameras_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Sundar_Generalized_Event_Cameras_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Sundar_Generalized_Event_Cameras_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Sundar_Generalized_Event_Cameras_CVPR_2024_supplemental.pdf
null
3D Neural Edge Reconstruction
Lei Li, Songyou Peng, Zehao Yu, Shaohui Liu, Rémi Pautrat, Xiaochuan Yin, Marc Pollefeys
Real-world objects and environments are predominantly composed of edge features, including straight lines and curves. Such edges are crucial elements for various applications, such as CAD modeling, surface meshing, lane mapping, etc. However, existing traditional methods only prioritize lines over curves for simplicity in geometric modeling. To this end, we introduce EMAP, a new method for learning 3D edge representations with a focus on both lines and curves. Our method implicitly encodes 3D edge distance and direction in Unsigned Distance Functions (UDF) from multi-view edge maps. On top of this neural representation, we propose an edge extraction algorithm that robustly abstracts parametric 3D edges from the inferred edge points and their directions. Comprehensive evaluations demonstrate that our method achieves better 3D edge reconstruction on multiple challenging datasets. We further show that our learned UDF field enhances neural surface reconstruction by capturing more details.
https://openaccess.thecvf.com/content/CVPR2024/papers/Li_3D_Neural_Edge_Reconstruction_CVPR_2024_paper.pdf
http://arxiv.org/abs/2405.19295
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Li_3D_Neural_Edge_Reconstruction_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Li_3D_Neural_Edge_Reconstruction_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_3D_Neural_Edge_CVPR_2024_supplemental.pdf
null
DocRes: A Generalist Model Toward Unifying Document Image Restoration Tasks
Jiaxin Zhang, Dezhi Peng, Chongyu Liu, Peirong Zhang, Lianwen Jin
Document image restoration is a crucial aspect of Document AI systems, as the quality of document images significantly influences the overall performance. Prevailing methods address distinct restoration tasks independently, leading to intricate systems and the incapability to harness the potential synergies of multi-task learning. To overcome this challenge, we propose DocRes, a generalist model that unifies five document image restoration tasks, including dewarping, deshadowing, appearance enhancement, deblurring, and binarization. To instruct DocRes to perform various restoration tasks, we propose a novel visual prompt approach called Dynamic Task-Specific Prompt (DTSPrompt). The DTSPrompt for different tasks comprises distinct prior features, which are additional characteristics extracted from the input image. Beyond its role as a cue for task-specific execution, DTSPrompt can also serve as supplementary information to enhance the model's performance. Moreover, DTSPrompt is more flexible than prior visual prompt approaches, as it can be seamlessly applied and adapted to inputs with high and variable resolutions. Experimental results demonstrate that DocRes achieves competitive or superior performance compared to existing state-of-the-art task-specific models. This underscores the potential of DocRes across a broader spectrum of document image restoration tasks. The source code is publicly available at https://github.com/ZZZHANG-jx/DocRes.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_DocRes_A_Generalist_Model_Toward_Unifying_Document_Image_Restoration_Tasks_CVPR_2024_paper.pdf
http://arxiv.org/abs/2405.04408
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_DocRes_A_Generalist_Model_Toward_Unifying_Document_Image_Restoration_Tasks_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_DocRes_A_Generalist_Model_Toward_Unifying_Document_Image_Restoration_Tasks_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhang_DocRes_A_Generalist_CVPR_2024_supplemental.pdf
null
Honeybee: Locality-enhanced Projector for Multimodal LLM
Junbum Cha, Wooyoung Kang, Jonghwan Mun, Byungseok Roh
In Multimodal Large Language Models (MLLMs), a visual projector plays a crucial role in bridging pre-trained vision encoders with LLMs, enabling profound visual understanding while harnessing the LLMs' robust capabilities. Despite the importance of the visual projector, it has been relatively less explored. In this study, we first identify two essential projector properties: (i) flexibility in managing the number of visual tokens, crucial for MLLMs' overall efficiency, and (ii) preservation of local context from visual features, vital for spatial understanding. Based on these findings, we propose a novel projector design that is both flexible and locality-enhanced, effectively satisfying the two desirable properties. Additionally, we present comprehensive strategies to effectively utilize multiple and multifaceted instruction datasets. Through extensive experiments, we examine the impact of individual design choices. Finally, our proposed MLLM, Honeybee, remarkably outperforms previous state-of-the-art methods across various benchmarks, including MME, MMBench, SEED-Bench, and LLaVA-Bench, achieving significantly higher efficiency. Code and models are available at https://github.com/kakaobrain/honeybee.
https://openaccess.thecvf.com/content/CVPR2024/papers/Cha_Honeybee_Locality-enhanced_Projector_for_Multimodal_LLM_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.06742
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Cha_Honeybee_Locality-enhanced_Projector_for_Multimodal_LLM_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Cha_Honeybee_Locality-enhanced_Projector_for_Multimodal_LLM_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Cha_Honeybee_Locality-enhanced_Projector_CVPR_2024_supplemental.pdf
null
Learned Trajectory Embedding for Subspace Clustering
Yaroslava Lochman, Carl Olsson, Christopher Zach
Clustering multiple motions from observed point trajectories is a fundamental task in understanding dynamic scenes. Most motion models require multiple tracks to estimate their parameters, hence identifying clusters when multiple motions are observed is a very challenging task. This is even aggravated for high-dimensional motion models. The starting point of our work is that this high dimensionality of the motion model can actually be leveraged to our advantage, as sufficiently long trajectories identify the underlying motion uniquely in practice. Consequently, we propose to learn a mapping from trajectories to embedding vectors that represent the generating motion. The obtained trajectory embeddings are useful for clustering multiple observed motions, but are also trained to contain sufficient information to recover the parameters of the underlying motion by utilizing a geometric loss. We are therefore able to use only weak supervision from a given motion segmentation to train this mapping. The entire algorithm, consisting of trajectory embedding, clustering, and motion parameter estimation, is highly efficient. We conduct experiments on the Hopkins155, Hopkins12, and KT3DMoSeg datasets and show state-of-the-art performance of our proposed method for trajectory-based motion segmentation on full sequences, as well as its competitiveness on the occluded sequences. Project page: https://ylochman.github.io/trajectory-embedding.
https://openaccess.thecvf.com/content/CVPR2024/papers/Lochman_Learned_Trajectory_Embedding_for_Subspace_Clustering_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Lochman_Learned_Trajectory_Embedding_for_Subspace_Clustering_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Lochman_Learned_Trajectory_Embedding_for_Subspace_Clustering_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Lochman_Learned_Trajectory_Embedding_CVPR_2024_supplemental.pdf
null
Training Vision Transformers for Semi-Supervised Semantic Segmentation
Xinting Hu, Li Jiang, Bernt Schiele
We present S4Former, a novel approach to training Vision Transformers for Semi-Supervised Semantic Segmentation (S4). At its core, S4Former employs a Vision Transformer within a classic teacher-student framework and then leverages three novel technical ingredients: PatchShuffle as a parameter-free perturbation technique, Patch-Adaptive Self-Attention (PASA) as a fine-grained feature modulation method, and the innovative Negative Class Ranking (NCR) regularization loss. Based on these regularization modules, aligned with Transformer-specific characteristics across the image input, feature, and output dimensions, S4Former exploits the Transformer's ability to capture and differentiate consistent global contextual information in unlabeled images. Overall, S4Former not only defines a new state of the art in S4 but also maintains a streamlined and scalable architecture. Being readily compatible with existing frameworks, S4Former achieves strong improvements (up to 4.9%) on benchmarks like Pascal VOC 2012, COCO, and Cityscapes with varying numbers of labeled data. The code is at https://github.com/JoyHuYY1412/S4Former.
https://openaccess.thecvf.com/content/CVPR2024/papers/Hu_Training_Vision_Transformers_for_Semi-Supervised_Semantic_Segmentation_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Hu_Training_Vision_Transformers_for_Semi-Supervised_Semantic_Segmentation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Hu_Training_Vision_Transformers_for_Semi-Supervised_Semantic_Segmentation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Hu_Training_Vision_Transformers_CVPR_2024_supplemental.pdf
null
HarmonyView: Harmonizing Consistency and Diversity in One-Image-to-3D
Sangmin Woo, Byeongjun Park, Hyojun Go, Jin-Young Kim, Changick Kim
Recent progress in single-image 3D generation highlights the importance of multi-view coherency, leveraging 3D priors from large-scale diffusion models pretrained on Internet-scale images. However, the aspect of novel-view diversity remains underexplored within the research landscape due to the ambiguity in converting a 2D image into 3D content, where numerous potential shapes can emerge. Here, we aim to address this research gap by simultaneously addressing both consistency and diversity. Yet, striking a balance between these two aspects poses a considerable challenge due to their inherent trade-offs. This work introduces HarmonyView, a simple yet effective diffusion sampling technique adept at decomposing two intricate aspects in single-image 3D generation: consistency and diversity. This approach paves the way for a more nuanced exploration of the two critical dimensions within the sampling process. Moreover, we propose a new evaluation metric based on CLIP image and text encoders to comprehensively assess the diversity of the generated views, which closely aligns with human evaluators' judgments. In experiments, HarmonyView achieves a harmonious balance, demonstrating a win-win scenario in both consistency and diversity.
https://openaccess.thecvf.com/content/CVPR2024/papers/Woo_HarmonyView_Harmonizing_Consistency_and_Diversity_in_One-Image-to-3D_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.15980
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Woo_HarmonyView_Harmonizing_Consistency_and_Diversity_in_One-Image-to-3D_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Woo_HarmonyView_Harmonizing_Consistency_and_Diversity_in_One-Image-to-3D_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Woo_HarmonyView_Harmonizing_Consistency_CVPR_2024_supplemental.pdf
null
DGC-GNN: Leveraging Geometry and Color Cues for Visual Descriptor-Free 2D-3D Matching
Shuzhe Wang, Juho Kannala, Daniel Barath
Matching 2D keypoints in an image to a sparse 3D point cloud of the scene without requiring visual descriptors has garnered increased interest due to its low memory requirements, inherent privacy preservation, and reduced need for expensive 3D model maintenance compared to visual descriptor-based methods. However, existing algorithms often compromise on performance, resulting in a significant deterioration compared to their descriptor-based counterparts. In this paper, we introduce DGC-GNN, a novel algorithm that employs a global-to-local Graph Neural Network (GNN) that progressively exploits geometric and color cues to represent keypoints, thereby improving matching accuracy. Our procedure encodes both Euclidean and angular relations at a coarse level, forming the geometric embedding to guide the point matching. We evaluate DGC-GNN on both indoor and outdoor datasets, demonstrating that it not only doubles the accuracy of the state-of-the-art visual descriptor-free algorithm but also substantially narrows the performance gap between descriptor-based and descriptor-free methods.
https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_DGC-GNN_Leveraging_Geometry_and_Color_Cues_for_Visual_Descriptor-Free_2D-3D_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Wang_DGC-GNN_Leveraging_Geometry_and_Color_Cues_for_Visual_Descriptor-Free_2D-3D_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Wang_DGC-GNN_Leveraging_Geometry_and_Color_Cues_for_Visual_Descriptor-Free_2D-3D_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wang_DGC-GNN_Leveraging_Geometry_CVPR_2024_supplemental.pdf
null
CuVLER: Enhanced Unsupervised Object Discoveries through Exhaustive Self-Supervised Transformers
Shahaf Arica, Or Rubin, Sapir Gershov, Shlomi Laufer
In this paper, we introduce VoteCut, an innovative method for unsupervised object discovery that leverages feature representations from multiple self-supervised models. VoteCut employs normalized-cut-based graph partitioning, clustering, and a pixel voting approach. Additionally, we present CuVLER (Cut-Vote-and-LEaRn), a zero-shot model trained using pseudo-labels generated by VoteCut and a novel soft target loss to refine segmentation accuracy. Through rigorous evaluations across multiple datasets and several unsupervised setups, our methods demonstrate significant improvements in comparison to previous state-of-the-art models. Our ablation studies further highlight the contributions of each component, revealing the robustness and efficacy of our approach. Collectively, VoteCut and CuVLER pave the way for future advancements in image segmentation.
https://openaccess.thecvf.com/content/CVPR2024/papers/Arica_CuVLER_Enhanced_Unsupervised_Object_Discoveries_through_Exhaustive_Self-Supervised_Transformers_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.07700
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Arica_CuVLER_Enhanced_Unsupervised_Object_Discoveries_through_Exhaustive_Self-Supervised_Transformers_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Arica_CuVLER_Enhanced_Unsupervised_Object_Discoveries_through_Exhaustive_Self-Supervised_Transformers_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Arica_CuVLER_Enhanced_Unsupervised_CVPR_2024_supplemental.pdf
null
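The pixel-voting step that VoteCut uses to fuse evidence from multiple self-supervised models can be illustrated with a minimal sketch. This covers only the fusion step on hypothetical binary masks; in the full pipeline, the per-model masks themselves come from normalized-cut graph partitioning over self-supervised features, and `pixel_vote` is our name for the illustration, not the paper's API:

```python
import numpy as np

def pixel_vote(masks, threshold=0.5):
    """Fuse binary object masks from several models by per-pixel voting.

    masks: (n_models, H, W) array-like of {0, 1} masks for one image.
    A pixel survives when at least `threshold` of the models vote for it.
    """
    votes = np.asarray(masks, dtype=float).mean(axis=0)  # vote fraction
    return (votes >= threshold).astype(np.uint8)
```

Majority voting of this kind suppresses spurious regions that only a single backbone hallucinates, which is one intuition for why multi-model ensembling helps unsupervised discovery.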
Quantifying Task Priority for Multi-Task Optimization
Wooseong Jeong, Kuk-Jin Yoon
The goal of multi-task learning is to learn diverse tasks within a single unified network. As each task has its own unique objective function, conflicts emerge during training, resulting in negative transfer among them. Earlier research identified these conflicting gradients in shared parameters between tasks and attempted to realign them in the same direction. However, we prove that such optimization strategies lead to sub-optimal Pareto solutions due to their inability to accurately determine the individual contributions of each parameter across various tasks. In this paper, we propose the concept of task priority to evaluate parameter contributions across different tasks. To learn task priority, we identify the type of connections related to links between parameters influenced by task-specific losses during backpropagation. The strength of connections is gauged by the magnitude of parameters to determine task priority. Based on these, we present a new method named connection strength-based optimization for multi-task learning, which consists of two phases. The first phase learns the task priority within the network, while the second phase modifies the gradients while upholding this priority. This ultimately leads to finding new Pareto optimal solutions for multiple tasks. Through extensive experiments, we show that our approach greatly enhances multi-task performance in comparison to earlier gradient manipulation methods.
https://openaccess.thecvf.com/content/CVPR2024/papers/Jeong_Quantifying_Task_Priority_for_Multi-Task_Optimization_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Jeong_Quantifying_Task_Priority_for_Multi-Task_Optimization_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Jeong_Quantifying_Task_Priority_for_Multi-Task_Optimization_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Jeong_Quantifying_Task_Priority_CVPR_2024_supplemental.pdf
null
UnSAMFlow: Unsupervised Optical Flow Guided by Segment Anything Model
Shuai Yuan, Lei Luo, Zhuo Hui, Can Pu, Xiaoyu Xiang, Rakesh Ranjan, Denis Demandolx
Traditional unsupervised optical flow methods are vulnerable to occlusions and motion boundaries due to a lack of object-level information. Therefore, we propose UnSAMFlow, an unsupervised flow network that also leverages object information from the latest foundation model, Segment Anything Model (SAM). We first include a self-supervised semantic augmentation module tailored to SAM masks. We also analyze the poor gradient landscapes of traditional smoothness losses and propose a new smoothness definition based on homography instead. A simple yet effective mask feature module has also been added to further aggregate features on the object level. With all these adaptations, our method produces clear optical flow estimation with sharp boundaries around objects, which outperforms state-of-the-art methods on both the KITTI and Sintel datasets. Our method also generalizes well across domains and runs very efficiently.
https://openaccess.thecvf.com/content/CVPR2024/papers/Yuan_UnSAMFlow_Unsupervised_Optical_Flow_Guided_by_Segment_Anything_Model_CVPR_2024_paper.pdf
http://arxiv.org/abs/2405.02608
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Yuan_UnSAMFlow_Unsupervised_Optical_Flow_Guided_by_Segment_Anything_Model_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Yuan_UnSAMFlow_Unsupervised_Optical_Flow_Guided_by_Segment_Anything_Model_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yuan_UnSAMFlow_Unsupervised_Optical_CVPR_2024_supplemental.pdf
null
Exploiting Inter-sample and Inter-feature Relations in Dataset Distillation
Wenxiao Deng, Wenbin Li, Tianyu Ding, Lei Wang, Hongguang Zhang, Kuihua Huang, Jing Huo, Yang Gao
Dataset distillation has emerged as a promising approach in deep learning, enabling efficient training with small synthetic datasets derived from larger real ones. In particular, distribution matching-based distillation methods have attracted attention thanks to their effectiveness and low computational cost. However, these methods face two primary limitations: the dispersed feature distribution within the same class in synthetic datasets, which reduces class discrimination, and an exclusive focus on mean feature consistency, lacking precision and comprehensiveness. To address these challenges, we introduce two novel constraints: a class centralization constraint and a covariance matching constraint. The class centralization constraint aims to enhance class discrimination by more closely clustering samples within classes. The covariance matching constraint seeks to achieve more accurate feature distribution matching between real and synthetic datasets through local feature covariance matrices, which is particularly beneficial when sample sizes are much smaller than the number of features. Experiments demonstrate notable improvements with these constraints, yielding performance boosts of up to 6.6% on CIFAR10, 2.9% on SVHN, 2.5% on CIFAR100, and 2.5% on TinyImageNet compared to state-of-the-art relevant methods. In addition, our method maintains robust performance in cross-architecture settings, with a maximum performance drop of 1.7% on four architectures. Code is available at https://github.com/VincenDen/IID.
https://openaccess.thecvf.com/content/CVPR2024/papers/Deng_Exploiting_Inter-sample_and_Inter-feature_Relations_in_Dataset_Distillation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.00563
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Deng_Exploiting_Inter-sample_and_Inter-feature_Relations_in_Dataset_Distillation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Deng_Exploiting_Inter-sample_and_Inter-feature_Relations_in_Dataset_Distillation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Deng_Exploiting_Inter-sample_and_CVPR_2024_supplemental.pdf
null
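The two constraints above can be illustrated with a minimal numpy sketch. The function names and formulations here are hypothetical simplifications: the paper's actual losses operate on network features during distillation, and its covariance matching uses local feature covariance matrices.

```python
import numpy as np

def class_centralization_loss(feats, labels):
    """Mean squared distance of each sample's features to its class mean.
    Smaller values mean tighter, more discriminative class clusters."""
    classes = np.unique(labels)
    loss = 0.0
    for c in classes:
        f = feats[labels == c]
        loss += np.mean(np.sum((f - f.mean(axis=0)) ** 2, axis=1))
    return loss / len(classes)

def covariance_matching_loss(real_feats, syn_feats):
    """Squared Frobenius distance between the feature covariance matrices of
    real and synthetic samples (of the same class)."""
    cov_real = np.cov(real_feats, rowvar=False)
    cov_syn = np.cov(syn_feats, rowvar=False)
    return float(np.linalg.norm(cov_real - cov_syn, ord="fro") ** 2)
```

During distillation, both terms would be added to the distribution matching objective so that synthetic samples cluster per class while matching second-order feature statistics.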
On the Scalability of Diffusion-based Text-to-Image Generation
Hao Li, Yang Zou, Ying Wang, Orchid Majumder, Yusheng Xie, R. Manmatha, Ashwin Swaminathan, Zhuowen Tu, Stefano Ermon, Stefano Soatto
Scaling up model and data size has been quite successful for the evolution of LLMs. However, the scaling law for diffusion-based text-to-image (T2I) models is not fully explored. It is also unclear how to efficiently scale the model for better performance at reduced cost. The different training settings and expensive training cost make a fair model comparison extremely difficult. In this work, we empirically study the scaling properties of diffusion-based T2I models by performing extensive and rigorous ablations on scaling both denoising backbones and the training set, including training scaled UNet and Transformer variants ranging from 0.4B to 4B parameters on datasets of up to 600M images. For model scaling, we find that the location and amount of cross attention distinguishes the performance of existing UNet designs, and that increasing the transformer blocks is more parameter-efficient for improving text-image alignment than increasing channel numbers. We then identify an efficient UNet variant, which is 45% smaller and 28% faster than SDXL's UNet. On the data scaling side, we show that the quality and diversity of the training set matter more than simply dataset size. Increasing caption density and diversity improves text-image alignment performance and learning efficiency. Finally, we provide scaling functions to predict the text-image alignment performance as functions of the scale of model size, compute, and dataset size.
https://openaccess.thecvf.com/content/CVPR2024/papers/Li_On_the_Scalability_of_Diffusion-based_Text-to-Image_Generation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.02883
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Li_On_the_Scalability_of_Diffusion-based_Text-to-Image_Generation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Li_On_the_Scalability_of_Diffusion-based_Text-to-Image_Generation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_On_the_Scalability_CVPR_2024_supplemental.pdf
null
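The closing claim about scaling functions can be illustrated with the standard recipe of fitting a power law in log-log space. This is a generic sketch of the technique, not the paper's actual fitted functions or coefficients.

```python
import numpy as np

def fit_power_law(x, y):
    """Fit y ~ a * x**b by least squares on (log x, log y); return (a, b)."""
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(log_a), b

def predict(a, b, x):
    """Evaluate the fitted power law at new scales x."""
    return a * np.asarray(x, dtype=float) ** b
```

With (x, y) as, say, (training compute, alignment score), the fitted curve extrapolates performance to larger scales before committing to an expensive run.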
Entity-NeRF: Detecting and Removing Moving Entities in Urban Scenes
Takashi Otonari, Satoshi Ikehata, Kiyoharu Aizawa
Recent advancements in the study of Neural Radiance Fields (NeRF) for dynamic scenes often involve explicit modeling of scene dynamics. However, this approach faces challenges in modeling scene dynamics in urban environments, where moving objects of various categories and scales are present. In such settings, it becomes crucial to effectively eliminate moving objects to accurately reconstruct static backgrounds. Our research introduces an innovative method, termed here as Entity-NeRF, which combines the strengths of knowledge-based and statistical strategies. This approach utilizes entity-wise statistics, leveraging entity segmentation and stationary entity classification through thing/stuff segmentation. To assess our methodology, we created an urban scene dataset masked with moving objects. Our comprehensive experiments demonstrate that Entity-NeRF notably outperforms existing techniques in removing moving objects and reconstructing static urban backgrounds, both quantitatively and qualitatively.
https://openaccess.thecvf.com/content/CVPR2024/papers/Otonari_Entity-NeRF_Detecting_and_Removing_Moving_Entities_in_Urban_Scenes_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Otonari_Entity-NeRF_Detecting_and_Removing_Moving_Entities_in_Urban_Scenes_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Otonari_Entity-NeRF_Detecting_and_Removing_Moving_Entities_in_Urban_Scenes_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Otonari_Entity-NeRF_Detecting_and_CVPR_2024_supplemental.pdf
null
TAMM: TriAdapter Multi-Modal Learning for 3D Shape Understanding
Zhihao Zhang, Shengcao Cao, Yu-Xiong Wang
The limited scale of current 3D shape datasets hinders advancements in 3D shape understanding and motivates multi-modal learning approaches, which transfer learned knowledge from data-abundant 2D image and language modalities to 3D shapes. However, even though image and language representations have been aligned by cross-modal models like CLIP, we find that the image modality fails to contribute as much as the language in existing multi-modal 3D representation learning methods. This is attributed to the domain shift in the 2D images and the distinct focus of each modality. To more effectively leverage both modalities in pre-training, we introduce TriAdapter Multi-Modal Learning (TAMM), a novel two-stage learning approach based on three synergistic adapters. First, our CLIP Image Adapter mitigates the domain gap between 3D-rendered images and natural images by adapting the visual representations of CLIP for synthetic image-text pairs. Subsequently, our Dual Adapters decouple the 3D shape representation space into two complementary sub-spaces: one focusing on visual attributes and the other on semantic understanding, which ensures a more comprehensive and effective multi-modal pre-training. Extensive experiments demonstrate that TAMM consistently enhances 3D representations for a wide range of 3D encoder architectures, pre-training datasets, and downstream tasks. Notably, we boost the zero-shot classification accuracy on Objaverse-LVIS from 46.8% to 50.7% and improve the 5-way 10-shot linear probing classification accuracy on ModelNet40 from 96.1% to 99.0%. Project page: https://alanzhangcs.github.io/tamm-page.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_TAMM_TriAdapter_Multi-Modal_Learning_for_3D_Shape_Understanding_CVPR_2024_paper.pdf
http://arxiv.org/abs/2402.18490
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_TAMM_TriAdapter_Multi-Modal_Learning_for_3D_Shape_Understanding_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_TAMM_TriAdapter_Multi-Modal_Learning_for_3D_Shape_Understanding_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhang_TAMM_TriAdapter_Multi-Modal_CVPR_2024_supplemental.pdf
null
GauHuman: Articulated Gaussian Splatting from Monocular Human Videos
Shoukang Hu, Tao Hu, Ziwei Liu
We present GauHuman, a 3D human model with Gaussian Splatting for both fast training (1-2 minutes) and real-time rendering (up to 189 FPS), in contrast to existing NeRF-based implicit representation modelling frameworks demanding hours of training and seconds of rendering per frame. Specifically, GauHuman encodes Gaussian Splatting in the canonical space and transforms 3D Gaussians from canonical space to posed space with linear blend skinning (LBS), in which effective pose and LBS refinement modules are designed to learn fine details of 3D humans at negligible computational cost. Moreover, to enable fast optimization of GauHuman, we initialize and prune 3D Gaussians with a 3D human prior, while splitting/cloning via KL divergence guidance, along with a novel merge operation for further speed-up. Extensive experiments on ZJU_Mocap and MonoCap datasets demonstrate that GauHuman achieves state-of-the-art performance quantitatively and qualitatively with fast training and real-time rendering speed. Notably, without sacrificing rendering quality, GauHuman can rapidly model the 3D human performer with 13k 3D Gaussians. Our code is available at https://github.com/skhu101/GauHuman.
https://openaccess.thecvf.com/content/CVPR2024/papers/Hu_GauHuman_Articulated_Gaussian_Splatting_from_Monocular_Human_Videos_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.02973
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Hu_GauHuman_Articulated_Gaussian_Splatting_from_Monocular_Human_Videos_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Hu_GauHuman_Articulated_Gaussian_Splatting_from_Monocular_Human_Videos_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Hu_GauHuman_Articulated_Gaussian_CVPR_2024_supplemental.pdf
null
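The KL divergence guidance for splitting/cloning rests on the closed-form KL between multivariate Gaussians. Below is a generic numpy sketch of that formula; how GauHuman thresholds it to decide which Gaussians to split or clone is not reproduced here.

```python
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """Closed-form KL( N(mu0, cov0) || N(mu1, cov1) ) for k-dim Gaussians."""
    k = mu0.shape[0]
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (
        np.trace(cov1_inv @ cov0)        # shape mismatch term
        + diff @ cov1_inv @ diff         # mean displacement term
        - k
        + np.log(np.linalg.det(cov1) / np.linalg.det(cov0))
    )
```

A low KL between a candidate split and its parent indicates near-redundant Gaussians, which is why a divergence-based criterion can keep the Gaussian count small.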
AnySkill: Learning Open-Vocabulary Physical Skill for Interactive Agents
Jieming Cui, Tengyu Liu, Nian Liu, Yaodong Yang, Yixin Zhu, Siyuan Huang
Traditional approaches in physics-based motion generation, centered around imitation learning and reward shaping, often struggle to adapt to new scenarios. To tackle this limitation, we propose AnySkill, a novel hierarchical method that learns physically plausible interactions following open-vocabulary instructions. Our approach begins by developing a set of atomic actions via a low-level controller trained with imitation learning. Upon receiving an open-vocabulary textual instruction, AnySkill employs a high-level policy that selects and integrates these atomic actions to maximize the CLIP similarity between the agent's rendered images and the text. An important feature of our method is the use of image-based rewards for the high-level policy, which allows the agent to learn interactions with objects without manual reward engineering. We demonstrate AnySkill's capability to generate realistic and natural motion sequences in response to unseen instructions of varying lengths, marking it the first method capable of open-vocabulary physical skill learning for interactive humanoid agents.
https://openaccess.thecvf.com/content/CVPR2024/papers/Cui_AnySkill_Learning_Open-Vocabulary_Physical_Skill_for_Interactive_Agents_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.12835
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Cui_AnySkill_Learning_Open-Vocabulary_Physical_Skill_for_Interactive_Agents_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Cui_AnySkill_Learning_Open-Vocabulary_Physical_Skill_for_Interactive_Agents_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Cui_AnySkill_Learning_Open-Vocabulary_CVPR_2024_supplemental.pdf
null
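The image-based reward described above amounts to cosine similarity between embeddings of the agent's rendered frame and the instruction. The sketch below assumes the embeddings come from CLIP-style image and text encoders (not included here); the function name is illustrative.

```python
import numpy as np

def clip_style_reward(image_emb, text_emb):
    """Cosine similarity between a rendered-frame embedding and the
    instruction embedding, used as a scalar reward for the high-level policy."""
    image_emb = image_emb / np.linalg.norm(image_emb)
    text_emb = text_emb / np.linalg.norm(text_emb)
    return float(image_emb @ text_emb)
```

Because the reward is computed purely from rendered images and text, no per-task reward function needs to be engineered by hand.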
EGTR: Extracting Graph from Transformer for Scene Graph Generation
Jinbae Im, JeongYeon Nam, Nokyung Park, Hyungmin Lee, Seunghyun Park
Scene Graph Generation (SGG) is a challenging task of detecting objects and predicting relationships between them. Since DETR was developed, one-stage SGG models based on a one-stage object detector have been actively studied. However, complex modeling is used to predict the relationships between objects, and the inherent relationships among object queries learned in the multi-head self-attention of the object detector have been neglected. We propose a lightweight one-stage SGG model that extracts the relation graph from the various relationships learned in the multi-head self-attention layers of the DETR decoder. By fully utilizing the self-attention by-products, the relation graph can be extracted effectively with a shallow relation extraction head. Considering the dependency of the relation extraction task on the object detection task, we propose a novel relation smoothing technique that adjusts the relation label adaptively according to the quality of the detected objects. With relation smoothing, the model is trained according to a continuous curriculum that focuses on the object detection task at the beginning of training and performs multi-task learning as the object detection performance gradually improves. Furthermore, we propose a connectivity prediction task that predicts whether a relation exists between object pairs as an auxiliary task of relation extraction. We demonstrate the effectiveness and efficiency of our method on the Visual Genome and Open Image V6 datasets. Our code is publicly available at https://github.com/naver-ai/egtr.
https://openaccess.thecvf.com/content/CVPR2024/papers/Im_EGTR_Extracting_Graph_from_Transformer_for_Scene_Graph_Generation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.02072
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Im_EGTR_Extracting_Graph_from_Transformer_for_Scene_Graph_Generation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Im_EGTR_Extracting_Graph_from_Transformer_for_Scene_Graph_Generation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Im_EGTR_Extracting_Graph_CVPR_2024_supplemental.pdf
null
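The relation smoothing idea can be sketched as scaling relation targets by a per-pair detection-quality score in [0, 1]. This is a minimal illustrative reading, and EGTR's exact adjustment rule may differ; the quality score would come from the detector (e.g., matching quality of the subject and object queries).

```python
import numpy as np

def smooth_relation_labels(labels, pair_quality):
    """Soften binary relation labels by the detection quality of each
    subject-object pair. Early in training quality is low, so relation targets
    stay near zero and gradients favor detection; as detection improves, the
    targets approach the hard labels and relation learning takes over."""
    pair_quality = np.clip(pair_quality, 0.0, 1.0)
    return labels * pair_quality
```

The curriculum emerges for free: no schedule is hand-tuned, because the targets sharpen exactly as fast as detection improves.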
Generative Unlearning for Any Identity
Juwon Seo, Sung-Hoon Lee, Tae-Young Lee, Seungjun Moon, Gyeong-Moon Park
Recent advances in generative models trained on large-scale datasets have made it possible to synthesize high-quality samples across various domains. Moreover, the emergence of strong inversion networks enables not only the reconstruction of real-world images but also the modification of attributes through various editing methods. However, in certain domains related to privacy issues, e.g., human faces, advanced generative models along with strong inversion methods can lead to potential misuses. In this paper, we propose an essential yet under-explored task called generative identity unlearning, which steers the model not to generate an image of a specific identity. In generative identity unlearning, we target the following objectives: (i) preventing the generation of images with a certain identity and (ii) preserving the overall quality of the generative model. To satisfy these goals, we propose a novel framework, Generative Unlearning for Any Identity (GUIDE), which prevents the reconstruction of a specific identity by unlearning the generator with only a single image. GUIDE consists of two parts: (i) finding a target point for optimization that un-identifies the source latent code and (ii) novel loss functions that facilitate the unlearning procedure while less affecting the learned distribution. Our extensive experiments demonstrate that our proposed method achieves state-of-the-art performance in the generative machine unlearning task. The code is available at https://github.com/KHU-AGI/GUIDE.
https://openaccess.thecvf.com/content/CVPR2024/papers/Seo_Generative_Unlearning_for_Any_Identity_CVPR_2024_paper.pdf
http://arxiv.org/abs/2405.09879
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Seo_Generative_Unlearning_for_Any_Identity_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Seo_Generative_Unlearning_for_Any_Identity_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Seo_Generative_Unlearning_for_CVPR_2024_supplemental.pdf
null
Context-based and Diversity-driven Specificity in Compositional Zero-Shot Learning
Yun Li, Zhe Liu, Hang Chen, Lina Yao
Compositional Zero-Shot Learning (CZSL) aims to recognize unseen attribute-object pairs based on a limited set of observed examples. Current CZSL methodologies, despite their advancements, tend to neglect the distinct specificity levels present in attributes. For instance, given images of sliced strawberries, they may fail to prioritize `Sliced-Strawberry' over a generic `Red-Strawberry', despite the former being more informative. They also suffer from a ballooning search space when shifting from Close-World (CW) to Open-World (OW) CZSL. To address these issues, we introduce the Context-based and Diversity-driven Specificity learning framework for CZSL (CDS-CZSL). Our framework evaluates the specificity of attributes by considering the diversity of objects they apply to and their related context. This novel approach allows for more accurate predictions by emphasizing specific attribute-object pairs and improves composition filtering in OW-CZSL. We conduct experiments in both CW and OW scenarios, and our model achieves state-of-the-art results across three datasets.
https://openaccess.thecvf.com/content/CVPR2024/papers/Li_Context-based_and_Diversity-driven_Specificity_in_Compositional_Zero-Shot_Learning_CVPR_2024_paper.pdf
http://arxiv.org/abs/2402.17251
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Li_Context-based_and_Diversity-driven_Specificity_in_Compositional_Zero-Shot_Learning_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Li_Context-based_and_Diversity-driven_Specificity_in_Compositional_Zero-Shot_Learning_CVPR_2024_paper.html
CVPR 2024
null
null
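One plausible reading of diversity-driven specificity is to score each attribute by the inverse of how many distinct objects it co-occurs with: `Sliced` applies to fewer object types than `Red`, so it is more specific. This is an illustrative sketch only, not the paper's exact formulation, which also incorporates context.

```python
from collections import defaultdict

def attribute_specificity(pairs):
    """Score attributes by inverse object diversity: an attribute seen with
    fewer distinct objects gets a higher (more specific) score."""
    objects_per_attr = defaultdict(set)
    for attr, obj in pairs:
        objects_per_attr[attr].add(obj)
    return {attr: 1.0 / len(objs) for attr, objs in objects_per_attr.items()}
```

Such scores can then bias prediction toward more informative pairs and prune implausible compositions in the open-world setting.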
FlowVid: Taming Imperfect Optical Flows for Consistent Video-to-Video Synthesis
Feng Liang, Bichen Wu, Jialiang Wang, Licheng Yu, Kunpeng Li, Yinan Zhao, Ishan Misra, Jia-Bin Huang, Peizhao Zhang, Peter Vajda, Diana Marculescu
Diffusion models have transformed image-to-image (I2I) synthesis and are now permeating into videos. However, the advancement of video-to-video (V2V) synthesis has been hampered by the challenge of maintaining temporal consistency across video frames. This paper proposes a consistent V2V synthesis framework by jointly leveraging spatial conditions and temporal optical flow clues within the source video. Contrary to prior methods that strictly adhere to optical flow, our approach harnesses its benefits while handling the imperfection in flow estimation. We encode the optical flow via warping from the first frame and serve it as a supplementary reference in the diffusion model. This enables our model to synthesize videos by editing the first frame with any prevalent I2I model and then propagating edits to successive frames. Our V2V model, FlowVid, demonstrates remarkable properties: (1) Flexibility: FlowVid works seamlessly with existing I2I models, facilitating various modifications including stylization, object swaps, and local edits. (2) Efficiency: generating a 4-second video at 30 FPS and 512x512 resolution takes only 1.5 minutes, which is 3.1x, 7.2x, and 10.5x faster than CoDeF, Rerender, and TokenFlow, respectively. (3) High quality: in user studies, our FlowVid is preferred 45.7% of the time, outperforming CoDeF (3.5%), Rerender (10.2%), and TokenFlow (40.4%).
https://openaccess.thecvf.com/content/CVPR2024/papers/Liang_FlowVid_Taming_Imperfect_Optical_Flows_for_Consistent_Video-to-Video_Synthesis_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.17681
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Liang_FlowVid_Taming_Imperfect_Optical_Flows_for_Consistent_Video-to-Video_Synthesis_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Liang_FlowVid_Taming_Imperfect_Optical_Flows_for_Consistent_Video-to-Video_Synthesis_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Liang_FlowVid_Taming_Imperfect_CVPR_2024_supplemental.pdf
null
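The "warping from the first frame" step can be sketched as a backward warp: each pixel of a later frame looks up the (edited) first frame at the location its flow vector points to. This minimal sketch uses nearest-neighbor sampling on a single-channel frame; real systems use bilinear sampling and occlusion handling, which are omitted here.

```python
import numpy as np

def warp_first_frame(first_frame, flow):
    """Backward-warp a 2D frame using a (H, W, 2) flow field of (dx, dy)
    displacements; source coordinates are rounded and clipped to the image."""
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return first_frame[src_y, src_x]
```

The warped result is then only a supplementary reference for the diffusion model, which is what lets the framework tolerate imperfect flow instead of strictly adhering to it.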
StyleCineGAN: Landscape Cinemagraph Generation using a Pre-trained StyleGAN
Jongwoo Choi, Kwanggyoon Seo, Amirsaman Ashtari, Junyong Noh
We propose a method that can generate cinemagraphs automatically from a still landscape image using a pre-trained StyleGAN. Inspired by the success of recent unconditional video generation, we leverage a powerful pre-trained image generator to synthesize high-quality cinemagraphs. Unlike previous approaches that mainly utilize the latent space of a pre-trained StyleGAN, our approach utilizes its deep feature space for both GAN inversion and cinemagraph generation. Specifically, we propose multi-scale deep feature warping (MSDFW), which warps the intermediate features of a pre-trained StyleGAN at different resolutions. By using MSDFW, the generated cinemagraphs are of high resolution and exhibit plausible looping animation. We demonstrate the superiority of our method through user studies and quantitative comparisons with state-of-the-art cinemagraph generation methods and a video generation method that uses a pre-trained StyleGAN.
https://openaccess.thecvf.com/content/CVPR2024/papers/Choi_StyleCineGAN_Landscape_Cinemagraph_Generation_using_a_Pre-trained_StyleGAN_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.14186
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Choi_StyleCineGAN_Landscape_Cinemagraph_Generation_using_a_Pre-trained_StyleGAN_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Choi_StyleCineGAN_Landscape_Cinemagraph_Generation_using_a_Pre-trained_StyleGAN_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Choi_StyleCineGAN_Landscape_Cinemagraph_CVPR_2024_supplemental.pdf
null
Rethinking Multi-domain Generalization with A General Learning Objective
Zhaorui Tan, Xi Yang, Kaizhu Huang
Multi-domain generalization (mDG) universally aims to minimize the discrepancy between training and testing distributions to enhance marginal-to-label distribution mapping. However, existing mDG literature lacks a general learning objective paradigm and often imposes constraints on static target marginal distributions. In this paper, we propose to leverage a Y-mapping to relax the constraint. We rethink the learning objective for mDG and design a new general learning objective to interpret and analyze most existing mDG wisdom. This general objective is bifurcated into two synergistic aims: learning domain-independent conditional features and maximizing a posterior. Explorations also extend to two effective regularization terms that incorporate prior information and suppress invalid causality, alleviating the issues that come with relaxed constraints. We theoretically contribute an upper bound for the domain alignment of domain-independent conditional features, disclosing that many previous mDG endeavors actually only partially optimize the objective and thus lead to limited performance. As such, our study distills a general learning objective into four practical components, providing a general, robust, and flexible mechanism to handle complex domain shifts. Extensive empirical results indicate that the proposed objective with Y-mapping leads to substantially better mDG performance in various downstream tasks, including regression, segmentation, and classification. Code is available at https://github.com/zhaorui-tan/GMDG/tree/main.
https://openaccess.thecvf.com/content/CVPR2024/papers/Tan_Rethinking_Multi-domain_Generalization_with_A_General_Learning_Objective_CVPR_2024_paper.pdf
http://arxiv.org/abs/2402.18853
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Tan_Rethinking_Multi-domain_Generalization_with_A_General_Learning_Objective_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Tan_Rethinking_Multi-domain_Generalization_with_A_General_Learning_Objective_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Tan_Rethinking_Multi-domain_Generalization_CVPR_2024_supplemental.pdf
null
Laplacian-guided Entropy Model in Neural Codec with Blur-dissipated Synthesis
Atefeh Khoshkhahtinat, Ali Zafari, Piyush M. Mehta, Nasser M. Nasrabadi
While replacing Gaussian decoders with a conditional diffusion model enhances the perceptual quality of reconstructions in neural image compression, the lack of inductive bias for image data restricts such models' ability to achieve state-of-the-art perceptual levels. To address this limitation, we adopt a non-isotropic diffusion model at the decoder side. This model imposes an inductive bias aimed at distinguishing between frequency contents, thereby facilitating the generation of high-quality images. Moreover, our framework is equipped with a novel entropy model that accurately models the probability distribution of the latent representation by exploiting spatio-channel correlations in latent space while accelerating the entropy decoding step. This channel-wise entropy model leverages both local and global spatial contexts within each channel chunk. The global spatial context is built upon a Transformer specifically designed for image compression tasks. The designed Transformer employs a Laplacian-shaped positional encoding, the learnable parameters of which are adaptively adjusted for each channel cluster. Our experiments demonstrate that our proposed framework yields better perceptual quality compared to cutting-edge generative-based codecs, and the proposed entropy model contributes to notable bitrate savings. The code is available at https://github.com/Atefeh-Khoshtinat/Blur-dissipated-compression.
https://openaccess.thecvf.com/content/CVPR2024/papers/Khoshkhahtinat_Laplacian-guided_Entropy_Model_in_Neural_Codec_with_Blur-dissipated_Synthesis_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.16258
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Khoshkhahtinat_Laplacian-guided_Entropy_Model_in_Neural_Codec_with_Blur-dissipated_Synthesis_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Khoshkhahtinat_Laplacian-guided_Entropy_Model_in_Neural_Codec_with_Blur-dissipated_Synthesis_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Khoshkhahtinat_Laplacian-guided_Entropy_Model_CVPR_2024_supplemental.pdf
null
Universal Novelty Detection Through Adaptive Contrastive Learning
Hossein Mirzaei, Mojtaba Nafez, Mohammad Jafari, Mohammad Bagher Soltani, Mohammad Azizmalayeri, Jafar Habibi, Mohammad Sabokrou, Mohammad Hossein Rohban
Novelty detection is a critical task for deploying machine learning models in the open world. A crucial property of novelty detection methods is universality, which can be interpreted as generalization across various distributions of training or test data. More precisely, for novelty detection, distribution shifts may occur in the training set or the test set. Shifts in the training set refer to cases where we train a novelty detector on a new dataset and expect strong transferability. Conversely, distribution shifts in the test set indicate the methods' performance when the trained model encounters a shifted test sample. We experimentally show that existing methods falter in maintaining universality, which stems from their rigid inductive biases. Motivated by this, we aim for more generalized techniques that have more adaptable inductive biases. In this context, we leverage the fact that contrastive learning provides an efficient framework to easily switch and adapt to new inductive biases through the proper choice of augmentations in forming the negative pairs. We propose a novel probabilistic auto-negative pair generation method, AutoAugOOD, along with contrastive learning to yield a universal novelty detection method. Our experiments demonstrate the superiority of our method under different distribution shifts in various image benchmark datasets. Notably, our method exhibits universality through its adaptability to different setups of novelty detection, including one-class, unlabeled multi-class, and labeled multi-class settings.
https://openaccess.thecvf.com/content/CVPR2024/papers/Mirzaei_Universal_Novelty_Detection_Through_Adaptive_Contrastive_Learning_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Mirzaei_Universal_Novelty_Detection_Through_Adaptive_Contrastive_Learning_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Mirzaei_Universal_Novelty_Detection_Through_Adaptive_Contrastive_Learning_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Mirzaei_Universal_Novelty_Detection_CVPR_2024_supplemental.pdf
null
Rethinking Diffusion Model for Multi-Contrast MRI Super-Resolution
Guangyuan Li, Chen Rao, Juncheng Mo, Zhanjie Zhang, Wei Xing, Lei Zhao
Recently, diffusion models (DM) have been applied in magnetic resonance imaging (MRI) super-resolution (SR) reconstruction, exhibiting impressive performance, especially with regard to detailed reconstruction. However, current DM-based SR reconstruction methods still face the following issues: (1) they require a large number of iterations to reconstruct the final image, which is inefficient and consumes a significant amount of computational resources; (2) the results reconstructed by these methods are often misaligned with the real high-resolution images, leading to remarkable distortion in the reconstructed MR images. To address the aforementioned issues, we propose an efficient diffusion model for multi-contrast MRI SR, named DiffMSR. Specifically, we apply DM in a highly compact low-dimensional latent space to generate prior knowledge with high-frequency detail information. The highly compact latent space ensures that DM requires only a few simple iterations to produce accurate prior knowledge. In addition, we design the Prior-Guide Large Window Transformer (PLWformer) as the decoder for DM, which can extend the receptive field while fully utilizing the prior knowledge generated by DM to ensure that the reconstructed MR image remains undistorted. Extensive experiments on public and clinical datasets demonstrate that our DiffMSR outperforms state-of-the-art methods.
https://openaccess.thecvf.com/content/CVPR2024/papers/Li_Rethinking_Diffusion_Model_for_Multi-Contrast_MRI_Super-Resolution_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.04785
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Li_Rethinking_Diffusion_Model_for_Multi-Contrast_MRI_Super-Resolution_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Li_Rethinking_Diffusion_Model_for_Multi-Contrast_MRI_Super-Resolution_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_Rethinking_Diffusion_Model_CVPR_2024_supplemental.pdf
null
Resurrecting Old Classes with New Data for Exemplar-Free Continual Learning
Dipam Goswami, Albin Soutif-Cormerais, Yuyang Liu, Sandesh Kamath, Bartłomiej Twardowski, Joost van de Weijer
Continual learning methods are known to suffer from catastrophic forgetting, a phenomenon that is particularly hard to counter for methods that do not store exemplars of previous tasks. Therefore, to reduce potential drift in the feature extractor, existing exemplar-free methods are typically evaluated in settings where the first task is significantly larger than subsequent tasks. Their performance drops drastically in more challenging settings starting with a smaller first task. To address this problem of feature drift estimation for exemplar-free methods, we propose to adversarially perturb the current samples such that their embeddings are close to the old class prototypes in the old model embedding space. We then estimate the drift in the embedding space from the old to the new model using the perturbed images and compensate the prototypes accordingly. We exploit the fact that adversarial samples are transferable from the old to the new feature space in a continual learning setting. The generation of these images is simple and computationally cheap. We demonstrate in our experiments that the proposed approach better tracks the movement of prototypes in embedding space and outperforms existing methods on several standard continual learning benchmarks as well as on fine-grained datasets. Code is available at https://github.com/dipamgoswami/ADC.
https://openaccess.thecvf.com/content/CVPR2024/papers/Goswami_Resurrecting_Old_Classes_with_New_Data_for_Exemplar-Free_Continual_Learning_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Goswami_Resurrecting_Old_Classes_with_New_Data_for_Exemplar-Free_Continual_Learning_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Goswami_Resurrecting_Old_Classes_with_New_Data_for_Exemplar-Free_Continual_Learning_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Goswami_Resurrecting_Old_Classes_CVPR_2024_supplemental.pdf
null
Unknown Prompt the only Lacuna: Unveiling CLIP's Potential for Open Domain Generalization
Mainak Singha, Ankit Jha, Shirsha Bose, Ashwin Nair, Moloud Abdar, Biplab Banerjee
We delve into Open Domain Generalization (ODG), marked by domain and category shifts between training's labeled source and testing's unlabeled target domains. Existing solutions to ODG face limitations due to constrained generalizations of traditional CNN backbones and errors in detecting target open samples in the absence of prior knowledge. Addressing these pitfalls, we introduce ODG-CLIP, harnessing the semantic prowess of the vision-language model CLIP. Our framework brings forth three primary innovations: Firstly, distinct from prevailing paradigms, we conceptualize ODG as a multi-class classification challenge encompassing both known and novel categories. Central to our approach is modeling a unique prompt tailored for detecting unknown class samples, and to train this, we employ a readily accessible stable diffusion model, elegantly generating proxy images for the open class. Secondly, aiming for domain-tailored classification (prompt) weights while ensuring a balance of precision and simplicity, we devise a novel visual style-centric prompt learning mechanism. Finally, we infuse images with class-discriminative knowledge derived from the prompt space to augment the fidelity of CLIP's visual embeddings. We introduce a novel objective to safeguard the continuity of this infused semantic intel across domains, especially for the shared classes. Through rigorous testing on diverse datasets covering closed and open-set DG contexts, ODG-CLIP demonstrates clear supremacy, consistently outpacing peers with performance boosts between 8% and 16%. Code will be available at https://github.com/mainaksingha01/ODG-CLIP.
https://openaccess.thecvf.com/content/CVPR2024/papers/Singha_Unknown_Prompt_the_only_Lacuna_Unveiling_CLIPs_Potential_for_Open_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Singha_Unknown_Prompt_the_only_Lacuna_Unveiling_CLIPs_Potential_for_Open_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Singha_Unknown_Prompt_the_only_Lacuna_Unveiling_CLIPs_Potential_for_Open_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Singha_Unknown_Prompt_the_CVPR_2024_supplemental.pdf
null
Poly Kernel Inception Network for Remote Sensing Detection
Xinhao Cai, Qiuxia Lai, Yuwei Wang, Wenguan Wang, Zeren Sun, Yazhou Yao
Object detection in remote sensing images (RSIs) often suffers from several increasing challenges, including the large variation in object scales and the diverse-ranging context. Prior methods tried to address these challenges by expanding the spatial receptive field of the backbone, either through large-kernel convolution or dilated convolution. However, the former typically introduces considerable background noise, while the latter risks generating overly sparse feature representations. In this paper, we introduce the Poly Kernel Inception Network (PKINet) to handle the above challenges. PKINet employs multi-scale convolution kernels without dilation to extract object features of varying scales and capture local context. In addition, a Context Anchor Attention (CAA) module is introduced in parallel to capture long-range contextual information. These two components work jointly to advance the performance of PKINet on four challenging remote sensing object detection benchmarks, namely DOTA-v1.0, DOTA-v1.5, HRSC2016, and DIOR-R.
https://openaccess.thecvf.com/content/CVPR2024/papers/Cai_Poly_Kernel_Inception_Network_for_Remote_Sensing_Detection_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.06258
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Cai_Poly_Kernel_Inception_Network_for_Remote_Sensing_Detection_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Cai_Poly_Kernel_Inception_Network_for_Remote_Sensing_Detection_CVPR_2024_paper.html
CVPR 2024
null
null
RMT: Retentive Networks Meet Vision Transformers
Qihang Fan, Huaibo Huang, Mingrui Chen, Hongmin Liu, Ran He
Vision Transformer (ViT) has gained increasing attention in the computer vision community in recent years. However, the core component of ViT, Self-Attention, lacks explicit spatial priors and bears a quadratic computational complexity, thereby constraining the applicability of ViT. To alleviate these issues, we draw inspiration from the recent Retentive Network (RetNet) in the field of NLP and propose RMT, a strong vision backbone with explicit spatial prior for general purposes. Specifically, we extend RetNet's temporal decay mechanism to the spatial domain and propose a spatial decay matrix based on the Manhattan distance to introduce the explicit spatial prior to Self-Attention. Additionally, an attention decomposition form that adeptly adapts to the explicit spatial prior is proposed, aiming to reduce the computational burden of modeling global information without disrupting the spatial decay matrix. Based on the spatial decay matrix and the attention decomposition form, we can flexibly integrate explicit spatial prior into the vision backbone with linear complexity. Extensive experiments demonstrate that RMT exhibits exceptional performance across various vision tasks. Specifically, without extra training data, RMT achieves 84.8% and 86.1% top-1 accuracy on ImageNet-1k with 27M/4.5GFLOPs and 96M/18.2GFLOPs. For downstream tasks, RMT achieves 54.5 box AP and 47.2 mask AP on the COCO detection task and 52.8 mIoU on the ADE20K semantic segmentation task.
https://openaccess.thecvf.com/content/CVPR2024/papers/Fan_RMT_Retentive_Networks_Meet_Vision_Transformers_CVPR_2024_paper.pdf
http://arxiv.org/abs/2309.11523
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Fan_RMT_Retentive_Networks_Meet_Vision_Transformers_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Fan_RMT_Retentive_Networks_Meet_Vision_Transformers_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Fan_RMT_Retentive_Networks_CVPR_2024_supplemental.pdf
null
From Coarse to Fine-Grained Open-Set Recognition
Nico Lang, Vésteinn Snæbjarnarson, Elijah Cole, Oisin Mac Aodha, Christian Igel, Serge Belongie
Open-set recognition (OSR) methods aim to identify whether or not a test example belongs to a category observed during training. Depending on how visually similar a test example is to the training categories, the OSR task can be easy or extremely challenging. However, the vast majority of previous work has studied OSR in the presence of large, coarse-grained semantic shifts. In contrast, many real-world problems are inherently fine-grained, which means that test examples may be highly visually similar to the training categories. Motivated by this observation, we investigate three aspects of OSR: label granularity, similarity between the open- and closed-sets, and the role of hierarchical supervision during training. To study these dimensions, we curate new open-set splits of a large fine-grained visual categorization dataset. Our analysis results in several interesting findings, including: (i) the best OSR method to use is heavily dependent on the degree of semantic shift present, and (ii) hierarchical representation learning can improve coarse-grained OSR but has little effect on fine-grained OSR performance. To further enhance fine-grained OSR performance, we propose a hierarchy-adversarial learning method to discourage hierarchical structure in the representation space, which results in a perhaps counter-intuitive behaviour and a relative improvement in fine-grained OSR of up to 2% in AUROC and 7% in AUPR over standard training. Code and data are available: langnico.github.io/fine-grained-osr.
https://openaccess.thecvf.com/content/CVPR2024/papers/Lang_From_Coarse_to_Fine-Grained_Open-Set_Recognition_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Lang_From_Coarse_to_Fine-Grained_Open-Set_Recognition_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Lang_From_Coarse_to_Fine-Grained_Open-Set_Recognition_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Lang_From_Coarse_to_CVPR_2024_supplemental.pdf
null
Multimodal Pathway: Improve Transformers with Irrelevant Data from Other Modalities
Yiyuan Zhang, Xiaohan Ding, Kaixiong Gong, Yixiao Ge, Ying Shan, Xiangyu Yue
We propose to improve transformers of a specific modality with irrelevant data from other modalities, e.g. improving an ImageNet model with audio or point cloud datasets. We would like to highlight that the data samples of the target modality are irrelevant to the other modalities, which distinguishes our method from other works utilizing paired (e.g. CLIP) or interleaved data of different modalities. We propose a methodology named Multimodal Pathway: given a target modality and a transformer designed for it, we use an auxiliary transformer trained with data of another modality and construct pathways to connect components of the two models, so that data of the target modality can be processed by both models. In this way, we utilize the universal sequence-to-sequence modeling abilities of transformers obtained from two modalities. As a concrete implementation, we use a modality-specific tokenizer and task-specific head as usual, but utilize the transformer blocks of the auxiliary model via a proposed method named Cross-Modal Re-parameterization, which exploits the auxiliary weights without any inference costs. On the image, point cloud, video, and audio recognition tasks, we observe significant and consistent performance improvements with irrelevant data from other modalities. The code and models are available at https://github.com/AILab-CVC/M2PT.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_Multimodal_Pathway_Improve_Transformers_with_Irrelevant_Data_from_Other_Modalities_CVPR_2024_paper.pdf
http://arxiv.org/abs/2401.14405
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Multimodal_Pathway_Improve_Transformers_with_Irrelevant_Data_from_Other_Modalities_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Multimodal_Pathway_Improve_Transformers_with_Irrelevant_Data_from_Other_Modalities_CVPR_2024_paper.html
CVPR 2024
null
null
FaceChain-ImagineID: Freely Crafting High-Fidelity Diverse Talking Faces from Disentangled Audio
Chao Xu, Yang Liu, Jiazheng Xing, Weida Wang, Mingze Sun, Jun Dan, Tianxin Huang, Siyuan Li, Zhi-Qi Cheng, Ying Tai, Baigui Sun
In this paper, we abstract the process of people hearing speech, extracting meaningful cues, and creating various dynamically audio-consistent talking faces, termed Listening and Imagining, into the task of high-fidelity diverse talking faces generation from a single audio. Specifically, it involves two critical challenges: one is to effectively decouple identity, content, and emotion from entangled audio, and the other is to maintain intra-video diversity and inter-video consistency. To tackle the issues, we first dig out the intricate relationships among facial factors and simplify the decoupling process, tailoring a Progressive Audio Disentanglement for accurate facial geometry and semantics learning, where each stage incorporates a customized training module responsible for a specific factor. Secondly, to achieve visually diverse and audio-synchronized animation solely from input audio within a single model, we introduce the Controllable Coherent Frame generation, which involves the flexible integration of three trainable adapters with frozen Latent Diffusion Models (LDMs) to focus on maintaining facial geometry and semantics, as well as texture and temporal coherence between frames. In this way, we inherit high-quality diverse generation from LDMs while significantly improving their controllability at a low training cost. Extensive experiments demonstrate the flexibility and effectiveness of our method in handling this paradigm. The codes will be released at https://github.com/modelscope/facechain.
https://openaccess.thecvf.com/content/CVPR2024/papers/Xu_FaceChain-ImagineID_Freely_Crafting_High-Fidelity_Diverse_Talking_Faces_from_Disentangled_Audio_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Xu_FaceChain-ImagineID_Freely_Crafting_High-Fidelity_Diverse_Talking_Faces_from_Disentangled_Audio_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Xu_FaceChain-ImagineID_Freely_Crafting_High-Fidelity_Diverse_Talking_Faces_from_Disentangled_Audio_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Xu_FaceChain-ImagineID_Freely_Crafting_CVPR_2024_supplemental.pdf
null
OmniViD: A Generative Framework for Universal Video Understanding
Junke Wang, Dongdong Chen, Chong Luo, Bo He, Lu Yuan, Zuxuan Wu, Yu-Gang Jiang
The core of video understanding tasks, such as recognition, captioning, and tracking, is to automatically detect objects or actions in a video and analyze their temporal evolution. Despite sharing a common goal, different tasks often rely on distinct model architectures and annotation formats. In contrast, natural language processing benefits from a unified output space, i.e. text sequences, which simplifies the training of powerful foundational language models such as GPT-3 with extensive training corpora. Inspired by this, we seek to unify the output space of video understanding tasks by using languages as labels and additionally introducing time and box tokens. In this way, a variety of video tasks could be formulated as video-grounded token generation. This enables us to address various types of video tasks, including classification (such as action recognition), captioning (covering clip captioning, video question answering, and dense video captioning), and localization tasks (such as visual object tracking), within a fully shared encoder-decoder architecture following a generative framework. Through comprehensive experiments, we demonstrate such a simple and straightforward idea is quite effective and can achieve state-of-the-art or competitive results on seven video benchmarks, providing a novel perspective for more universal video understanding. Code is available at https://github.com/wangjk666/OmniVid.
https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_OmniViD_A_Generative_Framework_for_Universal_Video_Understanding_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.17935
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Wang_OmniViD_A_Generative_Framework_for_Universal_Video_Understanding_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Wang_OmniViD_A_Generative_Framework_for_Universal_Video_Understanding_CVPR_2024_paper.html
CVPR 2024
null
null
Naturally Supervised 3D Visual Grounding with Language-Regularized Concept Learners
Chun Feng, Joy Hsu, Weiyu Liu, Jiajun Wu
3D visual grounding is a challenging task that often requires direct and dense supervision, notably the semantic label for each object in the scene. In this paper, we instead study the naturally supervised setting that learns from only 3D scene and QA pairs, where prior works underperform. We propose the Language-Regularized Concept Learner (LARC), which uses constraints from language as regularization to significantly improve the accuracy of neuro-symbolic concept learners in the naturally supervised setting. Our approach is based on two core insights: the first is that language constraints (e.g. a word's relation to another) can serve as effective regularization for structured representations in neuro-symbolic models; the second is that we can query large language models to distill such constraints from language properties. We show that LARC improves performance of prior works in naturally supervised 3D visual grounding and demonstrates a wide range of 3D visual reasoning capabilities--from zero-shot composition to data efficiency and transferability. Our method represents a promising step towards regularizing structured visual reasoning frameworks with language-based priors for learning in settings without dense supervision.
https://openaccess.thecvf.com/content/CVPR2024/papers/Feng_Naturally_Supervised_3D_Visual_Grounding_with_Language-Regularized_Concept_Learners_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.19696
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Feng_Naturally_Supervised_3D_Visual_Grounding_with_Language-Regularized_Concept_Learners_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Feng_Naturally_Supervised_3D_Visual_Grounding_with_Language-Regularized_Concept_Learners_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Feng_Naturally_Supervised_3D_CVPR_2024_supplemental.pdf
null
SSR-Encoder: Encoding Selective Subject Representation for Subject-Driven Generation
Yuxuan Zhang, Yiren Song, Jiaming Liu, Rui Wang, Jinpeng Yu, Hao Tang, Huaxia Li, Xu Tang, Yao Hu, Han Pan, Zhongliang Jing
Recent advancements in subject-driven image generation have led to zero-shot generation, yet precise selection and focus on crucial subject representations remain challenging. Addressing this, we introduce the SSR-Encoder, a novel architecture designed for selectively capturing any subject from single or multiple reference images. It responds to various query modalities, including text and masks, without necessitating test-time fine-tuning. The SSR-Encoder combines a Token-to-Patch Aligner, which aligns query inputs with image patches, and a Detail-Preserving Subject Encoder for extracting and preserving fine features of the subjects, thereby generating subject embeddings. These embeddings, used in conjunction with original text embeddings, condition the generation process. Characterized by its model generalizability and efficiency, the SSR-Encoder adapts to a range of custom models and control modules. Enhanced by the Embedding Consistency Regularization Loss for improved training, our extensive experiments demonstrate its effectiveness in versatile and high-quality image generation, indicating its broad applicability.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_SSR-Encoder_Encoding_Selective_Subject_Representation_for_Subject-Driven_Generation_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_SSR-Encoder_Encoding_Selective_Subject_Representation_for_Subject-Driven_Generation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_SSR-Encoder_Encoding_Selective_Subject_Representation_for_Subject-Driven_Generation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhang_SSR-Encoder_Encoding_Selective_CVPR_2024_supplemental.pdf
null
CA-Jaccard: Camera-aware Jaccard Distance for Person Re-identification
Yiyu Chen, Zheyi Fan, Zhaoru Chen, Yixuan Zhu
Person re-identification (re-ID) is a challenging task that aims to learn discriminative features for person retrieval. In person re-ID, Jaccard distance is a widely used distance metric, especially in re-ranking and clustering scenarios. However, we discover that camera variation has a significant negative impact on the reliability of Jaccard distance. In particular, Jaccard distance calculates the distance based on the overlap of relevant neighbors. Due to camera variation, intra-camera samples dominate the relevant neighbors, which reduces the reliability of the neighbors by introducing intra-camera negative samples and excluding inter-camera positive samples. To overcome this problem, we propose a novel camera-aware Jaccard (CA-Jaccard) distance that leverages camera information to enhance the reliability of Jaccard distance. Specifically, we design camera-aware k-reciprocal nearest neighbors (CKRNNs) to find k-reciprocal nearest neighbors on the intra-camera and inter-camera ranking lists, which improves the reliability of relevant neighbors and guarantees the contribution of inter-camera samples in the overlap. Moreover, we propose a camera-aware local query expansion (CLQE) to mine reliable samples in relevant neighbors by exploiting camera variation as a strong constraint and assign these samples higher weights in the overlap, further improving the reliability. Our CA-Jaccard distance is simple yet effective and can serve as a general distance metric for person re-ID methods with high reliability and low computational cost. Extensive experiments demonstrate the effectiveness of our method. Code is available at https://github.com/chen960/CA-Jaccard/.
https://openaccess.thecvf.com/content/CVPR2024/papers/Chen_CA-Jaccard_Camera-aware_Jaccard_Distance_for_Person_Re-identification_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Chen_CA-Jaccard_Camera-aware_Jaccard_Distance_for_Person_Re-identification_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Chen_CA-Jaccard_Camera-aware_Jaccard_Distance_for_Person_Re-identification_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Chen_CA-Jaccard_Camera-aware_Jaccard_CVPR_2024_supplemental.pdf
null
Dual Prior Unfolding for Snapshot Compressive Imaging
Jiancheng Zhang, Haijin Zeng, Jiezhang Cao, Yongyong Chen, Dengxiu Yu, Yin-Ping Zhao
Recently, deep unfolding methods have achieved remarkable success in the realm of Snapshot Compressive Imaging (SCI) reconstruction. However, the existing methods all follow the iterative framework of a single image prior, which limits the efficiency of the unfolding methods and makes it difficult to use other priors simply and effectively. To break out of the box, we derive an effective Dual Prior Unfolding (DPU), which achieves the joint utilization of multiple deep priors and greatly improves iteration efficiency. Our unfolding method is implemented through two parts, i.e. the Dual Prior Framework (DPF) and Focused Attention (FA). In brief, in addition to the normal image prior, DPF introduces a residual into the iteration formula and constructs a degraded prior for the residual by considering various degradations to establish the unfolding framework. To improve the effectiveness of the image prior based on self-attention, FA adopts a novel mechanism inspired by PCA denoising to scale and filter attention, which lets the attention focus more on effective features with little computation cost. Besides, an asymmetric backbone is proposed to further improve the efficiency of hierarchical self-attention. Remarkably, our 5-stage DPU achieves state-of-the-art (SOTA) performance with the least FLOPs and parameters compared to previous methods, while our 9-stage DPU significantly outperforms other unfolding methods with less computational requirement.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_Dual_Prior_Unfolding_for_Snapshot_Compressive_Imaging_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Dual_Prior_Unfolding_for_Snapshot_Compressive_Imaging_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Dual_Prior_Unfolding_for_Snapshot_Compressive_Imaging_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhang_Dual_Prior_Unfolding_CVPR_2024_supplemental.pdf
null
COLMAP-Free 3D Gaussian Splatting
Yang Fu, Sifei Liu, Amey Kulkarni, Jan Kautz, Alexei A. Efros, Xiaolong Wang
While neural rendering has led to impressive advances in scene reconstruction and novel view synthesis, it relies heavily on accurately pre-computed camera poses. To relax this constraint, multiple efforts have been made to train Neural Radiance Fields (NeRFs) without pre-processed camera poses. However, the implicit representations of NeRFs provide extra challenges to optimize the 3D structure and camera poses at the same time. On the other hand, the recently proposed 3D Gaussian Splatting provides new opportunities given its explicit point cloud representations. This paper leverages both the explicit geometric representation and the continuity of the input video stream to perform novel view synthesis without any SfM preprocessing. We process the input frames in a sequential manner and progressively grow the 3D Gaussian set by taking one input frame at a time, without the need to pre-compute the camera poses. Our method significantly improves over previous approaches in view synthesis and camera pose estimation under large motion changes. Our project page is: https://oasisyang.github.io/colmap-free-3dgs.
https://openaccess.thecvf.com/content/CVPR2024/papers/Fu_COLMAP-Free_3D_Gaussian_Splatting_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.07504
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Fu_COLMAP-Free_3D_Gaussian_Splatting_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Fu_COLMAP-Free_3D_Gaussian_Splatting_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Fu_COLMAP-Free_3D_Gaussian_CVPR_2024_supplemental.pdf
null
MVIP-NeRF: Multi-view 3D Inpainting on NeRF Scenes via Diffusion Prior
Honghua Chen, Chen Change Loy, Xingang Pan
Despite the emergence of successful NeRF inpainting methods built upon explicit RGB and depth 2D inpainting supervisions, these methods are inherently constrained by the capabilities of their underlying 2D inpainters. This is due to two key reasons: (i) independently inpainting constituent images results in view-inconsistent imagery, and (ii) 2D inpainters struggle to ensure high-quality geometry completion and alignment with inpainted RGB images. To overcome these limitations, we propose a novel approach called MVIP-NeRF that harnesses the potential of diffusion priors for NeRF inpainting, addressing both appearance and geometry aspects. MVIP-NeRF performs joint inpainting across multiple views to reach a consistent solution, which is achieved via an iterative optimization process based on Score Distillation Sampling (SDS). Apart from recovering the rendered RGB images, we also extract normal maps as a geometric representation and define a normal SDS loss that motivates accurate geometry inpainting and alignment with the appearance. Additionally, we formulate a multi-view SDS score function to distill generative priors simultaneously from different view images, ensuring consistent visual completion when dealing with large view variations. Our experimental results show better appearance and geometry recovery than previous NeRF inpainting methods.
https://openaccess.thecvf.com/content/CVPR2024/papers/Chen_MVIP-NeRF_Multi-view_3D_Inpainting_on_NeRF_Scenes_via_Diffusion_Prior_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Chen_MVIP-NeRF_Multi-view_3D_Inpainting_on_NeRF_Scenes_via_Diffusion_Prior_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Chen_MVIP-NeRF_Multi-view_3D_Inpainting_on_NeRF_Scenes_via_Diffusion_Prior_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Chen_MVIP-NeRF_Multi-view_3D_CVPR_2024_supplemental.pdf
null
StegoGAN: Leveraging Steganography for Non-Bijective Image-to-Image Translation
Sidi Wu, Yizi Chen, Samuel Mermet, Lorenz Hurni, Konrad Schindler, Nicolas Gonthier, Loic Landrieu
Most image-to-image translation models postulate that a unique correspondence exists between the semantic classes of the source and target domains. However, this assumption does not always hold in real-world scenarios due to divergent distributions, different class sets, and asymmetrical information representation. As conventional GANs attempt to generate images that match the distribution of the target domain, they may hallucinate spurious instances of classes absent from the source domain, thereby diminishing the usefulness and reliability of translated images. CycleGAN-based methods are also known to hide the mismatched information in the generated images to bypass cycle consistency objectives, a process known as steganography. In response to the challenge of non-bijective image translation, we introduce StegoGAN, a novel model that leverages steganography to prevent spurious features in generated images. Our approach enhances the semantic consistency of the translated images without requiring additional postprocessing or supervision. Our experimental evaluations demonstrate that StegoGAN outperforms existing GAN-based models across various non-bijective image-to-image translation tasks, both qualitatively and quantitatively. Our code and pretrained models are accessible at https://github.com/sian-wusidi/StegoGAN.
https://openaccess.thecvf.com/content/CVPR2024/papers/Wu_StegoGAN_Leveraging_Steganography_for_Non-Bijective_Image-to-Image_Translation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.20142
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Wu_StegoGAN_Leveraging_Steganography_for_Non-Bijective_Image-to-Image_Translation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Wu_StegoGAN_Leveraging_Steganography_for_Non-Bijective_Image-to-Image_Translation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wu_StegoGAN_Leveraging_Steganography_CVPR_2024_supplemental.pdf
null
M&M VTO: Multi-Garment Virtual Try-On and Editing
Luyang Zhu, Yingwei Li, Nan Liu, Hao Peng, Dawei Yang, Ira Kemelmacher-Shlizerman
We present M&M VTO, a mix-and-match virtual try-on method that takes as input multiple garment images, a text description of the garment layout, and an image of a person. An example input includes: an image of a shirt, an image of a pair of pants, "rolled sleeves, shirt tucked in", and an image of a person. The output is a visualization of how those garments (in the desired layout) would look on the given person. Key contributions of our method are: 1) a single-stage diffusion-based model with no super-resolution cascading that allows mixing and matching multiple garments at 1024x512 resolution while preserving and warping intricate garment details; 2) an architecture design (VTO UNet Diffusion Transformer) that disentangles denoising from person-specific features, allowing for a highly effective finetuning strategy for identity preservation (6MB model per individual vs. 4GB achieved with, e.g., DreamBooth finetuning), solving a common identity-loss problem in current virtual try-on methods; 3) layout control for multiple garments via text inputs finetuned over PaLI-3 for the virtual try-on task. Experimental results indicate that M&M VTO achieves state-of-the-art performance both qualitatively and quantitatively, and opens up new opportunities for language-guided and multi-garment virtual try-on.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhu_MM_VTO_Multi-Garment_Virtual_Try-On_and_Editing_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhu_MM_VTO_Multi-Garment_Virtual_Try-On_and_Editing_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhu_MM_VTO_Multi-Garment_Virtual_Try-On_and_Editing_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhu_MM_VTO_Multi-Garment_CVPR_2024_supplemental.zip
null
AutoAD III: The Prequel - Back to the Pixels
Tengda Han, Max Bain, Arsha Nagrani, Gül Varol, Weidi Xie, Andrew Zisserman
Generating Audio Description (AD) for movies is a challenging task that requires fine-grained visual understanding and an awareness of the characters and their names. Currently, visual language models for AD generation are limited by a lack of suitable training data, and their evaluation is hampered by the use of performance measures not specialized to the AD domain. In this paper, we make three contributions: (i) We propose two approaches for constructing AD datasets with aligned video data, and build training and evaluation datasets using these. These datasets will be publicly released; (ii) We develop a Q-former-based architecture which ingests raw video and generates AD, using frozen pre-trained visual encoders and large language models; and (iii) We provide new evaluation metrics to benchmark AD quality that are well matched to human performance. Taken together, we improve the state of the art on AD generation.
https://openaccess.thecvf.com/content/CVPR2024/papers/Han_AutoAD_III_The_Prequel_-_Back_to_the_Pixels_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Han_AutoAD_III_The_Prequel_-_Back_to_the_Pixels_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Han_AutoAD_III_The_Prequel_-_Back_to_the_Pixels_CVPR_2024_paper.html
CVPR 2024
null
null
Characteristics Matching Based Hash Codes Generation for Efficient Fine-grained Image Retrieval
Zhen-Duo Chen, Li-Jun Zhao, Zi-Chao Zhang, Xin Luo, Xin-Shun Xu
The rapidly growing scale of data in practice poses demands on the efficiency of retrieval models. However, for the fine-grained image retrieval task, there are inherent contradictions in the design of hashing-based efficient models. Firstly, the limited information-embedding capacity of low-dimensional binary hash codes, coupled with the detailed information required to describe fine-grained categories, results in a contradiction in feature learning. Secondly, there is also a contradiction between the complexity of fine-grained feature extraction models and retrieval efficiency. To address these issues, in this paper we propose the characteristics-matching-based hash codes generation method. Coupled with the cross-layer semantic information transfer module and the multi-region feature embedding module, the proposed method can generate hash codes that effectively capture fine-grained differences among samples while ensuring efficient inference. Extensive experiments on widely used datasets demonstrate that our method can significantly outperform state-of-the-art methods.
https://openaccess.thecvf.com/content/CVPR2024/papers/Chen_Characteristics_Matching_Based_Hash_Codes_Generation_for_Efficient_Fine-grained_Image_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Chen_Characteristics_Matching_Based_Hash_Codes_Generation_for_Efficient_Fine-grained_Image_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Chen_Characteristics_Matching_Based_Hash_Codes_Generation_for_Efficient_Fine-grained_Image_CVPR_2024_paper.html
CVPR 2024
null
null
BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning
Siyuan Liang, Mingli Zhu, Aishan Liu, Baoyuan Wu, Xiaochun Cao, Ee-Chien Chang
While existing backdoor attacks have successfully infected multimodal contrastive learning (MCL) models such as CLIP, they can be easily countered by specialized backdoor defenses for MCL models. This paper reveals the threats in this practical scenario and introduces the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses. To achieve this, we draw motivation from the perspective of the Bayesian rule and propose a dual-embedding guided framework for backdoor attacks. Specifically, we ensure that visual trigger patterns approximate the textual target semantics in the embedding space, making it challenging to detect the subtle parameter variations induced by backdoor learning on such natural trigger patterns. Additionally, we optimize the visual trigger patterns to align the poisoned samples with target vision features, in order to hinder backdoor unlearning through clean fine-tuning. Our experiments show a significant improvement in attack success rate (+45.3% ASR) over current leading methods, even against state-of-the-art backdoor defenses, highlighting our attack's effectiveness in various scenarios, including downstream tasks. Our code can be found at https://github.com/LiangSiyuan21/BadCLIP.
https://openaccess.thecvf.com/content/CVPR2024/papers/Liang_BadCLIP_Dual-Embedding_Guided_Backdoor_Attack_on_Multimodal_Contrastive_Learning_CVPR_2024_paper.pdf
http://arxiv.org/abs/2311.12075
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Liang_BadCLIP_Dual-Embedding_Guided_Backdoor_Attack_on_Multimodal_Contrastive_Learning_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Liang_BadCLIP_Dual-Embedding_Guided_Backdoor_Attack_on_Multimodal_Contrastive_Learning_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Liang_BadCLIP_Dual-Embedding_Guided_CVPR_2024_supplemental.pdf
null
Dynamic Inertial Poser (DynaIP): Part-Based Motion Dynamics Learning for Enhanced Human Pose Estimation with Sparse Inertial Sensors
Yu Zhang, Songpengcheng Xia, Lei Chu, Jiarui Yang, Qi Wu, Ling Pei
This paper introduces a novel human pose estimation approach using sparse inertial sensors, addressing the shortcomings of previous methods reliant on synthetic data. It leverages a diverse array of real inertial motion capture data from different skeleton formats to improve motion diversity and model generalization. The method features two innovative components: a pseudo-velocity regression model for dynamic motion capture with inertial sensors, and a part-based model dividing the body and sensor data into three regions, each focusing on their unique characteristics. The approach demonstrates superior performance over state-of-the-art models across five public datasets, notably reducing pose error by 19% on the DIP-IMU dataset, thus representing a significant improvement in inertial sensor-based human pose estimation. Our code is available at https://github.com/dx118/dynaip.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_Dynamic_Inertial_Poser_DynaIP_Part-Based_Motion_Dynamics_Learning_for_Enhanced_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.02196
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Dynamic_Inertial_Poser_DynaIP_Part-Based_Motion_Dynamics_Learning_for_Enhanced_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Dynamic_Inertial_Poser_DynaIP_Part-Based_Motion_Dynamics_Learning_for_Enhanced_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhang_Dynamic_Inertial_Poser_CVPR_2024_supplemental.pdf
null
Matching 2D Images in 3D: Metric Relative Pose from Metric Correspondences
Axel Barroso-Laguna, Sowmya Munukutla, Victor Adrian Prisacariu, Eric Brachmann
Given two images, we can estimate the relative camera pose between them by establishing image-to-image correspondences. Usually, correspondences are 2D-to-2D, and the pose we estimate is defined only up to scale. Some applications, aiming at instant augmented reality anywhere, require scale-metric pose estimates, and hence they rely on external depth estimators to recover the scale. We present MicKey, a keypoint matching pipeline that is able to predict metric correspondences in 3D camera space. By learning to match 3D coordinates across images, we are able to infer the metric relative pose without depth measurements. Depth measurements are also not required for training, nor are scene reconstructions or image overlap information. MicKey is supervised only by pairs of images and their relative poses. MicKey achieves state-of-the-art performance on the Map-Free Relocalisation benchmark while requiring less supervision than competing approaches.
https://openaccess.thecvf.com/content/CVPR2024/papers/Barroso-Laguna_Matching_2D_Images_in_3D_Metric_Relative_Pose_from_Metric_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.06337
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Barroso-Laguna_Matching_2D_Images_in_3D_Metric_Relative_Pose_from_Metric_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Barroso-Laguna_Matching_2D_Images_in_3D_Metric_Relative_Pose_from_Metric_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Barroso-Laguna_Matching_2D_Images_CVPR_2024_supplemental.pdf
null
Efficient Vision-Language Pre-training by Cluster Masking
Zihao Wei, Zixuan Pan, Andrew Owens
We propose a simple strategy for masking image patches during visual-language contrastive learning that improves the quality of the learned representations and the training speed. During each iteration of training, we randomly mask clusters of visually similar image patches, as measured by their raw pixel intensities. This provides an extra learning signal beyond the contrastive training itself, since it forces the model to predict words for masked visual structures solely from context. It also speeds up training by reducing the amount of data used in each image. We evaluate the effectiveness of our model by pre-training on a number of benchmarks, finding that it outperforms other masking strategies, such as FLIP, on the quality of the learned representation.
https://openaccess.thecvf.com/content/CVPR2024/papers/Wei_Efficient_Vision-Language_Pre-training_by_Cluster_Masking_CVPR_2024_paper.pdf
http://arxiv.org/abs/2405.08815
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Wei_Efficient_Vision-Language_Pre-training_by_Cluster_Masking_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Wei_Efficient_Vision-Language_Pre-training_by_Cluster_Masking_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wei_Efficient_Vision-Language_Pre-training_CVPR_2024_supplemental.pdf
null
GraCo: Granularity-Controllable Interactive Segmentation
Yian Zhao, Kehan Li, Zesen Cheng, Pengchong Qiao, Xiawu Zheng, Rongrong Ji, Chang Liu, Li Yuan, Jie Chen
Interactive Segmentation (IS) segments specific objects or parts in an image according to user input. Current IS pipelines fall into two categories: single-granularity output and multi-granularity output. The latter aims to alleviate the spatial ambiguity present in the former. However, the multi-granularity output pipeline suffers from limited interaction flexibility and produces redundant results. In this work, we introduce Granularity-Controllable Interactive Segmentation (GraCo), a novel approach that allows precise control of prediction granularity by introducing additional parameters to the input. This enhances the customization of the interactive system and eliminates redundancy while resolving ambiguity. Nevertheless, the exorbitant cost of annotating multi-granularity masks and the lack of available datasets with granularity annotations make it difficult for models to acquire the necessary guidance to control output granularity. To address this problem, we design an any-granularity mask generator that exploits the semantic property of the pre-trained IS model to automatically generate abundant mask-granularity pairs without requiring additional manual annotation. Based on these pairs, we propose a granularity-controllable learning strategy that efficiently imparts granularity controllability to the IS model. Extensive experiments on intricate scenarios at the object and part levels demonstrate that GraCo has significant advantages over previous methods. This highlights the potential of GraCo as a flexible annotation tool capable of adapting to diverse segmentation scenarios. Project page: https://zhao-yian.github.io/GraCo.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhao_GraCo_Granularity-Controllable_Interactive_Segmentation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2405.00587
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhao_GraCo_Granularity-Controllable_Interactive_Segmentation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhao_GraCo_Granularity-Controllable_Interactive_Segmentation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhao_GraCo_Granularity-Controllable_Interactive_CVPR_2024_supplemental.zip
null
M3-UDA: A New Benchmark for Unsupervised Domain Adaptive Fetal Cardiac Structure Detection
Bin Pu, Liwen Wang, Jiewen Yang, Guannan He, Xingbo Dong, Shengli Li, Ying Tan, Ming Chen, Zhe Jin, Kenli Li, Xiaomeng Li
The anatomical structure detection of fetal cardiac views is crucial for diagnosing fetal congenital heart disease. In practice, there is a large domain gap between different hospitals' data, such as variable data quality due to differences in acquisition equipment. In addition, accurate annotation information provided by obstetrician experts is always very costly or even unavailable. This study explores the unsupervised domain adaptive fetal cardiac structure detection issue. Existing unsupervised domain adaptive object detection (UDAOD) approaches mainly focus on detecting objects in natural scenes, such as Foggy Cityscapes, where the structural relationships of natural scenes are uncertain. Unlike all previous UDAOD scenarios, we first collected a Fetal Cardiac Structure dataset from two hospital centers, called FCS, and proposed a multi-matching UDA approach (M3-UDA), including Histogram Matching (HM), Sub-structure Matching (SM), and Global-structure Matching (GM), to better transfer the topological knowledge of anatomical structure for UDA detection in medical scenarios. HM mitigates the domain gap between the source and target caused by pixel transformation. SM fuses the different angle information of the sub-structure to obtain the local topological knowledge for bridging the domain gap of the internal sub-structure. GM is designed to align the global topological knowledge of the whole organ from the source and target domain. Extensive experimental results on our collected FCS and CardiacUDA show that M3-UDA outperforms existing UDAOD studies significantly. All datasets and source code are available at https://github.com/xmed-lab/M3-UDA.
https://openaccess.thecvf.com/content/CVPR2024/papers/Pu_M3-UDA_A_New_Benchmark_for_Unsupervised_Domain_Adaptive_Fetal_Cardiac_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Pu_M3-UDA_A_New_Benchmark_for_Unsupervised_Domain_Adaptive_Fetal_Cardiac_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Pu_M3-UDA_A_New_Benchmark_for_Unsupervised_Domain_Adaptive_Fetal_Cardiac_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Pu_M3-UDA_A_New_CVPR_2024_supplemental.pdf
null
GPS-Gaussian: Generalizable Pixel-wise 3D Gaussian Splatting for Real-time Human Novel View Synthesis
Shunyuan Zheng, Boyao Zhou, Ruizhi Shao, Boning Liu, Shengping Zhang, Liqiang Nie, Yebin Liu
We present a new approach, termed GPS-Gaussian, for synthesizing novel views of a character in real time. The proposed method enables 2K-resolution rendering under a sparse-view camera setting. Unlike the original Gaussian Splatting or neural implicit rendering methods that necessitate per-subject optimizations, we introduce Gaussian parameter maps defined on the source views and regress Gaussian Splatting properties directly for instant novel view synthesis, without any fine-tuning or optimization. To this end, we train our Gaussian parameter regression module on a large amount of human scan data, jointly with a depth estimation module that lifts 2D parameter maps to 3D space. The proposed framework is fully differentiable, and experiments on several datasets demonstrate that our method outperforms state-of-the-art methods while achieving an exceptionally fast rendering speed.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zheng_GPS-Gaussian_Generalizable_Pixel-wise_3D_Gaussian_Splatting_for_Real-time_Human_Novel_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zheng_GPS-Gaussian_Generalizable_Pixel-wise_3D_Gaussian_Splatting_for_Real-time_Human_Novel_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zheng_GPS-Gaussian_Generalizable_Pixel-wise_3D_Gaussian_Splatting_for_Real-time_Human_Novel_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zheng_GPS-Gaussian_Generalizable_Pixel-wise_CVPR_2024_supplemental.pdf
null
Chat-UniVi: Unified Visual Representation Empowers Large Language Models with Image and Video Understanding
Peng Jin, Ryuichi Takanobu, Wancai Zhang, Xiaochun Cao, Li Yuan
Large language models have demonstrated impressive universal capabilities across a wide range of open-ended tasks and have extended their utility to encompass multimodal conversations. However, existing methods encounter challenges in effectively handling both image and video understanding, particularly with limited visual tokens. In this work, we introduce Chat-UniVi, a Unified Vision-language model capable of comprehending and engaging in conversations involving images and videos through a unified visual representation. Specifically, we employ a set of dynamic visual tokens to uniformly represent images and videos. This representation framework empowers the model to efficiently utilize a limited number of visual tokens to simultaneously capture the spatial details necessary for images and the comprehensive temporal relationships required for videos. Moreover, we leverage a multi-scale representation, enabling the model to perceive both high-level semantic concepts and low-level visual details. Notably, Chat-UniVi is trained on a mixed dataset containing both images and videos, allowing direct application to tasks involving both mediums without requiring any modifications. Extensive experimental results demonstrate that Chat-UniVi consistently outperforms even existing methods exclusively designed for either images or videos. Code is available at https://github.com/PKU-YuanGroup/Chat-UniVi.
https://openaccess.thecvf.com/content/CVPR2024/papers/Jin_Chat-UniVi_Unified_Visual_Representation_Empowers_Large_Language_Models_with_Image_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Jin_Chat-UniVi_Unified_Visual_Representation_Empowers_Large_Language_Models_with_Image_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Jin_Chat-UniVi_Unified_Visual_Representation_Empowers_Large_Language_Models_with_Image_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Jin_Chat-UniVi_Unified_Visual_CVPR_2024_supplemental.pdf
null
MAGICK: A Large-scale Captioned Dataset from Matting Generated Images using Chroma Keying
Ryan D. Burgert, Brian L. Price, Jason Kuen, Yijun Li, Michael S. Ryoo
We introduce MAGICK, a large-scale dataset of generated objects with high-quality alpha mattes. While image generation methods have produced segmentations, they cannot generate alpha mattes with accurate details in hair, fur, and transparencies. This is likely due to the small size of current alpha matting datasets and the difficulty in obtaining ground-truth alpha. We propose a scalable method for synthesizing images of objects with high-quality alpha that can be used as a ground-truth dataset. A key idea is to generate objects on a single-colored background so that chroma keying approaches can be used to extract the alpha. However, this faces several challenges, including that current text-to-image generation methods cannot create images that can be easily chroma keyed, and that chroma keying is an underconstrained problem that generally requires manual intervention for high-quality results. We address this using a combination of generation and alpha extraction methods. Using our method, we generate a dataset of 150,000 objects with alpha. We show the utility of our dataset by training an alpha-to-RGB generation method that outperforms baselines. Please see our project website at https://ryanndagreat.github.io/MAGICK/.
https://openaccess.thecvf.com/content/CVPR2024/papers/Burgert_MAGICK_A_Large-scale_Captioned_Dataset_from_Matting_Generated_Images_using_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Burgert_MAGICK_A_Large-scale_Captioned_Dataset_from_Matting_Generated_Images_using_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Burgert_MAGICK_A_Large-scale_Captioned_Dataset_from_Matting_Generated_Images_using_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Burgert_MAGICK_A_Large-scale_CVPR_2024_supplemental.pdf
null
Video Super-Resolution Transformer with Masked Inter&Intra-Frame Attention
Xingyu Zhou, Leheng Zhang, Xiaorui Zhao, Keze Wang, Leida Li, Shuhang Gu
Recently, Vision Transformers have achieved great success in recovering missing details in low-resolution sequences, i.e., the video super-resolution (VSR) task. Despite their superiority in VSR accuracy, the heavy computational burden as well as the large memory footprint hinder the deployment of Transformer-based VSR models on constrained devices. In this paper, we address the above issue by proposing a novel feature-level masked processing framework: VSR with Masked Intra- and Inter-frame Attention (MIA-VSR). The core of MIA-VSR is leveraging feature-level temporal continuity between adjacent frames to reduce redundant computations and make more rational use of previously enhanced SR features. Concretely, we propose an intra-frame and inter-frame attention block, which takes the respective roles of past features and input features into consideration and only exploits previously enhanced features to provide supplementary information. In addition, an adaptive block-wise mask prediction module is developed to skip unimportant computations according to feature similarity between adjacent frames. We conduct detailed ablation studies to validate our contributions and compare the proposed method with recent state-of-the-art VSR approaches. The experimental results demonstrate that MIA-VSR improves memory and computation efficiency over state-of-the-art methods without trading off PSNR accuracy. The code is available at https://github.com/LabShuHangGU/MIA-VSR.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhou_Video_Super-Resolution_Transformer_with_Masked_InterIntra-Frame_Attention_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhou_Video_Super-Resolution_Transformer_with_Masked_InterIntra-Frame_Attention_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhou_Video_Super-Resolution_Transformer_with_Masked_InterIntra-Frame_Attention_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhou_Video_Super-Resolution_Transformer_CVPR_2024_supplemental.pdf
null
Token Transformation Matters: Towards Faithful Post-hoc Explanation for Vision Transformer
Junyi Wu, Bin Duan, Weitai Kang, Hao Tang, Yan Yan
While Transformers have rapidly gained popularity in various computer vision applications, post-hoc explanations of their internal mechanisms remain largely unexplored. Vision Transformers extract visual information by representing image regions as transformed tokens and integrating them via attention weights. However, existing post-hoc explanation methods merely consider these attention weights, neglecting crucial information from the transformed tokens, which fails to accurately illustrate the rationales behind the models' predictions. To incorporate the influence of token transformation into interpretation, we propose TokenTM, a novel post-hoc explanation method that utilizes our introduced measurement of token transformation effects. Specifically, we quantify token transformation effects by measuring changes in token lengths and correlations in their directions pre- and post-transformation. Moreover, we develop initialization and aggregation rules to integrate both attention weights and token transformation effects across all layers, capturing holistic token contributions throughout the model. Experimental results on segmentation and perturbation tests demonstrate the superiority of our proposed TokenTM compared to state-of-the-art Vision Transformer explanation methods.
https://openaccess.thecvf.com/content/CVPR2024/papers/Wu_Token_Transformation_Matters_Towards_Faithful_Post-hoc_Explanation_for_Vision_Transformer_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.14552
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Wu_Token_Transformation_Matters_Towards_Faithful_Post-hoc_Explanation_for_Vision_Transformer_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Wu_Token_Transformation_Matters_Towards_Faithful_Post-hoc_Explanation_for_Vision_Transformer_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wu_Token_Transformation_Matters_CVPR_2024_supplemental.pdf
null
Bayesian Differentiable Physics for Cloth Digitalization
Deshan Gong, Ningtao Mao, He Wang
We propose a new method for cloth digitalization. Deviating from existing methods, which learn from data captured under relatively casual settings, we propose to learn from data captured under strictly tested measuring protocols and to find plausible physical parameters of the cloths. However, such data is currently absent, so we first propose a new dataset with accurate cloth measurements. Further, the data size is considerably smaller than those common in current deep learning, due to the nature of the data capture process. To learn from small data, we propose a new Bayesian differentiable cloth model to estimate the complex material heterogeneity of real cloths. It can provide highly accurate digitalization from very limited data samples. Through exhaustive evaluation and comparison, we show our method is accurate in cloth digitalization, efficient in learning from limited data samples, and general in capturing material variations. Code and data are available at https://github.com/realcrane/Bayesian-Differentiable-Physics-for-Cloth-Digitalization
https://openaccess.thecvf.com/content/CVPR2024/papers/Gong_Bayesian_Differentiable_Physics_for_Cloth_Digitalization_CVPR_2024_paper.pdf
http://arxiv.org/abs/2402.17664
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Gong_Bayesian_Differentiable_Physics_for_Cloth_Digitalization_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Gong_Bayesian_Differentiable_Physics_for_Cloth_Digitalization_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Gong_Bayesian_Differentiable_Physics_CVPR_2024_supplemental.pdf
null
G-HOP: Generative Hand-Object Prior for Interaction Reconstruction and Grasp Synthesis
Yufei Ye, Abhinav Gupta, Kris Kitani, Shubham Tulsiani
We propose G-HOP, a denoising-diffusion-based generative prior for hand-object interactions that allows modeling both the 3D object and a human hand, conditioned on the object category. To learn a 3D spatial diffusion model that can capture this joint distribution, we represent the human hand via a skeletal distance field to obtain a representation aligned with the (latent) signed distance field for the object. We show that this hand-object prior can then serve as generic guidance to facilitate other tasks, like reconstruction from interaction clips and human grasp synthesis. We believe that our model, trained by aggregating several diverse real-world interaction datasets spanning 155 categories, represents a first approach that allows jointly generating both hand and object. Our empirical evaluations demonstrate the benefit of this joint prior in video-based reconstruction and human grasp synthesis, outperforming current task-specific baselines.
https://openaccess.thecvf.com/content/CVPR2024/papers/Ye_G-HOP_Generative_Hand-Object_Prior_for_Interaction_Reconstruction_and_Grasp_Synthesis_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Ye_G-HOP_Generative_Hand-Object_Prior_for_Interaction_Reconstruction_and_Grasp_Synthesis_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Ye_G-HOP_Generative_Hand-Object_Prior_for_Interaction_Reconstruction_and_Grasp_Synthesis_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ye_G-HOP_Generative_Hand-Object_CVPR_2024_supplemental.pdf
null
Higher-order Relational Reasoning for Pedestrian Trajectory Prediction
Sungjune Kim, Hyung-gun Chi, Hyerin Lim, Karthik Ramani, Jinkyu Kim, Sangpil Kim
Social relations have substantial impacts on the potential trajectories of each individual. Modeling these dynamics has been a central solution for more precise and accurate trajectory forecasting. However, previous works ignore the importance of `social depth', meaning the influences flowing from different degrees of social relations. In this work, we propose HighGraph, a graph-based pedestrian relational reasoning method that captures the higher-order dynamics of social interactions. First, we construct a collision-aware relation graph based on the agents' observed trajectories. Upon this graph structure, we build our core module, which aggregates agent features from diverse social distances. As a result, the network is able to model complex social relations, thereby yielding more accurate and socially acceptable trajectories. Our HighGraph is a plug-and-play module that can be easily applied to any current trajectory predictor. Extensive experiments on the ETH/UCY and SDD datasets demonstrate that HighGraph noticeably improves previous state-of-the-art baselines, both quantitatively and qualitatively.
https://openaccess.thecvf.com/content/CVPR2024/papers/Kim_Higher-order_Relational_Reasoning_for_Pedestrian_Trajectory_Prediction_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Kim_Higher-order_Relational_Reasoning_for_Pedestrian_Trajectory_Prediction_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Kim_Higher-order_Relational_Reasoning_for_Pedestrian_Trajectory_Prediction_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Kim_Higher-order_Relational_Reasoning_CVPR_2024_supplemental.pdf
null
SurroundSDF: Implicit 3D Scene Understanding Based on Signed Distance Field
Lizhe Liu, Bohua Wang, Hongwei Xie, Daqi Liu, Li Liu, Zhiqiang Tian, Kuiyuan Yang, Bing Wang
Vision-centric 3D environment understanding is both vital and challenging for autonomous driving systems. Recently, object-free methods have attracted considerable attention. Such methods perceive the world by predicting the semantics of discrete voxel grids but fail to construct continuous and accurate obstacle surfaces. To this end, in this paper we propose SurroundSDF to implicitly predict the signed distance field (SDF) and semantic field for continuous perception from surround images. Specifically, we introduce a query-based approach and utilize an SDF constrained by the Eikonal formulation to accurately describe the surfaces of obstacles. Furthermore, considering the absence of precise SDF ground truth, we propose a novel weakly supervised paradigm for SDF, referred to as the Sandwich Eikonal formulation, which emphasizes applying correct and dense constraints on both sides of the surface, thereby enhancing the perceptual accuracy of the surface. Experiments suggest that our method achieves SOTA for both occupancy prediction and 3D scene reconstruction tasks on the nuScenes dataset.
https://openaccess.thecvf.com/content/CVPR2024/papers/Liu_SurroundSDF_Implicit_3D_Scene_Understanding_Based_on_Signed_Distance_Field_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.14366
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Liu_SurroundSDF_Implicit_3D_Scene_Understanding_Based_on_Signed_Distance_Field_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Liu_SurroundSDF_Implicit_3D_Scene_Understanding_Based_on_Signed_Distance_Field_CVPR_2024_paper.html
CVPR 2024
null
null
Contrastive Denoising Score for Text-guided Latent Diffusion Image Editing
Hyelin Nam, Gihyun Kwon, Geon Yeong Park, Jong Chul Ye
With the remarkable advent of text-to-image diffusion models, image editing methods have become more diverse and continue to evolve. A promising recent approach in this realm is Delta Denoising Score (DDS), an image editing technique based on the Score Distillation Sampling (SDS) framework that leverages the rich generative prior of text-to-image diffusion models. However, relying solely on the difference between scoring functions is insufficient for preserving specific structural elements from the original image, a crucial aspect of image editing. To address this, we present an embarrassingly simple yet very powerful modification of DDS, called Contrastive Denoising Score (CDS), for latent diffusion models (LDM). Inspired by the similarities and differences between DDS and contrastive learning for unpaired image-to-image translation (CUT), we introduce a straightforward approach using the CUT loss within the DDS framework. Rather than employing auxiliary networks as in the original CUT approach, we leverage the intermediate features of the LDM, specifically those from the self-attention layers, which possess rich spatial information. Our approach enables zero-shot image-to-image translation and neural radiance field (NeRF) editing, achieving structural correspondence between the input and output while maintaining content controllability. Qualitative results and comparisons demonstrate the effectiveness of our proposed method.
https://openaccess.thecvf.com/content/CVPR2024/papers/Nam_Contrastive_Denoising_Score_for_Text-guided_Latent_Diffusion_Image_Editing_CVPR_2024_paper.pdf
http://arxiv.org/abs/2311.18608
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Nam_Contrastive_Denoising_Score_for_Text-guided_Latent_Diffusion_Image_Editing_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Nam_Contrastive_Denoising_Score_for_Text-guided_Latent_Diffusion_Image_Editing_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Nam_Contrastive_Denoising_Score_CVPR_2024_supplemental.pdf
null
Neural Point Cloud Diffusion for Disentangled 3D Shape and Appearance Generation
Philipp Schröppel, Christopher Wewer, Jan Eric Lenssen, Eddy Ilg, Thomas Brox
Controllable generation of 3D assets is important for many practical applications, like content creation in movies, games, and engineering, as well as in AR/VR. Recently, diffusion models have shown remarkable results in the generation quality of 3D objects. However, none of the existing models enable disentangled generation to control the shape and appearance separately. For the first time, we present a suitable representation for 3D diffusion models to enable such disentanglement by introducing a hybrid point cloud and neural radiance field approach. We model a diffusion process over point positions jointly with a high-dimensional feature space for a local density and radiance decoder. While the point positions represent the coarse shape of the object, the point features allow modeling the geometry and appearance details. This disentanglement enables us to sample both independently and therefore to control both separately. Our approach sets a new state of the art in generation compared to previous disentanglement-capable methods, reducing FID scores by 30-90%, and is on par with other non-disentanglement-capable state-of-the-art methods.
https://openaccess.thecvf.com/content/CVPR2024/papers/Schroppel_Neural_Point_Cloud_Diffusion_for_Disentangled_3D_Shape_and_Appearance_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Schroppel_Neural_Point_Cloud_Diffusion_for_Disentangled_3D_Shape_and_Appearance_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Schroppel_Neural_Point_Cloud_Diffusion_for_Disentangled_3D_Shape_and_Appearance_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Schroppel_Neural_Point_Cloud_CVPR_2024_supplemental.pdf
null
RealNet: A Feature Selection Network with Realistic Synthetic Anomaly for Anomaly Detection
Ximiao Zhang, Min Xu, Xiuzhuang Zhou
Self-supervised feature reconstruction methods have shown promising advances in industrial image anomaly detection and localization. Despite this progress, these methods still face challenges in synthesizing realistic and diverse anomaly samples, as well as addressing the feature redundancy and pre-training bias of pre-trained features. In this work, we introduce RealNet, a feature reconstruction network with realistic synthetic anomaly and adaptive feature selection. It incorporates three key innovations: First, we propose Strength-controllable Diffusion Anomaly Synthesis (SDAS), a diffusion process-based synthesis strategy capable of generating samples with varying anomaly strengths that mimic the distribution of real anomalous samples. Second, we develop Anomaly-aware Feature Selection (AFS), a method for selecting representative and discriminative pre-trained feature subsets to improve anomaly detection performance while controlling computational costs. Third, we introduce Reconstruction Residuals Selection (RRS), a strategy that adaptively selects discriminative residuals for comprehensive identification of anomalous regions across multiple levels of granularity. We assess RealNet on four benchmark datasets, and our results demonstrate significant improvements in both Image AUROC and Pixel AUROC compared to the current state-of-the-art methods. The code, data, and models are available at https://github.com/cnulab/RealNet.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_RealNet_A_Feature_Selection_Network_with_Realistic_Synthetic_Anomaly_for_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.05897
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_RealNet_A_Feature_Selection_Network_with_Realistic_Synthetic_Anomaly_for_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_RealNet_A_Feature_Selection_Network_with_Realistic_Synthetic_Anomaly_for_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhang_RealNet_A_Feature_CVPR_2024_supplemental.pdf
null