Fields: title (string), authors (string), abstract (string), pdf (string), arXiv (string), bibtex (string), url (string), detail_url (string), tags (string), supp (string)
Outdoor Scene Extrapolation with Hierarchical Generative Cellular Automata
Dongsu Zhang, Francis Williams, Zan Gojcic, Karsten Kreis, Sanja Fidler, Young Min Kim, Amlan Kar
We aim to generate fine-grained 3D geometry from large-scale sparse LiDAR scans abundantly captured by autonomous vehicles (AV). Contrary to prior work on AV scene completion, we aim to extrapolate fine geometry from unlabeled LiDAR scans and beyond their spatial limits, taking a step towards generating realistic, high-resolution, simulation-ready 3D street environments. We propose hierarchical Generative Cellular Automata (hGCA), a spatially scalable conditional 3D generative model which grows geometry recursively with local kernels, following GCAs, in a coarse-to-fine manner, equipped with a lightweight planner to induce global consistency. Experiments on synthetic scenes show that hGCA generates plausible scene geometry with higher fidelity and completeness compared to state-of-the-art baselines. Our model generalizes strongly from sim-to-real, qualitatively outperforming baselines on the Waymo-open dataset. We also show anecdotal evidence of the ability to create novel objects from real-world geometric cues, even when trained on limited synthetic content.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_Outdoor_Scene_Extrapolation_with_Hierarchical_Generative_Cellular_Automata_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Outdoor_Scene_Extrapolation_with_Hierarchical_Generative_Cellular_Automata_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Outdoor_Scene_Extrapolation_with_Hierarchical_Generative_Cellular_Automata_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhang_Outdoor_Scene_Extrapolation_CVPR_2024_supplemental.pdf
null
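The recursive growth described in the hGCA abstract can be caricatured with a toy cellular automaton: starting from a seed voxel, each step lets occupied cells stochastically turn on their neighbors. This is only a minimal sketch of the generic GCA idea; hGCA's actual kernels are learned sparse convolutions with a hierarchical planner, none of which appear here.

```python
import numpy as np

def gca_growth_step(occ, rng, p_grow=0.5):
    """One growth step of a toy generative cellular automaton:
    each occupied voxel stochastically occupies its 6-connected
    neighbors.  np.roll wraps at the borders, which is fine for
    a toy grid."""
    grown = occ.copy()
    for axis in range(occ.ndim):
        for shift in (-1, 1):
            neighbor = np.roll(occ, shift, axis=axis)
            grown |= neighbor & (rng.random(occ.shape) < p_grow)
    return grown

rng = np.random.default_rng(0)
occ = np.zeros((16, 16, 16), dtype=bool)
occ[8, 8, 8] = True                  # a single seed voxel
for _ in range(4):                   # geometry grows outward each step
    occ = gca_growth_step(occ, rng)
```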
Instruct 4D-to-4D: Editing 4D Scenes as Pseudo-3D Scenes Using 2D Diffusion
Linzhan Mou, Jun-Kun Chen, Yu-Xiong Wang
This paper proposes Instruct 4D-to-4D, which achieves 4D awareness and spatial-temporal consistency for 2D diffusion models to generate high-quality, instruction-guided dynamic scene editing results. Traditional applications of 2D diffusion models in dynamic scene editing often result in inconsistency, primarily due to their inherent frame-by-frame editing methodology. Addressing the complexities of extending instruction-guided editing to 4D, our key insight is to treat a 4D scene as a pseudo-3D scene, decoupled into two sub-problems: achieving temporal consistency in video editing and applying these edits to the pseudo-3D scene. Following this, we first enhance the Instruct-Pix2Pix (IP2P) model with an anchor-aware attention module for batch processing and consistent editing. Additionally, we integrate optical flow-guided appearance propagation in a sliding-window fashion for more precise frame-to-frame editing, and incorporate depth-based projection to manage the extensive data of pseudo-3D scenes, followed by iterative editing to achieve convergence. We extensively evaluate our approach in various scenes and editing instructions, and demonstrate that it achieves spatially and temporally consistent editing results with significantly enhanced detail and sharpness over the prior art. Notably, Instruct 4D-to-4D is general and applicable to both monocular and challenging multi-camera scenes.
https://openaccess.thecvf.com/content/CVPR2024/papers/Mou_Instruct_4D-to-4D_Editing_4D_Scenes_as_Pseudo-3D_Scenes_Using_2D_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Mou_Instruct_4D-to-4D_Editing_4D_Scenes_as_Pseudo-3D_Scenes_Using_2D_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Mou_Instruct_4D-to-4D_Editing_4D_Scenes_as_Pseudo-3D_Scenes_Using_2D_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Mou_Instruct_4D-to-4D_Editing_CVPR_2024_supplemental.pdf
null
VAREN: Very Accurate and Realistic Equine Network
Silvia Zuffi, Ylva Mellbin, Ci Li, Markus Hoeschle, Hedvig Kjellström, Senya Polikovsky, Elin Hernlund, Michael J. Black
Data-driven three-dimensional parametric shape models of the human body have gained enormous popularity, both for the analysis of visual data and for the generation of synthetic humans. Following a similar approach for animals does not scale to the multitude of existing animal species, not to mention the difficulty of accessing subjects to scan in 3D. However, we argue that for domestic species of great importance, like the horse, it is a highly valuable investment to put effort into gathering a large dataset of real 3D scans and learn a realistic 3D articulated shape model. We introduce VAREN, a novel 3D articulated parametric shape model learned from 3D scans of many real horses. VAREN bridges synthesis and analysis tasks, as the generated model instances have unprecedented realism, while being able to represent horses of different sizes and shapes. Differently from previous body models, VAREN has two resolutions, an anatomical skeleton, and interpretable, learned, pose-dependent deformations, which are related to the body muscles. We show with experiments that this formulation has superior performance with respect to previous strategies for modeling pose-dependent deformations in the human body case, while also being more compact and allowing an analysis of the relationship between articulation and muscle deformation during articulated motion.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zuffi_VAREN_Very_Accurate_and_Realistic_Equine_Network_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zuffi_VAREN_Very_Accurate_and_Realistic_Equine_Network_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zuffi_VAREN_Very_Accurate_and_Realistic_Equine_Network_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zuffi_VAREN_Very_Accurate_CVPR_2024_supplemental.mp4
null
Photo-SLAM: Real-time Simultaneous Localization and Photorealistic Mapping for Monocular, Stereo, and RGB-D Cameras
Huajian Huang, Longwei Li, Hui Cheng, Sai-Kit Yeung
The integration of neural rendering and the SLAM system recently showed promising results in joint localization and photorealistic view reconstruction. However, existing methods fully relying on implicit representations are so resource-hungry that they cannot run on portable devices, which deviates from the original intention of SLAM. In this paper, we present Photo-SLAM, a novel SLAM framework with a hyper primitives map. Specifically, we simultaneously exploit explicit geometric features for localization and learn implicit photometric features to represent the texture information of the observed environment. In addition to actively densifying hyper primitives based on geometric features, we further introduce a Gaussian-Pyramid-based training method to progressively learn multi-level features, enhancing photorealistic mapping performance. Extensive experiments with monocular, stereo, and RGB-D datasets prove that our proposed system, Photo-SLAM, significantly outperforms current state-of-the-art SLAM systems for online photorealistic mapping, e.g., PSNR is 30% higher and rendering speed is hundreds of times faster on the Replica dataset. Moreover, Photo-SLAM can run at real-time speed on an embedded platform such as the Jetson AGX Orin, showing the potential for robotics applications. Project page and code: https://huajianup.github.io/research/Photo-SLAM/.
https://openaccess.thecvf.com/content/CVPR2024/papers/Huang_Photo-SLAM_Real-time_Simultaneous_Localization_and_Photorealistic_Mapping_for_Monocular_Stereo_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Huang_Photo-SLAM_Real-time_Simultaneous_Localization_and_Photorealistic_Mapping_for_Monocular_Stereo_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Huang_Photo-SLAM_Real-time_Simultaneous_Localization_and_Photorealistic_Mapping_for_Monocular_Stereo_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Huang_Photo-SLAM_Real-time_Simultaneous_CVPR_2024_supplemental.pdf
null
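The Gaussian-Pyramid-based training mentioned in the Photo-SLAM abstract builds on standard blur-then-subsample image pyramids. A minimal sketch, assuming a separable binomial [1 4 6 4 1]/16 kernel and wrap-around borders for brevity (the system's actual kernel and boundary handling may differ):

```python
import numpy as np

def gaussian_blur1d(img, axis):
    """Separable binomial blur along one axis (wrap-around borders)."""
    k = np.array([1, 4, 6, 4, 1], dtype=np.float64) / 16.0
    out = np.zeros_like(img, dtype=np.float64)
    for w, s in zip(k, range(-2, 3)):
        out += w * np.roll(img, s, axis=axis)
    return out

def gaussian_pyramid(img, levels):
    """Blur then subsample by 2 at each level; finest level first."""
    pyr = [img.astype(np.float64)]
    for _ in range(levels - 1):
        blurred = gaussian_blur1d(gaussian_blur1d(pyr[-1], 0), 1)
        pyr.append(blurred[::2, ::2])
    return pyr

img = np.random.default_rng(1).random((64, 64))
pyr = gaussian_pyramid(img, 3)   # coarse levels supervise early training
```

Progressive training would fit the map against `pyr[-1]` first, then move to finer levels.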
SD-DiT: Unleashing the Power of Self-supervised Discrimination in Diffusion Transformer
Rui Zhu, Yingwei Pan, Yehao Li, Ting Yao, Zhenglong Sun, Tao Mei, Chang Wen Chen
Diffusion Transformer (DiT) has emerged as the new trend of generative diffusion models for image generation. In view of the extremely slow convergence of typical DiT, recent breakthroughs have been driven by mask strategies that significantly improve the training efficiency of DiT with additional intra-image contextual learning. Despite this progress, the mask strategy still suffers from two inherent limitations: (a) training-inference discrepancy and (b) fuzzy relations between mask reconstruction and the generative diffusion process, resulting in sub-optimal training of DiT. In this work, we address these limitations by unleashing self-supervised discrimination knowledge to boost DiT training. Technically, we frame our DiT in a teacher-student manner. The teacher-student discriminative pairs are built on the diffusion noises along the same Probability Flow Ordinary Differential Equation (PF-ODE). Instead of applying a mask reconstruction loss over both the DiT encoder and decoder, we decouple the DiT encoder and decoder to separately tackle discriminative and generative objectives. In particular, by encoding discriminative pairs with the student and teacher DiT encoders, a new discriminative loss is designed to encourage inter-image alignment in the self-supervised embedding space. After that, student samples are fed into the student DiT decoder to perform the typical generative diffusion task. Extensive experiments are conducted on the ImageNet dataset, and our method achieves a competitive balance between training cost and generative capacity.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhu_SD-DiT_Unleashing_the_Power_of_Self-supervised_Discrimination_in_Diffusion_Transformer_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhu_SD-DiT_Unleashing_the_Power_of_Self-supervised_Discrimination_in_Diffusion_Transformer_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhu_SD-DiT_Unleashing_the_Power_of_Self-supervised_Discrimination_in_Diffusion_Transformer_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhu_SD-DiT_Unleashing_the_CVPR_2024_supplemental.pdf
null
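The teacher-student framing above is commonly realized in self-distillation by maintaining the teacher as an exponential moving average (EMA) of the student; whether SD-DiT uses exactly this update is an assumption of the sketch below.

```python
def ema_update(teacher, student, momentum=0.996):
    """In-place EMA update: pull each teacher parameter a small
    step toward the corresponding student parameter."""
    for name, s in student.items():
        teacher[name] = momentum * teacher[name] + (1.0 - momentum) * s
    return teacher

# Toy one-parameter "models" to show the update dynamics.
teacher = {"w": 0.0}
student = {"w": 1.0}
for _ in range(10):
    ema_update(teacher, student)
# After n steps, teacher["w"] = 1 - momentum**n when student is fixed.
```

The high momentum makes the teacher a slowly varying, stabilized target for the discriminative loss.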
Multi-modal Instruction Tuned LLMs with Fine-grained Visual Perception
Junwen He, Yifan Wang, Lijun Wang, Huchuan Lu, Jun-Yan He, Jin-Peng Lan, Bin Luo, Xuansong Xie
Multimodal Large Language Models (MLLMs) leverage Large Language Models as a cognitive framework for diverse visual-language tasks. Recent efforts have been made to equip MLLMs with visual perceiving and grounding capabilities. However, there still remains a gap in providing fine-grained pixel-level perceptions and extending interactions beyond text-specific inputs. In this work, we propose AnyRef, a general MLLM that can generate pixel-wise object perceptions and natural language descriptions from multi-modality references, such as texts, boxes, images, or audio. This innovation empowers users with greater flexibility to engage with the model beyond textual and regional prompts, without modality-specific designs. Through our proposed refocusing mechanism, the generated grounding output is guided to better focus on the referenced object, implicitly incorporating additional pixel-level supervision. This simple modification utilizes attention scores generated during the inference of the LLM, eliminating the need for extra computations while exhibiting performance enhancements in both grounding masks and referring expressions. With only publicly available training data, our model achieves state-of-the-art results across multiple benchmarks, including diverse-modality referring segmentation and region-level referring expression generation. Code and models are available at https://github.com/jwh97nn/AnyRef
https://openaccess.thecvf.com/content/CVPR2024/papers/He_Multi-modal_Instruction_Tuned_LLMs_with_Fine-grained_Visual_Perception_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.02969
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/He_Multi-modal_Instruction_Tuned_LLMs_with_Fine-grained_Visual_Perception_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/He_Multi-modal_Instruction_Tuned_LLMs_with_Fine-grained_Visual_Perception_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/He_Multi-modal_Instruction_Tuned_CVPR_2024_supplemental.pdf
null
ProMotion: Prototypes As Motion Learners
Yawen Lu, Dongfang Liu, Qifan Wang, Cheng Han, Yiming Cui, Zhiwen Cao, Xueling Zhang, Yingjie Victor Chen, Heng Fan
In this work, we introduce ProMotion, a unified prototypical transformer-based framework engineered to model fundamental motion tasks. ProMotion offers a range of compelling attributes that set it apart from current task-specific paradigms. 1. We adopt a prototypical perspective, establishing a unified paradigm that harmonizes disparate motion learning approaches. This novel paradigm streamlines the architectural design, enabling the simultaneous assimilation of diverse motion information. 2. We capitalize on a dual mechanism involving a feature denoiser and a prototypical learner to decipher the intricacies of motion. This approach effectively circumvents the pitfalls of ambiguity in pixel-wise feature matching, significantly bolstering the robustness of motion representation. We demonstrate a profound degree of transferability across distinct motion patterns. This inherent versatility reverberates robustly across a comprehensive spectrum of both 2D and 3D downstream tasks. Empirical results demonstrate that ProMotion outperforms various well-known specialized architectures, achieving 0.54 and 0.054 Abs Rel error on the Sintel and KITTI depth datasets, 1.04 and 2.01 average endpoint error on the clean and final passes of the Sintel flow benchmark, and 4.30 F1-all error on the KITTI flow benchmark. We hope our work can catalyze a paradigm shift towards universal models in computer vision.
https://openaccess.thecvf.com/content/CVPR2024/papers/Lu_ProMotion_Prototypes_As_Motion_Learners_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Lu_ProMotion_Prototypes_As_Motion_Learners_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Lu_ProMotion_Prototypes_As_Motion_Learners_CVPR_2024_paper.html
CVPR 2024
null
null
SpatialTracker: Tracking Any 2D Pixels in 3D Space
Yuxi Xiao, Qianqian Wang, Shangzhan Zhang, Nan Xue, Sida Peng, Yujun Shen, Xiaowei Zhou
Recovering dense and long-range pixel motion in videos is a challenging problem. Part of the difficulty arises from the 3D-to-2D projection process, leading to occlusions and discontinuities in the 2D motion domain. While 2D motion can be intricate, we posit that the underlying 3D motion can often be simple and low-dimensional. In this work, we propose to estimate point trajectories in 3D space to mitigate the issues caused by image projection. Our method, named SpatialTracker, lifts 2D pixels to 3D using monocular depth estimators, represents the 3D content of each frame efficiently using a triplane representation, and performs iterative updates using a transformer to estimate 3D trajectories. Tracking in 3D allows us to leverage as-rigid-as-possible (ARAP) constraints while simultaneously learning a rigidity embedding that clusters pixels into different rigid parts. Extensive evaluation shows that our approach achieves state-of-the-art tracking performance both qualitatively and quantitatively, particularly in challenging scenarios such as out-of-plane rotation. Our project page is available at https://henry123-boy.github.io/SpaTracker/.
https://openaccess.thecvf.com/content/CVPR2024/papers/Xiao_SpatialTracker_Tracking_Any_2D_Pixels_in_3D_Space_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.04319
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Xiao_SpatialTracker_Tracking_Any_2D_Pixels_in_3D_Space_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Xiao_SpatialTracker_Tracking_Any_2D_Pixels_in_3D_Space_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Xiao_SpatialTracker_Tracking_Any_CVPR_2024_supplemental.mp4
null
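Lifting 2D pixels to 3D with a monocular depth estimate, as the SpatialTracker abstract describes, is standard pinhole back-projection. A self-contained sketch (the intrinsics values are illustrative, not the paper's):

```python
import numpy as np

def backproject(uv, depth, K):
    """Lift pixel coordinates (N, 2) with per-pixel depth (N,) to
    3D camera-frame points (N, 3) under a pinhole model:
    X = (u - cx) / fx * d,  Y = (v - cy) / fy * d,  Z = d."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    x = (uv[:, 0] - cx) / fx * depth
    y = (uv[:, 1] - cy) / fy * depth
    return np.stack([x, y, depth], axis=1)

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
uv = np.array([[320.0, 240.0],    # principal point -> on optical axis
               [420.0, 240.0]])
pts = backproject(uv, np.array([2.0, 2.0]), K)
```

Trajectories estimated on such lifted points can then be re-projected with `K @ pts.T` for 2D supervision.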
LaMPilot: An Open Benchmark Dataset for Autonomous Driving with Language Model Programs
Yunsheng Ma, Can Cui, Xu Cao, Wenqian Ye, Peiran Liu, Juanwu Lu, Amr Abdelraouf, Rohit Gupta, Kyungtae Han, Aniket Bera, James M. Rehg, Ziran Wang
Autonomous driving (AD) has made significant strides in recent years. However, existing frameworks struggle to interpret and execute spontaneous user instructions, such as "overtake the car ahead." Large Language Models (LLMs) have demonstrated impressive reasoning capabilities, showing potential to bridge this gap. In this paper, we present LaMPilot, a novel framework that integrates LLMs into AD systems, enabling them to follow user instructions by generating code that leverages established functional primitives. We also introduce LaMPilot-Bench, the first benchmark dataset specifically designed to quantitatively evaluate the efficacy of language model programs in AD. Adopting the LaMPilot framework, we conduct extensive experiments to assess the performance of off-the-shelf LLMs on LaMPilot-Bench. Our results demonstrate the potential of LLMs in handling diverse driving scenarios and following user instructions in driving. To facilitate further research in this area, we release our code and data at GitHub.com/PurdueDigitalTwin/LaMPilot.
https://openaccess.thecvf.com/content/CVPR2024/papers/Ma_LaMPilot_An_Open_Benchmark_Dataset_for_Autonomous_Driving_with_Language_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.04372
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Ma_LaMPilot_An_Open_Benchmark_Dataset_for_Autonomous_Driving_with_Language_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Ma_LaMPilot_An_Open_Benchmark_Dataset_for_Autonomous_Driving_with_Language_CVPR_2024_paper.html
CVPR 2024
null
null
MedBN: Robust Test-Time Adaptation against Malicious Test Samples
Hyejin Park, Jeongyeon Hwang, Sunung Mun, Sangdon Park, Jungseul Ok
Test-time adaptation (TTA) has emerged as a promising solution to address performance decay due to unforeseen distribution shifts between training and test data. While recent TTA methods excel in adapting to test data variations, such adaptability exposes a model to vulnerability against malicious examples, an aspect that has received limited attention. Previous studies have uncovered security vulnerabilities within TTA even when a small proportion of the test batch is maliciously manipulated. In response to this emerging threat, we propose median batch normalization (MedBN), leveraging the robustness of the median for statistics estimation within the batch normalization layer during test-time inference. Our method is algorithm-agnostic, thus allowing seamless integration with existing TTA frameworks. Our experimental results on benchmark datasets, including CIFAR10-C, CIFAR100-C, and ImageNet-C, consistently demonstrate that MedBN outperforms existing approaches in maintaining robust performance across different attack scenarios, encompassing both instant and cumulative attacks. Through extensive experiments, we show that our approach sustains performance even in the absence of attacks, achieving a practical balance between robustness and performance.
https://openaccess.thecvf.com/content/CVPR2024/papers/Park_MedBN_Robust_Test-Time_Adaptation_against_Malicious_Test_Samples_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.19326
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Park_MedBN_Robust_Test-Time_Adaptation_against_Malicious_Test_Samples_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Park_MedBN_Robust_Test-Time_Adaptation_against_Malicious_Test_Samples_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Park_MedBN_Robust_Test-Time_CVPR_2024_supplemental.pdf
null
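The core idea of MedBN, substituting the median for the mean when estimating batch statistics, can be sketched as follows. The median-absolute-deviation (MAD) scale is this sketch's assumption, not necessarily the paper's exact variance estimator; the point is that a single manipulated sample barely moves median-based statistics while it drags the mean.

```python
import numpy as np

def med_bn(x, eps=1e-5):
    """Normalize a batch (N, C) with median-based statistics:
    center by the per-feature batch median, scale by a MAD-derived
    robust standard deviation."""
    med = np.median(x, axis=0, keepdims=True)
    mad = np.median(np.abs(x - med), axis=0, keepdims=True)
    sigma = 1.4826 * mad  # consistent with std under Gaussian data
    return (x - med) / (sigma + eps)

rng = np.random.default_rng(0)
clean = rng.normal(size=(64, 8))
poisoned = clean.copy()
poisoned[0] += 1000.0  # one maliciously manipulated test sample
normalized = med_bn(poisoned)
```

With standard (mean/variance) statistics the same single sample would shift every feature's mean by 1000/64.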
Unsupervised Gaze Representation Learning from Multi-view Face Images
Yiwei Bao, Feng Lu
Annotating gaze is an expensive and time-consuming endeavor, requiring costly eye-trackers or complex geometric calibration procedures. Although some eye-based unsupervised gaze representation learning methods have been proposed, the quality of the gaze representation extracted by these methods degrades severely when the head pose is large. In this paper, we present the Multi-View Dual-Encoder (MV-DE), a framework designed to learn gaze representations from unlabeled multi-view face images. Through the proposed Dual-Encoder architecture and the multi-view gaze representation swapping strategy, the MV-DE successfully disentangles gaze from general facial information and derives gaze representations closely tied to the subject's eyeball rotation, without gaze labels. Experimental results illustrate that the gaze representations learned by the MV-DE can be used in downstream tasks, including gaze estimation and redirection. Gaze estimation results indicate that the proposed MV-DE displays notably higher robustness to uncontrolled head movements when compared to state-of-the-art (SOTA) unsupervised learning methods.
https://openaccess.thecvf.com/content/CVPR2024/papers/Bao_Unsupervised_Gaze_Representation_Learning_from_Multi-view_Face_Images_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Bao_Unsupervised_Gaze_Representation_Learning_from_Multi-view_Face_Images_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Bao_Unsupervised_Gaze_Representation_Learning_from_Multi-view_Face_Images_CVPR_2024_paper.html
CVPR 2024
null
null
FairDeDup: Detecting and Mitigating Vision-Language Fairness Disparities in Semantic Dataset Deduplication
Eric Slyman, Stefan Lee, Scott Cohen, Kushal Kafle
Recent dataset deduplication techniques have demonstrated that content-aware dataset pruning can dramatically reduce the cost of training Vision-Language Pretrained (VLP) models without significant performance losses compared to training on the original dataset. These results have been based on pruning commonly used image-caption datasets collected from the web -- datasets that are known to harbor harmful social biases that may then be codified in trained models. In this work, we evaluate how deduplication affects the prevalence of these biases in the resulting trained models and introduce an easy-to-implement modification to the recent SemDeDup algorithm that can reduce the negative effects that we observe. When examining CLIP-style models trained on deduplicated variants of LAION-400M, we find that our proposed FairDeDup algorithm consistently leads to improved fairness metrics over SemDeDup on the FairFace and FACET datasets while maintaining zero-shot performance on CLIP benchmarks.
https://openaccess.thecvf.com/content/CVPR2024/papers/Slyman_FairDeDup_Detecting_and_Mitigating_Vision-Language_Fairness_Disparities_in_Semantic_Dataset_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.16123
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Slyman_FairDeDup_Detecting_and_Mitigating_Vision-Language_Fairness_Disparities_in_Semantic_Dataset_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Slyman_FairDeDup_Detecting_and_Mitigating_Vision-Language_Fairness_Disparities_in_Semantic_Dataset_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Slyman_FairDeDup_Detecting_and_CVPR_2024_supplemental.pdf
null
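SemDeDup-style semantic deduplication, which FairDeDup modifies, can be sketched as pruning near-duplicate embeddings within clusters. The greedy keep/drop criterion below is a simplification, and FairDeDup's fairness-aware choice of *which* duplicate to keep is not modeled here.

```python
import numpy as np

def dedup(embeddings, labels, threshold=0.95):
    """Within each cluster, greedily keep an example only if its
    cosine similarity to every already-kept example in the cluster
    is below the threshold."""
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    keep = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        kept = []
        for i in idx:
            if all(unit[i] @ unit[j] < threshold for j in kept):
                kept.append(i)
        keep.extend(kept)
    return sorted(keep)

# Two clusters, each containing a near-duplicate pair.
emb = np.array([[1.0, 0.0], [0.999, 0.01], [0.0, 1.0], [0.1, 1.0]])
labels = np.array([0, 0, 1, 1])
kept = dedup(emb, labels, threshold=0.99)
```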
CrossMAE: Cross-Modality Masked Autoencoders for Region-Aware Audio-Visual Pre-Training
null
null
null
null
null
https://openaccess.thecvf.com/content/CVPR2024/html/Guo_CrossMAE_Cross-Modality_Masked_Autoencoders_for_Region-Aware_Audio-Visual_Pre-Training_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Guo_CrossMAE_Cross-Modality_Masked_Autoencoders_for_Region-Aware_Audio-Visual_Pre-Training_CVPR_2024_paper.html
CVPR 2024
null
null
Osprey: Pixel Understanding with Visual Instruction Tuning
Yuqian Yuan, Wentong Li, Jian Liu, Dongqi Tang, Xinjie Luo, Chi Qin, Lei Zhang, Jianke Zhu
Multimodal large language models (MLLMs) have recently achieved impressive general-purpose vision-language capabilities through visual instruction tuning. However, current MLLMs primarily focus on image-level or box-level understanding, falling short of achieving fine-grained vision-language alignment at the pixel level. Besides, the lack of mask-based instruction data limits their advancement. In this paper, we propose Osprey, a mask-text instruction tuning approach that extends MLLMs by incorporating fine-grained mask regions into language instructions, aiming to achieve pixel-wise visual understanding. To achieve this goal, we first meticulously curate a mask-based region-text dataset with 724K samples, and then design a vision-language model by injecting pixel-level representations into an LLM. Specifically, Osprey adopts a convolutional CLIP backbone as the vision encoder and employs a mask-aware visual extractor to extract precise visual mask features from high-resolution input. Experimental results demonstrate Osprey's superiority in various region understanding tasks, showcasing its new capability for pixel-level instruction tuning. In particular, Osprey can be integrated with the Segment Anything Model (SAM) seamlessly to obtain multi-granularity semantics. The source code, dataset, and demo can be found at https://github.com/CircleRadon/Osprey.
https://openaccess.thecvf.com/content/CVPR2024/papers/Yuan_Osprey_Pixel_Understanding_with_Visual_Instruction_Tuning_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.10032
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Yuan_Osprey_Pixel_Understanding_with_Visual_Instruction_Tuning_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Yuan_Osprey_Pixel_Understanding_with_Visual_Instruction_Tuning_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yuan_Osprey_Pixel_Understanding_CVPR_2024_supplemental.pdf
null
Modality-agnostic Domain Generalizable Medical Image Segmentation by Multi-Frequency in Multi-Scale Attention
Ju-Hyeon Nam, Nur Suriza Syazwany, Su Jung Kim, Sang-Chul Lee
Generalizability in deep neural networks plays a pivotal role in medical image segmentation. However, deep learning-based medical image analyses tend to overlook the importance of frequency variance, which is a critical element for achieving a model that is both modality-agnostic and domain-generalizable. Additionally, various models fail to account for the potential information loss that can arise from multi-task learning under deep supervision, a factor that can impair the model's representation ability. To address these challenges, we propose a Modality-agnostic Domain Generalizable Network (MADGNet) for medical image segmentation, which comprises two key components: a Multi-Frequency in Multi-Scale Attention (MFMSA) block and an Ensemble Sub-Decoding Module (E-SDM). The MFMSA block refines the process of spatial feature extraction, particularly in capturing boundary features, by incorporating multi-frequency and multi-scale features, thereby offering informative cues for tissue outlines and anatomical structures. Moreover, we propose E-SDM to mitigate information loss in multi-task learning with deep supervision, especially during substantial upsampling from low resolution. We evaluate the segmentation performance of MADGNet across six modalities and fifteen datasets. Through extensive experiments, we demonstrate that MADGNet consistently outperforms state-of-the-art models across various modalities, showcasing superior segmentation performance. This affirms MADGNet as a robust solution for medical image segmentation that excels in diverse imaging scenarios. Our MADGNet code is available in GitHub Link.
https://openaccess.thecvf.com/content/CVPR2024/papers/Nam_Modality-agnostic_Domain_Generalizable_Medical_Image_Segmentation_by_Multi-Frequency_in_Multi-Scale_CVPR_2024_paper.pdf
http://arxiv.org/abs/2405.06284
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Nam_Modality-agnostic_Domain_Generalizable_Medical_Image_Segmentation_by_Multi-Frequency_in_Multi-Scale_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Nam_Modality-agnostic_Domain_Generalizable_Medical_Image_Segmentation_by_Multi-Frequency_in_Multi-Scale_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Nam_Modality-agnostic_Domain_Generalizable_CVPR_2024_supplemental.pdf
null
Few-shot Learner Parameterization by Diffusion Time-steps
Zhongqi Yue, Pan Zhou, Richang Hong, Hanwang Zhang, Qianru Sun
Even when using large multi-modal foundation models few-shot learning is still challenging -- if there is no proper inductive bias it is nearly impossible to keep the nuanced class attributes while removing the visually prominent attributes that spuriously correlate with class labels. To this end we find an inductive bias that the time-steps of a Diffusion Model (DM) can isolate the nuanced class attributes i.e. as the forward diffusion adds noise to an image at each time-step nuanced attributes are usually lost at an earlier time-step than the spurious attributes that are visually prominent. Building on this we propose Time-step Few-shot (TiF) learner. We train class-specific low-rank adapters for a text-conditioned DM to make up for the lost attributes such that images can be accurately reconstructed from their noisy ones given a prompt. Hence at a small time-step the adapter and prompt are essentially a parameterization of only the nuanced class attributes. For a test image we can use the parameterization to only extract the nuanced class attributes for classification. TiF learner significantly outperforms OpenCLIP and its adapters on a variety of fine-grained and customized few-shot learning tasks. Codes are in https://github.com/yue-zhongqi/tif.
https://openaccess.thecvf.com/content/CVPR2024/papers/Yue_Few-shot_Learner_Parameterization_by_Diffusion_Time-steps_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.02649
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Yue_Few-shot_Learner_Parameterization_by_Diffusion_Time-steps_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Yue_Few-shot_Learner_Parameterization_by_Diffusion_Time-steps_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yue_Few-shot_Learner_Parameterization_CVPR_2024_supplemental.pdf
null
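The inductive bias in the TiF abstract rests on the standard DDPM forward process, where x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps, so early time-steps keep most of the signal and late ones are nearly pure noise. A sketch with an illustrative linear beta schedule (not necessarily the paper's):

```python
import numpy as np

def add_noise(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) for a DDPM-style forward process
    and return it together with alpha_bar_t (the surviving signal
    fraction)."""
    alpha_bar = np.cumprod(1.0 - betas)[t]
    eps = rng.normal(size=x0.shape)
    x_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps
    return x_t, alpha_bar

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)   # illustrative linear schedule
x0 = np.ones((4, 4))
x_small, ab_small = add_noise(x0, 10, betas, rng)   # early: mostly signal
x_large, ab_large = add_noise(x0, 900, betas, rng)  # late: mostly noise
```

The gap between `ab_small` and `ab_large` is exactly what lets a small time-step isolate fine-grained attributes.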
Auto MC-Reward: Automated Dense Reward Design with Large Language Models for Minecraft
Hao Li, Xue Yang, Zhaokai Wang, Xizhou Zhu, Jie Zhou, Yu Qiao, Xiaogang Wang, Hongsheng Li, Lewei Lu, Jifeng Dai
Many reinforcement learning environments (e.g., Minecraft) provide only sparse rewards that indicate task completion or failure with binary values. The resulting challenge in exploration efficiency makes it difficult for reinforcement-learning-based agents to learn complex tasks. To address this, this paper introduces an advanced learning system, named Auto MC-Reward, that leverages Large Language Models (LLMs) to automatically design dense reward functions, thereby enhancing learning efficiency. Auto MC-Reward consists of three important components: Reward Designer, Reward Critic, and Trajectory Analyzer. Given the environment information and task descriptions, the Reward Designer first designs the reward function by coding an executable Python function with predefined observation inputs. Then, the Reward Critic is responsible for verifying the code, checking whether it is self-consistent and free of syntax and semantic errors. Further, the Trajectory Analyzer summarizes possible failure causes and provides refinement suggestions according to the collected trajectories. In the next round, the Reward Designer further refines and iterates on the dense reward function based on this feedback. Experiments demonstrate a significant improvement in the success rate and learning efficiency of our agents on complex tasks in Minecraft, such as obtaining diamonds while efficiently avoiding lava, and efficiently exploring trees and animals that are sparse in the plains biome.
https://openaccess.thecvf.com/content/CVPR2024/papers/Li_Auto_MC-Reward_Automated_Dense_Reward_Design_with_Large_Language_Models_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Li_Auto_MC-Reward_Automated_Dense_Reward_Design_with_Large_Language_Models_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Li_Auto_MC-Reward_Automated_Dense_Reward_Design_with_Large_Language_Models_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_Auto_MC-Reward_Automated_CVPR_2024_supplemental.pdf
null
GenFlow: Generalizable Recurrent Flow for 6D Pose Refinement of Novel Objects
Sungphill Moon, Hyeontae Son, Dongcheol Hur, Sangwook Kim
Despite the progress of learning-based methods for 6D object pose estimation, the trade-off between accuracy and scalability for novel objects still exists. Specifically, previous methods for novel objects do not make good use of the target object's 3D shape information, since they focus on generalization by processing the shape indirectly, making them less effective. We present GenFlow, an approach that enables both accuracy and generalization to novel objects with the guidance of the target object's shape. Our method predicts optical flow between the rendered image and the observed image and refines the 6D pose iteratively. It boosts performance via a constraint of the 3D shape and the generalizable geometric knowledge learned from an end-to-end differentiable system. We further improve our model by designing a cascade network architecture to exploit multi-scale correlations and coarse-to-fine refinement. GenFlow ranked first on the unseen object pose estimation benchmarks in both the RGB and RGB-D cases. It also achieves performance competitive with existing state-of-the-art methods for seen object pose estimation without any fine-tuning.
https://openaccess.thecvf.com/content/CVPR2024/papers/Moon_GenFlow_Generalizable_Recurrent_Flow_for_6D_Pose_Refinement_of_Novel_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.11510
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Moon_GenFlow_Generalizable_Recurrent_Flow_for_6D_Pose_Refinement_of_Novel_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Moon_GenFlow_Generalizable_Recurrent_Flow_for_6D_Pose_Refinement_of_Novel_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Moon_GenFlow_Generalizable_Recurrent_CVPR_2024_supplemental.pdf
null
OrCo: Towards Better Generalization via Orthogonality and Contrast for Few-Shot Class-Incremental Learning
Noor Ahmed, Anna Kukleva, Bernt Schiele
Few-Shot Class-Incremental Learning (FSCIL) introduces a paradigm in which the problem space expands with limited data. FSCIL methods inherently face the challenge of catastrophic forgetting as data arrives incrementally, making models susceptible to overwriting previously acquired knowledge. Moreover, given the scarcity of labeled samples available at any given time, models may be prone to overfitting and find it challenging to strike a balance between extensive pretraining and the limited incremental data. To address these challenges, we propose the OrCo framework, built on two core principles: features' orthogonality in the representation space, and contrastive learning. In particular, we improve the generalization of the embedding space by employing a combination of supervised and self-supervised contrastive losses during the pretraining phase. Additionally, we introduce the OrCo loss to address challenges arising from data limitations during incremental sessions. Through feature space perturbations and orthogonality between classes, the OrCo loss maximizes margins and reserves space for the following incremental data. This, in turn, ensures the accommodation of incoming classes in the feature space without compromising previously acquired knowledge. Our experimental results showcase state-of-the-art performance across three benchmark datasets: mini-ImageNet, CIFAR100, and CUB. Code is available at https://github.com/noorahmedds/OrCo
https://openaccess.thecvf.com/content/CVPR2024/papers/Ahmed_OrCo_Towards_Better_Generalization_via_Orthogonality_and_Contrast_for_Few-Shot_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.18550
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Ahmed_OrCo_Towards_Better_Generalization_via_Orthogonality_and_Contrast_for_Few-Shot_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Ahmed_OrCo_Towards_Better_Generalization_via_Orthogonality_and_Contrast_for_Few-Shot_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ahmed_OrCo_Towards_Better_CVPR_2024_supplemental.pdf
null
MuGE: Multiple Granularity Edge Detection
Caixia Zhou, Yaping Huang, Mengyang Pu, Qingji Guan, Ruoxi Deng, Haibin Ling
Edge segmentation is well known to be subjective due to personalized annotation styles and preferred granularity. However, most existing deterministic edge detection methods produce only a single edge map for one input image. We argue that generating multiple edge maps is more reasonable than generating a single one, considering the subjectivity and ambiguity of edges. Thus motivated, in this paper we propose multiple granularity edge detection, called MuGE, which can produce a wide range of edge maps, from approximate object contours to fine texture edges. Specifically, we first design an edge granularity network to estimate the edge granularity from an individual edge annotation. Subsequently, to guide the generation of diversified edge maps, we integrate this edge granularity into the multi-scale feature maps in the spatial domain. Meanwhile, we decompose the feature maps into low-frequency and high-frequency parts, where the encoded edge granularity is further fused into the high-frequency part to achieve more precise control over the details of the produced edge maps. Compared to previous methods, MuGE not only generates multiple edge maps at different controllable granularities but also achieves competitive performance on the BSDS500 and Multicue benchmark datasets.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhou_MuGE_Multiple_Granularity_Edge_Detection_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhou_MuGE_Multiple_Granularity_Edge_Detection_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhou_MuGE_Multiple_Granularity_Edge_Detection_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhou_MuGE_Multiple_Granularity_CVPR_2024_supplemental.pdf
null
Real-World Efficient Blind Motion Deblurring via Blur Pixel Discretization
Insoo Kim, Jae Seok Choi, Geonseok Seo, Kinam Kwon, Jinwoo Shin, Hyong-Euk Lee
As recent advances in mobile camera technology have enabled the capture of high-resolution images, such as 4K images, the demand for an efficient deblurring model that handles large motion has increased. In this paper, we discover that image residual errors, i.e., blur-sharp pixel differences, can be grouped into categories according to their motion blur type and how complex their neighboring pixels are. Inspired by this, we decompose the deblurring (regression) task into blur pixel discretization (pixel-level blur classification) and discrete-to-continuous conversion (regression with a blur class map) tasks. Specifically, we generate the discretized image residual errors by identifying the blur pixels and then transform them to a continuous form, which is computationally more efficient than naively solving the original regression problem with continuous values. Here, we found that the discretization result, i.e., the blur segmentation map, remarkably exhibits visual similarity with the image residual errors. As a result, our efficient model shows performance comparable to state-of-the-art methods on realistic benchmarks, while being up to 10 times more computationally efficient.
https://openaccess.thecvf.com/content/CVPR2024/papers/Kim_Real-World_Efficient_Blind_Motion_Deblurring_via_Blur_Pixel_Discretization_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.12168
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Kim_Real-World_Efficient_Blind_Motion_Deblurring_via_Blur_Pixel_Discretization_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Kim_Real-World_Efficient_Blind_Motion_Deblurring_via_Blur_Pixel_Discretization_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Kim_Real-World_Efficient_Blind_CVPR_2024_supplemental.pdf
null
EmoVIT: Revolutionizing Emotion Insights with Visual Instruction Tuning
Hongxia Xie, Chu-Jun Peng, Yu-Wen Tseng, Hung-Jen Chen, Chan-Feng Hsu, Hong-Han Shuai, Wen-Huang Cheng
Visual Instruction Tuning represents a novel learning paradigm involving the fine-tuning of pre-trained language models using task-specific instructions. This paradigm shows promising zero-shot results in various natural language processing tasks but is still unexplored in vision emotion understanding. In this work, we focus on enhancing the model's proficiency in understanding and adhering to instructions related to emotional contexts. Initially, we identify key visual clues critical to visual emotion recognition. Subsequently, we introduce a novel GPT-assisted pipeline for generating emotion visual instruction data, effectively addressing the scarcity of annotated instruction data in this domain. Expanding on the groundwork established by InstructBLIP, our proposed EmoVIT architecture incorporates emotion-specific instruction data, leveraging the powerful capabilities of Large Language Models to enhance performance. Through extensive experiments, our model showcases its proficiency in emotion classification, adeptness in affective reasoning, and competence in comprehending humor. The comparative analysis provides a robust benchmark for Emotion Visual Instruction Tuning in the era of LLMs, offering valuable insights and opening avenues for future exploration in this domain. Our code is available at https://github.com/aimmemotion/EmoVIT.
https://openaccess.thecvf.com/content/CVPR2024/papers/Xie_EmoVIT_Revolutionizing_Emotion_Insights_with_Visual_Instruction_Tuning_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.16670
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Xie_EmoVIT_Revolutionizing_Emotion_Insights_with_Visual_Instruction_Tuning_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Xie_EmoVIT_Revolutionizing_Emotion_Insights_with_Visual_Instruction_Tuning_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Xie_EmoVIT_Revolutionizing_Emotion_CVPR_2024_supplemental.pdf
null
Learning to Count without Annotations
Lukas Knobel, Tengda Han, Yuki M. Asano
While recent supervised methods for reference-based object counting continue to improve performance on benchmark datasets, they have to rely on small datasets due to the cost associated with manually annotating dozens of objects in images. We propose UnCounTR, a model that can learn this task without requiring any manual annotations. To this end, we construct "Self-Collages", images with various pasted objects, as training samples that provide a rich learning signal covering arbitrary object types and counts. Our method builds on existing unsupervised representation and segmentation techniques to demonstrate, for the first time, the ability to perform reference-based counting without manual supervision. Our experiments show that our method not only outperforms simple baselines and generic models such as FasterRCNN and DETR, but also matches the performance of supervised counting models in some domains.
https://openaccess.thecvf.com/content/CVPR2024/papers/Knobel_Learning_to_Count_without_Annotations_CVPR_2024_paper.pdf
http://arxiv.org/abs/2307.08727
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Knobel_Learning_to_Count_without_Annotations_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Knobel_Learning_to_Count_without_Annotations_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Knobel_Learning_to_Count_CVPR_2024_supplemental.pdf
null
Logarithmic Lenses: Exploring Log RGB Data for Image Classification
Bruce A. Maxwell, Sumegha Singhania, Avnish Patel, Rahul Kumar, Heather Fryling, Sihan Li, Haonan Sun, Ping He, Zewen Li
The design of deep network architectures and training methods in computer vision has been well explored. However, in almost all cases the images have been used as provided, with little exploration of pre-processing steps beyond normalization and data augmentation. Virtually all images posted on the web or captured by devices are processed for viewing by humans. Is the pipeline used for humans also best for use by computers and deep networks? The human visual system uses logarithmic sensors; differences and sums correspond to ratios and products. Features in log space will be invariant to intensity changes and robust to color balance changes. Log RGB space also reveals structure that is corrupted by typical pre-processing. We explore using linear and log RGB data for training standard backbone architectures on an image classification task, using data derived directly from RAW images to guarantee its integrity. We found that networks trained on log RGB data exhibit improved performance on an unmodified test set, and invariance to intensity and color balance modifications without additional training or data augmentation. Furthermore, we found that the gains from using high-quality log data can also be partially or fully realized from data in 8-bit sRGB-JPG format by inverting the sRGB transform and taking the log. These results imply that existing databases may benefit from this type of pre-processing. While working with log data, we found it was critical to retain the integrity of the log relationships, and that networks using log data train best with meta-parameters different from those used for sRGB or linear data. Finally, we introduce a new 10-category, 10k RAW image dataset (RAW10) for image classification and other purposes, to further enable the exploration of log RGB as an input format for deep networks in computer vision.
https://openaccess.thecvf.com/content/CVPR2024/papers/Maxwell_Logarithmic_Lenses_Exploring_Log_RGB_Data_for_Image_Classification_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Maxwell_Logarithmic_Lenses_Exploring_Log_RGB_Data_for_Image_Classification_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Maxwell_Logarithmic_Lenses_Exploring_Log_RGB_Data_for_Image_Classification_CVPR_2024_paper.html
CVPR 2024
null
null
AEROBLADE: Training-Free Detection of Latent Diffusion Images Using Autoencoder Reconstruction Error
Jonas Ricker, Denis Lukovnikov, Asja Fischer
With recent text-to-image models, anyone can generate deceptively realistic images with arbitrary content, fueling the growing threat of visual disinformation. A key enabler for generating high-resolution images at low computational cost has been the development of latent diffusion models (LDMs). In contrast to conventional diffusion models, LDMs perform the denoising process in the low-dimensional latent space of a pre-trained autoencoder (AE) instead of the high-dimensional image space. Despite their relevance, the forensic analysis of LDMs is still in its infancy. In this work, we propose AEROBLADE, a novel detection method which exploits an inherent component of LDMs: the AE used to transform images between image and latent space. We find that generated images can be more accurately reconstructed by the AE than real images, allowing for a simple detection approach based on the reconstruction error. Most importantly, our method is easy to implement and does not require any training, yet nearly matches the performance of detectors that rely on extensive training. We empirically demonstrate that AEROBLADE is effective against state-of-the-art LDMs, including Stable Diffusion and Midjourney. Beyond detection, our approach allows for the qualitative analysis of images, which can be leveraged for identifying inpainted regions. We release our code and data at https://github.com/jonasricker/aeroblade.
https://openaccess.thecvf.com/content/CVPR2024/papers/Ricker_AEROBLADE_Training-Free_Detection_of_Latent_Diffusion_Images_Using_Autoencoder_Reconstruction_CVPR_2024_paper.pdf
http://arxiv.org/abs/2401.17879
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Ricker_AEROBLADE_Training-Free_Detection_of_Latent_Diffusion_Images_Using_Autoencoder_Reconstruction_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Ricker_AEROBLADE_Training-Free_Detection_of_Latent_Diffusion_Images_Using_Autoencoder_Reconstruction_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ricker_AEROBLADE_Training-Free_Detection_CVPR_2024_supplemental.pdf
null
Scaled Decoupled Distillation
Shicai Wei, Chunbo Luo, Yang Luo
Logit knowledge distillation has attracted increasing attention due to its practicality in recent studies. However, it often suffers inferior performance compared to feature knowledge distillation. In this paper, we argue that existing logit-based methods may be sub-optimal, since they only leverage the global logit output, which couples multiple kinds of semantic knowledge. This may transfer ambiguous knowledge to the student and mislead its learning. To this end, we propose a simple but effective method, i.e., Scale Decoupled Distillation (SDD), for logit knowledge distillation. SDD decouples the global logit output into multiple local logit outputs and establishes distillation pipelines for them. This helps the student mine and inherit fine-grained and unambiguous logit knowledge. Moreover, the decoupled knowledge can be further divided into consistent and complementary logit knowledge, which transfer the semantic information and sample ambiguity, respectively. By increasing the weight of the complementary part, SDD can guide the student to focus more on ambiguous samples, improving its discrimination ability. Extensive experiments on several benchmark datasets demonstrate the effectiveness of SDD for a wide range of teacher-student pairs, especially in the fine-grained classification task. Code is available at: https://github.com/shicaiwei123/SDD-CVPR2024
https://openaccess.thecvf.com/content/CVPR2024/papers/Wei_Scaled_Decoupled_Distillation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.13512
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Wei_Scaled_Decoupled_Distillation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Wei_Scaled_Decoupled_Distillation_CVPR_2024_paper.html
CVPR 2024
null
null
NARUTO: Neural Active Reconstruction from Uncertain Target Observations
Ziyue Feng, Huangying Zhan, Zheng Chen, Qingan Yan, Xiangyu Xu, Changjiang Cai, Bing Li, Qilun Zhu, Yi Xu
We present NARUTO, a neural active reconstruction system that combines a hybrid neural representation with uncertainty learning, enabling high-fidelity surface reconstruction. Our approach leverages a multi-resolution hash grid as the mapping backbone, chosen for its exceptional convergence speed and capacity to capture high-frequency local features. The centerpiece of our work is the incorporation of an uncertainty learning module that dynamically quantifies reconstruction uncertainty while actively reconstructing the environment. By harnessing the learned uncertainty, we propose a novel uncertainty aggregation strategy for goal searching and efficient path planning. Our system autonomously explores by targeting uncertain observations and reconstructs environments with remarkable completeness and fidelity. We also demonstrate the utility of this uncertainty-aware approach by enhancing SOTA neural SLAM systems through an active ray sampling strategy. Extensive evaluations of NARUTO in various environments, using an indoor scene simulator, confirm its superior performance and state-of-the-art status in active reconstruction, as evidenced by its impressive results on benchmark datasets such as Replica and MP3D.
https://openaccess.thecvf.com/content/CVPR2024/papers/Feng_NARUTO_Neural_Active_Reconstruction_from_Uncertain_Target_Observations_CVPR_2024_paper.pdf
http://arxiv.org/abs/2402.18771
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Feng_NARUTO_Neural_Active_Reconstruction_from_Uncertain_Target_Observations_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Feng_NARUTO_Neural_Active_Reconstruction_from_Uncertain_Target_Observations_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Feng_NARUTO_Neural_Active_CVPR_2024_supplemental.pdf
null
Point2CAD: Reverse Engineering CAD Models from 3D Point Clouds
Yujia Liu, Anton Obukhov, Jan Dirk Wegner, Konrad Schindler
Computer-Aided Design (CAD) model reconstruction from point clouds is an important problem at the intersection of computer vision, graphics, and machine learning; it saves the designer significant time when iterating on in-the-wild objects. Recent advancements in this direction achieve relatively reliable semantic segmentation but still struggle to produce an adequate topology of the CAD model. In this work, we analyze the current state of the art for this ill-posed task and identify shortcomings of existing methods. We propose a hybrid analytic-neural reconstruction scheme that bridges the gap between segmented point clouds and structured CAD models and can be readily combined with different segmentation backbones. Moreover, to power the surface fitting stage, we propose a novel implicit neural representation of freeform surfaces, driving up the performance of our overall CAD reconstruction scheme. We extensively evaluate our method on the popular ABC benchmark of CAD models and set a new state of the art for that dataset. Code is available at https://github.com/YujiaLiu76/point2cad.
https://openaccess.thecvf.com/content/CVPR2024/papers/Liu_Point2CAD_Reverse_Engineering_CAD_Models_from_3D_Point_Clouds_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.04962
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Liu_Point2CAD_Reverse_Engineering_CAD_Models_from_3D_Point_Clouds_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Liu_Point2CAD_Reverse_Engineering_CAD_Models_from_3D_Point_Clouds_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Liu_Point2CAD_Reverse_Engineering_CVPR_2024_supplemental.pdf
null
Learnable Earth Parser: Discovering 3D Prototypes in Aerial Scans
Romain Loiseau, Elliot Vincent, Mathieu Aubry, Loic Landrieu
We propose an unsupervised method for parsing large 3D scans of real-world scenes with easily interpretable shapes. This work aims to provide a practical tool for analyzing 3D scenes in the context of aerial surveying and mapping, without the need for user annotations. Our approach is based on a probabilistic reconstruction model that decomposes an input 3D point cloud into a small set of learned prototypical 3D shapes. The resulting reconstruction is visually interpretable and can be used to perform unsupervised instance and low-shot semantic segmentation of complex scenes. We demonstrate the usefulness of our model on a novel dataset of seven large aerial LiDAR scans from diverse real-world scenarios. Our approach outperforms state-of-the-art unsupervised methods in terms of decomposition accuracy while remaining visually interpretable. Our code and dataset are available at https://romainloiseau.fr/learnable-earth-parser/.
https://openaccess.thecvf.com/content/CVPR2024/papers/Loiseau_Learnable_Earth_Parser_Discovering_3D_Prototypes_in_Aerial_Scans_CVPR_2024_paper.pdf
http://arxiv.org/abs/2304.09704
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Loiseau_Learnable_Earth_Parser_Discovering_3D_Prototypes_in_Aerial_Scans_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Loiseau_Learnable_Earth_Parser_Discovering_3D_Prototypes_in_Aerial_Scans_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Loiseau_Learnable_Earth_Parser_CVPR_2024_supplemental.pdf
null
NeRFiller: Completing Scenes via Generative 3D Inpainting
Ethan Weber, Aleksander Holynski, Varun Jampani, Saurabh Saxena, Noah Snavely, Abhishek Kar, Angjoo Kanazawa
We propose NeRFiller, an approach that completes missing portions of a 3D capture via generative 3D inpainting, using off-the-shelf 2D visual generative models. Often, parts of a captured 3D scene or object are missing due to mesh reconstruction failures or a lack of observations (e.g., contact regions, such as the bottom of objects, or hard-to-reach areas). We approach this challenging 3D inpainting problem by leveraging a 2D inpainting diffusion model. We identify a surprising behavior of these models, where they generate more 3D-consistent inpaints when images form a 2x2 grid, and show how to generalize this behavior to more than four images. We then present an iterative framework to distill these inpainted regions into a single consistent 3D scene. In contrast to related works, we focus on completing scenes rather than deleting foreground objects, and our approach does not require tight 2D object masks or text. We compare our approach to relevant baselines adapted to our setting on a variety of scenes, where NeRFiller creates the most 3D-consistent and plausible scene completions. Our project page is at https://ethanweber.me/nerfiller/.
https://openaccess.thecvf.com/content/CVPR2024/papers/Weber_NeRFiller_Completing_Scenes_via_Generative_3D_Inpainting_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.04560
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Weber_NeRFiller_Completing_Scenes_via_Generative_3D_Inpainting_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Weber_NeRFiller_Completing_Scenes_via_Generative_3D_Inpainting_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Weber_NeRFiller_Completing_Scenes_CVPR_2024_supplemental.pdf
null
Cloud-Device Collaborative Learning for Multimodal Large Language Models
Guanqun Wang, Jiaming Liu, Chenxuan Li, Yuan Zhang, Junpeng Ma, Xinyu Wei, Kevin Zhang, Maurice Chong, Renrui Zhang, Yijiang Liu, Shanghang Zhang
The burgeoning field of Multimodal Large Language Models (MLLMs) has exhibited remarkable performance in diverse tasks such as captioning, commonsense reasoning, and visual scene understanding. However, the deployment of these large-scale MLLMs on client devices is hindered by their extensive model parameters, leading to a notable decline in generalization capabilities when these models are compressed for device deployment. Addressing this challenge, we introduce a Cloud-Device Collaborative Continual Adaptation framework, designed to enhance the performance of compressed, device-deployed MLLMs by leveraging the robust capabilities of cloud-based, larger-scale MLLMs. Our framework is structured into three key components: a device-to-cloud uplink for efficient data transmission, cloud-based knowledge adaptation, and an optimized cloud-to-device downlink for model deployment. In the uplink phase, we employ an Uncertainty-guided Token Sampling (UTS) strategy to effectively filter out-of-distribution tokens, thereby reducing transmission costs and improving training efficiency. On the cloud side, we propose an Adapter-based Knowledge Distillation (AKD) method to transfer refined knowledge from large-scale to compressed, pocket-size MLLMs. Furthermore, we propose a Dynamic Weight update Compression (DWC) strategy for the downlink, which adaptively selects and quantizes updated weight parameters, enhancing transmission efficiency and reducing the representational disparity between cloud and device models. Extensive experiments on several multimodal benchmarks demonstrate the superiority of our proposed framework over prior Knowledge Distillation and device-cloud collaboration methods. Notably, we also validate the feasibility of our approach in real-world experiments.
https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_Cloud-Device_Collaborative_Learning_for_Multimodal_Large_Language_Models_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.16279
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Wang_Cloud-Device_Collaborative_Learning_for_Multimodal_Large_Language_Models_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Wang_Cloud-Device_Collaborative_Learning_for_Multimodal_Large_Language_Models_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wang_Cloud-Device_Collaborative_Learning_CVPR_2024_supplemental.pdf
null
KD-DETR: Knowledge Distillation for Detection Transformer with Consistent Distillation Points Sampling
Yu Wang, Xin Li, Shengzhao Weng, Gang Zhang, Haixiao Yue, Haocheng Feng, Junyu Han, Errui Ding
DETR is a novel end-to-end transformer-architecture object detector which significantly outperforms classic detectors when scaling up. In this paper, we focus on the compression of DETR with knowledge distillation. While knowledge distillation has been well studied in classic detectors, there is a lack of research on how to make it work effectively on DETR. We first provide experimental and theoretical analysis to point out that the main challenge in DETR distillation is the lack of consistent distillation points. Distillation points refer to the corresponding inputs of the predictions for the student to mimic, which have different formulations in CNN detectors and DETR, and reliable distillation requires sufficient distillation points which are consistent between teacher and student. Based on this observation, we propose the first general knowledge distillation paradigm for DETR (KD-DETR), with consistent distillation point sampling for both homogeneous and heterogeneous distillation. Specifically, we decouple the detection and distillation tasks by introducing a set of specialized object queries to construct distillation points for DETR. We further propose a general-to-specific distillation point sampling strategy to explore the extensibility of KD-DETR. Extensive experiments validate the effectiveness and generalization of KD-DETR. For both single-scale DAB-DETR and multi-scale Deformable DETR and DINO, KD-DETR boosts the performance of the student model with improvements of 2.6%-5.2%. We further extend KD-DETR to heterogeneous distillation and achieve a 2.1% improvement by distilling the knowledge from DINO to Faster R-CNN with ResNet-50, which is comparable with homogeneous distillation methods.
https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_KD-DETR_Knowledge_Distillation_for_Detection_Transformer_with_Consistent_Distillation_Points_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Wang_KD-DETR_Knowledge_Distillation_for_Detection_Transformer_with_Consistent_Distillation_Points_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Wang_KD-DETR_Knowledge_Distillation_for_Detection_Transformer_with_Consistent_Distillation_Points_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wang_KD-DETR_Knowledge_Distillation_CVPR_2024_supplemental.pdf
null
Absolute Pose from One or Two Scaled and Oriented Features
Jonathan Ventura, Zuzana Kukelova, Torsten Sattler, Dániel Baráth
Keypoints used for image matching often include an estimate of the feature scale and orientation. While recent work has demonstrated the advantages of using feature scales and orientations for relative pose estimation, relatively little work has considered their use for absolute pose estimation. We introduce minimal solutions for absolute pose from two oriented feature correspondences in the general case, or one scaled and oriented correspondence given a known vertical direction. Nowadays, assuming a known direction is not particularly restrictive, as modern consumer devices such as smartphones or drones are equipped with Inertial Measurement Units (IMUs) that provide the gravity direction by default. Compared to traditional absolute pose methods requiring three point correspondences, our solvers need a smaller minimal sample, reducing the cost and complexity of robust estimation. Evaluations on large-scale public real datasets demonstrate the advantage of our methods for fast and accurate localization in challenging conditions. Code is available at https://github.com/danini/absolute-pose-from-oriented-and-scaled-features.
https://openaccess.thecvf.com/content/CVPR2024/papers/Ventura_Absolute_Pose_from_One_or_Two_Scaled_and_Oriented_Features_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Ventura_Absolute_Pose_from_One_or_Two_Scaled_and_Oriented_Features_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Ventura_Absolute_Pose_from_One_or_Two_Scaled_and_Oriented_Features_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ventura_Absolute_Pose_from_CVPR_2024_supplemental.pdf
null
Source-Free Domain Adaptation with Frozen Multimodal Foundation Model
Song Tang, Wenxin Su, Mao Ye, Xiatian Zhu
Source-Free Domain Adaptation (SFDA) aims to adapt a source model to a target domain with access only to unlabeled target training data and the source model pretrained on a supervised source domain. Relying on pseudo labeling and/or auxiliary supervision, conventional methods are inevitably error-prone. To mitigate this limitation, in this work we for the first time explore the potential of off-the-shelf vision-language (ViL) multimodal models (e.g., CLIP) with rich yet heterogeneous knowledge. We find that directly applying the ViL model to the target domain in a zero-shot fashion is unsatisfactory, as it is not specialized for this particular task but largely generic. To make it task-specific, we propose a novel Distilling multImodal Foundation mOdel (DIFO) approach. Specifically, DIFO alternates between two steps during adaptation: (i) customizing the ViL model by maximizing the mutual information with the target model in a prompt learning manner, and (ii) distilling the knowledge of this customized ViL model to the target model. For more fine-grained and reliable distillation, we further introduce two effective regularization terms, namely most-likely category encouragement and predictive consistency. Extensive experiments show that DIFO significantly outperforms the state-of-the-art alternatives. Code is here.
https://openaccess.thecvf.com/content/CVPR2024/papers/Tang_Source-Free_Domain_Adaptation_with_Frozen_Multimodal_Foundation_Model_CVPR_2024_paper.pdf
http://arxiv.org/abs/2311.16510
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Tang_Source-Free_Domain_Adaptation_with_Frozen_Multimodal_Foundation_Model_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Tang_Source-Free_Domain_Adaptation_with_Frozen_Multimodal_Foundation_Model_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Tang_Source-Free_Domain_Adaptation_CVPR_2024_supplemental.pdf
null
LocLLM: Exploiting Generalizable Human Keypoint Localization via Large Language Model
Dongkai Wang, Shiyu Xuan, Shiliang Zhang
The capacity of existing human keypoint localization models is limited by the keypoint priors provided by the training data. To alleviate this restriction and pursue a more general model, this work studies keypoint localization from a different perspective by reasoning about locations based on keypoint clues in text descriptions. We propose LocLLM, the first Large Language Model (LLM) based keypoint localization model that takes images and text instructions as inputs and outputs the desired keypoint coordinates. LocLLM leverages the strong reasoning capability of LLMs and clues about keypoint type, location, and relationship in textual descriptions for keypoint localization. To effectively tune LocLLM, we construct localization-based instruction conversations to connect keypoint descriptions with the corresponding coordinates in the input image, and fine-tune the whole model in a parameter-efficient training pipeline. LocLLM shows remarkable performance on standard 2D/3D keypoint localization benchmarks. Moreover, incorporating language clues into the localization gives LocLLM superior flexibility and generalization capability in cross-dataset keypoint localization, and even in detecting novel types of keypoints unseen during training.
https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_LocLLM_Exploiting_Generalizable_Human_Keypoint_Localization_via_Large_Language_Model_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Wang_LocLLM_Exploiting_Generalizable_Human_Keypoint_Localization_via_Large_Language_Model_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Wang_LocLLM_Exploiting_Generalizable_Human_Keypoint_Localization_via_Large_Language_Model_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wang_LocLLM_Exploiting_Generalizable_CVPR_2024_supplemental.pdf
null
MMA-Diffusion: MultiModal Attack on Diffusion Models
Yijun Yang, Ruiyuan Gao, Xiaosen Wang, Tsung-Yi Ho, Nan Xu, Qiang Xu
In recent years, Text-to-Image (T2I) models have seen remarkable advancements, gaining widespread adoption. However, this progress has inadvertently opened avenues for potential misuse, particularly in generating inappropriate or Not-Safe-For-Work (NSFW) content. Our work introduces MMA-Diffusion, a framework that presents a significant and realistic threat to the security of T2I models by effectively circumventing current defensive measures in both open-source models and commercial online services. Unlike previous approaches, MMA-Diffusion leverages both textual and visual modalities to bypass safeguards like prompt filters and post-hoc safety checkers, thus exposing and highlighting the vulnerabilities in existing defense mechanisms. Our code is available at https://github.com/cure-lab/MMA-Diffusion.
https://openaccess.thecvf.com/content/CVPR2024/papers/Yang_MMA-Diffusion_MultiModal_Attack_on_Diffusion_Models_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Yang_MMA-Diffusion_MultiModal_Attack_on_Diffusion_Models_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Yang_MMA-Diffusion_MultiModal_Attack_on_Diffusion_Models_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yang_MMA-Diffusion_MultiModal_Attack_CVPR_2024_supplemental.pdf
null
Benchmarking Audio Visual Segmentation for Long-Untrimmed Videos
Chen Liu, Peike Patrick Li, Qingtao Yu, Hongwei Sheng, Dadong Wang, Lincheng Li, Xin Yu
Existing audio-visual segmentation datasets typically focus on short trimmed videos with only one pixel-map annotation per one-second video clip. In contrast, for untrimmed videos, the sound duration, start- and end-sounding time positions, and visual deformation of audible objects vary significantly. Therefore, we observe that current AVS models trained on trimmed videos may struggle to segment sounding objects in long videos. To investigate the feasibility of grounding audible objects in videos along both temporal and spatial dimensions, we introduce the Long-Untrimmed Audio-Visual Segmentation dataset (LU-AVS), which includes precise frame-level annotations of sound emission times and provides exhaustive mask annotations for all frames. Considering that pixel-level annotations are difficult to achieve in some complex scenes, we also provide bounding boxes to indicate the sounding regions. Specifically, LU-AVS contains 10M mask annotations across 6.6K videos and 11M bounding box annotations across 7K videos. Compared with the existing datasets, LU-AVS videos are on average 4-8 times longer, with the silent duration being 3-15 times greater. Furthermore, we adapt several baseline models originally designed for audio-visual tasks to examine the challenges of our newly curated LU-AVS. Through comprehensive evaluation, we demonstrate the challenges of LU-AVS compared to datasets containing trimmed videos. LU-AVS thus provides an ideal yet challenging platform for evaluating audio-visual segmentation and localization on untrimmed long videos. The dataset is publicly available at: https://yenanliu.github.io/LU-AVS/.
https://openaccess.thecvf.com/content/CVPR2024/papers/Liu_Benchmarking_Audio_Visual_Segmentation_for_Long-Untrimmed_Videos_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Liu_Benchmarking_Audio_Visual_Segmentation_for_Long-Untrimmed_Videos_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Liu_Benchmarking_Audio_Visual_Segmentation_for_Long-Untrimmed_Videos_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Liu_Benchmarking_Audio_Visual_CVPR_2024_supplemental.pdf
null
EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation
Md Mostafijur Rahman, Mustafa Munir, Radu Marculescu
An efficient and effective decoding mechanism is crucial in medical image segmentation, especially in scenarios with limited computational resources. However, these decoding mechanisms usually come with high computational costs. To address this concern, we introduce EMCAD, a new efficient multi-scale convolutional attention decoder designed to optimize both performance and computational efficiency. EMCAD leverages a unique multi-scale depth-wise convolution block, significantly enhancing feature maps through multi-scale convolutions. EMCAD also employs channel, spatial, and grouped (large-kernel) gated attention mechanisms, which are highly effective at capturing intricate spatial relationships while focusing on salient regions. By employing grouped and depth-wise convolutions, EMCAD is very efficient and scales well (e.g., only 1.91M parameters and 0.381G FLOPs are needed when using a standard encoder). Our rigorous evaluations across 12 datasets belonging to six medical image segmentation tasks reveal that EMCAD achieves state-of-the-art (SOTA) performance with 79.4% and 80.3% reductions in #Params and #FLOPs, respectively. Moreover, EMCAD's adaptability to different encoders and versatility across segmentation tasks further establish it as a promising tool, advancing the field towards more efficient and accurate medical image analysis. Our implementation is available at https://github.com/SLDGroup/EMCAD.
https://openaccess.thecvf.com/content/CVPR2024/papers/Rahman_EMCAD_Efficient_Multi-scale_Convolutional_Attention_Decoding_for_Medical_Image_Segmentation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2405.06880
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Rahman_EMCAD_Efficient_Multi-scale_Convolutional_Attention_Decoding_for_Medical_Image_Segmentation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Rahman_EMCAD_Efficient_Multi-scale_Convolutional_Attention_Decoding_for_Medical_Image_Segmentation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Rahman_EMCAD_Efficient_Multi-scale_CVPR_2024_supplemental.pdf
null
VTQA: Visual Text Question Answering via Entity Alignment and Cross-Media Reasoning
Kang Chen, Xiangqian Wu
Achieving the optimal form of Visual Question Answering mandates a profound grasp of understanding, grounding, and reasoning within the intersecting domains of vision and language. Traditional VQA benchmarks have predominantly focused on simplistic tasks such as counting visual attributes and object detection, which do not necessitate intricate cross-modal information understanding and inference. Motivated by the need for a more comprehensive evaluation, we introduce a novel dataset comprising 23,781 questions derived from 10,124 image-text pairs. Specifically, the task of this dataset requires the model to align multimedia representations of the same entity, implement multi-hop reasoning between image and text, and finally use natural language to answer the question. Furthermore, we evaluate this VTQA dataset, comparing the performance of both state-of-the-art VQA models and our proposed baseline model, the Key Entity Cross-Media Reasoning Network (KECMRN). The VTQA task poses formidable challenges for traditional VQA models, underscoring its intrinsic complexity. Conversely, KECMRN exhibits a modest improvement, signifying its potential in multimedia entity alignment and multi-step reasoning. Our analysis underscores the diversity, difficulty, and scale of the VTQA task compared to previous multimodal QA datasets. In conclusion, we anticipate that this dataset will serve as a pivotal resource for advancing and evaluating models proficient in multimedia entity alignment, multi-step reasoning, and open-ended answer generation. Our dataset and code are available at https://visual-text-qa.github.io/.
https://openaccess.thecvf.com/content/CVPR2024/papers/Chen_VTQA_Visual_Text_Question_Answering_via_Entity_Alignment_and_Cross-Media_CVPR_2024_paper.pdf
http://arxiv.org/abs/2303.02635
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Chen_VTQA_Visual_Text_Question_Answering_via_Entity_Alignment_and_Cross-Media_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Chen_VTQA_Visual_Text_Question_Answering_via_Entity_Alignment_and_Cross-Media_CVPR_2024_paper.html
CVPR 2024
null
null
QN-Mixer: A Quasi-Newton MLP-Mixer Model for Sparse-View CT Reconstruction
Ishak Ayad, Nicolas Larue, Mai K. Nguyen
Inverse problems span diverse fields. In medical contexts, computed tomography (CT) plays a crucial role in reconstructing a patient's internal structure, presenting challenges due to artifacts caused by inherently ill-posed inverse problems. Previous research advanced image quality via post-processing and deep unrolling algorithms, but faced challenges such as extended convergence times with ultra-sparse data. Despite these enhancements, the resulting images often show significant artifacts, limiting their effectiveness for real-world diagnostic applications. We aim to explore deep second-order unrolling algorithms for solving imaging inverse problems, emphasizing their faster convergence and lower time complexity compared to common first-order methods like gradient descent. In this paper, we introduce QN-Mixer, an algorithm based on the quasi-Newton approach. We use parameters learned through the BFGS algorithm and introduce Incept-Mixer, an efficient neural architecture that serves as a non-local regularization term, capturing long-range dependencies within images. To address the computational demands typically associated with quasi-Newton algorithms that require full Hessian matrix computations, we present a memory-efficient alternative. Our approach intelligently downsamples gradient information, significantly reducing computational requirements while maintaining performance. The approach is validated through experiments on the sparse-view CT problem, involving various datasets and scanning protocols, and is compared with post-processing and deep unrolling state-of-the-art approaches. Our method outperforms existing approaches and achieves state-of-the-art performance in terms of SSIM and PSNR, all while reducing the number of unrolling iterations required.
https://openaccess.thecvf.com/content/CVPR2024/papers/Ayad_QN-Mixer_A_Quasi-Newton_MLP-Mixer_Model_for_Sparse-View_CT_Reconstruction_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Ayad_QN-Mixer_A_Quasi-Newton_MLP-Mixer_Model_for_Sparse-View_CT_Reconstruction_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Ayad_QN-Mixer_A_Quasi-Newton_MLP-Mixer_Model_for_Sparse-View_CT_Reconstruction_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ayad_QN-Mixer_A_Quasi-Newton_CVPR_2024_supplemental.pdf
null
Learning CNN on ViT: A Hybrid Model to Explicitly Class-specific Boundaries for Domain Adaptation
Ba Hung Ngo, Nhat-Tuong Do-Tran, Tuan-Ngoc Nguyen, Hae-Gon Jeon, Tae Jong Choi
Most domain adaptation (DA) methods are based on either convolutional neural networks (CNNs) or vision transformers (ViTs). They align the distribution differences between domains as encoders, without considering their unique characteristics. For instance, ViTs excel in accuracy due to their superior ability to capture global representations, while CNNs have an advantage in capturing local representations. This fact has led us to design a hybrid method to take full advantage of both ViT and CNN, called Explicitly Class-specific Boundaries (ECB). ECB learns CNN on ViT to combine their distinct strengths. In particular, we leverage the ViT's properties to explicitly find class-specific decision boundaries by maximizing the discrepancy between the outputs of the two classifiers, to detect target samples far from the source support. In contrast, the CNN encoder clusters target features based on the previously defined class-specific boundaries, by minimizing the discrepancy between the probabilities of the two classifiers. Finally, ViT and CNN mutually exchange knowledge to improve the quality of pseudo labels and reduce the knowledge discrepancies between these models. Compared to conventional DA methods, our ECB achieves superior performance, which verifies its effectiveness in this hybrid model. The project website can be found at https://dotrannhattuong.github.io/ECB/website/.
https://openaccess.thecvf.com/content/CVPR2024/papers/Ngo_Learning_CNN_on_ViT_A_Hybrid_Model_to_Explicitly_Class-specific_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.18360
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Ngo_Learning_CNN_on_ViT_A_Hybrid_Model_to_Explicitly_Class-specific_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Ngo_Learning_CNN_on_ViT_A_Hybrid_Model_to_Explicitly_Class-specific_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ngo_Learning_CNN_on_CVPR_2024_supplemental.pdf
null
A Picture is Worth More Than 77 Text Tokens: Evaluating CLIP-Style Models on Dense Captions
Jack Urbanek, Florian Bordes, Pietro Astolfi, Mary Williamson, Vasu Sharma, Adriana Romero-Soriano
Curation methods for massive vision-language datasets trade off between dataset size and quality. However, even the highest-quality curated captions available are far too short to capture the rich visual detail in an image. To show the value of dense and highly-aligned image-text pairs, we collect the Densely Captioned Images (DCI) dataset, containing 8,012 natural images human-annotated with mask-aligned descriptions averaging above 1,000 words each. With precise and reliable captions associated with specific parts of an image, we can evaluate vision-language models' (VLMs) understanding of image content with a novel task that matches each caption with its corresponding subcrop. As current models are often limited to 77 text tokens, we also introduce a summarized version (sDCI) in which each caption length is limited. We show that modern techniques that make progress on standard benchmarks do not correspond with significant improvement on our sDCI-based benchmark. Lastly, we finetune CLIP using sDCI and show significant improvements over the baseline despite a small training set. By releasing the first human-annotated dense image captioning dataset, we hope to enable the development of new benchmarks or fine-tuning recipes for the next generation of VLMs to come.
https://openaccess.thecvf.com/content/CVPR2024/papers/Urbanek_A_Picture_is_Worth_More_Than_77_Text_Tokens_Evaluating_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.08578
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Urbanek_A_Picture_is_Worth_More_Than_77_Text_Tokens_Evaluating_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Urbanek_A_Picture_is_Worth_More_Than_77_Text_Tokens_Evaluating_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Urbanek_A_Picture_is_CVPR_2024_supplemental.pdf
null
HanDiffuser: Text-to-Image Generation With Realistic Hand Appearances
Supreeth Narasimhaswamy, Uttaran Bhattacharya, Xiang Chen, Ishita Dasgupta, Saayan Mitra, Minh Hoai
Text-to-image generative models can generate high-quality humans, but realism is lost when generating hands. Common artifacts include irregular hand poses and shapes, incorrect numbers of fingers, and physically implausible finger orientations. To generate images with realistic hands, we propose a novel diffusion-based architecture called HanDiffuser that achieves realism by injecting hand embeddings into the generative process. HanDiffuser consists of two components: a Text-to-Hand-Params diffusion model to generate SMPL-Body and MANO-Hand parameters from input text prompts, and a Text-Guided Hand-Params-to-Image diffusion model to synthesize images by conditioning on the prompts and the hand parameters generated by the previous component. We incorporate multiple aspects of hand representation, including 3D shapes and joint-level finger positions, orientations, and articulations, for robust learning and reliable performance during inference. We conduct extensive quantitative and qualitative experiments and perform user studies to demonstrate the efficacy of our method in generating images with high-quality hands.
https://openaccess.thecvf.com/content/CVPR2024/papers/Narasimhaswamy_HanDiffuser_Text-to-Image_Generation_With_Realistic_Hand_Appearances_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.01693
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Narasimhaswamy_HanDiffuser_Text-to-Image_Generation_With_Realistic_Hand_Appearances_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Narasimhaswamy_HanDiffuser_Text-to-Image_Generation_With_Realistic_Hand_Appearances_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Narasimhaswamy_HanDiffuser_Text-to-Image_Generation_CVPR_2024_supplemental.pdf
null
Infinigen Indoors: Photorealistic Indoor Scenes using Procedural Generation
Alexander Raistrick, Lingjie Mei, Karhan Kayan, David Yan, Yiming Zuo, Beining Han, Hongyu Wen, Meenal Parakh, Stamatis Alexandropoulos, Lahav Lipson, Zeyu Ma, Jia Deng
We introduce Infinigen Indoors, a Blender-based procedural generator of photorealistic indoor scenes. It builds upon the existing Infinigen system, which focuses on natural scenes, but expands its coverage to indoor scenes by introducing a diverse library of procedural indoor assets, including furniture, architectural elements, appliances, and other day-to-day objects. It also introduces a constraint-based arrangement system, which consists of a domain-specific language for expressing diverse constraints on scene composition, and a solver that generates scene compositions that maximally satisfy the constraints. We provide an export tool that allows the generated 3D objects and scenes to be directly used for training embodied agents in real-time simulators such as Omniverse and Unreal. Infinigen Indoors is open-sourced under the BSD license. Please visit infinigen.org for code and videos.
https://openaccess.thecvf.com/content/CVPR2024/papers/Raistrick_Infinigen_Indoors_Photorealistic_Indoor_Scenes_using_Procedural_Generation_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Raistrick_Infinigen_Indoors_Photorealistic_Indoor_Scenes_using_Procedural_Generation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Raistrick_Infinigen_Indoors_Photorealistic_Indoor_Scenes_using_Procedural_Generation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Raistrick_Infinigen_Indoors_Photorealistic_CVPR_2024_supplemental.pdf
null
MART: Masked Affective RepresenTation Learning via Masked Temporal Distribution Distillation
Zhicheng Zhang, Pancheng Zhao, Eunil Park, Jufeng Yang
Limited training data is a long-standing problem for video emotion analysis (VEA). Existing works leverage the power of large-scale image datasets for transfer, while failing to extract the temporal correlation of affective cues in the video. Inspired by psychology research and empirical theory, we verify that the degree of emotion may vary in different segments of the video, thus introducing sentiment complementarity and emotion intrinsicality among temporal segments. We propose an MAE-style method for learning robust affective representations of videos via masking, termed MART. First, we extract the affective cues of the lexicon and verify the extracted cues by computing their matching scores with video content, in terms of sentiment and emotion scores, along the temporal dimension. Then, with the verified cues, we propose masked affective modeling to recover the temporal emotion distribution. We present temporal affective complementary learning, which pulls the complementary part and pushes the intrinsic one of masked multimodal features, where the constraint is set with cross-modal attention among features to mask the video and recover the degree of emotion among segments. Extensive experiments on five benchmarks show the superiority of our method in video sentiment analysis, video emotion recognition, multimodal sentiment analysis, and multimodal emotion recognition.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_MART_Masked_Affective_RepresenTation_Learning_via_Masked_Temporal_Distribution_Distillation_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_MART_Masked_Affective_RepresenTation_Learning_via_Masked_Temporal_Distribution_Distillation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_MART_Masked_Affective_RepresenTation_Learning_via_Masked_Temporal_Distribution_Distillation_CVPR_2024_paper.html
CVPR 2024
null
null
MTLoRA: Low-Rank Adaptation Approach for Efficient Multi-Task Learning
Ahmed Agiza, Marina Neseem, Sherief Reda
Adapting models pre-trained on large-scale datasets to a variety of downstream tasks is a common strategy in deep learning. Consequently, parameter-efficient fine-tuning methods have emerged as a promising way to adapt pre-trained models to different tasks while training only a minimal number of parameters. While most of these methods are designed for single-task adaptation, parameter-efficient training in Multi-Task Learning (MTL) architectures is still unexplored. In this paper, we introduce MTLoRA, a novel framework for parameter-efficient training of MTL models. MTLoRA employs Task-Agnostic and Task-Specific Low-Rank Adaptation modules, which effectively disentangle the parameter space in MTL fine-tuning, thereby enabling the model to adeptly handle both task specialization and interaction within MTL contexts. We applied MTLoRA to hierarchical-transformer-based MTL architectures, adapting them to multiple downstream dense prediction tasks. Our extensive experiments on the PASCAL dataset show that MTLoRA achieves higher accuracy on downstream tasks compared to fully fine-tuning the MTL model, while reducing the number of trainable parameters by 3.6x. Furthermore, MTLoRA establishes a Pareto-optimal trade-off between the number of trainable parameters and the accuracy of the downstream tasks, outperforming current state-of-the-art parameter-efficient training methods in both accuracy and efficiency.
https://openaccess.thecvf.com/content/CVPR2024/papers/Agiza_MTLoRA_Low-Rank_Adaptation_Approach_for_Efficient_Multi-Task_Learning_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Agiza_MTLoRA_Low-Rank_Adaptation_Approach_for_Efficient_Multi-Task_Learning_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Agiza_MTLoRA_Low-Rank_Adaptation_Approach_for_Efficient_Multi-Task_Learning_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Agiza_MTLoRA_Low-Rank_Adaptation_CVPR_2024_supplemental.pdf
null
Hierarchical Patch Diffusion Models for High-Resolution Video Generation
Ivan Skorokhodov, Willi Menapace, Aliaksandr Siarohin, Sergey Tulyakov
Diffusion models have demonstrated remarkable performance in image and video synthesis. However, scaling them to high-resolution inputs is challenging and requires restructuring the diffusion pipeline into multiple independent components, limiting scalability and complicating downstream applications. In this work, we study patch diffusion models (PDMs) -- a diffusion paradigm which models the distribution of patches rather than whole inputs, keeping up to 0.7% of the original pixels. This makes it very efficient during training and unlocks end-to-end optimization on high-resolution videos. We improve PDMs in two principled ways. First, to enforce consistency between patches, we develop deep context fusion -- an architectural technique that propagates context information from low-scale to high-scale patches in a hierarchical manner. Second, to accelerate training and inference, we propose adaptive computation, which allocates more network capacity and computation towards coarse image details. The resulting model sets a new state-of-the-art FVD score of 66.32 and Inception Score of 87.68 in class-conditional video generation on UCF-101 256x256, surpassing recent methods by more than 100%. We then show that it can be rapidly fine-tuned from a base 36x64 low-resolution generator for high-resolution 64x288x512 text-to-video synthesis. To the best of our knowledge, our model is the first diffusion-based architecture trained on such high resolutions entirely end-to-end. Project webpage: https://snap-research.github.io/hpdm.
https://openaccess.thecvf.com/content/CVPR2024/papers/Skorokhodov_Hierarchical_Patch_Diffusion_Models_for_High-Resolution_Video_Generation_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Skorokhodov_Hierarchical_Patch_Diffusion_Models_for_High-Resolution_Video_Generation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Skorokhodov_Hierarchical_Patch_Diffusion_Models_for_High-Resolution_Video_Generation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Skorokhodov_Hierarchical_Patch_Diffusion_CVPR_2024_supplemental.pdf
null
Motion Blur Decomposition with Cross-shutter Guidance
Xiang Ji, Haiyang Jiang, Yinqiang Zheng
Motion blur is a frequently observed image artifact, especially under insufficient illumination, where the exposure time has to be prolonged to collect more photons for a bright enough image. Rather than simply removing such blurring effects, recent research has aimed at decomposing a blurry image into multiple sharp images with spatial and temporal coherence. Since motion blur decomposition itself is highly ambiguous, priors from neighbouring frames or human annotation are usually needed for motion disambiguation. In this paper, inspired by the complementary exposure characteristics of a global shutter (GS) camera and a rolling shutter (RS) camera, we propose to utilize the ordered scanline-wise delay in a rolling shutter image to robustify motion decomposition of a single blurry image. To evaluate this novel dual imaging setting, we construct a triaxial system to collect realistic data, as well as a deep network architecture that explicitly addresses temporal and contextual information through reciprocal branches for cross-shutter motion blur decomposition. Experimental results verify the effectiveness of our proposed algorithm, as well as the validity of our dual imaging setting.
https://openaccess.thecvf.com/content/CVPR2024/papers/Ji_Motion_Blur_Decomposition_with_Cross-shutter_Guidance_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.01120
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Ji_Motion_Blur_Decomposition_with_Cross-shutter_Guidance_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Ji_Motion_Blur_Decomposition_with_Cross-shutter_Guidance_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ji_Motion_Blur_Decomposition_CVPR_2024_supplemental.pdf
null
Scene-adaptive and Region-aware Multi-modal Prompt for Open Vocabulary Object Detection
Xiaowei Zhao, Xianglong Liu, Duorui Wang, Yajun Gao, Zhide Liu
Open Vocabulary Object Detection (OVD) aims to detect objects from novel classes described by text inputs, based on the generalization ability of trained classes. Existing methods mainly focus on transferring knowledge from large vision-and-language models (VLMs) to detectors through knowledge distillation. However, these approaches show weak ability in adapting to diverse classes and in aligning image-level pre-training with region-level detection, thereby impeding effective knowledge transfer. Motivated by prompt tuning, we propose scene-adaptive and region-aware multi-modal prompts to address these issues by effectively adapting class-aware knowledge from the VLM to the detector at the region level. Specifically, to enhance adaptability to diverse classes, we design a scene-adaptive prompt generator from a scene perspective to consider both the commonality and diversity of the class distributions, and formulate a novel selection mechanism to facilitate the acquisition of common knowledge across all classes and specific insights relevant to each scene. Meanwhile, to bridge the gap between the pre-trained model and the detector, we present a region-aware multi-modal alignment module, which employs the region prompt to incorporate positional information for feature distillation and integrates textual prompts to align visual and linguistic representations. Extensive experimental results demonstrate that the proposed method significantly outperforms state-of-the-art models on the OV-COCO and OV-LVIS datasets, surpassing the current method by 3.0% mAP and 4.6% APr.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhao_Scene-adaptive_and_Region-aware_Multi-modal_Prompt_for_Open_Vocabulary_Object_Detection_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhao_Scene-adaptive_and_Region-aware_Multi-modal_Prompt_for_Open_Vocabulary_Object_Detection_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhao_Scene-adaptive_and_Region-aware_Multi-modal_Prompt_for_Open_Vocabulary_Object_Detection_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhao_Scene-adaptive_and_Region-aware_CVPR_2024_supplemental.pdf
null
MimicDiffusion: Purifying Adversarial Perturbation via Mimicking Clean Diffusion Model
Kaiyu Song, Hanjiang Lai, Yan Pan, Jian Yin
Deep neural networks (DNNs) are vulnerable to adversarial perturbation, where an imperceptible perturbation added to an image can fool the DNN. Diffusion-based adversarial purification uses a diffusion model to generate a clean image against such adversarial attacks. Unfortunately, the generative process of the diffusion model is itself inevitably affected by adversarial perturbation, since the diffusion model is also a deep neural network whose input carries the perturbation. In this work, we propose MimicDiffusion, a new diffusion-based adversarial purification technique that directly approximates the generative process of the diffusion model with the clean image as input. Concretely, we analyze the differences between the guided terms using the clean image and the adversarial sample. We then first implement MimicDiffusion based on Manhattan distance, and propose two guidance terms to purify the adversarial perturbation and approximate the clean diffusion model. Extensive experiments on three image datasets (CIFAR-10, CIFAR-100, and ImageNet) with three classifier backbones (WideResNet-70-16, WideResNet-28-10, and ResNet-50) demonstrate that MimicDiffusion performs significantly better than the state-of-the-art baselines. On CIFAR-10, CIFAR-100, and ImageNet, it achieves 92.67%, 61.35%, and 61.53% average robust accuracy, which are 18.49%, 13.23%, and 17.64% higher, respectively. The code is available at https://github.com/psky1111/MimicDiffusion.
https://openaccess.thecvf.com/content/CVPR2024/papers/Song_MimicDiffusion_Purifying_Adversarial_Perturbation_via_Mimicking_Clean_Diffusion_Model_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Song_MimicDiffusion_Purifying_Adversarial_Perturbation_via_Mimicking_Clean_Diffusion_Model_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Song_MimicDiffusion_Purifying_Adversarial_Perturbation_via_Mimicking_Clean_Diffusion_Model_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Song_MimicDiffusion_Purifying_Adversarial_CVPR_2024_supplemental.pdf
null
Neural Implicit Morphing of Face Images
Guilherme Schardong, Tiago Novello, Hallison Paz, Iurii Medvedev, Vinícius da Silva, Luiz Velho, Nuno Gonçalves
Face morphing is a problem in computer graphics with numerous artistic and forensic applications. It is challenging due to variations in pose, lighting, gender, and ethnicity. The task consists of a warping for feature alignment and a blending for a seamless transition between the warped images. We propose to leverage coordinate-based neural networks to represent such warpings and blendings of face images. During training, we exploit the smoothness and flexibility of such networks by combining energy functionals employed in classical approaches, without discretizations. Additionally, our method is time-dependent, allowing a continuous warping/blending of the images. During morphing inference, we need both the direct and inverse transformations of the time-dependent warping. The first (second) is responsible for warping the target (source) image into the source (target) image. Our neural warping stores both maps in a single network, dismissing the need for inverting them. The results of our experiments indicate that our method is competitive with both classical and generative models under the lens of image quality and face-morphing detectors. Aesthetically, the resulting images present a seamless blending of diverse faces, not yet usual in the literature.
https://openaccess.thecvf.com/content/CVPR2024/papers/Schardong_Neural_Implicit_Morphing_of_Face_Images_CVPR_2024_paper.pdf
http://arxiv.org/abs/2308.13888
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Schardong_Neural_Implicit_Morphing_of_Face_Images_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Schardong_Neural_Implicit_Morphing_of_Face_Images_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Schardong_Neural_Implicit_Morphing_CVPR_2024_supplemental.zip
null
UniGS: Unified Representation for Image Generation and Segmentation
Lu Qi, Lehan Yang, Weidong Guo, Yu Xu, Bo Du, Varun Jampani, Ming-Hsuan Yang
This paper introduces a novel unified representation of diffusion models for image generation and segmentation. Specifically, we use a colormap to represent entity-level masks, addressing the challenge of varying entity numbers while keeping the representation closely aligned with the image RGB domain. Two novel modules, a location-aware color palette and a progressive dichotomy module, are proposed to support our mask representation. On the one hand, the location-aware palette guarantees that colors are consistent with entities' locations. On the other hand, the progressive dichotomy module can efficiently decode the synthesized colormap into high-quality entity-level masks via a depth-first binary search, without knowing the number of clusters. To tackle the lack of large-scale segmentation training data, we employ an inpainting pipeline, and then improve the flexibility of diffusion models across various tasks, including inpainting, image synthesis, referring segmentation, and entity segmentation. Comprehensive experiments validate the efficiency of our approach, demonstrating segmentation mask quality comparable to the state of the art and adaptability to multiple tasks.
https://openaccess.thecvf.com/content/CVPR2024/papers/Qi_UniGS_Unified_Representation_for_Image_Generation_and_Segmentation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.01985
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Qi_UniGS_Unified_Representation_for_Image_Generation_and_Segmentation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Qi_UniGS_Unified_Representation_for_Image_Generation_and_Segmentation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Qi_UniGS_Unified_Representation_CVPR_2024_supplemental.pdf
null
Robust Synthetic-to-Real Transfer for Stereo Matching
Jiawei Zhang, Jiahe Li, Lei Huang, Xiaohan Yu, Lin Gu, Jin Zheng, Xiao Bai
With advancements in domain-generalized stereo matching networks, models pre-trained on synthetic data demonstrate strong robustness to unseen domains. However, few studies have investigated their robustness after fine-tuning in real-world scenarios, during which the domain generalization ability can be seriously degraded. In this paper, we explore fine-tuning stereo matching networks without compromising their robustness to unseen domains. Our motivation stems from comparing Ground Truth (GT) versus Pseudo Label (PL) for fine-tuning: GT degrades, but PL preserves, the domain generalization ability. Empirically, we find that the difference between GT and PL carries valuable information that can regularize networks during fine-tuning. We also propose a framework to utilize this difference for fine-tuning, consisting of a frozen Teacher, an exponential moving average (EMA) Teacher, and a Student network. The core idea is to utilize the EMA Teacher to measure what the Student has learned and dynamically improve GT and PL for fine-tuning. We integrate our framework with state-of-the-art networks and evaluate its effectiveness on several real-world datasets. Extensive experiments show that our method effectively preserves the domain generalization ability during fine-tuning.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_Robust_Synthetic-to-Real_Transfer_for_Stereo_Matching_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.07705
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Robust_Synthetic-to-Real_Transfer_for_Stereo_Matching_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Robust_Synthetic-to-Real_Transfer_for_Stereo_Matching_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhang_Robust_Synthetic-to-Real_Transfer_CVPR_2024_supplemental.pdf
null
Instance-Aware Group Quantization for Vision Transformers
Jaehyeon Moon, Dohyung Kim, Junyong Cheon, Bumsub Ham
Post-training quantization (PTQ) is an efficient model compression technique that quantizes a pretrained full-precision model using only a small calibration set of unlabeled samples, without retraining. PTQ methods for convolutional neural networks (CNNs) provide quantization results comparable to their full-precision counterparts. Directly applying them to vision transformers (ViTs), however, incurs severe performance degradation, mainly due to the differences in architecture between CNNs and ViTs. In particular, the distribution of activations for each channel varies drastically according to input instances, making PTQ methods for CNNs inappropriate for ViTs. To address this, we introduce instance-aware group quantization for ViTs (IGQ-ViT). To this end, we propose to split the channels of activation maps into multiple groups dynamically for each input instance, such that activations within each group share similar statistical properties. We also extend our scheme to quantize softmax attentions across tokens. In addition, the number of groups for each layer is adjusted to minimize the discrepancies between predictions from quantized and full-precision models under a bit-operation (BOP) constraint. We show extensive experimental results on image classification, object detection, and instance segmentation with various transformer architectures, demonstrating the effectiveness of our approach.
https://openaccess.thecvf.com/content/CVPR2024/papers/Moon_Instance-Aware_Group_Quantization_for_Vision_Transformers_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.00928
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Moon_Instance-Aware_Group_Quantization_for_Vision_Transformers_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Moon_Instance-Aware_Group_Quantization_for_Vision_Transformers_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Moon_Instance-Aware_Group_Quantization_CVPR_2024_supplemental.pdf
null
A General and Efficient Training for Transformer via Token Expansion
Wenxuan Huang, Yunhang Shen, Jiao Xie, Baochang Zhang, Gaoqi He, Ke Li, Xing Sun, Shaohui Lin
The remarkable performance of Vision Transformers (ViTs) typically comes with an extremely large training cost. Existing methods have attempted to accelerate the training of ViTs, yet typically disregard universality or suffer accuracy drops. Meanwhile, they break the training consistency of the original transformers, including the consistency of hyper-parameters, architecture, and strategy, which prevents them from being widely applied to different Transformer networks. In this paper, we propose a novel token growth scheme, Token Expansion (ToE), to achieve consistent training acceleration for ViTs. We introduce an "initialization-expansion-merging" pipeline to maintain the integrity of the intermediate feature distribution of the original transformers, preventing the loss of crucial learnable information during training. ToE can not only be seamlessly integrated into the training and fine-tuning process of transformers (e.g., DeiT and LV-ViT), but is also effective for efficient training frameworks (e.g., EfficientTrain), without altering the original training hyper-parameters or architecture, and without introducing additional training strategies. Extensive experiments demonstrate that ToE achieves about 1.3x faster training of ViTs in a lossless manner, or even with performance gains over the full-token training baselines. Code is available at https://github.com/Osilly/TokenExpansion.
https://openaccess.thecvf.com/content/CVPR2024/papers/Huang_A_General_and_Efficient_Training_for_Transformer_via_Token_Expansion_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.00672
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Huang_A_General_and_Efficient_Training_for_Transformer_via_Token_Expansion_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Huang_A_General_and_Efficient_Training_for_Transformer_via_Token_Expansion_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Huang_A_General_and_CVPR_2024_supplemental.pdf
null
GenZI: Zero-Shot 3D Human-Scene Interaction Generation
Lei Li, Angela Dai
Can we synthesize 3D humans interacting with scenes without learning from any 3D human-scene interaction data? We propose GenZI, the first zero-shot approach to generating 3D human-scene interactions. Key to GenZI is our distillation of interaction priors from large vision-language models (VLMs), which have learned a rich semantic space of 2D human-scene compositions. Given a natural language description and a coarse point location of the desired interaction in a 3D scene, we first leverage VLMs to imagine plausible 2D human interactions inpainted into multiple rendered views of the scene. We then formulate a robust iterative optimization to synthesize the pose and shape of a 3D human model in the scene, guided by consistency with the 2D interaction hypotheses. In contrast to existing learning-based approaches, GenZI circumvents the conventional need for captured 3D interaction data and allows for flexible control of the 3D interaction synthesis with easy-to-use text prompts. Extensive experiments show that our zero-shot approach has high flexibility and generality, making it applicable to diverse scene types, including both indoor and outdoor environments.
https://openaccess.thecvf.com/content/CVPR2024/papers/Li_GenZI_Zero-Shot_3D_Human-Scene_Interaction_Generation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2311.17737
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Li_GenZI_Zero-Shot_3D_Human-Scene_Interaction_Generation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Li_GenZI_Zero-Shot_3D_Human-Scene_Interaction_Generation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_GenZI_Zero-Shot_3D_CVPR_2024_supplemental.pdf
null
Tyche: Stochastic In-Context Learning for Medical Image Segmentation
Marianne Rakic, Hallee E. Wong, Jose Javier Gonzalez Ortiz, Beth A. Cimini, John V. Guttag, Adrian V. Dalca
Existing learning-based solutions to medical image segmentation have two important shortcomings. First, for most new segmentation tasks, a new model has to be trained or fine-tuned. This requires extensive resources and machine-learning expertise, and is therefore often infeasible for medical researchers and clinicians. Second, most existing segmentation methods produce a single deterministic segmentation mask for a given image. In practice, however, there is often considerable uncertainty about what constitutes the correct segmentation, and different expert annotators will often segment the same image differently. We tackle both of these problems with Tyche, a framework that uses a context set to generate stochastic predictions for previously unseen tasks without the need to retrain. Tyche differs from other in-context segmentation methods in two important ways. (1) We introduce a novel convolution block architecture that enables interactions among predictions. (2) We introduce in-context test-time augmentation, a new mechanism to provide prediction stochasticity. When combined with appropriate model design and loss functions, Tyche can predict a set of plausible, diverse segmentation candidates for new or unseen medical images and segmentation tasks without the need to retrain. Code available at: https://tyche.csail.mit.edu/.
https://openaccess.thecvf.com/content/CVPR2024/papers/Rakic_Tyche_Stochastic_In-Context_Learning_for_Medical_Image_Segmentation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2401.13650
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Rakic_Tyche_Stochastic_In-Context_Learning_for_Medical_Image_Segmentation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Rakic_Tyche_Stochastic_In-Context_Learning_for_Medical_Image_Segmentation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Rakic_Tyche_Stochastic_In-Context_CVPR_2024_supplemental.pdf
null
DiffAssemble: A Unified Graph-Diffusion Model for 2D and 3D Reassembly
Gianluca Scarpellini, Stefano Fiorini, Francesco Giuliari, Pietro Moreiro, Alessio Del Bue
Reassembly tasks play a fundamental role in many fields, and multiple approaches exist to solve specific reassembly problems. In this context, we posit that a general unified model can effectively address them all, irrespective of the input data type (image, 3D, etc.). We introduce DiffAssemble, a Graph Neural Network (GNN)-based architecture that learns to solve reassembly tasks using a diffusion model formulation. Our method treats the elements of a set, whether 2D patches or 3D object fragments, as nodes of a spatial graph. Training is performed by introducing noise into the position and rotation of the elements and iteratively denoising them to reconstruct the coherent initial pose. DiffAssemble achieves state-of-the-art (SOTA) results in most 2D and 3D reassembly tasks and is the first learning-based approach that solves 2D puzzles for both rotation and translation. Furthermore, we highlight its remarkable reduction in run-time, performing 11 times faster than the quickest optimization-based method for puzzle solving.
https://openaccess.thecvf.com/content/CVPR2024/papers/Scarpellini_DiffAssemble_A_Unified_Graph-Diffusion_Model_for_2D_and_3D_Reassembly_CVPR_2024_paper.pdf
http://arxiv.org/abs/2402.19302
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Scarpellini_DiffAssemble_A_Unified_Graph-Diffusion_Model_for_2D_and_3D_Reassembly_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Scarpellini_DiffAssemble_A_Unified_Graph-Diffusion_Model_for_2D_and_3D_Reassembly_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Scarpellini_DiffAssemble_A_Unified_CVPR_2024_supplemental.zip
null
NeISF: Neural Incident Stokes Field for Geometry and Material Estimation
Chenhao Li, Taishi Ono, Takeshi Uemori, Hajime Mihara, Alexander Gatto, Hajime Nagahara, Yusuke Moriuchi
Multi-view inverse rendering is the problem of estimating scene parameters such as shapes, materials, or illumination from a sequence of images captured under different viewpoints. Many approaches, however, assume a single light bounce and thus fail to recover challenging scenarios like inter-reflections. On the other hand, simply extending those methods to consider multi-bounced light requires more assumptions to alleviate the ambiguity. To address this problem, we propose Neural Incident Stokes Fields (NeISF), a multi-view inverse rendering framework that reduces ambiguities using polarization cues. The primary motivation for using polarization cues is that polarization accumulates over multi-bounced light, providing rich information about geometry and material. Based on this knowledge, the proposed incident Stokes field efficiently models the accumulated polarization effect with the aid of an original physically-based differentiable polarimetric renderer. Lastly, experimental results show that our method outperforms existing works in synthetic and real scenarios.
https://openaccess.thecvf.com/content/CVPR2024/papers/Li_NeISF_Neural_Incident_Stokes_Field_for_Geometry_and_Material_Estimation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2311.13187
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Li_NeISF_Neural_Incident_Stokes_Field_for_Geometry_and_Material_Estimation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Li_NeISF_Neural_Incident_Stokes_Field_for_Geometry_and_Material_Estimation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_NeISF_Neural_Incident_CVPR_2024_supplemental.pdf
null
Training-Free Open-Vocabulary Segmentation with Offline Diffusion-Augmented Prototype Generation
Luca Barsellotti, Roberto Amoroso, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara
Open-vocabulary semantic segmentation aims at segmenting arbitrary categories expressed in textual form. Previous works have trained over large amounts of image-caption pairs to enforce pixel-level multimodal alignments. However, captions provide global information about the semantics of a given image but lack direct localization of individual concepts. Furthermore, training on large-scale datasets inevitably brings significant computational costs. In this paper, we propose FreeDA, a training-free diffusion-augmented method for open-vocabulary semantic segmentation, which leverages the ability of diffusion models to visually localize generated concepts, and local-global similarities to match class-agnostic regions with semantic classes. Our approach involves an offline stage in which textual-visual reference embeddings are collected, starting from a large set of captions and leveraging visual and semantic contexts. At test time, these are queried to support the visual matching process, which is carried out by jointly considering class-agnostic regions and global semantic similarities. Extensive analyses demonstrate that FreeDA achieves state-of-the-art performance on five datasets, surpassing previous methods by more than 7.0 average points in terms of mIoU, without requiring any training. Our source code is available at https://aimagelab.github.io/freeda/.
https://openaccess.thecvf.com/content/CVPR2024/papers/Barsellotti_Training-Free_Open-Vocabulary_Segmentation_with_Offline_Diffusion-Augmented_Prototype_Generation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.06542
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Barsellotti_Training-Free_Open-Vocabulary_Segmentation_with_Offline_Diffusion-Augmented_Prototype_Generation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Barsellotti_Training-Free_Open-Vocabulary_Segmentation_with_Offline_Diffusion-Augmented_Prototype_Generation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Barsellotti_Training-Free_Open-Vocabulary_Segmentation_CVPR_2024_supplemental.pdf
null
YOLO-World: Real-Time Open-Vocabulary Object Detection
Tianheng Cheng, Lin Song, Yixiao Ge, Wenyu Liu, Xinggang Wang, Ying Shan
The You Only Look Once (YOLO) series of detectors has established itself as an efficient and practical tool. However, its reliance on predefined and trained object categories limits its applicability in open scenarios. Addressing this limitation, we introduce YOLO-World, an innovative approach that enhances YOLO with open-vocabulary detection capabilities through vision-language modeling and pre-training on large-scale datasets. Specifically, we propose a new Re-parameterizable Vision-Language Path Aggregation Network (RepVL-PAN) and a region-text contrastive loss to facilitate the interaction between visual and linguistic information. Our method excels in detecting a wide range of objects in a zero-shot manner with high efficiency. On the challenging LVIS dataset, YOLO-World achieves 35.4 AP at 52.0 FPS on a V100, outperforming many state-of-the-art methods in terms of both accuracy and speed. Furthermore, the fine-tuned YOLO-World achieves remarkable performance on several downstream tasks, including object detection and open-vocabulary instance segmentation. Code and models are available at https://github.com/AILab-CVC/YOLO-World
https://openaccess.thecvf.com/content/CVPR2024/papers/Cheng_YOLO-World_Real-Time_Open-Vocabulary_Object_Detection_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Cheng_YOLO-World_Real-Time_Open-Vocabulary_Object_Detection_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Cheng_YOLO-World_Real-Time_Open-Vocabulary_Object_Detection_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Cheng_YOLO-World_Real-Time_Open-Vocabulary_CVPR_2024_supplemental.pdf
null
ViT-Lens: Towards Omni-modal Representations
Weixian Lei, Yixiao Ge, Kun Yi, Jianfeng Zhang, Difei Gao, Dylan Sun, Yuying Ge, Ying Shan, Mike Zheng Shou
Aiming to advance AI agents, large foundation models significantly improve reasoning and instruction execution, yet the current focus on vision and language neglects the potential of perceiving diverse modalities in open-world environments. However, the success of data-driven vision and language models is costly or even infeasible to reproduce for rare modalities. In this paper, we present ViT-Lens, which facilitates efficient omni-modal representation learning by perceiving novel modalities with a pretrained ViT and aligning them to a pre-defined space. Specifically, a modality-specific lens is tuned to project any-modal signals to an intermediate embedding space, which is then processed by a strong ViT with pre-trained visual knowledge. The encoded representations are optimized toward alignment with a modal-independent space pre-defined by off-the-shelf foundation models. ViT-Lens provides a unified solution for representation learning of increasing modalities with two appealing advantages: (i) unlocking the great potential of pretrained ViTs for novel modalities effectively with an efficient data regime; (ii) enabling emergent downstream capabilities through modality alignment and shared ViT parameters. We tailor ViT-Lens to learn representations for 3D point clouds, depth, audio, tactile, and EEG, and set new state-of-the-art results across various understanding tasks, such as zero-shot classification. By seamlessly integrating ViT-Lens into multimodal foundation models, we enable any-modality-to-text and image generation in a zero-shot manner. Code and models are available at https://github.com/TencentARC/ViT-Lens.
https://openaccess.thecvf.com/content/CVPR2024/papers/Lei_ViT-Lens_Towards_Omni-modal_Representations_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Lei_ViT-Lens_Towards_Omni-modal_Representations_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Lei_ViT-Lens_Towards_Omni-modal_Representations_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Lei_ViT-Lens_Towards_Omni-modal_CVPR_2024_supplemental.pdf
null
Cross-Dimension Affinity Distillation for 3D EM Neuron Segmentation
Xiaoyu Liu, Miaomiao Cai, Yinda Chen, Yueyi Zhang, Te Shi, Ruobing Zhang, Xuejin Chen, Zhiwei Xiong
Accurate 3D neuron segmentation from electron microscopy (EM) volumes is crucial for neuroscience research. However, the complex neuron morphology often leads to over-merge and over-segmentation results. Recent advancements utilize 3D CNNs to predict a 3D affinity map with improved accuracy, but suffer from two challenges: high computational cost and limited input size, especially for practical deployment on large-scale EM volumes. To address these challenges, we propose a novel method to leverage lightweight 2D CNNs for efficient neuron segmentation. Our method employs a 2D Y-shaped network to generate two embedding maps from adjacent 2D sections, which are then converted into an affinity map by measuring their embedding distance. While the 2D network better captures pixel dependencies inside sections with larger input sizes, it overlooks inter-section dependencies. To overcome this, we introduce a cross-dimension affinity distillation (CAD) strategy that transfers inter-section dependency knowledge from a 3D teacher network to the 2D student network by ensuring consistency between their output affinity maps. Additionally, we design a feature grafting interaction (FGI) module to enhance knowledge transfer by grafting embedding maps from the 2D student onto those from the 3D teacher. Extensive experiments on multiple EM neuron segmentation datasets, including a newly built one of our own, demonstrate that our method achieves superior performance over state-of-the-art methods with only 1/20 of the inference latency.
https://openaccess.thecvf.com/content/CVPR2024/papers/Liu_Cross-Dimension_Affinity_Distillation_for_3D_EM_Neuron_Segmentation_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Liu_Cross-Dimension_Affinity_Distillation_for_3D_EM_Neuron_Segmentation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Liu_Cross-Dimension_Affinity_Distillation_for_3D_EM_Neuron_Segmentation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Liu_Cross-Dimension_Affinity_Distillation_CVPR_2024_supplemental.pdf
null
HUGS: Human Gaussian Splats
Muhammed Kocabas, Jen-Hao Rick Chang, James Gabriel, Oncel Tuzel, Anurag Ranjan
Recent advances in neural rendering have improved both training and rendering times by orders of magnitude. While these methods demonstrate state-of-the-art quality and speed, they are designed for photogrammetry of static scenes and do not generalize well to freely moving humans in the environment. In this work, we introduce Human Gaussian Splats (HUGS), which represents an animatable human together with the scene using 3D Gaussian Splatting (3DGS). Our method takes only a monocular video with a small number of (50-100) frames, and it automatically learns to disentangle the static scene and a fully animatable human avatar within 30 minutes. We utilize the SMPL body model to initialize the human Gaussians. To capture details that are not modeled by SMPL (e.g., clothing, hair), we allow the 3D Gaussians to deviate from the human body model. Utilizing 3D Gaussians for animated humans brings new challenges, including the artifacts created when articulating the Gaussians. We propose to jointly optimize the linear blend skinning weights to coordinate the movements of individual Gaussians during animation. Our approach enables novel-pose synthesis of the human and novel-view synthesis of both the human and the scene. We achieve state-of-the-art rendering quality at a rendering speed of 60 FPS while being 100x faster to train than previous work.
https://openaccess.thecvf.com/content/CVPR2024/papers/Kocabas_HUGS_Human_Gaussian_Splats_CVPR_2024_paper.pdf
http://arxiv.org/abs/2311.17910
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Kocabas_HUGS_Human_Gaussian_Splats_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Kocabas_HUGS_Human_Gaussian_Splats_CVPR_2024_paper.html
CVPR 2024
null
null
GeoChat: Grounded Large Vision-Language Model for Remote Sensing
Kartik Kuckreja, Muhammad Sohail Danish, Muzammal Naseer, Abhijit Das, Salman Khan, Fahad Shahbaz Khan
Recent advancements in Large Vision-Language Models (VLMs) have shown great promise in natural image domains, allowing users to hold a dialogue about given visual content. However, such general-domain VLMs perform poorly in Remote Sensing (RS) scenarios, leading to inaccurate or fabricated information when presented with RS domain-specific queries. Such behavior emerges due to the unique challenges introduced by RS imagery. For example, to handle high-resolution RS imagery with diverse scale changes across categories and many small objects, region-level reasoning is necessary alongside holistic scene interpretation. Furthermore, the lack of domain-specific multimodal instruction-following data, as well as of strong backbone models for RS, makes it hard for models to align their behavior with user queries. To address these limitations, we propose GeoChat - the first versatile remote sensing VLM that offers multitask conversational capabilities with high-resolution RS images. Specifically, GeoChat can not only answer image-level queries but also accepts region inputs to hold region-specific dialogue. Furthermore, it can visually ground objects in its responses by referring to their spatial coordinates. To address the lack of domain-specific datasets, we generate a novel RS multimodal instruction-following dataset by extending image-text pairs from existing diverse RS datasets. Leveraging this rich dataset, we fine-tune our remote sensing VLM based on the LLaVA-1.5 architecture. We establish a comprehensive benchmark for RS multitask conversations and compare with a number of baseline methods. GeoChat demonstrates robust zero-shot performance on various remote sensing tasks, e.g., image and region captioning, visual question answering, scene classification, visually grounded conversations, and referring object detection. Our code will be open-sourced.
https://openaccess.thecvf.com/content/CVPR2024/papers/Kuckreja_GeoChat_Grounded_Large_Vision-Language_Model_for_Remote_Sensing_CVPR_2024_paper.pdf
http://arxiv.org/abs/2311.15826
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Kuckreja_GeoChat_Grounded_Large_Vision-Language_Model_for_Remote_Sensing_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Kuckreja_GeoChat_Grounded_Large_Vision-Language_Model_for_Remote_Sensing_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Kuckreja_GeoChat_Grounded_Large_CVPR_2024_supplemental.pdf
null
PhysPT: Physics-aware Pretrained Transformer for Estimating Human Dynamics from Monocular Videos
Yufei Zhang, Jeffrey O. Kephart, Zijun Cui, Qiang Ji
While current methods have shown promising progress in estimating 3D human motion from monocular videos, their motion estimates are often physically unrealistic because they mainly consider kinematics. In this paper, we introduce the Physics-aware Pretrained Transformer (PhysPT), which improves kinematics-based motion estimates and infers motion forces. PhysPT exploits a Transformer encoder-decoder backbone to effectively learn human dynamics in a self-supervised manner. Moreover, it incorporates physics principles governing human motion. Specifically, we build a physics-based body representation and contact force model. We leverage them to impose novel physics-inspired training losses (i.e., force loss, contact loss, and Euler-Lagrange loss), enabling PhysPT to capture physical properties of the human body and the forces it experiences. Experiments demonstrate that, once trained, PhysPT can be directly applied to kinematics-based estimates to significantly enhance their physical plausibility and generate favourable motion forces. Furthermore, we show that these physically meaningful quantities translate into improved accuracy on an important downstream task: human action recognition.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_PhysPT_Physics-aware_Pretrained_Transformer_for_Estimating_Human_Dynamics_from_Monocular_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.04430
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_PhysPT_Physics-aware_Pretrained_Transformer_for_Estimating_Human_Dynamics_from_Monocular_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_PhysPT_Physics-aware_Pretrained_Transformer_for_Estimating_Human_Dynamics_from_Monocular_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhang_PhysPT_Physics-aware_Pretrained_CVPR_2024_supplemental.pdf
null
Producing and Leveraging Online Map Uncertainty in Trajectory Prediction
Xunjiang Gu, Guanyu Song, Igor Gilitschenski, Marco Pavone, Boris Ivanovic
High-definition (HD) maps have played an integral role in the development of modern autonomous vehicle (AV) stacks, albeit with high associated labeling and maintenance costs. As a result, many recent works have proposed methods for estimating HD maps online from sensor data, enabling AVs to operate outside of previously-mapped regions. However, current online map estimation approaches are developed in isolation from their downstream tasks, complicating their integration in AV stacks. In particular, they do not produce uncertainty or confidence estimates. In this work, we extend multiple state-of-the-art online map estimation methods to additionally estimate uncertainty, and show how this enables more tightly integrating online mapping with trajectory forecasting. In doing so, we find that incorporating uncertainty yields up to 50% faster training convergence and up to 15% better prediction performance on the real-world nuScenes driving dataset.
https://openaccess.thecvf.com/content/CVPR2024/papers/Gu_Producing_and_Leveraging_Online_Map_Uncertainty_in_Trajectory_Prediction_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.16439
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Gu_Producing_and_Leveraging_Online_Map_Uncertainty_in_Trajectory_Prediction_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Gu_Producing_and_Leveraging_Online_Map_Uncertainty_in_Trajectory_Prediction_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Gu_Producing_and_Leveraging_CVPR_2024_supplemental.zip
null
PerceptionGPT: Effectively Fusing Visual Perception into LLM
Renjie Pi, Lewei Yao, Jiahui Gao, Jipeng Zhang, Tong Zhang
The integration of visual inputs with large language models (LLMs) has led to remarkable advancements in multi-modal capabilities, giving rise to vision large language models (VLLMs). However, effectively harnessing LLMs for intricate visual perception tasks, such as detection and segmentation, remains a challenge. Conventional approaches achieve this by transforming perception signals (e.g., bounding boxes, segmentation masks) into sequences of discrete tokens, which struggle with precision errors and introduce further complexities for training. In this paper, we present a novel end-to-end framework named PerceptionGPT, which represents the perception signals using the LLM's dynamic token embeddings. Specifically, we leverage lightweight encoders and decoders to handle the perception signals in the LLM's embedding space, which takes advantage of the representation power of the high-dimensional token embeddings. Our approach significantly eases the training difficulties associated with the discrete representations in prior methods. Furthermore, owing to our compact representation, the inference speed is also greatly boosted. Consequently, PerceptionGPT enables accurate, flexible, and efficient handling of complex perception signals. We validate the effectiveness of our approach through extensive experiments. The results demonstrate significant improvements over previous methods, with only 4% trainable parameters and less than 25% training time.
https://openaccess.thecvf.com/content/CVPR2024/papers/Pi_PerceptionGPT_Effectively_Fusing_Visual_Perception_into_LLM_CVPR_2024_paper.pdf
http://arxiv.org/abs/2311.06612
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Pi_PerceptionGPT_Effectively_Fusing_Visual_Perception_into_LLM_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Pi_PerceptionGPT_Effectively_Fusing_Visual_Perception_into_LLM_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Pi_PerceptionGPT_Effectively_Fusing_CVPR_2024_supplemental.pdf
null
Probabilistic Speech-Driven 3D Facial Motion Synthesis: New Benchmarks, Methods and Applications
Karren D. Yang, Anurag Ranjan, Jen-Hao Rick Chang, Raviteja Vemulapalli, Oncel Tuzel
We consider the task of animating 3D facial geometry from a speech signal. Existing works are primarily deterministic, focusing on learning a one-to-one mapping from speech signal to 3D face meshes on small datasets with limited speakers. While these models can achieve high-quality lip articulation for speakers in the training set, they are unable to capture the full and diverse distribution of 3D facial motions that accompany speech in the real world. Importantly, the relationship between speech and facial motion is one-to-many, containing both inter-speaker and intra-speaker variations and necessitating a probabilistic approach. In this paper, we identify and address key challenges that have so far limited the development of probabilistic models: the lack of datasets and metrics that are suitable for training and evaluating them, as well as the difficulty of designing a model that generates diverse results while remaining faithful to a strong conditioning signal such as speech. We first propose large-scale benchmark datasets and metrics suitable for probabilistic modeling. Then, we demonstrate a probabilistic model that achieves both diversity and fidelity to speech, outperforming other methods across the proposed benchmarks. Finally, we showcase useful applications of probabilistic models trained on these large-scale datasets: we can generate diverse speech-driven 3D facial motion that matches unseen speaker styles extracted from reference clips, and our synthetic meshes can be used to improve the performance of downstream audio-visual models.
https://openaccess.thecvf.com/content/CVPR2024/papers/Yang_Probabilistic_Speech-Driven_3D_Facial_Motion_Synthesis_New_Benchmarks_Methods_and_CVPR_2024_paper.pdf
http://arxiv.org/abs/2311.18168
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Yang_Probabilistic_Speech-Driven_3D_Facial_Motion_Synthesis_New_Benchmarks_Methods_and_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Yang_Probabilistic_Speech-Driven_3D_Facial_Motion_Synthesis_New_Benchmarks_Methods_and_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yang_Probabilistic_Speech-Driven_3D_CVPR_2024_supplemental.zip
null
LASO: Language-guided Affordance Segmentation on 3D Object
Yicong Li, Na Zhao, Junbin Xiao, Chun Feng, Xiang Wang, Tat-seng Chua
Segmenting affordances in 3D data is key to bridging perception and action in robots. Existing efforts mostly focus on the visual side and overlook affordance knowledge from the semantic aspect. This oversight not only limits their generalization to unseen objects but, more importantly, hinders their synergy with large language models (LLMs), which are excellent task planners that can decompose an overarching command into agent-actionable instructions. In this regard, we propose a novel task, Language-guided Affordance Segmentation on 3D Object (LASO), which challenges a model to segment the part of a 3D object relevant to a given affordance question. To facilitate the task, we contribute a dataset comprising 19,751 point-question pairs, covering 8,434 object shapes and 870 expert-crafted questions. As a pioneer solution, we further propose PointRefer, which highlights an adaptive fusion module to identify target affordance regions at different scales. To ensure text-aware segmentation, we adopt a set of affordance queries conditioned on linguistic cues to generate dynamic kernels. These kernels are further convolved with point features to generate a segmentation mask. Comprehensive experiments and analyses validate PointRefer's effectiveness. With these efforts, we hope that LASO can steer the direction of 3D affordance research, guiding it towards enhanced integration with the evolving capabilities of LLMs.
https://openaccess.thecvf.com/content/CVPR2024/papers/Li_LASO_Language-guided_Affordance_Segmentation_on_3D_Object_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Li_LASO_Language-guided_Affordance_Segmentation_on_3D_Object_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Li_LASO_Language-guided_Affordance_Segmentation_on_3D_Object_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_LASO_Language-guided_Affordance_CVPR_2024_supplemental.pdf
null
Riemannian Multinomial Logistics Regression for SPD Neural Networks
Ziheng Chen, Yue Song, Gaowen Liu, Ramana Rao Kompella, Xiao-Jun Wu, Nicu Sebe
Deep neural networks for learning Symmetric Positive Definite (SPD) matrices are gaining increasing attention in machine learning. Despite the significant progress, most existing SPD networks use traditional Euclidean classifiers on an approximated space rather than intrinsic classifiers that accurately capture the geometry of SPD manifolds. Inspired by Hyperbolic Neural Networks (HNNs), we propose Riemannian Multinomial Logistics Regression (RMLR) for the classification layers in SPD networks. We introduce a unified framework for building Riemannian classifiers under metrics pulled back from the Euclidean space, and showcase our framework under the parameterized Log-Euclidean Metric (LEM) and Log-Cholesky Metric (LCM). Moreover, our framework offers a novel intrinsic explanation for the most popular LogEig classifier in existing SPD networks. The effectiveness of our method is demonstrated in three applications: radar recognition, human action recognition, and electroencephalography (EEG) classification. The code is available at https://github.com/GitZH-Chen/SPDMLR.git.
https://openaccess.thecvf.com/content/CVPR2024/papers/Chen_Riemannian_Multinomial_Logistics_Regression_for_SPD_Neural_Networks_CVPR_2024_paper.pdf
http://arxiv.org/abs/2305.11288
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Chen_Riemannian_Multinomial_Logistics_Regression_for_SPD_Neural_Networks_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Chen_Riemannian_Multinomial_Logistics_Regression_for_SPD_Neural_Networks_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Chen_Riemannian_Multinomial_Logistics_CVPR_2024_supplemental.pdf
null
FreGS: 3D Gaussian Splatting with Progressive Frequency Regularization
Jiahui Zhang, Fangneng Zhan, Muyu Xu, Shijian Lu, Eric Xing
3D Gaussian splatting has achieved impressive performance in real-time novel view synthesis. However, it often suffers from over-reconstruction during Gaussian densification, where high-variance image regions are covered by only a few large Gaussians, leading to blur and artifacts in the rendered images. We design a progressive frequency regularization (FreGS) technique to tackle the over-reconstruction issue within the frequency space. Specifically, FreGS performs coarse-to-fine Gaussian densification by exploiting low-to-high frequency components, which can be easily extracted with low-pass and high-pass filters in the Fourier space. By minimizing the discrepancy between the frequency spectrum of the rendered image and that of the corresponding ground truth, it achieves high-quality Gaussian densification and effectively alleviates the over-reconstruction of Gaussian splatting. Experiments over multiple widely adopted benchmarks (e.g., Mip-NeRF360, Tanks-and-Temples, and Deep Blending) show that FreGS achieves superior novel view synthesis and consistently outperforms the state of the art.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_FreGS_3D_Gaussian_Splatting_with_Progressive_Frequency_Regularization_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.06908
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_FreGS_3D_Gaussian_Splatting_with_Progressive_Frequency_Regularization_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_FreGS_3D_Gaussian_Splatting_with_Progressive_Frequency_Regularization_CVPR_2024_paper.html
CVPR 2024
null
null
Discriminative Sample-Guided and Parameter-Efficient Feature Space Adaptation for Cross-Domain Few-Shot Learning
Rashindrie Perera, Saman Halgamuge
In this paper, we look at cross-domain few-shot classification, which presents the challenging task of learning new classes in previously unseen domains with few labelled examples. Existing methods, though somewhat effective, encounter several limitations, which we alleviate through two significant improvements. First, we introduce a lightweight parameter-efficient adaptation strategy to address overfitting associated with fine-tuning a large number of parameters on small datasets. This strategy employs a linear transformation of pre-trained features, significantly reducing the trainable parameter count. Second, we replace the traditional nearest-centroid classifier with a discriminative sample-aware loss function, enhancing the model's sensitivity to the inter- and intra-class variances within the training set for improved clustering in feature space. Empirical evaluations on the Meta-Dataset benchmark show that our approach not only improves accuracy by up to 7.7% and 5.3% on previously seen and unseen datasets, respectively, but also does so while being at least 3x more parameter-efficient than existing methods, establishing a new state of the art in cross-domain few-shot learning. Our code is available at https://github.com/rashindrie/DIPA.
https://openaccess.thecvf.com/content/CVPR2024/papers/Perera_Discriminative_Sample-Guided_and_Parameter-Efficient_Feature_Space_Adaptation_for_Cross-Domain_Few-Shot_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.04492
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Perera_Discriminative_Sample-Guided_and_Parameter-Efficient_Feature_Space_Adaptation_for_Cross-Domain_Few-Shot_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Perera_Discriminative_Sample-Guided_and_Parameter-Efficient_Feature_Space_Adaptation_for_Cross-Domain_Few-Shot_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Perera_Discriminative_Sample-Guided_and_CVPR_2024_supplemental.pdf
null
What Sketch Explainability Really Means for Downstream Tasks
Hmrishav Bandyopadhyay, Pinaki Nath Chowdhury, Ayan Kumar Bhunia, Aneeshan Sain, Tao Xiang, Yi-Zhe Song
In this paper, we explore the unique modality of sketch for explainability, emphasising the profound impact of human strokes compared to conventional pixel-oriented studies. Beyond explanations of network behavior, we discern the genuine implications of explainability across diverse downstream sketch-related tasks. We propose a lightweight and portable explainability solution -- a seamless plugin that integrates effortlessly with any pre-trained model, eliminating the need for re-training. Demonstrating its adaptability, we present four applications: highly studied retrieval and generation, and completely novel assisted drawing and sketch adversarial attacks. The centrepiece of our solution is a stroke-level attribution map that takes different forms when linked with downstream tasks. By addressing the inherent non-differentiability of rasterisation, we enable explanations at both coarse stroke level (SLA) and partial stroke level (P-SLA), each with its advantages for specific downstream tasks.
https://openaccess.thecvf.com/content/CVPR2024/papers/Bandyopadhyay_What_Sketch_Explainability_Really_Means_for_Downstream_Tasks_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.09480
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Bandyopadhyay_What_Sketch_Explainability_Really_Means_for_Downstream_Tasks_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Bandyopadhyay_What_Sketch_Explainability_Really_Means_for_Downstream_Tasks_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Bandyopadhyay_What_Sketch_Explainability_CVPR_2024_supplemental.pdf
null
Neural Exposure Fusion for High-Dynamic Range Object Detection
Emmanuel Onzon, Maximilian Bömer, Fahim Mannan, Felix Heide
Computer vision in unconstrained outdoor scenarios must tackle challenging high dynamic range (HDR) scenes and rapidly changing illumination conditions. Existing methods address this problem with multi-capture HDR sensors and a hardware image signal processor (ISP) that produces a single fused image as input to a downstream neural network. The output of the HDR sensor is a set of low dynamic range (LDR) exposures, and the fusion in the ISP is performed in image space and typically optimized for human perception on a display. Preferring tonemapped content with smooth transition regions over detail (and noise) in the resulting image, this image fusion typically does not preserve all information from the LDR exposures that may be essential for downstream computer vision tasks. In this work, we depart from conventional HDR image fusion and propose a learned task-driven fusion in the feature domain. Instead of using a single companded image, we introduce a novel local cross-attention fusion mechanism that exploits semantic features from all exposures, learned in an end-to-end fashion with supervision from downstream detection losses. The proposed method outperforms all tested conventional HDR exposure fusion and auto-exposure methods in challenging automotive HDR scenarios.
https://openaccess.thecvf.com/content/CVPR2024/papers/Onzon_Neural_Exposure_Fusion_for_High-Dynamic_Range_Object_Detection_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Onzon_Neural_Exposure_Fusion_for_High-Dynamic_Range_Object_Detection_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Onzon_Neural_Exposure_Fusion_for_High-Dynamic_Range_Object_Detection_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Onzon_Neural_Exposure_Fusion_CVPR_2024_supplemental.pdf
null
EfficientDreamer: High-Fidelity and Robust 3D Creation via Orthogonal-view Diffusion Priors
Zhipeng Hu, Minda Zhao, Chaoyi Zhao, Xinyue Liang, Lincheng Li, Zeng Zhao, Changjie Fan, Xiaowei Zhou, Xin Yu
While image diffusion models have made significant progress in text-driven 3D content creation, they often fail to accurately capture the intended meaning of text prompts, especially for view information. This limitation leads to the Janus problem, where multi-faced 3D models are generated under the guidance of such diffusion models. In this paper, we propose a robust, high-quality 3D content generation pipeline that exploits orthogonal-view image guidance. First, we introduce a novel 2D diffusion model that generates an image consisting of four orthogonal-view sub-images based on the given text prompt. Then, the 3D content is created using this diffusion model. Notably, the generated orthogonal-view image provides strong geometric structure priors and thus improves 3D consistency. As a result, it effectively resolves the Janus problem and significantly enhances the quality of 3D content creation. Additionally, we present a 3D synthesis fusion network that can further improve the details of the generated 3D content. Both quantitative and qualitative evaluations demonstrate that our method surpasses previous text-to-3D techniques. Project page: https://efficientdreamer.github.io.
https://openaccess.thecvf.com/content/CVPR2024/papers/Hu_EfficientDreamer_High-Fidelity_and_Robust_3D_Creation_via_Orthogonal-view_Diffusion_Priors_CVPR_2024_paper.pdf
http://arxiv.org/abs/2308.13223
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Hu_EfficientDreamer_High-Fidelity_and_Robust_3D_Creation_via_Orthogonal-view_Diffusion_Priors_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Hu_EfficientDreamer_High-Fidelity_and_Robust_3D_Creation_via_Orthogonal-view_Diffusion_Priors_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Hu_EfficientDreamer_High-Fidelity_and_CVPR_2024_supplemental.pdf
null
HOIAnimator: Generating Text-prompt Human-object Animations using Novel Perceptive Diffusion Models
Wenfeng Song, Xinyu Zhang, Shuai Li, Yang Gao, Aimin Hao, Xia Hou, Chenglizhao Chen, Ning Li, Hong Qin
To date, the quest to rapidly and effectively produce human-object interaction (HOI) animations directly from textual descriptions stands at the forefront of computer vision research. The underlying challenge demands both a discriminating interpretation of language and a comprehensive, physics-centric model supporting real-world dynamics. To ameliorate this, this paper advocates HOIAnimator, a novel interactive diffusion model with perception ability, ingeniously crafted to revolutionize the animation of complex interactions from linguistic narratives. The effectiveness of our model is anchored in two ground-breaking innovations: (1) our Perceptive Diffusion Models (PDM) bring together two types of models, one focused on human movements and the other on objects. This combination allows for animations where humans and objects move in concert with each other, making the overall motion more realistic. Additionally, we propose a Perceptive Message Passing (PMP) mechanism to enhance the communication bridging the two models, ensuring that the animations are smooth and unified. (2) We devise an Interaction Contact Field (ICF), a sophisticated model that implicitly captures the essence of HOIs. Beyond merely predicting contact points, the ICF assesses the proximity of the human and the object to their respective environment, informed by a probabilistic distribution of interactions learned throughout the denoising phase. Our comprehensive evaluation showcases HOIAnimator's superior ability to produce dynamic, context-aware animations that surpass existing benchmarks in text-driven animation synthesis.
https://openaccess.thecvf.com/content/CVPR2024/papers/Song_HOIAnimator_Generating_Text-prompt_Human-object_Animations_using_Novel_Perceptive_Diffusion_Models_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Song_HOIAnimator_Generating_Text-prompt_Human-object_Animations_using_Novel_Perceptive_Diffusion_Models_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Song_HOIAnimator_Generating_Text-prompt_Human-object_Animations_using_Novel_Perceptive_Diffusion_Models_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Song_HOIAnimator_Generating_Text-prompt_CVPR_2024_supplemental.zip
null
SyncTalk: The Devil is in the Synchronization for Talking Head Synthesis
Ziqiao Peng, Wentao Hu, Yue Shi, Xiangyu Zhu, Xiaomei Zhang, Hao Zhao, Jun He, Hongyan Liu, Zhaoxin Fan
Achieving high synchronization in the synthesis of realistic, speech-driven talking head videos presents a significant challenge. Traditional Generative Adversarial Networks (GANs) struggle to maintain consistent facial identity, while Neural Radiance Fields (NeRF) methods, although they can address this issue, often produce mismatched lip movements, inadequate facial expressions, and unstable head poses. A lifelike talking head requires synchronized coordination of subject identity, lip movements, facial expressions, and head poses. The absence of these synchronizations is a fundamental flaw, leading to unrealistic and artificial outcomes. To address the critical issue of synchronization, identified as the "devil" in creating realistic talking heads, we introduce SyncTalk. This NeRF-based method effectively maintains subject identity, enhancing synchronization and realism in talking head synthesis. SyncTalk employs a Face-Sync Controller to align lip movements with speech, and innovatively uses a 3D facial blendshape model to capture accurate facial expressions. Our HeadSync Stabilizer optimizes head poses, achieving more natural head movements. The Portrait-Sync Generator restores hair details and blends the generated head with the torso for a seamless visual experience. Extensive experiments and user studies demonstrate that SyncTalk outperforms state-of-the-art methods in synchronization and realism. We recommend watching the supplementary video: https://ziqiaopeng.github.io/synctalk
https://openaccess.thecvf.com/content/CVPR2024/papers/Peng_SyncTalk_The_Devil_is_in_the_Synchronization_for_Talking_Head_CVPR_2024_paper.pdf
http://arxiv.org/abs/2311.17590
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Peng_SyncTalk_The_Devil_is_in_the_Synchronization_for_Talking_Head_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Peng_SyncTalk_The_Devil_is_in_the_Synchronization_for_Talking_Head_CVPR_2024_paper.html
CVPR 2024
null
null
SFOD: Spiking Fusion Object Detector
Yimeng Fan, Wei Zhang, Changsong Liu, Mingyang Li, Wenrui Lu
Event cameras, characterized by high temporal resolution, high dynamic range, low power consumption, and high pixel bandwidth, offer unique capabilities for object detection in specialized contexts. Despite these advantages, the inherent sparsity and asynchrony of event data pose challenges to existing object detection algorithms. Spiking Neural Networks (SNNs), inspired by the way the human brain codes and processes information, offer a potential solution to these difficulties. However, their performance in object detection using event cameras is limited in current implementations. In this paper, we propose the Spiking Fusion Object Detector (SFOD), a simple and efficient approach to SNN-based object detection. Specifically, we design a Spiking Fusion Module, achieving the first-time fusion of feature maps from different scales in SNNs applied to event cameras. Additionally, through the analysis and experiments conducted while pretraining the backbone network on the NCAR dataset, we delve deeply into the impact of spiking decoding strategies and loss functions on model performance. We thereby establish state-of-the-art classification results for SNNs, achieving 93.7% accuracy on the NCAR dataset. Experimental results on the GEN1 detection dataset demonstrate that SFOD achieves a state-of-the-art mAP of 32.1%, outperforming existing SNN-based approaches. Our research not only underscores the potential of SNNs in object detection with event cameras but also propels the advancement of SNNs. Code is available at https://github.com/yimeng-fan/SFOD.
https://openaccess.thecvf.com/content/CVPR2024/papers/Fan_SFOD_Spiking_Fusion_Object_Detector_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.15192
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Fan_SFOD_Spiking_Fusion_Object_Detector_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Fan_SFOD_Spiking_Fusion_Object_Detector_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Fan_SFOD_Spiking_Fusion_CVPR_2024_supplemental.pdf
null
Detector-Free Structure from Motion
Xingyi He, Jiaming Sun, Yifan Wang, Sida Peng, Qixing Huang, Hujun Bao, Xiaowei Zhou
We propose a structure-from-motion framework to recover accurate camera poses and point clouds from unordered images. Traditional SfM systems typically rely on the successful detection of repeatable keypoints across multiple views as the first step, which is difficult for texture-poor scenes, and poor keypoint detection may break down the whole SfM system. We propose a detector-free SfM framework that draws on the recent success of detector-free matchers to avoid the early determination of keypoints, while solving the multi-view inconsistency issue of detector-free matchers. Specifically, our framework first reconstructs a coarse SfM model from quantized detector-free matches. Then, it refines the model with a novel iterative refinement pipeline, which iterates between an attention-based multi-view matching module to refine feature tracks and a geometry refinement module to improve the reconstruction accuracy. Experiments demonstrate that the proposed framework outperforms existing detector-based SfM systems on common benchmark datasets. We also collect a texture-poor SfM dataset to demonstrate the capability of our framework to reconstruct texture-poor scenes. Based on this framework, we took first place in the Image Matching Challenge 2023.
https://openaccess.thecvf.com/content/CVPR2024/papers/He_Detector-Free_Structure_from_Motion_CVPR_2024_paper.pdf
http://arxiv.org/abs/2306.15669
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/He_Detector-Free_Structure_from_Motion_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/He_Detector-Free_Structure_from_Motion_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/He_Detector-Free_Structure_from_CVPR_2024_supplemental.pdf
null
CG-HOI: Contact-Guided 3D Human-Object Interaction Generation
Christian Diller, Angela Dai
We propose CG-HOI, the first method to address the task of generating dynamic 3D human-object interactions (HOIs) from text. We model the motion of both human and object in an interdependent fashion, as semantically rich human motion rarely happens in isolation without any interactions. Our key insight is that explicitly modeling contact between the human body surface and object geometry can be used as strong proxy guidance, both during training and inference. Using this guidance to bridge human and object motion enables generating more realistic and physically plausible interaction sequences, where the human body and corresponding object move in a coherent manner. Our method first learns to model human motion, object motion, and contact in a joint diffusion process, inter-correlated through cross-attention. We then leverage this learned contact for guidance during inference to synthesize realistic and coherent HOIs. Extensive evaluation shows that our joint contact-based human-object interaction approach generates realistic and physically plausible sequences, and we show two applications highlighting the capabilities of our method. Conditioned on a given object trajectory, we can generate the corresponding human motion without re-training, demonstrating strong human-object interdependency learning. Our approach is also flexible and can be applied to static real-world 3D scene scans.
https://openaccess.thecvf.com/content/CVPR2024/papers/Diller_CG-HOI_Contact-Guided_3D_Human-Object_Interaction_Generation_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Diller_CG-HOI_Contact-Guided_3D_Human-Object_Interaction_Generation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Diller_CG-HOI_Contact-Guided_3D_Human-Object_Interaction_Generation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Diller_CG-HOI_Contact-Guided_3D_CVPR_2024_supplemental.pdf
null
Towards Surveillance Video-and-Language Understanding: New Dataset Baselines and Challenges
Tongtong Yuan, Xuange Zhang, Kun Liu, Bo Liu, Chen Chen, Jian Jin, Zhenzhen Jiao
Surveillance videos are important for public security. However, current surveillance video tasks mainly focus on classifying and localizing anomalous events. Existing methods are limited to detecting and classifying predefined events with unsatisfactory semantic understanding, although they have obtained considerable performance. To address this issue, we propose a new research direction of surveillance video-and-language understanding (VALU) and construct the first multimodal surveillance video dataset. We manually annotate the real-world surveillance dataset UCF-Crime with fine-grained event content and timing. Our newly annotated dataset, UCA (UCF-Crime Annotation), contains 23,542 sentences with an average length of 20 words, and its annotated videos are as long as 110.7 hours. Furthermore, we benchmark SOTA models for four multimodal tasks on this newly created dataset, which serve as new baselines for surveillance VALU. Through experiments, we find that mainstream models used on previously public datasets perform poorly on surveillance video, demonstrating new challenges in surveillance VALU. We also conducted experiments on multimodal anomaly detection. These results demonstrate that our multimodal surveillance learning can improve the performance of anomaly detection. All the experiments highlight the necessity of constructing this dataset to advance surveillance AI.
https://openaccess.thecvf.com/content/CVPR2024/papers/Yuan_Towards_Surveillance_Video-and-Language_Understanding_New_Dataset_Baselines_and_Challenges_CVPR_2024_paper.pdf
http://arxiv.org/abs/2309.13925
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Yuan_Towards_Surveillance_Video-and-Language_Understanding_New_Dataset_Baselines_and_Challenges_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Yuan_Towards_Surveillance_Video-and-Language_Understanding_New_Dataset_Baselines_and_Challenges_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yuan_Towards_Surveillance_Video-and-Language_CVPR_2024_supplemental.pdf
null
AdaRevD: Adaptive Patch Exiting Reversible Decoder Pushes the Limit of Image Deblurring
Xintian Mao, Qingli Li, Yan Wang
Despite recent progress in enhancing the efficacy of image deblurring, limited decoding capability constrains the upper limit of state-of-the-art (SOTA) methods. This paper proposes a pioneering work, Adaptive Patch Exiting Reversible Decoder (AdaRevD), to explore their insufficient decoding capability. By inheriting the weights of the well-trained encoder, we refactor a reversible decoder, which scales up single-decoder training to multi-decoder training while remaining GPU memory-friendly. Meanwhile, we show that our reversible structure gradually disentangles the high-level degradation degree and the low-level blur pattern (the residual between the blurred image and its sharp counterpart) from a compact degradation representation. Besides, due to spatially-variant motion blur kernels, different blur patches have various deblurring difficulties. We further introduce a classifier to learn the degradation degree of image patches, enabling them to exit at different sub-decoders for speedup. Experiments show that our AdaRevD pushes the limit of image deblurring, e.g., achieving 34.60 dB PSNR on the GoPro dataset.
https://openaccess.thecvf.com/content/CVPR2024/papers/Mao_AdaRevD_Adaptive_Patch_Exiting_Reversible_Decoder_Pushes_the_Limit_of_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Mao_AdaRevD_Adaptive_Patch_Exiting_Reversible_Decoder_Pushes_the_Limit_of_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Mao_AdaRevD_Adaptive_Patch_Exiting_Reversible_Decoder_Pushes_the_Limit_of_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Mao_AdaRevD_Adaptive_Patch_CVPR_2024_supplemental.pdf
null
Learning to Remove Wrinkled Transparent Film with Polarized Prior
Jiaqi Tang, Ruizheng Wu, Xiaogang Xu, Sixing Hu, Ying-Cong Chen
In this paper, we study a new problem, Film Removal (FR), which attempts to remove the interference of wrinkled transparent films and reconstruct the original information under films for industrial recognition systems. We first physically model the imaging of industrial materials covered by the film. Considering that the specular highlight from the film can be effectively recorded by a polarized camera, we build a practical dataset with polarization information, containing paired data with and without transparent film. We aim to remove interference from the film (specular highlights and other degradations) with an end-to-end framework. To locate the specular highlight, we use an angle estimation network to optimize the polarization angle that minimizes the specular highlight. The image with minimized specular highlight is set as a prior to support the reconstruction network. Based on the prior and the polarized images, the reconstruction network can decouple all degradations from the film. Extensive experiments show that our framework achieves SOTA performance in both image reconstruction and industrial downstream tasks. Our code will be released at https://github.com/jqtangust/FilmRemoval.
https://openaccess.thecvf.com/content/CVPR2024/papers/Tang_Learning_to_Remove_Wrinkled_Transparent_Film_with_Polarized_Prior_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.04368
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Tang_Learning_to_Remove_Wrinkled_Transparent_Film_with_Polarized_Prior_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Tang_Learning_to_Remove_Wrinkled_Transparent_Film_with_Polarized_Prior_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Tang_Learning_to_Remove_CVPR_2024_supplemental.pdf
null
OpenEQA: Embodied Question Answering in the Era of Foundation Models
null
null
null
null
null
https://openaccess.thecvf.com/content/CVPR2024/html/Majumdar_OpenEQA_Embodied_Question_Answering_in_the_Era_of_Foundation_Models_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Majumdar_OpenEQA_Embodied_Question_Answering_in_the_Era_of_Foundation_Models_CVPR_2024_paper.html
CVPR 2024
null
null
DreamSalon: A Staged Diffusion Framework for Preserving Identity-Context in Editable Face Generation
Haonan Lin
While large-scale pre-trained text-to-image models can synthesize diverse and high-quality human-centered images, novel challenges arise with the nuanced task of "identity fine editing": precisely modifying specific features of a subject while maintaining its inherent identity and context. Existing personalization methods either require time-consuming optimization or learn additional encoders adept at "identity re-contextualization". However, they often struggle with detailed and sensitive tasks like human face editing. To address these challenges, we introduce DreamSalon, a noise-guided staged-editing framework uniquely focused on detailed image manipulation and identity-context preservation. Discerning the editing and boosting stages via the frequency and gradient of predicted noises, DreamSalon first performs detailed manipulations on specific features in the editing stage, guided by high-frequency information, and then employs stochastic denoising in the boosting stage to improve image quality. For more precise editing, DreamSalon semantically mixes source and target textual prompts, guided by differences in their embedding covariances, to direct the model's focus onto specific manipulation areas. Our experiments demonstrate DreamSalon's ability to efficiently and faithfully edit fine details on human faces, outperforming existing methods both qualitatively and quantitatively.
https://openaccess.thecvf.com/content/CVPR2024/papers/Lin_DreamSalon_A_Staged_Diffusion_Framework_for_Preserving_Identity-Context_in_Editable_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.19235
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Lin_DreamSalon_A_Staged_Diffusion_Framework_for_Preserving_Identity-Context_in_Editable_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Lin_DreamSalon_A_Staged_Diffusion_Framework_for_Preserving_Identity-Context_in_Editable_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Lin_DreamSalon_A_Staged_CVPR_2024_supplemental.pdf
null
Dispel Darkness for Better Fusion: A Controllable Visual Enhancer based on Cross-modal Conditional Adversarial Learning
Hao Zhang, Linfeng Tang, Xinyu Xiang, Xuhui Zuo, Jiayi Ma
We propose a controllable visual enhancer, named DDBF, which is based on cross-modal conditional adversarial learning and aims to dispel darkness and achieve better fusion of the visible and infrared modalities. Specifically, a guided restoration module (GRM) is first designed to enhance weakened information in the low-light visible modality. The GRM utilizes the light-invariant, high-contrast characteristics of the infrared modality as the central target distribution and constructs a multi-level conditional adversarial sample set to enable continuous, controlled brightness enhancement of visible images. Then, we develop an information fusion module (IFM) to integrate the advantageous features of the enhanced visible image and the infrared image. Thanks to customized explicit information preservation and hue fidelity constraints, the IFM produces visually pleasing results with rich textures, significant contrast, and vivid colors. The brightened visible image and the final fused image compose the dual output of our DDBF, meeting the diverse visual preferences of users. We evaluate DDBF on public datasets, achieving state-of-the-art performance in low-light enhancement and information integration, applicable to both day and night scenarios. The experiments also demonstrate that our DDBF is effective in improving decision accuracy for object detection and semantic segmentation. Moreover, we offer a user-friendly interface for convenient application of our model. The code is publicly available at https://github.com/HaoZhang1018/DDBF.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_Dispel_Darkness_for_Better_Fusion_A_Controllable_Visual_Enhancer_based_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Dispel_Darkness_for_Better_Fusion_A_Controllable_Visual_Enhancer_based_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Dispel_Darkness_for_Better_Fusion_A_Controllable_Visual_Enhancer_based_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhang_Dispel_Darkness_for_CVPR_2024_supplemental.pdf
null
Querying as Prompt: Parameter-Efficient Learning for Multimodal Language Model
Tian Liang, Jing Huang, Ming Kong, Luyuan Chen, Qiang Zhu
Recent advancements in language models pre-trained on large-scale corpora have significantly propelled developments in the NLP domain and advanced progress in multimodal tasks. In this paper, we propose a parameter-efficient multimodal language model learning strategy named QaP (Querying as Prompt). Its core innovation is a novel modality-bridging method that allows a set of modality-specific queries to be input as soft prompts into a frozen pre-trained language model. Specifically, we introduce an efficient Text-Conditioned Resampler that is easy to incorporate into language models, enabling adaptive injection of text-related multimodal information at different levels of the model through query learning. This approach effectively bridges multimodal information to the language model while fully leveraging its token fusion and representation potential. We validated our method across four datasets in three distinct multimodal tasks. The results demonstrate that our QaP multimodal language model achieves state-of-the-art performance on various tasks while training only 4.6% of the parameters.
https://openaccess.thecvf.com/content/CVPR2024/papers/Liang_Querying_as_Prompt_Parameter-Efficient_Learning_for_Multimodal_Language_Model_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Liang_Querying_as_Prompt_Parameter-Efficient_Learning_for_Multimodal_Language_Model_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Liang_Querying_as_Prompt_Parameter-Efficient_Learning_for_Multimodal_Language_Model_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Liang_Querying_as_Prompt_CVPR_2024_supplemental.pdf
null
DePT: Decoupled Prompt Tuning
null
null
null
null
null
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_DePT_Decoupled_Prompt_Tuning_CVPR_2024_paperOriginal.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_DePT_Decoupled_Prompt_Tuning_CVPR_2024_paperOriginal.html
CVPR 2024
null
null
Neural Super-Resolution for Real-time Rendering with Radiance Demodulation
Jia Li, Ziling Chen, Xiaolong Wu, Lu Wang, Beibei Wang, Lei Zhang
It is time-consuming to render high-resolution images in applications such as video games and virtual reality, and thus super-resolution technologies are becoming increasingly popular for real-time rendering. However, it is challenging to preserve sharp texture details, maintain temporal stability, and avoid ghosting artifacts in real-time super-resolution rendering. To address this issue, we introduce radiance demodulation, which separates the rendered image, or radiance, into a lighting component and a material component, exploiting the fact that the lighting component is smoother than the rendered image, so that the high-resolution material component with detailed textures can be easily obtained. We perform super-resolution on the lighting component only and re-modulate it with the high-resolution material component to obtain the final super-resolution image with more texture details. A reliable warping module is proposed that explicitly marks occluded regions to avoid ghosting artifacts. To further enhance temporal stability, we design a frame-recurrent neural network and a temporal loss to aggregate the previous and current frames, which better capture the spatial-temporal consistency among reconstructed frames. As a result, our method is able to produce temporally stable results in real-time rendering with high-quality details, even in the challenging 4x4 super-resolution scenario. Code is available at: https://github.com/Riga2/NSRD.
https://openaccess.thecvf.com/content/CVPR2024/papers/Li_Neural_Super-Resolution_for_Real-time_Rendering_with_Radiance_Demodulation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2308.06699
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Li_Neural_Super-Resolution_for_Real-time_Rendering_with_Radiance_Demodulation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Li_Neural_Super-Resolution_for_Real-time_Rendering_with_Radiance_Demodulation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_Neural_Super-Resolution_for_CVPR_2024_supplemental.pdf
null
Deformable 3D Gaussians for High-Fidelity Monocular Dynamic Scene Reconstruction
Ziyi Yang, Xinyu Gao, Wen Zhou, Shaohui Jiao, Yuqing Zhang, Xiaogang Jin
Implicit neural representation has paved the way for new approaches to dynamic scene reconstruction. Nonetheless, cutting-edge dynamic neural rendering methods rely heavily on these implicit representations, which frequently struggle to capture the intricate details of objects in the scene. Furthermore, implicit methods have difficulty achieving real-time rendering in general dynamic scenes, limiting their use in a variety of tasks. To address these issues, we propose a deformable 3D Gaussian splatting method that reconstructs scenes using 3D Gaussians, learning them in canonical space with a deformation field to model monocular dynamic scenes. We also introduce an annealing smoothing training mechanism with no extra overhead, which can mitigate the impact of inaccurate poses on the smoothness of time interpolation tasks in real-world scenes. Through a differential Gaussian rasterizer, the deformable 3D Gaussians achieve not only higher rendering quality but also real-time rendering speed. Experiments show that our method significantly outperforms existing methods in terms of both rendering quality and speed, making it well-suited for tasks such as novel-view synthesis, time interpolation, and real-time rendering.
https://openaccess.thecvf.com/content/CVPR2024/papers/Yang_Deformable_3D_Gaussians_for_High-Fidelity_Monocular_Dynamic_Scene_Reconstruction_CVPR_2024_paper.pdf
http://arxiv.org/abs/2309.13101
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Yang_Deformable_3D_Gaussians_for_High-Fidelity_Monocular_Dynamic_Scene_Reconstruction_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Yang_Deformable_3D_Gaussians_for_High-Fidelity_Monocular_Dynamic_Scene_Reconstruction_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yang_Deformable_3D_Gaussians_CVPR_2024_supplemental.pdf
null
Enhancing 3D Object Detection with 2D Detection-Guided Query Anchors
Haoxuanye Ji, Pengpeng Liang, Erkang Cheng
Multi-camera-based 3D object detection has made notable progress in the past several years. However, we observe that there are cases (e.g., faraway regions) in which popular 2D object detectors are more reliable than state-of-the-art 3D detectors. In this paper, to improve the performance of query-based 3D object detectors, we present a novel query generating approach, termed QAF2D, which infers 3D query anchors from 2D detection results. A 2D bounding box of an object in an image is lifted to a set of 3D anchors by associating each sampled point within the box with depth, yaw angle, and size candidates. Then, the validity of each 3D anchor is verified by comparing its projection in the image with its corresponding 2D box; only valid anchors are kept and used to construct queries. The class information of the 2D bounding box associated with each query is also utilized to match the predicted boxes with ground truth for the set-based loss. The image feature extraction backbone is shared between the 3D detector and the 2D detector by adding a small number of prompt parameters. We integrate QAF2D into three popular query-based 3D object detectors and carry out comprehensive evaluations on the nuScenes dataset. The largest improvement that QAF2D brings on the nuScenes validation subset is 2.3% NDS and 2.7% mAP. Code is available at https://github.com/max-vision/QAF2D.
https://openaccess.thecvf.com/content/CVPR2024/papers/Ji_Enhancing_3D_Object_Detection_with_2D_Detection-Guided_Query_Anchors_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.06093
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Ji_Enhancing_3D_Object_Detection_with_2D_Detection-Guided_Query_Anchors_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Ji_Enhancing_3D_Object_Detection_with_2D_Detection-Guided_Query_Anchors_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ji_Enhancing_3D_Object_CVPR_2024_supplemental.pdf
null
Continual Forgetting for Pre-trained Vision Models
Hongbo Zhao, Bolin Ni, Junsong Fan, Yuxi Wang, Yuntao Chen, Gaofeng Meng, Zhaoxiang Zhang
Out of privacy and security concerns, the need to erase unwanted information from pre-trained vision models is becoming evident nowadays. In real-world scenarios, erasure requests originate at any time from both users and model owners. These requests usually form a sequence. Therefore, under such a setting, selective information is expected to be continuously removed from a pre-trained model while maintaining the rest. We define this problem as continual forgetting and identify two key challenges. (i) For unwanted knowledge, efficient and effective deletion is crucial. (ii) For remaining knowledge, the impact brought by the forgetting procedure should be minimal. To address them, we propose Group Sparse LoRA (GS-LoRA). Specifically, towards (i), we use LoRA modules to fine-tune the FFN layers in Transformer blocks for each forgetting task independently, and towards (ii), a simple group sparse regularization is adopted, enabling automatic selection of specific LoRA groups and zeroing out the others. GS-LoRA is effective, parameter-efficient, data-efficient, and easy to implement. We conduct extensive experiments on face recognition, object detection, and image classification, and demonstrate that GS-LoRA manages to forget specific classes with minimal impact on other classes. Code will be released at https://github.com/bjzhb666/GS-LoRA.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhao_Continual_Forgetting_for_Pre-trained_Vision_Models_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.11530
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhao_Continual_Forgetting_for_Pre-trained_Vision_Models_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhao_Continual_Forgetting_for_Pre-trained_Vision_Models_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhao_Continual_Forgetting_for_CVPR_2024_supplemental.pdf
null
Real Acoustic Fields: An Audio-Visual Room Acoustics Dataset and Benchmark
Ziyang Chen, Israel D. Gebru, Christian Richardt, Anurag Kumar, William Laney, Andrew Owens, Alexander Richard
We present a new dataset called Real Acoustic Fields (RAF) that captures real acoustic room data across multiple modalities. The dataset includes high-quality, densely captured room impulse response data paired with multi-view images and precise 6DoF pose tracking data for sound emitters and listeners in the rooms. We used this dataset to evaluate existing methods for novel-view acoustic synthesis and impulse response generation, which previously relied on synthetic data. In our evaluation, we thoroughly assessed existing audio and audio-visual models against multiple criteria and proposed settings to enhance their performance on real-world data. We also conducted experiments to investigate the impact of incorporating visual data (i.e., images and depth) into neural acoustic field models. Additionally, we demonstrated the effectiveness of a simple sim2real approach, in which a model is pre-trained with simulated data and fine-tuned with sparse real-world data, resulting in significant improvements in the few-shot learning setting. RAF is the first dataset to provide densely captured room acoustic data, making it an ideal resource for researchers working on audio and audio-visual neural acoustic field modeling techniques.
https://openaccess.thecvf.com/content/CVPR2024/papers/Chen_Real_Acoustic_Fields_An_Audio-Visual_Room_Acoustics_Dataset_and_Benchmark_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.18821
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Chen_Real_Acoustic_Fields_An_Audio-Visual_Room_Acoustics_Dataset_and_Benchmark_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Chen_Real_Acoustic_Fields_An_Audio-Visual_Room_Acoustics_Dataset_and_Benchmark_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Chen_Real_Acoustic_Fields_CVPR_2024_supplemental.pdf
null
A Generative Approach for Wikipedia-Scale Visual Entity Recognition
Mathilde Caron, Ahmet Iscen, Alireza Fathi, Cordelia Schmid
In this paper, we address web-scale visual entity recognition, specifically the task of mapping a given query image to one of the 6 million existing entities in Wikipedia. One way of approaching a problem of such scale is to use dual encoder models (e.g., CLIP), where all the entity names and query images are embedded into a unified space, paving the way for an approximate kNN search. Alternatively, it is also possible to re-purpose a captioning model to directly generate the entity names for a given image. In contrast, we introduce a novel Generative Entity Recognition (GER) framework, which, given an input image, learns to auto-regressively decode a semantic and discriminative "code" identifying the target entity. Our experiments demonstrate the efficacy of this GER paradigm, showcasing state-of-the-art performance on the challenging OVEN benchmark. GER surpasses strong captioning, dual-encoder visual matching, and hierarchical classification baselines, affirming its advantage in tackling the complexities of web-scale recognition.
https://openaccess.thecvf.com/content/CVPR2024/papers/Caron_A_Generative_Approach_for_Wikipedia-Scale_Visual_Entity_Recognition_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.02041
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Caron_A_Generative_Approach_for_Wikipedia-Scale_Visual_Entity_Recognition_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Caron_A_Generative_Approach_for_Wikipedia-Scale_Visual_Entity_Recognition_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Caron_A_Generative_Approach_CVPR_2024_supplemental.pdf
null
A Physics-informed Low-rank Deep Neural Network for Blind and Universal Lens Aberration Correction
Jin Gong, Runzhao Yang, Weihang Zhang, Jinli Suo, Qionghai Dai
High-end lenses, although offering high-quality images, suffer from both insufficient affordability and bulky design, which hamper their applications in low-budget scenarios or on low-payload platforms. A flexible scheme is to tackle the optical aberration of low-end lenses computationally. However, it is highly demanded yet quite challenging to build a general model capable of handling non-stationary aberrations and covering diverse lenses, especially in a blind manner. To address this issue, we propose a universal solution that extensively utilizes the physical properties of camera lenses: (i) reducing the complexity of lens aberrations, i.e., lens-specific non-stationary blur, by warping annual-ring-shaped sub-images into rectangular stripes to transform non-uniform degenerations into a uniform one; (ii) building a low-dimensional non-negative orthogonal representation of lens blur kernels to cover diverse lenses; (iii) designing a decoupling network to decompose the input low-quality image into several components degenerated by the above kernel bases and applying corresponding pre-trained deconvolution networks to reverse the degeneration. Benefiting from the proper incorporation of lenses' physical properties and the unique network design, the proposed method achieves superb imaging quality, wide applicability to various lenses, and high running efficiency, and is totally free of kernel calibration. These advantages bring great potential for scenarios requiring lightweight, high-quality photography.
https://openaccess.thecvf.com/content/CVPR2024/papers/Gong_A_Physics-informed_Low-rank_Deep_Neural_Network_for_Blind_and_Universal_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Gong_A_Physics-informed_Low-rank_Deep_Neural_Network_for_Blind_and_Universal_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Gong_A_Physics-informed_Low-rank_Deep_Neural_Network_for_Blind_and_Universal_CVPR_2024_paper.html
CVPR 2024
null
null
Open-Vocabulary Object 6D Pose Estimation
Jaime Corsetti, Davide Boscaini, Changjae Oh, Andrea Cavallaro, Fabio Poiesi
We introduce the new setting of open-vocabulary object 6D pose estimation, in which a textual prompt is used to specify the object of interest. In contrast to existing approaches, in our setting (i) the object of interest is specified solely through the textual prompt, (ii) no object model (e.g., CAD or video sequence) is required at inference, and (iii) the object is imaged from two RGBD viewpoints of different scenes. To operate in this setting, we introduce a novel approach that leverages a Vision-Language Model to segment the object of interest from the scenes and to estimate its relative 6D pose. The key to our approach is a carefully devised strategy to fuse object-level information provided by the prompt with local image features, resulting in a feature space that can generalize to novel concepts. We validate our approach on a new benchmark based on two popular datasets, REAL275 and Toyota-Light, which collectively encompass 34 object instances appearing in four thousand image pairs. The results demonstrate that our approach outperforms both a well-established hand-crafted method and a recent deep learning-based baseline in estimating the relative 6D pose of objects in different scenes. Code and dataset are available at https://jcorsetti.github.io/oryon.
https://openaccess.thecvf.com/content/CVPR2024/papers/Corsetti_Open-Vocabulary_Object_6D_Pose_Estimation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.00690
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Corsetti_Open-Vocabulary_Object_6D_Pose_Estimation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Corsetti_Open-Vocabulary_Object_6D_Pose_Estimation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Corsetti_Open-Vocabulary_Object_6D_CVPR_2024_supplemental.pdf
null
Plug and Play Active Learning for Object Detection
Chenhongyi Yang, Lichao Huang, Elliot J. Crowley
Annotating datasets for object detection is an expensive and time-consuming endeavor. To minimize this burden, active learning (AL) techniques are employed to select the most informative samples for annotation within a constrained "annotation budget". Traditional AL strategies typically rely on model uncertainty or sample diversity for query sampling, while more advanced methods have focused on developing AL-specific object detector architectures to enhance performance. However, these specialized approaches are not readily adaptable to different object detectors due to the significant engineering effort required for integration. To overcome this challenge, we introduce Plug and Play Active Learning (PPAL), a simple and effective AL strategy for object detection. PPAL is a two-stage method comprising uncertainty-based and diversity-based sampling phases. In the first stage, our Difficulty Calibrated Uncertainty Sampling leverages a category-wise difficulty coefficient that combines both classification and localisation difficulties to re-weight instance uncertainties, from which we sample a candidate pool for the subsequent diversity-based sampling. In the second stage, we propose Category Conditioned Matching Similarity to better compute the similarities of multi-instance images as ensembles of their instance similarities, which is used by the k-Means++ algorithm to sample the final AL queries. PPAL makes no change to model architectures or detector training pipelines; hence it can be easily generalized to different object detectors. We benchmark PPAL on the MS-COCO and Pascal VOC datasets using different detector architectures and show that our method outperforms prior work by a large margin. Code is available at https://github.com/ChenhongyiYang/PPAL
https://openaccess.thecvf.com/content/CVPR2024/papers/Yang_Plug_and_Play_Active_Learning_for_Object_Detection_CVPR_2024_paper.pdf
http://arxiv.org/abs/2211.11612
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Yang_Plug_and_Play_Active_Learning_for_Object_Detection_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Yang_Plug_and_Play_Active_Learning_for_Object_Detection_CVPR_2024_paper.html
CVPR 2024
null
null
Calibrating Multi-modal Representations: A Pursuit of Group Robustness without Annotations
Chenyu You, Yifei Min, Weicheng Dai, Jasjeet S. Sekhon, Lawrence Staib, James S. Duncan
Fine-tuning pre-trained vision-language models like CLIP has yielded success on diverse downstream tasks. However, several pain points persist for this paradigm: (i) directly tuning entire pre-trained models becomes both time-intensive and computationally costly; additionally, these tuned models tend to become highly specialized, limiting their practicality for real-world deployment; (ii) recent studies indicate that pre-trained vision-language classifiers may overly depend on spurious features -- patterns that correlate with the target in the training data but are not related to the true labeling function; and (iii) existing studies on mitigating the reliance on spurious features, largely based on the assumption that such features can be identified, do not provide definitive assurance for real-world applications. As a pilot study, this work focuses on mitigating the reliance on spurious features for CLIP without using any group annotation. To this end, we systematically study the existence of spurious correlations in CLIP and CLIP+ERM. Following recent work on Deep Feature Reweighting (DFR), we first verify that last-layer retraining can greatly improve group robustness on pre-trained CLIP. In view of these findings, we advocate a lightweight representation calibration method for fine-tuning CLIP: we first generate a calibration set using the pretrained CLIP and then calibrate the representations of samples within this set through contrastive learning, all without the need for group labels. Extensive experiments and in-depth visualizations on several benchmarks validate the effectiveness of our proposals, largely reducing the reliance on spurious features and significantly boosting model generalization.
https://openaccess.thecvf.com/content/CVPR2024/papers/You_Calibrating_Multi-modal_Representations_A_Pursuit_of_Group_Robustness_without_Annotations_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.07241
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/You_Calibrating_Multi-modal_Representations_A_Pursuit_of_Group_Robustness_without_Annotations_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/You_Calibrating_Multi-modal_Representations_A_Pursuit_of_Group_Robustness_without_Annotations_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/You_Calibrating_Multi-modal_Representations_CVPR_2024_supplemental.pdf
null
LiSA: LiDAR Localization with Semantic Awareness
Bochun Yang, Zijun Li, Wen Li, Zhipeng Cai, Chenglu Wen, Yu Zang, Matthias Muller, Cheng Wang
LiDAR localization is a fundamental task in robotics and computer vision, which estimates the pose of a LiDAR point cloud within a global map. Scene Coordinate Regression (SCR) has demonstrated state-of-the-art performance in this task. In SCR, a scene is represented as a neural network that outputs the world coordinates for each point in the input point cloud. However, SCR treats all points equally during localization, ignoring the fact that not all objects are beneficial for localization. For example, dynamic objects and repeating structures often negatively impact SCR. To address this problem, we introduce LiSA, the first method that incorporates semantic awareness into SCR to boost localization robustness and accuracy. To avoid extra computation or network parameters during inference, we distill the knowledge from a segmentation model into the original SCR network. Experiments show the superior performance of LiSA on standard LiDAR localization benchmarks compared to state-of-the-art methods. Applying knowledge distillation not only preserves high efficiency but also achieves higher localization accuracy than introducing extra semantic segmentation modules. We also analyze the benefit of semantic information for LiDAR localization. Our code is released at https://github.com/Ybchun/LiSA.
https://openaccess.thecvf.com/content/CVPR2024/papers/Yang_LiSA_LiDAR_Localization_with_Semantic_Awareness_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Yang_LiSA_LiDAR_Localization_with_Semantic_Awareness_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Yang_LiSA_LiDAR_Localization_with_Semantic_Awareness_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yang_LiSA_LiDAR_Localization_CVPR_2024_supplemental.zip
null