title | authors | abstract | pdf | arXiv | bibtex | url | detail_url | tags | supp | |
---|---|---|---|---|---|---|---|---|---|---|
Improving Image Restoration through Removing Degradations in Textual Representations | Jingbo Lin, Zhilu Zhang, Yuxiang Wei, Dongwei Ren, Dongsheng Jiang, Qi Tian, Wangmeng Zuo | In this paper, we introduce a new perspective for improving image restoration by removing degradation in the textual representations of a given degraded image. Intuitively, restoration is much easier in the text modality than in the image one. For example, it can be easily conducted by removing degradation-related words while keeping the content-aware words. Hence, we combine the advantages of images in detail description and those of text in degradation removal to perform restoration. To address the cross-modal assistance, we propose to map the degraded images into textual representations for removing the degradations, and then convert the restored textual representations into a guidance image for assisting image restoration. In particular, we ingeniously embed an image-to-text mapper and text restoration module into CLIP-equipped text-to-image models to generate the guidance. Then we adopt a simple coarse-to-fine approach to dynamically inject multi-scale information from the guidance to image restoration networks. Extensive experiments are conducted on various image restoration tasks, including deblurring, dehazing, deraining, denoising, and all-in-one image restoration. The results showcase that our method outperforms state-of-the-art ones across all these tasks. The codes and models are available at https://github.com/mrluin/TextualDegRemoval. | https://openaccess.thecvf.com/content/CVPR2024/papers/Lin_Improving_Image_Restoration_through_Removing_Degradations_in_Textual_Representations_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.17334 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Lin_Improving_Image_Restoration_through_Removing_Degradations_in_Textual_Representations_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Lin_Improving_Image_Restoration_through_Removing_Degradations_in_Textual_Representations_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Lin_Improving_Image_Restoration_CVPR_2024_supplemental.pdf | null |
ZONE: Zero-Shot Instruction-Guided Local Editing | Shanglin Li, Bohan Zeng, Yutang Feng, Sicheng Gao, Xiuhui Liu, Jiaming Liu, Lin Li, Xu Tang, Yao Hu, Jianzhuang Liu, Baochang Zhang | Recent advances in vision-language models like Stable Diffusion have shown remarkable power in creative image synthesis and editing. However, most existing text-to-image editing methods encounter two obstacles: First, the text prompt needs to be carefully crafted to achieve good results, which is not intuitive or user-friendly. Second, they are insensitive to local edits and can irreversibly affect non-edited regions, leaving obvious editing traces. To tackle these problems, we propose a Zero-shot instructiON-guided local image Editing approach, termed ZONE. We first convert the editing intent from the user-provided instruction (e.g. "make his tie blue") into specific image editing regions through InstructPix2Pix. We then propose a Region-IoU scheme for precise image layer extraction from an off-the-shelf segmentation model. We further develop an edge smoother based on FFT for seamless blending between the layer and the image. Our method allows for arbitrary manipulation of a specific region with a single instruction while preserving the rest. Extensive experiments demonstrate that our ZONE achieves remarkable local editing results and user-friendliness, outperforming state-of-the-art methods. Code is available at https://github.com/lsl001006/ZONE. | https://openaccess.thecvf.com/content/CVPR2024/papers/Li_ZONE_Zero-Shot_Instruction-Guided_Local_Editing_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.16794 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Li_ZONE_Zero-Shot_Instruction-Guided_Local_Editing_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Li_ZONE_Zero-Shot_Instruction-Guided_Local_Editing_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_ZONE_Zero-Shot_Instruction-Guided_CVPR_2024_supplemental.pdf | null |
U-VAP: User-specified Visual Appearance Personalization via Decoupled Self Augmentation | You Wu, Kean Liu, Xiaoyue Mi, Fan Tang, Juan Cao, Jintao Li | Concept personalization methods enable large text-to-image models to learn specific subjects (e.g. objects/poses/3D models) and synthesize renditions in new contexts. Given that the image references are highly biased towards visual attributes, state-of-the-art personalization models tend to overfit the whole subject and cannot disentangle visual characteristics in pixel space. In this study, we propose a more challenging setting, namely fine-grained visual appearance personalization. Different from existing methods, we allow users to provide a sentence describing the desired attributes. A novel decoupled self-augmentation strategy is proposed to generate target-related and non-target samples to learn user-specified visual attributes. These augmented data allow for refining the model's understanding of the target attribute while mitigating the impact of unrelated attributes. At the inference stage, adjustments are conducted in the semantic space through the learned target and non-target embeddings to further enhance the disentanglement of target attributes. Extensive experiments on various kinds of visual attributes with SOTA personalization methods show the ability of the proposed method to mimic target visual appearance in novel contexts, thus improving the controllability and flexibility of personalization. | https://openaccess.thecvf.com/content/CVPR2024/papers/Wu_U-VAP_User-specified_Visual_Appearance_Personalization_via_Decoupled_Self_Augmentation_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Wu_U-VAP_User-specified_Visual_Appearance_Personalization_via_Decoupled_Self_Augmentation_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Wu_U-VAP_User-specified_Visual_Appearance_Personalization_via_Decoupled_Self_Augmentation_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wu_U-VAP_User-specified_Visual_CVPR_2024_supplemental.pdf | null |
PointBeV: A Sparse Approach for BeV Predictions | Loick Chambon, Eloi Zablocki, Mickaël Chen, Florent Bartoccioni, Patrick Pérez, Matthieu Cord | Bird's-eye View (BeV) representations have emerged as the de-facto shared space in driving applications offering a unified space for sensor data fusion and supporting various downstream tasks. However conventional models use grids with fixed resolution and range and face computational inefficiencies due to the uniform allocation of resources across all cells. To address this we propose PointBeV a novel sparse BeV segmentation model operating on sparse BeV cells instead of dense grids. This approach offers precise control over memory usage enabling the use of long temporal contexts and accommodating memory-constrained platforms. PointBeV employs an efficient two-pass strategy for training enabling focused computation on regions of interest. At inference time it can be used with various memory/performance trade-offs and flexibly adjusts to new specific use cases. PointBeV achieves state-of-the-art results on the nuScenes dataset for vehicle pedestrian and lane segmentation showcasing superior performance in static and temporal settings despite being trained solely with sparse signals. We release our code with two new efficient modules used in the architecture: Sparse Feature Pulling designed for the effective extraction of features from images to BeV and Submanifold Attention which enables efficient temporal modeling. The code is available at https://github.com/valeoai/PointBeV. | https://openaccess.thecvf.com/content/CVPR2024/papers/Chambon_PointBeV_A_Sparse_Approach_for_BeV_Predictions_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Chambon_PointBeV_A_Sparse_Approach_for_BeV_Predictions_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Chambon_PointBeV_A_Sparse_Approach_for_BeV_Predictions_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Chambon_PointBeV_A_Sparse_CVPR_2024_supplemental.pdf | null |
From-Ground-To-Objects: Coarse-to-Fine Self-supervised Monocular Depth Estimation of Dynamic Objects with Ground Contact Prior | null | null | null | null | null | https://openaccess.thecvf.com/content/CVPR2024/html/Moon_From-Ground-To-Objects_Coarse-to-Fine_Self-supervised_Monocular_Depth_Estimation_of_Dynamic_Objects_with_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Moon_From-Ground-To-Objects_Coarse-to-Fine_Self-supervised_Monocular_Depth_Estimation_of_Dynamic_Objects_with_CVPR_2024_paper.html | CVPR 2024 | null | null |
Linguistic-Aware Patch Slimming Framework for Fine-grained Cross-Modal Alignment | Zheren Fu, Lei Zhang, Hou Xia, Zhendong Mao | Cross-modal alignment aims to build a bridge connecting vision and language. It is an important multi-modal task that efficiently learns the semantic similarities between images and texts. Traditional fine-grained alignment methods heavily rely on pre-trained object detectors to extract region features for subsequent region-word alignment thereby incurring substantial computational costs for region detection and error propagation issues for two-stage training. In this paper we focus on the mainstream vision transformer incorporating patch features for patch-word alignment while addressing the resultant issue of visual patch redundancy and patch ambiguity for semantic alignment. We propose a novel Linguistic-Aware Patch Slimming (LAPS) framework for fine-grained alignment which explicitly identifies redundant visual patches with language supervision and rectifies their semantic and spatial information to facilitate more effective and consistent patch-word alignment. Extensive experiments on various evaluation benchmarks and model backbones show LAPS outperforms the state-of-the-art fine-grained alignment methods by 5%-15% rSum. Our code is available at https://github.com/CrossmodalGroup/LAPS | https://openaccess.thecvf.com/content/CVPR2024/papers/Fu_Linguistic-Aware_Patch_Slimming_Framework_for_Fine-grained_Cross-Modal_Alignment_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Fu_Linguistic-Aware_Patch_Slimming_Framework_for_Fine-grained_Cross-Modal_Alignment_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Fu_Linguistic-Aware_Patch_Slimming_Framework_for_Fine-grained_Cross-Modal_Alignment_CVPR_2024_paper.html | CVPR 2024 | null | null |
HHMR: Holistic Hand Mesh Recovery by Enhancing the Multimodal Controllability of Graph Diffusion Models | Mengcheng Li, Hongwen Zhang, Yuxiang Zhang, Ruizhi Shao, Tao Yu, Yebin Liu | Recent years have witnessed a trend of the deep integration of the generation and reconstruction paradigms. In this paper, we extend the ability of controllable generative models to a more comprehensive hand mesh recovery task: direct hand mesh generation, inpainting, reconstruction, and fitting in a single framework, which we name Holistic Hand Mesh Recovery (HHMR). Our key observation is that different kinds of hand mesh recovery tasks can be achieved by a single generative model with strong multimodal controllability, and in such a framework realizing different tasks only requires giving different signals as conditions. To achieve this goal, we propose an all-in-one diffusion framework based on graph convolution and attention mechanisms for holistic hand mesh recovery. In order to achieve strong control generation capability while ensuring the decoupling of multimodal control signals, we map different modalities to a shared feature space and apply cross-scale random masking at both modality and feature levels. In this way, the correlation between different modalities can be fully exploited during the learning of hand priors. Furthermore, we propose Condition-aligned Gradient Guidance to enhance the alignment of the generated model with the control signals, which significantly improves the accuracy of the hand mesh reconstruction and fitting. Experiments show that our novel framework can realize multiple hand mesh recovery tasks simultaneously and outperform the existing methods in different tasks, which provides more possibilities for subsequent downstream applications, including gesture recognition, pose generation, mesh editing, and so on. | https://openaccess.thecvf.com/content/CVPR2024/papers/Li_HHMR_Holistic_Hand_Mesh_Recovery_by_Enhancing_the_Multimodal_Controllability_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Li_HHMR_Holistic_Hand_Mesh_Recovery_by_Enhancing_the_Multimodal_Controllability_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Li_HHMR_Holistic_Hand_Mesh_Recovery_by_Enhancing_the_Multimodal_Controllability_CVPR_2024_paper.html | CVPR 2024 | null | null |
SRTube: Video-Language Pre-Training with Action-Centric Video Tube Features and Semantic Role Labeling | Ju-Hee Lee, Je-Won Kang | In recent years, large-scale video-language pre-training (VidLP) has received considerable attention for its effectiveness in relevant tasks. In this paper, we propose a novel action-centric VidLP framework that employs video tube features for temporal modeling and language features based on semantic role labeling (SRL). Our video encoder generates multiple tube features along object trajectories, identifying action-related regions within videos to overcome the limitations of existing temporal attention mechanisms. Additionally, our text encoder incorporates high-level action-related language knowledge, previously underutilized in current VidLP models. The SRL captures action-verbs and related semantics among objects in sentences and enhances the ability to perform instance-level text matching, thus enriching the cross-modal (CM) alignment process. We also introduce two novel pre-training objectives and a self-supervision strategy to produce a more faithful CM representation. Experimental results demonstrate that our method outperforms existing VidLP frameworks in various downstream tasks and datasets, establishing our model as a baseline in the modern VidLP framework. | https://openaccess.thecvf.com/content/CVPR2024/papers/Lee_SRTube_Video-Language_Pre-Training_with_Action-Centric_Video_Tube_Features_and_Semantic_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Lee_SRTube_Video-Language_Pre-Training_with_Action-Centric_Video_Tube_Features_and_Semantic_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Lee_SRTube_Video-Language_Pre-Training_with_Action-Centric_Video_Tube_Features_and_Semantic_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Lee_SRTube_Video-Language_Pre-Training_CVPR_2024_supplemental.pdf | null |
Prompt Highlighter: Interactive Control for Multi-Modal LLMs | Yuechen Zhang, Shengju Qian, Bohao Peng, Shu Liu, Jiaya Jia | This study targets a critical aspect of multi-modal LLMs' (LLMs&VLMs) inference: explicit controllable text generation. Multi-modal LLMs empower multi-modality understanding with the capability of semantic generation yet bring less explainability and heavier reliance on prompt contents due to their autoregressive generative nature. While manipulating prompt formats could improve outputs designing specific and precise prompts per task can be challenging and ineffective. To tackle this issue we introduce a novel inference method Prompt Highlighter which enables users to highlight specific prompt spans to interactively control the focus during generation. Motivated by the classifier-free diffusion guidance we form regular and unconditional context pairs based on highlighted tokens demonstrating that the autoregressive generation in models can be guided in a classifier-free way. Notably we find that during inference guiding the models with highlighted tokens through the attention weights leads to more desired outputs. Our approach is compatible with current LLMs and VLMs achieving impressive customized generation results without training. Experiments confirm its effectiveness in focusing on input contexts and generating reliable content. Without tuning on LLaVA-v1.5 our method secured 70.7 in the MMBench test and 1552.5 in MME-perception. | https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_Prompt_Highlighter_Interactive_Control_for_Multi-Modal_LLMs_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.04302 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Prompt_Highlighter_Interactive_Control_for_Multi-Modal_LLMs_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Prompt_Highlighter_Interactive_Control_for_Multi-Modal_LLMs_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhang_Prompt_Highlighter_Interactive_CVPR_2024_supplemental.pdf | null |
Domain-Rectifying Adapter for Cross-Domain Few-Shot Segmentation | null | null | null | null | null | https://openaccess.thecvf.com/content/CVPR2024/html/Su_Domain-Rectifying_Adapter_for_Cross-Domain_Few-Shot_Segmentation_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Su_Domain-Rectifying_Adapter_for_Cross-Domain_Few-Shot_Segmentation_CVPR_2024_paper.html | CVPR 2024 | null | null |
Robust Self-calibration of Focal Lengths from the Fundamental Matrix | Viktor Kocur, Daniel Kyselica, Zuzana Kukelova | The problem of self-calibration of two cameras from a given fundamental matrix is one of the basic problems in geometric computer vision. Under the assumption of known principal points and square pixels the Bougnoux formula offers a means to compute the two unknown focal lengths. However in many practical situations the formula yields inaccurate results due to commonly occurring singularities. Moreover the estimates are sensitive to noise in the computed fundamental matrix and to the assumed positions of the principal points. In this paper we therefore propose an efficient and robust iterative method to estimate the focal lengths along with the principal points of the cameras given a fundamental matrix and priors for the estimated camera intrinsics. In addition we study a computationally efficient check of models generated within RANSAC that improves the accuracy of the estimated models while reducing the total computational time. Extensive experiments on real and synthetic data show that our iterative method brings significant improvements in terms of the accuracy of the estimated focal lengths over the Bougnoux formula and other state-of-the-art methods even when relying on inaccurate priors. The code for the methods and experiments is available at https://github.com/kocurvik/robust_self_calibration | https://openaccess.thecvf.com/content/CVPR2024/papers/Kocur_Robust_Self-calibration_of_Focal_Lengths_from_the_Fundamental_Matrix_CVPR_2024_paper.pdf | http://arxiv.org/abs/2311.16304 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Kocur_Robust_Self-calibration_of_Focal_Lengths_from_the_Fundamental_Matrix_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Kocur_Robust_Self-calibration_of_Focal_Lengths_from_the_Fundamental_Matrix_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Kocur_Robust_Self-calibration_of_CVPR_2024_supplemental.pdf | null |
Continual Learning for Motion Prediction Model via Meta-Representation Learning and Optimal Memory Buffer Retention Strategy | DaeJun Kang, Dongsuk Kum, Sanmin Kim | Embodied AI such as autonomous vehicles suffers from insufficient long-tailed data because it must be obtained from the physical world. In fact, data must be continuously obtained in a series of small batches, and the model must also be continuously trained to achieve generalizability and scalability by improving the biased data distribution. This paper addresses the training cost and catastrophic forgetting problems when continuously updating models to adapt to incoming small batches from various environments for real-world motion prediction in autonomous driving. To this end, we propose a novel continual motion prediction (CMP) learning framework based on sparse meta-representation learning and an optimal memory buffer retention strategy. In meta-representation learning, a model explicitly learns a sparse representation of each driving environment, from road geometry to vehicle states, by training to reduce catastrophic forgetting based on an augmented modulation network with sparsity regularization. Also, in the adaptation phase, we develop an Optimal Memory Buffer Retention strategy that smartly preserves diverse samples by focusing on representation similarity. This approach handles the nuanced task distribution shifts characteristic of motion prediction datasets, ensuring our model stays responsive to evolving input variations without requiring extensive resources. The experimental results demonstrate that the proposed method shows superior adaptation performance to the conventional continual learning approach, which is developed using a synthetic dataset for the continual learning problem. | https://openaccess.thecvf.com/content/CVPR2024/papers/Kang_Continual_Learning_for_Motion_Prediction_Model_via_Meta-Representation_Learning_and_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Kang_Continual_Learning_for_Motion_Prediction_Model_via_Meta-Representation_Learning_and_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Kang_Continual_Learning_for_Motion_Prediction_Model_via_Meta-Representation_Learning_and_CVPR_2024_paper.html | CVPR 2024 | null | null |
PartDistill: 3D Shape Part Segmentation by Vision-Language Model Distillation | Ardian Umam, Cheng-Kun Yang, Min-Hung Chen, Jen-Hui Chuang, Yen-Yu Lin | This paper proposes a cross-modal distillation framework PartDistill which transfers 2D knowledge from vision-language models (VLMs) to facilitate 3D shape part segmentation. PartDistill addresses three major challenges in this task: the lack of 3D segmentation in invisible or undetected regions in the 2D projections inconsistent 2D predictions by VLMs and the lack of knowledge accumulation across different 3D shapes. PartDistill consists of a teacher network that uses a VLM to make 2D predictions and a student network that learns from the 2D predictions while extracting geometrical features from multiple 3D shapes to carry out 3D part segmentation. A bi-directional distillation including forward and backward distillations is carried out within the framework where the former forward distills the 2D predictions to the student network and the latter improves the quality of the 2D predictions which subsequently enhances the final 3D segmentation. Moreover PartDistill can exploit generative models that facilitate effortless 3D shape creation for generating knowledge sources to be distilled. Through extensive experiments PartDistill boosts the existing methods with substantial margins on widely used ShapeNetPart and PartNetE datasets by more than 15% and 12% higher mIoU scores respectively. The code for this work is available at https://github.com/ardianumam/PartDistill. | https://openaccess.thecvf.com/content/CVPR2024/papers/Umam_PartDistill_3D_Shape_Part_Segmentation_by_Vision-Language_Model_Distillation_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.04016 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Umam_PartDistill_3D_Shape_Part_Segmentation_by_Vision-Language_Model_Distillation_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Umam_PartDistill_3D_Shape_Part_Segmentation_by_Vision-Language_Model_Distillation_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Umam_PartDistill_3D_Shape_CVPR_2024_supplemental.pdf | null |
CPP-Net: Embracing Multi-Scale Feature Fusion into Deep Unfolding CP-PPA Network for Compressive Sensing | Zhen Guo, Hongping Gan | In the domain of compressive sensing (CS) deep unfolding networks (DUNs) have garnered attention for their good performance and certain degree of interpretability rooted in CS domain achieved by marrying traditional optimization solvers with deep networks. However current DUNs are ill-suited for the intricate task of capturing fine-grained image details leading to perceptible distortions and blurriness in reconstructed images particularly at low CS ratios e.g. 0.10 and below. In this paper we propose CPP-Net a novel deep unfolding CS framework inspired by the primal-dual hybrid strategy of the Chambolle and Pock Proximal Point Algorithm (CP-PPA). First we derive three iteration submodules Xk Vk and Yk by incorporating customized deep learning modules to solve the sparse basis related proximal operator within CP-PPA. Second we design the Dual Path Fusion Block (DPFB) to adeptly extract and fuse multi-scale feature information enhancing sensitivity to feature information at different scales and improving detail reconstruction. Third we introduce the Iteration Fusion Strategy (IFS) to effectively weight the fusion of outputs from diverse reconstruction stages maximizing the utilization of feature information and mitigating the information loss during reconstruction stages. Extensive experiments demonstrate that CPP-Net effectively reduces distortion and blurriness while preserving richer image details outperforming current state-of-the-art methods. Codes are available at https://github.com/ICSResearch/CPP-Net. | https://openaccess.thecvf.com/content/CVPR2024/papers/Guo_CPP-Net_Embracing_Multi-Scale_Feature_Fusion_into_Deep_Unfolding_CP-PPA_Network_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Guo_CPP-Net_Embracing_Multi-Scale_Feature_Fusion_into_Deep_Unfolding_CP-PPA_Network_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Guo_CPP-Net_Embracing_Multi-Scale_Feature_Fusion_into_Deep_Unfolding_CP-PPA_Network_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Guo_CPP-Net_Embracing_Multi-Scale_CVPR_2024_supplemental.pdf | null |
EditGuard: Versatile Image Watermarking for Tamper Localization and Copyright Protection | Xuanyu Zhang, Runyi Li, Jiwen Yu, Youmin Xu, Weiqi Li, Jian Zhang | In the era of AI-generated content (AIGC) malicious tampering poses imminent threats to copyright integrity and information security. Current deep image watermarking while widely accepted for safeguarding visual content can only protect copyright and ensure traceability. They fall short in localizing increasingly realistic image tampering potentially leading to trust crises privacy violations and legal disputes. To solve this challenge we propose an innovative proactive forensics framework EditGuard to unify copyright protection and tamper-agnostic localization especially for AIGC-based editing methods. It can offer a meticulous embedding of imperceptible watermarks and precise decoding of tampered areas and copyright information. Leveraging our observed fragility and locality of image-into-image steganography the realization of EditGuard can be converted into a united image-bit steganography issue thus completely decoupling the training process from the tampering types. Extensive experiments verify that our EditGuard balances the tamper localization accuracy copyright recovery precision and generalizability to various AIGC-based tampering methods especially for image forgery that is difficult for the naked eye to detect. | https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_EditGuard_Versatile_Image_Watermarking_for_Tamper_Localization_and_Copyright_Protection_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.08883 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_EditGuard_Versatile_Image_Watermarking_for_Tamper_Localization_and_Copyright_Protection_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_EditGuard_Versatile_Image_Watermarking_for_Tamper_Localization_and_Copyright_Protection_CVPR_2024_paper.html | CVPR 2024 | null | null |
3DGStream: On-the-Fly Training of 3D Gaussians for Efficient Streaming of Photo-Realistic Free-Viewpoint Videos | Jiakai Sun, Han Jiao, Guangyuan Li, Zhanjie Zhang, Lei Zhao, Wei Xing | Constructing photo-realistic Free-Viewpoint Videos (FVVs) of dynamic scenes from multi-view videos remains a challenging endeavor. Despite the remarkable advancements achieved by current neural rendering techniques these methods generally require complete video sequences for offline training and are not capable of real-time rendering. To address these constraints we introduce 3DGStream a method designed for efficient FVV streaming of real-world dynamic scenes. Our method achieves fast on-the-fly per-frame reconstruction within 12 seconds and real-time rendering at 200 FPS. Specifically we utilize 3D Gaussians (3DGs) to represent the scene. Instead of the naive approach of directly optimizing 3DGs per-frame we employ a compact Neural Transformation Cache (NTC) to model the translations and rotations of 3DGs markedly reducing the training time and storage required for each FVV frame. Furthermore we propose an adaptive 3DG addition strategy to handle emerging objects in dynamic scenes. Experiments demonstrate that 3DGStream achieves competitive performance in terms of rendering speed image quality training time and model storage when compared with state-of-the-art methods. | https://openaccess.thecvf.com/content/CVPR2024/papers/Sun_3DGStream_On-the-Fly_Training_of_3D_Gaussians_for_Efficient_Streaming_of_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.01444 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Sun_3DGStream_On-the-Fly_Training_of_3D_Gaussians_for_Efficient_Streaming_of_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Sun_3DGStream_On-the-Fly_Training_of_3D_Gaussians_for_Efficient_Streaming_of_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Sun_3DGStream_On-the-Fly_Training_CVPR_2024_supplemental.pdf | null |
FairRAG: Fair Human Generation via Fair Retrieval Augmentation | Robik Shrestha, Yang Zou, Qiuyu Chen, Zhiheng Li, Yusheng Xie, Siqi Deng | Existing text-to-image generative models reflect or even amplify societal biases ingrained in their training data. This is especially concerning for human image generation where models are biased against certain demographic groups. Existing attempts to rectify this issue are hindered by the inherent limitations of the pre-trained models and fail to substantially improve demographic diversity. In this work we introduce Fair Retrieval Augmented Generation (FairRAG) a novel framework that conditions pre-trained generative models on reference images retrieved from an external image database to improve fairness in human generation. FairRAG enables conditioning through a lightweight linear module that projects reference images into the textual space. To enhance fairness FairRAG applies simple-yet-effective debiasing strategies providing images from diverse demographic groups during the generative process. Extensive experiments demonstrate that FairRAG outperforms existing methods in terms of demographic diversity image-text alignment and image fidelity while incurring minimal computational overhead during inference. | https://openaccess.thecvf.com/content/CVPR2024/papers/Shrestha_FairRAG_Fair_Human_Generation_via_Fair_Retrieval_Augmentation_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.19964 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Shrestha_FairRAG_Fair_Human_Generation_via_Fair_Retrieval_Augmentation_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Shrestha_FairRAG_Fair_Human_Generation_via_Fair_Retrieval_Augmentation_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Shrestha_FairRAG_Fair_Human_CVPR_2024_supplemental.pdf | null |
DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing | Yujun Shi, Chuhui Xue, Jun Hao Liew, Jiachun Pan, Hanshu Yan, Wenqing Zhang, Vincent Y. F. Tan, Song Bai | Accurate and controllable image editing is a challenging task that has attracted significant attention recently. Notably DragGAN developed by Pan et al. (2023) is an interactive point-based image editing framework that achieves impressive editing results with pixel-level precision. However due to its reliance on generative adversarial networks (GANs) its generality is limited by the capacity of pretrained GAN models. In this work we extend this editing framework to diffusion models and propose a novel approach DragDiffusion. By harnessing large-scale pretrained diffusion models we greatly enhance the applicability of interactive point-based editing on both real and diffusion-generated images. Unlike other diffusion-based editing methods that provide guidance on diffusion latents of multiple time steps our approach achieves efficient yet accurate spatial control by optimizing the latent of only one time step. This novel design is motivated by our observations that UNet features at a specific time step provides sufficient semantic and geometric information to support the drag-based editing. Moreover we introduce two additional techniques namely identity-preserving fine-tuning and reference-latent-control to further preserve the identity of the original image. Lastly we present a challenging benchmark dataset called DragBench---the first benchmark to evaluate the performance of interactive point-based image editing methods. Experiments across a wide range of challenging cases (e.g. images with multiple objects diverse object categories various styles etc.) demonstrate the versatility and generality of DragDiffusion. Code and the DragBench dataset: https://github.com/Yujun-Shi/DragDiffusion. | https://openaccess.thecvf.com/content/CVPR2024/papers/Shi_DragDiffusion_Harnessing_Diffusion_Models_for_Interactive_Point-based_Image_Editing_CVPR_2024_paper.pdf | http://arxiv.org/abs/2306.14435 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Shi_DragDiffusion_Harnessing_Diffusion_Models_for_Interactive_Point-based_Image_Editing_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Shi_DragDiffusion_Harnessing_Diffusion_Models_for_Interactive_Point-based_Image_Editing_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Shi_DragDiffusion_Harnessing_Diffusion_CVPR_2024_supplemental.pdf | null |
FaceTalk: Audio-Driven Motion Diffusion for Neural Parametric Head Models | Shivangi Aneja, Justus Thies, Angela Dai, Matthias Nießner | We introduce FaceTalk a novel generative approach designed for synthesizing high-fidelity 3D motion sequences of talking human heads from input audio signal. To capture the expressive detailed nature of human heads including hair ears and finer-scale eye movements we propose to couple speech signal with the latent space of neural parametric head models to create high-fidelity temporally coherent motion sequences. We propose a new latent diffusion model for this task operating in the expression space of neural parametric head models to synthesize audio-driven realistic head sequences. In the absence of a dataset with corresponding NPHM expressions to audio we optimize for these correspondences to produce a dataset of temporally-optimized NPHM expressions fit to audio-video recordings of people talking. To the best of our knowledge this is the first work to propose a generative approach for realistic and high-quality motion synthesis of volumetric human heads representing a significant advancement in the field of audio-driven 3D animation. Notably our approach stands out in its ability to generate plausible motion sequences that can produce high-fidelity head animation coupled with the NPHM shape space. Our experimental results substantiate the effectiveness of FaceTalk consistently achieving superior and visually natural motion encompassing diverse facial expressions and styles outperforming existing methods by 75% in perceptual user study evaluation | https://openaccess.thecvf.com/content/CVPR2024/papers/Aneja_FaceTalk_Audio-Driven_Motion_Diffusion_for_Neural_Parametric_Head_Models_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.08459 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Aneja_FaceTalk_Audio-Driven_Motion_Diffusion_for_Neural_Parametric_Head_Models_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Aneja_FaceTalk_Audio-Driven_Motion_Diffusion_for_Neural_Parametric_Head_Models_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Aneja_FaceTalk_Audio-Driven_Motion_CVPR_2024_supplemental.pdf | null |
Mip-Splatting: Alias-free 3D Gaussian Splatting | Zehao Yu, Anpei Chen, Binbin Huang, Torsten Sattler, Andreas Geiger | Recently, 3D Gaussian Splatting has demonstrated impressive novel view synthesis results, reaching high fidelity and efficiency. However, strong artifacts can be observed when changing the sampling rate, e.g. by changing the focal length or camera distance. We find that the source for this phenomenon can be attributed to the lack of 3D frequency constraints and the usage of a 2D dilation filter. To address this problem, we introduce a 3D smoothing filter that constrains the size of the 3D Gaussian primitives based on the maximal sampling frequency induced by the input views. It eliminates high-frequency artifacts when zooming in. Moreover, replacing 2D dilation with a 2D Mip filter, which simulates a 2D box filter, effectively mitigates aliasing and dilation issues. Our evaluation, including scenarios such as training on single-scale images and testing on multiple scales, validates the effectiveness of our approach. | https://openaccess.thecvf.com/content/CVPR2024/papers/Yu_Mip-Splatting_Alias-free_3D_Gaussian_Splatting_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Yu_Mip-Splatting_Alias-free_3D_Gaussian_Splatting_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Yu_Mip-Splatting_Alias-free_3D_Gaussian_Splatting_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yu_Mip-Splatting_Alias-free_3D_CVPR_2024_supplemental.pdf | null |
Learning Coupled Dictionaries from Unpaired Data for Image Super-Resolution | Longguang Wang, Juncheng Li, Yingqian Wang, Qingyong Hu, Yulan Guo | The difficulty of acquiring high-resolution (HR) and low-resolution (LR) image pairs in real scenarios limits the performance of existing learning-based image super-resolution (SR) methods in the real world. To conduct training on real-world unpaired data current methods focus on synthesizing pseudo LR images to associate unpaired images. However the realness and diversity of pseudo LR images are vulnerable due to the large image space. In this paper we circumvent the difficulty of image generation and propose an alternative to build the connection between unpaired images in a compact proxy space. Specifically we first construct coupled HR and LR dictionaries and then encode HR and LR images into a common latent code space using these dictionaries. In addition we develop an autoencoder-based framework to couple these dictionaries during optimization by reconstructing input HR and LR images. The coupled dictionaries enable our method to employ a shallow network architecture with only 18 layers to achieve efficient image SR. Extensive experiments show that our method (DictSR) can effectively model the LR-to-HR mapping in coupled dictionaries and produces state-of-the-art performance on benchmark datasets. | https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_Learning_Coupled_Dictionaries_from_Unpaired_Data_for_Image_Super-Resolution_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Wang_Learning_Coupled_Dictionaries_from_Unpaired_Data_for_Image_Super-Resolution_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Wang_Learning_Coupled_Dictionaries_from_Unpaired_Data_for_Image_Super-Resolution_CVPR_2024_paper.html | CVPR 2024 | null | null |
Template Free Reconstruction of Human-object Interaction with Procedural Interaction Generation | Xianghui Xie, Bharat Lal Bhatnagar, Jan Eric Lenssen, Gerard Pons-Moll | Reconstructing human-object interaction in 3D from a single RGB image is a challenging task and existing data driven methods do not generalize beyond the objects present in the carefully curated 3D interaction datasets. Capturing large-scale real data to learn strong interaction and 3D shape priors is very expensive due to the combinatorial nature of human-object interactions. In this paper we propose ProciGen (Procedural interaction Generation) a method to procedurally generate datasets with both plausible interaction and diverse object variation. We generate 1M+ human-object interaction pairs in 3D and leverage this large-scale data to train our HDM (Hierarchical Diffusion Model) a novel method to reconstruct interacting human and unseen object instances without any templates. Our HDM is an image-conditioned diffusion model that learns both realistic interaction and highly accurate human and object shapes. Experiments show that our HDM trained with ProciGen significantly outperforms prior methods that require template meshes and our dataset allows training methods with strong generalization ability to unseen object instances. Our code and data are released. | https://openaccess.thecvf.com/content/CVPR2024/papers/Xie_Template_Free_Reconstruction_of_Human-object_Interaction_with_Procedural_Interaction_Generation_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.07063 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Xie_Template_Free_Reconstruction_of_Human-object_Interaction_with_Procedural_Interaction_Generation_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Xie_Template_Free_Reconstruction_of_Human-object_Interaction_with_Procedural_Interaction_Generation_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Xie_Template_Free_Reconstruction_CVPR_2024_supplemental.pdf | null |
Deep Video Inverse Tone Mapping Based on Temporal Clues | Yuyao Ye, Ning Zhang, Yang Zhao, Hongbin Cao, Ronggang Wang | Inverse tone mapping (ITM) aims to reconstruct high dynamic range (HDR) radiance from low dynamic range (LDR) content. Although many deep image ITM methods can generate impressive results the field of video ITM is still to be explored. Processing video sequences by image ITM methods may cause temporal inconsistency. Besides they aren't able to exploit the potentially useful information in the temporal domain. In this paper we analyze the process of video filming and then propose a Global Sample and Local Propagate strategy to better find and utilize temporal clues. To better realize the proposed strategy we design a two-stage pipeline which includes modules named Incremental Clue Aggregation Module and Feature and Clue Propagation Module. They can align and fuse frames effectively under the condition of brightness changes and propagate features and temporal clues to all frames efficiently. Our temporal clues based video ITM method can recover realistic and temporal consistent results with high fidelity in over-exposed regions. Qualitative and quantitative experiments on public datasets show that the proposed method has significant advantages over existing methods. | https://openaccess.thecvf.com/content/CVPR2024/papers/Ye_Deep_Video_Inverse_Tone_Mapping_Based_on_Temporal_Clues_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Ye_Deep_Video_Inverse_Tone_Mapping_Based_on_Temporal_Clues_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Ye_Deep_Video_Inverse_Tone_Mapping_Based_on_Temporal_Clues_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ye_Deep_Video_Inverse_CVPR_2024_supplemental.zip | null |
NeRF-HuGS: Improved Neural Radiance Fields in Non-static Scenes Using Heuristics-Guided Segmentation | Jiahao Chen, Yipeng Qin, Lingjie Liu, Jiangbo Lu, Guanbin Li | Neural Radiance Field (NeRF) has been widely recognized for its excellence in novel view synthesis and 3D scene reconstruction. However their effectiveness is inherently tied to the assumption of static scenes rendering them susceptible to undesirable artifacts when confronted with transient distractors such as moving objects or shadows. In this work we propose a novel paradigm namely "Heuristics-Guided Segmentation" (HuGS) which significantly enhances the separation of static scenes from transient distractors by harmoniously combining the strengths of hand-crafted heuristics and state-of-the-art segmentation models thus significantly transcending the limitations of previous solutions. Furthermore we delve into the meticulous design of heuristics introducing a seamless fusion of Structure-from-Motion (SfM)-based heuristics and color residual heuristics catering to a diverse range of texture profiles. Extensive experiments demonstrate the superiority and robustness of our method in mitigating transient distractors for NeRFs trained in non-static scenes. Project page: https://cnhaox.github.io/NeRF-HuGS/ | https://openaccess.thecvf.com/content/CVPR2024/papers/Chen_NeRF-HuGS_Improved_Neural_Radiance_Fields_in_Non-static_Scenes_Using_Heuristics-Guided_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Chen_NeRF-HuGS_Improved_Neural_Radiance_Fields_in_Non-static_Scenes_Using_Heuristics-Guided_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Chen_NeRF-HuGS_Improved_Neural_Radiance_Fields_in_Non-static_Scenes_Using_Heuristics-Guided_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Chen_NeRF-HuGS_Improved_Neural_CVPR_2024_supplemental.zip | null |
Addressing Background Context Bias in Few-Shot Segmentation through Iterative Modulation | Lanyun Zhu, Tianrun Chen, Jianxiong Yin, Simon See, Jun Liu | Existing few-shot segmentation methods usually extract foreground prototypes from support images to guide query image segmentation. However different background contexts of support and query images can cause their foreground features to be misaligned. This phenomenon known as background context bias can hinder the effectiveness of support prototypes in guiding query image segmentation. In this work we propose a novel framework with an iterative structure to address this problem. In each iteration of the framework we first generate a query prediction based on a support foreground feature. Next we extract background context from the query image to modulate the support foreground feature thus eliminating the foreground feature misalignment caused by the different backgrounds. After that we design a confidence-biased attention to eliminate noise and cleanse information. By integrating these components through an iterative structure we create a novel network that can leverage the synergies between different modules to improve their performance in a mutually reinforcing manner. Through these carefully designed components and structures our network can effectively eliminate background context bias in few-shot segmentation thus achieving outstanding performance. We conduct extensive experiments on the PASCAL-5^ i and COCO-20^ i datasets and achieve state-of-the-art (SOTA) results which demonstrate the effectiveness of our approach. | https://openaccess.thecvf.com/content/CVPR2024/papers/Zhu_Addressing_Background_Context_Bias_in_Few-Shot_Segmentation_through_Iterative_Modulation_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Zhu_Addressing_Background_Context_Bias_in_Few-Shot_Segmentation_through_Iterative_Modulation_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Zhu_Addressing_Background_Context_Bias_in_Few-Shot_Segmentation_through_Iterative_Modulation_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhu_Addressing_Background_Context_CVPR_2024_supplemental.pdf | null |
Open-Vocabulary Video Anomaly Detection | Peng Wu, Xuerong Zhou, Guansong Pang, Yujia Sun, Jing Liu, Peng Wang, Yanning Zhang | Current video anomaly detection (VAD) approaches with weak supervisions are inherently limited to a closed-set setting and may struggle in open-world applications where there can be anomaly categories in the test data unseen during training. A few recent studies attempt to tackle a more realistic setting open-set VAD which aims to detect unseen anomalies given seen anomalies and normal videos. However such a setting focuses on predicting frame anomaly scores having no ability to recognize the specific categories of anomalies despite the fact that this ability is essential for building more informed video surveillance systems. This paper takes a step further and explores open-vocabulary video anomaly detection (OVVAD) in which we aim to leverage pre-trained large models to detect and categorize seen and unseen anomalies. To this end we propose a model that decouples OVVAD into two mutually complementary tasks - class-agnostic detection and class-specific classification - and jointly optimizes both tasks. Particularly we devise a semantic knowledge injection module to introduce semantic knowledge from large language models for the detection task and design a novel anomaly synthesis module to generate pseudo unseen anomaly videos with the help of large vision generation models for the classification task. These semantic knowledge and synthesis anomalies substantially extend our model's capability in detecting and categorizing a variety of seen and unseen anomalies. Extensive experiments on three widely-used benchmarks demonstrate our model achieves state-of-the-art performance on OVVAD task. | https://openaccess.thecvf.com/content/CVPR2024/papers/Wu_Open-Vocabulary_Video_Anomaly_Detection_CVPR_2024_paper.pdf | http://arxiv.org/abs/2311.07042 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Wu_Open-Vocabulary_Video_Anomaly_Detection_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Wu_Open-Vocabulary_Video_Anomaly_Detection_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wu_Open-Vocabulary_Video_Anomaly_CVPR_2024_supplemental.pdf | null |
ODM: A Text-Image Further Alignment Pre-training Approach for Scene Text Detection and Spotting | Chen Duan, Pei Fu, Shan Guo, Qianyi Jiang, Xiaoming Wei | In recent years, text-image joint pre-training techniques have shown promising results in various tasks. However, in Optical Character Recognition (OCR) tasks, aligning text instances with their corresponding text regions in images poses a challenge, as it requires effective alignment between text and OCR-Text (referring to the text in images as OCR-Text to distinguish it from the text in natural language) rather than a holistic understanding of the overall image content. In this paper, we propose a new pre-training method called OCR-Text Destylization Modeling (ODM) that transfers diverse styles of text found in images to a uniform style based on the text prompt. With ODM, we achieve better alignment between text and OCR-Text and enable pre-trained models to adapt to the complex and diverse styles of scene text detection and spotting tasks. Additionally, we have designed a new label generation method specifically for ODM and combined it with our proposed Text-Controller module to address the challenge of annotation costs in OCR tasks, allowing a larger amount of unlabeled data to participate in pre-training. Extensive experiments on multiple public datasets demonstrate that our method significantly improves performance and outperforms current pre-training methods in scene text detection and spotting tasks. Code is available at https://github.com/PriNing/ODM. | https://openaccess.thecvf.com/content/CVPR2024/papers/Duan_ODM_A_Text-Image_Further_Alignment_Pre-training_Approach_for_Scene_Text_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.00303 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Duan_ODM_A_Text-Image_Further_Alignment_Pre-training_Approach_for_Scene_Text_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Duan_ODM_A_Text-Image_Further_Alignment_Pre-training_Approach_for_Scene_Text_CVPR_2024_paper.html | CVPR 2024 | null | null |
TiNO-Edit: Timestep and Noise Optimization for Robust Diffusion-Based Image Editing | Sherry X Chen, Yaron Vaxman, Elad Ben Baruch, David Asulin, Aviad Moreshet, Kuo-Chin Lien, Misha Sra, Pradeep Sen | Despite many attempts to leverage pre-trained text-to-image models (T2I) like Stable Diffusion (SD) for controllable image editing producing good predictable results remains a challenge. Previous approaches have focused on either fine-tuning pre-trained T2I models on specific datasets to generate certain kinds of images (e.g. with a specific object or person) or on optimizing the weights text prompts and/or learning features for each input image in an attempt to coax the image generator to produce the desired result. However these approaches all have shortcomings and fail to produce good results in a predictable and controllable manner. To address this problem we present TiNO-Edit an SD-based method that focuses on optimizing the noise patterns and diffusion timesteps during editing something previously unexplored in the literature. With this simple change we are able to generate results that both better align with the original images and reflect the desired result. Furthermore we propose a set of new loss functions that operate in the latent domain of SD greatly speeding up the optimization when compared to prior losses which operate in the pixel domain. Our method can be easily applied to variations of SD including Textual Inversion and DreamBooth that encode new concepts and incorporate them into the edited results. We present a host of image-editing capabilities enabled by our approach. Our code is publicly available at https://github.com/SherryXTChen/TiNO-Edit. | https://openaccess.thecvf.com/content/CVPR2024/papers/Chen_TiNO-Edit_Timestep_and_Noise_Optimization_for_Robust_Diffusion-Based_Image_Editing_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Chen_TiNO-Edit_Timestep_and_Noise_Optimization_for_Robust_Diffusion-Based_Image_Editing_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Chen_TiNO-Edit_Timestep_and_Noise_Optimization_for_Robust_Diffusion-Based_Image_Editing_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Chen_TiNO-Edit_Timestep_and_CVPR_2024_supplemental.pdf | null |
Epistemic Uncertainty Quantification For Pre-Trained Neural Networks | Hanjing Wang, Qiang Ji | Epistemic uncertainty quantification (UQ) identifies where models lack knowledge. Traditional UQ methods often based on Bayesian neural networks are not suitable for pre-trained non-Bayesian models. Our study addresses quantifying epistemic uncertainty for any pre-trained model which does not need the original training data or model modifications and can ensure broad applicability regardless of network architectures or training techniques. Specifically we propose a gradient-based approach to assess epistemic uncertainty analyzing the gradients of outputs relative to model parameters and thereby indicating necessary model adjustments to accurately represent the inputs. We first explore theoretical guarantees of gradient-based methods for epistemic UQ questioning the view that this uncertainty is only calculable through differences between multiple models. We further improve gradient-driven UQ by using class-specific weights for integrating gradients and emphasizing distinct contributions from neural network layers. Additionally we enhance UQ accuracy by combining gradient and perturbation methods to refine the gradients. We evaluate our approach on out-of-distribution detection uncertainty calibration and active learning demonstrating its superiority over current state-of-the-art UQ methods for pre-trained models. | https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_Epistemic_Uncertainty_Quantification_For_Pre-Trained_Neural_Networks_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.10124 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Wang_Epistemic_Uncertainty_Quantification_For_Pre-Trained_Neural_Networks_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Wang_Epistemic_Uncertainty_Quantification_For_Pre-Trained_Neural_Networks_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wang_Epistemic_Uncertainty_Quantification_CVPR_2024_supplemental.pdf | null |
Diffusion-ES: Gradient-free Planning with Diffusion for Autonomous and Instruction-guided Driving | Brian Yang, Huangyuan Su, Nikolaos Gkanatsios, Tsung-Wei Ke, Ayush Jain, Jeff Schneider, Katerina Fragkiadaki | Diffusion models excel at modeling complex and multimodal trajectory distributions for decision-making and control. Reward-gradient guided denoising has been recently proposed to generate trajectories that maximize both a differentiable reward function and the likelihood under the data distribution captured by a diffusion model. Reward-gradient guided denoising requires a differentiable reward function fitted to both clean and noised samples limiting its applicability as a general trajectory optimizer. In this paper we propose Diffusion-ES a method that combines gradient-free optimization with trajectory denoising to optimize black-box non-differentiable objectives while staying in the data manifold. Diffusion-ES samples trajectories during evolutionary search from a diffusion model and scores them using a black-box reward function. It mutates high-scoring trajectories using a truncated diffusion process that applies a small number of noising and denoising steps allowing for much more efficient exploration of the solution space. We show that Diffusion-ES achieves state-of-the-art performance on nuPlan an established closed-loop planning benchmark for autonomous driving. Diffusion-ES outperforms existing sampling-based planners reactive deterministic or diffusion-based policies and reward-gradient guidance. Additionally we show that unlike prior guidance methods our method can optimize non-differentiable language-shaped reward functions generated by few-shot LLM prompting. When guided by a human teacher that issues instructions to follow our method can generate novel highly complex behaviors such as aggressive lane weaving which are not present in the training data. This allows us to solve the hardest nuPlan scenarios which are beyond the capabilities of existing trajectory optimization methods and driving policies. | https://openaccess.thecvf.com/content/CVPR2024/papers/Yang_Diffusion-ES_Gradient-free_Planning_with_Diffusion_for_Autonomous_and_Instruction-guided_Driving_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Yang_Diffusion-ES_Gradient-free_Planning_with_Diffusion_for_Autonomous_and_Instruction-guided_Driving_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Yang_Diffusion-ES_Gradient-free_Planning_with_Diffusion_for_Autonomous_and_Instruction-guided_Driving_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yang_Diffusion-ES_Gradient-free_Planning_CVPR_2024_supplemental.pdf | null |
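A toy sketch of the evolutionary loop described above, with a stand-in denoiser so it runs end to end; the real Diffusion-ES mutates trajectories with a learned diffusion model and a truncated noise/denoise schedule, so everything below (the shrink-toward-the-mean `denoise`, the waypoint `reward`, the population sizes) is an assumption for illustration.

```python
# Toy sketch of the Diffusion-ES idea: evolutionary search where mutation is a
# truncated "noise-then-denoise" step, and candidates are scored by a black-box
# reward. The real method uses a learned trajectory diffusion model; here a
# shrink-toward-the-mean denoiser stands in so the loop runs end to end.
import numpy as np

rng = np.random.default_rng(0)

def denoise(x, strength=0.5, data_mean=0.0):
    # Stand-in for a few reverse-diffusion steps (assumption, not the paper's model).
    return (1 - strength) * x + strength * data_mean

def reward(traj):
    # Black-box, non-differentiable objective: stay close to a target waypoint.
    return -np.abs(traj - 1.5).sum()

def diffusion_es(dim=8, pop=32, elite=8, iters=50, noise=0.3):
    population = rng.normal(size=(pop, dim))
    for _ in range(iters):
        scores = np.array([reward(t) for t in population])
        elites = population[np.argsort(scores)[-elite:]]
        # Mutate elites with a truncated noising + denoising step.
        parents = elites[rng.integers(0, elite, size=pop)]
        noised = parents + noise * rng.normal(size=parents.shape)
        population = denoise(noised, strength=0.3, data_mean=parents)
    best = population[np.argmax([reward(t) for t in population])]
    return best

print(diffusion_es()[:4])
```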
AdaShift: Learning Discriminative Self-Gated Neural Feature Activation With an Adaptive Shift Factor | Sudong Cai | Nonlinearities are decisive in neural representation learning. Traditional Activation (Act) functions impose fixed inductive biases on neural networks with oriented biological intuitions. Recent methods leverage self-gated curves to compensate for the rigid traditional Act paradigms in fitting flexibility. However substantial improvements are still impeded by the norm-induced mismatched feature re-calibrations (see Section 1) i.e. the actual importance of a feature can be inconsistent with its explicit intensity which violates the basic intention of a direct self-gated feature re-weighting. To address this problem we propose to learn discriminative neural feature Act with a novel prototype namely AdaShift which enhances typical self-gated Act by incorporating an adaptive shift factor into the re-weighting function of Act. AdaShift casts dynamic translations on the inputs of a re-weighting function by exploiting comprehensive feature-filter context cues of different ranges in a simple yet effective manner. We obtain the new intuitions of AdaShift by rethinking the feature-filter relationships from a common Softmax-based classification and by generalizing the new observations to a common learning layer that encodes features with updatable filters. Our practical AdaShifts built upon the new Act prototype demonstrate significant improvements to the popular/SOTA Act functions on different vision benchmarks. By simply replacing ReLU with AdaShifts ResNets can match advanced Transformer counterparts (e.g. ResNet-50 vs. Swin-T) with lower cost and fewer parameters. | https://openaccess.thecvf.com/content/CVPR2024/papers/Cai_AdaShift_Learning_Discriminative_Self-Gated_Neural_Feature_Activation_With_an_Adaptive_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Cai_AdaShift_Learning_Discriminative_Self-Gated_Neural_Feature_Activation_With_an_Adaptive_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Cai_AdaShift_Learning_Discriminative_Self-Gated_Neural_Feature_Activation_With_an_Adaptive_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Cai_AdaShift_Learning_Discriminative_CVPR_2024_supplemental.pdf | null |
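The abstract does not give AdaShift's exact formula, but the general idea, augmenting a self-gated activation y = x * sigmoid(x) with an adaptive shift inside the gate, can be sketched as follows; the pooling-plus-MLP shift term and all layer sizes are assumptions, not the paper's design.

```python
# Sketch of a self-gated activation with an adaptive shift (AdaShift-like in spirit;
# the actual formulation in the paper may differ). Standard self-gating is
# y = x * sigmoid(x); here the gate input is shifted by a per-channel term computed
# from global feature-filter context (global average pooling + a tiny MLP).
import torch
import torch.nn as nn

class AdaptiveShiftGate(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.shift = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = self.shift(x)                 # (N, C, 1, 1) adaptive shift factor
        return x * torch.sigmoid(x + s)   # shifted self-gated re-weighting

if __name__ == "__main__":
    act = AdaptiveShiftGate(64)
    y = act(torch.randn(2, 64, 16, 16))
    print(y.shape)
```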
SCEdit: Efficient and Controllable Image Diffusion Generation via Skip Connection Editing | Zeyinzi Jiang, Chaojie Mao, Yulin Pan, Zhen Han, Jingfeng Zhang | Image diffusion models have been utilized in various tasks such as text-to-image generation and controllable image synthesis. Recent research has introduced tuning methods that make subtle adjustments to the original models yielding promising results in specific adaptations of foundational generative diffusion models. Rather than modifying the main backbone of the diffusion model we delve into the role of skip connection in U-Net and reveal that hierarchical features aggregating long-distance information across encoder and decoder make a significant impact on the content and quality of image generation. Based on the observation we propose an efficient generative tuning framework dubbed SCEdit which integrates and edits Skip Connection using a lightweight tuning module named SC-Tuner. Furthermore the proposed framework allows for straightforward extension to controllable image synthesis by injecting different conditions with Controllable SC-Tuner simplifying and unifying the network design for multi-condition inputs. Our SCEdit substantially reduces training parameters memory usage and computational expense due to its lightweight tuners with backward propagation only passing to the decoder blocks. Extensive experiments conducted on text-to-image generation and controllable image synthesis tasks demonstrate the superiority of our method in terms of efficiency and performance. Project page: https://scedit.github.io/. | https://openaccess.thecvf.com/content/CVPR2024/papers/Jiang_SCEdit_Efficient_and_Controllable_Image_Diffusion_Generation_via_Skip_Connection_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.11392 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Jiang_SCEdit_Efficient_and_Controllable_Image_Diffusion_Generation_via_Skip_Connection_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Jiang_SCEdit_Efficient_and_Controllable_Image_Diffusion_Generation_via_Skip_Connection_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Jiang_SCEdit_Efficient_and_CVPR_2024_supplemental.pdf | null |
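A rough sketch of tuning only the skip connections of a frozen U-Net with a lightweight residual adapter, in the spirit of SC-Tuner; the low-rank 1x1-convolution form and zero initialization below are assumptions, and the controllable variant (condition injection) is not shown.

```python
# Sketch of a lightweight skip-connection tuner in the spirit of SC-Tuner: the frozen
# U-Net backbone is untouched and only a small residual adapter on each skip feature
# is trained. The low-rank form below is an assumption; the paper's exact design and
# its controllable variant are not reproduced.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipTuner(nn.Module):
    def __init__(self, channels: int, rank: int = 16):
        super().__init__()
        self.down = nn.Conv2d(channels, rank, kernel_size=1)
        self.up = nn.Conv2d(rank, channels, kernel_size=1)
        nn.init.zeros_(self.up.weight)    # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, skip: torch.Tensor) -> torch.Tensor:
        return skip + self.up(F.gelu(self.down(skip)))

if __name__ == "__main__":
    tuner = SkipTuner(320)
    skip_feat = torch.randn(1, 320, 32, 32)
    print(tuner(skip_feat).shape)
```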
MRC-Net: 6-DoF Pose Estimation with MultiScale Residual Correlation | Yuelong Li, Yafei Mao, Raja Bala, Sunil Hadap | We propose a single-shot approach to determining the 6-DoF pose of an object with an available 3D computer-aided design (CAD) model from a single RGB image. Our method dubbed MRC-Net comprises two stages. The first performs pose classification and renders the 3D object in the classified pose. The second stage performs regression to predict fine-grained residual pose within class. Connecting the two stages is a novel multi-scale residual correlation (MRC) layer that captures high- and low-level correspondences between the input image and the rendering from the first stage. MRC-Net employs a Siamese network with shared weights between both stages to learn embeddings for input and rendered images. To mitigate ambiguity when predicting discrete pose class labels on symmetric objects we use soft probabilistic labels to define pose class in the first stage. We demonstrate state-of-the-art accuracy outperforming all competing RGB-based methods on four challenging BOP benchmark datasets: T-LESS LM-O YCB-V and ITODD. Our method is non-iterative and requires no complex post-processing. Our code and pretrained models are available at https://github.com/amzn/mrc-net-6d-pose | https://openaccess.thecvf.com/content/CVPR2024/papers/Li_MRC-Net_6-DoF_Pose_Estimation_with_MultiScale_Residual_Correlation_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Li_MRC-Net_6-DoF_Pose_Estimation_with_MultiScale_Residual_Correlation_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Li_MRC-Net_6-DoF_Pose_Estimation_with_MultiScale_Residual_Correlation_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_MRC-Net_6-DoF_Pose_CVPR_2024_supplemental.pdf | null |
MonoCD: Monocular 3D Object Detection with Complementary Depths | Longfei Yan, Pei Yan, Shengzhou Xiong, Xuanyu Xiang, Yihua Tan | Monocular 3D object detection has attracted widespread attention due to its potential to accurately obtain object 3D localization from a single image at a low cost. Depth estimation is an essential but challenging subtask of monocular 3D object detection due to the ill-posedness of 2D to 3D mapping. Many methods explore multiple local depth clues such as object heights and keypoints and then formulate the object depth estimation as an ensemble of multiple depth predictions to mitigate the insufficiency of single-depth information. However the errors of existing multiple depths tend to have the same sign which hinders them from neutralizing each other and limits the overall accuracy of combined depth. To alleviate this problem we propose to increase the complementarity of depths with two novel designs. First we add a new depth prediction branch named complementary depth that utilizes global and efficient depth clues from the entire image rather than the local clues to reduce the correlation of depth predictions. Second we propose to fully exploit the geometric relations between multiple depth clues to achieve complementarity in form. Benefiting from these designs our method achieves higher complementarity. Experiments on the KITTI benchmark demonstrate that our method achieves state-of-the-art performance without introducing extra data. In addition complementary depth can also be a lightweight and plug-and-play module to boost multiple existing monocular 3d object detectors. Code is available at https://github.com/elvintanhust/MonoCD. | https://openaccess.thecvf.com/content/CVPR2024/papers/Yan_MonoCD_Monocular_3D_Object_Detection_with_Complementary_Depths_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.03181 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Yan_MonoCD_Monocular_3D_Object_Detection_with_Complementary_Depths_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Yan_MonoCD_Monocular_3D_Object_Detection_with_Complementary_Depths_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yan_MonoCD_Monocular_3D_CVPR_2024_supplemental.pdf | null |
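A generic illustration of why complementary depth clues help: when fused, estimates whose errors have opposite signs partially cancel, while same-sign errors cannot. The inverse-variance fusion rule and the numbers below are illustrative assumptions, not MonoCD's geometric construction.

```python
# Why complementary depth clues help (generic illustration, not MonoCD's exact
# geometric formulation): fusing estimates whose errors have opposite signs cancels
# bias, whereas same-sign errors cannot cancel. Inverse-variance weighting is used
# here as a simple fusion rule (assumption).
import numpy as np

def fuse(depths, sigmas):
    w = 1.0 / np.square(sigmas)
    return float(np.sum(w * depths) / np.sum(w))

gt = 20.0                                     # ground-truth depth in meters
same_sign = np.array([21.0, 21.5, 20.8])      # all clues over-estimate
complementary = np.array([21.0, 19.2, 20.4])  # one clue under-estimates
sigmas = np.array([1.0, 1.0, 1.0])

print("same-sign fusion error:    ", abs(fuse(same_sign, sigmas) - gt))
print("complementary fusion error:", abs(fuse(complementary, sigmas) - gt))
```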
ImageNet-D: Benchmarking Neural Network Robustness on Diffusion Synthetic Object | Chenshuang Zhang, Fei Pan, Junmo Kim, In So Kweon, Chengzhi Mao | We establish rigorous benchmarks for visual perception robustness. Synthetic images such as ImageNet-C ImageNet-9 and Stylized ImageNet provide specific types of evaluation over synthetic corruptions backgrounds and textures yet those robustness benchmarks are restricted to specified variations and have low synthetic quality. In this work we introduce generative models as a data source for synthesizing hard images that benchmark deep models' robustness. Leveraging diffusion models we are able to generate images with more diversified backgrounds textures and materials than any prior work and we term this benchmark ImageNet-D. Experimental results show that ImageNet-D results in a significant accuracy drop for a range of vision models from the standard ResNet visual classifier to the latest foundation models like CLIP and MiniGPT-4 reducing their accuracy by up to 60%. Our work suggests that diffusion models can be an effective source to test vision models. The code and dataset are available at https://github.com/chenshuang-zhang/imagenet_d. | https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_ImageNet-D_Benchmarking_Neural_Network_Robustness_on_Diffusion_Synthetic_Object_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_ImageNet-D_Benchmarking_Neural_Network_Robustness_on_Diffusion_Synthetic_Object_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_ImageNet-D_Benchmarking_Neural_Network_Robustness_on_Diffusion_Synthetic_Object_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhang_ImageNet-D_Benchmarking_Neural_CVPR_2024_supplemental.pdf | null |
Consistent3D: Towards Consistent High-Fidelity Text-to-3D Generation with Deterministic Sampling Prior | Zike Wu, Pan Zhou, Xuanyu Yi, Xiaoding Yuan, Hanwang Zhang | Score distillation sampling (SDS) and its variants have greatly boosted the development of text-to-3D generation but remain vulnerable to geometry collapse and poor textures. To solve this issue we first deeply analyze the SDS and find that its distillation sampling process indeed corresponds to the trajectory sampling of a stochastic differential equation (SDE): SDS samples along an SDE trajectory to yield a less noisy sample which then serves as guidance to optimize a 3D model. However the randomness in SDE sampling often leads to a diverse and unpredictable sample which is not always less noisy and thus is not consistently correct guidance which explains the vulnerability of SDS. Since for any SDE there always exists an ordinary differential equation (ODE) whose trajectory sampling can deterministically and consistently converge to the same target point as the SDE we propose a novel and effective "Consistent3D" method that explores the ODE deterministic sampling prior for text-to-3D generation. Specifically at each training iteration given a rendered image by a 3D model we first estimate its desired 3D score function by a pre-trained 2D diffusion model and build an ODE for trajectory sampling. Next we design a consistency distillation sampling loss which samples along the ODE trajectory to generate two adjacent samples and uses the less noisy sample to guide another more noisy one for distilling the deterministic prior into the 3D model. Experimental results show the efficacy of our Consistent3D in generating high-fidelity and diverse 3D objects and large-scale scenes as shown in Fig. 1. The codes are available at https://github.com/sail-sg/Consistent3D. | https://openaccess.thecvf.com/content/CVPR2024/papers/Wu_Consistent3D_Towards_Consistent_High-Fidelity_Text-to-3D_Generation_with_Deterministic_Sampling_Prior_CVPR_2024_paper.pdf | http://arxiv.org/abs/2401.09050 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Wu_Consistent3D_Towards_Consistent_High-Fidelity_Text-to-3D_Generation_with_Deterministic_Sampling_Prior_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Wu_Consistent3D_Towards_Consistent_High-Fidelity_Text-to-3D_Generation_with_Deterministic_Sampling_Prior_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wu_Consistent3D_Towards_Consistent_CVPR_2024_supplemental.pdf | null |
ManipLLM: Embodied Multimodal Large Language Model for Object-Centric Robotic Manipulation | Xiaoqi Li, Mingxu Zhang, Yiran Geng, Haoran Geng, Yuxing Long, Yan Shen, Renrui Zhang, Jiaming Liu, Hao Dong | Robot manipulation relies on accurately predicting contact points and end-effector directions to ensure successful operation. However learning-based robot manipulation trained on a limited set of categories within a simulator often struggles to achieve generalizability especially when confronted with extensive categories. Therefore we introduce an innovative approach for robot manipulation that leverages the robust reasoning capabilities of Multimodal Large Language Models (MLLMs) to enhance the stability and generalization of manipulation. By fine-tuning the injected adapters we preserve the inherent common sense and reasoning ability of the MLLMs while equipping them with the ability for manipulation. The fundamental insight lies in the introduced fine-tuning paradigm encompassing object category understanding affordance prior reasoning and object-centric pose prediction to stimulate the reasoning ability of MLLM in manipulation. During inference our approach utilizes an RGB image and text prompt to predict the end effector's pose in a chain-of-thought manner. After the initial contact is established an active impedance adaptation policy is introduced to plan the upcoming waypoints in a closed-loop manner. Moreover in the real world we design a test-time adaptation (TTA) strategy for manipulation to enable the model to better adapt to the current real-world scene configuration. Experiments in simulation and the real world show the promising performance of ManipLLM. More details and demonstrations can be found at https://sites.google.com/view/manipllm. | https://openaccess.thecvf.com/content/CVPR2024/papers/Li_ManipLLM_Embodied_Multimodal_Large_Language_Model_for_Object-Centric_Robotic_Manipulation_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.16217 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Li_ManipLLM_Embodied_Multimodal_Large_Language_Model_for_Object-Centric_Robotic_Manipulation_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Li_ManipLLM_Embodied_Multimodal_Large_Language_Model_for_Object-Centric_Robotic_Manipulation_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_ManipLLM_Embodied_Multimodal_CVPR_2024_supplemental.pdf | null |
BA-SAM: Scalable Bias-Mode Attention Mask for Segment Anything Model | Yiran Song, Qianyu Zhou, Xiangtai Li, Deng-Ping Fan, Xuequan Lu, Lizhuang Ma | In this paper we address the challenge of image resolution variation for the Segment Anything Model (SAM). SAM known for its zero-shot generalizability exhibits performance degradation when faced with datasets with varying image sizes. Previous approaches tend to resize the image to a fixed size or adopt structure modifications hindering the preservation of SAM's rich prior knowledge. Besides such task-specific tuning necessitates a complete retraining of the model which is costly and unacceptable for deployment in downstream tasks. In this paper we reformulate this challenge as a length extrapolation problem where token sequence length varies while maintaining a consistent patch size for images with different sizes. To this end we propose a Scalable Bias-Mode Attention Mask (BA-SAM) to enhance SAM's adaptability to varying image resolutions while eliminating the need for structure modifications. Firstly we introduce a new scaling factor to ensure consistent magnitude in the attention layer's dot product values when the token sequence length changes. Secondly we present a bias-mode attention mask that allows each token to prioritize neighboring information mitigating the impact of untrained distant information. Our BA-SAM demonstrates efficacy in two scenarios: zero-shot and fine-tuning. Extensive evaluation of diverse datasets including DIS5K DUTS ISIC COD10K and COCO reveals its ability to significantly mitigate performance degradation in the zero-shot setting and achieve state-of-the-art performance with minimal fine-tuning. Furthermore we propose a generalized model and benchmark showcasing BA-SAM's generalizability across all four datasets simultaneously. | https://openaccess.thecvf.com/content/CVPR2024/papers/Song_BA-SAM_Scalable_Bias-Mode_Attention_Mask_for_Segment_Anything_Model_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Song_BA-SAM_Scalable_Bias-Mode_Attention_Mask_for_Segment_Anything_Model_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Song_BA-SAM_Scalable_Bias-Mode_Attention_Mask_for_Segment_Anything_Model_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Song_BA-SAM_Scalable_Bias-Mode_CVPR_2024_supplemental.pdf | null |
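A sketch of length-robust attention in the spirit of the abstract above: rescale the dot product when the token count differs from the training length and add a bias mask that favors nearby tokens. Both the log-length scaling and the linear distance penalty are stand-in assumptions; BA-SAM's exact scalable bias-mode mask differs.

```python
# Sketch of length-robust attention: (1) rescale the dot product when the token
# count n differs from the training length n_train, so score magnitudes stay
# comparable, and (2) add a bias mask that penalizes distant tokens. Both the
# log-length scale and the linear-distance bias are assumptions, not the paper's
# exact formulation.
import math
import torch

def biased_attention(q, k, v, n_train=1024, slope=0.05):
    n, d = q.shape
    scale = math.sqrt(d) * max(1.0, math.log(n) / math.log(n_train))
    scores = q @ k.t() / scale
    idx = torch.arange(n)
    dist = (idx[None, :] - idx[:, None]).abs().float()
    scores = scores - slope * dist            # prioritize neighboring tokens
    return torch.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(2048, 64)             # longer sequence than the training length
print(biased_attention(q, k, v).shape)
```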
Text-Enhanced Data-free Approach for Federated Class-Incremental Learning | Minh-Tuan Tran, Trung Le, Xuan-May Le, Mehrtash Harandi, Dinh Phung | Federated Class-Incremental Learning (FCIL) is an underexplored yet pivotal issue involving the dynamic addition of new classes in the context of federated learning. In this field Data-Free Knowledge Transfer (DFKT) plays a crucial role in addressing catastrophic forgetting and data privacy problems. However prior approaches lack the crucial synergy between DFKT and the model training phases causing DFKT to encounter difficulties in generating high-quality data from a non-anchored latent space of the old task model. In this paper we introduce LANDER (Label Text Centered Data-Free Knowledge Transfer) to address this issue by utilizing label text embeddings (LTE) produced by pretrained language models. Specifically during the model training phase our approach treats LTE as anchor points and constrains the feature embeddings of corresponding training samples around them enriching the surrounding area with more meaningful information. In the DFKT phase by using these LTE anchors LANDER can synthesize more meaningful samples thereby effectively addressing the forgetting problem. Additionally instead of tightly constraining embeddings toward the anchor the Bounding Loss is introduced to encourage sample embeddings to remain flexible within a defined radius. This approach preserves the natural differences in sample embeddings and mitigates the embedding overlap caused by heterogeneous federated settings. Extensive experiments conducted on CIFAR100 Tiny-ImageNet and ImageNet demonstrate that LANDER significantly outperforms previous methods and achieves state-of-the-art performance in FCIL. The code is available at https://github.com/tmtuan1307/lander. | https://openaccess.thecvf.com/content/CVPR2024/papers/Tran_Text-Enhanced_Data-free_Approach_for_Federated_Class-Incremental_Learning_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.14101 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Tran_Text-Enhanced_Data-free_Approach_for_Federated_Class-Incremental_Learning_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Tran_Text-Enhanced_Data-free_Approach_for_Federated_Class-Incremental_Learning_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Tran_Text-Enhanced_Data-free_Approach_CVPR_2024_supplemental.pdf | null |
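The Bounding Loss described above, keeping sample embeddings within a radius of their label-text-embedding (LTE) anchors rather than pinning them exactly, can be sketched as a hinge on the anchor distance; the normalization and the hinge form are assumptions about the exact loss.

```python
# Sketch of an LTE-anchored bounding loss in the spirit of LANDER: instead of pulling
# every feature exactly onto its label-text embedding, only penalize features that
# leave a ball of given radius around the anchor. The hinge form below is an
# assumption about the exact loss used in the paper.
import torch
import torch.nn.functional as F

def bounding_loss(features, lte_anchors, labels, radius=0.3):
    # features: (B, D) image embeddings, lte_anchors: (num_classes, D) label-text
    # embeddings from a pretrained language model, labels: (B,) class indices.
    f = F.normalize(features, dim=-1)
    a = F.normalize(lte_anchors[labels], dim=-1)
    dist = (f - a).norm(dim=-1)
    return F.relu(dist - radius).mean()       # zero loss inside the radius

feats = torch.randn(16, 512, requires_grad=True)
anchors = torch.randn(100, 512)
labels = torch.randint(0, 100, (16,))
loss = bounding_loss(feats, anchors, labels)
loss.backward()
print(loss.item())
```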
Deciphering 'What' and 'Where' Visual Pathways from Spectral Clustering of Layer-Distributed Neural Representations | Xiao Zhang, David Yunis, Michael Maire | We present an approach for analyzing grouping information contained within a neural network's activations permitting extraction of spatial layout and semantic segmentation from the behavior of large pre-trained vision models. Unlike prior work our method conducts a wholistic analysis of a network's activation state leveraging features from all layers and obviating the need to guess which part of the model contains relevant information. Motivated by classic spectral clustering we formulate this analysis in terms of an optimization objective involving a set of affinity matrices each formed by comparing features within a different layer. Solving this optimization problem using gradient descent allows our technique to scale from single images to dataset-level analysis including in the latter both intra- and inter-image relationships. Analyzing a pre-trained generative transformer provides insight into the computational strategy learned by such models. Equating affinity with key-query similarity across attention layers yields eigenvectors encoding scene spatial layout whereas defining affinity by value vector similarity yields eigenvectors encoding object identity. This result suggests that key and query vectors coordinate attentional information flow according to spatial proximity (a `where' pathway) while value vectors refine a semantic category representation (a `what' pathway). | https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_Deciphering_What_and_Where_Visual_Pathways_from_Spectral_Clustering_of_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.06716 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Deciphering_What_and_Where_Visual_Pathways_from_Spectral_Clustering_of_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Deciphering_What_and_Where_Visual_Pathways_from_Spectral_Clustering_of_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhang_Deciphering_What_and_CVPR_2024_supplemental.pdf | null |
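A toy version of the layer-distributed spectral objective: jointly optimize a set of orthonormal grouping vectors against per-layer affinity matrices with gradient descent plus re-orthonormalization, instead of a per-layer eigendecomposition. The random stand-in features and softmax affinities below are assumptions; the paper builds affinities from key/query or value similarities of a pre-trained transformer.

```python
# Sketch of the layer-distributed spectral idea: find k orthonormal vectors V that
# jointly maximize sum_l tr(V^T A_l V) over affinity matrices built from different
# layers, using gradient ascent plus QR re-orthonormalization instead of a
# closed-form eigendecomposition.
import torch

def layer_affinities(features):
    # features: list of (n, d_l) per-layer feature matrices -> list of (n, n) affinities
    return [torch.softmax(f @ f.t() / f.shape[1] ** 0.5, dim=-1) for f in features]

def joint_spectral_vectors(affinities, k=4, steps=200, lr=0.1):
    n = affinities[0].shape[0]
    V = torch.linalg.qr(torch.randn(n, k)).Q.requires_grad_(True)
    opt = torch.optim.Adam([V], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -sum(torch.trace(V.t() @ A @ V) for A in affinities)
        loss.backward()
        opt.step()
        with torch.no_grad():
            V.copy_(torch.linalg.qr(V).Q)     # keep columns orthonormal
    return V.detach()

feats = [torch.randn(64, d) for d in (32, 64, 128)]   # stand-in per-layer features
V = joint_spectral_vectors(layer_affinities(feats))
print(V.shape)
```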
GLaMM: Pixel Grounding Large Multimodal Model | Hanoona Rasheed, Muhammad Maaz, Sahal Shaji, Abdelrahman Shaker, Salman Khan, Hisham Cholakkal, Rao M. Anwer, Eric Xing, Ming-Hsuan Yang, Fahad S. Khan | Large Multimodal Models (LMMs) extend Large Language Models to the vision domain. Initial LMMs used holistic images and text prompts to generate ungrounded textual responses. Recently region-level LMMs have been used to generate visually grounded responses. However they are limited to only referring to a single object category at a time require users to specify the regions or cannot offer dense pixel-wise object grounding. In this work we present Grounding LMM (GLaMM) the first model that can generate natural language responses seamlessly intertwined with corresponding object segmentation masks. GLaMM not only grounds objects appearing in the conversations but is flexible enough to accept both textual and optional visual prompts (region of interest) as input. This empowers users to interact with the model at various levels of granularity both in textual and visual domains. Due to the lack of standard benchmarks for the novel setting of visually Grounded Conversation Generation (GCG) we introduce a comprehensive evaluation protocol with our curated grounded conversations. Our proposed GCG task requires densely grounded concepts in natural scenes at a large-scale. To this end we propose a densely annotated Grounding-anything Dataset (GranD) using our proposed automated annotation pipeline that encompasses 7.5M unique concepts grounded in a total of 810M regions available with segmentation masks. Besides GCG GLaMM also performs effectively on several downstream tasks e.g. referring expression segmentation image and region-level captioning and vision-language conversations. | https://openaccess.thecvf.com/content/CVPR2024/papers/Rasheed_GLaMM_Pixel_Grounding_Large_Multimodal_Model_CVPR_2024_paper.pdf | http://arxiv.org/abs/2311.03356 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Rasheed_GLaMM_Pixel_Grounding_Large_Multimodal_Model_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Rasheed_GLaMM_Pixel_Grounding_Large_Multimodal_Model_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Rasheed_GLaMM_Pixel_Grounding_CVPR_2024_supplemental.pdf | null |
Incremental Residual Concept Bottleneck Models | Chenming Shang, Shiji Zhou, Hengyuan Zhang, Xinzhe Ni, Yujiu Yang, Yuwang Wang | Concept Bottleneck Models (CBMs) map the black-box visual representations extracted by deep neural networks onto a set of interpretable concepts and use the concepts to make predictions enhancing the transparency of the decision-making process. Multimodal pre-trained models can match visual representations with textual concept embeddings allowing for obtaining the interpretable concept bottleneck without the expertise concept annotations. Recent research has focused on the concept bank establishment and the high-quality concept selection. However it is challenging to construct a comprehensive concept bank through humans or large language models which severely limits the performance of CBMs. In this work we propose the Incremental Residual Concept Bottleneck Model (Res-CBM) to address the challenge of concept completeness. Specifically the residual concept bottleneck model employs a set of optimizable vectors to complete missing concepts then the incremental concept discovery module converts the complemented vectors with unclear meanings into potential concepts in the candidate concept bank. Our approach can be applied to any user-defined concept bank as a post-hoc processing method to enhance the performance of any CBMs. Furthermore to measure the descriptive efficiency of CBMs the Concept Utilization Efficiency (CUE) metric is proposed. Experiments show that the Res-CBM outperforms the current state-of-the-art methods in terms of both accuracy and efficiency and achieves comparable performance to black-box models across multiple datasets. | https://openaccess.thecvf.com/content/CVPR2024/papers/Shang_Incremental_Residual_Concept_Bottleneck_Models_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.08978 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Shang_Incremental_Residual_Concept_Bottleneck_Models_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Shang_Incremental_Residual_Concept_Bottleneck_Models_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Shang_Incremental_Residual_Concept_CVPR_2024_supplemental.pdf | null |
SPOC: Imitating Shortest Paths in Simulation Enables Effective Navigation and Manipulation in the Real World | Kiana Ehsani, Tanmay Gupta, Rose Hendrix, Jordi Salvador, Luca Weihs, Kuo-Hao Zeng, Kunal Pratap Singh, Yejin Kim, Winson Han, Alvaro Herrasti, Ranjay Krishna, Dustin Schwenk, Eli VanderBilt, Aniruddha Kembhavi | Reinforcement learning (RL) with dense rewards and imitation learning (IL) with human-generated trajectories are the most widely used approaches for training modern embodied agents. RL requires extensive reward shaping and auxiliary losses and is often too slow and ineffective for long-horizon tasks. While IL with human supervision is effective collecting human trajectories at scale is extremely expensive. In this work we show that imitating shortest-path planners in simulation produces agents that given a language instruction can proficiently navigate explore and manipulate objects in both simulation and in the real world using only RGB sensors (no depth map or GPS coordinates). This surprising result is enabled by our end-to-end transformer-based SPOC architecture powerful visual encoders paired with extensive image augmentation and the dramatic scale and diversity of our training data: millions of frames of shortest-path-expert trajectories collected inside approximately 200000 procedurally generated houses containing 40000 unique 3D assets. Our models data training code and newly proposed 10-task benchmarking suite CHORES are available at https://spoc-robot.github.io/. | https://openaccess.thecvf.com/content/CVPR2024/papers/Ehsani_SPOC_Imitating_Shortest_Paths_in_Simulation_Enables_Effective_Navigation_and_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Ehsani_SPOC_Imitating_Shortest_Paths_in_Simulation_Enables_Effective_Navigation_and_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Ehsani_SPOC_Imitating_Shortest_Paths_in_Simulation_Enables_Effective_Navigation_and_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ehsani_SPOC_Imitating_Shortest_CVPR_2024_supplemental.pdf | null |
Real-Time Exposure Correction via Collaborative Transformations and Adaptive Sampling | Ziwen Li, Feng Zhang, Meng Cao, Jinpu Zhang, Yuanjie Shao, Yuehuan Wang, Nong Sang | Most of the previous exposure correction methods learn dense pixel-wise transformations to achieve promising results but consume huge computational resources. Recently Learnable 3D lookup tables (3D LUTs) have demonstrated impressive performance and efficiency for image enhancement. However these methods can only perform global transformations and fail to finely manipulate local regions. Moreover they uniformly downsample the input image which loses the rich color information and limits the learning of color transformation capabilities. In this paper we present a collaborative transformation framework (CoTF) for real-time exposure correction which integrates global transformation with pixel-wise transformations in an efficient manner. Specifically the global transformation adjusts the overall appearance using image-adaptive 3D LUTs to provide decent global contrast and sharp details while the pixel transformation compensates for local context. Then a relation-aware modulation module is designed to combine these two components effectively. In addition we propose an adaptive sampling strategy to preserve more color information by predicting the sampling intervals thus providing higher quality input data for the learning of 3D LUTs. Extensive experiments demonstrate that our method can process high-resolution images in real-time on GPUs while achieving comparable performance against current state-of-the-art methods. The code is available at https://github.com/HUST-IAL/CoTF. | https://openaccess.thecvf.com/content/CVPR2024/papers/Li_Real-Time_Exposure_Correction_via_Collaborative_Transformations_and_Adaptive_Sampling_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Li_Real-Time_Exposure_Correction_via_Collaborative_Transformations_and_Adaptive_Sampling_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Li_Real-Time_Exposure_Correction_via_Collaborative_Transformations_and_Adaptive_Sampling_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_Real-Time_Exposure_Correction_CVPR_2024_supplemental.pdf | null |
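The global branch described above rests on image-adaptive 3D LUTs; applying such a LUT is just a trilinear interpolation over a small color lattice, sketched below with an identity-initialized LUT. The uniform lattice here is exactly what the paper's adaptive sampling strategy replaces, and the pixel-wise local branch is omitted.

```python
# Sketch of applying a 3D LUT (the global branch of LUT-based exposure correction):
# each RGB pixel indexes a small S x S x S lattice and the output color is
# trilinearly interpolated. The identity-initialized LUT with a toy red boost and
# the uniform sampling are simplifications for illustration.
import numpy as np
from scipy.ndimage import map_coordinates

S = 17                                          # LUT lattice size per axis
grid = np.linspace(0.0, 1.0, S)
r, g, b = np.meshgrid(grid, grid, grid, indexing="ij")
lut = np.stack([r, g, b], axis=-1)              # identity LUT, shape (S, S, S, 3)
lut[..., 0] = np.clip(lut[..., 0] * 1.2, 0, 1)  # toy "learned" adjustment: boost red

def apply_lut(image, lut):
    # image: (H, W, 3) floats in [0, 1]; returns the transformed image.
    coords = image.reshape(-1, 3).T * (lut.shape[0] - 1)   # (3, H*W) lattice coords
    out = [map_coordinates(lut[..., c], coords, order=1) for c in range(3)]
    return np.stack(out, axis=-1).reshape(image.shape)

img = np.random.rand(64, 64, 3)
print(apply_lut(img, lut).shape)
```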
Lodge: A Coarse to Fine Diffusion Network for Long Dance Generation Guided by the Characteristic Dance Primitives | Ronghui Li, YuXiang Zhang, Yachao Zhang, Hongwen Zhang, Jie Guo, Yan Zhang, Yebin Liu, Xiu Li | We propose Lodge a network capable of generating extremely long dance sequences conditioned on given music. We design Lodge as a two-stage coarse to fine diffusion architecture and propose the characteristic dance primitives that possess significant expressiveness as intermediate representations between two diffusion models. The first stage is global diffusion which focuses on comprehending the coarse-level music-dance correlation and producing characteristic dance primitives. In contrast the second stage is the local diffusion which generates detailed motion sequences in parallel under the guidance of the dance primitives and choreographic rules. In addition we propose a Foot Refine Block to optimize the contact between the feet and the ground enhancing the physical realism of the motion. Code available at https://li-ronghui.github.io/lodge | https://openaccess.thecvf.com/content/CVPR2024/papers/Li_Lodge_A_Coarse_to_Fine_Diffusion_Network_for_Long_Dance_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.10518 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Li_Lodge_A_Coarse_to_Fine_Diffusion_Network_for_Long_Dance_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Li_Lodge_A_Coarse_to_Fine_Diffusion_Network_for_Long_Dance_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_Lodge_A_Coarse_CVPR_2024_supplemental.zip | null |
UDiFF: Generating Conditional Unsigned Distance Fields with Optimal Wavelet Diffusion | Junsheng Zhou, Weiqi Zhang, Baorui Ma, Kanle Shi, Yu-Shen Liu, Zhizhong Han | Diffusion models have shown remarkable results for image generation editing and inpainting. Recent works explore diffusion models for 3D shape generation with neural implicit functions i.e. signed distance function and occupancy function. However they are limited to shapes with closed surfaces which prevents them from generating diverse 3D real-world contents containing open surfaces. In this work we present UDiFF a 3D diffusion model for unsigned distance fields (UDFs) which is capable of generating textured 3D shapes with open surfaces from text conditions or unconditionally. Our key idea is to generate UDFs in spatial-frequency domain with an optimal wavelet transformation which produces a compact representation space for UDF generation. Specifically instead of selecting an appropriate wavelet transformation which requires expensive manual efforts and still leads to large information loss we propose a data-driven approach to learn the optimal wavelet transformation for UDFs. We evaluate UDiFF to show our advantages by numerical and visual comparisons with the latest methods on widely used benchmarks. Page: https://weiqi-zhang.github.io/UDiFF. | https://openaccess.thecvf.com/content/CVPR2024/papers/Zhou_UDiFF_Generating_Conditional_Unsigned_Distance_Fields_with_Optimal_Wavelet_Diffusion_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.06851 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Zhou_UDiFF_Generating_Conditional_Unsigned_Distance_Fields_with_Optimal_Wavelet_Diffusion_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Zhou_UDiFF_Generating_Conditional_Unsigned_Distance_Fields_with_Optimal_Wavelet_Diffusion_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhou_UDiFF_Generating_Conditional_CVPR_2024_supplemental.pdf | null |
LoCoNet: Long-Short Context Network for Active Speaker Detection | Xizi Wang, Feng Cheng, Gedas Bertasius | Active Speaker Detection (ASD) aims to identify who is speaking in each frame of a video. Solving ASD involves using audio and visual information in two complementary contexts: long-term intra-speaker context models the temporal dependencies of the same speaker while short-term inter-speaker context models the interactions of speakers in the same scene. Motivated by these observations we propose LoCoNet a simple but effective Long-Short Context Network that leverages Long-term Intra-speaker Modeling (LIM) and Short-term Inter-speaker Modeling (SIM) in an interleaved manner. LIM employs self-attention for long-range temporal dependencies modeling and cross-attention for audio-visual interactions modeling. SIM incorporates convolutional blocks that capture local patterns for short-term inter-speaker context. Experiments show that LoCoNet achieves state-of-the-art performance on multiple datasets with 95.2% (+0.3%) mAP on AVA-ActiveSpeaker 97.2% (+2.7%) mAP on Talkies and 68.4% (+7.7%) mAP on Ego4D. Moreover in challenging cases where multiple speakers are present LoCoNet outperforms previous state-of-the-art methods by 3.0% mAP on AVA-ActiveSpeaker. The code is available at https://github.com/SJTUwxz/LoCoNet_ASD. | https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_LoCoNet_Long-Short_Context_Network_for_Active_Speaker_Detection_CVPR_2024_paper.pdf | http://arxiv.org/abs/2301.08237 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Wang_LoCoNet_Long-Short_Context_Network_for_Active_Speaker_Detection_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Wang_LoCoNet_Long-Short_Context_Network_for_Active_Speaker_Detection_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wang_LoCoNet_Long-Short_Context_CVPR_2024_supplemental.zip | null |
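One interleaved long/short context block, as described above, might be sketched as temporal self-attention per speaker (long-term intra-speaker), audio-visual cross-attention, and a convolution across the candidate-speaker axis (short-term inter-speaker); the tensor layout, layer sizes, and ordering are assumptions rather than LoCoNet's exact architecture.

```python
# Sketch of an interleaved long/short context block: long-term intra-speaker modeling
# uses temporal self-attention plus audio-visual cross-attention, and short-term
# inter-speaker modeling uses a convolution across the candidate-speaker axis.
import torch
import torch.nn as nn

class LongShortBlock(nn.Module):
    def __init__(self, dim=128, heads=4, speakers=3):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.speaker_conv = nn.Conv1d(dim, dim, kernel_size=speakers, padding=speakers // 2)

    def forward(self, visual, audio):
        # visual: (B, S, T, D) one track per candidate speaker; audio: (B, T, D)
        B, S, T, D = visual.shape
        v = visual.reshape(B * S, T, D)
        v = v + self.self_attn(v, v, v)[0]                     # long-term intra-speaker
        a = audio.repeat_interleave(S, dim=0)
        v = v + self.cross_attn(v, a, a)[0]                    # audio-visual interaction
        v = v.reshape(B, S, T, D).permute(0, 2, 3, 1).reshape(B * T, D, S)
        v = v + self.speaker_conv(v)[..., :S]                  # short-term inter-speaker
        return v.reshape(B, T, D, S).permute(0, 3, 1, 2)

block = LongShortBlock()
out = block(torch.randn(2, 3, 16, 128), torch.randn(2, 16, 128))
print(out.shape)   # (2, 3, 16, 128)
```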
D3still: Decoupled Differential Distillation for Asymmetric Image Retrieval | Yi Xie, Yihong Lin, Wenjie Cai, Xuemiao Xu, Huaidong Zhang, Yong Du, Shengfeng He | Existing methods for asymmetric image retrieval employ a rigid pairwise similarity constraint between the query network and the larger gallery network. However these one-to-one constraint approaches often fail to maintain retrieval order consistency especially when the query network has limited representational capacity. To overcome this problem we introduce the Decoupled Differential Distillation (D3still) framework. This framework shifts from absolute one-to-one supervision to optimizing the relational differences in pairwise similarities produced by the query and gallery networks thereby preserving a consistent retrieval order across both networks. Our method involves computing a pairwise similarity differential matrix within the gallery domain which is then decomposed into three components: feature representation knowledge inconsistent pairwise similarity differential knowledge and consistent pairwise similarity differential knowledge. This strategic decomposition aligns the retrieval ranking of the query network with the gallery network effectively. Extensive experiments on various benchmark datasets reveal that D3still surpasses state-of-the-art methods in asymmetric image retrieval. Code is available at https://github.com/SCY-X/D3still. | https://openaccess.thecvf.com/content/CVPR2024/papers/Xie_D3still_Decoupled_Differential_Distillation_for_Asymmetric_Image_Retrieval_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Xie_D3still_Decoupled_Differential_Distillation_for_Asymmetric_Image_Retrieval_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Xie_D3still_Decoupled_Differential_Distillation_for_Asymmetric_Image_Retrieval_CVPR_2024_paper.html | CVPR 2024 | null | null |
Transcending Forgery Specificity with Latent Space Augmentation for Generalizable Deepfake Detection | Zhiyuan Yan, Yuhao Luo, Siwei Lyu, Qingshan Liu, Baoyuan Wu | Deepfake detection faces a critical generalization hurdle with performance deteriorating when there is a mismatch between the distributions of training and testing data. A broadly received explanation is the tendency of these detectors to be overfitted to forgery-specific artifacts rather than learning features that are widely applicable across various forgeries. To address this issue we propose a simple yet effective detector called LSDA (Latent Space Data Augmentation) which is based on a heuristic idea: representations with a wider variety of forgeries should be able to learn a more generalizable decision boundary thereby mitigating the overfitting of method-specific features (see Fig. 1). Following this idea we propose to enlarge the forgery space by constructing and simulating variations within and across forgery features in the latent space. This approach encompasses the acquisition of enriched domain-specific features and the facilitation of smoother transitions between different forgery types effectively bridging domain gaps. Our approach culminates in refining a binary classifier that leverages the distilled knowledge from the enhanced features striving for a generalizable deepfake detector. Comprehensive experiments show that our proposed method is surprisingly effective and transcends state-of-the-art detectors across several widely used benchmarks. | https://openaccess.thecvf.com/content/CVPR2024/papers/Yan_Transcending_Forgery_Specificity_with_Latent_Space_Augmentation_for_Generalizable_Deepfake_CVPR_2024_paper.pdf | http://arxiv.org/abs/2311.11278 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Yan_Transcending_Forgery_Specificity_with_Latent_Space_Augmentation_for_Generalizable_Deepfake_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Yan_Transcending_Forgery_Specificity_with_Latent_Space_Augmentation_for_Generalizable_Deepfake_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yan_Transcending_Forgery_Specificity_CVPR_2024_supplemental.pdf | null |
Scaling Laws of Synthetic Images for Model Training ... for Now | Lijie Fan, Kaifeng Chen, Dilip Krishnan, Dina Katabi, Phillip Isola, Yonglong Tian | Recent significant advances in text-to-image models unlock the possibility of training vision systems using synthetic images potentially overcoming the difficulty of collecting curated data at scale. It is unclear however how these models behave at scale as more synthetic data is added to the training set. In this paper we study the scaling laws of synthetic images generated by state of the art text-to-image models for the training of supervised models: image classifiers with label supervision and CLIP with language supervision. We identify several factors including text prompts classifier-free guidance scale and types of text-to-image models that significantly affect scaling behavior. After tuning these factors we observe that synthetic images demonstrate a scaling trend similar to but slightly less effective than real images in CLIP training while they significantly underperform in scaling when training supervised image classifiers. Our analysis indicates that the main reason for this underperformance is the inability of off-the-shelf text-to-image models to generate certain concepts a limitation that significantly impairs the training of image classifiers. Our findings also suggest that scaling synthetic data can be particularly effective in scenarios such as: (1) when there is a limited supply of real images for a supervised problem (e.g. fewer than 0.5 million images in ImageNet) (2) when the evaluation dataset diverges significantly from the training data indicating the out-of-distribution scenario or (3) when synthetic data is used in conjunction with real images as demonstrated in the training of CLIP models. | https://openaccess.thecvf.com/content/CVPR2024/papers/Fan_Scaling_Laws_of_Synthetic_Images_for_Model_Training_..._for_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.04567 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Fan_Scaling_Laws_of_Synthetic_Images_for_Model_Training_..._for_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Fan_Scaling_Laws_of_Synthetic_Images_for_Model_Training_..._for_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Fan_Scaling_Laws_of_CVPR_2024_supplemental.pdf | null |
Towards Large-scale 3D Representation Learning with Multi-dataset Point Prompt Training | Xiaoyang Wu, Zhuotao Tian, Xin Wen, Bohao Peng, Xihui Liu, Kaicheng Yu, Hengshuang Zhao | The rapid advancement of deep learning models is often attributed to their ability to leverage massive training data. In contrast such privilege has not yet fully benefited 3D deep learning mainly due to the limited availability of large-scale 3D datasets. Merging multiple available data sources and letting them collaboratively train a single model is a potential solution. However due to the large domain gap between 3D point cloud datasets such mixed supervision could adversely affect the model's performance and lead to degenerated performance (i.e. negative transfer) compared to single-dataset training. In view of this challenge we introduce Point Prompt Training (PPT) a novel framework for multi-dataset synergistic learning in the context of 3D representation learning that supports multiple pre-training paradigms. Based on this framework we propose Prompt-driven Normalization which adapts the model to different datasets with domain-specific prompts and Language-guided Categorical Alignment that decently unifies the multiple-dataset label spaces by leveraging the relationship between label text. Extensive experiments verify that PPT can overcome the negative transfer associated with synergistic learning and produce generalizable representations. Notably it achieves state-of-the-art performance on each dataset using a single weight-shared model with supervised multi-dataset training. Moreover when served as a pre-training framework it outperforms other pre-training approaches regarding representation quality and attains remarkable state-of-the-art performance across over ten diverse downstream tasks spanning both indoor and outdoor 3D scenarios. | https://openaccess.thecvf.com/content/CVPR2024/papers/Wu_Towards_Large-scale_3D_Representation_Learning_with_Multi-dataset_Point_Prompt_Training_CVPR_2024_paper.pdf | http://arxiv.org/abs/2308.09718 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Wu_Towards_Large-scale_3D_Representation_Learning_with_Multi-dataset_Point_Prompt_Training_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Wu_Towards_Large-scale_3D_Representation_Learning_with_Multi-dataset_Point_Prompt_Training_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wu_Towards_Large-scale_3D_CVPR_2024_supplemental.pdf | null |
Learning Triangular Distribution in Visual World | Ping Chen, Xingpeng Zhang, Chengtao Zhou, Dichao Fan, Peng Tu, Le Zhang, Yanlin Qian | Convolutional neural networks are successful in pervasive vision tasks including label distribution learning which usually takes the form of learning an injection from the non-linear visual features to the well-defined labels. However how the discrepancy between features is mapped to the label discrepancy is ambiguous and its correctness is not guaranteed. To address these problems we study the mathematical connection between feature and its label presenting a general and simple framework for label distribution learning. We propose a so-called Triangular Distribution Transform (TDT) to build an injective function between feature and label guaranteeing that any symmetric feature discrepancy linearly reflects the difference between labels. The proposed TDT can be used as a plug-in in mainstream backbone networks to address different label distribution learning tasks. Experiments on Facial Age Recognition Illumination Chromaticity Estimation and Aesthetics assessment show that TDT achieves on-par or better results than the prior arts. | https://openaccess.thecvf.com/content/CVPR2024/papers/Chen_Learning_Triangular_Distribution_in_Visual_World_CVPR_2024_paper.pdf | http://arxiv.org/abs/2311.18605 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Chen_Learning_Triangular_Distribution_in_Visual_World_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Chen_Learning_Triangular_Distribution_in_Visual_World_CVPR_2024_paper.html | CVPR 2024 | null | null |
State Space Models for Event Cameras | Nikola Zubic, Mathias Gehrig, Davide Scaramuzza | Today state-of-the-art deep neural networks that process event-camera data first convert a temporal window of events into dense grid-like input representations. As such they exhibit poor generalizability when deployed at higher inference frequencies (i.e. smaller temporal windows) than the ones they were trained on. We address this challenge by introducing state-space models (SSMs) with learnable timescale parameters to event-based vision. This design adapts to varying frequencies without the need to retrain the network at different frequencies. Additionally we investigate two strategies to counteract aliasing effects when deploying the model at higher frequencies. We comprehensively evaluate our approach against existing methods based on RNN and Transformer architectures across various benchmarks including Gen1 and 1 Mpx event camera datasets. Our results demonstrate that SSM-based models train 33% faster and also exhibit minimal performance degradation when tested at higher frequencies than the training input. Traditional RNN and Transformer models exhibit performance drops of more than 20 mAP with SSMs having a drop of 3.31 mAP highlighting the effectiveness of SSMs in event-based vision tasks. | https://openaccess.thecvf.com/content/CVPR2024/papers/Zubic_State_Space_Models_for_Event_Cameras_CVPR_2024_paper.pdf | http://arxiv.org/abs/2402.15584 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Zubic_State_Space_Models_for_Event_Cameras_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Zubic_State_Space_Models_for_Event_Cameras_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zubic_State_Space_Models_CVPR_2024_supplemental.pdf | null |
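The learnable-timescale idea above can be sketched with a diagonal linear SSM whose discretization step is a trainable parameter; running at a higher inference frequency then only rescales the step rather than requiring retraining. The zero-order-hold discretization is standard, but the exact parameterization the paper uses may differ.

```python
# Sketch of a diagonal state-space layer with a learnable timescale: the continuous
# parameters (A, B) are discretized with a per-channel step dt = exp(log_dt), so a
# higher inference frequency only rescales dt. Zero-order-hold discretization.
import torch
import torch.nn as nn

class DiagonalSSM(nn.Module):
    def __init__(self, dim=64, state=16):
        super().__init__()
        self.log_dt = nn.Parameter(torch.full((dim,), -3.0))      # learnable timescale
        self.log_neg_a = nn.Parameter(torch.zeros(dim, state))    # A = -exp(log_neg_a)
        self.B = nn.Parameter(torch.randn(dim, state) * 0.1)
        self.C = nn.Parameter(torch.randn(dim, state) * 0.1)

    def forward(self, u, rate=1.0):
        # u: (batch, length, dim); rate > 1 means a higher inference frequency.
        dt = torch.exp(self.log_dt)[:, None] / rate
        A = -torch.exp(self.log_neg_a)
        A_bar = torch.exp(dt * A)                                  # (dim, state)
        B_bar = (A_bar - 1.0) / A * self.B                         # ZOH discretization
        x = torch.zeros(u.shape[0], u.shape[2], A.shape[1], device=u.device)
        ys = []
        for t in range(u.shape[1]):
            x = A_bar * x + B_bar * u[:, t, :, None]
            ys.append((x * self.C).sum(-1))
        return torch.stack(ys, dim=1)

ssm = DiagonalSSM()
y_train = ssm(torch.randn(2, 100, 64), rate=1.0)   # training frequency
y_fast = ssm(torch.randn(2, 50, 64), rate=2.0)     # 2x inference frequency, same weights
print(y_train.shape, y_fast.shape)
```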
EmbodiedScan: A Holistic Multi-Modal 3D Perception Suite Towards Embodied AI | Tai Wang, Xiaohan Mao, Chenming Zhu, Runsen Xu, Ruiyuan Lyu, Peisen Li, Xiao Chen, Wenwei Zhang, Kai Chen, Tianfan Xue, Xihui Liu, Cewu Lu, Dahua Lin, Jiangmiao Pang | In the realm of computer vision and robotics embodied agents are expected to explore their environment and carry out human instructions. This necessitates the ability to fully understand 3D scenes given their first-person observations and contextualize them into language for interaction. However traditional research focuses more on scene-level input and output setups from a global view. To address the gap we introduce EmbodiedScan a multi-modal ego-centric 3D perception dataset and benchmark for holistic 3D scene understanding. It encompasses over 5k scans encapsulating 1M ego-centric RGB-D views 1M language prompts 160k 3D-oriented boxes spanning over 760 categories some of which partially align with LVIS and dense semantic occupancy with 80 common categories. Building upon this database we introduce a baseline framework named Embodied Perceptron. It is capable of processing an arbitrary number of multi-modal inputs and demonstrates remarkable 3D perception capabilities both within the two series of benchmarks we set up i.e. fundamental 3D perception tasks and language-grounded tasks and in the wild. | https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_EmbodiedScan_A_Holistic_Multi-Modal_3D_Perception_Suite_Towards_Embodied_AI_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.16170 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Wang_EmbodiedScan_A_Holistic_Multi-Modal_3D_Perception_Suite_Towards_Embodied_AI_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Wang_EmbodiedScan_A_Holistic_Multi-Modal_3D_Perception_Suite_Towards_Embodied_AI_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wang_EmbodiedScan_A_Holistic_CVPR_2024_supplemental.zip | null |
SHINOBI: Shape and Illumination using Neural Object Decomposition via BRDF Optimization In-the-wild | Andreas Engelhardt, Amit Raj, Mark Boss, Yunzhi Zhang, Abhishek Kar, Yuanzhen Li, Deqing Sun, Ricardo Martin Brualla, Jonathan T. Barron, Hendrik P. A. Lensch, Varun Jampani | We present SHINOBI an end-to-end framework for the reconstruction of shape material and illumination from object images captured with varying lighting pose and background. Inverse rendering of an object based on unconstrained image collections is a long-standing challenge in computer vision and graphics and requires a joint optimization over shape radiance and pose. We show that an implicit shape representation based on a multi-resolution hash encoding enables faster and robust shape reconstruction with joint camera alignment optimization that outperforms prior work. Further to enable the editing of illumination and object reflectance (i.e. material) we jointly optimize BRDF and illumination together with the object's shape. Our method is class-agnostic and works on in-the-wild image collections of objects to produce relightable 3D assets for several use cases such as AR/VR movies games etc. | https://openaccess.thecvf.com/content/CVPR2024/papers/Engelhardt_SHINOBI_Shape_and_Illumination_using_Neural_Object_Decomposition_via_BRDF_CVPR_2024_paper.pdf | http://arxiv.org/abs/2401.10171 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Engelhardt_SHINOBI_Shape_and_Illumination_using_Neural_Object_Decomposition_via_BRDF_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Engelhardt_SHINOBI_Shape_and_Illumination_using_Neural_Object_Decomposition_via_BRDF_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Engelhardt_SHINOBI_Shape_and_CVPR_2024_supplemental.pdf | null |
ES3: Evolving Self-Supervised Learning of Robust Audio-Visual Speech Representations | Yuanhang Zhang, Shuang Yang, Shiguang Shan, Xilin Chen | We propose a novel strategy ES3 for self-supervised learning of robust audio-visual speech representations from unlabeled talking face videos. While many recent approaches for this task primarily rely on guiding the learning process using the audio modality alone to capture information shared between audio and video we reframe the problem as the acquisition of shared unique (modality-specific) and synergistic speech information to address the inherent asymmetry between the modalities. Based on this formulation we propose a novel "evolving" strategy that progressively builds joint audio-visual speech representations that are strong for both uni-modal (audio & visual) and bi-modal (audio-visual) speech. First we leverage the more easily learnable audio modality to initialize audio and visual representations by capturing audio-unique and shared speech information. Next we incorporate video-unique speech information and bootstrap the audio-visual representations on top of the previously acquired shared knowledge. Finally we maximize the total audio-visual speech information including synergistic information to obtain robust and comprehensive representations. We implement ES3 as a simple Siamese framework and experiments on both English benchmarks and a newly contributed large-scale Mandarin dataset show its effectiveness. In particular on LRS2-BBC our smallest model is on par with SoTA models with only 1/2 parameters and 1/8 unlabeled data (223h). | https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_ES3_Evolving_Self-Supervised_Learning_of_Robust_Audio-Visual_Speech_Representations_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_ES3_Evolving_Self-Supervised_Learning_of_Robust_Audio-Visual_Speech_Representations_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_ES3_Evolving_Self-Supervised_Learning_of_Robust_Audio-Visual_Speech_Representations_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhang_ES3_Evolving_Self-Supervised_CVPR_2024_supplemental.pdf | null |
TeTriRF: Temporal Tri-Plane Radiance Fields for Efficient Free-Viewpoint Video | Minye Wu, Zehao Wang, Georgios Kouros, Tinne Tuytelaars | Neural Radiance Fields (NeRF) revolutionize the realm of visual media by providing photorealistic Free-Viewpoint Video (FVV) experiences offering viewers unparalleled immersion and interactivity. However the technology's significant storage requirements and the computational complexity involved in generation and rendering currently limit its broader application. To close this gap this paper presents Temporal Tri-Plane Radiance Fields (TeTriRF) a novel technology that significantly reduces the storage size for Free-Viewpoint Video (FVV) while maintaining low-cost generation and rendering. TeTriRF introduces a hybrid representation with tri-planes and voxel grids to support scaling up to long-duration sequences and scenes with complex motions or rapid changes. We propose a group training scheme tailored to achieving high training efficiency and yielding temporally consistent low-entropy scene representations in the feature domain. Leveraging these properties of the representations we introduce a compression pipeline with off-the-shelf video codecs achieving an order of magnitude smaller storage size than the state-of-the-art. Our experiments demonstrate that TeTriRF can achieve competitive quality with a higher compression rate. | https://openaccess.thecvf.com/content/CVPR2024/papers/Wu_TeTriRF_Temporal_Tri-Plane_Radiance_Fields_for_Efficient_Free-Viewpoint_Video_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.06713 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Wu_TeTriRF_Temporal_Tri-Plane_Radiance_Fields_for_Efficient_Free-Viewpoint_Video_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Wu_TeTriRF_Temporal_Tri-Plane_Radiance_Fields_for_Efficient_Free-Viewpoint_Video_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wu_TeTriRF_Temporal_Tri-Plane_CVPR_2024_supplemental.pdf | null
Motion2VecSets: 4D Latent Vector Set Diffusion for Non-rigid Shape Reconstruction and Tracking | Wei Cao, Chang Luo, Biao Zhang, Matthias Nießner, Jiapeng Tang | We introduce Motion2VecSets a 4D diffusion model for dynamic surface reconstruction from point cloud sequences. While existing state-of-the-art methods have demonstrated success in reconstructing non-rigid objects using neural field representations conventional feed-forward networks encounter challenges with ambiguous observations from noisy partial or sparse point clouds. To address these challenges we introduce a diffusion model that explicitly learns the shape and motion distribution of non-rigid objects through an iterative denoising process of compressed latent representations. The diffusion-based priors enable more plausible and probabilistic reconstructions when handling ambiguous inputs. We parameterize 4D dynamics with latent sets instead of using global latent codes. This novel 4D representation allows us to learn local shape and deformation patterns leading to more accurate non-linear motion capture and significantly improving generalizability to unseen motions and identities. For more temporally-coherent object tracking we synchronously denoise deformation latent sets and exchange information across multiple frames. To avoid computational overhead we designed an interleaved space and time attention block to alternately aggregate deformation latents along spatial and temporal domains. Extensive comparisons against state-of-the-art methods demonstrate the superiority of our Motion2VecSets in 4D reconstruction from various imperfect observations. | https://openaccess.thecvf.com/content/CVPR2024/papers/Cao_Motion2VecSets_4D_Latent_Vector_Set_Diffusion_for_Non-rigid_Shape_Reconstruction_CVPR_2024_paper.pdf | http://arxiv.org/abs/2401.06614 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Cao_Motion2VecSets_4D_Latent_Vector_Set_Diffusion_for_Non-rigid_Shape_Reconstruction_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Cao_Motion2VecSets_4D_Latent_Vector_Set_Diffusion_for_Non-rigid_Shape_Reconstruction_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Cao_Motion2VecSets_4D_Latent_CVPR_2024_supplemental.pdf | null |
DiaLoc: An Iterative Approach to Embodied Dialog Localization | Chao Zhang, Mohan Li, Ignas Budvytis, Stephan Liwicki | Multimodal learning has advanced the performance of many vision-language tasks. However most existing works in embodied dialog research focus on navigation and leave the localization task understudied. The few existing dialog-based localization approaches assume the availability of the entire dialog prior to localization which is impractical for deployed dialog-based localization. In this paper we propose DiaLoc a new dialog-based localization framework which aligns with real human operator behavior. Specifically we produce an iterative refinement of location predictions which can visualize current pose beliefs after each dialog turn. DiaLoc effectively utilizes the multimodal data for multi-shot localization where a fusion encoder fuses vision and dialog information iteratively. We achieve state-of-the-art results on the embodied dialog-based localization task in single-shot (+7.08% in Acc5@valUnseen) and multi-shot settings (+10.85% in Acc5@valUnseen). DiaLoc narrows the gap between simulation and real-world applications opening doors for future research on collaborative localization and navigation. | https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_DiaLoc_An_Iterative_Approach_to_Embodied_Dialog_Localization_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.06846 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_DiaLoc_An_Iterative_Approach_to_Embodied_Dialog_Localization_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_DiaLoc_An_Iterative_Approach_to_Embodied_Dialog_Localization_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhang_DiaLoc_An_Iterative_CVPR_2024_supplemental.pdf | null
Self-Training Large Language Models for Improved Visual Program Synthesis With Visual Reinforcement | Zaid Khan, Vijay Kumar BG, Samuel Schulter, Yun Fu, Manmohan Chandraker | Visual program synthesis is a promising approach to exploit the reasoning abilities of large language models for compositional computer vision tasks. Previous work has used few-shot prompting with frozen LLMs to synthesize visual programs. Training an LLM to write better visual programs is an attractive prospect but it is unclear how to accomplish this. No dataset of visual programs for training exists and acquisition of a visual program dataset cannot be easily crowdsourced due to the need for expert annotators. To get around the lack of direct supervision we explore improving the program synthesis abilities of an LLM using feedback from interactive experience. We propose a method where we exploit existing annotations for a vision-language task to improvise a coarse reward signal for that task treat the LLM as a policy and apply reinforced self-training to improve the visual program synthesis ability of the LLM for that task. We describe a series of experiments on object detection compositional visual question answering and image-text retrieval and show that in each case the self-trained LLM outperforms or performs on par with few-shot frozen LLMs that are an order of magnitude larger. Website: https://zaidkhan.me/ViReP | https://openaccess.thecvf.com/content/CVPR2024/papers/Khan_Self-Training_Large_Language_Models_for_Improved_Visual_Program_Synthesis_With_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.04627 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Khan_Self-Training_Large_Language_Models_for_Improved_Visual_Program_Synthesis_With_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Khan_Self-Training_Large_Language_Models_for_Improved_Visual_Program_Synthesis_With_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Khan_Self-Training_Large_Language_CVPR_2024_supplemental.pdf | null |
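The abstract above describes a sample-execute-score-filter-finetune cycle for visual program synthesis. Below is a schematic Python sketch of such a reinforced self-training loop; the LLM sampler, program executor, reward computation and fine-tuning step are all hypothetical stand-ins for illustration, not the paper's actual components.

```python
# Schematic reinforced self-training loop (illustrative only; all pieces are placeholders).
import random

def sample_programs(llm, question, k=4):           # hypothetical LLM sampling call
    return [f"program_{i} for {question!r}" for i in range(k)]

def execute_and_score(program, annotation):        # coarse reward improvised from annotations
    return random.random()                          # placeholder: e.g. fraction of checks passed

def finetune(llm, dataset):                         # hypothetical fine-tuning step
    print(f"fine-tuning on {len(dataset)} (question, program) pairs")

llm = object()                                      # stand-in for the policy LLM
tasks = [("How many red cars?", 3), ("Is there a dog left of the bench?", True)]
for _ in range(2):                                  # a few self-training rounds
    kept = []
    for question, annotation in tasks:
        for prog in sample_programs(llm, question):
            if execute_and_score(prog, annotation) > 0.7:   # keep only high-reward programs
                kept.append((question, prog))
    finetune(llm, kept)                             # policy improves on its own filtered outputs
```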
A2XP: Towards Private Domain Generalization | Geunhyeok Yu, Hyoseok Hwang | Deep Neural Networks (DNNs) have become pivotal in various fields especially in computer vision outperforming previous methodologies. A critical challenge in their deployment is the bias inherent in data across different domains such as image style and environmental conditions leading to domain gaps. This necessitates techniques for learning general representations from biased training data known as domain generalization. This paper presents Attend to eXpert Prompts (A2XP) a novel approach for domain generalization that preserves the privacy and integrity of the network architecture. A2XP consists of two phases: Expert Adaptation and Domain Generalization. In the first phase prompts for each source domain are optimized to guide the model towards the optimal direction. In the second phase two embedder networks are trained to effectively amalgamate these expert prompts aiming for an optimal output. Our extensive experiments demonstrate that A2XP achieves state-of-the-art results over existing non-private domain generalization methods. The experimental results validate that the proposed approach not only tackles the domain generalization challenge in DNNs but also offers a privacy-preserving efficient solution to the broader field of computer vision. | https://openaccess.thecvf.com/content/CVPR2024/papers/Yu_A2XP_Towards_Private_Domain_Generalization_CVPR_2024_paper.pdf | http://arxiv.org/abs/2311.10339 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Yu_A2XP_Towards_Private_Domain_Generalization_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Yu_A2XP_Towards_Private_Domain_Generalization_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yu_A2XP_Towards_Private_CVPR_2024_supplemental.pdf | null |
Event-assisted Low-Light Video Object Segmentation | Hebei Li, Jin Wang, Jiahui Yuan, Yue Li, Wenming Weng, Yansong Peng, Yueyi Zhang, Zhiwei Xiong, Xiaoyan Sun | In the realm of video object segmentation (VOS) the challenge of operating under low-light conditions persists resulting in notably degraded image quality and compromised accuracy when comparing query and memory frames for similarity computation. Event cameras characterized by their high dynamic range and ability to capture motion information of objects offer promise in enhancing object visibility and aiding VOS methods under such low-light conditions. This paper introduces a pioneering framework tailored for low-light VOS leveraging event camera data to elevate segmentation accuracy. Our approach hinges on two pivotal components: the Adaptive Cross-Modal Fusion (ACMF) module aimed at extracting pertinent features while fusing image and event modalities to mitigate noise interference and the Event-Guided Memory Matching (EGMM) module designed to rectify the issue of inaccurate matching prevalent in low-light settings. Additionally we present the creation of a synthetic LLE-DAVIS dataset and the curation of a real-world LLE-VOS dataset encompassing frames and events. Experimental evaluations corroborate the efficacy of our method across both datasets affirming its effectiveness in low-light scenarios. The datasets are available at https://github.com/HebeiFast/EventLowLightVOS. | https://openaccess.thecvf.com/content/CVPR2024/papers/Li_Event-assisted_Low-Light_Video_Object_Segmentation_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.01945 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Li_Event-assisted_Low-Light_Video_Object_Segmentation_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Li_Event-assisted_Low-Light_Video_Object_Segmentation_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_Event-assisted_Low-Light_Video_CVPR_2024_supplemental.pdf | null |
Active Domain Adaptation with False Negative Prediction for Object Detection | Yuzuru Nakamura, Yasunori Ishii, Takayoshi Yamashita | Domain adaptation adapts models to various scenes with different appearances. In this field active domain adaptation is crucial in effectively sampling a limited number of data in the target domain. We propose an active domain adaptation method for object detection focusing on quantifying the undetectability of objects. Existing methods for active sampling encounter challenges in considering undetected objects while estimating the uncertainty of model predictions. Our proposed active sampling strategy addresses this issue using an active learning approach that simultaneously accounts for uncertainty and undetectability. Our newly proposed False Negative Prediction Module evaluates the undetectability of images containing undetected objects enabling more informed active sampling. This approach considers previously overlooked undetected objects thereby reducing false negative errors. Moreover using unlabeled data our proposed method utilizes uncertainty-guided pseudo-labeling to enhance domain adaptation further. Extensive experiments demonstrate that the performance of our proposed method closely rivals that of fully supervised learning while requiring only a fraction of the labeling efforts needed for the latter. | https://openaccess.thecvf.com/content/CVPR2024/papers/Nakamura_Active_Domain_Adaptation_with_False_Negative_Prediction_for_Object_Detection_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Nakamura_Active_Domain_Adaptation_with_False_Negative_Prediction_for_Object_Detection_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Nakamura_Active_Domain_Adaptation_with_False_Negative_Prediction_for_Object_Detection_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Nakamura_Active_Domain_Adaptation_CVPR_2024_supplemental.pdf | null |
MLIP: Enhancing Medical Visual Representation with Divergence Encoder and Knowledge-guided Contrastive Learning | Zhe Li, Laurence T. Yang, Bocheng Ren, Xin Nie, Zhangyang Gao, Cheng Tan, Stan Z. Li | The scarcity of annotated data has sparked significant interest in unsupervised pre-training methods that leverage medical reports as auxiliary signals for medical visual representation learning. However existing research overlooks the multi-granularity nature of medical visual representation and lacks suitable contrastive learning techniques to improve the models' generalizability across different granularities leading to the underutilization of image-text information. To address this we propose MLIP a novel framework leveraging domain-specific medical knowledge as guiding signals to integrate language information into the visual domain through image-text contrastive learning. Our model includes global contrastive learning with our designed divergence encoder local token-knowledge-patch alignment contrastive learning and knowledge-guided category-level contrastive learning with expert knowledge. Experimental evaluations reveal the efficacy of our model in enhancing transfer performance for tasks such as image classification object detection and semantic segmentation. Notably MLIP surpasses state-of-the-art methods even with limited annotated data highlighting the potential of multimodal pre-training in advancing medical representation learning. | https://openaccess.thecvf.com/content/CVPR2024/papers/Li_MLIP_Enhancing_Medical_Visual_Representation_with_Divergence_Encoder_and_Knowledge-guided_CVPR_2024_paper.pdf | http://arxiv.org/abs/2402.02045 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Li_MLIP_Enhancing_Medical_Visual_Representation_with_Divergence_Encoder_and_Knowledge-guided_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Li_MLIP_Enhancing_Medical_Visual_Representation_with_Divergence_Encoder_and_Knowledge-guided_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_MLIP_Enhancing_Medical_CVPR_2024_supplemental.pdf | null |
Generative 3D Part Assembly via Part-Whole-Hierarchy Message Passing | Bi'an Du, Xiang Gao, Wei Hu, Renjie Liao | Generative 3D part assembly involves understanding part relationships and predicting their 6-DoF poses for assembling a realistic 3D shape. Prior work often focuses on the geometry of individual parts neglecting part-whole hierarchies of objects. Leveraging two key observations: 1) super-part poses provide strong hints about part poses and 2) predicting super-part poses is easier due to fewer super-parts we propose a part-whole-hierarchy message passing network for efficient 3D part assembly. We first introduce super-parts by grouping geometrically similar parts without any semantic labels. Then we employ a part-whole hierarchical encoder wherein a super-part encoder predicts latent super-part poses based on input parts. Subsequently we transform the point cloud using the latent poses feeding it to the part encoder for aggregating super-part information and reasoning about part relationships to predict all part poses. In training only ground-truth part poses are required. During inference the predicted latent poses of super-parts enhance interpretability. Experimental results on the PartNet dataset show that our method achieves state-of-the-art performance in part and connectivity accuracy and enables an interpretable hierarchical part assembly. | https://openaccess.thecvf.com/content/CVPR2024/papers/Du_Generative_3D_Part_Assembly_via_Part-Whole-Hierarchy_Message_Passing_CVPR_2024_paper.pdf | http://arxiv.org/abs/2402.17464 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Du_Generative_3D_Part_Assembly_via_Part-Whole-Hierarchy_Message_Passing_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Du_Generative_3D_Part_Assembly_via_Part-Whole-Hierarchy_Message_Passing_CVPR_2024_paper.html | CVPR 2024 | null | null
VidToMe: Video Token Merging for Zero-Shot Video Editing | Xirui Li, Chao Ma, Xiaokang Yang, Ming-Hsuan Yang | Diffusion models have made significant advances in generating high-quality images but their application to video generation has remained challenging due to the complexity of temporal motion. Zero-shot video editing offers a solution by utilizing pre-trained image diffusion models to translate source videos into new ones. Nevertheless existing methods struggle to maintain strict temporal consistency and efficient memory consumption. In this work we propose a novel approach to enhance temporal consistency in generated videos by merging self-attention tokens across frames. By aligning and compressing temporally redundant tokens across frames our method improves temporal coherence and reduces memory consumption in self-attention computations. The merging strategy matches and aligns tokens according to the temporal correspondence between frames facilitating natural temporal consistency in generated video frames. To manage the complexity of video processing we divide videos into chunks and develop intra-chunk local token merging and inter-chunk global token merging ensuring both short-term video continuity and long-term content consistency. Our video editing approach seamlessly extends the advancements in image editing to video editing rendering favorable results in temporal consistency over state-of-the-art methods. | https://openaccess.thecvf.com/content/CVPR2024/papers/Li_VidToMe_Video_Token_Merging_for_Zero-Shot_Video_Editing_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.10656 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Li_VidToMe_Video_Token_Merging_for_Zero-Shot_Video_Editing_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Li_VidToMe_Video_Token_Merging_for_Zero-Shot_Video_Editing_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_VidToMe_Video_Token_CVPR_2024_supplemental.zip | null |
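The cross-frame token-merging idea described above can be illustrated with a small sketch: tokens of the current frame that are nearly identical to tokens of a reference frame are replaced by their matches, so self-attention sees fewer redundant tokens. This is a generic cosine-similarity matcher for illustration only; the paper's intra-chunk and inter-chunk merging strategy is more involved.

```python
# Illustrative sketch of cross-frame token merging (not the paper's exact algorithm).
import torch
import torch.nn.functional as F

def merge_tokens(ref_tokens, cur_tokens, sim_threshold=0.9):
    """ref_tokens, cur_tokens: (N, C) self-attention tokens of two frames."""
    ref_n = F.normalize(ref_tokens, dim=-1)
    cur_n = F.normalize(cur_tokens, dim=-1)
    sim = cur_n @ ref_n.T                       # (N_cur, N_ref) cosine similarity
    best_sim, best_idx = sim.max(dim=-1)        # best match in the reference frame
    merged = cur_tokens.clone()
    keep = best_sim < sim_threshold             # only low-similarity tokens stay unique
    merged[~keep] = ref_tokens[best_idx[~keep]] # redundant tokens reuse the reference token
    return merged, keep

ref = torch.randn(256, 320)
cur = ref + 0.01 * torch.randn(256, 320)        # temporally redundant frame
merged, keep = merge_tokens(ref, cur)
print(f"unique tokens kept: {keep.sum().item()} / {cur.shape[0]}")
```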
FaceChain-SuDe: Building Derived Class to Inherit Category Attributes for One-shot Subject-Driven Generation | Pengchong Qiao, Lei Shang, Chang Liu, Baigui Sun, Xiangyang Ji, Jie Chen | Recently subject-driven generation has garnered significant interest due to its ability to personalize text-to-image generation. Typical works focus on learning the new subject's private attributes. However an important fact has not been taken seriously: a subject is not an isolated new concept but should be a specialization of a certain category in the pre-trained model. This results in the subject failing to comprehensively inherit the attributes in its category causing poor attribute-related generations. In this paper motivated by object-oriented programming we model the subject as a derived class whose base class is its semantic category. This modeling enables the subject to inherit public attributes from its category while learning its private attributes from the user-provided example. Specifically we propose a plug-and-play method Subject-Derived regularization (SuDe). It constructs the base-derived class modeling by constraining the subject-driven generated images to semantically belong to the subject's category. Extensive experiments under three baselines and two backbones on various subjects show that our SuDe enables imaginative attribute-related generations while maintaining subject fidelity. For the codes please refer to FaceChain: https://github.com/modelscope/facechain. | https://openaccess.thecvf.com/content/CVPR2024/papers/Qiao_FaceChain-SuDe_Building_Derived_Class_to_Inherit_Category_Attributes_for_One-shot_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Qiao_FaceChain-SuDe_Building_Derived_Class_to_Inherit_Category_Attributes_for_One-shot_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Qiao_FaceChain-SuDe_Building_Derived_Class_to_Inherit_Category_Attributes_for_One-shot_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Qiao_FaceChain-SuDe_Building_Derived_CVPR_2024_supplemental.pdf | null
Benchmarking Segmentation Models with Mask-Preserved Attribute Editing | Zijin Yin, Kongming Liang, Bing Li, Zhanyu Ma, Jun Guo | When deploying segmentation models in practice it is critical to evaluate their behaviors in varied and complex scenes. Different from the previous evaluation paradigms only in consideration of global attribute variations (e.g. adverse weather) we investigate both local and global attribute variations for robustness evaluation. To achieve this we construct a mask-preserved attribute editing pipeline to edit visual attributes of real images with precise control of structural information. Therefore the original segmentation labels can be reused for the edited images. Using our pipeline we construct a benchmark covering both object and image attributes (e.g. color material pattern style). We evaluate a broad variety of semantic segmentation models spanning from conventional close-set models to recent open-vocabulary large models on their robustness to different types of variations. We find that both local and global attribute variations affect segmentation performances and the sensitivity of models diverges across different variation types. We argue that local attributes have the same importance as global attributes and should be considered in the robustness evaluation of segmentation models. Code: https://github.com/PRIS-CV/Pascal-EA. | https://openaccess.thecvf.com/content/CVPR2024/papers/Yin_Benchmarking_Segmentation_Models_with_Mask-Preserved_Attribute_Editing_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.01231 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Yin_Benchmarking_Segmentation_Models_with_Mask-Preserved_Attribute_Editing_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Yin_Benchmarking_Segmentation_Models_with_Mask-Preserved_Attribute_Editing_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yin_Benchmarking_Segmentation_Models_CVPR_2024_supplemental.pdf | null |
Analyzing and Improving the Training Dynamics of Diffusion Models | Tero Karras, Miika Aittala, Jaakko Lehtinen, Janne Hellsten, Timo Aila, Samuli Laine | Diffusion models currently dominate the field of data-driven image synthesis with their unparalleled scaling to large datasets. In this paper we identify and rectify several causes for uneven and ineffective training in the popular ADM diffusion model architecture without altering its high-level structure. Observing uncontrolled magnitude changes and imbalances in both the network activations and weights over the course of training we redesign the network layers to preserve activation weight and update magnitudes on expectation. We find that systematic application of this philosophy eliminates the observed drifts and imbalances resulting in considerably better networks at equal computational complexity. Our modifications improve the previous record FID of 2.41 in ImageNet-512 synthesis to 1.81 achieved using fast deterministic sampling. As an independent contribution we present a method for setting the exponential moving average (EMA) parameters post-hoc i.e. after completing the training run. This allows precise tuning of EMA length without the cost of performing several training runs and reveals its surprising interactions with network architecture training time and guidance. | https://openaccess.thecvf.com/content/CVPR2024/papers/Karras_Analyzing_and_Improving_the_Training_Dynamics_of_Diffusion_Models_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.02696 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Karras_Analyzing_and_Improving_the_Training_Dynamics_of_Diffusion_Models_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Karras_Analyzing_and_Improving_the_Training_Dynamics_of_Diffusion_Models_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Karras_Analyzing_and_Improving_CVPR_2024_supplemental.pdf | null |
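As a rough illustration of the "preserve activation and weight magnitudes on expectation" idea mentioned in the abstract above, the sketch below normalizes the weight of a linear layer to unit norm per output feature before use, so pre-activation magnitudes do not drift as the raw weights grow during training. This is only a minimal sketch of the general principle, not the exact layer definition used in the paper.

```python
# Minimal magnitude-preserving linear layer sketch (illustrative, assumed design).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MPLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))

    def forward(self, x):
        # Normalize each output row of the weight to unit norm; for roughly
        # unit-variance inputs the outputs then also stay at roughly unit variance,
        # regardless of how large the raw parameter values become.
        w = F.normalize(self.weight, dim=1)
        return F.linear(x, w)

x = torch.randn(8, 64)
layer = MPLinear(64, 128)
print(layer(x).std())   # close to 1.0 by construction
```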
Hierarchical Correlation Clustering and Tree Preserving Embedding | Morteza Haghir Chehreghani, Mostafa Haghir Chehreghani | We propose a hierarchical correlation clustering method that extends the well-known correlation clustering to produce hierarchical clusters applicable to both positive and negative pairwise dissimilarities. We then study unsupervised representation learning with such hierarchical correlation clustering. For this purpose we first investigate embedding the respective hierarchy to be used for tree preserving embedding and feature extraction. Thereafter we study the extension of minimax distance measures to correlation clustering as another representation learning paradigm. Finally we demonstrate the performance of our methods on several datasets. | https://openaccess.thecvf.com/content/CVPR2024/papers/Chehreghani_Hierarchical_Correlation_Clustering_and_Tree_Preserving_Embedding_CVPR_2024_paper.pdf | http://arxiv.org/abs/2002.07756 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Chehreghani_Hierarchical_Correlation_Clustering_and_Tree_Preserving_Embedding_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Chehreghani_Hierarchical_Correlation_Clustering_and_Tree_Preserving_Embedding_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Chehreghani_Hierarchical_Correlation_Clustering_CVPR_2024_supplemental.pdf | null
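A toy sketch of what hierarchical clustering over signed (positive/negative) pairwise similarities can look like: greedily merge the pair of clusters with the largest total inter-cluster similarity while that total stays positive, recording the merge order as the hierarchy. This greedy heuristic is only illustrative and is not the algorithm proposed in the paper.

```python
# Greedy hierarchical correlation clustering sketch (illustrative heuristic).
import numpy as np

def greedy_hierarchical_cc(S):
    """S: (n, n) symmetric signed similarity matrix; returns the list of merges."""
    clusters = [[i] for i in range(len(S))]
    merges = []
    while len(clusters) > 1:
        best, best_score = None, 0.0
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                score = sum(S[i, j] for i in clusters[a] for j in clusters[b])
                if score > best_score:          # only consider attracting (positive) merges
                    best, best_score = (a, b), score
        if best is None:                         # no positive inter-cluster similarity left
            break
        a, b = best
        merges.append((clusters[a], clusters[b], best_score))
        clusters[a] = clusters[a] + clusters[b]  # merge b into a
        del clusters[b]
    return merges

S = np.array([[0, 2, -1], [2, 0, -2], [-1, -2, 0]], dtype=float)
print(greedy_hierarchical_cc(S))   # merges 0 and 1, keeps 2 separate
```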
StableVITON: Learning Semantic Correspondence with Latent Diffusion Model for Virtual Try-On | Jeongho Kim, Guojung Gu, Minho Park, Sunghyun Park, Jaegul Choo | Given a clothing image and a person image an image-based virtual try-on aims to generate a customized image that appears natural and accurately reflects the characteristics of the clothing image. In this work we aim to expand the applicability of the pre-trained diffusion model so that it can be utilized independently for the virtual try-on task. The main challenge is to preserve the clothing details while effectively utilizing the robust generative capability of the pre-trained model. In order to tackle these issues we propose StableVITON learning the semantic correspondence between the clothing and the human body within the latent space of the pre-trained diffusion model in an end-to-end manner. Our proposed zero cross-attention blocks not only preserve the clothing details by learning the semantic correspondence but also generate high-fidelity images by utilizing the inherent knowledge of the pre-trained model in the warping process. Through our proposed novel attention total variation loss and applying augmentation we achieve the sharp attention map resulting in a more precise representation of clothing details. StableVITON outperforms the baselines in qualitative and quantitative evaluation showing promising quality in arbitrary person images. Our code is available at https://github.com/rlawjdghek/StableVITON. | https://openaccess.thecvf.com/content/CVPR2024/papers/Kim_StableVITON_Learning_Semantic_Correspondence_with_Latent_Diffusion_Model_for_Virtual_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.01725 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Kim_StableVITON_Learning_Semantic_Correspondence_with_Latent_Diffusion_Model_for_Virtual_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Kim_StableVITON_Learning_Semantic_Correspondence_with_Latent_Diffusion_Model_for_Virtual_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Kim_StableVITON_Learning_Semantic_CVPR_2024_supplemental.pdf | null |
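The "zero cross-attention block" mentioned above can be sketched generically as a cross-attention layer whose output projection is zero-initialized, so the pre-trained diffusion features pass through unchanged at the start of fine-tuning and the clothing condition is blended in gradually. The block below illustrates that zero-initialization idea only, under assumed dimensions; StableVITON's actual block may differ.

```python
# Zero-initialized residual cross-attention sketch (illustrative, assumed shapes).
import torch
import torch.nn as nn

class ZeroCrossAttention(nn.Module):
    def __init__(self, dim, cond_dim, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, kdim=cond_dim, vdim=cond_dim,
                                          batch_first=True)
        self.out = nn.Linear(dim, dim)
        nn.init.zeros_(self.out.weight)   # zero init: the block has no effect at step 0
        nn.init.zeros_(self.out.bias)

    def forward(self, x, cond):
        attn_out, _ = self.attn(x, cond, cond)   # query: UNet tokens, key/value: condition
        return x + self.out(attn_out)            # residual: exact identity at initialization

x = torch.randn(2, 64, 320)      # person (UNet) tokens
cond = torch.randn(2, 77, 768)   # clothing feature tokens
print(torch.allclose(ZeroCrossAttention(320, 768)(x, cond), x))   # True before training
```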
Can Protective Perturbation Safeguard Personal Data from Being Exploited by Stable Diffusion? | Zhengyue Zhao, Jinhao Duan, Kaidi Xu, Chenan Wang, Rui Zhang, Zidong Du, Qi Guo, Xing Hu | Stable Diffusion has established itself as a foundation model for artistic applications of generative AI receiving widespread research and application. Some recent fine-tuning methods have made it feasible for individuals to implant personalized concepts onto the basic Stable Diffusion model with minimal computational costs on small datasets. However these innovations have also given rise to issues like facial privacy forgery and artistic copyright infringement. In recent studies researchers have explored the addition of imperceptible adversarial perturbations to images to prevent potential unauthorized exploitation and infringements when personal data is used for fine-tuning Stable Diffusion. Although these studies have demonstrated the ability to protect images it is essential to consider that these methods may not be entirely applicable in real-world scenarios. In this paper we systematically evaluate the use of perturbations to protect images within a practical threat model. The results suggest that these approaches may not be sufficient to safeguard image privacy and copyright effectively. Furthermore we introduce a purification method capable of removing protective perturbations while preserving the original image structure to the greatest extent possible. Experiments reveal that Stable Diffusion can effectively learn from purified images across all protective methods. | https://openaccess.thecvf.com/content/CVPR2024/papers/Zhao_Can_Protective_Perturbation_Safeguard_Personal_Data_from_Being_Exploited_by_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.00084 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Zhao_Can_Protective_Perturbation_Safeguard_Personal_Data_from_Being_Exploited_by_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Zhao_Can_Protective_Perturbation_Safeguard_Personal_Data_from_Being_Exploited_by_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhao_Can_Protective_Perturbation_CVPR_2024_supplemental.pdf | null
Make-Your-Anchor: A Diffusion-based 2D Avatar Generation Framework | Ziyao Huang, Fan Tang, Yong Zhang, Xiaodong Cun, Juan Cao, Jintao Li, Tong-Yee Lee | Despite the remarkable progress of talking-head-based avatar-creating solutions directly generating anchor-style videos with full-body motions remains challenging. In this study we propose Make-Your-Anchor a novel system necessitating only a one-minute video clip of an individual for training subsequently enabling the automatic generation of anchor-style videos with precise torso and hand movements. Specifically we finetune a proposed structure-guided diffusion model on input video to render 3D mesh conditions into human appearances. We adopt a two-stage training strategy for the diffusion model effectively binding movements with specific appearances. To produce arbitrarily long temporal videos we extend the 2D U-Net in the frame-wise diffusion model to a 3D style without additional training cost and a simple yet effective batch-overlapped temporal denoising module is proposed to bypass the constraints on video length during inference. Finally a novel identity-specific face enhancement module is introduced to improve the visual quality of facial regions in the output videos. Comparative experiments demonstrate the effectiveness and superiority of the system in terms of visual quality temporal coherence and identity preservation outperforming SOTA diffusion/non-diffusion methods. Project page: https://github.com/ICTMCG/Make-Your-Anchor. | https://openaccess.thecvf.com/content/CVPR2024/papers/Huang_Make-Your-Anchor_A_Diffusion-based_2D_Avatar_Generation_Framework_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Huang_Make-Your-Anchor_A_Diffusion-based_2D_Avatar_Generation_Framework_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Huang_Make-Your-Anchor_A_Diffusion-based_2D_Avatar_Generation_Framework_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Huang_Make-Your-Anchor_A_Diffusion-based_CVPR_2024_supplemental.pdf | null
MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World | Yining Hong, Zishuo Zheng, Peihao Chen, Yian Wang, Junyan Li, Chuang Gan | Human beings possess the capability to multiply a melange of multisensory cues while actively exploring and interacting with the 3D world. Current multi-modal large language models however passively absorb sensory data as inputs lacking the capacity to actively interact with the objects in the 3D environment and dynamically collect their multisensory information. To usher in the study of this area we propose MultiPLY a multisensory embodied large language model that could incorporate multisensory interactive data including visual audio tactile and thermal information into large language models thereby establishing the correlation among words actions and percepts. To this end we first collect Multisensory Universe a large-scale multisensory interaction dataset comprising 500k data by deploying an LLM-powered embodied agent to engage with the 3D environment. To perform instruction tuning with pre-trained LLM on such generated data we first encode the 3D scene as abstracted object-centric representations and then introduce action tokens denoting that the embodied agent takes certain actions within the environment as well as state tokens that represent the multisensory state observations of the agent at each time step. In the inference time MultiPLY could generate action tokens instructing the agent to take the action in the environment and obtain the next multisensory state observation. The observation is then appended back to the LLM via state tokens to generate subsequent text or action tokens. We demonstrate that MultiPLY outperforms baselines by a large margin through a diverse set of embodied tasks involving object retrieval tool use multisensory captioning and task decomposition. | https://openaccess.thecvf.com/content/CVPR2024/papers/Hong_MultiPLY_A_Multisensory_Object-Centric_Embodied_Large_Language_Model_in_3D_CVPR_2024_paper.pdf | http://arxiv.org/abs/2401.08577 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Hong_MultiPLY_A_Multisensory_Object-Centric_Embodied_Large_Language_Model_in_3D_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Hong_MultiPLY_A_Multisensory_Object-Centric_Embodied_Large_Language_Model_in_3D_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Hong_MultiPLY_A_Multisensory_CVPR_2024_supplemental.pdf | null |
Learning to Visually Localize Sound Sources from Mixtures without Prior Source Knowledge | Dongjin Kim, Sung Jin Um, Sangmin Lee, Jung Uk Kim | The goal of the multi-sound source localization task is to localize sound sources from the mixture individually. While recent multi-sound source localization methods have shown improved performance they face challenges due to their reliance on prior information about the number of objects to be separated. In this paper to overcome this limitation we present a novel multi-sound source localization method that can perform localization without prior knowledge of the number of sound sources. To achieve this goal we propose an iterative object identification (IOI) module which can recognize sound-making objects in an iterative manner. After finding the regions of sound-making objects we devise an object similarity-aware clustering (OSC) loss to guide the IOI module to not only effectively combine regions of the same object but also distinguish between different objects and backgrounds. It enables our method to perform accurate localization of sound-making objects without any prior knowledge. Extensive experimental results on the MUSIC and VGGSound benchmarks show the significant performance improvements of the proposed method over the existing methods for both single and multi-source. Our code is available at: https://github.com/VisualAIKHU/NoPrior_MultiSSL | https://openaccess.thecvf.com/content/CVPR2024/papers/Kim_Learning_to_Visually_Localize_Sound_Sources_from_Mixtures_without_Prior_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.17420 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Kim_Learning_to_Visually_Localize_Sound_Sources_from_Mixtures_without_Prior_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Kim_Learning_to_Visually_Localize_Sound_Sources_from_Mixtures_without_Prior_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Kim_Learning_to_Visually_CVPR_2024_supplemental.pdf | null
Learning Dynamic Tetrahedra for High-Quality Talking Head Synthesis | Zicheng Zhang, Ruobing Zheng, Bonan Li, Congying Han, Tianqi Li, Meng Wang, Tiande Guo, Jingdong Chen, Ziwen Liu, Ming Yang | Recent works in implicit representations such as Neural Radiance Fields (NeRF) have advanced the generation of realistic and animatable head avatars from video sequences. These implicit methods are still confronted by visual artifacts and jitters since the lack of explicit geometric constraints poses a fundamental challenge in accurately modeling complex facial deformations. In this paper we introduce Dynamic Tetrahedra (DynTet) a novel hybrid representation that encodes explicit dynamic meshes by neural networks to ensure geometric consistency across various motions and viewpoints. DynTet is parameterized by the coordinate-based networks which learn signed distance deformation and material texture anchoring the training data into a predefined tetrahedra grid. Leveraging Marching Tetrahedra DynTet efficiently decodes textured meshes with a consistent topology enabling fast rendering through a differentiable rasterizer and supervision via a pixel loss. To enhance training efficiency we incorporate classical 3D Morphable Models to facilitate geometry learning and define a canonical space for simplifying texture learning. These advantages are readily achievable owing to the effective geometric representation employed in DynTet. Compared with prior works DynTet demonstrates significant improvements in fidelity lip synchronization and real-time performance according to various metrics. Beyond producing stable and visually appealing synthesis videos our method also outputs the dynamic meshes which is promising to enable many emerging applications. Code is available at https://github.com/zhangzc21/DynTet. | https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_Learning_Dynamic_Tetrahedra_for_High-Quality_Talking_Head_Synthesis_CVPR_2024_paper.pdf | http://arxiv.org/abs/2402.17364 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Learning_Dynamic_Tetrahedra_for_High-Quality_Talking_Head_Synthesis_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Learning_Dynamic_Tetrahedra_for_High-Quality_Talking_Head_Synthesis_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhang_Learning_Dynamic_Tetrahedra_CVPR_2024_supplemental.zip | null |
Collaborative Learning of Anomalies with Privacy (CLAP) for Unsupervised Video Anomaly Detection: A New Baseline | Anas Al-lahham, Muhammad Zaigham Zaheer, Nurbek Tastan, Karthik Nandakumar | Unsupervised (US) video anomaly detection (VAD) in surveillance applications is gaining more popularity lately due to its practical real-world applications. Due to the extremely challenging nature of this task where learning is carried out without any annotations privacy-critical collaborative learning of US-VAD systems has not been studied yet. As surveillance videos are privacy sensitive and the availability of large-scale video data may enable better US-VAD systems collaborative learning can be highly rewarding in this setting. In this paper we propose a new baseline for anomaly detection capable of localizing anomalous events in complex surveillance scenarios in a fully unsupervised fashion without any labels on a privacy-retaining participant-based distributed training configuration. Additionally we propose three new evaluation protocols to extensively evaluate anomaly detection approaches on various scenarios of collaborations and data availability. Moreover based on these protocols we modify existing VAD datasets to extensively evaluate our approach as well as existing US SOTA methods on two large-scale datasets including UCF-Crime and XD-Violence. All proposed evaluation protocols dataset splits and codes are available here: https://github.com/AnasEmad11/CLAP. | https://openaccess.thecvf.com/content/CVPR2024/papers/Al-lahham_Collaborative_Learning_of_Anomalies_with_Privacy_CLAP_for_Unsupervised_Video_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.00847 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Al-lahham_Collaborative_Learning_of_Anomalies_with_Privacy_CLAP_for_Unsupervised_Video_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Al-lahham_Collaborative_Learning_of_Anomalies_with_Privacy_CLAP_for_Unsupervised_Video_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Al-lahham_Collaborative_Learning_of_CVPR_2024_supplemental.pdf | null
Regressor-Segmenter Mutual Prompt Learning for Crowd Counting | Mingyue Guo, Li Yuan, Zhaoyi Yan, Binghui Chen, Yaowei Wang, Qixiang Ye | Crowd counting has achieved significant progress by training regressors to predict instance positions. In heavily crowded scenarios however regressors are challenged by uncontrollable annotation variance which causes density map bias and context information inaccuracy. In this study we propose mutual prompt learning (mPrompt) which leverages a regressor and a segmenter as guidance for each other solving bias and inaccuracy caused by annotation variance while distinguishing foreground from background. Specifically mPrompt leverages point annotations to tune the segmenter and predict pseudo head masks in a way of point prompt learning. It then uses the predicted segmentation masks which serve as a spatial constraint to rectify biased point annotations as context prompt learning. mPrompt defines a way of mutual information maximization from prompt learning mitigating the impact of annotation variance while improving model accuracy. Experiments show that mPrompt significantly reduces the Mean Average Error (MAE) demonstrating the potential to be a general framework for downstream vision tasks. Code is available at https://github.com/csguomy/mPrompt. | https://openaccess.thecvf.com/content/CVPR2024/papers/Guo_Regressor-Segmenter_Mutual_Prompt_Learning_for_Crowd_Counting_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.01711 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Guo_Regressor-Segmenter_Mutual_Prompt_Learning_for_Crowd_Counting_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Guo_Regressor-Segmenter_Mutual_Prompt_Learning_for_Crowd_Counting_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Guo_Regressor-Segmenter_Mutual_Prompt_CVPR_2024_supplemental.pdf | null
Instantaneous Perception of Moving Objects in 3D | Di Liu, Bingbing Zhuang, Dimitris N. Metaxas, Manmohan Chandraker | The perception of 3D motion of surrounding traffic participants is crucial for driving safety. While existing works primarily focus on general large motions we contend that the instantaneous detection and quantification of subtle motions are equally important as they indicate the nuances in driving behavior that may be safety critical such as behaviors near a stop sign or parking positions. We delve into this under-explored task examining its unique challenges and developing our solution accompanied by a carefully designed benchmark. Specifically due to the lack of correspondences between consecutive frames of sparse Lidar point clouds static objects might appear to be moving - the so-called swimming effect. This intertwines with the true object motion thereby posing ambiguity in accurate estimation especially for subtle motion. To address this we propose to leverage local occupancy completion of object point clouds to densify the shape cue and mitigate the impact of swimming artifacts. The occupancy completion is learned in an end-to-end fashion together with the detection of moving objects and the estimation of their motion instantaneously as soon as objects start to move. Extensive experiments demonstrate superior performance compared to standard 3D motion estimation approaches particularly highlighting our method's specialized treatment of subtle motion. | https://openaccess.thecvf.com/content/CVPR2024/papers/Liu_Instantaneous_Perception_of_Moving_Objects_in_3D_CVPR_2024_paper.pdf | http://arxiv.org/abs/2405.02781 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Liu_Instantaneous_Perception_of_Moving_Objects_in_3D_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Liu_Instantaneous_Perception_of_Moving_Objects_in_3D_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Liu_Instantaneous_Perception_of_CVPR_2024_supplemental.zip | null
CORE-MPI: Consistency Object Removal with Embedding MultiPlane Image | Donggeun Yoon, Donghyeon Cho | Novel view synthesis is attractive for social media but it often contains unwanted details such as personal information that needs to be edited out for a better experience. Multiplane image (MPI) is desirable for social media because of its generality but it is complex and computationally expensive making object removal challenging. To address these challenges we propose CORE-MPI which employs embedding images to improve the consistency and accessibility of MPI object removal. CORE-MPI allows for real-time transmission and interaction with embedding images on social media facilitating object removal with a single mask. However recovering the geometric information hidden in the embedding images is a significant challenge. Therefore we propose a dual-network approach where one network focuses on color restoration and the other on inpainting the embedding image including geometric information. For the training of CORE-MPI we introduce a pseudo-reference loss aimed at proficient color recovery even in complex scenes or with large masks. Furthermore we present a disparity consistency loss to preserve the geometric consistency of the inpainted region. We demonstrate the effectiveness of CORE-MPI on RealEstate10K and UCSD datasets. | https://openaccess.thecvf.com/content/CVPR2024/papers/Yoon_CORE-MPI_Consistency_Object_Removal_with_Embedding_MultiPlane_Image_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Yoon_CORE-MPI_Consistency_Object_Removal_with_Embedding_MultiPlane_Image_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Yoon_CORE-MPI_Consistency_Object_Removal_with_Embedding_MultiPlane_Image_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yoon_CORE-MPI_Consistency_Object_CVPR_2024_supplemental.zip | null |
3D Geometry-Aware Deformable Gaussian Splatting for Dynamic View Synthesis | Zhicheng Lu, Xiang Guo, Le Hui, Tianrui Chen, Min Yang, Xiao Tang, Feng Zhu, Yuchao Dai | In this paper we propose a 3D geometry-aware deformable Gaussian Splatting method for dynamic view synthesis. Existing neural radiance fields (NeRF) based solutions learn the deformation in an implicit manner which cannot incorporate 3D scene geometry. Therefore the learned deformation is not necessarily geometrically coherent which results in unsatisfactory dynamic view synthesis and 3D dynamic reconstruction. Recently 3D Gaussian Splatting provides a new representation of the 3D scene building upon which the 3D geometry could be exploited in learning the complex 3D deformation. Specifically the scenes are represented as a collection of 3D Gaussians where each 3D Gaussian is optimized to move and rotate over time to model the deformation. To enforce the 3D scene geometry constraint during deformation we explicitly extract 3D geometry features and integrate them in learning the 3D deformation. In this way our solution achieves 3D geometry-aware deformation modeling which enables improved dynamic view synthesis and 3D dynamic reconstruction. Extensive experimental results on both synthetic and real datasets prove the superiority of our solution which achieves new state-of-the-art performance. The project is available at https://npucvr.github.io/GaGS/. | https://openaccess.thecvf.com/content/CVPR2024/papers/Lu_3D_Geometry-Aware_Deformable_Gaussian_Splatting_for_Dynamic_View_Synthesis_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.06270 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Lu_3D_Geometry-Aware_Deformable_Gaussian_Splatting_for_Dynamic_View_Synthesis_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Lu_3D_Geometry-Aware_Deformable_Gaussian_Splatting_for_Dynamic_View_Synthesis_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Lu_3D_Geometry-Aware_Deformable_CVPR_2024_supplemental.pdf | null
Person-in-WiFi 3D: End-to-End Multi-Person 3D Pose Estimation with Wi-Fi | Kangwei Yan, Fei Wang, Bo Qian, Han Ding, Jinsong Han, Xing Wei | Wi-Fi signals in contrast to cameras offer privacy protection and occlusion resilience for some practical scenarios such as smart homes elderly care and virtual reality. Recent years have seen remarkable progress in the estimation of single-person 2D pose single-person 3D pose and multi-person 2D pose. This paper takes a step forward by introducing Person-in-WiFi 3D a pioneering Wi-Fi system that accomplishes multi-person 3D pose estimation. Person-in-WiFi 3D has two main updates. Firstly it has a greater number of Wi-Fi devices to enhance the capability for capturing spatial reflections from multiple individuals. Secondly it leverages the Transformer for end-to-end estimation. Compared to its predecessor Person-in-WiFi 3D is storage-efficient and fast. We deployed a proof-of-concept system in 4mx3.5m areas and collected a dataset of over 97K frames with seven volunteers. Person-in-WiFi 3D attains 3D joint localization errors of 91.7mm (1-person) 108.1mm (2-person) and 125.3mm (3-person) comparable to cameras and millimeter-wave radars. | https://openaccess.thecvf.com/content/CVPR2024/papers/Yan_Person-in-WiFi_3D_End-to-End_Multi-Person_3D_Pose_Estimation_with_Wi-Fi_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Yan_Person-in-WiFi_3D_End-to-End_Multi-Person_3D_Pose_Estimation_with_Wi-Fi_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Yan_Person-in-WiFi_3D_End-to-End_Multi-Person_3D_Pose_Estimation_with_Wi-Fi_CVPR_2024_paper.html | CVPR 2024 | null | null |
Backpropagation-free Network for 3D Test-time Adaptation | Yanshuo Wang, Ali Cheraghian, Zeeshan Hayder, Jie Hong, Sameera Ramasinghe, Shafin Rahman, David Ahmedt-Aristizabal, Xuesong Li, Lars Petersson, Mehrtash Harandi | Real-world systems often encounter new data over time which leads to experiencing target domain shifts. Existing Test-Time Adaptation (TTA) methods tend to apply computationally heavy and memory-intensive backpropagation-based approaches to handle this. Here we propose a novel method that uses a backpropagation-free approach for TTA for the specific case of 3D data. Our model uses a two-stream architecture to maintain knowledge about the source domain as well as complementary target-domain-specific information. The backpropagation-free property of our model helps address the well-known forgetting problem and mitigates the error accumulation issue. The proposed method also eliminates the need for the usually noisy process of pseudo-labeling and reliance on costly self-supervised training. Moreover our method leverages subspace learning effectively reducing the distribution variance between the two domains. Furthermore the source-domain-specific and the target-domain-specific streams are aligned using a novel entropy-based adaptive fusion strategy. Extensive experiments on popular benchmarks demonstrate the effectiveness of our method. The code will be available at https://github.com/abie-e/BFTT3D. | https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_Backpropagation-free_Network_for_3D_Test-time_Adaptation_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.18442 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Wang_Backpropagation-free_Network_for_3D_Test-time_Adaptation_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Wang_Backpropagation-free_Network_for_3D_Test-time_Adaptation_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wang_Backpropagation-free_Network_for_CVPR_2024_supplemental.pdf | null |
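The entropy-based adaptive fusion mentioned in the abstract above can be sketched as weighting the two prediction streams by their negative prediction entropies, so the more confident stream dominates. The snippet is a generic illustration under that assumption, not the paper's exact fusion rule.

```python
# Entropy-weighted fusion of two prediction streams (illustrative sketch).
import torch

def entropy(probs, eps=1e-8):
    return -(probs * (probs + eps).log()).sum(dim=-1)

def entropy_fusion(logits_source, logits_target, temperature=1.0):
    p_s, p_t = logits_source.softmax(-1), logits_target.softmax(-1)
    # Lower entropy (higher confidence) -> larger fusion weight.
    w = torch.stack([-entropy(p_s), -entropy(p_t)], dim=-1) / temperature
    w = w.softmax(dim=-1)                               # (B, 2) per-sample weights
    return w[..., 0:1] * p_s + w[..., 1:2] * p_t

logits_source = torch.randn(4, 10) * 3.0   # confident source-domain stream
logits_target = torch.randn(4, 10) * 0.3   # uncertain target-specific stream
print(entropy_fusion(logits_source, logits_target).argmax(-1))
```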
Resource-Efficient Transformer Pruning for Finetuning of Large Models | Fatih Ilhan, Gong Su, Selim Furkan Tekin, Tiansheng Huang, Sihao Hu, Ling Liu | With the recent advances in vision transformers and large language models (LLMs), finetuning costly large models on downstream learning tasks poses significant challenges under limited computational resources. This paper presents a REsource and ComputAtion-efficient Pruning framework (RECAP) for the finetuning of transformer-based large models. RECAP by design bridges the gap between efficiency and performance through an iterative process that cycles between pruning, finetuning, and updating stages to explore different chunks of the given large-scale model. At each iteration, we first prune the model with Taylor-approximation-based importance estimation and then update only a subset of the pruned model weights based on the Fisher-information criterion. In this way, RECAP achieves two synergistic yet conflicting goals: reducing the GPU memory footprint while maintaining model performance, unlike most existing pruning methods that require the model to be finetuned beforehand for better preservation of model performance. We perform extensive experiments with a wide range of large transformer-based architectures on various computer vision and natural language understanding tasks. Compared to recent pruning techniques, we demonstrate that RECAP offers significant improvements in GPU memory efficiency, capable of reducing the footprint by up to 65%. | https://openaccess.thecvf.com/content/CVPR2024/papers/Ilhan_Resource-Efficient_Transformer_Pruning_for_Finetuning_of_Large_Models_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Ilhan_Resource-Efficient_Transformer_Pruning_for_Finetuning_of_Large_Models_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Ilhan_Resource-Efficient_Transformer_Pruning_for_Finetuning_of_Large_Models_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ilhan_Resource-Efficient_Transformer_Pruning_CVPR_2024_supplemental.pdf | null |
ParamISP: Learned Forward and Inverse ISPs using Camera Parameters | Woohyeok Kim, Geonu Kim, Junyong Lee, Seungyong Lee, Seung-Hwan Baek, Sunghyun Cho | RAW images are rarely shared, mainly due to their excessive data size compared to their sRGB counterparts obtained by camera ISPs. Learning the forward and inverse processes of camera ISPs has recently been demonstrated, enabling physically meaningful RAW-level image processing on input sRGB images. However, existing learning-based ISP methods fail to handle the large variations in the ISP processes with respect to camera parameters such as ISO and exposure time, and have limitations when used for various applications. In this paper, we propose ParamISP, a learning-based method for forward and inverse conversion between sRGB and RAW images that adopts a novel neural-network module, dubbed ParamNet, to utilize camera parameters. Given the camera parameters provided in the EXIF data, ParamNet converts them into a feature vector to control the ISP networks. Extensive experiments demonstrate that ParamISP achieves superior RAW and sRGB reconstruction results compared to previous methods and can be effectively used for a variety of applications such as deblurring dataset synthesis, raw deblurring, HDR reconstruction, and camera-to-camera transfer. | https://openaccess.thecvf.com/content/CVPR2024/papers/Kim_ParamISP_Learned_Forward_and_Inverse_ISPs_using_Camera_Parameters_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.13313 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Kim_ParamISP_Learned_Forward_and_Inverse_ISPs_using_Camera_Parameters_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Kim_ParamISP_Learned_Forward_and_Inverse_ISPs_using_Camera_Parameters_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Kim_ParamISP_Learned_Forward_CVPR_2024_supplemental.pdf | null |
Perturbing Attention Gives You More Bang for the Buck: Subtle Imaging Perturbations That Efficiently Fool Customized Diffusion Models | Jingyao Xu, Yuetong Lu, Yandong Li, Siyang Lu, Dongdong Wang, Xiang Wei | Diffusion models (DMs) usher in a new era of generative modeling and offer more opportunities for efficiently generating high-quality and realistic data samples. However, their widespread use has also brought forth new challenges in model security, which motivates the creation of more effective adversarial attackers on DMs to understand their vulnerability. We propose CAAT, a simple but generic and efficient approach that does not require costly training to effectively fool latent diffusion models (LDMs). The approach is based on the observation that cross-attention layers exhibit higher sensitivity to gradient change, allowing for leveraging subtle perturbations on published images to significantly corrupt the generated images. We show that a subtle perturbation on an image can significantly impact the cross-attention layers, thus changing the mapping between text and image during the fine-tuning of customized diffusion models. Extensive experiments demonstrate that CAAT is compatible with diverse diffusion models and outperforms baseline attack methods in a more effective (more noise) and efficient (twice as fast as Anti-DreamBooth and Mist) manner. | https://openaccess.thecvf.com/content/CVPR2024/papers/Xu_Perturbing_Attention_Gives_You_More_Bang_for_the_Buck_Subtle_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.15081 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Xu_Perturbing_Attention_Gives_You_More_Bang_for_the_Buck_Subtle_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Xu_Perturbing_Attention_Gives_You_More_Bang_for_the_Buck_Subtle_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Xu_Perturbing_Attention_Gives_CVPR_2024_supplemental.pdf | null |
Fairy: Fast Parallelized Instruction-Guided Video-to-Video Synthesis | Bichen Wu, Ching-Yao Chuang, Xiaoyan Wang, Yichen Jia, Kapil Krishnakumar, Tong Xiao, Feng Liang, Licheng Yu, Peter Vajda | In this paper we introduce Fairy, a minimalist yet robust adaptation of image-editing diffusion models, enhancing them for video editing applications. Our approach centers on the concept of anchor-based cross-frame attention, a mechanism that implicitly propagates diffusion features across frames, ensuring superior temporal coherence and high-fidelity synthesis. Fairy not only addresses the limitations of previous models, including memory and processing speed, but also improves temporal consistency through a unique data augmentation strategy. This strategy renders the model equivariant to affine transformations in both source and target images. Remarkably efficient, Fairy generates 120-frame 512x384 videos (4-second duration at 30 FPS) in just 14 seconds, outpacing prior works by at least 44x. A comprehensive user study involving 1000 generated samples confirms that our approach delivers superior quality, decisively outperforming established methods. | https://openaccess.thecvf.com/content/CVPR2024/papers/Wu_Fairy_Fast_Parallelized_Instruction-Guided_Video-to-Video_Synthesis_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.13834 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Wu_Fairy_Fast_Parallelized_Instruction-Guided_Video-to-Video_Synthesis_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Wu_Fairy_Fast_Parallelized_Instruction-Guided_Video-to-Video_Synthesis_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wu_Fairy_Fast_Parallelized_CVPR_2024_supplemental.pdf | null |
SmartEdit: Exploring Complex Instruction-based Image Editing with Multimodal Large Language Models | Yuzhou Huang, Liangbin Xie, Xintao Wang, Ziyang Yuan, Xiaodong Cun, Yixiao Ge, Jiantao Zhou, Chao Dong, Rui Huang, Ruimao Zhang, Ying Shan | Current instruction-based image editing methods, such as InstructPix2Pix, often fail to produce satisfactory results in complex scenarios due to their dependence on the simple CLIP text encoder in diffusion models. To rectify this, this paper introduces SmartEdit, a novel approach to instruction-based image editing that leverages Multimodal Large Language Models (MLLMs) to enhance its understanding and reasoning capabilities. However, direct integration of these elements still faces challenges in situations requiring complex reasoning. To mitigate this, we propose a Bidirectional Interaction Module (BIM) that enables comprehensive bidirectional information interactions between the input image and the MLLM output. During training, we initially incorporate perception data to boost the perception and understanding capabilities of diffusion models. Subsequently, we demonstrate that a small amount of complex instruction editing data can effectively stimulate SmartEdit's editing capabilities for more complex instructions. We further construct a new evaluation dataset, Reason-Edit, specifically tailored for complex instruction-based image editing. Both quantitative and qualitative results on this evaluation dataset indicate that our SmartEdit surpasses previous methods, paving the way for the practical application of complex instruction-based image editing. | https://openaccess.thecvf.com/content/CVPR2024/papers/Huang_SmartEdit_Exploring_Complex_Instruction-based_Image_Editing_with_Multimodal_Large_Language_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.06739 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Huang_SmartEdit_Exploring_Complex_Instruction-based_Image_Editing_with_Multimodal_Large_Language_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Huang_SmartEdit_Exploring_Complex_Instruction-based_Image_Editing_with_Multimodal_Large_Language_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Huang_SmartEdit_Exploring_Complex_CVPR_2024_supplemental.pdf | null |
SeNM-VAE: Semi-Supervised Noise Modeling with Hierarchical Variational Autoencoder | Dihan Zheng, Yihang Zou, Xiaowen Zhang, Chenglong Bao | The data bottleneck has emerged as a fundamental challenge in learning-based image restoration methods. Researchers have attempted to generate synthesized training data using paired or unpaired samples to address this challenge. This study proposes SeNM-VAE, a semi-supervised noise modeling method that leverages both paired and unpaired datasets to generate realistic degraded data. Our approach is based on modeling the conditional distribution of degraded and clean images with a specially designed graphical model. Under the variational inference framework, we develop an objective function for handling both paired and unpaired data. We employ our method to generate paired training samples for real-world image denoising and super-resolution tasks. Our approach produces higher-quality synthetic degraded images than other unpaired and paired noise modeling methods. Furthermore, our approach demonstrates remarkable performance in downstream image restoration tasks, even with limited paired data. With more paired data, our method achieves the best performance on the SIDD dataset. | https://openaccess.thecvf.com/content/CVPR2024/papers/Zheng_SeNM-VAE_Semi-Supervised_Noise_Modeling_with_Hierarchical_Variational_Autoencoder_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Zheng_SeNM-VAE_Semi-Supervised_Noise_Modeling_with_Hierarchical_Variational_Autoencoder_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Zheng_SeNM-VAE_Semi-Supervised_Noise_Modeling_with_Hierarchical_Variational_Autoencoder_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zheng_SeNM-VAE_Semi-Supervised_Noise_CVPR_2024_supplemental.pdf | null |
Multimodal Industrial Anomaly Detection by Crossmodal Feature Mapping | Alex Costanzino, Pierluigi Zama Ramirez, Giuseppe Lisanti, Luigi Di Stefano | Recent advancements have shown the potential of leveraging both point clouds and images to localize anomalies. Nevertheless, their applicability in industrial manufacturing is often constrained by significant drawbacks, such as the use of memory banks, which leads to a substantial increase in memory footprint and inference time. We propose a novel light and fast framework that learns to map features from one modality to the other on nominal samples and detect anomalies by pinpointing inconsistencies between observed and mapped features. Extensive experiments show that our approach achieves state-of-the-art detection and segmentation performance in both the standard and few-shot settings on the MVTec 3D-AD dataset, while achieving faster inference and occupying less memory than previous multimodal AD methods. Furthermore, we propose a layer-pruning technique to improve memory and time efficiency with a marginal sacrifice in performance. | https://openaccess.thecvf.com/content/CVPR2024/papers/Costanzino_Multimodal_Industrial_Anomaly_Detection_by_Crossmodal_Feature_Mapping_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.04521 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Costanzino_Multimodal_Industrial_Anomaly_Detection_by_Crossmodal_Feature_Mapping_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Costanzino_Multimodal_Industrial_Anomaly_Detection_by_Crossmodal_Feature_Mapping_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Costanzino_Multimodal_Industrial_Anomaly_CVPR_2024_supplemental.pdf | null |
FFF: Fixing Flawed Foundations in Contrastive Pre-Training Results in Very Strong Vision-Language Models | Adrian Bulat, Yassine Ouali, Georgios Tzimiropoulos | Despite noise and caption quality having been acknowledged as important factors impacting vision-language contrastive pre-training, in this paper we show that the full potential of improving the training process by addressing such issues is yet to be realized. Specifically, we first study and analyze two issues affecting training: incorrect assignment of negative pairs, and low caption quality and diversity. Then, we devise effective solutions for addressing both problems, which essentially require training with multiple true positive pairs. Finally, we propose training with sigmoid loss to address such a requirement. We show very large gains over the current state-of-the-art for both image recognition (+6% on average over 11 datasets) and image retrieval (+19% on Flickr30k and +15% on MSCOCO). | https://openaccess.thecvf.com/content/CVPR2024/papers/Bulat_FFF_Fixing_Flawed_Foundations_in_Contrastive_Pre-Training_Results_in_Very_CVPR_2024_paper.pdf | http://arxiv.org/abs/2405.10286 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Bulat_FFF_Fixing_Flawed_Foundations_in_Contrastive_Pre-Training_Results_in_Very_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Bulat_FFF_Fixing_Flawed_Foundations_in_Contrastive_Pre-Training_Results_in_Very_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Bulat_FFF_Fixing_Flawed_CVPR_2024_supplemental.pdf | null |
Anchor-based Robust Finetuning of Vision-Language Models | null | null | null | null | null | https://openaccess.thecvf.com/content/CVPR2024/html/Han_Anchor-based_Robust_Finetuning_of_Vision-Language_Models_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Han_Anchor-based_Robust_Finetuning_of_Vision-Language_Models_CVPR_2024_paper.html | CVPR 2024 | null | null |
Low-power Continuous Remote Behavioral Localization with Event Cameras | Friedhelm Hamann, Suman Ghosh, Ignacio Juarez Martinez, Tom Hart, Alex Kacelnik, Guillermo Gallego | Researchers in natural science need reliable methods for quantifying animal behavior. Recently, numerous computer vision methods have emerged to automate the process. However, observing wild species at remote locations remains a challenging task due to difficult lighting conditions and constraints on power supply and data storage. Event cameras offer unique advantages for battery-dependent remote monitoring due to their low power consumption and high dynamic range capabilities. We use this novel sensor to quantify a Chinstrap penguin behavior called the ecstatic display. We formulate the problem as a temporal action detection task, determining the start and end times of the behavior. For this purpose, we recorded a colony of breeding penguins in Antarctica for several weeks and labeled event data on 16 nests. The developed method consists of a generator of candidate time intervals (proposals) and a classifier of the actions within them. The experiments show that the event cameras' natural response to motion is effective for continuous behavior monitoring and detection, reaching a mean average precision (mAP) of 58% (which increases to 63% in good weather conditions). The results also demonstrate robustness against the various lighting conditions contained in the challenging dataset. The low-power capabilities of the event camera allow it to record significantly longer than a conventional camera. This work pioneers the use of event cameras for remote wildlife observation, opening new interdisciplinary opportunities. https://tub-rip.github.io/eventpenguins/ | https://openaccess.thecvf.com/content/CVPR2024/papers/Hamann_Low-power_Continuous_Remote_Behavioral_Localization_with_Event_Cameras_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.03799 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Hamann_Low-power_Continuous_Remote_Behavioral_Localization_with_Event_Cameras_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Hamann_Low-power_Continuous_Remote_Behavioral_Localization_with_Event_Cameras_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Hamann_Low-power_Continuous_Remote_CVPR_2024_supplemental.pdf | null |
SportsHHI: A Dataset for Human-Human Interaction Detection in Sports Videos | Tao Wu, Runyu He, Gangshan Wu, Limin Wang | Video-based visual relation detection tasks, such as video scene graph generation, play important roles in fine-grained video understanding. However, current video visual relation detection datasets have two main limitations that hinder the progress of research in this area. First, they do not explore complex human-human interactions in multi-person scenarios. Second, the relation types of existing datasets have relatively low-level semantics and can often be recognized by appearance or simple prior information, without the need for detailed spatio-temporal context reasoning. Nevertheless, comprehending high-level interactions between humans is crucial for understanding complex multi-person videos, such as sports and surveillance videos. To address this issue, we propose a new video visual relation detection task, video human-human interaction detection, and build a dataset named SportsHHI for it. SportsHHI contains 34 high-level interaction classes from basketball and volleyball sports. 118075 human bounding boxes and 50649 interaction instances are annotated on 11398 keyframes. To benchmark this task, we propose a two-stage baseline method and conduct extensive experiments to reveal the key factors for a successful human-human interaction detector. We hope that SportsHHI can stimulate research on human interaction understanding in videos and promote the development of spatio-temporal context modeling techniques in video visual relation detection. | https://openaccess.thecvf.com/content/CVPR2024/papers/Wu_SportsHHI_A_Dataset_for_Human-Human_Interaction_Detection_in_Sports_Videos_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.04565 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Wu_SportsHHI_A_Dataset_for_Human-Human_Interaction_Detection_in_Sports_Videos_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Wu_SportsHHI_A_Dataset_for_Human-Human_Interaction_Detection_in_Sports_Videos_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wu_SportsHHI_A_Dataset_CVPR_2024_supplemental.pdf | null |
DiSR-NeRF: Diffusion-Guided View-Consistent Super-Resolution NeRF | Jie Long Lee, Chen Li, Gim Hee Lee | We present DiSR-NeRF, a diffusion-guided framework for view-consistent super-resolution (SR) NeRF. Unlike prior works, we circumvent the requirement for high-resolution (HR) reference images by leveraging existing powerful 2D super-resolution models. Nonetheless, independent SR 2D images are often inconsistent across different views. We thus propose Iterative 3D Synchronization (I3DS) to mitigate the inconsistency problem via the inherent multi-view consistency property of NeRF. Specifically, our I3DS alternates between upscaling low-resolution (LR) rendered images with diffusion models and updating the underlying 3D representation with standard NeRF training. We further introduce Renoised Score Distillation (RSD), a novel score-distillation objective for 2D image super-resolution. Our RSD combines features from ancestral sampling and Score Distillation Sampling (SDS) to generate sharp images that are also LR-consistent. Qualitative and quantitative results on both synthetic and real-world datasets demonstrate that our DiSR-NeRF can achieve better results on NeRF super-resolution compared with existing works. Code and video results are available at the project website. | https://openaccess.thecvf.com/content/CVPR2024/papers/Lee_DiSR-NeRF_Diffusion-Guided_View-Consistent_Super-Resolution_NeRF_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Lee_DiSR-NeRF_Diffusion-Guided_View-Consistent_Super-Resolution_NeRF_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Lee_DiSR-NeRF_Diffusion-Guided_View-Consistent_Super-Resolution_NeRF_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Lee_DiSR-NeRF_Diffusion-Guided_View-Consistent_CVPR_2024_supplemental.pdf | null |
Dispersed Structured Light for Hyperspectral 3D Imaging | Suhyun Shin, Seokjun Choi, Felix Heide, Seung-Hwan Baek | Hyperspectral 3D imaging aims to acquire both depth and spectral information of a scene. However, existing methods are either prohibitively expensive and bulky or compromise on spectral and depth accuracy. In this paper, we present Dispersed Structured Light (DSL), a cost-effective and compact method for accurate hyperspectral 3D imaging. DSL modifies a traditional projector-camera system by placing a sub-millimeter-thick diffraction grating film in front of the projector. This configuration enables dispersing structured light based on light wavelength. To utilize the dispersed structured light, we devise a model for dispersive projection image formation and a per-pixel hyperspectral 3D reconstruction method. We validate DSL by instantiating a compact experimental prototype. DSL achieves a spectral accuracy of 18.8nm full-width half-maximum (FWHM) and a depth error of 1mm, outperforming prior work on practical hyperspectral 3D imaging. DSL promises accurate and practical hyperspectral 3D imaging for diverse application domains, including computer vision and graphics, cultural heritage, geology, and biology. | https://openaccess.thecvf.com/content/CVPR2024/papers/Shin_Dispersed_Structured_Light_for_Hyperspectral_3D_Imaging_CVPR_2024_paper.pdf | http://arxiv.org/abs/2311.18287 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Shin_Dispersed_Structured_Light_for_Hyperspectral_3D_Imaging_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Shin_Dispersed_Structured_Light_for_Hyperspectral_3D_Imaging_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Shin_Dispersed_Structured_Light_CVPR_2024_supplemental.pdf | null |
CrowdDiff: Multi-hypothesis Crowd Density Estimation using Diffusion Models | Yasiru Ranasinghe, Nithin Gopalakrishnan Nair, Wele Gedara Chaminda Bandara, Vishal M. Patel | Crowd counting is a fundamental problem in crowd analysis, which is typically accomplished by estimating a crowd density map and summing over the density values. However, this approach suffers from background noise accumulation and loss of density due to the use of broad Gaussian kernels to create the ground truth density maps. This issue can be overcome by narrowing the Gaussian kernel. However, existing approaches perform poorly when trained with ground truth density maps with broad kernels. To deal with this limitation, we propose using conditional diffusion models to predict density maps, as diffusion models show high fidelity to training data during generation. With that, we present CrowdDiff, which generates the crowd density map as a reverse diffusion process. Furthermore, as the intermediate time steps of the diffusion process are noisy, we incorporate a regression branch for direct crowd estimation only during training to improve the feature learning. In addition, owing to the stochastic nature of the diffusion model, we produce multiple density maps to improve the counting performance, contrary to existing crowd counting pipelines. We conduct extensive experiments on publicly available datasets to validate the effectiveness of our method. CrowdDiff outperforms existing state-of-the-art crowd counting methods on several public crowd analysis benchmarks with significant improvements. The CrowdDiff project is available at: https://dylran.github.io/crowddiff.github.io. | https://openaccess.thecvf.com/content/CVPR2024/papers/Ranasinghe_CrowdDiff_Multi-hypothesis_Crowd_Density_Estimation_using_Diffusion_Models_CVPR_2024_paper.pdf | http://arxiv.org/abs/2303.12790 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Ranasinghe_CrowdDiff_Multi-hypothesis_Crowd_Density_Estimation_using_Diffusion_Models_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Ranasinghe_CrowdDiff_Multi-hypothesis_Crowd_Density_Estimation_using_Diffusion_Models_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ranasinghe_CrowdDiff_Multi-hypothesis_Crowd_CVPR_2024_supplemental.pdf | null |
It's All About Your Sketch: Democratising Sketch Control in Diffusion Models | Subhadeep Koley, Ayan Kumar Bhunia, Deeptanshu Sekhri, Aneeshan Sain, Pinaki Nath Chowdhury, Tao Xiang, Yi-Zhe Song | This paper unravels the potential of sketches for diffusion models, addressing the deceptive promise of direct sketch control in generative AI. We importantly democratise the process, enabling amateur sketches to generate precise images, living up to the commitment of "what you sketch is what you get". A pilot study underscores the necessity, revealing that deformities in existing models stem from spatial conditioning. To rectify this, we propose an abstraction-aware framework utilising a sketch adapter, adaptive time-step sampling, and discriminative guidance from a pre-trained fine-grained sketch-based image retrieval model, working synergistically to reinforce fine-grained sketch-photo association. Our approach operates seamlessly during inference without the need for textual prompts; a simple rough sketch akin to what you and I can create suffices! We welcome everyone to examine results presented in the paper and its supplementary. Contributions include democratising sketch control, introducing an abstraction-aware framework, and leveraging discriminative guidance, validated through extensive experiments. | https://openaccess.thecvf.com/content/CVPR2024/papers/Koley_Its_All_About_Your_Sketch_Democratising_Sketch_Control_in_Diffusion_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Koley_Its_All_About_Your_Sketch_Democratising_Sketch_Control_in_Diffusion_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Koley_Its_All_About_Your_Sketch_Democratising_Sketch_Control_in_Diffusion_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Koley_Its_All_About_CVPR_2024_supplemental.pdf | null |
GLID: Pre-training a Generalist Encoder-Decoder Vision Model | Jihao Liu, Jinliang Zheng, Yu Liu, Hongsheng Li | This paper proposes a GeneraLIst encoder-Decoder (GLID) pre-training method for better handling various downstream computer vision tasks. While self-supervised pre-training approaches, e.g. Masked Autoencoder, have shown success in transfer learning, task-specific sub-architectures still need to be appended for different downstream tasks, which cannot enjoy the benefits of large-scale pre-training. GLID overcomes this challenge by allowing the pre-trained generalist encoder-decoder to be fine-tuned on various vision tasks with minimal task-specific architecture modifications. In the GLID training scheme, the pre-training pretext task and other downstream tasks are modeled as "query-to-answer" problems. We pre-train a task-agnostic encoder-decoder with query-mask pairs. During fine-tuning, GLID maintains the pre-trained encoder-decoder and queries, only replacing the topmost linear transformation layer with task-specific linear heads. This minimizes the pretrain-finetune architecture inconsistency and enables the pre-trained model to better adapt to downstream tasks. GLID achieves competitive performance on various vision tasks, including object detection, image segmentation, pose estimation, and depth estimation, outperforming or matching specialist models such as Mask2Former, DETR, ViTPose, and BinsFormer. | https://openaccess.thecvf.com/content/CVPR2024/papers/Liu_GLID_Pre-training_a_Generalist_Encoder-Decoder_Vision_Model_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.07603 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Liu_GLID_Pre-training_a_Generalist_Encoder-Decoder_Vision_Model_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Liu_GLID_Pre-training_a_Generalist_Encoder-Decoder_Vision_Model_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Liu_GLID_Pre-training_a_CVPR_2024_supplemental.pdf | null |
Diffusion-FOF: Single-View Clothed Human Reconstruction via Diffusion-Based Fourier Occupancy Field | Yuanzhen Li, Fei Luo, Chunxia Xiao | Reconstructing a clothed human from a single-view image involves several challenging issues, including flexibly representing various body shapes and poses, estimating complete 3D geometry and consistent texture, and achieving more fine-grained details. To address them, we propose a new diffusion-based Fourier occupancy field method to improve the human representation ability and the geometry generation ability. First, we estimate the back-view image from the given reference image by incorporating a style consistency constraint. Then, we extract multi-scale features of the two images as conditions and design a diffusion model to generate the Fourier occupancy field in the wavelet domain. We refine the initial estimated Fourier occupancy field with image features as conditions to improve the geometric accuracy. Finally, the reference and estimated back-view images are mapped onto the human model, creating a textured clothed human model. Substantial experiments are conducted, and the experimental results show that our method outperforms the state-of-the-art methods in geometry and texture reconstruction performance. | https://openaccess.thecvf.com/content/CVPR2024/papers/Li_Diffusion-FOF_Single-View_Clothed_Human_Reconstruction_via_Diffusion-Based_Fourier_Occupancy_Field_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Li_Diffusion-FOF_Single-View_Clothed_Human_Reconstruction_via_Diffusion-Based_Fourier_Occupancy_Field_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Li_Diffusion-FOF_Single-View_Clothed_Human_Reconstruction_via_Diffusion-Based_Fourier_Occupancy_Field_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_Diffusion-FOF_Single-View_Clothed_CVPR_2024_supplemental.pdf | null |