Dataset fields (all strings): title, authors, abstract, pdf, arXiv, bibtex, url, detail_url, tags, supp. Missing values appear as null.
Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks
Bin Xiao, Haiping Wu, Weijian Xu, Xiyang Dai, Houdong Hu, Yumao Lu, Michael Zeng, Ce Liu, Lu Yuan
We introduce Florence-2, a novel vision foundation model with a unified prompt-based representation for various computer vision and vision-language tasks. While existing large vision models excel in transfer learning, they struggle to perform diverse tasks with simple instructions, a capability that implies handling the complexity of various spatial hierarchies and semantic granularities. Florence-2 was designed to take text prompts as task instructions and generate desirable results in text form, whether it be captioning, object detection, grounding, or segmentation. This multi-task learning setup demands large-scale, high-quality annotated data. To this end, we co-developed FLD-5B, which consists of 5.4 billion comprehensive visual annotations on 126 million images, using an iterative strategy of automated image annotation and model refinement. We adopted a sequence-to-sequence structure to train Florence-2 to perform versatile and comprehensive vision tasks. Extensive evaluations on numerous tasks demonstrated Florence-2 to be a strong vision foundation model contender with unprecedented zero-shot and fine-tuning capabilities.
https://openaccess.thecvf.com/content/CVPR2024/papers/Xiao_Florence-2_Advancing_a_Unified_Representation_for_a_Variety_of_Vision_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Xiao_Florence-2_Advancing_a_Unified_Representation_for_a_Variety_of_Vision_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Xiao_Florence-2_Advancing_a_Unified_Representation_for_a_Variety_of_Vision_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Xiao_Florence-2_Advancing_a_CVPR_2024_supplemental.pdf
null
NeRF On-the-go: Exploiting Uncertainty for Distractor-free NeRFs in the Wild
Weining Ren, Zihan Zhu, Boyang Sun, Jiaqi Chen, Marc Pollefeys, Songyou Peng
Neural Radiance Fields (NeRFs) have shown remarkable success in synthesizing photorealistic views from multi-view images of static scenes, but face challenges in dynamic, real-world environments with distractors like moving objects, shadows, and lighting changes. Existing methods manage controlled environments and low occlusion ratios but fall short in render quality, especially under high-occlusion scenarios. In this paper, we introduce NeRF On-the-go, a simple yet effective approach that enables the robust synthesis of novel views in complex, in-the-wild scenes from only casually captured image sequences. Delving into uncertainty, our method not only efficiently eliminates distractors, even when they are predominant in captures, but also achieves a notably faster convergence speed. Through comprehensive experiments on various scenes, our method demonstrates a significant improvement over state-of-the-art techniques. This advancement opens new avenues for NeRF in diverse and dynamic real-world applications.
https://openaccess.thecvf.com/content/CVPR2024/papers/Ren_NeRF_On-the-go_Exploiting_Uncertainty_for_Distractor-free_NeRFs_in_the_Wild_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Ren_NeRF_On-the-go_Exploiting_Uncertainty_for_Distractor-free_NeRFs_in_the_Wild_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Ren_NeRF_On-the-go_Exploiting_Uncertainty_for_Distractor-free_NeRFs_in_the_Wild_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ren_NeRF_On-the-go_Exploiting_CVPR_2024_supplemental.pdf
null
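The NeRF On-the-go abstract above attributes distractor removal to predicted uncertainty but does not spell out the loss. Below is a minimal sketch of one common formulation, an uncertainty-weighted photometric loss, with illustrative tensor names; it is not the paper's exact objective.

```python
import torch

def uncertainty_weighted_rgb_loss(pred_rgb, gt_rgb, log_var, reg_weight=0.01):
    """Downweight rays whose predicted uncertainty (log-variance) is high,
    so distractor pixels contribute less to the photometric loss.
    The regularizer keeps the network from declaring everything uncertain.
    Shapes: pred_rgb, gt_rgb -> (N, 3); log_var -> (N, 1) or (N,)."""
    log_var = log_var.view(-1, 1)
    sq_err = (pred_rgb - gt_rgb) ** 2                 # (N, 3)
    weighted = torch.exp(-log_var) * sq_err           # low weight where uncertainty is high
    return weighted.mean() + reg_weight * log_var.mean()

# Toy usage with random tensors standing in for one batch of rays.
pred = torch.rand(1024, 3, requires_grad=True)
gt = torch.rand(1024, 3)
logv = torch.zeros(1024, 1, requires_grad=True)
loss = uncertainty_weighted_rgb_loss(pred, gt, logv)
loss.backward()
```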
3D Human Pose Perception from Egocentric Stereo Videos
Hiroyasu Akada, Jian Wang, Vladislav Golyanik, Christian Theobalt
While head-mounted devices are becoming more compact, they provide egocentric views with significant self-occlusions of the device user. Hence, existing methods often fail to accurately estimate complex 3D poses from egocentric views. In this work, we propose a new transformer-based framework to improve egocentric stereo 3D human pose estimation, which leverages the scene information and temporal context of egocentric stereo videos. Specifically, we utilize 1) depth features from our 3D scene reconstruction module with uniformly sampled windows of egocentric stereo frames and 2) human joint queries enhanced by temporal features of the video inputs. Our method is able to accurately estimate human poses even in challenging scenarios, such as crouching and sitting. Furthermore, we introduce two new benchmark datasets, i.e., UnrealEgo2 and UnrealEgo-RW (RealWorld). UnrealEgo2 is a large-scale in-the-wild dataset captured in synthetic 3D scenes. UnrealEgo-RW is a real-world dataset captured with our newly developed device. The proposed datasets offer a much larger number of egocentric stereo views with a wider variety of human motions than the existing datasets, allowing comprehensive evaluation of existing and upcoming methods. Our extensive experiments show that the proposed approach significantly outperforms previous methods. UnrealEgo2, UnrealEgo-RW, and trained models are available on our project page and Benchmark Challenge.
https://openaccess.thecvf.com/content/CVPR2024/papers/Akada_3D_Human_Pose_Perception_from_Egocentric_Stereo_Videos_CVPR_2024_paper.pdf
http://arxiv.org/abs/2401.00889
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Akada_3D_Human_Pose_Perception_from_Egocentric_Stereo_Videos_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Akada_3D_Human_Pose_Perception_from_Egocentric_Stereo_Videos_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Akada_3D_Human_Pose_CVPR_2024_supplemental.pdf
null
Grid Diffusion Models for Text-to-Video Generation
Taegyeong Lee, Soyeong Kwon, Taehwan Kim
Recent advances in diffusion models have significantly improved text-to-image generation. However, generating videos from text is a more challenging task than generating images from text, due to the much larger dataset and higher computational cost required. Most existing video generation methods use either a 3D U-Net architecture that considers the temporal dimension or autoregressive generation. These methods require large datasets and are limited in terms of computational costs compared to text-to-image generation. To tackle these challenges, we propose a simple but effective novel grid diffusion for text-to-video generation without a temporal dimension in the architecture or a large text-video paired dataset. We can generate a high-quality video using a fixed amount of GPU memory regardless of the number of frames by representing the video as a grid image. Additionally, since our method reduces the dimensions of the video to the dimensions of an image, various image-based methods can be applied to videos, such as text-guided video manipulation from image manipulation. Our proposed method outperforms the existing methods in both quantitative and qualitative evaluations, demonstrating the suitability of our model for real-world video generation.
https://openaccess.thecvf.com/content/CVPR2024/papers/Lee_Grid_Diffusion_Models_for_Text-to-Video_Generation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.00234
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Lee_Grid_Diffusion_Models_for_Text-to-Video_Generation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Lee_Grid_Diffusion_Models_for_Text-to-Video_Generation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Lee_Grid_Diffusion_Models_CVPR_2024_supplemental.zip
null
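The core trick in the Grid Diffusion abstract, treating a video as a single grid image so that image-space machinery applies unchanged, can be illustrated with a lossless frames-to-grid round trip; the 4x4 layout and frame count below are arbitrary choices for the example.

```python
import numpy as np

def frames_to_grid(frames, rows, cols):
    """Tile rows*cols frames of shape (H, W, C) into one grid image."""
    t, h, w, c = frames.shape
    assert t == rows * cols, "frame count must fill the grid exactly"
    grid = frames.reshape(rows, cols, h, w, c)
    return grid.transpose(0, 2, 1, 3, 4).reshape(rows * h, cols * w, c)

def grid_to_frames(grid, rows, cols):
    """Invert frames_to_grid: split a grid image back into frames."""
    gh, gw, c = grid.shape
    h, w = gh // rows, gw // cols
    frames = grid.reshape(rows, h, cols, w, c).transpose(0, 2, 1, 3, 4)
    return frames.reshape(rows * cols, h, w, c)

video = np.random.randint(0, 256, (16, 64, 64, 3), dtype=np.uint8)  # 16 frames
grid = frames_to_grid(video, rows=4, cols=4)       # one 256x256 "image"
assert np.array_equal(grid_to_frames(grid, 4, 4), video)
```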
Boosting Object Detection with Zero-Shot Day-Night Domain Adaptation
Zhipeng Du, Miaojing Shi, Jiankang Deng
Detecting objects in low-light scenarios presents a persistent challenge, as detectors trained on well-lit data exhibit significant performance degradation on low-light data due to low visibility. Previous methods mitigate this issue by exploring image enhancement or object detection techniques with real low-light image datasets. However, the progress is impeded by the inherent difficulties of collecting and annotating low-light images. To address this challenge, we propose to boost low-light object detection with zero-shot day-night domain adaptation, which aims to generalize a detector from well-lit scenarios to low-light ones without requiring real low-light data. Revisiting Retinex theory in low-level vision, we first design a reflectance representation learning module to learn Retinex-based illumination invariance in images, with a carefully designed illumination invariance reinforcement strategy. Next, an interchange-redecomposition-coherence procedure is introduced to improve over the vanilla Retinex image decomposition process by performing two sequential image decompositions and introducing a redecomposition cohering loss. Extensive experiments on the ExDark, DARK FACE, and CODaN datasets show the strong low-light generalizability of our method. Our code is available at https://github.com/ZPDu/DAI-Net.
https://openaccess.thecvf.com/content/CVPR2024/papers/Du_Boosting_Object_Detection_with_Zero-Shot_Day-Night_Domain_Adaptation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.01220
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Du_Boosting_Object_Detection_with_Zero-Shot_Day-Night_Domain_Adaptation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Du_Boosting_Object_Detection_with_Zero-Shot_Day-Night_Domain_Adaptation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Du_Boosting_Object_Detection_CVPR_2024_supplemental.pdf
null
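The Retinex-based reflectance learning described in the DAI-Net abstract rests on the factorization I = R * L and on reflectance being invariant to illumination. Below is a hedged sketch of those two constraints, with placeholder tensors standing in for network outputs; the paper's interchange-redecomposition-coherence procedure is more elaborate.

```python
import torch
import torch.nn.functional as F

def retinex_losses(img_light, img_dark, refl_l, illu_l, refl_d, illu_d):
    """refl_* and illu_* are decomposition-network outputs in [0, 1] with the
    same spatial size as the images. Two terms: each decomposition must
    reconstruct its input (I ~= R * L), and reflectance should be
    illumination-invariant across the well-lit / low-light pair."""
    recon = F.l1_loss(refl_l * illu_l, img_light) + F.l1_loss(refl_d * illu_d, img_dark)
    invariance = F.l1_loss(refl_l, refl_d)   # same scene content under both lightings
    return recon, invariance

# Toy tensors standing in for a paired well-lit / synthetic low-light batch.
B, C, H, W = 2, 3, 64, 64
img_l, img_d = torch.rand(B, C, H, W), torch.rand(B, C, H, W) * 0.3
r_l, l_l = torch.rand(B, C, H, W), torch.rand(B, 1, H, W)
r_d, l_d = torch.rand(B, C, H, W), torch.rand(B, 1, H, W)
recon, inv = retinex_losses(img_l, img_d, r_l, l_l, r_d, l_d)
loss = recon + 0.5 * inv   # weighting is an assumption
```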
LucidDreamer: Towards High-Fidelity Text-to-3D Generation via Interval Score Matching
Yixun Liang, Xin Yang, Jiantao Lin, Haodong Li, Xiaogang Xu, Yingcong Chen
The recent advancements in text-to-3D generation mark a significant milestone in generative models, unlocking new possibilities for creating imaginative 3D assets across various real-world scenarios. While recent advancements in text-to-3D generation have shown promise, they often fall short in rendering detailed and high-quality 3D models. This problem is especially prevalent as many methods base themselves on Score Distillation Sampling (SDS). This paper identifies a notable deficiency in SDS: it brings inconsistent and low-quality updating directions for the 3D model, causing an over-smoothing effect. To address this, we propose a novel approach called Interval Score Matching (ISM). ISM employs deterministic diffusing trajectories and utilizes interval-based score matching to counteract over-smoothing. Furthermore, we incorporate 3D Gaussian Splatting into our text-to-3D generation pipeline. Extensive experiments show that our model largely outperforms the state-of-the-art in quality and training efficiency.
https://openaccess.thecvf.com/content/CVPR2024/papers/Liang_LucidDreamer_Towards_High-Fidelity_Text-to-3D_Generation_via_Interval_Score_Matching_CVPR_2024_paper.pdf
http://arxiv.org/abs/2311.11284
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Liang_LucidDreamer_Towards_High-Fidelity_Text-to-3D_Generation_via_Interval_Score_Matching_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Liang_LucidDreamer_Towards_High-Fidelity_Text-to-3D_Generation_via_Interval_Score_Matching_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Liang_LucidDreamer_Towards_High-Fidelity_CVPR_2024_supplemental.pdf
null
PTM-VQA: Efficient Video Quality Assessment Leveraging Diverse PreTrained Models from the Wild
Kun Yuan, Hongbo Liu, Mading Li, Muyi Sun, Ming Sun, Jiachao Gong, Jinhua Hao, Chao Zhou, Yansong Tang
Video quality assessment (VQA) is a challenging problem due to the numerous factors that can affect the perceptual quality of a video, e.g., content attractiveness, distortion type, and motion pattern and level. However, annotating the mean opinion score (MOS) for videos is expensive and time-consuming, which limits the scale of VQA datasets and poses a significant obstacle for deep learning-based methods. In this paper, we propose a VQA method named PTM-VQA, which leverages PreTrained Models to transfer knowledge from models pretrained on various pre-tasks, enabling benefits for VQA from different aspects. Specifically, we extract features of videos from different pretrained models with frozen weights and integrate them to generate representations. Since these models possess various fields of knowledge and are often trained with labels irrelevant to quality, we propose an Intra-Consistency and Inter-Divisibility (ICID) loss to impose constraints on features extracted by multiple pretrained models. The intra-consistency constraint ensures that features extracted by different pretrained models lie in the same unified quality-aware latent space, while the inter-divisibility introduces pseudo clusters based on the annotation of samples and tries to separate features of samples from different clusters. Furthermore, with a constantly growing number of pretrained models, it is crucial to determine which models to use and how to use them. To address this problem, we propose an efficient scheme to select suitable candidates: models with better clustering performance on VQA datasets are chosen as our candidates. Extensive experiments demonstrate the effectiveness of the proposed method.
https://openaccess.thecvf.com/content/CVPR2024/papers/Yuan_PTM-VQA_Efficient_Video_Quality_Assessment_Leveraging_Diverse_PreTrained_Models_from_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Yuan_PTM-VQA_Efficient_Video_Quality_Assessment_Leveraging_Diverse_PreTrained_Models_from_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Yuan_PTM-VQA_Efficient_Video_Quality_Assessment_Leveraging_Diverse_PreTrained_Models_from_CVPR_2024_paper.html
CVPR 2024
null
null
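The ICID loss in the PTM-VQA abstract pulls a video's features from different frozen backbones toward a shared quality-aware space and pushes apart pseudo-clusters derived from the annotations. The sketch below is a simplified, hedged version under assumed shapes and a cosine-margin form; it is not the paper's exact definition.

```python
import torch
import torch.nn.functional as F

def icid_loss(feats, cluster_ids, margin=0.2):
    """feats: (M, B, D) -- B videos embedded by M pretrained backbones, already
    projected to a shared D-dim space and L2-normalized.
    cluster_ids: (B,) pseudo-cluster labels derived from quality annotations."""
    M, B, D = feats.shape

    # Intra-consistency: every backbone's embedding of a video should agree
    # with the mean embedding of that video (cosine distance to the mean).
    mean_feat = F.normalize(feats.mean(dim=0), dim=-1)             # (B, D)
    intra = (1.0 - (feats * mean_feat.unsqueeze(0)).sum(-1)).mean()

    # Inter-divisibility: centers of different pseudo-clusters should be
    # at least `margin` apart in cosine similarity.
    centers = []
    for c in cluster_ids.unique():
        centers.append(F.normalize(mean_feat[cluster_ids == c].mean(0), dim=-1))
    centers = torch.stack(centers)                                  # (K, D)
    sim = centers @ centers.t()
    off_diag = sim[~torch.eye(len(centers), dtype=torch.bool)]
    inter = F.relu(off_diag - (1.0 - margin)).mean()
    return intra + inter

feats = F.normalize(torch.randn(3, 8, 128), dim=-1)    # 3 backbones, 8 videos
loss = icid_loss(feats, torch.tensor([0, 0, 1, 1, 2, 2, 3, 3]))
```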
Versatile Medical Image Segmentation Learned from Multi-Source Datasets via Model Self-Disambiguation
Xiaoyang Chen, Hao Zheng, Yuemeng Li, Yuncong Ma, Liang Ma, Hongming Li, Yong Fan
A versatile medical image segmentation model applicable to images acquired with diverse equipment and protocols can facilitate model deployment and maintenance. However, building such a model typically demands a large, diverse, and fully annotated dataset, which is challenging to obtain due to the labor-intensive nature of data curation. To address this challenge, we propose a cost-effective alternative that harnesses multi-source data with only partial or sparse segmentation labels for training, substantially reducing the cost of developing a versatile model. We devise strategies for model self-disambiguation, prior knowledge incorporation, and imbalance mitigation to tackle challenges associated with inconsistently labeled multi-source data, including label ambiguity and modality, dataset, and class imbalances. Experimental results on a multi-modal dataset compiled from eight different sources for abdominal structure segmentation have demonstrated the effectiveness and superior performance of our method compared to state-of-the-art alternative approaches. We anticipate that its cost-saving features, which optimize the utilization of existing annotated data and reduce annotation efforts for new data, will have a significant impact in the field.
https://openaccess.thecvf.com/content/CVPR2024/papers/Chen_Versatile_Medical_Image_Segmentation_Learned_from_Multi-Source_Datasets_via_Model_CVPR_2024_paper.pdf
http://arxiv.org/abs/2311.10696
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Chen_Versatile_Medical_Image_Segmentation_Learned_from_Multi-Source_Datasets_via_Model_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Chen_Versatile_Medical_Image_Segmentation_Learned_from_Multi-Source_Datasets_via_Model_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Chen_Versatile_Medical_Image_CVPR_2024_supplemental.pdf
null
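A standard building block for the partially labeled multi-source setting described above is to let each source supervise only the classes it actually annotated. The masked cross-entropy below is a minimal sketch of that idea, not the paper's full self-disambiguation strategy; the shapes and ignore index are assumptions.

```python
import torch
import torch.nn.functional as F

def partially_labeled_ce(logits, target, labeled_classes, ignore_index=255):
    """logits: (B, K, H, W); target: (B, H, W) holding class ids in the shared
    K-class label space, with `ignore_index` already marking unannotated voxels.
    Any voxel whose class was not annotated by this source is also ignored, so
    each dataset only supervises the structures it actually labeled."""
    keep = torch.zeros(logits.shape[1], dtype=torch.bool, device=target.device)
    keep[list(labeled_classes)] = True
    masked = target.clone()
    valid = target != ignore_index
    masked[valid & ~keep[target.clamp(max=logits.shape[1] - 1)]] = ignore_index
    return F.cross_entropy(logits, masked, ignore_index=ignore_index)

logits = torch.randn(2, 9, 32, 32)                  # e.g. 8 organs + background
target = torch.randint(0, 9, (2, 32, 32))
loss = partially_labeled_ce(logits, target, labeled_classes=[0, 1, 2, 4])
```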
Improving Generalization via Meta-Learning on Hard Samples
Nishant Jain, Arun S. Suggala, Pradeep Shenoy
Learned reweighting (LRW) approaches to supervised learning use an optimization criterion to assign weights to training instances in order to maximize performance on a representative validation dataset. We pose and formalize the problem of optimized selection of the validation set used in LRW training to improve classifier generalization. In particular, we show that using hard-to-classify instances in the validation set has both a theoretical connection to, and strong empirical evidence of, generalization. We provide an efficient algorithm for training this meta-optimized model, as well as a simple train-twice heuristic for careful comparative study. We demonstrate that LRW with easy validation data performs consistently worse than LRW with hard validation data, establishing the validity of our meta-optimization problem. Our proposed algorithm outperforms a wide range of baselines on a range of datasets and domain shift challenges (ImageNet-1K, CIFAR-100, Clothing-1M, CAMELYON, WILDS, etc.), with 1% gains using ViT-B on ImageNet. We also show that using naturally hard examples for validation (ImageNet-R / ImageNet-A) in LRW training for ImageNet improves performance on both clean and naturally hard test instances by 1-2%. Secondary analyses show that using hard validation data in an LRW framework improves margins on test data, hinting at the mechanism underlying our empirical gains. We believe this work opens up new research directions for the meta-optimization of meta-learning in a supervised learning context.
https://openaccess.thecvf.com/content/CVPR2024/papers/Jain_Improving_Generalization_via_Meta-Learning_on_Hard_Samples_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.12236
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Jain_Improving_Generalization_via_Meta-Learning_on_Hard_Samples_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Jain_Improving_Generalization_via_Meta-Learning_on_Hard_Samples_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Jain_Improving_Generalization_via_CVPR_2024_supplemental.pdf
null
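The "train-twice heuristic" mentioned in the abstract can be pictured as: fit a first-pass model, score candidate validation examples by loss, and keep the hardest fraction as the validation set for learned reweighting. Below is a hedged sketch of that selection step; the loader layout and the fraction are assumptions.

```python
import torch

@torch.no_grad()
def select_hard_validation(model, loader, device, frac=0.2):
    """Score every candidate validation example by its cross-entropy loss under
    a first-pass model, and keep the hardest `frac` as the LRW validation set."""
    model.eval()
    criterion = torch.nn.CrossEntropyLoss(reduction="none")
    losses, indices = [], []
    for batch_idx, (x, y) in enumerate(loader):       # assumes loader yields (x, y)
        per_sample = criterion(model(x.to(device)), y.to(device))
        losses.append(per_sample.cpu())
        indices.append(torch.arange(batch_idx * loader.batch_size,
                                    batch_idx * loader.batch_size + len(y)))
    losses, indices = torch.cat(losses), torch.cat(indices)
    k = int(frac * len(losses))
    return indices[losses.topk(k).indices]            # dataset indices of hard examples

# Toy usage: a linear probe on random data.
data = torch.utils.data.TensorDataset(torch.randn(100, 16), torch.randint(0, 4, (100,)))
loader = torch.utils.data.DataLoader(data, batch_size=25)
hard_idx = select_hard_validation(torch.nn.Linear(16, 4), loader, device="cpu")
```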
Align and Aggregate: Compositional Reasoning with Video Alignment and Answer Aggregation for Video Question-Answering
Zhaohe Liao, Jiangtong Li, Li Niu, Liqing Zhang
Despite the recent progress made in Video Question-Answering (VideoQA), these methods typically function as black boxes, making it difficult to understand their reasoning processes and perform consistent compositional reasoning. To address these challenges, we propose a model-agnostic Video Alignment and Answer Aggregation (VA3) framework, which is capable of enhancing both the compositional consistency and accuracy of existing VideoQA methods by integrating video aligner and answer aggregator modules. The video aligner hierarchically selects the relevant video clips based on the question, while the answer aggregator deduces the answer to the question based on its sub-questions, with compositional consistency ensured by the information flow along the question decomposition graph and the contrastive learning strategy. We evaluate our framework on three settings of the AGQA-Decomp dataset with three baseline methods and propose new metrics to measure the compositional consistency of VideoQA methods more comprehensively. Moreover, we propose a large language model (LLM) based automatic question decomposition pipeline to apply our framework to any VideoQA data. We extend the MSVD and NExT-QA datasets with it to evaluate this scheme and our VA3 framework in broader scenarios. Extensive experiments show that our framework improves both the compositional consistency and accuracy of existing methods, leading to more interpretable models in real-world applications.
https://openaccess.thecvf.com/content/CVPR2024/papers/Liao_Align_and_Aggregate_Compositional_Reasoning_with_Video_Alignment_and_Answer_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Liao_Align_and_Aggregate_Compositional_Reasoning_with_Video_Alignment_and_Answer_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Liao_Align_and_Aggregate_Compositional_Reasoning_with_Video_Alignment_and_Answer_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Liao_Align_and_Aggregate_CVPR_2024_supplemental.pdf
null
REACTO: Reconstructing Articulated Objects from a Single Video
Chaoyue Song, Jiacheng Wei, Chuan Sheng Foo, Guosheng Lin, Fayao Liu
In this paper, we address the challenge of reconstructing general articulated 3D objects from a single video. Existing works employing dynamic neural radiance fields have advanced the modeling of articulated objects like humans and animals from videos, but face challenges with piece-wise rigid general articulated objects due to limitations in their deformation models. To tackle this, we propose Quasi-Rigid Blend Skinning, a novel deformation model that enhances the rigidity of each part while maintaining flexible deformation of the joints. Our primary insight combines three distinct approaches: 1) an enhanced bone rigging system for improved component modeling, 2) the use of quasi-sparse skinning weights to boost part rigidity and reconstruction fidelity, and 3) the application of geodesic point assignment for precise motion and seamless deformation. Our method outperforms previous works in producing higher-fidelity 3D reconstructions of general articulated objects, as demonstrated on both real and synthetic datasets. Project page: https://chaoyuesong.github.io/REACTO.
https://openaccess.thecvf.com/content/CVPR2024/papers/Song_REACTO_Reconstructing_Articulated_Objects_from_a_Single_Video_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.11151
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Song_REACTO_Reconstructing_Articulated_Objects_from_a_Single_Video_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Song_REACTO_Reconstructing_Articulated_Objects_from_a_Single_Video_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Song_REACTO_Reconstructing_Articulated_CVPR_2024_supplemental.pdf
null
Egocentric Whole-Body Motion Capture with FisheyeViT and Diffusion-Based Motion Refinement
Jian Wang, Zhe Cao, Diogo Luvizon, Lingjie Liu, Kripasindhu Sarkar, Danhang Tang, Thabo Beeler, Christian Theobalt
In this work, we explore egocentric whole-body motion capture using a single fisheye camera, which simultaneously estimates human body and hand motion. This task presents significant challenges due to three factors: the lack of high-quality datasets, fisheye camera distortion, and human body self-occlusion. To address these challenges, we propose a novel approach that leverages FisheyeViT to extract fisheye image features, which are subsequently converted into pixel-aligned 3D heatmap representations for 3D human body pose prediction. For hand tracking, we incorporate dedicated hand detection and hand pose estimation networks for regressing 3D hand poses. Finally, we develop a diffusion-based whole-body motion prior model to refine the estimated whole-body motion while accounting for joint uncertainties. To train these networks, we collect a large synthetic dataset, EgoWholeBody, comprising 840,000 high-quality egocentric images captured across a diverse range of whole-body motion sequences. Quantitative and qualitative evaluations demonstrate the effectiveness of our method in producing high-quality whole-body motion estimates from a single egocentric camera.
https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_Egocentric_Whole-Body_Motion_Capture_with_FisheyeViT_and_Diffusion-Based_Motion_Refinement_CVPR_2024_paper.pdf
http://arxiv.org/abs/2311.16495
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Wang_Egocentric_Whole-Body_Motion_Capture_with_FisheyeViT_and_Diffusion-Based_Motion_Refinement_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Wang_Egocentric_Whole-Body_Motion_Capture_with_FisheyeViT_and_Diffusion-Based_Motion_Refinement_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wang_Egocentric_Whole-Body_Motion_CVPR_2024_supplemental.pdf
null
Language Embedded 3D Gaussians for Open-Vocabulary Scene Understanding
Jin-Chuan Shi, Miao Wang, Hao-Bin Duan, Shao-Hua Guan
Open-vocabulary querying in 3D space is challenging but essential for scene understanding tasks such as object localization and segmentation. Language-embedded scene representations have made progress by incorporating language features into 3D spaces. However, their efficacy heavily depends on neural networks that are resource-intensive in training and rendering. Although recent 3D Gaussians offer efficient and high-quality novel view synthesis, directly embedding language features in them leads to prohibitive memory usage and decreased performance. In this work, we introduce Language Embedded 3D Gaussians, a novel scene representation for open-vocabulary query tasks. Instead of embedding high-dimensional raw semantic features on 3D Gaussians, we propose a dedicated quantization scheme that drastically alleviates the memory requirement, and a novel embedding procedure that achieves smoother yet high-accuracy queries, countering the multi-view feature inconsistencies and the high-frequency inductive bias in point-based representations. Our comprehensive experiments show that our representation achieves the best visual quality and language querying accuracy across current language-embedded representations, while maintaining real-time rendering frame rates on a single desktop GPU.
https://openaccess.thecvf.com/content/CVPR2024/papers/Shi_Language_Embedded_3D_Gaussians_for_Open-Vocabulary_Scene_Understanding_CVPR_2024_paper.pdf
http://arxiv.org/abs/2311.18482
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Shi_Language_Embedded_3D_Gaussians_for_Open-Vocabulary_Scene_Understanding_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Shi_Language_Embedded_3D_Gaussians_for_Open-Vocabulary_Scene_Understanding_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Shi_Language_Embedded_3D_CVPR_2024_supplemental.pdf
null
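The quantization scheme referenced in the Language Embedded 3D Gaussians abstract amounts to storing a small codebook index per Gaussian instead of a raw high-dimensional semantic feature. Below is a minimal k-means vector-quantization sketch with an assumed codebook size and feature dimension; the paper's scheme differs in detail.

```python
import torch

def build_codebook(features, num_codes=256, iters=20):
    """Plain k-means over the raw per-Gaussian semantic features (N, D).
    Returns the codebook (num_codes, D) and one uint8 index per Gaussian."""
    codebook = features[torch.randperm(len(features))[:num_codes]].clone()
    for _ in range(iters):
        assign = torch.cdist(features, codebook).argmin(dim=1)        # (N,)
        for c in range(num_codes):
            members = features[assign == c]
            if len(members):
                codebook[c] = members.mean(dim=0)
    return codebook, assign.to(torch.uint8)

feats = torch.randn(10_000, 512)           # placeholder per-Gaussian features
codebook, codes = build_codebook(feats, num_codes=256)
recovered = codebook[codes.long()]         # what gets used at query time
print(codes.element_size() * codes.numel(), "bytes instead of",
      feats.element_size() * feats.numel())
```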
Towards Automated Movie Trailer Generation
Dawit Mureja Argaw, Mattia Soldan, Alejandro Pardo, Chen Zhao, Fabian Caba Heilbron, Joon Son Chung, Bernard Ghanem
Movie trailers are an essential tool for promoting films and attracting audiences. However, the process of creating trailers can be time-consuming and expensive. To streamline this process, we propose an automatic trailer generation framework that generates plausible trailers from a full movie by automating shot selection and composition. Our approach draws inspiration from machine translation techniques and models movies and trailers as sequences of shots, thus formulating the trailer generation problem as a sequence-to-sequence task. We introduce Trailer Generation Transformer (TGT), a deep-learning framework utilizing an encoder-decoder architecture. TGT's movie encoder is tasked with contextualizing each movie shot representation via self-attention, while the autoregressive trailer decoder predicts the feature representation of the next trailer shot, accounting for the relevance of shots' temporal order in trailers. Our TGT significantly outperforms previous methods on a comprehensive suite of metrics.
https://openaccess.thecvf.com/content/CVPR2024/papers/Argaw_Towards_Automated_Movie_Trailer_Generation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.03477
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Argaw_Towards_Automated_Movie_Trailer_Generation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Argaw_Towards_Automated_Movie_Trailer_Generation_CVPR_2024_paper.html
CVPR 2024
null
null
Differentiable Information Bottleneck for Deterministic Multi-view Clustering
Xiaoqiang Yan, Zhixiang Jin, Fengshou Han, Yangdong Ye
In recent years, the information bottleneck (IB) principle has provided an information-theoretic framework for deep multi-view clustering (MVC) by compressing multi-view observations while preserving the relevant information of multiple views. Although existing IB-based deep MVC methods have achieved huge success, they rely on variational approximation and distribution assumptions to estimate the lower bound of mutual information, which is a notoriously hard and impractical problem in high-dimensional multi-view spaces. In this work, we propose a new differentiable information bottleneck (DIB) method, which provides a deterministic and analytical MVC solution by fitting the mutual information without the necessity of variational approximation. Specifically, we first propose to directly fit the mutual information of high-dimensional spaces by leveraging a normalized kernel Gram matrix, which does not require any auxiliary neural estimator to estimate the lower bound of mutual information. Then, based on the new mutual information measurement, a deterministic multi-view neural network with analytical gradients is explicitly trained to parameterize the IB principle, which derives a deterministic compression of input variables from different views. Finally, a triplet consistency discovery mechanism is devised, which is capable of mining the feature consistency, cluster consistency, and joint consistency based on the deterministic and compact representations. Extensive experimental results show the superiority of our DIB method on 6 benchmarks compared with 13 state-of-the-art baselines.
https://openaccess.thecvf.com/content/CVPR2024/papers/Yan_Differentiable_Information_Bottleneck_for_Deterministic_Multi-view_Clustering_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.15681
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Yan_Differentiable_Information_Bottleneck_for_Deterministic_Multi-view_Clustering_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Yan_Differentiable_Information_Bottleneck_for_Deterministic_Multi-view_Clustering_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yan_Differentiable_Information_Bottleneck_CVPR_2024_supplemental.pdf
null
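Fitting mutual information "by leveraging a normalized kernel Gram matrix", as in the DIB abstract, is commonly done with matrix-based Renyi entropy: normalize an RBF Gram matrix to unit trace, read entropy off its eigenvalues, and use the Hadamard product for the joint term. The sketch below is a hedged illustration with alpha = 2 and an arbitrary kernel bandwidth, not necessarily the paper's exact estimator.

```python
import torch

def gram(x, sigma=1.0):
    """Unit-trace RBF Gram matrix of a batch of features x: (N, D)."""
    d2 = torch.cdist(x, x).pow(2)
    K = torch.exp(-d2 / (2 * sigma ** 2))
    return K / K.trace()

def renyi_entropy(K, alpha=2.0):
    """Matrix-based Renyi alpha-entropy from eigenvalues of a unit-trace Gram matrix."""
    lam = torch.linalg.eigvalsh(K).clamp_min(0)
    return torch.log2((lam ** alpha).sum()) / (1 - alpha)

def mutual_information(x, y, sigma=1.0):
    Kx, Ky = gram(x, sigma), gram(y, sigma)
    Kxy = Kx * Ky                           # Hadamard product for the joint term
    Kxy = Kxy / Kxy.trace()
    return renyi_entropy(Kx) + renyi_entropy(Ky) - renyi_entropy(Kxy)

x = torch.randn(64, 32)                     # one view's representation
y = x @ torch.randn(32, 16)                 # a correlated second view
print(mutual_information(x, y).item())
```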
Sheared Backpropagation for Fine-tuning Foundation Models
Zhiyuan Yu, Li Shen, Liang Ding, Xinmei Tian, Yixin Chen, Dacheng Tao
Fine-tuning is the process of extending the training of pre-trained models on specific target tasks, thereby significantly enhancing their performance across various applications. However, fine-tuning often demands large memory consumption, posing a challenge for low-memory devices. Some previous memory-efficient fine-tuning methods attempted to mitigate this by pruning activations for gradient computation, albeit at the cost of significant computational overhead from the pruning processes during training. To address these challenges, we introduce PreBackRazor, a novel activation pruning scheme offering both computational and memory efficiency through a sparsified backpropagation strategy, which judiciously avoids unnecessary activation pruning, storage, and gradient computation. Before activation pruning, our approach samples a probability of selecting a portion of parameters to freeze, utilizing a bandit method for updates to prioritize impactful gradients for convergence. During the feed-forward pass, each model layer adjusts adaptively based on parameter activation status, obviating the need for sparsification and storage of redundant activations for subsequent backpropagation. Benchmarking on fine-tuning foundation models, our approach maintains baseline accuracy across diverse tasks, yielding over 20% speedup and around 10% memory reduction. Moreover, integrating with an advanced CUDA kernel achieves up to 60% speedup without extra memory costs or accuracy loss, significantly enhancing the efficiency of fine-tuning foundation models on memory-constrained devices.
https://openaccess.thecvf.com/content/CVPR2024/papers/Yu_Sheared_Backpropagation_for_Fine-tuning_Foundation_Models_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Yu_Sheared_Backpropagation_for_Fine-tuning_Foundation_Models_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Yu_Sheared_Backpropagation_for_Fine-tuning_Foundation_Models_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yu_Sheared_Backpropagation_for_CVPR_2024_supplemental.pdf
null
Action-slot: Visual Action-centric Representations for Multi-label Atomic Activity Recognition in Traffic Scenes
Chi-Hsi Kung, Shu-Wei Lu, Yi-Hsuan Tsai, Yi-Ting Chen
In this paper, we study multi-label atomic activity recognition. Despite the notable progress in action recognition, it is still challenging to recognize atomic activities due to a deficiency in the holistic understanding of both multiple road users' motions and their contextual information. We introduce Action-slot, a slot attention-based approach that learns visual action-centric representations, capturing both motion and contextual information. Our key idea is to design action slots that are capable of paying attention to regions where atomic activities occur, without the need for explicit perception guidance. To further enhance slot attention, we introduce a background slot that competes with action slots, aiding the training process in avoiding unnecessary focus on background regions devoid of activities. Yet the imbalanced class distribution in the existing dataset hampers the assessment of rare activities. To address this limitation, we collect a synthetic dataset called TACO, which is four times larger than OATS and features a balanced distribution of atomic activities. To validate the effectiveness of our method, we conduct comprehensive experiments and ablation studies against various action recognition baselines. We also show that the performance of multi-label atomic activity recognition on real-world datasets can be improved by pretraining representations on TACO.
https://openaccess.thecvf.com/content/CVPR2024/papers/Kung_Action-slot_Visual_Action-centric_Representations_for_Multi-label_Atomic_Activity_Recognition_in_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Kung_Action-slot_Visual_Action-centric_Representations_for_Multi-label_Atomic_Activity_Recognition_in_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Kung_Action-slot_Visual_Action-centric_Representations_for_Multi-label_Atomic_Activity_Recognition_in_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Kung_Action-slot_Visual_Action-centric_CVPR_2024_supplemental.pdf
null
Animatable Gaussians: Learning Pose-dependent Gaussian Maps for High-fidelity Human Avatar Modeling
Zhe Li, Zerong Zheng, Lizhen Wang, Yebin Liu
Modeling animatable human avatars from RGB videos is a long-standing and challenging problem. Recent works usually adopt MLP-based neural radiance fields (NeRF) to represent 3D humans, but it remains difficult for pure MLPs to regress pose-dependent garment details. To this end, we introduce Animatable Gaussians, a new avatar representation that leverages powerful 2D CNNs and 3D Gaussian splatting to create high-fidelity avatars. To associate 3D Gaussians with the animatable avatar, we learn a parametric template from the input videos and then parameterize the template on two front & back canonical Gaussian maps, where each pixel represents a 3D Gaussian. The learned template is adaptive to the wearing garments for modeling looser clothes like dresses. Such template-guided 2D parameterization enables us to employ a powerful StyleGAN-based CNN to learn the pose-dependent Gaussian maps for modeling detailed dynamic appearances. Furthermore, we introduce a pose projection strategy for better generalization to novel poses. Overall, our method can create lifelike avatars with dynamic, realistic, and generalized appearances. Experiments show that our method outperforms other state-of-the-art approaches. Code: https://github.com/lizhe00/AnimatableGaussians.
https://openaccess.thecvf.com/content/CVPR2024/papers/Li_Animatable_Gaussians_Learning_Pose-dependent_Gaussian_Maps_for_High-fidelity_Human_Avatar_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Li_Animatable_Gaussians_Learning_Pose-dependent_Gaussian_Maps_for_High-fidelity_Human_Avatar_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Li_Animatable_Gaussians_Learning_Pose-dependent_Gaussian_Maps_for_High-fidelity_Human_Avatar_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_Animatable_Gaussians_Learning_CVPR_2024_supplemental.pdf
null
Latency Correction for Event-guided Deblurring and Frame Interpolation
Yixin Yang, Jinxiu Liang, Bohan Yu, Yan Chen, Jimmy S. Ren, Boxin Shi
Event cameras, with their high temporal resolution, dynamic range, and low power consumption, are particularly well suited to time-sensitive applications like deblurring and frame interpolation. However, their performance is hindered by latency variability, especially under low-light conditions and with fast-moving objects. This paper addresses the challenge of latency in event cameras -- the temporal discrepancy between the actual occurrence of changes and the corresponding timestamp assigned by the sensor. Focusing on event-guided deblurring and frame interpolation tasks, we propose a latency correction method based on a parameterized latency model. To enable data-driven learning, we develop an event-based temporal fidelity to describe the sharpness of latent images reconstructed from events and the corresponding blurry images, and reformulate the event-based double integral model to be differentiable with respect to latency. The proposed method is validated using synthetic and real-world datasets, demonstrating the benefits of latency correction for deblurring and interpolation across different lighting conditions.
https://openaccess.thecvf.com/content/CVPR2024/papers/Yang_Latency_Correction_for_Event-guided_Deblurring_and_Frame_Interpolation_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Yang_Latency_Correction_for_Event-guided_Deblurring_and_Frame_Interpolation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Yang_Latency_Correction_for_Event-guided_Deblurring_and_Frame_Interpolation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yang_Latency_Correction_for_CVPR_2024_supplemental.pdf
null
Retraining-Free Model Quantization via One-Shot Weight-Coupling Learning
Chen Tang, Yuan Meng, Jiacheng Jiang, Shuzhao Xie, Rongwei Lu, Xinzhu Ma, Zhi Wang, Wenwu Zhu
Quantization is of significance for compressing over-parameterized deep neural models and deploying them on resource-limited devices. Fixed-precision quantization suffers from performance drops due to its limited numerical representation ability. Conversely, mixed-precision quantization (MPQ) is advocated to compress the model effectively by allocating heterogeneous bit-widths across layers. MPQ is typically organized into a searching-retraining two-stage process. Previous works focus only on efficiently determining the optimal bit-width configuration in the first stage, while ignoring the considerable time costs of the second stage. However, retraining always consumes hundreds of GPU-hours on cutting-edge GPUs, thus hindering deployment efficiency significantly. In this paper, we devise a one-shot training-searching paradigm for mixed-precision model compression. Specifically, in the first stage, all potential bit-width configurations are coupled and thus optimized simultaneously within a set of shared weights. However, our observations reveal a previously unseen and severe bit-width interference phenomenon among highly coupled weights during optimization, leading to considerable performance degradation under a high compression ratio. To tackle this problem, we first design a bit-width scheduler to dynamically freeze the most turbulent bit-width of layers during training, ensuring the remaining bit-widths converge properly. Then, taking inspiration from information theory, we present an information distortion mitigation technique to align the behaviour of the poorly performing bit-widths with the well-performing ones. In the second stage, an inference-only greedy search scheme is devised to evaluate the goodness of configurations without introducing any additional training costs. Extensive experiments on three representative models and three datasets demonstrate the effectiveness of the proposed method.
https://openaccess.thecvf.com/content/CVPR2024/papers/Tang_Retraining-Free_Model_Quantization_via_One-Shot_Weight-Coupling_Learning_CVPR_2024_paper.pdf
http://arxiv.org/abs/2401.01543
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Tang_Retraining-Free_Model_Quantization_via_One-Shot_Weight-Coupling_Learning_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Tang_Retraining-Free_Model_Quantization_via_One-Shot_Weight-Coupling_Learning_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Tang_Retraining-Free_Model_Quantization_CVPR_2024_supplemental.pdf
null
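The first-stage "coupled bit-widths within a set of shared weights" from the quantization abstract can be illustrated with fake quantization applied to one shared weight tensor at a randomly drawn precision each step; the bit-width scheduler and distortion-mitigation pieces are omitted, and all names here are illustrative.

```python
import random
import torch
import torch.nn as nn

def fake_quantize(w, bits):
    """Uniform symmetric fake quantization with a straight-through estimator."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.detach().abs().max() / qmax
    q = torch.clamp(torch.round(w / scale), -qmax, qmax) * scale
    return w + (q - w).detach()          # forward: quantized, backward: identity

class SharedWeightQuantLinear(nn.Linear):
    """One weight tensor serves every candidate bit-width."""
    def forward(self, x, bits):
        return nn.functional.linear(x, fake_quantize(self.weight, bits), self.bias)

layer = SharedWeightQuantLinear(16, 8)
opt = torch.optim.SGD(layer.parameters(), lr=1e-2)
for step in range(10):
    bits = random.choice([2, 4, 8])      # coupled training over candidate precisions
    out = layer(torch.randn(4, 16), bits)
    loss = out.pow(2).mean()             # toy objective
    loss.backward()
    opt.step()
    opt.zero_grad()
```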
EVCap: Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension
Jiaxuan Li, Duc Minh Vo, Akihiro Sugimoto, Hideki Nakayama
Large language model (LLM)-based image captioning has the capability of describing objects not explicitly observed in training data; yet novel objects occur frequently, necessitating sustained, up-to-date object knowledge for open-world comprehension. Instead of relying on large amounts of data and/or scaling up network parameters, we introduce a highly effective retrieval-augmented image captioning method that prompts LLMs with object names retrieved from an External Visual-name memory (EVCap). We build an ever-changing object knowledge memory using objects' visuals and names, enabling us to (i) update the memory at a minimal cost and (ii) effortlessly augment LLMs with retrieved object names by utilizing a lightweight and fast-to-train model. Our model, which was trained only on the COCO dataset, can adapt to out-of-domain data without requiring additional fine-tuning or re-training. Our experiments conducted on benchmarks and synthetic commonsense-violating data show that EVCap, with only 3.97M trainable parameters, exhibits superior performance compared to other methods based on frozen pre-trained LLMs. Its performance is also competitive with specialist SOTAs that require extensive training.
https://openaccess.thecvf.com/content/CVPR2024/papers/Li_EVCap_Retrieval-Augmented_Image_Captioning_with_External_Visual-Name_Memory_for_Open-World_CVPR_2024_paper.pdf
http://arxiv.org/abs/2311.15879
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Li_EVCap_Retrieval-Augmented_Image_Captioning_with_External_Visual-Name_Memory_for_Open-World_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Li_EVCap_Retrieval-Augmented_Image_Captioning_with_External_Visual-Name_Memory_for_Open-World_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_EVCap_Retrieval-Augmented_Image_CVPR_2024_supplemental.pdf
null
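The external visual-name memory in EVCap reduces, at retrieval time, to nearest-neighbour lookup: store (visual embedding, object name) pairs and prepend the top-k retrieved names to the LLM prompt. Below is a toy sketch with made-up memory contents and dimensions.

```python
import torch
import torch.nn.functional as F

class VisualNameMemory:
    """Tiny external memory of (visual embedding, object name) pairs.
    Adding a new object is just appending a row -- no retraining needed."""
    def __init__(self, dim):
        self.embeddings = torch.empty(0, dim)
        self.names = []

    def add(self, embedding, name):
        self.embeddings = torch.cat([self.embeddings, F.normalize(embedding, dim=-1)])
        self.names.append(name)

    def retrieve(self, query, k=3):
        sims = F.normalize(query, dim=-1) @ self.embeddings.t()      # (1, N)
        topk = sims.topk(min(k, len(self.names)), dim=-1).indices[0]
        return [self.names[i] for i in topk.tolist()]

memory = VisualNameMemory(dim=256)
for name in ["corgi", "skateboard", "traffic cone"]:                 # placeholder entries
    memory.add(torch.randn(1, 256), name)
retrieved = memory.retrieve(torch.randn(1, 256), k=2)
prompt = f"Objects possibly present: {', '.join(retrieved)}. Describe the image."
```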
SIFU: Side-view Conditioned Implicit Function for Real-world Usable Clothed Human Reconstruction
Zechuan Zhang, Zongxin Yang, Yi Yang
Creating high-quality 3D models of clothed humans from single images for real-world applications is crucial. Despite recent advancements, accurately reconstructing humans in complex poses or with loose clothing from in-the-wild images, along with predicting textures for unseen areas, remains a significant challenge. A key limitation of previous methods is their insufficient prior guidance in transitioning from 2D to 3D and in texture prediction. In response, we introduce SIFU (Side-view Conditioned Implicit Function for Real-world Usable Clothed Human Reconstruction), a novel approach combining a Side-view Decoupling Transformer with a 3D Consistent Texture Refinement pipeline. SIFU employs a cross-attention mechanism within the transformer, using SMPL-X normals as queries to effectively decouple side-view features in the process of mapping 2D features to 3D. This method improves not only the precision of the 3D models but also their robustness, especially when SMPL-X estimates are not perfect. Our texture refinement process leverages a text-to-image diffusion-based prior to generate realistic and consistent textures for invisible views. Through extensive experiments, SIFU surpasses SOTA methods in both geometry and texture reconstruction, showcasing enhanced robustness in complex scenarios and achieving unprecedented Chamfer and P2S measurements. Our approach extends to practical applications such as 3D printing and scene building, demonstrating its broad utility in real-world scenarios.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_SIFU_Side-view_Conditioned_Implicit_Function_for_Real-world_Usable_Clothed_Human_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.06704
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_SIFU_Side-view_Conditioned_Implicit_Function_for_Real-world_Usable_Clothed_Human_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_SIFU_Side-view_Conditioned_Implicit_Function_for_Real-world_Usable_Clothed_Human_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhang_SIFU_Side-view_Conditioned_CVPR_2024_supplemental.mp4
null
WinSyn: A High Resolution Testbed for Synthetic Data
Tom Kelly, John Femiani, Peter Wonka
We present WinSyn, a unique dataset and testbed for creating high-quality synthetic data with procedural modeling techniques. The dataset contains high-resolution photographs of windows selected from locations around the world, with 89,318 individual window crops showcasing diverse geometric and material characteristics. We evaluate a procedural model by training semantic segmentation networks on both synthetic and real images and then comparing their performance on a shared test set of real images. Specifically, we measure the difference in mean Intersection over Union (mIoU) and determine the effective number of real images needed to match synthetic data's training performance. We design a baseline procedural model as a benchmark and provide 21,290 synthetically generated images. By tuning the procedural model, we identify key factors which significantly influence the model's fidelity in replicating real-world scenarios. Importantly, we highlight the challenge of procedural modeling using current techniques, especially in their ability to replicate the spatial semantics of real-world scenarios. This insight is critical because of the potential of procedural models to bridge hidden scene aspects such as depth, reflectivity, material properties, and lighting conditions.
https://openaccess.thecvf.com/content/CVPR2024/papers/Kelly_WinSyn__A_High_Resolution_Testbed_for_Synthetic_Data_CVPR_2024_paper.pdf
http://arxiv.org/abs/2310.08471
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Kelly_WinSyn__A_High_Resolution_Testbed_for_Synthetic_Data_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Kelly_WinSyn__A_High_Resolution_Testbed_for_Synthetic_Data_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Kelly_WinSyn__A_CVPR_2024_supplemental.zip
null
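The WinSyn evaluation protocol, comparing mIoU of segmentation networks trained on synthetic versus real images over a shared real test set, comes down to a confusion-matrix mIoU. Below is a minimal sketch with random label maps standing in for predictions.

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """pred, gt: integer label maps of the same shape (any number of images
    can be stacked). Returns mIoU over classes present in gt or pred."""
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(conf, (gt.ravel(), pred.ravel()), 1)
    inter = np.diag(conf)
    union = conf.sum(0) + conf.sum(1) - inter
    valid = union > 0
    return (inter[valid] / union[valid]).mean()

gt = np.random.randint(0, 5, (4, 128, 128))
pred_syn_trained = np.random.randint(0, 5, (4, 128, 128))    # stand-in predictions
pred_real_trained = np.random.randint(0, 5, (4, 128, 128))
gap = mean_iou(pred_real_trained, gt, 5) - mean_iou(pred_syn_trained, gt, 5)
```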
Autoregressive Queries for Adaptive Tracking with Spatio-Temporal Transformers
Jinxia Xie, Bineng Zhong, Zhiyi Mo, Shengping Zhang, Liangtao Shi, Shuxiang Song, Rongrong Ji
Rich spatio-temporal information is crucial for capturing the complicated target appearance variations in visual tracking. However, most top-performing tracking algorithms rely on many hand-crafted components for spatio-temporal information aggregation. Consequently, the spatio-temporal information is far from being fully explored. To alleviate this issue, we propose an adaptive tracker with spatio-temporal transformers (named AQATrack), which adopts simple autoregressive queries to effectively learn spatio-temporal information without many hand-designed components. Firstly, we introduce a set of learnable and autoregressive queries to capture the instantaneous target appearance changes in a sliding-window fashion. Then, we design a novel attention mechanism for the interaction of existing queries to generate a new query for the current frame. Finally, based on the initial target template and the learned autoregressive queries, a spatio-temporal information fusion module (STM) is designed for spatio-temporal information aggregation to locate the target object. Benefiting from the STM, we can effectively combine the static appearance and instantaneous changes to guide robust tracking. Extensive experiments show that our method significantly improves the tracker's performance on six popular tracking benchmarks: LaSOT, LaSOText, TrackingNet, GOT-10k, TNL2K, and UAV123. Code and models will be available at https://github.com/orgs/GXNU-ZhongLab.
https://openaccess.thecvf.com/content/CVPR2024/papers/Xie_Autoregressive_Queries_for_Adaptive_Tracking_with_Spatio-Temporal_Transformers_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Xie_Autoregressive_Queries_for_Adaptive_Tracking_with_Spatio-Temporal_Transformers_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Xie_Autoregressive_Queries_for_Adaptive_Tracking_with_Spatio-Temporal_Transformers_CVPR_2024_paper.html
CVPR 2024
null
null
Misalignment-Robust Frequency Distribution Loss for Image Transformation
Zhangkai Ni, Juncheng Wu, Zian Wang, Wenhan Yang, Hanli Wang, Lin Ma
This paper aims to address a common challenge in deep learning-based image transformation methods, such as image enhancement and super-resolution, which heavily rely on precisely aligned paired datasets with pixel-level alignments. However, creating precisely aligned paired images presents significant challenges and hinders the advancement of methods trained on such data. To overcome this challenge, this paper introduces a novel and simple Frequency Distribution Loss (FDL) for computing distribution distance within the frequency domain. Specifically, we transform image features into the frequency domain using the Discrete Fourier Transform (DFT). Subsequently, frequency components (amplitude and phase) are processed separately to form the FDL loss function. Our method is empirically proven effective as a training constraint due to the thoughtful utilization of global information in the frequency domain. Extensive experimental evaluations focusing on image enhancement and super-resolution tasks demonstrate that FDL outperforms existing misalignment-robust loss functions. Furthermore, we explore the potential of our FDL for image style transfer, which relies solely on completely misaligned data. Our code is available at https://github.com/eezkni/FDL.
https://openaccess.thecvf.com/content/CVPR2024/papers/Ni_Misalignment-Robust_Frequency_Distribution_Loss_for_Image_Transformation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2402.18192
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Ni_Misalignment-Robust_Frequency_Distribution_Loss_for_Image_Transformation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Ni_Misalignment-Robust_Frequency_Distribution_Loss_for_Image_Transformation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ni_Misalignment-Robust_Frequency_Distribution_CVPR_2024_supplemental.pdf
null
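The Frequency Distribution Loss construction described above, DFT the features and then treat amplitude and phase separately, maps directly onto torch.fft. The sketch below substitutes a plain L1 distance for the paper's distribution distance, so it is an illustration rather than the actual FDL.

```python
import torch

def frequency_losses(feat_pred, feat_gt):
    """feat_*: (B, C, H, W) feature maps from the prediction and the (possibly
    misaligned) reference. Comparing global frequency statistics instead of
    per-pixel values is what gives robustness to small misalignments."""
    f_pred = torch.fft.fft2(feat_pred, norm="ortho")
    f_gt = torch.fft.fft2(feat_gt, norm="ortho")
    amp_loss = (f_pred.abs() - f_gt.abs()).abs().mean()
    # Note: raw phase differences wrap around; a simple L1 is used here only
    # to keep the sketch short.
    pha_loss = (torch.angle(f_pred) - torch.angle(f_gt)).abs().mean()
    return amp_loss, pha_loss

a, b = torch.rand(2, 64, 32, 32), torch.rand(2, 64, 32, 32)
amp, pha = frequency_losses(a, b)
loss = amp + 0.5 * pha        # weighting is an assumption
```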
Language-aware Visual Semantic Distillation for Video Question Answering
Bo Zou, Chao Yang, Yu Qiao, Chengbin Quan, Youjian Zhao
Significant advancements in video question answering (VideoQA) have been made thanks to thriving large image-language pretraining frameworks. Although these image-language models can efficiently represent both the video and language branches, they typically employ a goal-free vision perception process and do not interact vision with language well during answer generation, thus omitting crucial visual cues. In this paper, we are inspired by the human recognition and learning pattern and propose VideoDistill, a framework with language-aware (i.e., goal-driven) behavior in both the vision perception and answer generation processes. VideoDistill generates answers only from question-related visual embeddings and follows a thinking-observing-answering approach that closely resembles human behavior, distinguishing it from previous research. Specifically, we develop a language-aware gating mechanism to replace the standard cross-attention, avoiding language's direct fusion into visual representations. We incorporate this mechanism into two key components of the entire framework. The first component is a differentiable sparse sampling module, which selects frames containing the necessary dynamics and semantics relevant to the questions. The second component is a vision refinement module that merges existing spatial-temporal attention layers to ensure the extraction of multi-grained visual semantics associated with the questions. We conduct experimental evaluations on various challenging video question answering benchmarks, and VideoDistill achieves state-of-the-art performance on both general and long-form VideoQA datasets. In addition, we verify that VideoDistill can effectively alleviate the utilization of language shortcut solutions on the EgoTaskQA dataset.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zou_Language-aware_Visual_Semantic_Distillation_for_Video_Question_Answering_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zou_Language-aware_Visual_Semantic_Distillation_for_Video_Question_Answering_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zou_Language-aware_Visual_Semantic_Distillation_for_Video_Question_Answering_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zou_Language-aware_Visual_Semantic_CVPR_2024_supplemental.pdf
null
Lane2Seq: Towards Unified Lane Detection via Sequence Generation
Kunyang Zhou
In this paper, we present a novel sequence generation-based framework for lane detection, called Lane2Seq. It unifies various lane detection formats by casting lane detection as a sequence generation task. This is different from previous lane detection methods, which depend on well-designed task-specific head networks and corresponding loss functions. Lane2Seq only adopts a plain transformer-based encoder-decoder architecture with a simple cross-entropy loss. Additionally, we propose a new multi-format model tuning method based on reinforcement learning to incorporate task-specific knowledge into Lane2Seq. Experimental results demonstrate that such a simple sequence generation paradigm not only unifies lane detection but also achieves competitive performance on benchmarks. For example, Lane2Seq achieves 97.95% and 97.42% F1 scores on the TuSimple and LLAMAS datasets, establishing a new state-of-the-art result for these two benchmarks.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhou_Lane2Seq_Towards_Unified_Lane_Detection_via_Sequence_Generation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2402.17172
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhou_Lane2Seq_Towards_Unified_Lane_Detection_via_Sequence_Generation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhou_Lane2Seq_Towards_Unified_Lane_Detection_via_Sequence_Generation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhou_Lane2Seq_Towards_Unified_CVPR_2024_supplemental.pdf
null
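Casting lane detection as sequence generation, as in the Lane2Seq abstract, presupposes serializing each lane's keypoints into discrete tokens. The encoding below is one plausible, hypothetical scheme; the coordinate bins and special tokens are assumptions, not the paper's vocabulary.

```python
# Tokens: coordinates quantized to `bins`, plus lane-separator and end-of-sequence tokens.

def encode_lanes(lanes, bins=1000):
    """lanes: list of lanes, each a list of (x, y) points normalized to [0, 1].
    Returns a flat token sequence: x, y, x, y, ..., LANE_SEP, ..., EOS."""
    LANE_SEP, EOS = bins, bins + 1
    tokens = []
    for lane in lanes:
        for x, y in lane:
            tokens += [min(int(x * bins), bins - 1), min(int(y * bins), bins - 1)]
        tokens.append(LANE_SEP)
    tokens.append(EOS)
    return tokens

def decode_lanes(tokens, bins=1000):
    """Invert encode_lanes, recovering bin-center coordinates per lane."""
    LANE_SEP, EOS = bins, bins + 1
    lanes, coords = [], []
    for t in tokens:
        if t == EOS:
            break
        if t == LANE_SEP:
            if coords:
                lanes.append(list(zip(coords[0::2], coords[1::2])))
            coords = []
        else:
            coords.append((t + 0.5) / bins)
    return lanes

seq = encode_lanes([[(0.1, 0.9), (0.12, 0.7)], [(0.5, 0.9), (0.52, 0.7)]])
assert len(decode_lanes(seq)) == 2
```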
Disentangled Prompt Representation for Domain Generalization
De Cheng, Zhipeng Xu, Xinyang Jiang, Nannan Wang, Dongsheng Li, Xinbo Gao
Domain Generalization (DG) aims to develop a versatile model capable of performing well on unseen target domains. Recent advancements in pre-trained Visual Foundation Models (VFMs) such as CLIP show significant potential in enhancing the generalization abilities of deep models. Although there is a growing focus on VFM-based domain prompt tuning for DG effectively learning prompts that disentangle invariant features across all domains remains a major challenge. In this paper we propose addressing this challenge by leveraging the controllable and flexible language prompt of the VFM. Observing that the text modality of VFMs is inherently easier to disentangle we introduce a novel text feature guided visual prompt tuning framework. This framework first automatically disentangles the text prompt using a large language model (LLM) and then learns a domain-invariant visual representation guided by the disentangled text feature. Moreover we devise domain-specific prototype learning to fully exploit domain-specific information and combine it with the invariant feature prediction. Extensive experiments on mainstream DG datasets namely PACS VLCS OfficeHome DomainNet and TerraInc demonstrate that the proposed method achieves superior performance compared to state-of-the-art DG methods.
https://openaccess.thecvf.com/content/CVPR2024/papers/Cheng_Disentangled_Prompt_Representation_for_Domain_Generalization_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Cheng_Disentangled_Prompt_Representation_for_Domain_Generalization_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Cheng_Disentangled_Prompt_Representation_for_Domain_Generalization_CVPR_2024_paper.html
CVPR 2024
null
null
Abductive Ego-View Accident Video Understanding for Safe Driving Perception
Jianwu Fang, Lei-lei Li, Junfei Zhou, Junbin Xiao, Hongkai Yu, Chen Lv, Jianru Xue, Tat-Seng Chua
We present MM-AU a novel dataset for Multi-Modal Accident video Understanding. MM-AU contains 11727 in-the-wild ego-view accident videos each with temporally aligned text descriptions. We annotate over 2.23 million object boxes and 58650 pairs of video-based accident reasons covering 58 accident categories. MM-AU supports various accident understanding tasks particularly multimodal video diffusion to understand accident cause-effect chains for safe driving. With MM-AU we present an Abductive accident Video understanding framework for Safe Driving perception (AdVersa-SD). AdVersa-SD performs video diffusion via an Object-Centric Video Diffusion (OAVD) method which is driven by an abductive CLIP model. This model involves a contrastive interaction loss to learn the pair co-occurrence of normal, near-accident, and accident frames with the corresponding text descriptions such as accident reasons, prevention advice, and accident categories. OAVD enforces object region learning while fixing the content of the original frame background in video generation to find the dominant objects for certain accidents. Extensive experiments verify the abductive ability of AdVersa-SD and the superiority of OAVD over state-of-the-art diffusion models. Additionally we provide careful benchmark evaluations for object detection and accident reason answering since AdVersa-SD relies on precise object and accident reason information.
https://openaccess.thecvf.com/content/CVPR2024/papers/Fang_Abductive_Ego-View_Accident_Video_Understanding_for_Safe_Driving_Perception_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.00436
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Fang_Abductive_Ego-View_Accident_Video_Understanding_for_Safe_Driving_Perception_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Fang_Abductive_Ego-View_Accident_Video_Understanding_for_Safe_Driving_Perception_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Fang_Abductive_Ego-View_Accident_CVPR_2024_supplemental.pdf
null
Cross-spectral Gated-RGB Stereo Depth Estimation
Samuel Brucker, Stefanie Walz, Mario Bijelic, Felix Heide
Gated cameras flood-illuminate a scene and capture its time-gated impulse response. By employing nanosecond-scale gates existing sensors are capable of capturing mega-pixel gated images delivering dense depth improving on today's LiDAR sensors in spatial resolution and depth precision. Although gated depth estimation methods deliver a million depth estimates per frame their resolution is still an order of magnitude below existing RGB imaging methods. In this work we combine high-resolution stereo HDR RCCB cameras with gated imaging allowing us to exploit depth cues from active gating multi-view RGB and multi-view NIR sensing -- multi-view and gated cues across the entire spectrum. The resulting capture system consists only of low-cost CMOS sensors and flood-illumination. We propose a novel stereo-depth estimation method that is capable of exploiting these multi-modal multi-view depth cues including the active illumination that is measured by the RCCB camera when removing the IR-cut filter. The proposed method achieves accurate depth at long ranges outperforming the next best existing method by 39% for ranges of 100 to 220 m in MAE on accumulated LiDAR ground-truth. Our code models and datasets are available here (https://light.princeton.edu/gatedrccbstereo/).
https://openaccess.thecvf.com/content/CVPR2024/papers/Brucker_Cross-spectral_Gated-RGB_Stereo_Depth_Estimation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2405.12759
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Brucker_Cross-spectral_Gated-RGB_Stereo_Depth_Estimation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Brucker_Cross-spectral_Gated-RGB_Stereo_Depth_Estimation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Brucker_Cross-spectral_Gated-RGB_Stereo_CVPR_2024_supplemental.pdf
null
KVQ: Kwai Video Quality Assessment for Short-form Videos
Yiting Lu, Xin Li, Yajing Pei, Kun Yuan, Qizhi Xie, Yunpeng Qu, Ming Sun, Chao Zhou, Zhibo Chen
Short-form UGC video platforms like Kwai and TikTok have been an emerging and irreplaceable mainstream media form thriving on user-friendly engagement and kaleidoscope creation etc. However the advancing content generation modes e.g. special effects and sophisticated processing workflows e.g. de-artifacts have introduced significant challenges to recent UGC video quality assessment: (i) the ambiguous contents hinder the identification of quality-determined regions. (ii) the diverse and complicated hybrid distortions are hard to distinguish. To tackle the above challenges and assist in the development of short-form videos we establish the first large-scale Kwai short Video database for Quality assessment termed KVQ which comprises 600 user-uploaded short videos and 3600 processed videos through the diverse practical processing workflows including pre-processing transcoding and enhancement. Among them the absolute quality score of each video and partial ranking scores among indistinguishable samples are provided by a team of professional researchers specializing in image processing. Based on this database we propose the first short-form video quality evaluator i.e. KSVQE which enables the quality evaluator to identify the quality-determined semantics with the content understanding of large vision language models (i.e. CLIP) and distinguish the distortions with the distortion understanding module. Experimental results have shown the effectiveness of KSVQE on our KVQ database and popular VQA databases. The project can be found at https://lixinustc.github.io/projects/KVQ/.
https://openaccess.thecvf.com/content/CVPR2024/papers/Lu_KVQ_Kwai_Video_Quality_Assessment_for_Short-form_Videos_CVPR_2024_paper.pdf
http://arxiv.org/abs/2402.07220
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Lu_KVQ_Kwai_Video_Quality_Assessment_for_Short-form_Videos_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Lu_KVQ_Kwai_Video_Quality_Assessment_for_Short-form_Videos_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Lu_KVQ_Kwai_Video_CVPR_2024_supplemental.pdf
null
Degrees of Freedom Matter: Inferring Dynamics from Point Trajectories
Yan Zhang, Sergey Prokudin, Marko Mihajlovic, Qianli Ma, Siyu Tang
Understanding the dynamics of generic 3D scenes is fundamentally challenging in computer vision and essential for enhancing applications related to scene reconstruction motion tracking and avatar creation. In this work we address the task as the problem of inferring dense long-range motion of 3D points. By observing a set of point trajectories we aim to learn an implicit motion field parameterized by a neural network to predict the movement of novel points within the same domain without relying on any data-driven or scene-specific priors. To achieve this our approach builds upon the recently introduced dynamic point field model that learns smooth deformation fields between the canonical frame and individual observation frames. However temporal consistency between consecutive frames is neglected and the number of required parameters increases linearly with the sequence length due to per-frame modeling. To address these shortcomings we exploit the intrinsic regularization provided by SIREN and modify the input layer to produce a spatiotemporally smooth motion field. Additionally we analyze the motion field Jacobian matrix and discover that the motion degrees of freedom (DOFs) in an infinitesimal area around a point and the network hidden variables affect the model's representational power in different ways. This enables us to improve the model representation capability while retaining the model compactness. Furthermore to reduce the risk of overfitting we introduce a regularization term based on the assumption of piece-wise motion smoothness. Our experiments assess the model's performance in predicting unseen point trajectories and its application in temporal mesh alignment with guidance. The results demonstrate its superiority and effectiveness. The code and data for the project are publicly available at https://yz-cnsdqz.github.io/eigenmotion/DOMA.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_Degrees_of_Freedom_Matter_Inferring_Dynamics_from_Point_Trajectories_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Degrees_of_Freedom_Matter_Inferring_Dynamics_from_Point_Trajectories_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Degrees_of_Freedom_Matter_Inferring_Dynamics_from_Point_Trajectories_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhang_Degrees_of_Freedom_CVPR_2024_supplemental.pdf
null
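The DOMA abstract above describes feeding time into a SIREN-style network so the learned motion field is spatiotemporally smooth. Below is a generic sketch of that idea; the layer sizes, the omega frequency, and the displacement-output head are assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """One SIREN layer: a linear map followed by a scaled sine activation."""
    def __init__(self, in_f: int, out_f: int, omega: float = 30.0):
        super().__init__()
        self.omega = omega
        self.linear = nn.Linear(in_f, out_f)

    def forward(self, x):
        return torch.sin(self.omega * self.linear(x))

class MotionField(nn.Module):
    """Maps (x, y, z, t) to a 3D displacement of the canonical point."""
    def __init__(self, hidden: int = 128, layers: int = 4):
        super().__init__()
        net = [SineLayer(4, hidden)]                       # time enters the input layer
        net += [SineLayer(hidden, hidden) for _ in range(layers - 1)]
        net += [nn.Linear(hidden, 3)]                      # 3D displacement output
        self.net = nn.Sequential(*net)

    def forward(self, xyz: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3), t: (N, 1); concatenating them makes the field smooth
        # in both space and time rather than modeled per frame.
        return self.net(torch.cat([xyz, t], dim=-1))
```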
LEMON: Learning 3D Human-Object Interaction Relation from 2D Images
Yuhang Yang, Wei Zhai, Hongchen Luo, Yang Cao, Zheng-Jun Zha
Learning 3D human-object interaction relation is pivotal to embodied AI and interaction modeling. Most existing methods approach the goal by learning to predict isolated interaction elements e.g. human contact object affordance and human-object spatial relation primarily from the perspective of either the human or the object. This underexploits certain correlations between the interaction counterparts (human and object) and struggles to address the uncertainty in interactions. Actually objects' functionalities potentially affect humans' interaction intentions which reveals what the interaction is. Meanwhile the interacting humans and objects exhibit matching geometric structures which presents how to interact. In light of this we propose harnessing these inherent correlations between interaction counterparts to mitigate the uncertainty and jointly anticipate the above interaction elements in 3D space. To achieve this we present LEMON (LEarning 3D huMan-Object iNteraction relation) a unified model that mines interaction intentions of the counterparts and employs curvatures to guide the extraction of geometric correlations combining them to anticipate the interaction elements. Besides the 3D Interaction Relation dataset (3DIR) is collected to serve as the test bed for training and evaluation. Extensive experiments demonstrate the superiority of LEMON over methods estimating each element in isolation. The code and dataset are available at https://yyvhang.github.io/LEMON/
https://openaccess.thecvf.com/content/CVPR2024/papers/Yang_LEMON_Learning_3D_Human-Object_Interaction_Relation_from_2D_Images_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.08963
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Yang_LEMON_Learning_3D_Human-Object_Interaction_Relation_from_2D_Images_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Yang_LEMON_Learning_3D_Human-Object_Interaction_Relation_from_2D_Images_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yang_LEMON_Learning_3D_CVPR_2024_supplemental.pdf
null
Low-Latency Neural Stereo Streaming
Qiqi Hou, Farzad Farhadzadeh, Amir Said, Guillaume Sautiere, Hoang Le
The rise of new video modalities like virtual reality or autonomous driving has increased the demand for efficient multi-view video compression methods both in terms of rate-distortion (R-D) performance and in terms of delay and runtime. While most recent stereo video compression approaches have shown promising performance they compress left and right views sequentially leading to poor parallelization and runtime performance. This work presents Low-Latency neural codec for Stereo video Streaming (LLSS) a novel parallel stereo video coding method designed for fast and efficient low-latency stereo video streaming. Instead of using a sequential cross-view motion compensation like existing methods LLSS introduces a bidirectional feature shifting module to directly exploit mutual information among views and encode them effectively with a joint cross-view prior model for entropy coding. Thanks to this design LLSS processes left and right views in parallel minimizing latency; all while substantially improving R-D performance compared to both existing neural and conventional codecs.
https://openaccess.thecvf.com/content/CVPR2024/papers/Hou_Low-Latency_Neural_Stereo_Streaming_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.17879
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Hou_Low-Latency_Neural_Stereo_Streaming_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Hou_Low-Latency_Neural_Stereo_Streaming_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Hou_Low-Latency_Neural_Stereo_CVPR_2024_supplemental.pdf
null
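As one possible reading of the "bidirectional feature shifting module" in the LLSS abstract, the sketch below shifts each view's features along the width (epipolar) axis and fuses them into the other view with a 1x1 convolution, so both branches exchange information in parallel rather than sequentially. The shift offsets, the shared fusion layer, and the overall design are speculative assumptions, not the paper's module.

```python
import torch
import torch.nn as nn

class BiFeatureShift(nn.Module):
    """Hedged sketch of parallel cross-view feature exchange for stereo coding."""
    def __init__(self, channels: int, shifts=(0, 4, 8, 16)):
        super().__init__()
        self.shifts = shifts
        # Fuse the native features with several shifted copies of the other view.
        self.fuse = nn.Conv2d(channels * (1 + len(shifts)), channels, kernel_size=1)

    def _shifted_stack(self, other: torch.Tensor, sign: int):
        # Shift the other view's features along the width axis (candidate disparities).
        return [torch.roll(other, shifts=sign * s, dims=-1) for s in self.shifts]

    def forward(self, left: torch.Tensor, right: torch.Tensor):
        # left, right: (B, C, H, W); both outputs can be computed in parallel.
        left_out = self.fuse(torch.cat([left] + self._shifted_stack(right, +1), dim=1))
        right_out = self.fuse(torch.cat([right] + self._shifted_stack(left, -1), dim=1))
        return left_out, right_out
```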
Understanding Video Transformers via Universal Concept Discovery
Matthew Kowal, Achal Dave, Rares Ambrus, Adrien Gaidon, Konstantinos G. Derpanis, Pavel Tokmakov
This paper studies the problem of concept-based interpretability of transformer representations for videos. Concretely we seek to explain the decision-making process of video transformers based on high-level spatiotemporal concepts that are automatically discovered. Prior research on concept-based interpretability has concentrated solely on image-level tasks. Comparatively video models deal with the added temporal dimension increasing complexity and posing challenges in identifying dynamic concepts over time. In this work we systematically address these challenges by introducing the first Video Transformer Concept Discovery (VTCD) algorithm. To this end we propose an efficient approach for unsupervised identification of units of video transformer representations (concepts) and ranking their importance to the output of a model. The resulting concepts are highly interpretable revealing spatio-temporal reasoning mechanisms and object-centric representations in unstructured video models. Performing this analysis jointly over a diverse set of supervised and self-supervised representations we discover that some of these mechanisms are universal in video transformers. Finally we show that VTCD can be used for fine-grained action recognition and video object segmentation.
https://openaccess.thecvf.com/content/CVPR2024/papers/Kowal_Understanding_Video_Transformers_via_Universal_Concept_Discovery_CVPR_2024_paper.pdf
http://arxiv.org/abs/2401.10831
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Kowal_Understanding_Video_Transformers_via_Universal_Concept_Discovery_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Kowal_Understanding_Video_Transformers_via_Universal_Concept_Discovery_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Kowal_Understanding_Video_Transformers_CVPR_2024_supplemental.pdf
null
Exploring the Transferability of Visual Prompting for Multimodal Large Language Models
Yichi Zhang, Yinpeng Dong, Siyuan Zhang, Tianzan Min, Hang Su, Jun Zhu
Although Multimodal Large Language Models (MLLMs) have demonstrated promising versatile capabilities their performance is still inferior to specialized models on downstream tasks which makes adaptation necessary to enhance their utility. However fine-tuning methods require independent training for every model leading to huge computation and memory overheads. In this paper we propose a novel setting where we aim to improve the performance of diverse MLLMs with a group of shared parameters optimized for a downstream task. To achieve this we propose Transferable Visual Prompting (TVP) a simple and effective approach to generate visual prompts that can transfer to different models and improve their performance on downstream tasks after being trained on only one model. We introduce two strategies to address the issue of cross-model feature corruption of existing visual prompting methods and enhance the transferability of the learned prompts including 1) Feature Consistency Alignment which imposes constraints on the prompted feature changes to maintain task-agnostic knowledge; 2) Task Semantics Enrichment which encourages the prompted images to contain richer task-specific semantics with language guidance. We validate the effectiveness of TVP through extensive experiments with 6 modern MLLMs on a wide variety of tasks ranging from object recognition and counting to multimodal reasoning and hallucination correction.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_Exploring_the_Transferability_of_Visual_Prompting_for_Multimodal_Large_Language_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.11207
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Exploring_the_Transferability_of_Visual_Prompting_for_Multimodal_Large_Language_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Exploring_the_Transferability_of_Visual_Prompting_for_Multimodal_Large_Language_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhang_Exploring_the_Transferability_CVPR_2024_supplemental.pdf
null
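To illustrate the kind of shared visual prompt the TVP abstract refers to, here is a minimal sketch of a learnable border perturbation added to every input image; once trained on one model, the same pixels could in principle be applied to other models' inputs. The pad width, clamping range, and border layout are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VisualPrompt(nn.Module):
    """A shared, input-agnostic pixel prompt added to every image (sketch)."""
    def __init__(self, image_size: int = 224, pad: int = 16):
        super().__init__()
        # Binary mask restricting the prompt to a border of width `pad`.
        mask = torch.zeros(1, 3, image_size, image_size)
        mask[..., :pad, :] = 1
        mask[..., -pad:, :] = 1
        mask[..., :, :pad] = 1
        mask[..., :, -pad:] = 1
        self.register_buffer("mask", mask)
        # The only trainable parameters: a shared pixel perturbation.
        self.delta = nn.Parameter(torch.zeros(1, 3, image_size, image_size))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Only the border pixels are perturbed; the prompt is shared across
        # inputs and does not touch any model weights.
        return torch.clamp(x + self.delta * self.mask, 0.0, 1.0)
```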
PointOBB: Learning Oriented Object Detection via Single Point Supervision
Junwei Luo, Xue Yang, Yi Yu, Qingyun Li, Junchi Yan, Yansheng Li
Single point-supervised object detection is gaining attention due to its cost-effectiveness. However existing approaches focus on generating horizontal bounding boxes (HBBs) while ignoring oriented bounding boxes (OBBs) commonly used for objects in aerial images. This paper proposes PointOBB the first single Point-based OBB generation method for oriented object detection. PointOBB operates through the collaborative utilization of three distinctive views: an original view a resized view and a rotated/flipped (rot/flp) view. Upon the original view we leverage the resized and rot/flp views to build a scale augmentation module and an angle acquisition module respectively. In the former module a Scale-Sensitive Consistency (SSC) loss is designed to enhance the deep network's ability to perceive the object scale. For accurate object angle predictions the latter module incorporates self-supervised learning to predict angles which is associated with a scale-guided Dense-to-Sparse (DS) matching strategy for aggregating dense angles corresponding to sparse objects. The resized and rot/flp views are switched using a progressive multi-view switching strategy during training to achieve coupled optimization of scale and angle. Experimental results on the DIOR-R and DOTA-v1.0 datasets demonstrate that PointOBB achieves promising performance and significantly outperforms potential point-supervised baselines.
https://openaccess.thecvf.com/content/CVPR2024/papers/Luo_PointOBB_Learning_Oriented_Object_Detection_via_Single_Point_Supervision_CVPR_2024_paper.pdf
http://arxiv.org/abs/2311.14757
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Luo_PointOBB_Learning_Oriented_Object_Detection_via_Single_Point_Supervision_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Luo_PointOBB_Learning_Oriented_Object_Detection_via_Single_Point_Supervision_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Luo_PointOBB_Learning_Oriented_CVPR_2024_supplemental.pdf
null
Intrinsic Image Diffusion for Indoor Single-view Material Estimation
Peter Kocsis, Vincent Sitzmann, Matthias Nießner
We present Intrinsic Image Diffusion a generative model for appearance decomposition of indoor scenes. Given a single input view we sample multiple possible material explanations represented as albedo roughness and metallic maps. Appearance decomposition poses a considerable challenge in computer vision due to the inherent ambiguity between lighting and material properties and the lack of real datasets. To address this issue we advocate for a probabilistic formulation where instead of attempting to directly predict the true material properties we employ a conditional generative model to sample from the solution space. Furthermore we show that the strong learned prior of recent diffusion models trained on large-scale real-world images can be adapted to material estimation and greatly improves the generalization to real images. Our method produces significantly sharper more consistent and more detailed materials outperforming state-of-the-art methods by 1.5 dB in PSNR and achieving a 45% better FID score on albedo prediction. We demonstrate the effectiveness of our approach through experiments on both synthetic and real-world datasets.
https://openaccess.thecvf.com/content/CVPR2024/papers/Kocsis_Intrinsic_Image_Diffusion_for_Indoor_Single-view_Material_Estimation_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Kocsis_Intrinsic_Image_Diffusion_for_Indoor_Single-view_Material_Estimation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Kocsis_Intrinsic_Image_Diffusion_for_Indoor_Single-view_Material_Estimation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Kocsis_Intrinsic_Image_Diffusion_CVPR_2024_supplemental.pdf
null
SHAP-EDITOR: Instruction-Guided Latent 3D Editing in Seconds
Minghao Chen, Junyu Xie, Iro Laina, Andrea Vedaldi
We propose a novel feed-forward 3D editing framework called Shap-Editor. Prior research on editing 3D objects primarily concentrated on editing individual objects by leveraging off-the-shelf 2D image editing networks utilizing a process called 3D distillation which transfers knowledge from the 2D network to the 3D asset. Distillation necessitates at least tens of minutes per asset to attain satisfactory editing results and is thus not very practical. In contrast we ask whether 3D editing can be carried out directly by a feed-forward network eschewing test-time optimization. In particular we hypothesise that this process can be greatly simplified by first encoding 3D objects into a suitable latent space. We validate this hypothesis by building upon the latent space of Shap-E. We demonstrate that direct 3D editing in this space is possible and efficient by learning a feed-forward editor network that only requires approximately one second per edit. Our experiments show that Shap-Editor generalises well to both in-distribution and out-of-distribution 3D assets with different prompts and achieves superior performance compared to methods that carry out test-time optimisation for each edited instance.
https://openaccess.thecvf.com/content/CVPR2024/papers/Chen_SHAP-EDITOR_Instruction-Guided_Latent_3D_Editing_in_Seconds_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Chen_SHAP-EDITOR_Instruction-Guided_Latent_3D_Editing_in_Seconds_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Chen_SHAP-EDITOR_Instruction-Guided_Latent_3D_Editing_in_Seconds_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Chen_SHAP-EDITOR_Instruction-Guided_Latent_CVPR_2024_supplemental.pdf
null
HyperSDFusion: Bridging Hierarchical Structures in Language and Geometry for Enhanced 3D Text2Shape Generation
Zhiying Leng, Tolga Birdal, Xiaohui Liang, Federico Tombari
3D shape generation from text is a fundamental task in 3D representation learning. The text-shape pairs exhibit a hierarchical structure where a general text like "chair" covers all 3D shapes of the chair while more detailed prompts refer to more specific shapes. Furthermore both text and 3D shapes are inherently hierarchical structures. However existing Text2Shape methods such as SDFusion do not exploit this. In this work we propose HyperSDFusion a dual-branch diffusion model that generates 3D shapes from a given text. Since hyperbolic space is suitable for handling hierarchical data we propose to learn the hierarchical representations of text and 3D shapes in hyperbolic space. First we introduce a hyperbolic text-image encoder to learn the sequential and multi-modal hierarchical features of text in hyperbolic space. In addition we design a hyperbolic text-graph convolution module to learn the hierarchical features of text in hyperbolic space. In order to fully utilize these text features we introduce a dual-branch structure to embed text features in 3D feature space. Finally to endow the generated 3D shapes with a hierarchical structure we devise a hyperbolic hierarchical loss. Our method is the first to explore the hyperbolic hierarchical representation for text-to-shape generation. Experiments on the existing text-to-shape paired dataset Text2Shape achieve state-of-the-art results. We release our implementation under HyperSDFusion.github.io.
https://openaccess.thecvf.com/content/CVPR2024/papers/Leng_HyperSDFusion_Bridging_Hierarchical_Structures_in_Language_and_Geometry_for_Enhanced_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.00372
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Leng_HyperSDFusion_Bridging_Hierarchical_Structures_in_Language_and_Geometry_for_Enhanced_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Leng_HyperSDFusion_Bridging_Hierarchical_Structures_in_Language_and_Geometry_for_Enhanced_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Leng_HyperSDFusion_Bridging_Hierarchical_CVPR_2024_supplemental.pdf
null
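The HyperSDFusion abstract relies on embedding hierarchical features in hyperbolic space. The snippet below shows the standard exponential map onto the Poincare ball and the corresponding geodesic distance, which is the usual machinery for such embeddings; any correspondence to the paper's exact encoder is assumed, and the curvature value is a placeholder.

```python
import torch

def expmap0(v: torch.Tensor, c: float = 1.0, eps: float = 1e-7) -> torch.Tensor:
    """Exponential map at the origin of the Poincare ball with curvature c.

    This is the standard way to lift Euclidean features into hyperbolic space.
    """
    sqrt_c = c ** 0.5
    norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

def poincare_distance(x: torch.Tensor, y: torch.Tensor, c: float = 1.0, eps: float = 1e-7) -> torch.Tensor:
    """Geodesic distance on the Poincare ball; hierarchically related pairs
    can be supervised to be close under this metric."""
    sqrt_c = c ** 0.5
    diff2 = (x - y).pow(2).sum(-1)
    denom = (1 - c * x.pow(2).sum(-1)).clamp_min(eps) * (1 - c * y.pow(2).sum(-1)).clamp_min(eps)
    arg = 1 + 2 * c * diff2 / denom
    return (1.0 / sqrt_c) * torch.acosh(arg.clamp_min(1.0 + eps))

if __name__ == "__main__":
    feats = torch.randn(4, 64) * 0.1       # e.g. Euclidean text features
    hyp = expmap0(feats)                    # lifted onto the Poincare ball
    print(poincare_distance(hyp[0], hyp[1]))
```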
OmniParser: A Unified Framework for Text Spotting Key Information Extraction and Table Recognition
Jianqiang Wan, Sibo Song, Wenwen Yu, Yuliang Liu, Wenqing Cheng, Fei Huang, Xiang Bai, Cong Yao, Zhibo Yang
Recently visually-situated text parsing (VsTP) has experienced notable advancements driven by the increasing demand for automated document understanding and the emergence of Generative Large Language Models (LLMs) capable of processing document-based questions. Various methods have been proposed to address the challenging problem of VsTP. However due to the diversified targets and heterogeneous schemas previous works usually design task-specific architectures and objectives for individual tasks which inadvertently leads to modal isolation and complex workflow. In this paper we propose a unified paradigm for parsing visually-situated text across diverse scenarios. Specifically we devise a universal model called OmniParser which can simultaneously handle three typical visually-situated text parsing tasks: text spotting key information extraction and table recognition. In OmniParser all tasks share the unified encoder-decoder architecture the unified objective: point-conditioned text generation and the unified input & output representation: prompt & structured sequences. Extensive experiments demonstrate that the proposed OmniParser achieves state-of-the-art (SOTA) or highly competitive performances on 7 datasets for the three visually-situated text parsing tasks despite its unified concise design. The code is available at https://github.com/AlibabaResearch/AdvancedLiterateMachinery.
https://openaccess.thecvf.com/content/CVPR2024/papers/Wan_OmniParser_A_Unified_Framework_for_Text_Spotting_Key_Information_Extraction_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.19128
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Wan_OmniParser_A_Unified_Framework_for_Text_Spotting_Key_Information_Extraction_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Wan_OmniParser_A_Unified_Framework_for_Text_Spotting_Key_Information_Extraction_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wan_OmniParser_A_Unified_CVPR_2024_supplemental.pdf
null
Are Conventional SNNs Really Efficient? A Perspective from Network Quantization
Guobin Shen, Dongcheng Zhao, Tenglong Li, Jindong Li, Yi Zeng
Spiking Neural Networks (SNNs) have been widely praised for their high energy efficiency and immense potential. However comprehensive research that critically contrasts and correlates SNNs with quantized Artificial Neural Networks (ANNs) remains scant often leading to skewed comparisons lacking fairness towards ANNs. This paper introduces a unified perspective illustrating that the time steps in SNNs and quantized bit-widths of activation values present analogous representations. Building on this we present a more pragmatic and rational approach to estimating the energy consumption of SNNs. Diverging from the conventional Synaptic Operations (SynOps) we champion the "Bit Budget" concept. This notion permits an intricate discourse on strategically allocating computational and storage resources between weights activation values and temporal steps under stringent hardware constraints. Guided by the Bit Budget paradigm we discern that pivoting efforts towards spike patterns and weight quantization rather than temporal attributes elicits profound implications for model performance. Utilizing the Bit Budget for holistic design consideration of SNNs elevates model performance across diverse data types encompassing static imagery and neuromorphic datasets. Our revelations bridge the theoretical chasm between SNNs and quantized ANNs and illuminate a pragmatic trajectory for future endeavors in energy-efficient neural computations.
https://openaccess.thecvf.com/content/CVPR2024/papers/Shen_Are_Conventional_SNNs_Really_Efficient_A_Perspective_from_Network_Quantization_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Shen_Are_Conventional_SNNs_Really_Efficient_A_Perspective_from_Network_Quantization_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Shen_Are_Conventional_SNNs_Really_Efficient_A_Perspective_from_Network_Quantization_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Shen_Are_Conventional_SNNs_CVPR_2024_supplemental.pdf
null
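One hedged way to picture the "Bit Budget" analogy in the abstract above is to account for the bits spent per weight and per activation across time steps, so an SNN with binary spikes over several time steps can be compared with a one-step quantized ANN. The additive form below is an illustrative assumption only; the paper's actual accounting may differ.

```python
def bit_budget(weight_bits: int, act_bits: int, time_steps: int) -> int:
    """Illustrative bit accounting: weights are stored once, while activation
    bits are spent at every time step. This formula is an assumption made
    here for intuition, not the paper's definition."""
    return weight_bits + act_bits * time_steps

# Under this accounting, an SNN with 1-bit spikes over 4 time steps and 8-bit
# weights matches a one-step ANN with 4-bit activations and the same weights:
assert bit_budget(weight_bits=8, act_bits=1, time_steps=4) == bit_budget(8, 4, 1)
```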
Training Like a Medical Resident: Context-Prior Learning Toward Universal Medical Image Segmentation
Yunhe Gao
A major focus of clinical imaging workflow is disease diagnosis and management leading to medical imaging datasets strongly tied to specific clinical objectives. This scenario has led to the prevailing practice of developing task-specific segmentation models without gaining insights from widespread imaging cohorts. Inspired by the training program of medical radiology residents we propose a shift towards universal medical image segmentation a paradigm aiming to build medical image understanding foundation models by leveraging the diversity and commonality across clinical targets body regions and imaging modalities. Towards this goal we develop Hermes a novel context-prior learning approach to address the challenges of data heterogeneity and annotation differences in medical image segmentation. In a large collection of eleven diverse datasets (2438 3D images) across five modalities (CT PET T1 T2 and cine MRI) and multiple body regions we demonstrate the merit of the universal paradigm over the traditional paradigm on addressing multiple tasks within a single model. By exploiting the synergy across tasks Hermes achieves state-of-the-art performance on all testing datasets and shows superior model scalability. Results on two additional datasets reveal Hermes' strong performance for transfer learning incremental learning and generalization to downstream tasks. Hermes' learned priors demonstrate an appealing ability to reflect the intricate relations among tasks and modalities which aligns with the established anatomical and imaging principles in radiology. The code is available.
https://openaccess.thecvf.com/content/CVPR2024/papers/Gao_Training_Like_a_Medical_Resident_Context-Prior_Learning_Toward_Universal_Medical_CVPR_2024_paper.pdf
http://arxiv.org/abs/2306.02416
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Gao_Training_Like_a_Medical_Resident_Context-Prior_Learning_Toward_Universal_Medical_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Gao_Training_Like_a_Medical_Resident_Context-Prior_Learning_Toward_Universal_Medical_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Gao_Training_Like_a_CVPR_2024_supplemental.pdf
null
Material Palette: Extraction of Materials from a Single Image
Ivan Lopes, Fabio Pizzati, Raoul de Charette
Physically-Based Rendering (PBR) is key to modeling the interaction between light and materials and finds extensive applications across computer graphics domains. However acquiring PBR materials is costly and requires special apparatus. In this paper we propose a method to extract PBR materials from a single real-world image. We do so in two steps: first we map regions of the image to material concept tokens using a diffusion model allowing the sampling of texture images resembling each material in the scene. Second we leverage a separate network to decompose the generated textures into spatially varying BRDFs (SVBRDFs) offering us readily usable materials for rendering applications. Our approach relies on existing synthetic material libraries with SVBRDF ground truth. It exploits a diffusion-generated RGB texture dataset to allow generalization to new samples using unsupervised domain adaptation (UDA). Our contributions are thoroughly evaluated on synthetic and real-world datasets. We further demonstrate the applicability of our method for editing 3D scenes with materials estimated from real photographs. Along with video we share code and models as open-source on the project page: https://github.com/astra-vision/MaterialPalette
https://openaccess.thecvf.com/content/CVPR2024/papers/Lopes_Material_Palette_Extraction_of_Materials_from_a_Single_Image_CVPR_2024_paper.pdf
http://arxiv.org/abs/2311.17060
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Lopes_Material_Palette_Extraction_of_Materials_from_a_Single_Image_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Lopes_Material_Palette_Extraction_of_Materials_from_a_Single_Image_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Lopes_Material_Palette_Extraction_CVPR_2024_supplemental.pdf
null
Initialization Matters for Adversarial Transfer Learning
Andong Hua, Jindong Gu, Zhiyu Xue, Nicholas Carlini, Eric Wong, Yao Qin
With the prevalence of the Pretraining-Finetuning paradigm in transfer learning the robustness of downstream tasks has become a critical concern. In this work we delve into adversarial robustness in transfer learning and reveal the critical role of initialization including both the pretrained model and the linear head. First we discover the necessity of an adversarially robust pretrained model. Specifically we reveal that with a standard pretrained model Parameter-Efficient Finetuning (PEFT) methods either fail to be adversarially robust or continue to exhibit significantly degraded adversarial robustness on downstream tasks even with adversarial training during finetuning. Leveraging a robust pretrained model surprisingly we observe that a simple linear probing can outperform full finetuning and other PEFT methods with random initialization on certain datasets. We further identify that linear probing excels in preserving robustness from the robust pretraining. Based on this we propose Robust Linear Initialization (RoLI) for adversarial finetuning which initializes the linear head with the weights obtained by adversarial linear probing to maximally inherit the robustness from pretraining. Across five different image classification datasets we demonstrate the effectiveness of RoLI and achieve new state-of-the-art results. Our code is available at https://github.com/DongXzz/RoLI.
https://openaccess.thecvf.com/content/CVPR2024/papers/Hua_Initialization_Matters_for_Adversarial_Transfer_Learning_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.05716
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Hua_Initialization_Matters_for_Adversarial_Transfer_Learning_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Hua_Initialization_Matters_for_Adversarial_Transfer_Learning_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Hua_Initialization_Matters_for_CVPR_2024_supplemental.pdf
null
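The RoLI recipe described above is a two-stage initialization: adversarial linear probing on a frozen robust backbone, then full adversarial finetuning starting from that probed head. The sketch below captures the flow under stated assumptions: `adv_train_linear_head` stands in for any adversarial-training loop and is a hypothetical helper, not the paper's API, and the backbone is assumed to output flat feature vectors.

```python
import torch
import torch.nn as nn

def roli_init(backbone: nn.Module, feat_dim: int, num_classes: int,
              probe_loader, adv_train_linear_head) -> nn.Module:
    """Two-stage initialization in the spirit of the abstract (sketch)."""
    head = nn.Linear(feat_dim, num_classes)

    # Stage 1: adversarial linear probing on frozen robust features.
    for p in backbone.parameters():
        p.requires_grad_(False)
    adv_train_linear_head(backbone, head, probe_loader)  # assumed helper

    # Stage 2: unfreeze everything; the probed head is the initialization
    # for subsequent adversarial full finetuning.
    for p in backbone.parameters():
        p.requires_grad_(True)
    # Assumes backbone(x) returns (B, feat_dim) features.
    return nn.Sequential(backbone, head)
```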
RealCustom: Narrowing Real Text Word for Real-Time Open-Domain Text-to-Image Customization
Mengqi Huang, Zhendong Mao, Mingcong Liu, Qian He, Yongdong Zhang
Text-to-image customization which aims to synthesize text-driven images for the given subjects has recently revolutionized content creation. Existing works follow the pseudo-word paradigm i.e. represent the given subjects as pseudo-words and then compose them with the given text. However the inherent entangled influence scope of pseudo-words with the given text results in a dual-optimum paradox i.e. the similarity of the given subjects and the controllability of the given text cannot be optimal simultaneously. We present RealCustom which for the first time disentangles similarity from controllability by precisely limiting subject influence to relevant parts only, achieved by gradually narrowing the real text word from its general connotation to the specific subject and using its cross-attention to distinguish relevance. Specifically RealCustom introduces a novel "train-inference" decoupled framework: (1) during training RealCustom learns general alignment between visual conditions and original textual conditions through a novel adaptive scoring module to adaptively modulate influence quantity; (2) during inference a novel adaptive mask guidance strategy is proposed to iteratively update the influence scope and influence quantity of the given subjects to gradually narrow the generation of the real text word. Comprehensive experiments demonstrate the superior real-time customization ability of RealCustom in the open domain achieving both unprecedented similarity of the given subjects and controllability of the given text for the first time. The project page is https://corleone-huang.github.io/realcustom/.
https://openaccess.thecvf.com/content/CVPR2024/papers/Huang_RealCustom_Narrowing_Real_Text_Word_for_Real-Time_Open-Domain_Text-to-Image_Customization_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.00483
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Huang_RealCustom_Narrowing_Real_Text_Word_for_Real-Time_Open-Domain_Text-to-Image_Customization_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Huang_RealCustom_Narrowing_Real_Text_Word_for_Real-Time_Open-Domain_Text-to-Image_Customization_CVPR_2024_paper.html
CVPR 2024
null
null
MicroDiffusion: Implicit Representation-Guided Diffusion for 3D Reconstruction from Limited 2D Microscopy Projections
Mude Hui, Zihao Wei, Hongru Zhu, Fei Xia, Yuyin Zhou
Volumetric optical microscopy using non-diffracting beams enables rapid imaging of 3D volumes by projecting them axially to 2D images but lacks crucial depth information. Addressing this we introduce MicroDiffusion a pioneering tool facilitating high-quality depth-resolved 3D volume reconstruction from limited 2D projections. While existing Implicit Neural Representation (INR) models often yield incomplete outputs and Denoising Diffusion Probabilistic Models (DDPM) excel at capturing details our method integrates INR's structural coherence with DDPM's fine-detail enhancement capabilities. We pretrain an INR model to transform 2D axially-projected images into a preliminary 3D volume. This pretrained INR acts as a global prior guiding DDPM's generative process through a linear interpolation between INR outputs and noise inputs. This strategy enriches the diffusion process with structured 3D information enhancing detail and reducing noise in localized 2D images. By conditioning the diffusion model on the closest 2D projection MicroDiffusion substantially enhances fidelity in resulting 3D reconstructions surpassing INR and standard DDPM outputs with unparalleled image quality and structural fidelity. Our code and dataset are available at https://github.com/UCSC-VLAA/MicroDiffusion.
https://openaccess.thecvf.com/content/CVPR2024/papers/Hui_MicroDiffusion_Implicit_Representation-Guided_Diffusion_for_3D_Reconstruction_from_Limited_2D_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.10815
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Hui_MicroDiffusion_Implicit_Representation-Guided_Diffusion_for_3D_Reconstruction_from_Limited_2D_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Hui_MicroDiffusion_Implicit_Representation-Guided_Diffusion_for_3D_Reconstruction_from_Limited_2D_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Hui_MicroDiffusion_Implicit_Representation-Guided_CVPR_2024_supplemental.pdf
null
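The MicroDiffusion abstract states that the pretrained INR's preliminary volume guides the diffusion model through a linear interpolation with noise, so generation starts from a structurally coherent prior rather than pure noise. A minimal sketch of that blending step, with the weight `alpha` as an assumed hyperparameter:

```python
import torch

def inr_guided_noise(inr_volume: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Blend the INR's preliminary volume with Gaussian noise to form the
    diffusion model's input (hedged sketch of the guidance idea)."""
    noise = torch.randn_like(inr_volume)
    # alpha = 1.0 would start from the INR output alone, alpha = 0.0 from pure noise.
    return alpha * inr_volume + (1.0 - alpha) * noise
```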
Task-Conditioned Adaptation of Visual Features in Multi-Task Policy Learning
Pierre Marza, Laetitia Matignon, Olivier Simonin, Christian Wolf
Successfully addressing a wide variety of tasks is a core ability of autonomous agents requiring flexibly adapting the underlying decision-making strategies and as we argue in this work also adapting the perception modules. An analogical argument would be the human visual system which uses top-down signals to focus attention determined by the current task. Similarly we adapt pre-trained large vision models conditioned on specific downstream tasks in the context of multi-task policy learning. We introduce task-conditioned adapters that do not require finetuning any pre-trained weights combined with a single policy trained with behavior cloning and capable of addressing multiple tasks. We condition the visual adapters on task embeddings which can be selected at inference if the task is known or alternatively inferred from a set of example demonstrations. To this end we propose a new optimization-based estimator. We evaluate the method on a wide variety of tasks from the CortexBench benchmark and show that compared to existing work it can be addressed with a single policy. In particular we demonstrate that adapting visual features is a key design choice and that the method generalizes to unseen tasks given a few demonstrations.
https://openaccess.thecvf.com/content/CVPR2024/papers/Marza_Task-Conditioned_Adaptation_of_Visual_Features_in_Multi-Task_Policy_Learning_CVPR_2024_paper.pdf
http://arxiv.org/abs/2402.07739
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Marza_Task-Conditioned_Adaptation_of_Visual_Features_in_Multi-Task_Policy_Learning_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Marza_Task-Conditioned_Adaptation_of_Visual_Features_in_Multi-Task_Policy_Learning_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Marza_Task-Conditioned_Adaptation_of_CVPR_2024_supplemental.zip
null
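As a hedged illustration of the task-conditioned adapters described above, the module below modulates frozen visual tokens with scale and shift parameters predicted from a task embedding (a FiLM-style choice assumed here for concreteness), leaving all pretrained weights untouched.

```python
import torch
import torch.nn as nn

class TaskConditionedAdapter(nn.Module):
    """Task-embedding-conditioned modulation of frozen visual features (sketch)."""
    def __init__(self, feat_dim: int, task_dim: int):
        super().__init__()
        # Predict per-channel scale (gamma) and shift (beta) from the task embedding.
        self.to_film = nn.Linear(task_dim, 2 * feat_dim)

    def forward(self, feats: torch.Tensor, task_emb: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, feat_dim) frozen visual tokens; task_emb: (B, task_dim),
        # either selected for a known task or inferred from demonstrations.
        gamma, beta = self.to_film(task_emb).chunk(2, dim=-1)
        return feats * (1 + gamma.unsqueeze(1)) + beta.unsqueeze(1)
```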
L0-Sampler: An L0 Model Guided Volume Sampling for NeRF
Liangchen Li, Juyong Zhang
Since its proposal Neural Radiance Fields (NeRF) has achieved great success in related tasks mainly adopting the hierarchical volume sampling (HVS) strategy for volume rendering. However the HVS of NeRF approximates distributions using piecewise constant functions which provides a relatively rough estimation. Based on the observation that a well-trained weight function w(t) and the L_0 distance between points and the surface have very high similarity we propose L_0-Sampler by incorporating the L_0 model into w(t) to guide the sampling process. Specifically we propose using piecewise exponential functions rather than piecewise constant functions for interpolation which can not only approximate quasi-L_0 weight distributions along rays quite well but can be easily implemented with a few lines of code change without additional computational burden. Stable performance improvements can be achieved by applying L_0-Sampler to NeRF and related tasks like 3D reconstruction. Code is available at https://ustc3dv.github.io/L0-Sampler/.
https://openaccess.thecvf.com/content/CVPR2024/papers/Li_L0-Sampler_An_L0_Model_Guided_Volume_Sampling_for_NeRF_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Li_L0-Sampler_An_L0_Model_Guided_Volume_Sampling_for_NeRF_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Li_L0-Sampler_An_L0_Model_Guided_Volume_Sampling_for_NeRF_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_L0-Sampler_An_L0_CVPR_2024_supplemental.zip
null
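The L0-Sampler abstract contrasts piecewise exponential interpolation of the ray weight function with NeRF's piecewise constant bins and notes that it needs only a few lines of code. One natural reading is geometric interpolation of the weights within each bin, sketched below; the exact parameterization in the paper may differ.

```python
import torch

def piecewise_exponential_interp(t: torch.Tensor, t_bins: torch.Tensor,
                                 w_bins: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Interpolate a ray's weight function exponentially within each bin.

    Within bin [t_i, t_{i+1}] the weight follows w_i * (w_{i+1}/w_i)**u,
    where u is the normalized position in the bin (assumed parameterization).
    t: (M,) query positions; t_bins: (K,) sorted bin edges; w_bins: (K,) weights.
    """
    idx = torch.clamp(torch.searchsorted(t_bins, t) - 1, 0, len(t_bins) - 2)
    t0, t1 = t_bins[idx], t_bins[idx + 1]
    w0 = w_bins[idx].clamp_min(eps)
    w1 = w_bins[idx + 1].clamp_min(eps)
    u = (t - t0) / (t1 - t0 + eps)
    return w0 * (w1 / w0) ** u
```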
Hybrid Proposal Refiner: Revisiting DETR Series from the Faster R-CNN Perspective
Jinjing Zhao, Fangyun Wei, Chang Xu
With the transformative impact of the Transformer DETR pioneered the application of the encoder-decoder architecture to object detection. A collection of follow-up research e.g. Deformable DETR aims to enhance DETR while adhering to the encoder-decoder design. In this work we revisit the DETR series through the lens of Faster R-CNN. We find that the DETR resonates with the underlying principles of Faster R-CNN's RPN-refiner design but benefits from end-to-end detection owing to the incorporation of Hungarian matching. We systematically adapt the Faster R-CNN towards the Deformable DETR by integrating or repurposing each component of Deformable DETR and note that Deformable DETR's improved performance over Faster R-CNN is attributed to the adoption of advanced modules such as a superior proposal refiner (e.g. deformable attention rather than RoI Align). When viewing the DETR through the RPN-refiner paradigm we delve into various proposal refinement techniques such as deformable attention cross attention and dynamic convolution. These proposal refiners cooperate well with each other; thus we synergistically combine them to establish a Hybrid Proposal Refiner (HPR). Our HPR is versatile and can be incorporated into various DETR detectors. For instance by integrating HPR to a strong DETR detector we achieve an AP of 54.9 on the COCO benchmark utilizing a ResNet-50 backbone and a 36-epoch training schedule. Code and models are available at https://github.com/ZhaoJingjing713/HPR.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhao_Hybrid_Proposal_Refiner_Revisiting_DETR_Series_from_the_Faster_R-CNN_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhao_Hybrid_Proposal_Refiner_Revisiting_DETR_Series_from_the_Faster_R-CNN_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhao_Hybrid_Proposal_Refiner_Revisiting_DETR_Series_from_the_Faster_R-CNN_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhao_Hybrid_Proposal_Refiner_CVPR_2024_supplemental.pdf
null
Practical Measurements of Translucent Materials with Inter-Pixel Translucency Prior
Zhenyu Chen, Jie Guo, Shuichang Lai, Ruoyu Fu, Mengxun Kong, Chen Wang, Hongyu Sun, Zhebin Zhang, Chen Li, Yanwen Guo
Material appearance is a key component of photorealism with a pronounced impact on human perception. Although there are many prior works aimed at measuring opaque materials using light-weight setups (e.g. consumer-level cameras) little attention has been paid to acquiring the optical properties of translucent materials which are also quite common in nature. In this paper we present a practical method for acquiring scattering properties of translucent materials based solely on ordinary images captured with unknown lighting and camera parameters. The key to our method is an inter-pixel translucency prior which states that image pixels of a given homogeneous translucent material typically form curves (dubbed translucent curves) in the RGB space whose shapes are determined by the parameters of the material. We leverage this prior in a specially-designed convolutional neural network comprising multiple encoders a translucency-aware feature fusion module and a cascaded decoder. We demonstrate through both visual comparisons and quantitative evaluations that high accuracy can be achieved on a wide range of real-world translucent materials.
https://openaccess.thecvf.com/content/CVPR2024/papers/Chen_Practical_Measurements_of_Translucent_Materials_with_Inter-Pixel_Translucency_Prior_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Chen_Practical_Measurements_of_Translucent_Materials_with_Inter-Pixel_Translucency_Prior_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Chen_Practical_Measurements_of_Translucent_Materials_with_Inter-Pixel_Translucency_Prior_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Chen_Practical_Measurements_of_CVPR_2024_supplemental.pdf
null
TurboSL: Dense Accurate and Fast 3D by Neural Inverse Structured Light
Parsa Mirdehghan, Maxx Wu, Wenzheng Chen, David B. Lindell, Kiriakos N. Kutulakos
We show how to turn a noisy and fragile active triangulation technique--three-pattern structured light with a grayscale camera--into a fast and powerful tool for 3D capture: able to output sub-pixel accurate disparities at megapixel resolution along with reflectance normals and a no-reference estimate of its own pixelwise 3D error. To achieve this we formulate structured-light decoding as a neural inverse rendering problem. We show that despite having just three or four input images--all from the same viewpoint--this problem can be tractably solved by TurboSL an algorithm that combines (1) a precise image formation model (2) a signed distance field scene representation and (3) projection pattern sequences optimized for accuracy instead of precision. We use TurboSL to reconstruct a variety of complex scenes from images captured at up to 60 fps with a camera and a common projector. Our experiments highlight TurboSL's potential for dense and highly-accurate 3D acquisition from data captured in fractions of a second.
https://openaccess.thecvf.com/content/CVPR2024/papers/Mirdehghan_TurboSL_Dense_Accurate_and_Fast_3D_by_Neural_Inverse_Structured_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Mirdehghan_TurboSL_Dense_Accurate_and_Fast_3D_by_Neural_Inverse_Structured_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Mirdehghan_TurboSL_Dense_Accurate_and_Fast_3D_by_Neural_Inverse_Structured_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Mirdehghan_TurboSL_Dense_Accurate_CVPR_2024_supplemental.zip
null
Text2QR: Harmonizing Aesthetic Customization and Scanning Robustness for Text-Guided QR Code Generation
Guangyang Wu, Xiaohong Liu, Jun Jia, Xuehao Cui, Guangtao Zhai
In the digital era QR codes serve as a linchpin connecting virtual and physical realms. Their pervasive integration across various applications highlights the demand for aesthetically pleasing codes without compromised scannability. However prevailing methods grapple with the intrinsic challenge of balancing customization and scannability. Notably stable-diffusion models have ushered in an epoch of high-quality customizable content generation. This paper introduces Text2QR a pioneering approach leveraging these advancements to address a fundamental challenge: concurrently achieving user-defined aesthetics and scanning robustness. To ensure stable generation of aesthetic QR codes we introduce the QR Aesthetic Blueprint (QAB) module generating a blueprint image exerting control over the entire generation process. Subsequently the Scannability Enhancing Latent Refinement (SELR) process refines the output iteratively in the latent space enhancing scanning robustness. This approach harnesses the potent generation capabilities of stable-diffusion models navigating the trade-off between image aesthetics and QR code scannability. Our experiments demonstrate the seamless fusion of visual appeal with the practical utility of aesthetic QR codes markedly outperforming prior methods. Codes are available at https://github.com/mulns/Text2QR
https://openaccess.thecvf.com/content/CVPR2024/papers/Wu_Text2QR_Harmonizing_Aesthetic_Customization_and_Scanning_Robustness_for_Text-Guided_QR_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.06452
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Wu_Text2QR_Harmonizing_Aesthetic_Customization_and_Scanning_Robustness_for_Text-Guided_QR_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Wu_Text2QR_Harmonizing_Aesthetic_Customization_and_Scanning_Robustness_for_Text-Guided_QR_CVPR_2024_paper.html
CVPR 2024
null
null
GS-IR: 3D Gaussian Splatting for Inverse Rendering
Zhihao Liang, Qi Zhang, Ying Feng, Ying Shan, Kui Jia
We propose GS-IR a novel inverse rendering approach based on 3D Gaussian Splatting (GS) that leverages forward mapping volume rendering to achieve photorealistic novel view synthesis and relighting results. Unlike previous works that use implicit neural representations and volume rendering (e.g. NeRF) which suffer from low expressive power and high computational complexity we extend GS a top-performance representation for novel view synthesis to estimate scene geometry surface material and environment illumination from multi-view images captured under unknown lighting conditions. There are two main problems when introducing GS to inverse rendering: 1) GS does not natively support producing plausible normals; 2) forward mapping (e.g. rasterization and splatting) cannot trace occlusion the way backward mapping (e.g. ray tracing) can. To address these challenges our GS-IR proposes an efficient optimization scheme that incorporates a depth-derivation-based regularization for normal estimation and a baking-based occlusion to model indirect lighting. The flexible and expressive GS representation allows us to achieve fast and compact geometry reconstruction photorealistic novel view synthesis and effective physically-based rendering. We demonstrate the superiority of our method over baseline methods through qualitative and quantitative evaluations on various challenging scenes.
https://openaccess.thecvf.com/content/CVPR2024/papers/Liang_GS-IR_3D_Gaussian_Splatting_for_Inverse_Rendering_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Liang_GS-IR_3D_Gaussian_Splatting_for_Inverse_Rendering_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Liang_GS-IR_3D_Gaussian_Splatting_for_Inverse_Rendering_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Liang_GS-IR_3D_Gaussian_CVPR_2024_supplemental.pdf
null
SynFog: A Photo-realistic Synthetic Fog Dataset based on End-to-end Imaging Simulation for Advancing Real-World Defogging in Autonomous Driving
Yiming Xie, Henglu Wei, Zhenyi Liu, Xiaoyu Wang, Xiangyang Ji
To advance research in learning-based defogging algorithms various synthetic fog datasets have been developed. However existing datasets created using the Atmospheric Scattering Model (ASM) or real-time rendering engines often struggle to produce photo-realistic foggy images that accurately mimic the actual imaging process. This limitation hinders the effective generalization of models from synthetic to real data. In this paper we introduce an end-to-end simulation pipeline designed to generate photo-realistic foggy images. This pipeline comprehensively considers the entire physically-based foggy scene imaging process closely aligning with real-world image capture methods. Based on this pipeline we present a new synthetic fog dataset named SynFog which features both sky light and active lighting conditions as well as three levels of fog density. Experimental results demonstrate that models trained on SynFog exhibit superior performance in visual perception and detection accuracy compared to others when applied to real-world foggy images.
https://openaccess.thecvf.com/content/CVPR2024/papers/Xie_SynFog_A_Photo-realistic_Synthetic_Fog_Dataset_based_on_End-to-end_Imaging_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.17094
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Xie_SynFog_A_Photo-realistic_Synthetic_Fog_Dataset_based_on_End-to-end_Imaging_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Xie_SynFog_A_Photo-realistic_Synthetic_Fog_Dataset_based_on_End-to-end_Imaging_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Xie_SynFog_A_Photo-realistic_CVPR_2024_supplemental.pdf
null
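For context on the Atmospheric Scattering Model that the SynFog entry above contrasts with its end-to-end pipeline, here is a minimal sketch of ASM-style fog synthesis: I = J*t + A*(1-t) with transmission t = exp(-beta*d). The extinction coefficient, airlight value, and toy inputs are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def asm_fog(clean_rgb: np.ndarray, depth_m: np.ndarray,
            beta: float = 0.05, airlight: float = 0.8) -> np.ndarray:
    """Classic Atmospheric Scattering Model: I = J*t + A*(1 - t), t = exp(-beta*d).

    clean_rgb: HxWx3 float image in [0, 1]; depth_m: HxW metric depth.
    beta (extinction) and airlight are illustrative values, not SynFog settings.
    """
    t = np.exp(-beta * depth_m)[..., None]          # per-pixel transmission
    foggy = clean_rgb * t + airlight * (1.0 - t)    # attenuate scene, add airlight
    return np.clip(foggy, 0.0, 1.0)

# Toy usage: a flat gray scene whose depth grows from 1 m to 200 m.
img = np.full((4, 4, 3), 0.5)
depth = np.linspace(1.0, 200.0, 16).reshape(4, 4)
print(asm_fog(img, depth).round(3))
```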
Video Harmonization with Triplet Spatio-Temporal Variation Patterns
Zonghui Guo, Xinyu Han, Jie Zhang, Shiguang Shan, Haiyong Zheng
Video harmonization is an important and challenging task that aims to obtain visually realistic composite videos by automatically adjusting the foreground's appearance to harmonize with the background. Inspired by the short-term and long-term gradual adjustment process of manual harmonization we present a Video Triplet Transformer framework to model three spatio-temporal variation patterns within videos i.e. short-term spatial as well as long-term global and dynamic for video-to-video tasks like video harmonization. Specifically for short-term harmonization we adjust foreground appearance to be consistent with the background in the spatial dimension based on neighboring frames; for long-term harmonization we not only explore global appearance variations to enhance temporal consistency but also alleviate motion offset constraints to align similar contextual appearances dynamically. Extensive experiments and ablation studies demonstrate the effectiveness of our method achieving state-of-the-art performance in video harmonization video enhancement and video demoireing tasks. We also propose a temporal consistency metric to better evaluate the harmonized videos. Code is available at https://github.com/zhenglab/VideoTripletTransformer.
https://openaccess.thecvf.com/content/CVPR2024/papers/Guo_Video_Harmonization_with_Triplet_Spatio-Temporal_Variation_Patterns_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Guo_Video_Harmonization_with_Triplet_Spatio-Temporal_Variation_Patterns_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Guo_Video_Harmonization_with_Triplet_Spatio-Temporal_Variation_Patterns_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Guo_Video_Harmonization_with_CVPR_2024_supplemental.pdf
null
TRINS: Towards Multimodal Language Models that Can Read
Ruiyi Zhang, Yanzhe Zhang, Jian Chen, Yufan Zhou, Jiuxiang Gu, Changyou Chen, Tong Sun
Large multimodal language models have shown remarkable proficiency in understanding and editing images. However a majority of these visually-tuned models struggle to comprehend the textual content embedded in images primarily due to the limitation of training data. In this work we introduce TRINS: a Text-Rich image INStruction dataset with the objective of enhancing the reading ability of the multimodal large language model. TRINS is built upon LAION using hybrid data annotation strategies that include machine-assisted and human-assisted annotation processes. It contains 39153 text-rich images captions and 102437 questions. Specifically we show that the number of words per annotation in TRINS is significantly longer than that of related datasets providing new challenges. Furthermore we introduce a simple and effective architecture called a Language-Vision Reading Assistant (LaRA) which is good at understanding textual content within images. LaRA outperforms existing state-of-the-art multimodal large language models on the TRINS dataset as well as other classical benchmarks. Lastly we conducted a comprehensive evaluation with TRINS on various text-rich image understanding and generation tasks demonstrating its effectiveness.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_TRINS_Towards_Multimodal_Language_Models_that_Can_Read_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_TRINS_Towards_Multimodal_Language_Models_that_Can_Read_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_TRINS_Towards_Multimodal_Language_Models_that_Can_Read_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhang_TRINS_Towards_Multimodal_CVPR_2024_supplemental.pdf
null
Self-Supervised Representation Learning from Arbitrary Scenarios
Zhaowen Li, Yousong Zhu, Zhiyang Chen, Zongxin Gao, Rui Zhao, Chaoyang Zhao, Ming Tang, Jinqiao Wang
Current self-supervised methods can primarily be categorized into contrastive learning and masked image modeling. Extensive studies have demonstrated that combining these two approaches can achieve state-of-the-art performance. However these methods essentially reinforce the global consistency of contrastive learning without taking into account the conflicts between these two approaches which hinders their generalizability to arbitrary scenarios. In this paper we theoretically prove that MAE serves as a patch-level contrastive learning where each patch within an image is considered as a distinct category. This presents a significant conflict with global-level contrastive learning which treats all patches in an image as an identical category. To address this conflict this work abandons the non-generalizable global-level constraints and proposes explicit patch-level contrastive learning as a solution. Specifically this work employs the encoder of MAE to generate dual-branch features which then perform patch-level learning through a decoder. In contrast to global-level data augmentation in contrastive learning our approach leverages patch-level feature augmentation to mitigate interference from global-level learning. Consequently our approach can learn heterogeneous representations from a single image while avoiding the conflicts encountered by previous methods. Massive experiments affirm the potential of our method for learning from arbitrary scenarios.
https://openaccess.thecvf.com/content/CVPR2024/papers/Li_Self-Supervised_Representation_Learning_from_Arbitrary_Scenarios_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Li_Self-Supervised_Representation_Learning_from_Arbitrary_Scenarios_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Li_Self-Supervised_Representation_Learning_from_Arbitrary_Scenarios_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_Self-Supervised_Representation_Learning_CVPR_2024_supplemental.pdf
null
Improved Zero-Shot Classification by Adapting VLMs with Text Descriptions
Oindrila Saha, Grant Van Horn, Subhransu Maji
The zero-shot performance of existing vision-language models (VLMs) such as CLIP is limited by the availability of large-scale aligned image and text datasets in specific domains. In this work we leverage two complementary sources of information -- descriptions of categories generated by large language models (LLMs) and abundant fine-grained image classification datasets -- to improve the zero-shot classification performance of VLMs across fine-grained domains. On the technical side we develop methods to train VLMs with this "bag-level" image-text supervision. We find that simply using these attributes at test-time does not improve performance but our training strategy for example on the iNaturalist dataset leads to an average improvement of 4-5% in zero-shot classification accuracy for novel categories of birds and flowers. Similar improvements are observed in domains where a subset of the categories was used to fine-tune the model. By prompting LLMs in various ways we generate descriptions that capture visual appearance habitat and geographic regions and pair them with existing attributes such as the taxonomic structure of the categories. We systematically evaluate their ability to improve zero-shot categorization in natural domains. Our findings suggest that geographic priors can be just as effective and are complementary to visual appearance. Our method also outperforms prior work on prompt-based tuning of VLMs. We release the benchmark consisting of 14 datasets at https://github.com/cvl-umass/AdaptCLIPZS which will contribute to future research in zero-shot recognition.
https://openaccess.thecvf.com/content/CVPR2024/papers/Saha_Improved_Zero-Shot_Classification_by_Adapting_VLMs_with_Text_Descriptions_CVPR_2024_paper.pdf
http://arxiv.org/abs/2401.02460
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Saha_Improved_Zero-Shot_Classification_by_Adapting_VLMs_with_Text_Descriptions_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Saha_Improved_Zero-Shot_Classification_by_Adapting_VLMs_with_Text_Descriptions_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Saha_Improved_Zero-Shot_Classification_CVPR_2024_supplemental.pdf
null
Living Scenes: Multi-object Relocalization and Reconstruction in Changing 3D Environments
Liyuan Zhu, Shengyu Huang, Konrad Schindler, Iro Armeni
Research into dynamic 3D scene understanding has primarily focused on short-term change tracking from dense observations while little attention has been paid to long-term changes with sparse observations. We address this gap with MoRE a novel approach for multi-object relocalization and reconstruction in evolving environments. We view these environments as Living Scenes and consider the problem of transforming scans taken at different points in time into a 3D reconstruction of the object instances whose accuracy and completeness increase over time. At the core of our method lies an SE(3) equivariant representation in a single encoder-decoder network trained on synthetic data. This representation enables us to seamlessly tackle instance matching registration and reconstruction. We also introduce a joint optimization algorithm that facilitates the accumulation of point clouds originating from the same instance across multiple scans taken at different points in time. We validate our method on synthetic and real-world data and demonstrate state-of-the-art performance in both end-to-end performance and individual subtasks.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhu_Living_Scenes_Multi-object_Relocalization_and_Reconstruction_in_Changing_3D_Environments_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.09138
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhu_Living_Scenes_Multi-object_Relocalization_and_Reconstruction_in_Changing_3D_Environments_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhu_Living_Scenes_Multi-object_Relocalization_and_Reconstruction_in_Changing_3D_Environments_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhu_Living_Scenes_Multi-object_CVPR_2024_supplemental.pdf
null
CricaVPR: Cross-image Correlation-aware Representation Learning for Visual Place Recognition
Feng Lu, Xiangyuan Lan, Lijun Zhang, Dongmei Jiang, Yaowei Wang, Chun Yuan
Over the past decade most methods in visual place recognition (VPR) have used neural networks to produce feature representations. These networks typically produce a global representation of a place image using only this image itself and neglect the cross-image variations (e.g. viewpoint and illumination) which limits their robustness in challenging scenes. In this paper we propose a robust global representation method with cross-image correlation awareness for VPR named CricaVPR. Our method uses the attention mechanism to correlate multiple images within a batch. These images can be taken in the same place with different conditions or viewpoints or even captured from different places. Therefore our method can utilize the cross-image variations as a cue to guide the representation learning which ensures more robust features are produced. To further facilitate the robustness we propose a multi-scale convolution-enhanced adaptation method to adapt pre-trained visual foundation models to the VPR task which introduces the multi-scale local information to further enhance the cross-image correlation-aware representation. Experimental results show that our method outperforms state-of-the-art methods by a large margin with significantly less training time. The code is released at https://github.com/Lu-Feng/CricaVPR.
https://openaccess.thecvf.com/content/CVPR2024/papers/Lu_CricaVPR_Cross-image_Correlation-aware_Representation_Learning_for_Visual_Place_Recognition_CVPR_2024_paper.pdf
http://arxiv.org/abs/2402.19231
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Lu_CricaVPR_Cross-image_Correlation-aware_Representation_Learning_for_Visual_Place_Recognition_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Lu_CricaVPR_Cross-image_Correlation-aware_Representation_Learning_for_Visual_Place_Recognition_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Lu_CricaVPR_Cross-image_Correlation-aware_CVPR_2024_supplemental.pdf
null
ECLIPSE: A Resource-Efficient Text-to-Image Prior for Image Generations
Maitreya Patel, Changhoon Kim, Sheng Cheng, Chitta Baral, Yezhou Yang
Text-to-image (T2I) diffusion models notably the unCLIP models (e.g. DALL-E-2) achieve state-of-the-art (SOTA) performance on various compositional T2I benchmarks at the cost of significant computational resources. The unCLIP stack comprises a T2I prior and a diffusion image decoder. The T2I prior model alone adds a billion parameters compared to the Latent Diffusion Models which increases the computational and high-quality data requirements. We introduce ECLIPSE a novel contrastive learning method that is both parameter and data-efficient. ECLIPSE leverages pre-trained vision-language models (e.g. CLIP) to distill the knowledge into the prior model. We demonstrate that the ECLIPSE trained prior with only 3.3% of the parameters and trained on a mere 2.8% of the data surpasses the baseline T2I priors with an average of 71.6% preference score under a resource-limited setting. It also attains performance on par with SOTA big models achieving an average of 63.36% preference score in terms of the ability to follow the text compositions. Extensive experiments on two unCLIP diffusion image decoders Karlo and Kandinsky affirm that ECLIPSE priors consistently deliver high performance while significantly reducing resource dependency. Project page: https://eclipse-t2i.vercel.app/
https://openaccess.thecvf.com/content/CVPR2024/papers/Patel_ECLIPSE_A_Resource-Efficient_Text-to-Image_Prior_for_Image_Generations_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.04655
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Patel_ECLIPSE_A_Resource-Efficient_Text-to-Image_Prior_for_Image_Generations_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Patel_ECLIPSE_A_Resource-Efficient_Text-to-Image_Prior_for_Image_Generations_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Patel_ECLIPSE_A_Resource-Efficient_CVPR_2024_supplemental.pdf
null
Adaptive Bidirectional Displacement for Semi-Supervised Medical Image Segmentation
Hanyang Chi, Jian Pang, Bingfeng Zhang, Weifeng Liu
Consistency learning is a central strategy to tackle unlabeled data in semi-supervised medical image segmentation (SSMIS) which enforces the model to produce consistent predictions under the perturbation. However most current approaches solely focus on utilizing a specific single perturbation which can only cope with limited cases while employing multiple perturbations simultaneously makes it hard to guarantee the quality of consistency learning. In this paper we propose an Adaptive Bidirectional Displacement (ABD) approach to solve the above challenge. Specifically we first design a bidirectional patch displacement based on reliable prediction confidence for unlabeled data to generate new samples which can effectively suppress uncontrollable regions and still retain the influence of input perturbations. Meanwhile to enforce the model to learn the potentially uncontrollable content a bidirectional displacement operation with inverse confidence is proposed for the labeled images which generates samples with more unreliable information to facilitate model learning. Extensive experiments show that ABD achieves new state-of-the-art performances for SSMIS significantly improving different baselines. Source code is available at https://github.com/chy-upc/ABD.
https://openaccess.thecvf.com/content/CVPR2024/papers/Chi_Adaptive_Bidirectional_Displacement_for_Semi-Supervised_Medical_Image_Segmentation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2405.00378
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Chi_Adaptive_Bidirectional_Displacement_for_Semi-Supervised_Medical_Image_Segmentation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Chi_Adaptive_Bidirectional_Displacement_for_Semi-Supervised_Medical_Image_Segmentation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Chi_Adaptive_Bidirectional_Displacement_CVPR_2024_supplemental.pdf
null
Accurate Training Data for Occupancy Map Prediction in Automated Driving Using Evidence Theory
Jonas Kälble, Sascha Wirges, Maxim Tatarchenko, Eddy Ilg
Automated driving fundamentally requires knowledge about the surrounding geometry of the scene. Modern approaches use only captured images to predict occupancy maps that represent the geometry. Training these approaches requires accurate data that may be acquired with the help of LiDAR scanners. We show that the techniques used for current benchmarks and training datasets to convert LiDAR scans into occupancy grid maps yield very low quality and subsequently present a novel approach using evidence theory that yields more accurate reconstructions. We demonstrate that these are superior by a large margin both qualitatively and quantitatively and that we additionally obtain meaningful uncertainty estimates. When converting the occupancy maps back to depth estimates and comparing them with the raw LiDAR measurements our method yields a MAE improvement of 30% to 52% on nuScenes and 53% on Waymo over other occupancy ground-truth data. Finally we use the improved occupancy maps to train a state-of-the-art occupancy prediction method and demonstrate that it improves the MAE by 25% on nuScenes.
https://openaccess.thecvf.com/content/CVPR2024/papers/Kalble_Accurate_Training_Data_for_Occupancy_Map_Prediction_in_Automated_Driving_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Kalble_Accurate_Training_Data_for_Occupancy_Map_Prediction_in_Automated_Driving_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Kalble_Accurate_Training_Data_for_Occupancy_Map_Prediction_in_Automated_Driving_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Kalble_Accurate_Training_Data_CVPR_2024_supplemental.zip
null
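As general background for the evidence-theoretic occupancy maps in the entry above, the sketch below combines two pieces of evidence for a single grid cell with Dempster's rule over the frame {free, occupied}, keeping explicit mass on "unknown". The mass values are invented for illustration; the paper's actual construction of evidence from LiDAR returns is not reproduced here.

```python
def dempster_combine(m1, m2):
    """Dempster's rule for mass functions over {free, occupied, unknown} (dicts summing to 1)."""
    combined = {"free": 0.0, "occupied": 0.0, "unknown": 0.0}
    conflict = 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            if a == b:
                combined[a] += pa * pb
            elif "unknown" in (a, b):            # unknown is compatible with any hypothesis
                combined[a if b == "unknown" else b] += pa * pb
            else:                                 # free vs. occupied: conflicting evidence
                conflict += pa * pb
    norm = 1.0 - conflict                         # renormalize away the conflicting mass
    return {k: v / norm for k, v in combined.items()}

# Two hypothetical observations of the same cell: a strong "occupied" return
# and a weaker "free" observation from another scan.
m_hit = {"free": 0.05, "occupied": 0.8, "unknown": 0.15}
m_pass = {"free": 0.4, "occupied": 0.1, "unknown": 0.5}
print(dempster_combine(m_hit, m_pass))
```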
DiffusionLight: Light Probes for Free by Painting a Chrome Ball
Pakkapon Phongthawee, Worameth Chinchuthakun, Nontaphat Sinsunthithet, Varun Jampani, Amit Raj, Pramook Khungurn, Supasorn Suwajanakorn
We present a simple yet effective technique to estimate lighting in a single input image. Current techniques rely heavily on HDR panorama datasets to train neural networks to regress an input with limited field-of-view to a full environment map. However these approaches often struggle with real-world uncontrolled settings due to the limited diversity and size of their datasets. To address this problem we leverage diffusion models trained on billions of standard images to render a chrome ball into the input image. Despite its simplicity this task remains challenging: the diffusion models often insert incorrect or inconsistent objects and cannot readily generate chrome balls in HDR format. Our research uncovers a surprising relationship between the appearance of chrome balls and the initial diffusion noise map which we utilize to consistently generate high-quality chrome balls. We further fine-tune an LDR diffusion model (Stable Diffusion XL) with LoRA enabling it to perform exposure bracketing for HDR light estimation. Our method produces convincing light estimates across diverse settings and demonstrates superior generalization to in-the-wild scenarios.
https://openaccess.thecvf.com/content/CVPR2024/papers/Phongthawee_DiffusionLight_Light_Probes_for_Free_by_Painting_a_Chrome_Ball_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.09168
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Phongthawee_DiffusionLight_Light_Probes_for_Free_by_Painting_a_Chrome_Ball_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Phongthawee_DiffusionLight_Light_Probes_for_Free_by_Painting_a_Chrome_Ball_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Phongthawee_DiffusionLight_Light_Probes_CVPR_2024_supplemental.pdf
null
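The DiffusionLight entry above mentions exposure bracketing for HDR light estimation; as generic background, the sketch below merges several LDR frames of the same crop into a linear HDR estimate by inverting a gamma curve, dividing by exposure time, and averaging with a simple well-exposedness weight. The gamma value, weighting scheme, and toy inputs are common defaults assumed for illustration, not the paper's specification.

```python
import numpy as np

def merge_ldr_to_hdr(ldr_images, exposures, gamma: float = 2.2) -> np.ndarray:
    """Weighted merge of LDR frames (HxWx3 in [0, 1]) captured at different exposure times."""
    hdr_num = np.zeros_like(ldr_images[0], dtype=np.float64)
    hdr_den = np.zeros_like(ldr_images[0], dtype=np.float64)
    for ldr, t in zip(ldr_images, exposures):
        linear = np.power(ldr, gamma)                 # undo display gamma (assumed 2.2)
        weight = 1.0 - np.abs(ldr - 0.5) * 2.0        # trust well-exposed mid-gray pixels most
        weight = np.clip(weight, 1e-3, 1.0)
        hdr_num += weight * (linear / t)              # per-frame radiance estimate
        hdr_den += weight
    return hdr_num / hdr_den

# Toy usage: three hypothetical exposures of the same 2x2 patch.
base = np.random.default_rng(0).uniform(0.2, 0.6, size=(2, 2, 3))
frames = [np.clip(base * s, 0, 1) for s in (0.5, 1.0, 2.0)]
print(merge_ldr_to_hdr(frames, exposures=[0.5, 1.0, 2.0]).shape)
```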
Instance-level Expert Knowledge and Aggregate Discriminative Attention for Radiology Report Generation
Shenshen Bu, Taiji Li, Yuedong Yang, Zhiming Dai
Automatic radiology report generation can provide substantial advantages to clinical physicians by effectively reducing their workload and improving efficiency. Despite the promising potential of current methods challenges persist in effectively extracting and preventing degradation of prominent features as well as enhancing attention on pivotal regions. In this paper we propose an Instance-level Expert Knowledge and Aggregate Discriminative Attention framework (EKAGen) for radiology report generation. We convert expert reports into an embedding space and generate comprehensive representations for each disease which serve as Preliminary Knowledge Support (PKS). To prevent feature disruption we select the representations in the embedding space with the smallest distances to PKS as Rectified Knowledge Support (RKS). Then EKAGen diagnoses the diseases and retrieves knowledge from RKS creating Instance-level Expert Knowledge (IEK) for each query image boosting generation. Additionally we introduce Aggregate Discriminative Attention Map (ADM) which uses weak supervision to create maps of discriminative regions that highlight pivotal regions. For training we propose a Global Information Self-Distillation (GID) strategy using an iteratively optimized model to distill global knowledge into EKAGen. Extensive experiments and analyses on IU X-Ray and MIMIC-CXR datasets demonstrate that EKAGen outperforms previous state-of-the-art methods.
https://openaccess.thecvf.com/content/CVPR2024/papers/Bu_Instance-level_Expert_Knowledge_and_Aggregate_Discriminative_Attention_for_Radiology_Report_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Bu_Instance-level_Expert_Knowledge_and_Aggregate_Discriminative_Attention_for_Radiology_Report_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Bu_Instance-level_Expert_Knowledge_and_Aggregate_Discriminative_Attention_for_Radiology_Report_CVPR_2024_paper.html
CVPR 2024
null
null
Task-Adaptive Saliency Guidance for Exemplar-free Class Incremental Learning
Xialei Liu, Jiang-Tian Zhai, Andrew D. Bagdanov, Ke Li, Ming-Ming Cheng
Exemplar-free Class Incremental Learning (EFCIL) aims to sequentially learn tasks with access only to data from the current one. EFCIL is of interest because it mitigates concerns about privacy and long-term storage of data while at the same time alleviating the problem of catastrophic forgetting in incremental learning. In this work we introduce task-adaptive saliency for EFCIL and propose a new framework which we call Task-Adaptive Saliency Supervision (TASS) for mitigating the negative effects of saliency drift between different tasks. We first apply boundary-guided saliency to maintain task adaptivity and plasticity on model attention. Besides we introduce task-agnostic low-level signals as auxiliary supervision to increase the stability of model attention. Finally we introduce a module for injecting and recovering saliency noise to increase the robustness of saliency preservation. Our experiments demonstrate that our method can better preserve saliency maps across tasks and achieve state-of-the-art results on the CIFAR-100 Tiny-ImageNet and ImageNet-Subset EFCIL benchmarks. Code is available at https://github.com/scok30/tass.
https://openaccess.thecvf.com/content/CVPR2024/papers/Liu_Task-Adaptive_Saliency_Guidance_for_Exemplar-free_Class_Incremental_Learning_CVPR_2024_paper.pdf
http://arxiv.org/abs/2212.08251
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Liu_Task-Adaptive_Saliency_Guidance_for_Exemplar-free_Class_Incremental_Learning_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Liu_Task-Adaptive_Saliency_Guidance_for_Exemplar-free_Class_Incremental_Learning_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Liu_Task-Adaptive_Saliency_Guidance_CVPR_2024_supplemental.pdf
null
Rethinking the Spatial Inconsistency in Classifier-Free Diffusion Guidance
Dazhong Shen, Guanglu Song, Zeyue Xue, Fu-Yun Wang, Yu Liu
Classifier-Free Guidance (CFG) has been widely used in text-to-image diffusion models where the CFG scale is introduced to control the strength of text guidance on the whole image space. However we argue that a global CFG scale results in spatial inconsistency on varying semantic strengths and suboptimal image quality. To address this problem we present a novel approach Semantic-aware Classifier-Free Guidance (S-CFG) to customize the guidance degrees for different semantic units in text-to-image diffusion models. Specifically we first design a training-free semantic segmentation method to partition the latent image into relatively independent semantic regions at each denoising step. In particular the cross-attention map in the denoising U-net backbone is renormalized for assigning each patch to the corresponding token while the self-attention map is used to complete the semantic regions. Then to balance the amplification of diverse semantic units we adaptively adjust the CFG scales across different semantic regions to rescale the text guidance degrees into a uniform level. Finally extensive experiments demonstrate the superiority of S-CFG over the original CFG strategy on various text-to-image diffusion models without requiring any extra training cost. Our code is available at https://github.com/SmilesDZgk/S-CFG.
https://openaccess.thecvf.com/content/CVPR2024/papers/Shen_Rethinking_the_Spatial_Inconsistency_in_Classifier-Free_Diffusion_Guidance_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.05384
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Shen_Rethinking_the_Spatial_Inconsistency_in_Classifier-Free_Diffusion_Guidance_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Shen_Rethinking_the_Spatial_Inconsistency_in_Classifier-Free_Diffusion_Guidance_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Shen_Rethinking_the_Spatial_CVPR_2024_supplemental.pdf
null
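For reference, the standard Classifier-Free Guidance update that the S-CFG entry above starts from is sketched below, together with a toy spatially varying variant in which the scale is a per-pixel map rather than a single scalar. The spatial map here only illustrates the general idea of region-dependent guidance; S-CFG's attention-based segmentation of the latent and its rescaling rule are not reproduced.

```python
import torch

def cfg(eps_uncond: torch.Tensor, eps_cond: torch.Tensor, scale: float) -> torch.Tensor:
    """Standard CFG: push the unconditional prediction toward the conditional one."""
    return eps_uncond + scale * (eps_cond - eps_uncond)

def spatial_cfg(eps_uncond, eps_cond, scale_map):
    """Same update with a per-pixel scale map (B, 1, H, W) broadcast over channels."""
    return eps_uncond + scale_map * (eps_cond - eps_uncond)

# Toy latents (B, C, H, W) and a hypothetical scale map that guides the right half harder.
e_u, e_c = torch.randn(1, 4, 8, 8), torch.randn(1, 4, 8, 8)
smap = torch.full((1, 1, 8, 8), 5.0)
smap[..., 4:] = 9.0
print(cfg(e_u, e_c, 7.5).shape, spatial_cfg(e_u, e_c, smap).shape)
```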
Language-driven All-in-one Adverse Weather Removal
Hao Yang, Liyuan Pan, Yan Yang, Wei Liang
All-in-one (AiO) frameworks restore various adverse weather degradations with a single set of networks jointly. To handle various weather conditions an AiO framework is expected to adaptively learn weather-specific knowledge for different degradations and shared knowledge for common patterns. However existing methods: 1) rely on extra supervision signals which are usually unknown in real-world applications; 2) employ fixed network structures which restrict the diversity of weather-specific knowledge. In this paper we propose a Language-driven Restoration framework (LDR) to alleviate the aforementioned issues. First we leverage the power of pre-trained vision-language (PVL) models to enrich the diversity of weather-specific knowledge by reasoning about the occurrence type and severity of degradation generating description-based degradation priors. Then with the guidance of degradation prior we sparsely select restoration experts from a candidate list dynamically based on a Mixture-of-Experts (MoE) structure. This enables us to adaptively learn the weather-specific and shared knowledge to handle various weather conditions (e.g. unknown or mixed weather). Experiments on extensive restoration scenarios show our superior performance (see Fig. 1). The source code will be made available.
https://openaccess.thecvf.com/content/CVPR2024/papers/Yang_Language-driven_All-in-one_Adverse_Weather_Removal_CVPR_2024_paper.pdf
http://arxiv.org/abs/2312.01381
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Yang_Language-driven_All-in-one_Adverse_Weather_Removal_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Yang_Language-driven_All-in-one_Adverse_Weather_Removal_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yang_Language-driven_All-in-one_Adverse_CVPR_2024_supplemental.pdf
null
Each Test Image Deserves A Specific Prompt: Continual Test-Time Adaptation for 2D Medical Image Segmentation
Ziyang Chen, Yongsheng Pan, Yiwen Ye, Mengkang Lu, Yong Xia
Distribution shift widely exists in medical images acquired from different medical centres and poses a significant obstacle to deploying the pre-trained semantic segmentation model in real-world applications. Test-time adaptation has proven its effectiveness in tackling the cross-domain distribution shift during inference. However most existing methods achieve adaptation by updating the pre-trained models rendering them susceptible to error accumulation and catastrophic forgetting when encountering a series of distribution shifts (i.e. under the continual test-time adaptation setup). To overcome these challenges caused by updating the models in this paper we freeze the pre-trained model and propose the Visual Prompt-based Test-Time Adaptation (VPTTA) method to train a specific prompt for each test image to align the statistics in the batch normalization layers. Specifically we present the low-frequency prompt which is lightweight with only a few parameters and can be effectively trained in a single iteration. To enhance prompt initialization we equip VPTTA with a memory bank to benefit the current prompt from previous ones. Additionally we design a warm-up mechanism which mixes source and target statistics to construct warm-up statistics thereby facilitating the training process. Extensive experiments demonstrate the superiority of our VPTTA over other state-of-the-art methods on two medical image segmentation benchmark tasks. The code and weights of pre-trained source models are available at https://github.com/Chen-Ziyang/VPTTA.
https://openaccess.thecvf.com/content/CVPR2024/papers/Chen_Each_Test_Image_Deserves_A_Specific_Prompt_Continual_Test-Time_Adaptation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2311.18363
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Chen_Each_Test_Image_Deserves_A_Specific_Prompt_Continual_Test-Time_Adaptation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Chen_Each_Test_Image_Deserves_A_Specific_Prompt_Continual_Test-Time_Adaptation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Chen_Each_Test_Image_CVPR_2024_supplemental.pdf
null
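The VPTTA entry above describes aligning batch-normalization statistics and a warm-up mechanism that mixes source and target statistics; below is a minimal sketch of such statistic mixing. The mixing coefficient and the way target statistics are computed from the current test batch are assumptions for illustration only, not the paper's exact scheme.

```python
import torch

def warmup_bn_stats(src_mean, src_var, feat, lam: float = 0.7):
    """Blend stored source BN statistics with statistics of the current test batch.

    feat: (B, C, H, W) activations; lam weights the source side (illustrative value).
    """
    tgt_mean = feat.mean(dim=(0, 2, 3))
    tgt_var = feat.var(dim=(0, 2, 3), unbiased=False)
    mix_mean = lam * src_mean + (1.0 - lam) * tgt_mean
    mix_var = lam * src_var + (1.0 - lam) * tgt_var
    return mix_mean, mix_var

def normalize_with(feat, mean, var, eps: float = 1e-5):
    """Normalize with the blended statistics in place of the BN layer's running statistics."""
    return (feat - mean[None, :, None, None]) / torch.sqrt(var[None, :, None, None] + eps)

x = torch.randn(2, 8, 16, 16) * 3.0 + 1.0               # a distribution-shifted test batch
m, v = warmup_bn_stats(torch.zeros(8), torch.ones(8), x)
print(normalize_with(x, m, v).mean().item())
```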
KTPFormer: Kinematics and Trajectory Prior Knowledge-Enhanced Transformer for 3D Human Pose Estimation
Jihua Peng, Yanghong Zhou, P. Y. Mok
This paper presents a novel Kinematics and Trajectory Prior Knowledge-Enhanced Transformer (KTPFormer) which overcomes the weakness in existing transformer-based methods for 3D human pose estimation that the derivation of Q K V vectors in their self-attention mechanisms is based entirely on simple linear mapping. We propose two prior attention modules namely Kinematics Prior Attention (KPA) and Trajectory Prior Attention (TPA) to take advantage of the known anatomical structure of the human body and motion trajectory information to facilitate effective learning of global dependencies and features in the multi-head self-attention. KPA models kinematic relationships in the human body by constructing a topology of kinematics while TPA builds a trajectory topology to learn the information of joint motion trajectory across frames. Yielding Q K V vectors with prior knowledge the two modules enable KTPFormer to model both spatial and temporal correlations simultaneously. Extensive experiments on three benchmarks (Human3.6M MPI-INF-3DHP and HumanEva) show that KTPFormer achieves superior performance in comparison to state-of-the-art methods. More importantly our KPA and TPA modules have lightweight plug-and-play designs and can be integrated into various transformer-based networks (i.e. diffusion-based) to improve the performance with only a very small increase in the computational overhead. The code is available at: https://github.com/JihuaPeng/KTPFormer.
https://openaccess.thecvf.com/content/CVPR2024/papers/Peng_KTPFormer_Kinematics_and_Trajectory_Prior_Knowledge-Enhanced_Transformer_for_3D_Human_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.00658
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Peng_KTPFormer_Kinematics_and_Trajectory_Prior_Knowledge-Enhanced_Transformer_for_3D_Human_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Peng_KTPFormer_Kinematics_and_Trajectory_Prior_Knowledge-Enhanced_Transformer_for_3D_Human_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Peng_KTPFormer_Kinematics_and_CVPR_2024_supplemental.pdf
null
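The KTPFormer entry above refers to the "simple linear mapping" that produces the Q, K, V vectors in a standard transformer block; that baseline is sketched below so the point of injecting kinematic and trajectory priors before this step is easier to place. The kinematics and trajectory attention modules themselves are not reproduced, and the token and feature sizes are illustrative.

```python
import math
import torch
import torch.nn as nn

class PlainSelfAttention(nn.Module):
    """Baseline self-attention: Q, K, V come from plain linear maps of the same tokens."""
    def __init__(self, dim: int):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, N_joints, dim)
        q, k, v = self.q(x), self.k(x), self.v(x)
        attn = torch.softmax(q @ k.transpose(-2, -1) / math.sqrt(x.shape[-1]), dim=-1)
        return attn @ v

# Toy usage: 17 body-joint tokens with 64-dim features (sizes are illustrative).
tokens = torch.randn(2, 17, 64)
print(PlainSelfAttention(64)(tokens).shape)  # torch.Size([2, 17, 64])
```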
MAPLM: A Real-World Large-Scale Vision-Language Benchmark for Map and Traffic Scene Understanding
Xu Cao, Tong Zhou, Yunsheng Ma, Wenqian Ye, Can Cui, Kun Tang, Zhipeng Cao, Kaizhao Liang, Ziran Wang, James M. Rehg, Chao Zheng
Vision-language generative AI has demonstrated remarkable promise for empowering cross-modal scene understanding of autonomous driving and high-definition (HD) map systems. However current benchmark datasets lack multi-modal point cloud image and language data pairs. Recent approaches utilize visual instruction learning and cross-modal prompt engineering to expand vision-language models into this domain. In this paper we propose a new vision-language benchmark that can be used to finetune traffic and HD map domain-specific foundation models. Specifically we annotate and leverage large-scale broad-coverage traffic and map data extracted from huge HD map annotations and use CLIP and LLaMA-2 / Vicuna to finetune a baseline model with instruction-following data. Our experimental results across various algorithms reveal that while visual instruction-tuning large language models (LLMs) can effectively learn meaningful representations from MAPLM-QA there remains significant room for further advancements. To facilitate applying LLMs and multi-modal data into self-driving research we will release our visual-language QA data and the baseline models at GitHub.com/LLVM-AD/MAPLM.
https://openaccess.thecvf.com/content/CVPR2024/papers/Cao_MAPLM_A_Real-World_Large-Scale_Vision-Language_Benchmark_for_Map_and_Traffic_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Cao_MAPLM_A_Real-World_Large-Scale_Vision-Language_Benchmark_for_Map_and_Traffic_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Cao_MAPLM_A_Real-World_Large-Scale_Vision-Language_Benchmark_for_Map_and_Traffic_CVPR_2024_paper.html
CVPR 2024
null
null
EgoExoLearn: A Dataset for Bridging Asynchronous Ego- and Exo-centric View of Procedural Activities in Real World
Yifei Huang, Guo Chen, Jilan Xu, Mingfang Zhang, Lijin Yang, Baoqi Pei, Hongjie Zhang, Lu Dong, Yali Wang, Limin Wang, Yu Qiao
Being able to map the activities of others into one's own point of view is one fundamental human skill even from a very early age. Taking a step toward understanding this human ability we introduce EgoExoLearn a large-scale dataset that emulates the human demonstration following process in which individuals record egocentric videos as they execute tasks guided by demonstration videos. Focusing on the potential applications of daily assistance and professional support EgoExoLearn contains egocentric and demonstration video data spanning 120 hours captured in daily life scenarios and specialized laboratories. Along with the videos we record high-quality gaze data and provide detailed multimodal annotations formulating a playground for modeling the human ability to bridge asynchronous procedural actions from different viewpoints. To this end we present benchmarks such as cross-view association cross-view action planning and cross-view referenced skill assessment along with detailed analysis. We expect EgoExoLearn can serve as an important resource for bridging the actions across views thus paving the way for creating AI agents capable of seamlessly learning by observing humans in the real world. The dataset and benchmark codes are available at https://github.com/OpenGVLab/EgoExoLearn.
https://openaccess.thecvf.com/content/CVPR2024/papers/Huang_EgoExoLearn_A_Dataset_for_Bridging_Asynchronous_Ego-_and_Exo-centric_View_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Huang_EgoExoLearn_A_Dataset_for_Bridging_Asynchronous_Ego-_and_Exo-centric_View_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Huang_EgoExoLearn_A_Dataset_for_Bridging_Asynchronous_Ego-_and_Exo-centric_View_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Huang_EgoExoLearn_A_Dataset_CVPR_2024_supplemental.pdf
null
Differentiable Micro-Mesh Construction
Yishun Dou, Zhong Zheng, Qiaoqiao Jin, Rui Shi, Yuhan Li, Bingbing Ni
Micro-mesh (u-mesh) is a new graphics primitive for compact representation of extreme geometry consisting of a low-polygon base mesh enriched by per micro-vertex displacement. A new generation of GPUs supports this structure with hardware evolution on u-mesh ray tracing achieving real-time rendering in pixel level geometric details. In this article we present a differentiable framework to convert standard meshes into this efficient format offering a holistic scheme in contrast to the previous stage-based methods. In our construction context a u-mesh is defined where each base triangle is a parametric primitive which is then reparameterized with Laplacian operators for efficient geometry optimization. Our framework offers numerous advantages for high-quality u-mesh production: (i) end-to-end geometry optimization and displacement baking; (ii) enabling the differentiation of renderings with respect to u-mesh for faithful reprojectability; (iii) high scalability for integrating useful features for u-mesh production and rendering such as minimizing shell volume maintaining the isotropy of the base mesh and visual-guided adaptive level of detail. Extensive experiments on u-mesh construction for a large set of high-resolution meshes demonstrate the superior quality achieved by the proposed scheme.
https://openaccess.thecvf.com/content/CVPR2024/papers/Dou_Differentiable_Micro-Mesh_Construction_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Dou_Differentiable_Micro-Mesh_Construction_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Dou_Differentiable_Micro-Mesh_Construction_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Dou_Differentiable_Micro-Mesh_Construction_CVPR_2024_supplemental.pdf
null
Improved Implicit Neural Representation with Fourier Reparameterized Training
null
null
null
null
null
https://openaccess.thecvf.com/content/CVPR2024/html/Shi_Improved_Implicit_Neural_Representation_with_Fourier_Reparameterized_Training_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Shi_Improved_Implicit_Neural_Representation_with_Fourier_Reparameterized_Training_CVPR_2024_paper.html
CVPR 2024
null
null
SNED: Superposition Network Architecture Search for Efficient Video Diffusion Model
Zhengang Li, Yan Kang, Yuchen Liu, Difan Liu, Tobias Hinz, Feng Liu, Yanzhi Wang
While AI-generated content has garnered significant attention achieving photo-realistic video synthesis remains a formidable challenge. Despite the promising advances in diffusion models for video generation quality the complex model architecture and substantial computational demands for both training and inference create a significant gap between these models and real-world applications. This paper presents SNED a superposition network architecture search method for efficient video diffusion models. Our method employs a supernet training paradigm that targets various model cost and resolution options using a weight-sharing method. Moreover we propose the supernet training sampling warm-up for fast training optimization. To showcase the flexibility of our method we conduct experiments involving both pixel-space and latent-space video diffusion models. The results demonstrate that our framework consistently produces comparable results across different model options with high efficiency. In experiments with the pixel-space video diffusion model we achieve consistent video generation results simultaneously across 64 x 64 to 256 x 256 resolutions with a large range of model sizes from 640M to 1.6B parameters.
https://openaccess.thecvf.com/content/CVPR2024/papers/Li_SNED_Superposition_Network_Architecture_Search_for_Efficient_Video_Diffusion_Model_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Li_SNED_Superposition_Network_Architecture_Search_for_Efficient_Video_Diffusion_Model_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Li_SNED_Superposition_Network_Architecture_Search_for_Efficient_Video_Diffusion_Model_CVPR_2024_paper.html
CVPR 2024
null
null
Groupwise Query Specialization and Quality-Aware Multi-Assignment for Transformer-based Visual Relationship Detection
Jongha Kim, Jihwan Park, Jinyoung Park, Jinyoung Kim, Sehyung Kim, Hyunwoo J. Kim
Visual Relationship Detection (VRD) has seen significant advancements with Transformer-based architectures recently. However we identify two key limitations in a conventional label assignment for training Transformer-based VRD models which is a process of mapping a ground-truth (GT) to a prediction. Under the conventional assignment an 'unspecialized' query is trained since a query is expected to detect every relation which makes it difficult for a query to specialize in specific relations. Furthermore a query is also insufficiently trained since a GT is assigned only to a single prediction therefore near-correct or even correct predictions are suppressed by being assigned 'no relation' as a GT. To address these issues we propose Groupwise Query Specialization and Quality-Aware Multi-Assignment (SpeaQ). Groupwise Query Specialization trains a 'specialized' query by dividing queries and relations into disjoint groups and directing a query in a specific query group solely toward relations in the corresponding relation group. Quality-Aware Multi-Assignment further facilitates the training by assigning a GT to multiple predictions that are significantly close to a GT in terms of a subject an object and the relation in between. Experimental results and analyses show that SpeaQ effectively trains 'specialized' queries which better utilize the capacity of a model resulting in consistent performance gains with 'zero' additional inference cost across multiple VRD models and benchmarks. Code is available at https://github.com/mlvlab/SpeaQ.
https://openaccess.thecvf.com/content/CVPR2024/papers/Kim_Groupwise_Query_Specialization_and_Quality-Aware_Multi-Assignment_for_Transformer-based_Visual_Relationship_CVPR_2024_paper.pdf
http://arxiv.org/abs/2403.17709
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Kim_Groupwise_Query_Specialization_and_Quality-Aware_Multi-Assignment_for_Transformer-based_Visual_Relationship_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Kim_Groupwise_Query_Specialization_and_Quality-Aware_Multi-Assignment_for_Transformer-based_Visual_Relationship_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Kim_Groupwise_Query_Specialization_CVPR_2024_supplemental.pdf
null
LeftRefill: Filling Right Canvas based on Left Reference through Generalized Text-to-Image Diffusion Model
Chenjie Cao, Yunuo Cai, Qiaole Dong, Yikai Wang, Yanwei Fu
This paper introduces LeftRefill an innovative approach to efficiently harness large Text-to-Image (T2I) diffusion models for reference-guided image synthesis. As the name implies LeftRefill horizontally stitches reference and target views together as a whole input. The reference image occupies the left side while the target canvas is positioned on the right. Then LeftRefill paints the right-side target canvas based on the left-side reference and specific task instructions. Such a task formulation shares some similarities with contextual inpainting akin to the actions of a human painter. This novel formulation efficiently learns both structural and textured correspondence between reference and target without other image encoders or adapters. We inject task and view information through cross-attention modules in T2I models and further exhibit multi-view reference ability via the re-arranged self-attention modules. These enable LeftRefill to perform consistent generation as a generalized model without requiring test-time fine-tuning or model modifications. Thus LeftRefill can be seen as a simple yet unified framework to address reference-guided synthesis. As an exemplar we leverage LeftRefill to address two different challenges: reference-guided inpainting and novel view synthesis based on the pre-trained StableDiffusion. Codes and models are released at https://github.com/ewrfcas/LeftRefill.
https://openaccess.thecvf.com/content/CVPR2024/papers/Cao_LeftRefill_Filling_Right_Canvas_based_on_Left_Reference_through_Generalized_CVPR_2024_paper.pdf
http://arxiv.org/abs/2305.11577
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Cao_LeftRefill_Filling_Right_Canvas_based_on_Left_Reference_through_Generalized_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Cao_LeftRefill_Filling_Right_Canvas_based_on_Left_Reference_through_Generalized_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Cao_LeftRefill_Filling_Right_CVPR_2024_supplemental.pdf
null
Personalized Residuals for Concept-Driven Text-to-Image Generation
Cusuh Ham, Matthew Fisher, James Hays, Nicholas Kolkin, Yuchen Liu, Richard Zhang, Tobias Hinz
We present personalized residuals and localized attention-guided sampling for efficient concept-driven generation using text-to-image diffusion models. Our method first represents concepts by freezing the weights of a pretrained text-conditioned diffusion model and learning low-rank residuals for a small subset of the model's layers. The residual-based approach then directly enables application of our proposed sampling technique which applies the learned residuals only in areas where the concept is localized via cross-attention and applies the original diffusion weights in all other regions. Localized sampling therefore combines the learned identity of the concept with the existing generative prior of the underlying diffusion model. We show that personalized residuals effectively capture the identity of a concept in 3 minutes on a single GPU without the use of regularization images and with fewer parameters than previous models and localized sampling allows using the original model as strong prior for large parts of the image.
https://openaccess.thecvf.com/content/CVPR2024/papers/Ham_Personalized_Residuals_for_Concept-Driven_Text-to-Image_Generation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2405.12978
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Ham_Personalized_Residuals_for_Concept-Driven_Text-to-Image_Generation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Ham_Personalized_Residuals_for_Concept-Driven_Text-to-Image_Generation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ham_Personalized_Residuals_for_CVPR_2024_supplemental.pdf
null
Condition-Aware Neural Network for Controlled Image Generation
Han Cai, Muyang Li, Qinsheng Zhang, Ming-Yu Liu, Song Han
We present Condition-Aware Neural Network (CAN) a new method for adding control to image generative models. In parallel to prior conditional control methods CAN controls the image generation process by dynamically manipulating the weight of the neural network. This is achieved by introducing a condition-aware weight generation module that generates conditional weight for convolution/linear layers based on the input condition. We test CAN on class-conditional image generation on ImageNet and text-to-image generation on COCO. CAN consistently delivers significant improvements for diffusion transformer models including DiT and UViT. In particular CAN combined with EfficientViT (CaT) achieves 2.78 FID on ImageNet 512x512 surpassing DiT-XL/2 while requiring 52x fewer MACs per sampling step.
https://openaccess.thecvf.com/content/CVPR2024/papers/Cai_Condition-Aware_Neural_Network_for_Controlled_Image_Generation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.01143
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Cai_Condition-Aware_Neural_Network_for_Controlled_Image_Generation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Cai_Condition-Aware_Neural_Network_for_Controlled_Image_Generation_CVPR_2024_paper.html
CVPR 2024
null
null
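The CAN entry above describes a module that generates the weights of convolution/linear layers from the input condition instead of using static weights; below is a minimal, hypothetical sketch of that pattern for a single linear layer, with a small weight generator producing a per-sample weight matrix from a condition embedding. The layer sizes and generator design are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ConditionAwareLinear(nn.Module):
    """Linear layer whose weight is produced per sample from a condition embedding."""
    def __init__(self, in_dim: int, out_dim: int, cond_dim: int):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        self.weight_gen = nn.Linear(cond_dim, in_dim * out_dim)  # tiny weight generator

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x: (B, in_dim), cond: (B, cond_dim) -> per-sample weight (B, out_dim, in_dim)
        w = self.weight_gen(cond).view(-1, self.out_dim, self.in_dim)
        return torch.bmm(w, x.unsqueeze(-1)).squeeze(-1)

# Toy usage with made-up sizes: class or text condition embedded into 8 dims.
layer = ConditionAwareLinear(in_dim=32, out_dim=16, cond_dim=8)
x, cond = torch.randn(4, 32), torch.randn(4, 8)
print(layer(x, cond).shape)  # torch.Size([4, 16])
```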
Versatile Navigation Under Partial Observability via Value-guided Diffusion Policy
Gengyu Zhang, Hao Tang, Yan Yan
Route planning for navigation under partial observability plays a crucial role in modern robotics and autonomous driving. Existing route planning approaches can be categorized into two main classes: traditional autoregressive and diffusion-based methods. The former often fails due to its myopic nature while the latter either assumes full observability or struggles to adapt to unfamiliar scenarios due to strong couplings with behavior cloning from experts. To address these deficiencies we propose a versatile diffusion-based approach for both 2D and 3D route planning under partial observability. Specifically our value-guided diffusion policy first generates plans to predict actions across various timesteps providing ample foresight to the planning. It then employs a differentiable planner with state estimations to derive a value function directing the agent's exploration and goal-seeking behaviors without seeking experts while explicitly addressing partial observability. During inference our policy is further enhanced by a best-plan-selection strategy substantially boosting the planning success rate. Moreover we propose projecting point clouds derived from RGB-D inputs onto 2D grid-based bird-eye-view maps via semantic segmentation generalizing to 3D environments. This simple yet effective adaption enables zero-shot transfer from 2D-trained policy to 3D cutting across the laborious training for 3D policy and thus certifying our versatility. Experimental results demonstrate our superior performance particularly in navigating situations beyond expert demonstrations surpassing state-of-the-art autoregressive and diffusion-based baselines for both 2D and 3D scenarios.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_Versatile_Navigation_Under_Partial_Observability_via_Value-guided_Diffusion_Policy_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.02176
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Versatile_Navigation_Under_Partial_Observability_via_Value-guided_Diffusion_Policy_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Versatile_Navigation_Under_Partial_Observability_via_Value-guided_Diffusion_Policy_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhang_Versatile_Navigation_Under_CVPR_2024_supplemental.pdf
null
All in One Framework for Multimodal Re-identification in the Wild
He Li, Mang Ye, Ming Zhang, Bo Du
In Re-identification (ReID) recent advancements yield noteworthy progress in both unimodal and cross-modal retrieval tasks. However the challenge persists in developing a unified framework that could effectively handle varying multimodal data including RGB infrared sketches and textual information. Additionally the emergence of large-scale models shows promising performance in various vision tasks but a foundation model for ReID is still missing. In response to these challenges a novel multimodal learning paradigm for ReID is introduced referred to as All-in-One (AIO) which harnesses a frozen pre-trained big model as an encoder enabling effective multimodal retrieval without additional fine-tuning. The diverse multimodal data in AIO are seamlessly tokenized into a unified space allowing the modality-shared frozen encoder to extract identity-consistent features comprehensively across all modalities. Furthermore a meticulously crafted ensemble of cross-modality heads is designed to guide the learning trajectory. AIO is the first framework to perform all-in-one ReID encompassing four commonly used modalities. Experiments on cross-modal and multimodal ReID reveal that AIO not only adeptly handles various modal data but also excels in challenging contexts showcasing exceptional performance in zero-shot and domain generalization scenarios. Code will be available at: https://github.com/lihe404/AIO.
https://openaccess.thecvf.com/content/CVPR2024/papers/Li_All_in_One_Framework_for_Multimodal_Re-identification_in_the_Wild_CVPR_2024_paper.pdf
http://arxiv.org/abs/2405.04741
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Li_All_in_One_Framework_for_Multimodal_Re-identification_in_the_Wild_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Li_All_in_One_Framework_for_Multimodal_Re-identification_in_the_Wild_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_All_in_One_CVPR_2024_supplemental.pdf
null
Looking 3D: Anomaly Detection with 2D-3D Alignment
Ankan Bhunia, Changjian Li, Hakan Bilen
Automatic anomaly detection based on visual cues holds practical significance in various domains such as manufacturing and product quality assessment. This paper introduces a new conditional anomaly detection problem which involves identifying anomalies in a query image by comparing it to a reference shape. To address this challenge we have created a large dataset BrokenChairs-180K consisting of around 180K images with diverse anomalies geometries and textures paired with 8143 reference 3D shapes. To tackle this task we have proposed a novel transformer-based approach that explicitly learns the correspondence between the query image and reference 3D shape via feature alignment and leverages a customized attention mechanism for anomaly detection. Our approach has been rigorously evaluated through comprehensive experiments serving as a benchmark for future research in this domain.
https://openaccess.thecvf.com/content/CVPR2024/papers/Bhunia_Looking_3D_Anomaly_Detection_with_2D-3D_Alignment_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Bhunia_Looking_3D_Anomaly_Detection_with_2D-3D_Alignment_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Bhunia_Looking_3D_Anomaly_Detection_with_2D-3D_Alignment_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Bhunia_Looking_3D_Anomaly_CVPR_2024_supplemental.pdf
null
Purified and Unified Steganographic Network
Guobiao Li, Sheng Li, Zicong Luo, Zhenxing Qian, Xinpeng Zhang
Steganography is the art of hiding secret data into the cover media for covert communication. In recent years more and more deep neural network (DNN)-based steganographic schemes are proposed to train steganographic networks for secret embedding and recovery which are shown to be promising. Compared with the handcrafted steganographic tools steganographic networks tend to be large in size. It raises concerns on how to imperceptibly and effectively transmit these networks to the sender and receiver to facilitate the covert communication. To address this issue we propose in this paper a Purified and Unified Steganographic Network (PUSNet). It performs an ordinary machine learning task in a purified network which could be triggered into steganographic networks for secret embedding or recovery using different keys. We formulate the construction of the PUSNet into a sparse weight filling problem to flexibly switch between the purified and steganographic networks. We further instantiate our PUSNet as an image denoising network with two steganographic networks concealed for secret image embedding and recovery. Comprehensive experiments demonstrate that our PUSNet achieves good performance on secret image embedding secret image recovery and image denoising in a single architecture. It is also shown to be capable of imperceptibly carrying the steganographic networks in a purified network. Code is available at https://github.com/albblgb/PUSNet
https://openaccess.thecvf.com/content/CVPR2024/papers/Li_Purified_and_Unified_Steganographic_Network_CVPR_2024_paper.pdf
http://arxiv.org/abs/2402.17210
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Li_Purified_and_Unified_Steganographic_Network_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Li_Purified_and_Unified_Steganographic_Network_CVPR_2024_paper.html
CVPR 2024
null
null
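One possible reading of the "sparse weight filling" formulation in the PUSNet abstract above is sketched below: a secret key deterministically selects a sparse subset of a layer's weights, and those positions are overwritten with hidden weights only when the key-triggered mode is active. The mask construction, sparsity level, and layer design are my assumptions for illustration, not the paper's construction.

```python
# Hedged sketch: key-derived sparse mask that switches a layer between purified and hidden behaviour.
import torch
import torch.nn as nn
import torch.nn.functional as F

def key_mask(shape, key: int, sparsity: float = 0.9) -> torch.Tensor:
    # Derive a deterministic sparse boolean mask (about 10% of positions) from a secret key.
    g = torch.Generator().manual_seed(key)
    scores = torch.rand(shape, generator=g)
    thresh = scores.flatten().kthvalue(int(sparsity * scores.numel())).values
    return scores > thresh

class SwitchableConv(nn.Module):
    """A conv layer whose purified weights can be sparsely filled with key-selected hidden weights."""
    def __init__(self, cin: int, cout: int, key: int):
        super().__init__()
        self.purified = nn.Conv2d(cin, cout, 3, padding=1)
        self.stego_weight = nn.Parameter(torch.zeros_like(self.purified.weight))
        self.register_buffer("mask", key_mask(self.purified.weight.shape, key).float())

    def forward(self, x, secret_mode: bool = False):
        w = self.purified.weight
        if secret_mode:  # fill the key-selected sparse positions with the hidden weights
            w = w * (1 - self.mask) + self.stego_weight * self.mask
        return F.conv2d(x, w, self.purified.bias, padding=1)

layer = SwitchableConv(3, 16, key=1234)
x = torch.randn(1, 3, 32, 32)
plain = layer(x)                      # purified (e.g. denoising) behaviour
secret = layer(x, secret_mode=True)   # key-triggered steganographic behaviour
print(plain.shape, secret.shape)
```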
VS: Reconstructing Clothed 3D Human from Single Image via Vertex Shift
Leyuan Liu, Yuhan Li, Yunqi Gao, Changxin Gao, Yuanyuan Liu, Jingying Chen
Various applications require high-fidelity and artifact-free 3D human reconstructions. However current implicit function-based methods inevitably produce artifacts while existing deformation methods struggle to reconstruct high-fidelity humans wearing loose clothing. In this paper we propose a two-stage deformation method named Vertex Shift (VS) for reconstructing clothed 3D humans from single images. Specifically VS first stretches the estimated SMPL-X mesh into a coarse 3D human model using shift fields inferred from normal maps then refines the coarse 3D human model into a detailed 3D human model via a graph convolutional network embedded with implicit-function-learned features. This "stretch-refine" strategy addresses large deformations required for reconstructing loose clothing and delicate deformations for recovering intricate and detailed surfaces achieving high-fidelity reconstructions that faithfully convey the pose clothing and surface details from the input images. The graph convolutional network's ability to exploit neighborhood vertices coupled with the advantages inherited from the deformation methods ensures VS rarely produces artifacts like distortions and non-human shapes and never produces artifacts like holes broken parts and dismembered limbs. As a result VS can reconstruct high-fidelity and artifact-free clothed 3D humans from single images even under scenarios of challenging poses and loose clothing. Experimental results on three benchmarks and two in-the-wild datasets demonstrate that VS significantly outperforms current state-of-the-art methods. The code and models of VS are available for research purposes at https://github.com/starVisionTeam/VS.
https://openaccess.thecvf.com/content/CVPR2024/papers/Liu_VS_Reconstructing_Clothed_3D_Human_from_Single_Image_via_Vertex_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Liu_VS_Reconstructing_Clothed_3D_Human_from_Single_Image_via_Vertex_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Liu_VS_Reconstructing_Clothed_3D_Human_from_Single_Image_via_Vertex_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Liu_VS_Reconstructing_Clothed_CVPR_2024_supplemental.zip
null
PARA-Drive: Parallelized Architecture for Real-time Autonomous Driving
Xinshuo Weng, Boris Ivanovic, Yan Wang, Yue Wang, Marco Pavone
Recent works have proposed end-to-end autonomous vehicle (AV) architectures comprised of differentiable modules achieving state-of-the-art driving performance. While they provide advantages over the traditional perception-prediction-planning pipeline (e.g. removing information bottlenecks between components and alleviating integration challenges) they do so using a diverse combination of tasks modules and their interconnectivity. As of yet however there has been no systematic analysis of the necessity of these modules or the impact of their connectivity placement and internal representations on overall driving performance. Addressing this gap our work conducts a comprehensive exploration of the design space of end-to-end modular AV stacks. Our findings culminate in the development of PARA-Drive: a fully parallel end-to-end AV architecture. PARA-Drive not only achieves state-of-the-art performance in perception prediction and planning but also significantly enhances runtime speed by nearly 3x without compromising on interpretability or safety.
https://openaccess.thecvf.com/content/CVPR2024/papers/Weng_PARA-Drive_Parallelized_Architecture_for_Real-time_Autonomous_Driving_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Weng_PARA-Drive_Parallelized_Architecture_for_Real-time_Autonomous_Driving_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Weng_PARA-Drive_Parallelized_Architecture_for_Real-time_Autonomous_Driving_CVPR_2024_paper.html
CVPR 2024
null
null
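The PARA-Drive abstract above centers on running perception, prediction, and planning heads fully in parallel off a shared representation rather than chaining them. The toy module below illustrates only that topology; the backbone, head designs, and output shapes are placeholder assumptions and bear no relation to the actual PARA-Drive architecture.

```python
# Minimal sketch of a parallel (non-sequential) module layout over a shared BEV-like feature.
import torch
import torch.nn as nn

class ParallelAVStack(nn.Module):
    def __init__(self, bev_channels: int = 64, n_waypoints: int = 6):
        super().__init__()
        self.n_waypoints = n_waypoints
        self.backbone = nn.Sequential(                       # camera input -> shared feature
            nn.Conv2d(3, bev_channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(bev_channels, bev_channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.perception = nn.Conv2d(bev_channels, 1, 1)      # occupancy / segmentation logits
        self.prediction = nn.Conv2d(bev_channels, 2, 1)      # per-cell future motion field
        self.planning = nn.Sequential(                       # ego waypoints
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(bev_channels, n_waypoints * 2)
        )

    def forward(self, image):
        feat = self.backbone(image)
        # All heads read the same feature; none waits on another head's output.
        return {
            "occupancy": self.perception(feat),
            "flow": self.prediction(feat),
            "waypoints": self.planning(feat).view(-1, self.n_waypoints, 2),
        }

out = ParallelAVStack()(torch.randn(1, 3, 128, 128))
print({k: tuple(v.shape) for k, v in out.items()})
```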
TEA: Test-time Energy Adaptation
Yige Yuan, Bingbing Xu, Liang Hou, Fei Sun, Huawei Shen, Xueqi Cheng
Test Time Adaptation (TTA) aims to improve model generalizability when test data diverges from the training distribution with the distinct advantage of not requiring access to training data and processes which is especially valuable in the context of pre-trained models. However current TTA methods fail to address the fundamental issue: covariate shift i.e. the decreased generalizability can be attributed to the model's reliance on the marginal distribution of the training data which may impair model calibration and introduce confirmation bias. To address this we propose a novel energy-based perspective enhancing the model's perception of target data distributions without requiring access to training data or processes. Building on this perspective we introduce Test-time Energy Adaptation (TEA) which transforms the trained classifier into an energy-based model and aligns the model's distribution with the test data's enhancing its ability to perceive test distributions and thus improving overall generalizability. Extensive experiments across multiple tasks benchmarks and architectures demonstrate TEA's superior generalization performance against state-of-the-art methods. Further in-depth analyses reveal that TEA can equip the model with a comprehensive perception of test distribution ultimately paving the way toward improved generalization and calibration. Code is available at https://github.com/yuanyige/tea.
https://openaccess.thecvf.com/content/CVPR2024/papers/Yuan_TEA_Test-time_Energy_Adaptation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2311.14402
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Yuan_TEA_Test-time_Energy_Adaptation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Yuan_TEA_Test-time_Energy_Adaptation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yuan_TEA_Test-time_Energy_CVPR_2024_supplemental.pdf
null
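The energy-based view in the TEA abstract above admits a compact illustration: treat the classifier's logits as defining an energy E(x) = -logsumexp(f(x)) and lower the energy of test inputs by updating a small set of parameters. The sketch below is a simplified, hedged reading of that idea; the choice to adapt only normalization parameters, the single-step update, and the omission of SGLD-style negative sampling are assumptions, not the paper's full procedure.

```python
# Simplified test-time energy adaptation sketch (assumed hyper-parameters, no SGLD sampling).
import torch
import torch.nn as nn

def energy(logits: torch.Tensor) -> torch.Tensor:
    return -torch.logsumexp(logits, dim=1)        # lower energy = more "in-distribution"

def adapt_on_batch(model: nn.Module, x: torch.Tensor, lr: float = 1e-3, steps: int = 1):
    # Adapt only the affine parameters of normalization layers; leave the rest untouched.
    params = [p for m in model.modules() if isinstance(m, (nn.BatchNorm2d, nn.LayerNorm))
              for p in m.parameters()]
    opt = torch.optim.SGD(params, lr=lr)
    model.train()                                  # use current-batch statistics at test time
    for _ in range(steps):
        loss = energy(model(x)).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    model.eval()
    with torch.no_grad():
        return model(x).argmax(dim=1)              # predictions after adaptation

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
preds = adapt_on_batch(model, torch.randn(16, 3, 32, 32))
print(preds.shape)  # torch.Size([16])
```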
NEAT: Distilling 3D Wireframes from Neural Attraction Fields
Nan Xue, Bin Tan, Yuxi Xiao, Liang Dong, Gui-Song Xia, Tianfu Wu, Yujun Shen
This paper studies the problem of structured 3D reconstruction using wireframes that consist of line segments and junctions focusing on the computation of structured boundary geometries of scenes. Instead of leveraging matching-based solutions from 2D wireframes (or line segments) for 3D wireframe reconstruction as done in prior art we present NEAT a rendering-distilling formulation using neural fields to represent 3D line segments with 2D observations and bipartite matching for perceiving and distilling a sparse set of 3D global junctions. The proposed NEAT enjoys the joint optimization of the neural fields and the global junctions from scratch using view-dependent 2D observations without precomputed cross-view feature matching. Comprehensive experiments on the DTU and BlendedMVS datasets demonstrate our NEAT's superiority over state-of-the-art alternatives for 3D wireframe reconstruction. Moreover the distilled 3D global junctions by NEAT are a better initialization than SfM points for the recently-emerged 3D Gaussian Splatting for high-fidelity novel view synthesis using about 20 times fewer initial 3D points. Project page: https://xuenan.net/neat
https://openaccess.thecvf.com/content/CVPR2024/papers/Xue_NEAT_Distilling_3D_Wireframes_from_Neural_Attraction_Fields_CVPR_2024_paper.pdf
http://arxiv.org/abs/2307.10206
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Xue_NEAT_Distilling_3D_Wireframes_from_Neural_Attraction_Fields_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Xue_NEAT_Distilling_3D_Wireframes_from_Neural_Attraction_Fields_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Xue_NEAT_Distilling_3D_CVPR_2024_supplemental.pdf
null
Prompt Augmentation for Self-supervised Text-guided Image Manipulation
Rumeysa Bodur, Binod Bhattarai, Tae-Kyun Kim
Text-guided image editing finds applications in various creative and practical fields. While recent studies in image generation have advanced the field they often struggle with the dual challenges of coherent image transformation and context preservation. In response our work introduces prompt augmentation a method amplifying a single input prompt into several target prompts strengthening textual context and enabling localised image editing. Specifically we use the augmented prompts to delineate the intended manipulation area. We propose a Contrastive Loss tailored to driving effective image editing by displacing edited areas and drawing preserved regions closer. Acknowledging the continuous nature of image manipulations we further refine our approach by incorporating the similarity concept creating a Soft Contrastive Loss. The new losses are incorporated into the diffusion model demonstrating improved or competitive image editing results on public datasets and generated images over state-of-the-art approaches.
https://openaccess.thecvf.com/content/CVPR2024/papers/Bodur_Prompt_Augmentation_for_Self-supervised_Text-guided_Image_Manipulation_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Bodur_Prompt_Augmentation_for_Self-supervised_Text-guided_Image_Manipulation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Bodur_Prompt_Augmentation_for_Self-supervised_Text-guided_Image_Manipulation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Bodur_Prompt_Augmentation_for_CVPR_2024_supplemental.pdf
null
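The soft contrastive objective in the prompt augmentation abstract above can be illustrated with a small masked loss over feature maps: features in the intended edit region are pushed away from the source while features elsewhere are pulled toward it, with soft weighting by similarity. This is my reading of the abstract for illustration only, not the paper's loss; the hinge margin and weighting scheme are assumptions.

```python
# Hedged sketch of a soft, mask-weighted contrastive loss between source and edited features.
import torch
import torch.nn.functional as F

def soft_contrastive_loss(src_feat, edit_feat, edit_mask, margin: float = 0.5):
    """src_feat, edit_feat: (B, C, H, W) features of source / edited image.
    edit_mask: (B, 1, H, W) in [0, 1], 1 where the edit should happen."""
    sim = F.cosine_similarity(src_feat, edit_feat, dim=1, eps=1e-6).unsqueeze(1)  # (B,1,H,W)
    preserve = (1.0 - edit_mask) * (1.0 - sim)          # preserved regions: pull together
    edit = edit_mask * F.relu(sim - margin)             # edited regions: push apart (soft hinge)
    return (preserve + edit).mean()

src = torch.randn(2, 64, 32, 32)
edited = torch.randn(2, 64, 32, 32, requires_grad=True)
mask = (torch.rand(2, 1, 32, 32) > 0.7).float()
loss = soft_contrastive_loss(src, edited, mask)
loss.backward()
print(float(loss))
```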
Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs
Shiyu Xuan, Qingpei Guo, Ming Yang, Shiliang Zhang
Multi-modal Large Language Models (MLLMs) have shown remarkable capabilities in various multi-modal tasks. Nevertheless their performance in fine-grained image understanding tasks is still limited. To address this issue this paper proposes a new framework to enhance the fine-grained image understanding abilities of MLLMs. Specifically we present a new method for constructing the instruction tuning dataset at a low cost by leveraging annotations in existing datasets. A self-consistent bootstrapping method is also introduced to extend existing dense object annotations into high-quality referring-expression-bounding-box pairs. These methods enable the generation of high-quality instruction data which includes a wide range of fundamental abilities essential for fine-grained image perception. Moreover we argue that the visual encoder should be tuned during instruction tuning to mitigate the gap between full image perception and fine-grained image perception. Experimental results demonstrate the superior performance of our method. For instance our model exhibits a 5.2% accuracy improvement over Qwen-VL on GQA and surpasses the accuracy of Kosmos-2 by 24.7% on RefCOCO_val. We have also attained the top rank on the leaderboard of MMBench. This promising performance is achieved by training on only publicly available data making it easily reproducible. The models datasets and codes are publicly available at https://github.com/SY-Xuan/Pink.
https://openaccess.thecvf.com/content/CVPR2024/papers/Xuan_Pink_Unveiling_the_Power_of_Referential_Comprehension_for_Multi-modal_LLMs_CVPR_2024_paper.pdf
http://arxiv.org/abs/2310.00582
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Xuan_Pink_Unveiling_the_Power_of_Referential_Comprehension_for_Multi-modal_LLMs_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Xuan_Pink_Unveiling_the_Power_of_Referential_Comprehension_for_Multi-modal_LLMs_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Xuan_Pink_Unveiling_the_CVPR_2024_supplemental.pdf
null
LDP: Language-driven Dual-Pixel Image Defocus Deblurring Network
Hao Yang, Liyuan Pan, Yan Yang, Richard Hartley, Miaomiao Liu
Recovering sharp images from dual-pixel (DP) pairs with disparity-dependent blur is a challenging task. Existing blur map-based deblurring methods have demonstrated promising results. In this paper we propose to the best of our knowledge the first framework to introduce the contrastive language-image pre-training framework (CLIP) to achieve accurate blur map estimation from DP pairs in an unsupervised manner. To this end we first carefully design text prompts to enable CLIP to understand blur-related geometric prior knowledge from the DP pair. Then we propose a format for feeding the stereo DP pair to CLIP without any fine-tuning even though CLIP is pre-trained on monocular images. Given the estimated blur map we introduce a blur-prior attention block a blur-weighting loss and a blur-aware loss to recover the all-in-focus image. Our method achieves state-of-the-art performance in extensive experiments (see Fig. 1).
https://openaccess.thecvf.com/content/CVPR2024/papers/Yang_LDP_Language-driven_Dual-Pixel_Image_Defocus_Deblurring_Network_CVPR_2024_paper.pdf
http://arxiv.org/abs/2307.09815
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Yang_LDP_Language-driven_Dual-Pixel_Image_Defocus_Deblurring_Network_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Yang_LDP_Language-driven_Dual-Pixel_Image_Defocus_Deblurring_Network_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yang_LDP_Language-driven_Dual-Pixel_CVPR_2024_supplemental.pdf
null
MMSum: A Dataset for Multimodal Summarization and Thumbnail Generation of Videos
Jielin Qiu, Jiacheng Zhu, William Han, Aditesh Kumar, Karthik Mittal, Claire Jin, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Ding Zhao, Bo Li, Lijuan Wang
Multimodal summarization with multimodal output (MSMO) has emerged as a promising research direction. Nonetheless numerous limitations exist within existing public MSMO datasets including insufficient maintenance data inaccessibility limited size and the absence of proper categorization which pose significant challenges. To address these challenges and provide a comprehensive dataset for this new direction we have meticulously curated the MMSum dataset. Our new dataset features (1) Human-validated summaries for both video and textual content providing superior human instruction and labels for multimodal learning. (2) Comprehensively and meticulously arranged categorization spanning 17 principal categories and 170 subcategories to encapsulate a diverse array of real-world scenarios. (3) Benchmark tests performed on the proposed dataset to assess various tasks and methods including video summarization text summarization and multimodal summarization. To champion accessibility and collaboration we released the MMSum dataset and the data collection tool as fully open-source resources fostering transparency and accelerating future developments.
https://openaccess.thecvf.com/content/CVPR2024/papers/Qiu_MMSum_A_Dataset_for_Multimodal_Summarization_and_Thumbnail_Generation_of_CVPR_2024_paper.pdf
http://arxiv.org/abs/2306.04216
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Qiu_MMSum_A_Dataset_for_Multimodal_Summarization_and_Thumbnail_Generation_of_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Qiu_MMSum_A_Dataset_for_Multimodal_Summarization_and_Thumbnail_Generation_of_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Qiu_MMSum_A_Dataset_CVPR_2024_supplemental.pdf
null
HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data
Qifan Yu, Juncheng Li, Longhui Wei, Liang Pang, Wentao Ye, Bosheng Qin, Siliang Tang, Qi Tian, Yueting Zhuang
Multi-modal Large Language Models (MLLMs) tuned on machine-generated instruction-following data have demonstrated remarkable performance in various multimodal understanding and generation tasks. However the hallucinations inherent in machine-generated data which could lead to hallucinatory outputs in MLLMs remain under-explored. This work aims to investigate various hallucinations (i.e. object relation attribute hallucinations) and mitigate those hallucinatory toxicities in large-scale machine-generated visual instruction datasets. Drawing on the human ability to identify factual errors we present a novel hallucination detection and elimination framework HalluciDoctor based on the cross-checking paradigm. We use our framework to identify and eliminate hallucinations in the training data automatically. Interestingly HalluciDoctor also indicates that spurious correlations arising from long-tail object co-occurrences contribute to hallucinations. Based on that we execute counterfactual visual instruction expansion to balance data distribution thereby enhancing MLLMs' resistance to hallucinations. Comprehensive experiments on hallucination evaluation benchmarks show that our method successfully mitigates 44.6% of hallucinations (a relative reduction) and maintains competitive performance compared to LLaVA. The data and code for this paper are publicly available.
https://openaccess.thecvf.com/content/CVPR2024/papers/Yu_HalluciDoctor_Mitigating_Hallucinatory_Toxicity_in_Visual_Instruction_Data_CVPR_2024_paper.pdf
http://arxiv.org/abs/2311.13614
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Yu_HalluciDoctor_Mitigating_Hallucinatory_Toxicity_in_Visual_Instruction_Data_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Yu_HalluciDoctor_Mitigating_Hallucinatory_Toxicity_in_Visual_Instruction_Data_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yu_HalluciDoctor_Mitigating_Hallucinatory_CVPR_2024_supplemental.pdf
null
Pre-trained Vision and Language Transformers Are Few-Shot Incremental Learners
Keon-Hee Park, Kyungwoo Song, Gyeong-Moon Park
Few-Shot Class Incremental Learning (FSCIL) is a task that requires a model to learn new classes incrementally without forgetting when only a few samples for each class are given. FSCIL encounters two significant challenges: catastrophic forgetting and overfitting and these challenges have driven prior studies to primarily rely on shallow models such as ResNet-18. Even though their limited capacity can mitigate both forgetting and overfitting issues it leads to inadequate knowledge transfer during few-shot incremental sessions. In this paper we argue that large models such as vision and language transformers pre-trained on large datasets can be excellent few-shot incremental learners. To this end we propose a novel FSCIL framework called PriViLege Pre-trained Vision and Language transformers with prompting functions and knowledge distillation. Our framework effectively addresses the challenges of catastrophic forgetting and overfitting in large models through new pre-trained knowledge tuning (PKT) and two losses: entropy-based divergence loss and semantic knowledge distillation loss. Experimental results show that the proposed PriViLege significantly outperforms the existing state-of-the-art methods with a large margin e.g. +9.38% in CUB200 +20.58% in CIFAR-100 and +13.36% in miniImageNet. Our implementation code is available at https://github.com/KHU-AGI/PriViLege.
https://openaccess.thecvf.com/content/CVPR2024/papers/Park_Pre-trained_Vision_and_Language_Transformers_Are_Few-Shot_Incremental_Learners_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.02117
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Park_Pre-trained_Vision_and_Language_Transformers_Are_Few-Shot_Incremental_Learners_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Park_Pre-trained_Vision_and_Language_Transformers_Are_Few-Shot_Incremental_Learners_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Park_Pre-trained_Vision_and_CVPR_2024_supplemental.pdf
null
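The PriViLege abstract above names two losses: an entropy-based divergence loss and a semantic knowledge distillation loss. The sketch below illustrates one plausible form of each, based only on the abstract; the exact formulations, temperature, and weighting in the paper may differ, so treat every term here as an assumption.

```python
# Hedged sketch of an entropy-based divergence term and a temperature-scaled KD term.
import torch
import torch.nn.functional as F

def entropy_divergence_loss(logits: torch.Tensor) -> torch.Tensor:
    probs = logits.softmax(dim=1)
    sample_entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()  # confident per sample
    marginal = probs.mean(dim=0)
    marginal_entropy = -(marginal * marginal.clamp_min(1e-8).log()).sum()      # diverse overall
    return sample_entropy - marginal_entropy

def semantic_kd_loss(student_logits, teacher_logits, tau: float = 2.0):
    # Match student predictions to a (e.g. language-derived) teacher distribution.
    return F.kl_div(F.log_softmax(student_logits / tau, dim=1),
                    F.softmax(teacher_logits / tau, dim=1),
                    reduction="batchmean") * tau * tau

student = torch.randn(8, 100, requires_grad=True)
teacher = torch.randn(8, 100)
loss = entropy_divergence_loss(student) + 0.5 * semantic_kd_loss(student, teacher)
loss.backward()
print(float(loss))
```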
Guess The Unseen: Dynamic 3D Scene Reconstruction from Partial 2D Glimpses
Inhee Lee, Byungjun Kim, Hanbyul Joo
In this paper we present a method to reconstruct the world and multiple dynamic humans in 3D from a monocular video input. As a key idea we represent both the world and multiple humans via the recently emerging 3D Gaussian Splatting (3D-GS) representation enabling us to conveniently and efficiently compose and render them together. In particular we address the scenarios with severely limited and sparse observations in 3D human reconstruction a common challenge encountered in the real world. To tackle this challenge we introduce a novel approach to optimize the 3D-GS representation in a canonical space by fusing the sparse cues in the common space where we leverage a pre-trained 2D diffusion model to synthesize unseen views while keeping the consistency with the observed 2D appearances. We demonstrate that our method can reconstruct high-quality animatable 3D humans in various challenging examples in the presence of occlusion image crops few-shot and extremely sparse observations. After reconstruction our method is capable of not only rendering the scene in any novel views at arbitrary time instances but also editing the 3D scene by removing individual humans or applying different motions for each human. Through various experiments we demonstrate the quality and efficiency of our methods over alternative existing approaches.
https://openaccess.thecvf.com/content/CVPR2024/papers/Lee_Guess_The_Unseen_Dynamic_3D_Scene_Reconstruction_from_Partial_2D_CVPR_2024_paper.pdf
http://arxiv.org/abs/2404.14410
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Lee_Guess_The_Unseen_Dynamic_3D_Scene_Reconstruction_from_Partial_2D_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Lee_Guess_The_Unseen_Dynamic_3D_Scene_Reconstruction_from_Partial_2D_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Lee_Guess_The_Unseen_CVPR_2024_supplemental.pdf
null
C^2RV: Cross-Regional and Cross-View Learning for Sparse-View CBCT Reconstruction
Yiqun Lin, Jiewen Yang, Hualiang Wang, Xinpeng Ding, Wei Zhao, Xiaomeng Li
Cone beam computed tomography (CBCT) is an important imaging technology widely used in medical scenarios such as diagnosis and preoperative planning. Using fewer projection views to reconstruct CT also known as sparse-view reconstruction can reduce ionizing radiation and further benefit interventional radiology. Compared with sparse-view reconstruction for traditional parallel/fan-beam CT CBCT reconstruction is more challenging due to the increased dimensionality caused by the measurement process based on cone-shaped X-ray beams. As a 2D-to-3D reconstruction problem although implicit neural representations have been introduced to enable efficient training only local features are considered and different views are processed equally in previous works resulting in spatial inconsistency and poor performance on complicated anatomies. To this end we propose C^2RV by leveraging explicit multi-scale volumetric representations to enable cross-regional learning in the 3D space. Additionally the scale-view cross-attention module is introduced to adaptively aggregate multi-scale and multi-view features. Extensive experiments demonstrate that our C^2RV achieves consistent and significant improvement over previous state-of-the-art methods on datasets with diverse anatomy. Code is available at https://github.com/xmed-lab/C2RV-CBCT.
https://openaccess.thecvf.com/content/CVPR2024/papers/Lin_C2RV_Cross-Regional_and_Cross-View_Learning_for_Sparse-View_CBCT_Reconstruction_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Lin_C2RV_Cross-Regional_and_Cross-View_Learning_for_Sparse-View_CBCT_Reconstruction_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Lin_C2RV_Cross-Regional_and_Cross-View_Learning_for_Sparse-View_CBCT_Reconstruction_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Lin_C2RV_Cross-Regional_and_CVPR_2024_supplemental.pdf
null
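The scale-view cross-attention module named in the C^2RV abstract above suggests a simple illustration: each query point attends over features gathered from multiple scales and multiple projection views, so the contribution of each (scale, view) pair is weighted adaptively rather than averaged. The block below is a generic sketch under that reading; all dimensions and the single-block design are assumptions, not the paper's module.

```python
# Sketch of per-point cross-attention over stacked (scale, view) features (assumed shapes).
import torch
import torch.nn as nn

class ScaleViewCrossAttention(nn.Module):
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, point_query, scale_view_feats):
        """point_query: (B, N_points, C); scale_view_feats: (B, N_points, S*V, C),
        i.e. one feature per (scale, view) pair for every query point."""
        b, n, sv, c = scale_view_feats.shape
        q = point_query.reshape(b * n, 1, c)
        kv = scale_view_feats.reshape(b * n, sv, c)
        fused, _ = self.attn(q, kv, kv)              # adaptively weight scales and views
        return self.norm(point_query + fused.reshape(b, n, c))

block = ScaleViewCrossAttention()
out = block(torch.randn(2, 1024, 128), torch.randn(2, 1024, 12, 128))
print(out.shape)  # torch.Size([2, 1024, 128])
```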
HyperDreamBooth: HyperNetworks for Fast Personalization of Text-to-Image Models
Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Wei Wei, Tingbo Hou, Yael Pritch, Neal Wadhwa, Michael Rubinstein, Kfir Aberman
Personalization has emerged as a prominent aspect within the field of generative AI enabling the synthesis of individuals in diverse contexts and styles while retaining high-fidelity to their identities. However the process of personalization presents inherent challenges in terms of time and memory requirements. Fine-tuning each personalized model needs considerable GPU time investment and storing a personalized model per subject can be demanding in terms of storage capacity. To overcome these challenges we propose HyperDreamBooth - a hypernetwork capable of efficiently generating a small set of personalized weights from a single image of a person. By composing these weights into the diffusion model coupled with fast finetuning HyperDreamBooth can generate a person's face in various contexts and styles with high subject details while also preserving the model's crucial knowledge of diverse styles and semantic modifications. Our method achieves personalization on faces in roughly 20 seconds 25x faster than DreamBooth and 125x faster than Textual Inversion using as few as one reference image with the same quality and style diversity as DreamBooth. Also our method yields a model that is 10000x smaller than a normal DreamBooth model.
https://openaccess.thecvf.com/content/CVPR2024/papers/Ruiz_HyperDreamBooth_HyperNetworks_for_Fast_Personalization_of_Text-to-Image_Models_CVPR_2024_paper.pdf
http://arxiv.org/abs/2307.06949
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Ruiz_HyperDreamBooth_HyperNetworks_for_Fast_Personalization_of_Text-to-Image_Models_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Ruiz_HyperDreamBooth_HyperNetworks_for_Fast_Personalization_of_Text-to-Image_Models_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ruiz_HyperDreamBooth_HyperNetworks_for_CVPR_2024_supplemental.pdf
null
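The HyperDreamBooth abstract above hinges on a hypernetwork that predicts a small set of personalized weights from a single face image. The sketch below shows only the generic mechanism of mapping an identity embedding to low-rank (LoRA-style) weight residuals for named layers; the embedding source, layer selection, rank, and sizes are all placeholder assumptions rather than the paper's design.

```python
# Hedged sketch: a hypernetwork emitting low-rank weight deltas per target layer (assumed shapes).
import torch
import torch.nn as nn

class LowRankHyperNet(nn.Module):
    def __init__(self, embed_dim: int, layer_shapes: dict, rank: int = 4):
        super().__init__()
        self.rank = rank
        self.shapes = layer_shapes
        self.heads = nn.ModuleDict({
            name: nn.Linear(embed_dim, rank * (out_f + in_f))
            for name, (out_f, in_f) in layer_shapes.items()
        })

    def forward(self, face_embedding: torch.Tensor) -> dict:
        deltas = {}
        for name, (out_f, in_f) in self.shapes.items():
            flat = self.heads[name](face_embedding)
            a = flat[:, : self.rank * out_f].view(-1, out_f, self.rank)
            b = flat[:, self.rank * out_f :].view(-1, self.rank, in_f)
            deltas[name] = a @ b                      # (B, out_f, in_f) low-rank weight update
        return deltas

hyper = LowRankHyperNet(embed_dim=512,
                        layer_shapes={"attn_to_q": (320, 320), "attn_to_v": (320, 320)})
deltas = hyper(torch.randn(1, 512))
print({k: tuple(v.shape) for k, v in deltas.items()})  # each (1, 320, 320)
```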
Language-guided Image Reflection Separation
Haofeng Zhong, Yuchen Hong, Shuchen Weng, Jinxiu Liang, Boxin Shi
This paper studies the problem of language-guided reflection separation which aims at addressing the ill-posed reflection separation problem by introducing language descriptions to provide layer content. We propose a unified framework to solve this problem which leverages the cross-attention mechanism with contrastive learning strategies to construct the correspondence between language descriptions and image layers. A gated network design and a randomized training strategy are employed to tackle the recognizable layer ambiguity. The effectiveness of the proposed method is validated by the significant performance advantage over existing reflection separation methods on both quantitative and qualitative comparisons.
https://openaccess.thecvf.com/content/CVPR2024/papers/Zhong_Language-guided_Image_Reflection_Separation_CVPR_2024_paper.pdf
http://arxiv.org/abs/2402.11874
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Zhong_Language-guided_Image_Reflection_Separation_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Zhong_Language-guided_Image_Reflection_Separation_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhong_Language-guided_Image_Reflection_CVPR_2024_supplemental.pdf
null
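The language-guided reflection separation abstract above mentions a cross-attention correspondence between descriptions and image layers together with a gated network design. The block below is only a generic gated text-to-image cross-attention sketch consistent with that description; the gating form, dimensions, and placement are assumptions and not the paper's architecture.

```python
# Illustrative gated cross-attention from image tokens to language-description tokens.
import torch
import torch.nn as nn

class GatedTextCrossAttention(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())
        self.norm = nn.LayerNorm(dim)

    def forward(self, img_tokens, text_tokens):
        """img_tokens: (B, N, C) flattened image features; text_tokens: (B, T, C)."""
        attended, _ = self.attn(img_tokens, text_tokens, text_tokens)
        g = self.gate(img_tokens)                  # per-token gate in (0, 1)
        return self.norm(img_tokens + g * attended)

block = GatedTextCrossAttention()
out = block(torch.randn(2, 4096, 256), torch.randn(2, 20, 256))
print(out.shape)  # torch.Size([2, 4096, 256])
```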
HardMo: A Large-Scale Hardcase Dataset for Motion Capture
Jiaqi Liao, Chuanchen Luo, Yinuo Du, Yuxi Wang, Xucheng Yin, Man Zhang, Zhaoxiang Zhang, Junran Peng
Recent years have witnessed rapid progress in monocular human mesh recovery. Despite their impressive performance on public benchmarks existing methods are vulnerable to unusual poses which prevents them from being deployed in challenging scenarios such as dance and martial arts. This issue is mainly attributed to the domain gap induced by the data scarcity in relevant cases. Most existing datasets are captured in constrained scenarios and lack samples of such complex movements. For this reason we propose a data collection pipeline comprising automatic crawling precise annotation and hardcase mining. Based on this pipeline we establish a large dataset in a short time. The dataset named HardMo contains 7M images along with precise annotations covering 15 categories of dance and 14 categories of martial arts. Empirically we find that the prediction failure in dance and martial arts is mainly characterized by the misalignment of hand-wrist and foot-ankle. To dig deeper into the two hardcases we leverage the proposed automatic pipeline to filter collected data and construct two subsets named HardMo-Hand and HardMo-Foot. Extensive experiments demonstrate the effectiveness of the annotation pipeline and the data-driven solution to failure cases. Specifically after being trained on HardMo HMR an early pioneering method can even outperform the current state of the art 4DHumans on our benchmarks.
https://openaccess.thecvf.com/content/CVPR2024/papers/Liao_HardMo_A_Large-Scale_Hardcase_Dataset_for_Motion_Capture_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Liao_HardMo_A_Large-Scale_Hardcase_Dataset_for_Motion_Capture_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Liao_HardMo_A_Large-Scale_Hardcase_Dataset_for_Motion_Capture_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Liao_HardMo_A_Large-Scale_CVPR_2024_supplemental.pdf
null
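The hardcase-mining step described in the HardMo abstract above reduces to a simple filter: keep samples whose predicted wrist or ankle keypoints deviate most from the reference keypoints. The sketch below shows that filter in isolation; the error threshold and the joint indexing convention are assumptions made only for illustration.

```python
# Hedged sketch of hardcase mining by wrist/ankle keypoint error (assumed threshold and indices).
import numpy as np

WRIST_ANKLE = [9, 10, 15, 16]   # assumed COCO-style indices for wrists and ankles

def mine_hardcases(pred_joints, gt_joints, threshold: float = 0.08):
    """pred_joints, gt_joints: (N, J, 2) normalized 2D keypoints. Returns indices of hard samples."""
    err = np.linalg.norm(pred_joints - gt_joints, axis=-1)     # (N, J) per-joint error
    hard_err = err[:, WRIST_ANKLE].max(axis=1)                 # worst wrist/ankle misalignment
    return np.where(hard_err > threshold)[0]

pred = np.random.rand(1000, 17, 2)
gt = pred + 0.02 * np.random.randn(1000, 17, 2)
hard_idx = mine_hardcases(pred, gt)
print(f"{len(hard_idx)} / 1000 samples flagged as hardcases")
```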
View-Category Interactive Sharing Transformer for Incomplete Multi-View Multi-Label Learning
Shilong Ou, Zhe Xue, Yawen Li, Meiyu Liang, Yuanqiang Cai, Junjiang Wu
As a problem often encountered in real-world scenarios multi-view multi-label learning has attracted considerable research attention. However due to oversights in data collection and uncertainties in manual annotation real-world data often suffer from incompleteness. Regrettably most existing multi-view multi-label learning methods sidestep missing views and labels. Furthermore they often neglect the potential of harnessing complementary information between views and labels thus constraining their classification capabilities. To address these challenges we propose a view-category interactive sharing transformer tailored for incomplete multi-view multi-label learning. Within this network we incorporate a two-layer transformer module to characterize the interplay between views and labels. Additionally to address view incompleteness a KNN-style missing view generation module is employed. Finally we introduce a view-category consistency guided embedding enhancement module to align different views and improve the discriminating power of the embeddings. Collectively these modules synergistically integrate to classify the incomplete multi-view multi-label data effectively. Extensive experiments substantiate that our approach outperforms the existing state-of-the-art methods.
https://openaccess.thecvf.com/content/CVPR2024/papers/Ou_View-Category_Interactive_Sharing_Transformer_for_Incomplete_Multi-View_Multi-Label_Learning_CVPR_2024_paper.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2024/html/Ou_View-Category_Interactive_Sharing_Transformer_for_Incomplete_Multi-View_Multi-Label_Learning_CVPR_2024_paper.html
https://openaccess.thecvf.com/content/CVPR2024/html/Ou_View-Category_Interactive_Sharing_Transformer_for_Incomplete_Multi-View_Multi-Label_Learning_CVPR_2024_paper.html
CVPR 2024
https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ou_View-Category_Interactive_Sharing_CVPR_2024_supplemental.pdf
null
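The KNN-style missing view generation module mentioned in the incomplete multi-view multi-label abstract above can be illustrated with a small imputation routine: for a sample missing a view, find its nearest neighbours among samples that do have that view (measured in an available view's feature space) and average their features. This is an illustrative reading of the module name only; the actual generation scheme, metric, and K are assumptions.

```python
# Hedged sketch of KNN-based imputation of a missing view's features from donor samples.
import torch

def knn_impute_view(avail_feat, target_feat, target_mask, k: int = 5):
    """avail_feat: (N, D) features of a view every sample has; target_feat: (N, D2) features of the
    incomplete view; target_mask: (N,) bool, True where the target view is observed."""
    donors = target_mask.nonzero(as_tuple=True)[0]
    missing = (~target_mask).nonzero(as_tuple=True)[0]
    imputed = target_feat.clone()
    if len(missing) == 0 or len(donors) == 0:
        return imputed
    dist = torch.cdist(avail_feat[missing], avail_feat[donors])        # (M, |donors|)
    knn = dist.topk(k=min(k, len(donors)), largest=False).indices      # nearest donor samples
    imputed[missing] = target_feat[donors][knn].mean(dim=1)            # average donor features
    return imputed

avail = torch.randn(100, 32)
target = torch.randn(100, 48)
mask = torch.rand(100) > 0.3
completed = knn_impute_view(avail, target, mask)
print(completed.shape)  # torch.Size([100, 48])
```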