title | authors | abstract | pdf | arXiv | bibtex | url | detail_url | tags | supp |
---|---|---|---|---|---|---|---|---|---|
When StyleGAN Meets Stable Diffusion: a W+ Adapter for Personalized Image Generation | Xiaoming Li, Xinyu Hou, Chen Change Loy | Text-to-image diffusion models have remarkably excelled in producing diverse, high-quality, and photo-realistic images. This advancement has spurred a growing interest in incorporating specific identities into generated content. Most current methods employ an inversion approach to embed a target visual concept into the text embedding space using a single reference image. However, the newly synthesized faces either closely resemble the reference image in terms of facial attributes, such as expression, or exhibit a reduced capacity for identity preservation. Text descriptions intended to guide the facial attributes of the synthesized face may fall short, owing to the intricate entanglement of identity information with identity-irrelevant facial attributes derived from the reference image. To address these issues, we present the novel use of the extended StyleGAN embedding space $\mathcal{W}_+$ to achieve enhanced identity preservation and disentanglement for diffusion models. By aligning this semantically meaningful human face latent space with text-to-image diffusion models, we succeed in maintaining high fidelity in identity preservation, coupled with the capacity for semantic editing. Additionally, we propose new training objectives to balance the influences of both prompt and identity conditions, ensuring that the identity-irrelevant background remains negligibly affected during facial attribute modifications. Extensive experiments reveal that our method adeptly generates personalized text-to-image outputs that are not only compatible with prompt descriptions but also amenable to common StyleGAN editing directions in diverse settings. Our code and model are available at https://github.com/csxmli2016/w-plus-adapter. | https://openaccess.thecvf.com/content/CVPR2024/papers/Li_When_StyleGAN_Meets_Stable_Diffusion_a_W_Adapter_for_Personalized_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Li_When_StyleGAN_Meets_Stable_Diffusion_a_W_Adapter_for_Personalized_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Li_When_StyleGAN_Meets_Stable_Diffusion_a_W_Adapter_for_Personalized_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_When_StyleGAN_Meets_CVPR_2024_supplemental.pdf | null |
ToNNO: Tomographic Reconstruction of a Neural Network's Output for Weakly Supervised Segmentation of 3D Medical Images | Marius Schmidt-Mengin, Alexis Benichoux, Shibeshih Belachew, Nikos Komodakis, Nikos Paragios | Annotating lots of 3D medical images for training segmentation models is time-consuming. The goal of weakly supervised semantic segmentation is to train segmentation models without using any ground truth segmentation masks. Our work addresses the case where only image-level categorical labels, indicating the presence or absence of a particular region of interest (such as tumours or lesions), are available. Most existing methods rely on class activation mapping (CAM). We propose a novel approach, ToNNO, which is based on the Tomographic reconstruction of a Neural Network's Output. Our technique extracts stacks of slices at different angles from the input 3D volume, feeds these slices to a 2D encoder, and applies the inverse Radon transform in order to reconstruct a 3D heatmap of the encoder's predictions. This generic method makes it possible to perform dense prediction tasks on 3D volumes using any 2D image encoder. We apply it to weakly supervised medical image segmentation by training the 2D encoder to output high values for slices containing the regions of interest. We test it on four large-scale medical image datasets and outperform 2D CAM methods. We then extend ToNNO by combining tomographic reconstruction with CAM methods, proposing Averaged CAM and Tomographic CAM, which obtain even better results. | https://openaccess.thecvf.com/content/CVPR2024/papers/Schmidt-Mengin_ToNNO_Tomographic_Reconstruction_of_a_Neural_Networks_Output_for_Weakly_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Schmidt-Mengin_ToNNO_Tomographic_Reconstruction_of_a_Neural_Networks_Output_for_Weakly_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Schmidt-Mengin_ToNNO_Tomographic_Reconstruction_of_a_Neural_Networks_Output_for_Weakly_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Schmidt-Mengin_ToNNO_Tomographic_Reconstruction_CVPR_2024_supplemental.pdf | null |
Learning to Navigate Efficiently and Precisely in Real Environments | Guillaume Bono, Hervé Poirier, Leonid Antsfeld, Gianluca Monaci, Boris Chidlovskii, Christian Wolf | In the context of autonomous navigation of terrestrial robots, the creation of realistic models for agent dynamics and sensing is a widespread habit in the robotics literature and in commercial applications, where they are used for model-based control and/or for localization and mapping. The more recent Embodied AI literature, on the other hand, focuses on modular or end-to-end agents trained in simulators like Habitat or AI-Thor, where the emphasis is put on photo-realistic rendering and scene diversity, but high-fidelity robot motion is assigned a less privileged role. The resulting sim2real gap significantly impacts transfer of the trained models to real robotic platforms. In this work, we explore end-to-end training of agents in simulation in settings which minimize the sim2real gap both in sensing and in actuation. Our agent directly predicts (discretized) velocity commands, which are maintained through closed-loop control in the real robot. The behavior of the real robot (including the underlying low-level controller) is identified and simulated in a modified Habitat simulator. Noise models for odometry and localization further contribute to lowering the sim2real gap. We evaluate on real navigation scenarios, explore different localization and point goal calculation methods, and report significant gains in performance and robustness compared to prior work. | https://openaccess.thecvf.com/content/CVPR2024/papers/Bono_Learning_to_Navigate_Efficiently_and_Precisely_in_Real_Environments_CVPR_2024_paper.pdf | http://arxiv.org/abs/2401.14349 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Bono_Learning_to_Navigate_Efficiently_and_Precisely_in_Real_Environments_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Bono_Learning_to_Navigate_Efficiently_and_Precisely_in_Real_Environments_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Bono_Learning_to_Navigate_CVPR_2024_supplemental.mp4 | null |
CAM Back Again: Large Kernel CNNs from a Weakly Supervised Object Localization Perspective | Shunsuke Yasuki, Masato Taki | Recently, convolutional neural networks (CNNs) with large kernels have attracted much attention in the computer vision field, following the success of Vision Transformers. Large kernel CNNs have been reported to perform well in downstream vision tasks as well as in classification performance. The high performance of large kernel CNNs in downstream tasks has been attributed to the large effective receptive field (ERF) produced by large kernels, but this view has not been fully tested. We therefore revisit the performance of large kernel CNNs in downstream tasks, focusing on the weakly supervised object localization (WSOL) task. WSOL, a difficult downstream task that is not fully supervised, provides a new angle from which to explore the capabilities of large kernel CNNs. Our study compares the modern large kernel CNNs ConvNeXt, RepLKNet, and SLaK to test the validity of the naive expectation that ERF size is important for improving downstream task performance. Our analysis of the factors contributing to high performance provides a different perspective, in which the main factor is feature map improvement. Furthermore, we find that modern CNNs are robust to the CAM problem of activating only local regions of objects, which has long been discussed in WSOL. CAM is the most classic WSOL method but, because of the above-mentioned problem, it is often used merely as a baseline for comparison. However, experiments on the CUB-200-2011 dataset show that simply combining a large kernel CNN, CAM, and simple data augmentation methods can achieve performance (90.99% MaxBoxAcc) comparable to the latest WSOL method, which is CNN-based and requires special training or complex post-processing. | https://openaccess.thecvf.com/content/CVPR2024/papers/Yasuki_CAM_Back_Again_Large_Kernel_CNNs_from_a_Weakly_Supervised_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.06676 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Yasuki_CAM_Back_Again_Large_Kernel_CNNs_from_a_Weakly_Supervised_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Yasuki_CAM_Back_Again_Large_Kernel_CNNs_from_a_Weakly_Supervised_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yasuki_CAM_Back_Again_CVPR_2024_supplemental.pdf | null |
VkD: Improving Knowledge Distillation using Orthogonal Projections | Roy Miles, Ismail Elezi, Jiankang Deng | Knowledge distillation is an effective method for training small and efficient deep learning models. However, the efficacy of a single method can degenerate when transferring to other tasks, modalities, or even other architectures. To address this limitation, we propose a novel constrained feature distillation method. This method is derived from a small set of core principles, which results in two emerging components: an orthogonal projection and a task-specific normalisation. Equipped with both of these components, our transformer models can outperform all previous methods on ImageNet and reach up to a 4.4% relative improvement over the previous state-of-the-art methods. To further demonstrate the generality of our method, we apply it to object detection and image generation, where we obtain consistent and substantial performance improvements over the state of the art. Code and models are publicly available. | https://openaccess.thecvf.com/content/CVPR2024/papers/Miles_VkD_Improving_Knowledge_Distillation_using_Orthogonal_Projections_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Miles_VkD_Improving_Knowledge_Distillation_using_Orthogonal_Projections_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Miles_VkD_Improving_Knowledge_Distillation_using_Orthogonal_Projections_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Miles_VkD_Improving_Knowledge_CVPR_2024_supplemental.pdf | null |
Putting the Object Back into Video Object Segmentation | Ho Kei Cheng, Seoung Wug Oh, Brian Price, Joon-Young Lee, Alexander Schwing | We present Cutie, a video object segmentation (VOS) network with object-level memory reading, which puts the object representation from memory back into the video object segmentation result. Recent works on VOS employ bottom-up pixel-level memory reading, which struggles due to matching noise, especially in the presence of distractors, resulting in lower performance on more challenging data. In contrast, Cutie performs top-down object-level memory reading by adapting a small set of object queries. Via those, it interacts with the bottom-up pixel features iteratively with a query-based object transformer (qt, hence Cutie). The object queries act as a high-level summary of the target object, while high-resolution feature maps are retained for accurate segmentation. Together with foreground-background masked attention, Cutie cleanly separates the semantics of the foreground object from the background. On the challenging MOSE dataset, Cutie improves by 8.7 J&F over XMem with a similar running time and improves by 4.2 J&F over DeAOT while being three times faster. Code is available at: hkchengrex.github.io/Cutie | https://openaccess.thecvf.com/content/CVPR2024/papers/Cheng_Putting_the_Object_Back_into_Video_Object_Segmentation_CVPR_2024_paper.pdf | http://arxiv.org/abs/2310.12982 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Cheng_Putting_the_Object_Back_into_Video_Object_Segmentation_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Cheng_Putting_the_Object_Back_into_Video_Object_Segmentation_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Cheng_Putting_the_Object_CVPR_2024_supplemental.pdf | null |
Concept Weaver: Enabling Multi-Concept Fusion in Text-to-Image Models | Gihyun Kwon, Simon Jenni, Dingzeyu Li, Joon-Young Lee, Jong Chul Ye, Fabian Caba Heilbron | While there has been significant progress in customizing text-to-image generation models, generating images that combine multiple personalized concepts remains challenging. In this work, we introduce Concept Weaver, a method for composing customized text-to-image diffusion models at inference time. Specifically, the method breaks the process into two steps: creating a template image aligned with the semantics of the input prompts, and then personalizing the template using a concept fusion strategy. The fusion strategy incorporates the appearance of the target concepts into the template image while retaining its structural details. The results indicate that our method can generate multiple custom concepts with higher identity fidelity compared to alternative approaches. Furthermore, the method is shown to seamlessly handle more than two concepts and closely follow the semantic meaning of the input prompt without blending appearances across different subjects. | https://openaccess.thecvf.com/content/CVPR2024/papers/Kwon_Concept_Weaver_Enabling_Multi-Concept_Fusion_in_Text-to-Image_Models_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.03913 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Kwon_Concept_Weaver_Enabling_Multi-Concept_Fusion_in_Text-to-Image_Models_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Kwon_Concept_Weaver_Enabling_Multi-Concept_Fusion_in_Text-to-Image_Models_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Kwon_Concept_Weaver_Enabling_CVPR_2024_supplemental.pdf | null |
PKU-DyMVHumans: A Multi-View Video Benchmark for High-Fidelity Dynamic Human Modeling | Xiaoyun Zheng, Liwei Liao, Xufeng Li, Jianbo Jiao, Rongjie Wang, Feng Gao, Shiqi Wang, Ronggang Wang | High-quality human reconstruction and photo-realistic rendering of a dynamic scene is a long-standing problem in computer vision and graphics. Despite considerable efforts invested in developing various capture systems and reconstruction algorithms, recent advancements still struggle with loose or oversized clothing and overly complex poses. In part, this is due to the challenges of acquiring high-quality human datasets. To facilitate the development of these fields, in this paper we present PKU-DyMVHumans, a versatile human-centric dataset for high-fidelity reconstruction and rendering of dynamic human scenarios from dense multi-view videos. It comprises 8.2 million frames captured by more than 56 synchronized cameras across diverse scenarios. These sequences comprise 32 human subjects across 45 different scenarios, each with highly detailed appearance and realistic human motion. Inspired by recent advancements in neural radiance field (NeRF)-based scene representations, we carefully set up an off-the-shelf framework that makes it easy to benchmark state-of-the-art NeRF-based implementations on the PKU-DyMVHumans dataset. This paves the way for various applications, such as fine-grained foreground/background decomposition, high-quality human reconstruction, and photo-realistic novel view synthesis of a dynamic scene. Extensive studies are performed on the benchmark, demonstrating new observations and challenges that emerge from using such high-fidelity dynamic data. The project page and data are available at: https://pku-dymvhumans.github.io. | https://openaccess.thecvf.com/content/CVPR2024/papers/Zheng_PKU-DyMVHumans_A_Multi-View_Video_Benchmark_for_High-Fidelity_Dynamic_Human_Modeling_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Zheng_PKU-DyMVHumans_A_Multi-View_Video_Benchmark_for_High-Fidelity_Dynamic_Human_Modeling_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Zheng_PKU-DyMVHumans_A_Multi-View_Video_Benchmark_for_High-Fidelity_Dynamic_Human_Modeling_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zheng_PKU-DyMVHumans_A_Multi-View_CVPR_2024_supplemental.pdf | null |
Cross-Domain Few-Shot Segmentation via Iterative Support-Query Correspondence Mining | Jiahao Nie, Yun Xing, Gongjie Zhang, Pei Yan, Aoran Xiao, Yap-Peng Tan, Alex C. Kot, Shijian Lu | Cross-Domain Few-Shot Segmentation (CD-FSS) poses the challenge of segmenting novel categories from a distinct domain using only limited exemplars. In this paper, we undertake a comprehensive study of CD-FSS and uncover two crucial insights: (i) the necessity of a fine-tuning stage to effectively transfer the learned meta-knowledge across domains, and (ii) the overfitting risk during naive fine-tuning due to the scarcity of novel category examples. With these insights, we propose a novel cross-domain fine-tuning strategy that addresses the challenging CD-FSS tasks. We first design Bi-directional Few-shot Prediction (BFP), which establishes support-query correspondence in a bi-directional manner, crafting augmented supervision to reduce the overfitting risk. Then we further extend BFP into the Iterative Few-shot Adaptor (IFA), a recursive framework that captures the support-query correspondence iteratively, targeting maximal exploitation of supervisory signals from the sparse novel category samples. Extensive empirical evaluations show that our method significantly outperforms the state of the art (+7.8%), which verifies that IFA tackles the cross-domain challenges and mitigates overfitting simultaneously. The code is available at: https://github.com/niejiahao1998/IFA. | https://openaccess.thecvf.com/content/CVPR2024/papers/Nie_Cross-Domain_Few-Shot_Segmentation_via_Iterative_Support-Query_Correspondence_Mining_CVPR_2024_paper.pdf | http://arxiv.org/abs/2401.08407 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Nie_Cross-Domain_Few-Shot_Segmentation_via_Iterative_Support-Query_Correspondence_Mining_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Nie_Cross-Domain_Few-Shot_Segmentation_via_Iterative_Support-Query_Correspondence_Mining_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Nie_Cross-Domain_Few-Shot_Segmentation_CVPR_2024_supplemental.pdf | null |
CausalPC: Improving the Robustness of Point Cloud Classification by Causal Effect Identification | Yuanmin Huang, Mi Zhang, Daizong Ding, Erling Jiang, Zhaoxiang Wang, Min Yang | Deep neural networks have demonstrated remarkable performance in point cloud classification. However, previous works show they are vulnerable to adversarial perturbations that can manipulate their predictions. Given the distinctive modality of point clouds, various attack strategies have emerged, posing challenges for existing defenses to achieve effective generalization. In this study, we for the first time introduce causal modeling to enhance the robustness of point cloud classification models. Our insight comes from the observation that adversarial examples closely resemble benign point clouds from the human perspective. In our causal modeling, we incorporate two critical variables: the structural information (standing for the key feature leading to the classification) and the hidden confounders (standing for the noise interfering with the classification). The resulting overall framework, CausalPC, consists of three sub-modules to identify the causal effect for robust classification. The framework is model-agnostic and adaptable for integration with various point cloud classifiers. Our approach significantly improves the adversarial robustness of three mainstream point cloud classification models on two benchmark datasets. For instance, the classification accuracy for DGCNN on ModelNet40 increases from 29.2% to 72.0% with CausalPC, whereas the best-performing baseline achieves only 42.4%. | https://openaccess.thecvf.com/content/CVPR2024/papers/Huang_CausalPC_Improving_the_Robustness_of_Point_Cloud_Classification_by_Causal_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Huang_CausalPC_Improving_the_Robustness_of_Point_Cloud_Classification_by_Causal_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Huang_CausalPC_Improving_the_Robustness_of_Point_Cloud_Classification_by_Causal_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Huang_CausalPC_Improving_the_CVPR_2024_supplemental.pdf | null |
LASA: Instance Reconstruction from Real Scans using A Large-scale Aligned Shape Annotation Dataset | Haolin Liu, Chongjie Ye, Yinyu Nie, Yingfan He, Xiaoguang Han | Instance shape reconstruction from a 3D scene involves recovering the full geometries of multiple objects at the semantic instance level. Many methods leverage data-driven learning due to the intricacies of scene complexity and significant indoor occlusions. Training these methods often requires a large-scale, high-quality dataset with shape annotations aligned and paired with real-world scans. Existing datasets are either synthetic or misaligned, restricting the performance of data-driven methods on real data. To this end, we introduce LASA, a Large-scale Aligned Shape Annotation Dataset comprising 10,412 high-quality CAD annotations aligned with 920 real-world scene scans from ArkitScenes, created manually by professional artists. On top of this, we propose a novel Diffusion-based Cross-Modal Shape Reconstruction (DisCo) method. It is empowered by a hybrid feature aggregation design to fuse multi-modal inputs and recover high-fidelity object geometries. Besides, we present an Occupancy-Guided 3D Object Detection (OccGOD) method and demonstrate that our shape annotations provide scene occupancy clues that can further improve 3D object detection. Supported by LASA, extensive experiments show that our methods achieve state-of-the-art performance in both instance-level scene reconstruction and 3D object detection tasks. | https://openaccess.thecvf.com/content/CVPR2024/papers/Liu_LASA_Instance_Reconstruction_from_Real_Scans_using_A_Large-scale_Aligned_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.12418 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Liu_LASA_Instance_Reconstruction_from_Real_Scans_using_A_Large-scale_Aligned_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Liu_LASA_Instance_Reconstruction_from_Real_Scans_using_A_Large-scale_Aligned_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Liu_LASA_Instance_Reconstruction_CVPR_2024_supplemental.pdf | null |
LaRE^2: Latent Reconstruction Error Based Method for Diffusion-Generated Image Detection | Yunpeng Luo, Junlong Du, Ke Yan, Shouhong Ding | The evolution of diffusion models has dramatically improved image generation quality, making it increasingly difficult to differentiate between real and generated images. This development, while impressive, also raises significant privacy and security concerns. In response, we propose a novel Latent REconstruction error guided feature REfinement method (LaRE^2) for detecting diffusion-generated images. We come up with the Latent Reconstruction Error (LaRE), the first reconstruction-error-based feature in the latent space for generated image detection. LaRE surpasses existing methods in terms of feature extraction efficiency while preserving the crucial cues required to differentiate between the real and the fake. To exploit LaRE, we propose an Error-Guided feature REfinement module (EGRE), which can refine the image feature guided by LaRE to enhance its discriminativeness. Our EGRE utilizes an align-then-refine mechanism, which effectively refines the image feature for generated-image detection from both spatial and channel perspectives. Extensive experiments on the large-scale GenImage benchmark demonstrate the superiority of LaRE^2, which surpasses the best SoTA method by up to 11.9%/12.1% average ACC/AP across 8 different image generators. LaRE also surpasses existing methods in terms of feature extraction cost, delivering an impressive 8x speedup. | https://openaccess.thecvf.com/content/CVPR2024/papers/Luo_LaRE2_Latent_Reconstruction_Error_Based_Method_for_Diffusion-Generated_Image_Detection_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Luo_LaRE2_Latent_Reconstruction_Error_Based_Method_for_Diffusion-Generated_Image_Detection_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Luo_LaRE2_Latent_Reconstruction_Error_Based_Method_for_Diffusion-Generated_Image_Detection_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Luo_LaRE2_Latent_Reconstruction_CVPR_2024_supplemental.pdf | null |
DiffSCI: Zero-Shot Snapshot Compressive Imaging via Iterative Spectral Diffusion Model | Zhenghao Pan, Haijin Zeng, Jiezhang Cao, Kai Zhang, Yongyong Chen | This paper endeavors to advance the precision of snapshot compressive imaging (SCI) reconstruction for multispectral images (MSI). To achieve this, we integrate the advantageous attributes of established SCI techniques and an image generative model, proposing a novel structured zero-shot diffusion model dubbed DiffSCI. DiffSCI leverages the structural insights of deep prior and optimization-based methodologies, complemented by the generative capabilities offered by the contemporary denoising diffusion model. Specifically, we first employ a pre-trained diffusion model, which has been trained on a substantial corpus of RGB images, as the generative denoiser within the Plug-and-Play framework for the first time. This integration allows for the successful completion of SCI reconstruction, especially in cases that current methods struggle to address effectively. Secondly, we systematically account for spectral band correlations and introduce a robust methodology to mitigate wavelength mismatch, thus enabling seamless adaptation of the RGB diffusion model to MSIs. Thirdly, an accelerated algorithm is implemented to expedite the resolution of the data subproblem. This augmentation not only accelerates the convergence rate but also elevates the quality of the reconstruction process. We present extensive testing to show that DiffSCI exhibits discernible performance enhancements over prevailing self-supervised and zero-shot approaches, surpassing even supervised transformer counterparts across both simulated and real datasets. Code is at https://github.com/PAN083/DiffSCI. | https://openaccess.thecvf.com/content/CVPR2024/papers/Pan_DiffSCI_Zero-Shot_Snapshot_Compressive_Imaging_via_Iterative_Spectral_Diffusion_Model_CVPR_2024_paper.pdf | http://arxiv.org/abs/2311.11417 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Pan_DiffSCI_Zero-Shot_Snapshot_Compressive_Imaging_via_Iterative_Spectral_Diffusion_Model_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Pan_DiffSCI_Zero-Shot_Snapshot_Compressive_Imaging_via_Iterative_Spectral_Diffusion_Model_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Pan_DiffSCI_Zero-Shot_Snapshot_CVPR_2024_supplemental.pdf | null |
DiffSHEG: A Diffusion-Based Approach for Real-Time Speech-driven Holistic 3D Expression and Gesture Generation | Junming Chen, Yunfei Liu, Jianan Wang, Ailing Zeng, Yu Li, Qifeng Chen | We propose DiffSHEG, a Diffusion-based approach for Speech-driven Holistic 3D Expression and Gesture generation. While previous works focused on co-speech gesture or expression generation individually, the joint generation of synchronized expressions and gestures remains barely explored. To address this, our diffusion-based co-speech motion generation Transformer enables uni-directional information flow from expression to gesture, facilitating improved matching of joint expression-gesture distributions. Furthermore, we introduce an outpainting-based sampling strategy for arbitrarily long sequence generation in diffusion models, offering flexibility and computational efficiency. Our method provides a practical solution that produces high-quality, synchronized expression and gesture generation driven by speech. Evaluated on two public datasets, our approach achieves state-of-the-art performance both quantitatively and qualitatively. Additionally, a user study confirms the superiority of our method over prior approaches. By enabling the real-time generation of expressive and synchronized motions, our method showcases its potential for various applications in the development of digital humans and embodied agents. | https://openaccess.thecvf.com/content/CVPR2024/papers/Chen_DiffSHEG_A_Diffusion-Based_Approach_for_Real-Time_Speech-driven_Holistic_3D_Expression_CVPR_2024_paper.pdf | http://arxiv.org/abs/2401.04747 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Chen_DiffSHEG_A_Diffusion-Based_Approach_for_Real-Time_Speech-driven_Holistic_3D_Expression_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Chen_DiffSHEG_A_Diffusion-Based_Approach_for_Real-Time_Speech-driven_Holistic_3D_Expression_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Chen_DiffSHEG_A_Diffusion-Based_CVPR_2024_supplemental.pdf | null |
MeLFusion: Synthesizing Music from Image and Language Cues using Diffusion Models | Sanjoy Chowdhury, Sayan Nag, K J Joseph, Balaji Vasan Srinivasan, Dinesh Manocha | Music is a universal language that can communicate emotions and feelings. It forms an essential part of the whole spectrum of creative media, ranging from movies to social media posts. Machine learning models that can synthesize music are predominantly conditioned on textual descriptions of it. Inspired by how musicians compose music not just from a movie script but also through visualizations, we propose MeLFusion, a model that can effectively use cues from a textual description and the corresponding image to synthesize music. MeLFusion is a text-to-music diffusion model with a novel "visual synapse", which effectively infuses the semantics from the visual modality into the generated music. To facilitate research in this area, we introduce a new dataset, MeLBench, and propose a new evaluation metric, IMSM. Our exhaustive experimental evaluation suggests that adding visual information to the music synthesis pipeline significantly improves the quality of generated music, measured both objectively and subjectively, with a relative gain of up to 67.98% on the FAD score. We hope that our work will draw attention to this pragmatic yet relatively under-explored research area. | https://openaccess.thecvf.com/content/CVPR2024/papers/Chowdhury_MeLFusion_Synthesizing_Music_from_Image_and_Language_Cues_using_Diffusion_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Chowdhury_MeLFusion_Synthesizing_Music_from_Image_and_Language_Cues_using_Diffusion_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Chowdhury_MeLFusion_Synthesizing_Music_from_Image_and_Language_Cues_using_Diffusion_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Chowdhury_MeLFusion_Synthesizing_Music_CVPR_2024_supplemental.pdf | null |
T4P: Test-Time Training of Trajectory Prediction via Masked Autoencoder and Actor-specific Token Memory | Daehee Park, Jaeseok Jeong, Sung-Hoon Yoon, Jaewoo Jeong, Kuk-Jin Yoon | Trajectory prediction is a challenging problem that requires considering interactions among multiple actors and the surrounding environment. While data-driven approaches have been used to address this complex problem, they suffer from unreliable predictions under distribution shifts during test time. Accordingly, several online learning methods have been proposed that use a regression loss on the ground truth of observed data, leveraging the auto-labeling nature of the trajectory prediction task. We mainly tackle the following two issues. First, previous works underfit and overfit as they optimize only the last layer of the motion decoder. To this end, we employ the masked autoencoder (MAE) for representation learning to encourage complex interaction modeling in shifted test distributions when updating deeper layers. Second, utilizing the sequential nature of driving data, we propose an actor-specific token memory that enables the test-time learning of actor-wise motion characteristics. Our proposed method has been validated across various challenging cross-dataset distribution shift scenarios, including nuScenes, Lyft, Waymo, and Interaction. Our method surpasses the performance of existing state-of-the-art online learning methods in terms of both prediction accuracy and computational efficiency. The code is available at https://github.com/daeheepark/T4P. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Park_T4P_Test-Time_Training_of_Trajectory_Prediction_via_Masked_Autoencoder_and_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.10052 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Park_T4P_Test-Time_Training_of_Trajectory_Prediction_via_Masked_Autoencoder_and_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Park_T4P_Test-Time_Training_of_Trajectory_Prediction_via_Masked_Autoencoder_and_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Park_T4P_Test-Time_Training_CVPR_2024_supplemental.pdf | null |
Noisy-Correspondence Learning for Text-to-Image Person Re-identification | Yang Qin, Yingke Chen, Dezhong Peng, Xi Peng, Joey Tianyi Zhou, Peng Hu | Text-to-image person re-identification (TIReID) is a compelling topic in the cross-modal community, which aims to retrieve the target person based on a textual query. Although numerous TIReID methods have been proposed and achieved promising performance, they implicitly assume the training image-text pairs are correctly aligned, which is not always the case in real-world scenarios. In practice, image-text pairs are inevitably under-correlated or even falsely correlated, a.k.a. noisy correspondence (NC), due to the low quality of the images and annotation errors. To address this problem, we propose a novel Robust Dual Embedding method (RDE) that can learn robust visual-semantic associations even with NC. Specifically, RDE consists of two main components: 1) A Confident Consensus Division (CCD) module that leverages the dual-grained decisions of dual embedding modules to obtain a consensus set of clean training data, which enables the model to learn correct and reliable visual-semantic associations. 2) A Triplet Alignment Loss (TAL) that relaxes the conventional Triplet Ranking loss with the hardest negative samples to a log-exponential upper bound over all negative ones, thus preventing model collapse under NC while still focusing on hard-negative samples for promising performance. We conduct extensive experiments on three public benchmarks, namely CUHK-PEDES, ICFG-PEDES, and RSTPReID, to evaluate the performance and robustness of our RDE. Our method achieves state-of-the-art results both with and without synthetic noisy correspondences on all three datasets. Code is available at https://github.com/QinYang79/RDE. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Qin_Noisy-Correspondence_Learning_for_Text-to-Image_Person_Re-identification_CVPR_2024_paper.pdf | http://arxiv.org/abs/2308.09911 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Qin_Noisy-Correspondence_Learning_for_Text-to-Image_Person_Re-identification_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Qin_Noisy-Correspondence_Learning_for_Text-to-Image_Person_Re-identification_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Qin_Noisy-Correspondence_Learning_for_CVPR_2024_supplemental.pdf | null |
InstaGen: Enhancing Object Detection by Training on Synthetic Dataset | Chengjian Feng, Yujie Zhong, Zequn Jie, Weidi Xie, Lin Ma | In this paper, we present a novel paradigm to enhance the ability of object detectors, e.g. expanding categories or improving detection performance, by training on a synthetic dataset generated from diffusion models. Specifically, we integrate an instance-level grounding head into a pre-trained generative diffusion model to augment it with the ability of localising instances in the generated images. The grounding head is trained to align the text embedding of category names with the regional visual feature of the diffusion model, using supervision from an off-the-shelf object detector and a novel self-training scheme on (novel) categories not covered by the detector. We conduct thorough experiments to show that this enhanced version of the diffusion model, termed InstaGen, can serve as a data synthesizer to enhance object detectors by training on its generated samples, demonstrating superior performance over existing state-of-the-art methods in open-vocabulary (+4.5 AP) and data-sparse (+1.2 to 5.2 AP) scenarios. | https://openaccess.thecvf.com/content/CVPR2024/papers/Feng_InstaGen_Enhancing_Object_Detection_by_Training_on_Synthetic_Dataset_CVPR_2024_paper.pdf | http://arxiv.org/abs/2402.05937 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Feng_InstaGen_Enhancing_Object_Detection_by_Training_on_Synthetic_Dataset_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Feng_InstaGen_Enhancing_Object_Detection_by_Training_on_Synthetic_Dataset_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Feng_InstaGen_Enhancing_Object_CVPR_2024_supplemental.pdf | null |
PanoRecon: Real-Time Panoptic 3D Reconstruction from Monocular Video | Dong Wu, Zike Yan, Hongbin Zha | We introduce the Panoptic 3D Reconstruction task, a unified and holistic scene understanding task for a monocular video, and present PanoRecon, a novel framework to address this new task, which realizes online geometry reconstruction along with dense semantic and instance labeling. Specifically, PanoRecon incrementally performs panoptic 3D reconstruction for each video fragment, consisting of multiple consecutive key frames, from a volumetric feature representation using feed-forward neural networks. We adopt a depth-guided back-projection strategy to sparsify and purify the volumetric feature representation. We further introduce a voxel clustering module to obtain object instances in each local fragment, and then design a tracking and fusion algorithm to integrate instances from different fragments and ensure temporal coherence. This design enables PanoRecon to yield a coherent and accurate panoptic 3D reconstruction. Experiments on ScanNetV2 demonstrate very competitive geometry reconstruction compared with state-of-the-art reconstruction methods, as well as promising 3D panoptic segmentation with only RGB input, while running in real time. Code is available at: https://github.com/Riser6/PanoRecon. | https://openaccess.thecvf.com/content/CVPR2024/papers/Wu_PanoRecon_Real-Time_Panoptic_3D_Reconstruction_from_Monocular_Video_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Wu_PanoRecon_Real-Time_Panoptic_3D_Reconstruction_from_Monocular_Video_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Wu_PanoRecon_Real-Time_Panoptic_3D_Reconstruction_from_Monocular_Video_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wu_PanoRecon_Real-Time_Panoptic_CVPR_2024_supplemental.zip | null |
Animating General Image with Large Visual Motion Model | Dengsheng Chen, Xiaoming Wei, Xiaolin Wei | We present the pioneering Large Visual Motion Model (LVMM), meticulously engineered to analyze the intrinsic dynamics encapsulated within real-world imagery. Our model, fortified with a wealth of prior knowledge extracted from billions of image pairs, demonstrates promising results in predicting a diverse spectrum of scene dynamics. As a result, it can infuse any generic image with authentic dynamic effects, enhancing its visual allure. | https://openaccess.thecvf.com/content/CVPR2024/papers/Chen_Animating_General_Image_with_Large_Visual_Motion_Model_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Chen_Animating_General_Image_with_Large_Visual_Motion_Model_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Chen_Animating_General_Image_with_Large_Visual_Motion_Model_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Chen_Animating_General_Image_CVPR_2024_supplemental.pdf | null |
Visual Point Cloud Forecasting enables Scalable Autonomous Driving | Zetong Yang, Li Chen, Yanan Sun, Hongyang Li | In contrast to extensive studies on general vision, pre-training for scalable visual autonomous driving remains seldom explored. Visual autonomous driving applications require features encompassing semantics, 3D geometry, and temporal information simultaneously for joint perception, prediction, and planning, posing dramatic challenges for pre-training. To resolve this, we propose a new pre-training task, termed visual point cloud forecasting: predicting future point clouds from historical visual input. The key merit of this task is that it captures the synergistic learning of semantics, 3D structures, and temporal dynamics, and hence it shows superiority in various downstream tasks. To cope with this new problem, we present ViDAR, a general model to pre-train downstream visual encoders. It first extracts historical embeddings with the encoder. These representations are then transformed to 3D geometric space via a novel Latent Rendering operator for future point cloud prediction. Experiments show significant gains in downstream tasks, e.g. 3.1% NDS on 3D detection, 10% error reduction on motion forecasting, and 15% less collision rate on planning. | https://openaccess.thecvf.com/content/CVPR2024/papers/Yang_Visual_Point_Cloud_Forecasting_enables_Scalable_Autonomous_Driving_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.17655 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Yang_Visual_Point_Cloud_Forecasting_enables_Scalable_Autonomous_Driving_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Yang_Visual_Point_Cloud_Forecasting_enables_Scalable_Autonomous_Driving_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yang_Visual_Point_Cloud_CVPR_2024_supplemental.pdf | null |
Towards Transferable Targeted 3D Adversarial Attack in the Physical World | Yao Huang, Yinpeng Dong, Shouwei Ruan, Xiao Yang, Hang Su, Xingxing Wei | Compared with transferable untargeted attacks, transferable targeted adversarial attacks can specify the misclassification categories of adversarial samples, posing a greater threat to security-critical tasks. Meanwhile, 3D adversarial samples, due to their potential for multi-view robustness, can more comprehensively identify weaknesses in existing deep learning systems, possessing great application value. However, the field of transferable targeted 3D adversarial attacks remains vacant. The goal of this work is to develop a more effective technique that can generate transferable targeted 3D adversarial examples, filling the gap in this field. To achieve this goal, we design a novel framework named TT3D that can rapidly reconstruct a few multi-view images into Transferable Targeted 3D textured meshes. While existing mesh-based texture optimization methods compute gradients in the high-dimensional mesh space and easily fall into local optima, leading to unsatisfactory transferability and distinct distortions, TT3D innovatively performs dual optimization over both the feature grid and the Multi-layer Perceptron (MLP) parameters in the grid-based NeRF space, which significantly enhances black-box transferability while preserving naturalness. Experimental results show that TT3D not only exhibits superior cross-model transferability but also maintains considerable adaptability across different renderers and vision tasks. More importantly, we produce 3D adversarial examples with 3D printing techniques in the real world and verify their robust performance under various scenarios. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Huang_Towards_Transferable_Targeted_3D_Adversarial_Attack_in_the_Physical_World_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.09558 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Huang_Towards_Transferable_Targeted_3D_Adversarial_Attack_in_the_Physical_World_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Huang_Towards_Transferable_Targeted_3D_Adversarial_Attack_in_the_Physical_World_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Huang_Towards_Transferable_Targeted_CVPR_2024_supplemental.pdf | null |
SwitchLight: Co-design of Physics-driven Architecture and Pre-training Framework for Human Portrait Relighting | Hoon Kim, Minje Jang, Wonjun Yoon, Jisoo Lee, Donghyun Na, Sanghyun Woo | We introduce a co-designed approach for human portrait relighting that combines a physics-guided architecture with a pre-training framework. Drawing on the Cook-Torrance reflectance model, we have meticulously configured the architecture design to precisely simulate light-surface interactions. Furthermore, to overcome the limitation of scarce high-quality lightstage data, we have developed a self-supervised pre-training strategy. This novel combination of accurate physical modeling and an expanded training dataset establishes a new benchmark in relighting realism. | https://openaccess.thecvf.com/content/CVPR2024/papers/Kim_SwitchLight_Co-design_of_Physics-driven_Architecture_and_Pre-training_Framework_for_Human_CVPR_2024_paper.pdf | http://arxiv.org/abs/2402.18848 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Kim_SwitchLight_Co-design_of_Physics-driven_Architecture_and_Pre-training_Framework_for_Human_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Kim_SwitchLight_Co-design_of_Physics-driven_Architecture_and_Pre-training_Framework_for_Human_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Kim_SwitchLight_Co-design_of_CVPR_2024_supplemental.pdf | null |
DIRECT-3D: Learning Direct Text-to-3D Generation on Massive Noisy 3D Data | Qihao Liu, Yi Zhang, Song Bai, Adam Kortylewski, Alan Yuille | We present DIRECT-3D, a diffusion-based 3D generative model for creating high-quality 3D assets (represented by Neural Radiance Fields) from text prompts. Unlike recent 3D generative models that rely on clean and well-aligned 3D data, limiting them to single or few-class generation, our model is directly trained on extensive noisy and unaligned 'in-the-wild' 3D assets, mitigating the key challenge (i.e. data scarcity) in large-scale 3D generation. In particular, DIRECT-3D is a tri-plane diffusion model that integrates two innovations: 1) A novel learning framework where noisy data are filtered and aligned automatically during the training process. Specifically, after an initial warm-up phase using a small set of clean data, an iterative optimization is introduced in the diffusion process to explicitly estimate the 3D pose of objects and select beneficial data based on conditional density. 2) An efficient 3D representation that is achieved by disentangling object geometry and color features with two separate conditional diffusion models that are optimized hierarchically. Given a prompt input, our model generates high-quality, high-resolution, realistic, and complex 3D objects with accurate geometric details in seconds. We achieve state-of-the-art performance in both single-class generation and text-to-3D generation. We also demonstrate that DIRECT-3D can serve as a useful 3D geometric prior of objects, for example, to alleviate the well-known Janus problem in 2D-lifting methods such as DreamFusion. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Liu_DIRECT-3D_Learning_Direct_Text-to-3D_Generation_on_Massive_Noisy_3D_Data_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Liu_DIRECT-3D_Learning_Direct_Text-to-3D_Generation_on_Massive_Noisy_3D_Data_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Liu_DIRECT-3D_Learning_Direct_Text-to-3D_Generation_on_Massive_Noisy_3D_Data_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Liu_DIRECT-3D_Learning_Direct_CVPR_2024_supplemental.pdf | null |
Synthesize Step-by-Step: Tools Templates and LLMs as Data Generators for Reasoning-Based Chart VQA | Zhuowan Li, Bhavan Jasani, Peng Tang, Shabnam Ghadar | Understanding data visualizations like charts and plots requires reasoning about both visual elements and numerics. Although strong on extractive questions, current chart visual question answering (chart VQA) models struggle with complex reasoning questions. In this work, we address the lack of reasoning ability through data augmentation. We leverage Large Language Models (LLMs), which have been shown to have strong reasoning ability, as automatic data annotators that generate question-answer annotations for chart images. The key innovation in our method lies in the Synthesize Step-by-Step strategy: our LLM-based data generator learns to decompose the complex question into step-by-step sub-questions (rationales), which are then used to derive the final answer using external tools, i.e. Python. This step-wise generation procedure is trained on synthetic data generated using a template-based QA generation pipeline. Experimental results highlight the significance of the proposed step-by-step generation. By training with the LLM-augmented data (LAMENDA), we significantly enhance chart VQA models, achieving state-of-the-art accuracy on the ChartQA and PlotQA datasets. In particular, our approach improves the accuracy of the previous state-of-the-art approach from 38% to 54% on the human-written questions in the ChartQA dataset, which require strong reasoning. We hope our work underscores the potential of synthetic data and encourages further exploration of data augmentation using LLMs for reasoning-heavy tasks. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Li_Synthesize_Step-by-Step_Tools_Templates_and_LLMs_as_Data_Generators_for_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Li_Synthesize_Step-by-Step_Tools_Templates_and_LLMs_as_Data_Generators_for_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Li_Synthesize_Step-by-Step_Tools_Templates_and_LLMs_as_Data_Generators_for_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_Synthesize_Step-by-Step_Tools_CVPR_2024_supplemental.pdf | null |
LayoutLLM: Layout Instruction Tuning with Large Language Models for Document Understanding | Chuwei Luo, Yufan Shen, Zhaoqing Zhu, Qi Zheng, Zhi Yu, Cong Yao | Recently, leveraging large language models (LLMs) or multimodal large language models (MLLMs) for document understanding has proven very promising. However, previous works that employ LLMs/MLLMs for document understanding have not fully explored and utilized the document layout information, which is vital for precise document understanding. In this paper, we propose LayoutLLM, an LLM/MLLM-based method for document understanding. The core of LayoutLLM is a layout instruction tuning strategy, which is specially designed to enhance the comprehension and utilization of document layouts. The proposed layout instruction tuning strategy consists of two components: Layout-aware Pre-training and Layout-aware Supervised Fine-tuning. To capture the characteristics of document layout in Layout-aware Pre-training, three groups of pre-training tasks, corresponding to document-level, region-level, and segment-level information, are introduced. Furthermore, a novel module called layout chain-of-thought (LayoutCoT) is devised to enable LayoutLLM to focus on regions relevant to the question and generate accurate answers. LayoutCoT is effective for boosting the performance of document understanding. Meanwhile, it brings a certain degree of interpretability, which could facilitate manual inspection and correction. Experiments on standard benchmarks show that the proposed LayoutLLM significantly outperforms existing methods that adopt open-source 7B LLMs/MLLMs for document understanding. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Luo_LayoutLLM_Layout_Instruction_Tuning_with_Large_Language_Models_for_Document_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.05225 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Luo_LayoutLLM_Layout_Instruction_Tuning_with_Large_Language_Models_for_Document_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Luo_LayoutLLM_Layout_Instruction_Tuning_with_Large_Language_Models_for_Document_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Luo_LayoutLLM_Layout_Instruction_CVPR_2024_supplemental.pdf | null |
ProTeCt: Prompt Tuning for Taxonomic Open Set Classification | Tz-Ying Wu, Chih-Hui Ho, Nuno Vasconcelos | Visual-language foundation models like CLIP learn generalized representations that enable zero-shot open-set classification. Few-shot adaptation methods based on prompt tuning have been shown to further improve performance on downstream datasets. However, these methods do not fare well in the taxonomic open set (TOS) setting, where the classifier is asked to make predictions from label sets across different levels of semantic granularity. Frequently, they infer incorrect labels at coarser taxonomic class levels even when the inference at the leaf level (original class labels) is correct. To address this problem, we propose a prompt tuning technique that calibrates the hierarchical consistency of model predictions. A set of hierarchical consistency metrics, the Hierarchical Consistent Accuracy (HCA) and the Mean Treecut Accuracy (MTA), is first proposed to evaluate TOS model performance. A new Prompt Tuning for Hierarchical Consistency (ProTeCt) technique is then proposed to calibrate classification across label set granularities. Results show that ProTeCt can be combined with existing prompt tuning methods to significantly improve TOS classification without degrading the leaf-level classification performance. | https://openaccess.thecvf.com/content/CVPR2024/papers/Wu_ProTeCt_Prompt_Tuning_for_Taxonomic_Open_Set_Classification_CVPR_2024_paper.pdf | http://arxiv.org/abs/2306.02240 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Wu_ProTeCt_Prompt_Tuning_for_Taxonomic_Open_Set_Classification_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Wu_ProTeCt_Prompt_Tuning_for_Taxonomic_Open_Set_Classification_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wu_ProTeCt_Prompt_Tuning_CVPR_2024_supplemental.pdf | null |
Adapters Strike Back | Jan-Martin O. Steitz, Stefan Roth | Adapters provide an efficient and lightweight mechanism for adapting trained transformer models to a variety of different tasks. However, they have often been found to be outperformed by other adaptation mechanisms, including low-rank adaptation. In this paper, we provide an in-depth study of adapters, their internal structure, and various implementation choices. We uncover pitfalls for using adapters and suggest a concrete, improved adapter architecture, called Adapter+, that not only outperforms previous adapter implementations but surpasses a number of other, more complex adaptation mechanisms in several challenging settings. Despite this, our suggested adapter is highly robust and, unlike previous work, requires little to no manual intervention when addressing a novel scenario. Adapter+ reaches state-of-the-art average accuracy on the VTAB benchmark even without a per-task hyperparameter optimization. | https://openaccess.thecvf.com/content/CVPR2024/papers/Steitz_Adapters_Strike_Back_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Steitz_Adapters_Strike_Back_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Steitz_Adapters_Strike_Back_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Steitz_Adapters_Strike_Back_CVPR_2024_supplemental.pdf | null |
Masked Autoencoders for Microscopy are Scalable Learners of Cellular Biology | Oren Kraus, Kian Kenyon-Dean, Saber Saberian, Maryam Fallah, Peter McLean, Jess Leung, Vasudev Sharma, Ayla Khan, Jia Balakrishnan, Safiye Celik, Dominique Beaini, Maciej Sypetkowski, Chi Vicky Cheng, Kristen Morse, Maureen Makes, Ben Mabey, Berton Earnshaw | Featurizing microscopy images for use in biological research remains a significant challenge, especially for large-scale experiments spanning millions of images. This work explores the scaling properties of weakly supervised classifiers and self-supervised masked autoencoders (MAEs) when training with increasingly larger model backbones and microscopy datasets. Our results show that ViT-based MAEs outperform weakly supervised classifiers on a variety of tasks, achieving as much as an 11.5% relative improvement when recalling known biological relationships curated from public databases. Additionally, we develop a new channel-agnostic MAE architecture (CA-MAE) that allows for inputting images with different numbers and orderings of channels at inference time. We demonstrate that CA-MAEs effectively generalize by inferring and evaluating on a microscopy image dataset (JUMP-CP) generated under different experimental conditions and with a different channel structure than our pretraining data (RPI-93M). Our findings motivate continued research into scaling self-supervised learning on microscopy data in order to create powerful foundation models of cellular biology that have the potential to catalyze advancements in drug discovery and beyond. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Kraus_Masked_Autoencoders_for_Microscopy_are_Scalable_Learners_of_Cellular_Biology_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.10242 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Kraus_Masked_Autoencoders_for_Microscopy_are_Scalable_Learners_of_Cellular_Biology_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Kraus_Masked_Autoencoders_for_Microscopy_are_Scalable_Learners_of_Cellular_Biology_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Kraus_Masked_Autoencoders_for_CVPR_2024_supplemental.pdf | null |
OHTA: One-shot Hand Avatar via Data-driven Implicit Priors | Xiaozheng Zheng, Chao Wen, Zhuo Su, Zeran Xu, Zhaohu Li, Yang Zhao, Zhou Xue | In this paper, we delve into the creation of one-shot hand avatars, attaining high-fidelity and drivable hand representations swiftly from a single image. With the burgeoning domain of digital humans, the need for quick and personalized hand avatar creation has become increasingly critical. Existing techniques typically require extensive input data and may prove cumbersome or even impractical in certain scenarios. To enhance accessibility, we present a novel method, OHTA (One-shot Hand avaTAr), that enables the creation of detailed hand avatars from merely one image. OHTA tackles the inherent difficulties of this data-limited problem by learning and utilizing data-driven hand priors. Specifically, we design a hand prior model initially employed for 1) learning various hand priors with available data, and subsequently for 2) the inversion and fitting of the target identity with prior knowledge. OHTA demonstrates the capability to create high-fidelity hand avatars with consistent animatable quality, relying solely on a single image. Furthermore, we illustrate the versatility of OHTA through diverse applications, encompassing text-to-avatar conversion, hand editing, and identity latent space manipulation. | https://openaccess.thecvf.com/content/CVPR2024/papers/Zheng_OHTA_One-shot_Hand_Avatar_via_Data-driven_Implicit_Priors_CVPR_2024_paper.pdf | http://arxiv.org/abs/2402.18969 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Zheng_OHTA_One-shot_Hand_Avatar_via_Data-driven_Implicit_Priors_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Zheng_OHTA_One-shot_Hand_Avatar_via_Data-driven_Implicit_Priors_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zheng_OHTA_One-shot_Hand_CVPR_2024_supplemental.zip | null |
Segment and Caption Anything | Xiaoke Huang, Jianfeng Wang, Yansong Tang, Zheng Zhang, Han Hu, Jiwen Lu, Lijuan Wang, Zicheng Liu | We propose a method to efficiently equip the Segment Anything Model (SAM) with the ability to generate regional captions. SAM presents strong generalizability for segmenting anything but falls short on semantic understanding. By introducing a lightweight query-based feature mixer, we align the region-specific features with the embedding space of language models for later caption generation. As the number of trainable parameters is small (typically on the order of tens of millions), it costs less computation, less memory usage, and less communication bandwidth, resulting in both fast and scalable training. To address the scarcity problem of regional caption data, we propose to first pre-train our model on object detection and segmentation tasks. We call this step weak supervision pretraining, since the pretraining data only contains category names instead of full-sentence descriptions. The weak supervision pretraining allows us to leverage many publicly available object detection and segmentation datasets. We conduct extensive experiments to demonstrate the superiority of our method and validate each design choice. This work serves as a stepping stone towards scaling up regional captioning data and sheds light on exploring efficient ways to augment SAM with regional semantics. | https://openaccess.thecvf.com/content/CVPR2024/papers/Huang_Segment_and_Caption_Anything_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.00869 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Huang_Segment_and_Caption_Anything_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Huang_Segment_and_Caption_Anything_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Huang_Segment_and_Caption_CVPR_2024_supplemental.pdf | null |
Human Motion Prediction Under Unexpected Perturbation | Jiangbei Yue, Baiyi Li, Julien Pettré, Armin Seyfried, He Wang | We investigate a new task in human motion prediction: predicting motions under unexpected physical perturbation, potentially involving multiple people. Compared with existing research, this task involves predicting less controlled, unpremeditated, and purely reactive motions in response to external impact, and how such motions can propagate through people. It brings new challenges such as data scarcity and predicting complex interactions. To this end, we propose a new method capitalizing on differentiable physics and deep neural networks, leading to an explicit Latent Differentiable Physics (LDP) model. Through experiments, we demonstrate that LDP has high data efficiency, outstanding prediction accuracy, strong generalizability, and good explainability. Since there is no similar research, a comprehensive comparison with 11 adapted baselines from several relevant domains is conducted, showing LDP outperforming existing research both quantitatively and qualitatively, improving prediction accuracy by as much as 70%, and demonstrating significantly stronger generalization. | https://openaccess.thecvf.com/content/CVPR2024/papers/Yue_Human_Motion_Prediction_Under_Unexpected_Perturbation_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Yue_Human_Motion_Prediction_Under_Unexpected_Perturbation_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Yue_Human_Motion_Prediction_Under_Unexpected_Perturbation_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yue_Human_Motion_Prediction_CVPR_2024_supplemental.pdf | null
Text-to-3D Generation with Bidirectional Diffusion using both 2D and 3D priors | Lihe Ding, Shaocong Dong, Zhanpeng Huang, Zibin Wang, Yiyuan Zhang, Kaixiong Gong, Dan Xu, Tianfan Xue | Most 3D generation research focuses on up-projecting 2D foundation models into the 3D space, either by minimizing 2D Score Distillation Sampling (SDS) loss or by fine-tuning on multi-view datasets. Without explicit 3D priors, these methods often lead to geometric anomalies and multi-view inconsistency. Recently, researchers have attempted to improve the genuineness of 3D objects by directly training on 3D datasets, albeit at the cost of low-quality texture generation due to the limited texture diversity in 3D datasets. To harness the advantages of both approaches, we propose Bidirectional Diffusion (BiDiff), a unified framework that incorporates both a 3D and a 2D diffusion process to preserve 3D fidelity and 2D texture richness, respectively. Moreover, as a simple combination may yield inconsistent generation results, we further bridge them with novel bidirectional guidance. In addition, our method can be used as an initialization for optimization-based models to further improve the quality of the 3D model and the efficiency of optimization, reducing the process from 3.4 hours to 20 minutes. Experimental results show that our model achieves high-quality, diverse, and scalable 3D generation. Project website: https://bidiff.github.io/. | https://openaccess.thecvf.com/content/CVPR2024/papers/Ding_Text-to-3D_Generation_with_Bidirectional_Diffusion_using_both_2D_and_3D_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.04963 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Ding_Text-to-3D_Generation_with_Bidirectional_Diffusion_using_both_2D_and_3D_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Ding_Text-to-3D_Generation_with_Bidirectional_Diffusion_using_both_2D_and_3D_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ding_Text-to-3D_Generation_with_CVPR_2024_supplemental.pdf | null
CLIP-Driven Open-Vocabulary 3D Scene Graph Generation via Cross-Modality Contrastive Learning | Lianggangxu Chen, Xuejiao Wang, Jiale Lu, Shaohui Lin, Changbo Wang, Gaoqi He | 3D Scene Graph Generation (3DSGG) aims to classify objects and their predicates within 3D point cloud scenes. However, current 3DSGG methods struggle with two main challenges: 1) the dependency on labor-intensive ground-truth annotations, and 2) closed-set training, which hampers the recognition of novel objects and predicates. Addressing these issues, our idea is to extract cross-modality features with CLIP from text and image data naturally related to 3D point clouds. The cross-modality features are used to train a robust 3D scene graph (3DSG) feature extractor. Specifically, we propose a novel Cross-Modality Contrastive Learning 3DSGG (CCL-3DSGG) method. Firstly, to align the text with the 3DSG, the text is parsed to the word level, consistent with the 3DSG annotation. To enhance robustness during alignment, adjectives are exchanged for different objects as negative samples. Then, to align the image with the 3DSG, the camera view is treated as a positive sample and other views as negatives. Lastly, the recognition of novel object and predicate classes is achieved by calculating the cosine similarity between prompts and 3DSG features. Our rigorous experiments confirm the superior open-vocabulary capability and applicability of CCL-3DSGG in real-world contexts. | https://openaccess.thecvf.com/content/CVPR2024/papers/Chen_CLIP-Driven_Open-Vocabulary_3D_Scene_Graph_Generation_via_Cross-Modality_Contrastive_Learning_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Chen_CLIP-Driven_Open-Vocabulary_3D_Scene_Graph_Generation_via_Cross-Modality_Contrastive_Learning_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Chen_CLIP-Driven_Open-Vocabulary_3D_Scene_Graph_Generation_via_Cross-Modality_Contrastive_Learning_CVPR_2024_paper.html | CVPR 2024 | null | null
Adversarial Backdoor Attack by Naturalistic Data Poisoning on Trajectory Prediction in Autonomous Driving | Mozhgan Pourkeshavarz, Mohammad Sabokrou, Amir Rasouli | In autonomous driving, behavior prediction is fundamental for safe motion planning; hence the security and robustness of prediction models against adversarial attacks are of paramount importance. We propose a novel adversarial backdoor attack against trajectory prediction models as a means of studying their potential vulnerabilities. Our attack affects the victim at training time via naturalistic, hence stealthy, poisoned samples crafted using a novel two-step approach. First, the triggers are crafted by perturbing the trajectory of the attacking vehicle and then disguised by transforming the scene using a bi-level optimization technique. The proposed attack does not depend on a particular model architecture and operates in a black-box manner; thus it can be effective without any knowledge of the victim model. We conduct extensive empirical studies using state-of-the-art prediction models on two benchmark datasets, with metrics customized for trajectory prediction. We show that the proposed attack is highly effective, as it can significantly hinder the performance of prediction models while remaining unnoticeable to the victims, and efficient, as it forces the victim to generate malicious behavior even under constrained conditions. Via ablative studies, we analyze the impact of different attack design choices, followed by an evaluation of existing defence mechanisms against the proposed attack. | https://openaccess.thecvf.com/content/CVPR2024/papers/Pourkeshavarz_Adversarial_Backdoor_Attack_by_Naturalistic_Data_Poisoning_on_Trajectory_Prediction_CVPR_2024_paper.pdf | http://arxiv.org/abs/2306.15755 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Pourkeshavarz_Adversarial_Backdoor_Attack_by_Naturalistic_Data_Poisoning_on_Trajectory_Prediction_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Pourkeshavarz_Adversarial_Backdoor_Attack_by_Naturalistic_Data_Poisoning_on_Trajectory_Prediction_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Pourkeshavarz_Adversarial_Backdoor_Attack_CVPR_2024_supplemental.pdf | null
Make-It-Vivid: Dressing Your Animatable Biped Cartoon Characters from Text | Junshu Tang, Yanhong Zeng, Ke Fan, Xuheng Wang, Bo Dai, Kai Chen, Lizhuang Ma | Creating and animating 3D biped cartoon characters is crucial and valuable in various applications. Compared with geometry, diverse texture design plays an important role in making 3D biped cartoon characters vivid and charming. Therefore, we focus on automatic texture design for cartoon characters based on input instructions. This is challenging due to domain-specific requirements and a lack of high-quality data. To address this challenge, we propose Make-It-Vivid, the first attempt to enable high-quality texture generation from text in UV space. We prepare a detailed text-texture paired dataset for 3D characters using vision-question-answering agents. Then we customize a pretrained text-to-image model to generate texture maps with the template structure while preserving natural 2D image knowledge. Furthermore, to enhance fine-grained details, we propose a novel adversarial learning scheme to shorten the domain gap between the original dataset and the realistic texture domain. Extensive experiments show that our approach outperforms current texture generation methods, resulting in efficient character texturing and faithful generation with prompts. Besides, we showcase various applications such as out-of-domain generation and texture stylization. We also provide an efficient generation system for automatic text-guided textured character generation and animation. | https://openaccess.thecvf.com/content/CVPR2024/papers/Tang_Make-It-Vivid_Dressing_Your_Animatable_Biped_Cartoon_Characters_from_Text_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Tang_Make-It-Vivid_Dressing_Your_Animatable_Biped_Cartoon_Characters_from_Text_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Tang_Make-It-Vivid_Dressing_Your_Animatable_Biped_Cartoon_Characters_from_Text_CVPR_2024_paper.html | CVPR 2024 | null | null
StraightPCF: Straight Point Cloud Filtering | Dasith de Silva Edirimuni, Xuequan Lu, Gang Li, Lei Wei, Antonio Robles-Kelly, Hongdong Li | Point cloud filtering is a fundamental 3D vision task that aims to remove noise while recovering the underlying clean surfaces. State-of-the-art methods remove noise by moving noisy points along stochastic trajectories to the clean surfaces. These methods often require regularization within the training objective and/or during post-processing to ensure fidelity. In this paper, we introduce StraightPCF, a new deep-learning-based method for point cloud filtering. It works by moving noisy points along straight paths, thus reducing discretization errors while ensuring faster convergence to the clean surfaces. We model noisy patches as intermediate states between high-noise patch variants and their clean counterparts, and design the VelocityModule to infer a constant flow velocity from the former to the latter. This constant flow leads to straight filtering trajectories. In addition, we introduce a DistanceModule that scales the straight trajectory using an estimated distance scalar to attain convergence near the clean surface. Our network is lightweight, with only 530K parameters, 17% of those of IterativePFN (one of the most recent point cloud filtering networks). Extensive experiments on both synthetic and real-world data show our method achieves state-of-the-art results. Our method also produces well-distributed filtered points without the need for regularization. The implementation code can be found at: https://github.com/ddsediri/StraightPCF. | https://openaccess.thecvf.com/content/CVPR2024/papers/de_Silva_Edirimuni_StraightPCF_Straight_Point_Cloud_Filtering_CVPR_2024_paper.pdf | http://arxiv.org/abs/2405.08322 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/de_Silva_Edirimuni_StraightPCF_Straight_Point_Cloud_Filtering_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/de_Silva_Edirimuni_StraightPCF_Straight_Point_Cloud_Filtering_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/de_Silva_Edirimuni_StraightPCF_Straight_Point_CVPR_2024_supplemental.pdf | null
Mirasol3B: A Multimodal Autoregressive Model for Time-Aligned and Contextual Modalities | AJ Piergiovanni, Isaac Noble, Dahun Kim, Michael S. Ryoo, Victor Gomes, Anelia Angelova | One of the main challenges of multimodal learning is the need to combine heterogeneous modalities (e.g. video, audio, text). For example, video and audio are obtained at much higher rates than text and are roughly aligned in time. They are often not synchronized with text, which comes as a global context, e.g. a title or a description. Furthermore, video and audio inputs are of much larger volume and grow as the video length increases, which naturally requires more compute dedicated to these modalities and makes modeling of long-range dependencies harder. We here decouple the multimodal modeling, dividing it into separate autoregressive models that process the inputs according to the characteristics of the modalities. We propose a multimodal model consisting of an autoregressive component for the time-synchronized modalities (audio and video) and an autoregressive component for the context modalities, which are not necessarily aligned in time but are still sequential. To address the long sequences of the video-audio inputs, we further partition the video and audio sequences into consecutive snippets and autoregressively process their representations. To that end, we propose a Combiner mechanism, which models the audio-video information jointly, producing compact but expressive representations. This allows us to scale to 512 input video frames without an increase in model parameters. Our approach achieves state-of-the-art results on multiple well-established multimodal benchmarks. It effectively addresses the high computational demand of media inputs by learning compact representations, controlling the sequence length of the audio-video feature representations, and modeling their dependencies in time. | https://openaccess.thecvf.com/content/CVPR2024/papers/Piergiovanni_Mirasol3B_A_Multimodal_Autoregressive_Model_for_Time-Aligned_and_Contextual_Modalities_CVPR_2024_paper.pdf | http://arxiv.org/abs/2311.05698 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Piergiovanni_Mirasol3B_A_Multimodal_Autoregressive_Model_for_Time-Aligned_and_Contextual_Modalities_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Piergiovanni_Mirasol3B_A_Multimodal_Autoregressive_Model_for_Time-Aligned_and_Contextual_Modalities_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Piergiovanni_Mirasol3B_A_Multimodal_CVPR_2024_supplemental.pdf | null
Neural Sign Actors: A Diffusion Model for 3D Sign Language Production from Text | Vasileios Baltatzis, Rolandos Alexandros Potamias, Evangelos Ververas, Guanxiong Sun, Jiankang Deng, Stefanos Zafeiriou | Sign Languages (SL) serve as the primary mode of communication for the Deaf and Hard of Hearing communities. Deep learning methods for SL recognition and translation have achieved promising results. However, Sign Language Production (SLP) poses a challenge, as the generated motions must be realistic and have precise semantic meaning. Most SLP methods rely on 2D data, which hinders their realism. In this work, a diffusion-based SLP model is trained on a curated large-scale dataset of 4D signing avatars and their corresponding text transcripts. The proposed method can generate dynamic sequences of 3D avatars from an unconstrained domain of discourse using a diffusion process formed on a novel, anatomically informed graph neural network defined on the SMPL-X body skeleton. Through quantitative and qualitative experiments, we show that the proposed method considerably outperforms previous SLP methods. This work makes an important step towards realistic neural sign avatars, bridging the communication gap between Deaf and hearing communities. | https://openaccess.thecvf.com/content/CVPR2024/papers/Baltatzis_Neural_Sign_Actors_A_Diffusion_Model_for_3D_Sign_Language_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.02702 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Baltatzis_Neural_Sign_Actors_A_Diffusion_Model_for_3D_Sign_Language_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Baltatzis_Neural_Sign_Actors_A_Diffusion_Model_for_3D_Sign_Language_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Baltatzis_Neural_Sign_Actors_CVPR_2024_supplemental.pdf | null
On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm | Peng Sun, Bei Shi, Daiwei Yu, Tao Lin | Contemporary machine learning, which involves training large neural networks on massive datasets, faces significant computational challenges. Dataset distillation, a recently emerging strategy, aims to compress real-world datasets for efficient training. However, this line of research currently struggles with large-scale and high-resolution datasets, hindering its practicality and feasibility. Thus, we re-examine existing methods and identify three properties essential for real-world applications: realism, diversity, and efficiency. As a remedy, we propose RDED, a novel computationally efficient yet effective data distillation paradigm that enables both diversity and realism of the distilled data. Extensive empirical results over various model architectures and datasets demonstrate the advancement of RDED: we can distill full ImageNet-1K to 10 images per class within 7 minutes, achieving a notable 42% accuracy with ResNet-18 on a single RTX-4090 GPU (while the SOTA only achieves 21% but requires 6 hours). Code: https://github.com/LINs-lab/RDED. | https://openaccess.thecvf.com/content/CVPR2024/papers/Sun_On_the_Diversity_and_Realism_of_Distilled_Dataset_An_Efficient_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.03526 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Sun_On_the_Diversity_and_Realism_of_Distilled_Dataset_An_Efficient_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Sun_On_the_Diversity_and_Realism_of_Distilled_Dataset_An_Efficient_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Sun_On_the_Diversity_CVPR_2024_supplemental.pdf | null
Semantics-aware Motion Retargeting with Vision-Language Models | Haodong Zhang, Zhike Chen, Haocheng Xu, Lei Hao, Xiaofei Wu, Songcen Xu, Zhensong Zhang, Yue Wang, Rong Xiong | Capturing and preserving motion semantics is essential to motion retargeting between animation characters. However, most previous works neglect semantic information or rely on human-designed joint-level representations. Here we present a novel Semantics-aware Motion reTargeting (SMT) method that takes advantage of vision-language models to extract and maintain meaningful motion semantics. We utilize a differentiable module to render 3D motions. Then the high-level motion semantics are incorporated into the motion retargeting process by feeding the vision-language model with the rendered images and aligning the extracted semantic embeddings. To ensure the preservation of fine-grained motion details and high-level semantics, we adopt a two-stage pipeline consisting of skeleton-aware pre-training and fine-tuning with semantics and geometry constraints. Experimental results show the effectiveness of the proposed method in producing high-quality motion retargeting results while accurately preserving motion semantics. The project page can be found at https://sites.google.com/view/smtnet. | https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_Semantics-aware_Motion_Retargeting_with_Vision-Language_Models_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.01964 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Semantics-aware_Motion_Retargeting_with_Vision-Language_Models_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_Semantics-aware_Motion_Retargeting_with_Vision-Language_Models_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhang_Semantics-aware_Motion_Retargeting_CVPR_2024_supplemental.pdf | null
Semantically-Shifted Incremental Adapter-Tuning is A Continual ViTransformer | Yuwen Tan, Qinhao Zhou, Xiang Xiang, Ke Wang, Yuchuan Wu, Yongbin Li | Class-incremental learning (CIL) aims to enable models to continuously learn new classes while overcoming catastrophic forgetting. The introduction of pre-trained models has brought new tuning paradigms to CIL. In this paper, we revisit different parameter-efficient tuning (PET) methods within the context of continual learning. We observe that adapter tuning demonstrates superiority over prompt-based methods, even without parameter expansion in each learning session. Motivated by this, we propose incrementally tuning the shared adapter without imposing parameter update constraints, enhancing the learning capacity of the backbone. Additionally, we employ feature sampling from stored prototypes to retrain a unified classifier, further improving its performance. We estimate the semantic shift of old prototypes without access to past samples and update the stored prototypes session by session. Our proposed method eliminates model expansion and avoids retaining any image samples. It surpasses previous pre-trained model-based CIL methods and demonstrates remarkable continual learning capabilities. Experimental results on five CIL benchmarks validate the effectiveness of our approach, achieving state-of-the-art (SOTA) performance. | https://openaccess.thecvf.com/content/CVPR2024/papers/Tan_Semantically-Shifted_Incremental_Adapter-Tuning_is_A_Continual_ViTransformer_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.19979 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Tan_Semantically-Shifted_Incremental_Adapter-Tuning_is_A_Continual_ViTransformer_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Tan_Semantically-Shifted_Incremental_Adapter-Tuning_is_A_Continual_ViTransformer_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Tan_Semantically-Shifted_Incremental_Adapter-Tuning_CVPR_2024_supplemental.pdf | null
Low-Rank Approximation for Sparse Attention in Multi-Modal LLMs | Lin Song, Yukang Chen, Shuai Yang, Xiaohan Ding, Yixiao Ge, Ying-Cong Chen, Ying Shan | This paper focuses on the high computational complexity of Large Language Models (LLMs), a significant challenge in both natural language processing (NLP) and multi-modal tasks. We propose Low-Rank Approximation for Sparse Attention (LoRA-Sparse), an innovative approach that strategically reduces this complexity. LoRA-Sparse introduces low-rank linear projection layers for sparse attention approximation. It utilizes an order-mimic training methodology, which is crucial for efficiently approximating the self-attention mechanism in LLMs. We empirically show that sparse attention not only reduces computational demands but also enhances model performance in both NLP and multi-modal tasks. This surprisingly shows that redundant attention in LLMs might be non-beneficial. We extensively validate LoRA-Sparse through rigorous empirical studies in both NLP and multi-modal tasks, demonstrating its effectiveness and general applicability. Based on LLaMA and LLaVA models, our methods can reduce more than half of the self-attention computation with even better performance than full-attention baselines. | https://openaccess.thecvf.com/content/CVPR2024/papers/Song_Low-Rank_Approximation_for_Sparse_Attention_in_Multi-Modal_LLMs_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Song_Low-Rank_Approximation_for_Sparse_Attention_in_Multi-Modal_LLMs_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Song_Low-Rank_Approximation_for_Sparse_Attention_in_Multi-Modal_LLMs_CVPR_2024_paper.html | CVPR 2024 | null | null
TASeg: Temporal Aggregation Network for LiDAR Semantic Segmentation | Xiaopei Wu, Yuenan Hou, Xiaoshui Huang, Binbin Lin, Tong He, Xinge Zhu, Yuexin Ma, Boxi Wu, Haifeng Liu, Deng Cai, Wanli Ouyang | Training deep models for LiDAR semantic segmentation is challenging due to the inherent sparsity of point clouds. Utilizing temporal data is a natural remedy for the sparsity problem, as it makes the input signal denser. However, previous multi-frame fusion algorithms fall short in utilizing sufficient temporal information due to memory constraints, and they also ignore the informative temporal images. To fully exploit the rich information hidden in long-term temporal point clouds and images, we present the Temporal Aggregation Network, termed TASeg. Specifically, we propose a Temporal LiDAR Aggregation and Distillation (TLAD) algorithm, which leverages historical priors to assign different aggregation steps to different classes. It can largely reduce memory and time overhead while achieving higher accuracy. Besides, TLAD trains a teacher injected with ground-truth priors to distill the model, further boosting the performance. To make full use of temporal images, we design a Temporal Image Aggregation and Fusion (TIAF) module, which can greatly expand the camera FOV and enhance the present features. Temporal LiDAR points in the camera FOV are used as mediums to transform temporal image features to the present coordinate frame for temporal multi-modal fusion. Moreover, we develop a Static-Moving Switch Augmentation (SMSA) algorithm, which utilizes sufficient temporal information to enable objects to switch their motion states freely, thus greatly increasing static and moving training samples. Our TASeg ranks 1st on three challenging tracks, i.e. the SemanticKITTI single-scan track, multi-scan track, and nuScenes LiDAR segmentation track, strongly demonstrating the superiority of our method. Codes are available at https://github.com/LittlePey/TASeg. | https://openaccess.thecvf.com/content/CVPR2024/papers/Wu_TASeg_Temporal_Aggregation_Network_for_LiDAR_Semantic_Segmentation_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Wu_TASeg_Temporal_Aggregation_Network_for_LiDAR_Semantic_Segmentation_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Wu_TASeg_Temporal_Aggregation_Network_for_LiDAR_Semantic_Segmentation_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wu_TASeg_Temporal_Aggregation_CVPR_2024_supplemental.pdf | null
Bootstrapping SparseFormers from Vision Foundation Models | Ziteng Gao, Zhan Tong, Kevin Qinghong Lin, Joya Chen, Mike Zheng Shou | The recently proposed SparseFormer architecture provides an alternative approach to visual understanding by utilizing a significantly lower number of visual tokens via adjusting RoIs, greatly reducing computational costs while still achieving promising performance. However, training SparseFormers from scratch is still expensive, and scaling up the number of parameters can be challenging. In this paper, we propose to bootstrap SparseFormers from ViT-based vision foundation models in a simple and efficient way. Since the majority of SparseFormer blocks are standard transformer ones, we can inherit weights from large-scale pre-trained vision transformers and freeze them as much as possible. Therefore, we only need to train the SparseFormer-specific lightweight focusing transformer to adjust token RoIs and fine-tune a few early pre-trained blocks to align the final token representation. In such a way, we can bootstrap SparseFormer architectures from various large-scale pre-trained models (e.g. IN-21K pre-trained AugRegs or CLIPs) using a rather small amount of training samples (e.g. IN-1K), without labels or captions, within just a few hours. As a result, the bootstrapped unimodal SparseFormer (from AugReg-ViT-L/16-384) can reach 84.9% accuracy on IN-1K with only 49 tokens, and the multimodal SparseFormer from CLIPs also demonstrates notable zero-shot performance with highly reduced computational cost, without seeing any caption during the bootstrapping procedure. In addition, CLIP-bootstrapped SparseFormers, which align the output space with language without seeing a word, can serve as efficient vision encoders in multimodal large language models. Code and models are available at https://github.com/showlab/sparseformer | https://openaccess.thecvf.com/content/CVPR2024/papers/Gao_Bootstrapping_SparseFormers_from_Vision_Foundation_Models_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.01987 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Gao_Bootstrapping_SparseFormers_from_Vision_Foundation_Models_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Gao_Bootstrapping_SparseFormers_from_Vision_Foundation_Models_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Gao_Bootstrapping_SparseFormers_from_CVPR_2024_supplemental.pdf | null
EventPS: Real-Time Photometric Stereo Using an Event Camera | Bohan Yu, Jieji Ren, Jin Han, Feishi Wang, Jinxiu Liang, Boxin Shi | Photometric stereo is a well-established technique to estimate the surface normal of an object. However, the requirement of capturing multiple high-dynamic-range images under different illumination conditions limits its speed and real-time applications. This paper introduces EventPS, a novel approach to real-time photometric stereo using an event camera. Capitalizing on the exceptional temporal resolution, dynamic range, and low bandwidth characteristics of event cameras, EventPS estimates surface normals only from radiance changes, significantly enhancing data efficiency. EventPS seamlessly integrates with both optimization-based and deep-learning-based photometric stereo techniques to offer a robust solution for non-Lambertian surfaces. Extensive experiments validate the effectiveness and efficiency of EventPS compared to frame-based counterparts. Our algorithm runs at over 30 fps in real-world scenarios, unleashing the potential of EventPS in time-sensitive and high-speed downstream applications. | https://openaccess.thecvf.com/content/CVPR2024/papers/Yu_EventPS_Real-Time_Photometric_Stereo_Using_an_Event_Camera_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Yu_EventPS_Real-Time_Photometric_Stereo_Using_an_Event_Camera_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Yu_EventPS_Real-Time_Photometric_Stereo_Using_an_Event_Camera_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yu_EventPS_Real-Time_Photometric_CVPR_2024_supplemental.zip | null
Unsupervised Semantic Segmentation Through Depth-Guided Feature Correlation and Sampling | Leon Sick, Dominik Engel, Pedro Hermosilla, Timo Ropinski | Traditionally, training neural networks to perform semantic segmentation has required expensive human-made annotations. But more recently, advances in the field of unsupervised learning have made significant progress on this issue, towards closing the gap to supervised algorithms. To achieve this, semantic knowledge is distilled by learning to correlate randomly sampled features from images across an entire dataset. In this work, we build upon these advances by incorporating information about the structure of the scene into the training process through the use of depth information. We achieve this by (1) learning depth-feature correlation, spatially correlating the feature maps with the depth maps to induce knowledge about the structure of the scene, and (2) exploiting farthest-point sampling to more effectively select relevant features by utilizing 3D sampling techniques on the depth information of the scene. Finally, we demonstrate the effectiveness of our technical contributions through extensive experimentation and present significant performance improvements across multiple benchmark datasets. | https://openaccess.thecvf.com/content/CVPR2024/papers/Sick_Unsupervised_Semantic_Segmentation_Through_Depth-Guided_Feature_Correlation_and_Sampling_CVPR_2024_paper.pdf | http://arxiv.org/abs/2309.12378 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Sick_Unsupervised_Semantic_Segmentation_Through_Depth-Guided_Feature_Correlation_and_Sampling_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Sick_Unsupervised_Semantic_Segmentation_Through_Depth-Guided_Feature_Correlation_and_Sampling_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Sick_Unsupervised_Semantic_Segmentation_CVPR_2024_supplemental.pdf | null
On the Road to Portability: Compressing End-to-End Motion Planner for Autonomous Driving | Kaituo Feng, Changsheng Li, Dongchun Ren, Ye Yuan, Guoren Wang | End-to-end motion planning models equipped with deep neural networks have shown great potential for enabling full autonomous driving. However the oversized neural networks render them impractical for deployment on resource-constrained systems which unavoidably require more computational time and resources during inference. To handle this knowledge distillation offers a promising approach that compresses models by enabling a smaller student model to learn from a larger teacher model. Nevertheless how to apply knowledge distillation to compress motion planners has not been explored so far. In this paper we propose PlanKD the first knowledge distillation framework tailored for compressing end-to-end motion planners. First considering that driving scenes are inherently complex often containing planning-irrelevant or even noisy information transferring such information is not beneficial for the student planner. Thus we design an information bottleneck based strategy to only distill planning-relevant information rather than transfer all information indiscriminately. Second different waypoints in an output planned trajectory may hold varying degrees of importance for motion planning where a slight deviation in certain crucial waypoints might lead to a collision. Therefore we devise a safety-aware waypoint-attentive distillation module that assigns adaptive weights to different waypoints based on their importance to encourage the student to accurately mimic more crucial waypoints thereby improving overall safety. Experiments demonstrate that our PlanKD can boost the performance of smaller planners by a large margin and significantly reduce their inference time. | https://openaccess.thecvf.com/content/CVPR2024/papers/Feng_On_the_Road_to_Portability_Compressing_End-to-End_Motion_Planner_for_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.01238 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Feng_On_the_Road_to_Portability_Compressing_End-to-End_Motion_Planner_for_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Feng_On_the_Road_to_Portability_Compressing_End-to-End_Motion_Planner_for_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Feng_On_the_Road_CVPR_2024_supplemental.pdf | null |
RAVE: Randomized Noise Shuffling for Fast and Consistent Video Editing with Diffusion Models | Ozgur Kara, Bariscan Kurtkaya, Hidir Yesiltepe, James M. Rehg, Pinar Yanardag | Recent advancements in diffusion-based models have demonstrated significant success in generating images from text. However video editing models have not yet reached the same level of visual quality and user control. To address this we introduce RAVE a zero-shot video editing method that leverages pre-trained text-to-image diffusion models without additional training. RAVE takes an input video and a text prompt to produce high-quality videos while preserving the original motion and semantic structure. It employs a novel noise shuffling strategy leveraging spatio-temporal interactions between frames to produce temporally consistent videos faster than existing methods. It is also efficient in terms of memory requirements allowing it to handle longer videos. RAVE is capable of a wide range of edits from local attribute modifications to shape transformations. In order to demonstrate the versatility of RAVE we create a comprehensive video evaluation dataset ranging from object-focused scenes to complex human activities like dancing and typing and dynamic scenes featuring swimming fish and boats. Our qualitative and quantitative experiments highlight the effectiveness of RAVE in diverse video editing scenarios compared to existing methods. Our code dataset and videos can be found at https://rave-video-edit.github.io/. | https://openaccess.thecvf.com/content/CVPR2024/papers/Kara_RAVE_Randomized_Noise_Shuffling_for_Fast_and_Consistent_Video_Editing_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.04524 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Kara_RAVE_Randomized_Noise_Shuffling_for_Fast_and_Consistent_Video_Editing_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Kara_RAVE_Randomized_Noise_Shuffling_for_Fast_and_Consistent_Video_Editing_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Kara_RAVE_Randomized_Noise_CVPR_2024_supplemental.pdf | null |
PredToken: Predicting Unknown Tokens and Beyond with Coarse-to-Fine Iterative Decoding | Xuesong Nie, Haoyuan Jin, Yunfeng Yan, Xi Chen, Zhihang Zhu, Donglian Qi | Predictive learning models which aim to predict future frames based on past observations are crucial to constructing world models. These models need to maintain low-level consistency and capture high-level dynamics in unannotated spatiotemporal data. Transitioning from frame-wise to token-wise prediction presents a viable strategy for addressing these needs. However improving token representation and optimizing token decoding present significant challenges. This paper introduces PredToken a novel predictive framework that addresses these issues by decoupling space-time tokens into distinct components for iterative cascaded decoding. Concretely we first design a "decomposition quantization and reconstruction" scheme based on VQGAN to improve the token representation. This scheme disentangles low- and high-frequency representations and employs a dimension-aware quantization model allowing more low-level details to be preserved. Building on this we present a "coarse-to-fine iterative decoding" method. It leverages dynamic soft decoding to refine coarse tokens and static soft decoding for fine tokens enabling more high-level dynamics to be captured. These designs make PredToken produce high-quality predictions. Extensive experiments demonstrate the superiority of our method on various real-world spatiotemporal predictive benchmarks. Furthermore PredToken can also be extended to other visual generative tasks to yield realistic outcomes. | https://openaccess.thecvf.com/content/CVPR2024/papers/Nie_PredToken_Predicting_Unknown_Tokens_and_Beyond_with_Coarse-to-Fine_Iterative_Decoding_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Nie_PredToken_Predicting_Unknown_Tokens_and_Beyond_with_Coarse-to-Fine_Iterative_Decoding_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Nie_PredToken_Predicting_Unknown_Tokens_and_Beyond_with_Coarse-to-Fine_Iterative_Decoding_CVPR_2024_paper.html | CVPR 2024 | null | null |
Video-Based Human Pose Regression via Decoupled Space-Time Aggregation | Jijie He, Wenwu Yang | By leveraging temporal dependency in video sequences multi-frame human pose estimation algorithms have demonstrated remarkable results in complicated situations such as occlusion motion blur and video defocus. These algorithms are predominantly based on heatmaps resulting in high computation and storage requirements per frame which limits their flexibility and real-time application in video scenarios particularly on edge devices. In this paper we develop an efficient and effective video-based human pose regression method which bypasses intermediate representations such as heatmaps and instead directly maps the input to the output joint coordinates. Despite the inherent spatial correlation among adjacent joints of the human pose the temporal trajectory of each individual joint exhibits relative independence. In light of this we propose a novel Decoupled Space-Time Aggregation network (DSTA) to separately capture the spatial contexts between adjacent joints and the temporal cues of each individual joint thereby avoiding the conflation of spatiotemporal dimensions. Concretely DSTA learns a dedicated feature token for each joint to facilitate the modeling of their spatiotemporal dependencies. With the proposed joint-wise local-awareness attention mechanism our method is capable of efficiently and flexibly utilizing the spatial dependency of adjacent joints and the temporal dependency of each joint itself. Extensive experiments demonstrate the superiority of our method. Compared to previous regression-based single-frame human pose estimation methods DSTA significantly enhances performance achieving an 8.9 mAP improvement on PoseTrack2017. Furthermore our approach either surpasses or is on par with the state-of-the-art heatmap-based multi-frame human pose estimation methods. Project page: https://github.com/zgspose/DSTA. | https://openaccess.thecvf.com/content/CVPR2024/papers/He_Video-Based_Human_Pose_Regression_via_Decoupled_Space-Time_Aggregation_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.19926 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/He_Video-Based_Human_Pose_Regression_via_Decoupled_Space-Time_Aggregation_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/He_Video-Based_Human_Pose_Regression_via_Decoupled_Space-Time_Aggregation_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/He_Video-Based_Human_Pose_CVPR_2024_supplemental.zip | null |
L-MAGIC: Language Model Assisted Generation of Images with Coherence | Zhipeng Cai, Matthias Mueller, Reiner Birkl, Diana Wofk, Shao-Yen Tseng, Junda Cheng, Gabriela Ben-Melech Stan, Vasudev Lai, Michael Paulitsch | In the current era of generative AI breakthroughs generating panoramic scenes from a single input image remains a key challenge. Most existing methods use diffusion-based iterative or simultaneous multi-view inpainting. However the lack of global scene layout priors leads to subpar outputs with duplicated objects (e.g. multiple beds in a bedroom) or requires time-consuming human text inputs for each view. We propose L-MAGIC a novel method leveraging large language models for guidance while diffusing multiple coherent views of 360 degree panoramic scenes. L-MAGIC harnesses pre-trained diffusion and language models without fine-tuning ensuring zero-shot performance. The output quality is further enhanced by super-resolution and multi-view fusion techniques. Extensive experiments demonstrate that the resulting panoramic scenes feature better scene layouts and perspective view rendering quality compared to related works with >70% preference in human evaluations. Combined with conditional diffusion models L-MAGIC can accept various input modalities including but not limited to text depth maps sketches and colored scripts. Applying depth estimation further enables 3D point cloud generation and dynamic scene exploration with fluid camera motion. | https://openaccess.thecvf.com/content/CVPR2024/papers/Cai_L-MAGIC_Language_Model_Assisted_Generation_of_Images_with_Coherence_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Cai_L-MAGIC_Language_Model_Assisted_Generation_of_Images_with_Coherence_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Cai_L-MAGIC_Language_Model_Assisted_Generation_of_Images_with_Coherence_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Cai_L-MAGIC_Language_Model_CVPR_2024_supplemental.pdf | null |
3D Face Tracking from 2D Video through Iterative Dense UV to Image Flow | Felix Taubner, Prashant Raina, Mathieu Tuli, Eu Wern Teh, Chul Lee, Jinmiao Huang | When working with 3D facial data improving fidelity and avoiding the uncanny valley effect is critically dependent on accurate 3D facial performance capture. Because such capture methods are expensive and 2D videos are widely available recent methods have focused on how to perform monocular 3D face tracking. However these methods often fall short in capturing precise facial movements due to limitations in their network architecture training and evaluation processes. Addressing these challenges we propose a novel face tracker FlowFace that introduces an innovative 2D alignment network for dense per-vertex alignment. Unlike prior work FlowFace is trained on high-quality 3D scan annotations rather than weak supervision or synthetic data. Our 3D model fitting module jointly fits a 3D face model from one or many observations integrating existing neutral shape priors for enhanced identity and expression disentanglement and per-vertex deformations for detailed facial feature reconstruction. Additionally we propose a novel metric and benchmark for assessing tracking accuracy. Our method exhibits superior performance on both custom and publicly available benchmarks. We further validate the effectiveness of our tracker by generating high-quality 3D data from 2D videos which leads to performance gains on downstream tasks. | https://openaccess.thecvf.com/content/CVPR2024/papers/Taubner_3D_Face_Tracking_from_2D_Video_through_Iterative_Dense_UV_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.09819 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Taubner_3D_Face_Tracking_from_2D_Video_through_Iterative_Dense_UV_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Taubner_3D_Face_Tracking_from_2D_Video_through_Iterative_Dense_UV_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Taubner_3D_Face_Tracking_CVPR_2024_supplemental.zip | null |
Carve3D: Improving Multi-view Reconstruction Consistency for Diffusion Models with RL Finetuning | Desai Xie, Jiahao Li, Hao Tan, Xin Sun, Zhixin Shu, Yi Zhou, Sai Bi, Sören Pirk, Arie E. Kaufman | Multi-view diffusion models obtained by applying Supervised Finetuning (SFT) to text-to-image diffusion models have driven recent breakthroughs in text-to-3D research. However due to the limited size and quality of existing 3D datasets they still suffer from multi-view inconsistencies and Neural Radiance Field (NeRF) reconstruction artifacts. We argue that multi-view diffusion models can benefit from further Reinforcement Learning Finetuning (RLFT) which allows models to learn from the data generated by themselves and improve beyond their dataset limitations during SFT. To this end we introduce Carve3D an improved RLFT algorithm coupled with a novel Multi-view Reconstruction Consistency (MRC) metric to enhance the consistency of multi-view diffusion models. To measure the MRC metric on a set of multi-view images we compare them with their corresponding NeRF renderings at the same camera viewpoints. The resulting model which we denote as Carve3DM demonstrates superior multi-view consistency and NeRF reconstruction quality compared to existing models. Our results suggest that pairing SFT with Carve3D's RLFT is essential for developing multi-view-consistent diffusion models mirroring the standard Large Language Model (LLM) alignment pipeline. Our code training and testing data and video results are available at: https://desaixie.github.io/carve-3d. | https://openaccess.thecvf.com/content/CVPR2024/papers/Xie_Carve3D_Improving_Multi-view_Reconstruction_Consistency_for_Diffusion_Models_with_RL_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.13980 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Xie_Carve3D_Improving_Multi-view_Reconstruction_Consistency_for_Diffusion_Models_with_RL_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Xie_Carve3D_Improving_Multi-view_Reconstruction_Consistency_for_Diffusion_Models_with_RL_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Xie_Carve3D_Improving_Multi-view_CVPR_2024_supplemental.pdf | null |
Random Entangled Tokens for Adversarially Robust Vision Transformer | Huihui Gong, Minjing Dong, Siqi Ma, Seyit Camtepe, Surya Nepal, Chang Xu | Vision Transformers (ViTs) have emerged as a compelling alternative to Convolutional Neural Networks (CNNs) in the realm of computer vision showcasing tremendous potential. However recent research has unveiled a susceptibility of ViTs to adversarial attacks akin to their CNN counterparts. Adversarial training and randomization are two representative effective defenses for CNNs. Some researchers have attempted to apply adversarial training to ViTs and achieved comparable robustness to CNNs but it is not easy to directly apply randomization to ViTs because of the architecture difference between CNNs and ViTs. In this paper we delve into the structural intricacies of ViTs and propose a novel defense mechanism termed Random entangled image Transformer (ReiT) which seamlessly integrates adversarial training and randomization to bolster the adversarial robustness of ViTs. Recognizing the challenge posed by the structural disparities between ViTs and CNNs we introduce a novel module input-independent random entangled self-attention (II-ReSA). This module optimizes random entangled tokens that lead to "dissimilar" self-attention outputs by leveraging model parameters and the sampled random tokens thereby synthesizing the self-attention module outputs and random entangled tokens to diminish adversarial similarity. ReiT incorporates two distinct random entangled tokens and employs dual randomization offering an effective countermeasure against adversarial examples while ensuring comprehensive deduction guarantees. Through extensive experiments conducted on various ViT variants and benchmarks we substantiate the superiority of our proposed method in enhancing the adversarial robustness of Vision Transformers. | https://openaccess.thecvf.com/content/CVPR2024/papers/Gong_Random_Entangled_Tokens_for_Adversarially_Robust_Vision_Transformer_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Gong_Random_Entangled_Tokens_for_Adversarially_Robust_Vision_Transformer_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Gong_Random_Entangled_Tokens_for_Adversarially_Robust_Vision_Transformer_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Gong_Random_Entangled_Tokens_CVPR_2024_supplemental.pdf | null |
Shadow Generation for Composite Image Using Diffusion Model | Qingyang Liu, Junqi You, Jianting Wang, Xinhao Tao, Bo Zhang, Li Niu | In the realm of image composition generating realistic shadows for the inserted foreground remains a formidable challenge. Previous works have developed image-to-image translation models which are trained on paired training data. However they struggle to generate shadows with accurate shapes and intensities hindered by data scarcity and inherent task complexity. In this paper we resort to a foundation model with rich prior knowledge of natural shadow images. Specifically we first adapt ControlNet to our task and then propose intensity modulation modules to improve the shadow intensity. Moreover we extend the small-scale DESOBA dataset to DESOBAv2 using a novel data acquisition pipeline. Experimental results on both DESOBA and DESOBAv2 datasets as well as real composite images demonstrate the superior capability of our model for the shadow generation task. The dataset code and model are released at https://github.com/bcmi/Object-Shadow-Generation-Dataset-DESOBAv2. | https://openaccess.thecvf.com/content/CVPR2024/papers/Liu_Shadow_Generation_for_Composite_Image_Using_Diffusion_Model_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.15234 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Liu_Shadow_Generation_for_Composite_Image_Using_Diffusion_Model_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Liu_Shadow_Generation_for_Composite_Image_Using_Diffusion_Model_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Liu_Shadow_Generation_for_CVPR_2024_supplemental.pdf | null |
DisCo: Disentangled Control for Realistic Human Dance Generation | Tan Wang, Linjie Li, Kevin Lin, Yuanhao Zhai, Chung-Ching Lin, Zhengyuan Yang, Hanwang Zhang, Zicheng Liu, Lijuan Wang | Generative AI has made significant strides in computer vision particularly in text-driven image/video synthesis (T2I/T2V). Despite the notable advancements it remains challenging in human-centric content synthesis such as realistic dance generation. Current methodologies primarily tailored for human motion transfer encounter difficulties when confronted with real-world dance scenarios (e.g. social media dance) which require generalizing across a wide spectrum of poses and intricate human details. In this paper we depart from the traditional paradigm of human motion transfer and emphasize two additional critical attributes for the synthesis of human dance content in social media contexts: (i) Generalizability: the model should be able to generalize beyond generic human viewpoints as well as unseen human subjects backgrounds and poses; (ii) Compositionality: it should allow for the seamless composition of seen/unseen subjects backgrounds and poses from different sources. To address these challenges we introduce DISCO which includes a novel model architecture with disentangled control to improve the compositionality of dance synthesis and an effective human attribute pre-training for better generalizability to unseen humans. Extensive qualitative and quantitative results demonstrate that DISCO can generate high-quality human dance images and videos with diverse appearances and flexible motions. Code is available at https://disco-dance.github.io/. | https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_DisCo_Disentangled_Control_for_Realistic_Human_Dance_Generation_CVPR_2024_paper.pdf | http://arxiv.org/abs/2307.00040 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Wang_DisCo_Disentangled_Control_for_Realistic_Human_Dance_Generation_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Wang_DisCo_Disentangled_Control_for_Realistic_Human_Dance_Generation_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wang_DisCo_Disentangled_Control_CVPR_2024_supplemental.pdf | null |
L2B: Learning to Bootstrap Robust Models for Combating Label Noise | Yuyin Zhou, Xianhang Li, Fengze Liu, Qingyue Wei, Xuxi Chen, Lequan Yu, Cihang Xie, Matthew P. Lungren, Lei Xing | Deep neural networks have shown great success in representation learning. However when learning with noisy labels (LNL) they can easily overfit and fail to generalize to new data. This paper introduces a simple and effective method named Learning to Bootstrap (L2B) which enables models to bootstrap themselves using their own predictions without being adversely affected by erroneous pseudo-labels. It achieves this by dynamically adjusting the importance weight between real observed and generated labels as well as between different samples through meta-learning. Unlike existing instance reweighting methods the key to our method lies in a new versatile objective that enables implicit relabeling concurrently leading to significant improvements without incurring additional costs. L2B offers several benefits over the baseline methods. It yields more robust models that are less susceptible to the impact of noisy labels by guiding the bootstrapping procedure more effectively. It better exploits the valuable information contained in corrupted instances by adapting the weights of both instances and labels. Furthermore L2B is compatible with existing LNL methods and delivers competitive results spanning natural and medical imaging tasks including classification and segmentation under both synthetic and real-world noise. Extensive experiments demonstrate that our method effectively mitigates the challenges of noisy labels often necessitating few to no validation samples and is well generalized to other tasks such as image segmentation. This not only positions it as a robust complement to existing LNL techniques but also underscores its practical applicability. The code and models are available at https://github.com/yuyinzhou/l2b. | https://openaccess.thecvf.com/content/CVPR2024/papers/Zhou_L2B_Learning_to_Bootstrap_Robust_Models_for_Combating_Label_Noise_CVPR_2024_paper.pdf | http://arxiv.org/abs/2202.04291 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Zhou_L2B_Learning_to_Bootstrap_Robust_Models_for_Combating_Label_Noise_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Zhou_L2B_Learning_to_Bootstrap_Robust_Models_for_Combating_Label_Noise_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhou_L2B_Learning_to_CVPR_2024_supplemental.pdf | null |
GaussianShader: 3D Gaussian Splatting with Shading Functions for Reflective Surfaces | Yingwenqi Jiang, Jiadong Tu, Yuan Liu, Xifeng Gao, Xiaoxiao Long, Wenping Wang, Yuexin Ma | The advent of neural 3D Gaussians has recently brought about a revolution in the field of neural rendering facilitating the generation of high-quality renderings at real-time speeds. However the explicit and discrete representation encounters challenges when applied to scenes featuring reflective surfaces. In this paper we present GaussianShader a novel method that applies a simplified shading function on 3D Gaussians to enhance the neural rendering in scenes with reflective surfaces while preserving the training and rendering efficiency. The main challenge in applying the shading function lies in the accurate normal estimation on discrete 3D Gaussians. Specifically we propose a novel normal estimation framework based on the shortest axis directions of 3D Gaussians with a delicately designed loss that enforces consistency between the normals and the geometries of the Gaussian spheres. Experiments show that GaussianShader strikes a commendable balance between efficiency and visual quality. Our method surpasses Gaussian Splatting in PSNR on specular object datasets exhibiting an improvement of 1.57dB. When compared to prior works handling reflective surfaces such as Ref-NeRF our optimization time is significantly accelerated (23h vs. 0.58h). | https://openaccess.thecvf.com/content/CVPR2024/papers/Jiang_GaussianShader_3D_Gaussian_Splatting_with_Shading_Functions_for_Reflective_Surfaces_CVPR_2024_paper.pdf | http://arxiv.org/abs/2311.17977 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Jiang_GaussianShader_3D_Gaussian_Splatting_with_Shading_Functions_for_Reflective_Surfaces_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Jiang_GaussianShader_3D_Gaussian_Splatting_with_Shading_Functions_for_Reflective_Surfaces_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Jiang_GaussianShader_3D_Gaussian_CVPR_2024_supplemental.pdf | null |
Tactile-Augmented Radiance Fields | Yiming Dou, Fengyu Yang, Yi Liu, Antonio Loquercio, Andrew Owens | We present a scene representation that brings vision and touch into a shared 3D space which we call a tactile-augmented radiance field. This representation capitalizes on two key insights: (i) ubiquitous vision-based touch sensors are built on perspective cameras and (ii) visually and structurally similar regions of a scene share the same tactile features. We use these insights to train a conditional diffusion model that provided with an RGB image and a depth map rendered from a neural radiance field generates its corresponding tactile "image". To train this diffusion model we collect the largest dataset of spatially-aligned visual and tactile data. Through qualitative and quantitative experiments we demonstrate the accuracy of our cross-modal generative model and the utility of collected and rendered visual-tactile pairs across a range of downstream tasks. Project page: https://dou-yiming.github.io/TaRF | https://openaccess.thecvf.com/content/CVPR2024/papers/Dou_Tactile-Augmented_Radiance_Fields_CVPR_2024_paper.pdf | http://arxiv.org/abs/2405.04534 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Dou_Tactile-Augmented_Radiance_Fields_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Dou_Tactile-Augmented_Radiance_Fields_CVPR_2024_paper.html | CVPR 2024 | null | null |
Intensity-Robust Autofocus for Spike Camera | Changqing Su, Zhiyuan Ye, Yongsheng Xiao, You Zhou, Zhen Cheng, Bo Xiong, Zhaofei Yu, Tiejun Huang | Spike cameras are novel neuromorphic visual sensors that can capture full-time spatial information through spike streams offering ultra-high temporal resolution and an extensive dynamic range. Autofocus control (AC) plays a pivotal role in a camera to efficiently capture information in challenging real-world scenarios. Nevertheless due to disparities in data modality and information characteristics compared to frame stream and event stream the current lack of efficient AC methods has made it challenging for spike cameras to adapt to intricate real-world conditions. To address this challenge we introduce a spike-based autofocus framework that includes a spike-specific focus measure called spike dispersion (SD) which effectively mitigates the influence of variations in scene light intensity during the focusing process by leveraging the spike camera's ability to record full-time spatial light intensity. Additionally the framework integrates a fast search strategy called spike-based golden fast search (SGFS) allowing rapid focal positioning without the need for a complete focus range traversal. To validate the performance of our method we have collected a spike-based autofocus dataset (SAD) containing synthetic data and real-world data under varying scene brightness and motion scenarios. Experimental results on these datasets demonstrate that our method offers state-of-the-art accuracy and efficiency. Furthermore experiments with data captured under varying scene brightness levels illustrate the robustness of our method to changes in light intensity during the focusing process. | https://openaccess.thecvf.com/content/CVPR2024/papers/Su_Intensity-Robust_Autofocus_for_Spike_Camera_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Su_Intensity-Robust_Autofocus_for_Spike_Camera_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Su_Intensity-Robust_Autofocus_for_Spike_Camera_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Su_Intensity-Robust_Autofocus_for_CVPR_2024_supplemental.pdf | null |
FairCLIP: Harnessing Fairness in Vision-Language Learning | Yan Luo, Min Shi, Muhammad Osama Khan, Muhammad Muneeb Afzal, Hao Huang, Shuaihang Yuan, Yu Tian, Luo Song, Ava Kouhana, Tobias Elze, Yi Fang, Mengyu Wang | Fairness is a critical concern in deep learning especially in healthcare where these models influence diagnoses and treatment decisions. Although fairness has been investigated in the vision-only domain the fairness of medical vision-language (VL) models remains unexplored due to the scarcity of medical VL datasets for studying fairness. To bridge this research gap we introduce the first fair vision-language medical dataset (Harvard-FairVLMed) that provides detailed demographic attributes ground-truth labels and clinical notes to facilitate an in-depth examination of fairness within VL foundation models. Using Harvard-FairVLMed we conduct a comprehensive fairness analysis of two widely-used VL models (CLIP and BLIP2) pre-trained on both natural and medical domains across four different protected attributes. Our results highlight significant biases in all VL models with Asian Male Non-Hispanic and Spanish being the preferred subgroups across the protected attributes of race gender ethnicity and language respectively. In order to alleviate these biases we propose FairCLIP an optimal-transport-based approach that achieves a favorable trade-off between performance and fairness by reducing the Sinkhorn distance between the overall sample distribution and the distributions corresponding to each demographic group. As the first VL dataset of its kind Harvard-FairVLMed holds the potential to catalyze advancements in the development of machine learning models that are both ethically aware and clinically effective. Our dataset and code are available at https://ophai.hms.harvard.edu/datasets/harvard-fairvlmed10k. | https://openaccess.thecvf.com/content/CVPR2024/papers/Luo_FairCLIP_Harnessing_Fairness_in_Vision-Language_Learning_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.19949 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Luo_FairCLIP_Harnessing_Fairness_in_Vision-Language_Learning_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Luo_FairCLIP_Harnessing_Fairness_in_Vision-Language_Learning_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Luo_FairCLIP_Harnessing_Fairness_CVPR_2024_supplemental.pdf | null |
StreamingFlow: Streaming Occupancy Forecasting with Asynchronous Multi-modal Data Streams via Neural Ordinary Differential Equation | Yining Shi, Kun Jiang, Ke Wang, Jiusi Li, Yunlong Wang, Mengmeng Yang, Diange Yang | Predicting the future occupancy states of the surrounding environment is a vital task for autonomous driving. However current best-performing single-modality methods or multi-modality fusion perception methods are only able to predict uniform snapshots of future occupancy states and require strictly synchronized sensory data for sensor fusion. We propose a novel framework StreamingFlow to lift these strong limitations. StreamingFlow is a novel BEV occupancy predictor that ingests asynchronous multi-sensor data streams for fusion and performs streaming forecasting of the future occupancy map at any future timestamps. By integrating neural ordinary differential equations (N-ODE) into recurrent neural networks StreamingFlow learns derivatives of BEV features over temporal horizons updates the implicit sensor's BEV features as part of the fusion process and propagates BEV states to the desired future time point. It shows good zero-shot generalization ability of prediction reflected in the interpolation of the observed prediction time horizon and the reasonable inference of the unseen farther future period. Extensive experiments on two large-scale datasets nuScenes and Lyft L5 demonstrate that StreamingFlow significantly outperforms previous vision-based LiDAR-based methods and shows superior performance compared to state-of-the-art fusion-based methods. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Shi_StreamingFlow_Streaming_Occupancy_Forecasting_with_Asynchronous_Multi-modal_Data_Streams_via_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Shi_StreamingFlow_Streaming_Occupancy_Forecasting_with_Asynchronous_Multi-modal_Data_Streams_via_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Shi_StreamingFlow_Streaming_Occupancy_Forecasting_with_Asynchronous_Multi-modal_Data_Streams_via_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Shi_StreamingFlow_Streaming_Occupancy_CVPR_2024_supplemental.pdf | null |
pix2gestalt: Amodal Segmentation by Synthesizing Wholes | Ege Ozguroglu, Ruoshi Liu, Dídac Surís, Dian Chen, Achal Dave, Pavel Tokmakov, Carl Vondrick | We introduce pix2gestalt a framework for zero-shot amodal segmentation which learns to estimate the shape and appearance of whole objects that are only partially visible behind occlusions. By capitalizing on large-scale diffusion models and transferring their representations to this task we learn a conditional diffusion model for reconstructing whole objects in challenging zero-shot cases including examples that break natural and physical priors such as art. As training data we use a synthetically curated dataset containing occluded objects paired with their whole counterparts. Experiments show that our approach outperforms supervised baselines on established benchmarks. Our model can furthermore be used to significantly improve the performance of existing object recognition and 3D reconstruction methods in the presence of occlusions. | https://openaccess.thecvf.com/content/CVPR2024/papers/Ozguroglu_pix2gestalt_Amodal_Segmentation_by_Synthesizing_Wholes_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Ozguroglu_pix2gestalt_Amodal_Segmentation_by_Synthesizing_Wholes_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Ozguroglu_pix2gestalt_Amodal_Segmentation_by_Synthesizing_Wholes_CVPR_2024_paper.html | CVPR 2024 | null | null |
Weakly Supervised Point Cloud Semantic Segmentation via Artificial Oracle | Hyeokjun Kweon, Jihun Kim, Kuk-Jin Yoon | Manual annotation of every point in a point cloud is a costly and labor-intensive process. While weakly supervised point cloud semantic segmentation (WSPCSS) with sparse annotation shows promise the limited information from initial sparse labels can place an upper bound on performance. As a new research direction for WSPCSS we propose a novel Region Exploration via Artificial Labeling (REAL) framework. It leverages a foundational image model as an artificial oracle within the active learning context eliminating the need for manual annotation by a human oracle. To integrate the 2D model into the 3D domain we first introduce a Projection-based Point-to-Segment (PP2S) module designed to enable prompt segmentation of 3D data without additional training. The REAL framework samples query points based on model predictions and requests annotations from PP2S dynamically refining labels and improving model training. Furthermore to overcome several challenges of employing an artificial model as an oracle we formulate effective query sampling and label updating strategies. Our comprehensive experiments and comparisons demonstrate that the REAL framework significantly outperforms existing methods across various benchmarks. The code is available at https://github.com/jihun1998/AO. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Kweon_Weakly_Supervised_Point_Cloud_Semantic_Segmentation_via_Artificial_Oracle_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Kweon_Weakly_Supervised_Point_Cloud_Semantic_Segmentation_via_Artificial_Oracle_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Kweon_Weakly_Supervised_Point_Cloud_Semantic_Segmentation_via_Artificial_Oracle_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Kweon_Weakly_Supervised_Point_CVPR_2024_supplemental.pdf | null |
Language Model Guided Interpretable Video Action Reasoning | Ning Wang, Guangming Zhu, HS Li, Liang Zhang, Syed Afaq Ali Shah, Mohammed Bennamoun | Although neural networks excel in video action recognition tasks their "black-box" nature makes it challenging to understand the rationale behind their decisions. Recent approaches have used inherently interpretable models to analyze video actions in a manner akin to human reasoning. However it has been observed that these interpretable models tend to underperform when compared to their black-box counterparts. In this work we present a new framework called the Language-guided Interpretable Action Recognition (LaIAR) framework. This framework leverages knowledge from language models to enhance both the recognition capabilities and the interpretability of video models. In essence we reframe the challenge of understanding video model decisions as a task of aligning video and language models. Using the logical reasoning captured by the language model we steer the training of the video model. This integrated approach not only improves the video model's adaptability to different domains but also boosts its overall performance. Extensive experiments on Charades and CAD-120 datasets demonstrate the superior performance and interpretability of our proposed method. The code of LaIAR is available at https://github.com/NingWang2049/LaIAR. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_Language_Model_Guided_Interpretable_Video_Action_Reasoning_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.01591 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Wang_Language_Model_Guided_Interpretable_Video_Action_Reasoning_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Wang_Language_Model_Guided_Interpretable_Video_Action_Reasoning_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wang_Language_Model_Guided_CVPR_2024_supplemental.pdf | null |
Forecasting of 3D Whole-body Human Poses with Grasping Objects | Haitao Yan, Qiongjie Cui, Jiexin Xie, Shijie Guo | In the context of computer vision and human-robot interaction forecasting 3D human poses is crucial for understanding human behavior and enhancing the predictive capabilities of intelligent systems. While existing methods have made significant progress they often focus on predicting major body joints overlooking fine-grained gestures and their interaction with objects. Human hand movements particularly during object interactions play a pivotal role and provide more precise expressions of human poses. This work fills this gap and introduces a novel paradigm: forecasting 3D whole-body human poses with a focus on grasping objects. This task involves predicting activities across all joints in the body and hands encompassing the complexities of internal heterogeneity and external interactivity. To tackle these challenges we also propose a novel approach C^3HOST (cross-context cross-modal consolidation for 3D whole-body pose forecasting) which effectively handles the complexities of internal heterogeneity and external interactivity. C^3HOST involves distinct steps including the heterogeneous content encoding and alignment and cross-modal feature learning and interaction. These enable us to predict activities across all body and hand joints ensuring high-precision whole-body human pose prediction even during object grasping. Extensive experiments on two benchmarks demonstrate that our model significantly enhances the accuracy of whole-body human motion prediction. The project page is available at https://sites.google.com/view/c3host. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Yan_Forecasting_of_3D_Whole-body_Human_Poses_with_Grasping_Objects_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Yan_Forecasting_of_3D_Whole-body_Human_Poses_with_Grasping_Objects_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Yan_Forecasting_of_3D_Whole-body_Human_Poses_with_Grasping_Objects_CVPR_2024_paper.html | CVPR 2024 | null | null |
COTR: Compact Occupancy TRansformer for Vision-based 3D Occupancy Prediction | Qihang Ma, Xin Tan, Yanyun Qu, Lizhuang Ma, Zhizhong Zhang, Yuan Xie | The autonomous driving community has shown significant interest in 3D occupancy prediction driven by its exceptional geometric perception and general object recognition capabilities. To achieve this current works try to construct a Tri-Perspective View (TPV) or Occupancy (OCC) representation extending from the Bird's-Eye-View perception. However compressed views like the TPV representation lose 3D geometry information while the raw and sparse OCC representation incurs heavy but redundant computational costs. To address the above limitations we propose Compact Occupancy TRansformer (COTR) with a geometry-aware occupancy encoder and a semantic-aware group decoder to reconstruct a compact 3D OCC representation. The occupancy encoder first generates a compact geometrical OCC feature through efficient explicit-implicit view transformation. Then the occupancy decoder further enhances the semantic discriminability of the compact OCC representation by a coarse-to-fine semantic grouping strategy. Empirical experiments show that there are evident performance gains across multiple baselines e.g. COTR outperforms baselines with a relative improvement of 8%-15% demonstrating the superiority of our method. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Ma_COTR_Compact_Occupancy_TRansformer_for_Vision-based_3D_Occupancy_Prediction_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.01919 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Ma_COTR_Compact_Occupancy_TRansformer_for_Vision-based_3D_Occupancy_Prediction_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Ma_COTR_Compact_Occupancy_TRansformer_for_Vision-based_3D_Occupancy_Prediction_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ma_COTR_Compact_Occupancy_CVPR_2024_supplemental.pdf | null |
Accelerating Diffusion Sampling with Optimized Time Steps | Shuchen Xue, Zhaoqiang Liu, Fei Chen, Shifeng Zhang, Tianyang Hu, Enze Xie, Zhenguo Li | Diffusion probabilistic models (DPMs) have shown remarkable performance in high-resolution image synthesis but their sampling efficiency still leaves much to be desired due to the typically large number of sampling steps. Recent advancements in high-order numerical ODE solvers for DPMs have enabled the generation of high-quality images with much fewer sampling steps. While this is a significant development most sampling methods still employ uniform time steps which is not optimal when using a small number of steps. To address this issue we propose a general framework for designing an optimization problem that seeks more appropriate time steps for a specific numerical ODE solver for DPMs. This optimization problem aims to minimize the distance between the ground-truth solution to the ODE and an approximate solution corresponding to the numerical solver. It can be efficiently solved using the constrained trust region method taking less than 15 seconds. Our extensive experiments on both unconditional and conditional sampling using pixel- and latent-space DPMs demonstrate that when combined with the state-of-the-art sampling method UniPC our optimized time steps significantly improve image generation performance in terms of FID scores for datasets such as CIFAR-10 and ImageNet compared to using uniform time steps. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Xue_Accelerating_Diffusion_Sampling_with_Optimized_Time_Steps_CVPR_2024_paper.pdf | http://arxiv.org/abs/2402.17376 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Xue_Accelerating_Diffusion_Sampling_with_Optimized_Time_Steps_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Xue_Accelerating_Diffusion_Sampling_with_Optimized_Time_Steps_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Xue_Accelerating_Diffusion_Sampling_CVPR_2024_supplemental.pdf | null |
See Say and Segment: Teaching LMMs to Overcome False Premises | Tsung-Han Wu, Giscard Biamby, David Chan, Lisa Dunlap, Ritwik Gupta, Xudong Wang, Joseph E. Gonzalez, Trevor Darrell | Current open-source Large Multimodal Models (LMMs) excel at tasks such as open-vocabulary language grounding and segmentation but can suffer under false premises when queries imply the existence of something that is not actually present in the image. We observe that existing methods that fine-tune an LMM to segment images significantly degrade their ability to reliably determine ("see") if an object is present and to interact naturally with humans ("say") a form of catastrophic forgetting. In this work we propose a cascading and joint training approach for LMMs to solve this task avoiding catastrophic forgetting of previous skills. Our resulting model can "see" by detecting whether objects are present in an image "say" by telling the user if they are not proposing alternative queries or correcting semantic errors in the query and finally "segment" by outputting the mask of the desired objects if they exist. Additionally we introduce a novel False Premise Correction benchmark dataset an extension of existing RefCOCO(+/g) referring segmentation datasets (which we call FP-RefCOCO(+/g)). The results show that our method not only detects false premises up to 55% better than existing approaches but under false premise conditions produces relative cIOU improvements of more than 31% over baselines and produces natural language feedback judged helpful up to 67% of the time. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Wu_See_Say_and_Segment_Teaching_LMMs_to_Overcome_False_Premises_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.08366 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Wu_See_Say_and_Segment_Teaching_LMMs_to_Overcome_False_Premises_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Wu_See_Say_and_Segment_Teaching_LMMs_to_Overcome_False_Premises_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wu_See_Say_and_CVPR_2024_supplemental.pdf | null |
Is Ego Status All You Need for Open-Loop End-to-End Autonomous Driving? | Zhiqi Li, Zhiding Yu, Shiyi Lan, Jiahan Li, Jan Kautz, Tong Lu, Jose M. Alvarez | End-to-end autonomous driving recently emerged as a promising research direction to target autonomy from a full-stack perspective. Along this line many of the latest works follow an open-loop evaluation setting on nuScenes to study the planning behavior. In this paper we delve deeper into the problem by conducting thorough analyses and demystifying more devils in the details. We initially observed that the nuScenes dataset characterized by relatively simple driving scenarios leads to an under-utilization of perception information in end-to-end models incorporating ego status such as the ego vehicle's velocity. These models tend to rely predominantly on the ego vehicle's status for future path planning. Beyond the limitations of the dataset we also note that current metrics do not comprehensively assess the planning quality leading to potentially biased conclusions drawn from existing benchmarks. To address this issue we introduce a new metric to evaluate whether the predicted trajectories adhere to the road. We further propose a simple baseline able to achieve competitive results without relying on perception annotations. Given the current limitations on the benchmark and metrics we suggest the community reassess relevant prevailing research and be cautious about whether the continued pursuit of state-of-the-art would yield convincing and universal conclusions. Code and models are available at https://github.com/NVlabs/BEV-Planner. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Li_Is_Ego_Status_All_You_Need_for_Open-Loop_End-to-End_Autonomous_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.03031 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Li_Is_Ego_Status_All_You_Need_for_Open-Loop_End-to-End_Autonomous_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Li_Is_Ego_Status_All_You_Need_for_Open-Loop_End-to-End_Autonomous_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_Is_Ego_Status_CVPR_2024_supplemental.pdf | null |
Unsupervised Template-assisted Point Cloud Shape Correspondence Network | Jiacheng Deng, Jiahao Lu, Tianzhu Zhang | Unsupervised point cloud shape correspondence aims to establish point-wise correspondences between source and target point clouds. Existing methods obtain correspondences directly by computing point-wise feature similarity between point clouds. However non-rigid objects possess strong deformability and unusual shapes making it a longstanding challenge to directly establish correspondences between point clouds with unconventional shapes. To address this challenge we propose an unsupervised Template-Assisted point cloud shape correspondence Network termed TANet including a template generation module and a template assistance module. The proposed TANet enjoys several merits. Firstly the template generation module establishes a set of learnable templates with explicit structures. Secondly we introduce a template assistance module that extensively leverages the generated templates to establish more accurate shape correspondences from multiple perspectives. Extensive experiments on four human and animal datasets demonstrate that TANet achieves favorable performance against state-of-the-art methods. | https://openaccess.thecvf.com/content/CVPR2024/papers/Deng_Unsupervised_Template-assisted_Point_Cloud_Shape_Correspondence_Network_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.16412 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Deng_Unsupervised_Template-assisted_Point_Cloud_Shape_Correspondence_Network_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Deng_Unsupervised_Template-assisted_Point_Cloud_Shape_Correspondence_Network_CVPR_2024_paper.html | CVPR 2024 | null | null |
CGI-DM: Digital Copyright Authentication for Diffusion Models via Contrasting Gradient Inversion | Xiaoyu Wu, Yang Hua, Chumeng Liang, Jiaru Zhang, Hao Wang, Tao Song, Haibing Guan | Diffusion Models (DMs) have evolved into advanced image generation tools especially for few-shot generation where a pre-trained model is fine-tuned on a small set of images to capture a specific style or object. Despite their success concerns exist about potential copyright violations stemming from the use of unauthorized data in this process. In response we present Contrasting Gradient Inversion for Diffusion Models (CGI-DM) a novel method featuring vivid visual representations for digital copyright authentication. Our approach involves removing partial information of an image and recovering missing details by exploiting conceptual differences between the pre-trained and fine-tuned models. We formulate the differences as KL divergence between latent variables of the two models when given the same input image which can be maximized through Monte Carlo sampling and Projected Gradient Descent (PGD). The similarity between original and recovered images serves as a strong indicator of potential infringements. Extensive experiments on the WikiArt and Dreambooth datasets demonstrate the high accuracy of CGI-DM in digital copyright authentication surpassing alternative validation techniques. Code implementation is available at https://github.com/Nicholas0228/Revelio. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Wu_CGI-DM_Digital_Copyright_Authentication_for_Diffusion_Models_via_Contrasting_Gradient_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Wu_CGI-DM_Digital_Copyright_Authentication_for_Diffusion_Models_via_Contrasting_Gradient_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Wu_CGI-DM_Digital_Copyright_Authentication_for_Diffusion_Models_via_Contrasting_Gradient_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wu_CGI-DM_Digital_Copyright_CVPR_2024_supplemental.zip | null |
Making Visual Sense of Oracle Bones for You and Me | Runqi Qiao, Lan Yang, Kaiyue Pang, Honggang Zhang | Visual perception evolves over time. This is particularly the case of oracle bone scripts where visual glyphs that seemed intuitive to people in the distant past prove difficult to understand through contemporary eyes. While semantic correspondence of an oracle can be found via a dictionary lookup this proves to be not enough for public viewers to connect the dots i.e. why does this oracle mean that? The common solution relies on a laborious curation process to collect a visual guide for each oracle (Fig.1) which hinges on the case-by-case effort and taste of curators. This paper delves into one natural follow-up question: can AI take over? Beginning with a comprehensive human study we show participants could indeed make better sense of an oracle glyph subjected to a proper visual guide and its efficacy can be approximated via a novel metric termed TransOV (Transferable Oracle Visuals). We then define a new conditional visual generation task based on an oracle glyph and its semantic meaning and importantly approach it by circumventing any form of model training in the presence of a fatal lack of oracle data. At its heart is to leverage a foundation model like GPT-4V to reason about the visual cues hidden inside an oracle and take advantage of an existing text-to-image model for final visual guide generation. Extensive empirical evidence shows our AI-enabled visual guides achieve TransOV performance closely comparable with those collected under manual efforts. Finally we demonstrate the versatility of our system under a more complex setting where it is required to work alongside an AI image denoiser to cope with raw oracle scan image inputs (cf. processed clean oracle glyphs). Code is available at https://github.com/RQ-Lab/OBS-Visual. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Qiao_Making_Visual_Sense_of_Oracle_Bones_for_You_and_Me_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Qiao_Making_Visual_Sense_of_Oracle_Bones_for_You_and_Me_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Qiao_Making_Visual_Sense_of_Oracle_Bones_for_You_and_Me_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Qiao_Making_Visual_Sense_CVPR_2024_supplemental.pdf | null |
Finsler-Laplace-Beltrami Operators with Application to Shape Analysis | Simon Weber, Thomas Dagès, Maolin Gao, Daniel Cremers | The Laplace-Beltrami operator (LBO) emerges from studying manifolds equipped with a Riemannian metric. It is often called the Swiss army knife of geometry processing as it allows us to capture intrinsic shape information and gives rise to heat diffusion geodesic distances and a multitude of shape descriptors. It also plays a central role in geometric deep learning. In this work we explore Finsler manifolds as a generalization of Riemannian manifolds. We revisit the Finsler heat equation and derive a Finsler heat kernel and a Finsler-Laplace-Beltrami Operator (FLBO): a novel theoretically justified anisotropic Laplace-Beltrami operator (ALBO). In experimental evaluations we demonstrate that the proposed FLBO is a valuable alternative to the traditional Riemannian-based LBO and ALBOs for spatial filtering and shape correspondence estimation. We hope that the proposed Finsler heat kernel and the FLBO will inspire further exploration of Finsler geometry in the computer vision community. | https://openaccess.thecvf.com/content/CVPR2024/papers/Weber_Finsler-Laplace-Beltrami_Operators_with_Application_to_Shape_Analysis_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Weber_Finsler-Laplace-Beltrami_Operators_with_Application_to_Shape_Analysis_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Weber_Finsler-Laplace-Beltrami_Operators_with_Application_to_Shape_Analysis_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Weber_Finsler-Laplace-Beltrami_Operators_with_CVPR_2024_supplemental.pdf | null |
Minimal Perspective Autocalibration | Andrea Porfiri Dal Cin, Timothy Duff, Luca Magri, Tomas Pajdla | We introduce a new family of minimal problems for reconstruction from multiple views. Our primary focus is a novel approach to autocalibration a long-standing problem in computer vision. Traditional approaches to this problem such as those based on Kruppa's equations or the modulus constraint rely explicitly on the knowledge of multiple fundamental matrices or a projective reconstruction. In contrast we consider a novel formulation involving constraints on image points the unknown depths of 3D points and a partially specified calibration matrix K. For 2 and 3 views we present a comprehensive taxonomy of minimal autocalibration problems obtained by relaxing some of these constraints. These problems are organized into classes according to the number of views and any assumed prior knowledge of K. Within each class we determine problems with the fewest---or a relatively small number of---solutions. From this zoo of problems we devise three practical solvers. Experiments with synthetic and real data and interfacing our solvers with COLMAP demonstrate that we achieve superior accuracy compared to state-of-the-art calibration methods. The code is available at https://github.com/andreadalcin/MinimalPerspectiveAutocalibration. | https://openaccess.thecvf.com/content/CVPR2024/papers/Dal_Cin_Minimal_Perspective_Autocalibration_CVPR_2024_paper.pdf | http://arxiv.org/abs/2405.05605 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Dal_Cin_Minimal_Perspective_Autocalibration_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Dal_Cin_Minimal_Perspective_Autocalibration_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Dal_Cin_Minimal_Perspective_Autocalibration_CVPR_2024_supplemental.pdf | null |
MOHO: Learning Single-view Hand-held Object Reconstruction with Multi-view Occlusion-Aware Supervision | Chenyangguang Zhang, Guanlong Jiao, Yan Di, Gu Wang, Ziqin Huang, Ruida Zhang, Fabian Manhardt, Bowen Fu, Federico Tombari, Xiangyang Ji | Previous works concerning single-view hand-held object reconstruction typically rely on supervision from 3D ground-truth models which are hard to collect in the real world. In contrast readily accessible hand-object videos offer a promising training data source but they only give heavily occluded object observations. In this paper we present a novel synthetic-to-real framework to exploit Multi-view Occlusion-aware supervision from hand-object videos for Hand-held Object reconstruction (MOHO) from a single image tackling two predominant challenges in such a setting: hand-induced occlusion and the object's self-occlusion. First in the synthetic pre-training stage we render a large-scale synthetic dataset SOMVideo with hand-object images and multi-view occlusion-free supervisions adopted to address hand-induced occlusion in both 2D and 3D spaces. Second in the real-world finetuning stage MOHO leverages the amodal-mask-weighted geometric supervision to mitigate the unfaithful guidance caused by the hand-occluded supervising views in the real world. Moreover domain-consistent occlusion-aware features are amalgamated in MOHO to resist the object's self-occlusion for inferring the complete object shape. Extensive experiments on HO3D and DexYCB datasets demonstrate 2D-supervised MOHO gains superior results against 3D-supervised methods by a large margin. 
| https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_MOHO_Learning_Single-view_Hand-held_Object_Reconstruction_with_Multi-view_Occlusion-Aware_Supervision_CVPR_2024_paper.pdf | http://arxiv.org/abs/2310.11696 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_MOHO_Learning_Single-view_Hand-held_Object_Reconstruction_with_Multi-view_Occlusion-Aware_Supervision_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_MOHO_Learning_Single-view_Hand-held_Object_Reconstruction_with_Multi-view_Occlusion-Aware_Supervision_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhang_MOHO_Learning_Single-view_CVPR_2024_supplemental.pdf | null |
BANF: Band-Limited Neural Fields for Levels of Detail Reconstruction | Akhmedkhan Shabanov, Shrisudhan Govindarajan, Cody Reading, Lily Goli, Daniel Rebain, Kwang Moo Yi, Andrea Tagliasacchi | Largely due to their implicit nature neural fields lack a direct mechanism for filtering as Fourier analysis from discrete signal processing is not directly applicable to these representations. Effective filtering of neural fields is critical to enable level-of-detail processing in downstream applications and support operations that involve sampling the field on regular grids (e.g. marching cubes). Existing methods that attempt to decompose neural fields in the frequency domain either resort to heuristics or require extensive modifications to the neural field architecture. We show that via a simple modification one can obtain neural fields that are low-pass filtered and in turn show how this can be exploited to obtain a frequency decomposition of the entire signal. We demonstrate the validity of our technique by investigating level-of-detail reconstruction and showing how coarser representations can be computed effectively. | https://openaccess.thecvf.com/content/CVPR2024/papers/Shabanov_BANF_Band-Limited_Neural_Fields_for_Levels_of_Detail_Reconstruction_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.13024 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Shabanov_BANF_Band-Limited_Neural_Fields_for_Levels_of_Detail_Reconstruction_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Shabanov_BANF_Band-Limited_Neural_Fields_for_Levels_of_Detail_Reconstruction_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Shabanov_BANF_Band-Limited_Neural_CVPR_2024_supplemental.zip | null |
Time-, Memory- and Parameter-Efficient Visual Adaptation | Otniel-Bogdan Mercea, Alexey Gritsenko, Cordelia Schmid, Anurag Arnab | As foundation models become more popular, there is a growing need to efficiently finetune them for downstream tasks. Although numerous adaptation methods have been proposed, they are designed to be efficient only in terms of how many parameters are trained. They, however, typically still require backpropagating gradients throughout the model, meaning that their training-time and -memory cost does not reduce as significantly. We propose an adaptation method which does not backpropagate gradients through the backbone. We achieve this by designing a lightweight network in parallel that operates on features from the frozen, pretrained backbone. As a result, our method is efficient not only in terms of parameters, but also in training-time and memory usage. Our approach achieves state-of-the-art accuracy-parameter trade-offs on the popular VTAB benchmark, and we further show how we outperform prior works with respect to training-time and -memory usage too. We further demonstrate the training efficiency and scalability of our method by adapting a vision transformer backbone of 4 billion parameters for the computationally demanding task of video classification, without any intricate model parallelism. Here, we outperform a prior adaptor-based method which could only scale to a 1 billion parameter backbone, or fully-finetuning a smaller backbone, with the same GPU and less training time.
| https://openaccess.thecvf.com/content/CVPR2024/papers/Mercea_Time-_Memory-_and_Parameter-Efficient_Visual_Adaptation_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Mercea_Time-_Memory-_and_Parameter-Efficient_Visual_Adaptation_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Mercea_Time-_Memory-_and_Parameter-Efficient_Visual_Adaptation_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Mercea_Time-_Memory-_and_CVPR_2024_supplemental.pdf | null |
SecondPose: SE(3)-Consistent Dual-Stream Feature Fusion for Category-Level Pose Estimation | Yamei Chen, Yan Di, Guangyao Zhai, Fabian Manhardt, Chenyangguang Zhang, Ruida Zhang, Federico Tombari, Nassir Navab, Benjamin Busam | Category-level object pose estimation, aiming to predict the 6D pose and 3D size of objects from known categories, typically struggles with large intra-class shape variation. Existing works utilizing mean shapes often fall short of capturing this variation. To address this issue, we present SecondPose, a novel approach integrating object-specific geometric features with semantic category priors from DINOv2. Leveraging the advantage of DINOv2 in providing SE(3)-consistent semantic features, we hierarchically extract two types of SE(3)-invariant geometric features to further encapsulate local-to-global object-specific information. These geometric features are then point-aligned with DINOv2 features to establish a consistent object representation under SE(3) transformations, facilitating the mapping from camera space to the pre-defined canonical space, thus further enhancing pose estimation. Extensive experiments on NOCS-REAL275 demonstrate that SecondPose achieves a 12.4% leap forward over the state-of-the-art. Moreover, on a more complex dataset, HouseCat6D, which provides photometrically challenging objects, SecondPose still surpasses other competitors by a large margin. Code is released at https://github.com/NOrangeeroli/SecondPose.git.
| https://openaccess.thecvf.com/content/CVPR2024/papers/Chen_SecondPose_SE3-Consistent_Dual-Stream_Feature_Fusion_for_Category-Level_Pose_Estimation_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Chen_SecondPose_SE3-Consistent_Dual-Stream_Feature_Fusion_for_Category-Level_Pose_Estimation_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Chen_SecondPose_SE3-Consistent_Dual-Stream_Feature_Fusion_for_Category-Level_Pose_Estimation_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Chen_SecondPose_SE3-Consistent_Dual-Stream_CVPR_2024_supplemental.pdf | null |
Physical Property Understanding from Language-Embedded Feature Fields | Albert J. Zhai, Yuan Shen, Emily Y. Chen, Gloria X. Wang, Xinlei Wang, Sheng Wang, Kaiyu Guan, Shenlong Wang | Can computers perceive the physical properties of objects solely through vision? Research in cognitive science and vision science has shown that humans excel at identifying materials and estimating their physical properties based purely on visual appearance. In this paper, we present a novel approach for dense prediction of the physical properties of objects using a collection of images. Inspired by how humans reason about physics through vision, we leverage large language models to propose candidate materials for each object. We then construct a language-embedded point cloud and estimate the physical properties of each 3D point using a zero-shot kernel regression approach. Our method is accurate, annotation-free, and applicable to any object in the open world. Experiments demonstrate the effectiveness of the proposed approach in various physical property reasoning tasks, such as estimating the mass of common objects as well as other properties like friction and hardness. | https://openaccess.thecvf.com/content/CVPR2024/papers/Zhai_Physical_Property_Understanding_from_Language-Embedded_Feature_Fields_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.04242 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Zhai_Physical_Property_Understanding_from_Language-Embedded_Feature_Fields_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Zhai_Physical_Property_Understanding_from_Language-Embedded_Feature_Fields_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhai_Physical_Property_Understanding_CVPR_2024_supplemental.zip | null |
EgoGen: An Egocentric Synthetic Data Generator | Gen Li, Kaifeng Zhao, Siwei Zhang, Xiaozhong Lyu, Mihai Dusmanu, Yan Zhang, Marc Pollefeys, Siyu Tang | Understanding the world in first-person view is fundamental in Augmented Reality (AR). This immersive perspective brings dramatic visual changes and unique challenges compared to third-person views. Synthetic data has empowered third-person-view vision models, but its application to embodied egocentric perception tasks remains largely unexplored. A critical challenge lies in simulating natural human movements and behaviors that effectively steer the embodied cameras to capture a faithful egocentric representation of the 3D world. To address this challenge, we introduce EgoGen, a new synthetic data generator that can produce accurate and rich ground-truth training data for egocentric perception tasks. At the heart of EgoGen is a novel human motion synthesis model that directly leverages egocentric visual inputs of a virtual human to sense the 3D environment. Combined with collision-avoiding motion primitives and a two-stage reinforcement learning approach, our motion synthesis model offers a closed-loop solution where the embodied perception and movement of the virtual human are seamlessly coupled. Compared to previous works, our model eliminates the need for a pre-defined global path and is directly applicable to dynamic environments. Combined with our easy-to-use and scalable data generation pipeline, we demonstrate EgoGen's efficacy in three tasks: mapping and localization for head-mounted cameras, egocentric camera tracking, and human mesh recovery from egocentric views. EgoGen will be fully open-sourced, offering a practical solution for creating realistic egocentric training data and aiming to serve as a useful tool for egocentric computer vision research.
| https://openaccess.thecvf.com/content/CVPR2024/papers/Li_EgoGen_An_Egocentric_Synthetic_Data_Generator_CVPR_2024_paper.pdf | http://arxiv.org/abs/2401.08739 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Li_EgoGen_An_Egocentric_Synthetic_Data_Generator_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Li_EgoGen_An_Egocentric_Synthetic_Data_Generator_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Li_EgoGen_An_Egocentric_CVPR_2024_supplemental.pdf | null |
Suppress and Rebalance: Towards Generalized Multi-Modal Face Anti-Spoofing | Xun Lin, Shuai Wang, Rizhao Cai, Yizhong Liu, Ying Fu, Wenzhong Tang, Zitong Yu, Alex Kot | Face Anti-Spoofing (FAS) is crucial for securing face recognition systems against presentation attacks. With advancements in sensor manufacturing and multi-modal learning techniques, many multi-modal FAS approaches have emerged. However, they face challenges in generalizing to unseen attacks and deployment conditions. These challenges arise from (1) modality unreliability, where some modality sensors, such as depth and infrared, undergo significant domain shifts in varying environments, leading to the spread of unreliable information during cross-modal feature fusion, and (2) modality imbalance, where training that overly relies on a dominant modality hinders the convergence of others, reducing effectiveness against attack types that are indistinguishable by solely using the dominant modality. To address modality unreliability, we propose the Uncertainty-Guided Cross-Adapter (U-Adapter) to recognize unreliably detected regions within each modality and suppress the impact of unreliable regions on other modalities. For modality imbalance, we propose a Rebalanced Modality Gradient Modulation (ReGrad) strategy to rebalance the convergence speed of all modalities by adaptively adjusting their gradients. Besides, we provide the first large-scale benchmark for evaluating multi-modal FAS performance under domain generalization scenarios. Extensive experiments demonstrate that our method outperforms state-of-the-art methods. Source codes and protocols are released at https://github.com/OMGGGGG/mmdg.
| https://openaccess.thecvf.com/content/CVPR2024/papers/Lin_Suppress_and_Rebalance_Towards_Generalized_Multi-Modal_Face_Anti-Spoofing_CVPR_2024_paper.pdf | http://arxiv.org/abs/2402.19298 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Lin_Suppress_and_Rebalance_Towards_Generalized_Multi-Modal_Face_Anti-Spoofing_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Lin_Suppress_and_Rebalance_Towards_Generalized_Multi-Modal_Face_Anti-Spoofing_CVPR_2024_paper.html | CVPR 2024 | null | null |
LEAD: Exploring Logit Space Evolution for Model Selection | Zixuan Hu, Xiaotong Li, Shixiang Tang, Jun Liu, Yichun Hu, Ling-Yu Duan | The remarkable success of the "pretrain-then-finetune" paradigm has led to a proliferation of available pre-trained models for vision tasks. This surge presents a significant challenge in efficiently choosing the most suitable pre-trained models for downstream tasks. The critical aspect of this challenge lies in effectively predicting model transferability by considering the underlying fine-tuning dynamics. Existing methods often model fine-tuning dynamics in feature space with linear transformations, which do not precisely align with the fine-tuning objective and fail to grasp the essential nonlinearity from optimization. To this end, we present LEAD, a finetuning-aligned approach based on the network output of logits. LEAD proposes a theoretical framework to model the optimization process and derives an ordinary differential equation (ODE) to depict the nonlinear evolution toward the final logit state. Additionally, we design a class-aware decomposition method to consider the varying evolution dynamics across classes and further ensure practical applicability. Integrating the closely aligned optimization objective and nonlinear modeling capabilities derived from the differential equation, our method offers a concise solution to effectively bridge the optimization gap in a single step, bypassing the lengthy fine-tuning process. Comprehensive experiments on 24 supervised and self-supervised pre-trained models across 10 downstream datasets demonstrate impressive performance and showcase its broad adaptability, even in low-data scenarios.
| https://openaccess.thecvf.com/content/CVPR2024/papers/Hu_LEAD_Exploring_Logit_Space_Evolution_for_Model_Selection_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Hu_LEAD_Exploring_Logit_Space_Evolution_for_Model_Selection_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Hu_LEAD_Exploring_Logit_Space_Evolution_for_Model_Selection_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Hu_LEAD_Exploring_Logit_CVPR_2024_supplemental.pdf | null |
Video ReCap: Recursive Captioning of Hour-Long Videos | Md Mohaiminul Islam, Ngan Ho, Xitong Yang, Tushar Nagarajan, Lorenzo Torresani, Gedas Bertasius | Most video captioning models are designed to process short video clips of a few seconds and output text describing low-level visual concepts (e.g. objects, scenes, atomic actions). However, most real-world videos last for minutes or hours and have a complex hierarchical structure spanning different temporal granularities. We propose Video ReCap, a recursive video captioning model that can process video inputs of dramatically different lengths (from 1 second to 2 hours) and output video captions at multiple hierarchy levels. The recursive video-language architecture exploits the synergy between different video hierarchies and can process hour-long videos efficiently. We utilize a curriculum learning training scheme to learn the hierarchical structure of videos, starting from clip-level captions describing atomic actions, then focusing on segment-level descriptions, and concluding with generating summaries for hour-long videos. Furthermore, we introduce the Ego4D-HCap dataset by augmenting Ego4D with 8267 manually collected long-range video summaries. Our recursive model can flexibly generate captions at different hierarchy levels while also being useful for other complex video understanding tasks, such as VideoQA on EgoSchema. Data, code, and models are publicly available at https://sites.google.com/view/vidrecap.
| https://openaccess.thecvf.com/content/CVPR2024/papers/Islam_Video_ReCap_Recursive_Captioning_of_Hour-Long_Videos_CVPR_2024_paper.pdf | http://arxiv.org/abs/2402.13250 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Islam_Video_ReCap_Recursive_Captioning_of_Hour-Long_Videos_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Islam_Video_ReCap_Recursive_Captioning_of_Hour-Long_Videos_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Islam_Video_ReCap_Recursive_CVPR_2024_supplemental.pdf | null |
Towards Realistic Scene Generation with LiDAR Diffusion Models | Haoxi Ran, Vitor Guizilini, Yue Wang | Diffusion models (DMs) excel in photo-realistic image synthesis, but their adaptation to LiDAR scene generation poses a substantial hurdle. This is primarily because DMs operating in the point space struggle to preserve the curve-like patterns and 3D geometry of LiDAR scenes, which consumes much of their representation power. In this paper, we propose LiDAR Diffusion Models (LiDMs) to generate LiDAR-realistic scenes from a latent space tailored to capture the realism of LiDAR scenes, by incorporating geometric priors into the learning pipeline. Our method targets three major desiderata: pattern realism, geometry realism, and object realism. Specifically, we introduce curve-wise compression to simulate real-world LiDAR patterns, point-wise coordinate supervision to learn scene geometry, and patch-wise encoding for a full 3D object context. With these three core designs, our method achieves competitive performance on unconditional LiDAR generation in the 64-beam scenario and state-of-the-art performance on conditional LiDAR generation, while maintaining high efficiency compared to point-based DMs (up to 107x faster). Furthermore, by compressing LiDAR scenes into a latent space, we enable the controllability of DMs with various conditions, such as semantic maps, camera views, and text prompts. Our code and pretrained weights are available at https://github.com/hancyran/LiDAR-Diffusion.
| https://openaccess.thecvf.com/content/CVPR2024/papers/Ran_Towards_Realistic_Scene_Generation_with_LiDAR_Diffusion_Models_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.00815 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Ran_Towards_Realistic_Scene_Generation_with_LiDAR_Diffusion_Models_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Ran_Towards_Realistic_Scene_Generation_with_LiDAR_Diffusion_Models_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Ran_Towards_Realistic_Scene_CVPR_2024_supplemental.pdf | null |
Diffusion Reflectance Map: Single-Image Stochastic Inverse Rendering of Illumination and Reflectance | Yuto Enyo, Ko Nishino | Reflectance bounds the frequency spectrum of illumination in the object appearance. In this paper, we introduce the first stochastic inverse rendering method, which recovers the attenuated frequency spectrum of an illumination jointly with the reflectance of an object of known geometry from a single image. Our key idea is to solve this blind inverse problem in the reflectance map, an appearance representation invariant to the underlying geometry, by learning to reverse the image formation with a novel diffusion model, which we refer to as the Diffusion Reflectance Map Network (DRMNet). Given an observed reflectance map, converted and completed from the single input image, DRMNet generates a reflectance map corresponding to a perfect mirror sphere while jointly estimating the reflectance. The forward process can be understood as gradually filtering a natural illumination with lower and lower frequency reflectance and additive Gaussian noise. DRMNet learns to invert this process with two subnetworks, IllNet and RefNet, which work in concert towards this joint estimation. The network is trained on an extensive synthetic dataset and is demonstrated to generalize to real images, showing state-of-the-art accuracy on established datasets.
| https://openaccess.thecvf.com/content/CVPR2024/papers/Enyo_Diffusion_Reflectance_Map_Single-Image_Stochastic_Inverse_Rendering_of_Illumination_and_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.04529 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Enyo_Diffusion_Reflectance_Map_Single-Image_Stochastic_Inverse_Rendering_of_Illumination_and_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Enyo_Diffusion_Reflectance_Map_Single-Image_Stochastic_Inverse_Rendering_of_Illumination_and_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Enyo_Diffusion_Reflectance_Map_CVPR_2024_supplemental.pdf | null |
Universal Segmentation at Arbitrary Granularity with Language Instruction | Yong Liu, Cairong Zhang, Yitong Wang, Jiahao Wang, Yujiu Yang, Yansong Tang | This paper aims to achieve universal segmentation at arbitrary semantic levels. Despite significant progress in recent years, specialist segmentation approaches are limited to specific tasks and data distributions. Retraining a new model for adaptation to new scenarios or settings incurs expensive computation and time costs, which raises the demand for a versatile and universal segmentation model that can cater to various granularities. Although some attempts have been made at unifying different segmentation tasks or generalizing to various scenarios, limitations in the definition of paradigms and input-output spaces make it difficult for them to achieve an accurate understanding of content at arbitrary granularity. To this end, we present UniLSeg, a universal segmentation model that can perform segmentation at any semantic level with the guidance of language instructions. For training UniLSeg, we reorganize a group of tasks from original diverse distributions into a unified data format, where images with texts describing segmentation targets serve as input and corresponding masks are the output. Combined with an automatic annotation engine for utilizing numerous unlabeled data, UniLSeg achieves excellent performance on various tasks and settings, surpassing both specialist and unified segmentation models.
| https://openaccess.thecvf.com/content/CVPR2024/papers/Liu_Universal_Segmentation_at_Arbitrary_Granularity_with_Language_Instruction_CVPR_2024_paper.pdf | https://arxiv.org/abs/2312.01623 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Liu_Universal_Segmentation_at_Arbitrary_Granularity_with_Language_Instruction_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Liu_Universal_Segmentation_at_Arbitrary_Granularity_with_Language_Instruction_CVPR_2024_paper.html | CVPR 2024 | null | https://openaccess.thecvf.com |
GaussianAvatars: Photorealistic Head Avatars with Rigged 3D Gaussians | Shenhan Qian, Tobias Kirschstein, Liam Schoneveld, Davide Davoli, Simon Giebenhain, Matthias Nießner | We introduce GaussianAvatars, a new method to create photorealistic head avatars that are fully controllable in terms of expression, pose, and viewpoint. The core idea is a dynamic 3D representation based on 3D Gaussian splats that are rigged to a parametric morphable face model. This combination facilitates photorealistic rendering while allowing for precise animation control via the underlying parametric model, e.g. through expression transfer from a driving sequence or by manually changing the morphable model parameters. We parameterize each splat by a local coordinate frame of a triangle and optimize for an explicit displacement offset to obtain a more accurate geometric representation. During avatar reconstruction, we jointly optimize for the morphable model parameters and Gaussian splat parameters in an end-to-end fashion. We demonstrate the animation capabilities of our photorealistic avatar in several challenging scenarios. For instance, we show reenactments from a driving video, where our method outperforms existing works by a significant margin. | https://openaccess.thecvf.com/content/CVPR2024/papers/Qian_GaussianAvatars_Photorealistic_Head_Avatars_with_Rigged_3D_Gaussians_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.02069 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Qian_GaussianAvatars_Photorealistic_Head_Avatars_with_Rigged_3D_Gaussians_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Qian_GaussianAvatars_Photorealistic_Head_Avatars_with_Rigged_3D_Gaussians_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Qian_GaussianAvatars_Photorealistic_Head_CVPR_2024_supplemental.pdf | null |
MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI | Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, Cong Wei, Botao Yu, Ruibin Yuan, Renliang Sun, Ming Yin, Boyuan Zheng, Zhenzhu Yang, Yibo Liu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen | We introduce MMMU: a new benchmark designed to evaluate multimodal models on massive multi-discipline tasks demanding college-level subject knowledge and deliberate reasoning. MMMU includes 11.5K meticulously collected multimodal questions from college exams, quizzes, and textbooks, covering six core disciplines: Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, and Tech & Engineering. These questions span 30 subjects and 183 subfields, comprising 30 highly heterogeneous image types, such as charts, diagrams, maps, tables, music sheets, and chemical structures. Unlike existing benchmarks, MMMU focuses on advanced perception and reasoning with domain-specific knowledge, challenging models to perform tasks akin to those faced by experts. The evaluation of 28 open-source LMMs as well as the proprietary GPT-4V(ision) and Gemini highlights the substantial challenges posed by MMMU. Even the advanced GPT-4V and Gemini Ultra only achieve accuracies of 56% and 59% respectively, indicating significant room for improvement. We believe MMMU will stimulate the community to build next-generation multimodal foundation models towards expert artificial general intelligence.
| https://openaccess.thecvf.com/content/CVPR2024/papers/Yue_MMMU_A_Massive_Multi-discipline_Multimodal_Understanding_and_Reasoning_Benchmark_for_CVPR_2024_paper.pdf | http://arxiv.org/abs/2311.16502 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Yue_MMMU_A_Massive_Multi-discipline_Multimodal_Understanding_and_Reasoning_Benchmark_for_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Yue_MMMU_A_Massive_Multi-discipline_Multimodal_Understanding_and_Reasoning_Benchmark_for_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Yue_MMMU_A_Massive_CVPR_2024_supplemental.pdf | null |
Layout-Agnostic Scene Text Image Synthesis with Diffusion Models | Qilong Zhangli, Jindong Jiang, Di Liu, Licheng Yu, Xiaoliang Dai, Ankit Ramchandani, Guan Pang, Dimitris N. Metaxas, Praveen Krishnan | While diffusion models have significantly advanced the quality of image generation, their capability to accurately and coherently render text within these images remains a substantial challenge. Conventional diffusion-based methods for scene text generation are typically limited by their reliance on an intermediate layout output. This dependency often results in a constrained diversity of text styles and fonts, an inherent limitation stemming from the deterministic nature of the layout generation phase. To address these challenges, this paper introduces SceneTextGen, a novel diffusion-based model specifically designed to circumvent the need for a predefined layout stage. By doing so, SceneTextGen facilitates a more natural and varied representation of text. The novelty of SceneTextGen lies in its integration of three key components: a character-level encoder for capturing detailed typographic properties, coupled with a character-level instance segmentation model and a word-level spotting model to address the issues of unwanted text generation and minor character inaccuracies. We validate the performance of our method by demonstrating improved character recognition rates on generated images across different public visual text datasets, in comparison to both standard diffusion-based methods and text-specific methods.
| https://openaccess.thecvf.com/content/CVPR2024/papers/Zhangli_Layout-Agnostic_Scene_Text_Image_Synthesis_with_Diffusion_Models_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Zhangli_Layout-Agnostic_Scene_Text_Image_Synthesis_with_Diffusion_Models_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Zhangli_Layout-Agnostic_Scene_Text_Image_Synthesis_with_Diffusion_Models_CVPR_2024_paper.html | CVPR 2024 | null | null |
EarthLoc: Astronaut Photography Localization by Indexing Earth from Space | Gabriele Berton, Alex Stoken, Barbara Caputo, Carlo Masone | Astronaut photography, spanning six decades of human spaceflight, presents a unique Earth observations dataset with immense value for both scientific research and disaster response. Despite its significance, accurately localizing the geographical extent of these images, crucial for effective utilization, poses substantial challenges. Current manual localization efforts are time-consuming, motivating the need for automated solutions. We propose a novel approach, leveraging image retrieval, to address this challenge efficiently. We introduce innovative training techniques, including Year-Wise Data Augmentation and a Neutral-Aware Multi-Similarity Loss, which contribute to the development of a high-performance model, EarthLoc. We develop six evaluation datasets and perform a comprehensive benchmark comparing EarthLoc to existing methods, showcasing its superior efficiency and accuracy. Our approach marks a significant advancement in automating the localization of astronaut photography, which will help bridge a critical gap in Earth observations data. Code and datasets are available at https://github.com/gmberton/EarthLoc | https://openaccess.thecvf.com/content/CVPR2024/papers/Berton_EarthLoc_Astronaut_Photography_Localization_by_Indexing_Earth_from_Space_CVPR_2024_paper.pdf | http://arxiv.org/abs/2403.06758 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Berton_EarthLoc_Astronaut_Photography_Localization_by_Indexing_Earth_from_Space_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Berton_EarthLoc_Astronaut_Photography_Localization_by_Indexing_Earth_from_Space_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Berton_EarthLoc_Astronaut_Photography_CVPR_2024_supplemental.pdf | null |
SmartMask: Context Aware High-Fidelity Mask Generation for Fine-grained Object Insertion and Layout Control | Jaskirat Singh, Jianming Zhang, Qing Liu, Cameron Smith, Zhe Lin, Liang Zheng | The field of generative image inpainting and object insertion has made significant progress with the recent advent of latent diffusion models. Utilizing a precise object mask can greatly enhance these applications. However, due to the challenges users encounter in creating high-fidelity masks, there is a tendency for these methods to rely on coarser masks (e.g. bounding boxes) for these applications. This results in limited control and compromised background content preservation. To overcome these limitations, we introduce SmartMask, which allows any novice user to create detailed masks for precise object insertion. Combined with a ControlNet-Inpaint model, our experiments demonstrate that SmartMask achieves superior object insertion quality, preserving the background content more effectively than previous methods. Notably, unlike prior works, the proposed approach can also be used even without user-mask guidance, which allows it to perform mask-free object insertion at diverse positions and scales. Furthermore, we find that when used iteratively with a novel instruction-tuning based planning model, SmartMask can be used to design detailed layouts from scratch. Compared with user-scribble based layout design, we observe that SmartMask allows for better quality outputs with layout-to-image generation methods.
| https://openaccess.thecvf.com/content/CVPR2024/papers/Singh_SmartMask_Context_Aware_High-Fidelity_Mask_Generation_for_Fine-grained_Object_Insertion_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.05039 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Singh_SmartMask_Context_Aware_High-Fidelity_Mask_Generation_for_Fine-grained_Object_Insertion_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Singh_SmartMask_Context_Aware_High-Fidelity_Mask_Generation_for_Fine-grained_Object_Insertion_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Singh_SmartMask_Context_Aware_CVPR_2024_supplemental.pdf | null |
Text-Image Alignment for Diffusion-Based Perception | Neehar Kondapaneni, Markus Marks, Manuel Knott, Rogerio Guimaraes, Pietro Perona | Diffusion models are generative models with impressive text-to-image synthesis capabilities and have spurred a new wave of creative methods for classical machine learning tasks. However, the best way to harness the perceptual knowledge of these generative models for visual tasks is still an open question. Specifically, it is unclear how to use the prompting interface when applying diffusion backbones to vision tasks. We find that automatically generated captions can improve text-image alignment and significantly enhance a model's cross-attention maps, leading to better perceptual performance. Our approach improves upon the current state-of-the-art in diffusion-based semantic segmentation on ADE20K and the current overall SOTA for depth estimation on NYUv2. Furthermore, our method generalizes to the cross-domain setting. We use model personalization and caption modifications to align our model to the target domain and find improvements over unaligned baselines. Our cross-domain object detection model, trained on Pascal VOC, achieves SOTA results on Watercolor2K. Our cross-domain segmentation method, trained on Cityscapes, achieves SOTA results on Dark Zurich-val and Nighttime Driving. Project page: vision.caltech.edu/TADP/.
Code: github.com/damaggu/TADP | https://openaccess.thecvf.com/content/CVPR2024/papers/Kondapaneni_Text-Image_Alignment_for_Diffusion-Based_Perception_CVPR_2024_paper.pdf | http://arxiv.org/abs/2310.00031 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Kondapaneni_Text-Image_Alignment_for_Diffusion-Based_Perception_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Kondapaneni_Text-Image_Alignment_for_Diffusion-Based_Perception_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Kondapaneni_Text-Image_Alignment_for_CVPR_2024_supplemental.pdf | null |
Customization Assistant for Text-to-Image Generation | Yufan Zhou, Ruiyi Zhang, Jiuxiang Gu, Tong Sun | Customizing pre-trained text-to-image generation models has attracted massive research interest recently due to its huge potential in real-world applications. Although existing methods are able to generate creative content for a novel concept contained in a single user-input image, their capability is still far from perfect. Specifically, most existing methods require fine-tuning the generative model on testing images. Some existing methods do not require fine-tuning, but their performance is unsatisfactory. Furthermore, the interaction between users and models is still limited to directive and descriptive prompts such as instructions and captions. In this work, we build a customization assistant based on a pre-trained large language model and diffusion model, which can not only perform customized generation in a tuning-free manner but also enable more user-friendly interactions: users can chat with the assistant and input either ambiguous text or clear instructions. Specifically, we propose a new framework consisting of a new model design and a novel training strategy. The resulting assistant can perform customized generation in 2-5 seconds without any test-time fine-tuning. Extensive experiments are conducted, and competitive results have been obtained across different domains, illustrating the effectiveness of the proposed method.
| https://openaccess.thecvf.com/content/CVPR2024/papers/Zhou_Customization_Assistant_for_Text-to-Image_Generation_CVPR_2024_paper.pdf | http://arxiv.org/abs/2312.03045 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Zhou_Customization_Assistant_for_Text-to-Image_Generation_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Zhou_Customization_Assistant_for_Text-to-Image_Generation_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Zhou_Customization_Assistant_for_CVPR_2024_supplemental.pdf | null |
GaussianEditor: Editing 3D Gaussians Delicately with Text Instructions | Junjie Wang, Jiemin Fang, Xiaopeng Zhang, Lingxi Xie, Qi Tian | Recently, impressive results have been achieved in 3D scene editing with text instructions based on a 2D diffusion model. However, current diffusion models primarily generate images by predicting noise in the latent space, and the editing is usually applied to the whole image, which makes it challenging to perform delicate, especially localized, editing for 3D scenes. Inspired by recent 3D Gaussian splatting, we propose a systematic framework named GaussianEditor to edit 3D scenes delicately via 3D Gaussians with text instructions. Benefiting from the explicit property of 3D Gaussians, we design a series of techniques to achieve delicate editing. Specifically, we first extract the region of interest (RoI) corresponding to the text instruction, aligning it to 3D Gaussians. The Gaussian RoI is further used to control the editing process. Our framework can achieve more delicate and precise editing of 3D scenes than previous methods while enjoying much faster training speed, i.e., within 20 minutes on a single V100 GPU, more than twice as fast as Instruct-NeRF2NeRF (45 minutes -- 2 hours). The project page is at GaussianEditor.github.io. | https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_GaussianEditor_Editing_3D_Gaussians_Delicately_with_Text_Instructions_CVPR_2024_paper.pdf | http://arxiv.org/abs/2311.16037 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Wang_GaussianEditor_Editing_3D_Gaussians_Delicately_with_Text_Instructions_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Wang_GaussianEditor_Editing_3D_Gaussians_Delicately_with_Text_Instructions_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Wang_GaussianEditor_Editing_3D_CVPR_2024_supplemental.pdf | null |
MemFlow: Optical Flow Estimation and Prediction with Memory | Qiaole Dong, Yanwei Fu | Optical flow is a classical task that is important to the vision community. Classical optical flow estimation uses two frames as input, whilst some recent methods consider multiple frames to explicitly model long-range information. The former limit their ability to fully leverage temporal coherence along the video sequence; the latter incur heavy computational overhead, typically prohibitive for real-time flow estimation. Some multi-frame-based approaches even necessitate unseen future frames for current estimation, compromising real-time applicability in safety-critical scenarios. To this end, we present MemFlow, a real-time method for optical flow estimation and prediction with memory. Our method employs memory read-out and update modules to aggregate historical motion information in real time. Furthermore, we integrate resolution-adaptive re-scaling to accommodate diverse video resolutions. Besides, our approach seamlessly extends to the future prediction of optical flow based on past observations. Leveraging effective historical motion aggregation, our method outperforms VideoFlow with fewer parameters and faster inference speed on the Sintel and KITTI-15 datasets in terms of generalization performance. At the time of submission, MemFlow also leads in performance on the 1080p Spring dataset. Codes and models will be available at: https://dqiaole.github.io/MemFlow/.
| https://openaccess.thecvf.com/content/CVPR2024/papers/Dong_MemFlow_Optical_Flow_Estimation_and_Prediction_with_Memory_CVPR_2024_paper.pdf | http://arxiv.org/abs/2404.04808 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Dong_MemFlow_Optical_Flow_Estimation_and_Prediction_with_Memory_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Dong_MemFlow_Optical_Flow_Estimation_and_Prediction_with_Memory_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Dong_MemFlow_Optical_Flow_CVPR_2024_supplemental.pdf | null |
Novel Class Discovery for Ultra-Fine-Grained Visual Categorization | Yu Liu, Yaqi Cai, Qi Jia, Binglin Qiu, Weimin Wang, Nan Pu | Ultra-fine-grained visual categorization (Ultra-FGVC) aims at distinguishing highly similar sub-categories within fine-grained objects, such as different soybean cultivars. Compared to traditional fine-grained visual categorization, Ultra-FGVC encounters more hurdles due to the small inter-class and large intra-class variation. Given these challenges, relying on human annotation for Ultra-FGVC is impractical. To this end, our work introduces a novel task termed Ultra-Fine-Grained Novel Class Discovery (UFG-NCD), which leverages partially annotated data to identify new categories of unlabeled images for Ultra-FGVC. To tackle this problem, we devise a Region-Aligned Proxy Learning (RAPL) framework, which comprises a Channel-wise Region Alignment (CRA) module and a Semi-Supervised Proxy Learning (SemiPL) strategy. The CRA module is designed to extract and utilize discriminative features from local regions, facilitating knowledge transfer from labeled to unlabeled classes. Furthermore, SemiPL strengthens representation learning and knowledge transfer with proxy-guided supervised learning and proxy-guided contrastive learning. Such techniques leverage class distribution information in the embedding space, improving the mining of subtle differences between labeled and unlabeled ultra-fine-grained classes. Extensive experiments demonstrate that RAPL significantly outperforms baselines across various datasets, indicating its effectiveness in handling the challenges of UFG-NCD. Code is available at https://github.com/SSDUT-Caiyq/UFG-NCD.
| https://openaccess.thecvf.com/content/CVPR2024/papers/Liu_Novel_Class_Discovery_for_Ultra-Fine-Grained_Visual_Categorization_CVPR_2024_paper.pdf | http://arxiv.org/abs/2405.06283 | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Liu_Novel_Class_Discovery_for_Ultra-Fine-Grained_Visual_Categorization_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Liu_Novel_Class_Discovery_for_Ultra-Fine-Grained_Visual_Categorization_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Liu_Novel_Class_Discovery_CVPR_2024_supplemental.pdf | null |
GenHowTo: Learning to Generate Actions and State Transformations from Instructional Videos | null | null | null | null | null | https://openaccess.thecvf.com/content/CVPR2024/html/Soucek_GenHowTo_Learning_to_Generate_Actions_and_State_Transformations_from_Instructional_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Soucek_GenHowTo_Learning_to_Generate_Actions_and_State_Transformations_from_Instructional_CVPR_2024_paper.html | CVPR 2024 | null | null |
Paint-it: Text-to-Texture Synthesis via Deep Convolutional Texture Map Optimization and Physically-Based Rendering | Kim Youwang, Tae-Hyun Oh, Gerard Pons-Moll | We present Paint-it, a text-driven high-fidelity texture map synthesis method for 3D meshes via neural re-parameterized texture optimization. Paint-it synthesizes texture maps from a text description by synthesis-through-optimization, exploiting Score-Distillation Sampling (SDS). We observe that directly applying SDS yields undesirable texture quality due to its noisy gradients. We reveal the importance of texture parameterization when using SDS. Specifically, we propose Deep Convolutional Physically-Based Rendering (DC-PBR) parameterization, which re-parameterizes the physically-based rendering (PBR) texture maps with randomly initialized convolution-based neural kernels instead of a standard pixel-based parameterization. We show that DC-PBR inherently schedules the optimization curriculum according to texture frequency and naturally filters out the noisy signals from SDS. In experiments, Paint-it obtains PBR texture maps of remarkable quality within 15 min., given only a text description. We demonstrate the generalizability and practicality of Paint-it by synthesizing high-quality texture maps for large-scale mesh datasets and showing test-time applications such as relighting and material control using a popular graphics engine.
| https://openaccess.thecvf.com/content/CVPR2024/papers/Youwang_Paint-it_Text-to-Texture_Synthesis_via_Deep_Convolutional_Texture_Map_Optimization_and_CVPR_2024_paper.pdf | null | https://openaccess.thecvf.com | https://openaccess.thecvf.com/content/CVPR2024/html/Youwang_Paint-it_Text-to-Texture_Synthesis_via_Deep_Convolutional_Texture_Map_Optimization_and_CVPR_2024_paper.html | https://openaccess.thecvf.com/content/CVPR2024/html/Youwang_Paint-it_Text-to-Texture_Synthesis_via_Deep_Convolutional_Texture_Map_Optimization_and_CVPR_2024_paper.html | CVPR 2024 | https://openaccess.thecvf.com/content/CVPR2024/supplemental/Youwang_Paint-it_Text-to-Texture_Synthesis_CVPR_2024_supplemental.zip | null |