title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Unsupervised Embedding and Association Network for Multi-Object Tracking
| null |
How can robust trajectories of multiple objects be generated without any manual identity annotation? Recently, identity embedding features from Re-ID models have been adopted to associate targets into trajectories.
However, most previous methods equipped with embedding features rely heavily on manual identity annotations, which incur a high cost for the multi-object tracking (MOT) task.
To address the above problem, we present an unsupervised embedding and association network (UEANet) for learning discriminative embedding features with pseudo identity labels.
Specifically, we first generate the pseudo identity labels by adopting a Kalman filter tracker to associate multiple targets into trajectories and assigning a unique identity label to each trajectory.
Secondly, we train the transformer-based identity embedding branch and MLP-based data association branch of UEANet with these pseudo labels, and UEANet extracts branch-dependent features for the unsupervised MOT task.
Experimental results show that UEANet has an outstanding ability to suppress identity switches (IDS) and achieves performance comparable to state-of-the-art methods on three MOT datasets.
|
Yu-Lei Li
| null | null | 2,022 |
ijcai
|
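To make the pseudo-label generation step described in the abstract above concrete, here is a minimal Python sketch that turns per-frame detections into pseudo identity labels with a simple greedy IoU tracker. UEANet uses a Kalman filter tracker for this association, so the greedy IoU matching, the function names, and the threshold below are illustrative assumptions rather than the paper's procedure.

```python
import numpy as np

def iou(a, b):
    # a, b: [x1, y1, x2, y2]
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def pseudo_identity_labels(frames, iou_thr=0.5):
    """frames: list of (N_t, 4) arrays of detections per frame.
    Returns per-frame pseudo identity labels (one id per detection)."""
    next_id, prev_boxes, prev_ids, all_labels = 0, [], [], []
    for boxes in frames:
        ids, used = [-1] * len(boxes), set()
        for i, box in enumerate(boxes):
            # greedily match each detection to the best unmatched previous track
            best_j, best_iou = -1, iou_thr
            for j, pbox in enumerate(prev_boxes):
                if j in used:
                    continue
                o = iou(box, pbox)
                if o > best_iou:
                    best_j, best_iou = j, o
            if best_j >= 0:
                ids[i] = prev_ids[best_j]
                used.add(best_j)
            else:                      # start a new trajectory -> new pseudo identity
                ids[i] = next_id
                next_id += 1
        all_labels.append(ids)
        prev_boxes, prev_ids = list(boxes), ids
    return all_labels
```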
Self-supervised Learning and Adaptation for Single Image Dehazing
| null |
Existing deep image dehazing methods usually depend on supervised learning with a large number of hazy-clean image pairs, which are expensive or difficult to collect. Moreover, the dehazing performance of the learned model may deteriorate significantly when the training hazy-clean image pairs are insufficient and differ from real hazy images in applications. In this paper, we show that exploiting a large-scale training set and adapting to real hazy images are two critical issues in learning effective deep dehazing models. Under the depth guidance estimated by a well-trained depth estimation network, we leverage the conventional atmospheric scattering model to generate massive hazy-clean image pairs for the self-supervised pre-training of the dehazing network. Furthermore, self-supervised adaptation is presented to adapt the pre-trained network to real hazy images. A learning-without-forgetting strategy is also deployed in self-supervised adaptation by combining self-supervision and model adaptation via contrastive learning. Experiments show that our proposed method performs favorably against the state-of-the-art methods and is quite efficient, handling a 4K image in 23 ms. The codes are available at https://github.com/DongLiangSXU/SLAdehazing.
|
Yudong Liang, Bin Wang, Wangmeng Zuo, Jiaying Liu, Wenqi Ren
| null | null | 2,022 |
ijcai
|
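The abstract above generates hazy-clean pairs with the conventional atmospheric scattering model under estimated depth. As a minimal illustration (not the authors' code), the NumPy sketch below applies I(x) = J(x)·t(x) + A·(1 − t(x)) with t(x) = exp(−β·d(x)); the specific β and airlight values are arbitrary assumptions.

```python
import numpy as np

def synthesize_haze(clean, depth, beta=1.0, airlight=0.9):
    """clean: HxWx3 float image in [0, 1]; depth: HxW depth map (larger = farther).
    Returns a synthetic hazy image via the atmospheric scattering model."""
    t = np.exp(-beta * depth)[..., None]      # transmission map t(x) = exp(-beta * d(x))
    hazy = clean * t + airlight * (1.0 - t)   # I(x) = J(x) t(x) + A (1 - t(x))
    return np.clip(hazy, 0.0, 1.0)
```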
Multi-View Visual Semantic Embedding
| null |
Visual Semantic Embedding (VSE) is a dominant method for cross-modal vision-language retrieval. Its purpose is to learn an embedding space so that visual data can be embedded in a position close to the corresponding text description. However, there are large intra-class variations in the vision-language data. For example, multiple texts describing the same image may be described from different views, and the descriptions of different views are often dissimilar. The mainstream VSE method embeds samples from the same class in similar positions, which will suppress intra-class variations and lead to inferior generalization performance. This paper proposes a Multi-View Visual Semantic Embedding (MV-VSE) framework, which learns multiple embeddings for one visual data and explicitly models intra-class variations. To optimize MV-VSE, a multi-view upper bound loss is proposed, and the multi-view embeddings are jointly optimized while retaining intra-class variations. MV-VSE is plug-and-play and can be applied to various VSE models and loss functions without excessively increasing model complexity. Experimental results on the Flickr30K and MS-COCO datasets demonstrate the superior performance of our framework.
|
Zheng Li, Caili Guo, Zerun Feng, Jenq-Neng Hwang, Xijun Xue
| null | null | 2,022 |
ijcai
|
FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer
| null |
Network quantization significantly reduces model inference complexity and has been widely used in real-world deployments. However, most existing quantization methods have been developed mainly on Convolutional Neural Networks (CNNs), and suffer severe degradation when applied to fully quantized vision transformers. In this work, we demonstrate that many of these difficulties arise because of serious inter-channel variation in LayerNorm inputs, and present Power-of-Two Factor (PTF), a systematic method to reduce the performance degradation and inference complexity of fully quantized vision transformers. In addition, observing an extreme non-uniform distribution in attention maps, we propose Log-Int-Softmax (LIS) to sustain this non-uniformity and simplify inference by using 4-bit quantization and the BitShift operator. Comprehensive experiments on various transformer-based architectures and benchmarks show that our Fully Quantized Vision Transformer (FQ-ViT) outperforms previous works while even using lower bit-width on attention maps. For instance, we reach 84.89% top-1 accuracy with ViT-L on ImageNet and 50.8 mAP with Cascade Mask R-CNN (Swin-S) on COCO. To our knowledge, we are the first to achieve near-lossless accuracy (~1% degradation) on fully quantized vision transformers. The code is available at https://github.com/megvii-research/FQ-ViT.
|
Yang Lin, Tianyu Zhang, Peiqin Sun, Zheng Li, Shuchang Zhou
| null | null | 2,022 |
ijcai
|
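As a rough sketch of the Power-of-Two Factor idea described above (per-channel factors restricted to powers of two so they can later be folded into integer bit shifts), the Python snippet below quantizes LayerNorm inputs with one shared scale plus per-channel power-of-two exponents. The factor-selection rule here is an assumption for illustration, not the paper's exact procedure.

```python
import numpy as np

def ptf_quantize(x, bits=8, max_shift=3):
    """x: (tokens, channels) LayerNorm input. Returns integer codes, per-channel
    power-of-two exponents, and the shared scale."""
    qmax = 2 ** (bits - 1) - 1
    r = np.abs(x).max(axis=0) + 1e-9                      # per-channel dynamic range
    base = r.min()                                        # shared scale anchored to the smallest channel
    scale = base / qmax
    # per-channel exponent alpha_c, so the effective step is scale * 2**alpha_c
    alpha = np.clip(np.round(np.log2(r / base)), 0, max_shift).astype(int)
    q = np.clip(np.round(x / (scale * (2.0 ** alpha))), -qmax - 1, qmax).astype(np.int32)
    return q, alpha, scale

def ptf_dequantize(q, alpha, scale):
    # the per-channel 2**alpha factor can be realized with an integer bit shift
    return (q << alpha) * scale
```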
RMGN: A Regional Mask Guided Network for Parser-free Virtual Try-on
| null |
Virtual try-on (VTON) aims at fitting target clothes to reference person images, which is widely adopted in e-commerce. Existing VTON approaches can be narrowly categorized into Parser-Based (PB) and Parser-Free (PF) according to whether they rely on parser information to mask the persons' clothes and synthesize try-on images. Although abandoning parser information has improved the applicability of PF methods, the ability to synthesize details has also been sacrificed. As a result, the distraction from the original clothes may persist in synthesized images, especially in complicated postures and high-resolution applications. To address the aforementioned issue, we propose a novel PF method named Regional Mask Guided Network (RMGN). More specifically, a regional mask is proposed to explicitly fuse the features of target clothes and reference persons so that the persisted distraction can be eliminated. A posture awareness loss and a multi-level feature extractor are further proposed to handle the complicated postures and synthesize high-resolution images. Extensive experiments demonstrate that our proposed RMGN outperforms both state-of-the-art PB and PF methods. Ablation studies further verify the effectiveness of modules in RMGN. Code is available at https://github.com/jokerlc/RMGN-VITON.
|
Chao Lin, Zhao Li, Sheng Zhou, Shichang Hu, Jialun Zhang, Linhao Luo, Jiarun Zhang, Longtao Huang, Yuan He
| null | null | 2,022 |
ijcai
|
TCCNet: Temporally Consistent Context-Free Network for Semi-supervised Video Polyp Segmentation
| null |
Automatic video polyp segmentation (VPS) is highly valued for the early diagnosis of colorectal cancer. However, existing methods are limited in three respects: 1) most of them work on static images, while ignoring the temporal information in consecutive video frames; 2) all of them are fully supervised and easily overfit in the presence of limited annotations; 3) the context of a polyp (i.e., lumen, specularity and mucosa tissue) varies within an endoscopic clip, which may affect the predictions of adjacent frames. To resolve these challenges, we propose a novel Temporally Consistent Context-Free Network (TCCNet) for semi-supervised VPS. It contains a segmentation branch and a propagation branch with a co-training scheme to supervise the predictions of unlabeled images. To maintain the temporal consistency of predictions, we design a Sequence-Corrected Reverse Attention module and a Propagation-Corrected Reverse Attention module. A Context-Free Loss is also proposed to mitigate the impact of varying contexts. Extensive experiments show that even trained under a 1/15 label ratio, TCCNet is comparable to the state-of-the-art fully supervised methods for VPS. Also, TCCNet surpasses existing semi-supervised methods for natural image and other medical image segmentation tasks.
|
Xiaotong Li, Jilan Xu, Yuejie Zhang, Rui Feng, Rui-Wei Zhao, Tao Zhang, Xuequan Lu, Shang Gao
| null | null | 2,022 |
ijcai
|
Learning to Estimate Object Poses without Real Image Annotations
| null |
This paper presents a simple yet effective approach for learning 6DoF object poses without real image annotations. Previous methods have attempted to train pose estimators on synthetic data, but they do not generalize well to real images due to the sim-to-real domain gap and produce inaccurate pose estimates. We find that, in most cases, the synthetically trained pose estimators are able to provide reasonable initialization for depth-based pose refinement methods which yield accurate pose estimates. Motivated by this, we propose a novel learning framework, which utilizes the accurate results of depth-based pose refinement methods to supervise the RGB-based pose estimator. Our method significantly outperforms previous self-supervised methods on several benchmarks. Even compared with fully-supervised methods that use real annotated data, we achieve competitive results without using any real annotation. The code is available at https://github.com/zju3dv/pvnet-depth-sup.
|
Haotong Lin, Sida Peng, Zhize Zhou, Xiaowei Zhou
| null | null | 2,022 |
ijcai
|
PRNet: Point-Range Fusion Network for Real-Time LiDAR Semantic Segmentation
| null |
Accurate and real-time LiDAR semantic segmentation is necessary for advanced autonomous driving systems. To guarantee a fast inference speed, previous methods utilize the highly optimized 2D convolutions to extract features on the range view (RV), which is the most compact representation of the LiDAR point clouds. However, these methods often suffer from lower accuracy for two reasons: 1) the information loss during the projection from 3D points to the RV, 2) the semantic ambiguity when 3D point labels are assigned according to the RV predictions. In this work, we introduce an end-to-end point-range fusion network (PRNet) that extracts semantic features mainly on the RV and iteratively fuses the RV features back to the 3D points for the final prediction. Besides, a novel range view projection (RVP) operation is designed to alleviate the information loss during the projection to the RV, and a point-range convolution (PRConv) is proposed to automatically mitigate the semantic ambiguity when transmitting features from the RV back to 3D points. Experiments on the SemanticKITTI and nuScenes benchmarks demonstrate that PRNet pushes the range-based methods to a new state-of-the-art, and achieves a better speed-accuracy trade-off.
|
Xiaoyan Li, Gang Zhang, Tao Jiang, Xufen Cai, Zhenhua Wang
| null | null | 2,022 |
ijcai
|
Biological Instance Segmentation with a Superpixel-Guided Graph
| null |
Recent advanced proposal-free instance segmentation methods have made significant progress in biological images. However, existing methods are vulnerable to local imaging artifacts and similar object appearances, resulting in over-merge and over-segmentation. To reduce these two kinds of errors, we propose a new biological instance segmentation framework based on a superpixel-guided graph, which consists of two stages, i.e., superpixel-guided graph construction and superpixel agglomeration. Specifically, the first stage generates enough superpixels as graph nodes to avoid over-merge, and extracts node and edge features to construct an initialized graph. The second stage agglomerates superpixels into instances based on the relationship of graph nodes predicted by a graph neural network (GNN). To solve over-segmentation and prevent introducing additional over-merge, we specially design two loss functions to supervise the GNN, i.e., a repulsion-attraction (RA) loss to better distinguish the relationship of nodes in the feature space, and a maximin agglomeration score (MAS) loss to pay more attention to crucial edge classification. Extensive experiments on three representative biological datasets demonstrate the superiority of our method over existing state-of-the-art methods. Code is available at https://github.com/liuxy1103/BISSG.
|
Xiaoyu Liu, Wei Huang, Yueyi Zhang, Zhiwei Xiong
| null | null | 2,022 |
ijcai
|
Intrinsic Image Decomposition by Pursuing Reflectance Image
| null |
Intrinsic image decomposition is a fundamental problem for many computer vision applications. While recent deep learning based methods have achieved very promising results on synthetic densely labeled datasets, the results on real-world datasets are still far from human-level performance. This is mostly because collecting dense supervision on a real-world dataset is impossible. Often, only a sparse set of pairwise judgements from humans is used. It is very difficult for models to learn in such settings.
In this paper, we investigate the possibility of using only reflectance images for supervision during training. In this way, the demand for labeled data is greatly reduced. To achieve this goal, we investigate the reflectance images in depth. We find that reflectance images are actually comprised of two components: the flat surfaces with low frequency information, and the boundaries with high frequency details. Then, we propose to disentangle the learning process of the two components of the reflectance images. We argue that through this procedure, the reflectance images can be better modeled, and in the meantime, the shading images, though not supervised, can also achieve decent results. Extensive experiments show that our proposed network outperforms current state-of-the-art results by a large margin on the most challenging real-world IIW dataset. We also surprisingly find that on the densely labeled datasets (MIT and MPI-Sintel), our network can also achieve state-of-the-art results on both reflectance and shading images, when we only apply supervision on the reflectance images during training.
|
Tzu-Heng Lin, Pengxiao Wang, Yizhou Wang
| null | null | 2,022 |
ijcai
|
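To illustrate the two-component view of reflectance discussed in the abstract above (flat low-frequency surfaces plus high-frequency boundaries), here is a minimal Python sketch that splits an image into the two bands with a Gaussian blur; the blur radius and this simple split are assumptions, not the paper's learned disentanglement.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_reflectance(reflectance, sigma=3.0):
    """reflectance: HxWx3 float array. Returns (low_freq, high_freq) such that
    low_freq + high_freq == reflectance."""
    low = np.stack([gaussian_filter(reflectance[..., c], sigma) for c in range(3)], axis=-1)
    high = reflectance - low    # boundaries / high-frequency details
    return low, high
```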
Feature Dense Relevance Network for Single Image Dehazing
| null |
Existing learning-based dehazing methods do not fully use non-local information, which makes the restoration of seriously degraded regions very difficult. We propose a novel dehazing network by defining the Feature Dense Relevance module (FDR) and the Shallow Feature Mapping module (SFM). The FDR is defined based on multi-head attention to construct the dense relationship between different local features in the whole image. It enables the network to restore degraded local regions using non-local information in complex scenes. In addition, the raw distant skip-connection easily leads to artifacts while it cannot deal with the shallow features effectively. Therefore, we define the SFM by combining the atmospheric scattering model and the distant skip-connection to effectively deal with the shallow features in different scales. It not only maps the degraded textures into clear textures by distant dependence, but also reduces artifacts and color distortions effectively. We introduce contrastive loss and focal frequency loss in the network to obtain a realistic and clear image. The extensive experiments on several synthetic and real-world datasets demonstrate that our network surpasses most of the state-of-the-art methods.
|
Yun Liang, Enze Huang, Zifeng Zhang, Zhuo Su, Dong Wang
| null | null | 2,022 |
ijcai
|
Learning Multi-dimensional Edge Feature-based AU Relation Graph for Facial Action Unit Recognition
| null |
The activations of Facial Action Units (AUs) mutually influence one another. While the relationship between a pair of AUs can be complex and unique, existing approaches fail to specifically and explicitly represent such cues for each pair of AUs in each facial display. This paper proposes an AU relationship modelling approach that deep learns a unique graph to explicitly describe the relationship between each pair of AUs of the target facial display. Our approach first encodes each AU's activation status and its association with other AUs into a node feature. Then, it learns a pair of multi-dimensional edge features to describe multiple task-specific relationship cues between each pair of AUs. During both node and edge feature learning, our approach also considers the influence of the unique facial display on AUs' relationship by taking the full face representation as an input. Experimental results on BP4D and DISFA datasets show that both node and edge feature learning modules provide large performance improvements for CNN and transformer-based backbones, with our best systems achieving the state-of-the-art AU recognition results. Our approach not only has a strong capability in modelling relationship cues for AU recognition but also can be easily incorporated into various backbones. Our PyTorch code is made available at https://github.com/CVI-SZU/ME-GraphAU.
|
Cheng Luo, Siyang Song, Weicheng Xie, Linlin Shen, Hatice Gunes
| null | null | 2,022 |
ijcai
|
Cost Ensemble with Gradient Selecting for GANs
| null |
Generative Adversarial Networks (GANs) are powerful generative models on numerous tasks and datasets but are also known for their training instability and mode collapse. The latter is because the optimal transportation map is discontinuous, but DNNs can only approximate continuous ones. One way to solve the problem is to introduce multiple discriminators or generators. However, their impacts are limited because the cost function of each component is the same. That is, they are homogeneous. In contrast, multiple discriminators with different cost functions can yield various gradients for the generator, which indicates we can use them to search for more transportation maps in the latent space. Inspired by this, we propose a framework named CES-GAN to combat the mode collapse problem, which contains multiple discriminators with different cost functions. Unfortunately, it may also lead to the generator being hard to train because the performance between discriminators is unbalanced, according to the Cannikin Law. Thus, a gradient selecting mechanism is also proposed to pick proper gradients. We provide mathematical statements to prove our assumptions and conduct extensive experiments to verify the performance. The results show that CES-GAN is lightweight and more effective for fighting against the mode collapse problem than similar works.
|
Minghui Liu, Jiali Deng, Meiyi Yang, Xuan Cheng, Nianbo Liu, Ming Liu, Xiaomin Wang
| null | null | 2,022 |
ijcai
|
Vision Shared and Representation Isolated Network for Person Search
| null |
Person search is a widely-concerned computer vision task that aims to jointly solve the problems of pedestrian detection and person re-identification in panoramic scenes. However, pedestrian detection focuses on the consistency of pedestrians, while person re-identification attempts to extract the discriminative features of pedestrians. This inevitable conflict greatly restricts research on one-stage person search methods. To address this issue, we propose a Vision Shared and Representation Isolated (VSRI) network to decouple the two conflicting subtasks simultaneously, through which two independent representations are constructed for the two subtasks. To enhance the discrimination of the re-ID representation, a Multi-Level Feature Fusion (MLFF) module is proposed. The MLFF adopts the Spatial Pyramid Feature Fusion (SPFF) module to obtain diverse features from the stem network. Moreover, the multi-head self-attention mechanism is employed to construct a Multi-head Attention Driven Extraction (MADE) module and the cascaded convolution unit is adopted to devise a Feature Decomposition and Cascaded Integration (FDCI) module, which facilitates the MLFF to obtain more discriminative representations of the pedestrians. The proposed method outperforms the state-of-the-art methods on the mainstream datasets.
|
Yang Liu, Yingping Li, Chengyu Kong, Yuqiu Kong, Shenglan Liu, Feilong Wang
| null | null | 2,022 |
ijcai
|
MA-ViT: Modality-Agnostic Vision Transformers for Face Anti-Spoofing
| null |
The existing multi-modal face anti-spoofing (FAS) frameworks are designed based on two strategies: halfway and late fusion. However, the former requires test modalities consistent with the training input, which seriously limits its deployment scenarios. And the latter is built on multiple branches to process different modalities independently, which limits their use in applications with low memory or fast execution requirements. In this work, we present a single-branch Transformer framework, namely Modality-Agnostic Vision Transformer (MA-ViT), which aims to improve the performance of arbitrary modal attacks with the help of multi-modal data. Specifically, MA-ViT adopts early fusion to aggregate all the available training modalities' data and enables flexible testing of any given modal samples. Further, we develop the Modality-Agnostic Transformer Block (MATB) in MA-ViT, which consists of two stacked attentions named Modal-Disentangle Attention (MDA) and Cross-Modal Attention (CMA), to eliminate modality-related information for each modal sequence and supplement modality-agnostic liveness features from the other modal sequences, respectively. Experiments demonstrate that the single model trained based on MA-ViT can not only flexibly evaluate different modal samples, but also outperform existing single-modal frameworks by a large margin, and approach the multi-modal frameworks with smaller FLOPs and fewer model parameters.
|
Ajian Liu, Yanyan Liang
| null | null | 2,022 |
ijcai
|
Long-Short Term Cross-Transformer in Compressed Domain for Few-Shot Video Classification
| null |
Compared with image few-shot learning, most of the existing few-shot video classification methods perform worse on feature matching, because they fail to sufficiently exploit the temporal information and relation. Specifically, frames are usually evenly sampled, which may miss important frames. On the other hand, the heuristic model simply encodes the equally treated frames in sequence, which results in the lack of both long-term and short-term temporal modeling and interaction. To alleviate these limitations, we take advantage of the compressed domain knowledge and propose a long-short term Cross-Transformer (LSTC) for few-shot video classification. For short terms, the motion vector (MV) contains temporal cues and reflects the importance of each frame. For long terms, a video can be natively divided into a sequence of GOPs (Group Of Picture). Using this compressed domain knowledge helps to obtain a more accurate spatial-temporal feature space. Consequently, we design the long-short term selection module, short-term module, and long-term module to comprise the LSTC. Long-short term selection is performed to select informative compressed domain data. Long/short-term modules are utilized to sufficiently exploit the temporal information so that the query and support can be well-matched by cross-attention. Experimental results show the superiority of our method on various datasets.
|
Wenyang Luo, Yufan Liu, Bing Li, Weiming Hu, Yanan Miao, Yangxi Li
| null | null | 2,022 |
ijcai
|
TopoSeg: Topology-aware Segmentation for Point Clouds
| null |
Point cloud segmentation plays an important role in AI applications such as autonomous driving, AR, and VR. However, previous point cloud segmentation neural networks rarely pay attention to the topological correctness of the segmentation results. In this paper, we focus on the perspective of topology awareness. First, to optimize the distribution of segmented predictions from the perspective of topology, we introduce persistent homology theory from topology into a 3D point cloud deep learning framework. Second, we propose a topology-aware 3D point cloud segmentation module, TopoSeg. Specifically, we design a topological loss function embedded in the TopoSeg module, which imposes topological constraints on the segmentation of 3D point clouds. Experiments show that our proposed TopoSeg module can be easily embedded into point cloud segmentation networks and improve the segmentation performance. In addition, based on the constructed topology loss function, we propose a topology-aware point cloud edge extraction algorithm, which is demonstrated to have strong robustness.
|
Weiquan Liu, Hanyun Guo, Weini Zhang, Yu Zang, Cheng Wang, Jonathan Li
| null | null | 2,022 |
ijcai
|
Dynamic Group Transformer: A General Vision Transformer Backbone with Dynamic Group Attention
| null |
Recently, Transformers have shown promising performance in various vision tasks. To reduce the quadratic computation complexity caused by each query attending to all keys/values, various methods have constrained the range of attention within local regions, where each query only attends to keys/values within a hand-crafted window. However, these hand-crafted window partition mechanisms are data-agnostic and ignore their input content, so it is likely that one query may attend to irrelevant keys/values. To address this issue, we propose a Dynamic Group Attention (DG-Attention), which dynamically divides all queries into multiple groups and selects the most relevant keys/values for each group. Our DG-Attention can flexibly model more relevant dependencies without any spatial constraint that is used in hand-crafted window based attention. Built on the DG-Attention, we develop a general vision transformer backbone named Dynamic Group Transformer (DGT). Extensive experiments show that our models can outperform the state-of-the-art methods on multiple common vision tasks, including image classification, semantic segmentation, object detection, and instance segmentation.
|
Kai Liu, Tianyi Wu, Cong Liu, Guodong Guo
| null | null | 2,022 |
ijcai
|
ChimeraMix: Image Classification on Small Datasets via Masked Feature Mixing
| null |
Deep convolutional neural networks require large amounts of labeled data samples. For many real-world applications, this is a major limitation which is commonly treated by augmentation methods. In this work, we address the problem of learning deep neural networks on small datasets. Our proposed architecture called ChimeraMix learns a data augmentation by generating compositions of instances. The generative model encodes images in pairs, combines the features guided by a mask, and creates new samples. For evaluation, all methods are trained from scratch without any additional data. Several experiments on benchmark datasets, e.g. ciFAIR-10, STL-10, and ciFAIR-100, demonstrate the superior performance of ChimeraMix compared to current state-of-the-art methods for classification on small datasets. Code is available at https://github.com/creinders/ChimeraMix.
|
Christoph Reinders, Frederik Schubert, Bodo Rosenhahn
| null | null | 2,022 |
ijcai
|
Deep Video Harmonization With Color Mapping Consistency
| null |
Video harmonization aims to adjust the foreground of a composite video to make it compatible with the background. So far, video harmonization has only received limited attention and there is no public dataset for video harmonization. In this work, we construct a new video harmonization dataset HYouTube by adjusting the foreground of real videos to create synthetic composite videos. Moreover, we consider the temporal consistency in video harmonization task. Unlike previous works which establish the spatial correspondence, we design a novel framework based on the assumption of color mapping consistency, which leverages the color mapping of neighboring frames to refine the current frame. Extensive experiments on our HYouTube dataset prove the effectiveness of our proposed framework. Our dataset and code are available at https://github.com/bcmi/Video-Harmonization-Dataset-HYouTube.
|
Xinyuan Lu, Shengyuan Huang, Li Niu, Wenyan Cong, Liqing Zhang
| null | null | 2,022 |
ijcai
|
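As a minimal illustration of the color mapping consistency assumption above (the mapping from composite to harmonized colors varies slowly between neighboring frames), the sketch below fits a per-channel affine color mapping on a neighboring frame pair and applies it to the current frame; the affine form and the variable names are illustrative assumptions.

```python
import numpy as np

def fit_affine_color_map(src, dst):
    """Fit dst ≈ a * src + b per channel with least squares.
    src, dst: HxWx3 float arrays, e.g., a neighboring composite frame and its harmonized result."""
    params = []
    for c in range(3):
        x = src[..., c].ravel()
        y = dst[..., c].ravel()
        A = np.stack([x, np.ones_like(x)], axis=1)
        (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
        params.append((a, b))
    return params

def apply_color_map(frame, params):
    # apply the neighbor's color mapping to refine the current composite frame
    out = np.empty_like(frame)
    for c, (a, b) in enumerate(params):
        out[..., c] = a * frame[..., c] + b
    return np.clip(out, 0.0, 1.0)
```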
Learning Degradation Uncertainty for Unsupervised Real-world Image Super-resolution
| null |
Acquiring degraded images with paired high-resolution (HR) images is often challenging, impeding the advance of image super-resolution in real-world applications. By generating realistic low-resolution (LR) images with degradation similar to that in real-world scenarios, simulated paired LR-HR data can be constructed for supervised training. However, most of the existing work ignores the degradation uncertainty of the generated realistic LR images, since only one LR image is generated given an HR image. To address this weakness, we propose learning the degradation uncertainty of generated LR images, sampling multiple LR images from the learned LR image (mean) and degradation uncertainty (variance), and constructing LR-HR pairs to train the super-resolution (SR) networks. Specifically, the uncertainty can be learned by minimizing the proposed loss based on the Kullback-Leibler (KL) divergence. Furthermore, the uncertainty in the feature domain is exploited by a novel perceptual loss, and we propose to calculate the adversarial loss from the gradient information in the SR stage for stable training performance and better visual quality. Experimental results on popular real-world datasets show that our proposed method performs better than other unsupervised approaches.
|
Qian Ning, Jingzhu Tang, Fangfang Wu, Weisheng Dong, Xin Li, Guangming Shi
| null | null | 2,022 |
ijcai
|
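To make the uncertainty-aware sampling idea above concrete, here is a small PyTorch sketch that treats the generated LR image as a per-pixel Gaussian (predicted mean and variance), draws multiple LR samples with the reparameterization trick, and uses a heteroscedastic Gaussian negative log-likelihood as a stand-in for the paper's KL-based loss; the loss form and the names are assumptions.

```python
import torch

def sample_lr_images(mu, log_var, num_samples=4):
    """mu, log_var: (B, C, H, W) predicted mean and log-variance of the degraded LR image.
    Returns (num_samples, B, C, H, W) LR samples drawn with the reparameterization trick."""
    std = torch.exp(0.5 * log_var)
    eps = torch.randn((num_samples,) + tuple(mu.shape), device=mu.device)
    return mu.unsqueeze(0) + std.unsqueeze(0) * eps

def gaussian_nll(mu, log_var, target):
    # heteroscedastic Gaussian negative log-likelihood (a stand-in for the KL-based loss)
    return (0.5 * (torch.exp(-log_var) * (target - mu) ** 2 + log_var)).mean()
```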
Improved Deep Unsupervised Hashing with Fine-grained Semantic Similarity Mining for Multi-Label Image Retrieval
| null |
In this paper, we study deep unsupervised hashing, a critical problem for approximate nearest neighbor research. Most recent methods solve this problem by semantic similarity reconstruction for guiding hashing network learning or contrastive learning of hash codes. However, in multi-label scenarios, these methods usually either generate an inaccurate similarity matrix without reflection of similarity ranking or suffer from the violation of the underlying assumption in contrastive learning, resulting in limited retrieval performance. To tackle this issue, we propose a novel method termed HAMAN, which explores semantics from a fine-grained view to enhance the ability of multi-label image retrieval. In particular, we reconstruct the pairwise similarity structure by matching fine-grained patch features generated by the pre-trained neural network, serving as reliable guidance for similarity preserving of hash codes. Moreover, a novel conditional contrastive learning on hash codes is proposed to adopt self-supervised learning in multi-label scenarios. According to extensive experiments on three multi-label datasets, the proposed method outperforms a broad range of state-of-the-art methods.
|
Zeyu Ma, Xiao Luo, Yingjie Chen, Mixiao Hou, Jinxing Li, Minghua Deng, Guangming Lu
| null | null | 2,022 |
ijcai
|
Copy Motion From One to Another: Fake Motion Video Generation
| null |
One compelling application of artificial intelligence is to generate a video of a target person performing arbitrary desired motion (from a source person). While the state-of-the-art methods are able to synthesize a video demonstrating similar broad-stroke motion details, they are generally lacking in texture details. A pertinent manifestation appears as distorted faces, feet, and hands, and such flaws are perceived very sensitively by human observers. Furthermore, current methods typically employ GANs with an L2 loss to assess the authenticity of the generated videos, inherently requiring a large amount of training samples to learn the texture details for adequate video generation. In this work, we tackle these challenges from three aspects: 1) We disentangle each video frame into foreground (the person) and background, focusing on generating the foreground to reduce the underlying dimension of the network output. 2) We propose a theoretically motivated Gromov-Wasserstein loss that facilitates learning the mapping from a pose to a foreground image. 3) To enhance texture details, we encode facial features with geometric guidance and employ local GANs to refine the face, feet, and hands. Extensive experiments show that our method is able to generate realistic target person videos, faithfully copying complex motions from a source person. Our code and datasets are released at https://github.com/Sifann/FakeMotion.
|
Zhenguang Liu, Sifan Wu, Chejian Xu, Xiang Wang, Lei Zhu, Shuang Wu, Fuli Feng
| null | null | 2,022 |
ijcai
|
Continual Semantic Segmentation Leveraging Image-level Labels and Rehearsal
| null |
Despite the remarkable progress of deep learning models for semantic segmentation, the success of these models is strongly limited by the following aspects: 1) large datasets with pixel-level annotations must be available and 2) training must be performed with all classes simultaneously. Indeed, in incremental learning scenarios, where new classes are added to an existing framework, these models are prone to catastrophic forgetting of previous classes. To address these two limitations, we propose a weakly-supervised mechanism for continual semantic segmentation that can leverage cheap image-level annotations and a novel rehearsal strategy that intertwines the learning of past and new classes. Specifically, we explore two rehearsal technique variants: 1) imprinting past objects on new images and 2) transferring past representations in intermediate feature maps. We conduct extensive experiments on Pascal-VOC by varying the proportion of fully- and weakly-supervised data in various setups and show that our contributions consistently improve the mIoU on both past and novel classes. Interestingly, we also observe that models trained with less data in incremental steps sometimes outperform the same architectures trained with more data. We discuss the significance of these results and propose some hypotheses regarding the dynamics between forgetting and learning.
|
Mathieu Pagé Fortin, Brahim Chaib-draa
| null | null | 2,022 |
ijcai
|
Automatic Recognition of Emotional Subgroups in Images
| null |
Both social group detection and group emotion recognition in images are growing fields of interest, but never before have they been combined. In this work we aim to detect emotional subgroups in images, which can be of great importance for crowd surveillance or event analysis. To this end, human annotators are instructed to label a set of 171 images, and their recognition strategies are analysed. Three main strategies for labeling images are identified, with each strategy assigning either 1) more weight to emotions (emotion-based fusion), 2) more weight to spatial structures (group-based fusion), or 3) equal weight to both (summation strategy). Based on these strategies, algorithms are developed to automatically recognize emotional subgroups. In particular, K-means and hierarchical clustering are used with location and emotion features derived from a fine-tuned VGG network. Additionally, we experiment with face size and gaze direction as extra input features. The best performance comes from hierarchical clustering with emotion, location and gaze direction as input.
|
Emmeke Veltmeijer, Charlotte Gerritsen, Koen Hindriks
| null | null | 2,022 |
ijcai
|
Iterative Few-shot Semantic Segmentation from Image Label Text
| null |
Few-shot semantic segmentation aims to learn to segment unseen class objects with the guidance of only a few support images. Most previous methods rely on the pixel-level labels of support images. In this paper, we focus on a more challenging setting, in which only the image-level labels are available. We propose a general framework to first generate coarse masks with the help of the powerful vision-language model CLIP, and then iteratively and mutually refine the mask predictions of support and query images. Extensive experiments on the PASCAL-5i and COCO-20i datasets demonstrate that our method not only outperforms the state-of-the-art weakly supervised approaches by a significant margin, but also achieves comparable or better results than recent supervised methods. Moreover, our method has excellent generalization ability for images in the wild and uncommon classes. Code will be available at https://github.com/Whileherham/IMR-HSNet.
|
Haohan Wang, Liang Liu, Wuhao Zhang, Jiangning Zhang, Zhenye Gan, Yabiao Wang, Chengjie Wang, Haoqian Wang
| null | null | 2,022 |
ijcai
|
Video Frame Interpolation Based on Deformable Kernel Region
| null |
The video frame interpolation task has recently become increasingly prevalent in the computer vision field. At present, a number of deep-learning-based studies have achieved great success. Most of them are based either on optical flow information, or on interpolation kernels, or on a combination of these two methods. However, these methods ignore that the position of the kernel region is restricted to a fixed grid when synthesizing each target pixel. This limitation means that they cannot adapt well to the irregularity of object shape and the uncertainty of motion, which may lead to irrelevant reference pixels being used for interpolation. In order to solve this problem, we revisit the deformable convolution for video interpolation, which can break the fixed grid restrictions on the kernel region, making the distribution of reference points more suitable for the shape of the object, and thus warp a more accurate interpolation frame. Experiments are conducted on four datasets to demonstrate the superior performance of the proposed model in comparison to the state-of-the-art alternatives.
|
Haoyue Tian, Pan Gao, Xiaojiang Peng
| null | null | 2,022 |
ijcai
|
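As an illustrative sketch of freeing the kernel region from the fixed grid, the PyTorch snippet below predicts per-pixel offsets and applies torchvision's deformable convolution so the sampling locations can follow object shape and motion; the layer sizes and the way offsets are predicted here are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class DeformableSynthesis(nn.Module):
    """Predicts per-pixel offsets and applies a deformable 3x3 convolution,
    so the sampled reference region is no longer tied to a regular grid."""
    def __init__(self, in_ch=6, out_ch=3, k=3):
        super().__init__()
        self.k = k
        # in_ch=6 assumes two stacked RGB frames as input
        self.offset_pred = nn.Conv2d(in_ch, 2 * k * k, kernel_size=3, padding=1)
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x):
        offset = self.offset_pred(x)        # (B, 2*k*k, H, W) learned sampling offsets
        return deform_conv2d(x, offset, self.weight, self.bias, padding=self.k // 2)
```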
Multilevel Hierarchical Network with Multiscale Sampling for Video Question Answering
| null |
Video question answering (VideoQA) is challenging given its multimodal combination of visual understanding and natural language processing. While most existing approaches ignore the visual appearance-motion information at different temporal scales, it is unknown how to incorporate the multilevel processing capacity of a deep learning model with such multiscale information. Targeting these issues, this paper proposes a novel Multilevel Hierarchical Network (MHN) with multiscale sampling for VideoQA. MHN comprises two modules, namely Recurrent Multimodal Interaction (RMI) and Parallel Visual Reasoning (PVR). With a multiscale sampling, RMI iterates the interaction of appearance-motion information at each scale and the question embeddings to build the multilevel question-guided visual representations. Thereon, with a shared transformer encoder, PVR infers the visual cues at each level in parallel to fit with answering different question types that may rely on the visual information at relevant levels. Through extensive experiments on three VideoQA datasets, we demonstrate improved performances than previous state-of-the-arts and justify the effectiveness of each part of our method.
|
Min Peng, Chongyang Wang, Yuan Gao, Yu Shi, Xiang-Dong Zhou
| null | null | 2,022 |
ijcai
|
Boundary-Guided Camouflaged Object Detection
| null |
Camouflaged object detection (COD), segmenting objects that are elegantly blended into their surroundings, is a valuable yet challenging task. Existing deep-learning methods often fall into the difficulty of accurately identifying the camouflaged object with complete and fine object structure. To this end, in this paper, we propose a novel boundary-guided network (BGNet) for camouflaged object detection. Our method explores valuable and extra object-related edge semantics to guide representation learning of COD, which forces the model to generate features that highlight object structure, thereby promoting camouflaged object detection of accurate boundary localization. Extensive experiments on three challenging benchmark datasets demonstrate that our BGNet significantly outperforms the existing 18 state-of-the-art methods under four widely-used evaluation metrics. Our code is publicly available at: https://github.com/thograce/BGNet.
|
Yujia Sun, Shuo Wang, Chenglizhao Chen, Tian-Zhu Xiang
| null | null | 2,022 |
ijcai
|
A Unified Framework for Adversarial Attack and Defense in Constrained Feature Space
| null |
The generation of feasible adversarial examples is necessary for properly assessing models that work in constrained feature space. However, it remains a challenging task to enforce constraints into attacks that were designed for computer vision. We propose a unified framework to generate feasible adversarial examples that satisfy given domain constraints. Our framework can handle both linear and non-linear constraints. We instantiate our framework into two algorithms: a gradient-based attack that introduces constraints in the loss function to maximize, and a multi-objective search algorithm that aims for misclassification, perturbation minimization, and constraint satisfaction. We show that our approach is effective in four different domains, with a success rate of up to 100%, where state-of-the-art attacks fail to generate a single feasible example. In addition to adversarial retraining, we propose to introduce engineered non-convex constraints to improve model adversarial robustness. We demonstrate that this new defense is as effective as adversarial retraining. Our framework forms the starting point for research on constrained adversarial attacks and provides relevant baselines and datasets that future research can exploit.
|
Thibault Simonetto, Salijona Dyrmishi, Salah Ghamizi, Maxime Cordy, Yves Le Traon
| null | null | 2,022 |
ijcai
|
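To illustrate the first instantiation described above, a gradient-based attack that adds domain-constraint terms to the loss being maximized, here is a minimal PyTorch sketch; the penalty formulation, step count, and the example constraint are assumptions meant only to show the overall structure.

```python
import torch

def constrained_gradient_attack(model, x, y, constraint_fns, steps=50, lr=0.01, lam=10.0):
    """constraint_fns: callables g_i(x_adv) returning violation amounts
    (<= 0 when the constraint is satisfied). Seeks misclassification while
    penalizing constraint violations."""
    x_adv = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        ce = torch.nn.functional.cross_entropy(model(x_adv), y)
        violation = sum(g(x_adv).clamp(min=0).mean() for g in constraint_fns)
        loss = -ce + lam * violation   # ascend on the classification loss, descend on violations
        loss.backward()
        opt.step()
    return x_adv.detach()

# example (hypothetical) linear constraint: feature 0 must not exceed feature 1
example_constraints = [lambda z: z[:, 0] - z[:, 1]]
```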
Spatiality-guided Transformer for 3D Dense Captioning on Point Clouds
| null |
Dense captioning in 3D point clouds is an emerging vision-and-language task involving object-level 3D scene understanding. Apart from coarse semantic class prediction and bounding box regression as in traditional 3D object detection, 3D dense captioning aims at producing a further and finer instance-level label of natural language description on visual appearance and spatial relations for each scene object of interest. To detect and describe objects in a scene, following the spirit of neural machine translation, we propose a transformer-based encoder-decoder architecture, namely SpaCap3D, to transform objects into descriptions, where we especially investigate the relative spatiality of objects in 3D scenes and design a spatiality-guided encoder via a token-to-token spatial relation learning objective and an object-centric decoder for precise and spatiality-enhanced object caption generation. Evaluated on two benchmark datasets, ScanRefer and ReferIt3D, our proposed SpaCap3D outperforms the baseline method Scan2Cap by 4.94% and 9.61% in [email protected], respectively. Our project page with source code and supplementary files is available at https://SpaCap3D.github.io/.
|
Heng Wang, Chaoyi Zhang, Jianhui Yu, Weidong Cai
| null | null | 2,022 |
ijcai
|
Harnessing Fourier Isovists and Geodesic Interaction for Long-Term Crowd Flow Prediction
| null |
With the rise in popularity of short-term Human Trajectory Prediction (HTP), Long-Term Crowd Flow Prediction (LTCFP) has been proposed to forecast crowd movement in large and complex environments. However, the input representations, models, and datasets for LTCFP are currently limited. To this end, we propose Fourier Isovists, a novel input representation based on egocentric visibility, which consistently improves all existing models. We also propose GeoInteractNet (GINet), which couples the layers between a multi-scale attention network (M-SCAN) and a convolutional encoder-decoder network (CED). M-SCAN approximates a super-resolution map of where humans are likely to interact on the way to their goals and produces multi-scale attention maps. The CED then uses these maps in either its encoder's inputs or its decoder's attention gates, which allows GINet to produce super-resolution predictions with substantially higher accuracy than existing models even with Fourier Isovists. In order to evaluate the scalability of models to large and complex environments, which the only existing LTCFP dataset is unsuitable for, a new synthetic crowd dataset with both real and synthetic environments has been generated. In its nascent state, LTCFP has much to gain from our key contributions. The Supplementary Materials, dataset, and code are available at sssohn.github.io/GeoInteractNet.
|
Samuel S. Sohn, Seonghyeon Moon, Honglu Zhou, Mihee Lee, Sejong Yoon, Vladimir Pavlovic, Mubbasir Kapadia
| null | null | 2,022 |
ijcai
|
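As a minimal sketch of the Fourier Isovist idea above (an egocentric visibility profile summarized by the Fourier coefficients of its ray lengths), the snippet below casts rays on a binary occupancy grid and keeps the low-order FFT magnitudes; the ray count, grid convention, and number of coefficients kept are assumptions.

```python
import numpy as np

def fourier_isovist(occupancy, origin, num_rays=64, max_dist=100, num_coeffs=8):
    """occupancy: 2D bool array (True = obstacle); origin: (row, col) cell of the agent.
    Returns low-order Fourier magnitudes of the egocentric visibility profile."""
    h, w = occupancy.shape
    dists = np.full(num_rays, float(max_dist))
    for i, ang in enumerate(np.linspace(0, 2 * np.pi, num_rays, endpoint=False)):
        dr, dc = np.sin(ang), np.cos(ang)
        for step in range(1, max_dist):
            r, c = int(origin[0] + dr * step), int(origin[1] + dc * step)
            if not (0 <= r < h and 0 <= c < w) or occupancy[r, c]:
                dists[i] = step          # ray blocked at this distance
                break
    spectrum = np.fft.rfft(dists)        # compact, rotation-robust summary of the isovist shape
    return np.abs(spectrum[:num_coeffs])
```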
Dynamic Domain Generalization
| null |
Domain generalization (DG) is a fundamental yet very challenging research topic in machine learning. The existing arts mainly focus on learning domain-invariant features with limited source domains in a static model. Unfortunately, there is a lack of a training-free mechanism to adjust the model when it is generalized to agnostic target domains. To tackle this problem, we develop a brand-new DG variant, namely Dynamic Domain Generalization (DDG), in which the model learns to twist the network parameters to adapt to the data from different domains. Specifically, we leverage a meta-adjuster to twist the network parameters based on the static model with respect to different data from different domains. In this way, the static model is optimized to learn domain-shared features, while the meta-adjuster is designed to learn domain-specific features. To enable this process, DomainMix is exploited to simulate data from diverse domains while teaching the meta-adjuster to adapt to the agnostic target domains. This learning mechanism urges the model to generalize to different agnostic target domains by adjusting the model without training. Extensive experiments demonstrate the effectiveness of our proposed method. Code is available: https://github.com/MetaVisionLab/DDG
|
Zhishu Sun, Zhifeng Shen, Luojun Lin, Yuanlong Yu, Zhifeng Yang, Shicai Yang, Weijie Chen
| null | null | 2,022 |
ijcai
|
SimMC: Simple Masked Contrastive Learning of Skeleton Representations for Unsupervised Person Re-Identification
| null |
Recent advances in skeleton-based person re-identification (re-ID) obtain impressive performance via either hand-crafted skeleton descriptors or skeleton representation learning with deep learning paradigms. However, they typically require skeletal pre-modeling and label information for training, which leads to limited applicability of these methods. In this paper, we focus on unsupervised skeleton-based person re-ID, and present a generic Simple Masked Contrastive learning (SimMC) framework to learn effective representations from unlabeled 3D skeletons for person re-ID. Specifically, to fully exploit skeleton features within each skeleton sequence, we first devise a masked prototype contrastive learning (MPC) scheme to cluster the most typical skeleton features (skeleton prototypes) from different subsequences randomly masked from raw sequences, and contrast the inherent similarity between skeleton features and different prototypes to learn discriminative skeleton representations without using any label. Then, considering that different subsequences within the same sequence usually enjoy strong correlations due to the nature of motion continuity, we propose the masked intra-sequence contrastive learning (MIC) to capture intra-sequence pattern consistency between subsequences, so as to encourage learning more effective skeleton representations for person re-ID. Extensive experiments validate that the proposed SimMC outperforms most state-of-the-art skeleton-based methods. We further show its scalability and efficiency in enhancing the performance of existing models. Our codes are available at https://github.com/Kali-Hac/SimMC.
|
Haocong Rao, Chunyan Miao
| null | null | 2,022 |
ijcai
|
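To sketch the masked prototype contrastive idea above in code, the PyTorch snippet below randomly masks skeleton subsequences and contrasts each subsequence feature against cluster prototypes with an InfoNCE-style loss; the masking scheme, the way prototypes are formed, and the temperature are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def mask_subsequence(seq, mask_ratio=0.3):
    """seq: (T, D) skeleton subsequence. Randomly zeroes out a fraction of frames."""
    keep = torch.rand(seq.size(0)) > mask_ratio
    return seq * keep.unsqueeze(1).to(seq.dtype)

def prototype_contrastive_loss(features, assignments, num_prototypes, tau=0.07):
    """features: (N, D) subsequence features; assignments: (N,) long tensor of pseudo cluster ids.
    Prototypes are the normalized mean feature of each cluster (assumes every id occurs)."""
    features = F.normalize(features, dim=1)
    protos = torch.stack([
        F.normalize(features[assignments == k].mean(dim=0), dim=0)
        for k in range(num_prototypes)
    ])                                    # (K, D) prototype matrix
    logits = features @ protos.t() / tau  # similarity of each feature to every prototype
    return F.cross_entropy(logits, assignments)
```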
Augmenting Anchors by the Detector Itself
| null |
Usually, it is difficult to determine the scale and aspect ratio of anchors for anchor-based object detection methods. Current state-of-the-art object detectors either determine anchor parameters according to objects' shape and scale in a dataset, or avoid this problem by utilizing anchor-free methods; however, the former scheme is dataset-specific, and the latter methods do not achieve better performance than the former ones. In this paper, we propose a novel anchor augmentation method named AADI, which stands for Augmenting Anchors by the Detector Itself. AADI is not an anchor-free method; instead, it converts the scale and aspect ratio of anchors from a continuous space to a discrete space, which greatly alleviates the problem of anchor designation. Furthermore, AADI is a learning-based anchor augmentation method, but it does not add any parameters or hyper-parameters, which is beneficial for research and downstream tasks. Extensive experiments on the COCO dataset demonstrate the effectiveness of AADI; specifically, AADI achieves significant performance boosts on many state-of-the-art object detectors (e.g., at least +2.4 box AP on Faster R-CNN, +2.2 box AP on Mask R-CNN, and +0.9 box AP on Cascade Mask R-CNN). We hope that this simple and cost-efficient method can be widely used in object detection. Code and models are available at https://github.com/WanXiaopei/aadi.
|
Xiaopei Wan, Guoqiu Li, Yujiu Yang, Zhenhua Guo
| null | null | 2,022 |
ijcai
|
Absolute Wrong Makes Better: Boosting Weakly Supervised Object Detection via Negative Deterministic Information
| null |
Weakly supervised object detection (WSOD) is a challenging task, in which image-level labels (e.g., categories of the instances in the whole image) are used to train an object detector. Many existing methods follow the standard multiple instance learning (MIL) paradigm and have achieved promising performance. However, the lack of deterministic information leads to part domination and missing instances. To address these issues, this paper focuses on identifying and fully exploiting the deterministic information in WSOD. We discover that negative instances (i.e. absolutely wrong instances), ignored in most of the previous studies, normally contain valuable deterministic information. Based on this observation, we here propose a negative deterministic information (NDI) based method for improving WSOD, namely NDI-WSOD. Specifically, our method consists of two stages: NDI collecting and exploiting. In the collecting stage, we design several processes to identify and distill the NDI from negative instances online. In the exploiting stage, we utilize the extracted NDI to construct a novel negative contrastive learning mechanism and a negative guided instance selection strategy for dealing with the issues of part domination and missing instances, respectively. Experimental results on several public benchmarks including VOC 2007, VOC 2012 and MS COCO show that our method achieves satisfactory performance.
|
Guanchun Wang, Xiangrong Zhang, Zelin Peng, Xu Tang, Huiyu Zhou, Licheng Jiao
| null | null | 2,022 |
ijcai
|
Hypertron: Explicit Social-Temporal Hypergraph Framework for Multi-Agent Forecasting
| null |
Forecasting the future trajectories of multiple agents is a core technology for human-robot interaction systems. To predict multi-agent trajectories more accurately, it is inevitable that models need to improve interpretability and reduce redundancy. However, many methods adopt implicit weight calculation or black-box networks to learn the semantic interaction of agents, which obviously lack enough interpretation. In addition, most of the existing works model the relation among all agents in a one-to-one manner, which might lead to irrational trajectory predictions due to its redundancy and noise. To address the above issues, we present Hypertron, a human-understandable and lightweight hypergraph-based multi-agent forecasting framework, to explicitly estimate the motions of multiple agents and generate reasonable trajectories. The framework explicitly interacts among multiple agents and learns their latent intentions by our coarse-to-fine hypergraph convolution interaction module. Our experiments on several challenging real-world trajectory forecasting datasets show that Hypertron outperforms a wide array of state-of-the-art methods while saving over 60% parameters and reducing 30% inference time.
|
Yu Tian, Xingliang Huang, Ruigang Niu, Hongfeng Yu, Peijin Wang, Xian Sun
| null | null | 2,022 |
ijcai
|
IDPT: Interconnected Dual Pyramid Transformer for Face Super-Resolution
| null |
The face super-resolution (FSR) task aims to generate high-resolution (HR) face images from the corresponding low-resolution (LR) inputs, and has received a lot of attention because of its wide application prospects. However, due to the diversity of facial texture and the difficulty of reconstructing detailed content from degraded images, the FSR problem is still far from being solved. In this paper, we propose a novel and effective face super-resolution framework based on the Transformer, namely the Interconnected Dual Pyramid Transformer (IDPT). Instead of simply stacking cascaded feature reconstruction blocks, the proposed IDPT designs a pyramid encoder/decoder Transformer architecture to extract coarse and detailed facial textures respectively, while the relationship between the dual pyramid Transformers is further explored by a bottom pyramid feature extractor. The pyramid encoder/decoder structure is devised to adapt to the various characteristics of textures in different spatial spaces hierarchically. A novel fusing modulation module is inserted in each spatial layer to guide the refinement of the detailed texture by the corresponding coarse texture, while fusing the shallow-layer coarse feature and the corresponding deep-layer detailed feature simultaneously. Extensive experiments and visualizations on various datasets demonstrate the superiority of the proposed method for face super-resolution tasks.
|
Jingang Shi, Yusi Wang, Songlin Dong, Xiaopeng Hong, Zitong Yu, Fei Wang, Changxin Wang, Yihong Gong
| null | null | 2,022 |
ijcai
|
Emotion-Controllable Generalized Talking Face Generation
| null |
Despite the significant progress in recent years, very few of the AI-based talking face generation methods attempt to render natural emotions. Moreover, the scope of the methods is majorly limited to the characteristics of the training dataset, hence they fail to generalize to arbitrary unseen faces. In this paper, we propose a one-shot facial geometry-aware emotional talking face generation method that can generalize to arbitrary faces. We propose a graph convolutional neural network that uses speech content feature, along with an independent emotion input to generate emotion and speech-induced motion on facial geometry-aware landmark representation. This representation is further used in our optical flow-guided texture generation network for producing the texture. We propose a two-branch texture generation network, with motion and texture branches designed to consider the motion and texture content independently. Compared to the previous emotion talking face methods, our method can adapt to arbitrary faces captured in-the-wild by fine-tuning with only a single image of the target identity in neutral emotion.
|
Sanjana Sinha, Sandika Biswas, Ravindra Yadav, Brojeshwar Bhowmick
| null | null | 2,022 |
ijcai
|
Adaptive Convolutional Dictionary Network for CT Metal Artifact Reduction
| null |
Inspired by the great success of deep neural networks, learning-based methods have gained promising performance for metal artifact reduction (MAR) in computed tomography (CT) images. However, most of the existing approaches put less emphasis on modelling and embedding the intrinsic prior knowledge underlying this specific MAR task into their network designs. Against this issue, we propose an adaptive convolutional dictionary network (ACDNet), which leverages both model-based and learning-based methods. Specifically, we explore the prior structures of metal artifacts, e.g., non-local repetitive streaking patterns, and encode them as an explicit weighted convolutional dictionary model. Then, a simple-yet-effective algorithm is carefully designed to solve the model. By unfolding every iterative substep of the proposed algorithm into a network module, we explicitly embed the prior structure into a deep network, giving the method clear interpretability for the MAR task. Furthermore, our ACDNet can automatically learn the prior for artifact-free CT images via training data and adaptively adjust the representation kernels for each input CT image based on its content. Hence, our method inherits the clear interpretability of model-based methods and maintains the powerful representation ability of learning-based methods. Comprehensive experiments executed on synthetic and clinical datasets show the superiority of our ACDNet in terms of effectiveness and model generalization. Code and supplementary material are available at https://github.com/hongwang01/ACDNet.
|
Hong Wang, Yuexiang Li, Deyu Meng, Yefeng Zheng
| null | null | 2,022 |
ijcai
|
Uncertainty-Guided Pixel Contrastive Learning for Semi-Supervised Medical Image Segmentation
| null |
Recently, contrastive learning has shown great potential in medical image segmentation. Due to the lack of expert annotations, however, it is challenging to apply contrastive learning in semi-supervised scenes. To solve this problem, we propose a novel uncertainty-guided pixel contrastive learning method for semi-supervised medical image segmentation. Specifically, we construct an uncertainty map for each unlabeled image and then remove the high-uncertainty regions indicated by the map to reduce the possibility of sampling noise. The uncertainty map is determined by a well-designed consistency learning mechanism, which generates comprehensive predictions for unlabeled data by encouraging consistent network outputs from two different decoders. In addition, we suggest that the effective global representations learned by an image encoder should be equivariant to different geometric transformations. To this end, we construct an equivariant contrastive loss to strengthen the global representation learning ability of the encoder. Extensive experiments conducted on popular medical image benchmarks demonstrate that the proposed method achieves better segmentation performance than the state-of-the-art methods.
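As a rough illustration of the uncertainty-masking step described above, the sketch below assumes the two decoders output per-pixel class logits and measures uncertainty as the normalized entropy of their averaged softmax prediction; the threshold value and the function names are illustrative assumptions, not the paper's exact formulation.

```python
import math
import torch
import torch.nn.functional as F

def uncertainty_mask(logits_a: torch.Tensor, logits_b: torch.Tensor,
                     threshold: float = 0.5) -> torch.Tensor:
    """Per-pixel reliability mask from the predictions of two different decoders.

    logits_a, logits_b: (B, C, H, W). Uncertainty is taken as the normalized
    entropy of the averaged softmax prediction (an assumed stand-in for the
    paper's consistency-derived uncertainty); pixels above `threshold` are
    dropped so they are not used as contrastive anchors or pseudo-labels.
    """
    p = 0.5 * (F.softmax(logits_a, dim=1) + F.softmax(logits_b, dim=1))
    entropy = -(p * p.clamp_min(1e-8).log()).sum(dim=1) / math.log(p.size(1))
    return entropy < threshold                      # True = reliable pixel

logits_a, logits_b = torch.randn(2, 4, 64, 64), torch.randn(2, 4, 64, 64)
mask = uncertainty_mask(logits_a, logits_b)
pseudo = (0.5 * (logits_a.softmax(1) + logits_b.softmax(1))).argmax(1)   # used only where mask is True
```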
|
Tao Wang, Jianglin Lu, Zhihui Lai, Jiajun Wen, Heng Kong
| null | null | 2,022 |
ijcai
|
CARD: Semi-supervised Semantic Segmentation via Class-agnostic Relation based Denoising
| null |
Recent semi-supervised semantic segmentation methods focus on mining extra supervision from unlabeled data by generating pseudo labels. However, noisy labels are inevitable in this process and prevent effective self-supervision. This paper proposes that noisy labels can be corrected based on semantic connections among features. Since a segmentation classifier produces both high- and low-quality predictions, we can trace back to the feature encoder to investigate how a feature in a noisy group is related to those in the confident groups. Discarding the weak predictions from the classifier, rectified predictions are assigned to the wrongly predicted features through the feature relations. The key to such an idea lies in mining reliable feature connections. With this goal, we propose a class-agnostic relation network to precisely capture semantic connections among features while ignoring their semantic categories. The feature relations enable us to perform effective noisy label corrections to boost self-training performance. Extensive experiments on PASCAL VOC and Cityscapes demonstrate the state-of-the-art performance of the proposed methods under various semi-supervised settings.
|
Xiaoyang Wang, Jimin Xiao, Bingfeng Zhang, Limin Yu
| null | null | 2,022 |
ijcai
|
Source-Adaptive Discriminative Kernels based Network for Remote Sensing Pansharpening
| null |
For the pansharpening problem, previous convolutional neural networks (CNNs) mainly concatenate high-resolution panchromatic (PAN) images and low-resolution multispectral (LR-MS) images in their architectures, which ignores the distinctive attributes of the different sources. In this paper, we propose a convolutional network with source-adaptive discriminative kernels, called ADKNet, for the pansharpening task. These kernels consist of spatial kernels generated from PAN images containing rich spatial details and spectral kernels generated from LR-MS images containing abundant spectral information. The kernel generating process is specially designed to extract information discriminately and effectively. Furthermore, the kernels are learned in a pixel-by-pixel manner to characterize different information in distinct areas. Extensive experimental results indicate that ADKNet outperforms current state-of-the-art (SOTA) pansharpening methods in both quantitative and qualitative assessments, while using only about 60,000 network parameters. The proposed network is also extended to the hyperspectral image super-resolution (HSISR) problem, where it still yields SOTA performance, demonstrating the generality of our model. The code is available at http://github.com/liangjiandeng/ADKNet.
|
Siran Peng, Liang-Jian Deng, Jin-Fan Hu, Yuwei Zhuo
| null | null | 2,022 |
ijcai
|
I2CNet: An Intra- and Inter-Class Context Information Fusion Network for Blastocyst Segmentation
| null |
The quality of a blastocyst directly determines the embryo's implantation potential, thus making it essential to objectively and accurately identify the blastocyst morphology. In this work, we propose an automatic framework named I2CNet to perform the blastocyst segmentation task in human embryo images. The I2CNet contains two components: the IntrA-Class Context Module (IACCM) and the InteR-Class Context Module (IRCCM). The IACCM aggregates the representations of specific areas sharing the same category for each pixel, where the categorized regions are learned under the supervision of the groundtruth. This aggregation decomposes a K-category recognition task into K two-label recognition tasks while maintaining the ability to garner intra-class features. In addition, the IRCCM is designed based on the blastocyst morphology to compensate for inter-class information, which is gradually gathered from the inside out. Meanwhile, a weighted mapping function is applied to emphasize inter-class edges and hard samples. Eventually, the learned intra- and inter-class cues are integrated from coarse to fine, rendering sufficient information interaction and fusion between multi-scale features. Quantitative and qualitative experiments demonstrate the superiority of our model compared with other representative methods. The I2CNet achieves an accuracy of 94.14% and a Jaccard index of 85.25% on the public blastocyst dataset.
|
Hua Wang, Linwei Qiu, Jingfei Hu, Jicong Zhang
| null | null | 2,022 |
ijcai
|
KUNet: Imaging Knowledge-Inspired Single HDR Image Reconstruction
| null |
Recently, with the rise of high dynamic range (HDR) display devices, there is a great demand to transfer traditional low dynamic range (LDR) images into HDR versions. The key to success is how to solve the many-to-many mapping problem. However, the existing approaches either do not consider constraining the solution space or simply imitate the inverse camera imaging pipeline in stages, without directly formulating the HDR image generation process. In this work, we address this problem by integrating LDR-to-HDR imaging knowledge into a UNet architecture, dubbed Knowledge-inspired UNet (KUNet). The conversion from an LDR to an HDR image is mathematically formulated and can be conceptually divided into recovering missing details, adjusting imaging parameters and reducing imaging noise. Accordingly, we develop a basic knowledge-inspired block (KIB) consisting of three subnetworks corresponding to the three procedures in this HDR imaging process. The KIB blocks are cascaded in a similar way to the UNet to construct an HDR image with rich global information. In addition, we propose a knowledge-inspired jump-connect structure to bridge the dynamic range gap between HDR and LDR images. Experimental results demonstrate that the proposed KUNet achieves superior performance compared with the state-of-the-art methods. The code, dataset and appendix materials are available at https://github.com/wanghu178/KUNet.git.
|
Hu Wang, Mao Ye, Xiatian Zhu, Shuai Li, Ce Zhu, Xue Li
| null | null | 2,022 |
ijcai
|
Corner Affinity: A Robust Grouping Algorithm to Make Corner-guided Detector Great Again
| null |
Corner-guided detectors have the potential to yield precise bounding boxes. However, unreliable corner pairs, generated by heuristic grouping guidance, hinder the development of such detectors. In this paper, we propose a novel corner grouping algorithm, termed Corner Affinity, to significantly boost the reliability and robustness of corner grouping. The proposed Corner Affinity couples two interacting factors: 1) the structure affinity (SA), which generates preliminary corner pairs from the corresponding object's shallow structural information, and 2) the context affinity (CA), which optimizes corner pairs by embedding deeper semantic features of the affiliated instances. Equipped with Corner Affinity, a detector can produce high-quality bounding boxes upon well-paired corner keypoints. Experimental results show the superiority of our design on multiple benchmark datasets. Specifically, for the CornerNet baseline, the proposed Corner Affinity brings AP gains of 5.8% on COCO, 35.8% on Citypersons, and 17.2% on UCAS-AOD without bells and whistles.
|
Haoran Wei, Chenglong Liu, Ping Guo, Yangguang Zhu, Jiamei Fu, Bing Wang, Peng Wang
| null | null | 2,022 |
ijcai
|
RePre: Improving Self-Supervised Vision Transformer with Reconstructive Pre-training
| null |
Recently, self-supervised vision transformers have attracted unprecedented attention for their impressive representation learning ability.
However, the dominant method, contrastive learning, mainly relies on an instance discrimination pretext task, which learns a global understanding of the image.
This paper incorporates local feature learning into self-supervised vision transformers via Reconstructive Pre-training (RePre).
Our RePre extends contrastive frameworks by adding a branch for reconstructing raw image pixels in parallel with the existing contrastive objective.
RePre is equipped with a lightweight convolution-based decoder that fuses the multi-hierarchy features from the transformer encoder.
The multi-hierarchy features provide rich supervision from low to high semantic levels, which is crucial for our RePre.
Our RePre brings decent improvements on various contrastive frameworks with different vision transformer architectures.
Transfer performance in downstream tasks outperforms supervised pre-training and state-of-the-art (SOTA) self-supervised counterparts.
|
Luya Wang, Feng Liang, Yangguang Li, Honggang Zhang, Wanli Ouyang, Jing Shao
| null | null | 2,022 |
ijcai
|
Multi-scale Spatial Representation Learning via Recursive Hermite Polynomial Networks
| null |
Multi-scale representation learning aims to leverage diverse features from different layers of Convolutional Neural Networks (CNNs) for boosting the feature robustness to scale variance. For dense prediction tasks, two key properties should be satisfied: the high spatial variance across convolutional layers, and the sub-scale granularity inside a convolutional layer for fine-grained features. To pursue the two properties, this paper proposes Recursive Hermite Polynomial Networks (RHP-Nets for short). The proposed RHP-Nets consist of two major components: 1) a dilated convolution to maintain the spatial resolution across layers, and 2) a family of Hermite polynomials over a subset of dilated grids, which recursively constructs sub-scale representations to avoid the artifacts caused by naively applying the dilation convolution. The resultant sub-scale granular features are fused via trainable Hermite coefficients to form the multi-resolution representations that can be fed into the next deeper layer, and thus allowing feature interchanging at all levels. Extensive experiments are conducted to demonstrate the efficacy of our design, and reveal its superiority over state-of-the-art alternatives on a variety of image recognition tasks. Besides, introspective studies are provided to further understand the properties of our method.
|
Lin (Yuanbo) Wu, Deyin Liu, Xiaojie Guo, Richang Hong, Liangchen Liu, Rui Zhang
| null | null | 2,022 |
ijcai
|
PACE: Predictive and Contrastive Embedding for Unsupervised Action Segmentation
| null |
Action segmentation, inferring temporal positions of human actions in an untrimmed video, is an important prerequisite for various video understanding tasks. Recently, unsupervised action segmentation (UAS) has emerged as a more challenging task due to the unavailability of frame-level annotations. Existing clustering- or prediction-based UAS approaches suffer from either over-segmentation or overfitting, leading to unsatisfactory results. To address these problems, we propose Predictive And Contrastive Embedding (PACE), a unified UAS framework leveraging both predictability and similarity information for more accurate action segmentation. On the basis of an auto-regressive transformer encoder, predictive embeddings are learned by exploiting the predictability of video context, while contrastive embeddings are generated by leveraging the similarity of adjacent short video clips. Extensive experiments on three challenging benchmarks demonstrate the superiority of our method, with up to 26.9% improvements in F1-score over the state of the art.
|
Jiahao Wang, Jie Qin, Yunhong Wang, Annan Li
| null | null | 2,022 |
ijcai
|
Boosting Multi-Label Image Classification with Complementary Parallel Self-Distillation
| null |
Multi-Label Image Classification (MLIC) approaches usually exploit label correlations to achieve good performance. However, emphasizing correlation such as co-occurrence may overlook discriminative features and lead to model overfitting. In this study, we propose a generic framework named Parallel Self-Distillation (PSD) for boosting MLIC models. PSD decomposes the original MLIC task into several simpler MLIC sub-tasks via two elaborated complementary task decomposition strategies named Co-occurrence Graph Partition (CGP) and Dis-occurrence Graph Partition (DGP). Then, MLIC models of fewer categories are trained with these sub-tasks in parallel for respectively learning the joint patterns and the category-specific patterns of labels. Finally, knowledge distillation is leveraged to learn a compact global ensemble of full categories with these learned patterns, reconciling label correlation exploitation and model overfitting. Extensive results on the MS-COCO and NUS-WIDE datasets demonstrate that our framework can be easily plugged into many MLIC approaches and improves the performance of recent state-of-the-art approaches. The source code is released at https://github.com/Robbie-Xu/CPSD.
|
Jiazhi Xu, Sheng Huang, Fengtao Zhou, Luwen Huangfu, Daniel Zeng, Bo Liu
| null | null | 2,022 |
ijcai
|
A Decoder-free Transformer-like Architecture for High-efficiency Single Image Deraining
| null |
Despite the success of vision Transformers for the image deraining task, they are limited by heavy computation and slow runtime. In this work, we show that the Transformer decoder is not necessary and incurs huge computational costs. Therefore, we revisit the standard vision Transformer as well as its successful variants and propose a novel Decoder-Free Transformer-Like (DFTL) architecture for fast and accurate single image deraining. Specifically, we adopt a cheap linear projection to represent visual information with lower computational costs than previous linear projections. Then we replace the standard Transformer decoder block with the designed Progressive Patch Merging (PPM) module, which attains comparable performance and efficiency. DFTL significantly alleviates the computation and GPU memory requirements through the proposed modules. Extensive experiments demonstrate the superiority of DFTL compared with competitive Transformer architectures, e.g., ViT, DETR, IPT, Uformer, and Restormer. The code is available at https://github.com/XiaoXiao-Woo/derain.
|
Xiao Wu, Ting-Zhu Huang, Liang-Jian Deng, Tian-Jing Zhang
| null | null | 2,022 |
ijcai
|
SCMT: Self-Correction Mean Teacher for Semi-supervised Object Detection
| null |
Semi-Supervised Object Detection (SSOD) aims to improve performance by leveraging a large amount of unlabeled data. Existing works usually adopt the teacher-student framework to enforce the student to learn consistent predictions over the pseudo-labels generated by the teacher. However, the performance of the student model is limited since noise inherently exists in pseudo-labels. In this paper, we investigate the causes and effects of noisy pseudo-labels and propose a simple yet effective approach denoted as Self-Correction Mean Teacher (SCMT) to reduce the adverse effects. Specifically, we propose to dynamically re-weight the unsupervised loss of each of the student's proposals with additional supervision information from the teacher model, and assign smaller loss weights to possibly noisy proposals. Extensive experiments on the MS-COCO benchmark have shown the superiority of our proposed SCMT, which can significantly improve the supervised baseline by more than 11% mAP under all 1%, 5% and 10% COCO-standard settings, and surpasses state-of-the-art methods by about 1.5% mAP. Even under the challenging COCO-additional setting, SCMT still improves the supervised baseline by 4.9% mAP, and significantly outperforms previous methods by 1.2% mAP, achieving a new state-of-the-art performance.
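A minimal sketch of the described re-weighting idea, assuming per-proposal unsupervised losses and matching teacher confidence scores are already available; how SCMT actually derives and normalizes these scores is not reproduced here.

```python
import torch

def self_correcting_unsup_loss(per_proposal_loss: torch.Tensor,
                               teacher_conf: torch.Tensor) -> torch.Tensor:
    """Re-weight the unsupervised loss of each student proposal by the teacher's
    confidence so that likely-noisy proposals contribute less.

    per_proposal_loss: (P,) losses of the student's proposals against pseudo-labels.
    teacher_conf:      (P,) teacher confidence for the corresponding proposals.
    """
    weights = teacher_conf / teacher_conf.sum().clamp_min(1e-8)
    return (weights * per_proposal_loss).sum()

losses = torch.rand(128)       # stand-in per-proposal unsupervised losses
conf = torch.rand(128)         # stand-in teacher confidences
unsup_loss = self_correcting_unsup_loss(losses, conf)
```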
|
Feng Xiong, Jiayi Tian, Zhihui Hao, Yulin He, Xiaofeng Ren
| null | null | 2,022 |
ijcai
|
Eliminating Backdoor Triggers for Deep Neural Networks Using Attention Relation Graph Distillation
| null |
Due to the prosperity of Artificial Intelligence (AI) techniques, more and more backdoors are designed by adversaries to attack Deep Neural Networks (DNNs). Although the state-of-the-art method Neural Attention Distillation (NAD) can effectively erase backdoor triggers from DNNs, it still suffers from non-negligible Attack Success Rate (ASR) together with lowered classification ACCuracy (ACC), since NAD focuses on backdoor defense using attention features (i.e., attention maps) of the same order. In this paper, we introduce a novel backdoor defense framework named Attention Relation Graph Distillation (ARGD), which fully explores the correlation among attention features with different orders using our proposed Attention Relation Graphs (ARGs). Based on the alignment of ARGs between teacher and student models during knowledge distillation, ARGD can more effectively eradicate backdoors than NAD. Comprehensive experimental results show that, against six latest backdoor attacks, ARGD outperforms NAD by up to 94.85% reduction in ASR, while ACC can be improved by up to 3.23%.
|
Jun Xia, Ting Wang, Jiepin Ding, Xian Wei, Mingsong Chen
| null | null | 2,022 |
ijcai
|
BiCo-Net: Regress Globally, Match Locally for Robust 6D Pose Estimation
| null |
The challenges of learning a robust 6D pose function lie in 1) severe occlusion and 2) systematic noise in depth images. Inspired by the success of point-pair features, the goal of this paper is to recover the 6D pose of an object instance segmented from RGB-D images by locally matching pairs of oriented points between the model and camera space. To this end, we propose a novel Bi-directional Correspondence Mapping Network (BiCo-Net) to first generate point clouds guided by a typical pose regression, which can thus incorporate pose-sensitive information to optimize the generation of local coordinates and their normal vectors. As pose predictions via geometric computation only rely on one single pair of local oriented points, our BiCo-Net can achieve robustness against sparse and occluded point clouds. An ensemble of redundant pose predictions from local matching and direct pose regression further refines the final pose output against noisy observations. Experimental results on three popular benchmark datasets verify that our method achieves state-of-the-art performance, especially for the more challenging severely occluded scenes. Source codes are available at https://github.com/Gorilla-Lab-SCUT/BiCo-Net.
|
Zelin Xu, Yichen Zhang, Ke Chen, Kui Jia
| null | null | 2,022 |
ijcai
|
Webly-Supervised Fine-Grained Recognition with Partial Label Learning
| null |
The task of webly-supervised fine-grained recognition is to boost the accuracy of classifying subordinate categories (e.g., different bird species) by utilizing freely available but noisy web data. As label noise significantly hurts network training, it is desirable to distinguish and eliminate noisy images. In this paper, we propose two strategies, i.e., open-set noise removal and closed-set noise correction, to handle these two kinds of web noise for fine-grained recognition. Specifically, for open-set noise removal, we utilize a pre-trained deep model to perform deep descriptor transformation to estimate the positive correlation between the web images, and detect the open-set noise based on the correlation values. Regarding closed-set noise correction, we develop a top-k recall optimization loss that first assigns a label set to each web image to reduce the impact of hard label assignment for closed-set noise. Then, we further propose to correct each such sample by recovering its true single label from the label set, from a partial label learning perspective. Experiments on several webly-supervised fine-grained benchmark datasets show that our method clearly outperforms existing state-of-the-art methods.
|
Yu-Yan Xu, Yang Shen, Xiu-Shen Wei, Jian Yang
| null | null | 2,022 |
ijcai
|
Double-Check Soft Teacher for Semi-Supervised Object Detection
| null |
In the semi-supervised object detection task, due to the scarcity of labeled data and the diversity and complexity of the objects to be detected, the quality of pseudo-labels generated by existing methods for unlabeled data is relatively low, which severely restricts the performance of semi-supervised object detection. In this paper, we revisit the pseudo-labeling based Teacher-Student mutual learning framework for semi-supervised object detection and identify that the inconsistency of the location and features of the candidate object proposals between the Teacher and the Student branches is the fatal cause of the low quality of the pseudo labels. To address this issue, we propose a simple yet effective technique within the mainstream teacher-student framework, called Double-Check Soft Teacher, to overcome the harm caused by the insufficient quality of pseudo labels. Specifically, our proposed method leverages the teacher model to generate pseudo labels for the student model. In particular, the candidate boxes generated by the student model based on the pseudo labels are sent to the teacher model for a "double check", and the teacher model then outputs probabilistic soft labels with a background class for those candidate boxes, which are used to train the student model. Together with a pseudo labeling mechanism based on the sum of the top-K prediction scores, which improves the recall rate of pseudo labels, Double-Check Soft Teacher consistently surpasses state-of-the-art methods by significant margins on the MS-COCO benchmark, pushing the new state of the art. Source codes are available at https://github.com/wkfdb/DCST.
|
Kuo Wang, Yuxiang Nie, Chaowei Fang, Chengzhi Han, Xuewen Wu, Xiaohui Wang Wang, Liang Lin, Fan Zhou, Guanbin Li
| null | null | 2,022 |
ijcai
|
Perceptual Learned Video Compression with Recurrent Conditional GAN
| null |
This paper proposes a Perceptual Learned Video Compression (PLVC) approach with recurrent conditional GAN. We employ the recurrent auto-encoder-based compression network as the generator, and most importantly, we propose a recurrent conditional discriminator, which judges raw vs. compressed video conditioned on both spatial and temporal features, including the latent representation, temporal motion and hidden states in recurrent cells. This way, the adversarial training pushes the generated video to be not only spatially photo-realistic but also temporally consistent with the groundtruth and coherent among video frames. The experimental results show that the learned PLVC model compresses video with good perceptual quality at low bit-rate, and that it outperforms the official HEVC test model (HM 16.20) and the existing learned video compression approaches for several perceptual quality metrics and user studies. The project page is available at https://github.com/RenYang-home/PLVC.
|
Ren Yang, Radu Timofte, Luc Van Gool
| null | null | 2,022 |
ijcai
|
CrowdFormer: An Overlap Patching Vision Transformer for Top-Down Crowd Counting
| null |
Crowd counting methods typically predict a density map as an intermediate representation of counting, and achieve good performance. However, due to the perspective phenomenon, there is scale variation in real scenes, which causes density map-based methods to suffer from a severe scene generalization problem because only a limited number of scales are fitted in density map prediction and generation. To address this issue, we propose a novel vision transformer network, i.e., CrowdFormer, and a density kernels fusion framework for more accurate density map estimation and generation, respectively. Thereafter, we incorporate these two innovations into an adaptive learning system, which can take both the annotation dot map and the original image as input, and jointly learns the density map estimator and generator within an end-to-end framework. The experimental results demonstrate that the proposed model achieves the state of the art in terms of MAE and MSE (e.g., an MAE of 67.1 and an MSE of 301.6 on the NWPU-Crowd dataset), and confirm the effectiveness of the two proposed designs. The code is available at https://github.com/special-yang/Top_Down-CrowdCounting.
|
Shaopeng Yang, Weiyu Guo, Yuheng Ren
| null | null | 2,022 |
ijcai
|
Multi-level Consistency Learning for Semi-supervised Domain Adaptation
| null |
Semi-supervised domain adaptation (SSDA) aims to apply knowledge learned from a fully labeled source domain to a scarcely labeled target domain. In this paper, we propose a Multi-level Consistency Learning (MCL) framework for SSDA. Specifically, our MCL regularizes the consistency of different views of target domain samples at three levels: (i) at inter-domain level, we robustly and accurately align the source and target domains using a prototype-based optimal transport method that utilizes the pros and cons of different views of target samples; (ii) at intra-domain level, we facilitate the learning of both discriminative and compact target feature representations by proposing a novel class-wise contrastive clustering loss; (iii) at sample level, we follow standard practice and improve the prediction accuracy by conducting a consistency-based self-training. Empirically, we verified the effectiveness of our MCL framework on three popular SSDA benchmarks, i.e., VisDA2017, DomainNet, and Office-Home datasets, and the experimental results demonstrate that our MCL framework achieves the state-of-the-art performance.
|
Zizheng Yan, Yushuang Wu, Guanbin Li, Yipeng Qin, Xiaoguang Han, Shuguang Cui
| null | null | 2,022 |
ijcai
|
Entity-aware and Motion-aware Transformers for Language-driven Action Localization
| null |
Language-driven action localization in videos is a challenging task that involves not only visual-linguistic matching but also action boundary prediction. Recent progress has been achieved through aligning language queries to video segments, but estimating precise boundaries is still under-explored. In this paper, we propose entity-aware and motion-aware Transformers that progressively localize actions in videos by first coarsely locating clips with entity queries and then finely predicting exact boundaries in a shrunken temporal region with motion queries. The entity-aware Transformer incorporates the textual entities into visual representation learning via cross-modal and cross-frame attentions to facilitate attending action-related video clips. The motion-aware Transformer captures fine-grained motion changes at multiple temporal scales via integrating long short-term memory into the self-attention module to further improve the precision of action boundary prediction. Extensive experiments on the Charades-STA and TACoS datasets demonstrate that our method achieves better performance than existing methods.
|
Shuo Yang, Xinxiao Wu
| null | null | 2,022 |
ijcai
|
Learning Implicit Body Representations from Double Diffusion Based Neural Radiance Fields
| null |
In this paper, we present a novel double diffusion based neural radiance field, dubbed DD-NeRF, to reconstruct human body geometry and render the human body appearance in novel views from a sparse set of images. We first propose a double diffusion mechanism to achieve expressive representations of input images by fully exploiting human body priors and image appearance details at two levels. At the coarse level, we first model the coarse human body poses and shapes via an unclothed 3D deformable vertex model as guidance. At the fine level, we present a multi-view sampling network to capture subtle geometric deformations and image detailed appearances, such as clothing and hair, from multiple input views. Considering the sparsity of the two level features, we diffuse them into feature volumes in the canonical space to construct neural radiance fields. Then, we present a signed distance function (SDF) regression network to construct body surfaces from the diffused features. Thanks to our double diffused representations, our method can even synthesize novel views of unseen subjects. Experiments on various datasets demonstrate that our approach outperforms the state-of-the-art in both geometric reconstruction and novel view synthesis.
|
Guangming Yao, Hongzhi Wu, Yi Yuan, Lincheng Li, Kun Zhou, Xin Yu
| null | null | 2,022 |
ijcai
|
Learning to Hash Naturally Sorts
| null |
Learning to hash can be pictured as a list-wise sorting problem. Its testing metrics, e.g., mean average precision, count on a sorted candidate list ordered by pair-wise code similarity. However, scarcely does one train a deep hashing model with the sorted results end-to-end because of the non-differentiable nature of the sorting operation. This inconsistency between the objectives of training and test may lead to sub-optimal performance since the training loss often fails to reflect the actual retrieval metric. In this paper, we tackle this problem by introducing Naturally-Sorted Hashing (NSH). We sort the Hamming distances of samples' hash codes and accordingly gather their latent representations for self-supervised training. Thanks to recent advances in differentiable sorting approximations, the hash head receives gradients from the sorter so that the hash encoder can be optimized along with the training procedure. Additionally, we describe a novel Sorted Noise-Contrastive Estimation (SortedNCE) loss that selectively picks positive and negative samples for contrastive learning, which allows NSH to mine data semantic relations during training in an unsupervised manner. Our extensive experiments show the proposed NSH model significantly outperforms existing unsupervised hashing methods on three benchmark datasets.
|
Jiaguo Yu, Yuming Shen, Menghan Wang, Haofeng Zhang, Philip H.S. Torr
| null | null | 2,022 |
ijcai
|
Learning Prototype via Placeholder for Zero-shot Recognition
| null |
Zero-shot learning (ZSL) aims to recognize unseen classes by exploiting semantic descriptions shared between seen classes and unseen classes.
Current methods show that it is effective to learn visual-semantic alignment by projecting semantic embeddings into the visual space as class prototypes. However, such a projection function is only concerned with seen classes. When applied to unseen classes, the prototypes often perform suboptimally due to domain shift.
In this paper, we propose to learn prototypes via placeholders, termed LPL, to eliminate the domain shift between seen and unseen classes. Specifically, we combine seen classes to hallucinate new classes which act as placeholders of the unseen classes in the visual and semantic space. Placed between seen classes, the placeholders encourage prototypes of seen classes to be highly dispersed, so that more space is spared for the insertion of well-separated unseen ones. Empirically, well-separated prototypes help counteract the visual-semantic misalignment caused by domain shift.
Furthermore, we exploit a novel semantic-oriented fine-tuning method to guarantee the semantic reliability of placeholders.
Extensive experiments on five benchmark datasets demonstrate the significant performance gain of LPL over the state-of-the-art methods.
|
Zaiquan Yang, Yang Liu, Wenjia Xu, Chong Huang, Lei Zhou, Chao Tong
| null | null | 2,022 |
ijcai
|
Weakening the Influence of Clothing: Universal Clothing Attribute Disentanglement for Person Re-Identification
| null |
Most existing Re-ID studies focus on the short-term cloth-consistent setting and are thus dominated by the visual appearance of clothing. However, in reality the same person may wear different clothes and different people may wear the same clothes, which invalidates these methods. To tackle the challenge of clothes change, we propose a Universal Clothing Attribute Disentanglement network (UCAD) which can effectively weaken the influence of clothing (identity-unrelated) and force the model to learn identity-related features that are unrelated to the worn clothing. For further study of Re-ID in cloth-changing scenarios, we construct a large-scale dataset called CSCC with the following unique features: (1) Severe: a large number of people change clothes across four seasons. (2) High definition: the resolution of the cameras ranges from 1920×1080 to 3840×2160, which ensures that the recorded people are clear. Furthermore, we provide two variants of CSCC considering different degrees of cloth-changing, namely moderate and severe, so that researchers can effectively evaluate their models from various aspects. Experiments on several cloth-changing datasets including our CSCC and the short-term dataset Market-1501 prove the superiority of UCAD. The dataset is available at https://github.com/yomin-y/UCAD.
|
Yuming Yan, Huimin Yu, Shuzhao Li, Zhaohui Lu, Jianfeng He, Haozhuo Zhang, Runfa Wang
| null | null | 2,022 |
ijcai
|
Multi-Proxy Learning from an Entropy Optimization Perspective
| null |
Deep Metric Learning, a task that learns a feature embedding space where semantically similar samples are located closer than dissimilar samples, is a cornerstone of many computer vision applications. Most of the existing proxy-based approaches exploit the global context by learning a single proxy for each training class, which struggles to capture the complex non-uniform data distribution with different patterns. In this work, we present an easy-to-implement framework to effectively capture the local neighbor relationships by learning multiple proxies for each class that collectively approximate the intra-class distribution. In the context of large intra-class visual diversity, we revisit entropy learning under the multi-proxy learning framework and provide a training routine that both minimizes the entropy of the intra-class probability distribution and maximizes the entropy of the inter-class probability distribution. In this way, our model is able to better capture the intra-class variations and smooth the inter-class differences, and thus facilitates extracting more semantic feature representations for downstream tasks. Extensive experimental results demonstrate that the proposed approach achieves competitive performance. Codes and an appendix are provided.
|
Yunlong Yu, Dingyi Zhang, Yingming Li, Zhongfei Zhang
| null | null | 2,022 |
ijcai
|
Learning Sparse Interpretable Features For NAS Scoring From Liver Biopsy Images
| null |
Liver biopsy images play a key role in the diagnosis of global non-alcoholic fatty liver disease (NAFLD). The NAFLD activity score (NAS) on liver biopsy images grades the amount of histological findings that reflect the progression of NAFLD. However, liver biopsy image analysis remains a challenging task due to its complex tissue structures and sparse distribution of histological findings. In this paper, we propose a sparse interpretable feature learning method (SparseX) to efficiently estimate NAS scores. First, we introduce an interpretable spatial sampling strategy based on histological features to effectively select informative tissue
regions containing tissue alterations. Then, SparseX formulates the feature learning as a low-rank decomposition problem. Non-negative matrix factorization (NMF)-based attributes learning is embedded into a deep network to compress and select sparse features for a small portion of tissue alterations contributing to diagnosis. Experiments conducted on the internal Liver-NAS and public SteatosisRaw datasets show the effectiveness of the proposed method in terms of classification performance and interpretability.
|
Chong Yin, Siqi Liu, Vincent Wai-Sun Wong, Pong C Yuen
| null | null | 2,022 |
ijcai
|
To Fold or Not to Fold: a Necessary and Sufficient Condition on Batch-Normalization Layers Folding
| null |
Batch-Normalization (BN) layers have become fundamental components in ever more complex deep neural network architectures. Such models require acceleration processes for deployment on edge devices. However, BN layers add computation bottlenecks due to their sequential operation processing: thus, a key, yet often overlooked, component of the acceleration process is BN layer folding. In this paper, we demonstrate that current BN folding approaches are suboptimal in terms of how many layers can be removed. We therefore provide a necessary and sufficient condition for BN folding and a corresponding optimal algorithm. The proposed approach systematically outperforms existing baselines and makes it possible to dramatically reduce the inference time of deep neural networks.
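Independently of the paper's necessary-and-sufficient condition, the standard folding identity it builds on can be sketched as follows for the simplest case of a Conv2d immediately followed by a BatchNorm2d; the function name and the sanity check are illustrative.

```python
import torch
import torch.nn as nn

def fold_bn_into_conv(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold a BatchNorm2d that directly consumes a Conv2d output into the conv.

    Standard identity: BN(conv(x)) = conv'(x) with
        w' = w * gamma / sqrt(var + eps)
        b' = (b - mean) * gamma / sqrt(var + eps) + beta
    Only valid when nothing else branches off between the conv and the BN.
    """
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      conv.stride, conv.padding, conv.dilation, conv.groups, bias=True)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    fused.weight.data = conv.weight.data * scale.view(-1, 1, 1, 1)
    conv_bias = conv.bias.data if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.data = (conv_bias - bn.running_mean) * scale + bn.bias.data
    return fused

# sanity check: one train-mode pass populates running stats, then compare in eval mode
conv, bn = nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8)
x = torch.randn(2, 3, 16, 16)
bn(conv(x))
bn.eval()
with torch.no_grad():
    assert torch.allclose(bn(conv(x)), fold_bn_into_conv(conv, bn)(x), atol=1e-5)
```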
|
Edouard Yvinec, Arnaud Dapogny, Kevin Bailly
| null | null | 2,022 |
ijcai
|
RAPQ: Rescuing Accuracy for Power-of-Two Low-bit Post-training Quantization
| null |
We introduce a Power-of-Two post-training quantization (PTQ) method for deep neural networks that meets hardware requirements and does not call for long retraining. PTQ requires only a small set of calibration data and is easier to deploy, but results in lower accuracy than Quantization-Aware Training (QAT). Power-of-Two quantization can convert the multiplications introduced by quantization and dequantization into bit-shifts, which are adopted by many efficient accelerators. However, the Power-of-Two scale has fewer candidate values, which leads to larger rounding or clipping errors. We propose a novel Power-of-Two PTQ framework, dubbed RAPQ, which dynamically adjusts the Power-of-Two scales of the whole network instead of statically determining them layer by layer. It can theoretically trade off the rounding error and clipping error of the whole network. Meanwhile, the reconstruction method in RAPQ is based on the BN information of every unit. Extensive experiments on ImageNet prove the excellent performance of our proposed method. Without bells and whistles, RAPQ reaches accuracies of 65% and 48% on ResNet-18 and MobileNetV2 respectively with INT2 weights and INT4 activations. We are the first to propose PTQ for the more constrained but hardware-friendly Power-of-Two quantization and prove that it can achieve nearly the same accuracy as SOTA PTQ methods. The code will be released.
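A minimal sketch of what a Power-of-Two scale means in practice: the scale is restricted to 2**k, so de-quantization can be realized as a bit-shift. Here the exponent is chosen per tensor by a brute-force MSE search, which is only an illustration; RAPQ instead adjusts the exponents jointly across the whole network with BN-based reconstruction.

```python
import math
import torch

def po2_quantize(w: torch.Tensor, n_bits: int = 2):
    """Symmetric quantization of a weight tensor with a Power-of-Two scale 2**k.

    The exponent k is picked by a small brute-force search minimizing the MSE of
    the reconstruction q * 2**k against w; this per-tensor search only illustrates
    the constraint, it is not the RAPQ optimization.
    """
    qmax = 2 ** (n_bits - 1) - 1                       # e.g. INT2 -> clamp to [-2, 1]
    base = math.ceil(math.log2(w.abs().max().item() / qmax))
    best = None
    for k in range(base - 3, base + 2):                # few candidates around the max-based exponent
        s = 2.0 ** k
        q = torch.clamp(torch.round(w / s), -qmax - 1, qmax)
        err = ((q * s - w) ** 2).mean().item()
        if best is None or err < best[0]:
            best = (err, k, q, s)
    _, k, q, s = best
    return q, s, k

w = torch.randn(64, 3, 3, 3)
q, s, k = po2_quantize(w, n_bits=2)
print(f"scale = 2**{k}, reconstruction MSE = {((q * s - w) ** 2).mean().item():.4f}")
```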
|
Hongyi Yao, Pu Li, Jian Cao, Xiangcheng Liu, Chenying Xie, Bingzhang Wang
| null | null | 2,022 |
ijcai
|
Towards Universal Backward-Compatible Representation Learning
| null |
Conventional model upgrades for visual search systems require an offline refresh of gallery features by feeding gallery images into the new model (dubbed as “backfill”), which is time-consuming and expensive, especially in large-scale applications. The task of backward-compatible representation learning is therefore introduced to support backfill-free model upgrades, where the new query features are interoperable with the old gallery features. Despite their success, previous works only investigated a closed-set training scenario (i.e., the new training set shares the same classes as the old one) and fall short in more realistic and challenging open-set scenarios. To this end, we first introduce a new problem of universal backward-compatible representation learning, covering all possible data splits in model upgrades. We further propose a simple yet effective method, dubbed Universal Backward-Compatible Training (UniBCT), with a novel structural prototype refinement algorithm, to learn compatible representations in all kinds of model upgrading benchmarks in a unified manner. Comprehensive experiments on the large-scale face recognition datasets MS1Mv3 and IJB-C fully demonstrate the effectiveness of our method. Source code is available at https://github.com/TencentARC/OpenCompatible.
|
Binjie Zhang, Yixiao Ge, Yantao Shen, Shupeng Su, Fanzi Wu, Chun Yuan, Xuyuan Xu, Yexin Wang, Ying Shan
| null | null | 2,022 |
ijcai
|
Improving Transferability of Adversarial Examples with Virtual Step and Auxiliary Gradients
| null |
Deep neural networks have been demonstrated to be vulnerable to adversarial examples, which fool networks by adding human-imperceptible perturbations to benign examples. At present, the practical transfer-based black-box attacks are attracting significant attention. However, most existing transfer-based attacks achieve only relatively limited success rates. We propose to improve the transferability of adversarial examples through the use of a virtual step and auxiliary gradients. Here, the “virtual step” refers to using an unusual step size and clipping adversarial perturbations only in the last iteration, while the “auxiliary gradients” refer to using not only gradients corresponding to the ground-truth label (for untargeted attacks), but also gradients corresponding to some other labels to generate adversarial perturbations. Our proposed virtual step and auxiliary gradients can be easily integrated into existing gradient-based attacks. Extensive experiments on ImageNet show that the adversarial examples crafted by our method can effectively transfer to different networks. For single-model attacks, our method outperforms the state-of-the-art baselines, improving the success rates by a large margin of 12%~28%. Our code is publicly available at https://github.com/mingcheung/Virtual-Step-and-Auxiliary-Gradients.
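A hedged sketch of the two ingredients described above in an iterative FGSM-style attack: clipping is applied only in the last iteration, and the loss mixes the ground-truth term with a few auxiliary-label terms. The choice of auxiliary labels (top-scoring classes) and their equal weighting are assumptions, not the paper's exact scheme.

```python
import torch
import torch.nn.functional as F

def vs_ag_attack(model, x, y, eps=8/255, step=2/255, iters=10, n_aux=3):
    """Iterative sketch combining a "virtual step" (perturbations are projected
    into the eps-ball and the valid pixel range only at the last iteration) with
    "auxiliary gradients" (loss terms for a few extra labels besides the ground
    truth). Step size, n_aux and the label-selection rule are assumptions."""
    x_adv = x.clone().detach()
    for i in range(iters):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        loss = F.cross_entropy(logits, y)
        aux_labels = logits.detach().topk(n_aux, dim=1).indices   # may overlap with y
        for j in range(n_aux):
            loss = loss + F.cross_entropy(logits, aux_labels[:, j]) / n_aux
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + step * grad.sign()).detach()
        if i == iters - 1:                                        # clip only once, at the end
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv

# usage sketch (model is any image classifier in eval mode):
# adv = vs_ag_attack(model, images, labels)
```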
|
Ming Zhang, Xiaohui Kuang, Hu Li, Zhendong Wu, Yuanping Nie, Gang Zhao
| null | null | 2,022 |
ijcai
|
Few-Shot Adaptation of Pre-Trained Networks for Domain Shift
| null |
Deep networks are prone to performance degradation when there is a domain shift between the source (training) data and target (test) data. Recent test-time adaptation methods update batch normalization layers of pre-trained source models deployed in new target environments with streaming data. Although these methods can adapt on-the-fly without first collecting a large target domain dataset, their performance is dependent on streaming conditions such as mini-batch size and class-distribution which can be unpredictable in practice. In this work, we propose a framework for few-shot domain adaptation to address the practical challenges of data-efficient adaptation. Specifically, we propose a constrained optimization of feature normalization statistics in pre-trained source models supervised by a small target domain support set. Our method is easy to implement and improves source model performance with as little as one sample per class for classification tasks. Extensive experiments on 5 cross-domain classification and 4 semantic segmentation datasets show that our proposed method achieves more accurate and reliable performance than test-time adaptation, while not being constrained by streaming conditions.
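The core of adapting normalization statistics with a tiny support set can be sketched as follows; the paper performs a constrained, label-supervised optimization of these statistics, whereas this illustration simply blends the source running statistics with support-set statistics using an assumed coefficient alpha.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def adapt_bn_stats(model: nn.Module, support_x: torch.Tensor, alpha: float = 0.5):
    """Blend the source model's BatchNorm running statistics with statistics
    estimated from a small target-domain support set.

    This is only the unconstrained core of the idea: a plain convex blend with
    coefficient alpha, which is an assumption made for illustration.
    """
    stats = {}
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            stats[m] = (m.running_mean.clone(), m.running_var.clone(), m.momentum)
            m.momentum = 1.0                      # next forward overwrites running stats
    model.train()
    model(support_x)                              # support-set forward pass updates stats
    model.eval()
    for m, (mu_s, var_s, mom) in stats.items():
        m.running_mean.copy_(alpha * m.running_mean + (1 - alpha) * mu_s)
        m.running_var.copy_(alpha * m.running_var + (1 - alpha) * var_s)
        m.momentum = mom

# usage: a handful of labelled target images (e.g. one per class)
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU())
adapt_bn_stats(model, torch.randn(16, 3, 32, 32), alpha=0.5)
```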
|
Wenyu Zhang, Li Shen, Wanyue Zhang, Chuan-Sheng Foo
| null | null | 2,022 |
ijcai
|
Enhancing the Transferability of Adversarial Examples with Random Patch
| null |
Adversarial examples can fool deep learning models, and their transferability is critical for attacking black-box models in real-world scenarios. Existing state-of-the-art transferable adversarial attacks tend to exploit intrinsic features of objects to generate adversarial examples. This paper proposes the Random Patch Attack (RPA) to significantly improve the transferability of adversarial examples through patch-wise random transformations that effectively highlight important intrinsic features of objects. Specifically, we introduce random patch transformations to original images to vary model-specific features. Important object-related features are preserved after aggregating the transformed images since they stay consistent across multiple transformations, while model-specific elements are neutralized. The obtained essential features steer noise to perturb the object-related regions, generating adversarial examples with superior transferability across different models. Extensive experimental results demonstrate the effectiveness of the proposed RPA. Compared to the state-of-the-art transferable attacks, our attacks improve the black-box attack success rate by 2.9% against normally trained models, 4.7% against defense models, and 4.6% against vision transformers on average, reaching maximums of 99.1%, 93.2%, and 87.8%, respectively.
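A rough sketch of the patch-wise random transformation and gradient aggregation idea, using random patch dropping as the single transformation (the paper employs a richer family) and assuming the image height and width are divisible by the patch size; the aggregated gradient would then drive a standard sign-based perturbation update.

```python
import torch
import torch.nn.functional as F

def random_patch_grad(model, x, y, n_copies=8, patch=32, drop_p=0.3):
    """Average input gradients over several patch-wise randomly transformed copies.

    Each copy splits the image into patch x patch cells and randomly zeroes some
    of them. Model-specific gradient components vary across copies and tend to
    cancel, while object-related components survive the averaging.
    """
    grads = torch.zeros_like(x)
    b, c, h, w = x.shape                      # h and w assumed divisible by `patch`
    for _ in range(n_copies):
        cell = (torch.rand(b, 1, h // patch, w // patch) > drop_p).float()
        mask = F.interpolate(cell, size=(h, w), mode="nearest")
        xt = (x * mask).detach().requires_grad_(True)
        loss = F.cross_entropy(model(xt), y)
        grads += torch.autograd.grad(loss, xt)[0]
    return grads / n_copies

# usage sketch: adv = torch.clamp(x + eps * random_patch_grad(model, x, y).sign(), 0, 1)
```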
|
Yaoyuan Zhang, Yu-an Tan, Tian Chen, Xinrui Liu, Quanxin Zhang, Yuanzhang Li
| null | null | 2,022 |
ijcai
|
S2 Transformer for Image Captioning
| null |
Transformer-based architectures with grid features represent the state of the art in visual and language reasoning tasks, such as visual question answering and image-text matching. However, directly applying them to image captioning may result in the loss of spatial and fine-grained semantic information. Their applicability to image captioning is still largely under-explored. Towards this goal, we propose a simple yet effective method, the Spatial- and Scale-aware Transformer (S2 Transformer) for image captioning. Specifically, we first propose a Spatial-aware Pseudo-supervised (SP) module, which resorts to feature clustering to help preserve spatial information for grid features. Next, to maintain the model size and produce superior results, we build a simple weighted residual connection, named the Scale-wise Reinforcement (SR) module, to simultaneously explore both low- and high-level encoded features with rich semantics. Extensive experiments on the MSCOCO benchmark demonstrate that our method achieves new state-of-the-art performance without introducing excessive parameters compared with the vanilla transformer. The source code is available at https://github.com/zchoi/S2-Transformer
|
Pengpeng Zeng, Haonan Zhang, Jingkuan Song, Lianli Gao
| null | null | 2,022 |
ijcai
|
Plane Geometry Diagram Parsing
| null |
Geometry diagram parsing plays a key role in geometry problem solving, wherein the primitive extraction and relation parsing remain challenging due to the complex layout and between-primitive relationship. In this paper, we propose a powerful diagram parser based on deep learning and graph reasoning. Specifically, a modified instance segmentation method is proposed to extract geometric primitives, and the graph neural network (GNN) is leveraged to realize relation parsing and primitive classification incorporating geometric features and prior knowledge. All the modules are integrated into an end-to-end model called PGDPNet to perform all the sub-tasks simultaneously. In addition, we build a new large-scale geometry diagram dataset named PGDP5K with primitive level annotations. Experiments on PGDP5K and an existing dataset IMP-Geometry3K show that our model outperforms state-of-the-art methods in four sub-tasks remarkably. Our code, dataset and appendix material are available at https://github.com/mingliangzhang2018/PGDP.
|
Ming-Liang Zhang, Fei Yin, Yi-Han Hao, Cheng-Lin Liu
| null | null | 2,022 |
ijcai
|
Region-level Contrastive and Consistency Learning for Semi-Supervised Semantic Segmentation
| null |
Current semi-supervised semantic segmentation methods mainly focus on designing pixel-level consistency and contrastive regularization. However, pixel-level regularization is sensitive to noise from pixels with incorrect predictions, and pixel-level contrastive regularization has a large memory and computational cost. To address these issues, we propose a novel region-level contrastive and consistency learning framework (RC^2L) for semi-supervised semantic segmentation. Specifically, we first propose a Region Mask Contrastive (RMC) loss and a Region Feature Contrastive (RFC) loss to realize region-level contrastive learning. Furthermore, a Region Class Consistency (RCC) loss and a Semantic Mask Consistency (SMC) loss are proposed for achieving region-level consistency. Based on the proposed region-level contrastive and consistency regularization, we develop the RC^2L framework for semi-supervised semantic segmentation and evaluate it on two challenging benchmarks (PASCAL VOC 2012 and Cityscapes), outperforming the state of the art.
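The following sketch illustrates generic region-level contrastive learning via masked average pooling over predicted regions of two augmented views; it is not the exact RMC/RFC formulation, and regions of classes absent from a prediction should additionally be masked out in a real implementation.

```python
import torch
import torch.nn.functional as F

def region_features(feat: torch.Tensor, pred: torch.Tensor, num_classes: int):
    """Masked average pooling: one feature vector per (image, predicted class) region.
    feat: (B, D, H, W) decoder features, pred: (B, H, W) hard predictions."""
    onehot = F.one_hot(pred, num_classes).permute(0, 3, 1, 2).float()      # (B, C, H, W)
    area = onehot.sum(dim=(2, 3)).clamp_min(1.0)                           # (B, C)
    regions = torch.einsum("bdhw,bchw->bcd", feat, onehot) / area.unsqueeze(-1)
    return regions                                                         # (B, C, D)

def region_contrastive_loss(r1, r2, tau: float = 0.1):
    """InfoNCE between region features of two augmented views: the same-class region
    in the other view is the positive, all other regions are negatives."""
    z1 = F.normalize(r1.flatten(0, 1), dim=-1)          # (B*C, D)
    z2 = F.normalize(r2.flatten(0, 1), dim=-1)
    logits = z1 @ z2.t() / tau
    target = torch.arange(z1.size(0))
    return F.cross_entropy(logits, target)

feat1, feat2 = torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)
pred = torch.randint(0, 21, (2, 32, 32))
loss = region_contrastive_loss(region_features(feat1, pred, 21),
                               region_features(feat2, pred, 21))
```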
|
Jianrong Zhang, Tianyi Wu, Chuanghao Ding, Hongwei Zhao, Guodong Guo
| null | null | 2,022 |
ijcai
|
Distilling Inter-Class Distance for Semantic Segmentation
| null |
Knowledge distillation is widely adopted in semantic segmentation to reduce the computation cost. Previous knowledge distillation methods for semantic segmentation focus on pixel-wise feature alignment and intra-class feature variation distillation, neglecting to transfer the knowledge of the inter-class distance in the feature space, which is important for a pixel-wise classification task such as semantic segmentation. To address this issue, we propose an Inter-class Distance Distillation (IDD) method to transfer the inter-class distance in the feature space from the teacher network to the student network. Furthermore, semantic segmentation is a position-dependent task, so we exploit a position information distillation module to help the student network encode more position information. Extensive experiments on three popular datasets: Cityscapes, Pascal VOC and ADE20K show that our method helps improve the accuracy of semantic segmentation models and achieves state-of-the-art performance. For example, it boosts the benchmark model (PSPNet + ResNet18) by 7.50% in accuracy on the Cityscapes dataset.
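One way to express "inter-class distance in the feature space" is a pairwise distance matrix between class centroids, matched between teacher and student as sketched below; the exact distance and normalization used by IDD may differ, and the labels are assumed to be downsampled to the feature resolution.

```python
import torch
import torch.nn.functional as F

def class_distance_matrix(feat: torch.Tensor, label: torch.Tensor, num_classes: int):
    """Pairwise distances between class centroids in the feature space.

    feat: (B, D, H, W) feature map, label: (B, H, W) ground truth already at the
    feature resolution. Classes absent from the batch get a zero centroid in this
    sketch; a real implementation would mask them out.
    """
    d = feat.permute(0, 2, 3, 1).reshape(-1, feat.size(1))          # (N, D)
    y = label.reshape(-1)
    cents = torch.stack([d[y == c].mean(0) if (y == c).any()
                         else d.new_zeros(feat.size(1)) for c in range(num_classes)])
    return torch.cdist(cents, cents)                                # (C, C)

def inter_class_distance_loss(student_feat, teacher_feat, label, num_classes=19):
    """Make the student's inter-class distance structure match the teacher's.
    Both matrices are rescaled by their means so the loss is scale-invariant."""
    ds = class_distance_matrix(student_feat, label, num_classes)
    dt = class_distance_matrix(teacher_feat, label, num_classes)
    return F.smooth_l1_loss(ds / ds.mean().clamp_min(1e-6),
                            dt / dt.mean().clamp_min(1e-6))

# usage sketch with random tensors standing in for network features
s = torch.randn(2, 128, 64, 64)
t = torch.randn(2, 256, 64, 64)        # teacher may have a different channel width
y = torch.randint(0, 19, (2, 64, 64))
loss = inter_class_distance_loss(s, t, y)
```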
|
Zhengbo Zhang, Chunluan Zhou, Zhigang Tu
| null | null | 2,022 |
ijcai
|
SAR-to-Optical Image Translation via Neural Partial Differential Equations
| null |
Synthetic Aperture Radar (SAR) is becoming prevalent in remote sensing, while SAR images are challenging for human visual perception to interpret due to the active imaging mechanism and speckle noise. Recent research on SAR-to-optical image translation provides a promising solution and has attracted increasing attention, though it still suffers from low optical image quality and geometric distortion due to the large domain gap. In this paper, we mitigate this issue from a novel perspective, i.e., neural partial differential equations (PDE). First, based on an efficient numerical scheme for solving PDEs, i.e., Taylor Central Difference (TCD), we devise a basic TCD residual block to build the backbone network, which promotes the extraction of useful information in SAR images by aggregating and enhancing features from different levels. Furthermore, inspired by Perona-Malik Diffusion (PMD), we devise a PMD neural module to implement feature diffusion through layers, aiming to remove the noise in smooth regions while preserving the geometric structures. Assembling them together, we propose a novel SAR-to-optical image translation network named S2O-NPDE, which delivers optical images with finer structures and less noise while enjoying an explainability advantage from explicit mathematical derivation. Experiments on the popular SEN1-2 dataset show that our model outperforms state-of-the-art methods in terms of both objective metrics and visual quality.
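For reference, the classical Perona-Malik diffusion that inspires the PMD module can be written as below (the standard anisotropic diffusion formulation; how the paper discretizes it into a neural module is not reproduced here):

```latex
\frac{\partial u}{\partial t} \;=\; \operatorname{div}\!\left( g\!\left(\lVert \nabla u \rVert\right)\, \nabla u \right),
\qquad
g(s) \;=\; \frac{1}{1 + (s/K)^2}
```

Here u is the (feature) image, g is an edge-stopping function and K controls the edge sensitivity: diffusion smooths low-gradient regions while being suppressed across strong edges, which matches the stated goal of removing noise in smooth regions while preserving geometric structures.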
|
Mingjin Zhang, Chengyu He, Jing Zhang, Yuxiang Yang, Xiaoqi Peng, Jie Guo
| null | null | 2,022 |
ijcai
|
CATrans: Context and Affinity Transformer for Few-Shot Segmentation
| null |
Few-shot segmentation (FSS) aims to segment novel categories given scarce annotated support images. The crux of FSS is how to aggregate dense correlations between support and query images for query segmentation while being robust to the large variations in appearance and context. To this end, previous Transformer-based methods explore global consensus either on context similarity or affinity map between support-query pairs. In this work, we effectively integrate the context and affinity information via the proposed novel Context and Affinity Transformer (CATrans) in a hierarchical architecture. Specifically, the Relation-guided Context Transformer (RCT) propagates context information from support to query images conditioned on more informative support features. Based on the observation that a huge feature distinction between support and query pairs brings barriers for context knowledge transfer, the Relation-guided Affinity Transformer (RAT) measures attention-aware affinity as auxiliary information for FSS, in which the self-affinity is responsible for more reliable cross-affinity. We conduct experiments to demonstrate the effectiveness of the proposed model, outperforming the state-of-the-art methods.
|
Shan Zhang, Tianyi Wu, Sitong Wu, Guodong Guo
| null | null | 2,022 |
ijcai
|
A Probabilistic Code Balance Constraint with Compactness and Informativeness Enhancement for Deep Supervised Hashing
| null |
Building on deep representation learning, deep supervised hashing has achieved promising performance in tasks like similarity retrieval. However, conventional code balance constraints (i.e., bit balance and bit uncorrelation) imposed to avoid overfitting and improve hash code quality are unsuitable for deep supervised hashing, owing to the inefficiency and impracticality of simultaneously learning deep data representations and hash functions under them. To address this issue, we propose probabilistic code balance constraints on deep supervised hashing to force each hash code to conform to a discrete uniform distribution. Accordingly, a Wasserstein regularizer aligns the distribution of generated hash codes to a uniform distribution. Theoretical analyses reveal that the proposed constraints form a general deep hashing framework that both enforces bit balance and bit uncorrelation and maximizes the mutual information between data input and the corresponding hash codes. Extensive empirical analyses on two benchmark datasets further demonstrate the enhanced compactness and informativeness of the hash codes produced by deep supervised hashing, improving retrieval performance (code available at: https://github.com/mumuxi/dshwr).
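A minimal sketch of a Wasserstein-style balance regularizer, treating each relaxed hash bit independently and aligning its batch distribution with the discrete uniform distribution over {-1, +1}; the per-bit simplification and the tanh relaxation are assumptions made only for illustration, not the paper's exact constraint.

```python
import torch

def per_bit_wasserstein_to_uniform(codes: torch.Tensor) -> torch.Tensor:
    """1-D Wasserstein distance between each relaxed hash bit and the discrete
    uniform distribution over {-1, +1}.

    codes: (N, K) tanh-relaxed codes. In 1-D, W1 reduces to comparing sorted
    samples with the target quantiles: for a balanced bit, half of a sorted
    batch should sit at -1 and half at +1.
    """
    n, _ = codes.shape
    sorted_bits, _ = torch.sort(codes, dim=0)
    target = torch.cat([-torch.ones(n // 2), torch.ones(n - n // 2)])
    return (sorted_bits - target.unsqueeze(1)).abs().mean()

codes = torch.tanh(torch.randn(128, 64))              # relaxed codes from a hash head
balance_reg = per_bit_wasserstein_to_uniform(codes)   # added to the supervised hashing loss
```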
|
Qi Zhang, Liang Hu, Longbing Cao, Chongyang Shi, Shoujin Wang, Dora D. Liu
| null | null | 2,022 |
ijcai
|
Domain Adversarial Learning for Color Constancy
| null |
Color Constancy aims to eliminate the color cast of RAW images caused by non-neutral illuminants. Though contemporary approaches based on convolutional neural networks significantly improve illuminant estimation, they suffer from the seriously insufficient data problem. To solve this problem by effectively utilizing multi-domain data, we propose the Domain Adversarial Learning Color Constancy (DALCC) which consists of the Domain Adversarial Learning Branch (DALB) and the Feature Reweighting Module (FRM). In DALB, the Camera Domain Classifier and the feature extractor compete against each other in an adversarial way to encourage the emergence of domain-invariant features. At the same time, the Illuminant Transformation Module performs color space conversion to solve the inconsistent color space problem caused by those domain-invariant features. They collaboratively avoid model degradation of multi-device training caused by the domain discrepancy of feature distribution, which enables our DALCC to benefit from multi-domain data. Besides, to better utilize multi-domain data, we propose the FRM that reweights the feature map to suppress Non-Primary Illuminant regions, which reduces the influence of misleading illuminant information. Experiments show that the proposed DALCC can more effectively take advantage of multi-domain data and thus achieve state-of-the-art performance on commonly used benchmark datasets.
|
Zhifeng Zhang, Xuejing Kang, Anlong Ming
| null | null | 2,022 |
ijcai
|
Visual Emotion Representation Learning via Emotion-Aware Pre-training
| null |
Despite recent progress in deep learning, visual emotion recognition remains a challenging problem due to the ambiguity of emotion perception, the diversity of concepts related to visual emotion, and the lack of a large-scale annotated dataset. In this paper, we present a large-scale multimodal pre-training method that learns visual emotion representations by aligning (emotion, object, attribute) triplets with a contrastive loss. We conduct our pre-training on a large web dataset with noisy tags and fine-tune on visual emotion classification datasets. Our method achieves state-of-the-art performance for visual emotion classification.
|
Yue Zhang, Wanying Ding, Ran Xu, Xiaohua Hu
| null | null | 2,022 |
ijcai
|
Imperceptible Backdoor Attack: From Input Space to Feature Representation
| null |
Backdoor attacks are rapidly emerging threats to deep neural networks (DNNs). In the backdoor attack scenario, attackers usually implant the backdoor into the target model by manipulating the training dataset or training process. Then, the compromised model behaves normally for benign input yet makes mistakes when the pre-defined trigger appears. In this paper, we analyze the drawbacks of existing attack approaches and propose a novel imperceptible backdoor attack. We treat the trigger pattern as a special kind of noise following a multinomial distribution. A U-net-based network is employed to generate concrete parameters of the multinomial distribution for each benign input. This elaborated trigger ensures that our approach is invisible to both humans and statistical detection. Besides the design of the trigger, we also consider the robustness of our approach against model diagnosis-based defences. We force the feature representation of malicious input stamped with the trigger to be entangled with the benign one. We demonstrate the effectiveness and robustness of our approach against multiple state-of-the-art defences through extensive experiments on diverse datasets and networks. Our trigger modifies less than 1% of the pixels of a benign image while the modification magnitude is 1. Our source code is available at https://github.com/Ekko-zn/IJCAI2022-Backdoor.
|
Nan Zhong, Zhenxing Qian, Xinpeng Zhang
| null | null | 2,022 |
ijcai
|
Rainy WCity: A Real Rainfall Dataset with Diverse Conditions for Semantic Driving Scene Understanding
| null |
Scene understanding in adverse weather conditions (e.g. rainy and foggy days) has drawn increasing attention, giving rise to specific benchmarks and algorithms. However, scene segmentation under rainy weather is still challenging and under-explored due to the following limitations of existing datasets and methods: 1) Manually synthesized rainy samples based on empirical settings and subjective human assumptions; 2) Limited rainy conditions, including the rain patterns, intensity, and degradation factors; 3) Separated training manners for image deraining and semantic segmentation. To break these limitations, we pioneer a real, comprehensive, and well-annotated scene understanding dataset under rainy weather, named Rainy WCity. It covers various rain patterns and the negative visual effects they introduce, including wiper, droplet, reflection, refraction, shadow, windshield blurring, etc. In addition, to alleviate dependence on paired training samples, we design an unsupervised contrastive learning network for real image deraining and the final rainy scene semantic segmentation via multi-task joint optimization. A comprehensive comparison analysis is also provided, which shows that scene understanding in rainy weather is a largely open problem. Finally, we summarize our general observations, identify open research challenges, and point out future directions.
|
Xian Zhong, Shidong Tu, Xianzheng Ma, Kui Jiang, Wenxin Huang, Zheng Wang
| null | null | 2,022 |
ijcai
|
Test-time Fourier Style Calibration for Domain Generalization
| null |
The topic of generalizing machine learning models learned on a collection of source domains to unknown target domains is challenging. While many domain generalization (DG) methods have achieved promising results, they primarily rely on the source domains at train-time without manipulating the target domains at test-time. Thus, it is still possible that those methods can overfit to source domains and perform poorly on target domains. Driven by the observation that domains are strongly related to styles, we argue that reducing the gap between source and target styles can boost models’ generalizability. To solve the dilemma of having no access to the target domain during training, we introduce Test-time Fourier Style Calibration (TF-Cal) for calibrating the target domain style on the fly during testing. To access styles, we utilize Fourier transformation to decompose features into amplitude (style) features and phase (semantic) features. Furthermore, we present an effective technique to Augment Amplitude Features (AAF) to complement TF-Cal. Extensive experiments on several popular DG benchmarks and a segmentation dataset for medical images demonstrate that our method outperforms state-of-the-art methods.
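A minimal sketch of the Fourier amplitude/phase decomposition underlying this style view is given below. The reference amplitude and the mixing weight are assumptions for illustration; the paper's exact TF-Cal calibration and the AAF augmentation are not reproduced here.

```python
import numpy as np

def fourier_style_calibrate(feat, ref_amplitude, alpha=0.5):
    # feat:          (C, H, W) feature map of a test sample.
    # ref_amplitude: (C, H, W) reference amplitude (how it is obtained is an assumption).
    # alpha:         interpolation weight between the sample's own style and the reference.
    f = np.fft.fft2(feat, axes=(-2, -1))
    amplitude, phase = np.abs(f), np.angle(f)          # style vs. semantics
    calibrated = (1.0 - alpha) * amplitude + alpha * ref_amplitude
    f_cal = calibrated * np.exp(1j * phase)            # recombine calibrated style with phase
    return np.real(np.fft.ifft2(f_cal, axes=(-2, -1)))
```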
|
Xingchen Zhao, Chang Liu, Anthony Sicilia, Seong Jae Hwang, Yun Fu
| null | null | 2,022 |
ijcai
|
Learning to Generate Image Source-Agnostic Universal Adversarial Perturbations
| null |
Adversarial perturbations are critical for certifying the robustness of deep learning models. A ``universal adversarial perturbation'' (UAP) can simultaneously attack multiple images, and thus offers a more unified threat model, obviating the need for an image-wise attack algorithm. However, the existing UAP generator is underdeveloped when images are drawn from different image sources (e.g., with different image resolutions). Towards an authentic universality across image sources, we take a novel view of UAP generation as a customized instance of ``few-shot learning'', which leverages bilevel optimization and learning-to-optimize (L2O) techniques for UAP generation with improved attack success rate (ASR). We begin by considering the popular model-agnostic meta-learning (MAML) framework to meta-learn a UAP generator. However, we see that the MAML framework does not directly offer a universal attack across image sources, requiring us to integrate it with another meta-learning framework, L2O. The resulting scheme for meta-learning a UAP generator (i) has better performance (50% higher ASR) than baselines such as Projected Gradient Descent, (ii) has better performance (37% faster) than the vanilla L2O and MAML frameworks (when applicable), and (iii) is able to simultaneously handle UAP generation for different victim models and data sources.
|
Pu Zhao, Parikshit Ram, Songtao Lu, Yuguang Yao, Djallel Bouneffouf, Xue Lin, Sijia Liu
| null | null | 2,022 |
ijcai
|
C3-STISR: Scene Text Image Super-resolution with Triple Clues
| null |
Scene text image super-resolution (STISR) has been regarded as an important pre-processing task for text recognition from low-resolution scene text images. Most recent approaches use the recognizer's feedback as clues to guide super-resolution. However, directly using the recognition clue has two problems: 1) Compatibility: it takes the form of a probability distribution, which has an obvious modal gap with STISR, a pixel-level task; 2) Inaccuracy: it often contains wrong information, which misleads the main task and degrades super-resolution performance. In this paper, we present a novel method, C3-STISR, that jointly exploits the recognizer's feedback, visual, and linguistical information as clues to guide super-resolution. Here, the visual clue comes from images of the texts predicted by the recognizer, which is informative and more compatible with the STISR task, while the linguistical clue is generated by a pre-trained character-level language model, which is able to correct the predicted texts. We design effective extraction and fusion mechanisms for the triple cross-modal clues to generate a comprehensive and unified guidance for super-resolution. Extensive experiments on TextZoom show that C3-STISR outperforms the SOTA methods in fidelity and recognition performance. Code is available at https://github.com/zhaominyiz/C3-STISR.
|
Minyi Zhao, Miao Wang, Fan Bai, Bingjia Li, Jie Wang, Shuigeng Zhou
| null | null | 2,022 |
ijcai
|
QCDCL with Cube Learning or Pure Literal Elimination - What is Best?
| null |
Quantified conflict-driven clause learning (QCDCL) is one of the main approaches for solving quantified Boolean formulas (QBF). We formalise and investigate several versions of QCDCL that include cube learning and/or pure-literal elimination, and formally compare the resulting solving models via proof complexity techniques. Our results show that almost all of the QCDCL models are exponentially incomparable with respect to proof size (and hence solver running time), pointing towards different orthogonal ways how to practically implement QCDCL.
|
Benjamin Böhm, Tomáš Peitl, Olaf Beyersdorff
| null | null | 2,022 |
ijcai
|
Fine-grained Complexity of Partial Minimum Satisfiability
| null |
There is a well-known approach to cope with NP-hard problems in practice: reduce the given problem to SAT or MaxSAT and run a SAT or a MaxSAT solver. This method is very efficient since SAT/MaxSAT solvers are extremely well-studied, as is the complexity of these problems. At AAAI 2011, Li et al. proposed an alternative to this approach and suggested the Partial Minimum Satisfiability problem as a reduction target for NP-hard problems. They developed the MinSatz solver and showed that reducing to Partial Minimum Satisfiability and using MinSatz is in some cases more efficient than reductions to SAT or MaxSAT. Since then many results connected to the Partial Minimum Satisfiability problem have been published. However, to the best of our knowledge, the worst-case complexity of Partial Minimum Satisfiability has not been studied up until now. Our goal is to fix this issue and show an O*((2-ɛ)^m) lower bound under the SETH assumption (here m is the total number of clauses), as well as several other lower bounds and parameterized exact algorithms with better-than-trivial running time.
|
Ivan Bliznets, Danil Sagunov, Kirill Simonov
| null | null | 2,022 |
ijcai
|
A Solver + Gradient Descent Training Algorithm for Deep Neural Networks
| null |
We present a novel hybrid algorithm for training Deep Neural Networks that combines the state-of-the-art Gradient Descent (GD) method with a Mixed Integer Linear Programming (MILP) solver, outperforming GD and variants in terms of accuracy, as well as resource and data efficiency for both regression and classification tasks.
Our GD+Solver hybrid algorithm, called GDSolver, works as follows: given a DNN D as input, GDSolver invokes GD to partially train D until it gets stuck in a local minimum, at which point GDSolver invokes an MILP solver to exhaustively search a region of the loss landscape around the weight assignments of D’s final layer parameters with the goal of tunnelling through and escaping the local minimum. The process is repeated until the desired accuracy is achieved.
In our experiments, we find that GDSolver not only scales well to additional data and very large model sizes, but also outperforms all other competing methods in terms of rates of convergence and data efficiency. For regression tasks, GDSolver produced models that, on average, had 31.5% lower MSE in 48% less time, and for classification tasks on MNIST and CIFAR10, GDSolver was able to achieve the highest accuracy over all competing methods, using only 50% of the training data that GD baselines required.
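A high-level sketch of the alternation described above follows; `evaluate_accuracy`, `run_gd_until_plateau` and `milp_search_last_layer` are hypothetical helpers standing in for the paper's GD phase, MILP encoding and evaluation, and the search radius is an assumed parameter.

```python
def gdsolver_train(model, data, target_acc, max_rounds=10):
    # Alternate GD and MILP refinement until the desired accuracy is reached.
    for _ in range(max_rounds):
        if evaluate_accuracy(model, data) >= target_acc:
            break
        # Phase 1: plain gradient descent until the loss plateaus (local minimum).
        run_gd_until_plateau(model, data)
        # Phase 2: hand the final layer to an MILP solver, which exhaustively
        # searches a bounded region around the current last-layer weights for a
        # strictly better loss, tunnelling out of the local minimum.
        improved = milp_search_last_layer(model, data, radius=0.1)
        if not improved:      # solver proved no improvement exists in the region
            break
    return model
```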
|
Dhananjay Ashok, Vineel Nagisetty, Christopher Srinivasa, Vijay Ganesh
| null | null | 2,022 |
ijcai
|
Hierarchical Bilevel Learning with Architecture and Loss Search for Hadamard-based Image Restoration
| null |
In the past few decades, Hadamard-based image restoration problems (e.g., low-light image enhancement) have attracted wide attention in multiple areas related to artificial intelligence. However, existing works mostly define the architecture and loss heuristically, relying on engineering experience drawn from extensive practice. This approach incurs expensive verification costs when seeking the optimal solution. To this end, we develop a novel hierarchical bilevel learning scheme to discover the architecture and loss simultaneously for different Hadamard-based image restoration tasks.
More concretely, we first establish a new Hadamard-inspired neural unit to aggregate domain knowledge into the network design. We then model a triple-level optimization consisting of architecture, loss, and parameter optimization to deliver a macro perspective on network learning. Next, we introduce a new hierarchical bilevel learning scheme for solving the built triple-level model to progressively generate the desired architecture and loss. We also define an architecture search space consisting of a series of simple operations and an image quality-oriented loss search space.
Extensive experiments on three Hadamard-based image restoration tasks (including low-light image enhancement, single image haze removal and underwater image enhancement) fully verify our superiority against state-of-the-art methods.
|
Guijing Zhu, Long Ma, Xin Fan, Risheng Liu
| null | null | 2,022 |
ijcai
|
Accelerated Multiplicative Weights Update Avoids Saddle Points Almost Always
| null |
We consider a nonconvex optimization problem whose constraint set is a product of simplices. A commonly used algorithm for solving this type of problem is the Multiplicative Weights Update (MWU), an algorithm that is widely used in game theory, machine learning and multi-agent systems. Although it is known that MWU avoids saddle points, one question remains unaddressed: ``Is there an accelerated version of MWU that provably avoids saddle points?'' In this paper we provide a positive answer to the above question. We provide an accelerated MWU based on Riemannian Accelerated Gradient Descent, and prove that Riemannian Accelerated Gradient Descent, and thus the accelerated MWU, avoids saddle points.
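For reference, the plain (non-accelerated) MWU step on a product of simplices looks as follows; the paper's Riemannian acceleration adds momentum-like terms not shown here.

```python
import numpy as np

def mwu_step(blocks, grads, eta=0.1):
    # blocks: list of 1-D probability vectors, one per simplex in the product.
    # grads:  matching list of partial gradients of the objective.
    new_blocks = []
    for x, g in zip(blocks, grads):
        w = x * np.exp(-eta * g)        # multiplicative (exponentiated-gradient) update
        new_blocks.append(w / w.sum())  # renormalise back onto the simplex
    return new_blocks
```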
|
Yi Feng, Ioannis Panageas, Xiao Wang
| null | null | 2,022 |
ijcai
|
DPSampler: Exact Weighted Sampling Using Dynamic Programming
| null |
The problem of exact weighted sampling of solutions of Boolean formulas has applications in Bayesian inference, testing, and verification. The state-of-the-art approach to sampling involves carefully decomposing the input formula and compiling a data structure called d-DNNF in the process. Recent work in the closely connected field of model counting, however, has shown that smartly composing different subformulas using dynamic programming and Algebraic Decision Diagrams (ADDs) can outperform d-DNNF-style approaches on many benchmarks. In this work, we present a modular algorithm called DPSampler that extends such dynamic-programming techniques to the problem of exact weighted sampling.
DPSampler operates in three phases. First, an execution plan in the form of a project-join tree is computed using tree decompositions. Second, the plan is used to compile the input formula into a succinct tree-of-ADDs representation. Third, this tree is traversed to generate a random sample. This decoupling of planning, compilation and sampling phases enables usage of specialized libraries for each purpose in a black-box fashion. Further, our novel ADD-sampling algorithm avoids the need for expensive dynamic memory allocation required in previous work. Extensive experiments over diverse sets of benchmarks show DPSampler is more scalable and versatile than existing approaches.
|
Jeffrey M. Dudek, Aditya A. Shrotri, Moshe Y. Vardi
| null | null | 2,022 |
ijcai
|
Combining Constraint Solving and Bayesian Techniques for System Optimization
| null |
Application domains of Bayesian optimization include optimizing black-box functions or very complex functions.
The functions we are interested in describe complex real-world systems applied in industrial settings.
Even though they do have explicit representations, standard optimization techniques fail to provide validated solutions and correctness guarantees for them.
In this paper we present a combination of Bayesian optimization and SMT-based constraint solving to achieve safe and stable solutions with optimality guarantees.
|
Franz Brauße, Zurab Khasidashvili, Konstantin Korovin
| null | null | 2,022 |
ijcai
|
HifiHead: One-Shot High Fidelity Neural Head Synthesis with 3D Control
| null |
We propose HifiHead, a high fidelity neural talking head synthesis method, which can well preserve the source image's appearance and control the motion (e.g., pose, expression, gaze) flexibly with 3D morphable face models (3DMMs) parameters derived from a driving image or indicated by users. Existing head synthesis works mainly focus on low-resolution inputs. Instead, we exploit the powerful generative prior embedded in StyleGAN to achieve high-quality head synthesis and editing. Specifically, we first extract the source image's appearance and driving image's motion to construct 3D face descriptors, which are employed as latent style codes for the generator. Meanwhile, hierarchical representations are extracted from the source and rendered 3D images respectively to provide faithful appearance and shape guidance. Considering the appearance representations need high-resolution flow fields for spatial transform, we propose a coarse-to-fine style-based generator consisting of a series of feature alignment and refinement (FAR) blocks. Each FAR block updates the dense flow fields and refines RGB outputs simultaneously for efficiency. Extensive experiments show that our method blends source appearance and target motion more accurately along with more photo-realistic results than previous state-of-the-art approaches.
|
Feida Zhu, Junwei Zhu, Wenqing Chu, Ying Tai, Zhifeng Xie, Xiaoming Huang, Chengjie Wang
| null | null | 2,022 |
ijcai
|
A Multivariate Complexity Analysis of Qualitative Reasoning Problems
| null |
Qualitative reasoning is an important subfield of artificial intelligence where one describes relationships with qualitative, rather than numerical, relations. Many such reasoning tasks, e.g., Allen's interval algebra, can be solved in 2^O(n*log n) time, but single-exponential running times 2^O(n) are currently far out of reach.
In this paper we consider single-exponential algorithms via a multivariate analysis consisting of a fine-grained parameter n (e.g., the number of variables) and a coarse-grained parameter k expected to be relatively small.
We introduce the classes FPE and XE of problems solvable in f(k)*2^O(n), respectively f(k)^n, time, and prove several fundamental properties of these classes. We proceed by studying temporal reasoning problems and (1) show that the partially ordered time problem of effective width k is solvable in 16^{kn} time and is thus included in XE, and (2) that the network consistency problem for Allen's interval algebra with no interval overlapping with more than k others is solvable in (2nk)^{2k}*2^n time and is included in FPE. Our multivariate approach is in no way limited to these two specific problems and may be a generally useful approach for obtaining single-exponential algorithms.
|
Leif Eriksson, Victor Lagerkvist
| null | null | 2,022 |
ijcai
|
Large Neighbourhood Search for Anytime MaxSAT Solving
| null |
Large Neighbourhood Search (LNS) is an algorithmic framework for optimization problems that can yield good performance in many domains. In this paper, we present a method for applying LNS to improve anytime maximum satisfiability (MaxSAT) solving by introducing a neighbourhood selection policy that shows good empirical performance. We show that our LNS solver can often improve the suboptimal solutions produced by other anytime MaxSAT solvers. When starting with a suboptimal solution of reasonable quality, our approach often finds a better solution than the original anytime solver can achieve. We demonstrate that implementing our LNS solver on top of three different state-of-the-art anytime solvers improves the anytime performance of all three solvers within the standard time limit used in the incomplete tracks of the annual MaxSAT Evaluation.
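A generic LNS loop for anytime MaxSAT, sketched under the assumption of a random neighbourhood (the paper's learned selection policy is not reproduced); `solve_restricted`, `cost`, the `instance` object and the `budget` timer are hypothetical helpers.

```python
import random

def lns_improve(instance, incumbent, budget, k=30):
    # incumbent: current (suboptimal) assignment as a list of booleans.
    best = incumbent
    while budget.remaining() > 0:
        # Select a small neighbourhood of variables to relax ("destroy").
        free_vars = set(random.sample(range(len(best)), k))
        fixed = {v: best[v] for v in range(len(best)) if v not in free_vars}
        # Re-solve the instance with all other variables fixed ("repair").
        candidate = solve_restricted(instance, fixed, budget.slice())
        if candidate is not None and cost(instance, candidate) < cost(instance, best):
            best = candidate
    return best
```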
|
Randy Hickey, Fahiem Bacchus
| null | null | 2,022 |
ijcai
|
Online Matching with Controllable Rewards and Arrival Probabilities
| null |
Online bipartite matching has attracted much attention due to its importance in various applications such as advertising, ride-sharing, and crowdsourcing. In most online matching problems, the rewards and node arrival probabilities are given in advance and are not controllable. However, many real-world matching services require them to be controllable and the decision-maker faces a non-trivial problem of optimizing them. In this study, we formulate a new optimization problem, Online Matching with Controllable Rewards and Arrival probabilities (OM-CRA), to simultaneously determine not only the matching strategy but also the rewards and arrival probabilities. Even though our problem is more complex than the existing ones, we propose a fast 1/2-approximation algorithm for OM-CRA. The proposed approach transforms OM-CRA to a saddle-point problem by approximating the objective function, and then solves it by the Primal-Dual Hybrid Gradient (PDHG) method with acceleration through the use of the problem structure. In simulations on real data from crowdsourcing and ride-sharing platforms, we show that the proposed algorithm can find solutions with high total rewards in practical times.
|
Yuya Hikima, Yasunori Akagi, Naoki Marumo, Hideaki Kim
| null | null | 2,022 |
ijcai
|
Best Heuristic Identification for Constraint Satisfaction
| null |
In constraint satisfaction problems, the variable ordering heuristic takes a central place by selecting the variables to branch on during backtrack search. As many hand-crafted branching heuristics have been proposed in the literature, a key issue is to identify, from a pool of candidate heuristics, which one is the best for solving a given constraint satisfaction task. Based on the observation that modern constraint solvers are using restart sequences, the best heuristic identification problem can be cast in the context of multi-armed bandits as a non-stochastic best arm identification problem. Namely, during each run of some given restart sequence, the bandit algorithm selects a branching heuristic and receives a reward for this heuristic before proceeding to the next run. The goal is to identify the best heuristic using few runs, and without any stochastic assumption about the constraint solver. In this study, we propose an adaptive variant of Successive Halving that exploits Luby's universal restart sequence. We analyze the convergence of this bandit algorithm in the non-stochastic setting,
and we demonstrate its empirical effectiveness on various constraint satisfaction benchmarks.
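A minimal sketch of Successive Halving driven by Luby's restart sequence; `run_solver` is a hypothetical interface to the constraint solver, and the reward definition and number of rounds are assumptions, not the paper's adaptive variant.

```python
def luby(i):
    # i-th term (1-indexed) of Luby's universal restart sequence: 1,1,2,1,1,2,4,...
    k = 1
    while (1 << k) - 1 < i:
        k += 1
    if (1 << k) - 1 == i:
        return 1 << (k - 1)
    return luby(i - (1 << (k - 1)) + 1)

def identify_best_heuristic(heuristics, run_solver, rounds=4):
    # run_solver(h, length) runs one restart of the given length with branching
    # heuristic h and returns a reward (e.g. a search-progress measure).
    alive, t = list(heuristics), 1
    for _ in range(rounds):
        rewards = {}
        for h in alive:
            rewards[h] = run_solver(h, luby(t))
            t += 1
        alive.sort(key=lambda h: rewards[h], reverse=True)
        alive = alive[: max(1, len(alive) // 2)]   # halve the candidate pool
    return alive[0]
```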
|
Frederic Koriche, Christophe Lecoutre, Anastasia Paparrizou, Hugues Wattez
| null | null | 2,022 |
ijcai
|
AllSATCC: Boosting AllSAT Solving with Efficient Component Analysis
| null |
All Solution SAT (AllSAT) is a variant of Propositional Satisfiability which aims to find all satisfying assignments of a given formula. AllSAT has significant applications in different domains, such as software testing, data mining, and network verification. In this paper, observing that the lack of component analysis may result in more work for algorithms with non-chronological backtracking, we propose a DPLL-based algorithm for solving the AllSAT problem, named AllSATCC, which takes advantage of component analysis to reduce the work repetition caused by non-chronological backtracking. The experimental results show that our algorithm outperforms the state-of-the-art algorithms on most instances.
|
Jiaxin Liang, Feifei Ma, Junping Zhou, Minghao Yin
| null | null | 2,022 |
ijcai
|
Using Constraint Programming and Graph Representation Learning for Generating Interpretable Cloud Security Policies
| null |
Modern software systems rely on mining insights from business sensitive data stored in public clouds. A data breach usually incurs significant (monetary) loss for a commercial organization. Conceptually, cloud security heavily relies on Identity Access Management (IAM) policies that IT admins need to properly configure and periodically update. Security negligence and human errors often lead to misconfiguring IAM policies which may open a backdoor for attackers. To address these challenges, first, we develop a novel framework that encodes generating optimal IAM policies using constraint programming (CP). We identify reducing dormant permissions of cloud users as an optimality criterion, which intuitively implies minimizing unnecessary datastore access permissions. Second, to make IAM policies interpretable, we use graph representation learning applied to historical access patterns of users to augment our CP model with similarity constraints: similar users should be grouped together and share common IAM policies. Third, we describe multiple attack models and show that our optimized IAM policies significantly reduce the impact of security attacks using real data from 8 commercial organizations, and synthetic instances.
|
Mikhail Kazdagli, Mohit Tiwari, Akshat Kumar
| null | null | 2,022 |
ijcai
|