Dataset schema:
title — string (length 5 to 246)
categories — string (length 5 to 94)
abstract — string (length 54 to 5.03k)
authors — string (length 0 to 6.72k)
doi — string (length 12 to 54)
id — string (length 6 to 10)
year — float64 (range 2.02k to 2.02k)
venue — string (13 classes)
Monte Carlo Filtering Objectives
null
Learning generative models and inferring latent trajectories for time series have proven challenging due to the intractable marginal likelihoods of flexible generative models; this can be addressed with surrogate objectives for optimization. We propose Monte Carlo filtering objectives (MCFOs), a family of variational objectives for jointly learning parametric generative models and amortized adaptive importance proposals for time series. MCFOs extend the choice of likelihood estimators beyond the Sequential Monte Carlo estimators used in state-of-the-art objectives, possess important properties revealing the factors that govern the tightness of objectives, and allow for gradient estimates with lower bias and variance (see the sketch after this record). We demonstrate that the proposed MCFOs and gradient estimators lead to efficient and stable model learning, that the learned generative models explain the data well, and that the importance proposals are more sample efficient on various kinds of time series data.
Shuangshuang Chen, Sihao Ding, Yiannis Karayiannidis, Mårten Björkman
null
null
2,021
ijcai
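A hedged sketch of the objective family described in the abstract above, in notation assumed here rather than taken from the paper: given any non-negative, unbiased Monte Carlo estimator Ẑ_N of the marginal likelihood built from N draws of the proposal q_φ, the corresponding filtering objective lower-bounds the log marginal likelihood by Jensen's inequality, and it tightens as the estimator's variance shrinks:

```latex
\mathbb{E}_{q_\phi}\big[\hat{Z}_N\big] = p_\theta(x_{1:T})
\quad\Longrightarrow\quad
\mathcal{L}_N(\theta, \phi) \;=\; \mathbb{E}_{q_\phi}\big[\log \hat{Z}_N\big] \;\le\; \log p_\theta(x_{1:T}).
```

Sequential Monte Carlo gives one such estimator; the point of MCFOs is that other unbiased estimators can take its place.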
CuCo: Graph Representation with Curriculum Contrastive Learning
null
Graph-level representation learning aims to learn low-dimensional representations of entire graphs and has shown a large impact on real-world applications. Recently, because labeled data are expensive to obtain, contrastive learning based graph-level representation learning has attracted considerable attention. However, existing methods mainly focus on graph augmentation for positive samples, while the effect of negative samples is less explored. In this paper, we study the impact of negative samples on learning graph-level representations and propose a novel curriculum contrastive learning framework for self-supervised graph-level representation, called CuCo. Specifically, we introduce four graph augmentation techniques to obtain positive and negative samples and utilize graph neural networks to learn their representations. A scoring function then sorts negative samples from easy to hard, and a pacing function automatically selects the negative samples used at each stage of training (see the sketch after this record). Extensive experiments on fifteen real-world graph classification datasets, together with a parameter analysis, demonstrate that CuCo yields encouraging results in terms of classification performance and convergence.
Guanyi Chu, Xiao Wang, Chuan Shi, Xunqiang Jiang
null
null
2,021
ijcai
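A minimal PyTorch sketch of the easy-to-hard negative-sample curriculum described in the CuCo abstract above; the concrete scoring function (cosine similarity) and pacing function (linear growth) are illustrative assumptions, not the paper's exact definitions:

```python
import torch
import torch.nn.functional as F

def curriculum_negatives(anchor, negatives, epoch, num_epochs):
    """Select negatives from easy to hard over the course of training.

    anchor:    (d,)   graph embedding of the query graph
    negatives: (n, d) embeddings of candidate negative samples
    """
    # Scoring function: cosine similarity to the anchor.
    # Low similarity = easy negative, high similarity = hard negative.
    scores = F.cosine_similarity(anchor.unsqueeze(0), negatives, dim=1)
    order = torch.argsort(scores)  # easy -> hard

    # Pacing function: linearly grow the fraction of negatives used,
    # so training sees easy negatives first and hard ones later.
    frac = min(1.0, 0.2 + 0.8 * epoch / num_epochs)
    k = max(1, int(frac * negatives.size(0)))
    return negatives[order[:k]]
```

The selected negatives would then feed a standard contrastive loss; only the selection schedule is curriculum-specific.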
Time-Aware Multi-Scale RNNs for Time Series Modeling
null
Multi-scale information is crucial for modeling time series. Although most existing methods consider multiple scales in time-series data, they assume all scales are equally important for each sample, making them unable to capture the dynamic temporal patterns of time series. To this end, we propose Time-Aware Multi-Scale Recurrent Neural Networks (TAMS-RNNs), which disentangle representations of different scales and adaptively select the most important scale for each sample at each time step. First, the hidden state of the RNN is disentangled into multiple independently updated small hidden states, which use different update frequencies to model multi-scale information (see the sketch after this record). Then, at each time step, temporal context information is used to modulate the features of different scales, selecting the most important scale. The proposed model can therefore capture multi-scale information for each time series adaptively at each time step. Extensive experiments demonstrate that the model outperforms state-of-the-art methods on multivariate time series classification and human motion prediction tasks. Furthermore, a visual analysis on music genre recognition verifies the effectiveness of the model.
Zipeng Chen, Qianli Ma, Zhenxi Lin
null
null
2,021
ijcai
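A hedged sketch of the idea in the TAMS-RNN abstract above: the hidden state is split into several small states, each refreshed at its own period, with a context-dependent weighting over scales. The cell type, periods, and gating below are illustrative choices, not the paper's exact design:

```python
import torch
import torch.nn as nn

class MultiScaleRNNCell(nn.Module):
    """Hidden state disentangled into sub-states with different update
    frequencies; a learned weighting over scales plays the role of the
    temporal-context modulation described in the abstract."""

    def __init__(self, input_size, hidden_size, periods=(1, 2, 4)):
        super().__init__()
        self.periods = periods
        self.cells = nn.ModuleList(
            nn.GRUCell(input_size, hidden_size) for _ in periods)
        self.scale_attn = nn.Linear(input_size, len(periods))

    def forward(self, x_t, hs, t):
        # Update a sub-state only when the step index hits its period.
        hs = [cell(x_t, h) if t % p == 0 else h
              for cell, h, p in zip(self.cells, hs, self.periods)]
        w = torch.softmax(self.scale_attn(x_t), dim=-1)  # (B, num_scales)
        out = sum(w[:, i:i + 1] * h for i, h in enumerate(hs))
        return out, hs
```

The caller supplies the initial list of sub-states `hs` (one zero tensor per period) and iterates the cell over time steps t = 0, 1, 2, ...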
Reinforcement Learning for Sparse-Reward Object-Interaction Tasks in a First-person Simulated 3D Environment
null
Learning how to execute complex tasks involving multiple objects in a 3D world is challenging when there is no ground-truth information about the objects or any demonstration to learn from. When an agent receives only a task-completion signal, it is challenging to learn the object representations that support learning the correct object interactions needed to complete the task. In this work, we formulate learning an attentive object dynamics model as a classification problem, using random object images to define incorrect labels for our object-dynamics model. We show empirically that this enables object-representation learning that captures an object's category (is it a toaster?), its properties (is it on?), and object relations (is something inside of it?). With this, our core learner (a relational RL agent) receives the dense training signal it needs to rapidly learn object-interaction tasks. We demonstrate results in the AI2Thor simulated 3D kitchen environment with a range of challenging food preparation tasks. We compare our method's performance to several related approaches and against the performance of an oracle: an agent supplied with ground-truth information about objects in the scene. We find that our agent achieves performance closest to the oracle in terms of both learning speed and maximum success rate.
Wilka Carvalho, Anthony Liang, Kimin Lee, Sungryull Sohn, Honglak Lee, Richard Lewis, Satinder Singh
null
null
2,021
ijcai
Dependent Multi-Task Learning with Causal Intervention for Image Captioning
null
Recent work on image captioning has mainly followed an extract-then-generate paradigm: pre-extracting a sequence of object-based features and then formulating image captioning as a single sequence-to-sequence task. Although promising, we observed two problems in generated captions: 1) content inconsistency, where models generate contradicting facts, and 2) lack of informativeness, where models miss important information. From a causal perspective, the reason is that models have captured spurious statistical correlations between visual features and certain expressions (e.g., the visual features of "long hair" and the word "woman"). In this paper, we propose a dependent multi-task learning framework with causal intervention (DMTCI). First, we introduce an intermediate task, bag-of-categories generation, before the final image captioning task. The intermediate task helps the model better understand the visual features and thus alleviates the content inconsistency problem. Second, we apply Pearl's do-calculus to the model, cutting off the link between the visual features and possible confounders so that the model focuses on the causal visual features (see the adjustment formula after this record). Specifically, the high-frequency concept set is treated as a set of proxy confounders, and the real confounders are inferred in the continuous space. Finally, we use a multi-agent reinforcement learning (MARL) strategy to enable end-to-end training and reduce inter-task error accumulation. Extensive experiments show that our model outperforms the baseline models and achieves competitive performance with state-of-the-art models.
Wenqing Chen, Jidong Tian, Caoyun Fan, Hao He, Yaohui Jin
null
null
2,021
ijcai
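The causal intervention mentioned in the DMTCI abstract above can be read through the standard backdoor adjustment from Pearl's do-calculus (a textbook identity; the paper's concrete estimator over proxy confounders may differ):

```latex
P\big(Y \mid do(X)\big) \;=\; \sum_{z} P\big(Y \mid X,\, Z = z\big)\, P(Z = z),
```

where Z ranges over the confounders (here proxied by the high-frequency concept set), so that the dependence of captions Y on visual features X is no longer routed through spurious correlations via Z.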
Convexified Graph Neural Networks for Distributed Control in Robotic Swarms
null
A network of robots can be viewed as a signal graph describing the underlying network topology, with naturally distributed architectures whose nodes carry the data values associated with each robot. Graph neural networks (GNNs) learn representations from signal graphs, making them well-suited candidates for learning distributed controllers. Existing GNN architectures often assume ideal scenarios, ignoring the possibility that the distributed graph may change over time due to link failures or topology variations, as found in dynamic settings. This creates a mismatch between the graphs on which GNNs were trained and the ones on which they are tested. Utilizing online learning, GNNs can be retrained at test time to overcome this issue. However, most online algorithms are centralized and work only on convex problems (which GNNs rarely yield). This paper introduces novel architectures that lift the convexity restriction and can be easily updated in a distributed, online manner. Finally, we provide experiments showing how these models can be applied to optimize formation control in a swarm of flocking robots.
Saar Cohen, Noa Agmon
null
null
2,021
ijcai
Jointly Learning Prices and Product Features
null
Product design is an important problem in marketing research, in which a firm tries to learn which features of a product are most valuable to consumers. We study this problem from the viewpoint of online learning: a firm repeatedly interacts with a buyer by choosing a product configuration as well as a price and observing the buyer's purchasing decision. The goal of the firm is to maximize revenue over the course of $T$ rounds by learning the buyer's preferences. We study both the case of a discrete set of products and the case of a continuous set of allowable product features. In both cases we provide nearly tight upper and lower regret bounds.
Ehsan Emamjomeh-Zadeh, Renato Paes Leme, Jon Schneider, Balasubramanian Sivan
null
null
2,021
ijcai
Optimal ANN-SNN Conversion for Fast and Accurate Inference in Deep Spiking Neural Networks
null
Spiking Neural Networks (SNNs), as bio-inspired energy-efficient neural networks, have attracted great attention from researchers and industry. The most efficient way to train deep SNNs is through ANN-SNN conversion. However, the conversion usually suffers from accuracy loss and long inference time, which impede the practical application of SNNs. In this paper, we theoretically analyze ANN-SNN conversion and derive sufficient conditions for optimal conversion. To better correlate the ANN and SNN and achieve greater accuracy, we propose a Rate Norm Layer to replace the ReLU activation function in source ANN training, enabling direct conversion from a trained ANN to an SNN (see the sketch after this record). Moreover, we propose an optimal fit curve to quantify the fit between the activation values of the source ANN and the actual firing rates of the target SNN. We show that inference time can be reduced by optimizing the upper bound of the fit curve in the revised ANN. Our theory explains existing work on fast inference and obtains better results. The experimental results show that the proposed method achieves near loss-less conversion with VGG-16, PreActResNet-18, and deeper structures. Moreover, it reaches 8.6× faster inference at 0.265× the energy consumption of the typical method. The code is available at https://github.com/DingJianhao/OptSNNConvertion-RNL-RIL.
Jianhao Ding, Zhaofei Yu, Yonghong Tian, Tiejun Huang
null
null
2,021
ijcai
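One plausible form of a ReLU substitute for ANN-SNN conversion, sketched below under the assumption that a spiking neuron's firing rate saturates at 1; this is an illustrative stand-in, not necessarily the paper's exact Rate Norm Layer:

```python
import torch
import torch.nn as nn

class RateNorm(nn.Module):
    """Clipped, threshold-normalized activation in [0, 1] that mimics a
    firing rate; the threshold is trainable so the source ANN can learn
    activations that convert directly to SNN firing rates.
    Illustrative sketch only."""

    def __init__(self, num_features):
        super().__init__()
        self.threshold = nn.Parameter(torch.ones(num_features))

    def forward(self, x):
        # Normalize by the (positive) threshold, then clip to the
        # physically attainable firing-rate range [0, 1].
        return torch.clamp(x / self.threshold.abs().clamp(min=1e-6), 0.0, 1.0)
```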
Time-Series Representation Learning via Temporal and Contextual Contrasting
null
Learning decent representations from unlabeled time-series data with temporal dynamics is a very challenging task. In this paper, we propose an unsupervised Time-Series representation learning framework via Temporal and Contextual Contrasting (TS-TCC) to learn time-series representations from unlabeled data. First, the raw time-series data are transformed into two different yet correlated views using weak and strong augmentations. Second, we propose a novel temporal contrasting module that learns robust temporal representations through a tough cross-view prediction task. Last, to further learn discriminative representations, we propose a contextual contrasting module built upon the contexts from the temporal contrasting module; it maximizes the similarity among different contexts of the same sample while minimizing the similarity among contexts of different samples (see the sketch after this record). Experiments have been carried out on three real-world time-series datasets. The results show that a linear classifier trained on top of the features learned by our proposed TS-TCC performs comparably with supervised training. Additionally, TS-TCC shows high efficiency in few-labeled-data and transfer learning scenarios. The code is publicly available at https://github.com/emadeldeen24/TS-TCC.
Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Chee Keong Kwoh, Xiaoli Li, Cuntai Guan
null
null
2,021
ijcai
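A minimal PyTorch sketch of an NT-Xent-style loss in the spirit of the contextual contrasting module described above: contexts of the same sample from the two augmented views are pulled together, and all other contexts in the batch are pushed apart. Tensor names and the temperature value are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def contextual_contrast(c1, c2, tau=0.2):
    """c1, c2: (B, d) context vectors from the weak / strong view;
    (c1[i], c2[i]) are the positive pairs."""
    z = F.normalize(torch.cat([c1, c2], dim=0), dim=1)  # (2B, d)
    sim = z @ z.t() / tau                               # (2B, 2B) similarities
    sim.fill_diagonal_(float('-inf'))                   # exclude self-pairs
    B = c1.size(0)
    # Row i's positive is row i+B (and vice versa).
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
    return F.cross_entropy(sim, targets)
```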
Automatic Translation of Music-to-Dance for In-Game Characters
null
Music-to-dance translation is an emerging and powerful feature in recent role-playing games. Previous works on this topic treat music-to-dance as a supervised motion generation problem based on time-series data. However, these methods require a large number of training data pairs and may suffer from degraded movements. This paper provides a new solution to this task: we re-formulate the translation as a piece-wise dance phrase retrieval problem based on choreography theory. With this design, players may optionally edit the dance movements on top of our generation, whereas other regression-based methods ignore such user interactivity. Because dance motion capture is expensive and requires the assistance of professional dancers, we train our method in a semi-supervised fashion with a large unlabeled music dataset (20× the size of our labeled one) and also introduce self-supervised pre-training to improve training stability and generalization performance. Experimental results suggest that our method not only generalizes well over various styles of music but also succeeds in choreography for game players. Our project, including the large-scale dataset and supplemental materials, is available at https://github.com/FuxiCV/music-to-dance.
Yinglin Duan, Tianyang Shi, Zhipeng Hu, Zhengxia Zou, Changjie Fan, Yi Yuan, Xi Li
null
null
2,021
ijcai
Learning Groupwise Explanations for Black-Box Models
null
We study two user demands that are important when using explanations in practice: 1) understanding the overall model behavior faithfully with limited cognitive load and 2) predicting the model behavior accurately on unseen instances. We illustrate that the two user demands correspond to two major sub-processes in the human cognitive process and propose a unified framework to fulfill them simultaneously. Given a local explanation method, our framework jointly 1) learns a limited number of groupwise explanations that interpret the model behavior on most instances with high fidelity and 2) specifies the region in which each explanation applies. Experiments on six datasets demonstrate the effectiveness of our method.
Jingyue Gao, Xiting Wang, Yasha Wang, Yulan Yan, Xing Xie
null
null
2,021
ijcai
Deep Reinforcement Learning for Multi-contact Motion Planning of Hexapod Robots
null
Legged locomotion in a complex environment requires careful planning of the footholds of legged robots. In this paper, a novel Deep Reinforcement Learning (DRL) method is proposed to implement multi-contact motion planning for hexapod robots moving on uneven plum-blossom piles. First, the motion of hexapod robots is formulated as a Markov Decision Process (MDP) with a specified reward function. Second, a transition feasibility model is proposed for hexapod robots, which describes the feasibility of a state transition under the condition of satisfying kinematics and dynamics and in turn determines the rewards. Third, the footholds and Center-of-Mass (CoM) sequences are sampled from a diagonal Gaussian distribution, and the sequences are optimized by learning optimal policies with the designed DRL algorithm. Both simulation and experimental results on physical systems demonstrate the feasibility and efficiency of the proposed method. Videos are shown at https://videoviewpage.wixsite.com/mcrl.
Huiqiao Fu, Kaiqiang Tang, Peng Li, Wenqi Zhang, Xinpeng Wang, Guizhou Deng, Tao Wang, Chunlin Chen
null
null
2,021
ijcai
BAMBOO: A Multi-instance Multi-label Approach Towards VDI User Logon Behavior Modeling
null
Different from traditional on-premise VDI, the virtual desktops in DaaS (Desktop as a Service) are hosted in a public cloud, where virtual machines are charged based on usage. Accordingly, an adaptive power management system that can turn off spare virtual machines without sacrificing end-user experience is of significant customer value, as it can greatly reduce running costs. Generally, logon behavior modeling for VDI users serves as the key enabling technique for intelligent power management. Prior attempts model logon behavior in a user-dependent manner with tailored single-instance feature representations, ignoring the strong relationships among pool-sharing VDI users. In this paper, a novel formulation of VDI user logon behavior modeling is proposed that employs multi-instance multi-label (MIML) techniques. Specifically, each user is grouped with supporting users whose behaviors are jointly modeled in the feature space with multi-instance representations as well as in the output space with multi-label predictions. The resulting MIML formulation is optimized by adapting the popular MIML boosting procedure via balanced error-rate minimization. Experimental studies on real VDI customers' data clearly validate the effectiveness of the proposed MIML-based approach against state-of-the-art VDI user logon behavior modeling techniques.
Wenping Fan, Yao Zhang, Qichen Hao, Xinya Wu, Min-Ling Zhang
null
null
2,021
ijcai
Contrastive Model Inversion for Data-Free Knowledge Distillation
null
Model inversion, whose goal is to recover training data from a pre-trained model, has recently been proved feasible. However, existing inversion methods usually suffer from the mode collapse problem, where the synthesized instances are highly similar to each other and thus show limited effectiveness for downstream tasks such as knowledge distillation. In this paper, we propose Contrastive Model Inversion (CMI), in which the data diversity is explicitly modeled as an optimizable objective to alleviate the mode collapse issue. Our main observation is that, under the constraint of the same amount of data, higher data diversity usually indicates stronger instance discrimination. To this end, we introduce in CMI a contrastive learning objective that encourages the synthesized instances to be distinguishable from those already synthesized in previous batches. Experiments with pre-trained models on CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that CMI not only generates more visually plausible instances than the state of the art but also achieves significantly superior performance when the generated data are used for knowledge distillation. Code is available at https://github.com/zju-vipa/DataFree.
Gongfan Fang, Jie Song, Xinchao Wang, Chengchao Shen, Xingen Wang, Mingli Song
null
null
2,021
ijcai
Video Summarization via Label Distributions Dual-Reward
null
Reinforcement learning maps from a perceived state representation to actions and has been adopted to solve the video summarization problem. The reward is crucial for dealing with the video summarization task via reinforcement learning, since the reward signal defines the goal of video summarization. However, existing reward mechanisms in reinforcement learning cannot handle the ambiguity that appears frequently in video summarization, i.e., the diverse perceptions different people have of the same video. To solve this problem, in this paper label distributions are mapped from the CNN- and LSTM-based state representation to capture the subjectiveness of video summaries. The dual reward is designed by measuring the similarity between user score distributions and the generated label distributions. Not only the average score but also the variance of the subjective opinions is considered in summary generation. Experimental results on several benchmark datasets show that our proposed method outperforms other approaches under various settings.
Yongbiao Gao, Ning Xu, Xin Geng
null
null
2,021
ijcai
Method of Moments for Topic Models with Mixed Discrete and Continuous Features
null
Topic models are characterized by a latent class variable that represents the different topics. Traditionally, their observable variables are modeled as discrete variables like, for instance, in the prototypical latent Dirichlet allocation (LDA) topic model. In LDA, words in text documents are encoded by discrete count vectors with respect to some dictionary. The classical approach for learning topic models optimizes a likelihood function that is non-concave due to the presence of the latent variable. Hence, this approach mostly boils down to using search heuristics like the EM algorithm for parameter estimation. Recently, it was shown that topic models can be learned with strong algorithmic and statistical guarantees through Pearson's method of moments. Here, we extend this line of work to topic models that feature discrete as well as continuous observable variables (features). Moving beyond discrete variables as in LDA allows for more sophisticated features and a natural extension of topic models to other modalities than text, like, for instance, images. We provide algorithmic and statistical guarantees for the method of moments applied to the extended topic model that we corroborate experimentally on synthetic data. We also demonstrate the applicability of our model on real-world document data with embedded images that we preprocess into continuous state-of-the-art feature vectors.
Joachim Giesen, Paul Kahlmeyer, Sören Laue, Matthias Mitterreiter, Frank Nussbaum, Christoph Staudt, Sina Zarrieß
null
null
2,021
ijcai
On the Convergence of Stochastic Compositional Gradient Descent Ascent Method
null
The compositional minimax problem covers a range of machine learning models, such as the distributionally robust compositional optimization problem (see the formulation after this record). However, optimizing the compositional minimax problem remains understudied. In this paper, we develop a novel, efficient stochastic compositional gradient descent ascent method for optimizing the compositional minimax problem, and we establish the theoretical convergence rate of our proposed method. To the best of our knowledge, this is the first work achieving such a convergence rate for the compositional minimax problem. Finally, we conduct extensive experiments to demonstrate the effectiveness of our proposed method.
Hongchang Gao, Xiaoqian Wang, Lei Luo, Xinghua Shi
null
null
2,021
ijcai
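For concreteness, the problem class named in the abstract above can be written as follows (notation assumed here): a minimax problem whose inner argument is itself a composition of expectations, so that naive stochastic gradients of the outer function are biased:

```latex
\min_{x} \max_{y} \; f\big(g(x),\, y\big),
\qquad
g(x) = \mathbb{E}_{\xi}\big[g_{\xi}(x)\big],
\quad
f(u, y) = \mathbb{E}_{\zeta}\big[f_{\zeta}(u, y)\big],
```

where only stochastic samples g_ξ and f_ζ are available; the composition f(g(x), y) is what distinguishes this setting from standard stochastic minimax optimization.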
Learning Nash Equilibria in Zero-Sum Stochastic Games via Entropy-Regularized Policy Approximation
null
We explore the use of policy approximations to reduce the computational cost of learning Nash equilibria in zero-sum stochastic games. We propose a new Q-learning type algorithm that uses a sequence of entropy-regularized soft policies to approximate the Nash policy during the Q-function updates. We prove that under certain conditions, by updating the entropy regularization, the algorithm converges to a Nash equilibrium. We also demonstrate the proposed algorithm's ability to transfer previous training experiences, enabling the agents to adapt quickly to new environments. We provide a dynamic hyper-parameter scheduling scheme to further expedite convergence. Empirical results applied to a number of stochastic games verify that the proposed algorithm converges to the Nash equilibrium, while exhibiting a major speed-up over existing algorithms.
Yue Guan, Qifan Zhang, Panagiotis Tsiotras
null
null
2,021
ijcai
The Successful Ingredients of Policy Gradient Algorithms
null
Despite their remarkable success in recent years, the underlying mechanisms powering the advances of reinforcement learning are still poorly understood. In this paper, we identify these mechanisms - which we call ingredients - in on-policy policy gradient methods and empirically determine their impact on learning. To allow an equitable assessment, we conduct our experiments based on a unified and modular implementation. Our results underline the significance of recent algorithmic advances and demonstrate that reaching state-of-the-art performance may not require sophisticated algorithms but can also be accomplished by combining a few simple ingredients.
Sven Gronauer, Martin Gottwald, Klaus Diepold
null
null
2,021
ijcai
Towards Understanding Deep Learning from Noisy Labels with Small-Loss Criterion
null
Deep neural networks need large amounts of labeled data to achieve good performance. In real-world applications, labels are usually collected from non-experts, such as through crowdsourcing, to save cost, and are thus noisy. In the past few years, deep learning methods for dealing with noisy labels have been developed, many of which are based on the small-loss criterion (see the sketch after this record). However, there are few theoretical analyses explaining why these methods learn well from noisy labels. In this paper, we theoretically explain why the widely used small-loss criterion works. Based on the explanation, we reformalize the vanilla small-loss criterion to better tackle noisy labels. The experimental results verify our theoretical explanation and also demonstrate the effectiveness of the reformalization.
Xian-Jin Gui, Wei Wang, Zhang-Hao Tian
null
null
2,021
ijcai
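A minimal sketch of the small-loss criterion discussed above, as commonly used in noisy-label methods such as co-teaching: samples with small loss are treated as likely clean and kept for the parameter update. The keep ratio below is an illustrative hyper-parameter:

```python
import torch

def small_loss_selection(losses, forget_rate=0.2):
    """losses: (B,) per-example losses for the current mini-batch.
    Returns the indices of the examples used for backpropagation."""
    keep = int((1.0 - forget_rate) * losses.numel())
    idx = torch.argsort(losses)[:keep]  # smallest losses = likely clean
    return idx
```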
Riemannian Stochastic Recursive Momentum Method for non-Convex Optimization
null
We propose a stochastic recursive momentum method for Riemannian non-convex optimization that achieves a nearly optimal complexity for finding an ε-approximate solution with one sample (see the estimator after this record). The new algorithm requires one-sample gradient evaluations per iteration and does not require restarting with a large batch gradient, which is commonly used to obtain a faster rate. Extensive experimental results demonstrate the superiority of the proposed algorithm. Extensions to nonsmooth and constrained optimization settings are also discussed.
Andi Han, Junbin Gao
null
null
2,021
ijcai
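For reference, the Euclidean form of the stochastic recursive momentum (STORM) estimator that this method builds on is shown below; the Riemannian variant additionally retracts the iterates onto the manifold and transports d_{t-1} between tangent spaces:

```latex
d_t \;=\; \nabla f(x_t;\, \xi_t) \;+\; (1 - a_t)\,\big(d_{t-1} - \nabla f(x_{t-1};\, \xi_t)\big),
\qquad
x_{t+1} \;=\; x_t - \eta_t\, d_t,
```

using a single sample ξ_t per iteration, evaluated at both the current and the previous iterate; the correction term keeps the variance of d_t decaying without large batches.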
DA-GCN: A Domain-aware Attentive Graph Convolution Network for Shared-account Cross-domain Sequential Recommendation
null
Shared-account Cross-domain Sequential Recommendation (SCSR) is the task of recommending the next item based on a sequence of recorded user behaviors, where multiple users share a single account and their behaviors are available in multiple domains. Existing work on SCSR mainly relies on mining sequential patterns via RNN-based models, which are not expressive enough to capture the relationships among multiple entities. Moreover, all existing algorithms try to bridge the two domains via knowledge transfer in the latent space, leaving the explicit cross-domain graph structure unexploited. In this work, we propose a novel graph-based solution, namely DA-GCN, to address these challenges. Specifically, we first link the users and items in each domain as a graph. Then, we devise a domain-aware graph convolution network to learn user-specific node representations. To fully account for users' domain-specific preferences on items, two novel attention mechanisms are further developed to selectively guide the message passing process. Extensive experiments on two real-world datasets demonstrate the superiority of our DA-GCN method.
Lei Guo, Li Tang, Tong Chen, Lei Zhu, Quoc Viet Hung Nguyen, Hongzhi Yin
null
null
2,021
ijcai
Enabling Retrain-free Deep Neural Network Pruning Using Surrogate Lagrangian Relaxation
null
Network pruning is a widely used technique to reduce the computation cost and model size of deep neural networks. However, the typical three-stage pipeline, i.e., training, pruning, and retraining (fine-tuning), significantly increases the overall training time. In this paper, we develop a systematic weight-pruning optimization approach based on Surrogate Lagrangian Relaxation (SLR), which is tailored to overcome difficulties caused by the discrete nature of the weight-pruning problem while ensuring fast convergence. We further accelerate the convergence of SLR by using quadratic penalties. Model parameters obtained by SLR during the training phase are much closer to their optimal values than those obtained by other state-of-the-art methods. We evaluate the proposed method on image classification tasks using CIFAR-10 and ImageNet, object detection using COCO 2014, and lane detection with Ultra-Fast-Lane-Detection on the TuSimple dataset. Experimental results demonstrate that our SLR-based weight-pruning approach achieves a higher compression rate than state-of-the-art methods under the same accuracy requirement. It also achieves high model accuracy even at the hard-pruning stage without retraining, reducing the traditional three-stage pruning to two stages. Given a limited budget of retraining epochs, our approach quickly recovers model accuracy.
Deniz Gurevin, Mikhail Bragin, Caiwen Ding, Shanglin Zhou, Lynn Pepin, Bingbing Li, Fei Miao
null
null
2,021
ijcai
Hindsight Value Function for Variance Reduction in Stochastic Dynamic Environment
null
Policy gradient methods are appealing in deep reinforcement learning but suffer from the high variance of the gradient estimate. To reduce this variance, the state value function is commonly applied. However, the effect of the state value function becomes limited in stochastic dynamic environments, where unexpected state dynamics and rewards increase the variance. In this paper, we propose to replace the state value function with a novel hindsight value function, which leverages information from the future to reduce the variance of the gradient estimate in stochastic dynamic environments. In particular, to obtain an ideally unbiased gradient estimate, we propose an information-theoretic approach that optimizes the embeddings of the future to be independent of previous actions. In our experiments, we apply the proposed hindsight value function in stochastic dynamic environments, including discrete-action and continuous-action environments. Compared with the standard state value function, the proposed hindsight value function consistently reduces the variance, stabilizes the training, and improves the eventual policy.
Jiaming Guo, Rui Zhang, Xishan Zhang, Shaohui Peng, Qi Yi, Zidong Du, Xing Hu, Qi Guo, Yunji Chen
null
null
2,021
ijcai
InverseNet: Augmenting Model Extraction Attacks with Training Data Inversion
null
Cloud service providers, including Google, Amazon, and Alibaba, have now launched machine-learning-as-a-service (MLaaS) platforms, allowing clients to access sophisticated cloud-based machine learning models via APIs. Unfortunately, however, the commercial value of these models makes them alluring targets for theft, and their strategic position as part of the IT infrastructure of many companies makes them an enticing springboard for conducting further adversarial attacks. In this paper, we put forth a novel and effective attack strategy, dubbed InverseNet, that steals the functionality of black-box cloud-based models with only a small number of queries. The crux of the innovation is that, unlike existing model extraction attacks that rely on public datasets or adversarial samples, InverseNet constructs inversed training samples to increase the similarity between the extracted substitute model and the victim model. Further, only a small number of data samples with high confidence scores (rather than an entire dataset) are used to reconstruct the inversed dataset, which substantially reduces the attack cost. Extensive experiments conducted on three simulated victim models and Alibaba Cloud's commercially-available API demonstrate that InverseNet yields a model with significantly greater functional similarity to the victim model than the current state-of-the-art attacks at a substantially lower query budget.
Xueluan Gong, Yanjiao Chen, Wenbin Yang, Guanghao Mei, Qian Wang
null
null
2,021
ijcai
Behavior Mimics Distribution: Combining Individual and Group Behaviors for Federated Learning
null
Federated Learning (FL) has become an active and promising distributed machine learning paradigm. As a result of statistical heterogeneity, recent studies clearly show that the performance of popular FL methods (e.g., FedAvg) deteriorates dramatically due to the client drift caused by local updates. This paper proposes a novel Federated Learning algorithm (called IGFL), which leverages both Individual and Group behaviors to mimic the data distribution, thereby improving the ability to deal with heterogeneity. Unlike existing FL methods, our IGFL can be applied to both client and server optimization. As a by-product, we propose a new attention-based federated learning scheme in the server optimization of IGFL. To the best of our knowledge, this is the first work to incorporate attention mechanisms into federated optimization. We conduct extensive experiments and show that IGFL can significantly improve the performance of existing federated learning methods. Especially when the distributions of data among individuals are diverse, IGFL can improve the classification accuracy by about 13% compared with prior baselines.
Hua Huang, Fanhua Shang, Yuanyuan Liu, Hongying Liu
null
null
2,021
ijcai
Model-Based Reinforcement Learning for Infinite-Horizon Discounted Constrained Markov Decision Processes
null
In many real-world reinforcement learning (RL) problems, in addition to maximizing the objective, the learning agent has to maintain some necessary safety constraints. We formulate the problem of learning a safe policy as an infinite-horizon discounted Constrained Markov Decision Process (CMDP) with an unknown transition probability matrix, where the safety requirements are modeled as constraints on expected cumulative costs. We propose two model-based constrained reinforcement learning (CRL) algorithms for learning a safe policy: (i) the GM-CRL algorithm, which has access to a generative model, and (ii) the UC-CRL algorithm, which learns the model using an upper-confidence-style online exploration method. We characterize the sample complexity of these algorithms, i.e., the number of samples needed to ensure a desired level of accuracy with high probability, with respect to both objective maximization and constraint satisfaction.
Aria HasanzadeZonuzy, Dileep Kalathil, Srinivas Shakkottai
null
null
2,021
ijcai
State-Based Recurrent SPMNs for Decision-Theoretic Planning under Partial Observability
null
The sum-product network (SPN) has been extended to model sequence data with the recurrent SPN (RSPN) and to decision-making problems with sum-product-max networks (SPMNs). In this paper, we build on the concepts introduced by these extensions and present state-based recurrent SPMNs (S-RSPMNs) as a generalization of SPMNs to sequential decision-making problems where the state may not be perfectly observed. As with recurrent SPNs, S-RSPMNs utilize a repeatable template network to model sequences of arbitrary lengths. We present an algorithm for learning compact template structures by identifying unique belief states and the transitions between them through a state-matching process that utilizes augmented data. To our knowledge, this is the first data-driven approach that learns graphical models for planning under partial observability which can be solved efficiently. S-RSPMNs retain the linear solution complexity of SPMNs, and we demonstrate significant improvements in compactness of representation and in the run time of structure learning and inference in sequential domains.
Layton Hayes, Prashant Doshi, Swaraj Pawar, Hari Teja Tatavarti
null
null
2,021
ijcai
Fine-Grained Air Quality Inference via Multi-Channel Attention Model
null
In this paper, we study the problem of fine-grained air quality inference, which predicts the air quality level of any location from the air quality readings of nearby monitoring stations. We point out the importance of explicitly modeling both static and dynamic spatial correlations and consequently propose a novel multi-channel attention model (MCAM) that models static and dynamic spatial correlations as separate channels. The static channel combines attention mechanisms and graph-based spatial modeling via an adapted bilateral filtering technique, which considers not only locations' Euclidean distances but also the similarity of their geo-context features (see the weighting formula after this record). The dynamic channel learns stations' time-dependent spatial influence on a target location at each time step via long short-term memory (LSTM) networks and attention mechanisms. In addition, we introduce two novel ideas, atmospheric dispersion theories and the hysteretic nature of air pollutant dispersion, to better model the dynamic spatial correlation. We also devise a multi-channel graph convolutional fusion network to effectively fuse the graph outputs, along with other features, from both channels. Our extensive experiments on real-world benchmark datasets demonstrate that MCAM significantly outperforms state-of-the-art solutions.
Qilong Han, Dan Lu, Rui Chen
null
null
2,021
ijcai
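A hedged reading of the adapted bilateral filtering mentioned in the MCAM abstract above, following the classical bilateral-filter weight (the paper's exact adaptation may differ): the static weight of station s for target location ℓ decays with both spatial distance and geo-context dissimilarity,

```latex
w_{s \to \ell} \;\propto\;
\exp\!\left(-\frac{\lVert p_s - p_\ell \rVert^2}{2\sigma_d^2}\right)
\exp\!\left(-\frac{\lVert f_s - f_\ell \rVert^2}{2\sigma_f^2}\right),
```

where p are location coordinates (Euclidean distance), f are geo-context feature vectors, and σ_d, σ_f are bandwidth parameters; station readings are then aggregated with these weights.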
DEEPSPLIT: An Efficient Splitting Method for Neural Network Verification via Indirect Effect Analysis
null
We propose a novel, complete algorithm for the verification and analysis of feed-forward, ReLU-based neural networks. The algorithm, based on symbolic interval propagation, introduces a new method for determining split nodes that evaluates the indirect effect that splitting has on the relaxations of successor nodes. We combine this with a new, efficient linear-programming encoding of the splitting constraints to further improve the algorithm's performance. The resulting implementation, DeepSplit, achieved speedups of 1-2 orders of magnitude and 21-34% fewer timeouts compared to the current state-of-the-art toolkits.
Patrick Henriksen, Alessio Lomuscio
null
null
2,021
ijcai
Learning CNF Theories Using MDL and Predicate Invention
null
We revisit the problem of learning logical theories from examples, one of the most quintessential problems in machine learning. More specifically, we develop an approach to learn CNF-formulae from satisfiability. This is a setting in which the examples correspond to partial interpretations and an example is classified as positive when it is logically consistent with the theory. We present a novel algorithm, called Mistle -- Minimal SAT Theory Learner, for learning such theories. The distinguishing features are that 1) Mistle performs predicate invention and inverse resolution, 2) is based on the MDL principle to compress the data, and 3) combines this with frequent pattern mining to find the most interesting theories. The experiments demonstrate that Mistle can learn CNF theories accurately and works well in tasks involving compression and classification.
Arcchit Jain, Clément Gautrais, Angelika Kimmig, Luc De Raedt
null
null
2,021
ijcai
Interpretable Minority Synthesis for Imbalanced Classification
null
This paper proposes a novel oversampling approach that strives to balance the class priors for considerably imbalanced, high-dimensional data distributions. The crux of our approach lies in learning interpretable latent representations that model the synthesis mechanism of the minority samples using a generative adversarial network (GAN). A Bayesian regularizer is imposed to guide the GAN to extract a set of salient features that are either disentangled or intentionally entangled, with their interplay controlled by a prescribed structure defined with a human in the loop. As such, our GAN enjoys improved sample complexity and is able to synthesize high-quality minority samples even when the minority classes are extremely small during training. Empirical studies substantiate that our approach can empower simple classifiers to achieve superior imbalanced classification performance over state-of-the-art competitors and is robust across various imbalance settings. Code is released at github.com/fudonglin/IMSIC.
Yi He, Fudong Lin, Xu Yuan, Nian-Feng Tzeng
null
null
2,021
ijcai
Asynchronous Active Learning with Distributed Label Querying
null
Active learning tries to learn an effective model with the lowest labeling cost. Most existing active learning methods work in a synchronous way, implying that label querying can be performed only after the model update in each iteration. Since training models is usually time-consuming, this may lead to serious latency between two queries, especially in crowdsourcing environments where many online annotators work simultaneously. This significantly decreases labeling efficiency and strongly limits the application of active learning in real tasks. To overcome this challenge, we propose a multi-server multi-worker framework for asynchronous active learning in the distributed environment. By maintaining two shared pools of candidate queries and labeled data, respectively, the servers, workers, and annotators efficiently cooperate with each other without synchronization. Moreover, diverse sampling strategies from distributed workers are incorporated to select the most useful instances for model improvement. Both theoretical analysis and experimental study validate the effectiveness of the proposed approach.
Sheng-Jun Huang, Chen-Chen Zong, Kun-Peng Ning, Hai-Bo Ye
null
null
2,021
ijcai
Beyond the Spectrum: Detecting Deepfakes via Re-Synthesis
null
The rapid advances in deep generative models over the past years have led to highly realistic media, known as deepfakes, that are commonly indistinguishable from real media to human eyes. These advances make assessing the authenticity of visual data increasingly difficult and pose a misinformation threat to the trustworthiness of visual content in general. Although recent work has shown strong detection accuracy on such deepfakes, this success largely relies on identifying frequency artifacts in the generated images, which will not yield a sustainable detection approach as generative models continue evolving and closing the gap to real images. To overcome this issue, we propose a novel fake detection method that re-synthesizes testing images and extracts visual cues for detection. The re-synthesis procedure is flexible, allowing us to incorporate a series of visual tasks - we adopt super-resolution, denoising, and colorization as the re-synthesis. We demonstrate the improved effectiveness, cross-GAN generalization, and robustness against perturbations of our approach in a variety of detection scenarios involving multiple generators over the CelebA-HQ, FFHQ, and LSUN datasets. Source code is available at https://github.com/SSAW14/BeyondtheSpectrum.
Yang He, Ning Yu, Margret Keuper, Mario Fritz
null
null
2,021
ijcai
On the Neural Tangent Kernel of Deep Networks with Orthogonal Initialization
null
The prevailing thinking is that orthogonal weights are crucial to enforcing dynamical isometry and speeding up training. The increase in learning speed that results from orthogonal initialization in linear networks has been well proven. However, while the same is believed to hold for nonlinear networks when the dynamical isometry condition is satisfied, the training dynamics behind this contention have not been thoroughly explored. In this work, we study the dynamics of ultra-wide networks across a range of architectures, including Fully Connected Networks (FCNs) and Convolutional Neural Networks (CNNs) with orthogonal initialization (see the sketch after this record), via the neural tangent kernel (NTK). Through a series of propositions and lemmas, we prove that two NTKs, one corresponding to Gaussian weights and one to orthogonal weights, are equal when the network width is infinite. Further, during training, the NTK of an orthogonally initialized infinite-width network should theoretically remain constant. This suggests that orthogonal initialization cannot speed up training in the NTK (lazy training) regime, contrary to the prevailing thinking. To explore under what circumstances orthogonality can accelerate training, we conduct a thorough empirical investigation outside the NTK regime. We find that when the hyper-parameters are set so that the nonlinear activations operate in a linear regime, orthogonal initialization can improve the learning speed with a large learning rate or large depth.
Wei Huang, Weitao Du, Richard Yi Da Xu
null
null
2,021
ijcai
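A minimal sketch of the orthogonal weight initialization studied above, using PyTorch's built-in initializer (the model and gain are illustrative; the paper's experimental setup may differ):

```python
import torch.nn as nn

def orthogonal_init(model):
    """Apply (semi-)orthogonal initialization to linear and convolutional
    layers; for non-square weight matrices, orthogonal_ produces a
    semi-orthogonal matrix."""
    for m in model.modules():
        if isinstance(m, (nn.Linear, nn.Conv2d)):
            nn.init.orthogonal_(m.weight)
            if m.bias is not None:
                nn.init.zeros_(m.bias)

# Usage:
# net = nn.Sequential(nn.Linear(128, 128), nn.Tanh(), nn.Linear(128, 10))
# orthogonal_init(net)
```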
SalientSleepNet: Multimodal Salient Wave Detection Network for Sleep Staging
null
Sleep staging is fundamental for sleep assessment and disease diagnosis. Although previous attempts to classify sleep stages have achieved high classification performance, several challenges remain open: 1) how to effectively extract salient waves from multimodal sleep data; 2) how to capture the multi-scale transition rules among sleep stages; 3) how to adaptively seize the key role of a specific modality for sleep staging. To address these challenges, we propose SalientSleepNet, a multimodal salient wave detection network for sleep staging. Specifically, SalientSleepNet is a temporal fully convolutional network based on the $U^2$-Net architecture originally proposed for salient object detection in computer vision. It is mainly composed of two independent $U^2$-like streams that extract salient features from the multimodal data. Meanwhile, a multi-scale extraction module is designed to capture the multi-scale transition rules among sleep stages. In addition, a multimodal attention module is proposed to adaptively capture valuable information from multimodal data for a specific sleep stage. Experiments on two datasets demonstrate that SalientSleepNet outperforms state-of-the-art baselines. It is worth noting that this model has the fewest parameters among existing deep neural network models.
Ziyu Jia, Youfang Lin, Jing Wang, Xuehui Wang, Peiyi Xie, Yingbin Zhang
null
null
2,021
ijcai
Reinforcement Learning for Route Optimization with Robustness Guarantees
null
Application of deep learning to NP-hard combinatorial optimization problems is an emerging research trend, and a number of interesting approaches have been published over the last few years. In this work we address robust optimization, which is a more complex variant where a max-min problem is to be solved. We obtain robust solutions by solving the inner minimization problem exactly and apply Reinforcement Learning to learn a heuristic for the outer problem. The minimization term in the inner objective represents an obstacle to existing RL-based approaches, as its value depends on the full solution in a non-linear manner and cannot be evaluated for partial solutions constructed by the agent over the course of each episode. We overcome this obstacle by defining the reward in terms of the one-step advantage over a baseline policy whose role can be played by any fast heuristic for the given problem. The agent is trained to maximize the total advantage, which, as we show, is equivalent to the original objective. We validate our approach by solving min-max versions of standard benchmarks for the Capacitated Vehicle Routing and the Traveling Salesperson Problem, where our agents obtain near-optimal solutions and improve upon the baselines.
Tobias Jacobs, Francesco Alesiani, Gulcin Ermis
null
null
2,021
ijcai
RetCL: A Selection-based Approach for Retrosynthesis via Contrastive Learning
null
Retrosynthesis, the goal of which is to find a set of reactants for synthesizing a target product, is an emerging research area of deep learning. While existing approaches have shown promising results, they currently lack the ability to consider the availability (e.g., stability or purchasability) of the reactants or to generalize to unseen reaction templates (i.e., chemical reaction rules). In this paper, we propose a new approach that mitigates these issues by reformulating retrosynthesis as a problem of selecting reactants from a candidate set of commercially available molecules. To this end, we design an efficient reactant selection framework, named RetCL (retrosynthesis via contrastive learning), for scoring all of the candidate molecules based on selection scores computed by graph neural networks. For learning the score functions, we also propose a novel contrastive training scheme with hard negative mining. Extensive experiments demonstrate the benefits of the proposed selection-based approach. For example, when all 671k reactants in the USPTO database are given as candidates, our RetCL achieves a top-1 exact match accuracy of 71.3% on the USPTO-50k benchmark, while a recent transformer-based approach achieves 59.6%. We also demonstrate that RetCL generalizes well to unseen templates in various settings, in contrast to template-based approaches.
Hankook Lee, Sungsoo Ahn, Seung-Woo Seo, You Young Song, Eunho Yang, Sung Ju Hwang, Jinwoo Shin
null
null
2,021
ijcai
Topological Uncertainty: Monitoring Trained Neural Networks through Persistence of Activation Graphs
null
Although neural networks are capable of reaching astonishing performance in a wide variety of contexts, properly training networks on complicated tasks requires expertise and can be expensive from a computational perspective. In industrial applications, data coming from an open-world setting might widely differ from the benchmark datasets on which a network was trained. Being able to monitor the presence of such variations without retraining the network is of crucial importance. In this paper, we develop a method to monitor trained neural networks based on the topological properties of their activation graphs. To each new observation, we assign a Topological Uncertainty, a score that aims to assess the reliability of the predictions by investigating the whole network instead of only its final layer, as is typically done by practitioners. Our approach works entirely at the post-training level and does not require any assumption on the network architecture, the optimization scheme, or the use of data augmentation or auxiliary datasets; it can be faithfully applied to a large range of network architectures and data types. We showcase experimentally the potential of Topological Uncertainty in the context of trained network selection, Out-Of-Distribution detection, and shift detection, on both synthetic and real datasets of images and graphs.
Théo Lacombe, Yuichi Ike, Mathieu Carrière, Frédéric Chazal, Marc Glisse, Yuhei Umeda
null
null
2,021
ijcai
Towards Scalable Complete Verification of Relu Neural Networks via Dependency-based Branching
null
We introduce an efficient method for the complete verification of ReLU-based feed-forward neural networks. The method implements branching on the ReLU states on the basis of a notion of dependency between the nodes. This results in dividing the original verification problem into a set of sub-problems whose MILP formulations require fewer integrality constraints. We evaluate the method on all of the ReLU-based fully connected networks from the first competition for neural network verification. The experimental results obtained show 145% performance gains over the present state-of-the-art in complete verification.
Panagiotis Kouvaros, Alessio Lomuscio
null
null
2,021
ijcai
Comparing Kullback-Leibler Divergence and Mean Squared Error Loss in Knowledge Distillation
null
Knowledge distillation (KD), which transfers knowledge from a cumbersome teacher model to a lightweight student model, has been investigated for designing efficient neural architectures. Generally, the objective function of KD is the Kullback-Leibler (KL) divergence loss between the softened probability distributions of the teacher model and the student model with temperature scaling hyperparameter τ (see the sketch after this record). Despite its widespread use, few studies have discussed how such softening influences generalization. Here, we theoretically show that the KL divergence loss focuses on logit matching as τ increases and on label matching as τ goes to 0, and we empirically show that logit matching is positively correlated with performance improvement in general. From this observation, we consider an intuitive KD loss function, the mean squared error (MSE) between the logit vectors, so that the student model can directly learn the logits of the teacher model. The MSE loss outperforms the KL divergence loss, which we explain by the difference in penultimate-layer representations between the two losses. Furthermore, we show that sequential distillation can improve performance and that KD, particularly when using the KL divergence loss with small τ, mitigates label noise. The code to reproduce the experiments is publicly available at https://github.com/jhoon-oh/kd_data/.
Taehyeon Kim, Jaehoon Oh, Nak Yil Kim, Sangwook Cho, Se-Young Yun
null
null
2,021
ijcai
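A minimal PyTorch sketch of the two losses compared in the abstract above; the temperature value is illustrative:

```python
import torch.nn.functional as F

def kd_kl(student_logits, teacher_logits, tau=4.0):
    """Standard temperature-scaled KD loss (KL form). The tau**2 factor
    keeps gradient magnitudes comparable across temperatures."""
    log_p = F.log_softmax(student_logits / tau, dim=1)
    q = F.softmax(teacher_logits / tau, dim=1)
    return tau ** 2 * F.kl_div(log_p, q, reduction='batchmean')

def kd_mse(student_logits, teacher_logits):
    """Direct logit matching via MSE, the alternative considered in the
    paper; as tau grows, kd_kl increasingly behaves like this loss."""
    return F.mse_loss(student_logits, teacher_logits)
```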
Knowledge Consolidation based Class Incremental Online Learning with Limited Data
null
We propose a novel approach for class incremental online learning in a limited data setting. This problem setting is challenging because of the following constraints: (1) Classes are given incrementally, which necessitates a class incremental learning approach; (2) Data for each class is given in an online fashion, i.e., each training example is seen only once during training; (3) Each class has very few training examples; and (4) We do not use or assume access to any replay/memory to store data from previous classes. Therefore, in this setting, we have to handle twofold problems of catastrophic forgetting and overfitting. In our approach, we learn robust representations that are generalizable across tasks without suffering from the problems of catastrophic forgetting and overfitting to accommodate future classes with limited samples. Our proposed method leverages the meta-learning framework with knowledge consolidation. The meta-learning framework helps the model for rapid learning when samples appear in an online fashion. Simultaneously, knowledge consolidation helps to learn a robust representation against forgetting under online updates to facilitate future learning. Our approach significantly outperforms other methods on several benchmarks.
Mohammed Asad Karim, Vinay Kumar Verma, Pravendra Singh, Vinay Namboodiri, Piyush Rai
null
null
2,021
ijcai
TextGTL: Graph-based Transductive Learning for Semi-supervised Text Classification via Structure-Sensitive Interpolation
null
Compared with traditional sequential learning models, graph-based neural networks exhibit excellent properties when encoding text, such as the capacity to capture global and local information simultaneously. Especially in the semi-supervised scenario, propagating information along the edges can effectively alleviate the sparsity of labeled data. In this paper, going beyond the existing architecture of heterogeneous word-document graphs, we investigate for the first time how to construct lightweight non-heterogeneous graphs based on different linguistic information to better serve free-text representation learning. We then propose a novel semi-supervised framework for text classification that refines graph topology under theoretical guidance and shares information across different text graphs, namely Text-oriented Graph-based Transductive Learning (TextGTL). TextGTL also performs attribute space interpolation based on dense substructures in graphs to predict low-entropy labels with high-quality feature nodes for data augmentation. To verify the effectiveness of TextGTL, we conduct extensive experiments on various benchmark datasets, observing significant performance gains over conventional heterogeneous graphs. In addition, we design ablation studies to examine the validity of each component in TextGTL.
Chen Li, Xutan Peng, Hao Peng, Jianxin Li, Lihong Wang
null
null
2,021
ijcai
Regularising Knowledge Transfer by Meta Functional Learning
null
Machine learning classifiers' capability is largely dependent on the scale of available training data and is limited by model overfitting in data-scarce learning tasks. To address this problem, this work proposes a novel Meta Functional Learning (MFL) approach that meta-learns a generalisable functional model from data-rich tasks whilst simultaneously regularising knowledge transfer to data-scarce tasks. The MFL computes meta-knowledge on functional regularisation that generalises to different learning tasks, by which functional training on limited labelled data promotes more discriminative functions to be learned. Moreover, we adopt an Iterative Update strategy on MFL (MFL-IU), which improves the knowledge transfer regularisation of MFL by progressively learning the functional regularisation in knowledge transfer. Experiments on three Few-Shot Learning (FSL) benchmarks (miniImageNet, CIFAR-FS and CUB) show that meta functional learning for regularising knowledge transfer can benefit FSL classifiers.
Pan Li, Yanwei Fu, Shaogang Gong
null
null
2,021
ijcai
An Adaptive News-Driven Method for CVaR-sensitive Online Portfolio Selection in Non-Stationary Financial Markets
null
CVaR-sensitive online portfolio selection (CS-OLPS) is increasingly important for investors because of its effectiveness in minimizing conditional value at risk (CVaR) and controlling extreme losses. However, the non-stationary nature of financial markets makes it very difficult to address the CS-OLPS problem effectively. To address the CS-OLPS problem in non-stationary markets, we propose an effective news-driven method, named CAND, which adaptively exploits news to determine the adjustment tendency and adjustment scale for tracking the dynamic optimal portfolio with minimal CVaR in each trading round. In addition, we devise a filtering mechanism to reduce the errors caused by noisy news, further improving CAND's effectiveness. We rigorously prove a sub-linear regret for CAND. Extensive experiments on three real-world datasets demonstrate CAND's superiority over state-of-the-art portfolio methods in terms of returns and risks.
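For concreteness, a minimal sketch of the risk quantity CAND tracks: the empirical conditional value at risk of a loss sample, i.e., the mean of the worst alpha-fraction of losses. The function empirical_cvar and its tail-averaging implementation are illustrative assumptions, not the paper's exact estimator.

import numpy as np

def empirical_cvar(losses, alpha=0.05):
    # CVaR at level alpha: the mean of the worst alpha-fraction of losses.
    losses = np.sort(np.asarray(losses))[::-1]      # largest losses first
    k = max(1, int(np.ceil(alpha * len(losses))))   # size of the tail
    return losses[:k].mean()

# Usage: losses of a candidate portfolio over past trading rounds.
rng = np.random.default_rng(0)
print(empirical_cvar(rng.normal(0.0, 0.02, size=1000), alpha=0.05))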
Qianqiao Liang, Mengying Zhu, Xiaolin Zheng, Yan Wang
null
null
2,021
ijcai
Epsilon Best Arm Identification in Spectral Bandits
null
We propose an analysis of Probably Approximately Correct (PAC) identification of an ϵ-best arm in graph bandit models with Gaussian distributions. We consider finite but potentially very large bandit models where the set of arms is endowed with a graph structure, and we assume that the arms' expectations μ are smooth with respect to this graph, with smoothness parameter R. Our goal is to identify an arm whose expectation is at most ϵ below the largest of all means. We focus on the fixed-confidence setting: given a risk parameter δ, we consider sequential strategies that yield an ϵ-optimal arm with probability at least 1-δ. All such strategies use at least T*(μ)log(1/δ) samples. We identify the complexity term T*(μ) as the solution of a min-max problem for which we give a game-theoretic analysis and an approximation procedure. This procedure is the key element required by the asymptotically optimal Track-and-Stop strategy.
Tomáš Kocák, Aurélien Garivier
null
null
2,021
ijcai
Learning to Learn Personalized Neural Network for Ventricular Arrhythmias Detection on Intracardiac EGMs
null
Detecting life-threatening ventricular arrhythmias (VAs) on intracardiac electrograms (IEGMs) is essential for Implantable Cardioverter Defibrillators (ICDs). However, current VA detection methods rely on a variety of heuristic detection criteria and require frequent manual intervention to personalize criteria parameters for each patient to achieve accurate detection. In this work, we propose a one-dimensional convolutional neural network (1D-CNN) based detection of life-threatening VAs on IEGMs. The network architecture is elaborately designed to satisfy the extreme resource constraints of the ICD while maintaining high detection accuracy. We further propose a meta-learning algorithm with a novel patient-wise training task formatting strategy to personalize the 1D-CNN. The algorithm generates a well-generalized model initialization containing across-patient knowledge, and performs a quick adaptation of the model to the specific patient's IEGMs. In this way, a new patient can be immediately assigned personalized 1D-CNN model parameters using limited input data. Compared with the conventional VA detection method, the proposed method achieves 2.2% higher sensitivity for detecting VA rhythms and 8.6% higher specificity for non-VA rhythms.
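A hedged sketch of the personalization step: Tiny1DCNN is a hypothetical stand-in for the paper's resource-constrained 1D-CNN, and personalize performs the generic few-step fine-tuning from a meta-learned initialization used by MAML-style algorithms; the patient-wise task formatting strategy itself is not reproduced here.

import torch
import torch.nn as nn

class Tiny1DCNN(nn.Module):
    # Hypothetical compact 1D-CNN for VA / non-VA classification of IEGM segments.
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, 8, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, n_classes))

    def forward(self, x):
        return self.net(x)

def personalize(meta_model, x_support, y_support, steps=5, lr=1e-2):
    # Adapt a meta-initialized model to one patient's few labeled IEGMs.
    model = Tiny1DCNN()
    model.load_state_dict(meta_model.state_dict())  # start from the meta-initialization
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x_support), y_support).backward()
        opt.step()
    return model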
Zhenge Jia, Zhepeng Wang, Feng Hong, Lichuan PING, Yiyu Shi, Jingtong Hu
null
null
2,021
ijcai
On Guaranteed Optimal Robust Explanations for NLP Models
null
We build on abduction-based explanations for machine learning and develop a method for computing local explanations for neural network models in natural language processing (NLP). Our explanations comprise a subset of the words of the input text that satisfies two key features: optimality w.r.t. a user-defined cost function, such as the length of explanation, and robustness, in that they ensure prediction invariance for any bounded perturbation in the embedding space of the left-out words. We present two solution algorithms, respectively based on implicit hitting sets and maximum universal subsets, introducing a number of algorithmic improvements to speed up convergence of hard instances. We show how our method can be configured with different perturbation sets in the embedded space and used to detect bias in predictions by enforcing include/exclude constraints on biased terms, as well as to enhance existing heuristic-based NLP explanation frameworks such as Anchors. We evaluate our framework on three widely used sentiment analysis tasks and texts of up to 100 words from SST, Twitter and IMDB datasets, demonstrating the effectiveness of the derived explanations.
Emanuele La Malfa, Rhiannon Michelmore, Agnieszka M. Zbrzezny, Nicola Paoletti, Marta Kwiatkowska
null
null
2,021
ijcai
Pairwise Half-graph Discrimination: A Simple Graph-level Self-supervised Strategy for Pre-training Graph Neural Networks
null
Self-supervised learning has gradually emerged as a powerful technique for graph representation learning. However, transferable, generalizable, and robust representation learning on graph data remains a challenge for pre-training graph neural networks. In this paper, we propose a simple and effective self-supervised pre-training strategy, named Pairwise Half-graph Discrimination (PHD), that explicitly pre-trains a graph neural network at the graph level. PHD is designed as a simple binary classification task that discriminates whether two half-graphs come from the same source. Experiments demonstrate that PHD is an effective pre-training strategy that offers comparable or superior performance on 13 graph classification tasks compared with state-of-the-art strategies, and achieves notable improvements when combined with node-level strategies. Moreover, visualization of the learned representations reveals that the PHD strategy indeed empowers the model to learn graph-level knowledge such as molecular scaffolds. These results establish PHD as a powerful and effective self-supervised learning strategy for graph-level representation learning.
Pengyong Li, Jun Wang, Ziliang Li, Yixuan Qiao, Xianggen Liu, Fei Ma, Peng Gao, Sen Song, Guotong Xie
null
null
2,021
ijcai
SHPOS: A Theoretical Guaranteed Accelerated Particle Optimization Sampling Method
null
Recently, the Stochastic Particle Optimization Sampling (SPOS) method was proposed to address the particle-collapsing pitfall of deterministic Particle Variational Inference methods by utilizing stochastic overdamped Langevin dynamics to enhance exploration. In this paper, we propose an accelerated particle optimization sampling method called Stochastic Hamiltonian Particle Optimization Sampling (SHPOS). Compared to the first-order dynamics used in SPOS, SHPOS adopts augmented second-order dynamics, which involve an extra momentum term to achieve acceleration. We establish a non-asymptotic convergence analysis for SHPOS and show that it enjoys a faster convergence rate than SPOS. Besides, we also propose a variance-reduced stochastic gradient variant of SHPOS for tasks with large-scale datasets and complex models. Experiments on both synthetic and real data validate our theory and demonstrate the superiority of SHPOS over the state-of-the-art.
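A heavily hedged sketch of the kind of momentum-augmented (second-order) particle update SHPOS builds on, in the spirit of underdamped Langevin dynamics; grad_log_p and particles_interact are assumed callables, and the paper's exact dynamics, interaction kernel, and discretization differ.

import numpy as np

def shpos_step(theta, v, grad_log_p, particles_interact, gamma=1.0, eta=0.01):
    # theta, v: (n_particles, dim) positions and momenta.
    # grad_log_p: (possibly stochastic) gradient of the log target density.
    # particles_interact: repulsive term that keeps particles from collapsing.
    drift = grad_log_p(theta) + particles_interact(theta)
    noise = np.sqrt(2.0 * gamma * eta) * np.random.randn(*v.shape)
    v = v - eta * (gamma * v - drift) + noise   # momentum update: friction + drift + noise
    theta = theta + eta * v                     # position follows the momentum
    return theta, v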
Zhijian Li, Chao Zhang, Hui Qian, Xin Du, Lingwei Peng
null
null
2,021
ijcai
Residential Electric Load Forecasting via Attentive Transfer of Graph Neural Networks
null
Accurate short-term electric load forecasting is critical for the safe and economical operation of modern electric power systems. Electric load forecasting can be formulated as a multi-variate time series problem. Residential houses in the same neighborhood may be affected by similar factors and share some latent spatial dependencies. However, most existing works on electric load forecasting fail to explore such dependencies. In recent years, graph neural networks (GNNs) have shown impressive success in modeling such dependencies, but GNN-based models usually require a large amount of training data. Only a minimal amount of data may be available to train a reliable forecasting model for houses in a new neighborhood, while a large amount of historical data collected from other houses can be leveraged to improve the new neighborhood's prediction performance. In this paper, we propose an attentive transfer learning-based GNN model that utilizes the learned prior knowledge to improve the learning process in a new area. The transfer is achieved by an attention network, which avoids negative transfer by leveraging knowledge from multiple sources. Extensive experiments have been conducted on real-world datasets. Results show that the proposed framework consistently outperforms baseline models in different areas.
Weixuan Lin, Di Wu
null
null
2,021
ijcai
Graph Filter-based Multi-view Attributed Graph Clustering
null
Graph clustering has become an important research topic due to the proliferation of graph data. However, existing methods suffer from two major drawbacks. On the one hand, most methods can not simultaneously exploit attribute and graph structure information. On the other hand, most methods are incapable of handling multi-view data which contain sets of different features and graphs. In this paper, we propose a novel Multi-view Attributed Graph Clustering (MvAGC) method, which is simple yet effective. Firstly, a graph filter is applied to features to obtain a smooth representation without the need of learning the parameters of neural networks. Secondly, a novel strategy is designed to select a few anchor points, so as to reduce the computation complexity. Thirdly, a new regularizer is developed to explore high-order neighborhood information. Our extensive experiments indicate that our method works surprisingly well with respect to state-of-the-art deep neural network methods. The source code is available at https://github.com/sckangz/MvAGC.
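For concreteness, a minimal sketch of the parameter-free low-pass filtering step, assuming a dense NumPy adjacency: features are smoothed by repeatedly applying (I - 0.5 * L_sym), a standard choice in this line of work; the paper's exact filter order and the anchor-based clustering steps are not shown.

import numpy as np

def graph_filter_smooth(X, A, k=2):
    # X: (n, d) node features; A: (n, n) adjacency matrix; k: filter order.
    n = A.shape[0]
    deg = A.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    L = np.eye(n) - d_inv_sqrt @ A @ d_inv_sqrt   # symmetric normalized Laplacian
    H = np.eye(n) - 0.5 * L                       # low-pass graph filter
    for _ in range(k):
        X = H @ X                                 # k-th order smoothing
    return X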
Zhiping Lin, Zhao Kang
null
null
2,021
ijcai
Smart Contract Vulnerability Detection: From Pure Neural Network to Interpretable Graph Feature and Expert Pattern Fusion
null
Smart contracts hold digital coins worth billions of dollars, so their security issues have drawn extensive attention in the past years. For smart contract vulnerability detection, conventional methods rely heavily on fixed expert rules, leading to low accuracy and poor scalability. Recent deep learning approaches alleviate this issue but fail to encode useful expert knowledge. In this paper, we explore combining deep learning with expert patterns in an explainable fashion. Specifically, we develop automatic tools to extract expert patterns from the source code. We then cast the code into a semantic graph to extract deep graph features. Thereafter, the global graph feature and local expert patterns are fused to cooperatively approach the final prediction, while yielding interpretable weights. Experiments are conducted on all available smart contracts with source code on two platforms, Ethereum and VNT Chain. Empirically, our system significantly outperforms state-of-the-art methods. Our code is released.
Zhenguang Liu, Peng Qian, Xiang Wang, Lei Zhu, Qinming He, Shouling Ji
null
null
2,021
ijcai
Graph Entropy Guided Node Embedding Dimension Selection for Graph Neural Networks
null
Graph representation learning has achieved great success in many areas, including e-commerce, chemistry, and biology. However, the fundamental problem of choosing the appropriate dimension of node embeddings for a given graph remains unsolved. The commonly used strategies for Node Embedding Dimension Selection (NEDS), based on grid search or empirical knowledge, suffer from heavy computation and poor model performance. In this paper, we revisit NEDS from the perspective of the minimum entropy principle and propose a novel Minimum Graph Entropy (MinGE) algorithm for NEDS on graph data. Specifically, MinGE considers both feature entropy and structure entropy on graphs, carefully designed according to the characteristics of the rich information they carry. The feature entropy, which assumes the embeddings of adjacent nodes to be more similar, connects node features and link topology on graphs. The structure entropy takes the normalized degree as the basic unit to further measure the higher-order structure of graphs. Based on them, we design MinGE to directly calculate the ideal node embedding dimension for any graph. Finally, comprehensive experiments with popular Graph Neural Networks (GNNs) on benchmark datasets demonstrate the effectiveness and generalizability of our proposed MinGE.
Gongxu Luo, Jianxin Li, Hao Peng, Carl Yang, Lichao Sun, Philip S. Yu, Lifang He
null
null
2,021
ijcai
Stochastic Actor-Executor-Critic for Image-to-Image Translation
null
Training a model-free deep reinforcement learning model to solve image-to-image translation is difficult since it involves high-dimensional continuous state and action spaces. In this paper, we draw inspiration from the recent success of the maximum entropy reinforcement learning framework designed for challenging continuous control problems to develop stochastic policies over high dimensional continuous spaces including image representation, generation, and control simultaneously. Central to this method is the Stochastic Actor-Executor-Critic (SAEC) which is an off-policy actor-critic model with an additional executor to generate realistic images. Specifically, the actor focuses on the high-level representation and control policy by a stochastic latent action, as well as explicitly directs the executor to generate low-level actions to manipulate the state. Experiments on several image-to-image translation tasks have demonstrated the effectiveness and robustness of the proposed SAEC when facing high-dimensional continuous space problems.
Ziwei Luo, Jing Hu, Xin Wang, Siwei Lyu, Bin Kong, Youbing Yin, Qi Song, Xi Wu
null
null
2,021
ijcai
Hierarchical Temporal Multi-Instance Learning for Video-based Student Learning Engagement Assessment
null
Video-based automatic assessment of a student's learning engagement on the fly can provide immense value for delivering personalized instructional services, a vehicle particularly important for massive online education. To train such an assessor, a major challenge lies in the collection of sufficient labels at the appropriate temporal granularity, since a learner's engagement status may continuously change throughout a study session. Supplying labels at either frame or clip level incurs a high annotation cost. To overcome this challenge, this paper proposes a novel hierarchical multiple instance learning (MIL) solution, which only requires labels anchored on full-length videos to learn to assess student engagement at an arbitrary temporal granularity and for an arbitrary duration in a study session. The hierarchical model mainly comprises a bottom module and a top module, respectively dedicated to learning the latent relationship between a clip and its constituent frames and that between a video and its constituent clips, under the training-stage constraint that the average engagement of the local clips equals the video-level label. To verify the effectiveness of our method, we compare the performance of the proposed approach with that of several state-of-the-art peer solutions through extensive experiments.
Jiayao Ma, Xinbo Jiang, Songhua Xu, Xueying Qin
null
null
2,021
ijcai
Transfer Learning via Optimal Transportation for Integrative Cancer Patient Stratification
null
The stratification of early-stage cancer patients for the prediction of clinical outcome is a challenging task, since cancer is associated with various molecular aberrations. A single biomarker often cannot provide sufficient information to stratify early-stage patients effectively. Understanding the complex mechanisms behind cancer development calls for exploiting biomarkers from multiple modalities of data, such as histopathology images and genomic data. The integrative analysis of these biomarkers sheds light on cancer diagnosis, subtyping, and prognosis. Another difficulty is that labels for early-stage cancer patients are scarce and not reliable enough for predicting survival times. Given the fact that different cancer types share some commonalities, we explore whether the knowledge learned from one cancer type can be utilized to improve prognosis accuracy for another cancer type. We propose a novel unsupervised multi-view transfer learning algorithm to simultaneously analyze multiple biomarkers in different cancer types. We integrate multiple views using non-negative matrix factorization and formulate the transfer learning model based on optimal transport theory to align features of different cancer types. We evaluate the stratification performance on three early-stage cancers from the Cancer Genome Atlas (TCGA) project. Compared with other benchmark methods, our framework achieves superior accuracy for patient outcome prediction.
Ziyu Liu, Wei Shao, Jie Zhang, Min Zhang, Kun Huang
null
null
2,021
ijcai
Adversarial Spectral Kernel Matching for Unsupervised Time Series Domain Adaptation
null
Unsupervised domain adaptation (UDA) has received increasing attention since it does not require labels in the target domain. Most existing UDA methods learn domain-invariant features by minimizing a discrepancy distance computed by a certain metric between domains. However, these discrepancy-based methods cannot be robustly applied to unsupervised time series domain adaptation (UTSDA), because their discrepancy metrics contain only low-order and local statistics, which have limited expressiveness for time series distributions and therefore cause domain matching to fail. In fact, real-world time series typically follow non-local distributions, i.e., with non-stationary and non-monotonic statistics. In this paper, we propose an Adversarial Spectral Kernel Matching (AdvSKM) method, where a hybrid spectral kernel network is specifically designed as the inner kernel to reform the Maximum Mean Discrepancy (MMD) metric for UTSDA. The hybrid spectral kernel network can precisely characterize non-stationary and non-monotonic statistics in time series distributions. Embedding the hybrid spectral kernel network into MMD not only guarantees a precise discrepancy metric but also benefits domain matching. Besides, the differentiable architecture of the spectral kernel network enables adversarial kernel learning, which brings more discriminative expressiveness to discrepancy matching. The results of extensive experiments on several real-world UTSDA tasks verify the effectiveness of our proposed method.
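For context, a minimal sketch of the MMD backbone that AdvSKM reforms: the biased V-statistic estimator with a plug-in kernel. In AdvSKM the fixed RBF kernel below would be replaced by a learnable hybrid spectral kernel network trained adversarially (maximized w.r.t. the kernel, minimized w.r.t. the feature extractor); rbf and mmd2 are illustrative names.

import torch

def rbf(x, y, sigma=1.0):
    # Gaussian kernel matrix between sample sets x (n, d) and y (m, d).
    return torch.exp(-torch.cdist(x, y) ** 2 / (2.0 * sigma ** 2))

def mmd2(x, y, kernel=rbf):
    # Biased (V-statistic) estimate of the squared Maximum Mean Discrepancy.
    return kernel(x, x).mean() + kernel(y, y).mean() - 2.0 * kernel(x, y).mean()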
Qiao Liu, Hui Xue
null
null
2,021
ijcai
Multi-Cause Effect Estimation with Disentangled Confounder Representation
null
One fundamental problem in causality learning is to estimate the causal effects of one or multiple treatments (e.g., medicines in the prescription) on an important outcome (e.g., cure of a disease). One major challenge of causal effect estimation is the existence of unobserved confounders -- the unobserved variables that affect both the treatments and the outcome. Recent studies have shown that by modeling how instances are assigned with different treatments together, the patterns of unobserved confounders can be captured through their learned latent representations. However, the interpretability of the representations in these works is limited. In this paper, we focus on the multi-cause effect estimation problem from a new perspective by learning disentangled representations of confounders. The disentangled representations not only facilitate the treatment effect estimation but also strengthen the understanding of causality learning process. Experimental results on both synthetic and real-world datasets show the superiority of our proposed framework from different aspects.
Jing Ma, Ruocheng Guo, Aidong Zhang, Jundong Li
null
null
2,021
ijcai
Evaluating Relaxations of Logic for Neural Networks: A Comprehensive Study
null
Symbolic knowledge can provide crucial inductive bias for training neural models, especially in low data regimes. A successful strategy for incorporating such knowledge involves relaxing logical statements into sub-differentiable losses for optimization. In this paper, we study the question of how best to relax logical expressions that represent labeled examples and knowledge about a problem; we focus on sub-differentiable t-norm relaxations of logic. We present theoretical and empirical criteria for characterizing which relaxation would perform best in various scenarios. In our theoretical study driven by the goal of preserving tautologies, the Lukasiewicz t-norm performs best. However, in our empirical analysis on the text chunking and digit recognition tasks, the product t-norm achieves best predictive performance. We analyze this apparent discrepancy, and conclude with a list of best practices for defining loss functions via logic.
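For concreteness, a sketch of the t-norm relaxations of conjunction that such studies compare, along with the loss shapes they induce: the product t-norm leads to a cross-entropy-like logarithmic loss, while the Lukasiewicz t-norm leads to a linear one. Function names are ours.

import torch

# Relaxations of "a AND b" for truth values a, b in [0, 1].
def t_godel(a, b):        return torch.minimum(a, b)
def t_product(a, b):      return a * b
def t_lukasiewicz(a, b):  return torch.clamp(a + b - 1.0, min=0.0)

# Turning a relaxed truth value into a loss to minimize:
def product_loss(truth):      return -torch.log(truth.clamp_min(1e-8))  # log-scale penalty
def lukasiewicz_loss(truth):  return 1.0 - truth                        # linear penalty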
Mattia Medina Grespan, Ashim Gupta, Vivek Srikumar
null
null
2,021
ijcai
Details (Don't) Matter: Isolating Cluster Information in Deep Embedded Spaces
null
Deep clustering techniques combine representation learning with clustering objectives to improve their performance. Among existing deep clustering techniques, autoencoder-based methods are the most prevalent. While they achieve promising clustering results, they suffer from an inherent conflict between preserving details, as expressed by the reconstruction loss, and finding similar groups by ignoring details, as expressed by the clustering loss. This conflict leads to brittle training procedures, dependence on trade-off hyperparameters, and less interpretable results. We propose ACe/DeC, a framework compatible with Autoencoder Centroid based Deep Clustering methods that automatically learns a latent representation consisting of two separate spaces. The clustering space captures all cluster-specific information and the shared space explains general variation in the data. This separation resolves the above-mentioned conflict and allows our method to learn both detailed reconstructions and cluster-specific abstractions. We evaluate our framework with extensive experiments to show several benefits: (1) cluster performance – on various data sets we outperform relevant baselines; (2) no hyperparameter tuning – this improved performance is achieved without introducing new clustering-specific hyperparameters; (3) interpretability – isolating the cluster-specific information in a separate space aids data exploration and the interpretation of clustering results; and (4) dimensionality of the embedded space – we automatically learn a low-dimensional space for clustering. Our ACe/DeC framework isolates cluster information and increases stability and interpretability while improving cluster performance.
Lukas Miklautz, Lena G. M. Bauer, Dominik Mautz, Sebastian Tschiatschek, Christian Böhm, Claudia Plant
null
null
2,021
ijcai
TIDOT: A Teacher Imitation Learning Approach for Domain Adaptation with Optimal Transport
null
Using the principle of imitation learning and the theory of optimal transport, we propose in this paper a novel model for unsupervised domain adaptation named Teacher Imitation Domain Adaptation with Optimal Transport (TIDOT). Our model includes two cooperative agents: a teacher and a student. The former is trained to be an expert on labeled data in the source domain, whilst the latter aims to work with unlabeled data in the target domain. More specifically, optimal transport is applied to quantify the sum of the distance between embedded distributions of the source and target data in the joint space and the distance between the predictive distributions of both agents, so that by minimizing this quantity TIDOT can mitigate not only the data shift but also the label shift. Comprehensive empirical studies show that TIDOT outperforms existing state-of-the-art performance on benchmark datasets.
Tuan Nguyen, Trung Le, Nhan Dam, Quan Hung Tran, Truyen Nguyen, Dinh Phung
null
null
2,021
ijcai
What Changed? Interpretable Model Comparison
null
We consider the problem of distinguishing two machine learning (ML) models built for the same task in a human-interpretable way. As models can fail or succeed in different ways, classical accuracy metrics may mask crucial qualitative differences. This problem arises in a few contexts. In business applications with periodically retrained models, an updated model may deviate from its predecessor for some segments without a change in overall accuracy. In automated ML systems, where several ML pipelines are generated, the top pipelines have comparable accuracy but may have more subtle differences. We present a method for interpretable comparison of binary classification models by approximating them with Boolean decision rules. We introduce stabilization conditions that allow for the two rule sets to be more directly comparable. A method is proposed to compare two rule sets based on their statistical and semantic similarity by solving assignment problems and highlighting changes. An empirical evaluation on several benchmark datasets illustrates the insights that may be obtained and shows that artificially induced changes can be reliably recovered by our method.
Rahul Nair, Massimiliano Mattetti, Elizabeth Daly, Dennis Wei, Oznur Alkan, Yunfeng Zhang
null
null
2,021
ijcai
Learning Embeddings from Knowledge Graphs With Numeric Edge Attributes
null
Numeric values associated to edges of a knowledge graph have been used to represent uncertainty, edge importance, and even out-of-band knowledge in a growing number of scenarios, ranging from genetic data to social networks. Nevertheless, traditional knowledge graph embedding models are not designed to capture such information, to the detriment of predictive power. We propose a novel method that injects numeric edge attributes into the scoring layer of a traditional knowledge graph embedding architecture. Experiments with publicly available numeric-enriched knowledge graphs show that our method outperforms traditional numeric-unaware baselines as well as the recent UKGE model.
Sumit Pai, Luca Costabello
null
null
2,021
ijcai
Explaining Deep Neural Network Models with Adversarial Gradient Integration
null
Deep neural networks (DNNs) have become one of the highest performing tools in a broad range of machine learning areas. However, the multilayer non-linearity of the network architectures prevents us from gaining a better understanding of the models' predictions. Gradient-based attribution methods (e.g., Integrated Gradients (IG)) that decipher input features' contributions to the prediction task have been shown to be highly effective, yet they require a reference input as the anchor for explaining the model's output. The performance of DNN model interpretation can be quite inconsistent with regard to the choice of reference. Here we propose an Adversarial Gradient Integration (AGI) method that integrates the gradients from adversarial examples to the target example along the curve of steepest ascent to calculate the resulting contributions from all input features. Our method does not rely on the choice of reference, and hence avoids the ambiguity and inconsistency that stem from reference selection. We demonstrate the performance of our AGI method and compare it with competing methods in explaining image classification results. Code is available from https://github.com/pd90506/AGI.
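A heavily hedged sketch of the path-integration idea: walk from the input along the steepest-descent direction of the true-class score (i.e., toward an adversarial example) and accumulate gradient-times-step contributions, so the total score drop is attributed to input features. The exact AGI formulation (targeted adversarial classes, normalization, stopping rule) differs; all names here are illustrative.

import torch

def agi_attribution(model, x, true_class, steps=20, eps=0.01):
    x_t = x.clone().requires_grad_(True)
    attribution = torch.zeros_like(x)
    for _ in range(steps):
        score = model(x_t)[0, true_class]           # true-class score at the current point
        grad, = torch.autograd.grad(score, x_t)
        step = -eps * grad / (grad.norm() + 1e-12)  # steepest descent of the score
        attribution += grad * step                  # first-order path contribution
        x_t = (x_t + step).detach().requires_grad_(True)
    return -attribution  # positive mass on features whose change destroyed the score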
Deng Pan, Xin Li, Dongxiao Zhu
null
null
2,021
ijcai
Multi-Agent Reinforcement Learning for Automated Peer-to-Peer Energy Trading in Double-Side Auction Market
null
With more and more prosumers equipped with distributed energy resources (DER), advanced energy management has become increasingly important. To this end, integrating demand-side DER into the electricity market is a trend for future smart grids. The double-side auction (DA) market is viewed as a promising peer-to-peer (P2P) energy trading mechanism that enables interactions among prosumers in a distributed manner. To achieve maximum profit in a dynamic electricity market, prosumers act as price makers to simultaneously optimize their operations and trading strategies. However, the traditional DA market is difficult to model explicitly due to its complex clearing algorithm and the stochastic bidding behaviors of the participants. For this reason, in this paper we model this task as a multi-agent reinforcement learning (MARL) problem and propose an algorithm called DA-MADDPG, which modifies MADDPG by abstracting the other agents' observations and actions through the DA market public information for each agent's critic. The experiments show that 1) prosumers obtain more economic benefit from P2P energy trading than from independently trading with the utility company in the conventional electricity market; and 2) DA-MADDPG performs better than the traditional Zero Intelligence (ZI) strategy and other MARL algorithms, e.g., IQL, IDDPG, IPPO and MADDPG.
Dawei Qiu, Jianhong Wang, Junkai Wang, Goran Strbac
null
null
2,021
ijcai
Two Birds with One Stone: Series Saliency for Accurate and Interpretable Multivariate Time Series Forecasting
null
It is important yet challenging to perform accurate and interpretable time series forecasting. Though deep learning methods can boost forecasting accuracy, they often sacrifice interpretability. In this paper, we present a new scheme of series saliency to boost both accuracy and interpretability. By extracting series images from sliding windows of the time series, we design series saliency as a mixup strategy with a learnable mask between the series images and their perturbed versions. Series saliency is model agnostic and performs as an adaptive data augmentation method for training deep models. Moreover, by slightly changing the objective, we optimize series saliency to find a mask for interpretable forecasting in both feature and time dimensions. Experimental results on several real datasets demonstrate that series saliency is effective to produce accurate time-series forecasting results as well as generate temporal interpretations.
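A minimal sketch of the mask-based mixing between a series image and a perturbed copy, assuming a PyTorch tensor of shape (features, timesteps); mask_logits is the learnable mask parameter and the Gaussian perturbation is one plausible choice. During training the mix acts as data augmentation; for interpretation, the same mask can instead be optimized to reveal which feature-time cells matter.

import torch

def series_saliency_mix(x, mask_logits, noise_std=0.1):
    # x: series image (features x timesteps); mask_logits: learnable, same shape.
    m = torch.sigmoid(mask_logits)                # soft mask in [0, 1]
    x_pert = x + noise_std * torch.randn_like(x)  # perturbed copy of the series image
    return m * x + (1.0 - m) * x_pert             # keep salient cells, perturb the rest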
Qingyi Pan, Wenbo Hu, Ning Chen
null
null
2,021
ijcai
Minimization of Limit-Average Automata
null
LimAvg-automata are weighted automata over infinite words that aggregate weights along runs with the limit-average value function. In this paper, we study the minimization problem for (deterministic) LimAvg-automata. Our main contribution is an equivalence relation on words characterizing LimAvg-automata, i.e., the equivalence classes of this relation correspond to states of an equivalent LimAvg-automaton. In contrast to relations characterizing DFA, our relation depends not only on the function defined by the target automaton, but also on its structure. We show two applications of this relation. First, we present a minimization algorithm for LimAvg-automata, which returns a minimal LimAvg-automaton among those equivalent and structurally similar to the input one. Second, we present an extension of Angluin's L^*-algorithm with syntactic queries, which learns in polynomial time a LimAvg-automaton equivalent to the target one.
Jakub Michaliszyn, Jan Otop
null
null
2,021
ijcai
Online Risk-Averse Submodular Maximization
null
We present a polynomial-time online algorithm for maximizing the conditional value at risk (CVaR) of a monotone stochastic submodular function. Given T i.i.d. samples from an underlying distribution arriving online, our algorithm produces a sequence of solutions that converges to a (1−1/e)-approximate solution with a convergence rate of O(T^(−1/4)) for monotone continuous DR-submodular functions. Compared with previous offline algorithms, which require Ω(T) space, our online algorithm only requires O(√T) space. We extend our online algorithm to portfolio optimization for monotone submodular set functions under a matroid constraint. Experiments conducted on real-world datasets demonstrate that our algorithm can rapidly achieve CVaRs that are comparable to those obtained by existing offline algorithms.
Tasuku Soma, Yuichi Yoshida
null
null
2,021
ijcai
Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness
null
As one of the most popular generative models, the Variational Autoencoder (VAE) approximates the posterior of latent variables based on amortized variational inference. However, when the decoder network is sufficiently expressive, VAE may suffer from posterior collapse; that is, uninformative latent representations may be learned. To this end, in this paper, we propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space, so that the representation can be learned in a meaningful and compact manner. Specifically, we first theoretically demonstrate that controlling the distribution of the posterior's parameters across the whole dataset accordingly yields a better latent space with high diversity and low uncertainty. Then, without introducing new loss terms or modifying training strategies, we propose to exploit Dropout on the variances and Batch-Normalization on the means simultaneously to regularize their distributions implicitly. Furthermore, to evaluate the generalization effect, we also apply DU-VAE to inverse autoregressive flow based VAE (VAE-IAF) empirically. Finally, extensive experiments on three benchmark datasets clearly show that our approach can outperform state-of-the-art baselines on both likelihood estimation and underlying classification tasks.
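A minimal sketch of the regularization recipe under stated assumptions: a standard Gaussian-posterior VAE head in PyTorch with BatchNorm applied to the posterior means and Dropout applied to the log-variances; layer sizes and placement are illustrative.

import torch
import torch.nn as nn

class DUVAEHead(nn.Module):
    def __init__(self, hidden=256, z_dim=32, p_drop=0.2):
        super().__init__()
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        self.bn = nn.BatchNorm1d(z_dim)   # spreads the means: more diverse posteriors
        self.drop = nn.Dropout(p_drop)    # perturbs the variances implicitly

    def forward(self, h):
        mu = self.bn(self.mu(h))
        logvar = self.drop(self.logvar(h))
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)  # reparameterization trick
        return z, mu, logvar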
Dazhong Shen, Chuan Qin, Chao Wang, Hengshu Zhu, Enhong Chen, Hui Xiong
null
null
2,021
ijcai
Towards Robust Model Reuse in the Presence of Latent Domains
null
Model reuse tries to adapt well pre-trained models to a new target task without access to the raw data. It attracts much attention since it reduces the resources required for learning. Previous model reuse studies typically operate in a single-domain scenario, i.e., the target samples arise from one single domain. In practice, however, the target samples often arise from multiple latent or unknown domains; e.g., images of cars may arise from latent domains such as photo, line drawing, cartoon, etc. Methods based on the single-domain assumption may no longer be feasible for multiple latent domains and may sometimes even lead to performance degradation. To address this issue, in this paper we propose the MRL (Model Reuse for multiple Latent domains) method. Both domain characteristics and pre-trained models are considered in the exploration of instances in the target task. Theoretically, the overall considerations are packed into a bi-level optimization framework with a reliable generalization guarantee. Moreover, through an ensemble of multiple models, the model robustness is improved with a theoretical guarantee. Empirical results on diverse real-world datasets clearly validate the effectiveness of the proposed algorithms.
Jie-Jing Shao, Zhanzhan Cheng, Yu-Feng Li, Shiliang Pu
null
null
2,021
ijcai
Source-free Domain Adaptation via Avatar Prototype Generation and Adaptation
null
We study a practical domain adaptation task, called source-free unsupervised domain adaptation (UDA), in which we cannot access source-domain data due to data privacy issues and only a pre-trained source model and unlabeled target data are available. This task is very difficult due to one key challenge: the lack of source data and target-domain labels makes model adaptation highly non-trivial. To address this, we propose to mine the hidden knowledge in the source model and exploit it to generate source avatar prototypes (i.e., representative features for each source class) as well as target pseudo labels for domain alignment. To this end, we propose a Contrastive Prototype Generation and Adaptation (CPGA) method. Specifically, CPGA consists of two stages: (1) prototype generation: by exploring the classification boundary information of the source model, we train a prototype generator to generate avatar prototypes via contrastive learning; (2) prototype adaptation: based on the generated source prototypes and target pseudo labels, we develop a new robust contrastive prototype adaptation strategy to align each pseudo-labeled target sample to the corresponding source prototype. Extensive experiments on three UDA benchmark datasets demonstrate the effectiveness and superiority of the proposed method.
Zhen Qiu, Yifan Zhang, Hongbin Lin, Shuaicheng Niu, Yanxia Liu, Qing Du, Mingkui Tan
null
null
2,021
ijcai
Positive-Unlabeled Learning from Imbalanced Data
null
Positive-unlabeled (PU) learning deals with the binary classification problem when only positive (P) and unlabeled (U) data are available, without negative (N) data. Existing PU methods perform well on balanced datasets. However, in real applications such as financial fraud detection or medical diagnosis, data are always imbalanced, and it remains unclear whether existing PU methods can perform well on imbalanced data. In this paper, we explore this problem and propose a general learning objective for PU learning that specifically targets imbalanced data. With this general learning objective, state-of-the-art PU methods based on optimizing a consistent risk can be adapted to conquer the imbalance. We theoretically show that, in expectation, optimizing our learning objective is equivalent to learning a classifier on oversampled balanced data with both P and N data available, and we further provide an estimation error bound. Finally, experimental results validate the effectiveness of our proposal compared to state-of-the-art PU methods.
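For context, a sketch of the non-negative PU risk (Kiryo et al.) that consistent-risk PU methods optimize; the paper's contribution is a general objective that reweights such an estimator so that, in expectation, it matches training on oversampled balanced PN data. Here pi is the class prior, assumed known, and the sigmoid loss is one common choice.

import torch

def nnpu_risk(g_p, g_u, pi, loss=lambda z: torch.sigmoid(-z)):
    # g_p: classifier outputs on positive data; g_u: outputs on unlabeled data.
    r_p_pos = loss(g_p).mean()                        # positive-class risk
    r_n = loss(-g_u).mean() - pi * loss(-g_p).mean()  # estimated negative-class risk
    return pi * r_p_pos + torch.clamp(r_n, min=0.0)   # clamp keeps the risk non-negative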
Guangxin Su, Weitong Chen, Miao Xu
null
null
2,021
ijcai
Physics-aware Spatiotemporal Modules with Auxiliary Tasks for Meta-Learning
null
Modeling the dynamics of real-world physical systems is critical for spatiotemporal prediction tasks, but challenging when data is limited. The scarcity of real-world data and the difficulty in reproducing the data distribution hinder directly applying meta-learning techniques. Although the knowledge of governing partial differential equations (PDE) of the data can be helpful for the fast adaptation to few observations, it is mostly infeasible to exactly find the equation for observations in real-world physical systems. In this work, we propose a framework, physics-aware meta-learning with auxiliary tasks, whose spatial modules incorporate PDE-independent knowledge and temporal modules utilize the generalized features from the spatial modules to be adapted to the limited data, respectively. The framework is inspired by a local conservation law expressed mathematically as a continuity equation and does not require the exact form of governing equation to model the spatiotemporal observations. The proposed method mitigates the need for a large number of real-world tasks for meta-learning by leveraging spatial information in simulated data to meta-initialize the spatial modules. We apply the proposed framework to both synthetic and real-world spatiotemporal prediction tasks and demonstrate its superior performance with limited observations.
Sungyong Seo, Chuizheng Meng, Sirisha Rambhatla, Yan Liu
null
null
2,021
ijcai
MFNP: A Meta-optimized Model for Few-shot Next POI Recommendation
null
Next Point-of-Interest (POI) recommendation is of great value for location-based services. Existing solutions mainly rely on extensive observed data and are brittle for users with few interactions. Unfortunately, the problem of few-shot next POI recommendation has not been well studied yet. In this paper, we propose a novel meta-optimized model MFNP, which can rapidly adapt to users with few check-in records. To handle the cold-start problem, it seamlessly integrates carefully designed user-specific and region-specific tasks in meta-learning, such that region-aware user preferences can be captured via a rational fusion of region-independent personal preferences and region-dependent crowd preferences. In modelling region-dependent crowd preferences, a cluster-based adaptive network is adopted to capture shared preferences from similar users for knowledge transfer. Experimental results on two real-world datasets show that our model outperforms state-of-the-art methods on next POI recommendation for cold-start users.
Huimin Sun, Jiajie Xu, Kai Zheng, Pengpeng Zhao, Pingfu Chao, Xiaofang Zhou
null
null
2,021
ijcai
Predicting Traffic Congestion Evolution: A Deep Meta Learning Approach
null
Many efforts are devoted to predicting congestion evolution using propagation patterns mined from historical traffic data. However, the prediction quality is limited by the intrinsic properties present in the mined patterns. In addition, these mined patterns frequently fail to sufficiently capture many realistic characteristics of true congestion evolution (e.g., asymmetric transitivity, local proximity). In this paper, we propose a representation learning framework to characterize and predict congestion evolution between any pair of road segments (connected via single or multiple paths). Specifically, we build dynamic attributed networks (DAN) to incorporate both dynamic and static impact factors while preserving dynamic topological structures. We propose a Deep Meta Learning Model (DMLM) for learning representations of road segments that support accurate prediction of congestion evolution. DMLM relies on matrix factorization techniques and meta-LSTM modules to exploit temporal correlations at multiple scales, and employs meta-Attention modules to merge heterogeneous features while learning the time-varying impacts of both dynamic and static features. Compared to state-of-the-art methods, our framework achieves significantly better prediction performance on two congestion evolution behaviors (propagation and decay) when evaluated on a real-world dataset.
Yidan Sun, Guiyuan Jiang, Siew Kei Lam, Peilan He
null
null
2,021
ijcai
TE-ESN: Time Encoding Echo State Network for Prediction Based on Irregularly Sampled Time Series Data
null
Prediction based on Irregularly Sampled Time Series (ISTS) is of wide concern in real-world applications. For more accurate prediction, a method should capture as many characteristics of the data as possible. Unlike ordinary time series, ISTS is characterized by irregular time intervals within a series and different sampling rates across series. However, existing methods yield suboptimal predictions because, when modeling these two characteristics, they artificially introduce new dependencies within a time series and learn biased relations among time series. In this work, we propose a novel Time Encoding (TE) mechanism. TE embeds the time information as time vectors in the complex domain. It has the properties of absolute distance and relative distance under different sampling rates, which helps to represent the two irregularities. Meanwhile, we create a new model named Time Encoding Echo State Network (TE-ESN). It is the first ESN-based model that can process ISTS data. Besides, TE-ESN incorporates long short-term memories and series fusion to grasp horizontal and vertical relations. Experiments on one chaotic system and three real-world datasets show that TE-ESN outperforms all baselines and has better reservoir properties.
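A hedged sketch of a complex-domain time encoding in the spirit described: each timestamp is mapped to unit-modulus complex entries whose phases are proportional to t, so both absolute positions and relative gaps are recoverable from the encodings. The frequency schedule below follows sinusoidal positional encoding conventions; the paper's exact TE construction may differ.

import numpy as np

def time_encoding(t, dim=8, base=10000.0):
    # t: scalar or array of (possibly irregular) timestamps.
    freqs = 1.0 / (base ** (np.arange(dim) / dim))         # geometric frequency schedule
    return np.exp(1j * np.outer(np.atleast_1d(t), freqs))  # (len(t), dim) complex vectors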
Chenxi Sun, Shenda Hong, Moxian Song, Yen-Hsiu Chou, Yongyue Sun, Derun Cai, Hongyan Li
null
null
2,021
ijcai
Interpretable Compositional Convolutional Neural Networks
null
This paper proposes a method to modify a traditional convolutional neural network (CNN) into an interpretable compositional CNN, in order to learn filters that encode meaningful visual patterns in intermediate convolutional layers. In a compositional CNN, each filter is supposed to consistently represent a specific compositional object part or image region with a clear meaning. The compositional CNN learns from image labels for classification without any annotations of parts or regions for supervision. Our method can be broadly applied to different types of CNNs. Experiments have demonstrated the effectiveness of our method. The code will be released when the paper is accepted.
Wen Shen, Zhihua Wei, Shikun Huang, Binbin Zhang, Jiaqi Fan, Ping Zhao, Quanshi Zhang
null
null
2,021
ijcai
Exact Acceleration of K-Means++ and K-Means||
null
K-Means++ and its distributed variant K-Means|| have become de facto tools for selecting the initial seeds of K-means. While alternatives have been developed, the effectiveness, ease of implementation, and theoretical grounding of the K-means++ and || methods have made them difficult to "best" from a holistic perspective. We focus on using triangle-inequality-based pruning methods to accelerate both of these algorithms to yield comparable or better run-time without sacrificing any of the benefits of these approaches. For both algorithms we are able to reduce distance computations by over 500×. For K-means++ this results in up to a 17× speedup in run-time, and up to a 551× speedup for K-means||. We achieve this with simple, but carefully chosen, modifications to known techniques, which makes it easy to integrate our approach into existing implementations of these algorithms.
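A minimal sketch of the triangle-inequality skip test inside K-Means++ seeding: when a new center c is drawn, any point x whose current nearest center a satisfies d(c, a) >= 2 d(x, a) provably cannot be closer to c, so its distance to c need not be computed. Raff's implementation uses further bounds and also covers K-Means||; this NumPy version only illustrates the core pruning idea.

import numpy as np

def kmeanspp_pruned(X, k, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    centers = [X[rng.integers(n)]]
    dmin = np.linalg.norm(X - centers[0], axis=1)   # distance to nearest chosen center
    assign = np.zeros(n, dtype=int)                 # index of that nearest center
    skipped = 0
    for j in range(1, k):
        c = X[rng.choice(n, p=dmin**2 / np.sum(dmin**2))]    # D^2 sampling
        c2c = np.linalg.norm(np.array(centers) - c, axis=1)  # new center vs old centers
        mask = c2c[assign] < 2.0 * dmin             # only these points can improve
        skipped += int((~mask).sum())               # distances we never compute
        d_new = np.linalg.norm(X[mask] - c, axis=1)
        closer = d_new < dmin[mask]
        idx = np.nonzero(mask)[0][closer]
        dmin[idx], assign[idx] = d_new[closer], j
        centers.append(c)
    return np.array(centers), skipped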
Edward Raff
null
null
2,021
ijcai
Don’t Do What Doesn’t Matter: Intrinsic Motivation with Action Usefulness
null
Sparse rewards are double-edged training signals in reinforcement learning: easy to design but hard to optimize. Intrinsic motivation signals have thus been developed to alleviate the resulting exploration problem. They usually incentivize agents to look for new states through novelty signals. Yet, such methods encourage exhaustive exploration of the state space rather than focusing on the environment's salient interaction opportunities. We propose a new exploration method, called Don't Do What Doesn't Matter (DoWhaM), shifting the emphasis from state novelty to states with relevant actions. While most actions consistently change the state when used, e.g. moving the agent, some actions are only effective in specific states, e.g., opening a door or grabbing an object. DoWhaM detects and rewards actions that seldom affect the environment. We evaluate DoWhaM on the procedurally-generated environment MiniGrid against state-of-the-art methods. Experiments consistently show that DoWhaM greatly reduces sample complexity, establishing a new state of the art in MiniGrid.
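One plausible instantiation of the idea (names and exact bonus shape are ours, not the paper's): track how often each action changes the state, and reward an action only when it takes effect, scaled by how rarely it does.

from collections import defaultdict

class DoWhaMStyleBonus:
    def __init__(self):
        self.used = defaultdict(int)       # times each action was taken
        self.effective = defaultdict(int)  # times it actually changed the state

    def __call__(self, action, state_before, state_after):
        self.used[action] += 1
        if state_before != state_after:    # assumes comparable state observations
            self.effective[action] += 1
            # Rarely effective actions earn a large bonus; always-effective ones ~0.
            return 1.0 - self.effective[action] / self.used[action]
        return 0.0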
Mathieu Seurin, Florian Strub, Philippe Preux, Olivier Pietquin
null
null
2,021
ijcai
Self-supervised Network Evolution for Few-shot Classification
null
Few-shot classification aims to recognize new classes by learning reliable models from very few available samples. It can be very challenging when there is no intersection between the already-known classes (base set) and the novel set (new classes). To alleviate this problem, we propose to evolve the network (for the base set) via label propagation and self-supervision to shrink the distribution difference between the base set and the novel set. Our network evolution approach transfers the latent distribution from the already-known classes to the unknown (novel) classes by: (a) label propagation of the novel/new classes (novel set); and (b) design of a dual task to exploit a discriminative representation that effectively diminishes overfitting on the base set and enhances generalization on the novel set. We conduct comprehensive experiments to examine our network evolution approach against numerous state-of-the-art ones, especially in higher-way setups and cross-dataset scenarios. Notably, our approach outperforms the second-best state-of-the-art method by a large margin of 3.25% for one-shot evaluation on miniImageNet.
Xuwen Tang, Zhu Teng, Baopeng Zhang, Jianping Fan
null
null
2,021
ijcai
Compositional Neural Logic Programming
null
This paper introduces Compositional Neural Logic Programming (CNLP), a framework that integrates neural networks and logic programming for symbolic and sub-symbolic reasoning. We adopt the idea of compositional neural networks to represent first-order logic predicates and rules. A voting backward-forward chaining algorithm is proposed for inference with both symbolic and sub-symbolic variables in an argument-retrieval style. The framework is highly flexible in that it can be constructed incrementally with new knowledge, and it also supports batch reasoning in certain cases. In the experiments, we demonstrate the advantages of CNLP in discriminative tasks and generative tasks.
Son N. Tran
null
null
2,021
ijcai
Sensitivity Direction Learning with Neural Networks Using Domain Knowledge as Soft Shape Constraints
null
If domain knowledge can be integrated as an appropriate constraint, it is highly possible that the generalization performance of a neural network model can be improved. We propose Sensitivity Direction Learning (SDL) for learning a neural network model with user-specified relationships (e.g., monotonicity, convexity) between each input feature and the output of the model by imposing soft shape constraints that represent domain knowledge. To impose soft shape constraints, SDL uses a novel penalty function, the Sensitivity Direction Error (SDE) function, which returns the squared error between the coefficients of the approximation curve for each Individual Conditional Expectation plot and the coefficient constraints that represent domain knowledge. The effectiveness of our concept was verified by simple experiments. Like L2 regularization and dropout, SDL and SDE can be used without changing the neural network architecture. We believe our algorithm is a strong candidate for neural network users who want to incorporate domain knowledge.
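A minimal sketch of an SDE-style penalty under simplifying assumptions: approximate each ICE curve along one feature by its least-squares slope over a small grid and penalize the squared deviation of that slope from a user-specified coefficient (e.g., a positive target encodes "increasing"); the paper's SDE may use higher-order approximation curves and richer constraint sets.

import torch

def sensitivity_direction_error(model, x, feature, target_slope, delta=0.1, n_grid=5):
    grid = torch.linspace(-delta, delta, n_grid)
    outs = []
    for g in grid:
        x_g = x.clone()
        x_g[:, feature] += g                 # shift one feature to trace the ICE curve
        outs.append(model(x_g).squeeze(-1))
    y = torch.stack(outs, dim=1)             # (batch, n_grid) ICE values
    gc = grid - grid.mean()                  # centered grid
    slope = (y * gc).sum(dim=1) / (gc ** 2).sum()   # least-squares slope per instance
    return ((slope - target_slope) ** 2).mean()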
Kazuyuki Wakasugi
null
null
2,021
ijcai
Learn the Highest Label and Rest Label Description Degrees
null
Although Label Distribution Learning (LDL) has found wide application in a variety of classification problems, it may face the challenge of objective mismatch: LDL neglects the optimal label for the sake of learning the whole label distribution, which leads to performance deterioration. To improve classification performance and resolve the objective mismatch, we propose a new LDL algorithm called LDL-HR. LDL-HR provides a new perspective on label distribution, i.e., a combination of the highest label and the rest label description degrees. It works as follows. First, we learn the highest label by fitting the degenerated label distribution and a large margin. Second, we learn the rest label description degrees to exploit generalization. Theoretical analysis shows the generalization of LDL-HR. Besides, experimental results on 18 real-world datasets validate the statistical superiority of our method.
Jing Wang, Xin Geng
null
null
2,021
ijcai
Dual Active Learning for Both Model and Data Selection
null
To learn an effective model with fewer training examples, existing active learning methods typically assume that there is a given target model and try to fit it by selecting the most informative examples. However, the best target model can rarely be determined a priori, so one may get suboptimal performance even if the data is perfectly selected. To tackle this practical challenge, this paper proposes a novel framework of dual active learning (DUAL) that simultaneously performs model search and data selection. Specifically, an effective method with truncated importance sampling is proposed for Combined Algorithm Selection and Hyperparameter optimization (CASH), which mitigates the model evaluation bias on the labeled data. Further, we propose an active query strategy to label the most valuable examples. The strategy on one hand favors discriminative data to help CASH search for the best model, and on the other hand prefers informative examples to accelerate the convergence of winning models. Extensive experiments are conducted on 12 OpenML datasets. The results demonstrate that the proposed method can effectively learn a superior model with fewer labeled examples.
Ying-Peng Tang, Sheng-Jun Huang
null
null
2,021
ijcai
Learning from Complementary Labels via Partial-Output Consistency Regularization
null
In complementary-label learning (CLL), a multi-class classifier is learned from training instances each associated with complementary labels, which specify the classes that the instance does not belong to. Previous studies focus on unbiased risk estimators or surrogate losses while neglecting the importance of regularization in the training phase. In this paper, we make the first attempt to leverage regularization techniques for CLL. By decoupling a label vector into complementary labels and partial unknown labels, we simultaneously inhibit the outputs of complementary labels with a complementary loss and penalize the sensitivity of the classifier on the partial outputs of these unknown classes by consistency regularization. We then unify the complementary loss and the consistency loss through a specially designed dynamic weighting factor. We conduct a series of experiments showing that the proposed method achieves highly competitive performance in CLL.
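A minimal sketch of the two ingredients, assuming a softmax classifier in PyTorch: a complementary loss that suppresses the output of the complementary class, and a consistency term that penalizes the sensitivity of the remaining (unknown) class outputs under augmentation. The paper's dynamic weighting factor is reduced to a fixed lam here.

import torch
import torch.nn.functional as F

def cll_loss(logits, logits_aug, comp_label, lam=1.0):
    # comp_label: index of a class each instance does NOT belong to, shape (batch,).
    p = F.softmax(logits, dim=1)
    p_comp = p.gather(1, comp_label.unsqueeze(1)).clamp(max=1 - 1e-6)
    comp_loss = -torch.log(1.0 - p_comp).mean()          # inhibit the complementary class
    mask = torch.ones_like(p).scatter(1, comp_label.unsqueeze(1), 0.0)
    p_aug = F.softmax(logits_aug, dim=1)
    cons_loss = ((p - p_aug) ** 2 * mask).sum(dim=1).mean()  # partial-output consistency
    return comp_loss + lam * cons_loss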
Deng-Bao Wang, Lei Feng, Min-Ling Zhang
null
null
2,021
ijcai
Multi-hop Attention Graph Neural Networks
null
Self-attention mechanisms in graph neural networks (GNNs) have led to state-of-the-art performance on many graph representation learning tasks. Currently, at every layer, attention is computed between connected pairs of nodes and depends solely on the representations of the two nodes. However, such an attention mechanism does not account for nodes that are not directly connected but provide important network context. Here we propose Multi-hop Attention Graph Neural Network (MAGNA), a principled way to incorporate multi-hop context information into every layer of attention computation. MAGNA diffuses the attention scores across the network, which increases the receptive field for every layer of the GNN. Unlike previous approaches, MAGNA uses a diffusion prior on attention values to efficiently account for all paths between a pair of disconnected nodes. We demonstrate in theory and experiments that MAGNA captures large-scale structural information in every layer and has a low-pass effect that eliminates noisy high-frequency information from graph data. Experimental results on node classification as well as knowledge graph completion benchmarks show that MAGNA achieves state-of-the-art results: MAGNA achieves up to 5.7% relative error reduction over the previous state of the art on Cora, Citeseer, and Pubmed. MAGNA also obtains the best performance on a large-scale Open Graph Benchmark dataset. On knowledge graph completion, MAGNA advances the state of the art on WN18RR and FB15k-237 across four different performance metrics.
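A minimal sketch of geometric attention diffusion: with weights theta_i = alpha * (1 - alpha)^i, the diffused matrix sum_i theta_i A^i is the fixed point of Z <- (1 - alpha) A Z + alpha I and can be approximated by a few iterations. Here att is assumed row-stochastic; MAGNA's full layer structure is not shown.

import torch

def attention_diffusion(att, hops=6, alpha=0.15):
    # att: (n, n) one-hop attention matrix; returns the multi-hop diffused version.
    n = att.shape[0]
    z = torch.eye(n)
    for _ in range(hops):
        z = (1 - alpha) * att @ z + alpha * torch.eye(n)  # geometric-series recursion
    return z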
Guangtao Wang, Rex Ying, Jing Huang, Jure Leskovec
null
null
2,021
ijcai
Hyperspectral Band Selection via Spatial-Spectral Weighted Region-wise Multiple Graph Fusion-Based Spectral Clustering
null
In this paper, we propose a hyperspectral band selection method via spatial-spectral weighted region-wise multiple graph fusion-based spectral clustering, referred to as RMGF briefly. Considering that different objects have different reflection characteristics, we use a superpixel segmentation algorithm to segment the first principal component of original hyperspectral image cube into homogeneous regions. For each superpixel, we construct a corresponding similarity graph to reflect the similarity between band pairs. Then, a multiple graph diffusion strategy with theoretical convergence guarantee is designed to learn a unified graph for partitioning the whole hyperspectral cube into several subcubes via spectral clustering. During the graph diffusion process, the spatial and spectral information of each superpixel are embedded to make spatial/spectral similar superpixels contribute more to each other. Finally, the band containing minimum noise in each subcube is selected to represent the whole subcube. Extensive experiments are conducted on three public datasets to validate the superiority of the proposed method when compared with other state-of-the-art ones.
Chang Tang, Xinwang Liu, En Zhu, Lizhe Wang, Albert Zomaya
null
null
2,021
ijcai
Self-Supervised Adversarial Distribution Regularization for Medication Recommendation
null
Medication recommendation is a significant healthcare application due to its promise in effectively prescribing medications. Avoiding fatal side effects related to Drug-Drug Interaction (DDI) is among the critical challenges. Most existing methods try to mitigate the problem by providing models with extra DDI knowledge, making the models complicated; meanwhile, treating all patients with different DDI properties as a single cohort places strict requirements on a model's generalization performance. In pursuit of a valuable model for safe recommendation, we propose the Self-Supervised Adversarial Regularization Model for Medication Recommendation (SARMR). SARMR obtains the target distribution associated with safe medication combinations from raw patient records for adversarial regularization. In this way, the model can shape distributions of patient representations to achieve DDI reduction. To obtain accurate self-supervision information, SARMR models interactions between physicians and patients by building a key-value memory neural network and carrying out multi-hop reading to obtain contextual information for patient representations. SARMR outperforms all baseline methods in experiments on a real-world clinical dataset. The model achieves DDI reduction across different numbers of DDI types, which demonstrates the robustness of adversarial regularization for safe medication recommendation.
Yanda Wang, Weitong Chen, Dechang PI, Lin Yue, Sen Wang, Miao Xu
null
null
2,021
ijcai
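The adversarial distribution regularization described above is, at its core, a GAN-style game between an encoder and a discriminator: the discriminator separates current patient representations from samples of the "safe" target distribution, and the encoder is pushed to close that gap. The sketch below is one hedged reading of that loop; `encoder`, `disc`, and `safe_samples` are assumed modules/callables, not the paper's architecture.

```python
import torch

def adversarial_reg_step(encoder, disc, records, safe_samples, opt_d, opt_e):
    """One step of distribution-shaping adversarial regularization.
    disc maps a representation to a probability in (0, 1)."""
    z = encoder(records)          # current patient representations
    z_safe = safe_samples()       # representations tied to safe combinations
    eps = 1e-8
    # 1) discriminator: tell safe-distribution samples from current ones
    d_loss = -(torch.log(disc(z_safe).clamp_min(eps)).mean()
               + torch.log((1 - disc(z.detach())).clamp_min(eps)).mean())
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # 2) encoder: shape its representations toward the safe distribution
    e_loss = -torch.log(disc(z).clamp_min(eps)).mean()
    opt_e.zero_grad(); e_loss.backward(); opt_e.step()
```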
Reinforcement Learning Based Sparse Black-box Adversarial Attack on Video Recognition Models
null
We explore black-box adversarial attacks on video recognition models. Attacks are performed only on selected key regions and key frames to reduce the high computational cost of searching for adversarial perturbations in a video due to its high dimensionality. To select key frames, one way is to use heuristic algorithms to evaluate the importance of each frame and choose the essential ones. However, such sorting and searching are time-inefficient. To speed up the attack process, we propose a reinforcement learning based frame selection strategy. Specifically, the agent explores the difference between the original class and the target class of videos to make selection decisions. It receives rewards from threat models which indicate the quality of the decisions. Besides, we also use saliency detection to select key regions and estimate only the sign of the gradient, instead of the gradient itself, in zeroth-order optimization to further speed up the attack. We can use the trained model directly in the untargeted attack, or with a little fine-tuning in the targeted attack, which saves computation time. A range of empirical results on real datasets demonstrates the effectiveness and efficiency of the proposed method.
Zeyuan Wang, Chaofeng Sha, Su Yang
null
null
2,021
ijcai
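The zeroth-order trick mentioned above, keeping only the sign of a finite-difference gradient estimate, is easy to illustrate in isolation. A minimal sketch, with the smoothing radius `sigma` and the number of random directions `n_samples` as assumed hyperparameters:

```python
import numpy as np

def sign_grad_estimate(loss_fn, x, sigma=1e-3, n_samples=20):
    """Zeroth-order estimate of sign(grad loss_fn(x)) from symmetric
    finite differences along random Gaussian directions; keeping only the
    sign is what cheapens per-query black-box gradient estimation."""
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = np.random.randn(*x.shape)
        g += (loss_fn(x + sigma * u) - loss_fn(x - sigma * u)) * u
    return np.sign(g)

# Toy check on a quadratic loss: the true gradient is 2x.
x = np.array([1.0, -2.0, 0.5])
print(sign_grad_estimate(lambda v: float((v ** 2).sum()), x))  # approx [1, -1, 1]
```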
Reward-Constrained Behavior Cloning
null
Deep reinforcement learning (RL) has demonstrated success in challenging decision-making/control tasks. However, RL methods, which solve tasks by maximizing the expected reward, may generate undesirable behaviors due to inferior local convergence or incompetent reward design. These undesirable behaviors may not reduce the total reward but can destroy the user experience of the application. For example, in an autonomous driving task, a policy trained with a speed reward brakes far more abruptly than human drivers generally do. To overcome this problem, we present a novel method named Reward-Constrained Behavior Cloning (RCBC), which synthesizes imitation learning and constrained reinforcement learning. RCBC leverages human demonstrations to induce desirable or human-like behaviors and employs lower-bound reward constraints in policy optimization to maximize the expected reward. Empirical results on popular benchmark environments show that RCBC learns significantly more human-desired policies whose performance guarantees meet the lower-bound reward constraints, while performing better than or as well as baseline methods in terms of reward maximization.
Zhaorong Wang, Meng Wang, Jingqi Zhang, Yingfeng Chen, Chongjie Zhang
null
null
2,021
ijcai
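One natural way to combine the two ingredients above, imitation of demonstrations and a lower bound on return, is a Lagrangian objective: a behavior-cloning loss plus a multiplier-weighted penalty on constraint violation. This is a sketch under that assumption, not the paper's exact formulation; the multiplier `lam` would itself be adapted (e.g., by gradient ascent on the violation).

```python
import torch

def rcbc_loss(log_probs_demo, returns, reward_lb, lam, bc_weight=1.0):
    """Behavior cloning subject to E[return] >= reward_lb, relaxed with a
    Lagrange multiplier `lam`.
    log_probs_demo: log pi(a|s) on demonstration pairs; returns: episode returns."""
    bc_loss = -log_probs_demo.mean()                       # imitate the expert
    violation = torch.clamp(reward_lb - returns, min=0.0)  # constraint slack
    return bc_weight * bc_loss + lam * violation.mean()
```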
Robust Adversarial Imitation Learning via Adaptively-Selected Demonstrations
null
The agent in imitation learning (IL) is expected to mimic the behavior of the expert, and its performance relies heavily on the quality of the given expert demonstrations. However, the assumption that collected demonstrations are optimal does not always hold in real-world tasks, which can seriously degrade the performance of the learned agent. In this paper, we propose a robust method within the framework of Generative Adversarial Imitation Learning (GAIL) to address the imperfect-demonstration issue, in which good demonstrations can be adaptively selected for training while bad demonstrations are abandoned. Specifically, a binary weight is assigned to each expert demonstration to indicate whether to select it for training. The reward function in GAIL is employed to determine this weight (i.e., a higher reward results in a higher weight). Compared with existing solutions that require auxiliary information about this weight, we establish a connection between the weight and the model so that GAIL can be optimized jointly with the latent weights. Besides hard binary weighting, we also propose a soft weighting scheme. Experiments on MuJoCo tasks demonstrate that the proposed method outperforms other GAIL-based methods when dealing with imperfect demonstrations.
Yunke Wang, Chang Xu, Bo Du
null
null
2,021
ijcai
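The weighting rule above (GAIL reward determines whether a demonstration is kept) can be sketched directly from a trained discriminator. Note that GAIL reward conventions vary; the one below and the threshold/temperature `tau` are illustrative assumptions.

```python
import torch

def demo_weights(discriminator, states, actions, hard=True, tau=1.0):
    """Weight each expert (s, a) pair by its GAIL reward r = -log D(s, a),
    where D(s, a) in (0, 1) is the discriminator's probability that the pair
    was generated by the policy (so expert-like pairs earn a high reward).
    `hard` gives binary selection as in the paper; otherwise a soft weight."""
    with torch.no_grad():
        d = discriminator(states, actions).clamp(1e-8, 1 - 1e-8)
        reward = -torch.log(d)
    if hard:
        return (reward > tau).float()     # 1 = keep, 0 = abandon
    return torch.sigmoid(reward - tau)    # soft weighting scheme
```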
Layer-Assisted Neural Topic Modeling over Document Networks
null
Neural topic modeling provides a flexible, efficient, and powerful way to extract topic representations from text documents. Unfortunately, most existing models cannot handle text data with network links, such as web pages with hyperlinks and scientific papers with citations. To handle such data, we develop a novel neural topic model, namely the Layer-Assisted Neural Topic Model (LANTM), which can be interpreted from the perspective of variational auto-encoders. Our major motivation is to enhance topic representation encoding by using not only text content but also the associated network links. Specifically, LANTM encodes the texts and network links into topic representations with an augmented network containing graph convolutional modules, and decodes them by maximizing the likelihood of the generative process. Neural variational inference is adopted for efficient inference. Experimental results validate that LANTM significantly outperforms existing models on topic quality, text classification and link prediction.
Yiming Wang, Ximing Li, Jihong Ouyang
null
null
2,021
ijcai
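The encoder side of such a link-assisted topic VAE can be sketched compactly: a graph-convolution module mixes each document's bag-of-words features with those of its linked neighbors before the usual reparameterized topic proportions. Dimensions, depth, and module names below are illustrative, not LANTM's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)

    def forward(self, A_hat, X):          # A_hat: normalized adjacency (n, n)
        return F.relu(self.lin(A_hat @ X))

class GraphTopicEncoder(nn.Module):
    """VAE-style encoder mixing bag-of-words features with link structure."""
    def __init__(self, vocab, n_topics, hidden=128):
        super().__init__()
        self.gcn = GCNLayer(vocab, hidden)
        self.mu = nn.Linear(hidden, n_topics)
        self.logvar = nn.Linear(hidden, n_topics)

    def forward(self, A_hat, bow):        # bow: (n, vocab) document counts
        h = self.gcn(A_hat, bow)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        theta = torch.softmax(z, dim=-1)                      # topic proportions
        return theta, mu, logvar
```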
Exploiting Spiking Dynamics with Spatial-temporal Feature Normalization in Graph Learning
null
Biological spiking neurons with intrinsic dynamics underlie the powerful representation and learning capabilities of the brain for processing multimodal information in complex environments. Despite recent tremendous progress in spiking neural networks (SNNs) for handling Euclidean-space tasks, it remains challenging to exploit SNNs for processing non-Euclidean-space data represented as graphs, mainly due to the lack of an effective modeling framework and useful training techniques. Here we present a general spike-based modeling framework that enables the direct training of SNNs for graph learning. Through spatial-temporal unfolding of the spiking data flows of node features, we incorporate graph convolution filters into the spiking dynamics and formalize a synergistic learning paradigm. Considering the unique features of spike representation and spiking dynamics, we propose a spatial-temporal feature normalization (STFN) technique suitable for SNNs to accelerate convergence. We instantiate our methods in two spiking graph models, graph convolution SNNs and graph attention SNNs, and validate their performance on three node-classification benchmarks: Cora, Citeseer, and Pubmed. Our models achieve performance comparable to state-of-the-art graph neural network (GNN) models at much lower computation costs, demonstrating great benefits for execution on neuromorphic hardware and promoting neuromorphic applications in graph scenarios.
Mingkun Xu, Yujie Wu, Lei Deng, Faqiang Liu, Guoqi Li, Jing Pei
null
null
2,021
ijcai
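The key idea of STFN, normalizing features jointly across the spatial (node) and temporal (time-step) dimensions of the unfolded spiking computation, can be sketched as a standardization over those two axes. The paper's learnable affine parameters and firing-threshold scaling are omitted; this is only the normalization skeleton.

```python
import torch

def stfn(x, eps=1e-5):
    """Sketch of spatial-temporal feature normalization: standardize each
    feature channel jointly over nodes (spatial) and time steps (temporal).
    x: (T, N, C) inputs to the spiking neurons."""
    mean = x.mean(dim=(0, 1), keepdim=True)
    var = x.var(dim=(0, 1), keepdim=True, unbiased=False)
    return (x - mean) / torch.sqrt(var + eps)
```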
Discrete Multiple Kernel k-means
null
Multiple kernel k-means (MKKM) and its variants utilize complementary information from different kernels, achieving better performance than kernel k-means (KKM). However, the optimization procedures of previous works all comprise two stages: learning a continuous relaxed label matrix, then obtaining the discrete one through an extra discretization procedure. Such a two-stage strategy gives rise to a mismatch problem and severe information loss. To address this problem, we elaborate a novel Discrete Multiple Kernel k-means (DMKKM) model, solved by an optimization algorithm that directly obtains the cluster indicator matrix without any subsequent discretization procedure. Moreover, DMKKM can strictly measure the correlations among kernels, which enhances kernel fusion by reducing redundancy and improving diversity. Furthermore, DMKKM is parameter-free, avoiding intractable hyperparameter tuning and making it feasible in practical applications. Extensive experiments illustrate the effectiveness and superiority of the proposed model.
Rong Wang, Jitao Lu, Yihang Lu, Feiping Nie, Xuelong Li
null
null
2,021
ijcai
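For context on what DMKKM improves upon, here is the classic (relaxed, alternating) kernel k-means that MKKM variants build on, run on a combined Gram matrix K = sum_m w_m K_m. Cluster distances follow from the kernel trick: ||phi(x_i) - mu_c||^2 = K_ii - (2/|C|) sum_{j in C} K_ij + (1/|C|^2) sum_{j,l in C} K_jl. DMKKM itself optimizes a discrete indicator matrix directly, which this sketch does not do.

```python
import numpy as np

def kernel_kmeans(K, k, n_iter=50, seed=0):
    """Plain kernel k-means on a (possibly multi-kernel) Gram matrix K."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    labels = rng.integers(k, size=n)
    for _ in range(n_iter):
        dist = np.full((n, k), np.inf)
        for c in range(k):
            idx = np.flatnonzero(labels == c)
            if idx.size == 0:
                continue
            dist[:, c] = (np.diag(K)
                          - 2 * K[:, idx].mean(axis=1)
                          + K[np.ix_(idx, idx)].mean())
        new = dist.argmin(axis=1)
        if (new == labels).all():
            break
        labels = new
    return labels
```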
k-Nearest Neighbors by Means of Sequence to Sequence Deep Neural Networks and Memory Networks
null
k-Nearest Neighbors is one of the most fundamental yet effective classification models. In this paper, we propose two families of models, built on a sequence-to-sequence model and a memory network model, that mimic the k-Nearest Neighbors model: they generate a sequence of labels, a sequence of out-of-sample feature vectors, and a final label for classification, and thus can also function as oversamplers. We also propose 'out-of-core' versions of our models, which assume that only a small portion of the data can be loaded into memory. Computational experiments show that, on structured datasets, our models outperform k-Nearest Neighbors, a feed-forward neural network, XGBoost, LightGBM, random forest, and a memory network, owing to the fact that our models must produce additional output and not just the label. On image and text datasets, the performance of our model is close to that of many state-of-the-art deep models. As an oversampler on imbalanced datasets, the sequence-to-sequence kNN model often outperforms the Synthetic Minority Over-sampling Technique and Adaptive Synthetic Sampling.
Yiming Xu, Diego Klabjan
null
null
2,021
ijcai
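The supervision that lets a seq2seq model mimic kNN, as described above, is simply the ordered sequence of each sample's neighbors' labels. A small sketch of how such targets could be built (brute-force distances, illustrative only; the paper's models and out-of-core variants are not reproduced):

```python
import numpy as np

def knn_label_sequences(X, y, k=5):
    """For each sample, return the ordered labels of its k nearest
    neighbors (nearest first), which a seq2seq model learns to emit
    before producing the final label."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)        # a point is not its own neighbor
    order = np.argsort(d2, axis=1)[:, :k]
    return y[order]                     # shape (n, k)
```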
Learning Deeper Non-Monotonic Networks by Softly Transferring Solution Space
null
Different from popular neural networks that use quasiconvex activations, non-monotonic networks activated by periodic nonlinearities have emerged as a more competitive paradigm, offering revolutionary benefits: 1) compactly characterizing high-frequency patterns; 2) precisely representing high-order derivatives. Nevertheless, they are also well known for being hard to train, because they easily over-fit dissonant noise and only allow for tiny architectures (shallower than 5 layers). The fundamental bottleneck is that the periodicity leads to many poor and dense local minima in the solution space. The direction and norm of the gradient oscillate continually during error backpropagation, so non-monotonic networks get stuck prematurely in these local minima and miss out on effective error feedback. To alleviate this optimization dilemma, we propose a non-trivial soft transfer approach. It initially smooths the solution space toward that of monotonic networks, and then improves the representational properties by transferring the solutions from the neural space of monotonic neurons to the Fourier space of non-monotonic neurons as training continues. The soft transfer consists of two core components: 1) a rectified concrete gate is constructed to characterize the state of each neuron; 2) a variational Bayesian learning framework is proposed to dynamically balance the empirical risk and the intensity of transfer. We provide comprehensive empirical evidence showing that the soft transfer not only reduces the risk of non-monotonic networks over-fitting noise, but also helps them scale to much deeper architectures (more than 100 layers), achieving new state-of-the-art performance.
Zheng-Fan Wu, Hui Xue, Weimin Bai
null
null
2,021
ijcai
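Mechanically, the transfer from the monotonic regime to the periodic (Fourier) regime can be pictured as a gated blend of the two activations. In the paper the gate is a rectified concrete random variable per neuron, balanced by variational Bayesian learning; the sketch below collapses all of that into a plain tensor `g` in [0, 1] that could be annealed during training, so it is an illustration of the idea rather than the method.

```python
import torch

def soft_transfer_act(x, g):
    """Gate between a monotonic activation and a periodic one: g ~ 0 keeps
    the network near the well-behaved monotonic solution space, g ~ 1 moves
    it into the high-frequency periodic regime."""
    return (1 - g) * torch.relu(x) + g * torch.sin(x)
```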
Demiguise Attack: Crafting Invisible Semantic Adversarial Perturbations with Perceptual Similarity
null
Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples: malicious images with perturbations intended to be visually imperceptible. Although these carefully crafted perturbations are kept small by tight Lp-norm bounds, they can still be perceived by humans. Such perturbations also have limited success rates when attacking black-box models or models with defenses like noise-reduction filters. To solve these problems, we propose the Demiguise Attack, which crafts "unrestricted" perturbations guided by perceptual similarity. Specifically, we create powerful and photorealistic adversarial examples by manipulating semantic information based on perceptual similarity. The adversarial examples we generate are friendly to the human visual system (HVS), even though the perturbations are of large magnitude. We extend widely used attacks with our approach, enhancing adversarial effectiveness impressively while contributing to imperceptibility. Extensive experiments show that the proposed method not only outperforms various state-of-the-art attacks in terms of fooling rate, transferability, and robustness against defenses, but also effectively strengthens existing attacks. In addition, we observe that our implementation can simulate illumination and contrast changes that occur in real-world scenarios, which helps expose the blind spots of DNNs.
Yajie Wang, Shangbo Wu, Wenyi Jiang, Shengang Hao, Yu-an Tan, Quanxin Zhang
null
null
2,021
ijcai
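Replacing an Lp-ball constraint with a perceptual-similarity penalty, as the abstract describes, can be sketched as an optimization loop whose loss trades classification damage against a perceptual distance. Here `perceptual_fn` stands for any perceptual metric (e.g., an LPIPS network), and the trade-off weight `c` is an assumed hyperparameter; this is not the paper's exact semantic-manipulation procedure.

```python
import torch
import torch.nn.functional as F

def perceptual_attack(model, perceptual_fn, x, y, steps=40, lr=0.01, c=1.0):
    """Craft an adversarial image whose distance to the clean image is
    bounded perceptually rather than by an Lp norm."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = (x + delta).clamp(0, 1)
        # fool the model (maximize CE) while staying perceptually similar
        loss = -F.cross_entropy(model(x_adv), y) + c * perceptual_fn(x_adv, x).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return (x + delta.detach()).clamp(0, 1)
```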
Closing the BIG-LID: An Effective Local Intrinsic Dimensionality Defense for Nonlinear Regression Poisoning
null
Nonlinear regression, although widely used in engineering, financial and security applications for automated decision making, is known to be vulnerable to training data poisoning. Targeted poisoning attacks may cause learning algorithms to fit decision functions with poor predictive performance. This paper presents a new analysis of the local intrinsic dimensionality (LID) of nonlinear regression under such poisoning attacks within a Stackelberg game, leading to a practical defense. After adapting to nonlinear settings a gradient-based attack on linear regression that significantly impairs prediction capability, we consider a multi-step unsupervised black-box defense. The first step identifies the samples that have the greatest influence on the learner's validation error; we then use the theory of local intrinsic dimensionality, which reveals the degree to which a data sample is an outlier, to iteratively identify poisoned samples via a generative probabilistic model and suppress their influence on the prediction function. Empirical validation demonstrates superior performance compared to a range of recent defenses.
Sandamal Weerasinghe, Tamas Abraham, Tansu Alpcan, Sarah M. Erfani, Christopher Leckie, Benjamin I. P. Rubinstein
null
null
2,021
ijcai
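The LID quantity the defense relies on is commonly estimated with the maximum-likelihood (Levina-Bickel) estimator from a sample's k nearest-neighbor distances. A minimal sketch, with the neighborhood size `k` as an assumed hyperparameter:

```python
import numpy as np

def lid_mle(x, reference, k=20):
    """MLE of local intrinsic dimensionality of `x` from its k nearest
    neighbors in `reference`; anomalous LID values flag likely outliers
    (e.g., poisoned samples). `reference` should not contain x itself."""
    d = np.sort(np.linalg.norm(reference - x, axis=1))[:k]
    d = np.maximum(d, 1e-12)                 # guard against zero distances
    return -1.0 / np.mean(np.log(d / d[-1]))
```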
Deep Reinforcement Learning Boosted Partial Domain Adaptation
null
Domain adaptation is critical for learning transferable features that effectively reduce the distribution difference between domains. In the era of big data, the availability of large-scale labeled datasets motivates partial domain adaptation (PDA), which deals with adaptation from large source domains to small target domains with fewer classes. In the PDA setting, it is crucial to transfer relevant source samples and eliminate irrelevant ones to mitigate negative transfer. In this paper, we propose a deep reinforcement learning based source data selector for PDA, which is capable of automatically eliminating less relevant source samples to boost existing adaptation methods. It decides whether to keep or discard each source instance based on its feature representation, so that more effective knowledge transfer across domains can be achieved by filtering out irrelevant samples. As a general module, the proposed DRL-based data selector can be integrated into any existing domain adaptation or partial domain adaptation model. Extensive experiments on several benchmark datasets demonstrate the superiority of the proposed DRL-based data selector, which leads to state-of-the-art performance on various PDA tasks.
Keyu Wu, Min Wu, Jianfei Yang, Zhenghua Chen, Zhengguo Li, Xiaoli Li
null
null
2,021
ijcai
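A keep/discard decision per source sample, rewarded by how much the adaptation model improves, is naturally a Bernoulli policy trained with a policy gradient. The sketch below assumes REINFORCE and a reward supplied by the downstream PDA model (e.g., negative target-domain loss); architecture and reward are illustrative, not the paper's exact design.

```python
import torch
import torch.nn as nn

class SourceSelector(nn.Module):
    """Bernoulli keep/discard policy over source-sample features."""
    def __init__(self, d, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, feats):
        probs = torch.sigmoid(self.net(feats)).squeeze(-1)
        dist = torch.distributions.Bernoulli(probs)
        keep = dist.sample()                  # 1 = keep, 0 = discard
        return keep, dist.log_prob(keep)

def reinforce_step(selector, opt, feats, reward, baseline=0.0):
    """Policy-gradient update: reinforce decisions that earned high reward."""
    keep, logp = selector(feats)
    loss = -((reward - baseline) * logp).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return keep
```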