title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Encoding Probabilistic Graphical Models into Stochastic Boolean Satisfiability | null | Statistical inference is a powerful technique in various applications. Although many statistical inference tools are available, answering inference queries involving complex quantification structures remains challenging. Recently, solvers for Stochastic Boolean Satisfiability (SSAT), a powerful formalism allowing concise encodings of PSPACE decision problems under uncertainty, are under active development and are being applied in a growing number of applications. In this work, we exploit SSAT solvers for the inference of Probabilistic Graphical Models (PGMs), an essential representation for probabilistic reasoning. Specifically, we develop encoding methods to systematically convert PGM inference problems into SSAT formulas for effective solving. Experimental results demonstrate that, by using our encoding, SSAT-based solving can complement existing PGM tools, especially in answering complex queries. | Cheng-Han Hsieh, Jie-Hong R. Jiang | null | null | 2,022 | ijcai
An Exact MaxSAT Algorithm: Further Observations and Further Improvements | null | In the maximum satisfiability problem (MaxSAT), given a CNF formula with m clauses and n variables, we are asked to find an assignment of the variables to satisfy the maximum number of clauses. Chen and Kanj showed that this problem can be solved in O*(1.3248^m) time (DAM 2004) and the running time bound was improved to O*(1.2989^m) by Xu et al. (IJCAI 2019). In this paper, we further improve the result to O*(1.2886^m). By using some new reduction and branching techniques we can avoid several bottlenecks in previous algorithms and get the improvement on this important problem. | Mingyu Xiao | null | null | 2,022 | ijcai |
Automated Program Analysis: Revisiting Precondition Inference through Constraint Acquisition | null | Program annotations in the form of function pre/postconditions are crucial for many software engineering and program verification applications. Unfortunately, such annotations are rarely available and must be retrofitted by hand. In this paper, we explore how Constraint Acquisition (CA), a learning framework from Constraint Programming, can be leveraged to automatically infer program preconditions in a black-box manner from input-output observations. We propose PreCA, the first ever framework based on active constraint acquisition dedicated to inferring memory-related preconditions. PreCA surpasses prior techniques based on program analysis and formal methods, offering well-identified guarantees and returning more precise results in practice. | Grégoire Menguy, Sébastien Bardin, Nadjib Lazaar, Arnaud Gotlieb | null | null | 2,022 | ijcai
Threshold-free Pattern Mining Meets Multi-Objective Optimization: Application to Association Rules | null | Constraint-based pattern mining is at the core of numerous data mining tasks. Unfortunately, the thresholds involved in these constraints cannot be easily chosen. This paper investigates a Multi-objective Optimization approach where several (often conflicting) functions need to be optimized at the same time. We introduce a new model for efficiently mining Pareto optimal patterns with constraint programming. Our model exploits condensed pattern representations to reduce the mining effort. To this end, we design a new global constraint for ensuring the closeness of patterns over a set of measures. We show how our approach can be applied to derive high-quality, non-redundant association rules without the use of thresholds; their added value is studied on both UCI datasets and a case study on the analysis of gene expression data integrating multiple external gene annotations. | Charles Vernerey, Samir Loudni, Noureddine Aribi, Yahia Lebbah | null | null | 2,022 | ijcai
BandMaxSAT: A Local Search MaxSAT Solver with Multi-armed Bandit | null | We address Partial MaxSAT (PMS) and Weighted PMS (WPMS), two practical generalizations of the MaxSAT problem, and propose a local search algorithm for these problems, called BandMaxSAT, that applies a multi-armed bandit to guide the search direction. The bandit in our method is associated with all the soft clauses in the input (W)PMS instance. Each arm corresponds to a soft clause. The bandit model helps BandMaxSAT select a good direction to escape from local optima by selecting a soft clause to be satisfied in the current step, that is, selecting an arm to be pulled. We further propose an initialization method for (W)PMS that prioritizes both unit and binary clauses when producing the initial solutions. Extensive experiments demonstrate that BandMaxSAT significantly outperforms the state-of-the-art (W)PMS local search algorithm SATLike3.0. Specifically, the number of instances in which BandMaxSAT obtains better results is about twice that obtained by SATLike3.0. We further combine BandMaxSAT with the complete solver TT-Open-WBO-Inc. The resulting solver BandMaxSAT-c also outperforms some of the best state-of-the-art complete (W)PMS solvers, including SATLike-c, Loandra and TT-Open-WBO-Inc. | Jiongzhi Zheng, Kun He, Jianrong Zhou, Yan Jin, Chu-Min Li, Felip Manyà | null | null | 2,022 | ijcai
Doubly Sparse Asynchronous Learning for Stochastic Composite Optimization | null | Parallel optimization has become popular for large-scale learning in the past decades. However, existing methods suffer from huge computational costs, memory usage, and communication burden in high-dimensional scenarios. To address these challenges, we propose a new accelerated doubly sparse asynchronous learning (DSAL) method for stochastic composite optimization, under which two algorithms are proposed for shared-memory and distributed-memory architectures respectively; both conduct gradient descent only on the nonzero coordinates (data sparsity) and the active set (model sparsity). The proposed algorithms can converge much faster and achieve significant speedup by simultaneously enjoying the sparsity of the model and data. Moreover, by sending the gradients on the active set only, communication costs are dramatically reduced. Theoretically, we prove that the proposed method achieves a linear convergence rate with lower overall complexity and can achieve model identification in a finite number of iterations almost surely. Finally, extensive experimental results on benchmark datasets confirm the superiority of our proposed method. | Runxue Bao, Xidong Wu, Wenhan Xian, Heng Huang | null | null | 2,022 | ijcai
Degradation Accordant Plug-and-Play for Low-Rank Tensor Completion | null | Tensor completion aims at estimating missing values from an incomplete observation, playing a fundamental role for many applications. This work proposes a novel low-rank tensor completion model, in which the inherent low-rank prior and external degradation accordant data-driven prior are simultaneously utilized. Specifically, the tensor nuclear norm (TNN) is adopted to characterize the overall low-dimensionality of the tensor data. Meanwhile, an implicit regularizer is formulated and its related subproblem is solved via a deep convolutional neural network (CNN) under the plug-and-play framework. This CNN, pretrained for the inpainting task on a mass of natural images, is expected to express the external data-driven prior and this plugged inpainter is consistent with the original degradation process. Then, an efficient alternating direction method of multipliers (ADMM) is designed to solve the proposed optimization model. Extensive experiments are conducted on different types of tensor imaging data with the comparison with state-of-the-art methods, illustrating the effectiveness and the remarkable generalization ability of our method. | Yexun Hu, Tai-Xiang Jiang, Xi-Le Zhao | null | null | 2,022 | ijcai |
Inverting 43-step MD4 via Cube-and-Conquer | null | MD4 is a prominent cryptographic hash function proposed in 1990. The full version consists of 48 steps and produces a hash of size 128 bits given a message of an arbitrary finite size. In 2007, its truncated 39-step version was inverted via reducing to SAT and applying a CDCL solver. Since that time, several attempts have been made but the 40-step version still remains unbroken. In this study, 40-, 41-, 42-, and 43-step versions of MD4 are successfully inverted. The problems are reduced to SAT and solved via the Cube-and-Conquer approach. Two algorithms are proposed for this purpose. The first one generates inversion problems for MD4 by adding special constraints. The second one is aimed at finding a proper threshold for the cubing phase of Cube-and-Conquer. While the first algorithm is focused on inverting MD4 and similar cryptographic hash functions, the second one is not area specific and so is applicable to a variety of classes of hard SAT instances. | Oleg Zaikin | null | null | 2,022 | ijcai |
Entity Alignment with Reliable Path Reasoning and Relation-aware Heterogeneous Graph Transformer | null | Entity Alignment (EA), which aims to identify entities with the same meaning across different Knowledge Graphs (KGs), has attracted widespread attention in both academia and industry. There are substantial multi-step relation paths between entities in KGs, indicating the semantic relations of entities. However, existing methods rarely consider path information because not all natural paths facilitate EA judgment. In this paper, we propose a more effective entity alignment framework, RPR-RHGT, which integrates relation and path structure information, as well as the heterogeneous information in KGs. Impressively, an initial reliable path reasoning algorithm is developed to generate the paths favorable for the EA task from the relation structures of KGs. This is the first algorithm in the literature to successfully use unrestricted path information. In addition, to efficiently capture heterogeneous features in entity neighborhoods, a relation-aware heterogeneous graph transformer is designed to model the relation and path structures of KGs. Extensive experiments on three well-known datasets show that RPR-RHGT significantly outperforms 10 state-of-the-art methods, exceeding the best-performing baseline by up to 8.62% on Hits@1. We also show that it performs better than the baselines under different training-set ratios and on harder datasets. | Weishan Cai, Wenjun Ma, Jieyu Zhan, Yuncheng Jiang | null | null | 2,022 | ijcai
Hypergraph Structure Learning for Hypergraph Neural Networks | null | Hypergraphs are natural and expressive modeling tools to encode high-order relationships among entities. Several variations of Hypergraph Neural Networks (HGNNs) have been proposed to learn the node representations and complex relationships in the hypergraphs. Most current approaches assume that the input hypergraph structure accurately depicts the relations in the hypergraphs. However, the input hypergraph structure inevitably contains noise, task-irrelevant information, or false-negative connections. Treating the input hypergraph structure as ground-truth information unavoidably leads to sub-optimal performance. In this paper, we propose a Hypergraph Structure Learning (HSL) framework, which optimizes the hypergraph structure and the HGNNs simultaneously in an end-to-end way. HSL learns an informative and concise hypergraph structure that is optimized for downstream tasks. To efficiently learn the hypergraph structure, HSL adopts a two-stage sampling process: hyperedge sampling for pruning redundant hyperedges and incident node sampling for pruning irrelevant incident nodes and discovering potential implicit connections. The consistency between the optimized structure and the original structure is maintained by the intra-hyperedge contrastive learning module. The sampling processes are jointly optimized with HGNNs towards the objective of the downstream tasks. Experiments conducted on 7 datasets show that HSL outperforms the state-of-the-art baselines while adaptively sparsifying hypergraph structures. | Derun Cai, Moxian Song, Chenxi Sun, Baofeng Zhang, Shenda Hong, Hongyan Li | null | null | 2,022 | ijcai
Mutual Distillation Learning Network for Trajectory-User Linking | null | Trajectory-User Linking (TUL), which links trajectories to the users who generate them, has been a challenging problem due to the sparsity of check-in mobility data. Existing methods ignore the utilization of historical data or rich contextual features in check-in data, resulting in poor performance on the TUL task. In this paper, we propose a novel mutual distillation learning network, named MainTUL, to solve the TUL problem for sparse check-in mobility data. Specifically, MainTUL is composed of a Recurrent Neural Network (RNN) trajectory encoder that models sequential patterns of the input trajectory and a temporal-aware Transformer trajectory encoder that captures long-term time dependencies for the corresponding augmented historical trajectories. Then, the knowledge learned on historical trajectories is transferred between the two trajectory encoders to guide the learning of both encoders and achieve mutual distillation of information. Experimental results on two real-world check-in mobility datasets demonstrate the superiority of MainTUL against state-of-the-art baselines. The source code of our model is available at https://github.com/Onedean/MainTUL. | Wei Chen, ShuZhe Li, Chao Huang, Yanwei Yu, Yongguo Jiang, Junyu Dong | null | null | 2,022 | ijcai
Towards Robust Dense Retrieval via Local Ranking Alignment | null | Dense retrieval (DR) has extended the employment of pre-trained language models, like BERT, for text ranking. However, recent studies have raised the robustness issue of DR models against query variations, such as queries with typos, along with non-trivial performance losses. Herein, we argue that it would be beneficial to allow the DR model to learn to align the relative positions of query-passage pairs in the representation space, as query variations cause the query vector to drift away from its original position, affecting the subsequent DR effectiveness. To this end, we propose RoDR, a novel robust DR model that learns to calibrate the in-batch local ranking of a query variation to that of the original query for DR space alignment. Extensive experiments on the MS MARCO and ANTIQUE datasets show that RoDR significantly improves the retrieval results on both the original queries and different types of query variations. Meanwhile, RoDR provides a general query-noise-tolerant learning framework that boosts the robustness and effectiveness of various existing DR models. Our code and models are openly available at https://github.com/cxa-unique/RoDR. | Xuanang Chen, Jian Luo, Ben He, Le Sun, Yingfei Sun | null | null | 2,022 | ijcai
Meta-Learning Based Knowledge Extrapolation for Knowledge Graphs in the Federated Setting | null | We study the knowledge extrapolation problem to embed new components (i.e., entities and relations) that come with emerging knowledge graphs (KGs) in the federated setting. In this problem, a model trained on an existing KG needs to embed an emerging KG with unseen entities and relations. To solve this problem, we introduce the meta-learning setting, where a set of tasks are sampled on the existing KG to mimic the link prediction task on the emerging KG. Based on sampled tasks, we meta-train a graph neural network framework that can construct features for unseen components based on structural information and output embeddings for them. Experimental results show that our proposed method can effectively embed unseen components and outperforms models that consider inductive settings for KGs and baselines that directly use conventional KG embedding methods. | Mingyang Chen, Wen Zhang, Zhen Yao, Xiangnan Chen, Mengxiao Ding, Fei Huang, Huajun Chen | null | null | 2,022 | ijcai |
Vertically Federated Graph Neural Network for Privacy-Preserving Node Classification | null | Recently, Graph Neural Networks (GNNs) have achieved remarkable progress in various real-world tasks on graph data, which consists of node features and adjacency information between nodes. High-performance GNN models always depend on both rich features and complete edge information in the graph. However, such information could possibly be isolated by different data holders in practice, which is the so-called data isolation problem. To solve this problem, in this paper, we propose VFGNN, a federated GNN learning paradigm for the privacy-preserving node classification task under the vertically partitioned data setting, which can be generalized to existing GNN models. Specifically, we split the computation graph into two parts. We leave the computations related to private data (i.e., features, edges, and labels) on the data holders, and delegate the rest of the computations to a semi-honest server. We also propose to apply differential privacy to prevent potential information leakage from the server. We conduct experiments on three benchmarks and the results demonstrate the effectiveness of VFGNN. | Chaochao Chen, Jun Zhou, Longfei Zheng, Huiwen Wu, Lingjuan Lyu, Jia Wu, Bingzhe Wu, Ziqi Liu, Li Wang, Xiaolin Zheng | null | null | 2,022 | ijcai
Filtration-Enhanced Graph Transformation | null | Graph kernels and graph neural networks (GNNs) are widely used for the classification of graph data. However, many existing graph kernels and GNNs have limited expressive power, because they cannot distinguish graphs if the classic 1-dimensional Weisfeiler-Leman (1-WL) algorithm does not distinguish them. To break the 1-WL expressiveness barrier, we propose a novel method called filtration-enhanced graph transformation, which is based on a concept from the area of topological data analysis. In a nutshell, our approach first transforms each original graph into a filtration-enhanced graph based on a certain pre-defined filtration operation, and then uses the transformed graphs as the inputs for graph kernels or GNNs. The striking feature of our approach is that it is a plug-in method and can be applied in any graph kernel and GNN to enhance their expressive power. We theoretically and experimentally demonstrate that our solutions exhibit significantly better performance than the state-of-the-art solutions for graph classification tasks. | Zijian Chen, Rong-Hua Li, Hongchao Qin, Huanzhong Duan, Yanxiong Lu, Qiangqiang Dai, Guoren Wang | null | null | 2,022 | ijcai
Triformer: Triangular, Variable-Specific Attentions for Long Sequence Multivariate Time Series Forecasting | null | A variety of real-world applications rely on far future information to make decisions, thus calling for efficient and accurate long sequence multivariate time series forecasting. While recent attention-based forecasting models show strong abilities in capturing long-term dependencies, they still suffer from two key limitations. First, canonical self attention has a quadratic complexity w.r.t. the input time series length, thus falling short in efficiency. Second, different variables’ time series often have distinct temporal dynamics, which existing studies fail to capture, as they use the same model parameter space, e.g., projection matrices, for all variables’ time series, thus falling short in accuracy. To ensure high efficiency and accuracy, we propose Triformer, a triangular, variable-specific attention. (i) Linear complexity: we introduce a novel patch attention with linear complexity. When stacking multiple layers of the patch attentions, a triangular structure is proposed such that the layer sizes shrink exponentially, thus maintaining linear complexity. (ii) Variable-specific parameters: we propose a light-weight method to enable distinct sets of model parameters for different variables’ time series to enhance accuracy without compromising efficiency and memory usage. Strong empirical evidence on four datasets from multiple domains justifies our design choices, and it demonstrates that Triformer outperforms state-of-the-art methods w.r.t. both accuracy and efficiency. Source code is publicly available at https://github.com/razvanc92/triformer. | Razvan-Gabriel Cirstea, Chenjuan Guo, Bin Yang, Tung Kieu, Xuanyi Dong, Shirui Pan | null | null | 2,022 | ijcai
Can Abnormality be Detected by Graph Neural Networks? | null | Anomaly detection in graphs has attracted considerable interest in both academia and industry due to its wide applications in numerous domains ranging from finance to biology. Meanwhile, graph neural networks (GNNs) are emerging as a powerful tool for modeling graph data. A natural and fundamental question that arises here is: can abnormality be detected by graph neural networks? In this paper, we aim to answer this question, which is nontrivial. As many existing works have explored, graph neural networks can be seen as filters for graph signals, with a preference for the low-frequency components in graphs. In other words, a GNN will smooth the signals of adjacent nodes. However, abnormality in a graph intuitively has the characteristic that it tends to be dissimilar to its neighbors, which are mostly normal samples. It thereby conflicts with the general assumption of traditional GNNs. To solve this, we propose a novel Adaptive Multi-frequency Graph Neural Network (AMNet), aiming to capture both low-frequency and high-frequency signals, and adaptively combine signals of different frequencies. Experimental results on real-world datasets demonstrate that our model achieves a significant improvement compared with several state-of-the-art baseline methods. | Ziwei Chai, Siqi You, Yang Yang, Shiliang Pu, Jiarong Xu, Haoyang Cai, Weihao Jiang | null | null | 2,022 | ijcai
CADET: Calibrated Anomaly Detection for Mitigating Hardness Bias | null | The detection of anomalous samples in large, high-dimensional datasets is a challenging task with numerous practical applications. Recently, state-of-the-art performance is achieved with deep learning methods: for example, using the reconstruction error from an autoencoder as anomaly scores. However, the scores are uncalibrated: that is, they follow an unknown distribution and lack a clear interpretation. Furthermore, the reconstruction error is highly influenced by the `hardness' of a given sample, which leads to false negative and false positive errors. In this paper, we empirically show the significance of this hardness bias present in a range of recent deep anomaly detection methods. To mitigate this, we propose an efficient and plug-and-play error calibration method which mitigates this hardness bias in the anomaly scoring without the need to retrain the model. We verify the effectiveness of our method on a range of image, time-series, and tabular datasets and against several baseline methods. | Ailin Deng, Adam Goodge, Lang Yi Ang, Bryan Hooi | null | null | 2,022 | ijcai |
A Strengthened Branch and Bound Algorithm for the Maximum Common (Connected) Subgraph Problem | null | We propose a new and strengthened Branch-and-Bound (BnB) algorithm for the maximum common (connected) induced subgraph problem based on two new operators, Long-Short Memory (LSM) and Leaf vertex Union Match (LUM). Given two graphs for which we search for the maximum common (connected) induced subgraph, the first operator of LSM maintains a score for the branching node using the short-term reward of each vertex of the first graph and the long-term reward of each vertex pair of the two graphs. In this way, the BnB process learns to reduce the search tree size significantly and boost the algorithm performance. The second operator of LUM further improves the performance by simultaneously matching the leaf vertices connected to the current matched vertices, and allows the algorithm to match multiple vertex pairs without affecting the optimality of the solution. We incorporate the two operators into the state-of-the-art BnB algorithm McSplit, and denote the resulting algorithm as McSplit+LL. Experiments show that McSplit+LL outperforms McSplit+RL, a more recent variant of McSplit using reinforcement learning that is superior to McSplit. | Jianrong Zhou, Kun He, Jiongzhi Zheng, Chu-Min Li, Yanli Liu | null | null | 2,022 | ijcai
Robust High-Dimensional Classification From Few Positive Examples | null | We tackle an extreme form of imbalanced classification, with up to 10^5 features but as few as 5 samples from the minority class. This problem occurs in predicting tumor types and in fraud detection, among others. Standard imbalanced classification methods are not designed for such severe data scarcity. Sampling-based methods need too many samples due to the high dimensionality, while cost-based methods must place too high a weight on the limited minority samples. Our proposed method, called DIRECT, bypasses sample generation by training the classifier over a robust smoothed distribution of the minority class. DIRECT is fast, simple, robust, parameter-free, and easy to interpret. We validate DIRECT on several real-world datasets spanning document, image, and medical classification. DIRECT is up to 5x − 7x better than SMOTE-like methods, 30−200% better than ensemble methods, and 3x − 7x better than cost-sensitive methods. The greatest gains are for settings with the fewest samples in the minority class, where DIRECT’s robustness is most helpful. | Deepayan Chakrabarti, Benjamin Fauber | null | null | 2,022 | ijcai
Feature and Instance Joint Selection: A Reinforcement Learning Perspective | null | Feature selection and instance selection are two important techniques of data processing. However, such selections have mostly been studied separately, while existing work towards the joint selection conducts feature/instance selection coarsely, thus neglecting the latent fine-grained interaction between feature space and instance space. To address this challenge, we propose a reinforcement learning solution to accomplish the joint selection task and simultaneously capture the interaction between the selection of each feature and each instance. In particular, a sequential-scanning mechanism is designed as the action strategy of agents and a collaborative-changing environment is used to enhance agent collaboration. In addition, an interactive paradigm introduces prior selection knowledge to help agents explore more efficiently. Finally, extensive experiments on real-world datasets have demonstrated improved performance. | Wei Fan, Kunpeng Liu, Hao Liu, Hengshu Zhu, Hui Xiong, Yanjie Fu | null | null | 2,022 | ijcai
CGMN: A Contrastive Graph Matching Network for Self-Supervised Graph Similarity Learning | null | Graph similarity learning refers to calculating the similarity score between two graphs, which is required in many realistic applications, such as visual tracking, graph classification, and collaborative filtering. As most of the existing graph neural networks yield effective graph representations of a single graph, little effort has been made for jointly learning two graph representations and calculating their similarity score. In addition, existing unsupervised graph similarity learning methods are mainly clustering-based, which ignores the valuable information embodied in graph pairs. To this end, we propose a contrastive graph matching network (CGMN) for self-supervised graph similarity learning in order to calculate the similarity between any two input graph objects. Specifically, we generate two augmented views for each graph in a pair respectively. Then, we employ two strategies, namely cross-view interaction and cross-graph interaction, for effective node representation learning. The former is resorted to strengthen the consistency of node representations in two views. The latter is utilized to identify node differences between different graphs. Finally, we transform node representations into graph-level representations via pooling operations for graph similarity computation. We have evaluated CGMN on eight real-world datasets, and the experiment results show that the proposed new approach is superior to the state-of-the-art methods in graph similarity learning downstream tasks. | Di Jin, Luzhi Wang, Yizhen Zheng, Xiang Li, Fei Jiang, Wei Lin, Shirui Pan | null | null | 2,022 | ijcai |
Constrained Adaptive Projection with Pretrained Features for Anomaly Detection | null | Anomaly detection aims to separate anomalies from normal samples, and the pretrained network is promising for anomaly detection. However, adapting the pretrained features would be confronted with the risk of pattern collapse when finetuning on one-class training data. In this paper, we propose an anomaly detection framework called constrained adaptive projection with pretrained features (CAP). Combined with pretrained features, a simple linear projection head applied on a specific input and its k most similar pretrained normal representations is designed for feature adaptation, and a reformed self-attention is leveraged to mine the inner-relationship among one-class semantic features. A loss function is proposed to avoid potential pattern collapse. Concretely, it considers the similarity between a specific data and its corresponding adaptive normal representation, and incorporates a constraint term slightly aligning pretrained and adaptive spaces. Our method achieves state-of-the-art anomaly detection performance on semantic anomaly detection and sensory anomaly detection benchmarks including 96.5% AUROC on CIFAR-100 dataset, 97.0% AUROC on CIFAR-10 dataset and 89.9% AUROC on MvTec dataset. | Xingtai Gui, Di Wu, Yang Chang, Shicai Fan | null | null | 2,022 | ijcai |
Modeling Precursors for Temporal Knowledge Graph Reasoning via Auto-encoder Structure | null | Temporal knowledge graph (TKG) reasoning that infers missing facts in the future is an essential and challenging task. When predicting a future event, there must be a narrative evolutionary process composed of closely related historical facts to support the event's occurrence, namely fact precursors. However, most existing models employ a sequential reasoning process in an auto-regressive manner, which cannot capture precursor information. This paper proposes a novel auto-encoder architecture that introduces a relation-aware graph attention layer into transformer (rGalT) to accommodate inference over the TKG. Specifically, we first calculate the correlation between historical and predicted facts through multiple attention mechanisms along intra-graph and inter-graph dimensions, then constitute these mutually related facts into diverse fact segments. Next, we borrow the translation generation idea to decode in parallel the precursor information associated with the given query, which enables our model to infer future unknown facts by progressively generating graph structures. Experimental results on four benchmark datasets demonstrate that our model outperforms other state-of-the-art methods, and precursor identification provides supporting evidence for prediction. | Yifu Gao, Linhui Feng, Zhigang Kan, Yi Han, Linbo Qiao, Dongsheng Li | null | null | 2,022 | ijcai |
Private Semi-Supervised Federated Learning | null | We study a federated learning (FL) framework to effectively train models from scarce and skewed labeled data. We consider a challenging yet practical scenario: a few data sources own a small amount of labeled data, while the remaining mass of sources owns purely unlabeled data. Classical FL requires each client to have enough labeled data for local training, and thus is not applicable in this scenario. In this work, we design an effective federated semi-supervised learning framework (FedSSL) to fully leverage both labeled and unlabeled data sources. We establish a unified data space across all participating agents, so that each agent can generate mixed data samples to boost semi-supervised learning (SSL), while keeping data locality. We further show that FedSSL can integrate differential privacy protection techniques to prevent labeled data leakage at the cost of minimal performance degradation. On SSL tasks with as little as 0.17% and 1% of the MNIST and CIFAR-10 datasets as labeled data, respectively, our approach achieves a 5-20% performance boost over the state-of-the-art methods. | Chenyou Fan, Junjie Hu, Jianwei Huang | null | null | 2,022 | ijcai
MERIT: Learning Multi-level Representations on Temporal Graphs | null | Recently, representation learning on temporal graphs, which aims at learning temporal patterns to characterize the evolving nature of dynamic graphs in real-world applications, has drawn increasing attention. Despite their effectiveness, these methods commonly ignore the individual- and combinatorial-level patterns derived from different types of interactions (e.g., user-item), which are at the heart of representation learning on temporal graphs. To fill this gap, we propose MERIT, a novel multi-level graph attention network for inductive representation learning on temporal graphs. We adaptively embed the original timestamps into a higher-dimensional continuous space for learning individual-level periodicity through the Personalized Time Encoding (PTE) module. Furthermore, we equip MERIT with a Continuous-time and Context-aware Attention (Coco-Attention) mechanism, which chronologically locates the most relevant neighbors by jointly capturing multi-level context on temporal graphs. Finally, MERIT performs multiple aggregations and propagations to explore and exploit high-order structural information for downstream tasks. Extensive experiments on four public datasets demonstrate the effectiveness of MERIT on both (inductive / transductive) link prediction and node classification tasks. | Binbin Hu, Zhengwei Wu, Jun Zhou, Ziqi Liu, Zhigang Huangfu, Zhiqiang Zhang, Chaochao Chen | null | null | 2,022 | ijcai
RAW-GNN: RAndom Walk Aggregation based Graph Neural Network | null | Graph-Convolution-based methods have been successfully applied to representation learning on homophily graphs where nodes with the same label or similar attributes tend to connect with one another. Due to the homophily assumption of Graph Convolutional Networks (GCNs) that these methods use, they are not suitable for heterophily graphs where nodes with different labels or dissimilar attributes tend to be adjacent. Several methods have attempted to address this heterophily problem, but they do not change the fundamental aggregation mechanism of GCNs because they rely on summation operators to aggregate information from neighboring nodes, which is implicitly subject to the homophily assumption. Here, we introduce a novel aggregation mechanism and develop a RAndom Walk Aggregation-based Graph Neural Network (called RAW-GNN) method. The proposed approach integrates the random walk strategy with graph neural networks. The new method utilizes breadth-first random walk search to capture homophily information and depth-first search to collect heterophily information. It replaces the conventional neighborhoods with path-based neighborhoods and introduces a new path-based aggregator based on Recurrent Neural Networks. These designs make RAW-GNN suitable for both homophily and heterophily graphs. Extensive experimental results showed that the new method achieved state-of-the-art performance on a variety of homophily and heterophily graphs. | Di Jin, Rui Wang, Meng Ge, Dongxiao He, Xiang Li, Wei Lin, Weixiong Zhang | null | null | 2,022 | ijcai |
GraphDIVE: Graph Classification by Mixture of Diverse Experts | null | Graph classification is a challenging research task in many applications across a broad range of domains. Recently, Graph Neural Network (GNN) models have achieved superior performance on various real-world graph datasets. Despite their successes, most current GNN models largely suffer from the ubiquitous class imbalance problem, which typically results in prediction bias towards majority classes. Although many imbalanced learning methods have been proposed, they mainly focus on regular Euclidean data and cannot fully utilize the topological structure of graph (non-Euclidean) data. To boost the performance of GNNs and investigate the relationship between topological structure and class imbalance, we propose GraphDIVE, which learns multi-view graph representations and combines multi-view experts (i.e., classifiers). Specifically, the multi-view graph representations correspond to the intrinsic, diverse topological structure characteristics of graphs. Extensive experiments on molecular benchmark datasets demonstrate the effectiveness of the proposed approach. | Fenyu Hu, Liping Wang, Qiang Liu, Shu Wu, Liang Wang, Tieniu Tan | null | null | 2,022 | ijcai
Self-supervised Graph Neural Networks for Multi-behavior Recommendation | null | Traditional recommendation usually focuses on utilizing only one target user behavior (e.g., purchase) while ignoring other auxiliary behaviors (e.g., click, add to cart). Early efforts in multi-behavior recommendation often emphasize the differences between multiple behaviors, i.e., they aim to extract useful information by distinguishing different behaviors. However, the commonality between them, which reflects users' common preferences for items associated with different behaviors, is largely ignored. Meanwhile, multi-behavior recommendation still severely suffers from the limited supervision signal issue. In this paper, we propose a novel self-supervised graph collaborative filtering model for multi-behavior recommendation named S-MBRec. Specifically, for each behavior, we execute the GCNs to learn the user and item embeddings. Then we design a supervised task, distinguishing the importance of different behaviors, to capture the differences between embeddings. Meanwhile, we propose a star-style contrastive learning task to capture the embedding commonality between target and auxiliary behaviors, so as to alleviate the sparsity of the supervision signal, reduce the redundancy among auxiliary behaviors, and extract the most critical information. Finally, we jointly optimize the above two tasks. Extensive experiments, in comparison with state-of-the-art methods, well demonstrate the effectiveness of S-MBRec, where the maximum improvement can reach 20%. | Shuyun Gu, Xiao Wang, Chuan Shi, Ding Xiao | null | null | 2,022 | ijcai
Quaternion Ordinal Embedding | null | Ordinal embedding (OE) aims to project objects into a low-dimensional space while preserving their ordinal constraints as well as possible. Generally speaking, a reasonable OE algorithm should simultaneously capture a) semantic meaning and b) the ordinal relationship of the objects. However, most of the existing methods merely focus on b). To address this issue, our goal in this paper is to seek a generic OE method to embrace the two features simultaneously. We argue that different dimensions of vector-based embedding are naturally entangled with each other. To realize a), we expect to decompose the D-dimensional embedding space into D different semantic subspaces, where each subspace is associated with a matrix representation. Unfortunately, introducing a matrix-based representation requires a far more complex parametric space than its vector-based counterparts. Thanks to the algebraic property of quaternions, we are able to find a more efficient way to represent a matrix with quaternions. For b), inspired by the classic chordal Grassmannian distance, a new distance function is defined to measure the distance between different quaternions/matrices, on top of which we construct a generic OE loss function. Experimental results for different tasks on both simulated and real-world datasets verify the effectiveness of our proposed method. | Wenzheng Hou, Qianqian Xu, Ke Ma, Qianxiu Hao, Qingming Huang | null | null | 2,022 | ijcai
Gromov-Wasserstein Discrepancy with Local Differential Privacy for Distributed Structural Graphs | null | Learning the similarity between structured data, especially graphs, is one of the essential problems. Besides approaches like graph kernels, the Gromov-Wasserstein (GW) distance has recently drawn considerable attention due to its flexibility in capturing both topological and feature characteristics, as well as handling permutation invariance. However, structured data are widely distributed for different data mining and machine learning applications. Due to privacy concerns, access to the decentralized data is limited to either individual clients or different silos. To tackle these issues, we propose a privacy-preserving framework to analyze the GW discrepancy of node embeddings learned locally from graph neural networks in a federated flavor, and then explicitly place local differential privacy (LDP) based on a Multi-bit Encoder to protect sensitive information. Our experiments show that, with strong privacy protection guaranteed by the ε-LDP algorithm, the proposed framework not only preserves privacy in graph learning, but also presents a noised structural metric under GW distance, resulting in comparable and even better performance in classification and clustering tasks. Moreover, we reason about the rationale behind the LDP-based GW distance analytically and empirically. | Hongwei Jin, Xun Chen | null | null | 2,022 | ijcai
End-to-End Open-Set Semi-Supervised Node Classification with Out-of-Distribution Detection | null | Out-Of-Distribution (OOD) samples are prevalent in real-world applications. The OOD issue becomes even more severe on graph data, as the effect of OOD nodes can be potentially amplified by propagation through the graph topology. Recent works have considered the OOD detection problem, which is critical for reducing the uncertainty in learning and improving the robustness. However, no prior work simultaneously considers OOD detection and node classification on graphs in an end-to-end manner. In this paper, we study a novel problem of end-to-end open-set semi-supervised node classification (OSSNC) on graphs, which deals with node classification in the presence of OOD nodes. Given the lack of supervision on OOD nodes, we introduce a latent variable to indicate in-distribution or OOD nodes in a variational inference framework, and further propose a novel algorithm named Learning to Mix Neighbors (LMN) which learns to dampen the influence of OOD nodes through the message passing in typical graph neural networks. Extensive experiments on various datasets show that the proposed method outperforms state-of-the-art baselines in terms of both node classification and OOD detection. | Tiancheng Huang, Donglin Wang, Yuan Fang, Zhengyu Chen | null | null | 2,022 | ijcai
When Transfer Learning Meets Cross-City Urban Flow Prediction: Spatio-Temporal Adaptation Matters | null | Urban flow prediction is a fundamental task in building smart cities, where neural networks have become the most popular method. However, deep learning methods typically rely on massive training data that are probably inaccessible in the real world. In light of this, the community calls for knowledge transfer. However, when adapting transfer learning for cross-city prediction tasks, existing studies are built on static knowledge transfer, ignoring the fact that inter-city correlations change dynamically over time. These dynamic correlations make urban feature transfer challenging. This paper proposes a novel Spatio-Temporal Adaptation Network (STAN) to perform urban flow prediction for data-scarce cities via the spatio-temporal knowledge transferred from data-rich cities. STAN encompasses three modules: i) a spatial adversarial adaptation module that adopts an adversarial manner to capture the transferable spatial features; ii) a temporal attentive adaptation module to attend to critical dynamics for temporal feature transfer; iii) a prediction module that aims to learn task-driven transferable knowledge. Extensive experiments on five real datasets show that STAN substantially outperforms state-of-the-art methods. | Ziquan Fang, Dongen Wu, Lu Pan, Lu Chen, Yunjun Gao | null | null | 2,022 | ijcai
MetaER-TTE: An Adaptive Meta-learning Model for En Route Travel Time Estimation | null | En route travel time estimation (ER-TTE) aims to predict the travel time on the remaining route. Since the traveled and remaining parts of a trip usually have some common characteristics like driving speed, it is desirable to explore these characteristics for improved performance via effective adaptation. This yet faces the severe problem of data sparsity due to the few sampled points in a traveled partial trajectory. Since trajectories with different contextual information tend to have different characteristics, the existing meta-learning method for ER-TTE cannot fit each trajectory well because it uses the same model for all trajectories. To this end, we propose a novel adaptive meta-learning model called MetaER-TTE. Particularly, we utilize soft-clustering and derive cluster-aware initialized parameters to better transfer the shared knowledge across trajectories with similar contextual information. In addition, we adopt a distribution-aware approach for adaptive learning rate optimization, so as to avoid task-overfitting which will occur when guiding the initial parameters with a fixed learning rate for tasks under imbalanced distribution. Finally, we conduct comprehensive experiments to demonstrate the superiority of MetaER-TTE. | Yu Fan, Jiajie Xu, Rui Zhou, Jianxin Li, Kai Zheng, Lu Chen, Chengfei Liu | null | null | 2,022 | ijcai |
Disentangling the Computational Complexity of Network Untangling | null | We study the recently introduced network untangling problem, a variant of Vertex Cover on temporal graphs---graphs whose edge set changes over discrete time steps. There are two versions of this problem. The goal is to select at most k time intervals for each vertex such that all time-edges are covered and (depending on the problem variant) either the maximum interval length or the total sum of interval lengths is minimized. This problem has data mining applications in finding activity timelines that explain the interactions of entities in complex networks. Both variants of the problem are NP-hard. In this paper, we initiate a multivariate complexity analysis involving the following parameters: number of vertices, lifetime of the temporal graph, number of intervals per vertex, and the interval length bound. For both problem versions, we (almost) completely settle the parameterized complexity for all combinations of those four parameters, thereby delineating the border of fixed-parameter tractability. | Vincent Froese, Pascal Kunz, Philipp Zschoche | null | null | 2,022 | ijcai
A Sparse-Motif Ensemble Graph Convolutional Network against Over-smoothing | null | The over-smoothing issue is a well-known challenge for Graph Convolutional Networks (GCN). Specifically, it is often observed that increasing the depth of GCN ends up in a trivial embedding subspace where the difference among node embeddings belonging to the same cluster tends to vanish. We attribute this mainly to the limited diversity along the message passing pipeline. Inspired by this, we propose a Sparse-Motif Ensemble Graph Convolutional Network (SMEGCN). We argue that merely employing the original graph Laplacian as the spectrum of the graph cannot capture the diversified local structure of complex graphs. Hence, to improve the diversity of the graph spectrum, we introduce local topological structures of complex graphs into GCN by employing so-called graph motifs, i.e., small network subgraphs. Moreover, we find that the motif connections are much denser than the edge connections, which might converge to an all-one matrix within a few rounds of message passing. To fix this, we first propose the notion of sparse motif to avoid spurious motif connections. Subsequently, we propose a hierarchical motif aggregation mechanism to integrate the graph spectral information from a series of different sparse-motif message passing paths. Finally, we conduct a series of theoretical and experimental analyses to demonstrate the superiority of the proposed method. | Xuan Jiang, Zhiyong Yang, Peisong Wen, Li Su, Qingming Huang | null | null | 2,022 | ijcai
MLP4Rec: A Pure MLP Architecture for Sequential Recommendations | null | Self-attention models have achieved state-of-the-art performance in sequential recommender systems by capturing the sequential dependencies among user-item interactions. However, they rely on positional embeddings to retain the sequential information, which may break the semantics of item embeddings. In addition, most existing works assume that such sequential dependencies exist solely in the item embeddings, but neglect their existence among the item features. In this work, we propose a novel sequential recommender system (MLP4Rec) based on the recent advances of MLP-based architectures, which is naturally sensitive to the order of items in a sequence. To be specific, we develop a tri-directional fusion scheme to coherently capture sequential, cross-channel and cross-feature correlations. Extensive experiments demonstrate the effectiveness of MLP4Rec over various representative baselines upon two benchmark datasets. The simple architecture of MLP4Rec also leads to the linear computational complexity as well as much fewer model parameters than existing self-attention methods. | Muyang Li, Xiangyu Zhao, Chuan Lyu, Minghao Zhao, Runze Wu, Ruocheng Guo | null | null | 2,022 | ijcai |
HashNWalk: Hash and Random Walk Based Anomaly Detection in Hyperedge Streams | null | Sequences of group interactions, such as emails, online discussions, and co-authorships, are ubiquitous; and they are naturally represented as a stream of hyperedges (i.e., sets of nodes). Despite its broad potential applications, anomaly detection in hypergraphs (i.e., sets of hyperedges) has received surprisingly little attention, compared to anomaly detection in graphs. While it is tempting to reduce hypergraphs to graphs and apply existing graph-based methods, according to our experiments, taking higher-order structures of hypergraphs into consideration is worthwhile. We propose HashNWalk, an incremental algorithm that detects anomalies in a stream of hyperedges. It maintains and updates a constant-size summary of the structural and temporal information about the input stream. Using the summary, which is the form of a proximity matrix, HashNWalk measures the anomalousness of each new hyperedge as it appears. HashNWalk is (a) Fast: it processes each hyperedge in near real-time and billions of hyperedges within a few hours, (b) Space Efficient: the size of the maintained summary is a user-specific constant, (c) Effective: it successfully detects anomalous hyperedges in real-world hypergraphs. | Geon Lee, Minyoung Choe, Kijung Shin | null | null | 2,022 | ijcai |
TiRGN: Time-Guided Recurrent Graph Network with Local-Global Historical Patterns for Temporal Knowledge Graph Reasoning | null | Temporal knowledge graphs (TKGs) have been widely used in various fields that model the dynamics of facts along the timeline. In the extrapolation setting of TKG reasoning, since facts happening in the future are entirely unknowable, insight into history is the key to predicting future facts. However, it is still a great challenge for existing models as they hardly learn the characteristics of historical events adequately. From the perspective of historical development laws, comprehensively considering the sequential, repetitive, and cyclical patterns of historical facts is conducive to predicting future facts. To this end, we propose a novel representation learning model for TKG reasoning, namely TiRGN, a time-guided recurrent graph network with local-global historical patterns. Specifically, TiRGN uses a local recurrent graph encoder network to model the historical dependency of events at adjacent timestamps and uses the global history encoder network to collect repeated historical facts. After the trade-off between the two encoders, the final inference is performed by a decoder with periodicity. We use six benchmark datasets to evaluate the proposed method. The experimental results show that TiRGN outperforms the state-of-the-art TKG reasoning methods in most cases. | Yujia Li, Shiliang Sun, Jing Zhao | null | null | 2,022 | ijcai |
TGNN: A Joint Semi-supervised Framework for Graph-level Classification | null | This paper studies semi-supervised graph classification, a crucial task with a wide range of applications in social network analysis and bioinformatics. Recent works typically adopt graph neural networks to learn graph-level representations for classification, failing to explicitly leverage features derived from graph topology (e.g., paths). Moreover, when labeled data is scarce, these methods are far from satisfactory due to their insufficient topology exploration of unlabeled data. We address the challenge by proposing a novel semi-supervised framework called Twin Graph Neural Network (TGNN). To explore graph structural information from complementary views, our TGNN has a message passing module and a graph kernel module. To fully utilize unlabeled data, for each module, we calculate the similarity of each unlabeled graph to other labeled graphs in the memory bank and our consistency loss encourages consistency between two similarity distributions in different embedding spaces. The two twin modules collaborate with each other by exchanging instance similarity knowledge to fully explore the structure information of both labeled and unlabeled data. We evaluate our TGNN on various public datasets and show that it achieves strong performance. | Wei Ju, Xiao Luo, Meng Qu, Yifan Wang, Chong Chen, Minghua Deng, Xian-Sheng Hua, Ming Zhang | null | null | 2,022 | ijcai |
Poisoning Deep Learning Based Recommender Model in Federated Learning Scenarios | null | Various attack methods against recommender systems have been proposed in the past years, and the security issues of recommender systems have drawn considerable attention. Traditional attacks attempt to make target items recommended to as many users as possible by poisoning the training data. Benefiting from the feature of protecting users' private data, federated recommendation can effectively defend against such attacks. Therefore, quite a few works have devoted themselves to developing federated recommender systems. To prove that current federated recommendation is still vulnerable, in this work we design attack approaches targeting deep learning based recommender models in federated learning scenarios. Specifically, our attacks generate poisoned gradients for manipulated malicious users to upload based on two strategies (i.e., random approximation and hard user mining). Extensive experiments show that our well-designed attacks can effectively poison the target models, and the attack effectiveness sets the state of the art. | Dazhong Rong, Qinming He, Jianhai Chen | null | null | 2,022 | ijcai
Physics-Informed Long-Sequence Forecasting From Multi-Resolution Spatiotemporal Data | null | Spatiotemporal data aggregated over regions or time windows at various resolutions demonstrate heterogeneous patterns and dynamics in each resolution. Meanwhile, the multi-resolution characteristic provides rich contextual information, which is critical for effective long-sequence forecasting. The importance of such inter-resolution information is more significant in practical cases, where fine-grained data is usually collected via approaches with lower costs but also lower qualities compared to those for coarse-grained data. However, existing works focus on uni-resolution data and cannot be directly applied to fully utilize the aforementioned extra information in multi-resolution data. In this work, we propose Spatiotemporal Koopman Multi-Resolution Network (ST-KMRN), a physics-informed learning framework for long-sequence forecasting from multi-resolution spatiotemporal data. Our method jointly models data aggregated in multiple resolutions and captures the inter-resolution dynamics with the self-attention mechanism. We also propose downsampling and upsampling modules among resolutions to further strengthen the connections among data of multiple resolutions. Moreover, we enhance the modeling of intra-resolution dynamics with physics-informed modules based on Koopman theory. Experimental results demonstrate that our proposed approach achieves the best performance on the long-sequence forecasting tasks compared to baselines without a specific design for multi-resolution data. | Chuizheng Meng, Hao Niu, Guillaume Habault, Roberto Legaspi, Shinya Wada, Chihiro Ono, Yan Liu | null | null | 2,022 | ijcai |
Raising the Bar in Graph-level Anomaly Detection | null | Graph-level anomaly detection has become a critical topic in diverse areas, such as financial fraud detection and detecting anomalous activities in social networks. While most research has focused on anomaly detection for visual data such as images, where high detection accuracies have been obtained, existing deep learning approaches for graphs currently show considerably worse performance. This paper raises the bar on graph-level anomaly detection, i.e., the task of detecting abnormal graphs in a set of graphs. By drawing on ideas from self-supervised learning and transformation learning, we present a new deep learning approach that significantly improves existing deep one-class approaches by fixing some of their known problems, including hypersphere collapse and performance flip. Experiments on nine real-world data sets involving nine techniques reveal that our method achieves an average performance improvement of 11.8% AUC compared to the best existing approach. | Chen Qiu, Marius Kloft, Stephan Mandt, Maja Rudolph | null | null | 2,022 | ijcai |
Federated Learning on Heterogeneous and Long-Tailed Data via Classifier Re-Training with Federated Features | null | Federated learning (FL) provides a privacy-preserving solution for distributed machine learning tasks. One challenging problem that severely damages the performance of FL models is the co-occurrence of data heterogeneity and long-tail distribution, which frequently appears in real FL applications. In this paper, we reveal an intriguing fact that the biased classifier is the primary factor leading to the poor performance of the global model. Motivated by the above finding, we propose a novel and privacy-preserving FL method for heterogeneous and long-tailed data via Classifier Re-training with Federated Features (CReFF). The classifier re-trained on federated features can produce comparable performance as the one re-trained on real data in a privacy-preserving manner without information leakage of local data or class distribution. Experiments on several benchmark datasets show that the proposed CReFF is an effective solution to obtain a promising FL model under heterogeneous and long-tailed data. Comparative results with the state-of-the-art FL methods also validate the superiority of CReFF. Our code is available at https://github.com/shangxinyi/CReFF-FL. | Xinyi Shang, Yang Lu, Gang Huang, Hanzi Wang | null | null | 2,022 | ijcai |
Towards Resolving Propensity Contradiction in Offline Recommender Learning | null | We study offline recommender learning from explicit rating feedback in the presence of selection bias. A current promising solution for dealing with the bias is the inverse propensity score (IPS) estimation. However, the existing propensity-based methods can suffer significantly from the propensity estimation bias. In fact, most of the previous IPS-based methods require some amount of missing-completely-at-random (MCAR) data to accurately estimate the propensity. This leads to a critical self-contradiction; IPS is ineffective without MCAR data, even though it originally aims to learn recommenders from only missing-not-at-random feedback. To resolve this propensity contradiction, we derive a propensity-independent generalization error bound and propose a novel algorithm to minimize the theoretical bound via adversarial learning. Our theory and algorithm do not require a propensity estimation procedure, thereby leading to a well-performing rating predictor without the true propensity information. Extensive experiments demonstrate that the proposed algorithm is superior to a range of existing methods both in rating prediction and ranking metrics in practical settings without MCAR data. Full version of the paper (including the appendix) is available at: https://arxiv.org/abs/1910.07295. | Yuta Saito, Masahiro Nomura | null | null | 2,022 | ijcai |
Adapt to Adaptation: Learning Personalization for Cross-Silo Federated Learning | null | Conventional federated learning (FL) trains one global model for a federation of clients with decentralized data, reducing the privacy risk of centralized training. However, the distribution shift across non-IID datasets often poses a challenge to this one-model-fits-all solution. Personalized FL aims to mitigate this issue systematically. In this work, we propose APPLE, a personalized cross-silo FL framework that adaptively learns how much each client can benefit from other clients’ models. We also introduce a method to flexibly control the focus of training APPLE between global and local objectives. We empirically evaluate our method's convergence and generalization behaviors, and perform extensive experiments on two benchmark datasets and two medical imaging datasets under two non-IID settings. The results show that the proposed personalized FL framework, APPLE, achieves state-of-the-art performance compared to several other personalized FL approaches in the literature. The code is publicly available at https://github.com/ljaiverson/pFL-APPLE. | Jun Luo, Shandong Wu | null | null | 2,022 | ijcai
Discrete Listwise Personalized Ranking for Fast Top-N Recommendation with Implicit Feedback | null | We address the efficiency problem of personalized ranking from implicit feedback by hashing users and items with binary codes, so that top-N recommendation can be fast executed in a Hamming space by bit operations. However, current hashing methods for top-N recommendation fail to align their learning objectives (such as pointwise or pairwise loss) with the benchmark metrics for ranking quality (e.g. Average Precision, AP), resulting in sub-optimal accuracy. To this end, we propose a Discrete Listwise Personalized Ranking (DLPR) model that optimizes AP under discrete constraints for fast and accurate top-N recommendation. To resolve the challenging DLPR problem, we devise an efficient algorithm that can directly learn binary codes in a relaxed continuous solution space. Specifically, theoretical analysis shows that the optimal solution to the relaxed continuous optimization problem is exactly the same as that of the original discrete DLPR problem. Through extensive experiments on two real-world datasets, we show that DLPR consistently surpasses state-of-the-art hashing methods for top-N recommendation. | Fangyuan Luo, Jun Wu, Tao Wang | null | null | 2,022 | ijcai |
Continual Federated Learning Based on Knowledge Distillation | null | Federated learning (FL) is a promising approach for learning a shared global model on decentralized data owned by multiple clients without exposing their privacy. In real-world scenarios, data accumulated at the client-side varies in distribution over time. As a consequence, the global model tends to forget the knowledge obtained from previous tasks while learning new tasks, showing signs of "catastrophic forgetting". Previous studies in centralized learning use techniques such as data replay and parameter regularization to mitigate catastrophic forgetting. Unfortunately, these techniques cannot adequately solve the non-trivial problem in FL. We propose Continual Federated Learning with Distillation (CFeD) to address catastrophic forgetting under FL. CFeD performs knowledge distillation on both the clients and the server, with each party independently having an unlabeled surrogate dataset, to mitigate forgetting. Moreover, CFeD assigns different learning objectives, namely learning the new task and reviewing old tasks, to different clients, aiming to improve the learning ability of the model. The results show that our method performs well in mitigating catastrophic forgetting and achieves a good trade-off between the two objectives. | Yuhang Ma, Zhongle Xie, Jue Wang, Ke Chen, Lidan Shou | null | null | 2,022 | ijcai |
Community Question Answering Entity Linking via Leveraging Auxiliary Data | null | Community Question Answering (CQA) platforms contain plenty of CQA texts (i.e., questions and answers corresponding to the question) where named entities appear ubiquitously. In this paper, we define a new task of CQA entity linking (CQAEL) as linking the textual entity mentions detected from CQA texts with their corresponding entities in a knowledge base. This task can facilitate many downstream applications including expert finding and knowledge base enrichment. Traditional entity linking methods mainly focus on linking entities in news documents, and are suboptimal over this new task of CQAEL since they cannot effectively leverage various informative auxiliary data involved in the CQA platform to aid entity linking, such as parallel answers and two types of meta-data (i.e., topic tags and users). To remedy this crucial issue, we propose a novel transformer-based framework to effectively harness the knowledge delivered by different kinds of auxiliary data to promote the linking performance. We validate the superiority of our framework through extensive experiments over a newly released CQAEL data set against state-of-the-art entity linking methods. | Yuhan Li, Wei Shen, Jianbo Gao, Yadong Wang | null | null | 2,022 | ijcai |
Reconciling Cognitive Modeling with Knowledge Forgetting: A Continuous Time-aware Neural Network Approach | null | As an emerging technology of computer-aided education, cognitive modeling aims at discovering the knowledge proficiency or learning ability of students, which can enable a wide range of intelligent educational applications. While considerable efforts have been made in this direction, a long-standing research challenge is how to naturally integrate the forgetting mechanism into the learning process of knowledge concepts. To this end, in this paper, we propose a novel Continuous Time based Neural Cognitive Modeling (CT-NCM) approach to integrate the dynamism and continuity of knowledge forgetting into students' learning process modeling in a realistic manner. To be specific, we first adapt the neural Hawkes process with a specially-designed learning event encoding method to model the relationship between knowledge learning and forgetting with continuous time. Then, we propose a learning function with extendable settings to jointly model the change of different knowledge states and their interactions with the exercises at each moment. In this way, CT-NCM can simultaneously predict the future knowledge state and exercise performance of students. Finally, we conduct extensive experiments on five real-world datasets with various benchmark methods. The experimental results clearly validate the effectiveness of CT-NCM and show its interpretability in terms of knowledge learning visualization. | Haiping Ma, Jingyuan Wang, Hengshu Zhu, Xin Xia, Haifeng Zhang, Xingyi Zhang, Lei Zhang | null | null | 2,022 | ijcai
Beyond Homophily: Structure-aware Path Aggregation Graph Neural Network | null | Graph neural networks (GNNs) have been intensively studied in various real-world tasks. However, the homophily assumption of GNNs' aggregation function limits their representation learning ability in heterophily graphs. In this paper, we shed light on the path-level patterns in graphs that can explicitly reflect rich semantic and structural information. We therefore propose a novel Structure-aware Path Aggregation Graph Neural Network (PathNet) aiming to generalize GNNs for both homophily and heterophily graphs. Specifically, we first introduce a maximal entropy path sampler, which helps us sample a number of paths containing structural context. Then, we introduce a structure-aware recurrent cell consisting of order-preserving and distance-aware components to learn the semantic information of neighborhoods. Finally, we model the preference of different paths to the target node after path encoding. Experimental results demonstrate that our model achieves superior performance in node classification on both heterophily and homophily graphs. | Yifei Sun, Haoran Deng, Yang Yang, Chunping Wang, Jiarong Xu, Renhong Huang, Linfeng Cao, Yang Wang, Lei Chen | null | null | 2,022 | ijcai
Positive-Unlabeled Learning with Adversarial Data Augmentation for Knowledge Graph Completion | null | Most real-world knowledge graphs (KG) are far from complete and comprehensive. This problem has motivated efforts in predicting the most plausible missing facts to complete a given KG, i.e., knowledge graph completion (KGC). However, existing KGC methods suffer from two main issues: 1) the false negative issue, i.e., the sampled negative training instances may include potential true facts; and 2) the data sparsity issue, i.e., true facts account for only a tiny part of all possible facts. To this end, we propose positive-unlabeled learning with adversarial data augmentation (PUDA) for KGC. In particular, PUDA tailors a positive-unlabeled risk estimator for the KGC task to deal with the false negative issue. Furthermore, to address the data sparsity issue, PUDA realizes a data augmentation strategy by unifying adversarial training and positive-unlabeled learning under the positive-unlabeled minimax game. Extensive experimental results on real-world benchmark datasets demonstrate the effectiveness and compatibility of our proposed method. | Zhenwei Tang, Shichao Pei, Zhao Zhang, Yongchun Zhu, Fuzhen Zhuang, Robert Hoehndorf, Xiangliang Zhang | null | null | 2,022 | ijcai
Long-term Spatio-Temporal Forecasting via Dynamic Multiple-Graph Attention | null | Many real-world ubiquitous applications, such as parking recommendations and air pollution monitoring, benefit significantly from accurate long-term spatio-temporal forecasting (LSTF). LSTF makes use of long-term dependency structure between the spatial and temporal domains, as well as the contextual information. Recent studies have revealed the potential of multi-graph neural networks (MGNNs) to improve prediction performance. However, existing MGNN methods do not work well when applied to LSTF due to several issues: the low level of generality, insufficient use of contextual information, and the imbalanced graph fusion approach. To address these issues, we construct new graph models to represent the contextual information of each node and exploit the long-term spatio-temporal data dependency structure. To aggregate the information across multiple graphs, we propose a new dynamic multi-graph fusion module to characterize the correlations of nodes within a graph and the nodes across graphs via the spatial attention and graph attention mechanisms. Furthermore, we introduce a trainable weight tensor to indicate the importance of each node in different graphs. Extensive experiments on two large-scale datasets demonstrate that our proposed approaches significantly improve the performance of existing graph neural network models in LSTF prediction tasks. | Wei Shao, Zhiling Jin, Shuo Wang, Yufan Kang, Xiao Xiao, Hamid Menouar, Zhaofeng Zhang, Junshan Zhang, Flora Salim | null | null | 2,022 | ijcai |
Augmenting Knowledge Graphs for Better Link Prediction | null | Embedding methods have demonstrated robust performance on the task of link prediction in knowledge graphs, by mostly encoding entity relationships. Recent methods propose to enhance the loss function with a literal-aware term. In this paper, we propose KGA: a knowledge graph augmentation method that incorporates literals in an embedding model without modifying its loss function. KGA discretizes quantity and year values into bins, and chains these bins both horizontally, modeling neighboring values, and vertically, modeling multiple levels of granularity. KGA is scalable and can be used as a pre-processing step for any existing knowledge graph embedding model. Experiments on legacy benchmarks and a new large benchmark, DWD, show that augmenting the knowledge graph with quantities and years is beneficial for predicting both entities and numbers, as KGA outperforms the vanilla models and other relevant baselines. Our ablation studies confirm that both quantities and years contribute to KGA's performance, and that its performance depends on the discretization and binning settings. We make the code, models, and the DWD benchmark publicly available to facilitate reproducibility and future research. | Jiang Wang, Filip Ilievski, Pedro Szekely, Ke-Thia Yao | null | null | 2,022 | ijcai |
HCFRec: Hash Collaborative Filtering via Normalized Flow with Structural Consensus for Efficient Recommendation | null | The ever-increasing data scale of user-item interactions makes it challenging for an effective and efficient recommender system. Recently, hash-based collaborative filtering (Hash-CF) approaches employ efficient Hamming distance of learned binary representations of users and items to accelerate recommendations. However, Hash-CF often faces two challenging problems, i.e., optimization on discrete representations and preserving semantic information in learned representations. To address the above two challenges, we propose HCFRec, a novel Hash-CF approach for effective and efficient recommendations. Specifically, HCFRec not only innovatively introduces normalized flow to learn the optimal hash code by efficiently fitting a proposed approximate mixture multivariate normal distribution, a continuous but approximately discrete distribution, but also deploys a cluster consistency preserving mechanism to preserve the semantic structure in representations for more accurate recommendations. Extensive experiments conducted on six real-world datasets demonstrate the superiority of our HCFRec compared to the state-of-art methods in terms of effectiveness and efficiency. | Fan Wang, Weiming Liu, Chaochao Chen, Mengying Zhu, Xiaolin Zheng | null | null | 2,022 | ijcai |
Anomaly Detection by Leveraging Incomplete Anomalous Knowledge with Anomaly-Aware Bidirectional GANs | null | The goal of anomaly detection is to identify anomalous samples from normal ones. In this paper, a small number of anomalies are assumed to be available at the training stage, but they are assumed to be collected only from several anomaly types, leaving the majority of anomaly types not represented in the collected anomaly dataset at all. To effectively leverage this kind of incomplete anomalous knowledge represented by the collected anomalies, we propose to learn a probability distribution that can not only model the normal samples, but also guarantee to assign low density values for the collected anomalies. To this end, an anomaly-aware generative adversarial network (GAN) is developed, which, in addition to modeling the normal samples as most GANs do, is able to explicitly avoid assigning probabilities for collected anomalous samples. Moreover, to facilitate the computation of anomaly detection criteria like reconstruction error, the proposed anomaly-aware GAN is designed to be bidirectional, attaching an encoder for the generator. Extensive experimental results demonstrate that our proposed method is able to effectively make use of the incomplete anomalous information, leading to significant performance gains compared to existing methods. | Bowen Tian, Qinliang Su, Jian Yin | null | null | 2,022 | ijcai
Understanding and Mitigating Data Contamination in Deep Anomaly Detection: A Kernel-based Approach | null | Deep anomaly detection has become popular for its capability of handling complex data. However, training a deep detector is fragile to data contamination due to overfitting. In this work, we study the performance of the anomaly detectors under data contamination and construct a data-efficient countermeasure against data contamination. We show that training a deep anomaly detector induces an implicit kernel machine. We then derive an information-theoretic bound of performance degradation with respect to the data contamination ratio. To mitigate the degradation, we propose a contradicting training approach. Apart from learning normality on the contaminated dataset, our approach discourages learning an additional small auxiliary dataset of labeled anomalies. Our approach is much more affordable than constructing a completely clean training dataset. Experiments on public datasets show that our approach significantly improves anomaly detection in the presence of contamination and outperforms some recently proposed detectors. | Shuang Wu, Jingyu Zhao, Guangjian Tian | null | null | 2,022 | ijcai |
Multi-Graph Fusion Networks for Urban Region Embedding | null | Learning the embeddings for urban regions from human mobility data can reveal the functionality of regions, and then enable correlated but distinct tasks such as crime prediction. Human mobility data contains rich and abundant information, which yields comprehensive region embeddings for cross-domain tasks. In this paper, we propose multi-graph fusion networks (MGFN) to enable cross-domain prediction tasks. First, we integrate the graphs with spatio-temporal similarity as mobility patterns through a mobility graph fusion module. Then, in the mobility pattern joint learning module, we design the multi-level cross-attention mechanism to learn the comprehensive embeddings from multiple mobility patterns based on intra-pattern and inter-pattern messages. Finally, we conduct extensive experiments on real-world urban datasets. Experimental results demonstrate that the proposed MGFN outperforms the state-of-the-art methods by up to 12.35%. https://github.com/wushangbin/MGFN | Shangbin Wu, Xu Yan, Xiaoliang Fan, Shirui Pan, Shichao Zhu, Chuanpan Zheng, Ming Cheng, Cheng Wang | null | null | 2,022 | ijcai
Trading Hard Negatives and True Negatives: A Debiased Contrastive Collaborative Filtering Approach | null | Collaborative filtering (CF), as a standard method for recommendation with implicit feedback, tackles a semi-supervised learning problem where most interaction data are unobserved. Such a nature makes existing approaches highly rely on mining negatives for providing correct training signals. However, mining proper negatives is not a free lunch, encountering a tricky trade-off between mining informative hard negatives and avoiding false ones. We devise a new approach named Hardness-Aware Debiased Contrastive Collaborative Filtering (HDCCF) to resolve the dilemma. It could sufficiently explore hard negatives from two-fold aspects: 1) adaptively sharpening the gradients of harder instances through a set-wise objective, and 2) implicitly leveraging item/user frequency information with a new sampling strategy. To circumvent false negatives, we develop a principled approach to improve the reliability of negative instances and prove that the objective is an unbiased estimation of sampling from the true negative distribution. Extensive experiments demonstrate the superiority of the proposed model over existing CF models and hard negative mining methods. | Chenxiao Yang, Qitian Wu, Jipeng Jin, Xiaofeng Gao, Junwei Pan, Guihai Chen | null | null | 2,022 | ijcai
MEIM: Multi-partition Embedding Interaction Beyond Block Term Format for Efficient and Expressive Link Prediction | null | Knowledge graph embedding aims to predict the missing relations between entities in knowledge graphs. Tensor-decomposition-based models, such as ComplEx, provide a good trade-off between efficiency and expressiveness, that is crucial because of the large size of real world knowledge graphs. The recent multi-partition embedding interaction (MEI) model subsumes these models by using the block term tensor format and provides a systematic solution for the trade-off. However, MEI has several drawbacks, some of which carried from its subsumed tensor-decomposition-based models. In this paper, we address these drawbacks and introduce the Multi-partition Embedding Interaction iMproved beyond block term format (MEIM) model, with independent core tensor for ensemble effects and soft orthogonality for max-rank mapping, in addition to multi-partition embedding. MEIM improves expressiveness while still being highly efficient, helping it to outperform strong baselines and achieve state-of-the-art results on difficult link prediction benchmarks using fairly small embedding sizes. The source code is released at https://github.com/tranhungnghiep/MEIM. | Hung-Nghiep Tran, Atsuhiro Takasu | null | null | 2,022 | ijcai |
FAITH: Few-Shot Graph Classification with Hierarchical Task Graphs | null | Few-shot graph classification aims at predicting classes for graphs, given limited labeled graphs for each class. To tackle the bottleneck of label scarcity, recent works propose to incorporate few-shot learning frameworks for fast adaptations to graph classes with limited labeled graphs. Specifically, these works propose to accumulate meta-knowledge across diverse meta-training tasks, and then generalize such meta-knowledge to the target task with a disjoint label set. However, existing methods generally ignore task correlations among meta-training tasks while treating them independently. Nevertheless, such task correlations can advance the model generalization to the target task for better classification performance. On the other hand, it remains non-trivial to utilize task correlations due to the complex components in a large number of meta-training tasks. To deal with this, we propose a novel few-shot learning framework FAITH that captures task correlations via constructing a hierarchical task graph at different granularities. Then we further design a loss-based sampling strategy to select tasks with more correlated classes. Moreover, a task-specific classifier is proposed to utilize the learned task correlations for few-shot classification. Extensive experiments on four prevalent few-shot graph classification datasets demonstrate the superiority of FAITH over other state-of-the-art baselines. | Song Wang, Yushun Dong, Xiao Huang, Chen Chen, Jundong Li | null | null | 2,022 | ijcai |
Ensemble Multi-Relational Graph Neural Networks | null | It is well established that graph neural networks (GNNs) can be interpreted and designed from the perspective of an optimization objective. With this clear optimization objective, the deduced GNN architecture has a sound theoretical foundation, which is able to flexibly remedy the weakness of GNNs. However, this optimization objective is only proved for GNNs with single-relational graphs. Can we infer a new type of GNNs for multi-relational graphs by extending this optimization objective, so as to simultaneously solve the issues in previous multi-relational GNNs, e.g., over-parameterization? In this paper, we propose a novel ensemble multi-relational GNN by designing an ensemble multi-relational (EMR) optimization objective. This EMR optimization objective is able to derive an iterative updating rule, which can be formalized as an ensemble message passing (EnMP) layer with multi-relations. We further analyze the nice properties of the EnMP layer, e.g., the relationship with multi-relational personalized PageRank. Finally, a new multi-relational GNN which well alleviates the over-smoothing and over-parameterization issues is proposed. Extensive experiments conducted on four benchmark datasets well demonstrate the effectiveness of the proposed model. | Yuling Wang, Hao Xu, Yanhua Yu, Mengdi Zhang, Zhenhao Li, Yuji Yang, Wei Wu | null | null | 2,022 | ijcai
Subgraph Neighboring Relations Infomax for Inductive Link Prediction on Knowledge Graphs | null | Inductive link prediction for knowledge graph aims at predicting missing links between unseen entities, those not shown in the training stage. Most previous works learn entity-specific embeddings of entities, which cannot handle unseen entities. Several recent methods utilize the enclosing subgraph to obtain inductive ability. However, all these works only consider the enclosing part of the subgraph without complete neighboring relations, which leads to the issue that partial neighboring relations are neglected, and sparse subgraphs are hard to handle. To address that, we propose Subgraph Neighboring Relations Infomax, SNRI, which sufficiently exploits complete neighboring relations from two aspects: neighboring relational feature for node feature and neighboring relational path for sparse subgraph. To further model neighboring relations in a global way, we innovatively apply mutual information (MI) maximization for knowledge graph. Experiments show that SNRI outperforms existing state-of-the-art methods by a large margin on the inductive link prediction task, and verify the effectiveness of exploring complete neighboring relations in a global way to characterize node features and reason on sparse subgraphs. | Xiaohan Xu, Peng Zhang, Yongquan He, Chengpeng Chao, Chaoyang Yan | null | null | 2,022 | ijcai
GOCPT: Generalized Online Canonical Polyadic Tensor Factorization and Completion | null | Low-rank tensor factorization or completion is well-studied and applied in various online settings, such as online tensor factorization (where the temporal mode grows) and online tensor completion (where incomplete slices arrive gradually). However, in many real-world settings, tensors may have more complex evolving patterns: (i) one or more modes can grow; (ii) missing entries may be filled; (iii) existing tensor elements can change. Existing methods cannot support such complex scenarios. To fill the gap, this paper proposes a Generalized Online Canonical Polyadic (CP) Tensor factorization and completion framework (named GOCPT) for this general setting, where we maintain the CP structure of such dynamic tensors during the evolution. We show that existing online tensor factorization and completion setups can be unified under the GOCPT framework. Furthermore, we propose a variant, named GOCPTE, to deal with cases where historical tensor elements are unavailable (e.g., privacy protection), which achieves similar fitness as GOCPT but with much less computational cost. Experimental results demonstrate that our GOCPT can improve fitness by up to 2.8% on the JHU Covid data and 9.2% on a proprietary patient claim dataset over baselines. Our variant GOCPTE shows up to 1.2% and 5.5% fitness improvement on two datasets with about 20% speedup compared to the best model. | Chaoqi Yang, Cheng Qian, Jimeng Sun | null | null | 2,022 | ijcai |
CERT: Continual Pre-training on Sketches for Library-oriented Code Generation | null | Code generation is a longstanding challenge, aiming to generate a code snippet based on a natural language description. Usually, expensive text-code paired data is essential for training a code generation model. Recently, thanks to the success of pre-training techniques, large language models are trained on large unlabelled code corpora and perform well in generating code. In this paper, we investigate how to leverage an unlabelled code corpus to train a model for library-oriented code generation. It is common practice for programmers to reuse third-party libraries, in which case text-code paired data are harder to obtain due to the huge number of libraries. We observe that library-oriented code snippets are more likely to share similar code sketches. Hence, we present CERT with two steps: a sketcher generates the sketch, then a generator fills the details in the sketch. Both the sketcher and generator are continually pre-trained upon a base model using unlabelled data. Also, we carefully craft two benchmarks, named PandasEval and NumpyEval, to evaluate library-oriented code generation. Experimental results have shown the impressive performance of CERT. For example, it surpasses the base model by an absolute 15.67% improvement in terms of pass@1 on PandasEval. Our work is available at https://github.com/microsoft/PyCodeGPT. | Daoguang Zan, Bei Chen, Dejian Yang, Zeqi Lin, Minsu Kim, Bei Guan, Yongji Wang, Weizhu Chen, Jian-Guang Lou | null | null | 2,022 | ijcai
FedCG: Leverage Conditional GAN for Protecting Privacy and Maintaining Competitive Performance in Federated Learning | null | Federated learning (FL) aims to protect data privacy by enabling clients to build machine learning models collaboratively without sharing their private data. Recent works demonstrate that information exchanged during FL is subject to gradient-based privacy attacks and, consequently, a variety of privacy-preserving methods have been adopted to thwart such attacks. However, these defensive methods either introduce orders of magnitudes more computational and communication overheads (e.g., with homomorphic encryption) or incur substantial model performance losses in terms of prediction accuracy (e.g., with differential privacy). In this work, we propose FEDCG, a novel federated learning method that leverages conditional generative adversarial networks to achieve high-level privacy protection while still maintaining competitive model performance. FEDCG decomposes each client's local network into a private extractor and a public classifier and keeps the extractor local to protect privacy. Instead of exposing extractors, FEDCG shares clients' generators with the server for aggregating clients' shared knowledge aiming to enhance the performance of each client's local networks. Extensive experiments demonstrate that FEDCG can achieve competitive model performance compared with FL baselines, and privacy analysis shows that FEDCG has a high-level privacy-preserving capability. | Yuezhou Wu, Yan Kang, Jiahuan Luo, Yuanqin He, Lixin Fan, Rong Pan, Qiang Yang | null | null | 2,022 | ijcai |
Reconstruction Enhanced Multi-View Contrastive Learning for Anomaly Detection on Attributed Networks | null | Detecting abnormal nodes from attributed networks is of great importance in many real applications, such as financial fraud detection and cyber security. This task is challenging due to both the complex interactions between the anomalous nodes with other counterparts and their inconsistency in terms of attributes. This paper proposes a self-supervised learning framework that jointly optimizes a multi-view contrastive learning-based module and an attribute reconstruction-based module to more accurately detect anomalies on attributed networks. Specifically, two contrastive learning views are firstly established, which allow the model to better encode rich local and global information related to the abnormality. Motivated by the attribute consistency principle between neighboring nodes, a masked autoencoder-based reconstruction module is also introduced to identify the nodes which have large reconstruction errors, then are regarded as anomalies. Finally, the two complementary modules are integrated for more accurately detecting the anomalous nodes. Extensive experiments conducted on five benchmark datasets show our model outperforms current state-of-the-art models. | Jiaqiang Zhang, Senzhang Wang, Songcan Chen | null | null | 2,022 | ijcai |
CTL-MTNet: A Novel CapsNet and Transfer Learning-Based Mixed Task Net for Single-Corpus and Cross-Corpus Speech Emotion Recognition | null | Speech Emotion Recognition (SER) has become a growing focus of research in human-computer interaction. An essential challenge in SER is to extract common attributes from different speakers or languages, especially when a specific source corpus has to be trained to recognize the unknown data coming from another speech corpus. To address this challenge, a Capsule Network (CapsNet) and Transfer Learning based Mixed Task Net (CTL-MTNet) are proposed to deal with both the single-corpus and cross-corpus SER tasks simultaneously in this paper. For the single-corpus task, the combination of Convolution-Pooling and Attention CapsNet module (CPAC) is designed by embedding the self-attention mechanism to the CapsNet, guiding the module to focus on the important features that can be fed into different capsules. The extracted high-level features by CPAC provide sufficient discriminative ability. Furthermore, to handle the cross-corpus task, CTL-MTNet employs a Corpus Adaptation Adversarial Module (CAAM) by combining CPAC with Margin Disparity Discrepancy (MDD), which can learn the domain-invariant emotion representations through extracting the strong emotion commonness. Experiments including ablation studies and visualizations on both single- and cross-corpus tasks using four well-known SER datasets in different languages are conducted for performance evaluation and comparison. The results indicate that in both tasks the CTL-MTNet showed better performance in all cases compared to a number of state-of-the-art methods. The source code and the supplementary materials are available at: https://github.com/MLDMXM2017/CTLMTNet. | Xin-Cheng Wen, JiaXin Ye, Yan Luo, Yong Xu, Xuan-Ze Wang, Chang-Li Wu, Kun-Hong Liu | null | null | 2,022 | ijcai |
Language Models as Knowledge Embeddings | null | Knowledge embeddings (KE) represent a knowledge graph (KG) by embedding entities and relations into continuous vector spaces. Existing methods are mainly structure-based or description-based. Structure-based methods learn representations that preserve the inherent structure of KGs. They cannot well represent abundant long-tail entities in real-world KGs with limited structural information. Description-based methods leverage textual information and language models. Prior approaches in this direction barely outperform structure-based ones, and suffer from problems like expensive negative sampling and restrictive description demand. In this paper, we propose LMKE, which adopts Language Models to derive Knowledge Embeddings, aiming at both enriching representations of long-tail entities and solving problems of prior description-based methods. We formulate description-based KE learning with a contrastive learning framework to improve efficiency in training and evaluation. Experimental results show that LMKE achieves state-of-the-art performance on KE benchmarks of link prediction and triple classification, especially for long-tail entities. | Xintao Wang, Qianyu He, Jiaqing Liang, Yanghua Xiao | null | null | 2,022 | ijcai |
GRELEN: Multivariate Time Series Anomaly Detection from the Perspective of Graph Relational Learning | null | System monitoring and anomaly detection is a crucial task in daily operation. With the rapid development of cyber-physical systems and IT systems, multiple sensors get involved to represent the system state from different perspectives, which inspires us to detect anomalies by considering the feature dependence relationships among sensors instead of focusing on each individual sensor's behavior. In this paper, we propose a novel Graph Relational Learning Network (GReLeN) to detect multivariate time series anomalies from the perspective of between-sensor dependence relationship learning. A Variational AutoEncoder (VAE) serves as the overall framework for feature extraction and system representation. A Graph Neural Network (GNN) and a stochastic graph relational learning strategy are also employed to capture the between-sensor dependence. A composite anomaly metric is then established explicitly from the learned dependence structure. The experiments on four real-world datasets show our superiority in detection accuracy, anomaly diagnosis, and model interpretation. | Weiqi Zhang, Chen Zhang, Fugee Tsung | null | null | 2,022 | ijcai
Regularized Graph Structure Learning with Semantic Knowledge for Multi-variates Time-Series Forecasting | null | Multivariate time-series forecasting is a critical task for many applications, and graph time-series networks are widely studied due to their capability to capture spatial-temporal correlations simultaneously. However, most existing works focus more on learning with the explicit prior graph structure, while ignoring potential information from the implicit graph structure, yielding incomplete structure modeling. Some recent works attempt to learn the intrinsic or implicit graph structure directly, while lacking a way to combine the explicit prior structure with the implicit structure. In this paper, we propose the Regularized Graph Structure Learning (RGSL) model to incorporate both the explicit prior structure and the implicit structure, and to learn the forecasting deep networks along with the graph structure. RGSL consists of two innovative modules. First, we derive an implicit dense similarity matrix through node embedding, and learn the sparse graph structure using the Regularized Graph Generation (RGG) based on the Gumbel Softmax trick. Second, we propose a Laplacian Matrix Mixed-up Module (LM3) to fuse the explicit graph and implicit graph together. We conduct experiments on three real-world datasets. Results show that the proposed RGSL model outperforms existing graph forecasting algorithms by a notable margin, while learning meaningful graph structure simultaneously. Our code and models are made publicly available at https://github.com/alipay/RGSL.git. | Hongyuan Yu, Ting Li, Weichen Yu, Jianguo Li, Yan Huang, Liang Wang, Alex Liu | null | null | 2,022 | ijcai
T-SMOTE: Temporal-oriented Synthetic Minority Oversampling Technique for Imbalanced Time Series Classification | null | Time series classification is a popular and important topic in machine learning, and it suffers from the class imbalance problem in many real-world applications. In this paper, to address the class imbalance problem, we propose a novel and practical oversampling method named T-SMOTE, which can make full use of the temporal information of time-series data. In particular, for each sample of minority class, T-SMOTE generates multiple samples that are close to class border. Then, based on those samples near class border, T-SMOTE synthesizes more samples. Finally, a weighted sampling method is called on both generated samples near class border and synthetic samples. Extensive experiments on a diverse set of both univariate and multivariate time-series datasets demonstrate that T-SMOTE consistently outperforms the current state-of-the-art methods on imbalanced time series classification. More encouragingly, our empirical evaluations show that T-SMOTE performs better in the scenario of early prediction, an important application scenario in industry, which indicates that T-SMOTE could bring benefits in practice. | Pu Zhao, Chuan Luo, Bo Qiao, Lu Wang, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | null | null | 2,022 | ijcai |
Enhancing Sequential Recommendation with Graph Contrastive Learning | null | The sequential recommendation systems capture users' dynamic behavior patterns to predict their next interaction behaviors. Most existing sequential recommendation methods only exploit the local context information of an individual interaction sequence and learn model parameters solely based on the item prediction loss. Thus, they usually fail to learn appropriate sequence representations. This paper proposes a novel recommendation framework, namely Graph Contrastive Learning for Sequential Recommendation (GCL4SR). Specifically, GCL4SR employs a Weighted Item Transition Graph (WITG), built based on interaction sequences of all users, to provide global context information for each interaction and weaken the noise information in the sequence data. Moreover, GCL4SR uses subgraphs of WITG to augment the representation of each interaction sequence. Two auxiliary learning objectives have also been proposed to maximize the consistency between augmented representations induced by the same interaction sequence on WITG, and minimize the difference between the representations augmented by the global context on WITG and the local representation of the original sequence. Extensive experiments on real-world datasets demonstrate that GCL4SR consistently outperforms state-of-the-art sequential recommendation methods. | Yixin Zhang, Yong Liu, Yonghui Xu, Hao Xiong, Chenyi Lei, Wei He, Lizhen Cui, Chunyan Miao | null | null | 2,022 | ijcai |
Spiking Graph Convolutional Networks | null | Graph Convolutional Networks (GCNs) achieve an impressive performance due to the remarkable representation ability in learning the graph information. However, GCNs, when implemented on a deep network, require expensive computation power, making them difficult to be deployed on battery-powered devices. In contrast, Spiking Neural Networks (SNNs), which perform a bio-fidelity inference process, offer an energy-efficient neural architecture. In this work, we propose SpikingGCN, an end-to-end framework that aims to integrate the embedding of GCNs with the bio-fidelity characteristics of SNNs. The original graph data are encoded into spike trains based on the incorporation of graph convolution. We further model biological information processing by utilizing a fully connected layer combined with neuron nodes. In a wide range of scenarios (e.g., citation networks, image graph classification, and recommender systems), our experimental results show that the proposed method could gain competitive performance against state-of-the-art approaches. Furthermore, we show that SpikingGCN on a neuromorphic chip can bring a clear advantage of energy efficiency into graph data analysis, which demonstrates its great potential to construct environment-friendly machine learning models. | Zulun Zhu, Jiaying Peng, Jintang Li, Liang Chen, Qi Yu, Siqiang Luo | null | null | 2,022 | ijcai
On the Utility of Prediction Sets in Human-AI Teams | null | Research on human-AI teams usually provides experts with a single label, which ignores the uncertainty in a model's recommendation. Conformal prediction (CP) is a well established line of research that focuses on building a theoretically grounded, calibrated prediction set, which may contain multiple labels. We explore how such prediction sets impact expert decision-making in human-AI teams. Our evaluation on human subjects finds that set valued predictions positively impact experts. However, we notice that the predictive sets provided by CP can be very large, which leads to unhelpful AI assistants. To mitigate this, we introduce D-CP, a method to perform CP on some examples and defer to experts. We prove that D-CP can reduce the prediction set size of non-deferred examples. We show how D-CP performs in quantitative and in human subject experiments (n=120). Our results suggest that CP prediction sets improve human-AI team performance over showing the top-1 prediction alone, and that experts find D-CP prediction sets are more useful than CP prediction sets. | Varun Babbar, Umang Bhatt, Adrian Weller | null | null | 2,022 | ijcai |
Dynamic Graph Learning Based on Hierarchical Memory for Origin-Destination Demand Prediction | null | Recent years have witnessed a rapid growth of applying deep spatiotemporal methods in traffic forecasting. However, the prediction of origin-destination (OD) demands is still a challenging problem since the number of OD pairs is usually quadratic to the number of stations. In this case, most of the existing spatiotemporal methods fail to handle spatial relations on such a large scale. To address this problem, this paper provides a dynamic graph representation learning framework for OD demands prediction. In particular, a hierarchical memory updater is first proposed to maintain a time-aware representation for each node, and the representations are updated according to the most recently observed OD trips in continuous-time and multiple discrete-time ways. Second, a spatiotemporal propagation mechanism is provided to aggregate representations of neighbor nodes along a random spatiotemporal route which treats origin and destination as two different semantic entities. Last, an objective function is designed to derive the future OD demands according to the most recent node representations, and also to tackle the data sparsity problem in OD prediction. Extensive experiments have been conducted on two real-world datasets, and the experimental results demonstrate the superiority of the proposed method. The code and data are available at https://github.com/Rising0321/HMOD. | Ruixing Zhang, Liangzhe Han, Boyi Liu, Jiayuan Zeng, Leilei Sun | null | null | 2,022 | ijcai |
MFAN: Multi-modal Feature-enhanced Attention Networks for Rumor Detection | null | Rumor spreaders are increasingly taking advantage of multimedia content to attract and mislead news consumers on social media. Although recent multimedia rumor detection models have exploited both textual and visual features for classification, they do not integrate the social structure features simultaneously, which have shown promising performance for rumor identification. It is challenging to combine the heterogeneous multi-modal data in consideration of their complex relationships. In this work, we propose a novel Multi-modal Feature-enhanced Attention Networks (MFAN) for rumor detection, which makes the first attempt to integrate textual, visual, and social graph features in one unified framework. Specifically, it considers both the complement and alignment relationships between different modalities to achieve better fusion. Moreover, it takes into account the incomplete links in the social network data due to data collection constraints and proposes to infer hidden links to learn better social graph features. The experimental results show that MFAN can detect rumors effectively and outperform state-of-the-art methods. | Jiaqi Zheng, Xi Zhang, Sanchuan Guo, Quan Wang, Wenyu Zang, Yongdong Zhang | null | null | 2,022 | ijcai |
Table2Graph: Transforming Tabular Data to Unified Weighted Graph | null | Learning useful interactions between input features is crucial for tabular data modeling. Recent efforts start to explicitly model the feature interactions with graph, where each feature is treated as an individual node. However, the existing graph construction methods either heuristically formulate a fixed feature-interaction graph based on specific domain knowledge, or simply apply attention function to compute the pairwise feature similarities for each sample. While the fixed graph may be sub-optimal to downstream tasks, the sample-wise graph construction is time-consuming during model training and inference. To tackle these issues, we propose a framework named Table2Graph to transform the feature interaction modeling to learning a unified graph. Represented as a probability adjacency matrix, the unified graph learns to model the key feature interactions shared by the diverse samples in the tabular data. To well optimize the unified graph, we employ the reinforcement learning policy to capture the key feature interactions stably. A sparsity constraint is also proposed to regularize the learned graph from being overly-sparse/smooth. The experimental results in a variety of real-world applications demonstrate the effectiveness and efficiency of our Table2Graph, in terms of the prediction accuracy and feature interaction detection. | Kaixiong Zhou, Zirui Liu, Rui Chen, Li Li, Soo-Hyun Choi, Xia Hu | null | null | 2,022 | ijcai |
Proximity Enhanced Graph Neural Networks with Channel Contrast | null | We consider graph representation learning in an unsupervised manner. Graph neural networks use neighborhood aggregation as a core component that results in feature smoothing among nodes in proximity. While successful in various prediction tasks, such a paradigm falls short of capturing nodes' similarities over a long distance, which proves to be important for high-quality learning. To tackle this problem, we strengthen the graph with three types of additional graph views, in which each node is directly linked to a set of nodes with the highest similarity in terms of node features, neighborhood features or local structures. Not restricted by connectivity in the original graph, the generated views provide new and complementary perspectives from which to look at the relationship between nodes. Inspired by the recent success of contrastive learning approaches, we propose a self-supervised method that aims to learn node representations by maximizing the agreement between representations across generated views and the original graph, without the requirement of any label information. We also propose a channel-level contrast approach that greatly reduces computation cost. Extensive experiments on six assortative graphs and three disassortative graphs demonstrate the effectiveness of our approach. | Wei Zhuo, Guang Tan | null | null | 2,022 | ijcai |
Data-Free Adversarial Knowledge Distillation for Graph Neural Networks | null | Graph neural networks (GNNs) have been widely used in modeling graph structured data, owing to its impressive performance in a wide range of practical applications. Recently, knowledge distillation (KD) for GNNs has enabled remarkable progress in graph model compression and knowledge transfer. However, most of the existing KD methods require a large volume of real data, which are not readily available in practice, and may preclude their applicability in scenarios where the teacher model is trained on rare or hard to acquire datasets. To address this problem, we propose the first end-to-end framework for data-free adversarial knowledge distillation on graph structured data (DFAD-GNN). To be specific, our DFAD-GNN employs a generative adversarial network, which mainly consists of three components: a pre-trained teacher model and a student model are regarded as two discriminators, and a generator is utilized for deriving training graphs to distill knowledge from the teacher model into the student model. Extensive experiments on various benchmark models and six representative datasets demonstrate that our DFAD-GNN significantly surpasses state-of-the-art data-free baselines in the graph classification task. | Yuanxin Zhuang, Lingjuan Lyu, Chuan Shi, Carl Yang, Lichao Sun | null | null | 2,022 | ijcai |
Bridging Differential Privacy and Byzantine-Robustness via Model Aggregation | null | This paper aims at jointly addressing two seemingly conflicting issues in federated learning: differential privacy (DP) and Byzantine-robustness, which are particularly challenging when the distributed data are non-i.i.d. (independent and identically distributed). The standard DP mechanisms add noise to the transmitted messages, and entangle with robust stochastic gradient aggregation to defend against Byzantine attacks. In this paper, we decouple the two issues via robust stochastic model aggregation, in the sense that our proposed DP mechanisms and the defense against Byzantine attacks have separate influences on the learning performance. Leveraging robust stochastic model aggregation, at each iteration, each worker calculates the difference between the local model and the global one, followed by sending the element-wise signs to the master node, which enables robustness to Byzantine attacks. Further, we design two DP mechanisms to perturb the uploaded signs for the purpose of privacy preservation, and prove that they are (epsilon,0)-DP by exploiting the properties of noise distributions. With the tools of Moreau envelope and proximal point projection, we establish the convergence of the proposed algorithm when the cost function is nonconvex. We analyze the trade-off between privacy preservation and learning performance, and show that the influence of our proposed DP mechanisms is decoupled from that of robust stochastic model aggregation. Numerical experiments demonstrate the effectiveness of the proposed algorithm. | Heng Zhu, Qing Ling | null | null | 2,022 | ijcai
Multi-Level Firing with Spiking DS-ResNet: Enabling Better and Deeper Directly-Trained Spiking Neural Networks | null | Spiking neural networks (SNNs) are bio-inspired neural networks with asynchronous discrete and sparse characteristics, which have increasingly manifested their superiority in low energy consumption. Recent research is devoted to utilizing spatio-temporal information to directly train SNNs by backpropagation. However, the binary and non-differentiable properties of spike activities force directly trained SNNs to suffer from serious gradient vanishing and network degradation, which greatly limits the performance of directly trained SNNs and prevents them from going deeper. In this paper, we propose a multi-level firing (MLF) method based on the existing spatio-temporal back propagation (STBP) method, and spiking dormant-suppressed residual network (spiking DS-ResNet). MLF enables more efficient gradient propagation and the incremental expression ability of the neurons. Spiking DS-ResNet can efficiently perform identity mapping of discrete spikes, as well as provide a more suitable connection for gradient propagation in deep SNNs. With the proposed method, our model achieves superior performances on a non-neuromorphic dataset and two neuromorphic datasets with much fewer trainable parameters and demonstrates the great ability to combat the gradient vanishing and degradation problem in deep SNNs. | Lang Feng, Qianhui Liu, Huajin Tang, De Ma, Gang Pan | null | null | 2,022 | ijcai |
Forming Effective Human-AI Teams: Building Machine Learning Models that Complement the Capabilities of Multiple Experts | null | Machine learning (ML) models are increasingly being used in application domains that often involve working together with human experts. In this context, it can be advantageous to defer certain instances to a single human expert when they are difficult to predict for the ML model. While previous work has focused on scenarios with one distinct human expert, in many real-world situations several human experts with varying capabilities may be available. In this work, we propose an approach that trains a classification model to complement the capabilities of multiple human experts. By jointly training the classifier together with an allocation system, the classifier learns to accurately predict those instances that are difficult for the human experts, while the allocation system learns to pass each instance to the most suitable team member—either the classifier or one of the human experts. We evaluate our proposed approach in multiple experiments on public datasets with “synthetic” experts and a real-world medical dataset annotated by multiple radiologists. Our approach outperforms prior work and is more accurate than the best human expert or a classifier. Furthermore, it is flexibly adaptable to teams of varying sizes and different levels of expert diversity. | Patrick Hemmer, Sebastian Schellhammer, Michael Vössing, Johannes Jakubik, Gerhard Satzger | null | null | 2,022 | ijcai |
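As a minimal sketch of how a joint classifier-plus-allocator objective of this kind might be written, consider the following; it assumes a softmax allocator over the classifier and K human experts, a cross-entropy loss for the classifier, and a 0/1 loss for the experts. These modeling choices and all names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def joint_team_loss(clf_logits, alloc_logits, expert_preds, y):
    """Expected team loss for one batch.

    clf_logits:   (B, C) classifier outputs
    alloc_logits: (B, K+1) allocator scores; index 0 = classifier,
                  indices 1..K = human experts
    expert_preds: (B, K) hard labels predicted by each expert
    y:            (B,) ground-truth labels
    """
    alloc = F.softmax(alloc_logits, dim=1)                        # routing probabilities
    clf_loss = F.cross_entropy(clf_logits, y, reduction="none")   # (B,)
    expert_loss = 1.0 - (expert_preds == y.unsqueeze(1)).float()  # (B, K) 0/1 loss
    member_loss = torch.cat([clf_loss.unsqueeze(1), expert_loss], dim=1)
    # Expected loss of the team member chosen by the allocator.
    return (alloc * member_loss).sum(dim=1).mean()
```

Minimizing such an expected loss jointly in the classifier and the allocator is what lets the classifier specialize on instances the experts handle poorly, while the allocator learns to route each instance to the most reliable team member.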
Multi-Tier Platform for Cognizing Massive Electroencephalogram | null | An end-to-end platform assembling multiple tiers is built for precisely cognizing brain activities. Being fed massive electroencephalogram (EEG) data, the time-frequency spectrograms are conventionally projected into the episode-wise feature matrices (seen as tier-1). A spiking neural network (SNN) based tier is designed to distill the principal information, in terms of spike-streams, from the rare features, which maintains the temporal implication in the nature of EEGs. The proposed tier-3 transposes the time- and space-domains of the spike patterns from the SNN and feeds the transposed pattern-matrices into an artificial neural network (ANN, Transformer specifically) known as tier-4, where a special spanning topology is proposed to match the two-dimensional input form. In this manner, cognition such as classification is conducted with high accuracy. For proof-of-concept, the sleep stage scoring problem is demonstrated by introducing multiple EEG datasets, with the largest comprising 42,560 hours recorded from 5,793 subjects. From the experiment results, our platform achieves an overall cognition accuracy of 87% by leveraging sole EEG, which is 2% superior to the state-of-the-art. Moreover, our developed multi-tier methodology offers visible and graphical interpretations of the temporal characteristics of EEG by identifying the critical episodes, which is demanded in neurodynamics but hardly appears in conventional cognition scenarios. | Zheng Chen, Lingwei Zhu, Ziwei Yang, Renyuan Zhang | null | null | 2,022 | ijcai
Efficient and Accurate Conversion of Spiking Neural Network with Burst Spikes | null | Spiking neural networks (SNNs), as brain-inspired energy-efficient neural networks, have attracted the interest of researchers. However, how to train spiking neural networks remains an open problem. One effective way is to map the weights of a trained ANN to an SNN to achieve high reasoning ability. Yet the converted spiking neural network often suffers from performance degradation and a considerable time delay. To speed up the inference process and obtain higher accuracy, we theoretically analyze the errors in the conversion process from three perspectives: the differences between IF and ReLU, the time dimension, and the pooling operation. We propose a neuron model for releasing burst spikes, a cheap but highly efficient method to handle the residual information. In addition, Lateral Inhibition Pooling (LIPooling) is proposed to solve the inaccuracy problem caused by MaxPooling in the conversion process. Experimental results on CIFAR and ImageNet demonstrate that our algorithm is efficient and accurate. For example, our method can ensure nearly lossless conversion of SNNs and use only about 1/10 (less than 100) of the simulation time under 0.693x the energy consumption of the typical method. Our code is available at https://github.com/Brain-Inspired-Cognitive-Engine/Conversion_Burst. | Yang Li, Yi Zeng | null | null | 2,022 | ijcai
Semi-Supervised Imitation Learning of Team Policies from Suboptimal Demonstrations | null | We present Bayesian Team Imitation Learner (BTIL), an imitation learning algorithm to model the behavior of teams performing sequential tasks in Markovian domains. In contrast to existing multi-agent imitation learning techniques, BTIL explicitly models and infers the time-varying mental states of team members, thereby enabling learning of decentralized team policies from demonstrations of suboptimal teamwork. Further, to allow for sample- and label-efficient policy learning from small datasets, BTIL employs a Bayesian perspective and is capable of learning from semi-supervised demonstrations. We demonstrate and benchmark the performance of BTIL on synthetic multi-agent tasks as well as a novel dataset of human-agent teamwork. Our experiments show that BTIL can successfully learn team policies from demonstrations despite the influence of team members' (time-varying and potentially misaligned) mental states on their behavior. | Sangwon Seo, Vaibhav V. Unhelkar | null | null | 2,022 | ijcai |
Limits and Possibilities of Forgetting in Abstract Argumentation | null | The topic of forgetting has been extensively studied in the field of knowledge representation and reasoning for many major formalisms. Quite recently it has been introduced to abstract argumentation. However, many well-known and essential aspects of forgetting, such as strong persistence or strong invariance, have been left unconsidered. We show that forgetting in abstract argumentation cannot be reduced to forgetting in logic programming. In addition, we deal with the more general problem of forgetting whole sets of arguments and show that iterative application of existing operators for single arguments does not necessarily yield a desirable result, as it may not produce an informationally economical argumentation framework. As a consequence, we provide a systematic and exhaustive study of forgetting desiderata and associated operations adapted to the intrinsics of abstract argumentation. We show the limits and shed light on the possibilities. | Ringo Baumann, Matti Berthold | null | null | 2,022 | ijcai
Signed Neuron with Memory: Towards Simple, Accurate and High-Efficient ANN-SNN Conversion | null | Spiking Neural Networks (SNNs) are receiving increasing attention due to their biological plausibility and the potential for ultra-low-power event-driven neuromorphic hardware implementation. Due to the complex temporal dynamics and discontinuity of spikes, directly training SNNs usually demands high computing resources and a long training time. As an alternative, an SNN can be converted from a pre-trained artificial neural network (ANN) to bypass the difficulty of SNN learning. However, the existing ANN-to-SNN methods neglect the inconsistency of information transmission between synchronous ANNs and asynchronous SNNs. In this work, we first analyze how the asynchronous spikes in SNNs may cause conversion errors between ANN and SNN. To address this problem, we propose a signed neuron with a memory function, which incurs almost no accuracy loss during the conversion process and maintains the properties of asynchronous transmission in the converted SNNs. We further propose a new normalization method, named neuron-wise normalization, to significantly shorten the inference latency of the converted SNNs. We conduct experiments on challenging datasets including CIFAR10 (95.44% top-1), CIFAR100 (78.3% top-1) and ImageNet (73.16% top-1). Experimental results demonstrate that the proposed method outperforms the state-of-the-art works in terms of accuracy and inference time. The code is available at https://github.com/ppppps/ANN2SNNConversion_SNM_NeuronNorm. | Yuchen Wang, Malu Zhang, Yi Chen, Hong Qu | null | null | 2,022 | ijcai
Annotated Sequent Calculi for Paraconsistent Reasoning and Their Relations to Logical Argumentation | null | We introduce annotated sequent calculi, which are extensions of standard sequent calculi, where sequents are combined with annotations that represent their derivation statuses. Unlike in ordinary calculi, sequents that are derived in annotated calculi may still be retracted in the presence of conflicting sequents, thus inferences are made under stricter conditions. Conflicts in the resulting systems are handled like in adaptive logics and argumentation theory. The outcome is a robust family of proof systems for non-monotonic reasoning with inconsistent information, where revision considerations are fully integrated into the object level of the proofs. These systems are shown to be strongly connected to logical argumentation. | Ofer Arieli, Kees van Berkel, Christian Straßer | null | null | 2,022 | ijcai |
Personalized Federated Learning With a Graph | null | Knowledge sharing and model personalization are two key components in the conceptual framework of personalized federated learning (PFL). Existing PFL methods focus on proposing new model personalization mechanisms while simply implementing knowledge sharing by aggregating models from all clients, regardless of their relation graph. This paper aims to enhance the knowledge-sharing process in PFL by leveraging the graph-based structural information among clients. We propose a novel structured federated learning (SFL) framework to learn both the global and personalized models simultaneously using client-wise relation graphs and clients' private data. We cast graph-based SFL as a novel optimization problem that models the complex client-wise relations and graph-based structural topology in a unified framework. Moreover, in addition to using an existing relation graph, SFL can be extended to learn the hidden relations among clients. Experiments on traffic and image benchmark datasets demonstrate the effectiveness of the proposed method. | Fengwen Chen, Guodong Long, Zonghan Wu, Tianyi Zhou, Jing Jiang | null | null | 2,022 | ijcai
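One standard way to express knowledge sharing over a client relation graph, loosely matching the description above, is a graph-regularized objective like the sketch below; the paper's exact formulation may differ, and the trade-off weight `lam` and the helper names are assumptions made for illustration.

```python
import torch

def graph_regularized_objective(local_losses, models, adj, lam=0.1):
    """Illustrative objective: each client fits its own data while being
    pulled toward the models of its neighbours in the client relation graph.

    local_losses: list of scalar tensors, local empirical loss per client
    models:       list of flattened parameter tensors, one per client
    adj:          adj[i][j] is the (non-negative) edge weight between clients i, j
    """
    fit = torch.stack(local_losses).sum()
    smooth = sum(adj[i][j] * (models[i] - models[j]).pow(2).sum()
                 for i in range(len(models)) for j in range(len(models)))
    return fit + lam * smooth
```

Larger edge weights pull the corresponding client models closer together, which is one simple way structural information can steer knowledge sharing; learning hidden relations would additionally treat `adj` itself as a variable.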
Body-Decoupled Grounding via Solving: A Novel Approach on the ASP Bottleneck | null | Answer-Set Programming (ASP) has seen tremendous progress over the last two decades and is nowadays successfully applied in many real-world domains. However, for certain types of problems, the well-known ASP grounding bottleneck still causes severe problems. This becomes virulent when grounding of rules, where the variables have to be replaced by constants, leads to a ground program that is too huge to be processed by the ASP solver. In this work, we tackle this problem by a novel method that decouples non-ground atoms in rules in order to delegate the evaluation of rule bodies to the solving process. Our procedure translates a non-ground normal program into a ground disjunctive program that is exponential only in the maximum predicate arity, and thus polynomial if this arity is assumed to be bounded by a constant. We demonstrate the feasibility of this new method experimentally by comparing it to standard ASP technology in terms of grounding size, grounding time and total runtime. | Viktor Besin, Markus Hecher, Stefan Woltran | null | null | 2,022 | ijcai
The Limits of Morality in Strategic Games | null | An agent, or a coalition of agents, is blameable for an outcome if she had a strategy to prevent it. In this paper we introduce a notion of limited blameworthiness, with a constraint on the amount of sacrifice required to prevent the outcome. The main technical contribution is a sound and complete logical system for reasoning about limited blameworthiness in the strategic game setting. | Rui Cao, Pavel Naumov | null | null | 2,022 | ijcai |
Verification and Monitoring for First-Order LTL with Persistence-Preserving Quantification over Finite and Infinite Traces | null | We address the problem of model checking first-order dynamic systems where new objects can be injected in the active domain during execution. Notable examples are systems induced by a first-order action theory, e.g., expressed in the Situation Calculus. Recent results have shown that, under the state-boundedness assumption, such systems, in spite of having a first-order representation of the state, admit decidable model checking for full first-order mu-calculus. However, interestingly, model checking remains undecidable in the case of first-order LTL (LTL-FO). In this paper, we show that in LTL-FOp, which is the fragment of LTL-FO in which quantification is over objects that persist along traces, model checking state-bounded systems becomes decidable over finite and infinite traces. We then employ this result to show how to handle monitoring of LTL-FOp properties against a trace stemming from an unknown state-bounded dynamic system, simultaneously considering the finite trace up to the current point, and all its possibly infinite future continuations. | Diego Calvanese, Giuseppe De Giacomo, Marco Montali, Fabio Patrizi | null | null | 2,022 | ijcai |
On Verifying Expectations and Observations of Intelligent Agents | null | Public observation logic (POL) is a variant of dynamic epistemic logic to reason about agent expectations and agent observations. Agents have certain expectations, regarding the situation at hand, that are actuated by the relevant protocols, and they eliminate possible worlds in which their expectations do not match with their observations. In this work, we investigate the computational complexity of the model checking problem for POL and prove its PSPACE-completeness. We also study various syntactic fragments of POL. We exemplify the applicability of POL model checking in verifying different characteristics and features of an interactive system with respect to the distinct expectations and (matching) observations of the system. Finally, we provide a discussion on the implementation of the model checking algorithms. | Sourav Chakraborty, Avijeet Ghosh, Sujata Ghosh, François Schwarzentruber | null | null | 2,022 | ijcai |
On the Complexity of Enumerating Prime Implicants from Decision-DNNF Circuits | null | We consider the problem Enum·IP of enumerating prime implicants of Boolean functions represented by decision decomposable negation normal form (dec-DNNF) circuits. We study Enum·IP from dec-DNNF within the framework of enumeration complexity and prove that it is in OutputP, the class of output polynomial enumeration problems, and more precisely in IncP, the class of polynomial incremental time enumeration problems. We then focus on two closely related, but seemingly harder, enumeration problems where further restrictions are put on the prime implicants to be generated. In the first problem, one is only interested in prime implicants representing subset-minimal abductive explanations, a notion much investigated in AI for more than thirty years. In the second problem, the target is prime implicants representing sufficient reasons, a recent yet important notion in the emerging field of eXplainable AI, since they aim to explain predictions achieved by machine learning classifiers. We provide evidence showing that enumerating specific prime implicants corresponding to subset-minimal abductive explanations or to sufficient reasons is not in OutputP. | Alexis de Colnet, Pierre Marquis | null | null | 2,022 | ijcai |
LTLf Synthesis as AND-OR Graph Search: Knowledge Compilation at Work | null | Synthesis techniques for temporal logic specifications are typically based on exploiting symbolic techniques, as done in model checking. These symbolic techniques typically use backward fixpoint computation. Planning, which can be seen as a specific form of synthesis, is a witness of the success of forward search approaches. In this paper, we develop a forward-search approach to full-fledged Linear Temporal Logic on finite traces (LTLf) synthesis. We show how to compute the Deterministic Finite Automaton (DFA) of an LTLf formula on-the-fly, while performing an adversarial forward search towards the final states, by considering the DFA as a sort of AND-OR graph. Our approach is characterized by branching on suitable propositional formulas, instead of individual evaluations, hence radically reducing the branching factor of the search space. Specifically, we take advantage of techniques developed for knowledge compilation, such as Sentential Decision Diagrams (SDDs), to implement the approach efficiently. | Giuseppe De Giacomo, Marco Favorito, Jianwen Li, Moshe Y. Vardi, Shengping Xiao, Shufang Zhu | null | null | 2,022 | ijcai |
Beyond Strong-Cyclic: Doing Your Best in Stochastic Environments | null | ``Strong-cyclic policies" were introduced to formalize trial-and-error strategies and are known to work in Markovian stochastic domains, i.e., they guarantee that the goal is reached with probability 1. We introduce ``best-effort" policies for (not necessarily Markovian) stochastic domains. These generalize strong-cyclic policies by taking advantage of stochasticity even if the goal cannot be reached with probability 1. We compare such policies with optimal policies, i.e., policies that maximize the probability that the goal is achieved, and show that optimal policies are best-effort, but that the converse is false in general. With this framework at hand, we revisit the foundational problem of what it means to plan in nondeterministic domains when the nondeterminism has a stochastic nature. We show that one can view a nondeterministic planning domain as a representation of infinitely many stochastic domains with the same support but different probabilities, and that for temporally extended goals expressed in LTL/LTLf a finite-state best-effort policy in one of these domains is best-effort in each of the domains. In particular, this gives an approach for finding such policies that reduces to solving finite-state MDPs with LTL/LTLf goals. All this shows that ``best-effort" policies are robust to changes in the probabilities, as long as the support is unchanged. | Benjamin Aminof, Giuseppe De Giacomo, Sasha Rubin, Florian Zuleger | null | null | 2,022 | ijcai |
Abstract Argumentation Frameworks with Marginal Probabilities | null | In the context of probabilistic AAFs, we introduce AAFs with marginal probabilities (mAAFs) requiring only marginal probabilities of arguments/attacks to be specified and not relying on the independence assumption. Reasoning over mAAFs requires taking into account multiple probability distributions over the possible worlds, so that the probability of extensions is not determined by a unique value, but by an interval. We focus on the problems of computing the max and min probabilities of extensions over mAAFs under Dung's semantics, characterize their complexity, and provide closed formulas for polynomial cases. | Bettina Fazzinga, Sergio Flesca, Filippo Furfaro | null | null | 2,022 | ijcai
Epistemic Logic of Likelihood and Belief | null | A major challenge in AI is dealing with uncertain information. While probabilistic approaches have been employed to address this issue, in many situations probabilities may not be available or may be unsuitable. As an alternative, qualitative approaches have been introduced to express that one event is no more probable than another. We provide an approach where an agent may reason deductively about notions of likelihood, and may hold beliefs where the subjective probability for a belief is less than 1. Thus, an agent can believe that p holds (with probability <1); and if the agent believes that q is more likely than p, then the agent will also believe q. Our language allows for arbitrary nesting of beliefs and qualitative likelihoods. We provide a sound and complete proof system for the logic with respect to an underlying probabilistic semantics, and show that the language is equivalent to a sublanguage with no nested modalities. | James P. Delgrande, Joshua Sack, Gerhard Lakemeyer, Maurice Pagnucco | null | null | 2,022 | ijcai |
On Preferences and Priority Rules in Abstract Argumentation | null | Dung's abstract Argumentation Framework (AF) has emerged as a central formalism for argumentation in AI. Preferences in AF allow representing the comparative strength of arguments in a simple yet expressive way. In this paper we first investigate the complexity of the verification as well as credulous and skeptical acceptance problems in Preference-based AF (PAF), which extends AF with preferences over arguments. Next, after introducing new semantics for AF where extensions are selected using cardinality (instead of set inclusion) criteria and investigating their complexity, we introduce a framework called AF with Priority rules (AFP) that extends AF with sequences of priority rules. AFP generalizes AF with classical set-inclusion and cardinality based semantics, suggesting that argumentation semantics can be viewed as ways to express priorities among extensions. Finally, we extend AFP by proposing AF with Priority rules and Preferences (AFP^2), where also preferences over arguments can be used to define priority rules, and study the complexity of the above-mentioned problems. | Gianvincenzo Alfano, Sergio Greco, Francesco Parisi, Irina Trubitsyna | null | null | 2,022 | ijcai