title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---
Conservative and Adaptive Penalty for Model-Based Safe Reinforcement Learning
| null |
Reinforcement Learning (RL) agents in the real world must satisfy safety constraints in addition to maximizing a reward objective. Model-based RL algorithms hold promise for reducing unsafe real-world actions: they may synthesize policies that obey all constraints using simulated samples from a learned model. However, imperfect models can result in real-world constraint violations even for actions that are predicted to satisfy all constraints. We propose Conservative and Adaptive Penalty (CAP), a model-based safe RL framework that accounts for potential modeling errors by capturing model uncertainty and adaptively exploiting it to balance the reward and the cost objectives. First, CAP inflates predicted costs using an uncertainty-based penalty. Theoretically, we show that policies that satisfy this conservative cost constraint are guaranteed to also be feasible in the true environment. We further show that this guarantees the safety of all intermediate solutions during RL training. Further, CAP adaptively tunes this penalty during training using true cost feedback from the environment. We evaluate this conservative and adaptive penalty-based approach for model-based safe RL extensively on state and image-based environments. Our results demonstrate substantial gains in sample-efficiency while incurring fewer violations than prior safe RL algorithms. Code is available at: https://github.com/Redrew/CAP
|
Yecheng Jason Ma, Andrew Shen, Osbert Bastani, Dinesh Jayaraman
| null | null | 2022 |
aaai
|
Prevailing in the Dark: Information Walls in Strategic Games
| null |
The paper studies strategic abilities that arise from restrictions on information sharing in multi-agent systems. The main technical result is a sound and complete logical system that describes the interplay between the knowledge and strategic ability modalities.
|
Pavel Naumov, Wenxuan Zhang
| null | null | 2022 |
aaai
|
Propositional Encodings of Acyclicity and Reachability by Using Vertex Elimination
| null |
We introduce novel methods, based on vertex elimination graphs, for encoding acyclicity and s-t-reachability constraints for propositional formulas with underlying directed graphs, making them suitable for cases where the underlying graph has low directed elimination width. In contrast to solvers with ad hoc constraint propagators for graph constraints, such as GraphSAT, our methods encode these constraints as standard propositional clauses, making them directly applicable with any SAT solver. An empirical study demonstrates that our methods often outperform both earlier encodings of these constraints and GraphSAT, especially when the underlying graphs have low directed elimination width.
|
Masood Feyzbakhsh Rankooh, Jussi Rintanen
| null | null | 2022 |
aaai
|
Weakly Supervised Neural Symbolic Learning for Cognitive Tasks
| null |
Despite the recent success of end-to-end deep neural networks, there are growing concerns about their lack of logical reasoning abilities, especially on cognitive tasks with perception and reasoning processes. A solution is the neural symbolic learning (NeSyL) method, which can effectively utilize pre-defined logic rules to constrain the neural architecture, making it perform better on cognitive tasks. However, it is challenging to apply NeSyL to these cognitive tasks because of the lack of supervision, the non-differentiable manner of the symbolic system, and the difficulty of probabilistically constraining the neural network. In this paper, we propose WS-NeSyL, a weakly supervised neural symbolic learning model for cognitive tasks with logical reasoning. First, WS-NeSyL employs a novel back search algorithm to sample the possible reasoning process through logic rules. This sampled process can supervise the neural network as a pseudo label. Based on this algorithm, we can backpropagate gradients to the neural network of WS-NeSyL in a weakly supervised manner. Second, we introduce a probabilistic logic regularization into WS-NeSyL to help the neural network learn probabilistic logic. To evaluate WS-NeSyL, we have conducted experiments on three cognitive datasets, including temporal reasoning, handwritten formula recognition, and relational reasoning datasets. Experimental results show that WS-NeSyL not only outperforms the end-to-end neural model but also beats the state-of-the-art neural symbolic learning models.
|
Jidong Tian, Yitian Li, Wenqing Chen, Liqiang Xiao, Hao He, Yaohui Jin
| null | null | 2022 |
aaai
|
Random vs. Best-First: Impact of Sampling Strategies on Decision Making in Model-Based Diagnosis
| null |
Statistical samples, in order to be representative, have to be drawn from a population in a random and unbiased way. Nevertheless, it is common practice in the field of model-based diagnosis to make estimations from (biased) best-first samples. One example is the computation of a few most probable fault explanations for a defective system and the use of these to assess which aspect of the system, if measured, would bring the highest information gain. In this work, we scrutinize whether these statistically not well-founded conventions, which both diagnosis researchers and practitioners have adhered to for decades, are indeed reasonable. To this end, we empirically analyze various sampling methods that generate fault explanations. We study the representativeness of the produced samples in terms of their estimations about fault explanations and how well they guide diagnostic decisions, and we investigate the impact of sample size, the optimal trade-off between sampling efficiency and effectiveness, and how approximate sampling techniques compare to exact ones.
|
Patrick Rodler
| null | null | 2022 |
aaai
|
Residual Similarity Based Conditional Independence Test and Its Application in Causal Discovery
| null |
Recently, many regression-based conditional independence (CI) test methods have been proposed to solve the problem of causal discovery. These methods provide alternatives to test CI by first removing the information of the controlling set from the two target variables, and then testing the independence between the corresponding residuals Res1 and Res2. When the residuals are linearly uncorrelated, the independence test between them is nontrivial. With the ability to calculate inner products in high-dimensional space, kernel-based methods are usually used to achieve this goal, but they still consume considerable time. In this paper, we investigate the independence between two linear combinations under the linear non-Gaussian structural equation model. We show that the dependence between the two residuals can be captured by the difference between the similarity of (Res1, Res2) and that of (Res1, Res3) (Res3 is generated by random permutation) in high-dimensional space. With this result, we design a new method called SCIT for CI testing, where a permutation test is performed to control the Type I error rate. The proposed method is simpler yet more efficient and effective than the existing ones. When applied to causal discovery, the proposed method outperforms the counterparts in terms of both speed and Type II error rate, especially in the case of small sample size, which is validated by our extensive experiments on various datasets.
|
Hao Zhang, Shuigeng Zhou, Kun Zhang, Jihong Guan
| null | null | 2022 |
aaai
|
Sim2Real Object-Centric Keypoint Detection and Description
| null |
Keypoint detection and description play a central role in computer vision. Most existing methods are in the form of scene-level prediction, without returning the object classes of different keypoints. In this paper, we propose the object-centric formulation, which, beyond the conventional setting, requires further identifying which object each interest point belongs to. With such fine-grained information, our framework enables more downstream potentials, such as object-level matching and pose estimation in a cluttered environment. To get around the difficulty of label collection in the real world, we develop a sim2real contrastive learning mechanism that can generalize the model trained in simulation to real-world applications. The novelties of our training method are three-fold: (i) we integrate the uncertainty into the learning framework to improve feature description of hard cases, e.g., less-textured or symmetric patches; (ii) we decouple the object descriptor into two independent branches, intra-object salience and inter-object distinctness, resulting in a better pixel-wise description; (iii) we enforce cross-view semantic consistency for enhanced robustness in representation learning. Comprehensive experiments on image matching and 6D pose estimation verify the encouraging generalization ability of our method. Particularly for 6D pose estimation, our method significantly outperforms typical unsupervised/sim2real methods, achieving a closer gap with the fully supervised counterpart.
|
Chengliang Zhong, Chao Yang, Fuchun Sun, Jinshan Qi, Xiaodong Mu, Huaping Liu, Wenbing Huang
| null | null | 2022 |
aaai
|
Weighted Model Counting in FO2 with Cardinality Constraints and Counting Quantifiers: A Closed Form Formula
| null |
Weighted First-Order Model Counting (WFOMC) computes the weighted sum of the models of a first-order logic theory on a given finite domain. First-order logic theories that admit polynomial-time WFOMC w.r.t. domain cardinality are called domain-liftable. We introduce the concept of lifted interpretations as a tool for formulating closed forms for WFOMC. Using lifted interpretations, we reconstruct the closed-form formula for polynomial-time FOMC in the universally quantified fragment of FO2, earlier proposed by Beame et al. We then expand this closed form to incorporate cardinality constraints, existential quantifiers, and counting quantifiers (a.k.a. C2) without losing domain-liftability. Finally, we show that the obtained closed form motivates a natural definition of a family of weight functions strictly larger than symmetric weight functions.
|
Sagar Malhotra, Luciano Serafini
| null | null | 2022 |
aaai
|
MeTeoR: Practical Reasoning in Datalog with Metric Temporal Operators
| null |
DatalogMTL, an extension of Datalog with operators from metric temporal logic, has received significant attention in recent years. It is a highly expressive knowledge representation language that is well-suited for applications in temporal ontology-based query answering and stream processing. Reasoning in DatalogMTL is, however, of high computational complexity, making implementation challenging and hindering its adoption in applications. In this paper, we present a novel approach for practical reasoning in DatalogMTL which combines materialisation (a.k.a. forward chaining) with automata-based techniques. We have implemented this approach in a reasoner called MeTeoR and evaluated its performance using a temporal extension of the Lehigh University Benchmark and a benchmark based on real-world meteorological data. Our experiments show that MeTeoR is a scalable system which enables reasoning over complex temporal rules and datasets involving tens of millions of temporal facts.
|
Dingmin Wang, Pan Hu, Przemysław Andrzej Wałęga, Bernardo Cuenca Grau
| null | null | 2022 |
aaai
|
Knowledge Compilation Meets Logical Separability
| null |
Knowledge compilation is an alternative solution for addressing demanding reasoning tasks of high complexity by converting knowledge bases into a suitable target language. Interestingly, the notion of logical separability, proposed by Levesque, offers a general explanation for the tractability of clausal entailment for two remarkable languages: decomposable negation normal form and prime implicates. It is natural to ask what role logical separability plays in problem tractability. In this paper, we apply the notion of logical separability to three reasoning problems within the context of propositional logic: satisfiability checking (CO), clausal entailment checking (CE), and model counting (CT), contributing three corresponding polytime procedures. We provide three logical-separability-based properties: CO-logical separability, CE-logical separability, and CT-logical separability. We then identify three novel normal forms: CO-LSNNF, CE-LSNNF, and CT-LSNNF, based on the above properties. Moreover, we show that each normal form is the necessary and sufficient condition under which the corresponding procedure is correct. We finally integrate these normal forms into the knowledge compilation map.
|
Junming Qiu, Wenqing Li, Zhanhao Xiao, Quanlong Guan, Liangda Fang, Zhao-Rong Lai, Qian Dong
| null | null | 2022 |
aaai
|
SGEITL: Scene Graph Enhanced Image-Text Learning for Visual Commonsense Reasoning
| null |
Answering complex questions about images is an ambitious goal for machine intelligence, which requires a joint understanding of images, text, and commonsense knowledge, as well as a strong reasoning ability. Recently, multimodal Transformers have made great progress on the task of Visual Commonsense Reasoning (VCR) by jointly understanding visual objects and text tokens through layers of cross-modality attention. However, these approaches do not utilize the rich structure of the scene and the interactions between objects, which are essential in answering complex commonsense questions. We propose a Scene Graph Enhanced Image-Text Learning (SGEITL) framework to incorporate visual scene graphs into commonsense reasoning. In order to exploit the scene graph structure, at the model structure level, we propose a multihop graph transformer for regularizing attention interaction among hops. As for pre-training, a scene-graph-aware pre-training method is proposed to leverage structure knowledge extracted from the visual scene graph. Moreover, we introduce a method to train and generate domain-relevant visual scene graphs using textual annotations in a weakly-supervised manner. Extensive experiments on VCR and other tasks show a significant performance boost compared with the state-of-the-art methods and prove the efficacy of each proposed component.
|
Zhecan Wang, Haoxuan You, Liunian Harold Li, Alireza Zareian, Suji Park, Yiqing Liang, Kai-Wei Chang, Shih-Fu Chang
| null | null | 2022 |
aaai
|
Robust Adversarial Reinforcement Learning with Dissipation Inequation Constraint
| null |
Robust adversarial reinforcement learning is an effective method to train agents to manage uncertain disturbances and modeling errors in real environments. However, for systems that are sensitive to disturbances or those that are difficult to stabilize, it is easier to learn a powerful adversary than to establish a stable control policy. An overly strong adversary can destabilize the system, introduce biases in the sampling process, make the learning process unstable, and even reduce the robustness of the policy. In this study, we consider the problem of ensuring system stability during training in the adversarial reinforcement learning architecture. The dissipative principle of robust H-infinity control is extended to the Markov Decision Process, and robust stability constraints are obtained based on L2 gain performance in the reinforcement learning system. Thus, we propose a dissipation-inequation-constraint-based adversarial reinforcement learning architecture. This architecture ensures the stability of the system during training by imposing constraints on the normal and adversarial agents. Theoretically, this architecture can be applied to a large family of deep reinforcement learning algorithms. Results of experiments in MuJoCo and GymFc environments show that our architecture effectively improves the robustness of the controller against environmental changes and adapts to more powerful adversaries. Results of the flight experiments on a real quadcopter indicate that our method can directly deploy the policy trained in the simulation environment to the real environment, and our controller outperforms the PID controller in hardware-in-the-loop tests. Both our theoretical and empirical results provide new and critical outlooks on the adversarial reinforcement learning architecture from a rigorous robust control perspective.
|
Peng Zhai, Jie Luo, Zhiyan Dong, Lihua Zhang, Shunli Wang, Dingkang Yang
| null | null | 2022 |
aaai
|
First Order Rewritability in Ontology-Mediated Querying in Horn Description Logics
| null |
We consider first-order (FO) rewritability for query answering in ontology-mediated querying (OMQ) in which ontologies are formulated in Horn fragments of description logics (DLs). In general, OMQ approaches for such logics rely on non-FO rewriting of the query and/or on non-FO completion of the data, called an ABox. Specifically, we consider the problem of FO rewritability in terms of Beth definability, and show how Craig interpolation can then be used to effectively construct the rewritings, when they exist, from the Clark completion of Datalog-like programs encoding a given DL TBox and optionally a query. We show how this approach to FO rewritability can also be used to (a) capture integrity constraints commonly available in backend relational data sources, (b) capture constraints inherent in mapping such sources to an ABox, and (c) serve as an alternative to deriving so-called perfect rewritings of queries in the case of DL-Lite ontologies.
|
David Toman, Grant Weddell
| null | null | 2022 |
aaai
|
Characterizing the Program Expressive Power of Existential Rule Languages
| null |
Existential rule languages are a family of ontology languages that have been widely used in ontology-mediated query answering (OMQA). However, for most of them, the expressive power of representing domain knowledge for OMQA, known as the program expressive power, is not yet well-understood. In this paper, we establish a number of novel characterizations for the program expressive power of several important existential rule languages, including tuple-generating dependencies (TGDs), linear TGDs, as well as disjunctive TGDs. The characterizations employ natural model-theoretic properties, and sometimes automata-theoretic properties, which thus provide powerful tools for identifying the definability of domain knowledge for OMQA in these languages.
|
Heng Zhang, Guifei Jiang
| null | null | 2022 |
aaai
|
Monocular Camera-Based Point-Goal Navigation by Learning Depth Channel and Cross-Modality Pyramid Fusion
| null |
For a monocular camera-based navigation system, if we could effectively explore scene geometric cues from RGB images, the geometry information would significantly facilitate the efficiency of the navigation system. Motivated by this, we propose a highly efficient point-goal navigation framework, dubbed Geo-Nav. In a nutshell, our Geo-Nav consists of two parts: a visual perception part and a navigation part. In the visual perception part, we first propose a Self-supervised Depth Estimation network (SDE) specially tailored for the monocular camera-based navigation agent. Our SDE learns a mapping from an RGB input image to its corresponding depth image by exploring scene geometric constraints in a self-consistency manner. Then, in order to achieve a representative visual representation from the RGB inputs and learned depth images, we propose a Cross-modality Pyramid Fusion module (CPF). Concretely, our CPF computes a patch-wise cross-modality correlation between different modal features and exploits the correlation to fuse and enhance features at each scale. Thanks to the patch-wise nature of our CPF, we can fuse feature maps at high resolution, allowing our visual network to perceive more image details. In the navigation part, our extracted visual representations are fed to a navigation policy network to learn how to map the visual representations to agent actions effectively. Extensive experiments on a widely-used multiple-room environment, Gibson, demonstrate that Geo-Nav outperforms the state-of-the-art in terms of efficiency and effectiveness.
|
Tianqi Tang, Heming Du, Xin Yu, Yi Yang
| null | null | 2022 |
aaai
|
CTIN: Robust Contextual Transformer Network for Inertial Navigation
| null |
Recently, data-driven inertial navigation approaches have demonstrated their capability of using well-trained neural networks to obtain accurate position estimates from inertial measurement unit (IMU) measurements. In this paper, we propose a novel robust Contextual Transformer-based network for Inertial Navigation (CTIN) to accurately predict velocity and trajectory. To this end, we first design a ResNet-based encoder enhanced by local and global multi-head self-attention to capture spatial contextual information from IMU measurements. Then we fuse these spatial representations with temporal knowledge by leveraging multi-head attention in the Transformer decoder. Finally, multi-task learning with uncertainty reduction is leveraged to improve learning efficiency and prediction accuracy of velocity and trajectory. Through extensive experiments over a wide range of inertial datasets (e.g., RIDI, OxIOD, RoNIN, IDOL, and our own), CTIN is very robust and outperforms state-of-the-art models.
|
Bingbing Rao, Ehsan Kazemi, Yifan Ding, Devu M Shila, Frank M Tucker, Liqiang Wang
| null | null | 2022 |
aaai
|
Using Conditional Independence for Belief Revision
| null |
We present an approach to incorporating qualitative assertions of conditional irrelevance into belief revision, in order to address the limitations of existing work which considers only unconditional irrelevance. These assertions serve to enforce the requirement of minimal change to existing beliefs, while also suggesting a route to reducing the computational cost of belief revision by excluding irrelevant beliefs from consideration. In our approach, a knowledge engineer specifies a collection of multivalued dependencies that encode domain-dependent assertions of conditional irrelevance in the knowledge base. We consider these as capturing properties of the underlying domain which should be taken into account during belief revision. We introduce two related notions of what it means for a multivalued dependency to be taken into account by a belief revision operator: partial and full compliance. We provide characterisations of partially and fully compliant belief revision operators in terms of semantic conditions on their associated faithful rankings. Using these characterisations, we show that the constraints for partially and fully compliant belief revision operators are compatible with the AGM postulates. Finally, we compare our approach to existing work on unconditional irrelevance in belief revision.
|
Matthew James Lynn, James P. Delgrande, Pavlos Peppas
| null | null | 2022 |
aaai
|
On Paraconsistent Belief Revision in LP
| null |
Belief revision aims at incorporating, in a rational way, a new piece of information into the beliefs of an agent. Most works in belief revision suppose a classical logic setting, where the beliefs of the agent are consistent. Moreover, the consistency postulate states that the result of the revision should be consistent if the new piece of information is consistent. But in real applications it may easily happen that (some parts of) the beliefs of the agent are not consistent. In this case then it seems reasonable to use paraconsistent logics to derive sensible conclusions from these inconsistent beliefs. However, in this context, the standard belief revision postulates trivialize the revision process. In this work we discuss how to adapt these postulates when the underlying logic is Priest's LP logic, in order to model a rational change, while being a conservative extension of AGM/KM belief revision. This implies, in particular, to adequately adapt the notion of expansion. We provide a representation theorem and some examples of belief revision operators in this setting.
|
Nicolas Schwind, Sébastien Konieczny, Ramón Pino Pérez
| null | null | 2022 |
aaai
|
ApproxASP – a Scalable Approximate Answer Set Counter
| null |
Answer Set Programming (ASP) is a framework in artificial intelligence and knowledge representation for declarative modeling and problem solving. Modern ASP solvers focus on the computation or enumeration of answer sets. However, a variety of probabilistic applications in reasoning or logic programming require counting answer sets. While counting can be done by enumeration, simple enumeration becomes immediately infeasible if the number of solutions is high. On the other hand, approaches to exact counting are of high worst-case complexity. In fact, in propositional model counting, exact counting becomes impractical. In this work, we present a scalable approach to approximate counting for answer set programming. Our approach is based on systematically adding XOR constraints to ASP programs, which divide the search space. We prove that adding random XOR constraints partitions the answer sets of an ASP program. In practice, we use a Gaussian elimination-based approach by lifting ideas from SAT to ASP and integrating it into a state-of-the-art ASP solver, which we call ApproxASP. Finally, our experimental evaluation shows the scalability of our approach over existing ASP systems.
|
Mohimenul Kabir, Flavio O Everardo, Ankit K Shukla, Markus Hecher, Johannes Klaus Fichte, Kuldeep S Meel
| null | null | 2022 |
aaai
|
Inferring Lexicographically-Ordered Rewards from Preferences
| null |
Modeling the preferences of agents over a set of alternatives is a principal concern in many areas. The dominant approach has been to find a single reward/utility function with the property that alternatives yielding higher rewards are preferred over alternatives yielding lower rewards. However, in many settings, preferences are based on multiple—often competing—objectives; a single reward function is not adequate to represent such preferences. This paper proposes a method for inferring multi-objective reward-based representations of an agent's observed preferences. We model the agent's priorities over different objectives as entering lexicographically, so that objectives with lower priorities matter only when the agent is indifferent with respect to objectives with higher priorities. We offer two example applications in healthcare—one inspired by cancer treatment, the other inspired by organ transplantation—to illustrate how the lexicographically-ordered rewards we learn can provide a better understanding of a decision-maker's preferences and help improve policies when used in reinforcement learning.
|
Alihan Hüyük, William R. Zame, Mihaela van der Schaar
| null | null | 2022 |
aaai
|
Towards Explainable Action Recognition by Salient Qualitative Spatial Object Relation Chains
| null |
In order to be trusted by humans, Artificial Intelligence agents should be able to describe the rationales behind their decisions. One such application is human action recognition in critical or sensitive scenarios, where trustworthy and explainable action recognizers are expected. For example, reliable pedestrian action recognition is essential for self-driving cars, and explanations for real-time decision making are critical for investigations if an accident happens. In this regard, learning-based approaches, despite their popularity and accuracy, are disadvantageous due to their limited interpretability. This paper presents a novel neuro-symbolic approach that recognizes actions from videos with human-understandable explanations. Specifically, we first propose to represent videos symbolically by qualitative spatial relations between objects, called qualitative spatial object relation chains. We further develop a neural saliency estimator to capture the correlation between such object relation chains and the occurrence of actions. Given an unseen video, this neural saliency estimator is able to tell which object relation chains are more important for the action recognized. We evaluate our approach on two real-life video datasets, with respect to recognition accuracy and the quality of generated action explanations. Experiments show that our approach achieves superior performance in both aspects compared with previous symbolic approaches, thus facilitating trustworthy intelligent decision making. Our approach can be used to augment state-of-the-art learning approaches with explainability.
|
Hua Hua, Dongxu Li, Ruiqi Li, Peng Zhang, Jochen Renz, Anthony Cohn
| null | null | 2022 |
aaai
|
Tractable Explanations for d-DNNF Classifiers
| null |
Compilation into propositional languages finds a growing number of practical uses, including in constraint programming, diagnosis and machine learning (ML), among others. One concrete example is the use of propositional languages as classifiers, and one natural question is how to explain the predictions made. This paper shows that for classifiers represented with some of the best-known propositional languages, different kinds of explanations can be computed in polynomial time. These languages include deterministic decomposable negation normal form (d-DNNF), and so any propositional language that is strictly less succinct than d-DNNF. Furthermore, the paper describes optimizations, specific to Sentential Decision Diagrams (SDDs), which are shown to yield more efficient algorithms in practice.
|
Xuanxiang Huang, Yacine Izza, Alexey Ignatiev, Martin Cooper, Nicholas Asher, Joao Marques-Silva
| null | null | 2022 |
aaai
|
How Does Knowledge Graph Embedding Extrapolate to Unseen Data: A Semantic Evidence View
| null |
Knowledge Graph Embedding (KGE) aims to learn representations for entities and relations. Most KGE models have achieved great success, especially in extrapolation scenarios. Specifically, given an unseen triple (h, r, t), a trained model can still correctly predict t from (h, r, ?), or h from (?, r, t); such extrapolation ability is impressive. However, most existing KGE works focus on the design of delicate triple modeling functions, which mainly tell us how to measure the plausibility of observed triples, but offer limited explanation of why the methods can extrapolate to unseen data and what the important factors are that help KGE extrapolate. Therefore, in this work, we attempt to study KGE extrapolation through two problems: 1. How does KGE extrapolate to unseen data? 2. How can we design a KGE model with better extrapolation ability? For problem 1, we first discuss the impact factors for extrapolation and, at the relation, entity, and triple levels respectively, propose three Semantic Evidences (SEs), which can be observed from the training set and provide important semantic information for extrapolation. Then we verify the effectiveness of the SEs through extensive experiments on several typical KGE methods. For problem 2, to make better use of the three levels of SE, we propose a novel GNN-based KGE model, called Semantic Evidence aware Graph Neural Network (SE-GNN). In SE-GNN, each level of SE is modeled explicitly by the corresponding neighbor pattern and merged sufficiently by multi-layer aggregation, which contributes to obtaining more extrapolative knowledge representations. Finally, through extensive experiments on the FB15k-237 and WN18RR datasets, we show that SE-GNN achieves state-of-the-art performance on the Knowledge Graph Completion task and exhibits better extrapolation ability. Our code is available at https://github.com/renli1024/SE-GNN.
|
Ren Li, Yanan Cao, Qiannan Zhu, Guanqun Bi, Fang Fang, Yi Liu, Qian Li
| null | null | 2022 |
aaai
|
Understanding Enthymemes in Deductive Argumentation Using Semantic Distance Measures
| null |
An argument can be regarded as some premises and a claim following from those premises. Normally, arguments exchanged by human agents are enthymemes, which generally means that some premises are implicit. So when an enthymeme is presented, the presenter expects that the recipient can identify the missing premises. An important kind of implicitness arises when a presenter assumes that two symbols denote the same, or nearly the same, concept (e.g. dad and father), and uses the symbols interchangeably. To model this process, we propose the use of semantic distance measures (e.g. based on a vector representation of word embeddings or a semantic network representation of words) to determine whether one symbol can be substituted by another. We present a theoretical framework for using substitutions, together with abduction of default knowledge, for understanding enthymemes based on deductive argumentation, and investigate how this could be used in practice.
|
Anthony Hunter
| null | null | 2,022 |
aaai
|
Unit Selection with Causal Diagram
| null |
The unit selection problem aims to identify a set of individuals who are most likely to exhibit a desired mode of behavior, for example, selecting individuals who would respond one way if encouraged and a different way if not encouraged. Using a combination of experimental and observational data, Li and Pearl derived tight bounds on the "benefit function" - the payoff/cost associated with selecting an individual with given characteristics. This paper shows that these bounds can be narrowed significantly (enough to change decisions) when structural information is available in the form of a causal model. We address the problem of estimating the benefit function using observational and experimental data when specific graphical criteria are assumed to hold.
|
Ang Li, Judea Pearl
| null | null | 2,022 |
aaai
|
Inductive Relation Prediction by BERT
| null |
Relation prediction in knowledge graphs is dominated by embedding based methods which mainly focus on the transductive setting. Unfortunately, they are not able to handle inductive learning where unseen entities and relations are present and cannot take advantage of prior knowledge. Furthermore, their inference process is not easily explainable. In this work, we propose an all-in-one solution, called BERTRL (BERT-based Relational Learning), which leverages a pre-trained language model and fine-tunes it by taking relation instances and their possible reasoning paths as training samples. BERTRL outperforms the SOTAs in 15 out of 18 cases in both inductive and transductive settings. Meanwhile, it demonstrates strong generalization capability in few-shot learning and is explainable. The data and code can be found at https://github.com/zhw12/BERTRL.
|
Hanwen Zha, Zhiyu Chen, Xifeng Yan
| null | null | 2,022 |
aaai
|
Multi-View Graph Representation for Programming Language Processing: An Investigation into Algorithm Detection
| null |
Program representation, which aims at converting program source code into vectors with automatically extracted features, is a fundamental problem in programming language processing (PLP). Recent work tries to represent programs with neural networks based on source code structures. However, such methods often focus on the syntax and consider only one single perspective of programs, limiting the representation power of models. This paper proposes a multi-view graph (MVG) program representation method. MVG pays more attention to code semantics and simultaneously includes both data flow and control flow as multiple views. These views are then combined and processed by a graph neural network (GNN) to obtain a comprehensive program representation that covers various aspects. We thoroughly evaluate our proposed MVG approach in the context of algorithm detection, an important and challenging subfield of PLP. Specifically, we use a public dataset POJ-104 and also construct a new challenging dataset ALG-109 to test our method. In experiments, MVG outperforms previous methods significantly, demonstrating our model's strong capability of representing source code.
|
Ting Long, Yutong Xie, Xianyu Chen, Weinan Zhang, Qinxiang Cao, Yong Yu
| null | null | 2,022 |
aaai
|
Bounds on Causal Effects and Application to High Dimensional Data
| null |
This paper addresses the problem of estimating causal effects when adjustment variables in the back-door or front-door criterion are partially observed. For such scenarios, we derive bounds on the causal effects by solving two non-linear optimization problems, and demonstrate that the bounds are sufficient. Using this optimization method, we propose a framework for dimensionality reduction that allows one to trade bias for estimation power, and demonstrate its performance using simulation studies.
|
Ang Li, Judea Pearl
| null | null | 2,022 |
aaai
|
Learning to Walk with Dual Agents for Knowledge Graph Reasoning
| null |
Graph walking based on reinforcement learning (RL) has shown great success in navigating an agent to automatically complete various reasoning tasks over an incomplete knowledge graph (KG) by exploring multi-hop relational paths. However, existing multi-hop reasoning approaches only work well on short reasoning paths and tend to miss the target entity as the path length increases. This is undesirable for many reasoning tasks in real-world scenarios, where short paths connecting the source and target entities are not available in incomplete KGs, and thus reasoning performance drops drastically unless the agent is able to seek out more clues from longer paths. To address this challenge, we propose a dual-agent reinforcement learning framework, which trains two agents (Giant and Dwarf) to walk over a KG jointly and search for the answer collaboratively. Our approach tackles the reasoning challenge over long paths by assigning one agent (Giant) to search on cluster-level paths quickly and provide stage-wise hints for the other agent (Dwarf). Finally, experimental results on several KG reasoning benchmarks show that our approach can find answers more accurately and efficiently, and outperforms existing RL-based methods for long path queries by a large margin.
|
Denghui Zhang, Zixuan Yuan, Hao Liu, Xiaodong lin, Hui Xiong
| null | null | 2,022 |
aaai
|
BERTMap: A BERT-Based Ontology Alignment System
| null |
Ontology alignment (a.k.a. ontology matching (OM)) plays a critical role in knowledge integration. Owing to the success of machine learning in many domains, it has been applied in OM. However, the existing methods, which often adopt ad-hoc feature engineering or non-contextual word embeddings, have not yet outperformed rule-based systems, especially in an unsupervised setting. In this paper, we propose a novel OM system named BERTMap which can support both unsupervised and semi-supervised settings. It first predicts mappings using a classifier based on fine-tuning the contextual embedding model BERT on text semantics corpora extracted from ontologies, and then refines the mappings through extension and repair by utilizing the ontology structure and logic. Our evaluation with three alignment tasks on biomedical ontologies demonstrates that BERTMap can often perform better than the leading OM systems LogMap and AML.
|
Yuan He, Jiaoyan Chen, Denvar Antonyrajah, Ian Horrocks
| null | null | 2,022 |
aaai
|
Automated Synthesis of Generalized Invariant Strategies via Counterexample-Guided Strategy Refinement
| null |
Strategy synthesis for multi-agent systems has proved to be a hard task, even when limited to two-player games with safety objectives. Generalized strategy synthesis, an extension of generalized planning which aims to produce a single solution for multiple (possibly infinitely many) planning instances, is a promising direction to deal with the state-space explosion problem. In this paper, we formalize the problem of generalized strategy synthesis in the situation calculus. The synthesis task involves second-order theorem proving generally. Thus we consider strategies aiming to maintain invariants; such strategies can be verified with first-order theorem proving. We propose a sound but incomplete approach to synthesize invariant strategies by adapting the framework of counterexample-guided refinement. The key idea for refinement is to generate a strategy using a model checker for a game constructed from the counterexample, and use it to refine the candidate general strategy. We implemented our method and did experiments with a number of game problems. Our system can successfully synthesize solutions for most of the domains within a reasonable amount of time.
|
Kailun Luo, Yongmei Liu
| null | null | 2,022 |
aaai
|
Linear-Time Verification of Data-Aware Dynamic Systems with Arithmetic
| null |
Combined modeling and verification of dynamic systems and the data they operate on has gained momentum in AI and in several application domains. We investigate the expressive yet concise framework of data-aware dynamic systems (DDS), extending it with linear arithmetic, and providing the following contributions. First, we introduce a new, semantic property of “finite summary”, which guarantees the existence of a faithful finite-state abstraction. We rely on this to show that checking whether a witness exists for a linear-time, finite-trace property is decidable for DDSs with finite summary. Second, we demonstrate that several decidability conditions studied in formal methods and database theory can be seen as concrete, checkable instances of this property. This also gives rise to new decidability results. Third, we show how the abstract, uniform property of finite summary leads to modularity results: a system enjoys finite summary if it can be partitioned appropriately into smaller systems that possess the property. Our results allow us to analyze systems that were out of reach in earlier approaches. Finally, we demonstrate the feasibility of our approach in a prototype implementation.
|
Paolo Felli, Marco Montali, Sarah Winkler
| null | null | 2,022 |
aaai
|
Conditional Abstract Dialectical Frameworks
| null |
Abstract dialectical frameworks (in short, ADFs) are a unifying model of formal argumentation, where argumentative relations between arguments are represented by assigning acceptance conditions to atomic arguments. This idea is generalized by letting acceptance conditions be assigned to complex formulas, resulting in conditional abstract dialectical frameworks (in short, cADFs). We define the semantics of cADFs in terms of a non-truth-functional four-valued logic, and study the semantics in-depth, by showing existence results and proving that all semantics are generalizations of the corresponding semantics for ADFs.
|
Jesse Heyninck, Matthias Thimm, Gabriele Kern-Isberner, Tjitze Rienstra, Kenneth Skiba
| null | null | 2,022 |
aaai
|
Axiomatization of Aggregates in Answer Set Programming
| null |
The paper presents a characterization of logic programs with aggregates based on a many-sorted generalization of the operator SM that refers neither to grounding nor to fixpoints. This characterization introduces new symbols for aggregate operations and aggregate elements, whose meaning is fixed by adding appropriate axioms to the result of the SM transformation. We prove that, for programs without positive recursion through aggregates, our semantics coincides with the semantics of the answer set solver Clingo.
|
Jorge Fandinno, Zachary Hansen, Yuliya Lierler
| null | null | 2,022 |
aaai
|
An Axiomatic Approach to Revising Preferences
| null |
We study a model of preference revision in which a prior preference over a set of alternatives is adjusted in order to accommodate input from an authoritative source, while maintaining certain structural constraints (e.g., transitivity, completeness), and without giving up more information than strictly necessary. We analyze this model under two aspects: the first allows us to capture natural distance-based operators, at the cost of a mismatch between the input and output formats of the revision operator. Requiring the input and output to be aligned yields a second type of operator, which we characterize using preferences on the comparisons in the prior preference. Preference revision is set in a logic-based framework using the formal machinery of belief change, along the lines of the well-known AGM approach: we propose rationality postulates for each of the two versions of our model and derive representation results, thus situating preference revision within the larger family of belief change operators.
|
Adrian Haret, Johannes Peter Wallner
| null | null | 2,022 |
aaai
|
Reasoning about Causal Models with Infinitely Many Variables
| null |
Generalized structural equations models (GSEMs) (Peters and Halpern 2021) are, as the name suggests, a generalization of structural equations models (SEMs). They can deal with (among other things) infinitely many variables with infinite ranges, which is critical for capturing dynamical systems. We provide a sound and complete axiomatization of causal reasoning in GSEMs that extends the sound and complete axiomatization provided by Halpern (2000) for SEMs. Considering GSEMs helps clarify what properties Halpern's axioms capture.
|
Joseph Y. Halpern, Spencer Peters
| null | null | 2,022 |
aaai
|
MultiplexNet: Towards Fully Satisfied Logical Constraints in Neural Networks
| null |
We propose a novel way to incorporate expert knowledge into the training of deep neural networks. Many approaches encode domain constraints directly into the network architecture, requiring non-trivial or domain-specific engineering. In contrast, our approach, called MultiplexNet, represents domain knowledge as a quantifier-free logical formula in disjunctive normal form (DNF), which is easy to encode and to elicit from human experts. It introduces a latent Categorical variable that learns to choose which constraint term optimizes the error function of the network, and it compiles the constraints directly into the output of existing learning algorithms. We demonstrate the efficacy of this approach empirically on several classical deep learning tasks, such as density estimation and classification, in both supervised and unsupervised settings where prior knowledge about the domains was expressed as logical constraints. Our results show that the MultiplexNet approach learned to approximate unknown distributions well, often requiring fewer data samples than the alternative approaches. In some cases, MultiplexNet finds better solutions than the baselines, or solutions that could not be achieved with the alternative approaches. Our contribution is in encoding domain knowledge in a way that facilitates inference. We specifically focus on quantifier-free logical formulae that are specified over the output domain of a network. We show that this approach is both efficient and general and that, critically, it guarantees 100% constraint satisfaction in a network's output.
|
Nick Hoernle, Rafael Michael Karampatsis, Vaishak Belle, Kobi Gal
| null | null | 2,022 |
aaai
|
Sufficient Reasons for Classifier Decisions in the Presence of Domain Constraints
| null |
Recent work has unveiled a theory for reasoning about the decisions made by binary classifiers: a classifier describes a Boolean function, and the reasons behind an instance being classified as positive are the prime-implicants of the function that are satisfied by the instance. One drawback of these works is that they do not explicitly treat scenarios where the underlying data is known to be constrained, e.g., certain combinations of features may not exist, may not be observable, or may be required to be disregarded. We propose a more general theory, also based on prime-implicants, tailored to taking constraints into account. The main idea is to view classifiers as describing partial Boolean functions that are undefined on instances that do not satisfy the constraints. We prove that this simple idea results in more parsimonious reasons. That is, not taking constraints into account (e.g., ignoring, or taking them as negative instances) results in reasons that are subsumed by reasons that do take constraints into account. We illustrate this improved succinctness on synthetic classifiers and classifiers learnt from real data.
|
Niku Gorji, Sasha Rubin
| null | null | 2,022 |
aaai
|
ASP-Based Declarative Process Mining
| null |
We put forward Answer Set Programming (ASP) as a solution approach for three classical problems in Declarative Process Mining: Log Generation, Query Checking, and Conformance Checking. These problems correspond to different ways of analyzing business processes under execution, starting from sequences of recorded events, a.k.a. event logs. We tackle them in their data-aware variant, i.e., by considering events that carry a payload (set of attribute-value pairs), in addition to the performed activity, specifying processes declaratively with an extension of linear-time temporal logic over finite traces (LTLf). The data-aware setting is significantly more challenging than the control-flow one: Query Checking is still open, while the existing approaches for the other two problems do not scale well. The contributions of the work include an ASP encoding schema for the three problems, their solution, and experiments showing the feasibility of the approach.
|
Francesco Chiariello, Fabrizio Maria Maggi, Fabio Patrizi
| null | null | 2,022 |
aaai
|
Multi-Relational Graph Representation Learning with Bayesian Gaussian Process Network
| null |
Learning effective representations of entities and relations for knowledge graphs (KGs) is critical to the success of many multi-relational learning tasks. Existing methods based on graph neural networks learn a deterministic embedding function, which lacks sufficient flexibility to explore better choices when dealing with the imperfect and noisy KGs such as the scarce labeled nodes and noisy graph structure. To this end, we propose a novel multi-relational graph Gaussian Process network (GGPN), which aims to improve the flexibility of deterministic methods by simultaneously learning a family of embedding functions, i.e., a stochastic embedding function. Specifically, a Bayesian Gaussian Process (GP) is proposed to model the distribution of this stochastic function and the resulting representations are obtained by aggregating stochastic function values, i.e., messages, from neighboring entities. The two problems incurred when leveraging GP in GGPN are the proper choice of kernel function and the cubic computational complexity. To address the first problem, we further propose a novel kernel function that can explicitly take the diverse relations between each pair of entities into account and be adaptively learned in a data-driven way. We address the second problem by reformulating GP as a Bayesian linear model, resulting in a linear computational complexity. With these two solutions, our GGPN can be efficiently trained in an end-to-end manner. We evaluate our GGPN in link prediction and entity classification tasks, and the experimental results demonstrate the superiority of our method. Our code is available at https://github.com/sysu-gzchen/GGPN.
|
Guanzheng Chen, Jinyuan Fang, Zaiqiao Meng, Qiang Zhang, Shangsong Liang
| null | null | 2,022 |
aaai
|
Trading Complexity for Sparsity in Random Forest Explanations
| null |
Random forests have long been considered as powerful model ensembles in machine learning. By training multiple decision trees, whose diversity is fostered through data and feature subsampling, the resulting random forest can lead to more stable and reliable predictions than a single decision tree. This however comes at the cost of decreased interpretability: while decision trees are often easily interpretable, the predictions made by random forests are much more difficult to understand, as they involve a majority vote over multiple decision trees. In this paper, we examine different types of reasons that explain "why" an input instance is classified as positive or negative by a Boolean random forest. Notably, as an alternative to prime-implicant explanations taking the form of subset-minimal implicants of the random forest, we introduce majoritary reasons which are subset-minimal implicants of a strict majority of decision trees. For these abductive explanations, the tractability of the generation problem (finding one reason) and the optimization problem (finding one minimum-sized reason) are investigated. Unlike prime-implicant explanations, majoritary reasons may contain redundant features. However, in practice, prime-implicant explanations - for which the identification problem is DP-complete - are slightly larger than majoritary reasons that can be generated using a simple linear-time greedy algorithm. They are also significantly larger than minimum-sized majoritary reasons which can be approached using an anytime Partial MaxSAT algorithm.
|
Gilles Audemard, Steve Bellart, Louènas Bounia, Frédéric Koriche, Jean-Marie Lagniez, Pierre Marquis
| null | null | 2,022 |
aaai
|
Rushing and Strolling among Answer Sets – Navigation Made Easy
| null |
Answer set programming (ASP) is a popular declarative programming paradigm with a wide range of applications in artificial intelligence. Oftentimes, when modeling an AI problem with ASP, and in particular when we are interested in more than a simple search for optimal solutions, an actual solution, the differences between solutions, or the number of solutions of the ASP program matter. For example, a user may aim to identify a specific answer set according to her needs, or may require the total number of diverging solutions to comprehend probabilistic applications such as reasoning in medical domains. In such cases, only certain problem-specific and handcrafted encoding techniques are available to navigate the solution space of ASP programs, which is oftentimes not enough. In this paper, we propose a formal and general framework for interactive navigation toward desired subsets of answer sets, analogous to faceted browsing. Our approach enables the user to explore the solution space by consciously zooming in or out of sub-spaces of solutions at a certain configurable pace. We show that weighted faceted navigation is computationally hard. Finally, we provide an implementation of our approach that demonstrates the feasibility of our framework for incomprehensible solution spaces.
|
Johannes Klaus Fichte, Sarah Alice Gaggl, Dominik Rusovac
| null | null | 2,022 |
aaai
|
On Testing for Discrimination Using Causal Models
| null |
Consider a bank that uses an AI system to decide which loan applications to approve. We want to ensure that the system is fair, that is, it does not discriminate against applicants based on a predefined list of sensitive attributes, such as gender and ethnicity. We expect there to be a regulator whose job it is to certify the bank’s system as fair or unfair. We consider issues that the regulator will have to confront when making such a decision, including the precise definition of fairness, dealing with proxy variables, and dealing with what we call allowed variables, that is, variables such as salary on which the decision is allowed to depend, despite being correlated with sensitive variables. We show (among other things) that the problem of deciding fairness as we have defined it is co-NP-complete, but then argue that, despite that, in practice the problem should be manageable.
|
Hana Chockler, Joseph Y. Halpern
| null | null | 2,022 |
aaai
|
Geometry Interaction Knowledge Graph Embeddings
| null |
Knowledge graph (KG) embeddings have shown great power in learning representations of entities and relations for link prediction tasks. Previous work usually embeds KGs into a single geometric space such as Euclidean space (zero curved), hyperbolic space (negatively curved) or hyperspherical space (positively curved) to maintain their specific geometric structures (e.g., chain, hierarchy and ring structures). However, the topological structure of KGs appears to be complicated, since it may contain multiple types of geometric structures simultaneously. Therefore, embedding KGs in a single space, whether Euclidean, hyperbolic or hyperspherical, cannot capture the complex structures of KGs accurately. To overcome this challenge, we propose Geometry Interaction knowledge graph Embeddings (GIE), which learns spatial structures interactively between the Euclidean, hyperbolic and hyperspherical spaces. Theoretically, our proposed GIE can capture a richer set of relational information, model key inference patterns, and enable expressive semantic matching across entities. Experimental results on three well-established knowledge graph completion benchmarks show that our GIE achieves the state-of-the-art performance with fewer parameters.
|
Zongsheng Cao, Qianqian Xu, Zhiyong Yang, Xiaochun Cao, Qingming Huang
| null | null | 2,022 |
aaai
|
Monotone Abstractions in Ontology-Based Data Management
| null |
In Ontology-Based Data Management (OBDM), an abstraction of a source query q is a query over the ontology capturing the semantics of q in terms of the concepts and the relations available in the ontology. Since a perfect characterization of a source query may not exist, the notions of best sound and complete approximations of an abstraction have been introduced and studied in the typical OBDM context, i.e., in the case where the ontology is expressed in DL-Lite, and source queries are expressed as unions of conjunctive queries (UCQs). Interestingly, if we restrict our attention to abstractions expressed as UCQs, even best approximations of abstractions are not guaranteed to exist. Thus, a natural question to ask is whether such limitations affect even larger classes of queries. In this paper, we answer this fundamental question for an essential class of queries, namely the class of monotone queries. We define a monotone query language based on disjunctive Datalog enriched with an epistemic operator, and show that its expressive power suffices for expressing the best approximations of monotone abstractions of UCQs.
|
Gianluca Cima, Marco Console, Maurizio Lenzerini, Antonella Poggi
| null | null | 2,022 |
aaai
|
Lower Bounds on Intermediate Results in Bottom-Up Knowledge Compilation
| null |
Bottom-up knowledge compilation is a paradigm for generating representations of functions by iteratively conjoining constraints using a so-called apply function. When the input is not efficiently compilable into a language - generally a class of circuits - because optimal compiled representations are provably large, the problem lies not with the compilation algorithm so much as with the choice of a language too restrictive for the input. In contrast, in this paper, we look at CNF formulas for which very small circuits exist and study the efficiency of their bottom-up compilation in one of the most general languages, namely that of structured decomposable negation normal forms (str-DNNF). We prove that, while the inputs have constant-size representations as str-DNNF, any bottom-up compilation in the general setting where conjunction and structure modification are allowed takes exponential time and space, since large intermediate results have to be produced. This unconditionally proves that the inefficiency of bottom-up compilation resides in the bottom-up paradigm itself.
|
Alexis de Colnet, Stefan Mengel
| null | null | 2,022 |
aaai
|
Finite Entailment of Local Queries in the Z Family of Description Logics
| null |
In the last few years the field of logic-based knowledge representation took a lot of inspiration from database theory. A vital example is that the finite model semantics in description logics (DLs) is reconsidered as a desirable alternative to the classical one and that query entailment has replaced knowledge-base satisfiability (KBSat) checking as the key inference problem. However, despite the considerable effort, the overall picture concerning finite query answering in DLs is still incomplete. In this work we study the complexity of finite entailment of local queries (conjunctive queries and positive boolean combinations thereof) in the Z family of DLs, one of the most powerful KR formalisms, lying on the verge of decidability. Our main result is that the DLs ZOQ and ZOI are finitely controllable, i.e. that their finite and unrestricted entailment problems for local queries coincide. This allows us to reuse recently established upper bounds on querying these logics under the classical semantics. While we will not solve finite query entailment for the third main logic in the Z family, ZIQ, we provide a generic reduction from the finite entailment problem to the finite KBSat problem, working for ZIQ and some of its sublogics. Our proofs unify and solidify previously established results on finite satisfiability and finite query entailment for many known DLs.
|
Bartosz Bednarczyk, Emanuel Kieroński
| null | null | 2,022 |
aaai
|
From Actions to Programs as Abstract Actual Causes
| null |
Causality plays a central role in reasoning about observations. In many cases, it might be useful to define the conditions under which a non-deterministic program can be called an actual cause of an effect in a setting where a sequence of programs are executed one after another. There can be two perspectives, one where at least one execution of the program leads to the effect, and another where all executions do so. The former captures a ''weak'' notion of causation and is more general than the latter stronger notion. In this paper, we give a definition of weak potential causes. Our analysis is performed within the situation calculus basic action theories and we consider programs formulated in the logic programming language ConGolog. Within this setting, we show how one can utilize a recently developed abstraction framework to relate causes at various levels of abstraction, which facilitates reasoning about programs as causes.
|
Bita Banihashemi, Shakil M. Khan, Mikhail Soutchanski
| null | null | 2,022 |
aaai
|
Expressivity of Planning with Horn Description Logic Ontologies
| null |
State constraints in AI Planning globally restrict the legal environment states. Standard planning languages make closed-domain and closed-world assumptions. Here we address open-world state constraints formalized by planning over a description logic (DL) ontology. Previously, this combination of DL and planning has been investigated for the light-weight DL DL-Lite. Here we propose a novel compilation scheme into standard PDDL with derived predicates, which applies to more expressive DLs and is based on the rewritability of DL queries into Datalog with stratified negation. We also provide a new rewritability result for the DL Horn-ALCHOIQ, which allows us to apply our compilation scheme to quite expressive ontologies. In contrast, we show that in the slight extension Horn-SROIQ no such compilation is possible unless the weak exponential hierarchy collapses. Finally, we show that our approach can outperform previous work on existing benchmarks for planning with DL ontologies, and is feasible on new benchmarks taking advantage of more expressive ontologies.
|
Stefan Borgwardt, Jörg Hoffmann, Alisa Kovtunova, Markus Krötzsch, Bernhard Nebel, Marcel Steinmetz
| null | null | 2,022 |
aaai
|
Answering Queries with Negation over Existential Rules
| null |
Ontology-based query answering with existential rules is well understood and implemented for positive queries, in particular conjunctive queries. For queries with negation, however, there is no agreed-upon semantics or standard implementation. This problem is unknown for simpler rule languages, such as Datalog, where it is intuitive and practical to evaluate negative queries over the least model. This fails for existential rules, which instead of a single least model have multiple universal models that may not lead to the same results for negative queries. We therefore propose universal core models as a basis for a meaningful (non-monotonic) semantics for queries with negation. Since cores are hard to compute, we identify syntactic conditions (on rules and queries) under which our core-based semantics can equivalently be obtained for other universal models, such as those produced by practical chase algorithms. Finally, we use our findings to propose a semantics for a broad class of existential rules with negation.
|
Stefan Ellmauthaler, Markus Krötzsch, Stephan Mennicke
| null | null | 2,022 |
aaai
|
Large-Neighbourhood Search for Optimisation in Answer-Set Solving
| null |
While Answer-Set Programming (ASP) is a prominent approach to declarative problem solving, optimisation problems can still be a challenge for it. Large-Neighbourhood Search (LNS) is a metaheuristic for optimisation in which parts of a solution are alternately destroyed and reconstructed; it has high but untapped potential for ASP solving. We present a framework for LNS optimisation in answer-set solving, in which neighbourhoods can be specified either declaratively as part of the ASP encoding, or automatically generated by code. To effectively explore different neighbourhoods, we focus on multi-shot solving as it allows us to avoid program regrounding. We illustrate the framework on different optimisation problems, some of which are notoriously difficult, including shift planning and a parallel machine scheduling problem from semi-conductor production, which demonstrate the effectiveness of the LNS approach.
|
Thomas Eiter, Tobias Geibinger, Nelson Higuera Ruiz, Nysret Musliu, Johannes Oetsch, Daria Stepanova
| null | null | 2,022 |
aaai
|
The Price of Selfishness: Conjunctive Query Entailment for ALCSelf Is 2EXPTIME-Hard
| null |
In logic-based knowledge representation, query answering has essentially replaced mere satisfiability checking as the inferencing problem of primary interest. For knowledge bases in the basic description logic ALC, the computational complexity of conjunctive query (CQ) answering is well known to be EXPTIME-complete and hence not harder than satisfiability. This does not change when the logic is extended by certain features (such as counting or role hierarchies), whereas adding others (inverses, nominals or transitivity together with role-hierarchies) turns CQ answering exponentially harder. We contribute to this line of results by showing the surprising fact that even extending ALC by just the Self operator – which proved innocuous in many other contexts – increases the complexity of CQ entailment to 2EXPTIME. As common for this type of problem, our proof establishes a reduction from alternating Turing machines running in exponential space, but several novel ideas and encoding tricks are required to make the approach work in that specific, restricted setting.
|
Bartosz Bednarczyk, Sebastian Rudolph
| null | null | 2,022 |
aaai
|
Equivalence in Argumentation Frameworks with a Claim-Centric View – Classical Results with Novel Ingredients
| null |
A common feature of non-monotonic logics is that the classical notion of equivalence does not preserve the intended meaning in light of additional information. Consequently, the term strong equivalence was coined in the literature and thoroughly investigated. In the present paper, the knowledge representation formalism under consideration is claim-augmented argumentation frameworks (CAFs), which provide a formal basis to analyze conclusion-oriented problems in argumentation by adopting a claim-focused perspective. CAFs extend Dung AFs by associating a claim to each argument representing its conclusion. In this paper, we investigate both ordinary and strong equivalence in CAFs. Thereby, we take into account the fact that one might either be interested in the actual arguments or their claims only. The former point of view naturally yields an extension of strong equivalence for AFs to the claim-based setting while the latter gives rise to a novel equivalence notion which is genuine for CAFs. We tailor, examine and compare these notions and obtain a comprehensive study of this matter for CAFs. We conclude by investigating the computational complexity of naturally arising decision problems.
|
Ringo Baumann, Anna Rapberger, Markus Ulbricht
| null | null | 2,022 |
aaai
|
ER: Equivariance Regularizer for Knowledge Graph Completion
| null |
Tensor factorization and distance-based models play important roles in knowledge graph completion (KGC). However, the relational matrices in KGC methods often induce a high model complexity, bearing a high risk of overfitting. As a remedy, researchers propose a variety of different regularizers such as the tensor nuclear norm regularizer. Our motivation is based on the observation that previous work only focuses on the “size” of the parametric space, while leaving the implicit semantic information largely untouched. To address this issue, we propose a new regularizer, namely, the Equivariance Regularizer (ER), which can suppress overfitting by leveraging the implicit semantic information. Specifically, ER can enhance the generalization ability of the model by employing the semantic equivariance between the head and tail entities. Moreover, it is a generic solution for both distance-based models and tensor-factorization-based models. Our experimental results indicate a clear and substantial improvement over the state-of-the-art relation prediction methods.
|
Zongsheng Cao, Qianqian Xu, Zhiyong Yang, Qingming Huang
| null | null | 2,022 |
aaai
|
Enforcement Heuristics for Argumentation with Deep Reinforcement Learning
| null |
In this paper, we present a learning-based approach to the symbolic reasoning problem of dynamic argumentation, where the knowledge about attacks between arguments is incomplete or evolving. Specifically, we employ deep reinforcement learning to learn which attack relations between arguments should be added or deleted in order to enforce the acceptability of (a set of) arguments. We show that our Graph Neural Network (GNN) architecture EGNN can learn a near optimal enforcement heuristic for all common argument-fixed enforcement problems, including problems for which no other (symbolic) solvers exist. We demonstrate that EGNN outperforms other GNN baselines and on enforcement problems with high computational complexity performs better than state-of-the-art symbolic solvers with respect to efficiency. Thus, we show our neuro-symbolic approach is able to learn heuristics without the expert knowledge of a human designer and offers a valid alternative to symbolic solvers. We publish our code at https://github.com/DennisCraandijk/DL-Abstract-Argumentation.
|
Dennis Craandijk, Floris Bex
| null | null | 2,022 |
aaai
|
On the Complexity of Inductively Learning Guarded Clauses
| null |
We investigate the computational complexity of mining guarded clauses from clausal datasets through the framework of inductive logic programming (ILP). We show that learning guarded clauses is NP-complete and thus one step below the Sigma2-complete task of learning Horn clauses on the polynomial hierarchy. Motivated by practical applications on large datasets we identify a natural tractable fragment of the problem. Finally, we also generalise all of our results to k-guarded clauses for constant k.
|
Andrei Draghici, Georg Gottlob, Matthias Lanzinger
| null | null | 2,022 |
aaai
|
Tractable Abstract Argumentation via Backdoor-Treewidth
| null |
Argumentation frameworks (AFs) are a core formalism in the field of formal argumentation. As most standard computational tasks regarding AFs are hard for the first or second level of the Polynomial Hierarchy, a variety of algorithmic approaches to achieve manageable runtimes have been considered in the past. Among them, the backdoor-approach and the treewidth-approach turned out to yield fixed-parameter tractable fragments. However, many applications yield high parameter values for these methods, often rendering them infeasible in practice. We introduce the backdoor-treewidth approach for abstract argumentation, combining the best of both worlds with a guaranteed parameter value that does not exceed the minimum of the backdoor- and treewidth-parameter. In particular, we formally define backdoor-treewidth and establish fixed-parameter tractability for standard reasoning tasks of abstract argumentation. Moreover, we provide systems to find and exploit backdoors of small width, and conduct systematic experiments evaluating the new parameter.
|
Wolfgang Dvořák, Markus Hecher, Matthias König, André Schidler, Stefan Szeider, Stefan Woltran
| null | null | 2,022 |
aaai
|
On the Computation of Necessary and Sufficient Explanations
| null |
The complete reason behind a decision is a Boolean formula that characterizes why the decision was made. This recently introduced notion has a number of applications, which include generating explanations, detecting decision bias and evaluating counterfactual queries. Prime implicants of the complete reason are known as sufficient reasons for the decision and they correspond to what is known as PI explanations and abductive explanations. In this paper, we refer to the prime implicates of a complete reason as necessary reasons for the decision. We justify this terminology semantically and show that necessary reasons correspond to what is known as contrastive explanations. We also study the computation of complete reasons for multi-class decision trees and graphs with nominal and numeric features for which we derive efficient, closed-form complete reasons. We further investigate the computation of shortest necessary and sufficient reasons for a broad class of complete reasons, which include the derived closed forms and the complete reasons for Sentential Decision Diagrams (SDDs). We provide an algorithm which can enumerate their shortest necessary reasons in output polynomial time. Enumerating shortest sufficient reasons for this class of complete reasons is hard even for a single reason. For this problem, we provide an algorithm that appears to be quite efficient as we show empirically.
|
Adnan Darwiche, Chunxi Ji
| null | null | 2,022 |
aaai
|
Incomplete Argumentation Frameworks: Properties and Complexity
| null |
Dung’s Argumentation Framework (AF) has been extended in several directions, including the possibility of representing unquantified uncertainty about the existence of arguments and attacks. The framework resulting from such an extension is called incomplete AF (iAF). In this paper, we first introduce three new satisfaction problems named totality, determinism and functionality, and investigate their computational complexity for both AF and iAF under several semantics. We also investigate the complexity of credulous and skeptical acceptance in iAF under semi-stable semantics—a problem left open in the literature. We then show that any iAF can be rewritten into an equivalent one where either only (unattacked) arguments or only attacks are uncertain. Finally, we relate iAF to probabilistic argumentation framework, where uncertainty is quantified.
|
Gianvincenzo Alfano, Sergio Greco, Francesco Parisi, Irina Trubitsyna
| null | null | 2,022 |
aaai
|
Machine Learning for Utility Prediction in Argument-Based Computational Persuasion
| null |
Automated persuasion systems (APS) aim to persuade a user to believe something by entering into a dialogue in which arguments and counterarguments are exchanged. To maximize the probability that an APS is successful in persuading a user, it can identify a global policy that will allow it to select the best arguments it presents at each stage of the dialogue whatever arguments the user presents. However, in real applications, such as for healthcare, it is unlikely the utility of the outcome of the dialogue will be the same, or the exact opposite, for the APS and user. In order to deal with this situation, games in extended form have been harnessed for argumentation in Bi-party Decision Theory. This opens new problems that we address in this paper: (1) How can we use Machine Learning (ML) methods to predict utility functions for different subpopulations of users? and (2) How can we identify for a new user the best utility function from amongst those that we have learned? To this end, we develop two ML methods, EAI and EDS, that leverage information coming from the users to predict their utilities. EAI is restricted to a fixed amount of information, whereas EDS can choose the information that best detects the subpopulations of a user. We evaluate EAI and EDS in a simulation setting and in a realistic case study concerning healthy eating habits. Results are promising in both cases, but EDS is more effective at predicting useful utility functions.
|
Ivan Donadello, Anthony Hunter, Stefano Teso, Mauro Dragoni
| null | null | 2,022 |
aaai
|
Anytime Guarantees under Heavy-Tailed Data
| null |
Under data distributions which may be heavy-tailed, many stochastic gradient-based learning algorithms are driven by feedback queried at points with almost no performance guarantees on their own. Here we explore a modified "anytime online-to-batch" mechanism which for smooth objectives admits high-probability error bounds while requiring only lower-order moment bounds on the stochastic gradients. Using this conversion, we can derive a wide variety of "anytime robust" procedures, for which the task of performance analysis can be effectively reduced to regret control, meaning that existing regret bounds (for the bounded gradient case) can be robustified and leveraged in a straightforward manner. As a direct takeaway, we obtain an easily implemented stochastic gradient-based algorithm for which all queried points formally enjoy sub-Gaussian error bounds, and in practice show noteworthy gains on real-world data applications.
|
Matthew J. Holland
| null | null | 2,022 |
aaai
|
Towards Automating Model Explanations with Certified Robustness Guarantees
| null |
Providing model explanations has gained significant popularity recently. In contrast with the traditional feature-level model explanations, concept-based explanations can provide explanations in the form of high-level human concepts. However, existing concept-based explanation methods implicitly follow a two-step procedure that involves human intervention. Specifically, they first need the human to be involved to define (or extract) the high-level concepts, and then manually compute the importance scores of these identified concepts in a post-hoc way. This laborious process requires significant human effort and resource expenditure due to manual work, which hinders their large-scale deployability. In practice, it is challenging to automatically generate the concept-based explanations without human intervention due to the subjectivity of defining the units of concept-based interpretability. In addition, due to its data-driven nature, the interpretability itself is also potentially susceptible to malicious manipulations. Hence, our goal in this paper is to free humans from this tedious process, while ensuring that the generated explanations are provably robust to adversarial perturbations. We propose a novel concept-based interpretation method, which can not only automatically provide the prototype-based concept explanations but also provide certified robustness guarantees for the generated prototype-based explanations. We also conduct extensive experiments on real-world datasets to verify the desirable properties of the proposed method.
|
Mengdi Huai, Jinduo Liu, Chenglin Miao, Liuyi Yao, Aidong Zhang
| null | null | 2,022 |
aaai
|
Multi-Mode Tensor Space Clustering Based on Low-Tensor-Rank Representation
| null |
Traditional subspace clustering aims to cluster data lying in a union of linear subspaces. The vectorization of high-dimensional data to 1-D vectors to perform clustering ignores much of the structure intrinsic to such data. To preserve said structure, in this work we exploit clustering in a high-order tensor space rather than a vector space. We develop a novel low-tensor-rank representation (LTRR) for unfolded matrices of tensor data lying in a low-rank tensor space. The representation coefficient matrix of an unfolding matrix is tensorized to a 3-order tensor, and the low-tensor-rank constraint is imposed on the transformed coefficient tensor to exploit the self-expressiveness property. Then, inspired by the multi-view clustering framework, we develop a multi-mode tensor space clustering algorithm (MMTSC) that can deal with tensor space clustering with or without missing entries. The tensor is unfolded along each mode, and the coefficient matrices are obtained for each unfolded matrix. The low tensor rank constraint is imposed on a tensor combined from transformed coefficient tensors of each mode, such that the proposed method can simultaneously capture the low rank property for the data within each tensor space and maintain cluster consistency across different modes. Experimental results demonstrate that the proposed MMTSC algorithm can outperform existing clustering algorithms in many cases.
|
Yicong He, George K. Atia
| null | null | 2,022 |
aaai
|
Adversarial Examples Can Be Effective Data Augmentation for Unsupervised Machine Learning
| null |
Adversarial examples causing evasive predictions are widely used to evaluate and improve the robustness of machine learning models. However, current studies focus on supervised learning tasks, relying on the ground truth data label, a targeted objective, or supervision from a trained classifier. In this paper, we propose a framework of generating adversarial examples for unsupervised models and demonstrate novel applications to data augmentation. Our framework exploits a mutual information neural estimator as an information theoretic similarity measure to generate adversarial examples without supervision. We propose a new MinMax algorithm with provable convergence guarantees for the efficient generation of unsupervised adversarial examples. Our framework can also be extended to supervised adversarial examples. When using unsupervised adversarial examples as a simple plugin data augmentation tool for model retraining, significant improvements are consistently observed across different unsupervised tasks and datasets, including data reconstruction, representation learning, and contrastive learning. Our results show novel methods and considerable advantages in studying and improving unsupervised machine learning via adversarial examples.
|
Chia-Yi Hsu, Pin-Yu Chen, Songtao Lu, Sijia Liu, Chia-Mu Yu
| null | null | 2,022 |
aaai
|
Globally Optimal Hierarchical Reinforcement Learning for Linearly-Solvable Markov Decision Processes
| null |
We present a novel approach to hierarchical reinforcement learning for linearly-solvable Markov decision processes. Our approach assumes that the state space is partitioned, and defines subtasks for moving between the partitions. We represent value functions on several levels of abstraction, and use the compositionality of subtasks to estimate the optimal values of the states in each partition. The policy is implicitly defined on these optimal value estimates, rather than being decomposed among the subtasks. As a consequence, our approach can learn the globally optimal policy, and does not suffer from non-stationarities induced by high-level decisions. If several partitions have equivalent dynamics, the subtasks of those partitions can be shared. We show that our approach is significantly more sample efficient than that of a flat learner and similar hierarchical approaches when the set of boundary states is smaller than the entire state space.
|
Guillermo Infante, Anders Jonsson, Vicenç Gómez
| null | null | 2,022 |
aaai
|
Multi-View Clustering on Topological Manifold
| null |
Multi-view clustering has received a lot of attention in data mining recently. Though plenty of works have investigated this topic, it remains a severe challenge due to the complex nature of the multiple heterogeneous features. In particular, existing multi-view clustering algorithms fail to consider the topological structure in the data, which is essential for clustering data on a manifold. In this paper, we propose to exploit the implied data manifold by learning the topological relationship between data points. Our method coalesces multiple view-wise graphs with the topological relevance considered, and learns the weights as well as the consensus graph interactively in a unified framework. Furthermore, we manipulate the consensus graph via a connectivity constraint such that the data points from the same cluster are precisely connected into the same component. Substantial experiments on both toy data and real datasets are conducted to validate the effectiveness of the proposed method compared to state-of-the-art algorithms in terms of clustering performance.
|
Shudong Huang, Ivor Tsang, Zenglin Xu, Jiancheng Lv, Quan-Hui Liu
| null | null | 2,022 |
aaai
|
Toward Physically Realizable Quantum Neural Networks
| null |
There has been significant recent interest in quantum neural networks (QNNs), along with their applications in diverse domains. Current solutions for QNNs pose significant challenges concerning their scalability, ensuring that the postulates of quantum mechanics are satisfied and that the networks are physically realizable. The exponential state space of QNNs poses challenges for the scalability of training procedures. The no-cloning principle prohibits making multiple copies of training samples, and the measurement postulates lead to non-deterministic loss functions. Consequently, the physical realizability and efficiency of existing approaches that rely on repeated measurement of several copies of each sample for training QNNs are unclear. This paper presents a new model for QNNs that relies on band-limited Fourier expansions of transfer functions of quantum perceptrons (QPs) to design scalable training procedures. This training procedure is augmented with a randomized quantum stochastic gradient descent technique that eliminates the need for sample replication. We show that this training procedure converges to the true minima in expectation, even in the presence of non-determinism due to quantum measurement. Our solution has a number of important benefits: (i) using QPs with concentrated Fourier power spectrum, we show that the training procedure for QNNs can be made scalable; (ii) it eliminates the need for resampling, thus staying consistent with the no-cloning rule; and (iii) enhanced data efficiency for the overall training process since each data sample is processed once per epoch. We present a detailed theoretical foundation for our models and methods' scalability, accuracy, and data efficiency. We also validate the utility of our approach through a series of numerical experiments.
|
Mohsen Heidari, Ananth Grama, Wojciech Szpankowski
| null | null | 2,022 |
aaai
|
Uncertainty-Aware Learning against Label Noise on Imbalanced Datasets
| null |
Learning against label noise is a vital topic to guarantee a reliable performance for deep neural networks. Recent research usually refers to dynamic noise modeling with model output probabilities and loss values, and then separates clean and noisy samples. These methods have gained notable success. However, unlike cherry-picked data, existing approaches often cannot perform well when facing imbalanced datasets, a common scenario in the real world. We thoroughly investigate this phenomenon and point out two major issues that hinder the performance, i.e., inter-class loss distribution discrepancy and misleading predictions due to uncertainty. The first issue is that existing methods often perform class-agnostic noise modeling. However, loss distributions show a significant discrepancy among classes under class imbalance, and class-agnostic noise modeling can easily get confused with noisy samples and samples in minority classes. The second issue is that models may output misleading predictions due to epistemic uncertainty and aleatoric uncertainty; thus, existing methods that rely solely on the output probabilities may fail to distinguish confident samples. Inspired by our observations, we propose an Uncertainty-aware Label Correction framework (ULC) to handle label noise on imbalanced datasets. First, we perform epistemic uncertainty-aware class-specific noise modeling to identify trustworthy clean samples and refine/discard highly confident true/corrupted labels. Then, we introduce aleatoric uncertainty in the subsequent learning process to prevent noise accumulation in the label noise modeling process. We conduct experiments on several synthetic and real-world datasets. The results demonstrate the effectiveness of the proposed method, especially on imbalanced datasets.
|
Yingsong Huang, Bing Bai, Shengwei Zhao, Kun Bai, Fei Wang
| null | null | 2,022 |
aaai
|
Learning Large DAGs by Combining Continuous Optimization and Feedback Arc Set Heuristics
| null |
Bayesian networks represent relations between variables using a directed acyclic graph (DAG). Learning the DAG is an NP-hard problem and exact learning algorithms are feasible only for small sets of variables. We propose two scalable heuristics for learning DAGs in the linear structural equation case. Our methods learn the DAG by alternating between an unconstrained gradient-descent-based step to optimize an objective function and solving a maximum acyclic subgraph problem to enforce acyclicity. Thanks to this decoupling, our methods scale up beyond thousands of variables.
|
Pierre Gillot, Pekka Parviainen
| null | null | 2,022 |
aaai
|
Not All Parameters Should Be Treated Equally: Deep Safe Semi-supervised Learning under Class Distribution Mismatch
| null |
Deep semi-supervised learning (SSL) aims to utilize a sizeable unlabeled set to train deep networks, thereby reducing the dependence on labeled instances. However, the unlabeled set often carries unseen classes that cause the deep SSL algorithm to lose generalization. Previous works focus on the data level: they attempt to remove unseen-class data or assign them lower weight, but cannot eliminate their adverse effects on the SSL algorithm. Rather than focusing on the data level, this paper turns attention to the model parameter level. We find that only partial parameters are essential for seen-class classification, termed safe parameters. In contrast, the other parameters tend to fit irrelevant data, termed harmful parameters. Driven by this insight, we propose Safe Parameter Learning (SPL) to discover safe parameters and make the harmful parameters inactive, such that we can mitigate the adverse effects caused by unseen-class data. Specifically, we first design an effective strategy to divide all parameters in the pre-trained SSL model into safe and harmful ones. Then, we introduce a bi-level optimization strategy to update the safe parameters and kill the harmful parameters. Extensive experiments show that SPL outperforms the state-of-the-art SSL methods on all the benchmarks by a large margin. Moreover, experiments demonstrate that SPL can be integrated into the most popular deep SSL networks and be easily extended to handle other cases of class distribution mismatch.
|
Rundong He, Zhongyi Han, Yang Yang, Yilong Yin
| null | null | 2,022 |
aaai
|
Partial Multi-Label Learning via Large Margin Nearest Neighbour Embeddings
| null |
To deal with ambiguities in partial multi-label learning (PML), existing popular PML research attempts to perform disambiguation by direct ground-truth label identification. However, these approaches can be easily misled by noisy false-positive labels in the iteration of updating the model parameters and the latent ground-truth label variables. When labeling information is ambiguous, we should depend more on the underlying structure of the data, such as label and feature correlations, to perform disambiguation for partially labeled data. Moreover, large margin nearest neighbour (LMNN) is a popular strategy that considers data structure in classification. However, due to the ambiguity of labeling information in PML, traditional LMNN cannot be used to solve the PML problem directly. In addition, embedding is an effective technique to reduce noise in the data. Inspired by LMNN and embedding technology, we propose a novel PML paradigm called Partial Multi-label Learning via Large Margin Nearest Neighbour Embeddings (PML-LMNNE), which aims to conduct disambiguation by projecting labels and features into a lower-dimensional embedding space and simultaneously reorganizing the underlying structure by LMNN in the embedding space. An efficient algorithm is designed to implement the proposed method and the convergence rate of the algorithm is analyzed. Moreover, we present a theoretical analysis of the generalization error bound for the proposed PML-LMNNE, which shows that the generalization error converges to the sum of two times the Bayes error over the labels when the number of instances goes to infinity. Comprehensive experiments on artificial and real-world datasets demonstrate the superiority of the proposed PML-LMNNE.
|
Xiuwen Gong, Dong Yuan, Wei Bao
| null | null | 2,022 |
aaai
|
Recovering the Propensity Score from Biased Positive Unlabeled Data
| null |
Positive-Unlabeled (PU) learning methods train a classifier to distinguish between the positive and negative classes given only positive and unlabeled data. While traditional PU methods require the labeled positive samples to be an unbiased sample of the positive distribution, in practice the labeled sample is often a biased draw from the true distribution. Prior work shows that if we know the likelihood that each positive instance will be selected for labeling, referred to as the propensity score, then the biased sample can be used for PU learning. Unfortunately, no prior work has proposed an inference strategy for which the propensity score is identifiable. In this work, we propose two sets of assumptions under which the propensity score can be uniquely determined: one in which no assumption is made on the functional form of the propensity score (requiring assumptions on the data distribution), and the second which loosens the data assumptions while assuming a functional form for the propensity score. We then propose inference strategies for each case. Our empirical study shows that our approach significantly outperforms the state-of-the-art propensity estimation methods on a rich variety of benchmark datasets.
|
Walter Gerych, Thomas Hartvigsen, Luke Buquicchio, Emmanuel Agu, Elke Rundensteiner
| null | null | 2,022 |
aaai
|
Creativity of AI: Automatic Symbolic Option Discovery for Facilitating Deep Reinforcement Learning
| null |
Despite achieving great success in real life, Deep Reinforcement Learning (DRL) still suffers from three critical issues: data efficiency, lack of interpretability, and lack of transferability. Recent research shows that embedding symbolic knowledge into DRL is promising in addressing those challenges. Inspired by this, we introduce a novel deep reinforcement learning framework with symbolic options. This framework features a loop training procedure, which enables guiding the improvement of the policy by planning with action models and symbolic options learned automatically from interaction trajectories. The learned symbolic options help reduce the dense requirement of expert domain knowledge and provide inherent interpretability of policies. Moreover, transferability and data efficiency can be further improved by planning with the action models. To validate the effectiveness of this framework, we conduct experiments on two domains, Montezuma's Revenge and Office World; the results demonstrate comparable performance as well as improved data efficiency, interpretability and transferability.
|
Mu Jin, Zhihao Ma, Kebing Jin, Hankz Hankui Zhuo, Chen Chen, Chao Yu
| null | null | 2,022 |
aaai
|
Towards Discriminant Analysis Classifiers Using Online Active Learning via Myoelectric Interfaces
| null |
We propose a discriminant analysis (DA) classifier that uses online active learning to address the need for the frequent training of myoelectric interfaces due to covariate shift. This online classifier is initially trained using a small set of examples, and then updated over time using streaming data that are interactively labeled by a user or pseudo-labeled by a soft-labeling technique. We prove, theoretically, that this yields the same model as training a DA classifier via full batch learning. We then provide experimental evidence that our approach improves the performance of DA classifiers and is robust to mislabeled data, and that our soft-labeling technique has better performance than existing state-of-the-art methods. We argue that our proposal is suitable for real-time applications, as its time complexity w.r.t. the streaming data remains constant.
|
Andres G Jaramillo-Yanez, Marco E. Benalcázar, Sebastian Sardina, Fabio Zambetta
| null | null | 2,022 |
aaai
|
Causal Discovery in Hawkes Processes by Minimum Description Length
| null |
Hawkes processes are a special class of temporal point processes which exhibit a natural notion of causality, as the occurrence of events in the past may increase the probability of events in the future. Discovery of the underlying influence network among the dimensions of multi-dimensional temporal processes is of high importance in disciplines where high-frequency data is to be modeled, e.g. financial or seismological data. This paper approaches the problem of learning the Granger-causal network in multi-dimensional Hawkes processes. We formulate this problem as a model selection task in which we follow the minimum description length (MDL) principle. Moreover, we propose a general algorithm for MDL-based inference using a Monte-Carlo method and we use it for our causal discovery problem. We compare our algorithm with the state-of-the-art baseline methods on synthetic and real-world financial data. The synthetic experiments demonstrate the superiority of our method in causal graph discovery over the baseline methods with respect to the size of the data. The results of experiments with the G-7 bonds price data are consistent with the experts' knowledge.
|
Amirkasra Jalaldoust, Kateřina Hlaváčková-Schindler, Claudia Plant
| null | null | 2,022 |
aaai
|
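As a concrete illustration of the excitation mechanism this abstract builds on, here is a minimal one-dimensional Hawkes conditional intensity with an exponential kernel. The function name and parameter values are ours for illustration, not from the paper:

```python
import math

def hawkes_intensity(t, events, mu=0.2, alpha=0.8, beta=1.0):
    """Conditional intensity of a 1-D Hawkes process with exponential kernel:
        lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)).
    Each past event temporarily raises the probability of future events,
    which is the notion of (Granger) causality that MDL-based discovery
    operates on across the process dimensions.
    """
    return mu + sum(
        alpha * math.exp(-beta * (t - ti)) for ti in events if ti < t
    )

print(hawkes_intensity(5.0, []))       # no history: baseline rate mu = 0.2
print(hawkes_intensity(1.01, [1.0]))   # just after an event: excited rate
```

In the multi-dimensional case, each dimension's intensity sums kernels over events of all dimensions, and a nonzero cross-kernel is what the paper's method reads as a Granger-causal edge.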
LUNAR: Unifying Local Outlier Detection Methods via Graph Neural Networks
| null |
Many well-established anomaly detection methods use the distance of a sample to those in its local neighbourhood: so-called `local outlier methods', such as LOF and DBSCAN. They are popular for their simple principles and strong performance on unstructured, feature-based data that is commonplace in many practical applications. However, they cannot learn to adapt for a particular set of data due to their lack of trainable parameters. In this paper, we begin by unifying local outlier methods by showing that they are particular cases of the more general message passing framework used in graph neural networks. This allows us to introduce learnability into local outlier methods, in the form of a neural network, for greater flexibility and expressivity: specifically, we propose LUNAR, a novel, graph neural network-based anomaly detection method. LUNAR learns to use information from the nearest neighbours of each node in a trainable way to find anomalies. We show that our method performs significantly better than existing local outlier methods, as well as state-of-the-art deep baselines. We also show that the performance of our method is much more robust to different settings of the local neighbourhood size.
|
Adam Goodge, Bryan Hooi, See-Kiong Ng, Wee Siong Ng
| null | null | 2,022 |
aaai
|
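To make the message-passing view above concrete, the sketch below implements the simplest fixed-aggregation local outlier rule: score each point by the mean distance to its k nearest neighbours. Per the abstract, LUNAR's contribution is to replace such a hard-coded aggregation with a trainable graph neural network; the code itself is our illustration, not the paper's:

```python
import math

def knn_outlier_scores(points, k=3):
    """Mean distance to the k nearest neighbours, per point.

    This is one round of 'message passing' with a fixed aggregation:
    each node receives its neighbour distances and averages them, so a
    point far from its local neighbourhood gets a high outlier score.
    """
    scores = []
    for i, p in enumerate(points):
        dists = sorted(
            math.dist(p, q) for j, q in enumerate(points) if j != i
        )
        scores.append(sum(dists[:k]) / k)
    return scores

# A tight cluster plus one isolated point
points = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (5.0, 5.0)]
scores = knn_outlier_scores(points, k=3)
print(scores.index(max(scores)))  # 4: the isolated point scores highest
```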
Semi-supervised Conditional Density Estimation with Wasserstein Laplacian Regularisation
| null |
Conditional Density Estimation (CDE) has wide-reaching applicability to various real-world problems, such as spatial density estimation and environmental modelling. CDE estimates the probability density of a random variable rather than a single value and can thus model uncertainty and inverse problems. This task is inherently more complex than regression, and many algorithms suffer from overfitting, particularly when modelled with few labelled data points. For applications where unlabelled data is abundant but labelled data is scarce, we propose Wasserstein Laplacian Regularisation, a semi-supervised learning framework that allows CDE algorithms to leverage these unlabelled data. The framework minimises an objective function which ensures that the learned model is smooth along the manifold of the underlying data, as measured by Wasserstein distance. When applying our framework to Mixture Density Networks, the resulting semi-supervised algorithm can achieve similar performance to a supervised model with up to three times as many labelled data points on baseline datasets. We additionally apply our technique to the problem of remote sensing for chlorophyll-a estimation in inland waters.
|
Olivier Graffeuille, Yun Sing Koh, Jörg Wicker, Moritz K Lehmann
| null | null | 2,022 |
aaai
|
Improved Gradient-Based Adversarial Attacks for Quantized Networks
| null |
Neural network quantization has become increasingly popular due to efficient memory consumption and faster computation resulting from bitwise operations on the quantized networks. Even though they exhibit excellent generalization capabilities, their robustness properties are not well understood. In this work, we systematically study the robustness of quantized networks against gradient-based adversarial attacks and demonstrate that these quantized models suffer from gradient vanishing issues and exhibit a false sense of robustness. By attributing gradient vanishing to poor forward-backward signal propagation in the trained network, we introduce a simple temperature scaling approach to mitigate this issue while preserving the decision boundary. Despite being a simple modification to existing gradient-based adversarial attacks, experiments on multiple image classification datasets with multiple network architectures demonstrate that our temperature-scaled attacks obtain a near-perfect success rate on quantized networks while outperforming original attacks on adversarially trained models as well as floating-point networks.
|
Kartik Gupta, Thalaiyasingam Ajanthan
| null | null | 2,022 |
aaai
|
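The gradient-vanishing effect, and why dividing logits by a temperature restores attack gradients without moving the decision boundary (argmax of z/T equals argmax of z for any T > 0), can be seen on a toy linear classifier. This is our illustrative reconstruction, not the paper's exact attack:

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def input_grad_norm(x, W, label, T=1.0):
    """L2 norm of d(cross-entropy)/dx for a linear classifier z = Wx.

    Logits are divided by temperature T before the softmax. When the
    softmax is saturated (p is nearly one-hot) the input gradient
    vanishes; T > 1 softens the softmax and restores a usable gradient
    signal while leaving the predicted class unchanged.
    """
    z = [sum(w_i * x_i for w_i, x_i in zip(row, x)) / T for row in W]
    p = softmax(z)
    # dL/dz_c = (p_c - 1{c == label}) / T ;  dL/dx = W^T dL/dz
    dz = [(p_c - (1.0 if c == label else 0.0)) / T for c, p_c in enumerate(p)]
    dx = [sum(W[c][j] * dz[c] for c in range(len(W))) for j in range(len(x))]
    return math.sqrt(sum(v * v for v in dx))

x = [10.0, 0.0]
W = [[3.0, 0.0], [0.0, 3.0]]   # class-0 logit = 30: softmax is saturated
print(input_grad_norm(x, W, label=0, T=1.0))   # ~4e-13, gradient vanished
print(input_grad_norm(x, W, label=0, T=10.0))  # ~2e-2, usable attack signal
```

An FGSM-style attack would then perturb x along the sign of this (temperature-scaled) gradient.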
Regularized Modal Regression on Markov-Dependent Observations: A Theoretical Assessment
| null |
Modal regression, a widely used regression protocol, has been extensively investigated in the statistical and machine learning communities due to its robustness to outliers and heavy-tailed noise. Understanding modal regression's theoretical behavior is fundamental in learning theory. Despite significant progress in characterizing its statistical properties, the majority of results are based on the assumption that samples are independently and identically distributed (i.i.d.), which is too restrictive for real-world applications. This paper concerns the statistical properties of regularized modal regression (RMR) under an important dependence structure: Markov dependence. Specifically, we establish an upper bound for the RMR estimator under moderate conditions and give an explicit learning rate. Our results show that the Markov dependence impacts the generalization error in the sense that the sample size is discounted by a multiplicative factor depending on the spectral gap of the underlying Markov chain. This result sheds new light on characterizing the theoretical underpinnings of robust regression.
|
Tieliang Gong, Yuxin Dong, Hong Chen, Wei Feng, Bo Dong, Chen Li
| null | null | 2,022 |
aaai
|
Theoretical Guarantees of Fictitious Discount Algorithms for Episodic Reinforcement Learning and Global Convergence of Policy Gradient Methods
| null |
When designing algorithms for finite-time-horizon episodic reinforcement learning problems, a common approach is to introduce a fictitious discount factor and use stationary policies for approximations. Empirically, it has been shown that the fictitious discount factor helps reduce variance, and stationary policies serve to save the per-iteration computational cost. Theoretically, however, there is no existing work on convergence analysis for algorithms with this fictitious discount recipe. This paper takes the first step towards analyzing these algorithms. It focuses on two vanilla policy gradient (VPG) variants: the first being a widely used variant with discounted advantage estimations (DAE), the second with an additional fictitious discount factor in the score functions of the policy gradient estimators. Non-asymptotic convergence guarantees are established for both algorithms, and the additional discount factor is shown to reduce the bias introduced in DAE and thus improve the algorithm convergence asymptotically. A key ingredient of our analysis is to connect three settings of Markov decision processes (MDPs): the finite-time-horizon, the average reward and the discounted settings. To the best of our knowledge, this is the first theoretical guarantee on fictitious discount algorithms for the episodic reinforcement learning of finite-time-horizon MDPs, which also leads to the (first) global convergence of policy gradient methods for finite-time-horizon episodic reinforcement learning.
|
Xin Guo, Anran Hu, Junzi Zhang
| null | null | 2,022 |
aaai
|
GoTube: Scalable Statistical Verification of Continuous-Depth Models
| null |
We introduce a new statistical verification algorithm that formally quantifies the behavioral robustness of any time-continuous process formulated as a continuous-depth model. Our algorithm solves a set of global optimization (Go) problems over a given time horizon to construct a tight enclosure (Tube) of the set of all process executions starting from a ball of initial states. We call our algorithm GoTube. Through its construction, GoTube ensures that the bounding tube is conservative up to a desired probability and up to a desired tightness. GoTube is implemented in JAX and optimized to scale to complex continuous-depth neural network models. Compared to advanced reachability analysis tools for time-continuous neural networks, GoTube does not accumulate overapproximation errors between time steps and avoids the infamous wrapping effect inherent in symbolic techniques. We show that GoTube substantially outperforms state-of-the-art verification tools in terms of the size of the initial ball, speed, time-horizon, task completion, and scalability on a large set of experiments. GoTube is stable and sets the state-of-the-art in terms of its ability to scale to time horizons well beyond what has been previously possible.
|
Sophie A. Gruenbacher, Mathias Lechner, Ramin Hasani, Daniela Rus, Thomas A. Henzinger, Scott A. Smolka, Radu Grosu
| null | null | 2,022 |
aaai
|
Balanced Self-Paced Learning for AUC Maximization
| null |
Learning to improve AUC performance is an important topic in machine learning. However, AUC maximization algorithms may decrease generalization performance due to noisy data. Self-paced learning is an effective method for handling noisy data. However, existing self-paced learning methods are limited to pointwise learning, while AUC maximization is a pairwise learning problem. To solve this challenging problem, we propose a balanced self-paced AUC maximization algorithm (BSPAUC). Specifically, we first provide a statistical objective for self-paced AUC. Based on this, we propose our self-paced AUC maximization formulation, where a novel balanced self-paced regularization term is embedded to ensure that the selected positive and negative samples have proper proportions. Notably, the sub-problem with respect to all weight variables may be non-convex in our formulation, whereas it is normally convex in existing self-paced problems. To address this, we propose a doubly cyclic block coordinate descent method. More importantly, we prove that the sub-problem with respect to all weight variables converges to a stationary point on the basis of closed-form solutions, and that our BSPAUC converges to a stationary point of our fixed optimization objective under a mild assumption. Considering both deep learning and kernel-based implementations, experimental results on several large-scale datasets demonstrate that our BSPAUC has better generalization performance than existing state-of-the-art AUC maximization methods.
|
Bin Gu, Chenkang Zhang, Huan Xiong, Heng Huang
| null | null | 2,022 |
aaai
|
Cross-Domain Few-Shot Graph Classification
| null |
We study the problem of few-shot graph classification across domains with nonequivalent feature spaces by introducing three new cross-domain benchmarks constructed from publicly available datasets. We also propose an attention-based graph encoder that uses three congruent views of graphs, one contextual and two topological views, to learn representations of task-specific information for fast adaptation, and task-agnostic information for knowledge transfer. We run exhaustive experiments to evaluate the performance of contrastive and meta-learning strategies. We show that when coupled with metric-based meta-learning frameworks, the proposed encoder achieves the best average meta-test classification accuracy across all benchmarks.
|
Kaveh Hassani
| null | null | 2,022 |
aaai
|
End-to-End Probabilistic Label-Specific Feature Learning for Multi-Label Classification
| null |
Label-specific features serve as an effective strategy to learn from multi-label data with tailored features accounting for the distinct discriminative properties of each class label. Existing prototype-based label-specific feature transformation approaches work in a three-stage framework, where prototype acquisition, label-specific feature generation and classification model induction are performed independently. Intuitively, this separate framework is suboptimal due to its decoupling nature. In this paper, we make a first attempt towards a unified framework for prototype-based label-specific feature transformation, where the prototypes and the label-specific features are directly optimized for classification. To instantiate it, we propose modelling the prototypes probabilistically by the normalizing flows, which possess adaptive prototypical complexity to fully capture the underlying properties of each class label and allow for scalable stochastic optimization. Then, a label correlation regularized probabilistic latent metric space is constructed via jointly learning the prototypes and the metric-based label-specific features for classification. Comprehensive experiments on 14 benchmark data sets show that our approach outperforms the state-of-the-art counterparts.
|
Jun-Yi Hang, Min-Ling Zhang, Yanghe Feng, Xiaocheng Song
| null | null | 2,022 |
aaai
|
TIGGER: Scalable Generative Modelling for Temporal Interaction Graphs
| null |
There has been a recent surge in learning generative models for graphs. While impressive progress has been made on static graphs, work on generative modeling of temporal graphs is at a nascent stage with significant scope for improvement. First, existing generative models do not scale with either the time horizon or the number of nodes. Second, existing techniques are transductive in nature and thus do not facilitate knowledge transfer. Finally, due to relying on one-to-one node mapping from the source to the generated graph, existing models leak node identity information and do not allow up-scaling/down-scaling the source graph size. In this paper, we bridge these gaps with a novel generative model called TIGGER. TIGGER derives its power through a combination of temporal point processes with auto-regressive modeling, enabling both transductive and inductive variants. Through extensive experiments on real datasets, we establish that TIGGER generates graphs of superior fidelity, while also being up to 3 orders of magnitude faster than the state-of-the-art.
|
Shubham Gupta, Sahil Manchanda, Srikanta Bedathur, Sayan Ranu
| null | null | 2,022 |
aaai
|
Adaptive Orthogonal Projection for Batch and Online Continual Learning
| null |
Catastrophic forgetting is a key obstacle to continual learning. One of the state-of-the-art approaches is orthogonal projection. The idea of this approach is to learn each task by updating the network parameters or weights only in the direction orthogonal to the subspace spanned by all previous task inputs. This ensures no interference with tasks that have been learned. The OWM system, which uses this idea, performs very well against other state-of-the-art systems. In this paper, we first discuss an issue that we discovered in the mathematical derivation of this approach and then propose a novel method, called AOP (Adaptive Orthogonal Projection), to resolve it. AOP yields significant accuracy gains in empirical evaluations in both the batch and online continual learning settings, without saving any previous training data as replay-based methods do.
|
Yiduo Guo, Wenpeng Hu, Dongyan Zhao, Bing Liu
| null | null | 2,022 |
aaai
|
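The orthogonal-projection idea is easy to state concretely: remove from the gradient every component lying in the span of previous task inputs, so that the weight update cannot change the network's outputs on those inputs. The Gram-Schmidt sketch below is our toy version; OWM and AOP maintain a recursively updated projector rather than re-orthogonalizing from scratch:

```python
def project_orthogonal(grad, prev_inputs):
    """Project `grad` onto the subspace orthogonal to all previous task
    inputs. Build an orthonormal basis of the input span (Gram-Schmidt),
    then subtract each basis component from the gradient.
    """
    basis = []
    for x in prev_inputs:
        v = list(x)
        for b in basis:
            dot = sum(vi * bi for vi, bi in zip(v, b))
            v = [vi - dot * bi for vi, bi in zip(v, b)]
        n = sum(vi * vi for vi in v) ** 0.5
        if n > 1e-12:                       # skip inputs already in the span
            basis.append([vi / n for vi in v])
    g = list(grad)
    for b in basis:
        dot = sum(gi * bi for gi, bi in zip(g, b))
        g = [gi - dot * bi for gi, bi in zip(g, b)]
    return g

# The component along the old input [2, 0, 0] is removed entirely
print(project_orthogonal([1.0, 1.0, 0.0], [[2.0, 0.0, 0.0]]))  # [0.0, 1.0, 0.0]
```

For a linear layer y = Wx, updating W by an outer product of the projected gradient with any signal leaves y unchanged on every previous input, which is the no-interference guarantee the abstract describes.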
Learning Action Translator for Meta Reinforcement Learning on Sparse-Reward Tasks
| null |
Meta reinforcement learning (meta-RL) aims to learn a policy solving a set of training tasks simultaneously and quickly adapting to new tasks. It requires massive amounts of data drawn from training tasks to infer the common structure shared among tasks. Without heavy reward engineering, the sparse rewards in long-horizon tasks exacerbate the problem of sample efficiency in meta-RL. Another challenge in meta-RL is the discrepancy of difficulty level among tasks, which might cause one easy task dominating learning of the shared policy and thus preclude policy adaptation to new tasks. This work introduces a novel objective function to learn an action translator among training tasks. We theoretically verify that the value of the transferred policy with the action translator can be close to the value of the source policy and that our objective function (approximately) upper bounds the value difference. We propose to combine the action translator with context-based meta-RL algorithms for better data collection and more efficient exploration during meta-training. Our approach empirically improves the sample efficiency and performance of meta-RL algorithms on sparse-reward tasks.
|
Yijie Guo, Qiucheng Wu, Honglak Lee
| null | null | 2,022 |
aaai
|
Scaling Neural Program Synthesis with Distribution-Based Search
| null |
We consider the problem of automatically constructing computer programs from input-output examples. We investigate how to augment probabilistic and neural program synthesis methods with new search algorithms, proposing a framework called distribution-based search. Within this framework, we introduce two new search algorithms: Heap Search, an enumerative method, and SQRT Sampling, a probabilistic method. We prove certain optimality guarantees for both methods, show how they integrate with probabilistic and neural techniques, and demonstrate how they can operate at scale across parallel compute environments. Collectively these findings offer theoretical and applied studies of search algorithms for program synthesis that integrate with recent developments in machine-learned program synthesizers.
|
Nathanaël Fijalkow, Guillaume Lagarde, Théo Matricon, Kevin Ellis, Pierre Ohlmann, Akarsh Nayan Potta
| null | null | 2,022 |
aaai
|
A Generalized Bootstrap Target for Value-Learning, Efficiently Combining Value and Feature Predictions
| null |
Estimating value functions is a core component of reinforcement learning algorithms. Temporal difference (TD) learning algorithms use bootstrapping, i.e. they update the value function toward a learning target using value estimates at subsequent time-steps. Alternatively, the value function can be updated toward a learning target constructed by separately predicting successor features (SF)—a policy-dependent model—and linearly combining them with instantaneous rewards. We focus on bootstrapping targets used when estimating value functions, and propose a new backup target, the η-return mixture, which implicitly combines value-predictive knowledge (used by TD methods) with (successor) feature-predictive knowledge—with a parameter η capturing how much to rely on each. We illustrate that incorporating predictive knowledge through an ηγ-discounted SF model makes more efficient use of sampled experience, compared to either extreme, i.e. bootstrapping entirely on the value function estimate, or bootstrapping on the product of separately estimated successor features and instantaneous reward models. We empirically show this approach leads to faster policy evaluation and better control performance, for tabular and nonlinear function approximations, indicating scalability and generality.
|
Anthony GX-Chen, Veronica Chelu, Blake A. Richards, Joelle Pineau
| null | null | 2,022 |
aaai
|
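In spirit, the mixed backup target interpolates between bootstrapping on the value estimate V(s') and bootstrapping on a reward-weighted successor-feature prediction. Below is a toy one-step version of that interpolation (our simplification for illustration, not the paper's exact return mixture):

```python
def mixture_target(r, gamma, v_next, sf_next, w, eta):
    """One-step backup target mixing value and successor-feature bootstraps.

    r       : instantaneous reward
    v_next  : bootstrapped value estimate V(s')
    sf_next : successor-feature vector psi(s')
    w       : learned reward weights, so w . psi approximates the value
    eta     : 0 -> plain TD target; 1 -> purely SF-based target
    """
    sf_value = sum(wi * fi for wi, fi in zip(w, sf_next))
    return r + gamma * (eta * sf_value + (1.0 - eta) * v_next)

# eta = 0 bootstraps only on V(s'); eta = 1 only on w . psi(s')
td = mixture_target(1.0, 0.9, v_next=2.0, sf_next=[1.0, 2.0], w=[1.0, 1.0], eta=0.0)
sf = mixture_target(1.0, 0.9, v_next=2.0, sf_next=[1.0, 2.0], w=[1.0, 1.0], eta=1.0)
print(td, sf)
```

Intermediate η blends the two knowledge sources, which is the trade-off the abstract's parameter controls.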
Disentangled Spatiotemporal Graph Generative Models
| null |
Spatiotemporal graphs are a crucial data structure where the nodes and edges are embedded in a geometric space and their attribute values can evolve dynamically over time. Nowadays, spatiotemporal graph data is becoming increasingly popular and important, ranging from micro-scale (e.g. protein folding), to middle-scale (e.g. dynamic functional connectivity), to macro-scale (e.g. human mobility networks). Although disentangling and understanding the correlations among spatial, temporal, and graph aspects have been a long-standing key topic in network science, existing approaches typically rely on network processes hypothesized by human knowledge. They usually fit well the properties that the predefined principles are tailored for, but perform poorly on the others, especially in the many key domains where humans still have very limited knowledge, such as protein folding and biological neuronal networks. In this paper, we aim at pushing forward the modeling and understanding of spatiotemporal graphs via new disentangled deep generative models. Specifically, a new Bayesian model is proposed that factorizes spatiotemporal graphs into spatial, temporal, and graph factors as well as the factors that explain the interplay among them. A variational objective function and new mutual information thresholding algorithms driven by information bottleneck theory have been proposed to maximize the disentanglement among the factors with theoretical guarantees. Qualitative and quantitative experiments on both synthetic and real-world datasets demonstrate the superiority of the proposed model over the state-of-the-arts by up to 69.2% for graph generation and 41.5% for interpretability.
|
Yuanqi Du, Xiaojie Guo, Hengning Cao, Yanfang Ye, Liang Zhao
| null | null | 2,022 |
aaai
|
Self-Supervised Pre-training for Protein Embeddings Using Tertiary Structures
| null |
The protein tertiary structure largely determines its interaction with other molecules. Despite its importance in various structure-related tasks, fully-supervised data are often time-consuming and costly to obtain. Existing pre-training models mostly focus on amino-acid sequences or multiple sequence alignments, while the structural information is not yet exploited. In this paper, we propose a self-supervised pre-training model for learning structure embeddings from protein tertiary structures. Native protein structures are perturbed with random noise, and the pre-training model aims at estimating gradients over perturbed 3D structures. Specifically, we adopt SE(3)-invariant features as model inputs and reconstruct gradients over 3D coordinates with SE(3)-equivariance preserved. Such paradigm avoids the usage of sophisticated SE(3)-equivariant models, and dramatically improves the computational efficiency of pre-training models. We demonstrate the effectiveness of our pre-training model on two downstream tasks, protein structure quality assessment (QA) and protein-protein interaction (PPI) site prediction. Hierarchical structure embeddings are extracted to enhance corresponding prediction models. Extensive experiments indicate that such structure embeddings consistently improve the prediction accuracy for both downstream tasks.
|
Yuzhi Guo, Jiaxiang Wu, Hehuan Ma, Junzhou Huang
| null | null | 2,022 |
aaai
|
Learning Aligned Cross-Modal Representation for Generalized Zero-Shot Classification
| null |
Learning a common latent embedding by aligning the latent spaces of cross-modal autoencoders is an effective strategy for Generalized Zero-Shot Classification (GZSC). However, due to the lack of fine-grained instance-wise annotations, it still easily suffers from the domain shift problem caused by the discrepancy between the visual representations of diversified images and the semantic representations of fixed attributes. In this paper, we propose an innovative autoencoder network that learns Aligned Cross-Modal Representations (dubbed ACMR) for GZSC. Specifically, we propose a novel Vision-Semantic Alignment (VSA) method to strengthen the alignment of cross-modal latent features on the latent subspaces guided by a learned classifier. In addition, we propose a novel Information Enhancement Module (IEM) to reduce the possibility of latent variable collapse while encouraging the discriminative ability of latent variables. Extensive experiments on publicly available datasets demonstrate the state-of-the-art performance of our method.
|
Zhiyu Fang, Xiaobin Zhu, Chun Yang, Zheng Han, Jingyan Qin, Xu-Cheng Yin
| null | null | 2,022 |
aaai
|
Dynamic Nonlinear Matrix Completion for Time-Varying Data Imputation
| null |
Classical matrix completion methods focus on data with stationary latent structure and hence are not effective at missing value imputation when the latent structure changes with time. This paper proposes a dynamic nonlinear matrix completion (D-NLMC) method, which is able to recover the missing values of streaming data when the low-dimensional nonlinear latent structure of the data changes over time. The paper provides an efficient approach to updating the nonlinear model dynamically: D-NLMC incorporates the information of new data and removes the information of earlier data recursively. The paper shows that the missing data can be estimated if the change of latent structure is slow enough. Different from existing online or adaptive low-rank matrix completion methods, D-NLMC does not require the local low-rank assumption and is able to adaptively recover high-rank matrices with low-dimensional latent structures. Note that existing high-rank matrix completion methods have high computational costs and are not applicable to streaming data with varying latent structures, which fortunately can be handled by D-NLMC efficiently and accurately. Numerical results show that D-NLMC outperforms the baselines in real applications.
|
Jicong Fan
| null | null | 2,022 |
aaai
|
Adaptive and Universal Algorithms for Variational Inequalities with Optimal Convergence
| null |
We develop new adaptive algorithms for variational inequalities with monotone operators, which capture many problems of interest, notably convex optimization and convex-concave saddle point problems. Our algorithms automatically adapt to unknown problem parameters such as the smoothness and the norm of the operator, and the variance of the stochastic evaluation oracle. We show that our algorithms are universal and simultaneously achieve the optimal convergence rates in the non-smooth, smooth, and stochastic settings. The convergence guarantees of our algorithms improve over existing adaptive methods and match the optimal non-adaptive algorithms. Additionally, prior works require that the optimization domain is bounded. In this work, we remove this restriction and give algorithms for unbounded domains that are adaptive and universal. Our general proof techniques can be used for many variants of the algorithm using one or two operator evaluations per iteration. The classical methods based on the ExtraGradient/MirrorProx algorithm require two operator evaluations per iteration, which is the dominant factor in the running time in many settings.
|
Alina Ene, Huy Lê Nguyễn
| null | null | 2,022 |
aaai
|
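The two-evaluations-per-iteration structure mentioned at the end of this abstract is the classical ExtraGradient step: extrapolate with the operator, then update using the operator at the extrapolated point. On the bilinear saddle problem min_x max_y xy it converges where plain simultaneous gradient descent-ascent spirals outward. This is a standard textbook sketch, not the paper's adaptive method:

```python
def extragradient_bilinear(x, y, lr=0.3, steps=200):
    """ExtraGradient on the monotone operator F(x, y) = (y, -x), i.e. the
    saddle problem min_x max_y x*y. Two operator evaluations per step:
    an extrapolation, then the update at the extrapolated point. Plain
    simultaneous descent-ascent (x -= lr*y; y += lr*x) diverges here.
    """
    for _ in range(steps):
        # extrapolation step
        xh, yh = x - lr * y, y + lr * x
        # update using the operator evaluated at the extrapolated point
        x, y = x - lr * yh, y + lr * xh
    return x, y

x, y = extragradient_bilinear(1.0, 1.0)
print(x, y)  # converges toward the saddle point (0, 0)
```

The paper's contribution is to make this family adaptive (no knowledge of smoothness, operator norm, or noise variance) and to handle unbounded domains, with single-call variants as well.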
Learning from the Dark: Boosting Graph Convolutional Neural Networks with Diverse Negative Samples
| null |
Graph Convolutional Neural Networks (GCNs) have been generally accepted to be an effective tool for node representation learning. An interesting way to understand GCNs is to think of them as a message passing mechanism where each node updates its representation by accepting information from its neighbours (also known as positive samples). However, beyond these neighbouring nodes, graphs have a large, dark, all-but-forgotten world in which we find the non-neighbouring nodes (negative samples). In this paper, we show that this great dark world holds a substantial amount of information that might be useful for representation learning. More specifically, it can provide negative information about the node representations. Our overall idea is to select appropriate negative samples for each node and incorporate the negative information contained in these samples into the representation updates. Moreover, we show that the process of selecting the negative samples is not trivial. Our theme therefore begins by describing the criteria for a good negative sample, followed by a determinantal point process algorithm for efficiently obtaining such samples. A GCN, boosted by diverse negative samples, then jointly considers the positive and negative information when passing messages. Experimental evaluations show that this idea not only improves the overall performance of standard representation learning but also significantly alleviates over-smoothing problems.
|
Wei Duan, Junyu Xuan, Maoying Qiao, Jie Lu
| null | null | 2,022 |
aaai
|
Zero-Shot Out-of-Distribution Detection Based on the Pre-trained Model CLIP
| null |
In an out-of-distribution (OOD) detection problem, samples of known classes (also called in-distribution classes) are used to train a special classifier. In testing, the classifier can (1) classify the test samples of known classes to their respective classes and also (2) detect samples that do not belong to any of the known classes (i.e., they belong to some unknown or OOD classes). This paper studies the problem of zero-shot out-of-distribution (OOD) detection, which still performs the same two tasks in testing but has no training except using the given known class names. This paper proposes a novel and yet simple method (called ZOC) to solve the problem. ZOC builds on top of the recent advances in zero-shot classification through multi-modal representation learning. It first extends the pre-trained language-vision model CLIP by training a text-based image description generator on top of CLIP. In testing, it uses the extended model to generate candidate unknown class names for each test sample and computes a confidence score based on both the known class names and candidate unknown class names for zero-shot OOD detection. Experimental results on 5 benchmark datasets for OOD detection demonstrate that ZOC outperforms the baselines by a large margin.
|
Sepideh Esmaeilpour, Bing Liu, Eric Robertson, Lei Shu
| null | null | 2,022 |
aaai
|
Online Certification of Preference-Based Fairness for Personalized Recommender Systems
| null |
Recommender systems are facing scrutiny because of their growing impact on the opportunities we have access to. Current audits for fairness are limited to coarse-grained parity assessments at the level of sensitive groups. We propose to audit for envy-freeness, a more granular criterion aligned with individual preferences: every user should prefer their recommendations to those of other users. Since auditing for envy requires estimating the preferences of users beyond their existing recommendations, we cast the audit as a new pure exploration problem in multi-armed bandits. We propose a sample-efficient algorithm with theoretical guarantees that it does not deteriorate user experience. We also study the trade-offs achieved on real-world recommendation datasets.
|
Virginie Do, Sam Corbett-Davies, Jamal Atif, Nicolas Usunier
| null | null | 2,022 |
aaai
|
SpreadGNN: Decentralized Multi-Task Federated Learning for Graph Neural Networks on Molecular Data
| null |
Graph Neural Networks (GNNs) are the first choice methods for graph machine learning problems thanks to their ability to learn state-of-the-art level representations from graph-structured data. However, centralizing a massive amount of real-world graph data for GNN training is prohibitive due to user-side privacy concerns, regulation restrictions, and commercial competition. Federated Learning is the de-facto standard for collaborative training of machine learning models over many distributed edge devices without the need for centralization. Nevertheless, training graph neural networks in a federated setting is vaguely defined and brings statistical and systems challenges. This work proposes SpreadGNN, a novel multi-task federated training framework capable of operating in the presence of partial labels and absence of a central server for the first time in the literature. We provide convergence guarantees and empirically demonstrate the efficacy of our framework on a variety of non-I.I.D. distributed graph-level molecular property prediction datasets with partial labels. Our results show that SpreadGNN outperforms GNN models trained over a central server-dependent federated learning system, even in constrained topologies.
|
Chaoyang He, Emir Ceyani, Keshav Balasubramanian, Murali Annavaram, Salman Avestimehr
| null | null | 2,022 |
aaai
|
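A minimal sketch of the serverless parameter mixing that replaces a central aggregator in a decentralized setup like the one described above (the function names and plain gossip averaging are assumptions for illustration; SpreadGNN's actual scheme is a periodic-averaging multi-task update):

```python
import numpy as np

def gossip_round(params, adjacency):
    """One serverless mixing round: every client averages its parameters with
    those of its neighbors (and itself) on the communication graph, so model
    information spreads without any central aggregator."""
    n = len(params)
    mixed = []
    for i in range(n):
        neighbors = [j for j in range(n) if adjacency[i][j] or i == j]
        mixed.append(sum(params[j] for j in neighbors) / len(neighbors))
    return mixed
```

Repeated rounds drive all clients toward the network-wide average even on sparse topologies, the "constrained" regime the abstract highlights.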
Smoothing Advantage Learning
| null |
Advantage learning (AL) aims to improve the robustness of value-based reinforcement learning against estimation errors via action-gap-based regularization. Unfortunately, the method tends to be unstable under function approximation. In this paper, we propose a simple variant of AL, named smoothing advantage learning (SAL), to alleviate this problem. The key idea is to replace the original Bellman optimality operator in AL with a smooth one so as to obtain a more reliable estimate of the temporal-difference target. We give a detailed account of the resulting action gap and the performance bound for approximate SAL. Further theoretical analysis reveals that the proposed value-smoothing technique not only stabilizes AL training by controlling the trade-off between the convergence rate and the upper bound of the approximation errors, but also helps enlarge the action gap between the optimal and sub-optimal action values.
|
Yaozhong Gan, Zhe Zhang, Xiaoyang Tan
| null | null | 2,022 |
aaai
|
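The abstract's key step, replacing the hard Bellman max with a smooth operator, can be sketched as follows; the log-sum-exp soft maximum and all names here are illustrative assumptions, not necessarily the paper's exact operator:

```python
import numpy as np

def smooth_max(q_values, tau=0.1):
    """Log-sum-exp soft maximum over action values; recovers the hard
    max as tau -> 0, but is smooth and less sensitive to estimation noise."""
    q = np.asarray(q_values, dtype=float)
    m = q.max()  # shift for numerical stability
    return m + tau * np.log(np.mean(np.exp((q - m) / tau)))

def sal_target(reward, q_next, q_curr, action, gamma=0.99, alpha=0.9, tau=0.1):
    """Advantage-learning TD target with the hard max smoothed:
    r + gamma * smoothmax(Q(s',.)) - alpha * (smoothmax(Q(s,.)) - Q(s,a)).
    The alpha term enlarges the action gap; the smoothing stabilizes the target."""
    return (reward
            + gamma * smooth_max(q_next, tau)
            - alpha * (smooth_max(q_curr, tau) - q_curr[action]))
```

With `tau` near zero this reduces to standard advantage learning; larger `tau` trades convergence speed for a tighter bound on approximation error, mirroring the trade-off discussed above.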
Up to 100x Faster Data-Free Knowledge Distillation
| null |
Data-free knowledge distillation (DFKD) has recently attracted increasing attention from research communities, owing to its ability to compress a model using only synthetic data. Despite the encouraging results achieved, state-of-the-art DFKD methods still suffer from inefficient data synthesis, making the data-free training process extremely time-consuming and thus inapplicable to large-scale tasks. In this work, we introduce an effective scheme, termed FastDFKD, that accelerates DFKD by orders of magnitude. At the heart of our approach is a novel strategy for reusing the common features shared across training data to synthesize different data instances. Unlike prior methods that optimize each set of data independently, we propose to learn a meta-synthesizer that seeks common features as the initialization for fast data synthesis. As a result, FastDFKD achieves data synthesis within only a few steps, significantly improving the efficiency of data-free training. Experiments on CIFAR, NYUv2, and ImageNet demonstrate that FastDFKD achieves 10x and even 100x acceleration while preserving performance on par with the state of the art. Code is available at https://github.com/zju-vipa/Fast-Datafree.
|
Gongfan Fang, Kanya Mo, Xinchao Wang, Jie Song, Shitao Bei, Haofei Zhang, Mingli Song
| null | null | 2,022 |
aaai
|
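The reuse-and-refine idea above can be sketched as a meta-learned initialization that cuts per-instance synthesis down to a few gradient steps (all names, the Reptile-style meta update, and the toy objective are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

def synthesize(meta_init, grad_fn, steps=3, lr=0.1):
    """Fast per-instance synthesis: start from the shared meta-initialization
    (the learned common features) and take only a few gradient steps,
    instead of optimizing every synthetic instance from scratch."""
    z = meta_init.copy()
    for _ in range(steps):
        z -= lr * grad_fn(z)
    return z

def update_meta(meta_init, synthesized, meta_lr=0.05):
    """Meta update: nudge the shared initialization toward the solutions found
    during synthesis, so later instances need even fewer steps."""
    return meta_init + meta_lr * (np.mean(synthesized, axis=0) - meta_init)
```

The speed-up comes from amortization: the expensive from-scratch optimization of prior DFKD methods is replaced by a cheap refinement of a shared starting point.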