title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Vertex Weighting-Based Tabu Search for p-Center Problem
| null |
The p-center problem consists of choosing p centers from a set of candidates to minimize the maximum cost between any client and its assigned facility. In this paper, we transform the p-center problem into a series of set covering subproblems, and propose a vertex weighting-based tabu search (VWTS) algorithm to solve them. The proposed VWTS algorithm integrates distinguishing features such as a vertex weighting technique and a tabu search strategy to help the search escape local optima. Computational experiments on 138 of the most commonly used benchmark instances show that VWTS is highly competitive compared with state-of-the-art methods in spite of its simplicity. Since the p-center problem is a well-known NP-hard problem that has been studied for over half a century, breaking the records on these classic datasets is a challenging task. Yet VWTS improves the best known results for 14 out of 54 large instances, and matches the optimal results for all of the remaining 84. In addition, the computational time taken by VWTS is much shorter than that of other algorithms in the literature.
|
Qingyun Zhang, Zhipeng Lü, Zhouxing Su, Chumin Li, Yuan Fang, Fuda Ma
| null | null | 2,020 |
ijcai
|
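A minimal sketch of the decomposition described in the abstract above: the optimal p-center radius is one of the pairwise distances, so one can binary-search over the sorted distinct distances and, for each candidate radius r, solve a set covering subproblem asking whether p centers cover every client within r. The exhaustive cover check below stands in for the paper's VWTS solver and only makes sense on toy instances; the data and names are illustrative.

```python
import itertools

def can_cover(dist, candidates, clients, p, r):
    """Covering subproblem: do some p candidate centers cover all clients within radius r?"""
    cover = {c: {v for v in clients if dist[c][v] <= r} for c in candidates}
    return any(set().union(*(cover[c] for c in subset)) >= set(clients)
               for subset in itertools.combinations(candidates, p))

def p_center_radius(dist, candidates, clients, p):
    """Smallest radius r such that p centers cover all clients (binary search over radii)."""
    radii = sorted({dist[c][v] for c in candidates for v in clients})
    lo, hi, best = 0, len(radii) - 1, radii[-1]
    while lo <= hi:
        mid = (lo + hi) // 2
        if can_cover(dist, candidates, clients, p, radii[mid]):
            best, hi = radii[mid], mid - 1
        else:
            lo = mid + 1
    return best

# Toy instance: 4 candidate centers, 5 clients.
clients = ["a", "b", "c", "d", "e"]
candidates = ["x", "y", "z", "w"]
dist = {
    "x": {"a": 1, "b": 4, "c": 7, "d": 9, "e": 3},
    "y": {"a": 6, "b": 2, "c": 3, "d": 8, "e": 5},
    "z": {"a": 8, "b": 5, "c": 2, "d": 1, "e": 7},
    "w": {"a": 3, "b": 6, "c": 5, "d": 4, "e": 2},
}
print(p_center_radius(dist, candidates, clients, p=2))
```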
Learning Regional Attention Convolutional Neural Network for Motion Intention Recognition Based on EEG Data
| null |
Recent deep learning-based Brain-Computer Interface (BCI) decoding algorithms mainly focus on spatial-temporal features, while failing to explicitly explore spectral information, which is one of the most important cues for BCI. In this paper, we propose a novel regional attention convolutional neural network (RACNN) to take full advantage of spectral-spatial-temporal features for EEG motion intention recognition. Time-frequency analysis is adopted to reveal spectral-temporal features in terms of neural oscillations of the primary sensorimotor cortex. The basic idea of RACNN is to identify the activated area of the primary sensorimotor cortex adaptively. The RACNN aggregates a varied number of spectral-temporal features produced by a backbone convolutional neural network into a compact fixed-length representation. Inspired by the neuroscientific finding of functional asymmetry between the cerebral hemispheres, we propose a region-biased loss to encourage high attention weights for the most critical regions. Extensive evaluations on two benchmark datasets and a real-world BCI dataset show that our approach significantly outperforms previous methods.
|
Zhijie Fang, Weiqun Wang, Shixin Ren, Jiaxing Wang, Weiguo Shi, Xu Liang, Chen-Chen Fan, Zeng-Guang Hou
| null | null | 2,020 |
ijcai
|
Boolean Games: Inferring Agents' Goals Using Taxation Queries
| null |
In Boolean games, each agent controls a set of Boolean variables
and has a goal represented by a propositional formula. We study
inference problems in Boolean games assuming the presence of a
PRINCIPAL who has the ability to control the agents and impose
taxation schemes. Previous work used taxation schemes to guide a
game towards certain equilibria. We present algorithms that show
how taxation schemes can also be used to infer agents' goals. We
present experimental results to demonstrate the efficacy of our
algorithms. We also consider goal inference when only limited
information is available in response to a query.
|
Abhijin Adiga, Sarit Kraus, Oleg Maksimov, S. S. Ravi
| null | null | 2,020 |
ijcai
|
Incorporating Failure Events in Agents’ Decision Making to Improve User Satisfaction
| null |
This paper suggests a new paradigm for the design of collaborative autonomous agents engaged in executing a joint task alongside a human user. In particular, we focus on the way an agent's failures should affect its decision making, as far as user satisfaction measures are concerned. Unlike the common practice that considers agent (and, more broadly, system) failures solely through the prism of their influence on the agent's contribution to the execution of the joint task, we argue that there is an additional, direct influence which cannot be fully captured by the above measure. Through two series of large-scale controlled experiments with 450 human subjects, recruited through Amazon Mechanical Turk, we show that such a direct influence indeed holds. Furthermore, we show that the use of a simple agent design that takes into account the direct influence of failures in its decision making yields considerably better user satisfaction, compared to an agent that focuses exclusively on maximizing its absolute contribution to the joint task.
|
David Sarne, Chen Rozenshtein
| null | null | 2,020 |
ijcai
|
A Dataset Complexity Measure for Analogical Transfer
| null |
Analogical transfer consists in leveraging a measure of similarity between two situations to predict the amount of similarity between their outcomes. Acquiring a suitable similarity measure for analogical transfer may be difficult, especially when the data is sparse or when the domain knowledge is incomplete. To alleviate this problem, this paper presents a dataset complexity measure that can be used either to select an optimal similarity measure, or if the similarity measure is given, to perform analogical transfer: among the potential outcomes of a new situation, the most plausible is the one which minimizes the dataset complexity.
|
Fadi Badra
| null | null | 2,020 |
ijcai
|
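The abstract above only states the selection principle: among the candidate outcomes of a new situation, keep the one whose addition to the case base minimizes a dataset complexity measure. The measure itself is not given here, so the sketch below plugs in a hypothetical proxy (the number of case pairs whose situations are similar but whose outcomes are not); `sim`, the threshold, and the toy cases are assumptions used purely to show how such a criterion drives prediction.

```python
def complexity(cases, sim_s, sim_o, threshold=0.5):
    """Hypothetical proxy measure: count pairs where similar situations have dissimilar outcomes."""
    cost = 0
    for i in range(len(cases)):
        for j in range(i + 1, len(cases)):
            (s1, o1), (s2, o2) = cases[i], cases[j]
            if sim_s(s1, s2) >= threshold and sim_o(o1, o2) < threshold:
                cost += 1
    return cost

def predict(cases, new_situation, candidate_outcomes, sim_s, sim_o):
    """Analogical transfer: keep the candidate outcome that keeps the case base simplest."""
    return min(candidate_outcomes,
               key=lambda o: complexity(cases + [(new_situation, o)], sim_s, sim_o))

# Toy domain: situations and outcomes are numbers, similarity decays with distance.
sim = lambda a, b: 1.0 / (1.0 + abs(a - b))
cases = [(1, 10), (2, 11), (8, 40), (9, 41)]
print(predict(cases, 3, candidate_outcomes=[12, 40], sim_s=sim, sim_o=sim))
```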
Structured Probabilistic End-to-End Learning from Crowds
| null |
End-to-end learning from crowds has recently been introduced as an EM-free approach to training deep neural networks directly from noisy crowdsourced annotations. It models the relationship between true labels and annotations with a specific type of neural layer, termed the crowd layer, which can be trained using pure backpropagation. Parameters of the crowd layer, however, can hardly be interpreted as annotator reliability, as compared with the more principled probabilistic approach. The lack of probabilistic interpretation further prevents extensions of the approach to account for important factors of annotation processes, e.g., instance difficulty. This paper presents SpeeLFC, a structured probabilistic model that incorporates the constraints of probability axioms for parameters of the crowd layer, which makes it possible to explicitly model annotator reliability while benefiting from the end-to-end training of neural networks. Moreover, we propose SpeeLFC-D, which further takes into account instance difficulty. Extensive validation on real-world datasets shows that our methods improve the state-of-the-art.
|
Zhijun Chen, Huimin Wang, Hailong Sun, Pengpeng Chen, Tao Han, Xudong Liu, Jie Yang
| null | null | 2,020 |
ijcai
|
Optimal Complex Task Assignment in Service Crowdsourcing
| null |
Existing schemes cannot assign complex tasks to the most suitable workers because they either cannot measure skills quantitatively or do not consider assigning tasks to workers who are the most suitable but temporarily unavailable. In this paper, we investigate how to realize optimal complex task assignment. Firstly, we formulate the multiple-skill based task assignment problem in service crowdsourcing. We then propose a weighted multi-skill tree (WMST) to model multiple skills and their correlations. Next, we propose the acceptance expectation to uniformly measure the probabilities that different categories of workers will accept and complete specified tasks. Finally, we propose an acceptance-expectation-based task assignment (AE-TA) algorithm, which reserves tasks for the most suitable workers even if they are temporarily unavailable. Comprehensive experimental results demonstrate that our WMST model and AE-TA algorithm significantly outperform related proposals.
|
Feilong Tang
| null | null | 2,020 |
ijcai
|
Pitfalls of Learning a Reward Function Online
| null |
In some agent designs, like inverse reinforcement learning, an agent needs to learn its own reward function. Learning the reward function and optimising for it are typically two different processes, usually performed at different stages. We consider a continual ("one life") learning approach where the agent both learns the reward function and optimises for it at the same time. We show that this comes with a number of pitfalls, such as deliberately manipulating the learning process in one direction, refusing to learn, "learning" facts already known to the agent, and making decisions that are strictly dominated (for all relevant reward functions). We formally introduce two desirable properties: the first is 'unriggability', which prevents the agent from steering the learning process in the direction of a reward function that is easier to optimise. The second is 'uninfluenceability', whereby the reward-function learning process operates by learning facts about the environment. We show that an uninfluenceable process is automatically unriggable, and if the set of possible environments is sufficiently large, the converse is true too.
|
Stuart Armstrong, Jan Leike, Laurent Orseau, Shane Legg
| null | null | 2,020 |
ijcai
|
Maximizing the Spread of an Opinion in Few Steps: Opinion Diffusion in Non-Binary Networks
| null |
We consider the setting of asynchronous opinion diffusion with majority threshold: given a social network with each agent assigned one opinion, an agent will update its opinion if more than half of its neighbors agree on a different opinion. The stabilized final outcome highly depends on the sequence in which agents update their opinions. We are interested in optimistic sequences---sequences that maximize the spread of a chosen opinion. We complement known results for two opinions, where optimistic sequences can be computed in time and length linear in the number of agents. We analyze upper and lower bounds on the length of optimistic sequences, showing quadratic bounds in the general case and linear bounds in the acyclic case. Moreover, we show that in networks with more than two opinions, determining a spread-maximizing sequence becomes intractable; surprisingly, already with three opinions the intractability results hold in highly restricted cases, e.g., when each agent has at most three neighbors, when looking for a short sequence, or when we aim for approximate solutions.
|
Robert Bredereck, Lilian Jacobs, Leon Kellerhals
| null | null | 2,020 |
ijcai
|
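A minimal simulation of the asynchronous majority-threshold dynamics defined in the abstract above: given an update sequence, each agent in turn adopts an opinion held by strictly more than half of its neighbors. The graph, initial opinions, and update sequence are a toy illustration; finding a spread-maximizing sequence is exactly the hard part the paper studies.

```python
from collections import Counter

def diffuse(neighbours, opinions, sequence):
    """Apply asynchronous majority-threshold updates in the given agent order."""
    opinions = dict(opinions)
    for agent in sequence:
        counts = Counter(opinions[v] for v in neighbours[agent])
        opinion, votes = counts.most_common(1)[0]
        # Update only if strictly more than half of the neighbours share a different opinion.
        if opinion != opinions[agent] and votes > len(neighbours[agent]) / 2:
            opinions[agent] = opinion
    return opinions

# Toy network with two opinions 0 and 1.
neighbours = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2, 4], 4: [2, 3]}
opinions = {1: 0, 2: 1, 3: 0, 4: 1}
final = diffuse(neighbours, opinions, sequence=[3, 4, 2, 1])
print(final, "spread of opinion 1:", sum(1 for o in final.values() if o == 1))
```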
Implementing Theory of Mind on a Robot Using Dynamic Epistemic Logic
| null |
Previous research has claimed dynamic epistemic logic (DEL) to be a suitable formalism for representing essential aspects of a Theory of Mind (ToM) for an autonomous agent. This includes the ability of the formalism to represent the reasoning involved in false-belief tasks of arbitrary order, and hence for autonomous agents based on the formalism to become able to pass such tests. This paper provides evidence for the claims by documenting the implementation of a DEL-based reasoning system on a humanoid robot. Our implementation allows the robot to perform cognitive perspective-taking, in particular to reason about the first- and higher-order beliefs of other agents. We demonstrate how this allows the robot to pass a quite general class of false-belief tasks involving human agents. Additionally, as is briefly illustrated, it allows the robot to proactively provide human agents with relevant information in situations where a system without ToM-abilities would fail. The symbolic grounding problem of turning robotic sensor input into logical action descriptions in DEL is achieved via a perception system based on deep neural networks.
|
Lasse Dissing, Thomas Bolander
| null | null | 2,020 |
ijcai
|
Aggregating Crowd Wisdom with Side Information via a Clustering-based Label-aware Autoencoder
| null |
Aggregating crowd wisdom infers true labels for objects, from multiple noisy labels provided by various sources. Besides labels from sources, side information such as object features is also introduced to achieve higher inference accuracy. Usually, the learning-from-crowds framework is adopted. However, the framework considers each object in isolation and does not make full use of object features to overcome label noise. In this paper, we propose a clustering-based label-aware autoencoder (CLA) to alleviate label noise. CLA utilizes clusters to gather objects with similar features and exploits clustering to infer true labels, by constructing a novel deep generative process to simultaneously generate object features and source labels from clusters. For model inference, CLA extends the framework of variational autoencoders and utilizes maximum a posteriori (MAP) estimation, which prevents the model from overfitting and trivial solutions. Experiments on real-world tasks demonstrate the significant improvement of CLA compared with the state-of-the-art aggregation algorithms.
|
Li'ang Yin, Yunfei Liu, Weinan Zhang, Yong Yu
| null | null | 2,020 |
ijcai
|
Synthesizing strategies under expected and exceptional environment behaviors
| null |
We consider an agent that operates with two models of the environment: one that captures expected behaviors and one that captures additional exceptional behaviors. We study the problem of synthesizing agent strategies that enforce a goal against environments operating as expected while also making a best effort against exceptional environment behaviors. We formalize these concepts in the context of linear-temporal logic, and give an algorithm for solving this problem. We also show that there is no trade-off between enforcing the goal under the expected environment specification and making a best-effort for it under the exceptional one.
|
Benjamin Aminof, Giuseppe De Giacomo, Alessio Lomuscio, Aniello Murano, Sasha Rubin
| null | null | 2,020 |
ijcai
|
Switch-List Representations in a Knowledge Compilation Map
| null |
In this paper we focus on a less usual way to represent Boolean functions, namely on representations by switch-lists. Given a truth table representation of a Boolean function f, the switch-list representation (SLR) of f is a list of Boolean vectors from the truth table which have a different function value than the preceding Boolean vector in the truth table. The main aim of this paper is to include the language SL of all SLRs in the Knowledge Compilation Map [Darwiche and Marquis, 2002] and to argue that SL may in certain situations constitute a reasonable choice for a target language in knowledge compilation. First we compare SL with a number of standard representation languages (such as CNF, DNF, and OBDD) with respect to their relative succinctness. As a by-product of this analysis we also give a short proof of a long-standing open question from [Darwiche and Marquis, 2002], namely the incomparability of the MODS (models) and PI (prime implicates) languages. Next we analyze which standard transformations and queries (those considered in [Darwiche and Marquis, 2002]) can be performed in poly-time with respect to the size of the input SLR. We show that this collection is quite broad and that the combination of supported poly-time transformations and queries is quite unique.
|
Ondřej Čepek, Miloš Chromý
| null | null | 2,020 |
ijcai
|
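The definition of a switch-list given above is concrete enough to implement directly: walk the truth table in the usual lexicographic order and record every vector at which the function value changes. The sketch below also stores the value at each switch point, which makes evaluation from the list trivial; the example function is arbitrary.

```python
from itertools import product

def switch_list(f, n):
    """Switch-list representation: the first vector plus every vector where f changes value."""
    switches, prev = [], None
    for v in product([0, 1], repeat=n):        # truth table order 00..0, 00..1, ...
        val = f(v)
        if prev is None or val != prev:
            switches.append((v, val))
        prev = val
    return switches

def evaluate(switches, v):
    """Evaluate f(v) from its switch list: take the value at the last switch point at or before v."""
    value = switches[0][1]
    for vec, val in switches:
        if vec <= v:
            value = val
        else:
            break
    return value

f = lambda v: (v[0] and v[1]) or v[2]          # example Boolean function on 3 variables
sl = switch_list(f, 3)
print(sl)
print(all(evaluate(sl, v) == f(v) for v in product([0, 1], repeat=3)))
```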
Answering Counting Queries over DL-Lite Ontologies
| null |
Ontology-mediated query answering (OMQA) is a promising approach to data access and integration that has been actively studied in the knowledge representation and database communities for more than a decade. The vast majority of work on OMQA focuses on conjunctive queries, whereas more expressive queries that feature counting or other forms of aggregation remain largely unexplored. In this paper, we introduce a general form of counting query, relate it to previous proposals, and study the complexity of answering such queries in the presence of DL-Lite ontologies. As it follows from existing work that query answering is intractable and often of high complexity, we consider some practically relevant restrictions, for which we establish improved complexity bounds.
|
Meghyn Bienvenu, Quentin Manière, Michaël Thomazo
| null | null | 2,020 |
ijcai
|
Counting Query Answers over a DL-Lite Knowledge Base
| null |
Counting answers to a query is an operation supported by virtually all database management systems.
In this paper we focus on counting answers over a Knowledge Base (KB), which may be viewed as a database enriched with background knowledge about the domain under consideration.
In particular, we place our work in the context of Ontology-Mediated Query Answering/Ontology-based Data Access (OMQA/OBDA), where the language used for the ontology is a member of the DL-Lite family and the data is a (usually virtual) set of assertions.
We study the data complexity of query answering, for different members of the DL-Lite family that include number restrictions, and for variants of conjunctive queries with counting that differ with respect to their shape (connected, branching, rooted).
We improve upon existing results by providing PTIME and coNP lower bounds, and upper bounds in PTIME and LOGSPACE.
For the LOGSPACE case, we devise a novel technique for rewriting queries into first-order logic with counting.
|
Diego Calvanese, Julien Corman, Davide Lanti, Simon Razniewski
| null | null | 2,020 |
ijcai
|
A Framework for Reasoning about Dynamic Axioms in Description Logics
| null |
Description logics are well-known logical formalisms for knowledge
representation. We propose to enrich knowledge bases (KBs) with dynamic
axioms that specify how the satisfaction of statements from the KBs
evolves when the interpretation is decomposed or recomposed, providing
a natural means to predict the evolution of interpretations.
Our dynamic axioms borrow logical connectives from separation logics,
well-known specification languages to verify programs with
dynamic data structures.
In the paper, we focus on ALC and EL augmented
with dynamic axioms, or on their subclass of positive dynamic axioms.
We investigate the knowledge base consistency problem in the presence of
dynamic axioms, obtaining several complexity results; notably, the problem
for EL with positive dynamic axioms is tractable, whereas it is
undecidable for EL with unrestricted dynamic axioms.
|
Bartosz Bednarczyk, Stephane Demri, Alessio Mansutti
| null | null | 2,020 |
ijcai
|
Deductive Module Extraction for Expressive Description Logics
| null |
In deductive module extraction, we determine a small subset of an ontology for a given vocabulary that preserves all logical entailments that can be expressed in that vocabulary. While in the literature stronger module notions have been discussed, we argue that for applications in ontology analysis and ontology reuse, deductive modules, which are decidable and potentially smaller, are often sufficient. We present methods based on uniform interpolation for extracting different variants of deductive modules, satisfying properties such as completeness, minimality and robustness under replacements, the latter being particularly relevant for ontology reuse. An evaluation of our implementation shows that the modules computed by our method are often significantly smaller than those computed by existing methods.
|
Patrick Koopmann, Jieying Chen
| null | null | 2,020 |
ijcai
|
Automatic Synthesis of Generalized Winning Strategies of Impartial Combinatorial Games Using SMT Solvers
| null |
Strategy representation and reasoning have recently received much attention in artificial intelligence. Impartial combinatorial games (ICGs) are an elementary and fundamental class of games in game theory. One of the challenging problems of ICGs is to construct winning strategies, particularly, generalized winning strategies for possibly infinitely many instances of ICGs. In this paper, we investigate synthesizing generalized winning strategies for ICGs. To this end, we first propose a logical framework to formalize ICGs based on the linear integer arithmetic fragment of the numeric part of PDDL. We then propose an approach to generating the winning formula that exactly captures the states from which the player can force a win. Furthermore, we compute winning strategies for ICGs based on the winning formula. Experimental results on several games demonstrate the effectiveness of our approach.
|
Kaisheng Wu, Liangda Fang, Liping Xiong, Zhao-Rong Lai, Yong Qiao, Kaidong Chen, Fei Rong
| null | null | 2,020 |
ijcai
|
A Logic of Directions
| null |
We propose a logic of directions for points (LD) over 2D Euclidean space, which formalises primary direction relations east (E), west (W), and indeterminate east/west (Iew), north (N), south (S) and indeterminate north/south (Ins). We provide a sound and complete axiomatisation of it, and prove that its satisfiability problem is NP-complete.
|
Heshan Du, Natasha Alechina, Anthony G. Cohn
| null | null | 2,020 |
ijcai
|
Diagnosing Software Faults Using Multiverse Analysis
| null |
Spectrum-based Fault Localization (SFL) approaches aim to efficiently localize faulty components by examining program behavior. This is done by collecting the execution patterns of various
combinations of components and the corresponding outcomes into a spectrum. Efficient fault localization depends heavily on the quality of the spectra. Previous approaches, including the current
state-of-the-art Density-Diversity-Uniqueness (DDU) approach, attempt to generate “good” test-suites by improving certain structural properties of the spectra. In this work, we propose a different
approach, Multiverse Analysis, that considers multiple hypothetical universes, each corresponding to a scenario where one of the components is assumed to be faulty, to generate a spectrum that
attempts to reduce the expected worst-case wasted effort over all the universes. Our experiments show that Multiverse Analysis not only improves the efficiency of fault localization but also achieves better coverage and generates smaller test-suites than DDU, the current state-of-the-art technique. On average, our approach reduces the developer effort over DDU by more than 16% for more than 92% of the instances. Further, the improvements over DDU are statistically significant under the paired Wilcoxon signed-rank test.
|
Prantik Chatterjee, Abhijit Chatterjee, Jose Campos, Rui Abreu, Subhajit Roy
| null | null | 2,020 |
ijcai
|
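For readers unfamiliar with the setting, the sketch below builds the kind of spectrum the abstract refers to (which components each test executes, plus pass/fail outcomes) and ranks components with the standard Ochiai suspiciousness score. It illustrates spectrum-based fault localization only; the Multiverse Analysis test-suite generation itself is not reproduced, and the toy data is made up.

```python
import math

def ochiai_ranking(spectrum, outcomes):
    """Rank components by Ochiai suspiciousness from a coverage spectrum.

    spectrum[t] is the set of components executed by test t;
    outcomes[t] is True if test t failed.
    """
    components = set().union(*spectrum.values())
    total_failed = sum(outcomes.values())
    scores = {}
    for c in components:
        ef = sum(1 for t, cov in spectrum.items() if c in cov and outcomes[t])      # executed & failed
        ep = sum(1 for t, cov in spectrum.items() if c in cov and not outcomes[t])  # executed & passed
        denom = math.sqrt(total_failed * (ef + ep))
        scores[c] = ef / denom if denom else 0.0
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy spectrum: component "c3" is only touched by failing tests, so it ranks first.
spectrum = {"t1": {"c1", "c2"}, "t2": {"c2", "c3"}, "t3": {"c1", "c3"}}
outcomes = {"t1": False, "t2": True, "t3": True}
print(ochiai_ranking(spectrum, outcomes))
```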
Overcoming the Grounding Bottleneck Due to Constraints in ASP Solving: Constraints Become Propagators
| null |
Answer Set Programming (ASP) is a well-known formalism for Knowledge Representation and Reasoning, successfully employed to solve many AI problems, also thanks to the availability of efficient implementations. Traditionally, ASP systems are based on the ground&solve approach, where the grounding transforms a general input program into its propositional counterpart, whose stable models are then computed by the solver using the CDCL algorithm. This approach suffers from an intrinsic limitation: the grounding of one or a few constraints may be unaffordable from a computational point of view, a problem known as the grounding bottleneck. In this paper, we develop an innovative approach for evaluating ASP programs, where some of the constraints of the input program are not grounded but automatically translated into propagators of the CDCL algorithm that work on partial interpretations. We implemented the new approach on top of the solver WASP and carried out an experimental analysis on different benchmarks. Results show that our approach consistently outperforms state-of-the-art ASP systems by overcoming the grounding bottleneck.
|
Bernardo Cuteri, Carmine Dodaro, Francesco Ricca, Peter Schüller
| null | null | 2,020 |
ijcai
|
Smart Voting
| null |
We propose a generalisation of liquid democracy in which a voter can either vote directly on the issues at stake, delegate her vote to another voter, or express complex delegations to a set of trusted voters. By requiring a ranking of desirable delegations and a backup vote from each voter, we are able to put forward and compare four algorithms to solve delegation cycles and obtain a final collective decision.
|
Rachael Colley, Umberto Grandi, Arianna Novaro
| null | null | 2,020 |
ijcai
|
All-Instances Oblivious Chase Termination is Undecidable for Single-Head Binary TGDs
| null |
The chase is a famous algorithmic procedure in database
theory with numerous applications in ontology-mediated query answering.
We consider static analysis of the chase termination
problem, which asks, given a set of TGDs, whether the chase
terminates on all input databases. The problem was recently
shown to be undecidable by Gogacz et al. for
sets of rules containing only ternary predicates.
In this work, we show that undecidability occurs already
for sets of single-head TGDs over binary vocabularies.
This question is relevant since many real-world ontologies, e.g.,
those from the Horn fragment of the popular OWL standard, are of this shape.
|
Bartosz Bednarczyk, Robert Ferens, Piotr Ostropolski-Nalewaja
| null | null | 2,020 |
ijcai
|
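To make the object of study concrete, here is a naive oblivious chase for single-head TGDs over a binary vocabulary, the exact rule shape the undecidability result above concerns. Rules are (body, head) atom lists whose arguments are variable names, and fresh nulls are invented for existential head variables; the round cap is needed precisely because the chase may not terminate. This is an illustrative toy, not the construction used in the proof.

```python
import itertools

def matches(body, facts):
    """All assignments of body variables to known constants making every body atom a fact."""
    variables = sorted({a for _, args in body for a in args})
    constants = sorted({c for _, args in facts for c in args})
    for values in itertools.product(constants, repeat=len(variables)):
        sub = dict(zip(variables, values))
        if all((rel, tuple(sub[a] for a in args)) in facts for rel, args in body):
            yield sub

def oblivious_chase(facts, tgds, max_rounds=10):
    """Naive oblivious chase: fire each trigger (rule + body match) exactly once."""
    facts, fired, fresh = set(facts), set(), itertools.count()
    for _ in range(max_rounds):                        # cap rounds: the chase may run forever
        new = set()
        for i, (body, head) in enumerate(tgds):
            body_vars = {a for _, args in body for a in args}
            head_vars = {a for _, args in head for a in args}
            for sub in matches(body, facts):
                trigger = (i, tuple(sorted(sub.items())))
                if trigger in fired:
                    continue
                fired.add(trigger)
                ext = dict(sub)
                for v in sorted(head_vars - body_vars):  # existential head variables -> fresh nulls
                    ext[v] = f"_n{next(fresh)}"
                new |= {(rel, tuple(ext[a] for a in args)) for rel, args in head}
        if new <= facts:
            return facts                               # no new facts: the chase has terminated
        facts |= new
    return facts

# Single-head TGD over one binary predicate:  R(x, y) -> exists z . R(y, z)   (non-terminating)
tgds = [([("R", ("x", "y"))], [("R", ("y", "z"))])]
print(sorted(oblivious_chase({("R", ("a", "b"))}, tgds, max_rounds=3)))
```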
The Complexity Landscape of Resource-Constrained Scheduling
| null |
The Resource-Constrained Project Scheduling Problem (RCPSP) and its extension via activity modes (MRCPSP) are well-established scheduling frameworks that have found numerous applications in a broad range of settings related to artificial intelligence. Unsurprisingly, the problem of finding a suitable schedule in these frameworks is known to be NP-complete; however, aside from a few results for special cases, we have lacked an in-depth and comprehensive understanding of the complexity of the problems from the viewpoint of natural restrictions of the considered instances.
In the first part of our paper, we develop new algorithms and give hardness proofs in order to obtain a detailed complexity map of (M)RCPSP that settles the complexity of all 1024 considered variants of the problem defined in terms of explicit restrictions of natural parameters of instances. In the second part, we turn to implicit structural restrictions defined in terms of the complexity of interactions between individual activities. In particular, we show that if the treewidth of a graph which captures such interactions is bounded by a constant, then we can solve MRCPSP in polynomial time.
|
Robert Ganian, Thekla Hamm, Guillaume Mescoff
| null | null | 2,020 |
ijcai
|
Revisiting the Notion of Extension over Incomplete Abstract Argumentation Frameworks
| null |
We revisit the notion of i-extension, i.e., the adaptation of the fundamental
notion of extension to the case of incomplete Abstract
Argumentation Frameworks.
We show that the definition of i-extension raises some concerns in the
"possible" variant, e.g., it allows even conflicting arguments
to be collectively considered as members of an (i-)extension.
Thus, we introduce the alternative notion of i*-extension overcoming the
highlighted problems, and provide a thorough complexity characterization of the
corresponding verification problem.
Interestingly, we show that the revised notion not only has beneficial effects for
the semantics, but also for the complexity: under various semantics,
the verification problem under the possible perspective moves from NP-complete
to P.
|
Bettina Fazzinga, Sergio Flesca, Filippo Furfaro
| null | null | 2,020 |
ijcai
|
Semantic Width and the Fixed-Parameter Tractability of Constraint Satisfaction Problems
| null |
Constraint satisfaction problems (CSPs) are an important formal framework for the uniform treatment of various prominent AI tasks, e.g., coloring or scheduling problems. Solving CSPs is, in general, known to be NP-complete and fixed-parameter intractable when parameterized by their constraint scopes. We give a characterization of those classes of CSPs for which the problem becomes fixed-parameter tractable. Our characterization significantly increases the utility of the CSP framework by making it possible to decide the fixed-parameter tractability of problems via their CSP formulations. We further extend our characterization to the evaluation of unions of conjunctive queries, a fundamental problem in databases. Furthermore, we provide some new insight on the frontier of PTIME solvability of CSPs. In particular, we observe that bounded fractional hypertree width is more general than bounded hypertree width only for classes that exhibit a certain type of exponential growth. The presented work resolves a long-standing open problem and yields powerful new tools for complexity research in AI and database theory.
|
Hubie Chen, Georg Gottlob, Matthias Lanzinger, Reinhard Pichler
| null | null | 2,020 |
ijcai
|
Enriching Documents with Compact, Representative, Relevant Knowledge Graphs
| null |
A prominent application of knowledge graph (KG) is document enrichment. Existing methods identify mentions of entities in a background KG and enrich documents with entity types and direct relations. We compute an entity relation subgraph (ERG) that can more expressively represent indirect relations among a set of mentioned entities. To find compact, representative, and relevant ERGs for effective enrichment, we propose an efficient best-first search algorithm to solve a new combinatorial optimization problem that achieves a trade-off between representativeness and compactness, and then we exploit ontological knowledge to rank ERGs by entity-based document-KG and intra-KG relevance. Extensive experiments and user studies show the promising performance of our approach.
|
Shuxin Li, Zixian Huang, Gong Cheng, Evgeny Kharlamov, Kalpa Gunaratna
| null | null | 2,020 |
ijcai
|
NeurASP: Embracing Neural Networks into Answer Set Programming
| null |
We present NeurASP, a simple extension of answer set programs by embracing neural networks. By treating the neural network output as the probability distribution over atomic facts in answer set programs, NeurASP provides a simple and effective way to integrate sub-symbolic and symbolic computation. We demonstrate how NeurASP can make use of a pre-trained neural network in symbolic computation and how it can improve the neural network's perception result by applying symbolic reasoning in answer set programming. Also, NeurASP can make use of ASP rules to train a neural network better, so that the network learns not only from implicit correlations in the data but also from the explicit complex semantic constraints expressed by the rules.
|
Zhun Yang, Adam Ishay, Joohyung Lee
| null | null | 2,020 |
ijcai
|
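A small illustration of the semantics sketched above, with a neural network's softmax output treated as a probability distribution over atomic facts: enumerate the joint assignments to two "neural atoms", keep those satisfying a symbolic rule, and weight each by the product of its atom probabilities. The digit-sum rule and the fake softmax vectors are assumptions for illustration; NeurASP's actual syntax and ASP-solver integration are not shown.

```python
from itertools import product

# Pretend softmax outputs of a digit classifier for two images (neural atoms digit(img, D)).
p_img1 = {0: 0.1, 1: 0.7, 2: 0.2}
p_img2 = {0: 0.2, 1: 0.2, 2: 0.6}

def prob_sum(target):
    """P(d1 + d2 = target), treating each softmax as a distribution over an atomic fact."""
    return sum(p_img1[d1] * p_img2[d2]
               for d1, d2 in product(p_img1, p_img2)
               if d1 + d2 == target)   # plays the role of a rule: sum(S) :- digit(i1,D1), digit(i2,D2), S = D1+D2.

distribution = {s: prob_sum(s) for s in range(5)}
print(distribution)
print("most probable sum:", max(distribution, key=distribution.get))
```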
On Computational Aspects of Iterated Belief Change
| null |
Iterated belief change aims to determine how the belief state of a rational agent evolves given a sequence of change formulae. Several families of iterated belief change operators (revision operators, improvement operators) have been pointed out so far, and characterized from an axiomatic point of view. This paper focuses on the inference problem for iterated belief change, when belief states are represented as a special kind of stratified belief bases. The computational complexity of the inference problem is identified and shown to be identical for all revision operators satisfying Darwiche and Pearl's (R*1-R*6) postulates. In addition, some complexity bounds for the inference problem are provided for the family of soft improvement operators. We also show that a revised belief state can be computed in a reasonable time for large-sized instances using SAT-based algorithms, and we report empirical results showing the feasibility of iterated belief change for bases of significant sizes.
|
Nicolas Schwind, Sebastien Konieczny, Jean-Marie Lagniez, Pierre Marquis
| null | null | 2,020 |
ijcai
|
Lower Bounds and Faster Algorithms for Equality Constraints
| null |
We study the fine-grained complexity of NP-complete, infinite-domain constraint satisfaction problems (CSPs) parameterised by a set of first-order definable relations (with equality). Such CSPs are of central importance since they form a subclass of any infinite-domain CSP parameterised by a set of first-order definable relations. We prove that under the randomised exponential-time hypothesis it is not possible to find c > 1 such that a CSP over an arbitrary finite equality language is solvable in O(c^n) time (n is the number of variables). Stronger lower bounds are possible for infinite equality languages where we rule out the existence of 2^o(n log n) time algorithms; a lower bound which also extends to satisfiability modulo theories solving for an arbitrary background theory. Despite these lower bounds we prove that for each c > 1 there exists an NP-hard equality CSP solvable in O(c^n) time. Lower bounds like these immediately ask for closely matching upper bounds, and we prove that a CSP over a finite equality language is always solvable in O(c^n) time for a fixed c.
|
Peter Jonsson, Victor Lagerkvist
| null | null | 2,020 |
ijcai
|
Controlled Query Evaluation in Description Logics Through Instance Indistinguishability
| null |
We study privacy-preserving query answering in Description Logics (DLs). Specifically, we consider the approach of controlled query evaluation (CQE) based on the notion of instance indistinguishability. We derive data complexity results for query answering over DL-LiteR ontologies, through a comparison with an alternative, existing confidentiality-preserving approach to CQE. Finally, we identify a semantically well-founded notion of approximated query answering for CQE, and prove that, for DL-LiteR ontologies, this form of CQE is tractable with respect to data complexity and is first-order rewritable, i.e., it is always reducible to the evaluation of a first-order query over the data instance.
|
Gianluca Cima, Domenico Lembo, Riccardo Rosati, Domenico Fabio Savo
| null | null | 2,020 |
ijcai
|
On Robustness in Qualitative Constraint Networks
| null |
We introduce and study a notion of robustness in
Qualitative Constraint Networks (QCNs), which
are typically used to represent and reason about
abstract spatial and temporal information. In
particular, given a QCN, we are interested in obtaining
a robust qualitative solution, or, a robust scenario of
it, which is a satisfiable scenario that has a higher
perturbation tolerance than any other, or, in other
words, a satisfiable scenario that has a better chance
than any other of remaining valid after it is altered.
This challenging problem requires considering the
entire set of satisfiable scenarios of a QCN, whose
size is usually exponential in the number of constraints
of that QCN; however, we present a first algorithm
that is able to compute a robust scenario of a QCN
using linear space in the number of constraints.
Preliminary results with a dataset from the
job-shop scheduling domain, and a standard one,
show the interest of our approach and highlight the
fact that not all solutions are created equal.
|
Michael Sioutis, Zhiguo Long, Tomi Janhunen
| null | null | 2,020 |
ijcai
|
Rewriting the Description Logic ALCHIQ to Disjunctive Existential Rules
| null |
Especially in data-intensive settings, a promising reasoning approach for description logics (DLs) is to rewrite DL theories into sets of rules. Although many such approaches have been considered in the literature, there are still various relevant DLs for which no small rewriting (of polynomial size) is known. We therefore develop small rewritings for the DL ALCHIQ -- featuring disjunction, number restrictions, and inverse roles -- to disjunctive Datalog. By admitting existential quantifiers in rule heads, we can improve this result to yield only rules of bounded size, a property that is common to all rewritings that were implemented in practice so far.
|
David Carral, Markus Krötzsch
| null | null | 2,020 |
ijcai
|
Cone Semantics for Logics with Negation
| null |
This paper presents an embedding of ontologies expressed in the ALC description logic into a real-valued vector space, comprising restricted existential and universal quantifiers, as well as concept negation and concept disjunction. Our main result states that an ALC ontology is satisfiable in the classical sense iff it is satisfiable by a partial faithful geometric model based on cones. The line of work to which we contribute aims to integrate knowledge representation techniques and machine learning. The new cone-model of ALC proposed in this work gives rise to conic optimization techniques for machine learning, extending previous approaches by its ability to model full ALC.
|
Özgür Lütfü Özçep, Mena Leemhuis, Diedrich Wolter
| null | null | 2,020 |
ijcai
|
On the Decidability of Intuitionistic Tense Logic without Disjunction
| null |
Implicative semi-lattices (also known as Brouwerian semi-lattices) are a generalization of Heyting algebras, and have been already well studied both from a logical and an algebraic perspective. In this paper, we consider the variety ISt of the expansions of implicative semi-lattices with tense modal operators, which are algebraic models of the disjunction-free fragment of intuitionistic tense logic. Using methods from algebraic proof theory, we show that the logic of tense implicative semi-lattices has the finite model property. Combined with the finite axiomatizability of the logic, this implies that the logic is decidable.
|
Fei Liang, Zhe Lin
| null | null | 2,020 |
ijcai
|
Belief Merging Operators as Maximum Likelihood Estimators
| null |
We study how belief merging operators can be considered as maximum likelihood estimators, i.e., we assume that there exists an (unknown) true state of the world and that each agent participating in the merging process receives a noisy signal of it, characterized by a noise model. The objective is then to aggregate the agents' belief bases to make the best possible guess about the true state of the world. In this paper, some logical connections between the rationality postulates for belief merging (IC postulates) and simple conditions over the noise model under consideration are exhibited. These results provide a new justification for the IC merging postulates. We also provide results for two specific natural noise models: the world swap noise and the atom swap noise, by identifying distance-based merging operators that are maximum likelihood estimators for these two noise models.
|
Patricia Everaere, Sebastien Konieczny, Pierre Marquis
| null | null | 2,020 |
ijcai
|
A Modal Logic for Joint Abilities under Strategy Commitments
| null |
Representation and reasoning about strategic abilities has been an active research area in AI and multi-agent systems. Many variations and extensions of alternating-time temporal logic ATL have been proposed. However, most of the logical frameworks ignore the issue of coordination within a coalition, and are unable to specify the internal structure of strategies. In this paper, we propose JAADL, a modal logic for joint abilities under strategy commitments, which is an extension of ATL. Firstly, we introduce an operator of elimination of (strictly) dominated strategies, with which we can represent joint abilities of coalitions. Secondly, our logic is based on linear dynamic logic (LDL), an extension of linear temporal logic (LTL), so that we can use regular expressions to represent commitments to structured strategies. We analyze valid formulas in JAADL, give sufficient/necessary conditions for joint abilities, and show that model checking memoryless JAADL is in EXPTIME.
|
Zhaoshuai Liu, Liping Xiong, Yongmei Liu, Yves Lespérance, Ronghai Xu, Hongyi Shi
| null | null | 2,020 |
ijcai
|
Solving Analogies on Words based on Minimal Complexity Transformation
| null |
Analogies are 4-ary relations of the form "A is to B as C is to D". When A, B and C are fixed, we call the problem of finding the correct D an analogical equation. A direct applicative domain is Natural Language Processing, in which this approach has been shown successful on word inflections, such as conjugation or declension. While most approaches rely on the axioms of proportional analogy to solve these equations, these axioms are known to have limitations, in particular regarding the nature of the considered inflections. In this paper, we propose an alternative approach, based on the assumption that optimal word inflections are transformations of minimal complexity. We propose a rough estimation of complexity for word analogies and an algorithm to find the optimal transformations. We illustrate our method on a large-scale benchmark dataset and compare with state-of-the-art approaches to demonstrate the interest of using complexity to solve analogies on words.
|
Pierre-Alexandre Murena, Marie Al-Ghossein, Jean-Louis Dessalles, Antoine Cornuéjols
| null | null | 2,020 |
ijcai
|
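As a point of reference for the analogical equations discussed above, here is the classical suffix-rewrite baseline: extract the edit that turns A into B and replay it on C. It is in the spirit of proportional analogy, not the paper's minimal-complexity method, and its failure on irregular inflections is exactly the kind of limitation the abstract alludes to.

```python
def solve_analogy(a, b, c):
    """Solve the analogical equation 'a : b :: c : ?' with a suffix-rewrite rule learned from a -> b."""
    p = 0
    while p < min(len(a), len(b)) and a[p] == b[p]:
        p += 1                                     # longest common prefix of a and b
    old_suffix, new_suffix = a[p:], b[p:]          # the rule: replace old_suffix by new_suffix
    if c.endswith(old_suffix):
        return c[:len(c) - len(old_suffix)] + new_suffix
    return None                                    # the naive rule does not apply

print(solve_analogy("walk", "walked", "jump"))        # jumped
print(solve_analogy("chanter", "chante", "marcher"))  # marche
print(solve_analogy("mouse", "mice", "house"))        # hice: the baseline misfires on irregular inflections
```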
Lower Bounds for Approximate Knowledge Compilation
| null |
Knowledge compilation studies the trade-off between succinctness and efficiency of different representation languages. For many languages, there are known strong lower bounds on the representation size, but recent work shows that, for some languages, one can bypass these bounds using approximate compilation. The idea is to compile an approximation of the knowledge for which the number of errors can be controlled. We focus on circuits in deterministic decomposable negation normal form (d-DNNF), a compilation language suitable in contexts such as probabilistic reasoning, as it supports efficient model counting and probabilistic inference. Moreover, there are known size lower bounds for d-DNNF which by relaxing to approximation one might be able to avoid. In this paper we formalize two notions of approximation: weak approximation which has been studied before in the decision diagram literature and strong approximation which has been used in recent algorithmic results. We then show lower bounds for approximation by d-DNNF, complementing the positive results from the literature.
|
Alexis de Colnet, Stefan Mengel
| null | null | 2,020 |
ijcai
|
A Fully Rational Account of Structured Argumentation Under Resource Bounds
| null |
ASPIC+ is an established general framework for argumentation and non-monotonic reasoning. However, ASPIC+ does not satisfy the non-contamination rationality postulates, and moreover, tacitly assumes unbounded resources when demonstrating satisfaction of the consistency postulates. In this paper we present a new version of ASPIC+ – Dialectical ASPIC+ – that is fully rational under resource bounds.
|
Marcello D'Agostino, Sanjay Modgil
| null | null | 2,020 |
ijcai
|
A Journey into Ontology Approximation: From Non-Horn to Horn
| null |
We study complete approximations of an ontology formulated in a
non-Horn description logic (DL) such as ALC in a Horn DL such
as EL. We provide concrete approximation schemes that are
necessarily infinite and observe that in the ELU-to-EL case
finite approximations tend to exist in practice and are guaranteed to
exist when the source ontology is acyclic. In contrast, neither of
these is the case for ELU_bot-to-EL_bot and for
ALC-to-EL_bot approximations. We also define a notion of
approximation tailored towards ontology-mediated querying, connect
it to subsumption-based approximations, and identify a case where
finite approximations are guaranteed to exist.
|
Anneke Haga, Carsten Lutz, Johannes Marti, Frank Wolter
| null | null | 2,020 |
ijcai
|
Threshold Treewidth and Hypertree Width
| null |
Treewidth and hypertree width have proven to be highly successful structural parameters in the context of the Constraint Satisfaction Problem (CSP). When either of these parameters is bounded by a constant, then CSP becomes solvable in polynomial time. However, here the order of the polynomial in the running time depends on the width, and this is known to be unavoidable; therefore, the problem is not fixed-parameter tractable parameterized by either of these width measures. Here we introduce an enhancement of tree and hypertree width through a novel notion of thresholds, allowing the associated decompositions to take into account information about the computational costs associated with solving the given CSP instance. Aside from introducing these notions, we obtain efficient theoretical as well as empirical algorithms for computing threshold treewidth and hypertree width and show that these parameters give rise to fixed-parameter algorithms for CSP as well as other, more general problems. We complement our theoretical results with experimental evaluations in terms of heuristics as well as exact methods based on SAT/SMT encodings.
|
Robert Ganian, Andre Schidler, Manuel Sorge, Stefan Szeider
| null | null | 2,020 |
ijcai
|
Adversarial Oracular Seq2seq Learning for Sequential Recommendation
| null |
Recently, sequential recommendation has become a significant demand for many real-world applications, where the recommended items would be displayed to users one after another and the order of the displays influences the satisfaction of users. A large number of models have been developed for sequential recommendation by recommending the next items with the highest scores based on the user histories, while few efforts have been made to identify the transition dependency and behavior continuity in the recommended sequences. In this paper, we introduce Adversarial Oracular Seq2seq learning for sequential Recommendation (AOS4Rec), which formulates sequential recommendation as a seq2seq learning problem to portray time-varying interactions in the recommendation, and exploits oracular learning and adversarial learning to enhance the recommendation quality. We examine the performance of AOS4Rec over RNN-based and Transformer-based recommender systems on two large datasets from real-world applications and make comparisons with state-of-the-art methods. Results indicate the accuracy and efficiency of AOS4Rec, and further analysis verifies that AOS4Rec is both robust and practical for real-world scenarios.
|
Pengyu Zhao, Tianxiao Shui, Yuanxing Zhang, Kecheng Xiao, Kaigui Bian
| null | null | 2,020 |
ijcai
|
Inconsistency Measurement for Improving Logical Formula Clustering
| null |
Formal logic can be used as a tool for representing complex and heterogeneous data such as beliefs, knowledge and preferences. This study proposes an approach for defining clustering methods that deal with bases of propositional formulas in classical logic, i.e., methods for dividing formula bases into meaningful groups. We first use a postulate-based approach to introduce an intuitive framework for formula clustering. Then, in order to characterize interesting clustering forms, we introduce additional properties that take into consideration different notions, such as logical consequence, overlapping, and consistent partition. Finally, we describe our approach showing how inconsistency
measures can be used to improve the task of formula clustering. The main idea consists in using the measures for quantifying the quality of the inconsistent clusters. In this context, we propose further properties that allow characterizing interesting aspects related to the amount of inconsistency.
|
Yakoub Salhi
| null | null | 2,020 |
ijcai
|
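To ground the idea of quantifying the quality of inconsistent clusters, the sketch below brute-forces one standard inconsistency measure, the number of minimal inconsistent subsets of a clause base; clusters of formulas could then be compared by such a score. The clause encoding and the choice of this particular measure are assumptions for illustration, not the properties proposed in the paper.

```python
from itertools import product, combinations

def satisfiable(clauses, atoms):
    """Brute-force SAT: clauses are lists of signed literals like ('p', True)."""
    for assignment in product([True, False], repeat=len(atoms)):
        model = dict(zip(atoms, assignment))
        if all(any(model[a] == sign for a, sign in clause) for clause in clauses):
            return True
    return False

def mi_measure(base):
    """Inconsistency measure: number of minimal inconsistent subsets of the base."""
    atoms = sorted({a for clause in base for a, _ in clause})
    inconsistent = [s for r in range(1, len(base) + 1)
                    for s in combinations(range(len(base)), r)
                    if not satisfiable([base[i] for i in s], atoms)]
    minimal = [s for s in inconsistent
               if not any(set(t) < set(s) for t in inconsistent)]
    return len(minimal)

# Base: {p, ~p, q, ~q v ~p} encoded as clauses of signed literals.
base = [[("p", True)], [("p", False)], [("q", True)], [("q", False), ("p", False)]]
print(mi_measure(base))   # two minimal inconsistent subsets: {p, ~p} and {p, q, ~q v ~p}
```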
Concurrent Games in Dynamic Epistemic Logic
| null |
Action models of Dynamic Epistemic Logic (DEL) represent precisely how actions are perceived by agents. DEL has recently been used to define infinite multi-player games, and it was shown that they can be solved in some cases. However, the dynamics being defined by the classic DEL update product for individual actions, only turn-based games have been considered so far. In this work we define a concurrent DEL product, propose a mechanism to resolve conflicts between actions, and define concurrent DEL games. As in the turn-based case, the obtained concurrent infinite game arenas can be finitely represented when all actions are public, or all are propositional. Thus we identify cases where the strategic epistemic logic ATL*K can be model checked on such games.
|
Bastien Maubert, Sophie Pinchinat, Francois Schwarzentruber, Silvia Stranieri
| null | null | 2,020 |
ijcai
|
Ranking Semantics for Argumentation Systems With Necessities
| null |
Bipolar argumentation studies argumentation graphs where attacks are combined with another relation between arguments. Many kinds of relations (e.g. deductive support, evidential support, necessities, etc.) have been defined and investigated from a Dung semantics perspective. We place ourselves in the context of argumentation systems with necessities and provide the first study to investigate ranking semantics in this setting. To this end, we (1) provide a set of postulates specifically designed for necessities and (2) propose the first ranking-based semantics in the literature shown to respect these postulates.
|
Dragan Doder, Srdjan Vesic, Madalina Croitoru
| null | null | 2,020 |
ijcai
|
Model-Based Synthesis of Incremental and Correct Estimators for Discrete Event Systems
| null |
State tracking, i.e. estimating the state over time, has always been an important problem in autonomous dynamic systems. Run-time requirements advocate for incremental estimation, and memory limitations lead us to consider an estimation strategy that retains only one state out of the set of candidate estimates at each time step. This avoids the ambiguity of a high number of candidate estimates and allows the decision system to be fed with a clear input.
However, this strategy may lead to dead-ends in the continuation of the execution. In this paper, we show that single-state trackability can be expressed in terms of the simulation relation between automata. This allows us to provide a complexity bound and a way to build estimators endowed with this property and, moreover, customizable along some correctness criteria. Our implementation relies on the SAT Modulo Theories solver MonoSAT, and experiments show that our encoding scales up and applies to real-world scenarios.
|
Stéphanie Roussel, Xavier Pucel, Valentin Bouziat, Louise Travé-Massuyès
| null | null | 2,020 |
ijcai
|
Tractable Fragments of Datalog with Metric Temporal Operators
| null |
We study the data complexity of reasoning for several fragments of MTL - an extension of Datalog with metric temporal operators over the rational numbers. Reasoning in the full MTL language is PSPACE-complete, which handicaps its application in practice. To achieve tractability we first study the core fragment, which disallows conjunction in rule bodies, and show that reasoning remains PSPACE-hard. Intractability prompts us to also limit the kinds of temporal operators allowed in rules, and we propose a practical core fragment for which reasoning becomes TC0-complete. Finally, we show that this fragment can be extended by allowing linear conjunctions in rule bodies, where at most one atom can be intensional (IDB); we show that the resulting fragment is NL-complete, and hence no harder than plain linear Datalog.
|
Przemysław A. Wałęga, Bernardo Cuenca Grau, Mark Kaminski, Egor V. Kostylev
| null | null | 2,020 |
ijcai
|
Controllability of Control Argumentation Frameworks
| null |
Control argumentation frameworks (CAFs) allow for modeling uncertainties inherent in various argumentative settings. We establish a complete computational complexity map of the central computational problem of controllability in CAFs for five key semantics. We also develop Boolean satisfiability based counterexample-guided abstraction refinement algorithms and direct encodings of controllability as quantified Boolean formulas, and empirically evaluate their scalability on a range of NP-hard variants of controllability.
|
Andreas Niskanen, Daniel Neugebauer, Matti Järvisalo
| null | null | 2,020 |
ijcai
|
Provenance for the Description Logic ELHr
| null |
We address the problem of handling provenance information in ELHr ontologies. We consider a setting recently introduced for ontology-based data access, based on semirings and extending classical data provenance, in which ontology axioms are annotated with provenance tokens. A consequence inherits the provenance of the axioms involved in deriving it, yielding a provenance polynomial as an annotation. We analyse the semantics for the ELHr case and show that the presence of conjunctions poses various difficulties for handling provenance, some of which are mitigated by assuming multiplicative idempotency of the semiring. Under this assumption, we study three problems: ontology completion with provenance, computing the set of relevant axioms for a consequence, and query answering.
|
Camille Bourgaux, Ana Ozaki, Rafael Penaloza, Livia Predoiu
| null | null | 2,020 |
ijcai
|
Stabilizing Adversarial Invariance Induction from Divergence Minimization Perspective
| null |
Adversarial invariance induction (AII) is a generic and powerful framework for enforcing an invariance to nuisance attributes into neural network representations. However, its optimization is often unstable and little is known about its practical behavior. This paper presents an analysis of the reasons for the optimization difficulties and provides a better optimization procedure by rethinking AII from a divergence minimization perspective. Interestingly, this perspective indicates a cause of the optimization difficulties: it does not ensure proper divergence minimization, which is a requirement of the invariant representations. We then propose a simple variant of AII, called invariance induction by discriminator matching, which takes into account the divergence minimization interpretation of the invariant representations. Our method consistently achieves near-optimal invariance in toy datasets with various configurations in which the original AII is catastrophically unstable. Extensive experiments on four real-world datasets also support the superior performance of the proposed method, leading to improved user anonymization and domain generalization.
|
Yusuke Iwasawa, Kei Akuzawa, Yutaka Matsuo
| null | null | 2,020 |
ijcai
|
Learning With Subquadratic Regularization: A Primal-Dual Approach
| null |
Subquadratic norms have been studied recently in the context of structured sparsity, and have been shown to be more beneficial than conventional regularizers in applications such as image denoising, compressed sensing, banded covariance estimation, etc. While existing works have been successful in learning structured sparse models such as trees and graphs, their associated optimization procedures have been inefficient because of hard-to-evaluate proximal operators of the norms. In this paper, we study the computational aspects of learning with subquadratic norms in a general setup. Our main contributions are two proximal-operator-based algorithms, ADMM-η and CP-η, which generically apply to these learning problems with convex loss functions, and achieve a proven rate of convergence of O(1/T) after T iterations. These algorithms are derived in a primal-dual framework, which has not been examined for subquadratic norms. We illustrate the efficiency of the algorithms developed in the context of tree-structured sparsity, where they comprehensively outperform relevant baselines.
|
Raman Sankaran, Francis Bach, Chiranjib Bhattacharyya
| null | null | 2,020 |
ijcai
|
Reasoning Like Human: Hierarchical Reinforcement Learning for Knowledge Graph Reasoning
| null |
Knowledge Graphs typically suffer from incompleteness. A popular approach to knowledge graph completion is to infer missing knowledge by multi-hop reasoning over the information found along other paths connecting a pair of entities. However, multi-hop reasoning is still challenging because the reasoning process usually suffers from the multiple-semantics issue, namely that a relation or an entity may have multiple meanings. To deal with this situation, we propose a novel Hierarchical Reinforcement Learning framework to learn chains of reasoning from a Knowledge Graph automatically. Our framework is inspired by the hierarchical structure through which humans handle cognitively ambiguous cases. The whole reasoning process is decomposed into a hierarchy of two-level Reinforcement Learning policies for encoding historical information and learning a structured action space. As a consequence, it is more feasible and natural to deal with the multiple-semantics issue. Experimental results show that our proposed model achieves substantial improvements in ambiguous relation tasks.
|
Guojia Wan, Shirui Pan, Chen Gong, Chuan Zhou, Gholamreza Haffari
| null | null | 2,020 |
ijcai
|
Model-theoretic Characterizations of Existential Rule Languages
| null |
Existential rules, known as dependencies in databases and, more recently, as Datalog+/- in knowledge representation and reasoning, are a family of important logical languages widely used in computer science and artificial intelligence. Towards a deep understanding of these languages in model theory, we establish model-theoretic characterizations for a number of existential rule languages such as (disjunctive) embedded dependencies, tuple-generating dependencies (TGDs), (frontier-)guarded TGDs and linear TGDs. All these characterizations hold for the class of arbitrary structures, and most of them also work on the class of finite structures. As a natural application of these results, complexity bounds for the rewritability of the above languages are also identified.
|
Heng Zhang, Yan Zhang, Guifei Jiang
| null | null | 2,020 |
ijcai
|
Switching Poisson Gamma Dynamical Systems
| null |
We propose Switching Poisson gamma dynamical systems (SPGDS) to model sequentially observed multivariate count data. Different from previous models, SPGDS assigns its latent variables to a mixture of gamma-distributed parameters to model complex sequences, describe the nonlinear dynamics, and capture various temporal dependencies. For efficient inference, we develop a hybrid stochastic gradient MCMC and switching recurrent autoencoding variational inference scheme, which scales to large sequences and is fast in out-of-sample prediction. Experiments on both unsupervised and supervised tasks demonstrate that the proposed model not only has excellent fitting and prediction performance on complex dynamic sequences, but also separates different dynamical patterns within them.
|
Wenchao Chen, Bo Chen, Yicheng Liu, Qianru Zhao, Mingyuan Zhou
| null | null | 2,020 |
ijcai
|
Learning and Solving Regular Decision Processes
| null |
Regular Decision Processes (RDPs) are a recently introduced model that extends MDPs with non-Markovian dynamics and rewards. The non-Markovian behavior is restricted to depend on regular properties of the history. These can be specified using regular expressions or formulas in linear dynamic logic over finite traces. Fully specified RDPs can be solved by compiling them into an appropriate MDP. Learning RDPs from data is a challenging problem that has yet to be addressed, on which we focus in this paper. Our approach rests on a new representation for RDPs using Mealy Machines that emit a distribution and an expected reward for each state-action pair. Building on this representation, we combine automata learning techniques with history clustering to learn such a Mealy machine and solve it by adapting MCTS to it. We empirically evaluate this approach, demonstrating its feasibility.
|
Eden Abadi, Ronen I. Brafman
| null | null | 2,020 |
ijcai
|
Query Answering for Existential Rules via Efficient Datalog Rewriting
| null |
Existential rules are an expressive formalism for ontology-mediated query answering, and query answering over them is thus of high computational complexity, although several tractable fragments have been identified. Existing systems based on first-order rewriting methods can produce queries too large for a DBMS to handle. Datalog rewriting has been shown to yield more compact queries, yet previously proposed Datalog rewriting methods are mostly too inefficient for practical implementation. In this paper, we fill the gap by proposing an efficient Datalog rewriting approach for answering conjunctive queries over existential rules, and we identify and combine existing fragments of existential rules for which our rewriting method terminates. We implemented a prototype system, Drewer, and experiments show that it is able to handle a wide range of benchmarks from the literature. Moreover, Drewer shows superior or comparable performance over state-of-the-art systems on both the compactness of rewriting and the efficiency of query answering.
|
Zhe Wang, Peng Xiao, Kewen Wang, Zhiqiang Zhuang, Hai Wan
| null | null | 2,020 |
ijcai
|
Order-Dependent Event Models for Agent Interactions
| null |
In multivariate event data, the instantaneous rate of an event's occurrence may be sensitive to the temporal sequence in which other influencing events have occurred in the history. For example, an agent’s actions are typically driven by preceding actions taken by the agent as well as those of other relevant agents in some order. We introduce a novel statistical/causal model for capturing such an order-sensitive historical dependence, where an event’s arrival rate is determined by the order in which its underlying causal events have occurred in the recent past. We propose an algorithm to discover these causal events and learn the most influential orders using time-stamped event occurrence data. We show that the proposed model fits various event datasets involving single as well as multiple agents better than baseline models. We also illustrate potentially useful insights from our proposed model for an analyst during the discovery process through analysis on a real-world political event dataset.
|
Debarun Bhattacharjya, Tian Gao, Dharmashankar Subramanian
| null | null | 2,020 |
ijcai
|
A New Attention Mechanism to Classify Multivariate Time Series
| null |
Classifying multivariate time series (MTS), which record the values of multiple variables over a continuous period of time, has gained a lot of attention. However, existing techniques suffer from two major issues. First, the long-range dependencies of the time-series sequences are not well captured. Second, the interactions of multiple variables are generally not represented in features. To address these aforementioned issues, we propose a novel Cross Attention Stabilized Fully Convolutional Neural Network (CA-SFCN) to classify MTS data. First, we introduce a temporal attention mechanism to extract long- and short-term memories across all time steps. Second, variable attention is designed to select relevant variables at each time step. CA-SFCN is compared with 16 approaches using 14 different MTS datasets. The extensive experimental results show that the CA-SFCN outperforms state-of-the-art classification methods, and the cross attention mechanism achieves better performance than other attention mechanisms.
|
Yifan Hao, Huiping Cao
| null | null | 2,020 |
ijcai
|
Positive Unlabeled Learning with Class-prior Approximation
| null |
Positive unlabeled (PU) learning aims to train a binary classifier from a set of positively labeled samples and other unlabeled samples. Much research has been done on this special branch of weakly supervised classification problems. Since only part of the positive class is labeled, the classical PU model trains the classifier assuming the class prior is known. However, the true class prior is usually difficult to obtain and must be learned from the given data, in which case traditional methods may not work. In this paper, we propose a convex formulation that jointly solves the unknown class-prior problem and trains an accurate classifier, with no need for any class-prior assumptions or additional negative samples. The class prior is estimated by pursuing the optimal solution of gradient thresholding, and the classifier is simultaneously trained by empirical unbiased risk minimization. The detailed derivation and theoretical analysis of the proposed model are outlined, and experiments comparing our method with other representative methods demonstrate its superiority.
|
Shizhen Chang, Bo Du, Liangpei Zhang
| null | null | 2,020 |
ijcai
|
Explainable Inference on Sequential Data via Memory-Tracking
| null |
In this paper we present a novel mechanism to obtain explanations that allow a better understanding of network predictions when dealing with sequential data. Specifically, we adopt memory-based networks (Differentiable Neural Computers) to exploit their capability of storing data in memory and reusing it for inference. By tracking both the memory access at prediction time and the information stored by the network at each step of the input sequence, we can retrieve the most relevant input steps associated with each prediction. We validate our approach (1) on a modified T-maze, a non-Markovian discrete control task that evaluates an algorithm's ability to correlate events far apart in history, and (2) on the Story Cloze Test, a commonsense reasoning framework for evaluating story understanding that requires a system to choose the correct ending to a four-sentence story. Our results show that we are able to explain the agent's decisions in (1) and to reconstruct the most relevant sentences used by the network to select the story ending in (2). Additionally, we show not only that removing those sentences changes the network prediction, but also that they alone are sufficient to reproduce the inference.
|
Biagio La Rosa, Roberto Capobianco, Daniele Nardi
| null | null | 2,020 |
ijcai
|
Learning Interpretable Representations with Informative Entanglements
| null |
Learning interpretable representations in an unsupervised setting is an important yet challenging task. Existing unsupervised interpretable methods focus on extracting independent salient features from data. However, they overlook the fact that the entanglement of salient features may also be informative. Acknowledging these entanglements can improve interpretability, resulting in the extraction of higher-quality and a wider variety of salient features. In this paper, we propose a new method that enables Generative Adversarial Networks (GANs) to discover salient features that may be entangled in an informative manner, instead of extracting only disentangled features. Specifically, we propose a regularizer that penalizes the disagreement between the extracted feature interactions and a given dependency structure during training. We model these interactions using a Bayesian network, estimate the maximum likelihood parameters, and calculate a negative likelihood score to measure the disagreement. Upon qualitatively and quantitatively evaluating the proposed method on both synthetic and real-world datasets, we show that our regularizer guides GANs to learn representations with disentanglement scores that compete with the state of the art, while extracting a wider variety of salient features.
|
Ege Beyazıt, Doruk Tuncel, Xu Yuan, Nian-Feng Tzeng, Xindong Wu
| null | null | 2,020 |
ijcai
|
Neural Representation and Learning of Hierarchical 2-additive Choquet Integrals
| null |
Multi-Criteria Decision Making (MCDM) aims at modelling expert preferences and assisting decision makers in identifying the options that best accommodate expert criteria. An instance of MCDM models, the Choquet integral is widely used in real-world applications due to its ability to capture interactions between criteria while retaining interpretability. Aimed at better scalability and modularity, hierarchical Choquet integrals involve intermediate aggregations of the interacting criteria, at the cost of a more complex elicitation. This paper presents a machine learning-based approach for the automatic identification of hierarchical MCDM models, composed of 2-additive Choquet integral aggregators and of marginal utility functions on the raw features, from data reflecting expert preferences. The proposed NEUR-HCI framework relies on a specific neural architecture, enforcing the Choquet model constraints by design and supporting end-to-end training. The empirical validation of NEUR-HCI on real-world and artificial benchmarks demonstrates the merits of the approach compared to state-of-the-art baselines.
|
Roman Bresson, Johanne Cohen, Eyke Hüllermeier, Christophe Labreuche, Michèle Sebag
| null | null | 2,020 |
ijcai
|
SI-VDNAS: Semi-Implicit Variational Dropout for Hierarchical One-shot Neural Architecture Search
| null |
Bayesian methods have improved the interpretability and stability of neural architecture search (NAS). In this paper, we propose a novel probabilistic approach, namely Semi-Implicit Variational Dropout one-shot Neural Architecture Search (SI-VDNAS), that leverages semi-implicit variational dropout to support architecture search with variable operations and edges. SI-VDNAS achieves stable training that is not affected by over-selection of the skip-connect operation. Experimental results demonstrate that SI-VDNAS finds a convergent architecture with only 2.7 MB of parameters within 0.8 GPU-days and achieves a 2.60% top-1 error rate on CIFAR-10. The convergent architecture obtains top-1 error rates of 16.20% and 25.6% when transferred to CIFAR-100 and ImageNet (mobile setting), respectively.
|
Yaoming Wang, Wenrui Dai, Chenglin Li, Junni Zou, Hongkai Xiong
| null | null | 2,020 |
ijcai
|
Direct Quantization for Training Highly Accurate Low Bit-width Deep Neural Networks
| null |
This paper proposes two novel techniques to train deep convolutional neural networks with low bit-width weights and activations. First, to obtain low bit-width weights, most existing methods derive the quantized weights by performing quantization on the full-precision network weights. However, this approach results in a mismatch: gradient descent updates the full-precision weights, but it does not update the quantized weights. To address this issue, we propose a novel method that enables direct updating of quantized weights, with learnable quantization levels, to minimize the cost function using gradient descent. Second, to obtain low bit-width activations, existing works consider all channels equally. However, the activation quantizers could be biased toward a few channels with high variance. To address this issue, we propose a method that takes into account the quantization errors of individual channels. With this approach, we can learn activation quantizers that minimize the quantization errors in the majority of channels. Experimental results demonstrate that our proposed method achieves state-of-the-art performance on the image classification task, using AlexNet, ResNet and MobileNetV2 architectures on CIFAR-100 and ImageNet datasets.
|
Tuan Hoang, Thanh-Toan Do, Tam V. Nguyen, Ngai-Man Cheung
| null | null | 2,020 |
ijcai
|
Compressed Self-Attention for Deep Metric Learning with Low-Rank Approximation
| null |
In this paper, we apply the self-attention (SA) mechanism to boost the performance of deep metric learning. However, due to the pairwise similarity measurement, the cost of storing and manipulating the complete attention maps makes it infeasible for large inputs. To solve this problem, we propose a compressed self-attention with low-rank approximation (CSALR) module, which significantly reduces the computation and memory costs without sacrificing accuracy. In CSALR, the original attention map is decomposed into a landmark attention map and a combination coefficient map with a small number of landmark feature vectors sampled from the input feature map by average pooling. Thanks to the efficiency of CSALR, we can apply CSALR to high-resolution shallow convolutional layers and implement a multi-head form of CSALR, which further boosts the performance. We evaluate the proposed CSALR on person re-identification, a typical metric learning task. Extensive experiments show the effectiveness and efficiency of CSALR in deep metric learning and its superiority over the baselines.
|
Ziye Chen, Mingming Gong, Lingjuan Ge, Bo Du
| null | null | 2,020 |
ijcai
|
An Online Learning Framework for Energy-Efficient Navigation of Electric Vehicles
| null |
Energy-efficient navigation constitutes an important challenge in electric vehicles, due to their limited battery capacity. We employ a Bayesian approach to model the energy consumption at road segments for efficient navigation. In order to learn the model parameters, we develop an online learning framework and investigate several exploration strategies such as Thompson Sampling and Upper Confidence Bound. We then extend our online learning framework to the multi-agent setting, where multiple vehicles adaptively navigate and learn the parameters of the energy model. We analyze Thompson Sampling and establish rigorous regret bounds on its performance. Finally, we demonstrate the performance of our methods via several real-world experiments on the Luxembourg SUMO Traffic dataset.
|
Niklas Åkerblom, Yuxin Chen, Morteza Haghir Chehreghani
| null | null | 2,020 |
ijcai
|
Self-paced Consensus Clustering with Bipartite Graph
| null |
Consensus clustering provides a framework for ensembling multiple clustering results to obtain a consensus and robust result. Most existing consensus clustering methods apply all data to ensemble learning, ignoring the side effects caused by difficult or unreliable instances. To tackle this problem, we propose a novel self-paced consensus clustering method that gradually involves instances, from more reliable to less reliable ones, in the ensemble learning. We first construct an initial bipartite graph from the multiple base clustering results, where the nodes represent the instances and clusters and the edges indicate that an instance belongs to a cluster. Then, we learn a structured bipartite graph from the initial one by self-paced learning, i.e., we automatically decide the reliability of each edge and involve the edges in graph learning in order of their reliability. At last, we obtain the final consensus clustering result from the learned bipartite graph. Extensive experimental results demonstrate the effectiveness and superiority of the proposed method.
|
Peng Zhou, Liang Du, Xuejun Li
| null | null | 2,020 |
ijcai
|
The Sparse MinMax k-Means Algorithm for High-Dimensional Clustering
| null |
Classical clustering methods usually face tough challenges when we have a larger set of features compared to the number of items to be partitioned. We propose a Sparse MinMax k-Means Clustering approach by reformulating the objective of the MinMax k-Means algorithm (a variation of classical k-Means that minimizes the maximum intra-cluster variance instead of the sum of intra-cluster variances), into a new weighted between-cluster sum of squares (BCSS) form. We impose sparse regularization on these weights to make it suitable for high-dimensional clustering. We seek to use the advantages of the MinMax k-Means algorithm in the high-dimensional space to generate good quality clusters. The efficacy of the proposal is showcased through comparison against a few representative clustering methods over several real world datasets.
|
Sayak Dey, Swagatam Das, Rammohan Mallipeddi
| null | null | 2,020 |
ijcai
|
Learning Large Logic Programs By Going Beyond Entailment
| null |
A major challenge in inductive logic programming (ILP) is learning large programs. We argue that a key limitation of existing systems is that they use entailment to guide the hypothesis search. This approach is limited because entailment is a binary decision: a hypothesis either entails an example or does not, and there is no intermediate position. To address this limitation, we go beyond entailment and use 'example-dependent' loss functions to guide the search, where a hypothesis can partially cover an example. We implement our idea in Brute, a new ILP system which uses best-first search, guided by an example-dependent loss function, to incrementally build programs. Our experiments on three diverse program synthesis domains (robot planning, string transformations, and ASCII art), show that Brute can substantially outperform existing ILP systems, both in terms of predictive accuracies and learning times, and can learn programs 20 times larger than state-of-the-art systems.
|
Andrew Cropper, Sebastijan Dumančic
| null | null | 2,020 |
ijcai
|
Potential Driven Reinforcement Learning for Hard Exploration Tasks
| null |
Experience replay plays a crucial role in Reinforcement Learning (RL), enabling the agent to remember and reuse experience from the past. Most previous methods sample experience transitions using simple heuristics like uniformly sampling or prioritizing those good ones. Since humans can learn from both good and bad experiences, more sophisticated experience replay algorithms need to be developed. Inspired by the potential energy in physics, this work introduces the artificial potential field into experience replay and develops Potentialized Experience Replay (PotER) as a new and effective sampling algorithm for RL in hard exploration tasks with sparse rewards. PotER defines a potential energy function for each state in experience replay and helps the agent to learn from both good and bad experiences using intrinsic state supervision. PotER can be combined with different RL algorithms as well as the self-imitation learning algorithm. Experimental analyses and comparisons on multiple challenging hard exploration environments have verified its effectiveness and efficiency.
|
Enmin Zhao, Shihong Deng, Yifan Zang, Yongxin Kang, Kai Li, Junliang Xing
| null | null | 2,020 |
ijcai
|
Marthe: Scheduling the Learning Rate Via Online Hypergradients
| null |
We study the problem of fitting task-specific learning rate schedules from the perspective of hyperparameter optimization, aiming at good generalization. We describe the structure of the gradient of a validation error w.r.t. the learning rate schedule -- the hypergradient. Based on this, we introduce MARTHE, a novel online algorithm guided by cheap approximations of the hypergradient that uses past information from the optimization trajectory to simulate future behaviour. It interpolates between two recent techniques, RTHO (Franceschi et al., 2017) and HD (Baydin et al., 2018), and is able to produce learning rate schedules that are more stable, leading to models that generalize better.
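To make the hypergradient idea concrete, here is a minimal Python sketch of an online learning rate update in the spirit of HD (Baydin et al., 2018), one of the two methods MARTHE interpolates between; the toy quadratic objective and the hyper-learning rate beta are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def grad(theta):
    # gradient of the toy objective f(theta) = 0.5 * ||theta||^2
    return theta

theta = np.array([5.0, -3.0])
lr, beta = 0.01, 1e-4            # initial learning rate and hyper-learning rate (assumed values)
prev_grad = np.zeros_like(theta)
for t in range(100):
    g = grad(theta)
    lr = lr + beta * float(g @ prev_grad)   # hypergradient step: adapt the learning rate online
    theta = theta - lr * g                  # ordinary SGD step with the adapted rate
    prev_grad = g
```

The dot product of consecutive gradients is a cheap proxy for the hypergradient of the validation error with respect to the learning rate; MARTHE refines this by also using information accumulated along the optimization trajectory.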
|
Michele Donini, Luca Franceschi, Orchid Majumder, Massimiliano Pontil, Paolo Frasconi
| null | null | 2,020 |
ijcai
|
Handling Black Swan Events in Deep Learning with Diversely Extrapolated Neural Networks
| null |
By virtue of their expressive power, neural networks (NNs) are well suited to fitting large, complex datasets, yet they are also known to produce similar predictions for points outside the training distribution. As such, they are, like humans, under the influence of the Black Swan theory: models tend to be extremely "surprised" by rare events, leading to potentially disastrous consequences, while justifying these same events in hindsight. To avoid this pitfall, we introduce DENN, an ensemble approach building a set of Diversely Extrapolated Neural Networks that fits the training data and is able to generalize more diversely when extrapolating to novel data points. This leads DENN to output highly uncertain predictions for unexpected inputs. We achieve this by adding a diversity term in the loss function used to train the model, computed at specific inputs. We first illustrate the usefulness of the method on a low-dimensional regression problem. Then, we show how the loss can be adapted to tackle anomaly detection during classification, as well as safe imitation learning problems.
|
Maxime Wabartha, Audrey Durand, Vincent François-Lavet, Joelle Pineau
| null | null | 2,020 |
ijcai
|
Bayesian Decision Process for Budget-efficient Crowdsourced Clustering
| null |
The performance of clustering depends on an appropriately defined similarity between two items. When similarity is measured based on human perception, human workers are often employed to estimate similarity scores between items in order to support clustering, a procedure called crowdsourced clustering. Assuming that a monetary reward is paid to a worker for each similarity score, and that pairwise similarities and worker reliabilities vary widely, it is critical under a limited budget to wisely assign pairs of items to different workers to optimize the clustering result. We model this budget allocation problem as a Markov decision process in which item pairs are dynamically assigned to workers based on the similarity scores they have provided so far. We propose an optimistic knowledge gradient policy where the assignment of items in each stage is based on the minimum-weight K-cut defined on a similarity graph. We provide simulation studies and real data analysis to demonstrate the performance of the proposed method.
|
Xiaozhou Wang, Xi Chen, Qihang Lin, Weidong Liu
| null | null | 2,020 |
ijcai
|
Fully Nested Neural Network for Adaptive Compression and Quantization
| null |
Neural network compression and quantization are important tasks for fitting state-of-the-art models into the computational, memory and power constraints of mobile devices and embedded hardware. Recent approaches to model compression/quantization are based on reinforcement learning or search methods to quantize the neural network for a specific hardware platform. However, these methods require multiple runs to compress/quantize the same base neural network for different hardware setups. In this work, we propose a fully nested neural network (FN3) that runs only once to build a nested set of compressed/quantized models, which is optimal for different resource constraints. Specifically, we exploit the additive characteristic of different levels of building blocks in neural networks and propose an ordered dropout (ODO) operation that ranks the building blocks. Given a trained FN3, a fast heuristic search algorithm is run offline to find the optimal removal of components that maximizes accuracy under different constraints. Compared with related work on adaptive neural networks designed only for channels or bits, the proposed approach is applicable to different levels of building blocks (bits, neurons, channels, residual paths and layers). Empirical results validate the strong practical performance of the proposed approach.
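To illustrate what an ordered dropout operation can look like, the sketch below samples a width at each training step and keeps only the first k output units, so units become implicitly ranked by importance and the network can later be truncated to any budget. This is a generic illustration under an assumed uniform width distribution, not the paper's exact ODO.

```python
import torch
import torch.nn as nn

class OrderedDropoutLinear(nn.Module):
    """Linear layer with a toy ordered dropout: during training, a random prefix of
    the output units is kept, so earlier units are trained under every budget."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim)
        self.out_dim = out_dim

    def forward(self, x):
        h = self.fc(x)
        if self.training:
            k = torch.randint(1, self.out_dim + 1, (1,)).item()   # sampled width
            mask = torch.zeros(self.out_dim, device=h.device)
            mask[:k] = 1.0                                         # keep only the first k units
            h = h * mask
        return h

layer = OrderedDropoutLinear(64, 32)
out = layer(torch.randn(8, 64))   # in training mode, only a prefix of the 32 outputs is active
```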
|
Yufei Cui, Ziquan Liu, Wuguannan Yao, Qiao Li, Antoni B. Chan, Tei-wei Kuo, Chun Jason Xue
| null | null | 2,020 |
ijcai
|
Variational Learning of Bayesian Neural Networks via Bayesian Dark Knowledge
| null |
Bayesian neural networks (BNNs) have received increasing attention because they can model epistemic uncertainty, which is hard for conventional neural networks. Markov chain Monte Carlo (MCMC) methods and variational inference (VI) are the two mainstream approaches to Bayesian deep learning. The former is effective but its storage cost is prohibitive, since it has to save many samples of the neural network parameters. The latter is more time- and space-efficient, but the approximate variational posterior limits its performance. In this paper, we aim to combine the advantages of the two methods by distilling MCMC samples into an approximate variational posterior. On the basis of an existing distillation technique, we first propose the variational Bayesian dark knowledge method. Moreover, we propose Bayesian dark prior knowledge, a novel distillation method which treats the MCMC posterior as the prior of a variational BNN. Both proposed methods not only reduce the space overhead of the teacher model, making them scalable, but also maintain a distilled posterior distribution capable of modeling epistemic uncertainty. Experimental results show that our methods outperform the existing distillation method in terms of predictive accuracy and uncertainty modeling.
|
Gehui Shen, Xi Chen, Zhihong Deng
| null | null | 2,020 |
ijcai
|
Learning from Few Positives: a Provably Accurate Metric Learning Algorithm to Deal with Imbalanced Data
| null |
Learning from imbalanced data, where the positive examples are very scarce, remains a challenging task from both a theoretical and algorithmic perspective. In this paper, we address this problem using a metric learning strategy. Unlike the state-of-the-art methods, our algorithm MLFP, for Metric Learning from Few Positives, learns a new representation that is used only when a test query is compared to a minority training example. From a geometric perspective, it artificially brings positive examples closer to the query without changing the distances to the negative (majority class) data. This strategy allows us to expand the decision boundaries around the positives, yielding a better F-Measure, a criterion which is suited to deal with imbalanced scenarios. Beyond the algorithmic contribution provided by MLFP, our paper presents generalization guarantees on the false positive and false negative rates. Extensive experiments conducted on several imbalanced datasets show the effectiveness of our method.
|
Rémi Viola, Rémi Emonet, Amaury Habrard, Guillaume Metzler, Marc Sebban
| null | null | 2,020 |
ijcai
|
Non-monotone DR-submodular Maximization over General Convex Sets
| null |
Many real-world problems can be cast as the optimization of DR-submodular functions defined over a convex domain. These functions play an important role in many areas of applied mathematics, such as machine learning, computer vision, operations research, communication systems and economics. In addition, they capture a subclass of non-convex optimization that provides both practical and theoretical guarantees. In this paper, we show that for maximizing non-monotone DR-submodular functions over a general convex set (such as up-closed convex sets, conic convex sets, etc.), the Frank-Wolfe algorithm achieves an approximation guarantee which depends on the convex set. To the best of our knowledge, this is the first approximation guarantee for this setting. Finally, we benchmark our algorithm on problems arising in the machine learning domain with real-world datasets.
|
Christoph Dürr, Nguyen Kim Thang, Abhinav Srivastav, Léo Tible
| null | null | 2,020 |
ijcai
|
Coloring Graph Neural Networks for Node Disambiguation
| null |
In this paper, we show that a simple coloring scheme can improve, both theoretically and empirically, the expressive power of Message Passing Neural Networks (MPNNs). More specifically, we introduce a graph neural network called Colored Local Iterative Procedure (CLIP) that uses colors to disambiguate identical node attributes, and we show that this representation is a universal approximator of continuous functions on graphs with node attributes. Our method relies on separability, a key topological characteristic that allows well-chosen neural networks to be extended into universal representations. Finally, we show experimentally that CLIP is capable of capturing structural characteristics that traditional MPNNs fail to distinguish, while being state-of-the-art on benchmark graph classification datasets.
|
George Dasoulas, Ludovic Dos Santos, Kevin Scaman, Aladin Virmaux
| null | null | 2,020 |
ijcai
|
Metric Learning in Optimal Transport for Domain Adaptation
| null |
Domain Adaptation aims at benefiting from a labeled dataset drawn from a source distribution to learn a model from examples generated from a different but related target distribution. Creating a domain-invariant representation between the source and target domains is the most widely used technique. A simple and robust way to perform this task consists in (i) representing the two domains by subspaces described by their respective eigenvectors and (ii) seeking a mapping function which aligns them. In this paper, we propose to use Optimal Transport (OT) and its associated Wasserstein distance to perform this alignment. While the idea of using OT in domain adaptation is not new, the original contribution of this paper is two-fold: (i) we derive a generalization bound on the target error involving several Wasserstein distances, which prompts us to optimize the ground metric of OT to reduce the target risk; (ii) from this theoretical analysis, we design an algorithm (MLOT) which optimizes a Mahalanobis distance, leading to a transportation plan that adapts better. Extensive experiments demonstrate the effectiveness of this original approach.
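As an illustration of the general idea (not the MLOT algorithm itself), the sketch below computes an entropic OT plan between source and target samples under a Mahalanobis ground metric with a fixed matrix M; MLOT additionally learns M from the generalization bound, which is omitted here, and the Sinkhorn solver and toy data are assumptions.

```python
import numpy as np

def mahalanobis_cost(Xs, Xt, M):
    # pairwise quadratic form (x_s - x_t)^T M (x_s - x_t)
    diff = Xs[:, None, :] - Xt[None, :, :]
    return np.einsum('ijk,kl,ijl->ij', diff, M, diff)

def sinkhorn(C, reg=0.1, n_iter=200):
    # entropic OT with uniform marginals; returns the transport plan
    K = np.exp(-C / reg)
    a = np.full(C.shape[0], 1.0 / C.shape[0])
    b = np.full(C.shape[1], 1.0 / C.shape[1])
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
Xs = rng.normal(size=(30, 4))                 # source samples
Xt = rng.normal(loc=1.0, size=(40, 4))        # shifted target samples
M = np.eye(4)                                 # identity = squared Euclidean; MLOT would learn this
P = sinkhorn(mahalanobis_cost(Xs, Xt, M))
Xs_mapped = (P / P.sum(axis=1, keepdims=True)) @ Xt   # barycentric mapping of the source
```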
|
Tanguy Kerdoncuff, Rémi Emonet, Marc Sebban
| null | null | 2,020 |
ijcai
|
Memory Augmented Neural Model for Incremental Session-based Recommendation
| null |
Increasing concerns with privacy have stimulated interest in Session-based Recommendation (SR) using no personal data other than what is observed in the current browser session. Existing methods are evaluated in static settings which rarely occur in real-world applications. To better address the dynamic nature of SR tasks, we study an incremental SR scenario, where new items and preferences appear continuously. We show that existing neural recommenders can be used in incremental SR scenarios with small incremental updates to alleviate computation overhead and catastrophic forgetting. More importantly, we propose a general framework called Memory Augmented Neural model (MAN). MAN augments a base neural recommender with a continuously queried and updated nonparametric memory, and the predictions from the neural and the memory components are combined through another lightweight gating network. We empirically show that MAN is well-suited for the incremental SR task, and it consistently outperforms state-of-the-art neural and nonparametric methods. We analyze the results and demonstrate that it is particularly good at incrementally learning preferences on new and infrequent items.
|
Fei Mi, Boi Faltings
| null | null | 2,020 |
ijcai
|
Disentangling Direct and Indirect Interactions in Polytomous Item Response Theory Models
| null |
Measurement is at the core of scientific discovery. However, some quantities, such as economic behavior or intelligence, do not allow for direct measurement. They represent latent constructs that require surrogate measurements. In other scenarios, non-observed quantities can influence the variables of interest. In either case, models with latent variables are needed. Here, we investigate fused latent and graphical models that exhibit continuous latent variables and discrete observed variables. These models are characterized by a decomposition of the pairwise interaction parameter matrix into a group-sparse component of direct interactions and a low-rank component of indirect interactions due to the latent variables. We first investigate when such a decomposition is identifiable. Then, we show that fused latent and graphical models can be recovered consistently from data in the high-dimensional setting. We support our theoretical findings with experiments on synthetic and real-world data from polytomous item response theory studies.
|
Frank Nussbaum, Joachim Giesen
| null | null | 2,020 |
ijcai
|
Human-Driven FOL Explanations of Deep Learning
| null |
Deep neural networks are usually considered black-boxes due to their complex internal architecture, that cannot straightforwardly provide human-understandable explanations on how they behave. Indeed, Deep Learning is still viewed with skepticism in those real-world domains in which incorrect predictions may produce critical effects. This is one of the reasons why in the last few years Explainable Artificial Intelligence (XAI) techniques have gained a lot of attention in the scientific community. In this paper, we focus on the case of multi-label classification, proposing a neural network that learns the relationships among the predictors associated to each class, yielding First-Order Logic (FOL)-based descriptions. Both the explanation-related network and the classification-related network are jointly learned, thus implicitly introducing a latent dependency between the development of the explanation mechanism and the development of the classifiers. Our model can integrate human-driven preferences that guide the learning-to-explain process, and it is presented in a unified framework. Different typologies of explanations are evaluated in distinct experiments, showing that the proposed approach discovers new knowledge and can improve the classifier performance.
|
Gabriele Ciravegna, Francesco Giannini, Marco Gori, Marco Maggini, Stefano Melacci
| null | null | 2,020 |
ijcai
|
Effective Search of Logical Forms for Weakly Supervised Knowledge-Based Question Answering
| null |
Many algorithms for Knowledge-Based Question Answering (KBQA) depend on semantic parsing, which translates a question to its logical form. When only weak supervision is provided, it is usually necessary to search valid logical forms for model training. However, a complex question typically involves a huge search space, which creates two main problems: 1) the solutions limited by computation time and memory usually reduce the success rate of the search, and 2) spurious logical forms in the search results degrade the quality of training data. These two problems lead to a poorly-trained semantic parsing model. In this work, we propose an effective search method for weakly supervised KBQA based on operator prediction for questions. With search space constrained by predicted operators, sufficient search paths can be explored, more valid logical forms can be derived, and operators possibly causing spurious logical forms can be avoided. As a result, a larger proportion of questions in a weakly supervised training set are equipped with logical forms, and fewer spurious logical forms are generated. Such high-quality training data directly contributes to a better semantic parsing model. Experimental results on one of the largest KBQA datasets (i.e., CSQA) verify the effectiveness of our approach and deliver a new state-of-the-art performance.
|
Tao Shen, Xiubo Geng, Guodong Long, Jing Jiang, Chengqi Zhang, Daxin Jiang
| null | null | 2,020 |
ijcai
|
Location Prediction over Sparse User Mobility Traces Using RNNs: Flashback in Hidden States!
| null |
Location prediction is a key problem in human mobility modeling, which predicts a user's next location based on historical user mobility traces. As a sequential prediction problem by nature, it has been recently studied using Recurrent Neural Networks (RNNs). Due to the sparsity of user mobility traces, existing techniques strive to improve RNNs by considering spatiotemporal contexts. The most adopted scheme is to incorporate spatiotemporal factors into the recurrent hidden state passing process of RNNs using context-parameterized transition matrices or gates. However, such a scheme oversimplifies the temporal periodicity and spatial regularity of user mobility, and thus cannot fully benefit from rich historical spatiotemporal contexts encoded in user mobility traces. Against this background, we propose Flashback, a general RNN architecture designed for modeling sparse user mobility traces by doing flashbacks on hidden states in RNNs. Specifically, Flashback explicitly uses spatiotemporal contexts to search past hidden states with high predictive power (i.e., historical hidden states sharing similar contexts as the current one) for location prediction, which can then directly benefit from rich spatiotemporal contexts. Our extensive evaluation compares Flashback against a sizable collection of state-of-the-art techniques on two real-world LBSN datasets. Results show that Flashback consistently and significantly outperforms state-of-the-art RNNs involving spatiotemporal factors by 15.9% to 27.6% in the next location prediction task.
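A rough sketch of the flashback intuition: aggregate past hidden states with weights that decay with temporal and spatial distance from the current query, then feed the aggregate to the prediction layer. The exponential decay form and the rates alpha and beta below are illustrative assumptions; the paper defines its own spatiotemporal weighting.

```python
import numpy as np

def flashback_context(H, dt, dd, alpha=0.1, beta=100.0):
    """H: (T, d) past RNN hidden states; dt: (T,) time gaps (e.g. hours);
    dd: (T,) spatial distances. States with similar spatiotemporal context
    to the current query (small dt, small dd) receive larger weights."""
    w = np.exp(-alpha * dt) * np.exp(-beta * dd)
    w = w / (w.sum() + 1e-12)
    return w @ H                       # (d,) aggregated hidden state

rng = np.random.default_rng(0)
H = rng.normal(size=(10, 32))          # ten past hidden states of dimension 32
dt = rng.uniform(0, 48, size=10)       # hours since each past check-in
dd = rng.uniform(0, 0.05, size=10)     # distances to each past check-in
h_agg = flashback_context(H, dt, dd)   # would be fed to the next-location classifier
```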
|
Dingqi Yang, Benjamin Fankhauser, Paolo Rosso, Philippe Cudre-Mauroux
| null | null | 2,020 |
ijcai
|
Solving Hard AI Planning Instances Using Curriculum-Driven Deep Reinforcement Learning
| null |
Despite significant progress in general AI planning, certain domains remain out of reach of current AI planning systems. Sokoban is a PSPACE-complete planning task and represents one of the hardest domains for current AI planners. Even domain-specific specialized search methods fail quickly due to the exponential search complexity on hard instances. Our approach based on deep reinforcement learning augmented with a curriculum-driven method is the first one to solve hard instances within one day of training while other modern solvers cannot solve these instances within any reasonable time limit. In contrast to prior efforts, which use carefully handcrafted pruning techniques, our approach automatically uncovers domain structure. Our results reveal that deep RL provides a promising framework for solving previously unsolved AI planning problems, provided a proper training curriculum can be devised.
|
Dieqiao Feng, Carla Gomes, Bart Selman
| null | null | 2,020 |
ijcai
|
Can Cross Entropy Loss Be Robust to Label Noise?
| null |
Trained with the standard cross entropy loss, deep neural networks can achieve great performance on correctly labeled data. However, if the training data is corrupted with label noise, deep models tend to overfit the noisy labels, thereby achieving poor generalization performance. To remedy this issue, several loss functions have been proposed and demonstrated to be robust to label noise. Although most of the robust loss functions stem from the Categorical Cross Entropy (CCE) loss, they fail to embody the intrinsic relationships between CCE and other loss functions. In this paper, we propose a general framework dubbed Taylor cross entropy loss to train deep models in the presence of label noise. Specifically, our framework enables weighting the extent of fitting the training labels by controlling the order of the Taylor series for CCE, hence it can be robust to label noise. In addition, our framework clearly reveals the intrinsic relationships between CCE and other loss functions, such as Mean Absolute Error (MAE) and Mean Squared Error (MSE). Moreover, we present a detailed theoretical analysis to certify the robustness of this framework. Extensive experimental results on benchmark datasets demonstrate that our proposed approach significantly outperforms state-of-the-art counterparts.
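The core construction can be sketched as truncating the Taylor expansion of -log p around p = 1, i.e. -log p ≈ Σ_{k=1..t} (1 - p)^k / k, so that the order t controls how strongly the training labels are fitted (t = 1 behaves like an MAE-style loss, large t approaches standard CE). The PyTorch snippet below is a minimal illustration under that reading; the paper's exact formulation and normalization may differ.

```python
import torch
import torch.nn.functional as F

def taylor_cross_entropy(logits, targets, order=2):
    """Truncated Taylor-series surrogate of cross entropy (illustrative sketch)."""
    # probability assigned to the true class
    p = F.softmax(logits, dim=1).gather(1, targets.unsqueeze(1)).squeeze(1)
    loss = torch.zeros_like(p)
    for k in range(1, order + 1):
        loss = loss + (1.0 - p) ** k / k
    return loss.mean()

# toy usage
logits = torch.randn(8, 10, requires_grad=True)
targets = torch.randint(0, 10, (8,))
taylor_cross_entropy(logits, targets, order=2).backward()
```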
|
Lei Feng, Senlin Shu, Zhuoyi Lin, Fengmao Lv, Li Li, Bo An
| null | null | 2,020 |
ijcai
|
KoGuN: Accelerating Deep Reinforcement Learning via Integrating Human Suboptimal Knowledge
| null |
Reinforcement learning agents usually learn from scratch, which requires a large number of interactions with the environment. This is quite different from the learning process of humans: when faced with a new task, humans naturally use common sense and prior knowledge to derive an initial policy and to guide the learning process afterwards. Although the prior knowledge may not be fully applicable to the new task, the learning process is significantly sped up, since the initial policy ensures a quick start of learning and intermediate guidance avoids unnecessary exploration. Taking this inspiration, we propose the knowledge guided policy network (KoGuN), a novel framework that combines human prior suboptimal knowledge with reinforcement learning. Our framework consists of a fuzzy rule controller to represent human knowledge and a refine module to fine-tune the suboptimal prior knowledge. The proposed framework is end-to-end and can be combined with existing policy-based reinforcement learning algorithms. We conduct experiments on several control tasks. The empirical results show that our approach, which combines suboptimal human knowledge and RL, achieves significant improvements in the learning efficiency of flat RL algorithms, even with very low-performance human prior knowledge.
|
Peng Zhang, Jianye Hao, Weixun Wang, Hongyao Tang, Yi Ma, Yihai Duan, Yan Zheng
| null | null | 2,020 |
ijcai
|
Soft Threshold Ternary Networks
| null |
Large neural networks are difficult to deploy on mobile devices because of intensive computation and storage. To alleviate this, we study ternarization, a balance between efficiency and accuracy that quantizes both weights and activations into ternary values. In previous ternarized neural networks, a hard threshold Δ is introduced to determine the quantization intervals. Although the selection of Δ greatly affects the training results, previous works estimate Δ via an approximation or treat it as a hyper-parameter, which is suboptimal. In this paper, we present Soft Threshold Ternary Networks (STTN), which enable the model to determine the quantization intervals automatically instead of depending on a hard threshold. Concretely, we replace the original ternary kernel with the addition of two binary kernels at training time, where the ternary values are determined by the combination of the two corresponding binary values. At inference time, we add up the two binary kernels to obtain a single ternary kernel. Our method dramatically outperforms the current state of the art, narrowing the performance gap between full-precision networks and extreme low-bit networks. Experiments on ImageNet with AlexNet (top-1 55.6%) and ResNet-18 (top-1 66.2%) achieve a new state of the art.
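A toy, forward-only sketch of the decomposition idea: represent the ternary kernel as the sum of two binary kernels during training and merge them at inference, so the quantization interval emerges from their combination rather than from a hand-set threshold Δ. Scaling factors and the straight-through gradient actually used for training are omitted, and the numpy setup is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 3))        # two latent full-precision kernels
W2 = rng.normal(size=(3, 3))

B1, B2 = np.sign(W1), np.sign(W2)   # two binary kernels with values in {-1, +1}
W_train = B1 + B2                   # training-time combination: values in {-2, 0, +2}
                                    # (agreement -> +/-2, disagreement -> 0)

W_ternary = W_train / 2.0           # inference-time single ternary kernel in {-1, 0, +1}
print(np.unique(W_ternary))
```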
|
Weixiang Xu, Xiangyu He, Tianli Zhao, Qinghao Hu, Peisong Wang, Jian Cheng
| null | null | 2,020 |
ijcai
|
Randomised Gaussian Process Upper Confidence Bound for Bayesian Optimisation
| null |
In order to improve the performance of Bayesian optimisation, we develop a modified Gaussian process upper confidence bound (GP-UCB) acquisition function. This is done by sampling the exploration-exploitation trade-off parameter from a distribution. We prove that this allows the expected trade-off parameter to be altered to better suit the problem without compromising a bound on the function's Bayesian regret. We also provide results showing that our method achieves better performance than GP-UCB in a range of real-world and synthetic problems.
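A minimal sketch of the idea with scikit-learn: at each Bayesian optimisation step the exploration-exploitation trade-off parameter is sampled from a distribution rather than fixed. The exponential sampling distribution, Matérn kernel, and toy objective below are assumptions made for illustration, not necessarily those analysed in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
f = lambda x: -np.sin(3 * x) - x ** 2 + 0.7 * x        # toy 1-D objective to maximise
X = rng.uniform(-2, 2, size=(5, 1))                    # initial design points
y = f(X).ravel()
candidates = np.linspace(-2, 2, 400).reshape(-1, 1)

for t in range(20):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    beta = rng.exponential(scale=2.0)                  # sampled trade-off parameter (assumed dist.)
    ucb = mu + np.sqrt(beta) * sigma                   # randomised GP-UCB acquisition
    x_next = candidates[np.argmax(ucb)].reshape(1, -1)
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next).ravel())
```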
|
Julian Berk, Sunil Gupta, Santu Rana, Svetha Venkatesh
| null | null | 2,020 |
ijcai
|
Fairness-Aware Neural Rényi Minimization for Continuous Features
| null |
The past few years have seen a dramatic rise in academic and societal interest in fair machine learning. While plenty of fair algorithms have been proposed recently to tackle this challenge for discrete variables, only a few ideas exist for continuous ones. The objective of this paper is to ensure some level of independence between the outputs of regression models and any given continuous sensitive variable. For this purpose, we use the Hirschfeld-Gebelein-Rényi (HGR) maximal correlation coefficient as a fairness metric. We propose to minimize the HGR coefficient directly with an adversarial neural network architecture. The idea is to predict the output Y while minimizing the ability of an adversarial neural network to find the estimated transformations required to predict the HGR coefficient. We empirically assess and compare our approach and demonstrate significant improvements over previously presented work in the field.
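A minimal PyTorch sketch of the adversarial HGR estimation idea: two small networks transform the prediction and the sensitive attribute, and the correlation of the standardised transforms is maximised by the adversary, while the main regressor would penalise this estimate. The network architectures, optimiser, and toy tensors are assumptions; the paper's exact training procedure may differ.

```python
import torch
import torch.nn as nn

# hypothetical small transformation networks for 1-D prediction and 1-D sensitive attribute
f_net = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))
g_net = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))
opt_adv = torch.optim.Adam(list(f_net.parameters()) + list(g_net.parameters()), lr=1e-3)

def hgr_estimate(y_pred, s):
    # correlation of the standardised transformed variables approximates the HGR coefficient
    u, v = f_net(y_pred), g_net(s)
    u = (u - u.mean()) / (u.std() + 1e-8)
    v = (v - v.mean()) / (v.std() + 1e-8)
    return (u * v).mean()

y_pred = torch.randn(256, 1)   # stand-in for regression outputs
s = torch.randn(256, 1)        # continuous sensitive attribute
for _ in range(200):           # adversary step: maximise the estimated correlation
    opt_adv.zero_grad()
    (-hgr_estimate(y_pred, s)).backward()
    opt_adv.step()
# The regressor is then trained on task_loss + lam * hgr_estimate(y_pred, s) ** 2,
# with gradients flowing through y_pred (an adversarial min-max game).
```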
|
Vincent Grari, Sylvain Lamprier, Marcin Detyniecki
| null | null | 2,020 |
ijcai
|
DeepView: Visualizing Classification Boundaries of Deep Neural Networks as Scatter Plots Using Discriminative Dimensionality Reduction
| null |
Machine learning algorithms using deep architectures have been able to implement increasingly powerful and successful models. However, they also become increasingly more complex, more difficult to comprehend and easier to fool. So far, most methods in the literature investigate the decision of the model for a single given input datum. In this paper, we propose to visualize a part of the decision function of a deep neural network together with a part of the data set in two dimensions with discriminative dimensionality reduction. This enables us to inspect how different properties of the data are treated by the model, such as outliers, adversaries or poisoned data. Further, the presented approach is complementary to the mentioned interpretation methods from the literature and hence might be even more useful in combination with those. Code is available at https://github.com/LucaHermes/DeepView
|
Alexander Schulz, Fabian Hinder, Barbara Hammer
| null | null | 2,020 |
ijcai
|
Online Positive and Unlabeled Learning
| null |
Positive and Unlabeled learning (PU learning) aims to build a binary classifier where only positive and unlabeled data are available for classifier training. However, existing PU learning methods all work on a batch learning mode, which cannot deal with the online learning scenarios with sequential data. Therefore, this paper proposes a novel positive and unlabeled learning algorithm in an online training mode, which trains a classifier solely on the positive and unlabeled data arriving in a sequential order. Specifically, we adopt an unbiased estimate for the loss induced by the arriving positive or unlabeled examples at each time. Then we show that for any coming new single datum, the model can be updated independently and incrementally by gradient based online learning method. Furthermore, we extend our method to tackle the cases when more than one example is received at each time. Theoretically, we show that the proposed online PU learning method achieves low regret even though it receives sequential positive and unlabeled data. Empirically, we conduct intensive experiments on both benchmark and real-world datasets, and the results clearly demonstrate the effectiveness of the proposed method.
|
Chuang Zhang, Chen Gong, Tengfei Liu, Xun Lu, Weiqiang Wang, Jian Yang
| null | null | 2,020 |
ijcai
|
Intention2Basket: A Neural Intention-driven Approach for Dynamic Next-basket Planning
| null |
User purchase behaviours are complex and dynamic, and are usually observed as multiple choice actions across a sequence of shopping baskets. Most of the existing next-basket prediction approaches model user actions as homogeneous sequence data without considering complex and heterogeneous user intentions, impeding deep understanding of user behaviours from the perspective of humans' inner drivers and thus reducing the prediction performance. Psychological theories have indicated that user actions are essentially driven by certain underlying intentions (e.g., diet and entertainment). Moreover, different intentions may influence each other, while different choices usually have different utilities in accomplishing an intention. Inspired by such psychological insights, we formalize next-basket prediction as an Intention Recognition, Modelling and Accomplishing problem and further design the Intention2Basket (Int2Ba in short) model. In Int2Ba, an Intention Recognizer, a Coupled Intention Chain Net, and a Dynamic Basket Planner are specifically designed to respectively recognize, model and accomplish the heterogeneous intentions behind a sequence of baskets to better plan the next basket. Extensive experiments on real-world datasets show the superiority of Int2Ba over state-of-the-art approaches.
|
Shoujin Wang, Liang Hu, Yan Wang, Quan Z. Sheng, Mehmet Orgun, Longbing Cao
| null | null | 2,020 |
ijcai
|
Interpretable Models for Understanding Immersive Simulations
| null |
This paper describes methods for comparative evaluation of the interpretability of models of high dimensional time series data inferred by unsupervised machine learning algorithms. The time series data used in this investigation were logs from an immersive simulation like those commonly used in education and healthcare training. The structures learnt by the models provide representations of participants' activities in the simulation which are intended to be meaningful to people's interpretation. To choose the model that induces the best representation, we designed two interpretability tests, each of which evaluates the extent to which a model’s output aligns with people’s expectations or intuitions of what has occurred in the simulation. We compared the performance of the models on these interpretability tests to their performance on statistical information criteria. We show that the models that optimize interpretability quality differ from those that optimize (statistical) information theoretic criteria. Furthermore, we found that a model using a fully Bayesian approach performed well on both the statistical and human-interpretability measures. The Bayesian approach is a good candidate for fully automated model selection, i.e., when direct empirical investigations of interpretability are costly or infeasible.
|
Nicholas Hoernle, Kobi Gal, Barbara Grosz, Leilah Lyons, Ada Ren, Andee Rubin
| null | null | 2,020 |
ijcai
|
Complete Bottom-Up Predicate Invention in Meta-Interpretive Learning
| null |
Predicate Invention in Meta-Interpretive Learning (MIL) is generally based on a top-down approach, and the search for a consistent hypothesis is carried out starting from the positive examples as goals. We consider augmenting top-down MIL systems with a bottom-up step during which the background knowledge is generalised with an extension of the immediate consequence operator for second-order logic programs. This new method provides a way to perform extensive predicate invention useful for feature discovery. We demonstrate this method is complete with respect to a fragment of dyadic datalog. We theoretically prove this method reduces the number of clauses to be learned for the top-down learner, which in turn can reduce the sample complexity. We formalise an equivalence relation for predicates which is used to eliminate redundant predicates. Our experimental results suggest pairing the state-of-the-art MIL system Metagol with an initial bottom-up step can significantly improve learning performance.
|
Céline Hocquette, Stephen H. Muggleton
| null | null | 2,020 |
ijcai
|
DropNAS: Grouped Operation Dropout for Differentiable Architecture Search
| null |
Neural architecture search (NAS) has shown encouraging results in automating architecture design. Recently, DARTS relaxed the search process with a differentiable formulation that leverages weight-sharing and SGD to reduce the cost of NAS. In DARTS, all candidate operations are trained simultaneously during the network weight training step. Our empirical results show that this training procedure leads to a co-adaptation problem and the Matthew Effect: operations with fewer parameters are trained to maturity earlier. This causes two problems: first, the operations with more parameters may never get the chance to express the desired function, since those with fewer parameters have already done the job; second, the system punishes the underperforming operations by lowering their architecture parameters and back-propagating smaller loss gradients, which causes the Matthew Effect. In this paper, we systematically study these problems and propose a novel grouped operation dropout algorithm named DropNAS to fix them in DARTS. Extensive experiments demonstrate that DropNAS solves the above issues and achieves promising performance. Specifically, DropNAS achieves a 2.26% test error on CIFAR-10, 16.39% on CIFAR-100 and 23.4% on ImageNet (with the same training hyperparameters as DARTS for a fair comparison). We also observe that DropNAS is robust across variants of the DARTS search space. Code is available at https://github.com/huawei-noah.
|
Weijun Hong, Guilin Li, Weinan Zhang, Ruiming Tang, Yunhe Wang, Zhenguo Li, Yong Yu
| null | null | 2,020 |
ijcai
|
Arbitrary Talking Face Generation via Attentional Audio-Visual Coherence Learning
| null |
Talking face generation aims to synthesize a face video with precise lip synchronization as well as a smooth transition of facial motion over the entire video via the given speech clip and facial image. Most existing methods mainly focus on either disentangling the information in a single image or learning temporal information between frames. However, cross-modality coherence between audio and video information has not been well addressed during synthesis. In this paper, we propose a novel arbitrary talking face generation framework by discovering the audio-visual coherence via the proposed Asymmetric Mutual Information Estimator (AMIE). In addition, we propose a Dynamic Attention (DA) block by selectively focusing the lip area of the input image during the training stage, to further enhance lip synchronization. Experimental results on benchmark LRW dataset and GRID dataset transcend the state-of-the-art methods on prevalent metrics with robust high-resolution synthesizing on gender and pose variations.
|
Hao Zhu, Huaibo Huang, Yi Li, Aihua Zheng, Ran He
| null | null | 2,020 |
ijcai
|
Unsupervised Monocular Visual-inertial Odometry Network
| null |
Recently, unsupervised methods for monocular visual odometry (VO), which require no expensive labeled ground truth, have attracted much attention. However, these methods are inadequate for the long-term odometry task, due to the inherent limitation of using only monocular visual data and the inability to handle the error accumulation problem. By utilizing supplemental low-cost inertial measurements, and exploiting the multi-view geometric constraint and sequential constraint, an unsupervised visual-inertial odometry framework (UnVIO) is proposed in this paper. Our method is able to predict the per-frame depth map, as well as extract and self-adaptively fuse visual-inertial motion features from the image-IMU stream, to achieve the long-term odometry task. A novel sliding window optimization strategy, which consists of an intra-window and an inter-window optimization, is introduced to overcome the error accumulation and scale ambiguity problems. The intra-window optimization restrains the geometric inferences within the window by checking photometric consistency, and the inter-window optimization checks the 3D geometric consistency and trajectory consistency among predictions of separate windows. Extensive experiments have been conducted on the KITTI and Malaga datasets to demonstrate the superiority of UnVIO over other state-of-the-art VO/VIO methods. The code is open source.
|
Peng Wei, Guoliang Hua, Weibo Huang, Fanyang Meng, Hong Liu
| null | null | 2,020 |
ijcai
|
LSGCN: Long Short-Term Traffic Prediction with Graph Convolutional Networks
| null |
Traffic prediction is a classical spatial-temporal prediction problem with many real-world applications, such as intelligent route planning, dynamic traffic management, and smart location-based applications. Due to the high nonlinearity and complexity of traffic data, deep learning approaches have attracted much interest in recent years. However, few methods perform well on both long-term and short-term prediction tasks. Targeting the shortcomings of existing studies, in this paper we propose a novel deep learning framework called Long Short-term Graph Convolutional Networks (LSGCN) to tackle both traffic prediction tasks. In our framework, we propose a new graph attention network called cosAtt, and integrate both cosAtt and graph convolutional networks (GCN) into a spatial gated block. With the spatial gated block and gated linear unit convolutions (GLU), LSGCN can efficiently capture complex spatial-temporal features and obtain stable prediction results. Experiments with three real-world traffic datasets verify the effectiveness of LSGCN.
|
Rongzhou Huang, Chuyin Huang, Yubao Liu, Genan Dai, Weiyang Kong
| null | null | 2,020 |
ijcai
|