title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Interpreting Deep Neural Networks with Relative Sectional Propagation by Analyzing Comparative Gradients and Hostile Activations
| null |
The transparency of Deep Neural Networks (DNNs) is hampered by complex internal structures and nonlinear transformations along deep hierarchies. In this paper, we propose a new attribution method, Relative Sectional Propagation (RSP), for fully decomposing the output predictions with the characteristics of class-discriminative attributions and clear objectness. We carefully revisit some shortcomings of backpropagation-based attribution methods, which entail trade-offs in decomposing DNNs. We define a hostile factor as an element that interferes with finding the attributions of the target, and propagate it in a distinguishable way to overcome the non-suppressed nature of activated neurons. As a result, it is possible to assign bipolar relevance scores to the target (positive) and hostile (negative) attributions while keeping each attribution aligned with its importance. We also present purging techniques to prevent the gap between the relevance scores of the target and hostile attributions from shrinking during backward propagation, by eliminating the units that conflict with the channel attribution map. Our method therefore decomposes the predictions of DNNs with clearer class-discriminativeness and more detailed elucidation of activated neurons than conventional attribution methods. In a verified experimental environment, we report the results of three assessments: (i) Pointing Game, (ii) mIoU, and (iii) Model Sensitivity on the PASCAL VOC 2007, MS COCO 2014, and ImageNet datasets. The results demonstrate that our method outperforms existing backward decomposition methods and yields distinctive and intuitive visualizations.
|
Woo-Jeoung Nam, Jaesik Choi, Seong-Whan Lee
| null | null | 2021 |
aaai
|
How RL Agents Behave When Their Actions Are Modified
| null |
Reinforcement learning in complex environments may require supervision to prevent the agent from attempting dangerous actions. As a result of supervisor intervention, the executed action may differ from the action specified by the policy. How does this affect learning? We present the Modified-Action Markov Decision Process, an extension of the MDP model that allows actions to differ from the policy. We analyze the asymptotic behaviours of common reinforcement learning algorithms in this setting and show that they adapt in different ways: some completely ignore modifications while others go to various lengths in trying to avoid action modifications that decrease reward. By choosing the right algorithm, developers can prevent their agents from learning to circumvent interruptions or constraints, and better control agent responses to other kinds of action modification, like self-damage.
|
Eric D. Langlois, Tom Everitt
| null | null | 2021 |
aaai
|
Verifiable Machine Ethics in Changing Contexts
| null |
Many systems proposed for the implementation of ethical reasoning involve an encoding of user values as a set of rules or a model. We consider the question of how changes of context affect these encodings. We propose the use of a reasoning cycle, in which information about the ethical reasoner's context is imported in a logical form, and we propose that context-specific aspects of an ethical encoding be prefaced by a guard formula. This guard formula should evaluate to true when the reasoner is in the appropriate context and the relevant parts of the reasoner's rule set or model should be updated accordingly. This architecture allows techniques for the model-checking of agent-based autonomous systems to be used to verify that all contexts respect key stakeholder values. We implement this framework using the hybrid ethical reasoning agents system (HERA) and the model-checking agent programming languages (MCAPL) framework.
|
Louise A. Dennis, Martin Mose Bentzen, Felix Lindner, Michael Fisher
| null | null | 2021 |
aaai
|
Outlier Impact Characterization for Time Series Data
| null |
For time series data, certain types of outliers are intrinsically more harmful for parameter estimation and future predictions than others, irrespective of their frequency. In this paper, for the first time, we study the characteristics of such outliers through the lens of the influence functional from robust statistics. In particular, we consider the input time series as a contaminated process, with the recurring outliers generated from an unknown contaminating process. Then we leverage the influence functional to understand the impact of the contaminating process on parameter estimation. The influence functional results in a multi-dimensional vector that measures the sensitivity of the predictive model to the contaminating process, which can be challenging to interpret, especially for models with a large number of parameters. To this end, we further propose a comprehensive single-valued metric (the SIF) to measure outlier impacts on future predictions. It provides a quantitative measure of the outlier impacts, which can be used in a variety of scenarios, such as the evaluation of outlier detection methods or the creation of more harmful outliers. The empirical results on multiple real data sets demonstrate the effectiveness of the proposed SIF metric.
|
Jianbo Li, Lecheng Zheng, Yada Zhu, Jingrui He
| null | null | 2021 |
aaai
|
Robustness of Accuracy Metric and its Inspirations in Learning with Noisy Labels
| null |
For multi-class classification under class-conditional label noise, we prove that the accuracy metric itself can be robust. We concretize this finding's inspiration in two essential aspects: training and validation, with which we address critical issues in learning with noisy labels. For training, we show that maximizing training accuracy on sufficiently many noisy samples yields an approximately optimal classifier. For validation, we prove that a noisy validation set is reliable, addressing the critical demand of model selection in scenarios like hyperparameter tuning and early stopping. Previously, model selection using noisy validation samples had not been theoretically justified. We verify our theoretical results and additional claims with extensive experiments. We show characterizations of models trained with noisy labels, motivated by our theoretical results, and verify the utility of a noisy validation set by showing the impressive performance of a framework termed noisy best teacher and student (NTS). Our code is released.
|
Pengfei Chen, Junjie Ye, Guangyong Chen, Jingwei Zhao, Pheng-Ann Heng
| null | null | 2021 |
aaai
|
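The claim above that a noisy validation set remains reliable can be illustrated with a small numerical sketch. This is an illustration under an assumed symmetric class-conditional noise model (not the paper's construction): under such noise, the expected accuracy measured on noisy labels is an affine, increasing function of the clean accuracy, so the ranking of models is preserved.

```python
def noisy_accuracy(clean_acc, noise_rate, num_classes):
    """Expected accuracy measured against labels corrupted by symmetric
    class-conditional noise: a correct prediction still agrees with the
    noisy label with prob (1 - eps); an incorrect one agrees with
    prob eps / (K - 1)."""
    eps, k = noise_rate, num_classes
    return clean_acc * (1 - eps) + (1 - clean_acc) * eps / (k - 1)

# The map is affine and increasing in clean_acc whenever eps < (K-1)/K,
# so ranking models by noisy validation accuracy reproduces the clean
# ranking -- the property that makes a noisy validation set usable.
models = {"A": 0.90, "B": 0.80, "C": 0.70}  # hypothetical clean accuracies
noisy = {m: noisy_accuracy(acc, noise_rate=0.3, num_classes=10)
         for m, acc in models.items()}
ranking_clean = sorted(models, key=models.get, reverse=True)
ranking_noisy = sorted(noisy, key=noisy.get, reverse=True)
print(ranking_clean == ranking_noisy)  # True
```

The model names and accuracies here are hypothetical; the point is only that an order-preserving map makes model selection on noisy labels meaningful.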
Beyond Class-Conditional Assumption: A Primary Attempt to Combat Instance-Dependent Label Noise
| null |
Supervised learning under label noise has seen numerous advances recently, while existing theoretical findings and empirical results broadly build on the class-conditional noise (CCN) assumption that the noise is independent of input features given the true label. In this work, we present a theoretical hypothesis test and prove that noise in real-world datasets is unlikely to be CCN, which confirms that label noise should depend on the instance and justifies the urgent need to go beyond the CCN assumption. The theoretical results motivate us to study the more general and practically relevant instance-dependent noise (IDN). To stimulate the development of theory and methodology on IDN, we formalize an algorithm to generate controllable IDN and present both theoretical and empirical evidence showing that IDN is semantically meaningful and challenging. As a primary attempt to combat IDN, we present a tiny algorithm termed self-evolution average label (SEAL), which not only stands out under IDN with various noise fractions, but also improves generalization on the real-world noise benchmark Clothing1M. Our code is released. Notably, our theoretical analysis in Section 2 provides rigorous motivation for studying IDN, an important topic that deserves more research attention in the future.
|
Pengfei Chen, Junjie Ye, Guangyong Chen, Jingwei Zhao, Pheng-Ann Heng
| null | null | 2021 |
aaai
|
Epistemic Logic of Know-Who
| null |
The paper suggests a definition of "know who" as a modality using Grove-Halpern semantics of names. It also introduces a logical system that describes the interplay between modalities "knows who", "knows", and "for all agents". The main technical result is a completeness theorem for the proposed system.
|
Sophia Epstein, Pavel Naumov
| null | null | 2021 |
aaai
|
On Generating Plausible Counterfactual and Semi-Factual Explanations for Deep Learning
| null |
There is a growing concern that the recent progress made in AI, especially regarding the predictive competence of deep learning models, will be undermined by a failure to properly explain their operation and outputs. In response to this disquiet, counterfactual explanations have become very popular in eXplainable AI (XAI) due to their asserted computational, psychological, and legal benefits. In contrast, however, semi-factuals (which appear to be equally useful) have surprisingly received no attention. Most counterfactual methods address tabular rather than image data, partly because the non-discrete nature of images makes good counterfactuals difficult to define; indeed, generating plausible counterfactual images which lie on the data manifold is also problematic. This paper advances a novel method for generating plausible counterfactuals and semi-factuals for black-box CNN classifiers in computer vision. The present method, called PlausIble Exceptionality-based Contrastive Explanations (PIECE), modifies all “exceptional” features in a test image to be “normal” from the perspective of the counterfactual class, to generate plausible counterfactual images. Two controlled experiments compare this method to others in the literature, showing that PIECE generates highly plausible counterfactuals (and the best semi-factuals) on several benchmark measures.
|
Eoin M. Kenny, Mark T Keane
| null | null | 2021 |
aaai
|
Explaining A Black-box By Using A Deep Variational Information Bottleneck Approach
| null |
Interpretable machine learning has gained much attention recently. Briefness and comprehensiveness are both necessary in order to provide a large amount of information concisely when explaining a black-box decision system. However, existing interpretable machine learning methods fail to consider briefness and comprehensiveness simultaneously, leading to redundant explanations. We propose the variational information bottleneck for interpretation, VIBI, a system-agnostic interpretable method that provides a brief but comprehensive explanation. VIBI adopts an information-theoretic principle, the information bottleneck principle, as a criterion for finding such explanations. For each instance, VIBI selects key features that are maximally compressed about an input (briefness) and informative about the decision made by a black-box system on that input (comprehensiveness). We evaluate VIBI on three datasets and compare it with state-of-the-art interpretable machine learning methods in terms of both interpretability and fidelity, evaluated by human and quantitative metrics.
|
Seojin Bang, Pengtao Xie, Heewook Lee, Wei Wu, Eric Xing
| null | null | 2021 |
aaai
|
TripleTree: A Versatile Interpretable Representation of Black Box Agents and their Environments
| null |
In explainable artificial intelligence, there is increasing interest in understanding the behaviour of autonomous agents to build trust and validate performance. Modern agent architectures, such as those trained by deep reinforcement learning, are currently so lacking in interpretable structure as to effectively be black boxes, but insights may still be gained from an external, behaviourist perspective. Inspired by conceptual spaces theory, we suggest that a versatile first step towards general understanding is to discretise the state space into convex regions, jointly capturing similarities over the agent's action, value function and temporal dynamics within a dataset of observations. We create such a representation using a novel variant of the CART decision tree algorithm, and demonstrate how it facilitates practical understanding of black box agents through prediction, visualisation and rule-based explanation.
|
Tom Bewley, Jonathan Lawry
| null | null | 2021 |
aaai
|
Bayes-TrEx: a Bayesian Sampling Approach to Model Transparency by Example
| null |
Post-hoc explanation methods are gaining popularity for interpreting, understanding, and debugging neural networks. Most analyses using such methods explain decisions in response to inputs drawn from the test set. However, the test set may have few examples that trigger some model behaviors, such as high-confidence failures or ambiguous classifications. To address these challenges, we introduce a flexible model inspection framework: Bayes-TrEx. Given a data distribution, Bayes-TrEx finds in-distribution examples which trigger a specified prediction confidence. We demonstrate several use cases of Bayes-TrEx, including revealing highly confident (mis)classifications, visualizing class boundaries via ambiguous examples, understanding novel-class extrapolation behavior, and exposing neural network overconfidence. We use Bayes-TrEx to study classifiers trained on CLEVR, MNIST, and Fashion-MNIST, and we show that this framework enables more flexible holistic model analysis than just inspecting the test set. Code and supplemental material are available at https://github.com/serenabooth/Bayes-TrEx.
|
Serena Booth, Yilun Zhou, Ankit Shah, Julie Shah
| null | null | 2021 |
aaai
|
PenDer: Incorporating Shape Constraints via Penalized Derivatives
| null |
When deploying machine learning models in the real-world, system designers may wish that models exhibit certain shape behavior, i.e., model outputs follow a particular shape with respect to input features. Trends such as monotonicity, convexity, diminishing or accelerating returns are some of the desired shapes. Presence of these shapes makes the model more interpretable for the system designers, and adequately fair for the customers. We notice that many such common shapes are related to derivatives, and propose a new approach, PenDer (Penalizing Derivatives), which incorporates these shape constraints by penalizing the derivatives. We further present an Augmented Lagrangian Method (ALM) to solve this constrained optimization problem. Experiments on three real-world datasets illustrate that even though both PenDer and state-of-the-art Lattice models achieve similar conformance to shape, PenDer captures better sensitivity of prediction with respect to intended features. We also demonstrate that PenDer achieves better test performance than Lattice while enforcing more desirable shape behavior.
|
Akhil Gupta, Lavanya Marla, Ruoyu Sun, Naman Shukla, Arinbjörn Kolbeinsson
| null | null | 2021 |
aaai
|
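The idea of penalizing derivatives to enforce a shape constraint can be sketched in a few lines. This is a toy sketch, not the paper's method: a linear model fit by hand-rolled gradient descent, where a penalty on a negative slope (the model's derivative) discourages a decreasing shape. The data, learning rate, and penalty weight are illustrative assumptions; PenDer itself couples such penalties with an Augmented Lagrangian Method.

```python
def fit_with_derivative_penalty(xs, ys, lam=100.0, lr=0.005, steps=2000):
    """Least-squares fit of f(x) = a*x + b by gradient descent, with a
    PenDer-style penalty lam * max(0, -a)^2 on the model's derivative
    (df/dx = a), discouraging a decreasing shape."""
    a, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # gradient of the mean squared error
        ga = sum(2 * (a * x + b - y) * x for x, y in zip(xs, ys)) / n
        gb = sum(2 * (a * x + b - y) for x, y in zip(xs, ys)) / n
        if a < 0:                      # gradient of lam * max(0, -a)^2
            ga += 2 * lam * a
        a -= lr * ga
        b -= lr * gb
    return a, b

# Data with a slightly decreasing trend: the unconstrained fit has a
# negative slope, while the penalized fit pushes the slope toward zero.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 0.9, 0.8, 0.7]
a_free, _ = fit_with_derivative_penalty(xs, ys, lam=0.0)
a_pen, _ = fit_with_derivative_penalty(xs, ys)
print(round(a_free, 3), round(a_pen, 3))
```

A soft penalty of this kind only approximately enforces the constraint, which is why an Augmented Lagrangian treatment, as in the paper, is the natural next step.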
FIMAP: Feature Importance by Minimal Adversarial Perturbation
| null |
Instance-based model-agnostic feature importance explanations (LIME, SHAP, L2X) are a popular form of algorithmic transparency. These methods generally return either a weighting or subset of input features as an explanation for the classification of an instance. An alternative literature argues instead that counterfactual instances, which alter the black-box model's classification, provide a more actionable form of explanation. We present Feature Importance by Minimal Adversarial Perturbation (FIMAP), a neural network based approach that unifies feature importance and counterfactual explanations. We show that this approach combines the two paradigms, recovering the output of feature-weighting methods in continuous feature spaces, whilst indicating the direction in which the nearest counterfactuals can be found. Our method also provides an implicit confidence estimate in its own explanations, something existing methods lack. Additionally, FIMAP improves upon the speed of sampling-based methods, such as LIME, by an order of magnitude, allowing for explanation deployment in time-critical applications. We extend our approach to categorical features using a partitioned Gumbel layer and demonstrate its efficacy on standard datasets.
|
Matt Chapman-Rounds, Umang Bhatt, Erik Pazos, Marc-Andre Schulz, Konstantinos Georgatzis
| null | null | 2021 |
aaai
|
Asking the Right Questions: Learning Interpretable Action Models Through Query Answering
| null |
This paper develops a new approach for estimating an interpretable, relational model of a black-box autonomous agent that can plan and act. Our main contributions are a new paradigm for estimating such models using a rudimentary query interface with the agent and a hierarchical querying algorithm that generates an interrogation policy for estimating the agent's internal model in a user-interpretable vocabulary. Empirical evaluation of our approach shows that despite the intractable search space of possible agent models, our approach allows correct and scalable estimation of interpretable agent models for a wide class of black-box autonomous agents. Our results also show that this approach can use predicate classifiers to learn interpretable models of planning agents that represent states as images.
|
Pulkit Verma, Shashank Rao Marpally, Siddharth Srivastava
| null | null | 2021 |
aaai
|
Multi-Decoder Attention Model with Embedding Glimpse for Solving Vehicle Routing Problems
| null |
We present a novel deep reinforcement learning method to learn construction heuristics for vehicle routing problems. Specifically, we propose a Multi-Decoder Attention Model (MDAM) to train multiple diverse policies, which effectively increases the chance of finding good solutions compared with existing methods that train only one policy. A customized beam search strategy is designed to fully exploit the diversity of MDAM. In addition, we propose an Embedding Glimpse layer in MDAM based on the recursive nature of construction, which can improve the quality of each policy by providing more informative embeddings. Extensive experiments on six different routing problems show that our method significantly outperforms the state-of-the-art deep learning based models.
|
Liang Xin, Wen Song, Zhiguang Cao, Jie Zhang
| null | null | 2021 |
aaai
|
Competitive Analysis for Two-Level Ski-Rental Problem
| null |
In this paper, we study a two-level ski-rental problem. There are multiple commodities, each of which can be “rented” (paying for on-demand usage) or “purchased” (paying for lifetime usage). There is also a combo purchase available, so that all commodities can be purchased as a combo. Since the future usage of the commodities is not known in advance, to minimize the overall cost we design an online algorithm to decide whether to rent a commodity, purchase a commodity, or make a combo purchase. We first propose a deterministic online algorithm. It achieves a competitive ratio of 3, which is optimal and tight. Next, we propose a randomized online algorithm, leading to an e^σ/(e^σ-1) competitive ratio, where σ is the ratio between the price of a single commodity and the price of the combo purchase. Finally, we use simulations to verify the theoretical competitive ratios and evaluate the actual performance against benchmarks.
|
Binghan Wu, Wei Bao, Dong Yuan
| null | null | 2021 |
aaai
|
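For background, the building block that such algorithms generalize is the classic single-commodity break-even strategy: rent until the rent paid would reach the purchase price, then buy, which is 2-competitive. The sketch below uses illustrative prices and is not the paper's two-level (per-commodity plus combo) algorithm.

```python
def break_even_cost(days_used, rent=1, buy=10):
    """Total cost of the break-even strategy: rent each day until the
    next rent payment would bring total rent paid to the purchase
    price, then buy instead. Prices here are illustrative."""
    paid = 0
    for _ in range(days_used):
        if paid + rent >= buy:
            return paid + buy      # switch from renting to buying
        paid += rent
    return paid

def optimal_cost(days_used, rent=1, buy=10):
    """Offline optimum: knowing the usage, rent throughout or buy upfront."""
    return min(days_used * rent, buy)

# The break-even strategy pays at most twice the offline optimum.
ratios = [break_even_cost(d) / optimal_cost(d) for d in range(1, 50)]
print(max(ratios))  # 1.9 for these prices; bounded by 2 in general
```

With multiple commodities and a combo option, the algorithm must additionally decide when accumulated spending justifies the combo purchase, which is where the ratio of 3 in the paper arises.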
On the Verification of Neural ODEs with Stochastic Guarantees
| null |
We show that Neural ODEs, an emerging class of time-continuous neural networks, can be verified by solving a set of global-optimization problems. For this purpose, we introduce Stochastic Lagrangian Reachability (SLR), an abstraction-based technique for constructing a tight Reachtube (an over-approximation of the set of reachable states over a given time-horizon), and provide stochastic guarantees in the form of confidence intervals for the Reachtube bounds. SLR inherently avoids the infamous wrapping effect (accumulation of over-approximation errors) by performing local optimization steps to expand safe regions instead of repeatedly forward-propagating them as is done by deterministic reachability methods. To enable fast local optimizations, we introduce a novel forward-mode adjoint sensitivity method to compute gradients without the need for backpropagation. Finally, we establish asymptotic and non-asymptotic convergence rates for SLR.
|
Sophie Grunbacher, Ramin Hasani, Mathias Lechner, Jacek Cyranka, Scott A. Smolka, Radu Grosu
| null | null | 2021 |
aaai
|
On the Optimal Efficiency of A* with Dominance Pruning
| null |
A well-known result is that, given a consistent heuristic and no other source of information, A* expands a minimal number of nodes up to tie-breaking. We extend this analysis to A* with dominance pruning, which exploits a dominance relation to eliminate some nodes during the search. We show that the expansion order of A* is not necessarily optimally efficient when considering dominance pruning with arbitrary dominance relations, but it remains optimally efficient under certain restrictions on the heuristic and dominance relation.
|
Álvaro Torralba
| null | null | 2021 |
aaai
|
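A minimal sketch of A* extended with a dominance-pruning check may help fix ideas. The toy domain (positions with a fuel resource, where more fuel at the same position dominates less) and the dominance relation are illustrative assumptions, not taken from the paper.

```python
import heapq

def astar_dominance(start, goal_test, succ, h, dominates):
    """A* that additionally prunes a node when an already-expanded node
    dominates it with no higher g-value -- a minimal sketch of dominance
    pruning; the paper studies when such pruning keeps A* optimally
    efficient."""
    open_ = [(h(start), 0, start)]
    closed = []  # (state, g) pairs for expanded nodes
    while open_:
        f, g, s = heapq.heappop(open_)
        if goal_test(s):
            return g
        if any(dominates(t, s) and gt <= g for t, gt in closed):
            continue  # s is dominated: skip expanding it
        closed.append((s, g))
        for s2, step_cost in succ(s):
            heapq.heappush(open_, (g + step_cost + h(s2), g + step_cost, s2))
    return None

# Toy domain: move right along positions 0..3 (cost 1, consumes fuel);
# refueling (cost 1) is never needed here, so states with less fuel at
# the same position are dominated and pruned.
def succ(state):
    pos, fuel = state
    out = []
    if pos < 3 and fuel > 0:
        out.append(((pos + 1, fuel - 1), 1))
    if fuel < 3:
        out.append(((pos, fuel + 1), 1))
    return out

dominates = lambda t, s: t[0] == s[0] and t[1] >= s[1]
cost = astar_dominance((0, 3), lambda s: s[0] == 3, succ,
                       lambda s: 3 - s[0], dominates)
print(cost)  # 3: three moves right
```

Note that the pruning condition requires both dominance and a no-worse g-value; the paper's analysis concerns exactly when this interacts safely with A*'s expansion order.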
Individual Fairness in Kidney Exchange Programs
| null |
Kidney transplant is the preferred method of treatment for patients suffering from kidney failure. However, not all patients can find a donor who matches their physiological characteristics. Kidney exchange programs (KEPs) seek to match such incompatible patient-donor pairs together, usually with the main objective of maximizing the total number of transplants. Since selecting one optimal solution translates to a decision on who receives a transplant, it has a major effect on the lives of patients. The current practice in selecting an optimal solution does not necessarily ensure fairness in the selection process. In this paper, the existence of multiple optimal plans for a KEP is explored as a means to achieve individual fairness. We propose the use of randomized policies for selecting an optimal solution in which patients' equal opportunity to receive a transplant is promoted. Our approach gives rise to the problem of enumerating all optimal solutions, which we tackle using a hybrid of constraint programming and linear programming. The advantages of our proposed method over the common practice of using the optimal solution obtained by a solver are highlighted through computational experiments. Our methodology enables decision makers to fully control KEP outcomes, overcoming any potential bias or vulnerability intrinsic to a deterministic solver.
|
Golnoosh Farnadi, William St-Arnaud, Behrouz Babaki, Margarida Carvalho
| null | null | 2021 |
aaai
|
Synthesis of Search Heuristics for Temporal Planning via Reinforcement Learning
| null |
Automated temporal planning is the problem of synthesizing, starting from a model of a system, a course of actions to achieve a desired goal when temporal constraints, such as deadlines, are present in the problem. Despite considerable successes in the literature, scalability is still a severe limitation for existing planners, especially when confronted with real-world, industrial scenarios. In this paper, we aim at exploiting recent advances in reinforcement learning for the synthesis of heuristics for temporal planning. Starting from a set of problems of interest for a specific domain, we use a customized reinforcement learning algorithm to construct a value function that is able to estimate the expected reward for as many problems as possible. We use a reward schema that captures the semantics of the temporal planning problem, and we show how the value function can be transformed into a planning heuristic for a semi-symbolic heuristic search exploration of the planning model. We show on two case studies how this method can widen the reach of current temporal planners, with encouraging results.
|
Andrea Micheli, Alessandro Valentini
| null | null | 2021 |
aaai
|
Revealing Hidden Preconditions and Effects of Compound HTN Planning Tasks – A Complexity Analysis
| null |
In Hierarchical Task Network (HTN) planning, compound tasks need to be refined into executable (primitive) action sequences. In contrast to their primitive counterparts, compound tasks do not specify preconditions or effects. Thus, their implications for the states in which they are applied are not explicitly known: they are "hidden" in, and depend on, the decomposition structure. We formalize several kinds of preconditions and effects that can be inferred for compound tasks in totally ordered HTN domains. As a relevant special case, we introduce a problem relaxation which admits reasoning about preconditions and effects in polynomial time. We provide procedures for doing so, thereby extending previous work, which could only deal with acyclic models. We prove our procedures to be correct and complete for any totally ordered input domain. These results are embedded into an encompassing complexity analysis of the inference of preconditions and effects of compound tasks, an investigation that has not been made so far.
|
Conny Olz, Susanne Biundo, Pascal Bercher
| null | null | 2021 |
aaai
|
Saturated Post-hoc Optimization for Classical Planning
| null |
Saturated cost partitioning and post-hoc optimization are two powerful cost partitioning algorithms for optimal classical planning. The main idea of saturated cost partitioning is to give each considered heuristic only the fraction of remaining operator costs that it needs to prove its estimates. We show how to apply this idea to post-hoc optimization and obtain a heuristic that dominates the original both in theory and on the IPC benchmarks.
|
Jendrik Seipp, Thomas Keller, Malte Helmert
| null | null | 2021 |
aaai
|
Latent Independent Excitation for Generalizable Sensor-based Cross-Person Activity Recognition
| null |
In wearable-sensor-based activity recognition, it is often assumed that the training and test samples follow the same data distribution. This assumption neglects practical scenarios where the activity patterns inevitably vary from person to person. To solve this problem, transfer learning and domain adaptation approaches are often leveraged to reduce the gaps between different participants. Nevertheless, these approaches require additional information (i.e., labeled or unlabeled data, meta-information) from the target domain during the training stage. In this paper, we introduce a novel method named Generalizable Independent Latent Excitation (GILE) for human activity recognition, which greatly enhances the cross-person generalization capability of the model. Our proposed method is superior to existing methods in the sense that it does not require any access to the target domain information. Besides, this novel model can be directly applied to various target domains without re-training or fine-tuning. Specifically, the proposed model learns to automatically disentangle domain-agnostic and domain-specific features, the former of which are expected to be invariant across various persons. To further remove correlations between the two types of features, a novel Independent Excitation mechanism is incorporated in the latent feature space. Comprehensive experimental evaluations are conducted on three benchmark datasets to demonstrate the superiority of the proposed method over the state-of-the-art solutions.
|
Hangwei Qian, Sinno Jialin Pan, Chunyan Miao
| null | null | 2021 |
aaai
|
Symbolic Search for Oversubscription Planning
| null |
The objective of optimal oversubscription planning is to find a plan that yields an end state with a maximum utility while keeping plan cost under a certain bound. In practice, the situation occurs whenever a large number of possible, often competing goals of varying value exist, or the resources are not sufficient to achieve all goals. In this paper, we investigate the use of symbolic search for optimal oversubscription planning. Specifically, we show how to apply symbolic forward search to oversubscription planning tasks and prove that our approach is sound, complete and optimal. An empirical analysis shows that our symbolic approach favorably competes with explicit state-space heuristic search, the current state of the art for oversubscription planning.
|
David Speck, Michael Katz
| null | null | 2021 |
aaai
|
Dynamic Automaton-Guided Reward Shaping for Monte Carlo Tree Search
| null |
Reinforcement learning and planning have been revolutionized in recent years, due in part to the mass adoption of deep convolutional neural networks and the resurgence of powerful methods to refine decision-making policies. However, the problem of sparse reward signals and their representation remains pervasive in many domains. While various reward-shaping mechanisms and imitation learning approaches have been proposed to mitigate this problem, the use of human-aided artificial rewards introduces human error, sub-optimal behavior, and a greater propensity for reward hacking. In this paper, we mitigate this by representing objectives as automata in order to define novel reward shaping functions over this structured representation. In doing so, we address the sparse rewards problem within a novel implementation of Monte Carlo Tree Search (MCTS) by proposing a reward shaping function which is updated dynamically to capture statistics on the utility of each automaton transition as it pertains to satisfying the goal of the agent. We further demonstrate that such automaton-guided reward shaping can be utilized to facilitate transfer learning between different environments when the objective is the same.
|
Alvaro Velasquez, Brett Bissey, Lior Barak, Andre Beckus, Ismail Alkhouri, Daniel Melcer, George Atia
| null | null | 2021 |
aaai
|
Faster and Better Simple Temporal Problems
| null |
In this paper we give a structural characterization and extend the tractability frontier of the Simple Temporal Problem (STP) by defining the class of the Extended Simple Temporal Problem (ESTP), which augments STP with strict inequalities and monotone Boolean formulae on inequations (i.e., formulae involving the operations of conjunction, disjunction and parenthesization). A polynomial-time algorithm is provided to solve ESTP, faster than previous state-of-the-art algorithms for other extensions of STP that had been considered in the literature, all encompassed by ESTP. We show the practical competitiveness of our approach through a proof-of-concept implementation and an experimental evaluation involving also state-of-the-art SMT solvers.
|
Dario Ostuni, Alice Raffaele, Romeo Rizzi, Matteo Zavatteri
| null | null | 2021 |
aaai
|
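For context, consistency of a plain Simple Temporal Problem reduces to negative-cycle detection in its distance graph, which Bellman-Ford handles in polynomial time. A minimal sketch of that classical baseline follows; the ESTP extensions (strict inequalities and monotone Boolean formulae on inequations) are deliberately not modeled here.

```python
def stp_consistent(num_vars, constraints):
    """Decide consistency of a Simple Temporal Problem. Each constraint
    (i, j, c) encodes x_j - x_i <= c. The STP is consistent iff its
    distance graph has no negative cycle, which Bellman-Ford detects
    in O(n * m)."""
    dist = [0] * num_vars  # implicit source with 0-weight edges to all nodes
    for _ in range(num_vars):
        changed = False
        for i, j, c in constraints:
            if dist[i] + c < dist[j]:
                dist[j] = dist[i] + c
                changed = True
        if not changed:
            return True  # reached a fixpoint: consistent
    # If a further relaxation still improves a distance, there is a
    # negative cycle, i.e. the constraints are inconsistent.
    return all(dist[i] + c >= dist[j] for i, j, c in constraints)

# x1 - x0 <= 10 together with x1 - x0 >= 3 (written as x0 - x1 <= -3):
print(stp_consistent(2, [(0, 1, 10), (1, 0, -3)]))   # True
# Requiring x1 - x0 >= 20 contradicts x1 - x0 <= 10:
print(stp_consistent(2, [(0, 1, 10), (1, 0, -20)]))  # False
```

Strict inequalities and disjunctive formulae break this simple reduction, which is exactly the gap the ESTP class and its dedicated algorithm address.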
Improved Knowledge Modeling and Its Use for Signaling in Multi-Agent Planning with Partial Observability
| null |
Collaborative Multi-Agent Planning (MAP) problems with uncertainty and partial observability are often modeled as Dec-POMDPs. Yet, in deterministic domains, Qualitative Dec-POMDPs can scale up to much larger problem sizes. The best current QDec solver (QDec-FP) reduces MAP problems to multiple single-agent problems. In this paper, we describe a planner that uses richer information about agents’ knowledge to improve upon QDec-FP. With this change, the planner not only scales up to larger problems with more objects, but it can also support signaling, where agents signal information to each other by changing the state of the world.
|
Shashank Shekhar, Ronen I. Brafman, Guy Shani
| null | null | 2021 |
aaai
|
Online Action Recognition
| null |
Recognition in planning seeks to find agent intentions, goals or activities given a set of observations and a knowledge library (e.g. goal states, plans or domain theories). In this work we introduce the problem of Online Action Recognition. It consists in recognizing, in an open world, the planning action that best explains a partially observable state transition from a knowledge library of first-order STRIPS actions, which is initially empty. We frame this as an optimization problem, and propose two algorithms to address it: Action Unification (AU) and Online Action Recognition through Unification (OARU). The former builds on logic unification and generalizes two input actions using weighted partial MaxSAT. The latter looks for an action within the library that explains an observed transition. If such an action exists, OARU generalizes it using AU, building an AU hierarchy in this way. Otherwise, OARU inserts a Trivial Grounded Action (TGA) into the library that explains just that transition. We report results on benchmarks from the International Planning Competition and PDDLGym, where OARU recognizes actions accurately with respect to expert knowledge and shows real-time performance.
|
Alejandro Suárez-Hernández, Javier Segovia-Aguas, Carme Torras, Guillem Alenyà
| null | null | 2,021 |
aaai
|
Faster Stackelberg Planning via Symbolic Search and Information Sharing
| null |
Stackelberg planning is a recent framework where a leader and a follower each choose a plan in the same planning task, the leader's objective being to maximize plan cost for the follower. This formulation naturally captures security-related (leader=defender, follower=attacker) as well as robustness-related (leader=adversarial event, follower=agent) scenarios. Solving Stackelberg planning tasks requires solving many related planning tasks at the follower level (in the worst case, one for every possible leader plan). Here we introduce new methods to tackle this source of complexity, through sharing information across follower tasks. Our evaluation shows that these methods can significantly reduce both the time needed to solve follower tasks and the number of follower tasks that need to be solved in the first place.
|
Álvaro Torralba, Patrick Speicher, Robert Künnemann, Marcel Steinmetz, Jörg Hoffmann
| null | null | 2,021 |
aaai
|
A Complexity-theoretic Analysis of Green Pickup-and-Delivery Problems
| null |
In a Green Pickup-and-Delivery problem (GPD), vehicles carrying out pickup-and-delivery tasks in a transport network are subject in particular to two green constraints: limited vehicle fuel capacity, and thus short traveling range, and limited availability of refueling infrastructure for the vehicles. GPD adds additional but seemingly insignificant computational complexity to the classic and already NP-hard Pickup-and-Delivery Problem and Vehicle Routing Problem. Nevertheless, we demonstrate in this paper an inherent intractability of these green components themselves. More precisely, we show that GPD problems whose constraints are reduced to almost the green ones only remain NP-complete in the strong sense. We identify a specifically constrained variant of GPD that, however, is weakly NP-complete: a practical pseudo-polynomial time algorithm solving this variant is presented. Insight obtained from this complexity-theoretic analysis sheds light on a deeper understanding of GPDs and on the development of better heuristics for solving these problems, with promise for many real-world applications.
|
Xing Tan, Jimmy Xiangji Huang
| null | null | 2,021 |
aaai
|
Planning with Learned Object Importance in Large Problem Instances using Graph Neural Networks
| null |
Real-world planning problems often involve hundreds or even thousands of objects, straining the limits of modern planners. In this work, we address this challenge by learning to predict a small set of objects that, taken together, would be sufficient for finding a plan. We propose a graph neural network architecture for predicting object importance in a single inference pass, thus incurring little overhead while greatly reducing the number of objects that must be considered by the planner. Our approach treats the planner and transition model as black boxes, and can be used with any off-the-shelf planner. Empirically, across classical planning, probabilistic planning, and robotic task and motion planning, we find that our method results in planning that is significantly faster than several baselines, including other partial grounding strategies and lifted planners. We conclude that learning to predict a sufficient set of objects for a planning problem is a simple, powerful, and general mechanism for planning in large instances. Video: https://youtu.be/FWsVJc2fvCE Code: https://git.io/JIsqX
|
Tom Silver, Rohan Chitnis, Aidan Curtis, Joshua B. Tenenbaum, Tomás Lozano-Pérez, Leslie Pack Kaelbling
| null | null | 2,021 |
aaai
|
Bayesian Optimized Monte Carlo Planning
| null |
Online solvers for partially observable Markov decision processes have difficulty scaling to problems with large action spaces. Monte Carlo tree search with progressive widening attempts to improve scaling by sampling from the action space to construct a policy search tree. The performance of progressive widening search is dependent upon the action sampling policy, often requiring problem-specific samplers. In this work, we present a general method for efficient action sampling based on Bayesian optimization. The proposed method uses a Gaussian process to model a belief over the action-value function and selects the action that will maximize the expected improvement in the optimal action value. We implement the proposed approach in a new online tree search algorithm called Bayesian Optimized Monte Carlo Planning (BOMCP). Several experiments show that BOMCP is better able to scale to large action space POMDPs than existing state-of-the-art tree search solvers.
|
John Mern, Anil Yildiz, Zachary Sunberg, Tapan Mukerji, Mykel J. Kochenderfer
| null | null | 2,021 |
aaai
|
Progression Heuristics for Planning with Probabilistic LTL Constraints
| null |
Probabilistic planning subject to multi-objective probabilistic temporal logic (PLTL) constraints models the problem of computing safe and robust behaviours for agents in stochastic environments. We present novel admissible heuristics to guide the search for cost-optimal policies for these problems. These heuristics project and decompose LTL formulae obtained by progression to estimate the probability that an extension of a partial policy satisfies the constraints. Their computation with linear programming is integrated with the recent PLTL-dual heuristic search algorithm, enabling more aggressive pruning of regions violating the constraints. Our experiments show that they further widen the scalability gap between heuristic search and verification approaches to these planning problems.
|
Ian Mallett, Sylvie Thiebaux, Felipe Trevizan
| null | null | 2,021 |
aaai
|
An LP-Based Approach for Goal Recognition as Planning
| null |
Goal recognition aims to recognize the set of candidate goals that are compatible with the observed behavior of an agent. In this paper, we develop a method based on the operator-counting framework that efficiently computes solutions that satisfy the observations and uses the information generated to solve goal recognition tasks. Our method reasons explicitly about both partial and noisy observations: estimating uncertainty for the former, and satisfying observations given the unreliability of the sensor for the latter. We evaluate our approach empirically over a large data set, analyzing its components on how each can impact the quality of the solutions. In general, our approach is superior to previous methods in terms of agreement ratio, accuracy, and spread. Finally, our approach paves the way for new research on combinatorial optimization to solve goal recognition tasks.
|
Luísa R. A. Santos, Felipe Meneguzzi, Ramon Fraga Pereira, André Grahl Pereira
| null | null | 2,021 |
aaai
|
Minimax Regret Optimisation for Robust Planning in Uncertain Markov Decision Processes
| null |
The parameters for a Markov Decision Process (MDP) often cannot be specified exactly. Uncertain MDPs (UMDPs) capture this model ambiguity by defining sets which the parameters belong to. Minimax regret has been proposed as an objective for planning in UMDPs to find robust policies which are not overly conservative. In this work, we focus on planning for Stochastic Shortest Path (SSP) UMDPs with uncertain cost and transition functions. We introduce a Bellman equation to compute the regret for a policy. We propose a dynamic programming algorithm that utilises the regret Bellman equation, and show that it optimises minimax regret exactly for UMDPs with independent uncertainties. For coupled uncertainties, we extend our approach to use options to enable a trade off between computation and solution quality. We evaluate our approach on both synthetic and real-world domains, showing that it significantly outperforms existing baselines.
|
Marc Rigter, Bruno Lacerda, Nick Hawes
| null | null | 2,021 |
aaai
|
On-line Learning of Planning Domains from Sensor Data in PAL: Scaling up to Large State Spaces
| null |
We propose an approach to learn an extensional representation of a discrete deterministic planning domain from observations in a continuous space navigated by the agent actions. This is achieved through the use of a perception function providing the likelihood of a real-valued observation being in a given state of the planning domain after executing an action. The agent learns an extensional representation of the domain (the set of states, the transitions from states to states caused by actions) and the perception function on-line, while it acts for accomplishing its task. In order to provide a practical approach that can scale up to large state spaces, a “draft” intensional (PDDL-based) model of the planning domain is used to guide the exploration of the environment and learn the states and state transitions. The proposed approach uses a novel algorithm to (i) construct the extensional representation of the domain by interleaving symbolic planning in the PDDL intensional representation and search in the state transition graph of the extensional representation; (ii) incrementally refine the intensional representation taking into account information about the actions that the agent cannot execute. An experimental analysis shows that the novel approach can scale up to large state spaces, thus overcoming the limits in scalability of the previous work.
|
Leonardo Lamanna, Alfonso Emilio Gerevini, Alessandro Saetti, Luciano Serafini, Paolo Traverso
| null | null | 2,021 |
aaai
|
Contract Scheduling With Predictions
| null |
Contract scheduling is a general technique for designing a system with interruptible capabilities, given an algorithm that is not necessarily interruptible. Previous work on this topic has largely assumed that the interruption is a worst-case deadline that is unknown to the scheduler. In this work, we study the setting in which there is a potentially erroneous prediction concerning the interruption. Specifically, we consider the setting in which the prediction describes the time that the interruption occurs, as well as the setting in which the prediction is obtained as a response to a single or multiple binary queries. For both settings, we investigate tradeoffs between the robustness (i.e., the worst-case performance assuming an adversarial prediction) and the consistency (i.e., the performance assuming that the prediction is error-free), from the side of both positive and negative results.
|
Spyros Angelopoulos, Shahin Kamali
| null | null | 2,021 |
aaai
|
Branch and Price for Bus Driver Scheduling with Complex Break Constraints
| null |
This paper presents a Branch and Price approach for a real-life Bus Driver Scheduling problem with a complex set of break constraints. The column generation uses a set partitioning model as master problem and a resource constrained shortest path problem as subproblem. Due to the complex constraints, the branch and price algorithm adopts several novel ideas to improve the column generation in the presence of a high-dimensional subproblem, including exponential arc throttling and a dedicated two-stage dominance algorithm. Evaluation on a publicly available set of benchmark instances shows that the approach provides the first provably optimal solutions for small instances, improving best-known solutions or proving them optimal for 48 out of 50 instances, and yielding an optimality gap of less than 1% for more than half the instances.
|
Lucas Kletzander, Nysret Musliu, Pascal Van Hentenryck
| null | null | 2,021 |
aaai
|
Responsibility Attribution in Parameterized Markovian Models
| null |
We consider the problem of responsibility attribution in the setting of parametric Markov chains. Given a family of Markov chains over a set of parameters, and a property, responsibility attribution asks how the difference in the value of the property should be attributed to the parameters when they change from one point in the parameter space to another. We formalize responsibility as path-based attribution schemes studied in cooperative game theory. An attribution scheme in a game determines how a value (a surplus or a cost) is distributed among a set of participants. Path-based attribution schemes include the well-studied Aumann-Shapley and the Shapley-Shubik schemes. In our context, an attribution scheme measures the responsibility of each parameter on the value function of the parametric Markov chain. We study the decision problem for path-based attribution schemes. Our main technical result is an algorithm for deciding if a path-based attribution scheme for a rational (ratios of polynomials) cost function is over a rational threshold. In particular, it is decidable if the Aumann-Shapley value for a player is at least a given rational number. As a consequence, we show that responsibility attribution is decidable for parametric Markov chains and for a general class of properties that include expectation and variance of discounted sum and long-run average rewards, as well as specifications in temporal logic.
|
Christel Baier, Florian Funke, Rupak Majumdar
| null | null | 2,021 |
aaai
|
Landmark Generation in HTN Planning
| null |
Landmarks (LMs) are state features that need to be made true or tasks that need to be contained in every solution of a planning problem. They are a valuable source of information in planning and can be exploited in various ways. LMs have been used both in classical and hierarchical planning, but while there is much work in classical planning, the techniques in hierarchical planning are less evolved. We introduce a novel LM generation method for Hierarchical Task Network (HTN) planning and show that it is sound and incomplete. We show that every complete approach is as hard as the co-class of the underlying HTN problem, i.e. coNP-hard for our setting (while our approach is in P). On a widely used benchmark set, our approach finds more than twice as many landmarks as the approach from the literature. Though our focus is on LM generation, we show that the newly discovered landmarks bear information beneficial for solvers.
|
Daniel Höller, Pascal Bercher
| null | null | 2,021 |
aaai
|
Endomorphisms of Classical Planning Tasks
| null |
Detection of redundant operators that can be safely removed from the planning task is an essential technique that can greatly improve the performance of planners. In this paper, we employ structure-preserving maps on labeled transition systems (LTSs), namely endomorphisms well known from model theory, in order to detect redundancy. Computing endomorphisms of an LTS induced by a planning task is typically infeasible, so we show how to compute some of them on concise representations of planning tasks such as finite domain representations and factored LTSs. We formulate the computation of endomorphisms as a constraint satisfaction problem (CSP) that can be solved by an off-the-shelf CSP solver. Finally, we experimentally verify that the proposed method can find a sizeable number of redundant operators on the standard benchmark set.
|
Rostislav Horčík, Daniel Fišer
| null | null | 2,021 |
aaai
|
Constrained Risk-Averse Markov Decision Processes
| null |
We consider the problem of designing policies for Markov decision processes (MDPs) with dynamic coherent risk objectives and constraints. We begin by formulating the problem in a Lagrangian framework. Under the assumption that the risk objectives and constraints can be represented by a Markov risk transition mapping, we propose an optimization-based method to synthesize Markovian policies that lower-bound the constrained risk-averse problem. We demonstrate that the formulated optimization problems are in the form of difference convex programs (DCPs) and can be solved by the disciplined convex-concave programming (DCCP) framework. We show that these results generalize linear programs for constrained MDPs with total discounted expected costs and constraints. Finally, we illustrate the effectiveness of the proposed method with numerical experiments on a rover navigation problem involving conditional-value-at-risk (CVaR) and entropic-value-at-risk (EVaR) coherent risk measures.
|
Mohamadreza Ahmadi, Ugo Rosolia, Michel D. Ingham, Richard M. Murray, Aaron D. Ames
| null | null | 2,021 |
aaai
|
GLIB: Efficient Exploration for Relational Model-Based Reinforcement Learning via Goal-Literal Babbling
| null |
We address the problem of efficient exploration for transition model learning in the relational model-based reinforcement learning setting without extrinsic goals or rewards. Inspired by human curiosity, we propose goal-literal babbling (GLIB), a simple and general method for exploration in such problems. GLIB samples relational conjunctive goals that can be understood as specific, targeted effects that the agent would like to achieve in the world, and plans to achieve these goals using the transition model being learned. We provide theoretical guarantees showing that exploration with GLIB will converge almost surely to the ground truth model. Experimentally, we find GLIB to strongly outperform existing methods in both prediction and planning on a range of tasks, encompassing standard PDDL and PPDDL planning benchmarks and a robotic manipulation task implemented in the PyBullet physics simulator. Video: https://youtu.be/F6lmrPT6TOY Code: https://git.io/JIsTB
|
Rohan Chitnis, Tom Silver, Joshua B. Tenenbaum, Leslie Pack Kaelbling, Tomás Lozano-Pérez
| null | null | 2,021 |
aaai
|
Equitable Scheduling on a Single Machine
| null |
We introduce a natural but seemingly yet unstudied generalization of the problem of scheduling jobs on a single machine so as to minimize the number of tardy jobs. Our generalization lies in simultaneously considering several instances of the problem at once. In particular, we have n clients over a period of m days, where each client has a single job with its own processing time and deadline per day. Our goal is to provide a schedule for each of the m days, so that each client is guaranteed to have their job meet its deadline in at least k
|
Klaus Heeger, Dan Hermelin, George B. Mertzios, Hendrik Molter, Rolf Niedermeier, Dvir Shabtay
| null | null | 2,021 |
aaai
|
Symbolic Search for Optimal Total-Order HTN Planning
| null |
Symbolic search has proven to be a useful approach to optimal classical planning. In Hierarchical Task Network (HTN) planning, however, there is little work on optimal planning. One reason for this is that in HTN planning, most algorithms are based on heuristic search, and admissible heuristics have to incorporate the structure of the task network in order to be informative. In this paper, we present a novel approach to optimal (totally-ordered) HTN planning, which is based on symbolic search. An empirical analysis shows that our symbolic approach outperforms the current state of the art for optimal totally-ordered HTN planning.
|
Gregor Behnke, David Speck
| null | null | 2,021 |
aaai
|
Bike-Repositioning Using Volunteers: Crowd Sourcing with Choice Restriction
| null |
Motivated by the Bike Angels Program in New York's Citi Bike and Boston's Blue Bikes, we study the use of (registered) volunteers to re-position empty bikes for riders in a bike sharing system. We propose a method that can be used to deploy the volunteers in the system, based on the real time distribution of the bikes in different stations. To account for (random) route demand in the network, we solve a related transshipment network design model and construct a sparse structure to restrict the re-balancing activities of the volunteers (concentrating re-balancing activities on essential routes). We also develop a comprehensive simulation model using a threshold-based policy to deploy the volunteers in real time, to test the effect of choice restriction on volunteers (suitably deployed) to re-position bikes. We use the Hubway system in Boston (with 60 stations) to demonstrate that using a sparse structure to concentrate the re-balancing activities of the volunteers, instead of allowing all admissible flows in the system (as in current practice), can reduce the number of re-balancing moves by a huge amount, while losing only a small proportion of satisfied demand.
|
Jinjia Huang, Mabel C. Chou, Chung-Piaw Teo
| null | null | 2,021 |
aaai
|
Computing Plan-Length Bounds Using Lengths of Longest Paths
| null |
We devise a method to exactly compute the length of the longest simple path in factored state spaces, like state spaces encountered in classical planning. Although the complexity of this problem is NEXP-Hard, we show that our method can be used to compute practically useful upper-bounds on lengths of plans. We show that the computed upper-bounds are significantly better than bounds produced by state-of-the-art bounding techniques and that they can be used to improve the SAT-based planning.
|
Mohammad Abdulaziz, Dominik Berger
| null | null | 2,021 |
aaai
|
Bounding Causal Effects on Continuous Outcome
| null |
We investigate the problem of bounding causal effects from experimental studies in which treatment assignment is randomized but the subject compliance is imperfect. It is well known that under such conditions, the actual causal effects are not point-identifiable due to uncontrollable unobserved confounding. In their seminal work, Balke and Pearl (1994) derived the tightest bounds over the causal effects in this setting by employing an algebra program to derive analytic expressions. However, Pearl's approach assumes the primary outcome to be discrete and finite. Solving such a program could be intractable when high-dimensional context variables are present. In this paper, we present novel non-parametric methods to bound causal effects on the continuous outcome from studies with imperfect compliance. These bounds could be generalized to settings with a high-dimensional context.
|
Junzhe Zhang, Elias Bareinboim
| null | null | 2,021 |
aaai
|
A Multivariate Complexity Analysis of the Material Consumption Scheduling Problem
| null |
The NP-hard Material Consumption Scheduling Problem and closely related problems have been thoroughly studied since the 1980s. Roughly speaking, the problem deals with minimizing the makespan when scheduling jobs that consume non-renewable resources. We focus on the single-machine case without preemption: from time to time, the resources of the machine are (partially) replenished, thus allowing for meeting a necessary pre-condition for processing further jobs, each of which having individual resource demands. We initiate a systematic exploration of the parameterized computational complexity landscape of the problem, providing parameterized tractability as well as intractability results. Doing so, we mainly investigate how parameters related to the resource supplies influence the computational complexity. Thereby, we get a deepened understanding of this fundamental scheduling problem.
|
Matthias Bentert, Robert Bredereck, Péter Györgyi, Andrzej Kaczmarczyk, Rolf Niedermeier
| null | null | 2,021 |
aaai
|
Revisiting Dominance Pruning in Decoupled Search
| null |
In classical planning as search, duplicate state pruning is a standard method to avoid unnecessarily handling the same state multiple times. In decoupled search, similar to symbolic search approaches, search nodes, called decoupled states, do not correspond to individual states, but to sets of states. Therefore, duplicate state pruning is less effective in decoupled search, and dominance pruning is employed, taking into account the state sets. We observe that the time required for dominance checking dominates the overall runtime, and propose two ways to tackle this issue. Our main contribution is a stronger variant of dominance checking for optimal planning, where efficiency and pruning power are most crucial. The new variant greatly improves the latter, without incurring a computational overhead. Moreover, we develop three methods that make the dominance check more efficient: exact duplicate checking, which, albeit resulting in weaker pruning, can pay off due to the use of hashing; avoiding the dominance check in non-optimal planning if leaf state spaces are invertible; and exploiting the transitivity of the dominance relation to only check against the relevant subset of visited decoupled states. We show empirically that all our improvements are indeed beneficial in many standard benchmarks.
|
Daniel Gnad
| null | null | 2,021 |
aaai
|
Learning General Planning Policies from Small Examples Without Supervision
| null |
Generalized planning is concerned with the computation of general policies that solve multiple instances of a planning domain all at once. It has been recently shown that these policies can be computed in two steps: first, a suitable abstraction in the form of a qualitative numerical planning problem (QNP) is learned from sample plans, then the general policies are obtained from the learned QNP using a planner. In this work, we introduce an alternative approach for computing more expressive general policies which does not require sample plans or a QNP planner. The new formulation is very simple and can be cast in terms that are more standard in machine learning: a large but finite pool of features is defined from the predicates in the planning examples using a general grammar, and a small subset of features is sought for separating “good” from “bad” state transitions, and goals from non-goals. The problems of finding such a “separating surface” while labeling the transitions as “good” or “bad” are jointly addressed as a single combinatorial optimization problem expressed as a Weighted Max-SAT problem. The advantage of looking for the simplest policy in the given feature space that solves the given examples, possibly non-optimally, is that many domains have no general, compact policies that are optimal. The approach yields general policies for a number of benchmark domains.
|
Guillem Francès, Blai Bonet, Hector Geffner
| null | null | 2,021 |
aaai
|
Successor Feature Sets: Generalizing Successor Representations Across Policies
| null |
Successor-style representations have many advantages for reinforcement learning: for example, they can help an agent generalize from past experience to new goals, and they have been proposed as explanations of behavioral and neural data from human and animal learners. They also form a natural bridge between model-based and model-free RL methods: like the former they make predictions about future experiences, and like the latter they allow efficient prediction of total discounted rewards. However, successor-style representations are not optimized to generalize across policies: typically, we maintain a limited-length list of policies, and share information among them by representation learning or GPI. Successor-style representations also typically make no provision for gathering information or reasoning about latent variables. To address these limitations, we bring together ideas from predictive state representations, belief space value iteration, successor features, and convex analysis: we develop a new, general successor-style representation, together with a Bellman equation that connects multiple sources of information within this representation, including different latent states, policies, and reward functions. The new representation is highly expressive: for example, it lets us efficiently read off an optimal policy for a new reward function, or a policy that imitates a new demonstration. For this paper, we focus on exact computation of the new representation in small, known environments, since even this restricted setting offers plenty of interesting questions. Our implementation does not scale to large, unknown environments --- nor would we expect it to, since it generalizes POMDP value iteration, which is difficult to scale. However, we believe that future work will allow us to extend our ideas to approximate reasoning in large, unknown environments. We conduct experiments to explore which of the potential barriers to scaling are most pressing.
|
Kianté Brantley, Soroush Mehri, Geoff J. Gordon
| null | null | 2,021 |
aaai
|
General Policies, Representations, and Planning Width
| null |
It has been observed that in many of the benchmark planning domains, atomic goals can be reached with a simple polynomial exploration procedure, called IW, that runs in time exponential in the problem width. Such problems have indeed a bounded width: a width that does not grow with the number of problem variables and is often no greater than two. Yet, while the notion of width has become part of state-of-the-art planning algorithms like BFWS, there is still no good explanation for why so many benchmark domains have bounded width. In this work, we address this question by relating bounded width and serialized width to ideas of generalized planning, where general policies aim to solve multiple instances of a planning problem all at once. We show that bounded width is a property of planning domains that admit optimal general policies in terms of features that are explicitly or implicitly represented in the domain encoding. The results are extended to the larger class of domains with bounded serialized width where the general policies do not have to be optimal. The study also leads to a new simple, meaningful, and expressive language for specifying domain serializations in the form of policy sketches which can be used for encoding domain control knowledge by hand or for learning it from traces. The use of sketches and the meaning of the theoretical results are all illustrated through a number of examples.
|
Blai Bonet, Hector Geffner
| null | null | 2,021 |
aaai
|
Probabilistic Dependency Graphs
| null |
We introduce Probabilistic Dependency Graphs (PDGs), a new class of directed graphical models. PDGs can capture inconsistent beliefs in a natural way and are more modular than Bayesian Networks (BNs), in that they make it easier to incorporate new information and restructure the representation. We show by example how PDGs are an especially natural modeling tool. We provide three semantics for PDGs, each of which can be derived from a scoring function (on joint distributions over the variables in the network) that can be viewed as representing a distribution's incompatibility with the PDG. For the PDG corresponding to a BN, this function is uniquely minimized by the distribution the BN represents, showing that PDG semantics extend BN semantics. We show further that factor graphs and their exponential families can also be faithfully represented as PDGs, while there are significant barriers to modeling a PDG with a factor graph.
|
Oliver Richardson, Joseph Y Halpern
| null | null | 2,021 |
aaai
|
Polynomial-Time Algorithms for Counting and Sampling Markov Equivalent DAGs
| null |
Counting and uniform sampling of directed acyclic graphs (DAGs) from a Markov equivalence class are fundamental tasks in graphical causal analysis. In this paper, we show that these tasks can be performed in polynomial time, solving a long-standing open problem in this area. Our algorithms are effective and easily implementable. Experimental results show that the algorithms significantly outperform state-of-the-art methods.
|
Marcel Wienöbst, Max Bannach, Maciej Liskiewicz
| null | null | 2,021 |
aaai
|
A New Bounding Scheme for Influence Diagrams
| null |
Influence diagrams provide a modeling and inference framework for sequential decision problems, representing the probabilistic knowledge by a Bayesian network and the preferences of an agent by utility functions over the random variables and decision variables. Computing the maximum expected utility (MEU) and the optimizing policy is exponential in the constrained induced width and therefore is notoriously difficult for larger models. In this paper, we develop a new bounding scheme for MEU that applies partitioning based approximations on top of the decomposition scheme called a multi-operator cluster DAG for influence diagrams that is more sensitive to the underlying structure of the model than the classical join-tree decomposition of influence diagrams. Our bounding scheme utilizes a cost-shifting mechanism to tighten the bound further. We demonstrate the effectiveness of the proposed scheme on various hard benchmarks.
|
Radu Marinescu, Junkyu Lee, Rina Dechter
| null | null | 2,021 |
aaai
|
Learning the Parameters of Bayesian Networks from Uncertain Data
| null |
The creation of Bayesian networks often requires the specification of a large number of parameters, making it highly desirable to be able to learn these parameters from historical data. In many cases, such data has uncertainty associated with it, including cases in which this data comes from unstructured analysis or from sensors. When creating diagnosis networks, for example, unstructured analysis algorithms can be run on the historical text descriptions or images of previous cases so as to extract data for learning Bayesian network parameters, but such derived data has inherent uncertainty associated with it due to the nature of such algorithms. Because of the inability of current Bayesian network parameter learning algorithms to incorporate such uncertainty, common approaches either ignore this uncertainty, thus reducing the resulting accuracy, or completely disregard such data. We present an approach for learning Bayesian network parameters that explicitly incorporates such uncertainty, and which is a natural extension of the Bayesian network formalism. We present a generalization of the Expectation Maximization parameter learning algorithm that enables it to handle any historical data with likelihood-evidence-based uncertainty, as well as an empirical validation demonstrating the improved accuracy and convergence enabled by our approach. We also prove that our extended algorithm maintains the convergence and correctness properties of the original EM algorithm, while explicitly incorporating data uncertainty in the learning process.
|
Segev Wasserkrug, Radu Marinescu, Sergey Zeltyn, Evgeny Shindin, Yishai A Feldman
| null | null | 2,021 |
aaai
|
Better Bounds on the Adaptivity Gap of Influence Maximization under Full-adoption Feedback
| null |
In the influence maximization (IM) problem, we are given a social network and a budget k, and we look for a set of k nodes in the network, called seeds, that maximize the expected number of nodes reached by an influence cascade generated by the seeds, according to some stochastic model for influence diffusion. The IM problem has been studied extensively since its definition by Kempe, Kleinberg, and Tardos (2003). However, most of the work focuses on the non-adaptive version of the problem, where all k seed nodes must be selected before the cascade starts. In this paper we study adaptive IM, where the nodes are selected sequentially one by one, and the decision on the i-th seed can be based on the observed cascade produced by the first i-1 seeds. We focus on the full-adoption feedback model, in which we can observe the entire cascade of each previously selected seed, and on the independent cascade model, where each edge is associated with an independent probability of diffusing influence. Previous works showed that there are constant upper bounds on the adaptivity gap, which compares the performance of an adaptive algorithm against a non-adaptive one, but the analyses used to prove these bounds only work for specific graph classes such as in-arborescences, out-arborescences, and one-directional bipartite graphs. Our main result is the first sub-linear upper bound that holds for any graph. Specifically, we show that the adaptivity gap is upper-bounded by ∛n+1, where n is the number of nodes in the graph. Moreover, we improve over the known upper bound for in-arborescences from 2e/(e-1)≈3.16 to 2e²/(e²-1)≈2.31. Finally, we study α-bounded graphs, a class of undirected graphs in which the sum of node degrees higher than two is at most α, and show that the adaptivity gap is upper-bounded by √α+O(1). Moreover, we show that in 0-bounded graphs, i.e. undirected graphs in which each connected component is a path or a cycle, the adaptivity gap is at most 3e³/(e³-1)≈3.16. To prove our bounds, we introduce new techniques to relate adaptive policies with non-adaptive ones that may be of independent interest.
|
Gianlorenzo D'Angelo, Debashmita Poddar, Cosimo Vinci
| null | null | 2,021 |
aaai
|
Estimating Identifiable Causal Effects through Double Machine Learning
| null |
Identifying causal effects from observational data is a pervasive challenge found throughout the empirical sciences. Very general methods have been developed to decide the identifiability of a causal quantity from a combination of observational data and causal knowledge about the underlying system. In practice, however, there are still challenges to estimating identifiable causal functionals from finite samples. Recently, a method known as double/debiased machine learning (DML) (Chernozhukov et al. 2018) has been proposed to learn parameters leveraging modern machine learning techniques, which is both robust to model misspecification and bias-reducing. Still, DML has only been used for causal estimation in settings when the back-door condition (also known as conditional ignorability) holds. In this paper, we develop a new, general class of estimators for any identifiable causal functionals that exhibit DML properties, which we name DML-ID. In particular, we introduce a complete identification algorithm that returns an influence function (IF) for any identifiable causal functional. We then construct the DML estimator based on the derived IF. We show that DML-ID estimators hold the key properties of debiasedness and doubly robustness. Simulation results corroborate the theory.
|
Yonghan Jung, Jin Tian, Elias Bareinboim
| null | null | 2,021 |
aaai
|
Instrumental Variable-based Identification for Causal Effects using Covariate Information
| null |
This paper deals with the identification problem of causal effects in randomized trials with noncompliance. In this setting, causal effects are generally not identifiable and thus have been evaluated under strict assumptions or through bounds. Unlike existing studies, we propose novel identification conditions for joint probabilities of potential outcomes, which allow us to derive a consistent estimator of the causal effect. Regarding the identification conditions of joint probabilities of potential outcomes, the assumptions of monotonicity (Pearl, 2009), independence between potential outcomes (Robins & Richardson, 2011), gain equality (Li & Pearl, 2019) and specific functional relationships between cause and effect (Pearl, 2009) have been utilized. In contrast, without such assumptions, the proposed conditions enable us to evaluate joint probabilities of potential outcomes using an instrumental variable and a proxy variable of potential outcomes. The results of this paper extend the range of solvable identification problems in causal inference.
|
Yuta Kawakami
| null | null | 2,021 |
aaai
|
Learning Continuous High-Dimensional Models using Mutual Information and Copula Bayesian Networks
| null |
We propose a new framework to learn non-parametric graphical models from continuous observational data. Our method is based on concepts from information theory to discover independences and causality between variables: the conditional and multivariate mutual information (as in Verny et al. (2017) for discrete models). To estimate these quantities, we propose non-parametric estimators relying on the Bernstein copula, constructed by exploiting the relation between mutual information and copula entropy (Ma and Sun 2011; Belalia et al. 2017). To our knowledge, this relation is only documented for the bivariate case and, for the needs of our algorithms, is here extended to the conditional and multivariate mutual information. This framework leads to a new algorithm for learning continuous non-parametric Bayesian networks. Moreover, we use this estimator to speed up the BIC algorithm proposed by Elidan (2010) by taking advantage of the decomposition of the likelihood function into a sum of mutual information terms (Koller and Friedman 2009). Finally, our method is compared in terms of performance and complexity with other state-of-the-art techniques for learning Copula Bayesian Networks and shows superior results. In particular, it needs less data to recover the true structure and generalizes better on data that are not sampled from Gaussian distributions.
|
Marvin Lasserre, Régis Lebrun, Pierre-Henri Wuillemin
| null | null | 2,021 |
aaai
|
Robust Contextual Bandits via Bootstrapping
| null |
Upper confidence bound (UCB) based contextual bandit algorithms require one to know the tail property of the reward distribution. Unfortunately, such tail property is usually unknown or difficult to specify in real-world applications. Using a tail property heavier than the ground truth leads to a slow learning speed of the contextual bandit algorithm, while using a lighter one may cause the algorithm to diverge. To address this fundamental problem, we develop an estimator (evaluated from historical rewards) for the contextual bandit UCB based on the multiplier bootstrapping technique. We first establish sufficient conditions under which our estimator converges asymptotically to the ground truth of contextual bandit UCB. We further derive a second order correction for our estimator so as to obtain its confidence level with a finite number of rounds. To demonstrate the versatility of the estimator, we apply it to design a BootLinUCB algorithm for the contextual bandit. We prove that the BootLinUCB has a sub-linear regret upper bound and also conduct extensive experiments to validate its superior performance.
|
Qiao Tang, Hong Xie, Yunni Xia, Jia Lee, Qingsheng Zhu
| null | null | 2,021 |
aaai
|
Submodel Decomposition Bounds for Influence Diagrams
| null |
Influence diagrams (IDs) are graphical models for representing and reasoning with sequential decision-making problems under uncertainty. Limited memory influence diagrams (LIMIDs) model a decision-maker (DM) who forgets the history in the course of making a sequence of decisions. The standard inference task in IDs and LIMIDs is to compute the maximum expected utility (MEU), which is one of the most challenging tasks in graphical models. We present a model decomposition framework in both IDs and LIMIDs, which we call submodel decomposition, that generates a tree of single-stage decision problems through a tree clustering scheme. We also develop a valuation algebra over the submodels that leads to a hierarchical message passing algorithm that propagates conditional expected utility functions over a submodel-tree as external messages. We show that the overall complexity is bounded by the maximum tree-width over the submodels, as is common in graphical model algorithms. Finally, we present a new method for computing upper bounds over a submodel-tree by first exponentiating the utility functions, yielding a standard probabilistic graphical model as an upper bound, and then applying standard variational upper bounds for the marginal MAP inference, yielding tighter upper bounds compared with state-of-the-art bounding schemes for the MEU task.
|
Junkyu Lee, Radu Marinescu, Rina Dechter
| null | null | 2,021 |
aaai
|
Uncertainty Quantification in CNN Through the Bootstrap of Convex Neural Networks
| null |
Despite the popularity of Convolutional Neural Networks (CNN), the problem of uncertainty quantification (UQ) of CNN has been largely overlooked. Lack of efficient UQ tools severely limits the application of CNN in certain areas, such as medicine, where prediction uncertainty is critically important. Among the few existing UQ approaches that have been proposed for deep learning, none of them has theoretical consistency that can guarantee the uncertainty quality. To address this issue, we propose a novel bootstrap-based framework for the estimation of prediction uncertainty. The inference procedure we use relies on convexified neural networks to establish the theoretical consistency of the bootstrap. Our approach incurs a significantly lower computational load than its competitors, as it relies on warm-starts at each bootstrap iteration, avoiding refitting the model from scratch. We further explore a novel transfer learning method so our framework can work on arbitrary neural networks. We experimentally demonstrate that our approach has much better performance compared to other baseline CNNs and state-of-the-art methods on various image datasets.
|
Hongfei Du, Emre Barut, Fang Jin
| null | null | 2,021 |
aaai
|
Enhancing Balanced Graph Edge Partition with Effective Local Search
| null |
Graph partition is a key component to achieve workload balance and reduce job completion time in parallel graph processing systems. Among the various partition strategies, edge partition has demonstrated more promising performance in power-law graphs than vertex partition and thereby has been more widely adopted as the default partition strategy by existing graph systems. The graph edge partition problem, which is to split the edge set into multiple balanced parts with the objective of minimizing the total number of copied vertices, has been widely studied from the view of optimization and algorithms. In this paper, we study local search algorithms for this problem to further improve the partition results from existing methods. More specifically, we propose two novel concepts, namely adjustable edges and blocks. Based on these, we develop a greedy heuristic as well as an improved search algorithm utilizing the property of max-flow model. To evaluate the performance of our algorithms, we first provide adequate theoretical analysis in terms of approximation quality. We significantly improve the previous known approximation ratio for this problem. Then we conduct extensive experiments on a large number of benchmark datasets and state-of-the-art edge partition strategies. The results show that our proposed local search framework can further improve the quality of graph partition by a wide margin.
|
Zhenyu Guo, Mingyu Xiao, Yi Zhou, Dongxiang Zhang, Kian-Lee Tan
| null | null | 2,021 |
aaai
|
Weighting-based Variable Neighborhood Search for Optimal Camera Placement
| null |
The optimal camera placement problem (OCP) aims to accomplish surveillance tasks with the minimum number of cameras, which is one of the topics in the GECCO 2020 Competition and can be modeled as the unicost set covering problem (USCP). This paper presents a weighting-based variable neighborhood search (WVNS) algorithm for solving OCP. First, it simplifies the problem instances with four reduction rules based on dominance and independence. Then, WVNS converts the simplified OCP into a series of decision unicost set covering subproblems and tackles them with a fast local search procedure featuring a swap-based neighborhood structure. WVNS employs an efficient incremental evaluation technique and further boosts the neighborhood evaluation by exploiting the dominance and independence features among neighborhood moves. Computational experiments on the 69 benchmark instances introduced in the GECCO 2020 Competition on OCP and USCP show that WVNS is highly competitive compared with state-of-the-art methods. It outperforms or matches several best-performing competitors on all instances in both the OCP and USCP tracks of the competition, and its advantage on 15 large-scale instances is over 10%. In addition, WVNS improves the previous best known results for 12 classical benchmark instances in the literature.
|
Zhouxing Su, Qingyun Zhang, Zhipeng Lü, Chu-Min Li, Weibo Lin, Fuda Ma
| null | null | 2,021 |
aaai
|
Combining Reinforcement Learning with Lin-Kernighan-Helsgaun Algorithm for the Traveling Salesman Problem
| null |
We address the Traveling Salesman Problem (TSP), a famous NP-hard combinatorial optimization problem, and propose a variable strategy reinforced approach, denoted VSR-LKH, which combines three reinforcement learning methods (Q-learning, Sarsa, and Monte Carlo) with the well-known TSP algorithm Lin-Kernighan-Helsgaun (LKH). VSR-LKH replaces the inflexible traversal operation in LKH and lets the program learn to make choices at each search step via reinforcement learning. Experimental results on 111 TSP benchmarks from the TSPLIB with up to 85,900 cities demonstrate the excellent performance of the proposed method.
|
Jiongzhi Zheng, Kun He, Jianrong Zhou, Yan Jin, Chu-Min Li
| null | null | 2,021 |
aaai
|
Bayes DistNet – A Robust Neural Network for Algorithm Runtime Distribution Predictions
| null |
Randomized algorithms are used in many state-of-the-art solvers for constraint satisfaction problems (CSP) and Boolean satisfiability (SAT) problems. For many of these problems, there is no single solver which will dominate others. Having access to the underlying runtime distributions (RTD) of these solvers can allow for better use of algorithm selection, algorithm portfolios, and restart strategies. Previous state-of-the-art methods directly try to predict a fixed parametric distribution that the input instance follows. In this paper, we extend RTD prediction models into the Bayesian setting for the first time. This new model achieves robust predictive performance in the low observation setting, as well as handling censored observations. This technique also allows for richer representations which cannot be achieved by the classical models which restrict their output representations. Our model outperforms the previous state-of-the-art model in settings in which data is scarce, and can make use of censored data such as lower bound time estimates, where that type of data would otherwise be discarded. It can also quantify its uncertainty in its predictions, allowing for algorithm portfolio models to make better informed decisions about which algorithm to run on a particular instance.
|
Jake Tuero, Michael Buro
| null | null | 2,021 |
aaai
|
Accelerated Combinatorial Search for Outlier Detection with Provable Bound on Sub-Optimality
| null |
Outliers negatively affect the accuracy of data analysis. In this paper we are concerned with their influence on the accuracy of Principal Component Analysis (PCA). Algorithms that attempt to detect outliers and remove them from the data prior to applying PCA are sometimes called Robust PCA, or Robust Subspace Recovery algorithms. We propose a new algorithm for outlier detection that combines two ideas. The first is "chunk recursive elimination" that was used effectively to accelerate feature selection, and the second is combinatorial search, in a setting similar to A*. Our main result is showing how to combine these two ideas. One variant of our algorithm is guaranteed to compute an optimal solution according to some natural criteria, but its running time makes it impractical for large datasets. Other variants are much faster and come with provable bounds on sub-optimality. Experimental results show the effectiveness of the proposed approach.
|
Guihong Wan, Haim Schweitzer
| null | null | 2,021 |
aaai
|
Correlation-Aware Heuristic Search for Intelligent Virtual Machine Provisioning in Cloud Systems
| null |
The optimization of resources is crucial for the operation of public cloud systems such as Microsoft Azure, as well as servers dedicated to the workloads of large customers such as Microsoft 365. Those optimization tasks often need to take unknown parameters into consideration and can be formulated as Prediction+Optimization problems. This paper proposes a new Prediction+Optimization method named Correlation-Aware Heuristic Search (CAHS) that is capable of accounting for the uncertainty in unknown parameters and delivering effective solutions to difficult optimization problems. We apply this method to solving the predictive virtual machine (VM) provisioning (PreVMP) problem, where the VM provisioning plans are optimized based on the predicted demands of different VM types, to ensure rapid provisions upon customers' requests and to pursue high resource utilization. Unlike the current state-of-the-art PreVMP approaches that assume independence among the demands for different VM types, CAHS incorporates demand correlation when conducting prediction and optimization in a novel and effective way. Our experiments on two public benchmarks and one industrial benchmark demonstrate that CAHS can achieve better performance than its nine state-of-the-art competitors. CAHS has been successfully deployed in Microsoft Azure and significantly improved its performance. The main ideas of CAHS have also been leveraged to improve the efficiency and the reliability of the cloud services provided by Microsoft 365.
|
Chuan Luo, Bo Qiao, Wenqian Xing, Xin Chen, Pu Zhao, Chao Du, Randolph Yao, Hongyu Zhang, Wei Wu, Shaowei Cai, Bing He, Saravanakumar Rajmohan, Qingwei Lin
| null | null | 2,021 |
aaai
|
Improving Maximum k-plex Solver via Second-Order Reduction and Graph Color Bounding
| null |
In a graph, a k-plex is a vertex set in which every vertex is non-adjacent to at most k vertices of the set. The maximum k-plex problem, which asks for the largest k-plex in a given graph, is a key primitive in a variety of real-world applications such as community detection. In this paper, we develop an exact algorithm, Maplex, for solving this problem on real-world graphs in practice. Based on the existing first-order and our novel second-order reduction rules, we design a powerful preprocessing method that efficiently removes redundant vertices and edges for Maplex. The graph coloring heuristic is widely used for overestimating the size of the maximum clique of a graph; for the first time, we generalize this technique to bound the size of the maximum k-plex in Maplex. Experiments are carried out to compare our algorithm with other state-of-the-art solvers on a wide range of publicly available graphs. Maplex outperforms all other algorithms on large real-world graphs and is competitive with existing solvers on artificial dense graphs. Finally, we shed light on the effectiveness of each key component of Maplex.
|
Yi Zhou, Shan Hu, Mingyu Xiao, Zhang-Hua Fu
| null | null | 2,021 |
aaai
|
Deep Innovation Protection: Confronting the Credit Assignment Problem in Training Heterogeneous Neural Architectures
| null |
Deep reinforcement learning approaches have shown impressive results in a variety of different domains; however, more complex heterogeneous architectures such as world models require the different neural components to be trained separately instead of end-to-end. While a simple genetic algorithm recently showed that end-to-end training is possible, it failed to solve a more complex 3D task. This paper presents a method called Deep Innovation Protection (DIP) that addresses the credit assignment problem in training complex heterogeneous neural network models end-to-end for such environments. The main idea behind the approach is to employ multiobjective optimization to temporarily reduce the selection pressure on specific components in a multi-component network, allowing other components to adapt. We investigate the emergent representations of these evolved networks, which learn to predict properties important for the survival of the agent, without the need for a specific forward-prediction loss.
|
Sebastian Risi, Kenneth O. Stanley
| null | null | 2,021 |
aaai
|
Single Player Monte-Carlo Tree Search Based on the Plackett-Luce Model
| null |
The problem of minimal cost path search is especially difficult when no useful heuristics are available. A common solution is roll-out-based search like Monte Carlo Tree Search (MCTS). However, MCTS is mostly used in stochastic or adversarial environments, with the goal of identifying an agent's best next move. For this reason, even though single-player versions of MCTS exist, most algorithms, including UCT, are not directly tailored to classical minimal cost path search. We present Plackett-Luce MCTS (PL-MCTS), a path search algorithm based on a probabilistic model over the qualities of successor nodes. We empirically show that PL-MCTS is competitive with and often superior to the state of the art.
|
Felix Mohr, Viktor Bengs, Eyke Hüllermeier
| null | null | 2,021 |
aaai
|
A Generative Adversarial Framework for Bounding Confounded Causal Effects
| null |
Causal inference from observational data is receiving wide applications in many fields. However, unidentifiable situations, where causal effects cannot be uniquely computed from observational data, pose critical barriers to applying causal inference to complicated real applications. In this paper, we develop a bounding method for estimating the average causal effect (ACE) under unidentifiable situations due to hidden confounding, based on Pearl's structural causal model. We propose to parameterize the unknown exogenous random variables and structural equations of a causal model using neural networks and implicit generative models. Then, using an adversarial learning framework, we search the parameter space to explicitly traverse causal models that agree with the given observational distribution, and find those that minimize or maximize the ACE to obtain its lower and upper bounds. The proposed method does not make assumptions about the type of structural equations and variables. Experiments using both synthetic and real-world datasets are conducted.
|
Yaowei Hu, Yongkai Wu, Lu Zhang, Xintao Wu
| null | null | 2,021 |
aaai
|
Multi-Goal Multi-Agent Path Finding via Decoupled and Integrated Goal Vertex Ordering
| null |
We introduce multi-goal multi-agent path finding (MG-MAPF), which generalizes the standard discrete multi-agent path finding (MAPF) problem. While the task in MAPF is to navigate agents in an undirected graph from their starting vertices to one individual goal vertex per agent, MG-MAPF assigns each agent multiple goal vertices, and the task is to visit each of them at least once. Solving MG-MAPF not only requires finding collision-free paths for individual agents but also determining the order in which each agent visits its goal vertices so that common objectives like the sum-of-costs are optimized. We suggest two novel algorithms using different paradigms to address MG-MAPF: a heuristic search-based algorithm called Hamiltonian-CBS (HCBS) and a compilation-based algorithm built using satisfiability modulo theories (SMT), called SMT-Hamiltonian-CBS (SMT-HCBS).
|
Pavel Surynek
| null | null | 2,021 |
aaai
|
Multi-Objective Submodular Maximization by Regret Ratio Minimization with Theoretical Guarantee
| null |
Submodular maximization has attracted much attention due to its wide application and attractive properties. Previous works mainly considered one single objective function, while there can be multiple ones in practice. As the objectives are usually conflicting, there exists a set of Pareto optimal solutions, attaining different optimal trade-offs among multiple objectives. In this paper, we consider the problem of minimizing the regret ratio in multi-objective submodular maximization, which is to find at most k solutions to approximate the whole Pareto set as well as possible. We propose a new algorithm RRMS by sampling representative weight vectors and solving the corresponding weighted sums of objective functions using some given α-approximation algorithm for single-objective submodular maximization. We prove that the regret ratio of the output of RRMS is upper-bounded by 1-α+O(√(d-1)·(d/(k-d))^(1/(d-1))), where d is the number of objectives. This is the first theoretical guarantee for the situation with more than two objectives. When d=2, it matches the (1-α+O(1/k))-guarantee of the only existing algorithm, Polytope. Empirical results on the applications of multi-objective weighted maximum coverage and Max-Cut show the superior performance of RRMS over Polytope.
|
Chao Feng, Chao Qian
| null | null | 2,021 |
aaai
|
EECBS: A Bounded-Suboptimal Search for Multi-Agent Path Finding
| null |
Multi-Agent Path Finding (MAPF), i.e., finding collision-free paths for multiple robots, is important for many applications where small runtimes are necessary, including the kind of automated warehouses operated by Amazon. CBS is a leading two-level search algorithm for solving MAPF optimally. ECBS is a bounded-suboptimal variant of CBS that uses focal search to speed up CBS by sacrificing optimality and instead guaranteeing that the costs of its solutions are within a given factor of optimal. In this paper, we study how to decrease its runtime even further using inadmissible heuristics. Motivated by Explicit Estimation Search (EES), we propose Explicit Estimation CBS (EECBS), a new bounded-suboptimal variant of CBS, that uses online learning to obtain inadmissible estimates of the cost of the solution of each high-level node and uses EES to choose which high-level node to expand next. We also investigate recent improvements of CBS and adapt them to EECBS. We find that EECBS with the improvements runs significantly faster than the state-of-the-art bounded-suboptimal MAPF algorithms ECBS, BCP-7, and eMDD-SAT on a variety of MAPF instances. We hope that the scalability of EECBS enables additional applications for bounded-suboptimal MAPF algorithms.
|
Jiaoyang Li, Wheeler Ruml, Sven Koenig
| null | null | 2,021 |
aaai
|
Efficient Bayesian Network Structure Learning via Parameterized Local Search on Topological Orderings
| null |
In Bayesian Network Structure Learning (BNSL), we are given a variable set and parent scores for each variable and aim to compute a DAG, called a Bayesian network, that maximizes the sum of parent scores, possibly under some structural constraints. Even very restricted special cases of BNSL are computationally hard, and, thus, in practice heuristics such as local search are used. In a typical local search algorithm, we are given some BNSL solution and ask whether there is a better solution within some pre-defined neighborhood of the solution. We study ordering-based local search, where a solution is described via a topological ordering of the variables. We show that given such a topological ordering, we can compute an optimal DAG whose ordering is within inversion distance r in subexponential FPT time; the parameter r allows one to balance solution quality against the running time of the local search algorithm. This running time bound can be achieved for BNSL without any structural constraints and for all structural constraints that can be expressed via a sum of weights associated with each parent set. We show that for other modification operations on the variable orderings, algorithms with FPT running time in r are unlikely. We also outline the limits of ordering-based local search by showing that it cannot be used for common structural constraints on the moralized graph of the network.
|
Niels Grüttemeier, Christian Komusiewicz, Nils Morawietz
| null | null | 2,021 |
aaai
|
Submodular Span, with Applications to Conditional Data Summarization
| null |
As an extension to the matroid span problem, we propose the submodular span problem that involves finding a large set of elements with small gain relative to a given query set. We then propose a two-stage Submodular Span Summarization (S3) framework to achieve a form of conditional or query-focused data summarization. The first stage encourages the summary to be relevant to a given query set, and the second stage encourages the final summary to be diverse, thus achieving two important necessities for a good query-focused summary. Unlike previous methods, our framework uses only a single submodular function defined over both data and query. We analyze theoretical properties in the context of both matroids and polymatroids that elucidate when our methods should work well. We find that a scalable approximation algorithm to the polymatroid submodular span problem has good theoretical and empirical properties. We provide empirical and qualitative results on three real-world tasks: conditional multi-document summarization on the DUC 2005-2007 datasets, conditional video summarization on the UT-Egocentric dataset, and conditional image corpus summarization on the ImageNet dataset. We use deep neural networks, specifically a BERT model for text, AlexNet for video frames, and Bi-directional Generative Adversarial Networks (BiGAN) for ImageNet images to help instantiate the submodular functions. The result is a minimally supervised form of conditional summarization that matches or improves over the previous state-of-the-art.
|
Lilly Kumari, Jeff Bilmes
| null | null | 2,021 |
aaai
|
A Fast Exact Algorithm for the Resource Constrained Shortest Path Problem
| null |
Resource constrained path finding is a well studied topic in AI, with real-world applications in different areas such as transportation and robotics. This paper introduces several heuristics in the resource constrained path finding context that significantly improve the algorithmic performance of the initialisation phase and the core search. We implement our heuristics on top of a bidirectional A* algorithm and evaluate them on a set of large instances. The experimental results show that, for the first time in the context of constrained path finding, our fast and enhanced algorithm can solve all of the benchmark instances to optimality, and compared to the state of the art algorithms, it can improve existing runtimes by up to four orders of magnitude on large-size network graphs.
|
Saman Ahmadi, Guido Tack, Daniel D. Harabor, Philip Kilby
| null | null | 2,021 |
aaai
|
Theoretical Analyses of Multi-Objective Evolutionary Algorithms on Multi-Modal Objectives
| null |
Previous theory work on multi-objective evolutionary algorithms considers mostly easy problems that are composed of unimodal objectives. This paper takes a first step towards a deeper understanding of how evolutionary algorithms solve multi-modal multi-objective problems. We propose the OneJumpZeroJump problem, a bi-objective problem whose single objectives are isomorphic to the classic jump functions benchmark. We prove that the simple evolutionary multi-objective optimizer (SEMO) cannot compute the full Pareto front. In contrast, for all problem sizes n and all jump sizes k in [4..n/2-1], the global SEMO (GSEMO) covers the Pareto front in Θ((n-2k)n^k) iterations in expectation. To improve the performance, we combine the GSEMO with two approaches, a heavy-tailed mutation operator and a stagnation detection strategy, that showed advantages in single-objective multi-modal problems. Runtime improvements of asymptotic order at least k^Ω(k) are shown for both strategies. Our experiments verify the substantial runtime gains already for moderate problem sizes. Overall, these results show that the ideas recently developed for single-objective evolutionary algorithms can be effectively employed also in multi-objective optimization.
|
Benjamin Doerr, Weijie Zheng
| null | null | 2,021 |
aaai
|
Generalization in Portfolio-Based Algorithm Selection
| null |
Portfolio-based algorithm selection has seen tremendous practical success over the past two decades. This algorithm configuration procedure works by first selecting a portfolio of diverse algorithm parameter settings, and then, on a given problem instance, using an algorithm selector to choose a parameter setting from the portfolio with strong predicted performance. Oftentimes, both the portfolio and the algorithm selector are chosen using a training set of typical problem instances from the application domain at hand. In this paper, we provide the first provable guarantees for portfolio-based algorithm selection. We analyze how large the training set should be to ensure that the resulting algorithm selector's average performance over the training set is close to its future (expected) performance. This involves analyzing three key reasons why these two quantities may diverge: 1) the learning-theoretic complexity of the algorithm selector, 2) the size of the portfolio, and 3) the learning-theoretic complexity of the algorithm's performance as a function of its parameters. We introduce an end-to-end learning-theoretic analysis of the portfolio construction and algorithm selection together. We prove that if the portfolio is large, overfitting is inevitable, even with an extremely simple algorithm selector. With experiments, we illustrate a tradeoff exposed by our theoretical analysis: as we increase the portfolio size, we can hope to include a well-suited parameter setting for every possible problem instance, but it becomes impossible to avoid overfitting.
|
Maria-Florina Balcan, Tuomas Sandholm, Ellen Vitercik
| null | null | 2,021 |
aaai
|
Pareto Optimization for Subset Selection with Dynamic Partition Matroid Constraints
| null |
In this study, we consider the subset selection problems with submodular or monotone discrete objective functions under partition matroid constraints where the thresholds are dynamic. We focus on POMC, a simple Pareto optimization approach that has been shown to be effective on such problems. Our analysis departs from singular constraint problems and extends to problems of multiple constraints. We show that previous results of POMC's performance also hold for multiple constraints. Our experimental investigations on random undirected maxcut problems demonstrate POMC's competitiveness against the classical GREEDY algorithm with restart strategy.
|
Anh Viet Do, Frank Neumann
| null | null | 2,021 |
aaai
|
Combining Preference Elicitation with Local Search and Greedy Search for Matroid Optimization
| null |
We propose two incremental preference elicitation methods for interactive preference-based optimization on weighted matroid structures. More precisely, for linear objective (utility) functions, we propose an interactive greedy algorithm interleaving preference queries with the incremental construction of an independent set to obtain an optimal or near-optimal base of a matroid. We also propose an interactive local search algorithm based on sequences of possibly improving exchanges for the same problem. For both algorithms, we provide performance guarantees on the quality of the returned solutions and the number of queries. Our algorithms are tested on the uniform, graphical and scheduling matroids to solve three different problems (committee election, spanning tree, and scheduling problems) and evaluated in terms of computation times, number of queries, and empirical error.
|
Nawal Benabbou, Cassandre Leroy, Thibaut Lust, Patrice Perny
| null | null | 2,021 |
aaai
|
Parameterized Algorithms for MILPs with Small Treedepth
| null |
Solving (mixed) integer (linear) programs, (M)I(L)Ps for short, is a fundamental optimisation task with a wide range of applications in artificial intelligence and computer science in general. While hard in general, recent years have brought about vast progress for solving structurally restricted, (non-mixed) ILPs: n-fold, tree-fold, 2-stage stochastic and multi-stage stochastic programs admit efficient algorithms, and all of these special cases are subsumed by the class of ILPs of small treedepth. In this paper, we extend this line of work to the mixed case, by showing an algorithm solving MILP in time f(a,d)poly(n), where a is the largest coefficient of the constraint matrix, d is its treedepth, and n is the number of variables. This is enabled by proving bounds on the denominators (fractionality) of the vertices of bounded-treedepth (non-integer) linear programs. We do so by carefully analysing the inverses of invertible sub-matrices of the constraint matrix. This allows us to afford scaling up the mixed program to the integer grid, and applying the known methods for integer programs. We then trace the limiting boundary of our "bounded fractionality" approach both in terms of going beyond MILP (by allowing non-linear objectives) as well as its usefulness for generalising other important known tractable classes of ILP. On the positive side, we show that our result can be generalised from MILP to MIP with piece-wise linear separable convex objectives with integer breakpoints. On the negative side, we show that going even slightly beyond such objectives or considering other natural related tractable classes of ILP leads to unbounded fractionality. Finally, we show that restricting the structure of only the integral variables in the constraint matrix does not yield tractable special cases.
|
Cornelius Brand, Martin Koutecký, Sebastian Ordyniak
| null | null | 2,021 |
aaai
|
NuQClq: An Effective Local Search Algorithm for Maximum Quasi-Clique Problem
| null |
The maximum quasi-clique problem (MQCP) is an important extension of maximum clique problem with wide applications. Recent heuristic MQCP algorithms can hardly solve large and hard graphs effectively. This paper develops an efficient local search algorithm named NuQClq for the MQCP, which has two main ideas. First, we propose a novel vertex selection strategy, which utilizes cumulative saturation information to be a selection criterion when the candidate vertices have equal values on the primary scoring function. Second, a variant of configuration checking named BoundedCC is designed by setting an upper bound for the threshold of forbidding strength. When the threshold value of vertex exceeds the upper bound, we reset its threshold value to increase the diversity of search process. Experiments on a broad range of classic benchmarks and sparse instances show that NuQClq significantly outperforms the state-of-the-art MQCP algorithms for most instances.
|
Jiejiang Chen, Shaowei Cai, Shiwei Pan, Yiyuan Wang, Qingwei Lin, Mengyu Zhao, Minghao Yin
| null | null | 2,021 |
aaai
|
Choosing the Initial State for Online Replanning
| null |
The need to replan arises in many applications. However, in the context of planning as heuristic search, it raises an annoying problem: if the previous plan is still executing, what should the new plan search take as its initial state? If it were possible to accurately predict how long replanning would take, it would be easy to find the appropriate state at which control will transfer from the previous plan to the new one. But as planning problems can vary enormously in their difficulty, this prediction can be difficult. Many current systems merely use a manually chosen constant duration. In this paper, we show how such ad hoc solutions can be avoided by integrating the choice of the appropriate initial state into the search process itself. The search is initialized with multiple candidate initial states and a time-aware evaluation function is used to prefer plans whose total goal achievement time is minimal. Experimental results show that this approach yields better behavior than either guessing a constant or trying to predict replanning time in advance. By making replanning more effective and easier to implement, this work aids in creating planning systems that can better handle the inevitable exigencies of real-world execution.
|
Maximilian Fickert, Ivan Gavran, Ivan Fedotov, Jörg Hoffmann, Rupak Majumdar, Wheeler Ruml
| null | null | 2,021 |
aaai
|
Escaping Local Optima with Non-Elitist Evolutionary Algorithms
| null |
Most discrete evolutionary algorithms (EAs) implement elitism, meaning that they make the biologically implausible assumption that the fittest individuals never die. While elitism favours exploitation and ensures that the best seen solutions are not lost, it has been widely conjectured that non-elitism is necessary to explore promising fitness valleys without getting stuck in local optima. Determining when non-elitist EAs outperform elitist EAs has been one of the most fundamental open problems in evolutionary computation. A recent analysis of a non-elitist EA shows that this algorithm does not outperform its elitist counterparts on the benchmark problem JUMP. We solve this open problem through rigorous runtime analysis of elitist and non-elitist population-based EAs on a class of multi-modal problems. We show that with 3-tournament selection and appropriate mutation rates, the non-elitist EA optimises the multi-modal problem in expected polynomial time, while an elitist EA requires exponential time with overwhelmingly high probability. A key insight in our analysis is the non-linear selection profile of the tournament selection mechanism which, with appropriate mutation rates, allows a small sub-population to reside on the local optimum while the rest of the population explores the fitness valley. In contrast, we show that the comma-selection mechanism which does not have this non-linear profile, fails to optimise this problem in polynomial time. The theoretical analysis is complemented with an empirical investigation on instances of the set cover problem, showing that non-elitist EAs can perform better than the elitist ones. We also provide examples where usage of mutation rates close to the error thresholds is beneficial when employing non-elitist population-based EAs.
|
Duc-Cuong Dang, Anton Eremeev, Per Kristian Lehre
| null | null | 2,021 |
aaai
|
Symmetry Breaking for k-Robust Multi-Agent Path Finding
| null |
During Multi-Agent Path Finding (MAPF) problems, agents can be delayed by unexpected events. To address such situations, recent work describes k-Robust Conflict-Based Search (k-CBS): an algorithm that produces coordinated and collision-free plans that are robust for up to k delays. In this work we introduce a variety of pairwise symmetry breaking constraints, specific to k-robust planning, that can efficiently find compatible and optimal paths for pairs of conflicting agents. We give a thorough description of the new constraints and report large improvements to success rate in a range of domains including: (i) classic MAPF benchmarks; (ii) automated warehouse domains and; (iii) maps from the 2019 Flatland Challenge, a recently introduced railway domain where k-robust planning can be fruitfully applied to schedule trains.
|
Zhe Chen, Daniel D. Harabor, Jiaoyang Li, Peter J. Stuckey
| null | null | 2,021 |
aaai
|
f-Aware Conflict Prioritization & Improved Heuristics For Conflict-Based Search
| null |
Conflict-Based Search (CBS) is a leading two-level algorithm for optimal Multi-Agent Path Finding (MAPF). The main step of CBS is to expand nodes by resolving conflicts (where two agents collide). Choosing the ‘right’ conflict to resolve can greatly speed up the search. CBS first resolves conflicts where the costs (g-values) of the resulting child nodes are larger than the cost of the node to be split. However, the recent addition of high-level heuristics to CBS and expanding nodes according to f=g+h reduces the relevance of this conflict prioritization method. Therefore, we introduce an expanded categorization of conflicts, which first resolves conflicts where the f-values of the child nodes are larger than the f-value of the node to be split, and present a method for identifying such conflicts. We also enhance all known heuristics for CBS by using information about the cost of resolving certain conflicts, and with only a small computational overhead. Finally, we experimentally demonstrate that both the expanded categorization of conflicts and the improved heuristics contribute to making CBS even more efficient.
|
Eli Boyarski, Ariel Felner, Pierre Le Bodic, Daniel D. Harabor, Peter J. Stuckey, Sven Koenig
| null | null | 2,021 |
aaai
|
SARG: A Novel Semi Autoregressive Generator for Multi-turn Incomplete Utterance Restoration
| null |
Dialogue systems in the open domain have achieved great success due to the easily obtained single-turn corpus and the development of deep learning, but the multi-turn scenario is still a challenge because of the frequent coreference and information omission. In this paper, we investigate incomplete utterance restoration, which has brought general improvement over multi-turn dialogue systems in recent studies. Meanwhile, inspired by autoregression for text generation and sequence labeling for text editing, we propose a novel semi autoregressive generator (SARG) with high efficiency and flexibility. Moreover, experiments on Restoration-200k show that our proposed model significantly outperforms the state-of-the-art models in terms of quality and inference speed.
|
Mengzuo Huang, Feng Li, Wuhe Zou, Weidong Zhang
| null | null | 2,021 |
aaai
|
Entity Guided Question Generation with Contextual Structure and Sequence Information Capturing
| null |
Question generation is a challenging task and has attracted widespread attention in recent years. Although previous studies have made great progress, there are still two main shortcomings: First, previous work did not simultaneously capture the sequence information and structure information hidden in the context, which results in poor results of the generated questions. Second, the generated questions cannot be answered by the given context. To tackle these issues, we propose an entity guided question generation model with contextual structure information and sequence information capturing. We use a Graph Convolutional Network and a Bidirectional Long Short Term Memory Network to capture the structure information and sequence information of the context, simultaneously. In addition, to improve the answerability of the generated questions, we use an entity-guided approach to obtain question type from the answer, and jointly encode the answer and question type. Both automatic and manual metrics show that our model can generate comparable questions with state-of-the-art models. Our code is available at https://github.com/VISLANG-Lab/EGSS.
|
Qingbao Huang, Mingyi Fu, Linzhang Mo, Yi Cai, Jingyun Xu, Pijian Li, Qing Li, Ho-fung Leung
| null | null | 2,021 |
aaai
|
Story Ending Generation with Multi-Level Graph Convolutional Networks over Dependency Trees
| null |
As an interesting and challenging task, story ending generation aims at generating a reasonable and coherent ending for a given story context. The key challenge of the task is to comprehend the context sufficiently and capture the hidden logic information effectively, which has not been well explored by most existing generative models. To tackle this issue, we propose a context-aware Multi-level Graph Convolutional Networks over Dependency Parse (MGCN-DP) trees to capture dependency relations and context clues more effectively. We utilize dependency parse trees to facilitate capturing relations and events in the context implicitly, and Multi-level Graph Convolutional Networks to update and deliver the representation crossing levels to obtain richer contextual information. Both automatic and manual evaluations show that our MGCN-DP can achieve comparable performance with state-of-the-art models. Our source code is available at https://github.com/VISLANG-Lab/MLGCN-DP.
|
Qingbao Huang, Linzhang Mo, Pijian Li, Yi Cai, Qingguang Liu, Jielong Wei, Qing Li, Ho-fung Leung
| null | null | 2,021 |
aaai
|
Multi-SpectroGAN: High-Diversity and High-Fidelity Spectrogram Generation with Adversarial Style Combination for Speech Synthesis
| null |
While generative adversarial network (GAN)-based neural text-to-speech (TTS) systems have shown significant improvement in neural speech synthesis, there is no TTS system that learns to synthesize speech from text sequences with only adversarial feedback. Because adversarial feedback alone is not sufficient to train the generator, current models still require the reconstruction loss between the ground-truth and the generated mel-spectrogram directly. In this paper, we present Multi-SpectroGAN (MSG), which can train the multi-speaker model with only the adversarial feedback by conditioning a self-supervised hidden representation of the generator to a conditional discriminator. This leads to better guidance for generator training. Moreover, we also propose adversarial style combination (ASC) for better generalization in the unseen speaking style and transcript, which can learn latent representations of the combined style embedding from multiple mel-spectrograms. Trained with ASC and feature matching, the MSG synthesizes a high-diversity mel-spectrogram by controlling and mixing the individual speaking styles (e.g., duration, pitch, and energy). The result shows that the MSG synthesizes a high-fidelity mel-spectrogram, which has almost the same naturalness MOS score as the ground-truth mel-spectrogram.
|
Sang-Hoon Lee, Hyun-Wook Yoon, Hyeong-Rae Noh, Ji-Hoon Kim, Seong-Whan Lee
| null | null | 2,021 |
aaai
|
Adaptive Beam Search Decoding for Discrete Keyphrase Generation
| null |
Keyphrase Generation compresses a document into some highly-summative phrases, which is an important task in natural language processing. Most state-of-the-art methods adopt greedy search or beam search decoding. These two decoding methods generate a large number of duplicated keyphrases and are time-consuming. Moreover, beam search only predicts a fixed number of keyphrases for different documents. In this paper, we propose an adaptive generation model, AdaGM, which is mainly inspired by the importance of the first words in keyphrase generation. In AdaGM, a novel reset state training mechanism is proposed to maximize the difference in the predicted first words. To ensure the discreteness and get an appropriate number of keyphrases according to the content of the document adaptively, we equip beam search with a highly effective filter mechanism. Experiments on five public datasets demonstrate the proposed model can generate fewer duplicated and more accurate keyphrases. The codes of AdaGM are available at: https://github.com/huangxiaolist/adaGM.
|
Xiaoli Huang, Tongge Xu, Lvan Jiao, Yueran Zu, Youmin Zhang
| null | null | 2,021 |
aaai
|
Dynamic Hybrid Relation Exploration Network for Cross-Domain Context-Dependent Semantic Parsing
| null |
Semantic parsing has long been a fundamental problem in natural language processing. Recently, cross-domain context-dependent semantic parsing has become a new focus of research. Central to the problem is the challenge of leveraging contextual information of both natural language queries and database schemas in the interaction history. In this paper, we present a dynamic graph framework that is capable of effectively modelling contextual utterances, tokens, database schemas, and their complicated interaction as the conversation proceeds. The framework employs a dynamic memory decay mechanism that incorporates inductive bias to integrate enriched contextual relation representation, which is further enhanced with a powerful reranking model. At the time of writing, we demonstrate that the proposed framework outperforms all existing models by large margins, achieving new state-of-the-art performance on two large-scale benchmarks, the SParC and CoSQL datasets. Specifically, the model attains a 55.8% question-match and 30.8% interaction-match accuracy on SParC, and a 46.8% question-match and 17.0% interaction-match accuracy on CoSQL.
|
Binyuan Hui, Ruiying Geng, Qiyu Ren, Binhua Li, Yongbin Li, Jian Sun, Fei Huang, Luo Si, Pengfei Zhu, Xiaodan Zhu
| null | null | 2,021 |
aaai
|
Hierarchical Macro Discourse Parsing Based on Topic Segmentation
| null |
Hierarchically constructing micro (i.e., intra-sentence or inter-sentence) discourse structure trees using explicit boundaries (e.g., sentence and paragraph boundaries) has been proved to be an effective strategy. However, it is difficult to apply this strategy to document-level macro (i.e., inter-paragraph) discourse parsing, the more challenging task, due to the lack of explicit boundaries at the higher level. To alleviate this issue, we introduce a topic segmentation mechanism to detect implicit topic boundaries and then help the document-level macro discourse parser to construct better discourse trees hierarchically. In particular, our parser first splits a document into several sections using the topic boundaries that the topic segmentation detects. Then it builds a smaller and more accurate discourse sub-tree in each section and sequentially forms a whole tree for a document. The experimental results on both Chinese MCDTB and English RST-DT show that our proposed method outperforms the state-of-the-art baselines significantly.
|
Feng Jiang, Yaxin Fan, Xiaomin Chu, Peifeng Li, Qiaoming Zhu, Fang Kong
| null | null | 2,021 |
aaai
|
EQG-RACE: Examination-Type Question Generation
| null |
Question Generation (QG) is an essential component of the automatic intelligent tutoring systems, which aims to generate high-quality questions for facilitating the reading practice and assessments. However, existing QG technologies encounter several key issues concerning the biased and unnatural language sources of datasets which are mainly obtained from the Web (e.g. SQuAD). In this paper, we propose an innovative Examination-type Question Generation approach (EQG-RACE) to generate exam-like questions based on a dataset extracted from RACE. Two main strategies are employed in EQG-RACE for dealing with discrete answer information and reasoning among long contexts. A Rough Answer and Key Sentence Tagging scheme is utilized to enhance the representations of input. An Answer-guided Graph Convolutional Network (AG-GCN) is designed to capture structure information in revealing the inter-sentences and intra-sentence relations. Experimental results show a state-of-the-art performance of EQG-RACE, which is apparently superior to the baselines. In addition, our work has established a new QG prototype with a reshaped dataset and QG method, which provides an important benchmark for related research in future work. We will make our data and code publicly available for further research.
|
Xin Jia, Wenjie Zhou, Xu Sun, Yunfang Wu
| null | null | 2,021 |
aaai
|
Unsupervised Learning of Discourse Structures using a Tree Autoencoder
| null |
Discourse information, as postulated by popular discourse theories, such as RST and PDTB, has been shown to improve an increasing number of downstream NLP tasks, showing positive effects and synergies of discourse with important real-world applications. While methods for incorporating discourse become more and more sophisticated, the growing need for robust and general discourse structures has not been sufficiently met by current discourse parsers, usually trained on small scale datasets in a strictly limited number of domains. This makes the prediction for arbitrary tasks noisy and unreliable. The overall resulting lack of high-quality, high-quantity discourse trees poses a severe limitation to further progress. In order to alleviate this shortcoming, we propose a new strategy to generate tree structures in a task-agnostic, unsupervised fashion by extending a latent tree induction framework with an auto-encoding objective. The proposed approach can be applied to any tree-structured objective, such as syntactic parsing, discourse parsing and others. However, due to the especially difficult annotation process to generate discourse trees, we initially develop a method to generate larger and more diverse discourse treebanks. In this paper we infer general tree structures of natural text in multiple domains, showing promising results on a diverse set of tasks.
|
Patrick Huber, Giuseppe Carenini
| null | null | 2,021 |
aaai
|
Self-supervised Pre-training and Contrastive Representation Learning for Multiple-choice Video QA
| null |
Video Question Answering (VideoQA) requires fine-grained understanding of both video and language modalities to answer the given questions. In this paper, we propose novel training schemes for multiple-choice video question answering with a self-supervised pre-training stage and a supervised contrastive learning in the main stage as an auxiliary learning. In the self-supervised pre-training stage, we transform the original problem format of predicting the correct answer into the one that predicts the relevant question to provide a model with broader contextual inputs without any further dataset or annotation. For contrastive learning in the main stage, we add a masking noise to the input corresponding to the ground-truth answer, and consider the original input of the ground-truth answer as a positive sample, while treating the rest as negative samples. By mapping the positive sample closer to the masked input, we show that the model performance is improved. We further employ locally aligned attention to focus more effectively on the video frames that are particularly relevant to the given corresponding subtitle sentences. We evaluate our proposed model on highly competitive benchmark datasets related to multiple-choice video QA: TVQA, TVQA+, and DramaQA. Experimental results show that our model achieves state-of-the-art performance on all datasets. We also validate our approaches through further analyses.
|
Seonhoon Kim, Seohyeong Jeong, Eunbyul Kim, Inho Kang, Nojun Kwak
| null | null | 2,021 |
aaai
|