Dataset schema (field: type, observed range):
- title: string (length 5 to 246)
- categories: string (length 5 to 94)
- abstract: string (length 54 to 5.03k)
- authors: string (length 0 to 6.72k)
- doi: string (length 12 to 54)
- id: string (length 6 to 10)
- year: float64 (2.02k to 2.02k)
- venue: string (13 classes)
GAN-Based Unpaired Chinese Character Image Translation via Skeleton Transformation and Stroke Rendering
null
The automatic style translation of Chinese characters (CH-Char) is a challenging problem. Unlike English or general artistic style transfer, Chinese characters comprise a large number of glyphs with complicated content and characteristic styles. Early methods for CH-Char synthesis are inefficient and require manual intervention. Recently, some GAN-based methods have been proposed for font generation. Supervised GAN-based methods require numerous image pairs, which are difficult to obtain for many chirography styles. In addition, unsupervised methods often produce blurred and incorrect strokes. Therefore, in this work, we propose a three-stage Generative Adversarial Network (GAN) architecture for multi-chirography image translation, which is divided into skeleton extraction, skeleton transformation and stroke rendering with unpaired training data. Specifically, we first propose a fast skeleton extraction method (ENet). Secondly, we utilize the extracted skeleton and the original image to train a GAN model, RNet (a stroke rendering network), to learn how to render the skeleton with stroke details in the target style. Finally, the pre-trained RNet is employed to assist another GAN model, TNet (a skeleton transformation network), in learning to transform the skeleton structure on the unlabeled skeleton set. We demonstrate the validity of our method on two chirography datasets we established.
Yiming Gao, Jiangqin Wu
null
null
2,020
aaai
Multi-Scale Anomaly Detection on Attributed Networks
null
Many social and economic systems can be represented as attributed networks encoding the relations between entities that are themselves described by different node attributes. Finding anomalies in these systems is crucial for detecting abuses such as credit card fraud, web spam or network intrusions. Intuitively, anomalous nodes are nodes whose attributes differ starkly from the attributes of a certain reference set of nodes, called the context of the anomaly. While some methods have been proposed to spot anomalies locally, globally or within a community context, the problem remains challenging due to the multi-scale composition of real networks and the heterogeneity of node metadata. Here, we propose a principled way to uncover outlier nodes simultaneously with the context with respect to which they are anomalous, at all relevant scales of the network. We characterize anomalous nodes in terms of the concentration retained for each node after smoothing specific signals localized on the vertices of the graph. In addition, we introduce a graph signal processing formulation of the Markov stability framework used in community detection in order to find the context of anomalies. The performance of our method is assessed on synthetic and real-world attributed networks and shows superior results compared with state-of-the-art algorithms. Finally, we show the scalability of our approach on large networks using Chebyshev polynomial approximations.
Leonardo Gutiérrez-Gómez, Alexandre Bovet, Jean-Charles Delvenne
null
null
2,020
aaai
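The abstract above relies on smoothing signals localized on graph vertices via Chebyshev polynomial approximations. The Python sketch below illustrates only that building block: approximating a heat-kernel filter exp(-τL) with a Chebyshev series, applying it to a delta signal on one node without any eigendecomposition, and reading off how much mass stays on that node. The tiny graph, the choice of filter and the "concentration" readout are illustrative assumptions, not the paper's full anomaly-detection method.

```python
import numpy as np

# Small undirected graph: adjacency and symmetric normalized Laplacian.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L = np.eye(5) - D_inv_sqrt @ A @ D_inv_sqrt      # eigenvalues lie in [0, 2]

def cheb_coeffs(f, K, lam_max=2.0):
    """Chebyshev coefficients of f on [0, lam_max] via cosine quadrature."""
    N = K + 1
    theta = (np.arange(N) + 0.5) * np.pi / N
    x = np.cos(theta)                              # nodes in [-1, 1]
    fx = f(lam_max / 2.0 * (x + 1.0))              # mapped to [0, lam_max]
    return np.array([2.0 / N * np.sum(fx * np.cos(k * theta)) for k in range(N)])

def cheb_filter(L, x, coeffs, lam_max=2.0):
    """Apply f(L) x with the Chebyshev three-term recurrence."""
    Lt = (2.0 / lam_max) * L - np.eye(L.shape[0])  # rescale spectrum to [-1, 1]
    T_prev, T_curr = x, Lt @ x
    out = 0.5 * coeffs[0] * T_prev + coeffs[1] * T_curr
    for c in coeffs[2:]:
        T_next = 2.0 * Lt @ T_curr - T_prev
        out += c * T_next
        T_prev, T_curr = T_curr, T_next
    return out

tau = 3.0
coeffs = cheb_coeffs(lambda lam: np.exp(-tau * lam), K=10)
delta = np.eye(5)[:, 2]                            # unit signal localized on node 2
smoothed = cheb_filter(L, delta, coeffs)
concentration = smoothed[2] / smoothed.sum()       # mass retained on the source node
print(concentration)
```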
Differentially Private and Fair Classification via Calibrated Functional Mechanism
null
Machine learning is increasingly becoming a powerful tool to make decisions in a wide variety of applications, such as medical diagnosis and autonomous driving. Privacy concerns related to the training data and unfair behaviors of some decisions with regard to certain attributes (e.g., sex, race) are becoming more critical. Thus, constructing a fair machine learning model while simultaneously providing privacy protection becomes a challenging problem. In this paper, we focus on the design of a classification model with fairness and differential privacy guarantees by jointly combining the functional mechanism and decision boundary fairness. In order to enforce ϵ-differential privacy and fairness, we leverage the functional mechanism to add different amounts of Laplace noise for different attributes to the polynomial coefficients of the objective function, in consideration of the fairness constraint. We further propose a utility-enhancement scheme, called the relaxed functional mechanism, which adds Gaussian noise instead of Laplace noise and hence achieves (ϵ, δ)-differential privacy. Based on the relaxed functional mechanism, we can design an (ϵ, δ)-differentially private and fair classification model. Moreover, our theoretical analysis and empirical results demonstrate that our two approaches achieve both fairness and differential privacy while preserving good utility, and outperform the state-of-the-art algorithms.
Jiahao Ding, Xinyue Zhang, Xiaohuan Li, Junyi Wang, Rong Yu, Miao Pan
null
null
2,020
aaai
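As a rough illustration of the functional-mechanism idea referenced above (perturbing the polynomial coefficients of the objective rather than the data or the output), here is a minimal Python sketch for plain ridge-regularized least squares. The sensitivity constant, the data rescaling and the omission of the fairness constraint are simplifying assumptions for the sketch; the paper derives the exact noise calibration for its fair classification objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data, rescaled so that |y_i| <= 1 and ||x_i||_2 <= 1 (a common
# preprocessing assumption when applying the functional mechanism).
n, d = 200, 3
X = rng.normal(size=(n, d))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))
w_true = np.array([0.5, -0.3, 0.2])
y = np.clip(X @ w_true + 0.05 * rng.normal(size=n), -1.0, 1.0)

# Squared loss written as a polynomial in w:
#   sum_i (y_i - x_i^T w)^2 = const - 2 * c1^T w + w^T C2 w
c1 = X.T @ y            # degree-1 coefficients
C2 = X.T @ X            # degree-2 coefficients

# Functional mechanism: perturb the polynomial coefficients with Laplace noise.
# The sensitivity constant below is a placeholder for illustration; the paper
# derives the exact bound for its own objective.
epsilon = 1.0
sensitivity = 2.0 * (d + 1) ** 2
scale = sensitivity / epsilon
c1_noisy = c1 + rng.laplace(scale=scale, size=c1.shape)
C2_noisy = C2 + rng.laplace(scale=scale, size=C2.shape)
C2_noisy = 0.5 * (C2_noisy + C2_noisy.T)   # keep the quadratic form symmetric

# Minimize the perturbed objective in closed form, with a small ridge term so
# the noisy quadratic stays well conditioned.
ridge = 1e-1
w_priv = np.linalg.solve(C2_noisy + ridge * np.eye(d), c1_noisy)
print("non-private:", np.linalg.solve(C2 + ridge * np.eye(d), c1))
print("private:    ", w_priv)
```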
CASTER: Predicting Drug Interactions with Chemical Substructure Representation
null
Adverse drug-drug interactions (DDIs) remain a leading cause of morbidity and mortality. Identifying potential DDIs during the drug design process is critical for patients and society. Although several computational models have been proposed for DDI prediction, there are still limitations: (1) specialized design of drug representation for DDI predictions is lacking; (2) predictions are based on limited labelled data and do not generalize well to unseen drugs or DDIs; and (3) models are characterized by a large number of parameters and are thus hard to interpret. In this work, we develop a ChemicAl SubstrucTurE Representation (CASTER) framework that predicts DDIs given the chemical structures of drugs. CASTER aims to mitigate these limitations via (1) a sequential pattern mining module rooted in the DDI mechanism to efficiently characterize functional sub-structures of drugs; (2) an auto-encoding module that leverages both labelled and unlabelled chemical structure data to improve predictive accuracy and generalizability; and (3) a dictionary learning module that explains the prediction via a small set of coefficients which measure the relevance of each input sub-structure to the DDI outcome. We evaluated CASTER on two real-world DDI datasets and showed that it performed better than state-of-the-art baselines and provided interpretable predictions.
Kexin Huang, Cao Xiao, Trong Hoang, Lucas Glass, Jimeng Sun
null
null
2,020
aaai
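A toy frequent-substructure pass in the spirit of the sequential pattern mining module mentioned above: count character n-grams of SMILES-like strings and keep those above a support threshold. The strings, the n-gram notion of "substructure" and the thresholds are placeholders for illustration; the actual CASTER module mines chemically meaningful substructures.

```python
from collections import Counter

smiles = ["CCO", "CCN", "CCOC", "CNC"]    # made-up SMILES-like strings
min_support, max_len = 2, 2

counts = Counter()
for s in smiles:
    seen = set()
    for L in range(1, max_len + 1):
        for i in range(len(s) - L + 1):
            seen.add(s[i:i + L])
    counts.update(seen)                    # count each pattern once per molecule

frequent = {p for p, c in counts.items() if c >= min_support}
print(sorted(frequent))                    # ['C', 'CC', 'CN', 'CO', 'N', 'O']
```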
Predicting AC Optimal Power Flows: Combining Deep Learning and Lagrangian Dual Methods
null
The Optimal Power Flow (OPF) problem is a fundamental building block for the optimization of electrical power systems. It is nonlinear and nonconvex and computes the generator setpoints for power and voltage, given a set of load demands. It is often solved repeatedly under various conditions, either in real-time or in large-scale studies. This need is further exacerbated by the increasing stochasticity of power systems due to renewable energy sources in front and behind the meter. To address these challenges, this paper presents a deep learning approach to the OPF. The learning model exploits the information available in the similar states of the system (which is commonly available in practical applications), as well as a dual Lagrangian method to satisfy the physical and engineering constraints present in the OPF. The proposed model is evaluated on a large collection of realistic medium-sized power systems. The experimental results show that its predictions are highly accurate with average errors as low as 0.2%. Additionally, the proposed approach is shown to improve the accuracy of the widely adopted linear DC approximation by at least two orders of magnitude.
Ferdinando Fioretto, Terrence W.K. Mak, Pascal Van Hentenryck
null
null
2,020
aaai
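A toy PyTorch sketch of the Lagrangian-dual training pattern mentioned above: a network is trained to match target setpoints while a multiplier on a made-up power-balance constraint is increased by dual ascent whenever the constraint is violated. The network size, the constraint and the step sizes are placeholders, not the paper's model of AC-OPF physics.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(4, 32), torch.nn.ReLU(), torch.nn.Linear(32, 4))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

loads = torch.rand(256, 4)
targets = loads * 1.05          # hypothetical "ground truth" setpoints
lam, rho = 0.0, 0.1             # Lagrange multiplier and dual step size (assumed)

for epoch in range(200):
    pred = net(loads)
    # residual of an assumed equality constraint: total generation = total load
    violation = (pred.sum(dim=1) - loads.sum(dim=1)).abs().mean()
    loss = ((pred - targets) ** 2).mean() + lam * violation
    opt.zero_grad()
    loss.backward()
    opt.step()
    lam += rho * violation.item()   # dual ascent: tighten the penalty if violated

print("final mean constraint violation:", violation.item())
```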
Robust Low-Rank Discovery of Data-Driven Partial Differential Equations
null
Partial differential equations (PDEs) are essential foundations for modeling dynamic processes in the natural sciences. Discovering the underlying PDEs of complex data collected from the real world is key to understanding the dynamic processes of natural laws or behaviors. However, both the collected data and their partial derivatives are often corrupted by noise, especially from sparse outlying entries, due to measurement/process noise in real-world applications. Our work is motivated by the observation that the underlying data modeled by PDEs are in fact often low rank. We thus develop a robust low-rank discovery framework to recover both the low-rank data and the sparse outlying entries by integrating double low-rank and sparse recoveries with a (group) sparse regression method, which is implemented as a minimization problem using mixed nuclear norms with ℓ1 and ℓ0 norms. We propose a low-rank sequential (grouped) threshold ridge regression algorithm to solve the minimization problem. Results from several experiments on seven canonical models (i.e., four PDEs and three parametric PDEs) verify that our framework outperforms the state-of-the-art sparse and group sparse regression methods. Code is available at https://github.com/junli2019/Robust-Discovery-of-PDEs
Jun Li, Gan Sun, Guoshuai Zhao, Li-wei H. Lehman
null
null
2,020
aaai
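The sequential (grouped) threshold ridge regression mentioned above builds on the standard STRidge step used in data-driven PDE discovery. Below is a minimal, dense-data Python sketch of that step on a synthetic candidate library; the paper's actual contribution, the robust low-rank treatment of the data and the sparse outliers, is not reproduced here.

```python
import numpy as np

def stridge(Theta, u_t, lam=1e-3, tol=0.1, iters=10):
    """Sequential threshold ridge regression: ridge-fit, zero out small
    coefficients, refit on the remaining support, and repeat."""
    n, d = Theta.shape
    xi = np.linalg.solve(Theta.T @ Theta + lam * np.eye(d), Theta.T @ u_t)
    for _ in range(iters):
        small = np.abs(xi) < tol
        xi[small] = 0.0
        big = ~small
        if big.sum() == 0:
            break
        xi[big] = np.linalg.solve(
            Theta[:, big].T @ Theta[:, big] + lam * np.eye(big.sum()),
            Theta[:, big].T @ u_t)
    return xi

# Toy example: recover u_t = 0.5 * u_xx - 1.0 * u * u_x from a noisy library.
rng = np.random.default_rng(1)
n = 500
u, u_x, u_xx = rng.normal(size=(3, n))
Theta = np.column_stack([np.ones(n), u, u_x, u_xx, u * u_x])
u_t = 0.5 * u_xx - 1.0 * u * u_x + 0.01 * rng.normal(size=n)
print(stridge(Theta, u_t))                 # expect roughly [0, 0, 0, 0.5, -1.0]
```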
Region Focus Network for Joint Optic Disc and Cup Segmentation
null
Glaucoma is one of the three leading causes of blindness in the world and is predicted to affect around 80 million people by 2020. The optic cup (OC) to optic disc (OD) ratio (CDR) in fundus images plays a pivotal role in the screening and diagnosis of glaucoma. Existing methods usually crop the optic disc region first and subsequently perform segmentation in this region. However, these approaches incur high complexity due to the separate operations. To remedy this issue, we propose a Region Focus Network (RF-Net) that innovatively integrates detection and multi-class segmentation into a unified architecture for end-to-end joint optic disc and cup segmentation with global optimization. The key idea of our method is the design of a novel multi-class mask branch which generates a high-quality segmentation in the detected region for both disc and cup. To bridge the connection between the backbone and the multi-class mask branch, a Fusion Feature Pooling (FFP) structure is presented to extract features from each level of the pyramid network and fuse them into a final feature representation for segmentation. Extensive experimental results on the REFUGE-2018 challenge dataset and the Drishti-GS dataset show that the proposed method achieves the best performance, compared with competitive approaches reported in the literature and the official leaderboard. Our code will be released soon.
Ge Li, Changsheng Li, Chan Zeng, Peng Gao, Guotong Xie
null
null
2,020
aaai
A Graph Auto-Encoder for Haplotype Assembly and Viral Quasispecies Reconstruction
null
Reconstructing components of a genomic mixture from data obtained by means of DNA sequencing is a challenging problem encountered in a variety of applications including single individual haplotyping and studies of viral communities. High-throughput DNA sequencing platforms oversample mixture components to provide massive amounts of reads whose relative positions can be determined by mapping the reads to a known reference genome; assembly of the components, however, requires discovery of the reads' origin – an NP-hard problem that the existing methods struggle to solve with the required level of accuracy. In this paper, we present a learning framework based on a graph auto-encoder designed to exploit structural properties of sequencing data. The algorithm is a neural network which essentially trains to ignore sequencing errors and infers the posterior probabilities of the origin of sequencing reads. Mixture components are then reconstructed by finding consensus of the reads determined to originate from the same genomic component. Results on realistic synthetic as well as experimental data demonstrate that the proposed framework reliably assembles haplotypes and reconstructs viral communities, often significantly outperforming state-of-the-art techniques. Source codes, datasets and supplementary document are available at https://github.com/WuLoli/GAEseq.
Ziqi Ke, Haris Vikalo
null
null
2,020
aaai
Pairwise Learning with Differential Privacy Guarantees
null
Pairwise learning has received much attention recently as it is more capable of modeling the relative relationships between pairs of samples. Many machine learning tasks can be categorized as pairwise learning, such as AUC maximization and metric learning. Existing techniques for pairwise learning all fail to take into consideration a critical issue in their design, i.e., the protection of sensitive information in the training set. Models learned by such algorithms can implicitly memorize the details of sensitive information, which offers an opportunity for malicious parties to infer it from the learned models. To address this challenging issue, in this paper, we propose several differentially private pairwise learning algorithms for both online and offline settings. Specifically, for the online setting, we first introduce a differentially private algorithm (called OnPairStrC) for strongly convex loss functions. Then, we extend this algorithm to general convex loss functions and give another differentially private algorithm (called OnPairC). For the offline setting, we also present two differentially private algorithms (called OffPairStrC and OffPairC) for strongly convex and general convex loss functions, respectively. These proposed algorithms can not only learn the model effectively from the data but also provide strong privacy protection guarantees for sensitive information in the training set. Extensive experiments on real-world datasets are conducted to evaluate the proposed algorithms, and the experimental results support our theoretical analysis.
Mengdi Huai, Di Wang, Chenglin Miao, Jinhui Xu, Aidong Zhang
null
null
2,020
aaai
DeepAlerts: Deep Learning Based Multi-Horizon Alerts for Clinical Deterioration on Oncology Hospital Wards
null
Machine learning and data mining techniques are increasingly being applied to electronic health record (EHR) data to discover underlying patterns and make predictions for clinical use. For instance, these data may be evaluated to predict clinical deterioration events such as cardiopulmonary arrest or escalation of care to the intensive care unit (ICU). In clinical practice, early warning systems with multiple time horizons could indicate different levels of urgency, allowing clinicians to make decisions regarding triage, testing, and interventions for patients at risk of poor outcomes. These different horizon alerts are related and have intrinsic dependencies, which elicit multi-task learning. In this paper, we investigate approaches to properly train deep multi-task models for predicting clinical deterioration events via generating multi-horizon alerts for hospitalized patients outside the ICU, with particular application to oncology patients. Prior knowledge is used as a regularization to exploit the positive effects from the task relatedness. Simultaneously, we propose task-specific loss balancing to reduce the negative effects when optimizing the joint loss function of deep multi-task models. In addition, we demonstrate the effectiveness of the feature-generating techniques from prediction outcome interpretation. To evaluate the model performance of predicting multi-horizon deterioration alerts in a real world scenario, we apply our approaches to the EHR data from 20,700 hospitalizations of adult oncology patients. These patients' baseline high-risk status provides a unique opportunity: the application of an accurate model to an enriched population could produce improved positive predictive value and reduce false positive alerts. With our dataset, the model applying all proposed learning techniques achieves the best performance compared with common models previously developed for clinical deterioration warning.
Dingwen Li, Patrick G. Lyons, Chenyang Lu, Marin Kollef
null
null
2,020
aaai
DeepVar: An End-to-End Deep Learning Approach for Genomic Variant Recognition in Biomedical Literature
null
In this work, we consider the problem of Named Entity Recognition (NER) on biomedical scientific literature, and more specifically the recognition of genomic variants. Significant success has been achieved for NER on canonical tasks in recent years, where large data sets are generally available. However, it remains a challenging problem in many domain-specific areas, especially domains where only small amounts of gold annotation can be obtained. In addition, genomic variant entities exhibit diverse linguistic heterogeneity, differing markedly from those that have been characterized in existing canonical NER tasks. State-of-the-art machine learning approaches heavily rely on arduous feature engineering to characterize those unique patterns. In this work, we present the first successful end-to-end deep learning approach to bridge the gap between generic NER algorithms and low-resource applications through genomic variant recognition. Our proposed model achieves promising performance without any hand-crafted features or post-processing rules. Our extensive experiments and results may shed light on other similar low-resource NER applications.
Chaoran Cheng, Fei Tan, Zhi Wei
null
null
2,020
aaai
Adaptive Greedy versus Non-Adaptive Greedy for Influence Maximization
null
We consider the adaptive influence maximization problem: given a network and a budget k, iteratively select k seeds in the network to maximize the expected number of adopters. In the full-adoption feedback model, after selecting each seed, the seed-picker observes all the resulting adoptions. In the myopic feedback model, the seed-picker only observes whether each neighbor of the chosen seed adopts. Motivated by the extreme success of greedy-based algorithms/heuristics for influence maximization, we propose the concept of greedy adaptivity gap, which compares the performance of the adaptive greedy algorithm to its non-adaptive counterpart. Our first result shows that, for submodular influence maximization, the adaptive greedy algorithm can perform up to a (1-1/e)-fraction worse than the non-adaptive greedy algorithm, and that this ratio is tight. More specifically, on one side we provide examples where the performance of the adaptive greedy algorithm is only a (1-1/e) fraction of the performance of the non-adaptive greedy algorithm in four settings: for both feedback models and both the independent cascade model and the linear threshold model. On the other side, we prove that in any submodular cascade, the adaptive greedy algorithm always outputs a (1-1/e)-approximation to the expected number of adoptions in the optimal non-adaptive seed choice. Our second result shows that, for the general submodular cascade model with full-adoption feedback, the adaptive greedy algorithm can outperform the non-adaptive greedy algorithm by an unbounded factor. Finally, we propose a risk-free variant of the adaptive greedy algorithm that always performs no worse than the non-adaptive greedy algorithm.
Wei Chen, Binghui Peng, Grant Schoenebeck, Biaoshuai Tao
null
null
2,020
aaai
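For readers unfamiliar with the non-adaptive greedy baseline discussed above, the short Python sketch below runs greedy seed selection on a toy submodular objective (maximum coverage as a stand-in for expected influence spread). The reachable sets are made up; real influence maximization estimates spread with cascade simulations rather than fixed sets.

```python
def greedy_max_coverage(reach, k):
    """reach: dict seed -> set of nodes it would activate (toy surrogate)."""
    chosen, covered = [], set()
    for _ in range(k):
        # pick the unchosen seed with the largest marginal coverage gain
        best = max(reach, key=lambda s: len(reach[s] - covered) if s not in chosen else -1)
        chosen.append(best)
        covered |= reach[best]
    return chosen, covered

reach = {
    "a": {1, 2, 3, 4},
    "b": {3, 4, 5},
    "c": {5, 6},
    "d": {1, 6, 7},
}
seeds, covered = greedy_max_coverage(reach, k=2)
print(seeds, len(covered))   # greedy covers 6 of the 7 nodes with 2 seeds
```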
Learning the Graphical Structure of Electronic Health Records with Graph Convolutional Transformer
null
Effective modeling of electronic health records (EHR) is rapidly becoming an important topic in both academia and industry. A recent study showed that using the graphical structure underlying EHR data (e.g. relationship between diagnoses and treatments) improves the performance of prediction tasks such as heart failure prediction. However, EHR data do not always contain complete structure information. Moreover, when it comes to claims data, structure information is completely unavailable to begin with. Under such circumstances, can we still do better than just treating EHR data as a flat-structured bag-of-features? In this paper, we study the possibility of jointly learning the hidden structure of EHR while performing supervised prediction tasks on EHR data. Specifically, we discuss that Transformer is a suitable basis model to learn the hidden EHR structure, and propose Graph Convolutional Transformer, which uses data statistics to guide the structure learning process. The proposed model consistently outperformed previous approaches empirically, on both synthetic data and publicly available EHR data, for various prediction tasks such as graph reconstruction and readmission prediction, indicating that it can serve as an effective general-purpose representation learning algorithm for EHR data.
Edward Choi, Zhen Xu, Yujia Li, Michael Dusenberry, Gerardo Flores, Emily Xue, Andrew Dai
null
null
2,020
aaai
SynSig2Vec: Learning Representations from Synthetic Dynamic Signatures for Real-World Verification
null
An open research problem in automatic signature verification is skilled forgery attacks. However, skilled forgeries are very difficult to acquire for representation learning. To tackle this issue, this paper proposes to learn dynamic signature representations through ranking synthesized signatures. First, a neuromotor-inspired signature synthesis method is proposed to synthesize signatures with different distortion levels for any template signature. Then, given the templates, we construct a lightweight one-dimensional convolutional network to learn to rank the synthesized samples, and directly optimize the average precision of the ranking to exploit relative and fine-grained signature similarities. Finally, after training, fixed-length representations can be extracted from dynamic signatures of variable lengths for verification. One highlight of our method is that it requires neither skilled nor random forgeries for training, yet it surpasses the state-of-the-art by a large margin on two public benchmarks.
Songxuan Lai, Lianwen Jin, Luojun Lin, Yecheng Zhu, Huiyun Mao
null
null
2,020
aaai
RL-Duet: Online Music Accompaniment Generation Using Deep Reinforcement Learning
null
This paper presents a deep reinforcement learning algorithm for online accompaniment generation, with potential for real-time interactive human-machine duet improvisation. Different from offline music generation and harmonization, online music accompaniment requires the algorithm to respond to human input and generate the machine counterpart in a sequential order. We cast this as a reinforcement learning problem, where the generation agent learns a policy to generate a musical note (action) based on previously generated context (state). The key of this algorithm is the well-functioning reward model. Instead of defining it using music composition rules, we learn this model from monophonic and polyphonic training data. This model considers the compatibility of the machine-generated note with both the machine-generated context and the human-generated context. Experiments show that this algorithm is able to respond to the human part and generate a melodic, harmonic and diverse machine part. Subjective evaluations on preferences show that the proposed algorithm generates music pieces of higher quality than the baseline method.
Nan Jiang, Sheng Jin, Zhiyao Duan, Changshui Zhang
null
null
2,020
aaai
Pose-Assisted Multi-Camera Collaboration for Active Object Tracking
null
Active Object Tracking (AOT) is crucial to many vision-based applications, e.g., mobile robots and intelligent surveillance. However, there are a number of challenges when deploying active tracking in complex scenarios, e.g., the target is frequently occluded by obstacles. In this paper, we extend single-camera AOT to a multi-camera setting, where cameras track a target in a collaborative fashion. To achieve effective collaboration among cameras, we propose a novel Pose-Assisted Multi-Camera Collaboration System, which enables a camera to cooperate with the others by sharing camera poses for active object tracking. In the system, each camera is equipped with two controllers and a switcher: the vision-based controller tracks targets based on observed images, and the pose-based controller moves the camera in accordance with the poses of the other cameras. At each step, the switcher decides which of the two controllers' actions to take according to the visibility of the target. The experimental results demonstrate that our system outperforms all the baselines and is capable of generalizing to unseen environments. The code and demo videos are available on our website https://sites.google.com/view/pose-assisted-collaboration.
Jing Li, Jing Xu, Fangwei Zhong, Xiangyu Kong, Yu Qiao, Yizhou Wang
null
null
2,020
aaai
Pay Your Trip for Traffic Congestion: Dynamic Pricing in Traffic-Aware Road Networks
null
Pricing is essential in optimizing transportation resource allocation. Congestion pricing is widely used to reduce urban traffic congestion. We propose and investigate a novel Dynamic Pricing Strategy (DPS) to price travelers' trips in intelligent transportation platforms (e.g., DiDi, Lyft, Uber). The trips are charged according to their “congestion contributions” to global urban traffic systems. The dynamic pricing strategy retrieves a matching between n travelers' trips and the potential travel routes (each trip has k potential routes) to minimize the global traffic congestion. We believe that DPS holds the potential to benefit society and the environment, such as reducing traffic congestion and enabling smarter and greener transportation. The DPS problem is challenging due to its high computation complexity (there exist k^n matching possibilities). We develop an efficient and effective approximate matching algorithm based on local search, as well as pruning techniques to further enhance the matching efficiency. The accuracy and efficiency of the dynamic pricing strategy are verified by extensive experiments on real datasets.
Lisi Chen, Shuo Shang, Bin Yao, Jing Li
null
null
2,020
aaai
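A toy local-search sketch of the matching step described above, in Python: each of n trips chooses one of its k candidate routes, and single-trip moves are applied while they strictly reduce a quadratic edge-congestion cost. The random data, the cost function and the single-swap neighborhood are illustrative assumptions; the paper adds pruning techniques and the pricing layer on top of such a matching.

```python
import random

random.seed(0)
n, k, n_edges = 6, 3, 10
# routes[i][j] = set of edges used by trip i's j-th candidate route (toy data)
routes = [[set(random.sample(range(n_edges), 3)) for _ in range(k)] for _ in range(n)]

def congestion(choice):
    load = [0] * n_edges
    for i, j in enumerate(choice):
        for e in routes[i][j]:
            load[e] += 1
    return sum(l * l for l in load)        # quadratic edge cost penalizes crowding

choice = [0] * n
improved = True
while improved:                             # local search over single-trip moves
    improved = False
    for i in range(n):
        for j in range(k):
            cand = choice[:i] + [j] + choice[i + 1:]
            if congestion(cand) < congestion(choice):
                choice = cand
                improved = True

print("route choices:", choice, "cost:", congestion(choice))
```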
Real-Time Route Search by Locations
null
With the proliferation of GPS-based data (e.g., routes and trajectories), it is of great importance to enable the functionality of real-time route search and recommendations. We define and study a novel Continuous Route-Search-by-Location (C-RSL) problem to enable real-time route search by locations for a large number of users over route data streams. Given a set of C-RSL queries, where each query q contains a set of places q.O to visit and a threshold q.θ, we continuously feed each query q with routes whose similarity to q.O is no less than q.θ. We also extend our proposal to support the top-k C-RSL problem, where each query continuously maintains the k most similar routes. The C-RSL problem targets a variety of applications, including real-time route planning, ridesharing, and other location-based services that have real-time demand. To enable efficient route matching on a large number of C-RSL queries, we develop novel parallel route matching algorithms with good time complexity. Extensive experiments with real data offer insight into the performance of our algorithms, indicating that our proposal is capable of achieving high efficiency and scalability.
Lisi Chen, Shuo Shang, Tao Guo
null
null
2,020
aaai
Rumor Detection on Social Media with Bi-Directional Graph Convolutional Networks
null
Social media has been developing rapidly owing to its ability to spread new information, which also allows rumors to circulate. Meanwhile, detecting rumors among such massive amounts of information on social media is becoming an arduous challenge. Therefore, deep learning methods such as the Recursive Neural Network (RvNN) have been applied to discover rumors through the way they spread. However, these deep learning methods only take into account the patterns of deep propagation but ignore the structures of wide dispersion in rumor detection. Actually, propagation and dispersion are two crucial characteristics of rumors. In this paper, we propose a novel bi-directional graph model, named Bi-Directional Graph Convolutional Networks (Bi-GCN), to explore both characteristics by operating on both top-down and bottom-up propagation of rumors. It leverages a GCN with a top-down directed graph of rumor spreading to learn the patterns of rumor propagation, and a GCN with an opposite directed graph of rumor diffusion to capture the structures of rumor dispersion. Moreover, information from the source post is incorporated into each layer of the GCN to enhance the influence of the roots of rumors. Encouraging empirical results on several benchmarks confirm the superiority of the proposed method over the state-of-the-art approaches.
Tian Bian, Xi Xiao, Tingyang Xu, Peilin Zhao, Wenbing Huang, Yu Rong, Junzhou Huang
null
null
2,020
aaai
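A bare-bones numpy sketch of the bi-directional idea above: one GCN layer run on the top-down propagation graph and one on its transpose (dispersion), with the two node representations concatenated. The weights, features and tiny random graph are placeholders; the paper additionally injects source-post features into every layer and trains the model end to end.

```python
import numpy as np

rng = np.random.default_rng(0)
n, f_in, f_out = 5, 8, 4
A_td = np.triu(rng.integers(0, 2, size=(n, n)), k=1).astype(float)  # top-down edges
X = rng.normal(size=(n, f_in))                                      # node features

def gcn_layer(A, X, W):
    A_hat = A + np.eye(A.shape[0])                       # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)  # ReLU

W_td = rng.normal(size=(f_in, f_out))
W_bu = rng.normal(size=(f_in, f_out))
H_td = gcn_layer(A_td, X, W_td)            # patterns of propagation (top-down)
H_bu = gcn_layer(A_td.T, X, W_bu)          # structures of dispersion (bottom-up)
H = np.concatenate([H_td, H_bu], axis=1)   # node representations for classification
print(H.shape)                              # (5, 8)
```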
TrueLearn: A Family of Bayesian Algorithms to Match Lifelong Learners to Open Educational Resources
null
The recent advances in computer-assisted learning systems and the availability of open educational resources today promise a pathway to providing cost-efficient high-quality education to large masses of learners. One of the most ambitious use cases of computer-assisted learning is to build a lifelong learning recommendation system. Unlike short-term courses, lifelong learning presents unique challenges, requiring sophisticated recommendation models that account for a wide range of factors such as background knowledge of learners or novelty of the material while effectively maintaining knowledge states of masses of learners for significantly longer periods of time (ideally, a lifetime). This work presents the foundations towards building a dynamic, scalable and transparent recommendation system for education, modelling learner's knowledge from implicit data in the form of engagement with open educational resources. We i) use a text ontology based on Wikipedia to automatically extract knowledge components of educational resources and, ii) propose a set of online Bayesian strategies inspired by the well-known areas of item response theory and knowledge tracing. Our proposal, TrueLearn, focuses on recommendations for which the learner has enough background knowledge (so they are able to understand and learn from the material), and the material has enough novelty that would help the learner improve their knowledge about the subject and keep them engaged. We further construct a large open educational video lectures dataset and test the performance of the proposed algorithms, which show clear promise towards building an effective educational recommendation system.
Sahan Bulathwela, Maria Perez-Ortiz, Emine Yilmaz, John Shawe-Taylor
null
null
2,020
aaai
Transfer Reinforcement Learning Using Output-Gated Working Memory
null
Transfer learning allows for knowledge to generalize across tasks, resulting in increased learning speed and/or performance. These tasks must have commonalities that allow for knowledge to be transferred. The main goal of transfer learning in the reinforcement learning domain is to train and learn on one or more source tasks in order to learn a target task that exhibits better performance than if transfer was not used (Taylor and Stone 2009). Furthermore, the use of output-gated neural network models of working memory has been shown to increase generalization for supervised learning tasks (Kriete and Noelle 2011; Kriete et al. 2013). We propose that working memory-based generalization plays a significant role in a model's ability to transfer knowledge successfully across tasks. Thus, we extended the Holographic Working Memory Toolkit (HWMtk) (Dubois and Phillips 2017; Phillips and Noelle 2005) to utilize the generalization benefits of output gating within a working memory system. Finally, the model's utility was tested on a temporally extended, partially observable 5x5 2D grid-world maze task that required the agent to learn 3 tasks over the duration of the training period. The results indicate that the addition of output gating increases the initial learning performance of an agent in target tasks and decreases the learning time required to reach a fixed performance threshold.
Arthur Williams, Joshua Phillips
null
null
2,020
aaai
Synch-Graph: Multisensory Emotion Recognition Through Neural Synchrony via Graph Convolutional Networks
null
Human emotions are essentially multisensory, where emotional states are conveyed through multiple modalities such as facial expression, body language, and non-verbal and verbal signals. Therefore having multimodal or multisensory learning is crucial for recognising emotions and interpreting social signals. Existing multisensory emotion recognition approaches focus on extracting features on each modality, while ignoring the importance of constant interaction and co-learning between modalities. In this paper, we present a novel bio-inspired approach based on neural synchrony in audio-visual multisensory integration in the brain, named Synch-Graph. We model multisensory interaction using spiking neural networks (SNN) and explore the use of Graph Convolutional Networks (GCN) to represent and learn neural synchrony patterns. We hypothesise that modelling interactions between modalities will improve the accuracy of emotion recognition. We have evaluated Synch-Graph on two state-of-the-art datasets and achieved an overall accuracy of 98.3% and 96.82%, which are significantly higher than the existing techniques.
Esma Mansouri-Benssassi, Juan Ye
null
null
2,020
aaai
Deep Spiking Delayed Feedback Reservoirs and Its Application in Spectrum Sensing of MIMO-OFDM Dynamic Spectrum Sharing
null
In this paper, we introduce a deep spiking delayed feedback reservoir (DFR) model that combines DFRs with spiking neurons: DFRs are a new type of recurrent neural network (RNN) able to capture the temporal correlations in time series, while spiking neurons are energy-efficient and biologically plausible neuron models. The introduced deep spiking DFR model is energy-efficient and has the capability of analyzing time series signals. The corresponding field-programmable gate array (FPGA)-based hardware implementation of this deep spiking DFR model is introduced, and the underlying energy efficiency and resource utilization are evaluated. Various spike encoding schemes are explored, and the optimal spike encoding scheme to analyze the time series is identified. To be specific, we evaluate the performance of the introduced model using the spectrum occupancy time series data in MIMO-OFDM based cognitive radio (CR) in dynamic spectrum sharing (DSS) networks. In a MIMO-OFDM DSS system, the available spectrum is very scarce and efficient utilization of the spectrum is essential. To improve spectrum efficiency, the first step is to identify the frequency bands that are not utilized by the existing users so that a secondary user (SU) can use them for transmission. Due to the channel correlation as well as users' activities, there is a significant temporal correlation in the spectrum occupancy behavior of the frequency bands in different time slots. The introduced deep spiking DFR model is used to capture the temporal correlation of the spectrum occupancy time series and predict the idle/busy subcarriers in future time slots for potential spectrum access. Evaluation results suggest that our introduced model achieves a higher area under the curve (AUC) of the receiver operating characteristic (ROC) compared with the traditional energy detection-based strategies and learning-based support vector machines (SVMs).
Kian Hamedani, Lingjia Liu, Shiya Liu, Haibo He, Yang Yi
null
null
2,020
aaai
Effective AER Object Classification Using Segmented Probability-Maximization Learning in Spiking Neural Networks
null
Address event representation (AER) cameras have recently attracted more attention due to their advantages of high temporal resolution and low power consumption compared with traditional frame-based cameras. Since AER cameras record the visual input as asynchronous discrete events, they are inherently suitable to coordinate with the spiking neural network (SNN), which is biologically plausible and energy-efficient on neuromorphic hardware. However, using SNNs to perform AER object classification is still challenging, due to the lack of effective learning algorithms for this new representation. To tackle this issue, we propose an AER object classification model using a novel segmented probability-maximization (SPA) learning algorithm. Technically, 1) the SPA learning algorithm iteratively maximizes the probability of the classes that samples belong to, in order to improve the reliability of neuron responses and the effectiveness of learning; 2) a peak detection (PD) mechanism is introduced in SPA to locate informative time points segment by segment, based on which information within the whole event stream can be fully utilized by the learning. Extensive experimental results show that, compared to state-of-the-art methods, not only is our model more effective, but it also requires less information to reach a given level of accuracy.
Qianhui Liu, Haibo Ruan, Dong Xing, Huajin Tang, Gang Pan
null
null
2,020
aaai
Theory-Based Causal Transfer: Integrating Instance-Level Induction and Abstract-Level Structure Learning
null
Learning transferable knowledge across similar but different settings is a fundamental component of generalized intelligence. In this paper, we approach the transfer learning challenge from a causal theory perspective. Our agent is endowed with two basic yet general theories for transfer learning: (i) a task shares a common abstract structure that is invariant across domains, and (ii) the behavior of specific features of the environment remains constant across domains. We adopt a Bayesian perspective of causal theory induction and use these theories to transfer knowledge between environments. Given these general theories, the goal is to train an agent by interactively exploring the problem space to (i) discover, form, and transfer useful abstract and structural knowledge, and (ii) induce useful knowledge from the instance-level attributes observed in the environment. A hierarchy of Bayesian structures is used to model abstract-level structural causal knowledge, and an instance-level associative learning scheme learns which specific objects can be used to induce state changes through interaction. This model-learning scheme is then integrated with a model-based planner to achieve a task in the OpenLock environment, a virtual “escape room” with a complex hierarchy that requires agents to reason about an abstract, generalized causal structure. We compare performance against a set of predominant model-free reinforcement learning (RL) algorithms. RL agents showed poor ability to transfer learned knowledge across different trials, whereas the proposed model revealed performance trends similar to those of human learners and, more importantly, demonstrated transfer behavior across trials and learning situations.
Mark Edmonds, Xiaojian Ma, Siyuan Qi, Yixin Zhu, Hongjing Lu, Song-Chun Zhu
null
null
2,020
aaai
People Do Not Just Plan, They Plan to Plan
null
Planning is useful. It lets people take actions that have desirable long-term consequences. But, planning is hard. It requires thinking about consequences, which consumes limited computational and cognitive resources. Thus, people should plan their actions, but they should also be smart about how they deploy resources used for planning their actions. Put another way, people should also “plan their plans”. Here, we formulate this aspect of planning as a meta-reasoning problem and formalize it in terms of a recursive Bellman objective that incorporates both task rewards and information-theoretic planning costs. Our account makes quantitative predictions about how people should plan and meta-plan as a function of the overall structure of a task, which we test in two experiments with human participants. We find that people's reaction times reflect a planned use of information processing, consistent with our account. This formulation of planning to plan provides new insight into the function of hierarchical planning, state abstraction, and cognitive control in both humans and machines.
Mark Ho, David Abel, Jonathan Cohen, Michael Littman, Thomas Griffiths
null
null
2,020
aaai
Machine Number Sense: A Dataset of Visual Arithmetic Problems for Abstract and Relational Reasoning
null
As a comprehensive indicator of mathematical thinking and intelligence, the number sense (Dehaene 2011) bridges the induction of symbolic concepts and the competence of problem-solving. To endow such a crucial cognitive ability to machine intelligence, we propose a dataset, Machine Number Sense (MNS), consisting of visual arithmetic problems automatically generated using a grammar model—And-Or Graph (AOG). These visual arithmetic problems are in the form of geometric figures: each problem has a set of geometric shapes as its context and embedded number symbols. Solving such problems is not trivial; the machine not only has to recognize the number, but also to interpret the number with its contexts, shapes, and relations (e.g., symmetry) together with proper operations. We benchmark the MNS dataset using four predominant neural network models as baselines in this visual reasoning task. Comprehensive experiments show that current neural-network-based models still struggle to understand number concepts and relational operations. We show that a simple brute-force search algorithm could work out some of the problems without context information. Crucially, taking geometric context into account by an additional perception module would provide a sharp performance gain with fewer search steps. Altogether, we call for attention in fusing the classic search-based algorithms with modern neural networks to discover the essential number concepts in future research.
Wenhe Zhang, Chi Zhang, Yixin Zhu, Song-Chun Zhu
null
null
2,020
aaai
Doctor2Vec: Dynamic Doctor Representation Learning for Clinical Trial Recruitment
null
Massive electronic health records (EHRs) enable the success of learning accurate patient representations to support various predictive health applications. In contrast, doctor representations have not been well studied, even though doctors play pivotal roles in healthcare. How can we construct the right doctor representations? How can we use doctor representations to solve important health analytic problems? In this work, we study the problem of clinical trial recruitment, which involves identifying the right doctors to help conduct the trials based on the trial description and the patient EHR data of those doctors. We propose Doctor2Vec, which simultaneously learns 1) doctor representations from EHR data and 2) trial representations from the description and categorical information about the trials. In particular, Doctor2Vec utilizes a dynamic memory network where the doctor's experience with patients is stored in the memory bank and the network dynamically assigns weights based on the trial representation via an attention mechanism. Validated on large real-world trials and EHR data including 2,609 trials, 25K doctors and 430K patients, Doctor2Vec demonstrated improved performance over the best baseline by up to 8.7% in PR-AUC. We also demonstrated that the Doctor2Vec embedding can be transferred to benefit data-insufficient settings, including trial recruitment in less populated or newly explored countries, with a 13.7% improvement, and for rare diseases, with an 8.1% improvement in PR-AUC.
Siddharth Biswal, Cao Xiao, Lucas M. Glass, Elizabeth Milkovits, Jimeng Sun
null
null
2,020
aaai
M3ER: Multiplicative Multimodal Emotion Recognition using Facial, Textual, and Speech Cues
null
We present M3ER, a learning-based method for emotion recognition from multiple input modalities. Our approach combines cues from multiple co-occurring modalities (such as face, text, and speech) and is also more robust than other methods to sensor noise in any of the individual modalities. M3ER models a novel, data-driven multiplicative fusion method to combine the modalities, which learns to emphasize the more reliable cues and suppress others on a per-sample basis. By introducing a check step which uses Canonical Correlation Analysis to differentiate between ineffective and effective modalities, M3ER is robust to sensor noise. M3ER also generates proxy features in place of the ineffective modalities. We demonstrate the efficiency of our network through experimentation on two benchmark datasets, IEMOCAP and CMU-MOSEI. We report a mean accuracy of 82.7% on IEMOCAP and 89.0% on CMU-MOSEI, which, collectively, is an improvement of about 5% over prior work.
Trisha Mittal, Uttaran Bhattacharya, Rohan Chandra, Aniket Bera, Dinesh Manocha
null
null
2,020
aaai
STEP: Spatial Temporal Graph Convolutional Networks for Emotion Perception from Gaits
null
We present a novel classifier network called STEP, to classify perceived human emotion from gaits, based on a Spatial Temporal Graph Convolutional Network (ST-GCN) architecture. Given an RGB video of an individual walking, our formulation implicitly exploits the gait features to classify the perceived emotion of the human into one of four emotions: happy, sad, angry, or neutral. We train STEP on annotated real-world gait videos, augmented with annotated synthetic gaits generated using a novel generative network called STEP-Gen, built on an ST-GCN based Conditional Variational Autoencoder (CVAE). We incorporate a novel push-pull regularization loss in the CVAE formulation of STEP-Gen to generate realistic gaits and improve the classification accuracy of STEP. We also release a novel dataset (E-Gait), which consists of 4,227 human gaits annotated with perceived emotions along with thousands of synthetic gaits. In practice, STEP can learn the affective features and exhibits classification accuracy of 88% on E-Gait, which is 14–30% more accurate over prior methods.
Uttaran Bhattacharya, Trisha Mittal, Rohan Chandra, Tanmay Randhavane, Aniket Bera, Dinesh Manocha
null
null
2,020
aaai
To Signal or Not To Signal: Exploiting Uncertain Real-Time Information in Signaling Games for Security and Sustainability
null
Motivated by real-world deployment of drones for conservation, this paper advances the state-of-the-art in security games with signaling. The well-known defender-attacker security games framework can help in planning for such strategic deployments of sensors and human patrollers, and warning signals to ward off adversaries. However, we show that defenders can suffer significant losses when ignoring real-world uncertainties despite carefully planned security game strategies with signaling. In fact, defenders may perform worse than forgoing drones completely in this case. We address this shortcoming by proposing a novel game model that integrates signaling and sensor uncertainty; perhaps surprisingly, we show that defenders can still perform well via a signaling strategy that exploits uncertain real-time information. For example, even in the presence of uncertainty, the defender still has an informational advantage in knowing that she has or has not actually detected the attacker; and she can design a signaling scheme to “mislead” the attacker who is uncertain as to whether he has been detected. We provide theoretical results, a novel algorithm, scale-up techniques, and experimental results from simulation based on our ongoing deployment of a conservation drone system in South Africa.
Elizabeth Bondi, Hoon Oh, Haifeng Xu, Fei Fang, Bistra Dilkina, Milind Tambe
null
null
2,020
aaai
Multiple Graph Matching and Clustering via Decayed Pairwise Matching Composition
null
Joint matching of multiple graphs is challenging and has recently been an active topic in machine learning and computer vision. State-of-the-art methods have been devised; however, to the best of our knowledge there is no effective mechanism that can explicitly deal with the matching of a mixture of graphs belonging to multiple clusters, e.g., a collection of bikes and bottles. Seeing its practical importance, we propose a novel approach for multiple graph matching and clustering. Firstly, for the traditional multi-graph matching setting, we devise a composition scheme based on a tree structure, which can be seen as lying between two strong multi-graph matching solvers, i.e., MatchOpt (Yan et al. 2015a) and CAO (Yan et al. 2016a). In particular, it can be more robust than MatchOpt against a set of diverse graphs and more efficient than CAO. We then further extend the algorithm to the multiple graph matching and clustering setting by adopting a decaying technique along the composition path, to discount the meaningless matching between graphs in different clusters. Experimental results show the proposed methods achieve an excellent trade-off in the traditional multi-graph matching case, and outperform existing methods in both matching and clustering accuracy, as well as in time efficiency.
Tianzhe Wang, Zetian Jiang, Junchi Yan
null
null
2,020
aaai
Explaining Propagators for String Edit Distance Constraints
null
The computation of string similarity measures has been thoroughly studied in the scientific literature and has applications in a wide variety of different areas. One of the most widely used measures is the so-called string edit distance, which captures the number of edit operations required to transform a string into another given string. Although polynomial-time algorithms are known for calculating the edit distance between two strings, there also exist NP-hard problems from practical applications like scheduling or computational biology that constrain the minimum edit distance between arrays of decision variables. In this work, we propose a novel global constraint to formulate restrictions on the minimum edit distance for such problems. Furthermore, we describe a propagation algorithm and investigate an explanation strategy for an edit distance constraint propagator that can be incorporated into state-of-the-art lazy clause generation solvers. Experimental results show that the proposed propagator is able to significantly improve the performance of existing exact methods regarding solution quality and computation speed for benchmark problems from the literature.
Felix Winter, Nysret Musliu, Peter Stuckey
null
null
2,020
aaai
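For reference, the sketch below is the classic polynomial-time dynamic program for the string edit distance that the proposed global constraint reasons about; the paper's propagation and explanation machinery operates over arrays of decision variables rather than two fixed strings.

```python
def edit_distance(a, b):
    """Classic dynamic program for the edit distance (insert/delete/substitute)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # delete
                           dp[i][j - 1] + 1,         # insert
                           dp[i - 1][j - 1] + cost)  # substitute / match
    return dp[m][n]

print(edit_distance("kitten", "sitting"))   # 3
```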
Probabilistic Inference for Predicate Constraint Satisfaction
null
In this paper, we present a novel constraint solving method for a class of predicate Constraint Satisfaction Problems (pCSP) where each constraint is represented by an arbitrary clause of first-order predicate logic over predicate variables. The class of pCSP properly subsumes the well-studied class of Constrained Horn Clauses (CHCs) where each constraint is restricted to a Horn clause. The class of CHCs has been widely applied to verification of linear-time safety properties of programs in different paradigms. In this paper, we show that pCSP further widens the applicability to verification of branching-time safety properties of programs that exhibit finitely-branching non-determinism. Solving pCSP (and CHCs) however is challenging because the search space of solutions is often very large (or unbounded), high-dimensional, and non-smooth. To address these challenges, our method naturally combines techniques studied separately in different literatures: counterexample guided inductive synthesis (CEGIS) and probabilistic inference in graphical models. We have implemented the presented method and obtained promising results on existing benchmarks as well as new ones that are beyond the scope of existing CHC solvers.
Yuki Satake, Hiroshi Unno, Hinata Yanagi
null
null
2,020
aaai
Efficient Algorithms for Generating Provably Near-Optimal Cluster Descriptors for Explainability
null
Improving the explainability of the results from machine learning methods has become an important research goal. Here, we study the problem of making clusters more interpretable by extending a recent approach of [Davidson et al., NeurIPS 2018] for constructing succinct representations for clusters. Given a set of objects S, a partition π of S (into clusters), and a universe T of tags such that each element in S is associated with a subset of tags, the goal is to find a representative set of tags for each cluster such that those sets are pairwise-disjoint and the total size of all the representatives is minimized. Since this problem is NP-hard in general, we develop approximation algorithms with provable performance guarantees for the problem. We also show applications to explain clusters from datasets, including clusters of genomic sequences that represent different threat levels.
Prathyush Sambaturu, Aparna Gupta, Ian Davidson, S. S. Ravi, Anil Vullikanti, Andrew Warren
null
null
2,020
aaai
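To make the problem statement above concrete, here is a small greedy Python heuristic: for each cluster it picks tags, set-cover style, until every object in the cluster is touched, never reusing a tag across clusters. The toy data and the greedy rule are for illustration only and do not carry the paper's approximation guarantees.

```python
# objects per cluster, each object described by a set of tags (made-up data)
clusters = {
    "c1": {"o1": {"red", "round"}, "o2": {"red", "small"}},
    "c2": {"o3": {"blue", "round"}, "o4": {"blue", "large"}},
}

used = set()            # tags already assigned to some cluster's descriptor
descriptors = {}
for name, objs in clusters.items():
    uncovered = set(objs)
    chosen = set()
    while uncovered:
        # pick the unused tag that covers the most still-uncovered objects
        candidates = {t for tags in objs.values() for t in tags} - used - chosen
        best = max(candidates, key=lambda t: sum(t in objs[o] for o in uncovered))
        chosen.add(best)
        uncovered -= {o for o in uncovered if best in objs[o]}
    used |= chosen
    descriptors[name] = chosen

print(descriptors)      # e.g. {'c1': {'red'}, 'c2': {'blue'}}
```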
Deep Neural Network Approximated Dynamic Programming for Combinatorial Optimization
null
In this paper, we propose a general framework for combining deep neural networks (DNNs) with dynamic programming to solve combinatorial optimization problems. For problems that can be broken into smaller subproblems and solved by dynamic programming, we train a set of neural networks to replace value or policy functions at each decision step. Two variants of the neural network approximated dynamic programming (NDP) methods are proposed; in the value-based NDP method, the networks learn to estimate the value of each choice at the corresponding step, while in the policy-based NDP method the DNNs only estimate the best decision at each step. The training procedure of the NDP starts from the smallest problem size and a new DNN for the next size is trained to cooperate with previous DNNs. After all the DNNs are trained, the networks are fine-tuned together to further improve overall performance. We test NDP on the linear sum assignment problem, the traveling salesman problem and the talent scheduling problem. Experimental results show that NDP can achieve considerable computation time reduction on hard problems with reasonable performance loss. In general, NDP can be applied to reducible combinatorial optimization problems for the purpose of computation time reduction.
Shenghe Xu, Shivendra S. Panwar, Murali Kodialam, T.V. Lakshman
null
null
2,020
aaai
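The value-based variant described above replaces the value of each choice at a decision step with a DNN estimate. The Python sketch below shows the underlying exact dynamic program for a tiny linear sum assignment instance, with a memoized value table standing where a learned value network would be substituted; the instance and recursion layout are illustrative only.

```python
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
n = len(cost)

V = {}  # V[(row, used_mask)]: min cost to assign rows row..n-1 given used columns
def value(row, used):
    if row == n:
        return 0
    if (row, used) not in V:
        # in the value-based NDP scheme, a neural network would estimate this
        # quantity instead of computing it exactly
        V[(row, used)] = min(cost[row][j] + value(row + 1, used | (1 << j))
                             for j in range(n) if not used & (1 << j))
    return V[(row, used)]

print(value(0, 0))   # optimal assignment cost: 5  (1 + 2 + 2)
```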
D-SPIDER-SFO: A Decentralized Optimization Algorithm with Faster Convergence Rate for Nonconvex Problems
null
Decentralized optimization algorithms have attracted intensive interest recently, as they have balanced communication patterns, especially when solving large-scale machine learning problems. The Stochastic Path Integrated Differential Estimator Stochastic First-Order method (SPIDER-SFO) nearly achieves the algorithmic lower bound in certain regimes for nonconvex problems. However, whether we can find a decentralized algorithm that achieves a similar convergence rate to SPIDER-SFO is still unclear. To tackle this problem, we propose a decentralized variant of SPIDER-SFO, called decentralized SPIDER-SFO (D-SPIDER-SFO). We show that D-SPIDER-SFO achieves a similar gradient computation cost—that is, O(ε^-3) for finding an ε-approximate first-order stationary point—to its centralized counterpart. To the best of our knowledge, D-SPIDER-SFO achieves the state-of-the-art performance for solving nonconvex optimization problems on decentralized networks in terms of computational cost. Experiments on different network configurations demonstrate the efficiency of the proposed method.
Taoxing Pan, Jun Liu, Jie Wang
null
null
2,020
aaai
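As background for the abstract above, here is a centralized SPIDER-style recursive gradient estimator on a toy least-squares problem in Python; the epoch length, batch size and step size are arbitrary, and the decentralized gossip averaging that defines D-SPIDER-SFO is deliberately left out.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def grad(w, idx):
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ w - bi) / len(idx)

w = np.zeros(d)
eta, q, batch = 0.01, 20, 32       # step size, epoch length, minibatch (assumed)
for t in range(400):
    if t % q == 0:
        v = grad(w, np.arange(n))                   # full gradient at epoch start
    else:
        idx = rng.choice(n, size=batch, replace=False)
        v = grad(w, idx) - grad(w_prev, idx) + v    # recursive SPIDER update
    w_prev = w.copy()
    w = w - eta * v

print("final loss:", 0.5 * np.mean((A @ w - b) ** 2))
```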
Grammar Filtering for Syntax-Guided Synthesis
null
Programming-by-example (PBE) is a synthesis paradigm that allows users to generate functions by simply providing input-output examples. While a promising interaction paradigm, synthesis is still too slow for realtime interaction and more widespread adoption. Existing approaches to PBE synthesis have used automated reasoning tools, such as SMT solvers, as well as works applying machine learning techniques. At its core, the automated reasoning approach relies on highly domain specific knowledge of programming languages. On the other hand, the machine learning approaches utilize the fact that when working with program code, it is possible to generate arbitrarily large training datasets. In this work, we propose a system for using machine learning in tandem with automated reasoning techniques to solve Syntax Guided Synthesis (SyGuS) style PBE problems. By preprocessing SyGuS PBE problems with a neural network, we can use a data driven approach to reduce the size of the search space, then allow automated reasoning-based solvers to more quickly find a solution analytically. Our system is able to run atop existing SyGuS PBE synthesis tools, decreasing the runtime of the winner of the 2019 SyGuS Competition for the PBE Strings track by 47.65% to outperform all of the competing tools.
Kairo Morton, William Hallahan, Elven Shum, Ruzica Piskac, Mark Santolucito
null
null
2,020
aaai
Constructing Minimal Perfect Hash Functions Using SAT Technology
null
Minimal perfect hash functions (MPHFs) are used to provide efficient access to values of large dictionaries (sets of key-value pairs). Discovering new algorithms for building MPHFs is an area of active research, especially from the perspective of storage efficiency. The information-theoretic limit for MPHFs is 1/ln 2 ≈ 1.44 bits per key. The current best practical algorithms range between 2 and 4 bits per key. In this article, we propose two SAT-based constructions of MPHFs. Our first construction yields MPHFs near the information-theoretic limit. For this construction, current state-of-the-art SAT solvers can handle instances where the dictionaries contain up to 40 elements, thereby outperforming the existing (brute-force) methods. Our second construction uses XORSAT filters to realize a practical approach with long-term storage of approximately 1.83 bits per key.
Sean Weaver, Marijn Heule
null
null
2,020
aaai
Modeling Electrical Motor Dynamics Using Encoder-Decoder with Recurrent Skip Connection
null
Electrical motors are the most important source of mechanical energy in the industrial world. Their modeling traditionally relies on a physics-based approach, which aims at taking their complex internal dynamics into account. In this paper, we explore the feasibility of modeling the dynamics of an electrical motor by following a data-driven approach, which uses only its inputs and outputs and does not make any assumption on its internal behaviour. We propose a novel encoder-decoder architecture which benefits from recurrent skip connections. We also propose a novel loss function that takes into account the complexity of electrical motor quantities and helps in avoiding model bias. We show that the proposed architecture can achieve a good learning performance on our high-frequency high-variance datasets. Two datasets are considered: the first one is generated using a simulator based on the physics of an induction motor and the second one is recorded from an industrial electrical motor. We benchmark our solution using variants of traditional neural networks like feedforward, convolutional, and recurrent networks. We evaluate various design choices of our architecture and compare it to the baselines. We show the domain adaptation capability of our model to learn dynamics just from simulated data by testing it on the raw sensor data. We finally show the effect of signal complexity on the proposed method's ability to model temporal dynamics.
Sagar Verma, Nicolas Henwood, Marc Castella, Francois Malrait, Jean-Christophe Pesquet
null
null
2,020
aaai
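Illustrative note on the entry above: a minimal PyTorch sketch (assuming PyTorch is available) of an encoder-decoder whose decoder outputs are combined with the per-step encoder states through a skip connection. The GRU cells, layer sizes, and the exact way the skip enters the decoder output are illustrative assumptions, not the architecture or loss function proposed in the paper.

```python
import torch
import torch.nn as nn

class EncoderDecoderSkip(nn.Module):
    """Sketch: GRU encoder/decoder where the decoder output at each step is
    fused with the encoder state of the same step (a recurrent skip)."""
    def __init__(self, in_dim=3, hid=64, out_dim=2):
        super().__init__()
        self.encoder = nn.GRU(in_dim, hid, batch_first=True)
        self.decoder = nn.GRU(hid, hid, batch_first=True)
        self.skip = nn.Linear(hid, hid)
        self.head = nn.Linear(hid, out_dim)

    def forward(self, u):                      # u: (batch, time, in_dim) motor inputs
        enc_out, h = self.encoder(u)           # per-step encoder states + summary
        dec_out, _ = self.decoder(enc_out, h)  # decoder conditioned on the summary
        fused = dec_out + self.skip(enc_out)   # recurrent skip connection
        return self.head(fused)                # (batch, time, out_dim) predictions

model = EncoderDecoderSkip()
u = torch.randn(8, 100, 3)                     # e.g. input quantities over 100 steps
print(model(u).shape)                          # torch.Size([8, 100, 2])
```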
Hard Examples for Common Variable Decision Heuristics
null
The CDCL algorithm for SAT is equivalent to the resolution proof system under a few assumptions, one of them being an optimal non-deterministic procedure for choosing the next variable to branch on. In practice this task is left to a variable decision heuristic, and since the so-called VSIDS decision heuristic is considered an integral part of CDCL, whether CDCL with a VSIDS-like heuristic is also equivalent to resolution remained a significant open question. We give a negative answer by building a family of formulas that have resolution proofs of polynomial size but require exponential time to decide in CDCL with common heuristics such as VMTF, CHB, and certain implementations of VSIDS and LRB.
Marc Vinyals
null
null
2,020
aaai
Augmenting the Power of (Partial) MaxSat Resolution with Extension
null
The refutation power of SAT and MaxSAT resolution is challenged by problems like the soft and hard Pigeon Hole Problem (PHP) for which short refutations do not exist. In this paper we augment the MaxSAT resolution proof system with an extension rule. The new proof system MaxResE is sound and complete, and more powerful than plain MaxSAT resolution, since it can refute the soft and hard PHP in polynomial time. We show that MaxResE refutations actually subtract lower bounds from the objective function encoded by the formulas. The resulting formula is the residual after the lower bound extraction. We experimentally show that the residual of the soft PHP (once its necessary cost of 1 has been efficiently subtracted with MaxResE) is a concise, easy-to-solve, satisfiable problem.
Javier Larrosa, Emma Rollon
null
null
2,020
aaai
Tensorized LSTM with Adaptive Shared Memory for Learning Trends in Multivariate Time Series
null
The problem of learning and forecasting underlying trends in time series data arises in a variety of applications, such as traffic management, energy optimization, etc. In the literature, a trend in time series is characterized by the slope and duration, and its prediction is then to forecast the two values of the subsequent trend given historical data of the time series. For this problem, existing approaches mainly deal with the case of univariate time series. However, in many real-world applications, there are multiple variables at play, and handling all of them at the same time is crucial for an accurate prediction. A natural way is to employ multi-task learning (MTL) techniques in which the trend learning of each time series is treated as a task. The key point of MTL is to learn task relatedness to achieve better parameter sharing, which, however, is challenging in the trend prediction task. First, effectively modeling the complex temporal patterns in different tasks is hard as the temporal and spatial dimensions are entangled. Second, the relatedness among tasks may change over time. In this paper, we propose a neural network, DeepTrends, for multivariate time series trend prediction. The core module of DeepTrends is a tensorized LSTM with adaptive shared memory (TLASM). TLASM employs the tensorized LSTM to model the temporal patterns of long-term trend sequences in an MTL setting. With an adaptive shared memory, TLASM is able to learn the relatedness among tasks adaptively, based upon which it can dynamically vary degrees of parameter sharing among tasks. To further consider short-term patterns, DeepTrends utilizes a multi-task 1dCNN to learn the local time series features, and employs a task-specific sub-network to learn a mixture of long-term and short-term patterns for trend prediction. Extensive experiments on real datasets demonstrate the effectiveness of the proposed model.
Dongkuan Xu, Wei Cheng, Bo Zong, Dongjin Song, Jingchao Ni, Wenchao Yu, Yanchi Liu, Haifeng Chen, Xiang Zhang
null
null
2,020
aaai
Accelerating Column Generation via Flexible Dual Optimal Inequalities with Application to Entity Resolution
null
In this paper, we introduce a new optimization approach to Entity Resolution. Traditional approaches tackle entity resolution with hierarchical clustering, which does not benefit from a formal optimization formulation. In contrast, we model entity resolution as correlation-clustering, which we treat as a weighted set-packing problem and write as an integer linear program (ILP). In this case, sources in the input data correspond to elements and entities in output data correspond to sets/clusters. We tackle optimization of weighted set packing by relaxing integrality in our ILP formulation. The set of potential sets/clusters cannot be explicitly enumerated, thus motivating optimization via column generation. In addition to the novel formulation, we also introduce new dual optimal inequalities (DOI), which we call flexible dual optimal inequalities (F-DOI), that tightly lower-bound dual variables during optimization and accelerate column generation. We apply our formulation to entity resolution (also called de-duplication of records), and achieve state-of-the-art accuracy on two popular benchmark datasets. Our F-DOI can be extended to other weighted set-packing problems.
Vishnu Suresh Lokhande, Shaofei Wang, Maneesh Singh, Julian Yarkony
null
null
2,020
aaai
Estimating the Density of States of Boolean Satisfiability Problems on Classical and Quantum Computing Platforms
null
Given a Boolean formula ϕ(x) in conjunctive normal form (CNF), the density of states counts the number of variable assignments that violate exactly e clauses, for all values of e. Thus, the density of states is a histogram of the number of unsatisfied clauses over all possible assignments. This computation generalizes both maximum-satisfiability (MAX-SAT) and model counting problems and not only provides insight into the entire solution space, but also yields a measure for the hardness of the problem instance. Consequently, in real-world scenarios, this problem is typically infeasible even when using state-of-the-art algorithms. While finding an exact answer to this problem is a computationally intensive task, we propose a novel approach for estimating density of states based on the concentration of measure inequalities. The methodology results in a quadratic unconstrained binary optimization (QUBO), which is particularly amenable to quantum annealing-based solutions. We present the overall approach and compare results from the D-Wave quantum annealer against the best-known classical algorithms such as the Hamze-de Freitas-Selby (HFS) algorithm and satisfiability modulo theory (SMT) solvers.
Tuhin Sahai, Anurag Mishra, Jose Miguel Pasini, Susmit Jha
null
null
2,020
aaai
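Illustrative note on the entry above: a brute-force worked example of the density-of-states definition itself — a histogram, over all assignments, of the number of violated clauses — on a tiny hand-made CNF. This is only meant to make the quantity concrete; the paper's contribution is estimating it via concentration-of-measure inequalities and a QUBO, which is not shown here.

```python
from itertools import product
from collections import Counter

# CNF over variables 1..3; a literal is +v (positive) or -v (negated).
clauses = [(1, 2), (-1, 3), (-2, -3), (1, -3)]
n_vars = 3

def unsatisfied(assignment, clauses):
    """Number of clauses violated by a {var: bool} assignment."""
    return sum(not any(assignment[abs(l)] == (l > 0) for l in clause)
               for clause in clauses)

density = Counter()
for bits in product([False, True], repeat=n_vars):
    a = dict(zip(range(1, n_vars + 1), bits))
    density[unsatisfied(a, clauses)] += 1

# density[e] = number of assignments violating exactly e clauses;
# density[0] is the model count, and the smallest e with density[e] > 0
# is the optimal MAX-SAT cost.
print(dict(sorted(density.items())))
```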
Deep Unsupervised Binary Coding Networks for Multivariate Time Series Retrieval
null
Multivariate time series data are becoming increasingly ubiquitous in various real-world applications such as smart city, power plant monitoring, wearable devices, etc. Given the current time series segment, how to retrieve similar segments within the historical data in an efficient and effective manner is becoming increasingly important, as it can facilitate underlying applications such as system status identification, anomaly detection, etc. Despite the fact that various binary coding techniques can be applied to this task, few of them are specially designed for multivariate time series data in an unsupervised setting. To this end, we present Deep Unsupervised Binary Coding Networks (DUBCNs) to perform multivariate time series retrieval. DUBCNs employ the Long Short-Term Memory (LSTM) encoder-decoder framework to capture the temporal dynamics within the input segment and consist of three key components, i.e., a temporal encoding mechanism to capture the temporal order of different segments within a mini-batch, a clustering loss on the hidden feature space to capture the hidden feature structure, and an adversarial loss based upon Generative Adversarial Networks (GANs) to enhance the generalization capability of the generated binary codes. Thorough empirical studies on three public datasets demonstrated that the proposed DUBCNs can outperform state-of-the-art unsupervised binary coding techniques.
Dixian Zhu, Dongjin Song, Yuncong Chen, Cristian Lumezanu, Wei Cheng, Bo Zong, Jingchao Ni, Takehiko Mizoguchi, Tianbao Yang, Haifeng Chen
null
null
2,020
aaai
Finding Most Compatible Phylogenetic Trees over Multi-State Characters
null
The reconstruction of the evolutionary tree of a set of species based on qualitative attributes is a central problem in phylogenetics. In the NP-hard perfect phylogeny problem the input is a set of taxa (species) and characters (attributes) on them, and the task is to find an evolutionary tree that describes the evolution of the taxa so that each character state evolves only once. However, in practical situations a perfect phylogeny rarely exists, motivating the maximum compatibility problem of finding the largest subset of characters admitting a perfect phylogeny. Various declarative approaches, based on applying integer programming (IP), answer set programming (ASP) and pseudo-Boolean optimization (PBO) solvers, have been proposed for maximum compatibility. In this work we develop a new hybrid approach to solving maximum compatibility for multi-state characters, making use of both declarative optimization techniques (specifically maximum satisfiability, MaxSAT) and an adaptation of the Bouchitté–Todinca approach to triangulation-based graph optimization problems. Empirically, our approach outperforms the earlier proposed approaches in scalability w.r.t. various parameters underlying the problem.
Tuukka Korhonen, Matti Järvisalo
null
null
2,020
aaai
Solving Set Cover and Dominating Set via Maximum Satisfiability
null
The Set Covering Problem (SCP) and Dominating Set Problem (DSP) are NP-hard and have many real-world applications. SCP and DSP can be encoded into Maximum Satisfiability (MaxSAT) naturally and the resulting instances share a special structure. In this paper, we develop an efficient local search solver for MaxSAT instances of this kind. Our algorithm contains three phases: construction, local search and recovery. In the construction phase, we simplify the instance by three reduction rules and construct an initial solution by a greedy heuristic. The initial solution is improved during the local search phase, which exploits the feature of such instances in the scoring function and the variable selection heuristic. Finally, the corresponding solution of the original instance is recovered in the recovery phase. Experimental results on a broad range of large-scale instances of SCP and DSP show that our algorithm significantly outperforms state-of-the-art solvers for SCP, DSP and MaxSAT.
Zhendong Lei, Shaowei Cai
null
null
2,020
aaai
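Illustrative note on the entry above: the construction phase described in the abstract builds an initial solution with a greedy heuristic; below is a minimal Python stand-in using the classic greedy set-cover rule (pick the set covering the most uncovered elements). The instance, and the use of plain greedy without the paper's reduction rules, scoring function, local search and recovery phases, are simplifying assumptions.

```python
def greedy_set_cover(universe, sets):
    """Greedy construction of an initial set cover: repeatedly pick the set
    covering the most still-uncovered elements."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
        if not sets[best] & uncovered:
            raise ValueError("instance is infeasible")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

universe = range(1, 11)
sets = [set(range(1, 6)), set(range(4, 9)), {7, 8, 9, 10}, {1, 10}, {2, 5, 7}]
print(greedy_set_cover(universe, sets))   # indices of the chosen sets
```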
Finding Good Subtrees for Constraint Optimization Problems Using Frequent Pattern Mining
null
Making good decisions at the top of a search tree is important for finding good solutions early in constraint optimization. In this paper, we propose a method employing frequent pattern mining (FPM), a classic data mining technique, to find good subtrees for solving constraint optimization problems. We demonstrate that applying FPM in a small number of random high-quality feasible solutions enables us to identify subtrees containing optimal solutions in more than 55% of problem instances for four real-world benchmark problems. The method works as a plugin that can be combined with any search strategy for branch-and-bound search. Exploring the identified subtrees first, the method brings substantial improvements for four efficient search strategies in both total runtime and runtime of finding optimal solutions.
Hongbo Li, Jimmy Lee, He Mi, Minghao Yin
null
null
2,020
aaai
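Illustrative note on the entry above: a small Python sketch of the underlying idea — mine variable-value assignments that occur frequently across a few high-quality feasible solutions and use them as a partial assignment (subtree) to explore first. It uses simple support counting over hypothetical solutions rather than a full FPM algorithm, and the 75% support threshold is an arbitrary illustrative choice.

```python
from collections import Counter

# A handful of high-quality feasible solutions, each a {variable: value} map
# (in practice obtained from quick randomized runs of the solver).
solutions = [
    {"x1": 2, "x2": 0, "x3": 1, "x4": 3},
    {"x1": 2, "x2": 1, "x3": 1, "x4": 3},
    {"x1": 2, "x2": 0, "x3": 1, "x4": 2},
    {"x1": 1, "x2": 1, "x3": 1, "x4": 1},
]

def frequent_partial_assignment(solutions, min_support=0.75):
    """Keep (variable, value) pairs appearing in at least min_support of the
    sampled solutions; the result defines the subtree to branch into first."""
    counts = Counter(item for sol in solutions for item in sol.items())
    threshold = min_support * len(solutions)
    return {var: val for (var, val), c in counts.items() if c >= threshold}

print(frequent_partial_assignment(solutions))
# -> {'x1': 2, 'x3': 1}: assign these values first in branch-and-bound
```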
Using Approximation within Constraint Programming to Solve the Parallel Machine Scheduling Problem with Additional Unit Resources
null
In this paper, we consider the Parallel Machine Scheduling Problem with Additional Unit Resources, which consists in scheduling a set of n jobs on m parallel unrelated machines, each job being subject to exactly one of r unit resources. This problem arises from the download of acquisitions from satellites to ground stations. We first introduce two baseline constraint models for this problem. Then, we build on an approximation algorithm for this problem, and we discuss the efficiency of designing an improved constraint model based on these approximation results. In particular, we introduce new constraints that restrict search to executions of the approximation algorithm. Finally, we report experimental data demonstrating that this model significantly outperforms the two reference models.
Arthur Godet, Xavier Lorca, Emmanuel Hebrard, Gilles Simonin
null
null
2,020
aaai
Modelling Diversity of Solutions
null
For many combinatorial problems, finding a single solution is not enough. This is clearly the case for multi-objective optimization problems, as they have no single “best solution” and, thus, it is useful to find a representation of the non-dominated solutions (the Pareto frontier). However, it also applies to single objective optimization problems, where one may be interested in finding several (close to) optimal solutions that illustrate some form of diversity. The same applies to satisfaction problems. This is because models usually idealize the problem in some way, and a diverse pool of solutions may provide a better choice with respect to considerations that are omitted or simplified in the model. This paper describes a general framework for finding k diverse solutions to a combinatorial problem (be it satisfaction, single-objective or multi-objective), various approaches to solve problems in the framework, their implementations, and an experimental evaluation of their practicality.
Linnea Ingmar, Maria Garcia de la Banda, Peter J. Stuckey, Guido Tack
null
null
2,020
aaai
Modelling and Solving Online Optimisation Problems
null
Many optimisation problems are of an online—also called dynamic—nature, where new information is expected to arrive and the problem must be resolved in an ongoing fashion to (a) improve or revise previous decisions and (b) take new ones. Typically, building an online decision-making system requires substantial ad-hoc coding to ensure the offline version of the optimisation problem is continually adjusted and resolved. This paper defines a general framework for automatically solving online optimisation problems. This is achieved by extending a model of the offline optimisation problem, from which an online version is automatically constructed, thus requiring no further modelling effort. In doing so, it formalises many of the aspects that arise in online optimisation problems. The same framework can be applied for automatically creating sliding-window solving approaches for problems that have a large time horizon. Experiments show we can automatically create efficient online and sliding-window solutions to optimisation problems.
Alexander Ek, Maria Garcia de la Banda, Andreas Schutt, Peter J. Stuckey, Guido Tack
null
null
2,020
aaai
Justifying All Differences Using Pseudo-Boolean Reasoning
null
Constraint programming solvers support rich global constraints and propagators, which make them both powerful and hard to debug. In the Boolean satisfiability community, proof-logging is the standard solution for generating trustworthy outputs, and this has become key to the social acceptability of computer-generated proofs. However, reusing this technology for constraint programming requires either much weaker propagation, or an impractical blowup in proof length. This paper demonstrates that simple, clean, and efficient proof logging is still possible for the all-different constraint, through pseudo-Boolean reasoning. We explain how such proofs can be expressed and verified mechanistically, describe an implementation, and discuss the broader implications for proof logging in constraint programming.
Jan Elffers, Stephan Gocht, Ciaran McCreesh, Jakob Nordström
null
null
2,020
aaai
Incremental Symmetry Breaking Constraints for Graph Search Problems
null
This paper introduces incremental symmetry breaking constraints for graph search problems which are complete and compact. We show that these constraints can be computed incrementally: A symmetry breaking constraint for order n graphs can be extended to one for order n + 1 graphs. Moreover, these constraints induce a special property on their canonical solutions: An order n canonical graph contains a canonical subgraph on the first k vertices for every 1 ≤ k ≤ n. This facilitates a “generate and extend” paradigm for parallel graph search problem solving: To solve a graph search problem φ on order n graphs, first generate the canonical graphs of some order k < n. Then, compute canonical solutions for φ by extending, in parallel, each canonical order k graph together with suitable symmetry breaking constraints. The contribution is that the proposed symmetry breaking constraints enable us to extend the order k canonical graphs to order n canonical solutions. We demonstrate our approach through its application on two hard graph search problems.
Avraham Itzhakov, Michael Codish
null
null
2,020
aaai
An Effective Hard Thresholding Method Based on Stochastic Variance Reduction for Nonconvex Sparse Learning
null
We propose a hard thresholding method based on stochastically controlled stochastic gradients (SCSG-HT) to solve a family of sparsity-constrained empirical risk minimization problems. To reduce the variance in stochastic gradients, SCSG-HT uses batch gradients, whose batch size is pre-determined by the desired precision tolerance, rather than full gradients. It also employs the geometric distribution to determine the number of loops per epoch. We prove that, similar to the latest methods based on stochastic gradient descent or stochastic variance reduction methods, SCSG-HT enjoys a linear convergence rate. However, SCSG-HT additionally has a strong guarantee to recover the optimal sparse estimator. The computational complexity of SCSG-HT is independent of sample size n when n is larger than 1/ε, which enhances the scalability to massive-scale problems. Empirical results demonstrate that SCSG-HT outperforms several competitors and decreases the objective value the most with the same computational costs.
Guannan Liang, Qianqian Tong, Chunjiang Zhu, Jinbo Bi
null
null
2,020
aaai
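Illustrative note on the entry above: a minimal NumPy sketch of the two ingredients named in the abstract — a hard-thresholding operator that keeps the s largest-magnitude coordinates, and stochastically controlled gradient steps whose anchor is a batch (not full) gradient and whose inner-loop length is drawn from a geometric distribution — applied to a toy sparse least-squares problem. Step size, batch sizes and the geometric parameter are illustrative assumptions, not the constants analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, s = 200, 50, 5
X = rng.standard_normal((n, d))
beta_true = np.zeros(d)
beta_true[:s] = np.array([3.0, -2.0, 1.5, -1.0, 2.5])
y = X @ beta_true + 0.01 * rng.standard_normal(n)

def hard_threshold(v, s):
    """Keep the s largest-magnitude entries, zero out the rest."""
    out = np.zeros_like(v)
    top = np.argsort(np.abs(v))[-s:]
    out[top] = v[top]
    return out

def grad(beta, idx):
    Xi, yi = X[idx], y[idx]
    return Xi.T @ (Xi @ beta - yi) / len(idx)

beta = np.zeros(d)
B, b, eta = 64, 8, 0.1                 # outer batch, inner mini-batch, step size
for epoch in range(50):
    I = rng.choice(n, B, replace=False)
    g = grad(beta, I)                   # batch-gradient anchor (not a full gradient)
    beta_ref = beta.copy()
    for _ in range(rng.geometric(b / (B + b))):   # geometric number of inner loops
        idx = rng.choice(n, b, replace=False)
        v = grad(beta, idx) - grad(beta_ref, idx) + g
        beta = hard_threshold(beta - eta * v, s)

print("support recovered:", set(np.flatnonzero(beta)) == set(range(s)))
```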
A Cardinal Improvement to Pseudo-Boolean Solving
null
Pseudo-Boolean solvers hold out the theoretical potential of exponential improvements over conflict-driven clause learning (CDCL) SAT solvers, but in practice perform very poorly if the input is given in the standard conjunctive normal form (CNF) format. We present a technique to remedy this problem by recovering cardinality constraints from CNF on the fly during search. This is done by collecting potential building blocks of cardinality constraints during propagation and combining these blocks during conflict analysis. Our implementation has a non-negligible but manageable overhead when detection is not successful, and yields significant gains for some SAT competition and crafted benchmarks for which pseudo-Boolean reasoning is stronger than CDCL. It also boosts performance for some native pseudo-Boolean formulas where this approach helps to improve learned constraints.
Jan Elffers, Jakob Nordström
null
null
2,020
aaai
FourierSAT: A Fourier Expansion-Based Algebraic Framework for Solving Hybrid Boolean Constraints
null
The Boolean SATisfiability problem (SAT) is of central importance in computer science. Although SAT is known to be NP-complete, progress on the engineering side—especially that of Conflict-Driven Clause Learning (CDCL) and Local Search SAT solvers—has been remarkable. Yet, while SAT solvers, aimed at solving industrial-scale benchmarks in Conjunctive Normal Form (CNF), have become quite mature, SAT solvers that are effective on other types of constraints (e.g., cardinality constraints and XORs) are less well-studied; a general approach to handling non-CNF constraints is still lacking. In addition, previous work indicated that for specific classes of benchmarks, the running time of extant SAT solvers depends heavily on properties of the formula and details of encoding, instead of the scale of the benchmarks, which adds uncertainty to expectations of running time. To address the issues above, we design FourierSAT, an incomplete SAT solver based on Fourier analysis of Boolean functions, a technique to represent Boolean functions by multilinear polynomials. By such a reduction to continuous optimization, we propose an algebraic framework for solving systems consisting of different types of constraints. The idea is to leverage gradient information to guide the search process in the direction of local improvements. Empirical results demonstrate that FourierSAT is more robust than other solvers on certain classes of benchmarks.
Anastasios Kyrillidis, Anshumali Shrivastava, Moshe Vardi, Zhiwei Zhang
null
null
2,020
aaai
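Illustrative note on the FourierSAT entry above: a small Python sketch of the general reduction the abstract describes — each CNF clause becomes a multilinear penalty over [-1, 1]^n that equals 1 when all its literals are false and 0 when any is true, and projected gradient descent followed by rounding searches for a low-penalty point. The specific penalty form, the optimizer, the step size and the tiny instance are illustrative assumptions; FourierSAT's actual objective, its handling of cardinality/XOR constraints, and its optimization schedule are not reproduced here.

```python
import numpy as np

# CNF clauses as lists of signed variables (+v = positive literal, -v = negated).
clauses = [[1, 2, -3], [-1, 3], [2, 3], [-2, -3], [1, -2]]
n = 3

def penalty_and_grad(x, clauses):
    """Each clause contributes prod_j (1 - s_j * x_{v_j}) / 2, a multilinear
    term that is 1 when all its literals are false (x in {-1,+1}) and 0 when
    any literal is true."""
    f, g = 0.0, np.zeros_like(x)
    for clause in clauses:
        factors = np.array([(1 - np.sign(l) * x[abs(l) - 1]) / 2 for l in clause])
        f += factors.prod()
        for k, l in enumerate(clause):
            rest = np.delete(factors, k).prod()
            g[abs(l) - 1] += -np.sign(l) / 2 * rest
    return f, g

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, n)
for _ in range(300):                       # projected gradient descent on [-1, 1]^n
    _, g = penalty_and_grad(x, clauses)
    x = np.clip(x - 0.2 * g, -1.0, 1.0)

assignment = x > 0                         # round the continuous point to Booleans
violated = sum(all(assignment[abs(l) - 1] != (l > 0) for l in clause)
               for clause in clauses)
print("assignment:", assignment, "violated clauses:", violated)
```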
ADDMC: Weighted Model Counting with Algebraic Decision Diagrams
null
We present an algorithm to compute exact literal-weighted model counts of Boolean formulas in Conjunctive Normal Form. Our algorithm employs dynamic programming and uses Algebraic Decision Diagrams as the main data structure. We implement this technique in ADDMC, a new model counter. We empirically evaluate various heuristics that can be used with ADDMC. We then compare ADDMC to four state-of-the-art weighted model counters (Cachet, c2d, d4, and miniC2D) on 1914 standard model counting benchmarks and show that ADDMC significantly improves the virtual best solver.
Jeffrey Dudek, Vu Phan, Moshe Vardi
null
null
2,020
aaai
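Illustrative note on the ADDMC entry above: a brute-force worked example of the task itself, literal-weighted model counting — summing, over satisfying assignments, the product of the weights of the literals set to true. The clause set and weights are made up; ADDMC's actual contribution, dynamic programming over Algebraic Decision Diagrams, is what makes this scale and is not shown here.

```python
from itertools import product

# CNF over variables 1..3 and literal weights W[+v], W[-v].
clauses = [(1, -2), (2, 3), (-1, -3)]
W = {1: 0.6, -1: 0.4, 2: 0.7, -2: 0.3, 3: 0.5, -3: 0.5}

def weighted_model_count(clauses, n_vars, W):
    """Sum over satisfying assignments of the product of the weights of the
    literals that the assignment makes true."""
    total = 0.0
    for bits in product([False, True], repeat=n_vars):
        sat = all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
        if sat:
            w = 1.0
            for v, b in enumerate(bits, start=1):
                w *= W[v] if b else W[-v]
            total += w
    return total

print(weighted_model_count(clauses, 3, W))
```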
Guiding CDCL SAT Search via Random Exploration amid Conflict Depression
null
The efficiency of Conflict Driven Clause Learning (CDCL) SAT solving depends crucially on finding conflicts at a fast rate. State-of-the-art CDCL branching heuristics such as VSIDS, CHB and LRB conform to this goal. We take a closer look at the way in which conflicts are generated over the course of a CDCL SAT search. Our study of the VSIDS branching heuristic shows that conflicts are typically generated in short bursts, followed by what we call a conflict depression phase in which the search fails to generate any conflicts in a span of decisions. The lack of conflict indicates that the variables that are currently ranked highest by the branching heuristic fail to generate conflicts. Based on this analysis, we propose an exploration strategy, called expSAT, which randomly samples variable selection sequences in order to learn an updated heuristic from the generated conflicts. The goal is to escape from conflict depressions expeditiously. The branching heuristic deployed in expSAT combines these updates with the standard VSIDS activity scores. An extensive empirical evaluation with four state-of-the-art CDCL SAT solvers demonstrates good-to-strong performance gains with the expSAT approach.
Md Solimul Chowdhury, Martin Müller, Jia You
null
null
2,020
aaai
Improved Filtering for the Euclidean Traveling Salesperson Problem in CLP(FD)
null
The Traveling Salesperson Problem (TSP) is one of the best-known problems in computer science. The Euclidean TSP is a special case in which each node is identified by its coordinates on the plane and the Euclidean distance is used as cost function. Many works in the Constraint Programming (CP) literature have addressed the TSP and use Euclidean instances as benchmarks; however, the usual approach is to build a distance matrix from the points' coordinates and then address the problem as a TSP, disregarding the information carried by the points' coordinates for constraint propagation. In this work, we propose to use geometric information, present in Euclidean TSP instances, to improve the filtering power. In order to have a declarative approach, we implemented the filtering algorithms in Constraint Logic Programming on Finite Domains (CLP(FD)).
Alessandro Bertagnon, Marco Gavanelli
null
null
2,020
aaai
Chain Length and CSPs Learnable with Few Queries
null
The goal of constraint acquisition is to learn exactly a constraint network given access to an oracle that answers truthfully certain types of queries. In this paper we focus on partial membership queries and initiate a systematic investigation of the learning complexity of constraint languages. First, we use the notion of chain length to show that a wide class of languages can be learned with as few as O(n log(n)) queries. Then, we combine this result with generic lower bounds to derive a dichotomy in the learning complexity of binary languages. Finally, we identify a class of ternary languages that eludes our framework and hints at new research directions.
Christian Bessiere, Clément Carbonnel, George Katsirelos
null
null
2,020
aaai
Representative Solutions for Bi-Objective Optimisation
null
Bi-objective optimisation aims to optimise two generally competing objective functions. Typically, it consists in computing the set of nondominated solutions, called the Pareto front. This raises two issues: 1) time complexity, as the Pareto front in general can be infinite for continuous problems and exponentially large for discrete problems, and 2) lack of decisiveness. This paper focuses on the computation of a small, “relevant” subset of the Pareto front called the representative set, which provides meaningful trade-offs between the two objectives. We introduce a procedure which, given a pre-computed Pareto front, computes a representative set in polynomial time, and then we show how to adapt it to the case where the Pareto front is not provided. This has three important consequences for computing the representative set: 1) it does not require the whole Pareto front to be provided explicitly, 2) it can be done in polynomial time for bi-objective mixed-integer linear programs, and 3) it only requires a polynomial number of solver calls for bi-objective problems, as opposed to the case where a higher number of objectives is involved. We implement our algorithm and empirically illustrate its efficiency on two families of benchmarks.
Emir Demirović, Nicolas Schwind
null
null
2,020
aaai
Deep Reinforcement Learning for General Game Playing
null
General Game Playing agents are required to play games they have never seen before simply by looking at a formal description of the rules of the game at runtime. Previous successful agents have been based on search with generic heuristics, with almost no work on using machine learning. Recent advances in deep reinforcement learning have shown it to be successful in some two-player zero-sum board games such as Chess and Go. This work applies deep reinforcement learning to General Game Playing, extending the AlphaZero algorithm, and finds that it can provide competitive results.
Adrian Goldwaser, Michael Thielscher
null
null
2,020
aaai
Dynamic Programming for Predict+Optimise
null
We study the predict+optimise problem, where machine learning and combinatorial optimisation must interact to achieve a common goal. These problems are important when optimisation needs to be performed on input parameters that are not fully observed but must instead be estimated using machine learning. We provide a novel learning technique for predict+optimise to directly reason about the underlying combinatorial optimisation problem, offering a meaningful integration of machine learning and optimisation. This is done by representing the combinatorial problem as a piecewise linear function parameterised by the coefficients of the learning model and then iteratively performing coordinate descent on the learning coefficients. Our approach is applicable to linear learning functions and any optimisation problem solvable by dynamic programming. We illustrate the effectiveness of our approach on benchmarks from the literature.
Emir Demirović, Peter J. Stuckey, Tias Guns, James Bailey, Christopher Leckie, Kotagiri Ramamohanarao, Jeffrey Chan
null
null
2,020
aaai
Narrative Planning Model Acquisition from Text Summaries and Descriptions
null
AI Planning has been shown to be a useful approach for the generation of narrative in interactive entertainment systems and games. However, the creation of the underlying narrative domain models is challenging: the well documented AI planning modelling bottleneck is further compounded by the need for authors, who tend to be non-technical, to create content. We seek to support authors in this task by allowing natural language (NL) plot synopses to be used as a starting point from which planning domain models can be automatically acquired. We present a solution which analyses input NL text summaries, and builds structured representations from which a pddl model is output (fully automated or author in-the-loop). We introduce a novel sieve-based approach to pronoun resolution that demonstrates consistently high performance across domains. In the paper we focus on authoring of narrative planning models for use in interactive entertainment systems and games. We show that our approach exhibits comprehensive detection of both actions and objects in the system-extracted domain models, in combination with significant improvement in the accuracy of pronoun resolution due to the use of contextual object information. Our results and an expert user assessment show that our approach enables a reduction in authoring effort required to generate baseline narrative domain models from which variants can be built.
Thomas Hayton, Julie Porteous, Joao Ferreira, Alan Lindsay
null
null
2,020
aaai
A Character-Centric Neural Model for Automated Story Generation
null
Automated story generation is a challenging task which aims to automatically generate convincing stories composed of successive plots correlated with consistent characters. Most recent generation models are built upon advanced neural networks, e.g., variational autoencoder, generative adversarial network, convolutional sequence to sequence model. Although these models have achieved promising results on learning linguistic patterns, very few methods consider the attributes and prior knowledge of the story genre, especially from the perspectives of explainability and consistency. To fill this gap, we propose a character-centric neural storytelling model, where a story is created encircling the given character, i.e., each part of a story is conditioned on a given character and the corresponding context environment. In this way, we explicitly capture the character information and the relations between plots and characters to improve explainability and consistency. Experimental results on an open dataset indicate that our model yields meaningful improvements over several strong baselines on both human and automatic evaluations.
Danyang Liu, Juntao Li, Meng-Hsuan Yu, Ziming Huang, Gongshen Liu, Dongyan Zhao, Rui Yan
null
null
2,020
aaai
Computing Team-Maxmin Equilibria in Zero-Sum Multiplayer Extensive-Form Games
null
The study of finding the equilibrium for multiplayer games is challenging. This paper focuses on computing Team-Maxmin Equilibria (TMEs) in zero-sum multiplayer Extensive-Form Games (EFGs), which describes the optimal strategies for a team of players who share the same goal but they take actions independently against an adversary. TMEs can capture many realistic scenarios, including: 1) a team of players play against a target player in poker games; and 2) defense resources schedule and patrol independently in security games. However, the study of efficiently finding TMEs within any given accuracy in EFGs is almost completely unexplored. To fill this gap, we first study the inefficiency caused by computing the equilibrium where team players correlate their strategies and then transforming it into the mixed strategy profile of the team and show that this inefficiency can be arbitrarily large. Second, to efficiently solve the non-convex program for finding TMEs directly, we develop the Associated Recursive Asynchronous Multiparametric Disaggregation Technique (ARAMDT) to approximate multilinear terms in the program with two novel techniques: 1) an asynchronous precision method to reduce the number of constraints and variables for approximation by using different precision levels to approximate these terms; and 2) an associated constraint method to reduce the feasible solution space of the mixed-integer linear program resulting from ARAMDT by exploiting the relation between these terms. Third, we develop a novel iterative algorithm to efficiently compute TMEs within any given accuracy based on ARAMDT. Our algorithm is orders of magnitude faster than baselines in the experimental evaluation.
Youzhi Zhang, Bo An
null
null
2,020
aaai
FET-GAN: Font and Effect Transfer via K-shot Adaptive Instance Normalization
null
Text effect transfer aims at learning the mapping between text visual effects while maintaining the text content. While remarkably successful, existing methods have limited robustness in font transfer and weak generalization ability to unseen effects. To address these problems, we propose FET-GAN, a novel end-to-end framework to implement visual effects transfer with font variation among multiple text effects domains. Our model achieves remarkable results both on arbitrary effect transfer between texts and effect translation from text to graphic objects. Via a few-shot fine-tuning strategy, FET-GAN can generalize the transfer of the pre-trained model to the new effect. Through extensive experimental validation and comparison, our model advances the state-of-the-art in the text effect transfer task. Besides, we have collected a font dataset including 100 fonts of more than 800 Chinese and English characters. Based on this dataset, we demonstrate the generalization ability of our model through an application that automatically completes a font library from few-shot samples. This application significantly reduces the labor cost for font designers.
Wei Li, Yongxing He, Yanwei Qi, Zejian Li, Yongchuan Tang
null
null
2,020
aaai
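Illustrative note on the FET-GAN entry above: a minimal PyTorch sketch (assuming PyTorch is available) of adaptive instance normalization, the operation named in the title — content features are instance-normalized per channel and then rescaled with statistics taken from style features, here simply averaged over K style exemplars. The feature shapes, the K-shot averaging and the absence of any learned components are illustrative assumptions, not FET-GAN's actual generator.

```python
import torch

def adaptive_instance_norm(content, style_feats, eps=1e-5):
    """content: (N, C, H, W) feature map of the source text image.
    style_feats: (K, C, H, W) feature maps of K exemplars of the target effect.
    The content map is normalized per channel and re-styled with the mean/std
    aggregated over the K-shot style set."""
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    normalized = (content - c_mean) / c_std

    s_mean = style_feats.mean(dim=(0, 2, 3)).view(1, -1, 1, 1)   # average over K shots
    s_std = style_feats.std(dim=(0, 2, 3)).view(1, -1, 1, 1) + eps
    return normalized * s_std + s_mean

content = torch.randn(2, 64, 32, 32)
style = torch.randn(5, 64, 32, 32)      # K = 5 exemplars of the target effect
print(adaptive_instance_norm(content, style).shape)   # torch.Size([2, 64, 32, 32])
```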
Draft and Edit: Automatic Storytelling Through Multi-Pass Hierarchical Conditional Variational Autoencoder
null
Automatic Storytelling has consistently been a challenging area in the field of natural language processing. Although considerable achievements have been made, the gap between automatically generated stories and human-written stories is still significant. Moreover, the limitations of existing automatic storytelling methods are obvious, e.g., in content consistency and wording diversity. In this paper, we propose a multi-pass hierarchical conditional variational autoencoder model to overcome the challenges and limitations in existing automatic storytelling models. While the conditional variational autoencoder (CVAE) model has been employed to generate diversified content, the hierarchical structure and multi-pass editing scheme allow the model to create more consistent content. We conduct extensive experiments on the ROCStories Dataset. The results verify the validity and effectiveness of our proposed model, which yields substantial improvements over the existing state-of-the-art approaches.
Meng-Hsuan Yu, Juntao Li, Danyang Liu, Dongyan Zhao, Rui Yan, Bo Tang, Haisong Zhang
null
null
2,020
aaai
Algorithms for Manipulating Sequential Allocation
null
Sequential allocation is a simple and widely studied mechanism to allocate indivisible items in turns to agents according to a pre-specified picking sequence of agents. At each turn, the current agent in the picking sequence picks its most preferred item among all items that have not been allocated yet. This mechanism is well known not to be strategyproof, i.e., an agent may get more utility by reporting an untruthful preference ranking of items. This raises the problem: how can we find the best response of an agent? It is known that this problem is polynomially solvable for only two agents and NP-complete for an arbitrary number of agents. The computational complexity of this problem with three agents was left as an open problem. In this paper, we give a novel algorithm that solves the problem in polynomial time for each fixed number of agents. We also show that an agent can always get at least half of its optimal utility by simply using its truthful preference as the response.
Mingyu Xiao, Jiaxing Ling
null
null
2,020
aaai
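Illustrative note on the entry above: a short Python worked example of the sequential allocation mechanism itself under truthful reports — the object against which a best response would be computed. The preferences, the picking sequence, and the assumption that all agents rank the same item set are made up for illustration.

```python
def sequential_allocation(preferences, picking_sequence):
    """preferences[a] is agent a's ranking of the items (most preferred first);
    at each turn the current agent truthfully picks its best remaining item.
    Assumes all agents rank the same set of items."""
    remaining = set(preferences[0])
    bundles = {a: [] for a in range(len(preferences))}
    for agent in picking_sequence:
        pick = next(item for item in preferences[agent] if item in remaining)
        bundles[agent].append(pick)
        remaining.remove(pick)
    return bundles

prefs = [["a", "b", "c", "d"],      # agent 0
         ["b", "a", "d", "c"],      # agent 1
         ["a", "d", "b", "c"]]      # agent 2
print(sequential_allocation(prefs, [0, 1, 2, 2]))
# {0: ['a'], 1: ['b'], 2: ['d', 'c']}
```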
Accelerating Primal Solution Findings for Mixed Integer Programs Based on Solution Prediction
null
Mixed Integer Programming (MIP) is one of the most widely used modeling techniques for combinatorial optimization problems. In many applications, a similar MIP model is solved on a regular basis, maintaining remarkable similarities in model structures and solution appearances but differing in formulation coefficients. This offers the opportunity for machine learning methods to explore the correlations between model structures and the resulting solution values. To address this issue, we propose to represent a MIP instance using a tripartite graph, based on which a Graph Convolutional Network (GCN) is constructed to predict solution values for binary variables. The predicted solutions are used to generate a local branching type cut which can be either treated as a global (invalid) inequality in the formulation resulting in a heuristic approach to solve the MIP, or as a root branching rule resulting in an exact approach. Computational evaluations on 8 distinct types of MIP problems show that the proposed framework improves the primal solution finding performance significantly on a state-of-the-art open-source MIP solver.
Jian-Ya Ding, Chao Zhang, Lei Shen, Shengyin Li, Bing Wang, Yinghui Xu, Le Song
null
null
2,020
aaai
Fast and Robust Face-to-Parameter Translation for Game Character Auto-Creation
null
With the rapid development of Role-Playing Games (RPGs), players are now allowed to edit the facial appearance of their in-game characters according to their preferences rather than using default templates. This paper proposes a game character auto-creation framework that generates in-game characters according to a player's input face photo. Different from the previous methods that are designed based on neural style transfer or monocular 3D face reconstruction, we re-formulate the character auto-creation process from a different point of view: by predicting a large set of physically meaningful facial parameters under a self-supervised learning paradigm. Instead of updating facial parameters iteratively at the input end of the renderer as suggested by previous methods, which are time-consuming, we introduce a facial parameter translator so that the creation can be done efficiently through a single forward propagation from the face embeddings to parameters, with a considerable 1000x computational speedup. Despite its high efficiency, the interactivity is preserved in our method where users are allowed to optionally fine-tune the facial parameters on our creation according to their needs. Our approach also shows better robustness than previous methods, especially for those photos with head-pose variance. Comparison results and ablation analysis on seven public face verification datasets suggest the effectiveness of our method.
Tianyang Shi, Zhengxia Zuo, Yi Yuan, Changjie Fan
null
null
2,020
aaai
A Multi-Unit Profit Competitive Mechanism for Cellular Traffic Offloading
null
Cellular traffic offloading is nowadays an important problem in mobile networking. We model it as a procurement problem where each agent sells multi-units of a homogeneous item with privately known capacity and unit cost, and the auctioneer's demand valuation function is symmetric submodular. Based on the framework of random sampling and profit extraction, we aim to design a prior-free mechanism which guarantees a profit competitive to the omniscient single-price auction. However, the symmetric submodular demand valuation function and 2-parameter setting present new challenges. By adopting the highest feasible clear price, we successfully design a truthful profit extractor, and then we propose a mechanism which is proved to be truthful, individually rational and constant-factor competitive in a fixed market.
Jun Wu, Yu Qiao, Lei Zhang, Chongjun Wang, Meilin Liu
null
null
2,020
aaai
Computing Equilibria in Binary Networked Public Goods Games
null
Public goods games study the incentives of individuals to contribute to a public good and their behaviors in equilibria. In this paper, we examine a specific type of public goods game where players are networked and each has binary actions, and focus on the algorithmic aspects of such games. First, we show that checking the existence of a pure-strategy Nash equilibrium is NP-complete. We then identify tractable instances based on restrictions of either utility functions or of the underlying graphical structure. In certain cases, we also show that we can efficiently compute a socially optimal Nash equilibrium. Finally, we propose a heuristic approach for computing approximate equilibria in general binary networked public goods games, and experimentally demonstrate its effectiveness. Due to space limitations, some proofs are deferred to the extended version.
Sixie Yu, Kai Zhou, Jeffrey Brantingham, Yevgeniy Vorobeychik
null
null
2,020
aaai
Can We Predict the Election Outcome from Sampled Votes?
null
In the standard model of voting, it is assumed that a voting rule observes the ranked preferences of each individual over a set of alternatives and makes a collective decision. In practice, however, not every individual votes. Is it possible to make a good collective decision for a group given the preferences of only a few of its members? We propose a framework in which we are given the ranked preferences of k out of n individuals sampled from a distribution, and the goal is to predict what a given voting rule would output if applied on the underlying preferences of all n individuals. We focus on the family of positional scoring rules, derive a strong negative result when the underlying preferences can be arbitrary, and discover interesting phenomena when they are generated from a known distribution.
Evi Micha, Nisarg Shah
null
null
2,020
aaai
Bounded Incentives in Manipulating the Probabilistic Serial Rule
null
The Probabilistic Serial mechanism is well-known for its desirable fairness and efficiency properties. It is one of the most prominent protocols for the random assignment problem. However, Probabilistic Serial is not incentive-compatible, so these desirable properties only hold for the agents' declared preferences, rather than their genuine preferences. A substantial utility gain through strategic behaviors would trigger self-interested agents to manipulate the mechanism and would subvert the very foundation of adopting the mechanism in practice. In this paper, we characterize the extent to which an individual agent can increase its utility by strategic manipulation. We show that the incentive ratio of the mechanism is 3/2. That is, no agent can misreport its preferences such that its utility becomes more than 1.5 times what it is when it reports truthfully. This ratio is a worst-case guarantee by allowing an agent to have complete information about other agents' reports and to figure out the best response strategy even if it is computationally intractable in general. To complement this worst-case study, we further evaluate an agent's utility gain on average by experiments. The experiments show that an agent's incentive in manipulating the rule is very limited. These results shed some light on the robustness of Probabilistic Serial against strategic manipulation, which is one step further than knowing that it is not incentive-compatible.
Zihe Wang, Zhide Wei, Jie Zhang
null
null
2,020
aaai
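Illustrative note on the entry above: a small Python implementation of the Probabilistic Serial rule under truthful reports, i.e., the simultaneous eating procedure whose manipulability the paper quantifies — each agent eats its favorite remaining item at unit speed, and the fractions eaten become assignment probabilities. The preference profile is made up, and equal eating speeds and equally many agents and items are simplifying assumptions.

```python
from fractions import Fraction

def probabilistic_serial(prefs):
    """prefs[a] is agent a's ranking of the items (most preferred first).
    All agents 'eat' their favorite remaining item at unit speed; the amount
    each agent eats of each item is its assignment probability."""
    n, items = len(prefs), set(prefs[0])
    supply = {o: Fraction(1) for o in items}
    shares = [{o: Fraction(0) for o in items} for _ in range(n)]
    t = Fraction(0)
    while t < 1:
        targets = [next(o for o in prefs[a] if supply[o] > 0) for a in range(n)]
        eaters = {o: targets.count(o) for o in set(targets)}
        # Advance until the first currently-eaten item runs out, or time ends.
        dt = min(min(supply[o] / k for o, k in eaters.items()), 1 - t)
        for a in range(n):
            shares[a][targets[a]] += dt
        for o, k in eaters.items():
            supply[o] -= k * dt
        t += dt
    return shares

prefs = [["a", "b", "c"],
         ["a", "c", "b"],
         ["b", "c", "a"]]
for agent, sh in enumerate(probabilistic_serial(prefs)):
    print(agent, {o: float(p) for o, p in sh.items() if p})
```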
Mechanism Design with Predicted Task Revenue for Bike Sharing Systems
null
Bike sharing systems have been widely deployed around the world in recent years. A core problem in such systems is to reposition the bikes so that the distribution of bike supply is reshaped to better match the dynamic bike demand. When the bike-sharing company or platform is able to predict the revenue of each reposition task based on historic data, an additional constraint is to cap the payment for each task below its predicted revenue. In this paper, we propose an incentive mechanism called TruPreTar to incentivize users to park bicycles at locations desired by the platform toward rebalancing supply and demand. TruPreTar possesses four important economic and computational properties such as truthfulness and budget feasibility. Furthermore, we prove that even when the payment budget is tight, the total revenue still exceeds or equals the budget. Otherwise, TruPreTar achieves 2-approximation as compared to the optimal (revenue-maximizing) solution, which is close to the lower bound of at least √2 that we also prove. Using an industrial dataset obtained from a large bike-sharing company, our experiments show that TruPreTar is effective in rebalancing bike supply and demand and, as a result, generates high revenue that outperforms several benchmark mechanisms.
Hongtao Lv, Chaoli Zhang, Zhenzhe Zheng, Tie Luo, Fan Wu, Guihai Chen
null
null
2,020
aaai
Deep Learning—Powered Iterative Combinatorial Auctions
null
In this paper, we study the design of deep learning-powered iterative combinatorial auctions (ICAs). We build on prior work where preference elicitation was done via kernelized support vector regressions (SVRs). However, the SVR-based approach has limitations because it requires solving a machine learning (ML)-based winner determination problem (WDP). With expressive kernels (like Gaussians), the ML-based WDP cannot be solved for large domains. While linear or quadratic kernels have better computational scalability, these kernels have limited expressiveness. In this work, we address these shortcomings by using deep neural networks (DNNs) instead of SVRs. We first show how the DNN-based WDP can be reformulated into a mixed integer program (MIP). Second, we experimentally compare the prediction performance of DNNs against SVRs. Third, we present experimental evaluations in two medium-sized domains which show that even ICAs based on relatively small-sized DNNs lead to higher economic efficiency than ICAs based on kernelized SVRs. Finally, we show that our DNN-powered ICA also scales well to very large CA domains.
Jakob Weissteiner, Sven Seuken
null
null
2,020
aaai
Limitations of Incentive Compatibility on Discrete Type Spaces
null
In the design of incentive compatible mechanisms, a common approach is to enforce incentive compatibility as constraints in programs that optimize over feasible mechanisms. Such constraints are often imposed on sparsified representations of the type spaces, such as their discretizations or samples, in order for the program to be manageable. In this work, we explore limitations of this approach, by studying whether all dominant strategy incentive compatible mechanisms on a set T of discrete types can be extended to the convex hull of T. Dobzinski, Fu and Kleinberg (2015) answered the question affirmatively for all settings where types are single dimensional. It is not difficult to show that the same holds when the set of feasible outcomes is downward closed. In this work we show that the question has a negative answer for certain non-downward-closed settings with multi-dimensional types. This result should call for caution in the use of the said approach to enforcing incentive compatibility beyond single-dimensional preferences and downward closed feasible outcomes.
Taylor Lundy, Hu Fu
null
null
2,020
aaai
Adaptive Quantitative Trading: An Imitative Deep Reinforcement Learning Approach
null
In recent years, considerable efforts have been devoted to developing AI techniques for finance research and applications. For instance, AI techniques (e.g., machine learning) can help traders in quantitative trading (QT) by automating two tasks: market condition recognition and trading strategies execution. However, existing methods in QT face challenges such as representing noisy high-frequency financial data and finding the balance between exploration and exploitation of the trading agent with AI techniques. To address the challenges, we propose an adaptive trading model, namely iRDPG, to automatically develop QT strategies by an intelligent trading agent. Our model is enhanced by deep reinforcement learning (DRL) and imitation learning techniques. Specifically, considering the noisy financial data, we formulate the QT process as a Partially Observable Markov Decision Process (POMDP). Also, we introduce imitation learning to leverage classical trading strategies useful to balance between exploration and exploitation. For better simulation, we train our trading agent in the real financial market using minute-frequency data. Experimental results demonstrate that our model can extract robust market features and be adaptive in different markets.
Yang Liu, Qi Liu, Hongke Zhao, Zhen Pan, Chuanren Liu
null
null
2,020
aaai
The Surprising Power of Hiding Information in Facility Location
null
Facility location is the problem of locating a public facility based on the preferences of multiple agents. In the classic framework, where each agent holds a single location on a line and can misreport it, strategyproof mechanisms for choosing the location of the facility are well-understood.We revisit this problem in a more general framework. We assume that each agent may hold several locations on the line with different degrees of importance to the agent. We study mechanisms which elicit the locations of the agents and different levels of information about their importance. Further, in addition to the classic manipulation of misreporting locations, we introduce and study a new manipulation, whereby agents may hide some of their locations. We argue for its novelty in facility location and applicability in practice. Our results provide a complete picture of the power of strategyproof mechanisms eliciting different levels of information and with respect to each type of manipulation. Surprisingly, we show that in some cases hiding locations can be a strictly more powerful manipulation than misreporting locations.
Safwan Hossain, Evi Micha, Nisarg Shah
null
null
2,020
aaai
Defending with Shared Resources on a Network
null
In this paper we consider a defending problem on a network. In the model, the defender holds a total defending resource of R, which can be distributed to the nodes of the network. The defending resource allocated to a node can be shared by its neighbors. There is a weight associated with every edge that represents the efficiency with which defending resources are shared between neighboring nodes. We consider the setting when each attack can affect not only the target node, but its neighbors as well. Assuming that nodes in the network have different treasures to defend and different defending requirements, the defender aims at allocating the defending resource to the nodes to minimize the loss due to attack. We give polynomial time exact algorithms for two important special cases of the network defending problem. For the case when an attack can only affect the target node, we present an LP-based exact algorithm. For the case when defending resources cannot be shared, we present a max-flow-based exact algorithm. We show that the general problem is NP-hard, and we give a 2-approximation algorithm based on LP-rounding. Moreover, by giving a matching lower bound of 2 on the integrality gap on the LP relaxation, we show that our rounding is tight.
Minming Li, Long Tran-Thanh, Xiaowei Wu
null
null
2,020
aaai
Structure Learning for Approximate Solution of Many-Player Games
null
Games with many players are difficult to solve or even specify without adopting structural assumptions that enable representation in compact form. Such structure is generally not given and will not hold exactly for particular games of interest. We introduce an iterative structure-learning approach to search for approximate solutions of many-player games, assuming only black-box simulation access to noisy payoff samples. Our first algorithm, K-Roles, exploits symmetry by learning a role assignment for players of the game through unsupervised learning (clustering) methods. Our second algorithm, G3L, seeks sparsity by greedy search over local interactions to learn a graphical game model. Both algorithms use supervised learning (regression) to fit payoff values to the learned structures, in compact representations that facilitate equilibrium calculation. We experimentally demonstrate the efficacy of both methods in reaching quality solutions and uncovering hidden structure, on both perfectly and approximately structured game instances.
Zun Li, Michael Wellman
null
null
2,020
aaai
Price of Fairness in Budget Division and Probabilistic Social Choice
null
A group of agents needs to divide a divisible common resource (such as a monetary budget) among several uses or projects. We assume that agents have approval preferences over projects, and their utility is the fraction of the budget spent on approved projects. If we maximize utilitarian social welfare, the entire budget will be spent on a single popular project, even if a substantial fraction of the agents disapprove it. This violates the individual fair share axiom (IFS) which requires that for each agent, at least 1/n of the budget is spent on approved projects. We study the price of imposing such fairness axioms on utilitarian social welfare. We show that no division rule satisfying IFS can guarantee to achieve more than an O(1/√m) fraction of maximum utilitarian welfare, in the worst case. However, imposing stronger group fairness conditions (such as the core) does not come with an increased price, since both the conditional utilitarian rule and the Nash rule match this bound and guarantee an Ω(1/√m) fraction. The same guarantee is attained by the rule under which the spending on a project is proportional to its approval score. We also study a family of rules interpolating between the utilitarian and the Nash rule, quantifying a trade-off between welfare and group fairness. An experimental analysis by sampling using several probabilistic models shows that the conditional utilitarian rule achieves very high welfare on average.
Marcin Michorzewski, Dominik Peters, Piotr Skowron
null
null
2,020
aaai
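To make the rules discussed above concrete, the following sketch implements the conditional utilitarian rule under one common definition (each agent controls a 1/n share of the budget and spends it on the most widely approved project among those she approves). The definition, the tie-breaking, and the example ballots are assumptions for illustration.

```python
from collections import Counter

def conditional_utilitarian(approvals, projects):
    """approvals: one set of approved projects per agent; returns budget shares."""
    n = len(approvals)
    score = Counter(p for ballot in approvals for p in ballot)
    spending = {p: 0.0 for p in projects}
    for ballot in approvals:
        if not ballot:
            continue  # an agent approving nothing spends nothing in this sketch
        best = max(ballot, key=lambda p: score[p])  # her most popular approved project
        spending[best] += 1.0 / n
    return spending

print(conditional_utilitarian(
    approvals=[{"a", "b"}, {"a"}, {"b", "c"}, {"c"}, {"a"}, {"c"}],
    projects=["a", "b", "c"]))   # projects a and c each receive half the budget
```

Note that every agent's 1/n share is spent on a project she approves, so IFS holds by construction in this sketch.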
Practical Frank–Wolfe Method with Decision Diagrams for Computing Wardrop Equilibrium of Combinatorial Congestion Games
null
Computation of equilibria for congestion games has been an important research subject. In many realistic scenarios, each strategy of a congestion game is given by a combination of elements that satisfies certain constraints; such games are called combinatorial congestion games. For example, given a road network with some toll roads, each strategy of a routing game is a path (a combination of edges) whose total toll satisfies a certain budget constraint. Generally, given a ground set of n elements, the set of all such strategies, called the strategy set, can be exponentially large in n, and it often has a complicated structure; these issues make equilibrium computation very hard. In this paper, we propose a practical algorithm for such hard equilibrium computation problems. We use data structures, called zero-suppressed binary decision diagrams (ZDDs), to compactly represent strategy sets, and we develop a Frank–Wolfe-style iterative equilibrium computation algorithm whose per-iteration complexity is linear in the size of the ZDD representation. We prove that an ϵ-approximate Wardrop equilibrium can be computed in O(poly(n)/ϵ) iterations, and we improve the result to O(poly(n) log(1/ϵ)) for some special cases. Experiments confirm the practical utility of our method.
Kengo Nakamura, Shinsaku Sakaue, Norihito Yasuda
null
null
2,020
aaai
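The Frank–Wolfe part of the method above can be illustrated on a toy instance without any ZDD machinery. The sketch below runs plain Frank–Wolfe on a two-path routing game with affine edge latencies (an illustrative choice); strategies are enumerated explicitly, which is exactly what the paper's ZDD representation is designed to avoid.

```python
import numpy as np

paths = [(0,), (1,)]          # two parallel edges between the same endpoints
a = np.array([1.0, 0.0])      # free-flow latencies
b = np.array([1.0, 2.0])      # congestion slopes: cost_e(x) = a_e + b_e * x
demand = 1.0

x = np.array([demand, 0.0])   # initial edge flows
for k in range(1000):
    cost = a + b * x                              # current edge latencies
    path_cost = [sum(cost[e] for e in p) for p in paths]
    target = np.zeros_like(x)
    for e in paths[int(np.argmin(path_cost))]:
        target[e] += demand                       # all-or-nothing assignment
    gamma = 2.0 / (k + 2.0)                       # standard Frank-Wolfe step size
    x = (1 - gamma) * x + gamma * target

print("equilibrium edge flows:", x)
# At a Wardrop equilibrium 1 + x0 = 2 * x1 with x0 + x1 = 1, i.e. x ~ (1/3, 2/3).
```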
Lifting Preferences over Alternatives to Preferences over Sets of Alternatives: The Complexity of Recognizing Desirable Families of Sets
null
The problem of lifting a preference order on a set of objects to a preference order on a family of subsets of this set is a fundamental problem with a wide variety of applications in AI. The process is often guided by axioms postulating properties the lifted order should have. Well-known impossibility results by Kannai and Peleg and by Barberà and Pattanaik tell us that some desirable axioms – namely dominance and (strict) independence – are not jointly satisfiable for any linear order on the objects if all non-empty sets of objects are to be ordered. On the other hand, if not all non-empty sets of objects are to be ordered, the axioms are jointly satisfiable for all linear orders on the objects for some families of sets. Such families are very important for applications as they allow for the use of lifted orders, for example, in combinatorial voting. In this paper, we determine the computational complexity of recognizing such families. We show that it is Π2p-complete to decide for a given family of subsets whether dominance and independence or dominance and strict independence are jointly satisfiable for all linear orders on the objects if the lifted order needs to be total. Furthermore, we show that the problem remains coNP-complete if the lifted order can be incomplete. Additionally, we show that the complexity of these problems can increase exponentially if the family of sets is not given explicitly but via a succinct domain restriction.
Jan Maly
null
null
2,020
aaai
Reinforcement Mechanism Design: With Applications to Dynamic Pricing in Sponsored Search Auctions
null
In many social systems in which individuals and organizations interact with each other, there are no simple laws governing the environment, and agents' payoffs are often influenced by other agents' actions. We examine such a social system in the setting of sponsored search auctions and tackle the search engine's dynamic pricing problem by combining tools from both mechanism design and AI. In this setting, the environment not only changes over time but also behaves strategically. Over repeated interactions with bidders, the search engine can dynamically change the reserve prices and determine the optimal strategy that maximizes profit. We first train a buyer behavior model on a real bidding data set from a major search engine; the model predicts bids given the information disclosed by the search engine and the bidders' performance data from previous rounds. We then formulate the dynamic pricing problem as an MDP and apply a reinforcement-learning-based algorithm that optimizes reserve prices over time. Experiments demonstrate that our model outperforms static optimization strategies, including the ones currently in use, as well as several other dynamic ones.
Weiran Shen, Binghui Peng, Hanpeng Liu, Michael Zhang, Ruohan Qian, Yan Hong, Zhi Guo, Zongyao Ding, Pengjun Lu, Pingzhong Tang
null
null
2,020
aaai
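As a heavily simplified illustration of the dynamic pricing loop above, here is a stateless, bandit-style sketch in which reserve prices are discretised and learned against a toy bidder with uniform values. The bidder model, the discretisation, and the epsilon-greedy rule are assumptions; the paper uses a learned behaviour model and a full MDP formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
prices = np.linspace(0.1, 1.0, 10)       # candidate reserve prices (the actions)
Q = np.zeros(len(prices))                # estimated revenue per reserve price
counts = np.zeros(len(prices))

def revenue(reserve):
    bid = rng.uniform(0.0, 1.0)          # toy bidder with a U[0, 1] value
    return reserve if bid >= reserve else 0.0

eps = 0.1                                # epsilon-greedy exploration
for t in range(50000):
    a = rng.integers(len(prices)) if rng.random() < eps else int(np.argmax(Q))
    counts[a] += 1
    Q[a] += (revenue(prices[a]) - Q[a]) / counts[a]   # incremental sample average

print("learned reserve price:", prices[int(np.argmax(Q))])  # concentrates near 0.5
```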
Complexity of Computing the Shapley Value in Games with Externalities
null
We study the complexity of computing the Shapley value in games with externalities. We focus on two representations based on marginal contribution nets (embedded MC-nets and weighted MC-nets) and five extensions of the Shapley value to games with externalities. Our results show that while weighted MC-nets are more concise than embedded MC-nets, they have slightly worse computational properties when it comes to computing the Shapley value: two out of five extensions can be computed in polynomial time for embedded MC-nets and only one for weighted MC-nets.
Oskar Skibski
null
null
2,020
aaai
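For background on the quantity whose complexity is studied above, here is a brute-force sketch of the classical Shapley value for a game without externalities, computed by enumerating player orderings. The paper's setting adds externalities and compact MC-net representations on top of this baseline.

```python
from itertools import permutations

def shapley(players, v):
    """v maps a frozenset of players to its worth; exact value by enumeration."""
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)   # marginal contribution
            coalition = coalition | {p}
    return {p: phi[p] / len(perms) for p in players}

# Example: a 3-player majority game (worth 1 iff at least 2 players join).
v = lambda S: 1.0 if len(S) >= 2 else 0.0
print(shapley([1, 2, 3], v))   # symmetric players, so each gets 1/3
```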
Solving Online Threat Screening Games using Constrained Action Space Reinforcement Learning
null
Large-scale screening for potential threats with limited resources and screening capacity is a problem of interest at airports, seaports, and other ports of entry. Adversaries can observe screening procedures and arrive at a time when there will be gaps in screening due to limited resource capacities. This interaction between ports and adversaries has previously been modeled as a Stackelberg game, referred to as a Threat Screening Game (TSG). Given the significant complexity associated with solving TSGs and the uncertainty in customer arrivals, existing work has assumed that screenees arrive and are allocated security resources at the beginning of the time window. In practice, screenees such as airport passengers arrive in bursts correlated with flight times and are not bound by fixed time windows. To address this, we propose an online threat screening model in which the screening strategy is determined adaptively as each passenger arrives, while satisfying a hard bound on the acceptable risk of not screening a threat. To solve the online problem, we first reformulate it as a Markov Decision Process (MDP) in which the hard bound on risk translates to a constraint on the action space, and then solve the resulting MDP using Deep Reinforcement Learning (DRL). To this end, we provide a novel way to efficiently enforce linear inequality constraints on the action output in DRL. We show that our solution allows us to significantly reduce screenee wait time without compromising on risk.
Shah Sanket, Arunesh Sinha, Pradeep Varakantham, Perrault Andrew, Milind Tambe
null
null
2,020
aaai
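One generic way to keep a policy's action inside a set of linear inequality constraints, in the spirit of the constrained action space above, is to project the proposed action onto the feasible set as a post-processing step. The sketch below uses a small quadratic program via SciPy's SLSQP; the constraint matrix and numbers are made up, and this is a simple stand-in rather than the paper's construction.

```python
import numpy as np
from scipy.optimize import minimize

def project(action, A, b):
    """Euclidean projection of `action` onto {x >= 0 : A @ x <= b}."""
    res = minimize(lambda x: np.sum((x - action) ** 2),
                   x0=np.zeros(len(action)),                 # feasible start
                   constraints=[{"type": "ineq", "fun": lambda x: b - A @ x}],
                   bounds=[(0, None)] * len(action),
                   method="SLSQP")
    return res.x

# Illustrative constraints: total screening effort <= 1 and a risk bound
# 0.5*x0 + 0.2*x1 <= 0.3 over three passenger categories (numbers made up).
A = np.array([[1.0, 1.0, 1.0], [0.5, 0.2, 0.0]])
b = np.array([1.0, 0.3])
print(project(np.array([0.9, 0.8, 0.7]), A, b))
```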
Nice Invincible Strategy for the Average-Payoff IPD
null
The Iterated Prisoner's Dilemma (IPD) is a well-known benchmark for studying the long-term behaviours of rational agents. Many well-known strategies have been studied, from the simple tit-for-tat (TFT) to more involved ones like the zero-determinant and extortionate strategies studied recently by Press and Dyson. In this paper, we consider what we call invincible strategies. These are strategies that never lose against any other strategy in terms of average payoff in the limit. We provide a simple characterization of this class of strategies and show that invincible strategies can also be nice. We discuss their relationship with some important strategies and generalize our results to some typical repeated 2×2 games. It is known that, experimentally, nice strategies like TFT and extortionate ones can act as catalysts for the evolution of cooperation. Our experiments show that this is also the case for some invincible strategies that are neither nice nor extortionate.
Shiheng Wang, Fangzhen Lin
null
null
2,020
aaai
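Average payoffs in the limit, the notion used to define invincible strategies above, can be computed exactly for memory-one strategies from the stationary distribution of the induced Markov chain over outcomes. The sketch below does this for the standard payoff values (3, 0, 5, 1) and assumes the stationary distribution is unique for the pair of strategies supplied.

```python
import numpy as np

def average_payoffs(p, q):
    """p, q: probabilities of cooperating after outcomes (CC, CD, DC, DD),
    each from that player's own perspective (own move listed first)."""
    q_sw = [q[0], q[2], q[1], q[3]]        # translate q to player 1's state labels
    M = np.array([[p[i] * q_sw[i], p[i] * (1 - q_sw[i]),
                   (1 - p[i]) * q_sw[i], (1 - p[i]) * (1 - q_sw[i])]
                  for i in range(4)])      # transition matrix over outcomes
    vals, vecs = np.linalg.eig(M.T)        # stationary distribution (assumed unique)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    v = v / v.sum()
    return v @ np.array([3, 0, 5, 1]), v @ np.array([3, 5, 0, 1])

tft = [1, 0, 1, 0]     # tit-for-tat: cooperate iff the opponent just cooperated
alld = [0, 0, 0, 0]    # always defect
print(average_payoffs(tft, alld))   # both settle at the mutual-defection payoff 1
```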
Path Planning Problems with Side Observations—When Colonels Play Hide-and-Seek
null
Resource allocation games such as the famous Colonel Blotto (CB) and Hide-and-Seek (HS) games are often used to model a large variety of practical problems, but only in their one-shot versions. Indeed, due to their extremely large strategy spaces, it remains an open question how one can efficiently learn in these games. In this work, we show that the online CB and HS games can be cast as path planning problems with side-observations (SOPPP): at each stage, a learner chooses a path on a directed acyclic graph and suffers the sum of losses that are adversarially assigned to the corresponding edges; she then receives semi-bandit feedback with side-observations (i.e., she observes the losses on the chosen edges plus some others). We propose a novel algorithm, Exp3-OE, the first of its kind with a guaranteed efficient running time for SOPPP that does not require any auxiliary oracle. We provide an expected-regret bound for Exp3-OE in SOPPP matching the order of the best benchmark in the literature. Moreover, we introduce additional assumptions on the observability model under which we can further improve the regret bounds of Exp3-OE. We illustrate the benefit of using Exp3-OE in SOPPP by applying it to the online CB and HS games.
Dong Quan Vu, Patrick Loiseau, Alonso Silva, Long Tran-Thanh
null
null
2,020
aaai
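For orientation, the sketch below runs vanilla Exp3 over an explicitly enumerated, tiny path set with importance-weighted loss estimates. It ignores the side-observation machinery and the efficiency guarantees that distinguish Exp3-OE, and the graph and losses are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
paths = [(0, 2), (0, 3), (1, 3)]      # tiny DAG: paths listed as tuples of edge ids
K, T, eta = len(paths), 5000, 0.02
weights = np.ones(K)

for t in range(T):
    probs = weights / weights.sum()
    k = rng.choice(K, p=probs)                        # play a path
    edge_loss = rng.uniform(0.0, 0.5, size=4)
    edge_loss[3] += 0.5                               # edge 3 is consistently costly
    loss = sum(edge_loss[e] for e in paths[k]) / 2.0  # normalise path loss to [0, 1]
    weights[k] *= np.exp(-eta * loss / probs[k])      # importance-weighted update

print("final play probabilities:", weights / weights.sum())
# Most of the mass should end up on path (0, 2), which avoids the costly edge.
```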
Comparing Election Methods Where Each Voter Ranks Only Few Candidates
null
Election rules are formal processes that aggregate voters' preferences, typically to select a single winning candidate. Most of the election rules studied in the literature require the voters to rank the candidates from the most to the least preferred one. This method of eliciting preferences is impractical when the number of candidates to be ranked is large. We ask how well certain election rules (focusing on positional scoring rules and the Minimax rule) can be approximated from partial preferences collected through one of the following procedures: (i) randomized—we ask each voter to rank a random subset of ℓ candidates, and (ii) deterministic—we ask each voter to provide a ranking of her ℓ most preferred candidates (the ℓ-truncated ballot). We establish theoretical bounds on the approximation ratios and complement our theoretical analysis with computer simulations. We find that it is usually better to use the randomized approach.
Matthias Bentert, Piotr Skowron
null
null
2,020
aaai
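The deterministic (ℓ-truncated ballot) procedure above can be paired with a simple completion heuristic. The sketch below scores a ranked candidate in position j with m-1-j Borda points and gives unranked candidates zero, which is one natural completion among several and not necessarily the one analysed in the paper.

```python
from collections import defaultdict

def truncated_borda(ballots, m):
    """ballots: list of lists, each holding a voter's l most preferred candidates."""
    score = defaultdict(float)
    for ballot in ballots:
        for j, c in enumerate(ballot):
            score[c] += m - 1 - j       # unranked candidates implicitly score 0
    return max(score, key=score.get)

ballots = [["a", "b"], ["a", "c"], ["b", "a"], ["c", "b"]]
print(truncated_borda(ballots, m=4))    # "a" wins under this completion
```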
Multi-Type Resource Allocation with Partial Preferences
null
We propose multi-type probabilistic serial (MPS) and multi-type random priority (MRP) as extensions of the well-known PS and RP mechanisms to multi-type resource allocation problems (MTRAs) with partial preferences. In our setting, there are multiple types of divisible items and a group of agents who have partial order preferences over bundles consisting of one item of each type. We show that for the unrestricted domain of partial order preferences, no mechanism satisfies both sd-efficiency and sd-envy-freeness. Notwithstanding this impossibility result, our main message is positive: when agents' preferences are represented by acyclic CP-nets, MPS satisfies sd-efficiency, sd-envy-freeness, ordinal fairness, and upper invariance, while MRP satisfies ex-post efficiency, sd-strategyproofness, and upper invariance, recovering the properties of PS and RP. In addition, we propose a hybrid mechanism, multi-type general dictatorship (MGD), combining the ideas of MPS and MRP, which satisfies sd-efficiency, equal treatment of equals, and decomposability under the unrestricted domain of partial order preferences.
Haibin Wang, Sujoy Sikdar, Xiaoxi Guo, Lirong Xia, Yongzhi Cao, Hanpin Wang
null
null
2,020
aaai
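As background for MPS, here is the classic single-type probabilistic serial (simultaneous eating) algorithm that it generalises, assuming complete strict preference lists and unit supply per item; the example profile is illustrative.

```python
def probabilistic_serial(prefs):
    """prefs: one complete strict preference list (most preferred first) per agent."""
    items = {i for pref in prefs for i in pref}
    supply = {i: 1.0 for i in items}
    alloc = [{i: 0.0 for i in items} for _ in prefs]
    while any(s > 1e-12 for s in supply.values()):
        # each agent eats her most preferred item that still has supply
        target = [next(i for i in pref if supply[i] > 1e-12) for pref in prefs]
        eaters = {i: target.count(i) for i in set(target)}
        # advance time until the first targeted item runs out
        dt = min(supply[i] / eaters[i] for i in eaters)
        for agent, i in enumerate(target):
            alloc[agent][i] += dt
            supply[i] -= dt
        supply = {i: max(s, 0.0) for i, s in supply.items()}
    return alloc

print(probabilistic_serial([["a", "b"], ["a", "b"]]))  # each agent: 1/2 of a, 1/2 of b
```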
Robust Market Equilibria with Uncertain Preferences
null
The problem of allocating scarce items to individuals is an important practical question in market design. An increasingly popular set of mechanisms for this task uses the concept of market equilibrium: individuals report their preferences and have a budget of real or fake currency, and a set of prices and allocations is computed that sets demand equal to supply. An important real-world issue with such mechanisms is that individual valuations are often only imperfectly known. In this paper, we show how concepts from classical market equilibrium can be extended to reflect such uncertainty. We show that in linear, divisible Fisher markets a robust market equilibrium (RME) always exists; this also holds in settings where buyers may retain unspent money. We provide a theoretical analysis of the allocative properties of RME in terms of envy and regret. Although RMEs are hard to compute for general uncertainty sets, we consider some natural and tractable uncertainty sets which lead to well-behaved formulations of the problem that can be solved via modern convex programming methods. Finally, we show that very mild uncertainty about valuations can cause RME allocations to outperform those which take the estimates as having no underlying uncertainty.
Riley Murray, Christian Kroer, Alex Peysakhovich, Parikshit Shah
null
null
2,020
aaai
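The convex programming route mentioned above builds on the classical, non-robust case. The sketch below solves the Eisenberg–Gale program for a tiny linear Fisher market with CVXPY and reads equilibrium prices off the duals; the valuations and budgets are made up, and the robust extension with uncertainty sets is not attempted here.

```python
import cvxpy as cp
import numpy as np

valuations = np.array([[1.0, 2.0], [2.0, 1.0]])   # buyers x items
budgets = np.array([1.0, 1.0])

X = cp.Variable(valuations.shape, nonneg=True)     # allocation matrix
utilities = cp.sum(cp.multiply(valuations, X), axis=1)
objective = cp.Maximize(budgets @ cp.log(utilities))   # Eisenberg-Gale objective
constraints = [cp.sum(X, axis=0) <= 1]                 # unit supply of each item
cp.Problem(objective, constraints).solve()

print("allocation:\n", X.value)                    # each buyer gets her preferred item
print("item prices:", constraints[0].dual_value)   # duals act as equilibrium prices
```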
A Simple, Fast, and Safe Mediator for Congestion Management
null
Congestion is a severe problem in cities. A large population with little information about one another's preferences rarely reaches equilibrium, which causes unexpected congestion. Controlling such congestion requires us to collect the information dispersed in the market and to coordinate actions among agents. We aim to design a mediator that a) induces a game with high social welfare in equilibrium, b) computes an equilibrium efficiently, c) works without a common prior, and d) performs well even when only some of the agents in the market use the mediator. We propose a mediator based on a version of best response dynamics (BRD). We prove that, in a simple setting with two resources, “good behavior” (reporting truthfully and following the recommendation) forms an (approximate) ex-post Nash equilibrium in the mediated game; in this equilibrium, the welfare is close to the first-best when preferences diverge enough. Furthermore, under a certain behavioral assumption, those who are not using the mediator can always enjoy a non-negative payoff gain by joining it, even without the full participation of others. Additionally, our experimental results suggest that these results remain valid in more general settings.
Kei Ikegami, Kyohei Okumura, Takumi Yoshikawa
null
null
2,020
aaai
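The mediator above is built on best response dynamics. The following sketch runs plain BRD in a toy two-resource congestion game with player-specific bonuses; the utility model and all parameters are illustrative, not the paper's mechanism.

```python
import random

random.seed(0)
n = 20
bonus = [[random.uniform(0, 1), random.uniform(0, 1)] for _ in range(n)]  # per-agent tastes
choice = [random.randint(0, 1) for _ in range(n)]                         # initial picks

def utility(i, r, counts):
    others_on_r = counts[r] - (1 if choice[i] == r else 0)
    return bonus[i][r] - (others_on_r + 1) / n     # taste minus congestion if i uses r

for sweep in range(100):                           # repeated best-response sweeps
    changed = False
    for i in range(n):
        counts = [choice.count(0), choice.count(1)]
        best = max((0, 1), key=lambda r: utility(i, r, counts))
        if best != choice[i]:
            choice[i], changed = best, True
    if not changed:
        break                                      # no one wants to deviate

print("loads at (approximate) equilibrium:", choice.count(0), choice.count(1))
```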
Perpetual Voting: Fairness in Long-Term Decision Making
null
In this paper we introduce a new voting formalism to support long-term collective decision making: perpetual voting rules. These are voting rules that take the history of previous decisions into account. Due to this additional information, perpetual voting rules may offer temporal fairness guarantees that cannot be achieved in singular decisions. In particular, such rules may enable minorities to have a fair (proportional) influence on the decision process and thus foster long-term participation of minorities. This paper explores the proposed voting rules via an axiomatic analysis as well as a quantitative evaluation by computer simulations. We identify two perpetual voting rules as particularly recommendable in long-term collective decision making.
Martin Lackner
null
null
2,020
aaai
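To illustrate what a history-aware rule can look like, the sketch below implements a generic perpetual approval rule in which a voter's weight halves each time one of her approved alternatives wins. This rule is an illustrative assumption, not necessarily one of the specific rules the paper recommends, but it shows how a persistent minority eventually wins some rounds.

```python
def perpetual_round(approvals, satisfied):
    """approvals: one approval set per voter; satisfied: past wins per voter."""
    alternatives = set().union(*approvals)
    def support(x):
        return sum(0.5 ** satisfied[i]
                   for i, ballot in enumerate(approvals) if x in ballot)
    winner = max(alternatives, key=support)
    satisfied = [s + (1 if winner in ballot else 0)
                 for s, ballot in zip(satisfied, approvals)]
    return winner, satisfied

satisfied, history = [0, 0, 0, 0], []
for _ in range(6):
    # a fixed 3-vs-1 split of approvals in every round
    winner, satisfied = perpetual_round([{"a"}, {"a"}, {"a"}, {"b"}], satisfied)
    history.append(winner)
print(history)   # the minority alternative "b" wins some rounds over time
```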
Double-Oracle Sampling Method for Stackelberg Equilibrium Approximation in General-Sum Extensive-Form Games
null
The paper presents a new method for approximating Strong Stackelberg Equilibrium in general-sum sequential games with imperfect information and perfect recall. The proposed approach is generic, as it does not rely on any specific properties of a particular game model. The method is based on iteratively interleaving two phases: (1) guided Monte Carlo Tree Search sampling of the Follower's strategy space and (2) building the Leader's behavior strategy tree for which the sampled Follower's strategy is an optimal response. The solution scheme is evaluated with respect to the Leader's expected utility and time requirements on three sets of interception games with variable characteristics, played on graphs. A comparison with three state-of-the-art MILP/LP-based methods shows that in the vast majority of test cases the proposed simulation-based approach finds optimal Leader's strategies, while outperforming the competing methods in terms of time scalability and memory requirements.
Jan Karwowski, Jacek Mańdziuk
null
null
2,020
aaai
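For contrast with the sampling approach above, the sketch below computes an exact Strong Stackelberg Equilibrium of a tiny normal-form game with the classic multiple-LPs method (one LP per follower best response). The payoff matrices are made up, and this exhaustive method is precisely what becomes infeasible in the large extensive-form games the paper targets.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[2.0, 0.0], [3.0, 1.0]])   # leader payoffs (rows: leader actions)
B = np.array([[1.0, 0.0], [0.0, 2.0]])   # follower payoffs (columns: follower actions)

n_leader, n_follower = A.shape
best_val, best_x = -np.inf, None
for j in range(n_follower):
    # maximize x @ A[:, j]  s.t.  x @ B[:, j] >= x @ B[:, j'] for all j', x in simplex
    c = -A[:, j]
    A_ub = np.array([B[:, jp] - B[:, j] for jp in range(n_follower) if jp != j])
    b_ub = np.zeros(len(A_ub))
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  A_eq=np.ones((1, n_leader)), b_eq=np.array([1.0]),
                  bounds=[(0, 1)] * n_leader, method="highs")
    if res.success and -res.fun > best_val:
        best_val, best_x = -res.fun, res.x

print("leader commitment:", best_x, "leader value:", best_val)
```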
Information Elicitation Mechanisms for Statistical Estimation
null
We study learning statistical properties from strategic agents with private information. In this problem, agents must be incentivized to truthfully reveal their information even when it cannot be directly verified. Moreover, the information reported by the agents must be aggregated into a statistical estimate. We study two fundamental statistical properties: estimating the mean of an unknown Gaussian, and linear regression with Gaussian error. The information of each agent is one point in a Euclidean space. Our main results are two mechanisms for each of these problems which optimally aggregate the information of agents in the truth-telling equilibrium: (i) a minimal (non-revelation) mechanism for large populations, in which agents only need to report one value, but that value need not be their point; and (ii) a mechanism for small populations that is non-minimal, in which agents need to answer more than one question. These mechanisms are “informed truthful” mechanisms where reporting unaltered data (truth-telling) (1) forms a strict Bayesian Nash equilibrium and (2) has strictly higher welfare than any oblivious equilibrium where agents' strategies are independent of their private signals. We also show a minimal revelation mechanism (each agent only reports her signal) for a restricted setting and use an impossibility result to prove the necessity of this restriction. We build upon the peer prediction literature in the single-question setting; however, most previous work in this area focuses on discrete signals, whereas our setting is inherently continuous, and we further simplify the agents' reports.
Yuqing Kong, Grant Schoenebeck, Biaoshuai Tao, Fang-Yi Yu
null
null
2,020
aaai
Balancing the Tradeoff between Profit and Fairness in Rideshare Platforms during High-Demand Hours
null
Rideshare platforms, when assigning requests to drivers, tend to maximize profit for the system and/or minimize waiting time for riders. Such platforms can exacerbate biases that drivers may have over certain types of requests. We consider the case of peak hours, when the demand for rides exceeds the supply of drivers. Drivers are well aware of their advantage during peak hours and can choose to be selective about which rides to accept. Moreover, in such a scenario, if the assignment of requests to drivers (by the platform) is made only to maximize profit and/or minimize wait time for riders, requests of a certain type (e.g., from a non-popular pickup location, or to a non-popular drop-off location) might never be assigned to a driver. Such a system can be highly unfair to riders. However, increasing fairness might come at a cost to the overall profit made by the rideshare platform. To balance these conflicting goals, we present a flexible, non-adaptive algorithm, NAdap, that allows the platform designer to control the profit and fairness of the system via parameters α and β respectively. We model the matching problem as an online bipartite matching problem where the set of drivers is offline and requests arrive online. Upon the arrival of a request, we use NAdap to assign it to a driver (who might then choose to accept or reject it) or to reject the request. We formalize the measures of profit and fairness in our setting and show that, by using NAdap, the competitive ratios for the profit and fairness measures are no worse than α/e and β/e respectively. Extensive experimental results on both real-world and synthetic datasets confirm the validity of our theoretical lower bounds. Additionally, they show that NAdap, under some choices of (α, β), can beat two natural heuristics, Greedy and Uniform, on both fairness and profit. Code is available at: https://github.com/nvedant07/rideshare-fairness-peak/.
Vedant Nanda, Pan Xu, Karthik Abhinav Sankararaman, John Dickerson, Aravind Srinivasan
null
null
2,020
aaai
An Analysis Framework for Metric Voting based on LP Duality
null
Distortion-based analysis has established itself as a fruitful framework for comparing voting mechanisms. m voters and n candidates are jointly embedded in an (unknown) metric space, and the voters submit rankings of candidates by non-decreasing distance from themselves. Based on the submitted rankings, the social choice rule chooses a winning candidate; the quality of the winner is the sum of the (unknown) distances to the voters. The rule's choice will in general be suboptimal, and the worst-case ratio between the cost of its chosen candidate and that of the optimal candidate is called the rule's distortion. It was shown in prior work that every deterministic rule has distortion at least 3, while the Copeland rule and related rules guarantee distortion at most 5; a very recent result gave a rule with distortion 2 + √5 ≈ 4.236. We provide a framework based on LP duality and flow interpretations of the dual which gives a simpler and more unified way to prove upper bounds on the distortion of social choice rules. We illustrate the utility of this approach with three examples. First, we show that the Ranked Pairs and Schulze rules have distortion Θ(√n). Second, we give a fairly simple proof of a strong generalization of the upper bound of 5 on the distortion of Copeland, to social choice rules with short paths from the winning candidate to the optimal candidate in generalized weak preference graphs. A special case of this result recovers the recent 2 + √5 guarantee. Finally, our framework naturally suggests a combinatorial rule that is a strong candidate for achieving distortion 3, which had also been proposed in recent work. We prove that the distortion bound of 3 would follow from any of three combinatorial conjectures we formulate.
David Kempe
null
null
2,020
aaai
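The distortion being bounded above can be measured empirically on small instances. The sketch below embeds voters and candidates on a line, computes the Copeland winner from the induced rankings, and reports the ratio of its social cost to the optimum; the instance is random and illustrative, and no LP-duality machinery is involved.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 5, 51                                     # candidates, voters (odd to avoid ties)
cand = rng.uniform(0, 1, m)
voters = rng.uniform(0, 1, n)
dist = np.abs(voters[:, None] - cand[None, :])   # voter-candidate distances

# Pairwise victories derived from the induced rankings (closer means preferred).
wins = np.zeros(m)
for a in range(m):
    for b in range(m):
        if a != b and np.sum(dist[:, a] < dist[:, b]) > n / 2:
            wins[a] += 1
copeland_winner = int(np.argmax(wins))

costs = dist.sum(axis=0)                         # social cost of each candidate
print("distortion of Copeland on this instance:",
      costs[copeland_winner] / costs.min())
```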