Dataset schema (fields listed per record, in order):
title: string (length 5 to 246)
categories: string (length 5 to 94)
abstract: string (length 54 to 5.03k)
authors: string (length 0 to 6.72k)
doi: string (length 12 to 54)
id: string (length 6 to 10)
year: float64 (2.02k)
venue: string (13 classes)
Deepfake Network Architecture Attribution
null
With the rapid progress of generation technology, it has become necessary to attribute the origin of fake images. Existing works on fake image attribution perform multi-class classification on several Generative Adversarial Network (GAN) models and obtain high accuracies. While encouraging, these works are restricted to model-level attribution, only capable of handling images generated by seen models with a specific seed, loss and dataset, which limits their use in real-world scenarios where fake images may be generated by privately trained models. This motivates us to ask whether it is possible to attribute fake images to the source models' architectures even if they are fine-tuned or retrained under different configurations. In this work, we present the first study on Deepfake Network Architecture Attribution, attributing fake images at the architecture level. Based on the observation that a GAN architecture is likely to leave globally consistent fingerprints while traces left by model weights vary across regions, we provide a simple yet effective solution named DNA-Det for this problem. Extensive experiments on multiple cross-test setups and a large-scale dataset demonstrate the effectiveness of DNA-Det.
Tianyun Yang, Ziyao Huang, Juan Cao, Lei Li, Xirong Li
null
null
2022
aaai
Learning and Dynamical Models for Sub-seasonal Climate Forecasting: Comparison and Collaboration
null
Sub-seasonal forecasting (SSF) is the prediction of key climate variables such as temperature and precipitation on the 2-week to 2-month time horizon. Skillful SSF would have substantial societal value in areas such as agricultural productivity, hydrology and water resource management, and emergency planning for extreme events such as droughts and wildfires. Despite its societal importance, SSF has remained a challenging problem compared to both short-term weather forecasting and long-term seasonal forecasting. Recent studies have shown the potential of machine learning (ML) models to advance SSF. In this paper, for the first time, we perform a fine-grained comparison of a suite of modern ML models with state-of-the-art physics-based dynamical models from the Subseasonal Experiment (SubX) project for SSF in the western contiguous United States. Additionally, we explore mechanisms to enhance the ML models by using forecasts from dynamical models. Empirical results illustrate that, on average, ML models outperform dynamical models, although the ML models tend to generate forecasts of more conservative magnitude than the SubX models. Further, we illustrate that ML models make forecasting errors under extreme weather conditions, e.g., cold waves due to the polar vortex, highlighting the need for separate models for extreme events. Finally, we show that suitably incorporating dynamical model forecasts as inputs to ML models can substantially improve the forecasting performance of the ML models. The SSF dataset constructed for the work and code for the ML models are released along with the paper for the benefit of the artificial intelligence community.
Sijie He, Xinyan Li, Laurie Trenary, Benjamin A Cash, Timothy DelSole, Arindam Banerjee
null
null
2022
aaai
Learning Human Driving Behaviors with Sequential Causal Imitation Learning
null
Learning human driving behaviors is an efficient approach for self-driving vehicles. Traditional Imitation Learning (IL) methods assume that the expert demonstrations follow Markov Decision Processes (MDPs). However, in reality, this assumption does not always hold true. Spurious correlation may exist through the paths of historical variables because of the existence of unobserved confounders. Accounting for the latent causal relationships from unobserved variables to outcomes, this paper proposes Sequential Causal Imitation Learning (SeqCIL) for imitating driver behaviors. We develop a sequential causal template that generalizes the default MDP settings to one with Unobserved Confounders (MDPUC-HD). Then we develop a sufficient graphical criterion to determine when ignoring causality leads to poor performances in MDPUC-HD. Through the framework of Adversarial Imitation Learning, we develop a procedure to imitate the expert policy by blocking π-backdoor paths at each time step. Our methods are evaluated on a synthetic dataset and a real-world highway driving dataset, both demonstrating that the proposed procedure significantly outperforms non-causal imitation learning methods.
Kangrui Ruan, Xuan Di
null
null
2022
aaai
AlphaHoldem: High-Performance Artificial Intelligence for Heads-Up No-Limit Poker via End-to-End Reinforcement Learning
null
Heads-up no-limit Texas hold’em (HUNL) is the quintessential game with imperfect information. Representative prior works like DeepStack and Libratus heavily rely on counterfactual regret minimization (CFR) and its variants to tackle HUNL. However, the prohibitive computation cost of CFR iteration makes it difficult for subsequent researchers to learn the CFR model in HUNL and apply it in other practical applications. In this work, we present AlphaHoldem, a high-performance and lightweight HUNL AI obtained with an end-to-end self-play reinforcement learning framework. The proposed framework adopts a pseudo-siamese architecture to directly learn from the input state information to the output actions by competing the learned model against its different historical versions. The main technical contributions include a novel state representation of card and betting information, a multi-task self-play training loss function, and a new model evaluation and selection metric to generate the final model. In a study involving 100,000 hands of poker, AlphaHoldem defeats Slumbot and DeepStack using only one PC with three days of training. At the same time, AlphaHoldem only takes 2.9 milliseconds for each decision using only a single GPU, more than 1,000 times faster than DeepStack. We release the history data among AlphaHoldem, Slumbot, and top human professionals in the author’s GitHub repository to facilitate further studies in this direction.
Enmin Zhao, Renye Yan, Jinqiu Li, Kai Li, Junliang Xing
null
null
2022
aaai
Diaformer: Automatic Diagnosis via Symptoms Sequence Generation
null
Automatic diagnosis has attracted increasing attention but remains challenging due to multi-step reasoning. Recent works usually address it by reinforcement learning methods. However, these methods show low efficiency and require task-specific reward functions. Considering that the conversation between doctor and patient allows doctors to probe for symptoms and make diagnoses, the diagnosis process can be naturally seen as the generation of a sequence including symptoms and diagnoses. Inspired by this, we reformulate automatic diagnosis as a symptom Sequence Generation (SG) task and propose a simple but effective automatic Diagnosis model based on Transformer (Diaformer). We first design a symptom attention framework to learn the generation of symptom inquiries and the disease diagnosis. To alleviate the discrepancy between sequential generation and the unordered nature of implicit symptoms, we further design three orderless training mechanisms. Experiments on three public datasets show that our model outperforms baselines on disease diagnosis by 1%, 6% and 11.5% with the highest training efficiency. Detailed analysis of symptom inquiry prediction demonstrates the potential of applying symptom sequence generation to automatic diagnosis.
Junying Chen, Dongfang Li, Qingcai Chen, Wenxiu Zhou, Xin Liu
null
null
2022
aaai
Dynamic Manifold Learning for Land Deformation Forecasting
null
Landslides refer to occurrences of massive ground movements due to geological (and meteorological) factors, and can have a disastrous impact on property and the economy, and even lead to loss of life. Advances in remote sensing provide accurate and continuous terrain monitoring, enabling the study and analysis of land deformation which, in turn, can be used for possible landslide forecasting. Prior studies either rely on independent observations for displacement prediction or model static land characteristics without considering the subtle interactions between different locations and the dynamic changes of the surface conditions. We present DyLand -- Dynamic Manifold Learning with Normalizing Flows for Land deformation prediction -- a novel framework for learning dynamic structures of the terrain surface and improving the performance of land deformation prediction. DyLand models the spatial connections of InSAR measurements and estimates conditional distributions of deformations on the terrain manifold with a novel normalizing flow-based method. Instead of modeling the stable terrains, it incorporates surface permutations and captures the innate dynamics of the land surface while allowing for tractable likelihood estimates on the manifold. Our extensive evaluations on curated InSAR datasets from continuous monitoring of slopes prone to landslides show that DyLand outperforms existing benchmarking models.
Fan Zhou, Rongfan Li, Qiang Gao, Goce Trajcevski, Kunpeng Zhang, Ting Zhong
null
null
2022
aaai
FactorVAE: A Probabilistic Dynamic Factor Model Based on Variational Autoencoder for Predicting Cross-Sectional Stock Returns
null
As an asset pricing model in economics and finance, the factor model has been widely used in quantitative investment. Towards building more effective factor models, recent years have witnessed a paradigm shift from linear models to more flexible, nonlinear, data-driven machine learning models. However, due to the low signal-to-noise ratio of financial data, it is quite challenging to learn effective factor models. In this paper, we propose a novel factor model, FactorVAE, as a probabilistic model with inherent randomness for noise modeling. Essentially, our model integrates the dynamic factor model (DFM) with the variational autoencoder (VAE) in machine learning, and we propose a prior-posterior learning method based on VAE, which can effectively guide the learning of the model by approximating an optimal posterior factor model with future information. Particularly, considering that risk modeling is important for noisy stock data, FactorVAE can estimate the variances from the distribution over the latent space of the VAE, in addition to predicting returns. Experiments on real stock market data demonstrate the effectiveness of FactorVAE, which outperforms various baseline methods.
Yitong Duan, Lei Wang, Qizhong Zhang, Jian Li
null
null
2022
aaai
AXM-Net: Implicit Cross-Modal Feature Alignment for Person Re-identification
null
Cross-modal person re-identification (Re-ID) is critical for modern video surveillance systems. The key challenge is to align cross-modality representations with the semantic information present for a person while ignoring background information. This work presents a novel convolutional neural network (CNN) based architecture designed to learn semantically aligned cross-modal visual and textual representations. The underlying building block, named AXM-Block, is a unified multi-layer network that dynamically exploits the multi-scale knowledge from both modalities and re-calibrates each modality according to shared semantics. To complement the convolutional design, contextual attention is applied in the text branch to manipulate long-term dependencies. Moreover, we propose a unique design to enhance visual part-based feature coherence and locality information. Our framework is novel in its ability to implicitly learn aligned semantics between modalities during the feature learning stage. The unified feature learning effectively utilizes textual data as a super-annotation signal for visual representation learning and automatically rejects irrelevant information. The entire AXM-Net is trained end-to-end on CUHK-PEDES data. We report results on two tasks, person search and cross-modal Re-ID. The AXM-Net outperforms the current state-of-the-art (SOTA) methods and achieves 64.44% Rank@1 on the CUHK-PEDES test set. It also outperforms them by >10% in cross-viewpoint text-to-image Re-ID scenarios on the CrossRe-ID and CUHK-SYSU datasets.
Ammarah Farooq, Muhammad Awais, Josef Kittler, Syed Safwan Khalid
null
null
2022
aaai
Solving PDE-Constrained Control Problems Using Operator Learning
null
The modeling and control of complex physical systems are essential in real-world problems. We propose a novel framework that is generally applicable to solving PDE-constrained optimal control problems by introducing surrogate models for PDE solution operators with special regularizers. The procedure of the proposed framework is divided into two phases: solution operator learning for PDE constraints (Phase 1) and searching for optimal control (Phase 2). Once the surrogate model is trained in Phase 1, the optimal control can be inferred in Phase 2 without intensive computations. Our framework can be applied to both data-driven and data-free cases. We demonstrate the successful application of our method to various optimal control problems for different control variables with diverse PDE constraints from the Poisson equation to Burgers' equation.
Rakhoon Hwang, Jae Yong Lee, Jin Young Shin, Hyung Ju Hwang
null
null
2022
aaai
A Unified Framework for Real Time Motion Completion
null
Motion completion, as a challenging and fundamental problem, is of great significance in film and game applications. For different motion completion application scenarios (in-betweening, in-filling, and blending), most previous methods deal with the completion problems with case-by-case methodology designs. In this work, we propose a simple but effective method to solve multiple motion completion problems under a unified framework and achieve a new state-of-the-art accuracy on LaFAN1 (+17% better than the previous SOTA) under multiple evaluation settings. Inspired by the recent great success of self-attention-based transformer models, we consider the completion as a sequence-to-sequence prediction problem. Our method consists of three modules - a standard transformer encoder with self-attention that learns long-range dependencies of input motions, a trainable mixture embedding module that models temporal information and encodes different key-frame combinations in a unified form, and a new motion perceptual loss for better capturing high-frequency movements. Our method can predict multiple missing frames within a single forward propagation in real time, removing the need for post-processing. We also introduce a novel large-scale dance movement dataset for exploring the scaling capability of our method and its effectiveness in complex motion applications.
Yinglin Duan, Yue Lin, Zhengxia Zou, Yi Yuan, Zhehui Qian, Bohan Zhang
null
null
2022
aaai
Intra-Inter Subject Self-Supervised Learning for Multivariate Cardiac Signals
null
Learning information-rich and generalizable representations effectively from unlabeled multivariate cardiac signals to identify abnormal heart rhythms (cardiac arrhythmias) is valuable in real-world clinical settings but often challenging due to their complex temporal dynamics. Cardiac arrhythmias can vary significantly in temporal patterns even for the same patient (i.e., intra subject difference). Meanwhile, the same type of cardiac arrhythmia can show different temporal patterns among different patients due to different cardiac structures (i.e., inter subject difference). In this paper, we address the challenges by proposing an Intra-Inter Subject Self-Supervised Learning (ISL) model that is customized for multivariate cardiac signals. Our proposed ISL model integrates medical knowledge into self-supervision to effectively learn from intra-inter subject differences. In intra subject self-supervision, the ISL model first extracts heartbeat-level features from each subject using a channel-wise attentional CNN-RNN encoder. Then a stationarity test module is employed to capture the temporal dependencies between heartbeats. In inter subject self-supervision, we design a set of data augmentations according to the clinical characteristics of cardiac signals and perform contrastive learning among subjects to learn distinctive representations for various types of patients. Extensive experiments on three real-world datasets were conducted. In a semi-supervised transfer learning scenario, our pre-trained ISL model leads to about a 10% improvement over supervised training when only 1% of labeled data is available, suggesting strong generalizability and robustness of the model.
Xiang Lan, Dianwen Ng, Shenda Hong, Mengling Feng
null
null
2022
aaai
SCIR-Net: Structured Color Image Representation Based 3D Object Detection Network from Point Clouds
null
3D object detection from point cloud data has become an indispensable part of autonomous driving. Previous works for processing point clouds rely on either projection or voxelization. However, projection-based methods suffer from information loss while voxelization-based methods bring huge computation. In this paper, we propose to encode point clouds into a structured color image representation (SCIR) and utilize a 2D CNN to fulfill the 3D detection task. Specifically, we use the structured color image encoding module to convert the irregular 3D point clouds into a squared 2D tensor image, where each point corresponds to a spatial point in the 3D space. Furthermore, in order to fit the Euclidean structure, we apply feature normalization to parameterize the 2D tensor image onto a regular dense color image. Then, we conduct repeated multi-scale fusion across different levels so as to augment the initial features and learn scale-aware feature representations for box prediction. Extensive experiments on the KITTI benchmark, the Waymo Open Dataset and the more challenging nuScenes dataset show that our proposed method yields decent results and demonstrate the effectiveness of such representations for point clouds.
Qingdong He, Hao Zeng, Yi Zeng, Yijun Liu
null
null
2022
aaai
Can Machines Read Coding Manuals Yet? – A Benchmark for Building Better Language Models for Code Understanding
null
Code understanding is an increasingly important application of Artificial Intelligence. A fundamental aspect of understanding code is understanding text about code, e.g., documentation and forum discussions. Pre-trained language models (e.g., BERT) are a popular approach for various NLP tasks, and there are now a variety of benchmarks, such as GLUE, to help improve the development of such models for natural language understanding. However, little is known about how well such models work on textual artifacts about code, and we are unaware of any systematic set of downstream tasks for such an evaluation. In this paper, we derive a set of benchmarks (BLANCA - Benchmarks for LANguage models on Coding Artifacts) that assess code understanding based on tasks such as predicting the best answer to a question in a forum post, finding related forum posts, or predicting classes related in a hierarchy from class documentation. We evaluate the performance of current state-of-the-art language models on these tasks and show that there is significant improvement on each task from fine-tuning. We also show that multi-task training over BLANCA tasks helps build better language models for code understanding.
Ibrahim Abdelaziz, Julian Dolby, Jamie McCusker, Kavitha Srinivas
null
null
2022
aaai
Zero-Shot Audio Source Separation through Query-Based Learning from Weakly-Labeled Data
null
Deep learning techniques for separating audio into different sound sources face several challenges. Standard architectures require training separate models for different types of audio sources. Although some universal separators employ a single model to target multiple sources, they have difficulty generalizing to unseen sources. In this paper, we propose a three-component pipeline to train a universal audio source separator from a large, but weakly-labeled dataset: AudioSet. First, we propose a transformer-based sound event detection system for processing weakly-labeled training data. Second, we devise a query-based audio separation model that leverages this data for model training. Third, we design a latent embedding processor to encode queries that specify audio targets for separation, allowing for zero-shot generalization. Our approach uses a single model for source separation of multiple sound types, and relies solely on weakly-labeled data for training. In addition, the proposed audio separator can be used in a zero-shot setting, learning to separate types of audio sources that were never seen in training. To evaluate the separation performance, we test our model on MUSDB18, while training on the disjoint AudioSet. We further verify the zero-shot performance by conducting another experiment on audio source types that are held-out from training. The model achieves comparable Source-to-Distortion Ratio (SDR) performance to current supervised models in both cases.
Ke Chen, Xingjian Du, Bilei Zhu, Zejun Ma, Taylor Berg-Kirkpatrick, Shlomo Dubnov
null
null
2022
aaai
No Task Left Behind: Multi-Task Learning of Knowledge Tracing and Option Tracing for Better Student Assessment
null
Student assessment is one of the most fundamental tasks in the field of AI Education (AIEd). One of the most common approaches to student assessment is Knowledge Tracing (KT), which evaluates a student's knowledge state by predicting whether the student will answer a given question correctly or not. However, in the context of multiple choice (polytomous) questions, conventional KT approaches are limited in that they only consider the binary (dichotomous) correctness label (i.e., correct or incorrect), and disregard the specific option chosen by the student. Meanwhile, Option Tracing (OT) attempts to model a student by predicting which option they will choose for a given question, but overlooks the correctness information. In this paper, we propose Dichotomous-Polytomous Multi-Task Learning (DP-MTL), a multi-task learning framework that combines KT and OT for more precise student assessment. In particular, we show that the KT objective acts as a regularization term for OT in the DP-MTL framework, and propose an appropriate architecture for applying our method on top of existing deep learning-based KT models. We experimentally confirm that DP-MTL significantly improves both KT and OT performance, and also benefits downstream tasks such as Score Prediction (SP).
Suyeong An, Junghoon Kim, Minsam Kim, Juneyoung Park
null
null
2022
aaai
Proxy Learning of Visual Concepts of Fine Art Paintings from Styles through Language Models
null
We present a machine learning system that can quantify fine art paintings with a set of visual elements and principles of art. The formal analysis is fundamental for understanding art, but developing such a system is challenging. Paintings have high visual complexities, but it is also difficult to collect enough training data with direct labels. To resolve these practical limitations, we introduce a novel mechanism, called proxy learning, which learns visual concepts in paintings through their general relation to styles. This framework does not require any visual annotation, but only uses style labels and a general relationship between visual concepts and style. In this paper, we propose a novel proxy model and reformulate four pre-existing methods in the context of proxy learning. Through quantitative and qualitative comparison, we evaluate these methods and compare their effectiveness in quantifying the artistic visual concepts, where the general relationship is estimated by language models; GloVe or BERT. The language modeling is a practical and scalable solution requiring no labeling, but it is inevitably imperfect. We demonstrate how the new proxy model is robust to the imperfection, while the other methods are sensitively affected by it.
Diana Kim, Ahmed Elgammal, Marian Mazzone
null
null
2022
aaai
DeepHardMark: Towards Watermarking Neural Network Hardware
null
This paper presents a framework for embedding watermarks into DNN hardware accelerators. Unlike previous works that have looked at protecting the algorithmic intellectual properties of deep learning systems, this work proposes a methodology for defending deep learning hardware. Our methodology embeds modifications into the hardware accelerator's functional blocks that can be revealed with the rightful owner's key DNN and corresponding key sample, verifying the legitimate owner. We propose an Lp-box ADMM based algorithm to co-optimize the watermark's hardware overhead and its impact on the design's algorithmic functionality. We evaluate the performance of the hardware watermarking scheme on popular image classifier models using various accelerator designs. Our results demonstrate that the proposed methodology effectively embeds watermarks while preserving the original functionality of the hardware architecture. Specifically, we can successfully embed watermarks into the deep learning hardware and reliably execute a ResNet ImageNet classifier with an accuracy degradation of only 0.009%.
Joseph Clements, Yingjie Lao
null
null
2022
aaai
EMVLight: A Decentralized Reinforcement Learning Framework for Efficient Passage of Emergency Vehicles
null
Emergency vehicles (EMVs) play a crucial role in responding to time-critical events such as medical emergencies and fire outbreaks in an urban area. The less time EMVs spend traveling through the traffic, the more likely it would help save people's lives and reduce property loss. To reduce the travel time of EMVs, prior work has used route optimization based on historical traffic-flow data and traffic signal pre-emption based on the optimal route. However, traffic signal pre-emption dynamically changes the traffic flow which, in turn, modifies the optimal route of an EMV. In addition, traffic signal pre-emption practices usually lead to significant disturbances in traffic flow and subsequently increase the travel time for non-EMVs. In this paper, we propose EMVLight, a decentralized reinforcement learning (RL) framework for simultaneous dynamic routing and traffic signal control. EMVLight extends Dijkstra's algorithm to efficiently update the optimal route for the EMVs in real-time as it travels through the traffic network. The decentralized RL agents learn network-level cooperative traffic signal phase strategies that not only reduce EMV travel time but also reduce the average travel time of non-EMVs in the network. This benefit has been demonstrated through comprehensive experiments with synthetic and real-world maps. These experiments show that EMVLight outperforms benchmark transportation engineering techniques and existing RL-based signal control methods.
Haoran Su, Yaofeng Desmond Zhong, Biswadip Dey, Amit Chakraborty
null
null
2022
aaai
SPATE-GAN: Improved Generative Modeling of Dynamic Spatio-Temporal Patterns with an Autoregressive Embedding Loss
null
From ecology to atmospheric sciences, many academic disciplines deal with data characterized by intricate spatio-temporal complexities, the modeling of which often requires specialized approaches. Generative models of these data are of particular interest, as they enable a range of impactful downstream applications like simulation or creating synthetic training data. Recently, COT-GAN, a new GAN algorithm inspired by the theory of causal optimal transport (COT), was proposed in an attempt to improve generation of sequential data. However, the task of learning complex patterns over time and space requires additional knowledge of the specific data structures. In this study, we propose a novel loss objective combined with COT-GAN based on an autoregressive embedding to reinforce the learning of spatio-temporal dynamics. We devise SPATE (spatio-temporal association), a new metric measuring spatio-temporal autocorrelation. We compute SPATE for real and synthetic data samples and use it to compute an embedding loss that considers space-time interactions, nudging the GAN to learn outputs that are faithful to the observed dynamics. We test our new SPATE-GAN on a diverse set of spatio-temporal patterns: turbulent flows, log-Gaussian Cox processes and global weather data. We show that our novel embedding loss improves performance without any changes to the architecture of the GAN backbone, highlighting our model's increased capacity for capturing autoregressive structures.
Konstantin Klemmer, Tianlin Xu, Beatrice Acciaio, Daniel B. Neill
null
null
2022
aaai
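The SPATE metric used in the record above is defined in the paper itself; as a rough point of reference for what a spatial autocorrelation statistic computes, the sketch below implements the classic Moran's I on a toy snapshot. The weight matrix and field values are invented for illustration and are not from the paper.

import numpy as np

def morans_i(x, w):
    """Classic Moran's I spatial autocorrelation: values near +1 mean similar
    values cluster in space, values near -1 mean neighbouring values alternate."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()                      # deviations from the mean
    num = (w * np.outer(z, z)).sum()      # weighted cross-products of deviations
    den = (z ** 2).sum()                  # total squared deviation
    return (len(x) / w.sum()) * num / den

# Toy example: 4 locations on a line, adjacent locations weighted 1.
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(morans_i([1.0, 1.2, 3.0, 3.1], w))  # positive: neighbouring values are similar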
Hyperverlet: A Symplectic Hypersolver for Hamiltonian Systems
null
Hamiltonian systems represent an important class of dynamical systems such as pendulums, molecular dynamics, and cosmic systems. The choice of solver is significant for the accuracy when simulating Hamiltonian systems, and symplectic solvers are of particular importance. Recent neural network-based hypersolvers, though they achieve competitive results, still lack the symplecticity necessary for reliable simulations, especially over long time horizons. To alleviate this, we introduce Hyperverlet, a new hypersolver composing the traditional symplectic velocity Verlet solver with symplectic neural network-based solvers. More specifically, we propose a parameterization of symplectic neural networks and prove that the hyperbolic tangent is r-finite, expanding the set of allowable activation functions for symplectic neural networks and improving the accuracy. Extensive experiments on a spring-mass and a pendulum system justify the design choices and suggest that Hyperverlet outperforms both traditional solvers and hypersolvers.
Frederik Baymler Mathiesen, Bin Yang, Jilin Hu
null
null
2022
aaai
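For reference, the classical velocity Verlet step that Hyperverlet builds on looks roughly like the sketch below; the neural correction terms are the paper's contribution and are not reproduced here, and the spring-mass force and parameter values are illustrative assumptions.

def velocity_verlet_step(q, p, force, dt, m=1.0):
    """One velocity Verlet step for H = p^2/(2m) + V(q); the update is symplectic,
    which is what keeps long simulations stable."""
    f = force(q)
    q_new = q + dt * p / m + 0.5 * dt**2 * f / m   # position update
    f_new = force(q_new)
    p_new = p + 0.5 * dt * (f + f_new)             # momentum update using averaged force
    return q_new, p_new

# Illustrative spring-mass system: V(q) = 0.5*k*q^2, so force(q) = -k*q (k assumed 1.0).
k = 1.0
q, p = 1.0, 0.0
for _ in range(1000):
    q, p = velocity_verlet_step(q, p, lambda q: -k * q, dt=0.01)
print(q, p)  # stays close to the circle q**2 + p**2 == 1 (energy approximately conserved)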
End-to-End Line Drawing Vectorization
null
Vector graphics is broadly used in a variety of forms, such as illustrations, logos, posters, billboards, and printed ads. Despite its broad use, many artists still prefer to draw with pen and paper, which leads to a high demand for converting raster designs into vector form. In particular, line drawing is a primary art form and attracts many research efforts on automatically converting raster line drawings to vector form. However, the existing methods generally adopt a two-step approach: stroke segmentation and vectorization. Without vector guidance, the raster-based stroke segmentation frequently obtains unsatisfying segmentation results, such as over-grouped strokes and broken strokes. In this paper, we propose an end-to-end vectorization method which directly generates vectorized stroke primitives from a raster line drawing in one step. We propose a Transformer-based framework to perform stroke tracing as a human does, in an automatic stroke-by-stroke way, with a novel stroke feature representation and multi-modal supervision to achieve vectorization with high quality and fidelity. Qualitative and quantitative evaluations show that our method achieves state-of-the-art performance.
Hanyuan Liu, Chengze Li, Xueting Liu, Tien-Tsin Wong
null
null
2022
aaai
Generalized Dynamic Cognitive Hierarchy Models for Strategic Driving Behavior
null
While there has been an increasing focus on the use of game theoretic models for autonomous driving, empirical evidence shows that there are still open questions around dealing with the challenges of common knowledge assumptions as well as modeling bounded rationality. To address some of these practical challenges, we develop a framework of generalized dynamic cognitive hierarchy for both modelling naturalistic human driving behavior as well as behavior planning for autonomous vehicles (AV). This framework is built upon a rich model of level-0 behavior through the use of automata strategies, an interpretable notion of bounded rationality through safety and maneuver satisficing, and a robust response for planning. Based on evaluation on two large naturalistic datasets as well as simulation of critical traffic scenarios, we show that i) automata strategies are well suited for level-0 behavior in a dynamic level-k framework, and ii) the proposed robust response to a heterogeneous population of strategic and non-strategic reasoners can be an effective approach for game theoretic planning in AV.
Atrisha Sarkar, Kate Larson, Krzysztof Czarnecki
null
null
2022
aaai
The Semi-random Likelihood of Doctrinal Paradoxes
null
When aggregating logically interconnected judgements from n agents, the result might be logically inconsistent. This phenomenon is known as the doctrinal paradox, which plays a central role in the field of judgement aggregation. Previous work has mostly focused on the worst-case analysis of the doctrinal paradox, leading to many impossibility results. Little is known about its likelihood of occurrence in practical settings, except for the study under certain distributions by List in 2005. In this paper, we characterize the likelihood of the doctrinal paradox under a general and realistic model called semi-random social choice framework (proposed by Xia in 2020). In the framework, agents' ground truth judgements can be arbitrarily correlated, while the noises are independent. Our main theorem states that under mild conditions, the semi-random likelihood of the doctrinal paradox is either 0, exp(-Θ(n)), Θ(n^{-0.5}), or Θ(1). This not only answers open questions by List in 2005, but also draws clear lines between situations with frequent paradoxes and with vanishing paradoxes.
Ao Liu, Lirong Xia
null
null
2022
aaai
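As a concrete reminder of what the paradox in the record above looks like, the toy example below uses the textbook setting of three judges voting on premises p and q with conclusion "p and q"; the vote pattern is invented for illustration.

# Each judge votes on premises p, q; the conclusion is (p and q).
judges = [
    {"p": True,  "q": True},    # concludes True
    {"p": True,  "q": False},   # concludes False
    {"p": False, "q": True},    # concludes False
]

def majority(votes):
    return sum(votes) > len(votes) / 2

p_major = majority([j["p"] for j in judges])                       # True (2 of 3)
q_major = majority([j["q"] for j in judges])                       # True (2 of 3)
conclusion_major = majority([j["p"] and j["q"] for j in judges])   # False (1 of 3)

# Premise-wise aggregation says both p and q hold, so the conclusion should be True,
# yet the conclusion-wise majority is False: a doctrinal paradox.
print(p_major and q_major, conclusion_major)  # True False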
Choices Are Not Independent: Stackelberg Security Games with Nested Quantal Response Models
null
The quantal response (QR) model is widely used in Stackelberg security games (SSG) to model a boundedly rational adversary. The QR model is one of a large variety of prominent models of human response known as discrete choice models. QR is the simplest type of discrete choice model and does not capture commonly observed phenomena such as correlation among choices. We introduce the nested QR adversary model (based on the nested logit model in discrete choice theory) in SSG, which addresses this shortcoming of the QR model. We present a tractable approximation of the resulting equilibrium problem with a nested QR adversary. We do so by deriving an interesting property of the equilibrium problem, namely a loosely coupled split into nested problems that mirrors the nested decision making by the adversary in the nested QR model. We show that each separate nested problem can be approximated efficiently and that the loosely coupled overall problem can be solved approximately by formulating it as a discretized version of a continuous dynamic program. Finally, we conduct experiments that show the scalability and parallelizability of our approach, as well as the advantages of the nested QR model.
Tien Mai, Arunesh Sinha
null
null
2022
aaai
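The nested logit model from discrete choice theory, on which the nested QR adversary above is based, is standard; as a sketch (the notation below is ours, not the paper's), alternatives are grouped into nests m with deterministic utilities V_i and nest parameters \lambda_m, and the choice probabilities factor as

P(i) = P(m(i)) \cdot P(i \mid m(i)), \qquad
P(i \mid m) = \frac{e^{V_i/\lambda_m}}{\sum_{j \in m} e^{V_j/\lambda_m}}, \qquad
P(m) = \frac{e^{\lambda_m I_m}}{\sum_{m'} e^{\lambda_{m'} I_{m'}}}, \quad
I_m = \log \sum_{j \in m} e^{V_j/\lambda_m}.

With all \lambda_m = 1 this collapses to the plain logit underlying QR, which is why the nested model can additionally express correlation among choices within a nest.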
Context-Aware Health Event Prediction via Transition Functions on Dynamic Disease Graphs
null
With the wide application of electronic health records (EHR) in healthcare facilities, health event prediction with deep learning has gained more and more attention. A common feature of EHR data used for deep-learning-based predictions is historical diagnoses. Existing work mainly regards a diagnosis as an independent disease and does not consider clinical relations among diseases in a visit. Many machine learning approaches assume disease representations are static in different visits of a patient. However, in real practice, multiple diseases that are frequently diagnosed at the same time reflect hidden patterns that are conducive to prognosis. Moreover, the development of a disease is not static since some diseases can emerge or disappear and show various symptoms in different visits of a patient. To effectively utilize this combinational disease information and explore the dynamics of diseases, we propose a novel context-aware learning framework using transition functions on dynamic disease graphs. Specifically, we construct a global disease co-occurrence graph with multiple node properties for disease combinations. We design dynamic subgraphs for each patient's visit to leverage global and local contexts. We further define three diagnosis roles in each visit based on the variation of node properties to model disease transition processes. Experimental results on two real-world EHR datasets show that the proposed model outperforms state of the art in predicting health events.
Chang Lu, Tian Han, Yue Ning
null
null
2022
aaai
OAM: An Option-Action Reinforcement Learning Framework for Universal Multi-Intersection Control
null
Efficient traffic signal control is an important means to alleviate urban traffic congestion. Reinforcement learning (RL) has shown great potential in devising optimal signal plans that can adapt to dynamic traffic congestion. However, several challenges still need to be overcome. Firstly, a paradigm of state, action, and reward design is needed, especially for an optimality-guaranteed reward function. Secondly, the generalization of RL algorithms is hindered by the varied topologies and physical properties of intersections. Lastly, enhancing the cooperation between intersections is needed for large network applications. To address these issues, the Option-Action RL framework for universal Multi-intersection control (OAM) is proposed. Based on the well-known cell transmission model, we first define a lane-cell-level state to better model the traffic flow propagation. Based on this physical queuing dynamics, we propose a regularized delay as the reward to facilitate temporal credit assignment while maintaining the equivalence with minimizing the average travel time. We then recapitulate the phase actions as the constrained combinations of lane options and design a universal neural network structure to realize model generalization to any intersection with any phase definition. The multiple-intersection cooperation is then rigorously discussed using potential game theory. We test the OAM algorithm under four networks with different settings, including a city-level scenario with 2,048 intersections using synthetic and real-world datasets. The results show that OAM can outperform the state-of-the-art controllers in reducing the average travel time.
Enming Liang, Zicheng Su, Chilin Fang, Renxin Zhong
null
null
2022
aaai
Online Elicitation of Necessarily Optimal Matchings
null
In this paper, we study the problem of eliciting preferences of agents in the house allocation model. For this we build on a recently introduced model and focus on the task of eliciting preferences to find matchings which are necessarily optimal, i.e., optimal under all possible completions of the elicited preferences. In particular, we investigate the elicitation of necessarily Pareto optimal (NPO) and necessarily rank-maximal (NRM) matchings. Most importantly, we answer an open question and give an online algorithm for eliciting an NRM matching in the next-best query model which is 3/2-competitive, i.e., it takes at most 3/2 as many queries as an optimal algorithm. Besides this, we extend this field of research by introducing two new natural models of elicitation and by studying both the complexity of determining whether a necessarily optimal matching exists in them, and by giving online algorithms for these models.
Jannik Peters
null
null
2022
aaai
Safe Subgame Resolving for Extensive Form Correlated Equilibrium
null
Correlated Equilibrium is a solution concept that is more general than Nash Equilibrium (NE) and can lead to outcomes with better social welfare. However, its natural extension to the sequential setting, the Extensive Form Correlated Equilibrium (EFCE), requires a quadratic amount of space to solve, even in restricted settings without randomness in nature. To alleviate these concerns, we apply subgame resolving, a technique extremely successful in finding NE in zero-sum games to solving general-sum EFCEs. Subgame resolving refines a correlation plan in an online manner: instead of solving for the full game upfront, it only solves for strategies in subgames that are reached in actual play, resulting in significant computational gains. In this paper, we (i) lay out the foundations to quantify the quality of a refined strategy, in terms of the social welfare and exploitability of correlation plans, (ii) show that EFCEs possess a sufficient amount of independence between subgames to perform resolving efficiently, and (iii) provide two algorithms for resolving, one using linear programming and the other based on regret minimization. Both methods guarantee safety, i.e., they will never be counterproductive. Our methods are the first time an online method has been applied to the correlated, general-sum setting.
Chun Kai Ling, Fei Fang
null
null
2022
aaai
Iterative Calculus of Voting under Plurality
null
We formalize a voting model for plurality elections that combines Iterative Voting and Calculus of Voting. Each iteration, autonomous agents simultaneously maximize the utility they expect from candidates. Agents are aware of neither other individuals’ preferences or choices, nor of the distribution of preferences. They know only of candidates’ latest vote shares and with that calculate expected rewards from each candidate, pondering the probability that voting for each would alter the election. We define the general form of those pivotal probabilities, then we derive efficient exact and approximated calculations. Lastly, we prove formally the model converges with asymptotically large electorates and show via simulations that it nearly always converges even with very few agents.
Fabricio Vasselai
null
null
2022
aaai
Is There a Strongest Die in a Set of Dice with the Same Mean Pips?
null
Jan-ken, a.k.a. rock-paper-scissors, is a celebrated example of a non-transitive game with three (pure) strategies, rock, paper and scissors. Interestingly, any Jan-ken generalized to four strategies contains at least one useless strategy unless it allows a tie between distinct pure strategies. Non-transitive dice could be a stochastic analogue of Jan-ken: stochastic transitivity does not hold on some sets of dice, e.g., Efron's dice. Including the non-transitive dice, this paper is interested in dice sets which do not contain a useless die. In particular, we are concerned with the existence of a strongest (or weakest, symmetrically) die in a dice set under the two conditions that (1) any number appears on at most one die and at most one side, i.e., no tie break between two distinct dice, and (2) the mean pips of the dice are the same. We first prove that a strongest die never exists if a set of n m-sided dice is given as a partition of the set of numbers {1,…,mn}. Next, we show some sufficient conditions under which a strongest die exists in a dice set which is not a partition of a set of numbers. We also give some algorithms to find a strongest die in a dice set which includes given dice.
Shang Lu, Shuji Kijima
null
null
2022
aaai
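Efron's dice, mentioned in the abstract above as the classic failure of stochastic transitivity, can be checked directly. Note that they violate the paper's own conditions (face values repeat and the mean pips differ), so they only illustrate non-transitivity, not the setting studied; the face values below are the standard Efron set.

from itertools import product

# Standard Efron dice: each die beats the next in the cycle A > B > C > D > A with probability 2/3.
dice = {
    "A": [4, 4, 4, 4, 0, 0],
    "B": [3, 3, 3, 3, 3, 3],
    "C": [6, 6, 2, 2, 2, 2],
    "D": [5, 5, 5, 1, 1, 1],
}

def win_prob(x, y):
    """Probability that die x rolls strictly higher than die y (independent fair rolls)."""
    wins = sum(a > b for a, b in product(dice[x], dice[y]))
    return wins / (len(dice[x]) * len(dice[y]))

for x, y in [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]:
    print(f"P({x} beats {y}) = {win_prob(x, y):.3f}")   # all 0.667: no strongest die in this set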
Characterization of Incentive Compatibility of an Ex-ante Constrained Player
null
We consider a variant of the standard Bayesian mechanism, where players evaluate their outcomes and constraints in an ex-ante manner. Such a model captures a major form of modern online advertising where an advertiser is concerned with her/his expected utility over a time period and her/his type may change over time. We are interested in the incentive compatibility (IC) problem of such Bayesian mechanism. Under very mild conditions on the mechanism environments, we give a full characterization of IC via the taxation principle and show, perhaps surprisingly, that such IC mechanisms are fully characterized by the so-called auto-bidding mechanisms, which are pervasively fielded in the online advertising industry.
Bonan Ni, Pingzhong Tang
null
null
2022
aaai
Proportional Public Decisions
null
We consider a setting where a group of individuals needs to make a number of independent decisions. The decisions should proportionally represent the views of the voters. We formulate new criteria of proportionality and analyse two rules, Proportional Approval Voting and the Method of Equal Shares, that are inspired by the corresponding approval-based committee election rules. We prove that the two rules provide very strong proportionality guarantees when applied to the setting of public decisions.
Piotr Skowron, Adrian Górecki
null
null
2022
aaai
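Proportional Approval Voting, one of the two rules adapted in the record above, scores an outcome by giving each voter the harmonic sum 1 + 1/2 + ... + 1/k for the k selected alternatives they approve of. A minimal sketch of that score in the committee setting follows; the tiny approval profile is invented for illustration and the public-decision adaptation in the paper is not reproduced here.

def harmonic(k):
    return sum(1.0 / j for j in range(1, k + 1))

def pav_score(committee, approval_sets):
    """PAV score: each voter contributes 1 + 1/2 + ... + 1/k, where k is how many of
    their approved candidates are in the committee."""
    return sum(harmonic(len(committee & approved)) for approved in approval_sets)

profile = [{"a", "b"}, {"a", "b"}, {"a", "b"}, {"c"}]
print(pav_score({"a", "b"}, profile))  # 4.5
print(pav_score({"a", "c"}, profile))  # 4.0 -> PAV prefers {a, b} on this profile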
PageRank for Edges: Axiomatic Characterization
null
Edge centrality measures are functions that evaluate the importance of edges in a network. They can be used to assess the role of a backlink for the popularity of a website as well as the importance of a flight in virus spreading. Various node centralities have been translated to apply for edges, including Edge Betweenness, Eigenedge (edge version of eigenvector centrality), and Edge PageRank. With this paper, we initiate the discussion on the axiomatic properties of edge centrality measures. We do it by proposing an axiomatic characterization of Edge PageRank. Our characterization is the first characterization of any edge centrality measures in the literature.
Natalia Kucharczuk, Tomasz Wąs, Oskar Skibski
null
null
2022
aaai
Planning with Participation Constraints
null
We pose and study the problem of planning in Markov decision processes (MDPs), subject to participation constraints as studied in mechanism design. In this problem, a planner must work with a self-interested agent on a given MDP. Each action in the MDP provides an immediate reward to the planner and a (possibly different) reward to the agent. The agent has no control in choosing the actions, but has the option to end the entire process at any time. The goal of the planner is to find a policy that maximizes her cumulative reward, taking into consideration the agent's ability to terminate. We give a fully polynomial-time approximation scheme for this problem. En route, we present polynomial-time algorithms for computing (exact) optimal policies for important special cases of this problem, including when the time horizon is constant, or when the MDP exhibits a "definitive decisions" property. We illustrate our algorithms with two different game-theoretic applications: the problem of assigning rides in ride-sharing and the problem of designing screening policies. Our results imply efficient algorithms for computing (approximately) optimal policies in both applications.
Hanrui Zhang, Yu Cheng, Vincent Conitzer
null
null
2022
aaai
Strictly Proper Contract Functions Can Be Arbitrage-Free
null
We consider mechanisms for truthfully eliciting probabilistic predictions from a group of experts. The standard approach --- using a proper scoring rule to separately reward each expert --- is not robust to collusion: experts may collude to misreport their beliefs in a way that guarantees them a larger total reward no matter the eventual outcome. It is a long-standing open question whether there is a truthful elicitation mechanism that makes any such collusion (also called "arbitrage") impossible. We resolve this question positively, exhibiting a class of strictly proper arbitrage-free contract functions. These contract functions have two parts: one ensures that the total reward of a coalition of experts depends only on the average of their reports; the other ensures that changing this average report hurts the experts under at least one outcome.
Eric Neyman, Tim Roughgarden
null
null
2022
aaai
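The "proper scoring rule" baseline that the abstract above contrasts with can be made concrete with the quadratic (Brier-style) rule for a binary event. The sketch below checks numerically that an expert whose true belief is 0.7 maximizes expected score by reporting 0.7; the belief value and report grid are arbitrary choices, and the paper's arbitrage-free contract functions are not reproduced here.

def quadratic_score(report, outcome):
    """Quadratic (Brier-style) scoring rule; strictly proper for binary outcomes."""
    return 1.0 - (outcome - report) ** 2

def expected_score(report, belief):
    # The expert believes the event happens with probability `belief`.
    return belief * quadratic_score(report, 1) + (1 - belief) * quadratic_score(report, 0)

belief = 0.7
grid = [i / 100 for i in range(101)]
best = max(grid, key=lambda r: expected_score(r, belief))
print(best)  # 0.7: truthful reporting maximizes the expected score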
GeomGCL: Geometric Graph Contrastive Learning for Molecular Property Prediction
null
Recently many efforts have been devoted to applying graph neural networks (GNNs) to molecular property prediction which is a fundamental task for computational drug and material discovery. One of major obstacles to hinder the successful prediction of molecular property by GNNs is the scarcity of labeled data. Though graph contrastive learning (GCL) methods have achieved extraordinary performance with insufficient labeled data, most focused on designing data augmentation schemes for general graphs. However, the fundamental property of a molecule could be altered with the augmentation method (like random perturbation) on molecular graphs. Whereas, the critical geometric information of molecules remains rarely explored under the current GNN and GCL architectures. To this end, we propose a novel graph contrastive learning method utilizing the geometry of the molecule across 2D and 3D views, which is named GeomGCL. Specifically, we first devise a dual-view geometric message passing network (GeomMPNN) to adaptively leverage the rich information of both 2D and 3D graphs of a molecule. The incorporation of geometric properties at different levels can greatly facilitate the molecular representation learning. Then a novel geometric graph contrastive scheme is designed to make both geometric views collaboratively supervise each other to improve the generalization ability of GeomMPNN. We evaluate GeomGCL on various downstream property prediction tasks via a finetune process. Experimental results on seven real-life molecular datasets demonstrate the effectiveness of our proposed GeomGCL against state-of-the-art baselines.
Shuangli Li, Jingbo Zhou, Tong Xu, Dejing Dou, Hui Xiong
null
null
2022
aaai
Coordinating Followers to Reach Better Equilibria: End-to-End Gradient Descent for Stackelberg Games
null
A growing body of work in game theory extends the traditional Stackelberg game to settings with one leader and multiple followers who play a Nash equilibrium. Standard approaches for computing equilibria in these games reformulate the followers' best response as constraints in the leader's optimization problem. These reformulation approaches can sometimes be effective, but make limiting assumptions on the followers' objectives and the equilibrium reached by followers, e.g., uniqueness, optimism, or pessimism. To overcome these limitations, we run gradient descent to update the leader's strategy by differentiating through the equilibrium reached by followers. Our approach generalizes to any stochastic equilibrium selection procedure that chooses from multiple equilibria, where we compute the stochastic gradient by back-propagating through a sampled Nash equilibrium using the solution to a partial differential equation to establish the unbiasedness of the stochastic gradient. Using the unbiased gradient estimate, we implement the gradient-based approach to solve three Stackelberg problems with multiple followers. Our approach consistently outperforms existing baselines to achieve higher utility for the leader.
Kai Wang, Lily Xu, Andrew Perrault, Michael K. Reiter, Milind Tambe
null
null
2022
aaai
Online Task Assignment Problems with Reusable Resources
null
We study an online task assignment problem with reusable resources, motivated by practical applications such as ridesharing, crowdsourcing and job hiring. In the problem, we are given a set of offline vertices (agents), and, at each time, an online vertex (task) arrives randomly according to a known time-dependent distribution. Upon arrival, we assign the task to agents immediately and irrevocably. The goal of the problem is to maximize the expected total profit produced by completed tasks. The key features of our problem are (1) an agent is reusable, i.e., an agent comes back to the market after completing the assigned task, (2) an agent may reject the assigned task to stay in the market, and (3) a task may accommodate multiple agents. The setting generalizes that of existing work in which an online task is assigned to one agent under (1). In this paper, we propose an online algorithm that is 1/2-competitive for the above setting, which is tight. Moreover, when each agent can reject assigned tasks at most Δ times, the algorithm is shown to have a competitive ratio of Δ/(3Δ-1), which is at least 1/3. We also evaluate our proposed algorithm with numerical experiments.
Hanna Sumita, Shinji Ito, Kei Takemura, Daisuke Hatano, Takuro Fukunaga, Naonori Kakimura, Ken-ichi Kawarabayashi
null
null
2022
aaai
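A quick sanity check of the competitive-ratio claim in the abstract above (the algebra is ours, merely restating the stated bound):

\frac{\Delta}{3\Delta - 1} \;\ge\; \frac{1}{3}
\iff 3\Delta \;\ge\; 3\Delta - 1,

which always holds; the ratio equals 1/2 at \Delta = 1 and decreases toward 1/3 as \Delta \to \infty.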
Multi-Unit Auction in Social Networks with Budgets
null
We study multi-unit auctions in social networks, where each buyer has a fixed budget and can spread the sale information to the network neighbors. We design a mechanism encouraging buyers to report their valuations truthfully and spread the sale information. Our design uses the idea of the clinching mechanism to decide the transaction price and can be viewed as a network version of the mechanism. Most of the previous clinching mechanisms search for the transaction prices by increasing the current price. Our mechanism directly computes the transaction prices in polynomial time. Furthermore, the mechanism applies a technique to iteratively activate new buyers in the network. This ensures utility preservations of the buyers and benefits the seller. We prove key properties of our mechanism, such as no-positive-transfers, individual rationality, incentive compatibility, non-wastefulness and social welfare preservation.
Mingyu Xiao, Yuchao Song, Bakh Khoussainov
null
null
2022
aaai
Bayesian Persuasion in Sequential Decision-Making
null
We study a dynamic model of Bayesian persuasion in sequential decision-making settings. An informed principal observes an external parameter of the world and advises an uninformed agent about actions to take over time. The agent takes actions in each time step based on the current state, the principal's advice/signal, and beliefs about the external parameter. The action of the agent updates the state according to a stochastic process. The model arises naturally in many applications, e.g., an app (the principal) can advise the user (the agent) on possible choices between actions based on additional real-time information the app has. We study the problem of designing a signaling strategy from the principal's point of view. We show that the principal has an optimal strategy against a myopic agent, who only optimizes their rewards locally, and the optimal strategy can be computed in polynomial time. In contrast, it is NP-hard to approximate an optimal policy against a far-sighted agent. Further, we show that if the principal has the power to threaten the agent by not providing future signals, then we can efficiently design a threat-based strategy. This strategy guarantees the principal's payoff as if playing against an agent who is far-sighted but myopic to future signals.
Jiarui Gan, Rupak Majumdar, Goran Radanovic, Adish Singla
null
null
2022
aaai
Secretary Matching with Vertex Arrivals and No Rejections
null
Most prior work on online matching problems has been with the flexibility of keeping some vertices unmatched. We study three related online matching problems with the constraint of matching every vertex, i.e., with no rejections. We adopt a model in which vertices arrive in a uniformly random order and the edge-weights are arbitrary positive numbers. For the capacitated online bipartite matching problem in which the vertices of one side of the graph are offline and those of the other side arrive online, we give a 4.62-competitive algorithm when the capacity of each offline vertex is 2. For the online general (non-bipartite) matching problem, where all vertices arrive online, we give a 3.34-competitive algorithm. We also study the online roommate matching problem, in which each room (offline vertex) holds 2 persons (online vertices). Persons derive non-negative additive utilities from their room as well as roommate. In this model, with the goal of maximizing the sum of utilities of all persons, we give a 7.96-competitive algorithm. This is an improvement over the 24.72 approximation factor in prior work.
Mohak Goyal
null
null
2022
aaai
AutoCFR: Learning to Design Counterfactual Regret Minimization Algorithms
null
Counterfactual regret minimization (CFR) is the most commonly used algorithm for approximately solving two-player zero-sum imperfect-information games (IIGs). In recent years, a series of novel CFR variants such as CFR+, Linear CFR, and DCFR have been proposed and have significantly improved the convergence rate of the vanilla CFR. However, most of these new variants are hand-designed by researchers through trial and error based on different motivations, which generally requires a tremendous amount of effort and insight. This work proposes to meta-learn novel CFR algorithms through evolution to ease the burden of manual algorithm design. We first design a search language that is rich enough to represent many existing hand-designed CFR variants. We then exploit a scalable regularized evolution algorithm with a bag of acceleration techniques to efficiently search over the combinatorial space of algorithms defined by this language. The learned novel CFR algorithm can generalize to new IIGs not seen during training and performs on par with or better than existing state-of-the-art CFR variants. The code is available at https://github.com/rpSebastian/AutoCFR.
Hang Xu, Kai Li, Haobo Fu, Qiang Fu, Junliang Xing
null
null
2,022
aaai
Two-Price Equilibrium
null
Walrasian equilibrium is a prominent market equilibrium notion, but rarely exists in markets with indivisible items. We introduce a new market equilibrium notion, called two-price equilibrium (2PE). A 2PE is a relaxation of Walrasian equilibrium, where instead of a single price per item, every item has two prices: one for the item's owner and a (possibly) higher one for all other buyers. Thus, a 2PE is given by a tuple (S,p_high,p_low) of an allocation S and two price vectors p_high,p_low, where every buyer i is maximally happy with her bundle S_i, given prices p_low for items in S_i and prices p_high for all other items. 2PE generalizes previous market equilibrium notions, such as conditional equilibrium, and is related to relaxed equilibrium notions like endowment equilibrium. We define the discrepancy of a 2PE --- a measure of distance from Walrasian equilibrium --- as the sum of differences p_high_j-p_low_j over all items (normalized by social welfare). We show that the social welfare degrades gracefully with the discrepancy; namely, the social welfare of a 2PE with discrepancy d is at least a fraction 1/(d+1) of the optimal welfare. We use this to establish welfare guarantees for markets with subadditive valuations over identical items. In particular, we show that every such market admits a 2PE with at least 1/7 of the optimal welfare. This is in contrast to Walrasian equilibrium or conditional equilibrium which may not even exist. Our techniques provide new insights regarding valuation functions over identical items, which we also use to characterize instances that admit a Walrasian equilibrium.
Michal Feldman, Galia Shabtai, Aner Wolfenfeld
null
null
2,022
aaai
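As a minimal illustration of the discrepancy measure defined in the abstract above, the sketch below builds a toy instance with two identical items and two subadditive buyers, checks the 2PE condition by brute force, and verifies the 1/(d+1) welfare bound. All function names and the toy numbers are assumptions made for this sketch, not material from the paper.

from itertools import chain, combinations

def bundles(items):
    # all subsets of the item set
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def is_2pe(alloc, values, p_high, p_low):
    # 2PE condition: every buyer weakly prefers her own bundle when she pays
    # the low price on her own items and the high price on all other items
    items = range(len(p_high))
    for i, own in enumerate(alloc):
        def price(bundle, own=own):
            return sum(p_low[j] if j in own else p_high[j] for j in bundle)
        best = max(values[i](set(b)) - price(b) for b in bundles(items))
        if values[i](own) - price(own) < best:
            return False
    return True

# Toy instance: two identical items, two buyers with subadditive valuations.
values = [lambda b: min(3 * len(b), 4), lambda b: min(2 * len(b), 3)]
alloc = [{0}, {1}]
p_high, p_low = [3, 3], [3, 2]

welfare = sum(values[i](b) for i, b in enumerate(alloc))    # 3 + 2 = 5 (also optimal in this toy instance)
d = sum(h - l for h, l in zip(p_high, p_low)) / welfare     # discrepancy = 1/5
print(is_2pe(alloc, values, p_high, p_low))                 # True
print(welfare >= welfare / (d + 1))                         # welfare meets the 1/(d+1) bound against the optimum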
The Strange Role of Information Asymmetry in Auctions—Does More Accurate Value Estimation Benefit a Bidder?
null
We study the second-price auction in which bidders have asymmetric information regarding the item’s value. Each bidder’s value for the item depends on a private component and a public component. While each bidder observes their own private component, they hold different and asymmetric information about the public component. We characterize the equilibrium of this auction game and study how the asymmetric bidder information affects their equilibrium bidding strategies. We also discover multiple surprisingly counter-intuitive equilibrium phenomena. For instance, a bidder may be better off if she is less informed regarding the public component. Conversely, a bidder may sometimes be worse off if she obtains more accurate estimation about the auctioned item. Our results suggest that efforts devoted by bidders to improve their value estimations, as widely seen in today’s online advertising auctions, may not always be to their benefit.
Haifeng Xu, Ruggiero Cavallo
null
null
2,022
aaai
Approval-Based Committee Voting under Incomplete Information
null
We investigate approval-based committee voting with incomplete information about the approval preferences of voters. We consider several models of incompleteness where each voter partitions the set of candidates into approved, disapproved, and unknown candidates, possibly with ordinal preference constraints among candidates in the latter category. This captures scenarios where voters have not evaluated all candidates and/or it is unknown where voters draw the threshold between approved and disapproved candidates. We study the complexity of some fundamental computational problems for a number of classic approval-based committee voting rules including Proportional Approval Voting and Chamberlin-Courant. These problems include that of determining whether a given set of candidates is a possible or necessary winning committee and whether it forms a committee that possibly or necessarily satisfies representation axioms. We also consider the problem whether a given candidate is possibly or necessarily a member of the winning committee.
Aviram Imber, Jonas Israel, Markus Brill, Benny Kimelfeld
null
null
2,022
aaai
Algorithmic Bayesian Persuasion with Combinatorial Actions
null
Bayesian persuasion is a model for understanding strategic information revelation: an agent with an informational advantage, called a sender, strategically discloses information by sending signals to another agent, called a receiver. In algorithmic Bayesian persuasion, we are interested in efficiently designing the sender's signaling schemes that lead the receiver to take action in favor of the sender. This paper studies algorithmic Bayesian-persuasion settings where the receiver's feasible actions are specified by combinatorial constraints, e.g., matroids or paths in graphs. We first show that constant-factor approximation is NP-hard even in some special cases of matroids or paths. We then propose a polynomial-time algorithm for general matroids by assuming the number of states of nature to be a constant. We finally consider a relaxed notion of persuasiveness, called CCE-persuasiveness, and present a sufficient condition for polynomial-time approximability.
Kaito Fujii, Shinsaku Sakaue
null
null
2,022
aaai
Hedonic Diversity Games: A Complexity Picture with More than Two Colors
null
Hedonic diversity games are a variant of the classical Hedonic games designed to better model a variety of questions concerning diversity and fairness. Previous works mainly targeted the case with two diversity classes (represented as colors in the model) and provided a set of initial complexity-theoretic and existential results concerning Nash and Individually stable outcomes. Here, we design new algorithms accompanied by lower bounds, which together provide a full parameterized-complexity picture for computing Nash and Individually stable outcomes with respect to the most natural parameterizations of the problem. Crucially, our results hold for general Hedonic diversity games where the number of colors is not necessarily restricted to two, and show that---apart from two trivial cases---a necessary condition for tractability in this setting is that the number of colors is bounded by the parameter. Moreover, for the special case of two colors we resolve an open question asked in previous work (Boehmer and Elkind, AAAI 2020).
Robert Ganian, Thekla Hamm, Dušan Knop, Šimon Schierreich, Ondřej Suchý
null
null
2,022
aaai
Team Correlated Equilibria in Zero-Sum Extensive-Form Games via Tree Decompositions
null
Despite the many recent practical and theoretical breakthroughs in computational game theory, equilibrium finding in extensive-form team games remains a significant challenge. While NP-hard in the worst case, there are provably efficient algorithms for certain families of team games. In particular, if the game has common external information, also known as A-loss recall---informally, actions played by non-team members (i.e., the opposing team or nature) are either unknown to the entire team, or common knowledge within the team---then polynomial-time algorithms exist. In this paper, we devise a completely new algorithm for solving team games. It uses a tree decomposition of the constraint system representing each team's strategy to reduce the number and degree of constraints required for correctness (tightness of the mathematical program). Our approach has the bags of the tree decomposition correspond to team-public states---that is, minimal sets of nodes (states of the team) such that, upon reaching the set, it is common knowledge among the players on the team that the set has been reached. Our algorithm reduces the problem of solving team games to a linear program with at most O(NW^(w+1)) nonzero entries in the constraint matrix, where N is the size of the game tree, w is a parameter that depends on the amount of uncommon external information, and W is the treewidth of the tree decomposition. In public-action games, our program size is bounded by the tighter 2^(O(nt))N for teams of n players with t types each. Our algorithm is based on a new way to write a custom, concise tree decomposition, and its fast run time does not assume that the decomposition has small treewidth. Since our algorithm describes the polytope of correlated strategies directly, we get equilibrium finding in correlated strategies for free---instead of, say, having to run a double oracle algorithm. We show via experiments on a standard suite of games that our algorithm achieves state-of-the-art performance on all benchmark game classes except one. We also present, to our knowledge, the first experiments for this setting where both teams have more than one member.
Brian Hu Zhang, Tuomas Sandholm
null
null
2,022
aaai
Multi-Leader Congestion Games with an Adversary
null
We study a multi-leader single-follower congestion game where multiple users (leaders) choose one resource out of a set of resources and, after observing the realized loads, an adversary (single-follower) attacks the resources with maximum loads causing additional costs for the leaders. For the resulting strategic game among the leaders, we show that pure Nash equilibria fail to exist and therefore, we consider approximate equilibria instead. As our first main result, we show that the existence of a K-approximate equilibrium can always be guaranteed, where K (approximately equal to 1.1974) is the unique solution of a cubic polynomial equation. To this end, we give a polynomial time combinatorial algorithm which computes a K-approximate equilibrium. The factor K is tight, meaning that there is an instance that does not admit an A-approximate equilibrium for any A < K. Thus A = K is the smallest possible value of A such that the existence of an A-approximate equilibrium can be guaranteed for any instance of the considered game. Secondly, we focus on approximate equilibria of a given fixed instance. We show how to compute efficiently a best approximate equilibrium, that is, with smallest possible A among all A-approximate equilibria of the given instance.
Tobias Harks, Mona Henle, Max Klimm, Jannik Matuschke, Anja Schedel
null
null
2,022
aaai
Worst-Case Voting When the Stakes Are High
null
We study the additive distortion of social choice functions in the implicit utilitarian model, and argue that it is a more appropriate metric than multiplicative distortion when an alternative that confers significant social welfare may exist (i.e., when the stakes are high). We define a randomized analog of positional scoring rules, and present a rule which is asymptotically optimal within this class as the number of alternatives increases. We then show that the instance-optimal social choice function can be efficiently computed. Next, we take a beyond-worst-case view, bounding the additive distortion of prominent voting rules as a function of the best welfare attainable in an instance. Lastly, we evaluate the additive distortion of a range of rules on real-world election data.
Anson Kahng, Gregory Kehne
null
null
2,022
aaai
The Complexity of Subelection Isomorphism Problems
null
We study extensions of the Election Isomorphism problem, focused on the existence of isomorphic subelections. Specifically, we propose the Subelection Isomorphism and the Maximum Common Subelection problems and study their computational complexity and approximability. Using our problems in experiments, we provide some insights into the nature of several statistical models of elections.
Piotr Faliszewski, Krzysztof Sornat, Stanisław Szufa
null
null
2,022
aaai
Fast Payoff Matrix Sparsification Techniques for Structured Extensive-Form Games
null
The practical scalability of many optimization algorithms for large extensive-form games is often limited by the games' huge payoff matrices. To ameliorate the issue, Zhang and Sandholm recently proposed a sparsification technique that factorizes the payoff matrix A into a sparser object A = Â + UVᵀ, where the total combined number of nonzeros of Â, U, and V, is significantly smaller. Such a factorization can be used in place of the original payoff matrix in many optimization algorithms, such as interior-point and second-order methods, thus increasing the size of games that can be handled. Their technique significantly sparsifies poker (end)games, standard benchmarks used in computational game theory, AI, and more broadly. We show that the existence of extremely sparse factorizations in poker games can be tied to their particular Kronecker-product structure. We clarify how such structure arises and introduce the connection between that structure and sparsification. By leveraging such structure, we give two ways of computing strong sparsifications of poker games (as well as any other game with a similar structure) that are i) orders of magnitude faster to compute, ii) more numerically stable, and iii) produce a dramatically smaller number of nonzeros than the prior technique. Our techniques enable—for the first time—effective computation of high-precision Nash equilibria and strategies subject to constraints on the amount of allowed randomization. Furthermore, they significantly speed up parallel first-order game-solving algorithms; we show state-of-the-art speed on a GPU.
Gabriele Farina, Tuomas Sandholm
null
null
2,022
aaai
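To make the role of such a factorization concrete, the sketch below (assumed names and toy random data, not the authors' code) shows how a payoff matrix stored as A = A_hat + U V^T supports the cheap matrix-vector products that first-order and second-order game solvers rely on, without ever materializing the dense matrix.

import numpy as np
from scipy.sparse import random as sparse_random

rng = np.random.default_rng(0)
rows, cols, rank = 2000, 2000, 3

# Sparse-plus-low-rank representation A = A_hat + U @ V.T of a payoff matrix.
A_hat = sparse_random(rows, cols, density=0.001, format="csr", random_state=0)
U = rng.standard_normal((rows, rank))
V = rng.standard_normal((cols, rank))

def matvec(x):
    # (A_hat + U V^T) x at cost O(nnz(A_hat) + rank * (rows + cols))
    return A_hat @ x + U @ (V.T @ x)

def rmatvec(y):
    # (A_hat + U V^T)^T y, needed for the other player's gradient step
    return A_hat.T @ y + V @ (U.T @ y)

x = rng.standard_normal(cols)
dense = A_hat.toarray() + U @ V.T         # formed only to sanity-check the sketch
print(np.allclose(matvec(x), dense @ x))  # True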
Pizza Sharing Is PPA-Hard
null
We study the computational complexity of computing solutions for the straight-cut and square-cut pizza sharing problems. We show that finding an approximate solution is PPA-hard for the straight-cut problem, and PPA-complete for the square-cut problem, while finding an exact solution for the square-cut problem is FIXP-hard and in BU. Our PPA-hardness results apply even when all mass distributions are unions of non-overlapping squares, and our FIXP-hardness result applies even when all mass distributions are unions of weighted squares and right-angled triangles. We also prove that decision variants of the square-cut problem are hard: the approximate problem is NP-complete, and the exact problem is ETR-complete.
Argyrios Deligkas, John Fearnley, Themistoklis Melissourgos
null
null
2,022
aaai
The Complexity of Proportionality Degree in Committee Elections
null
Over the last few years, researchers have put significant effort into understanding the notion of proportional representation in committee elections. In particular, they have recently proposed the notion of proportionality degree. We study the complexity of computing committees with a given proportionality degree and of testing if a given committee provides a particular one. This way, we complement recent studies that mostly focused on the notion of (extended) justified representation. We also study the problems of testing if a cohesive group of a given size exists and of counting such groups.
Łukasz Janeczko, Piotr Faliszewski
null
null
2,022
aaai
Reforming an Envy-Free Matching
null
We consider the problem of reforming an envy-free matching when each agent is assigned a single item. Given an envy-free matching, we consider an operation to exchange the item of an agent with an unassigned item preferred by the agent that results in another envy-free matching. We repeat this operation as long as we can. We prove that the resulting envy-free matching is uniquely determined up to the choice of an initial envy-free matching, and can be found in polynomial time. We call the resulting matching a reformist envy-free matching, and then we study a shortest sequence to obtain the reformist envy-free matching from an initial envy-free matching. We prove that a shortest sequence is computationally hard to obtain even when each agent accepts at most four items and each item is accepted by at most three agents. On the other hand, we give polynomial-time algorithms when each agent accepts at most three items or each item is accepted by at most two agents. Inapproximability and fixed-parameter (in)tractability are also discussed.
Takehiro Ito, Yuni Iwamasa, Naonori Kakimura, Naoyuki Kamiyama, Yusuke Kobayashi, Yuta Nozaki, Yoshio Okamoto, Kenta Ozeki
null
null
2,022
aaai
Signaling in Posted Price Auctions
null
We study single-item single-unit Bayesian posted price auctions, where buyers arrive sequentially and their valuations for the item being sold depend on a random, unknown state of nature. The seller has complete knowledge of the actual state and can send signals to the buyers so as to disclose information about it. For instance, the state of nature may reflect the condition and/or some particular features of the item, which are known to the seller only. The problem faced by the seller is about how to partially disclose information about the state so as to maximize revenue. Unlike classical signaling problems, in this setting, the seller must also correlate the signals being sent to the buyers with some price proposals for them. This introduces additional challenges compared to standard settings. We consider two cases: the one where the seller can only send signals publicly visible to all buyers, and the case in which the seller can privately send a different signal to each buyer. As a first step, we prove that, in both settings, the problem of maximizing the seller's revenue does not admit an FPTAS unless P=NP, even for basic instances with a single buyer. As a result, in the rest of the paper, we focus on designing PTASs. In order to do so, we first introduce a unifying framework encompassing both public and private signaling, whose core result is a decomposition lemma that allows focusing on a finite set of possible buyers' posteriors. This forms the basis on which our PTASs are developed. In particular, in the public signaling setting, our PTAS employs some ad hoc techniques based on linear programming, while our PTAS for the private setting relies on the ellipsoid method to solve an exponentially-sized LP in polynomial time. In the latter case, we need a custom approximate separation oracle, which we implement with a dynamic programming approach.
Matteo Castiglioni, Giulia Romano, Alberto Marchesi, Nicola Gatti
null
null
2,022
aaai
Improved Maximin Guarantees for Subadditive and Fractionally Subadditive Fair Allocation Problem
null
In this work, we study the maximin share fairness notion for allocation of indivisible goods in the subadditive and fractionally subadditive settings. While previous work refutes the possibility of obtaining an allocation which is better than 1/2-MMS, the only positive result for the subadditive setting states that when the number of items is equal to m, there always exists an Ω(1/log m)-MMS allocation. Since the number of items may be larger than the number of agents (n), such a bound can only imply a weak bound of Ω(1/(n log n))-MMS allocation in general. In this work, we improve this gap exponentially to an Ω(1/(log n log log n))-MMS guarantee. In addition to this, we prove that when the valuation functions are fractionally subadditive, a 1/4.6-MMS allocation is guaranteed to exist. This also improves upon the previous bound of 1/5-MMS guarantee for the fractionally subadditive setting.
Masoud Seddighin, Saeed Seddighin
null
null
2,022
aaai
Heterogeneous Facility Location with Limited Resources
null
We initiate the study of the heterogeneous facility location problem with limited resources. We mainly focus on the fundamental case where a set of agents are positioned in the line segment [0,1] and have approval preferences over two available facilities. A mechanism takes as input the positions and the preferences of the agents, and chooses to locate a single facility based on this information. We study mechanisms that aim to maximize the social welfare (the total utility the agents derive from facilities they approve), under the constraint of incentivizing the agents to truthfully report their positions and preferences. We consider three different settings depending on the level of agent-related information that is public or private. For each setting, we design deterministic and randomized strategyproof mechanisms that achieve a good approximation of the optimal social welfare, and complement these with nearly-tight impossibility results.
Argyrios Deligkas, Aris Filos-Ratsikas, Alexandros A. Voudouris
null
null
2,022
aaai
Fair and Efficient Allocations of Chores under Bivalued Preferences
null
We study the problem of fair and efficient allocation of a set of indivisible chores to agents with additive cost functions. We consider the popular fairness notion of envy-freeness up to one good (EF1) with the efficiency notion of Pareto-optimality (PO). While it is known that EF1+PO allocations exist and can be computed in pseudo-polynomial time in the case of goods, the same problem is open for chores. Our first result is a strongly polynomial-time algorithm for computing an EF1+PO allocation for bivalued instances, where agents have (at most) two disutility values for the chores. To the best of our knowledge, this is the first non-trivial class of chores to admit an EF1+PO allocation and an efficient algorithm for its computation. We also study the problem of computing an envy-free (EF) and PO allocation for the case of divisible chores. While the existence of an EF+PO allocation is known via competitive equilibrium with equal incomes, its efficient computation is open. Our second result shows that for bivalued instances, an EF+PO allocation can be computed in strongly polynomial-time.
Jugal Garg, Aniket Murhekar, John Qin
null
null
2,022
aaai
The Price of Justified Representation
null
In multiwinner approval voting, the goal is to select k-member committees based on voters' approval ballots. A well-studied concept of proportionality in this context is the justified representation (JR) axiom, which demands that no large cohesive group of voters remains unrepresented. However, the JR axiom may conflict with other desiderata, such as coverage (maximizing the number of voters who approve at least one committee member) or social welfare (maximizing the number of approvals obtained by committee members). In this work, we investigate the impact of imposing the JR axiom (as well as the more demanding EJR axiom) on social welfare and coverage. Our approach is threefold: we derive worst-case bounds on the loss of welfare/coverage that is caused by imposing JR, study the computational complexity of finding 'good' committees that provide JR (obtaining a hardness result, an approximation algorithm, and an exact algorithm for one-dimensional preferences), and examine this setting empirically on several synthetic datasets.
Edith Elkind, Piotr Faliszewski, Ayumi Igarashi, Pasin Manurangsi, Ulrike Schmidt-Kraepelin, Warut Suksompong
null
null
2,022
aaai
Theory of and Experiments on Minimally Invasive Stability Preservation in Changing Two-Sided Matching Markets
null
Following up on purely theoretical work, we contribute further theoretical insights into adapting stable two-sided matchings to change. Moreover, we perform extensive empirical studies hinting at numerous practically useful properties. Our theoretical extensions include the study of new problems (that is, incremental variants of Almost Stable Marriage and Hospital Residents), focusing on their (parameterized) computational complexity and the equivalence of various change types (thus simplifying algorithmic and complexity-theoretic studies for various natural change scenarios). Our experimental findings reveal, for instance, that allowing the new matching to be blocked by a few pairs significantly decreases the difference between the old and the new matching.
Niclas Boehmer, Klaus Heeger, Rolf Niedermeier
null
null
2,022
aaai
Complexity of Deliberative Coalition Formation
null
Elkind et al. (AAAI'21) introduced a model for deliberative coalition formation, where a community wishes to identify a strongly supported proposal from a space of alternatives, in order to change the status quo. In their model, agents and proposals are points in a metric space, agents' preferences are determined by distances, and agents deliberate by dynamically forming coalitions around proposals that they prefer over the status quo. The deliberation process operates via k-compromise transitions, where agents from k (current) coalitions come together to form a larger coalition in order to support a (perhaps new) proposal, possibly leaving behind some of the dissenting agents from their old coalitions. A deliberation succeeds if it terminates by identifying a proposal with the largest possible support. For deliberation in d dimensions, Elkind et al. consider two variants of their model: in the Euclidean model, proposals and agent locations are points in R^d and the distance is measured according to the Euclidean norm ||·||_2; and in the hypercube model, proposals and agent locations are vertices of the d-dimensional hypercube and the metric is the Hamming distance. They show that in the Euclidean model 2-compromises are guaranteed to succeed, but in the hypercube model for deliberation to succeed it may be necessary to use k-compromises with k >= d. We complement their analysis by (1) proving that in both models it is hard to find a proposal with a high degree of support, and even a 2-compromise transition may be hard to compute; (2) showing that a sequence of 2-compromise transitions may be exponentially long; (3) strengthening the lower bound on the size of the compromise for the d-hypercube model from d to 2^Ω(d).
Edith Elkind, Abheek Ghosh, Paul Goldberg
null
null
2,022
aaai
Weighted Fairness Notions for Indivisible Items Revisited
null
We revisit the setting of fairly allocating indivisible items when agents have different weights representing their entitlements. First, we propose a parameterized family of relaxations for weighted envy-freeness and the same for weighted proportionality; the parameters indicate whether smaller-weight or larger-weight agents should be given a higher priority. We show that each notion in these families can always be satisfied, but any two cannot necessarily be fulfilled simultaneously. We then introduce an intuitive weighted generalization of maximin share fairness and establish the optimal approximation of it that can be guaranteed. Furthermore, we characterize the implication relations between the various weighted fairness notions introduced in this and prior work, and relate them to the lower and upper quota axioms from apportionment.
Mithun Chakraborty, Erel Segal-Halevi, Warut Suksompong
null
null
2,022
aaai
On Improving Resource Allocations by Sharing
null
Given an initial resource allocation, where some agents may envy others or where a different distribution of resources might lead to higher social welfare, our goal is to improve the allocation without reassigning resources. We consider a sharing concept allowing resources being shared with social network neighbors of the resource owners. To this end, we introduce a formal model that allows a central authority to compute an optimal sharing between neighbors based on an initial allocation. Advocating this point of view, we focus on the most basic scenario where a resource may be shared by two neighbors in a social network and each agent can participate in a bounded number of sharings. We present algorithms for optimizing utilitarian and egalitarian social welfare of allocations and for reducing the number of envious agents. In particular, we examine the computational complexity with respect to several natural parameters. Furthermore, we study cases with restricted social network structures and, among others, devise polynomial-time algorithms in path- and tree-like (hierarchical) social networks.
Robert Bredereck, Andrzej Kaczmarczyk, Junjie Luo, Rolf Niedermeier, Florian Sachse
null
null
2,022
aaai
Combating Collusion Rings Is Hard but Possible
null
A recent report of Littmann published in the Communications of the ACM outlines the existence and the fatal impact of collusion rings in academic peer reviewing. We introduce and analyze the problem Cycle-Free Reviewing that aims at finding a review assignment without the following kind of collusion ring: A sequence of reviewers each reviewing a paper authored by the next reviewer in the sequence (with the last reviewer reviewing a paper of the first), thus creating a review cycle where each reviewer gives favorable reviews. As a result, all papers in that cycle have a high chance of acceptance independent of their respective scientific merit. We observe that review assignments computed using a standard Linear Programming approach typically admit many short review cycles. On the negative side, we show that Cycle-Free Reviewing is NP-hard in various restricted cases (e.g., when every author is qualified to review all papers and one wants to prevent authors from reviewing each other's or their own papers, or when every author has only one paper and is qualified to review only a few papers). On the positive side, among others, we show that, in some realistic settings, an assignment without any review cycles of small length always exists. This result also gives rise to an efficient heuristic for computing (weighted) cycle-free review assignments, which we show to be of excellent quality in practice.
Niclas Boehmer, Robert Bredereck, André Nichterlein
null
null
2,022
aaai
The Secretary Problem with Competing Employers on Random Edge Arrivals
null
The classic secretary problem concerns the problem of an employer facing a random sequence of candidates and making online hiring decisions to try to hire the best candidate. In this paper, we study a game-theoretic generalization of the secretary problem where a set of employers compete with each other to hire the best candidate. Different from previous secretary market models, our model assumes that the sequence of candidates arriving at each employer is uniformly random but independent from other sequences. We consider two versions of this secretary game where employers can have adaptive or non-adaptive strategies, and provide characterizations of the best response and Nash equilibrium of each game.
Xiaohui Bei, Shengyu Zhang
null
null
2,022
aaai
Sequential Blocked Matching
null
We consider a sequential blocked matching (SBM) model where strategic agents repeatedly report ordinal preferences over a set of services to a central planner. The planner's goal is to elicit agents' true preferences and design a policy that matches services to agents in order to maximize the expected social welfare with the added constraint that each matched service can be blocked or unavailable for a number of time periods. Naturally, SBM models the repeated allocation of reusable services to a set of agents where each allocated service becomes unavailable for a fixed duration. We first consider the offline SBM setting, where the strategic agents are aware of their true preferences. We measure the performance of any policy by distortion, the worst-case multiplicative approximation guaranteed by any policy. For the setting with s services, we establish lower bounds of Ω(s) and Ω(√s) on the distortions of any deterministic and randomised mechanisms, respectively. We complement these results by providing deterministic and randomised policies based on random serial dictatorship that are approximately truthful (as measured by the incentive ratio) and match our lower bounds. Our results show that there is a significant improvement if one considers the class of randomised policies. Finally, we consider the online SBM setting with bandit feedback where each agent is initially unaware of her true preferences, and the planner must facilitate each agent in the learning of their preferences through the matching of services over time. We design an approximately truthful mechanism based on the explore-then-commit paradigm, which achieves logarithmic dynamic approximate regret.
Nicholas Bishop, Hau Chan, Debmalya Mandal, Long Tran-Thanh
null
null
2,022
aaai
Universal and Tight Online Algorithms for Generalized-Mean Welfare
null
We study fair and efficient allocation of divisible goods, in an online manner, among n agents. The goods arrive online in a sequence of T time periods. The agents' values for a good are revealed only after its arrival, and the online algorithm needs to fractionally allocate the good, immediately and irrevocably, among the agents. Towards a unifying treatment of fairness and economic efficiency objectives, we develop an algorithmic framework for finding online allocations to maximize the generalized mean of the values received by the agents. In particular, working with the assumption that each agent's value for the grand bundle of goods is appropriately scaled, we address online maximization of p-mean welfare. Parameterized by an exponent term p in (-infty, 1], these means encapsulate a range of welfare functions, including social welfare (p=1), egalitarian welfare (p to -infty), and Nash social welfare (p to 0). We present a simple algorithmic template that takes a threshold as input and, with judicious choices for this threshold, leads to both universal and tailored competitive guarantees. First, we show that one can compute online a single allocation that O(sqrt(n) log n)-approximates the optimal p-mean welfare for all p
Siddharth Barman, Arindam Khan, Arnab Maiti
null
null
2,022
aaai
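A minimal sketch of the p-mean welfare objective described in the abstract above, with the Nash and egalitarian welfares handled as limiting cases; the helper name and the treatment of p = 0 via the geometric mean are assumptions for illustration, not code from the paper.

import math

def p_mean_welfare(utilities, p):
    # Generalized mean of the agents' utilities:
    #   p = 1      -> arithmetic mean (utilitarian welfare up to scaling)
    #   p -> 0     -> geometric mean (Nash social welfare)
    #   p -> -inf  -> minimum (egalitarian welfare)
    n = len(utilities)
    if p == 0:
        return math.exp(sum(math.log(u) for u in utilities) / n)
    if p == float("-inf"):
        return min(utilities)
    return (sum(u ** p for u in utilities) / n) ** (1.0 / p)

utils = [4.0, 1.0, 2.0]
for p in (1, 0.5, 0, -1, float("-inf")):
    print(p, round(p_mean_welfare(utils, p), 4))   # decreases monotonically as p decreases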
Individual Representation in Approval-Based Committee Voting
null
When selecting multiple candidates based on approval preferences of agents, the proportional representation of agents' opinions is an important and well-studied desideratum. Existing criteria for evaluating the representativeness of outcomes focus on groups of agents and demand that sufficiently large and cohesive groups are "represented" in the sense that candidates approved by some group members are selected. Crucially, these criteria say nothing about the representation of individual agents, even if these agents are members of groups that deserve representation. In this paper, we formalize the concept of individual representation (IR) and explore to which extent, and under which circumstances, it can be achieved. We show that checking whether an IR outcome exists is computationally intractable, and we verify that all common approval-based voting rules may fail to provide IR even in cases where this is possible. We then focus on domain restrictions and establish an interesting contrast between "voter interval" and "candidate interval" preferences. This contrast can also be observed in our experimental results, where we analyze the attainability of IR for realistic preference profiles.
Markus Brill, Jonas Israel, Evi Micha, Jannik Peters
null
null
2,022
aaai
Almost Full EFX Exists for Four Agents
null
The existence of EFX allocations of goods is a major open problem in fair division, even for additive valuations. The current state of the art is that no setting where EFX allocations are impossible is known, and yet, existence results are known only for very restricted settings, such as: (i) agents with identical valuations, (ii) 2 agents, and (iii) 3 agents with additive valuations. It is also known that EFX exists if one can leave n-1 items unallocated, where n is the number of agents. We develop new techniques that allow us to push the boundaries of the enigmatic EFX problem beyond these known results, and (arguably) to simplify proofs of earlier results. Our main result is that every setting with 4 additive agents admits an EFX allocation that leaves at most a single item unallocated. Beyond our main result, we introduce a new class of valuations, termed nice cancelable, which includes additive, unit-demand, budget-additive and multiplicative valuations, among others. Using our new techniques, we show that both our results and previous results for additive valuations extend to nice cancelable valuations.
Ben Berger, Avi Cohen, Michal Feldman, Amos Fiat
null
null
2,022
aaai
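For readers unfamiliar with EFX, the following sketch spells out the condition discussed in the abstract above, for additive valuations: no agent envies another agent's bundle once any single good is removed from it. The function names and the toy instance are assumptions for illustration; the paper's contribution is an existence proof, not this check.

from itertools import product

def value(v_i, bundle):
    # additive valuation: v_i[g] is the agent's value for good g
    return sum(v_i[g] for g in bundle)

def is_efx(alloc, valuations):
    # EFX: for every pair (i, j) and every good g in j's bundle,
    # i values her own bundle at least as much as j's bundle minus g
    n = len(alloc)
    for i, j in product(range(n), repeat=2):
        if i == j:
            continue
        for g in alloc[j]:
            if value(valuations[i], alloc[i]) < value(valuations[i], alloc[j] - {g}):
                return False
    return True

valuations = [{0: 5, 1: 1, 2: 1}, {0: 1, 1: 3, 2: 3}]   # two additive agents
print(is_efx([{0}, {1, 2}], valuations))                # True
print(is_efx([{1}, {0, 2}], valuations))                # False: agent 0 envies even after good 2 is removed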
A Calculus for Computing Structured Justifications for Election Outcomes
null
In the context of social choice theory, we develop a tableau-based calculus for reasoning about voting rules. This calculus can be used to obtain structured explanations for why a given set of axioms justifies a given election outcome for a given profile of voter preferences. We then show how to operationalise this calculus, using a combination of SAT solving and answer set programming, to arrive at a flexible framework for presenting human-readable justifications to users.
Arthur Boixel, Ulle Endriss, Ronald de Haan
null
null
2,022
aaai
Truthful Cake Sharing
null
The classic cake cutting problem concerns the fair allocation of a heterogeneous resource among interested agents. In this paper, we study a public goods variant of the problem, where instead of competing with one another for the cake, the agents all share the same subset of the cake which must be chosen subject to a length constraint. We focus on the design of truthful and fair mechanisms in the presence of strategic agents who have piecewise uniform utilities over the cake. On the one hand, we show that the leximin solution is truthful and moreover maximizes an egalitarian welfare measure among all truthful and position oblivious mechanisms. On the other hand, we demonstrate that the maximum Nash welfare solution is truthful for two agents but not in general. Our results assume that mechanisms can block each agent from accessing parts that the agent does not claim to desire; we provide an impossibility result when blocking is not allowed.
Xiaohui Bei, Xinhang Lu, Warut Suksompong
null
null
2,022
aaai
Liquid Democracy with Ranked Delegations
null
Liquid democracy is a novel paradigm for collective decision-making that gives agents the choice between casting a direct vote or delegating their vote to another agent. We consider a generalization of the standard liquid democracy setting by allowing agents to specify multiple potential delegates, together with a preference ranking among them. This generalization increases the number of possible delegation paths and enables higher participation rates because fewer votes are lost due to delegation cycles or abstaining agents. In order to implement this generalization of liquid democracy, we need to find a principled way of choosing between multiple delegation paths. In this paper, we provide a thorough axiomatic analysis of the space of delegation rules, i.e., functions assigning a feasible delegation path to each delegating agent. In particular, we prove axiomatic characterizations as well as an impossibility result for delegation rules. We also analyze requirements on delegation rules that have been suggested by practitioners, and introduce novel rules with attractive properties. By performing an extensive experimental analysis on synthetic as well as real-world data, we compare delegation rules with respect to several quantitative criteria relating to the chosen paths and the resulting distribution of voting power. Our experiments reveal that delegation rules can be aligned on a spectrum reflecting an inherent trade-off between competing objectives.
Markus Brill, Théo Delemazure, Anne-Marie George, Martin Lackner, Ulrike Schmidt-Kraepelin
null
null
2,022
aaai
Fair and Truthful Giveaway Lotteries
null
We consider a setting where a large number of agents are all interested in attending some public resource of limited capacity. Attendance is thus allotted by lottery. If agents arrive individually, then randomly choosing the agents, one by one, is a natural, fair and efficient solution. We consider the case where agents are organized in groups (e.g. families, friends), the members of each of which must all be admitted together. We study the question of how best to design such lotteries. We first establish the desired properties of such lotteries, in terms of fairness and efficiency, and define the appropriate notions of strategy proofness (providing that agents cannot gain by misrepresenting the true groups, e.g. joining or splitting groups). We establish inter-relationships between the different properties, and identify properties that cannot be fulfilled simultaneously (e.g. leximin optimality and strong group strategy proofness). Our main contribution is a polynomial-time mechanism for the problem, which guarantees many of the desired properties, including: leximin optimality, Pareto-optimality, anonymity, group strategy proofness, and adjunctive strategy proofness (which provides that no benefit can be obtained by registering additional, uninterested or bogus, individuals). The mechanism approximates the utilitarian optimum to within a factor of 2, which, we prove, is optimal for any mechanism that guarantees any one of the following properties: egalitarian welfare optimality, leximin optimality, envy-freeness, and adjunctive strategy proofness.
Tal Arbiv, Yonatan Aumann
null
null
2,022
aaai
A Little Charity Guarantees Fair Connected Graph Partitioning
null
Motivated by fair division applications, we study a fair connected graph partitioning problem, in which an undirected graph with m nodes must be divided between n agents such that each agent receives a connected subgraph and the partition is fair. We study approximate versions of two fairness criteria: alpha-proportionality requires that each agent receive a subgraph with at least (1/alpha)*m/n nodes, and alpha-balancedness requires that the ratio between the sizes of the largest and smallest subgraphs be at most alpha. Unfortunately, there exist simple examples in which no partition is reasonably proportional or balanced. To circumvent this, we introduce the idea of charity. We show that by "donating" just n-1 nodes, we can guarantee the existence of 2-proportional and almost 2-balanced partitions (and find them in polynomial time), and that this result is almost tight. More generally, we chart the tradeoff between the size of charity and the approximation of proportionality or balancedness we can guarantee.
Ioannis Caragiannis, Evi Micha, Nisarg Shah
null
null
2,022
aaai
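The two fairness criteria from the abstract above reduce to simple size conditions once a connected partition is given; the sketch below (assumed names; connectivity of the parts is taken for granted and not checked) illustrates them on toy numbers, including nodes donated to charity.

def is_alpha_proportional(part_sizes, n_agents, m_nodes, alpha):
    # each agent's subgraph must contain at least (1/alpha) * m/n nodes
    return all(size >= m_nodes / (alpha * n_agents) for size in part_sizes)

def is_alpha_balanced(part_sizes, alpha):
    # ratio between largest and smallest subgraph is at most alpha
    return max(part_sizes) <= alpha * min(part_sizes)

# 12-node graph split among 3 agents, with 2 nodes "donated" to charity
sizes, donated = [5, 3, 2], 2
m, n = sum(sizes) + donated, len(sizes)
print(is_alpha_proportional(sizes, n, m, alpha=2))   # True: every part has at least 12/(2*3) = 2 nodes
print(is_alpha_balanced(sizes, alpha=2))             # False: 5 > 2 * 2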
Truth-Tracking via Approval Voting: Size Matters
null
Epistemic social choice aims at unveiling a hidden ground truth given votes, which are interpreted as noisy signals about it. We consider here a simple setting where votes consist of approval ballots: each voter approves a set of alternatives which they believe can possibly be the ground truth. Based on the intuitive idea that more reliable votes contain fewer alternatives, we define several noise models that are approval voting variants of the Mallows model. The likelihood-maximizing alternative is then characterized as the winner of a weighted approval rule, where the weight of a ballot decreases with its cardinality. We conducted experiments on three image annotation datasets; the results show that rules based on our noise models outperform standard approval voting, with the best performance obtained by a variant of the Condorcet noise model.
Tahar Allouche, Jérôme Lang, Florian Yger
null
null
2,022
aaai
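A toy version of the kind of cardinality-weighted approval rule the abstract above characterizes: each ballot's weight decreases with the number of alternatives it approves. The particular weight function 1/|ballot| below is an assumption for illustration; the paper derives the weights from its noise models.

from collections import defaultdict

def weighted_approval_winner(ballots, weight=lambda k: 1.0 / k):
    # each ballot contributes weight(|ballot|) to every alternative it approves;
    # the alternative with the highest total weighted score wins
    scores = defaultdict(float)
    for ballot in ballots:
        if ballot:                       # ignore empty ballots
            w = weight(len(ballot))
            for alt in ballot:
                scores[alt] += w
    return max(scores, key=scores.get)

ballots = [{"a"}, {"a"}, {"b", "c", "d"}, {"b", "c", "d"}, {"b"}]
print(weighted_approval_winner(ballots))                        # 'a' (weighted score 2.0 vs about 1.67 for 'b')
print(weighted_approval_winner(ballots, weight=lambda k: 1.0))  # 'b' (plain approval voting elects 'b')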
The Metric Distortion of Multiwinner Voting
null
We extend the recently introduced framework of metric distortion to multiwinner voting. In this framework, n agents and m alternatives are located in an underlying metric space. The exact distances between agents and alternatives are unknown. Instead, each agent provides a ranking of the alternatives, ordered from the closest to the farthest. Typically, the goal is to select a single alternative that approximately minimizes the total distance from the agents, and the worst-case approximation ratio is termed distortion. In the case of multiwinner voting, the goal is to select a committee of k alternatives that (approximately) minimizes the total cost to all agents. We consider the scenario where the cost of an agent for a committee is her distance from the q-th closest alternative in the committee. We reveal a surprising trichotomy on the distortion of multiwinner voting rules in terms of k and q: The distortion is unbounded when q
Ioannis Caragiannis, Nisarg Shah, Alexandros A. Voudouris
null
null
2,022
aaai
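A small sketch of the cost model described in the abstract above: an agent's cost for a committee is her distance to the q-th closest committee member, and a rule's distortion compares the total cost of its committee to that of the optimal one. Names and the toy one-dimensional metric are assumptions for illustration.

def q_cost(agent, committee, dist, q):
    # distance from the agent to her q-th closest alternative in the committee (1-indexed)
    return sorted(dist(agent, c) for c in committee)[q - 1]

def total_cost(agents, committee, dist, q):
    return sum(q_cost(a, committee, dist, q) for a in agents)

# toy 1-D metric: agents and alternatives are points on a line
agents = [0.0, 0.1, 0.9, 1.0]
dist = lambda x, y: abs(x - y)

for committee in [(0.0, 1.0), (0.0, 0.5), (0.5, 1.0)]:
    print(committee, total_cost(agents, committee, dist, q=1))   # (0.0, 1.0) has the lowest total cost, 0.2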
Dimensionality and Coordination in Voting: The Distortion of STV
null
We study the performance of voting mechanisms from a utilitarian standpoint, under the recently introduced framework of metric-distortion, offering new insights along two main lines. First, if d represents the doubling dimension of the metric space, we show that the distortion of STV is O(d log log m), where m represents the number of candidates. For doubling metrics this implies an exponential improvement over the lower bound for general metrics, and as a special case it effectively answers a question left open by Skowron and Elkind (AAAI '17) regarding the distortion of STV under low-dimensional Euclidean spaces. More broadly, this constitutes the first nexus between the performance of any voting rule and the "intrinsic dimensionality" of the underlying metric space. We also establish a nearly-matching lower bound, refining the construction of Skowron and Elkind. Moreover, motivated by the efficiency of STV, we investigate whether natural learning rules can lead to low-distortion outcomes. Specifically, we introduce simple, deterministic and decentralized exploration/exploitation dynamics, and we show that they converge to a candidate with O(1) distortion.
Ioannis Anagnostides, Dimitris Fotakis, Panagiotis Patsilinakos
null
null
2,022
aaai
An Algorithmic Introduction to Savings Circles
null
Rotating savings and credit associations (roscas) are informal financial organizations common in settings where communities have reduced access to formal financial institutions. In a rosca, a fixed group of participants regularly contribute sums of money to a pot. This pot is then allocated periodically using lottery, aftermarket, or auction mechanisms. Roscas are empirically well-studied in economics. They are, however, challenging to study theoretically due to their dynamic nature. Typical economic analyses of roscas stop at coarse ordinal welfare comparisons to other credit allocation mechanisms, leaving much of roscas' ubiquity unexplained. In this work, we take an algorithmic perspective on the study of roscas. Building on techniques from the price of anarchy literature, we present worst-case welfare approximation guarantees. We further experimentally compare the welfare of outcomes as key features of the environment vary. These cardinal welfare analyses further rationalize the prevalence of roscas. We conclude by discussing several other promising avenues.
Rediet Abebe, Adam Eck, Christian Ikeokwu, Sam Taggart
null
null
2,022
aaai
Single-Agent Dynamics in Additively Separable Hedonic Games
null
The formation of stable coalitions is a central concern in multiagent systems. A considerable stream of research defines stability via the absence of beneficial deviations by single agents. Such deviations require an agent to improve her utility by joining another coalition while possibly imposing further restrictions on the consent of the agents in the welcoming as well as the abandoned coalition. While most of the literature focuses on unanimous consent, we also study consent decided by majority vote, and introduce two new stability notions that can be seen as local variants of popularity. We investigate these notions in additively separable hedonic games by pinpointing boundaries to computational complexity depending on the type of consent and restrictions on the utility functions. The latter restrictions shed new light on well-studied classes of games based on the appreciation of friends or the aversion to enemies. Many of our positive results follow from the Deviation Lemma, a general combinatorial observation, which can be leveraged to prove the convergence of simple and natural single-agent dynamics under fairly general conditions.
Felix Brandt, Martin Bullinger, Leo Tappe
null
null
2,022
aaai
Truthful and Fair Mechanisms for Matroid-Rank Valuations
null
We study the problem of allocating indivisible goods among strategic agents. We focus on settings wherein monetary transfers are not available and each agent's private valuation is a submodular function with binary marginals, i.e., the agents' valuations are matroid-rank functions. In this setup, we establish a notable dichotomy between two of the most well-studied fairness notions in discrete fair division; specifically, between envy-freeness up to one good (EF1) and maximin shares (MMS). First, we show that a known Pareto-efficient mechanism is group strategy-proof for finding EF1 allocations, under matroid-rank valuations. The group strategy-proofness guarantee strengthens an existing result that establishes truthfulness (individually for each agent) in the same context. Our result also generalizes prior work from binary additive valuations to the matroid-rank case. Next, we establish that an analogous positive result cannot be achieved for MMS, even when considering truthfulness on an individual level. Specifically, we prove that, for matroid-rank valuations, there does not exist a truthful mechanism that is index oblivious, Pareto efficient, and maximin fair. For establishing our results, we develop a characterization of truthful mechanisms for matroid-rank functions. This characterization in fact holds for a broader class of valuations (specifically, holds for binary XOS functions) and might be of independent interest.
Siddharth Barman, Paritosh Verma
null
null
2,022
aaai
Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations
null
In attempts to "explain" predictions of machine learning models, researchers have proposed hundreds of techniques for attributing predictions to features that are deemed important. While these attributions are often claimed to hold the potential to improve human "understanding" of the models, surprisingly little work explicitly evaluates progress towards this aspiration. In this paper, we conduct a crowdsourcing study, where participants interact with deception detection models that have been trained to distinguish between genuine and fake hotel reviews. They are challenged both to simulate the model on fresh reviews, and to edit reviews with the goal of lowering the probability of the originally predicted class. Successful manipulations would lead to an adversarial example. During the training (but not the test) phase, input spans are highlighted to communicate salience. Through our evaluation, we observe that for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase when compared to the no-explanation control. For the BERT-based classifier, popular local explanations do not improve their ability to reduce the model confidence over the no-explanation case. Remarkably, when the explanation for the BERT model is given by the (global) attributions of a linear model trained to imitate the BERT model, people can effectively manipulate the model.
Siddhant Arora, Danish Pruthi, Norman Sadeh, William W. Cohen, Zachary C. Lipton, Graham Neubig
null
null
2,022
aaai
Truthful Aggregation of Budget Proposals with Proportionality Guarantees
null
We study a participatory budgeting problem, where a set of strategic agents wish to split a divisible budget among different projects by aggregating their proposals on a single division. Unfortunately, the straightforward rule that divides the budget proportionally is susceptible to manipulation. Recently, a class of truthful mechanisms has been proposed, namely the moving phantom mechanisms. One such mechanism satisfies the proportionality property, in the sense that in the extreme case where all agents prefer a single project to receive the whole amount, the budget is assigned proportionally. While proportionality is a naturally desired property, it is defined over a limited type of preference profiles. To address this, we expand the notion of proportionality, by proposing a quantitative framework that evaluates a budget aggregation mechanism according to its worst-case distance from the proportional allocation. Crucially, this is defined for every preference profile. We study this measure on the class of moving phantom mechanisms, and we provide approximation guarantees. For two projects, we show that the Uniform Phantom mechanism is optimal among all truthful mechanisms. For three projects, we propose a new, proportional mechanism that is optimal among all moving phantom mechanisms. Finally, we provide impossibility results regarding the approximability of moving phantom mechanisms.
Ioannis Caragiannis, George Christodoulou, Nicos Protopapas
null
null
2,022
aaai
Maximizing Nash Social Welfare in 2-Value Instances
null
We consider the problem of maximizing the Nash social welfare when allocating a set G of indivisible goods to a set N of agents. We study instances, in which all agents have 2-value additive valuations: The value of every agent for every good is either p or q, where p and q are integers and p2. In terms of approximation, we present positive and negative results for general p and q. We show that our algorithm obtains an approximation ratio of at most 1.0345. Moreover, we prove that the problem is APX-hard, with a lower bound of 1.000015 achieved at p/q = 4/5.
Hannaneh Akrami, Bhaskar Ray Chaudhury, Martin Hoefer, Kurt Mehlhorn, Marco Schmalhofer, Golnoosh Shahkarami, Giovanna Varricchio, Quentin Vermande, Ernest van Wijland
null
null
2,022
aaai
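For concreteness, the sketch below evaluates the Nash social welfare objective (the geometric mean of agents' utilities) on a tiny 2-value instance by brute force over all allocations; this only illustrates the objective and is not the polynomial-time algorithm from the paper. All names and the toy values are assumptions.

from itertools import product
import math

def nash_welfare(alloc, values):
    # geometric mean of the agents' additive utilities under the allocation
    utils = [sum(values[i][g] for g in bundle) for i, bundle in enumerate(alloc)]
    if any(u == 0 for u in utils):
        return 0.0
    return math.exp(sum(math.log(u) for u in utils) / len(utils))

# 2-value instance: every value is either p = 1 or q = 3
values = [
    [3, 1, 1, 3],   # agent 0's values for goods 0..3
    [1, 3, 3, 1],   # agent 1's values for goods 0..3
]
n, m = len(values), len(values[0])

best = max(
    nash_welfare([[g for g in range(m) if owner[g] == i] for i in range(n)], values)
    for owner in product(range(n), repeat=m)
)
print(round(best, 4))   # 6.0: each agent gets the two goods she values at q = 3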
How General-Purpose Is a Language Model? Usefulness and Safety with Human Prompters in the Wild
null
The new generation of language models is reported to solve some extraordinary tasks the models were never trained for specifically, in few-shot or zero-shot settings. However, these reports usually cherry-pick the tasks, use the best prompts, and unwrap or extract the solutions leniently even if they are followed by nonsensical text. In sum, they are specialised results for one domain, a particular way of using the models and interpreting the results. In this paper, we present a novel theoretical evaluation framework and a distinctive experimental study assessing language models as general-purpose systems when used directly by human prompters --- in the wild. For a useful and safe interaction in these increasingly common conditions, we need to understand when the model fails because of a lack of capability or a misunderstanding of the user's intents. Our results indicate that language models such as GPT-3 have a limited understanding of human commands and are far from becoming general-purpose systems in the wild.
Pablo Antonio Moreno Casares, Bao Sheng Loe, John Burden, Sean hEigeartaigh, José Hernández-Orallo
null
null
2,022
aaai
Locally Fair Partitioning
null
We model the societal task of redistricting political districts as a partitioning problem: Given a set of n points in the plane, each belonging to one of two parties, and a parameter k, our goal is to compute a partition P of the plane into regions so that each region contains roughly s = n/k points. P should satisfy a notion of "local" fairness, which is related to the notion of core, a well-studied concept in cooperative game theory. A region is associated with the majority party in that region, and a point is unhappy in P if it belongs to the minority party. A group D of roughly s contiguous points is called a deviating group with respect to P if majority of points in D are unhappy in P. The partition P is locally fair if there is no deviating group with respect to P. This paper focuses on a restricted case when points lie in 1D. The problem is non-trivial even in this case. We consider both adversarial and "beyond worst-case" settings for this problem. For the former, we characterize the input parameters for which a locally fair partition always exists; we also show that a locally fair partition may not exist for certain parameters. We then consider input models where there are "runs" of red and blue points. For such clustered inputs, we show that a locally fair partition may not exist for certain values of s, but an approximate locally fair partition exists if we allow some regions to have smaller sizes. We finally present a polynomial-time algorithm for computing a locally fair partition if one exists.
Pankaj K. Agarwal, Shao-Heng Ko, Kamesh Munagala, Erin Taylor
null
null
2,022
aaai
Role of Human-AI Interaction in Selective Prediction
null
Recent work has shown the potential benefit of selective prediction systems that can learn to defer to a human when the predictions of the AI are unreliable, particularly to improve the reliability of AI systems in high-stakes applications like healthcare or conservation. However, most prior work assumes that human behavior remains unchanged when they solve a prediction task as part of a human-AI team as opposed to by themselves. We show that this is not the case by performing experiments to quantify human-AI interaction in the context of selective prediction. In particular, we study the impact of communicating different types of information to humans about the AI system's decision to defer. Using real-world conservation data and a selective prediction system that improves expected accuracy over that of the human or AI system working individually, we show that this messaging has a significant impact on the accuracy of human judgements. Our results study two components of the messaging strategy: 1) Whether humans are informed about the prediction of the AI system and 2) Whether they are informed about the decision of the selective prediction system to defer. By manipulating these messaging components, we show that it is possible to significantly boost human performance by informing the human of the decision to defer, but not revealing the prediction of the AI. We therefore show that it is vital to consider how the decision to defer is communicated to a human when designing selective prediction systems, and that the composite accuracy of a human-AI team must be carefully evaluated using a human-in-the-loop framework.
Elizabeth Bondi, Raphael Koster, Hannah Sheahan, Martin Chadwick, Yoram Bachrach, Taylan Cemgil, Ulrich Paquet, Krishnamurthy Dvijotham
null
null
2,022
aaai
Efficiency of Ad Auctions with Price Displaying
null
Most economic reports suggest that almost half of the market value unlocked by artificial intelligence (AI) by the next decade (about 9 trillion USD per year) will be in marketing & sales. In particular, AI will allow the optimization of more and more intricate economic settings in which multiple different activities can be automated jointly. A relatively recent example is that of ad auctions in which similar products or services are displayed together with their prices, thus merging advertising and pricing on a single website. This is the case, e.g., of Google Hotel Ads and TripAdvisor. More precisely, as in a classical ad auction, the ranking of the ads depends on the advertisers' bids, while, differently from classical ad auctions, the price is displayed together with the ad, so as to provide a direct comparison among the prices and thus dramatically affect the behavior of the users. This paper investigates how displaying prices and ads together conditions the properties of the main economic mechanisms such as VCG and GSP. Initially, we focus on the direct-revelation mechanism, showing that prices are chosen by the mechanism once given the advertisers' reports. We also provide an efficient algorithm to compute the optimal allocation given the private information reported by the advertisers. Then, with both VCG and GSP payments, we show the inefficiency in terms of Price of Anarchy (PoA) and Price of Stability (PoS) over the social welfare and the mechanism's revenue when the advertisers choose the prices. The main results show that, with both VCG and GSP, the PoS over the revenue may be unbounded even with two slots, while the PoA over the social welfare may be as large as the number of slots. Finally, we show that, under some assumptions, simple modifications to VCG and GSP allow us to obtain a better PoS over the revenue.
Matteo Castiglioni, Diodato Ferraioli, Nicola Gatti, Alberto Marchesi, Giulia Romano
null
null
2,022
aaai
Adversarial Learning from Crowds
null
Learning from Crowds (LFC) seeks to induce a high-quality classifier from training instances that are linked to a range of possibly noisy annotations provided by crowdsourcing workers with various levels of skill and their own preconceptions. Recent studies on LFC focus on designing new methods to improve the performance of the classifier trained from crowdsourced labeled data. To this day, however, the security aspects of LFC systems remain under-explored. In this work, we seek to bridge this gap. We first show that LFC models are vulnerable to adversarial examples---small changes to input data can cause classifiers to make prediction mistakes. Second, we propose an approach, A-LFC, for training a robust classifier from crowdsourced labeled data. Our empirical results on three real-world datasets show that the proposed approach can substantially improve the performance of the trained classifier even in the presence of adversarial examples. On average, A-LFC has 10.05% and 11.34% higher test robustness than the state of the art in the white-box and black-box attack settings, respectively.
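A minimal sketch of the vulnerability the paper demonstrates: train an ordinary classifier on labels aggregated from simulated noisy workers (majority vote stands in for a learning-from-crowds model), then craft FGSM adversarial examples against it and observe the accuracy drop. The data, worker skills, and attack budget are illustrative assumptions; the robust training method A-LFC is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic instances with a linear ground-truth concept.
n, d = 500, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y_true = (X @ w_true > 0).astype(int)

# Simulated crowd workers with different skill levels; aggregate by majority vote
# (a simple stand-in for a learning-from-crowds label model).
skills = [0.9, 0.75, 0.6]
votes = np.stack([np.where(rng.uniform(size=n) < p, y_true, 1 - y_true) for p in skills])
y_crowd = (votes.mean(axis=0) > 0.5).astype(int)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train logistic regression on the crowd labels with plain gradient descent.
w = np.zeros(d)
for _ in range(2000):
    p = sigmoid(X @ w)
    w -= 0.1 * X.T @ (p - y_crowd) / n

# FGSM attack: perturb each input in the direction of the loss gradient.
eps = 0.3
grad_x = (sigmoid(X @ w) - y_crowd)[:, None] * w[None, :]
X_adv = X + eps * np.sign(grad_x)

acc = lambda A, y: (((A @ w) > 0).astype(int) == y).mean()
print(f"clean accuracy:       {acc(X, y_true):.3f}")
print(f"adversarial accuracy: {acc(X_adv, y_true):.3f}")
```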
Pengpeng Chen, Hailong Sun, Yongqiang Yang, Zhijun Chen
null
null
2,022
aaai
Deceptive Decision-Making under Uncertainty
null
We study the design of autonomous agents that are capable of deceiving outside observers about their intentions while carrying out tasks in stochastic, complex environments. By modeling the agent's behavior as a Markov decision process, we consider a setting where the agent aims to reach one of multiple potential goals while deceiving outside observers about its true goal. We propose a novel approach to model observer predictions based on the principle of maximum entropy and to efficiently generate deceptive strategies via linear programming. The proposed approach enables the agent to exhibit a variety of tunable deceptive behaviors while ensuring the satisfaction of probabilistic constraints on the behavior. We evaluate the performance of the proposed approach via comparative user studies and present a case study on the streets of Manhattan, New York, using real travel time distributions.
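A minimal sketch of the observer side of this setting: under a maximum-entropy model, the observer's belief over candidate goals is proportional to exp(-beta * extra cost), where the extra cost measures how inefficient the observed partial path is with respect to each goal. The grid world, cost model, and deceptive path below are illustrative assumptions; the paper's linear-programming synthesis of deceptive strategies is not reproduced.

```python
import numpy as np

# Observer posterior over goals for an agent moving on a grid, using a
# maximum-entropy (Boltzmann) model: trajectories that make efficient
# progress toward a goal are exponentially more likely under that goal.

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def goal_posterior(path, goals, beta=1.0):
    # Max-ent likelihood of the path under each goal: exp(-beta * extra cost),
    # where extra cost = (steps taken + remaining distance) - direct distance.
    start, current = path[0], path[-1]
    steps = len(path) - 1
    scores = []
    for g in goals:
        extra = steps + manhattan(current, g) - manhattan(start, g)
        scores.append(np.exp(-beta * extra))
    scores = np.array(scores)
    return scores / scores.sum()

goals = [(6, 0), (0, 6)]                  # the agent's true goal is (6, 0)
deceptive_path = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2),
                  (3, 2), (4, 2), (4, 1), (4, 0)]
for t in range(2, len(deceptive_path) + 1):
    post = goal_posterior(deceptive_path[:t], goals)
    print(f"after {t - 1} steps, P(goal) = {np.round(post, 2)}")
```

The printed posterior initially favours the decoy goal (0, 6) and only later shifts toward the true goal (6, 0), which is the kind of tunable ambiguity a deceptive strategy would aim to induce.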
Yagiz Savas, Christos K. Verginis, Ufuk Topcu
null
null
2,022
aaai
FOCUS: Flexible Optimizable Counterfactual Explanations for Tree Ensembles
null
Model interpretability has become an important problem in machine learning (ML) due to the increased effect algorithmic decisions have on humans. Counterfactual explanations can help users understand not only why ML models make certain decisions, but also how these decisions can be changed. We frame the problem of finding counterfactual explanations as an optimization task and extend previous work that could only be applied to differentiable models. In order to accommodate non-differentiable models such as tree ensembles, we use probabilistic model approximations in the optimization framework. We introduce an approximation technique that is effective for finding counterfactual explanations for predictions of the original model and show that our counterfactual examples are significantly closer to the original instances than those produced by other methods specifically designed for tree ensembles.
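A minimal sketch of the core relaxation: replace each hard split in a small stump ensemble with a sigmoid so the score becomes differentiable in the input, then move the instance by gradient descent until the hard prediction flips, while penalising distance to the original point. The stumps, thresholds, and hyperparameters are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A toy "ensemble" of decision stumps: (feature, threshold, value_left, value_right).
# Positive total score => class 1, otherwise class 0.
stumps = [(0, 0.5, -1.0, 1.0), (1, -0.2, -0.5, 1.5), (0, 1.2, -0.3, 0.8)]

def hard_score(x):
    return sum(vl if x[f] <= t else vr for f, t, vl, vr in stumps)

def soft_score(x, sigma=0.1):
    # Differentiable approximation: soften each split with a sigmoid.
    s, grad = 0.0, np.zeros_like(x)
    for f, t, vl, vr in stumps:
        p = sigmoid((x[f] - t) / sigma)          # probability of taking the right branch
        s += (1 - p) * vl + p * vr
        grad[f] += p * (1 - p) * (vr - vl) / sigma
    return s, grad

def counterfactual(x0, target_sign=-1, lam=0.05, lr=0.05, steps=500):
    x = x0.copy()
    for _ in range(steps):
        s, g = soft_score(x)
        # Push the soft score toward the target sign; stay close to x0.
        grad = -target_sign * g + 2 * lam * (x - x0)
        x -= lr * grad
        if np.sign(hard_score(x)) == target_sign:
            break
    return x

x0 = np.array([0.9, 0.4])                  # hard_score(x0) > 0  => class 1
x_cf = counterfactual(x0, target_sign=-1)  # find a nearby class-0 example
print("original:", x0, "score:", hard_score(x0))
print("counterfactual:", np.round(x_cf, 3), "score:", round(hard_score(x_cf), 3))
```

The distance penalty lam trades off closeness to the original instance against how quickly the approximate prediction flips, which mirrors the optimization framing in the abstract.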
Ana Lucic, Harrie Oosterhuis, Hinda Haned, Maarten de Rijke
null
null
2,022
aaai
Discovering State and Action Abstractions for Generalized Task and Motion Planning
null
Generalized planning accelerates classical planning by finding an algorithm-like policy that solves multiple instances of a task. A generalized plan can be learned from a few training examples and applied to an entire domain of problems. Generalized planning approaches perform well in discrete AI planning problems that involve large numbers of objects and extended action sequences to achieve the goal. In this paper, we propose an algorithm for learning features, abstractions, and generalized plans for continuous robotic task and motion planning (TAMP) and examine the unique difficulties that arise when forced to consider geometric and physical constraints as a part of the generalized plan. Additionally, we show that these simple generalized plans learned from only a handful of examples can be used to improve the search efficiency of TAMP solvers.
Aidan Curtis, Tom Silver, Joshua B. Tenenbaum, Tomás Lozano-Pérez, Leslie Kaelbling
null
null
2,022
aaai
“I Don’t Think So”: Summarizing Policy Disagreements for Agent Comparison
null
With Artificial Intelligence on the rise, human interaction with autonomous agents becomes more frequent. Effective human-agent collaboration requires users to understand the agent's behavior, as failing to do so may cause reduced productivity, misuse or frustration. Agent strategy summarization methods are used to describe the strategy of an agent to users through demonstrations. A summary's objective is to maximize the user's understanding of the agent's aptitude by showcasing its behaviour in a selected set of world states. While such summaries have been shown to be useful, we show that current methods are limited when tasked with comparing agents, as each summary is independently generated for a specific agent. In this paper, we propose a novel method for generating dependent and contrastive summaries that emphasize the differences between agent policies by identifying states in which the agents disagree on the best course of action. We conducted user studies to assess the usefulness of disagreement-based summaries for identifying superior agents and conveying agent differences. Results show disagreement-based summaries lead to improved user performance compared to summaries generated using HIGHLIGHTS, a strategy summarization algorithm which generates summaries for each agent independently.
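A minimal sketch of the disagreement criterion: given two agents' Q-functions over a shared state space, collect the states where their greedy actions differ and rank them by how much value each agent believes is lost by following the other's choice. The random Q-tables below stand in for trained agents and are an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, k = 200, 4, 5

# Hypothetical Q-tables for two trained agents over the same state space.
q_a = rng.normal(size=(n_states, n_actions))
q_b = rng.normal(size=(n_states, n_actions))

best_a, best_b = q_a.argmax(axis=1), q_b.argmax(axis=1)
disagree = np.flatnonzero(best_a != best_b)

# Importance of a disagreement: how much value each agent believes is lost
# by following the other agent's preferred action in that state.
loss_a = q_a[disagree, best_a[disagree]] - q_a[disagree, best_b[disagree]]
loss_b = q_b[disagree, best_b[disagree]] - q_b[disagree, best_a[disagree]]
importance = loss_a + loss_b

summary_states = disagree[np.argsort(-importance)][:k]
print("states to show in the contrastive summary:", summary_states)
```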
Yotam Amitai, Ofra Amir
null
null
2,022
aaai
When Facial Expression Recognition Meets Few-Shot Learning: A Joint and Alternate Learning Framework
null
Human emotions involve basic and compound facial expressions. However, current research on facial expression recognition (FER) mainly focuses on basic expressions, and thus fails to address the diversity of human emotions in practical scenarios. Meanwhile, existing work on compound FER relies heavily on abundant labeled compound expression training data, which are often laboriously collected under the professional guidance of psychologists. In this paper, we study compound FER in the cross-domain few-shot learning setting, where only a few images of novel classes from the target domain are required as a reference. In particular, we aim to identify unseen compound expressions with the model trained on easily accessible basic expression datasets. To alleviate the problem of limited base classes in our FER task, we propose a novel Emotion Guided Similarity Network (EGS-Net), consisting of an emotion branch and a similarity branch, based on a two-stage learning framework. Specifically, in the first stage, the similarity branch is jointly trained with the emotion branch in a multi-task fashion. With the regularization of the emotion branch, we prevent the similarity branch from overfitting to sampled base classes that are highly overlapped across different episodes. In the second stage, the emotion branch and the similarity branch play a “two-student game” to alternately learn from each other, thereby further improving the inference ability of the similarity branch on unseen compound expressions. Experimental results on both in-the-lab and in-the-wild compound expression datasets demonstrate the superiority of our proposed method against several state-of-the-art methods.
Xinyi Zou, Yan Yan, Jing-Hao Xue, Si Chen, Hanzi Wang
null
null
2,022
aaai
Teaching Humans When to Defer to a Classifier via Exemplars
null
Expert decision makers are starting to rely on data-driven automated agents to assist them with various tasks. For this collaboration to perform properly, the human decision maker must have a mental model of when and when not to rely on the agent. In this work, we aim to ensure that human decision makers learn a valid mental model of the agent's strengths and weaknesses. To accomplish this goal, we propose an exemplar-based teaching strategy where humans solve a set of selected examples and with our help generalize from them to the domain. We present a novel parameterization of the human's mental model of the AI that applies a nearest neighbor rule in local regions surrounding the teaching examples. Using this model, we derive a near-optimal strategy for selecting a representative teaching set. We validate the benefits of our teaching strategy on a multi-hop question answering task with an interpretable AI model using crowd workers. We find that when workers draw the right lessons from the teaching stage, their task performance improves. We furthermore validate our method on a set of synthetic experiments.
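A minimal sketch of the exemplar-teaching idea: simulate the human's mental model as a nearest-neighbour rule over the teaching examples (trust the AI on a new case iff the AI was correct on the closest example seen), and greedily pick teaching examples so that the simulated trust decisions match the AI's actual reliability. The features, reliability region, and greedy selection are illustrative assumptions, not the paper's near-optimal strategy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical task instances: 2-D features and whether the AI answers correctly.
X = rng.normal(size=(300, 2))
ai_correct = (X[:, 0] + 0.5 * X[:, 1] > 0)      # the AI is reliable in one half-space

def mental_model(teaching_idx, X, ai_correct):
    # Nearest-neighbour mental model: the human trusts the AI on a new instance
    # iff the AI was correct on the closest teaching example.
    T = X[teaching_idx]
    nearest = np.linalg.norm(X[:, None, :] - T[None, :, :], axis=2).argmin(axis=1)
    return ai_correct[np.array(teaching_idx)][nearest]

def greedy_teaching_set(X, ai_correct, budget=6):
    chosen = []
    for _ in range(budget):
        best_gain, best_i = -1.0, None
        for i in range(len(X)):
            if i in chosen:
                continue
            trust = mental_model(chosen + [i], X, ai_correct)
            score = (trust == ai_correct).mean()   # how well simulated trust matches reality
            if score > best_gain:
                best_gain, best_i = score, i
        chosen.append(best_i)
    return chosen, best_gain

teaching_set, score = greedy_teaching_set(X, ai_correct)
print("teaching examples:", teaching_set, "simulated trust accuracy:", round(score, 3))
```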
Hussein Mozannar, Arvind Satyanarayan, David Sontag
null
null
2,022
aaai
Random Mapping Method for Large-Scale Terrain Modeling
null
The vast amount of data captured by robots in large-scale environments brings computing and storage bottlenecks to the typical methods of modeling the spaces the robots travel in. In order to efficiently construct a compact terrain model from uncertain, incomplete point cloud data of large-scale environments, in this paper, we first propose a novel feature mapping method, named random mapping, based on the fast random construction of basis functions, which can efficiently project the messy points in the low-dimensional space into a high-dimensional space where the points are approximately linearly distributed. Then, in this mapped space, we propose to learn a continuous linear regression model to represent the terrain. We show that this method can model the environments in much less computation time, memory consumption, and access time, with high accuracy. Furthermore, the models possess generalization capabilities comparable to their performance on the training set, and their inference accuracy gradually increases as the random mapping dimension increases. To better solve the large-scale environmental modeling problem, we adopt the idea of parallel computing to train the models. This strategy greatly reduces the wall-clock time of calculation without losing much accuracy. Experiments show the effectiveness of the random mapping method and the effects of some important parameters on its performance. Moreover, we evaluate the proposed terrain modeling method based on the random mapping method and compare its performance with popular typical methods and state-of-the-art methods.
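A minimal sketch of the mapping-then-regress pipeline: draw random basis functions once, map 2-D terrain coordinates into a higher-dimensional feature space, and fit a linear (ridge) model there that can be queried continuously. The random Fourier-style features, synthetic terrain, and ridge solver are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic terrain: elevation as a nonlinear function of (x, y), plus sensor noise.
n = 2000
XY = rng.uniform(-5, 5, size=(n, 2))
elev = np.sin(XY[:, 0]) * np.cos(0.5 * XY[:, 1]) + 0.1 * XY[:, 0] + 0.05 * rng.normal(size=n)

# Random mapping: project the 2-D inputs through randomly constructed basis functions.
D = 300                                     # random mapping dimension
W = rng.normal(scale=1.0, size=(2, D))
b = rng.uniform(0, 2 * np.pi, size=D)
phi = lambda P: np.cos(P @ W + b)           # points become roughly linearly regressable here

# Linear (ridge) regression in the mapped space.
lam = 1e-3
Phi = phi(XY)
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(D), Phi.T @ elev)

# Query the continuous terrain model at new locations.
XY_test = rng.uniform(-5, 5, size=(500, 2))
elev_true = np.sin(XY_test[:, 0]) * np.cos(0.5 * XY_test[:, 1]) + 0.1 * XY_test[:, 0]
pred = phi(XY_test) @ w
print("test RMSE:", round(float(np.sqrt(np.mean((pred - elev_true) ** 2))), 4))
```

Increasing the mapping dimension D trades memory and training time for accuracy, which matches the trend the abstract reports.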
Xu Liu, Decai Li, Yuqing He
null
null
2,022
aaai
Open Vocabulary Electroencephalography-to-Text Decoding and Zero-Shot Sentiment Classification
null
State-of-the-art brain-to-text systems have achieved great success in decoding language directly from brain signals using neural networks. However, current approaches are limited to small closed vocabularies which are far from enough for natural communication. In addition, most of the high-performing approaches require data from invasive devices (e.g., ECoG). In this paper, we extend the problem to open vocabulary Electroencephalography (EEG)-to-Text sequence-to-sequence decoding and zero-shot sentence sentiment classification on natural reading tasks. We hypothesize that the human brain functions as a special text encoder and propose a novel framework leveraging pre-trained language models (e.g., BART). Our model achieves a 40.1% BLEU-1 score on EEG-to-Text decoding and a 55.6% F1 score on zero-shot EEG-based ternary sentiment classification, which significantly outperforms supervised baselines. Furthermore, we show that our proposed model can handle data from various subjects and sources, showing great potential for a high-performance open vocabulary brain-to-text system once sufficient data is available. The code is made publicly available for research purposes at https://github.com/MikeWangWZHL/EEG-To-Text.
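A minimal sketch of one way to wire EEG features into a pre-trained BART model, as the abstract suggests: project each word-level EEG segment into the model's embedding space and train with the usual sequence-to-sequence loss. The feature dimension, projection layer, and toy tensors are assumptions for illustration; padding subtleties and the zero-shot sentiment head are omitted.

```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tok = BartTokenizer.from_pretrained("facebook/bart-base")
bart = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

eeg_dim = 840                                  # hypothetical per-word EEG feature size
proj = torch.nn.Linear(eeg_dim, bart.config.d_model)  # EEG features -> BART embedding space

# Toy batch: 2 sentences, 20 word-level EEG segments each.
eeg = torch.randn(2, 20, eeg_dim)
labels = tok(["a sample sentence", "another one"],
             return_tensors="pt", padding=True).input_ids
# Note: for simplicity, padded label tokens are not masked out of the loss here.

out = bart(inputs_embeds=proj(eeg), labels=labels)  # teacher-forced seq2seq loss
out.loss.backward()                                  # gradients flow into the projector too
print("training loss:", float(out.loss))
```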
Zhenhailong Wang, Heng Ji
null
null
2,022
aaai
Recurrent Neural Network Controllers Synthesis with Stability Guarantees for Partially Observed Systems
null
Neural network controllers have become popular in control tasks thanks to their flexibility and expressivity. Stability is a crucial property for safety-critical dynamical systems, while stabilization of partially observed systems, in many cases, requires controllers to retain and process long-term memories of the past. We consider the important class of recurrent neural networks (RNN) as dynamic controllers for nonlinear uncertain partially-observed systems, and derive convex stability conditions based on integral quadratic constraints, S-lemma and sequential convexification. To ensure stability during the learning and control process, we propose a projected policy gradient method that iteratively enforces the stability conditions in the reparametrized space taking advantage of mild additional information on system dynamics. Numerical experiments show that our method learns stabilizing controllers with fewer samples and achieves higher final performance compared with policy gradient.
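A minimal sketch of the projected-update pattern described above, with the paper's IQC/S-lemma stability set replaced by a much simpler stand-in: after each (simulated) policy-gradient step, the recurrent weight matrix is projected back so its spectral norm stays below 1, a crude sufficient condition for a contractive linear recurrence. The dynamics, gradient estimates, and bound are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                                       # hidden-state dimension of the RNN controller

def project_spectral(W, bound=0.99):
    # Project W onto {W : ||W||_2 <= bound} by clipping its singular values.
    U, s, Vt = np.linalg.svd(W)
    return U @ np.diag(np.minimum(s, bound)) @ Vt

W = rng.normal(scale=0.5, size=(n, n))      # recurrent weights being learned
for step in range(100):
    grad = rng.normal(scale=0.1, size=(n, n))   # stand-in for a policy-gradient estimate
    W = W + 0.1 * grad                          # gradient ascent on expected return
    W = project_spectral(W)                     # enforce the (simplified) stability set

print("final spectral norm:", round(float(np.linalg.svd(W, compute_uv=False).max()), 3))
```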
Fangda Gu, He Yin, Laurent El Ghaoui, Murat Arcak, Peter Seiler, Ming Jin
null
null
2,022
aaai
DeepVisualInsight: Time-Travelling Visualization for Spatio-Temporal Causality of Deep Classification Training
null
Understanding how the predictions of deep learning models are formed during the training process is crucial to improve model performance and fix model defects, especially when we need to investigate nontrivial training strategies such as active learning, and track the root cause of unexpected training results such as performance degeneration. In this work, we propose a time-travelling visual solution DeepVisualInsight (DVI), aiming to manifest the spatio-temporal causality while training a deep learning image classifier. The spatio-temporal causality demonstrates how the gradient-descent algorithm and various training data sampling techniques can influence and reshape the layout of learnt input representation and the classification boundaries in consecutive epochs. Such causality allows us to observe and analyze the whole learning process in the visible low-dimensional space. Technically, we propose four spatial and temporal properties and design our visualization solution to satisfy them. These properties preserve the most important information when projecting and inverse-projecting input samples between the visible low-dimensional and the invisible high-dimensional space, for causal analyses. Our extensive experiments show that, compared to baseline approaches, we achieve the best visualization performance regarding the spatial/temporal properties and visualization efficiency. Moreover, our case study shows that our visual solution can well reflect the characteristics of various training scenarios, showing good potential of DVI as a debugging tool for analyzing deep learning training processes.
Xianglin Yang, Yun Lin, Ruofan Liu, Zhenfeng He, Chao Wang, Jin Song Dong, Hong Mei
null
null
2,022
aaai
Compilation of Aggregates in ASP Systems
null
Answer Set Programming (ASP) is a well-known declarative AI formalism for knowledge representation and reasoning. State-of-the-art ASP implementations employ the ground&solve approach, and they have been successfully applied to industrial and academic problems. Nonetheless, there are classes of ASP programs whose evaluation is not efficient (sometimes not feasible) due to the combinatorial blow-up of the program produced by the grounding step. Recent research suggests that compilation-based techniques can mitigate the grounding bottleneck problem. However, no compilation-based technique has been developed for ASP programs that contain aggregates, which are among the most relevant and commonly-employed constructs of ASP. In this paper, we propose a compilation-based approach for ASP programs with aggregates. We implement it on top of a state-of-the-art ASP system, and evaluate its performance on publicly-available benchmarks. Experiments show that our approach is effective on ground-intensive ASP programs.
Giuseppe Mazzotta, Francesco Ricca, Carmine Dodaro
null
null
2,022
aaai