title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
HINT: Hierarchical Invertible Neural Transport for Density Estimation and Bayesian Inference | null | Many recent invertible neural architectures are based on coupling block designs where variables are divided into two subsets which serve as inputs of an easily invertible (usually affine) triangular transformation. While such a transformation is invertible, its Jacobian is very sparse and thus may lack expressiveness. This work presents a simple remedy by noting that subdivision and (affine) coupling can be repeated recursively within the resulting subsets, leading to an efficiently invertible block with dense, triangular Jacobian. By formulating our recursive coupling scheme via a hierarchical architecture, HINT allows sampling from a joint distribution p(y,x) and the corresponding posterior p(x\|y) using a single invertible network. We evaluate our method on some standard data sets and benchmark its full power for density estimation and Bayesian inference on a novel data set of 2D shapes in Fourier parameterization, which enables consistent visualization of samples for different dimensionalities. | Jakob Kruse, Gianluca Detommaso, Ullrich Köthe, Robert Scheichl | null | null | 2021 | aaai |
Asynchronous Optimization Methods for Efficient Training of Deep Neural Networks with Guarantees | null | Asynchronous distributed algorithms are a popular way to reduce synchronization costs in large-scale optimization, and in particular for neural network training. However, for nonsmooth and nonconvex objectives, few convergence guarantees exist beyond cases where closed-form proximal operator solutions are available. As training most popular deep neural networks corresponds to optimizing nonsmooth and nonconvex objectives, there is a pressing need for such convergence guarantees. In this paper, we analyze for the first time the convergence of stochastic asynchronous optimization for this general class of objectives. In particular, we focus on stochastic subgradient methods allowing for block variable partitioning, where the shared model is asynchronously updated by concurrent processes. To this end, we use a probabilistic model which captures key features of real asynchronous scheduling between concurrent processes. Under this model, we establish convergence with probability one to an invariant set for stochastic subgradient methods with momentum. From a practical perspective, one issue with the family of algorithms that we consider is that they are not efficiently supported by machine learning frameworks, which mostly focus on distributed data-parallel strategies. To address this, we propose a new implementation strategy for shared-memory based training of deep neural networks for a partitioned but shared model in single- and multi-GPU settings. Based on this implementation, we achieve on average a 1.2x speed-up in comparison to state-of-the-art training methods for popular image classification tasks, without compromising accuracy. | Vyacheslav Kungurtsev, Malcolm Egan, Bapi Chatterjee, Dan Alistarh | null | null | 2021 | aaai |
Nearly Linear-Time, Parallelizable Algorithms for Non-Monotone Submodular Maximization | null | We study combinatorial, parallelizable algorithms for maximization of a submodular function, not necessarily monotone, with respect to a cardinality constraint k. We improve the best approximation factor achieved by an algorithm that has optimal adaptivity and query complexity, up to logarithmic factors in the size of the ground set, from 0.039 to nearly 0.193. Heuristic versions of our algorithms are empirically validated to use a low number of adaptive rounds and total queries while obtaining solutions with high objective value in comparison with state-of-the-art approximation algorithms, including continuous algorithms that use the multilinear extension. | Alan Kuhnle | null | null | 2021 | aaai |
Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching | null | Knowledge distillation extracts general knowledge from a pretrained teacher network and provides guidance to a target student network. Most studies manually tie intermediate features of the teacher and student, and transfer knowledge through predefined links. However, manual selection often constructs ineffective links that limit the improvement from the distillation. There has been an attempt to address the problem, but it is still challenging to identify effective links under practical scenarios. In this paper, we introduce an effective and efficient feature distillation method utilizing all the feature levels of the teacher without manually selecting the links. Specifically, our method utilizes an attention-based meta network that learns relative similarities between features, and applies identified similarities to control distillation intensities of all possible pairs. As a result, our method determines competent links more efficiently than the previous approach and provides better performance on model compression and transfer learning tasks. Further qualitative analyses and ablative studies describe how our method contributes to better distillation. | Mingi Ji, Byeongho Heo, Sungrae Park | null | null | 2021 | aaai |
Neural Utility Functions | null | Current neural network architectures have no mechanism for explicitly reasoning about item trade-offs. Such trade-offs are important for popular tasks such as recommendation. The main idea of this work is to give neural networks inductive biases that are inspired by economic theories. To this end, we propose Neural Utility Functions, which directly optimize the gradients of a neural network so that they are more consistent with utility theory, a mathematical framework for modeling choice among items. We demonstrate that Neural Utility Functions can recover theoretical item relationships better than vanilla neural networks, analytically show existing neural networks are not quasi-concave and do not inherently reason about trade-offs, and that augmenting existing models with a utility loss function improves recommendation results. The Neural Utility Functions we propose are theoretically motivated, and yield strong empirical results. | Porter Jenkins, Ahmad Farag, J. Stockton Jenkins, Huaxiu Yao, Suhang Wang, Zhenhui Li | null | null | 2021 | aaai |
Constructing a Fair Classifier with Generated Fair Data | null | Fairness in machine learning is gaining increasing attention as it is directly related to real-world applications and social problems. Recent methods have been explored to alleviate the discrimination between certain demographic groups that are characterized by sensitive attributes (such as race, age, or gender). Some studies have found that the data itself is biased, so training directly on the data causes unfair decision making. Models directly trained on raw data can replicate or even exacerbate bias in the prediction between demographic groups. This leads to vastly different prediction performance in different demographic groups. In order to address this issue, we propose a new approach to improve machine learning fairness by generating fair data. We introduce a generative model to generate cross-domain samples w.r.t. multiple sensitive attributes. This ensures that we can generate an infinite number of samples that are balanced w.r.t. both the target label and the sensitive attributes to enhance fair prediction. By training the classifier solely with the synthetic data and then transferring the model to real data, we can overcome the under-representation problem, which is non-trivial since collecting real data is extremely time- and resource-consuming. We provide empirical evidence to demonstrate the benefit of our model with respect to both fairness and accuracy. | Taeuk Jang, Feng Zheng, Xiaoqian Wang | null | null | 2021 | aaai |
Compressing Deep Convolutional Neural Networks by Stacking Low-dimensional Binary Convolution Filters | null | Deep Convolutional Neural Networks (CNN) have been successfully applied to many real-life problems. However, the huge memory cost of deep CNN models poses a great challenge of deploying them on memory-constrained devices (e.g., mobile phones). One popular way to reduce the memory cost of a deep CNN model is to train a binary CNN where the weights in convolution filters are either 1 or -1 and therefore each weight can be efficiently stored using a single bit. However, the compression ratio of existing binary CNN models is upper bounded by ∼32. To address this limitation, we propose a novel method to compress deep CNN models by stacking low-dimensional binary convolution filters. Our proposed method approximates a standard convolution filter by selecting and stacking filters from a set of low-dimensional binary convolution filters. This set of low-dimensional binary convolution filters is shared across all filters for a given convolution layer. Therefore, our method achieves a much larger compression ratio than binary CNN models. In order to train our proposed model, we have theoretically shown that it is equivalent to selecting and stacking intermediate feature maps generated by low-dimensional binary filters. Therefore, our proposed model can be efficiently trained using the split-transform-merge strategy. We also provide detailed analysis of the memory and computation cost of our model in model inference. We compared the proposed method with five other popular model compression techniques on two benchmark datasets. Our experimental results demonstrate that our proposed method achieves a much higher compression ratio than existing methods while maintaining comparable accuracy. | Weichao Lan, Liang Lan | null | null | 2021 | aaai |
IB-GAN: Disentangled Representation Learning with Information Bottleneck Generative Adversarial Networks | null | We propose a new GAN-based unsupervised model for disentangled representation learning. The new model is discovered in an attempt to apply the Information Bottleneck (IB) framework to the optimization of GANs, and is thereby named IB-GAN. The architecture of IB-GAN is partially similar to that of InfoGAN but has a critical difference: an intermediate layer of the generator is leveraged to constrain the mutual information between the input and the generated output. The intermediate stochastic layer can serve as a learnable latent distribution that is trained jointly with the generator in an end-to-end fashion. As a result, the generator of IB-GAN can harness the latent space in a disentangled and interpretable manner. With experiments on the dSprites and Color-dSprites datasets, we demonstrate that IB-GAN achieves competitive disentanglement scores to those of state-of-the-art β-VAEs and outperforms InfoGAN. Moreover, the visual quality and the diversity of samples generated by IB-GAN are often better than those of β-VAEs and InfoGAN in terms of FID score on the CelebA and 3D Chairs datasets. | Insu Jeon, Wonkwang Lee, Myeongjang Pyeon, Gunhee Kim | null | null | 2021 | aaai |
LightXML: Transformer with Dynamic Negative Sampling for High-Performance Extreme Multi-label Text Classification | null | Extreme multi-label text classification (XMC) is the task of finding the most relevant labels from a large label set. Nowadays deep learning-based methods have shown significant success in XMC. However, the existing methods (e.g., AttentionXML and X-Transformer) still suffer from 1) combining several models to train and predict for one dataset, and 2) sampling negative labels statically while training the label ranking model, which harms the performance and accuracy of the model. To address the above problems, we propose LightXML, which adopts end-to-end training and dynamic negative label sampling. In LightXML, we use GAN-like networks to recall and rank labels: the label recalling part generates negative and positive labels, and the label ranking part distinguishes positive labels from them. Based on these networks, negative labels are sampled dynamically during training of the label ranking part. By feeding both the label recalling and ranking parts with the same text representation, LightXML can reach high performance. Extensive experiments show that LightXML outperforms state-of-the-art methods on five extreme multi-label datasets with much smaller model size and lower computational complexity. In particular, on the Amazon dataset with 670K labels, LightXML can reduce the model size by up to 72% compared to AttentionXML. Our code is available at http://github.com/kongds/LightXML. | Ting Jiang, Deqing Wang, Leilei Sun, Huayi Yang, Zhengyang Zhao, Fuzhen Zhuang | null | null | 2021 | aaai |
Accurate and Robust Feature Importance Estimation under Distribution Shifts | null | With increasing reliance on the outcomes of black-box models in critical applications, post-hoc explainability tools that do not require access to the model internals are often used to enable humans to understand and trust these models. In particular, we focus on the class of methods that can reveal the influence of input features on the predicted outputs. Despite their widespread adoption, existing methods are known to suffer from one or more of the following challenges: computational complexity, large uncertainties and, most importantly, inability to handle real-world domain shifts. In this paper, we propose PRoFILE (Producing Robust Feature Importances using Loss Estimates), a novel feature importance estimation method that addresses all these challenges. Through the use of a loss estimator jointly trained with the predictive model and a causal objective, PRoFILE can accurately estimate the feature importance scores even under complex distribution shifts, without any additional re-training. To this end, we also develop learning strategies for training the loss estimator, namely contrastive and dropout calibration, and find that it can effectively detect distribution shifts. Using empirical studies on several benchmark image and non-image data, we show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness. | Jayaraman J. Thiagarajan, Vivek Narayanaswamy, Rushil Anirudh, Peer-Timo Bremer, Andreas Spanias | null | null | 2021 | aaai |
Active Bayesian Assessment of Black-Box Classifiers | null | Recent advances in machine learning have led to increased deployment of black-box classifiers across a wide variety of applications. In many such situations there is a critical need to both reliably assess the performance of these pre-trained models and to perform this assessment in a label-efficient manner (given that labels may be scarce and costly to collect). In this paper, we introduce an active Bayesian approach for assessment of classifier performance to satisfy the desiderata of both reliability and label-efficiency. We begin by developing inference strategies to quantify uncertainty for common assessment metrics such as accuracy, misclassification cost, and calibration error. We then propose a general framework for active Bayesian assessment using inferred uncertainty to guide efficient selection of instances for labeling, enabling better performance assessment with fewer labels. We demonstrate significant gains from our proposed active Bayesian approach via a series of systematic empirical experiments assessing the performance of modern neural classifiers (e.g., ResNet and BERT) on several standard image and text classification datasets. | Disi Ji, Robert L. Logan, Padhraic Smyth, Mark Steyvers | null | null | 2021 | aaai |
Dynamic Multi-Context Attention Networks for Citation Forecasting of Scientific Publications | null | Forecasting citations of scientific patents and publications is a crucial task for understanding the evolution and development of technological domains and for foresight into emerging technologies. By construing citations as a time series, the task can be cast into the domain of temporal point processes. Most existing work on forecasting with temporal point processes, both conventional and neural network-based, only performs single-step forecasting. In citation forecasting, however, the more salient goal is n-step forecasting: predicting the arrival time and the technology class of the next n citations. In this paper, we propose Dynamic Multi-Context Attention Networks (DMA-Nets), a novel deep learning sequence-to-sequence (Seq2Seq) model with a novel hierarchical dynamic attention mechanism for long-term citation forecasting. Extensive experiments on two real-world datasets demonstrate that the proposed model learns better representations of conditional dependencies over historical sequences compared to state-of-the-art counterparts and thus achieves significant performance for citation predictions. The dataset and code have been made available online. | Taoran Ji, Nathan Self, Kaiqun Fu, Zhiqian Chen, Naren Ramakrishnan, Chang-Tien Lu | null | null | 2021 | aaai |
Action Candidate Based Clipped Double Q-learning for Discrete and Continuous Action Tasks | null | Double Q-learning is a popular reinforcement learning algorithm for Markov decision process (MDP) problems. Clipped Double Q-learning, an effective variant of Double Q-learning, employs the clipped double estimator to approximate the maximum expected action value. Due to the underestimation bias of the clipped double estimator, the performance of Clipped Double Q-learning may degrade in some stochastic environments. In this paper, in order to reduce the underestimation bias, we propose an action candidate based clipped double estimator for Double Q-learning. Specifically, we first select a set of elite action candidates with high action values from one set of estimators. Then, among these candidates, we choose the highest valued action from the other set of estimators. Finally, we use the maximum value in the second set of estimators to clip the action value of the chosen action in the first set of estimators, and the clipped value is used for approximating the maximum expected action value. Theoretically, the underestimation bias in our clipped Double Q-learning decays monotonically as the number of action candidates decreases. Moreover, the number of action candidates controls the trade-off between the overestimation and underestimation biases. In addition, we also extend our clipped Double Q-learning to continuous action tasks via approximating the elite continuous action candidates. We empirically verify that our algorithm can more accurately estimate the maximum expected action value on some toy environments and yield good performance on several benchmark problems. | Haobo Jiang, Jin Xie, Jian Yang | null | null | 2021 | aaai |
Positions, Channels, and Layers: Fully Generalized Non-Local Network for Singer Identification | null | Recently, a non-local (NL) operation has been designed as the central building block for deep-net models to capture long-range dependencies (Wang et al. 2018). Despite its excellent performance, it does not consider the interaction between positions across channels and layers, which is crucial in fine-grained classification tasks. To address the limitation, we target the singer identification (SID) task and present a fully generalized non-local (FGNL) module to help identify fine-grained vocals. Specifically, we first propose an FGNL operation, which extends the NL operation to explore the correlations between positions across channels and layers. Secondly, we further apply a depth-wise convolution with a Gaussian kernel in the FGNL operation to smooth feature maps for better generalization. Moreover, we modify the squeeze-and-excitation (SE) scheme into the FGNL module to adaptively emphasize correlated feature channels, helping uncover relevant feature responses and eventually the target singer. Evaluation results on the benchmark artist20 dataset show that the FGNL module significantly improves the accuracy of deep-net models in SID. Codes are available at https://github.com/ian-k-1217/Fully-Generalized-Non-Local-Network. | I-Yuan Kuo, Wen-Li Wei, Jen-Chun Lin | null | null | 2021 | aaai |
Temporal-Logic-Based Reward Shaping for Continuing Reinforcement Learning Tasks | null | In continuing tasks, average-reward reinforcement learning may be a more appropriate problem formulation than the more common discounted reward formulation. As usual, learning an optimal policy in this setting typically requires a large amount of training experiences. Reward shaping is a common approach for incorporating domain knowledge into reinforcement learning in order to speed up convergence to an optimal policy. However, to the best of our knowledge, the theoretical properties of reward shaping have thus far only been established in the discounted setting. This paper presents the first reward shaping framework for average-reward learning and proves that, under standard assumptions, the optimal policy under the original reward function can be recovered. In order to avoid the need for manual construction of the shaping function, we introduce a method for utilizing domain knowledge expressed as a temporal logic formula. The formula is automatically translated to a shaping function that provides additional reward throughout the learning process. We evaluate the proposed method on three continuing tasks. In all cases, shaping speeds up the average-reward learning rate without any reduction in the performance of the learned policy compared to relevant baselines. | Yuqian Jiang, Suda Bharadwaj, Bo Wu, Rishi Shah, Ufuk Topcu, Peter Stone | null | null | 2021 | aaai |
A Sample-Efficient Algorithm for Episodic Finite-Horizon MDP with Constraints | null | Constrained Markov decision processes (CMDPs) formalize sequential decision-making problems whose objective is to minimize a cost function while satisfying constraints on various cost functions. In this paper, we consider the setting of episodic fixed-horizon CMDPs. We propose an online algorithm which leverages the linear programming formulation of repeated optimistic planning for finite-horizon CMDPs to provide a probably approximately correct (PAC) guarantee on the number of episodes needed to ensure a near optimal policy, i.e., with resulting objective value close to the optimal value and satisfying the constraints within low tolerance, with high probability. The number of episodes needed is shown to have linear dependence on the sizes of the state and action spaces and quadratic dependence on the time horizon and an upper bound on the number of possible successor states for a state-action pair. Therefore, if the upper bound on the number of possible successor states is much smaller than the size of the state space, the number of needed episodes becomes linear in the sizes of the state and action spaces and quadratic in the time horizon. | Krishna C. Kalagarla, Rahul Jain, Pierluigi Nuzzo | null | null | 2021 | aaai |
Linearly Replaceable Filters for Deep Network Channel Pruning | null | Convolutional neural networks (CNNs) have achieved remarkable results; however, despite the development of deep learning, practical user applications are fairly limited because heavy networks can be used solely with the latest hardware and software supports. Therefore, network pruning is gaining attention for general applications in various fields. This paper proposes a novel channel pruning method, Linearly Replaceable Filter (LRF), which suggests that a filter that can be approximated by the linear combination of other filters is replaceable. Moreover, an additional method called Weights Compensation is proposed to support the LRF method. This is a technique that effectively reduces the output difference caused by removing filters via direct weight modification. Through various experiments, we have confirmed that our method achieves state-of-the-art performance in several benchmarks. In particular, on ImageNet, LRF-60 reduces approximately 56% of FLOPs on ResNet-50 without top-5 accuracy drop. Further, through extensive analyses, we proved the effectiveness of our approaches. | Donggyu Joo, Eojindl Yi, Sunghyun Baek, Junmo Kim | null | null | 2021 | aaai |
Balanced Open Set Domain Adaptation via Centroid Alignment | null | Open Set Domain Adaptation (OSDA) is a challenging domain adaptation setting which allows the existence of unknown classes on the target domain. Although existing OSDA methods are good at classifying samples of known classes, they ignore the classification ability for the unknown samples, making them unbalanced OSDA methods. To alleviate this problem, we propose a balanced OSDA method which can recognize the unknown samples while maintaining high classification performance for the known samples. Specifically, to reduce the domain gaps, we first project the features to a hyperspherical latent space. In this space, we propose to bound the centroid deviation angles to not only increase the intra-class compactness but also enlarge the inter-class margins. With the bounded centroid deviation angles, we employ statistical Extreme Value Theory to recognize the unknown samples that are misclassified into known classes. In addition, to learn better centroids, we propose an improved centroid update strategy based on sample reweighting and an adaptive update rate to cooperate with centroid alignment. Experimental results on three OSDA benchmarks verify that our method can significantly outperform the compared methods and reduce the proportion of unknown samples being misclassified into known classes. | Mengmeng Jing, Jingjing Li, Lei Zhu, Zhengming Ding, Ke Lu, Yang Yang | null | null | 2021 | aaai |
Deep Probabilistic Canonical Correlation Analysis | null | We propose a deep generative framework for multi-view learning based on a probabilistic interpretation of canonical correlation analysis (CCA). The model combines a linear multi-view layer in the latent space with deep generative networks as observation models, to decompose the variability in multiple views into a shared latent representation that describes the common underlying sources of variation and a set of view-specific components. To approximate the posterior distribution of the latent multi-view layer, an efficient variational inference procedure is developed based on the solution of probabilistic CCA. The model is then generalized to an arbitrary number of views. An empirical analysis confirms that the proposed deep multi-view model can discover subtle relationships between multiple views and recover rich representations. | Mahdi Karami, Dale Schuurmans | null | null | 2021 | aaai |
Disentangled Representation Learning in Heterogeneous Information Network for Large-scale Android Malware Detection in the COVID-19 Era and Beyond | null | In the fight against the COVID-19 pandemic, many social activities have moved online; society's overwhelming reliance on the complex cyberspace makes its security more important than ever. In this paper, we propose and develop an intelligent system named Dr.HIN to protect users against the evolving Android malware attacks in the COVID-19 era and beyond. In Dr.HIN, besides app content, we propose to consider higher-level semantics and social relations among apps, developers and mobile devices to comprehensively depict Android apps; we then introduce a structured heterogeneous information network (HIN) to model the complex relations and exploit a meta-path guided strategy to learn node (i.e., app) representations from the HIN. As the representations of malware can be highly entangled with those of benign apps in the complex ecosystem of development, this poses a new challenge of learning the latent explanatory factors hidden in the HIN embeddings to detect the evolving malware. To address this challenge, we propose to integrate domain priors generated from different views (i.e., app content, app authorship, app installation) to devise an adversarial disentangler that separates the distinct, informative factors of variation hidden in the HIN embeddings for large-scale Android malware detection. This is the first attempt at disentangled representation learning on HIN data. Promising experimental results based on large-scale, real sample collections from the security industry demonstrate the performance of Dr.HIN in evolving Android malware detection, in comparison with baselines and popular mobile security products. | Shifu Hou, Yujie Fan, Mingxuan Ju, Yanfang Ye, Wenqiang Wan, Kui Wang, Yinming Mei, Qi Xiong, Fudong Shao | null | null | 2021 | aaai |
Reinforcement Learning Based Multi-Agent Resilient Control: From Deep Neural Networks to an Adaptive Law | null | Recent advances in Multi-agent Reinforcement Learning (MARL) have made it possible to implement various tasks in cooperative as well as competitive scenarios through trial and error, and deep neural networks. These successes motivate us to bring the mechanism of MARL into the Multi-agent Resilient Consensus (MARC) problem, which studies the consensus problem in a network of agents with faulty ones. Relying on the natural characteristics of the system goal, the key component in MARL, the reward function, can thus be directly constructed via the relative distance among agents. Firstly, we apply Deep Deterministic Policy Gradient (DDPG) on each single agent to train and learn the adjacent weights of neighboring agents in a distributed manner, which we call Distributed-DDPG (D-DDPG), so as to minimize the weights from suspicious agents and eliminate the corresponding influences. Secondly, to get rid of neural networks and their time-consuming training process, a Q-learning based algorithm, called Q-consensus, is further presented by building a proper reward function and a credibility function for each pair of neighboring agents so that the adjacent weights can update in an adaptive way. The experimental results indicate that both algorithms perform well in the presence of constant and/or random faulty agents, yet the Q-consensus algorithm outperforms the one running D-DDPG. Compared to traditional resilient consensus strategies, e.g., Weighted-Mean-Subsequence-Reduced (W-MSR) or trustworthiness analysis, the proposed Q-consensus algorithm greatly relaxes the topology requirements, as well as reduces the storage and computation loads. Finally, a smart-car hardware platform consisting of six vehicles is used to verify the effectiveness of the Q-consensus algorithm by achieving resilient velocity synchronization. | Jian Hou, Fangyuan Wang, Lili Wang, Zhiyong Chen | null | null | 2021 | aaai |
Topology Distance: A Topology-Based Approach for Evaluating Generative Adversarial Networks | null | Automatic evaluation of the goodness of Generative Adversarial Networks (GANs) has been a challenge for the field of machine learning. In this work, we propose a distance complementary to existing measures: Topology Distance (TD), the main idea behind which is to compare the geometric and topological features of the latent manifold of real data with those of generated data. More specifically, we build a Vietoris-Rips complex on image features, and define TD based on the differences in persistent-homology groups of the two manifolds. We compare TD with the most commonly-used and relevant measures in the field, including Inception Score (IS), Fréchet Inception Distance (FID), Kernel Inception Distance (KID) and Geometry Score (GS), in a range of experiments on various datasets. We demonstrate the unique advantage and superiority of our proposed approach over the aforementioned metrics. A combination of our empirical results and the theoretical argument we propose in favour of TD strongly supports the claim that TD is a powerful candidate metric that researchers can employ when aiming to automatically evaluate the goodness of GANs' learning. | Danijela Horak, Simiao Yu, Gholamreza Salimi-Khorshidi | null | null | 2021 | aaai |
OPQ: Compressing Deep Neural Networks with One-shot Pruning-Quantization
| null |
As Deep Neural Networks (DNNs) are usually overparameterized and have millions of weight parameters, it is challenging to deploy these large DNN models on resource-constrained hardware platforms, e.g., smartphones. Numerous network compression methods such as pruning and quantization are proposed to reduce the model size significantly, of which the key is to find a suitable compression allocation (e.g., pruning sparsity and quantization codebook) for each layer. Existing solutions obtain the compression allocation in an iterative/manual fashion while finetuning the compressed model, thus suffering from efficiency issues. Different from the prior art, we propose a novel One-shot Pruning-Quantization (OPQ) in this paper, which analytically solves the compression allocation with pre-trained weight parameters only. During finetuning, the compression module is fixed and only weight parameters are updated. To our knowledge, OPQ is the first work to reveal that a pre-trained model is sufficient for solving pruning and quantization simultaneously, without any complex iterative/manual optimization at the finetuning stage. Furthermore, we propose a unified channel-wise quantization method that enforces all channels of each layer to share a common codebook, which leads to low bit-rate allocation without introducing the extra overhead brought by traditional channel-wise quantization. Comprehensive experiments on ImageNet with AlexNet/MobileNet-V1/ResNet-50 show that our method improves accuracy and training efficiency while obtaining significantly higher compression rates compared to the state-of-the-art.
|
Peng Hu, Xi Peng, Hongyuan Zhu, Mohamed M. Sabry Aly, Jie Lin
| null | null | 2,021 |
aaai
|
Adversarial Defence by Diversified Simultaneous Training of Deep Ensembles
| null |
Learning-based classifiers are susceptible to adversarial examples. Existing defence methods are mostly devised on individual classifiers. Recent studies showed that it is viable to increase adversarial robustness by promoting diversity over an ensemble of models. In this paper, we propose adversarial defence by encouraging ensemble diversity on learning high-level feature representations and gradient dispersion in simultaneous training of deep ensemble networks. We perform extensive evaluations under white-box and black-box attacks including transferred examples and adaptive attacks. Our approach achieves a significant gain of up to 52% in adversarial robustness, compared with the baseline and the state-of-the-art method on image benchmarks with complex data scenes. The proposed approach complements the defence paradigm of adversarial training, and can further boost the performance. The source code is available at https://github.com/ALIS-Lab/AAAI2021-PDD.
|
Bo Huang, Zhiwei Ke, Yi Wang, Wei Wang, Linlin Shen, Feng Liu
| null | null | 2,021 |
aaai
|
Exploration via State Influence Modeling
| null |
This paper studies the challenging problem of reinforcement learning (RL) in hard exploration tasks with sparse rewards. It focuses on the exploration stage before the agent gets the first positive reward, in which case traditional RL algorithms with simple exploration strategies often work poorly. Unlike previous methods using some attribute of a single state as the intrinsic reward to encourage exploration, this work leverages the social influence between different states to permit more efficient exploration. It introduces a general intrinsic reward construction method to evaluate the social influence of states dynamically. Three kinds of social influence are introduced for a state: conformity, power, and authority. By measuring the state’s social influence, agents quickly find the focus state during the exploration process. The proposed RL framework with state social influence evaluation works well in hard exploration tasks. Extensive experimental analyses and comparisons in Grid Maze and many hard exploration Atari 2600 games demonstrate its high exploration efficiency.
|
Yongxin Kang, Enmin Zhao, Kai Li, Junliang Xing
| null | null | 2,021 |
aaai
|
Multi-scale Graph Fusion for Co-saliency Detection
| null |
The key challenge of co-saliency detection is to extract discriminative features to distinguish the common salient foregrounds from backgrounds in a group of relevant images. In this paper, we propose a new co-saliency detection framework which includes two strategies to improve the discriminative ability of the features. Specifically, on one hand, we segment each image into semantic superpixel clusters and generate images of different scales/sizes for each input image with the VGG-16 model. Different scales capture different patterns of the images; as a result, multi-scale images can capture various patterns among all images from many perspectives. On the other hand, we propose a new Graph Convolutional Network (GCN) method to fine-tune the multi-scale features, aiming at capturing the common information among the features from all scales as well as the private or complementary information of the feature at each scale. Moreover, the proposed GCN method jointly conducts multi-scale feature fine-tuning, graph learning, and feature learning in a unified framework. We evaluated our method on three benchmark data sets against state-of-the-art co-saliency detection methods. Experimental results showed that our method outperformed all comparison methods in terms of different evaluation metrics.
|
Rongyao Hu, Zhenyun Deng, Xiaofeng Zhu
| null | null | 2,021 |
aaai
|
Learning to Reweight Imaginary Transitions for Model-Based Reinforcement Learning
| null |
Model-based reinforcement learning (RL) is more sample efficient than model-free RL by using imaginary trajectories generated by the learned dynamics model. When the model is inaccurate or biased, imaginary trajectories may be deleterious for training the action-value and policy functions. To alleviate this problem, this paper proposes to adaptively reweight the imaginary transitions, so as to reduce the negative effects of poorly generated trajectories. More specifically, we evaluate the effect of an imaginary transition by calculating the change of the loss computed on the real samples when we use the transition to train the action-value and policy functions. Based on this evaluation criterion, we reweight each imaginary transition with a well-designed meta-gradient algorithm. Extensive experimental results demonstrate that our method outperforms state-of-the-art model-based and model-free RL algorithms on multiple tasks. Visualization of our changing weights further validates the necessity of the reweighting scheme.
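The abstract does not spell out the meta-gradient computation; a much simpler stand-in in the same spirit weights each imaginary transition by how well its gradient aligns with the real-data gradient. The function name and the cosine heuristic below are our own, not the paper's method:

```python
import numpy as np

def reweight_transitions(g_real, g_imag):
    """Heuristic sketch: weight each imaginary transition by the clipped
    cosine similarity between its per-transition gradient and the gradient
    computed on real samples, then normalize to a distribution."""
    g_real = np.asarray(g_real, dtype=float)
    w = []
    for g in g_imag:
        cos = g @ g_real / (np.linalg.norm(g) * np.linalg.norm(g_real) + 1e-12)
        w.append(max(0.0, cos))   # transitions that pull against real data get weight 0
    w = np.asarray(w)
    return w / (w.sum() + 1e-12)
```

A transition whose gradient points with the real-data gradient is kept; an opposing one is zeroed out.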
|
Wenzhen Huang, Qiyue Yin, Junge Zhang, Kaiqi Huang
| null | null | 2,021 |
aaai
|
Accelerating Continuous Normalizing Flow with Trajectory Polynomial Regularization
| null |
In this paper, we propose an approach to effectively accelerate the computation of continuous normalizing flow (CNF), which has been proven to be a powerful tool for tasks such as variational inference and density estimation. The training time cost of CNF can be extremely high because the required number of function evaluations (NFE) for solving the corresponding ordinary differential equations (ODE) is very large. We argue that the high NFE results from large truncation errors when solving the ODEs. To address the problem, we propose to add a regularization term that penalizes the difference between the trajectory of the ODE and its fitted polynomial regression. The trajectory of the ODE will then approximate a polynomial function, and thus the truncation error will be smaller. Furthermore, we provide two proofs and claim that the additional regularization does not harm training quality. Experimental results show that our proposed method can achieve a 42.3% to 71.3% reduction of NFE on the task of density estimation, and a 19.3% to 32.1% reduction of NFE on variational auto-encoders, while the testing losses are not affected.
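The regularizer can be pictured with a short sketch: sample the ODE trajectory, fit a polynomial by least squares, and penalize the residual. This is a one-dimensional illustration with hypothetical names, not the authors' implementation:

```python
import numpy as np

def trajectory_poly_penalty(ts, zs, degree=2):
    """Fit a polynomial z(t) ~ sum_k c_k t^k to sampled trajectory values zs
    at times ts, and return the mean squared residual as the regularizer.
    (In a real CNF this would be applied per state dimension.)"""
    V = np.vander(ts, degree + 1)                    # Vandermonde design matrix
    coef, *_ = np.linalg.lstsq(V, zs, rcond=None)    # least-squares polynomial fit
    residual = zs - V @ coef
    return float(np.mean(residual ** 2))
```

A trajectory that is already polynomial incurs (numerically) zero penalty, while a wiggly one is penalized, which is exactly the pressure the paper applies to reduce truncation error.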
|
Han-Hsien Huang, Mi-Yen Yeh
| null | null | 2,021 |
aaai
|
Continual Learning by Using Information of Each Class Holistically
| null |
Continual learning (CL) incrementally learns a sequence of tasks while solving the catastrophic forgetting (CF) problem. Existing methods mainly try to deal with CF directly. In this paper, we propose to avoid CF by considering the features of each class holistically rather than only the discriminative information for classifying the classes seen so far. This latter approach is prone to CF because the discriminative information for old classes may not be sufficiently discriminative for the new class to be learned. Consequently, in learning each new task, the network parameters for previous tasks have to be revised, which causes CF. With the holistic consideration, after adding new tasks, the system can still do well for previous tasks. The proposed technique is called Per-class Continual Learning (PCL). PCL has two key novelties. (1) It proposes a one-class learning based technique for CL, which considers features of each class holistically and represents a new approach to solving the CL problem. (2) It proposes a method to extract discriminative information after training to further improve the accuracy. Empirical evaluation shows that PCL markedly outperforms the state-of-the-art baselines for one or more classes per task. More tasks also result in more gains.
|
Wenpeng Hu, Qi Qin, Mengyu Wang, Jinwen Ma, Bing Liu
| null | null | 2,021 |
aaai
|
Boosting Multi-task Learning Through Combination of Task Labels – with Applications in ECG Phenotyping
| null |
Multi-task learning has increased in importance due to its superior performance by learning multiple different tasks simultaneously and its ability to perform several different tasks using a single model. In medical phenotyping, task labels are costly to acquire and might contain a certain degree of label noise. This decreases the efficiency of using additional human labels as auxiliary tasks when applying multi-task learning to medical phenotyping. In this work, we proposed an effective multi-task learning framework, CO-TASK, to boost multi-task learning performance by generating auxiliary tasks through COmbination of TASK Labels. The proposed CO-TASK framework generates auxiliary tasks without additional labeling effort, is robust to a certain degree of label noise, and can be applied in parallel with various multi-task learning techniques. We evaluated our performance using the CIFAR-MTL dataset and demonstrated its effectiveness in medical phenotyping using two large-scale ECG phenotyping datasets, an 18 diseases multi-label ECG-P18 dataset and an echocardiogram diagnostic from electrocardiogram dataset ECG-EchoLVH. On the CIFAR-MTL dataset, we doubled the average per-task performance gain of the multi-task learning model from 4.38% to 9.78%. With the proposed task-aware imbalance data sampler, the CO-TASK framework can effectively deal with the different imbalance ratios for the different tasks in electrocardiogram phenotyping datasets. The proposed framework combined with noisy annotations as minor tasks increased the sensitivity by 7.1% compared to the single-task model while maintaining the same specificity as the doctor annotations on the ECG-EchoLVH dataset.
|
Ming-En Hsieh, Vincent Tseng
| null | null | 2,021 |
aaai
|
Attributes-Guided and Pure-Visual Attention Alignment for Few-Shot Recognition
| null |
The purpose of few-shot recognition is to recognize novel categories with a limited number of labeled examples in each class. To encourage learning from a supplementary view, recent approaches have introduced auxiliary semantic modalities into effective metric-learning frameworks that aim to learn a feature similarity between training samples (support set) and test samples (query set). However, these approaches only augment the representations of samples with available semantics while ignoring the query set, which loses the potential for the improvement and may lead to a shift between the modalities combination and the pure-visual representation. In this paper, we devise an attributes-guided attention module (AGAM) to utilize human-annotated attributes and learn more discriminative features. This plug-and-play module enables visual contents and corresponding attributes to collectively focus on important channels and regions for the support set. And the feature selection is also achieved for query set with only visual information while the attributes are not available. Therefore, representations from both sets are improved in a fine-grained manner. Moreover, an attention alignment mechanism is proposed to distill knowledge from the guidance of attributes to the pure-visual branch for samples without attributes. Extensive experiments and analysis show that our proposed module can significantly improve simple metric-based approaches to achieve state-of-the-art performance on different datasets and settings.
|
Siteng Huang, Min Zhang, Yachen Kang, Donglin Wang
| null | null | 2,021 |
aaai
|
Reward-Biased Maximum Likelihood Estimation for Linear Stochastic Bandits
| null |
Modifying the reward-biased maximum likelihood method originally proposed in the adaptive control literature, we propose novel learning algorithms to handle the explore-exploit trade-off in linear bandit problems as well as generalized linear bandit problems. We develop novel index policies that we prove achieve order-optimality, and show that they achieve empirical performance competitive with the state-of-the-art benchmark methods in extensive experiments. The new policies achieve this with low computation time per pull for linear bandits, thereby attaining both favorable regret and computational efficiency.
|
Yu-Heng Hung, Ping-Chun Hsieh, Xi Liu, P. R. Kumar
| null | null | 2,021 |
aaai
|
Predictive Adversarial Learning from Positive and Unlabeled Data
| null |
This paper studies learning from positive and unlabeled examples, known as PU learning. It proposes a novel PU learning method called Predictive Adversarial Networks (PAN) based on GAN (Generative Adversarial Networks). GAN learns a generator to generate data (e.g., images) to fool a discriminator which tries to determine whether the generated data belong to a (positive) training class. PU learning can be cast as trying to identify (not generate) likely positive instances from the unlabeled set to fool a discriminator that determines whether the identified likely positive instances from the unlabeled set are indeed positive. However, directly applying GAN is problematic because GAN focuses only on the positive data. The resulting PU learning method will have high precision but low recall. We propose a new objective function based on KL-divergence. Evaluation using both image and text data shows that PAN outperforms state-of-the-art PU learning methods and also a direct adaptation of GAN for PU learning.
|
Wenpeng Hu, Ran Le, Bing Liu, Feng Ji, Jinwen Ma, Dongyan Zhao, Rui Yan
| null | null | 2,021 |
aaai
|
Attribute-Guided Adversarial Training for Robustness to Natural Perturbations
| null |
While existing work in robust deep learning has focused on small pixel-level norm-based perturbations, this may not account for perturbations encountered in several real world settings. In many such cases although test data might not be available, broad specifications about the types of perturbations (such as an unknown degree of rotation) may be known. We consider a setup where robustness is expected over an unseen test domain that is not i.i.d. but deviates from the training domain. While this deviation may not be exactly known, its broad characterization is specified a priori, in terms of attributes. We propose an adversarial training approach which learns to generate new samples so as to maximize exposure of the classifier to the attributes-space, without having access to the data from the test domain. Our adversarial training solves a min-max optimization problem, with the inner maximization generating adversarial perturbations, and the outer minimization finding model parameters by optimizing the loss on adversarial perturbations generated from the inner maximization. We demonstrate the applicability of our approach on three types of naturally occurring perturbations --- object-related shifts, geometric transformations, and common image corruptions. Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations. We demonstrate the usefulness of the proposed approach by showing the robustness gains of deep neural networks trained using our adversarial training on MNIST, CIFAR-10, and a new variant of the CLEVR dataset.
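A minimal sketch of the inner maximization, assuming the attribute space is a one-dimensional grid (e.g., rotation angles) searched exhaustively; the actual method generates worst-case perturbations with a learned model rather than grid search, and all names here are our own:

```python
import numpy as np

def worst_case_attribute(loss_fn, x, attr_grid):
    """Inner-maximization sketch: scan a grid of semantic attribute values
    for the one maximizing the classifier loss on input x. The outer
    minimization would then train on x perturbed with that attribute."""
    losses = np.array([loss_fn(x, a) for a in attr_grid])
    k = int(np.argmax(losses))          # attribute value the classifier handles worst
    return attr_grid[k], float(losses[k])
```

With a toy concave loss peaking at attribute 0.3, the search recovers that worst-case value.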
|
Tejas Gokhale, Rushil Anirudh, Bhavya Kailkhura, Jayaraman J. Thiagarajan, Chitta Baral, Yezhou Yang
| null | null | 2,021 |
aaai
|
Controllable Guarantees for Fair Outcomes via Contrastive Information Estimation
| null |
Controlling bias in training datasets is vital for ensuring equal treatment, or parity, between different groups in downstream applications. A naive solution is to transform the data so that it is statistically independent of group membership, but this may throw away too much information when a reasonable compromise between fairness and accuracy is desired. Another common approach is to limit the ability of a particular adversary who seeks to maximize parity. Unfortunately, representations produced by adversarial approaches may still retain biases as their efficacy is tied to the complexity of the adversary used during training. To this end, we theoretically establish that by limiting the mutual information between representations and protected attributes, we can assuredly control the parity of any downstream classifier. We demonstrate an effective method for controlling parity through mutual information based on contrastive information estimators and show that they outperform approaches that rely on variational bounds based on complex generative models. We test our approach on UCI Adult and Heritage Health datasets and demonstrate that our approach provides more informative representations across a range of desired parity thresholds while providing strong theoretical guarantees on the parity of any downstream algorithm.
|
Umang Gupta, Aaron M Ferber, Bistra Dilkina, Greg Ver Steeg
| null | null | 2,021 |
aaai
|
The Importance of Modeling Data Missingness in Algorithmic Fairness: A Causal Perspective
| null |
Training datasets for machine learning often have some form of missingness. For example, to learn a model for deciding whom to give a loan, the available training data includes individuals who were given a loan in the past, but not those who were not. This missingness, if ignored, nullifies any fairness guarantee of the training procedure when the model is deployed. Using causal graphs, we characterize the missingness mechanisms in different real-world scenarios. We show conditions under which various distributions, used in popular fairness algorithms, can or can not be recovered from the training data. Our theoretical results imply that many of these algorithms can not guarantee fairness in practice. Modeling missingness also helps to identify correct design principles for fair algorithms. For example, in multi-stage settings where decisions are made in multiple screening rounds, we use our framework to derive the minimal distributions required to design a fair algorithm. Our proposed algorithm also decentralizes the decision-making process and still achieves similar performance to the optimal algorithm that requires centralization and non-recoverable distributions.
|
Naman Goel, Alfonso Amayuelas, Amit Deshpande, Amit Sharma
| null | null | 2,021 |
aaai
|
Attentive Neural Point Processes for Event Forecasting
| null |
Event sequence, where each event is associated with a marker and a timestamp, is increasingly ubiquitous in various applications. Accordingly, event forecasting emerges to be a crucial problem, which aims to predict the next event based on the historical sequence. In this paper, we propose ANPP, an Attentive Neural Point Processes framework to solve this problem. In comparison with state-of-the-art methods like recurrent marked temporal point processes, ANPP leverages the time-aware self-attention mechanism to explicitly model the influence between every pair of historical events, resulting in more accurate predictions of events and better interpretation ability. Extensive experiments on one synthetic and four real-world datasets demonstrate that ANPP can achieve significant performance gains against state-of-the-art methods for predictions of both timings and markers. To facilitate future research, we release the codes and datasets at https://github.com/guyulongcs/AAAI2021_ANPP.
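A hedged sketch of time-aware self-attention over an event sequence: standard dot-product attention with a causal mask, plus a penalty on the logits that decays attention with the time gap between events. The penalty form, constant, and names are assumptions, not ANPP's exact parameterization:

```python
import numpy as np

def time_aware_self_attention(E, t, gamma=0.1):
    """Sketch: dot-product self-attention over event embeddings E (n x d),
    with gamma * |t_i - t_j| subtracted from the logits and a causal mask
    so each event attends only to itself and its history."""
    n, d = E.shape
    logits = E @ E.T / np.sqrt(d) - gamma * np.abs(t[:, None] - t[None, :])
    causal = np.tril(np.ones((n, n), dtype=bool))        # event i sees j <= i
    logits = np.where(causal, logits, -np.inf)
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)                 # normalized attention weights
    return w @ E, w
```

Each row of the attention matrix is a distribution over the event's history, with temporally distant events discounted.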
|
Yulong Gu
| null | null | 2,021 |
aaai
|
Large Batch Optimization for Deep Learning Using New Complete Layer-Wise Adaptive Rate Scaling
| null |
Training deep neural networks using a large batch size has shown promising results and benefits many real-world applications. Warmup is one nontrivial technique used to stabilize the convergence of large batch training. However, warmup is an empirical method and it is still unknown whether there is a better algorithm with theoretical underpinnings. In this paper, we propose a novel Complete Layer-wise Adaptive Rate Scaling (CLARS) algorithm for large-batch training. We prove the convergence of our algorithm by introducing a new fine-grained analysis of gradient-based methods. Furthermore, the new analysis also helps to understand two other empirical tricks, layer-wise adaptive rate scaling and linear learning rate scaling. We conduct extensive experiments and demonstrate that the proposed algorithm outperforms the gradual warmup technique by a large margin and surpasses the convergence of the state-of-the-art large-batch optimizer when training advanced deep neural networks (ResNet, DenseNet, MobileNet) on the ImageNet dataset.
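For intuition, layer-wise adaptive rate scaling (one of the tricks the paper analyzes) scales each layer's step by the ratio of its weight norm to its gradient norm, so every layer moves by a comparable relative amount. A generic LARS-style sketch, with hypothetical constants:

```python
import numpy as np

def lars_update(params, grads, base_lr=0.01, trust=0.001):
    """Layer-wise adaptive rate scaling sketch: per-layer step is scaled by
    trust * ||w|| / ||g||, decoupling the step size from the raw gradient
    magnitude of each layer."""
    new_params = []
    for w, g in zip(params, grads):
        ratio = trust * np.linalg.norm(w) / (np.linalg.norm(g) + 1e-12)
        new_params.append(w - base_lr * ratio * g)
    return new_params
```

The point of the scaling is visible directly: two layers whose gradient magnitudes differ by six orders of magnitude still take the same *relative* step.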
|
Zhouyuan Huo, Bin Gu, Heng Huang
| null | null | 2,021 |
aaai
|
Revisiting Iterative Back-Translation from the Perspective of Compositional Generalization
| null |
Human intelligence exhibits compositional generalization (i.e., the capacity to understand and produce unseen combinations of seen components), but current neural seq2seq models lack such ability. In this paper, we revisit iterative back-translation, a simple yet effective semi-supervised method, to investigate whether and how it can improve compositional generalization. In this work: (1) We first empirically show that iterative back-translation substantially improves the performance on compositional generalization benchmarks (CFQ and SCAN). (2) To understand why iterative back-translation is useful, we carefully examine the performance gains and find that iterative back-translation can increasingly correct errors in pseudo-parallel data. (3) To further encourage this mechanism, we propose curriculum iterative back-translation, which better improves the quality of pseudo-parallel data, thus further improving the performance.
|
Yinuo Guo, Hualei Zhu, Zeqi Lin, Bei Chen, Jian-Guang Lou, Dongmei Zhang
| null | null | 2,021 |
aaai
|
High-Dimensional Bayesian Optimization via Tree-Structured Additive Models
| null |
Bayesian Optimization (BO) has shown significant success in tackling expensive low-dimensional black-box optimization problems. Many optimization problems of interest are high-dimensional, and scaling BO to such settings remains an important challenge. In this paper, we consider generalized additive models in which low-dimensional functions with overlapping subsets of variables are composed to model a high-dimensional target function. Our goal is to lower the computational resources required and facilitate faster model learning by reducing the model complexity while retaining the sample-efficiency of existing methods. Specifically, we constrain the underlying dependency graphs to tree structures in order to facilitate both the structure learning and optimization of the acquisition function. For the former, we propose a hybrid graph learning algorithm based on Gibbs sampling and mutation. In addition, we propose a novel zooming-based algorithm that permits generalized additive models to be employed more efficiently in the case of continuous domains. We demonstrate and discuss the efficacy of our approach via a range of experiments on synthetic functions and real-world datasets.
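The benefit of restricting dependency graphs to trees is that an additive objective can be maximized exactly by dynamic programming (max-product message passing). A self-contained sketch over discrete domains; the paper works with continuous domains via a zooming scheme, and the names here are our own:

```python
import itertools
import numpy as np

def max_tree_additive(n_vals, edges, tables):
    """Maximize f(x) = sum over tree edges (i, j) of f_ij(x_i, x_j) by
    dynamic programming, assuming the dependency graph is a tree rooted
    at node 0. tables[(i, j)] is an (n_vals x n_vals) payoff matrix."""
    adj = {}
    for (i, j) in edges:
        adj.setdefault(i, []).append(j)
        adj.setdefault(j, []).append(i)

    def table(i, j):                    # payoff matrix oriented as t[x_i, x_j]
        return tables[(i, j)] if (i, j) in tables else tables[(j, i)].T

    def msg(child, parent):
        # message m[x_parent] = max over x_child of edge payoff + subtree messages
        score = np.zeros(n_vals)
        for other in adj.get(child, []):
            if other != parent:
                score = score + msg(other, child)
        t = table(parent, child)
        return (t + score[None, :]).max(axis=1)

    root = 0
    total = np.zeros(n_vals)
    for c in adj.get(root, []):
        total = total + msg(c, root)
    return float(total.max())
```

On a small tree this matches brute-force enumeration while needing only O(#edges * n_vals^2) work instead of n_vals^#nodes.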
|
Eric Han, Ishank Arora, Jonathan Scarlett
| null | null | 2,021 |
aaai
|
Towards Reusable Network Components by Learning Compatible Representations
| null |
This paper proposes to make a first step towards compatible and hence reusable network components. Rather than training networks for different tasks independently, we adapt the training process to produce network components that are compatible across tasks. In particular, we split a network into two components, a features extractor and a target task head, and propose various approaches to accomplish compatibility between them. We systematically analyse these approaches on the task of image classification on standard datasets. We demonstrate that we can produce components which are directly compatible without any fine-tuning or compromising accuracy on the original tasks. Afterwards, we demonstrate the use of compatible components on three applications: Unsupervised domain adaptation, transferring classifiers across feature extractors with different architectures, and increasing the computational efficiency of transfer learning.
|
Michael Gygli, Jasper Uijlings, Vittorio Ferrari
| null | null | 2,021 |
aaai
|
DeepSynth: Automata Synthesis for Automatic Task Segmentation in Deep Reinforcement Learning
| null |
This paper proposes DeepSynth, a method for effective training of deep Reinforcement Learning (RL) agents when the reward is sparse and non-Markovian, but at the same time progress towards the reward requires achieving an unknown sequence of high-level objectives. Our method employs a novel algorithm for synthesis of compact automata to uncover this sequential structure automatically. We synthesise a human-interpretable automaton from trace data collected by exploring the environment. The state space of the environment is then enriched with the synthesised automaton so that the generation of a control policy by deep RL is guided by the discovered structure encoded in the automaton. The proposed approach is able to cope with both high-dimensional, low-level features and unknown sparse non-Markovian rewards. We have evaluated DeepSynth's performance in a set of experiments that includes the Atari game Montezuma's Revenge. Compared to existing approaches, we obtain a reduction of two orders of magnitude in the number of iterations required for policy synthesis, and also a significant improvement in scalability.
|
Mohammadhosein Hasanbeig, Natasha Yogananda Jeppu, Alessandro Abate, Tom Melham, Daniel Kroening
| null | null | 2,021 |
aaai
|
Uncertainty-Aware Multi-View Representation Learning
| null |
Learning from different data views by exploring the underlying complementary information among them can endow the representation with stronger expressive ability. However, high-dimensional features tend to contain noise, and furthermore, quality of data usually varies for different samples (even for different views), i.e., one view may be informative for one sample but not the case for another. Therefore, it is quite challenging to integrate multi-view noisy data in an unsupervised setting. Traditional multi-view methods either simply treat each view with equal importance or tune the weights of different views to fixed values, which are insufficient to capture the dynamic noise in multi-view data. In this work, we devise a novel unsupervised multi-view learning approach, termed Dynamic Uncertainty-Aware Networks (DUA-Nets). Guided by the uncertainty of data estimated from the generation perspective, intrinsic information from multiple views is integrated to obtain noise-free representations. With the help of uncertainty estimation, DUA-Nets weigh each view of an individual sample according to its data quality so that high-quality samples (or views) can be fully exploited while the effects of noisy samples (or views) are alleviated. Our model achieves superior performance in extensive experiments and shows robustness to noisy data.
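The core weighting idea can be illustrated with precision-weighted fusion: if each view's uncertainty is available as a per-sample variance, views are combined with weights proportional to inverse variance. This is a simplification of DUA-Nets' learned uncertainty, with hypothetical names:

```python
import numpy as np

def fuse_views(views, variances):
    """Precision-weighted fusion sketch. views: list of (n_samples, d)
    representations; variances: (n_views, n_samples) estimated per-sample
    uncertainties. Noisier views contribute less, sample by sample."""
    precisions = 1.0 / np.asarray(variances, dtype=float)
    w = precisions / precisions.sum(axis=0, keepdims=True)   # weights sum to 1 per sample
    return sum(w_v[:, None] * v for w_v, v in zip(w, views))
```

With one clean view and one very noisy view, the fused representation stays close to the clean one.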
|
Yu Geng, Zongbo Han, Changqing Zhang, Qinghua Hu
| null | null | 2,021 |
aaai
|
Explanation Consistency Training: Facilitating Consistency-Based Semi-Supervised Learning with Interpretability
| null |
Unlabeled data exploitation and interpretability are usually both required in reality. They, however, are conducted independently, and very few works try to connect the two. For unlabeled data exploitation, state-of-the-art semi-supervised learning (SSL) results have been achieved via encouraging the consistency of model output on data perturbation, that is, consistency assumption. However, it remains hard for users to understand how particular decisions are made by state-of-the-art SSL models. To this end, in this paper we first disclose that the consistency assumption is closely related to causality invariance, where causality invariance lies in the main reason why the consistency assumption is valid. We then propose ECT (Explanation Consistency Training) which encourages a consistent reason of model decision under data perturbation. ECT employs model explanation as a surrogate of the causality of model output, which is able to bridge state-of-the-art interpretability to SSL models and alleviate the high complexity of causality. We realize ECT-SM for vision and ECT-ATT for NLP tasks. Experimental results on real-world data sets validate the highly competitive performance and better explanation of the proposed algorithms.
|
Tao Han, Wei-Wei Tu, Yu-Feng Li
| null | null | 2,021 |
aaai
|
Efficient On-Chip Learning for Optical Neural Networks Through Power-Aware Sparse Zeroth-Order Optimization
| null |
Optical neural networks (ONNs) have demonstrated record-breaking potential in high-performance neuromorphic computing due to their ultra-high execution speed and low energy consumption. However, current learning protocols fail to provide scalable and efficient solutions to photonic circuit optimization in practical applications. In this work, we propose a novel on-chip learning framework to release the full potential of ONNs for power-efficient in situ training. Instead of deploying implementation-costly back-propagation, we directly optimize the device configurations with computation budgets and power constraints. We are the first to model the ONN on-chip learning as a resource-constrained stochastic noisy zeroth-order optimization problem, and propose a novel mixed-training strategy with two-level sparsity and power-aware dynamic pruning to offer a scalable on-chip training solution in practical ONN deployment. Compared with previous methods, we are the first to optimize over 2,500 optical components on chip. We can achieve much better optimization stability, 3.7x-7.6x higher efficiency, and save >90% power under practical device variations and thermal crosstalk.
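A generic sparse zeroth-order step in the spirit of the mixed-training strategy: estimate the gradient from two function evaluations under a random sparse sign perturbation (an SPSA-style estimator), so no back-propagation through the photonic device is needed. Constants, names, and the masking scheme are illustrative, not the paper's exact algorithm:

```python
import numpy as np

def spsa_step(f, theta, lr=0.1, delta=0.01, sparsity=1.0, rng=None):
    """One sparse zeroth-order update: perturb a random subset of the
    parameters by +/- delta, estimate the gradient from two evaluations
    of the (noisy, black-box) objective f, and take a descent step."""
    if rng is None:
        rng = np.random.default_rng()
    pert = rng.choice([-1.0, 1.0], size=theta.shape)
    mask = rng.random(theta.shape) < sparsity        # only perturb a fraction of params
    pert = pert * mask
    g_hat = (f(theta + delta * pert) - f(theta - delta * pert)) / (2 * delta) * pert
    return theta - lr * g_hat
```

On a simple quadratic objective the iterates shrink steadily even though only half the coordinates are probed per step.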
|
Jiaqi Gu, Chenghao Feng, Zheng Zhao, Zhoufeng Ying, Ray T. Chen, David Z. Pan
| null | null | 2,021 |
aaai
|
Graph Game Embedding
| null |
Graph embedding aims to encode nodes/edges into low-dimensional continuous features, and has become a crucial tool for graph analysis including graph/node classification, link prediction, etc. In this paper, we propose a novel graph learning framework, named graph game embedding, to learn discriminative node representations as well as encode graph structures. Inspired by the spirit of game learning, node embedding is converted to the selection/searching process of player strategies, where each node corresponds to one player and each edge corresponds to the interaction of two players. Then, a utility function, which theoretically satisfies the Nash Equilibrium, is defined to measure the benefit/loss of players during graph evolution. Furthermore, a collaboration and competition mechanism is introduced to increase the discriminative learning ability. Under this graph game embedding framework, considering different interaction manners of nodes, we propose two specific models, named paired game embedding for paired nodes and group game embedding for group interaction. Compared with existing graph embedding methods, our algorithm possesses two advantages: (1) the designed utility function ensures stable graph evolution with theoretical convergence and Nash Equilibrium satisfaction; (2) the introduced collaboration and competition mechanism endows the graph game embedding framework with discriminative feature learning ability by guiding each node to learn an optimal strategy distinguished from others. We test the proposed method on three public citation network datasets, and the experimental results verify the effectiveness of our method.
|
Xiaobin Hong, Tong Zhang, Zhen Cui, Yuge Huang, Pengcheng Shen, Shaoxin Li, Jian Yang
| null | null | 2,021 |
aaai
|
Provably Good Solutions to the Knapsack Problem via Neural Networks of Bounded Size
| null |
The development of a satisfying and rigorous mathematical understanding of the performance of neural networks is a major challenge in artificial intelligence. Against this background, we study the expressive power of neural networks through the example of the classical NP-hard Knapsack Problem. Our main contribution is a class of recurrent neural networks (RNNs) with rectified linear units that are iteratively applied to each item of a Knapsack instance and thereby compute optimal or provably good solution values. We show that an RNN of depth four and width depending quadratically on the profit of an optimum Knapsack solution is sufficient to find optimum Knapsack solutions. We also prove the following tradeoff between the size of an RNN and the quality of the computed Knapsack solution: for Knapsack instances consisting of n items, an RNN of depth five and width w computes a solution of value at least 1 - O(n^2/sqrt(w)) times the optimum solution value. Our results build upon a classical dynamic programming formulation of the Knapsack Problem as well as a careful rounding of profit values that are also at the core of the well-known fully polynomial-time approximation scheme for the Knapsack Problem. Finally, we point out that our results can be generalized to many other combinatorial optimization problems that admit dynamic programming solution methods, such as various Shortest Path Problems, the Longest Common Subsequence Problem, and the Traveling Salesperson Problem.
|
Christoph Hertrich, Martin Skutella
| null | null | 2,021 |
aaai
|
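The classical profit-indexed dynamic program that the RNN construction builds on can be sketched as follows (this is the textbook DP, not the RNN itself; rounding profits before running it yields the FPTAS-style width/quality tradeoff mentioned above):

```python
def knapsack_by_profit(profits, weights, capacity):
    """Profit-indexed DP: M[p] = minimum total weight achieving profit
    exactly p; the answer is the largest p with M[p] <= capacity."""
    total = sum(profits)
    INF = float("inf")
    M = [0.0] + [INF] * total
    for p_i, w_i in zip(profits, weights):
        for p in range(total, p_i - 1, -1):        # sweep profits downward
            if M[p - p_i] + w_i < M[p]:
                M[p] = M[p - p_i] + w_i
    return max(p for p in range(total + 1) if M[p] <= capacity)

# items (profit, weight): (6, 1), (10, 2), (12, 3); capacity 5 -> optimum 22
best = knapsack_by_profit([6, 10, 12], [1, 2, 3], 5)
```

Because the state is indexed by profit rather than weight, the table width scales with the optimum profit, matching the width dependence in the statement above.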
On the Convergence of Communication-Efficient Local SGD for Federated Learning
| null |
Federated Learning (FL) has attracted increasing attention in recent years. A leading training algorithm in FL is local SGD, which updates the model parameter on each worker and averages model parameters across different workers only once in a while. Although it has fewer communication rounds than the classical parallel SGD, local SGD still has large communication overhead in each communication round for large machine learning models, such as deep neural networks. To address this issue, we propose a new communication-efficient distributed SGD method, which can significantly reduce the communication cost by the error-compensated double compression mechanism. Under the non-convex setting, our theoretical results show that our approach has better communication complexity than existing methods and enjoys the same linear speedup regarding the number of workers as the full-precision local SGD. Moreover, we propose a communication-efficient distributed SGD with momentum to accelerate the convergence, which also has better communication complexity than existing methods and enjoys a linear speedup with respect to the number of workers. At last, extensive experiments are conducted to verify the performance of our two proposed methods.
|
Hongchang Gao, An Xu, Heng Huang
| null | null | 2,021 |
aaai
|
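A minimal sketch of the error-compensation idea behind such compressed communication (a generic top-k compressor with hypothetical names; the paper's scheme additionally compresses the server-to-worker direction, hence "double" compression):

```python
import numpy as np

def topk_compress(v, k):
    """Keep only the k largest-magnitude entries of v."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def error_compensated_step(grad, error, k):
    """Compress the gradient plus the residual left over from earlier
    rounds; what the compressor drops is carried forward, not discarded."""
    corrected = grad + error
    sent = topk_compress(corrected, k)
    return sent, corrected - sent

# over two rounds, everything sent plus the final residual equals the
# sum of the raw gradients: no information is permanently lost
g1, g2 = np.array([3.0, 1.0, 0.5]), np.array([0.0, 2.0, 0.0])
s1, e1 = error_compensated_step(g1, np.zeros(3), k=1)
s2, e2 = error_compensated_step(g2, e1, k=1)
```

This conservation property (everything dropped is eventually resent) is the intuition behind retaining the full-precision linear speedup despite aggressive compression.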
Learning Model-Based Privacy Protection under Budget Constraints
| null |
Protecting privacy in gradient-based learning has become increasingly critical as more sensitive information is being used. Many existing solutions seek to protect the sensitive gradients by constraining the overall privacy cost within a constant budget, where the protection is hand-designed and empirically calibrated to boost the utility of the resulting model. However, it remains challenging to choose the proper protection adapted for specific constraints so that the utility is maximized. To this end, we propose a novel Learning-to-Protect algorithm that automatically learns a model-based protector from a set of non-private learning tasks. The learned protector can be applied to private learning tasks to improve utility within the specific privacy budget constraint. Our empirical studies on both synthetic and real datasets demonstrate that the proposed algorithm can achieve a superior utility with a given privacy constraint and generalize well to new private datasets distributed differently as compared to the hand-designed competitors.
|
Junyuan Hong, Haotao Wang, Zhangyang Wang, Jiayu Zhou
| null | null | 2,021 |
aaai
|
Scaling-Up Robust Gradient Descent Techniques
| null |
We study a scalable alternative to robust gradient descent (RGD) techniques that can be used when losses and/or gradients can be heavy-tailed, though this will be unknown to the learner. The core technique is simple: instead of trying to robustly aggregate gradients at each step, which is costly and leads to sub-optimal dimension dependence in risk bounds, we choose a candidate which does not diverge too far from the majority of cheap stochastic sub-processes run over partitioned data. This lets us retain the formal strength of RGD methods at a fraction of the cost.
|
Matthew J. Holland
| null | null | 2,021 |
aaai
|
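The core selection step can be sketched as follows (a simplified stand-in with hypothetical names, not the paper's exact procedure): run cheap sub-processes on partitioned data, then keep the candidate that stays closest to the majority:

```python
import numpy as np

def select_robust_candidate(candidates):
    """Among parameter vectors produced by independent sub-processes, keep
    the one whose median distance to the others is smallest, i.e. a
    candidate that does not diverge too far from the majority."""
    C = np.asarray(candidates, dtype=float)
    dists = np.linalg.norm(C[:, None, :] - C[None, :, :], axis=-1)
    return C[int(np.argmin(np.median(dists, axis=1)))]

# three agreeing candidates and one heavy-tailed outlier
cands = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [50.0, -40.0]]
winner = select_robust_candidate(cands)
```

Because the median is insensitive to a minority of diverged sub-processes, a few heavy-tailed runs cannot pull the selected candidate away from the majority.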
Addressing Domain Gap via Content Invariant Representation for Semantic Segmentation
| null |
The problem of unsupervised domain adaptation in semantic segmentation is a major challenge for numerous computer vision tasks because acquiring pixel-level labels is time-consuming with expensive human labor. A large gap exists among data distributions in different domains, which will cause severe performance loss when a model trained with synthetic data is generalized to real data. Hence, we propose a novel domain adaptation approach, called Content Invariant Representation Network, to narrow the domain gap between the source (S) and target (T) domains. Previous works developed a network to directly transfer the knowledge from S to T. In contrast, the proposed method aims to progressively reduce the gap between S and T on the basis of a Content Invariant Representation (CIR). CIR is an intermediate domain (I) sharing invariant content with S and having similar data distribution to T. Then, an Ancillary Classifier Module (ACM) is designed to focus on pixel-level details and generate attention-aware results. ACM adaptively assigns different weights to pixels according to their domain offsets, thereby reducing local domain gaps. The global domain gap between CIR and T is also narrowed by enforcing local alignments. Last, we perform self-supervised training in the pseudo-labeled target domain to further fit the distribution of the real data. Comprehensive experiments on two domain adaptation tasks, that is, GTAV → Cityscapes and SYNTHIA → Cityscapes, clearly demonstrate the superiority of our method compared with state-of-the-art methods.
|
Li Gao, Lefei Zhang, Qian Zhang
| null | null | 2,021 |
aaai
|
Learning with Safety Constraints: Sample Complexity of Reinforcement Learning for Constrained MDPs
| null |
Many physical systems have underlying safety considerations that require that the policy employed ensures the satisfaction of a set of constraints. The analytical formulation usually takes the form of a Constrained Markov Decision Process (CMDP). We focus on the case where the CMDP is unknown, and RL algorithms obtain samples to discover the model and compute an optimal constrained policy. Our goal is to characterize the relationship between safety constraints and the number of samples needed to ensure a desired level of accuracy---both objective maximization and constraint satisfaction---in a PAC sense. We explore two classes of RL algorithms, namely, (i) a generative model based approach, wherein samples are taken initially to estimate a model, and (ii) an online approach, wherein the model is updated as samples are obtained. Our main finding is that compared to the best known bounds of the unconstrained regime, the sample complexity of constrained RL algorithms is increased by a factor that is logarithmic in the number of constraints, which suggests that the approach may be easily utilized in real systems.
|
Aria HasanzadeZonuzy, Archana Bura, Dileep Kalathil, Srinivas Shakkottai
| null | null | 2,021 |
aaai
|
HiGAN: Handwriting Imitation Conditioned on Arbitrary-Length Texts and Disentangled Styles
| null |
Given limited handwriting scripts, humans can easily visualize (or imagine) what the handwritten words/texts would look like with other arbitrary textual contents. Moreover, a person is also able to imitate the handwriting styles of provided reference samples. Humans can do such hallucinations, perhaps because they can learn to disentangle the calligraphic styles and textual contents from given handwriting scripts. However, computers cannot learn to do such flexible handwriting imitation with existing techniques. In this paper, we propose a novel handwriting imitation generative adversarial network (HiGAN) to mimic such hallucinations. Specifically, HiGAN can generate variable-length handwritten words/texts conditioned on arbitrary textual contents, which are unconstrained to any predefined corpus or out-of-vocabulary words. Moreover, HiGAN can flexibly control the handwriting styles of synthetic images by disentangling calligraphic styles from the reference samples. Experiments on handwriting benchmarks validate our superiority in terms of visual quality and scalability when compared with the state-of-the-art methods for handwritten word/text synthesis. The code and pre-trained models can be found at https://github.com/ganji15/HiGAN.
|
Ji Gan, Weiqiang Wang
| null | null | 2,021 |
aaai
|
Diffusion Network Inference from Partial Observations
| null |
To infer the structure of a diffusion network from observed diffusion results, existing approaches customarily assume that observed data are complete and contain the final infection status of each node, as well as precise timestamps of node infections. Due to high cost and uncertainties in the monitoring of node infections, exact timestamps are often unavailable in practice, and even the final infection statuses of nodes are sometimes missing. In this work, we study how to carry out diffusion network inference without infection timestamps, using only partial observations of the final infection statuses of nodes. To this end, we iteratively infer the structure of the target diffusion network with observed data and imputed values for missing data, and learn the most likely infection transmission probabilities between nodes w.r.t. current inferred structure, which then help us update the imputation of missing data in turn. Extensive experimental results on both synthetic and real-world networks show that our approach can properly handle missing data and accurately uncover diffusion network structures.
|
Ting Gan, Keqi Han, Hao Huang, Shi Ying, Yunjun Gao, Zongpeng Li
| null | null | 2,021 |
aaai
|
Improving Model Robustness by Adaptively Correcting Perturbation Levels with Active Queries
| null |
In addition to high accuracy, robustness is becoming increasingly important for machine learning models in various applications. Recently, much research has been devoted to improving model robustness by training with noise perturbations. Most existing studies assume a fixed perturbation level for all training examples, which however hardly holds in real tasks. In fact, excessive perturbations may destroy the discriminative content of an example, while deficient perturbations may fail to provide helpful information for improving the robustness. Motivated by this observation, we propose to adaptively adjust the perturbation levels for each example in the training process. Specifically, a novel active learning framework is proposed that allows the model to interactively query the correct perturbation level from human experts. By designing a cost-effective sampling strategy along with a new query type, the robustness can be significantly improved with a few queries. Both theoretical analysis and experimental studies validate the effectiveness of the proposed approach.
|
Kun-Peng Ning, Lue Tao, Songcan Chen, Sheng-Jun Huang
| null | null | 2,021 |
aaai
|
Top-k Ranking Bayesian Optimization
| null |
This paper presents a novel approach to top-k ranking Bayesian optimization (top-k ranking BO), which is a practical and significant generalization of preferential BO to handle top-k ranking and tie/indifference observations. We first design a surrogate model that is not only capable of catering to the above observations, but is also supported by a classic random utility model. Another equally important contribution is the introduction of the first information-theoretic acquisition function for BO with preferential observations, called multinomial predictive entropy search (MPES), which is flexible in handling these observations and is optimized for all inputs of a query jointly. MPES possesses superior performance compared with existing acquisition functions that select the inputs of a query one at a time greedily. We empirically evaluate the performance of MPES using several synthetic benchmark functions, the CIFAR-10 dataset, and the SUSHI preference dataset.
|
Quoc Phong Nguyen, Sebastian Tay, Bryan Kian Hsiang Low, Patrick Jaillet
| null | null | 2,021 |
aaai
|
Precision-based Boosting
| null |
AdaBoost is a highly popular ensemble classification method for which many variants have been published. This paper proposes a generic refinement of all of these AdaBoost variants. Instead of assigning weights based on the total error of the base classifiers (as in AdaBoost), our method uses class-specific error rates. On instance x it assigns a higher weight to a classifier predicting label y on x, if that classifier is less likely to make a mistake when it predicts class y. Like AdaBoost, our method is guaranteed to boost weak learners into strong learners. An empirical study on AdaBoost and one of its multi-class versions, SAMME, demonstrates the superiority of our method on datasets with more than 1,000 instances as well as on datasets with more than three classes.
|
Mohammad Hossein Nikravan, Marjan Movahedan, Sandra Zilles
| null | null | 2,021 |
aaai
|
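A sketch of the class-specific weighting (a hypothetical helper, not the paper's exact formulation): compute one vote weight per predicted class from that class's weighted precision, in place of AdaBoost's single weight from the total error:

```python
import numpy as np

def class_specific_alphas(y_true, y_pred, sample_weight, eps=1e-10):
    """One vote weight per predicted class: alpha_y is computed from the
    weighted precision of the base classifier on class y, rather than
    from a single total-error alpha as in vanilla AdaBoost."""
    alphas = {}
    for y in np.unique(y_pred):
        mask = y_pred == y
        w = sample_weight[mask]
        precision = w[y_true[mask] == y].sum() / max(w.sum(), eps)
        err = min(max(1.0 - precision, eps), 1.0 - eps)  # class-specific error
        alphas[y] = 0.5 * np.log((1.0 - err) / err)
    return alphas

# the classifier is perfect when it says "0" but right only 2/3 of the
# time when it says "1", so its vote on class 0 carries more weight
y_true = np.array([0, 0, 1, 1])
y_pred = np.array([0, 1, 1, 1])
alphas = class_specific_alphas(y_true, y_pred, np.full(4, 0.25))
```

At prediction time, each base classifier would then cast its class-y vote with weight `alphas[y]` instead of a single global alpha.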
Fast PCA in 1-D Wasserstein Spaces via B-splines Representation and Metric Projection
| null |
We address the problem of performing Principal Component Analysis over a family of probability measures on the real line, using the Wasserstein geometry. We present a novel representation of the 2-Wasserstein space, based on a well known isometric bijection and a B-spline expansion. Thanks to this representation, we are able to reinterpret previous work and derive more efficient optimization routines for existing approaches. As shown in our simulations, the solution of these optimization problems can be costly in practice and thus pose a limit to their usage. We propose a novel definition of Principal Component Analysis in the Wasserstein space that, when used in combination with the B-spline representation, yields a straightforward optimization problem that is extremely fast to compute. Through extensive simulation studies, we show how our PCA performs similarly to the ones already proposed in the literature while retaining a much smaller computational cost. We apply our method to a real dataset of mortality rates due to Covid-19 in the US, concluding that our analyses are consistent with the current scientific consensus on the disease.
|
Matteo Pegoraro, Mario Beraha
| null | null | 2,021 |
aaai
|
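The isometric bijection underlying the representation is easy to use directly in 1-D: the 2-Wasserstein distance equals the L2 distance between quantile functions. A minimal sketch (plain grid evaluation; the paper instead expands quantile functions in a B-spline basis):

```python
import numpy as np

def wasserstein2_1d(x, y, n_grid=1000):
    """2-Wasserstein distance between two 1-D samples via the isometry
    with quantile functions: W2(mu, nu) = ||Q_mu - Q_nu||_L2([0,1])."""
    q = (np.arange(n_grid) + 0.5) / n_grid          # midpoint grid on (0, 1)
    qx = np.quantile(np.asarray(x, dtype=float), q)
    qy = np.quantile(np.asarray(y, dtype=float), q)
    return float(np.sqrt(np.mean((qx - qy) ** 2)))

# translating a sample by c moves it exactly c in Wasserstein distance
x = np.arange(100, dtype=float)
```

Replacing the dense grid with B-spline coefficients of the quantile functions is what makes PCA in this space a small, fast linear-algebra problem.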
Consistency and Finite Sample Behavior of Binary Class Probability Estimation
| null |
We investigate to which extent one can recover class probabilities within the empirical risk minimization (ERM) paradigm. We extend existing results and emphasize the tight relations between empirical risk minimization and class probability estimation. Following previous literature on excess risk bounds and proper scoring rules, we derive a class probability estimator based on empirical risk minimization. We then derive conditions under which this estimator will converge with high probability to the true class probabilities with respect to the L1-norm. One of our core contributions is a novel way to derive finite sample L1-convergence rates of this estimator for different surrogate loss functions. We also study in detail which commonly used loss functions are suitable for this estimation problem and briefly address the setting of model-misspecification.
|
Alexander Mey, Marco Loog
| null | null | 2,021 |
aaai
|
Inverse Reinforcement Learning From Like-Minded Teachers
| null |
We study the problem of learning a policy in a Markov decision process (MDP) based on observations of the actions taken by multiple teachers. We assume that the teachers are like-minded in that their reward functions -- while different from each other -- are random perturbations of an underlying reward function. Under this assumption, we demonstrate that inverse reinforcement learning algorithms that satisfy a certain property -- that of matching feature expectations -- yield policies that are approximately optimal with respect to the underlying reward function, and that no algorithm can do better in the worst case. We also show how to efficiently recover the optimal policy when the MDP has one state -- a setting that is akin to multi-armed bandits.
|
Ritesh Noothigattu, Tom Yan, Ariel D. Procaccia
| null | null | 2,021 |
aaai
|
Robust Reinforcement Learning: A Case Study in Linear Quadratic Regulation
| null |
This paper studies the robustness of reinforcement learning algorithms to errors in the learning process. Specifically, we revisit the benchmark problem of discrete-time linear quadratic regulation (LQR) and study the long-standing open question: Under what conditions is the policy iteration method robustly stable from a dynamical systems perspective? Using advanced stability results in control theory, it is shown that policy iteration for LQR is inherently robust to small errors in the learning process and enjoys small-disturbance input-to-state stability: whenever the error in each iteration is bounded and small, the solutions of the policy iteration algorithm are also bounded, and, moreover, enter and stay in a small neighborhood of the optimal LQR solution. As an application, a novel off-policy optimistic least-squares policy iteration for the LQR problem is proposed, when the system dynamics are subjected to additive stochastic disturbances. The proposed new results in robust reinforcement learning are validated by a numerical example.
|
Bo Pang, Zhong-Ping Jiang
| null | null | 2,021 |
aaai
|
Defending against Backdoors in Federated Learning with Robust Learning Rate
| null |
Federated learning (FL) allows a set of agents to collaboratively train a model without sharing their potentially sensitive data. This makes FL suitable for privacy-preserving applications. At the same time, FL is susceptible to adversarial attacks due to decentralized and unvetted data. One important line of attacks against FL is backdoor attacks. In a backdoor attack, an adversary tries to embed a backdoor functionality into the model during training that can later be activated to cause a desired misclassification. To prevent backdoor attacks, we propose a lightweight defense that requires minimal change to the FL protocol. At a high level, our defense is based on carefully adjusting the aggregation server's learning rate, per dimension and per round, based on the sign information of agents' updates. We first conjecture the necessary steps to carry out a successful backdoor attack in the FL setting, and then explicitly formulate the defense based on our conjecture. Through experiments, we provide empirical evidence that supports our conjecture, and we test our defense against backdoor attacks under different settings. We observe that either the backdoor is completely eliminated, or its accuracy is significantly reduced. Overall, our experiments suggest that our defense significantly outperforms some of the recently proposed defenses in the literature, while having minimal influence on the accuracy of the trained models. In addition, we also provide a convergence rate analysis for our proposed scheme.
|
Mustafa Safa Ozdayi, Murat Kantarcioglu, Yulia R. Gel
| null | null | 2,021 |
aaai
|
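A minimal sketch of a sign-based robust learning rate (hypothetical names and a toy aggregation; the full protocol differs in details): for each model dimension, if too few agents agree on the update direction, the server's learning rate for that dimension is flipped:

```python
import numpy as np

def robust_lr_aggregate(updates, lr=0.1, theta=3):
    """Per-dimension robust learning rate: where fewer than `theta` agents
    agree on the sign of the update, the server flips the learning-rate
    sign, moving against the suspicious (possibly backdoored) direction."""
    U = np.asarray(updates, dtype=float)
    sign_sum = np.sign(U).sum(axis=0)
    lr_vec = np.where(np.abs(sign_sum) >= theta, lr, -lr)
    return lr_vec * U.mean(axis=0)

# dimension 0: all four agents agree; dimension 1: they disagree
updates = [[1.0, 1.0], [1.0, -1.0], [1.0, 1.0], [1.0, -0.5]]
step = robust_lr_aggregate(updates, lr=0.1, theta=3)
```

The intuition is that backdoor-relevant dimensions are where honest and adversarial agents disagree, so moving against low-agreement dimensions erodes the backdoor without a separate anomaly detector.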
Multinomial Logit Contextual Bandits: Provable Optimality and Practicality
| null |
We consider a sequential assortment selection problem where the user choice is given by a multinomial logit (MNL) choice model whose parameters are unknown. In each period, the learning agent observes d-dimensional contextual information about the user and the N available items, offers an assortment of size K to the user, and observes the bandit feedback of the item chosen from the assortment. We propose upper confidence bound based algorithms for this MNL contextual bandit. The first algorithm is a simple and practical method that achieves an O(d√T) regret over T rounds. Next, we propose a second algorithm which achieves an O(√(dT)) regret. This matches the lower bound for the MNL bandit problem, up to logarithmic terms, and improves on the best-known result by a √d factor. To establish this sharper regret bound, we present a non-asymptotic confidence bound for the maximum likelihood estimator of the MNL model that may be of independent interest as its own theoretical contribution. We then revisit the simpler, significantly more practical, first algorithm and show that a simple variant of the algorithm achieves the optimal regret for a broad class of important applications.
|
Min-hwan Oh, Garud Iyengar
| null | null | 2,021 |
aaai
|
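For concreteness, the MNL choice model assumed above can be sketched as follows (hypothetical names; the no-purchase option is modeled with utility zero):

```python
import numpy as np

def mnl_choice_probs(X, theta):
    """MNL choice probabilities for an offered assortment with feature
    matrix X (K x d); the no-purchase option has utility zero, so the
    probabilities over the K items sum to less than one."""
    u = np.exp(X @ theta)
    return u / (1.0 + u.sum())

def expected_revenue(X, theta, revenues):
    """Score an assortment by its expected revenue under the MNL model."""
    return float(mnl_choice_probs(X, theta) @ revenues)

# two items indistinguishable from the outside option: 1/3 each
probs = mnl_choice_probs(np.zeros((2, 3)), np.zeros(3))
```

A UCB-style algorithm would replace `theta` with an optimistic per-item utility estimate before scoring candidate assortments with `expected_revenue`.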
Augmented Experiment in Material Engineering Using Machine Learning
| null |
The synthesis of materials using the principle of thermogravimetric analysis to discover new anticorrosive paints requires several costly experiments. This paper presents an approach combining empirical data and domain analytical models to reduce the number of real experiments required to obtain the desired synthesis. The main idea is to predict the behavior of the synthesis of two materials with well-defined mass proportions as a function of temperature. As no exact equational model exists to predict the new material, we integrate a machine learning approach circumscribed by existing domain analytical models, such as heating and kinetics equations, in order to derive a generative model of augmented experiments. Extensive empirical evaluation shows that, using a machine learning approach guided by analytical models, it is possible to substantially reduce the number of needed physical experiments without losing approximation quality.
|
Aomar Osmani, Massinissa Hamidi, Salah Bouhouche
| null | null | 2,021 |
aaai
|
OT-Flow: Fast and Accurate Continuous Normalizing Flows via Optimal Transport
| null |
A normalizing flow is an invertible mapping between an arbitrary probability distribution and a standard normal distribution; it can be used for density estimation and statistical inference. Computing the flow follows the change of variables formula and thus requires invertibility of the mapping and an efficient way to compute the determinant of its Jacobian. To satisfy these requirements, normalizing flows typically consist of carefully chosen components. Continuous normalizing flows (CNFs) are mappings obtained by solving a neural ordinary differential equation (ODE). The neural ODE's dynamics can be chosen almost arbitrarily while ensuring invertibility. Moreover, the log-determinant of the flow's Jacobian can be obtained by integrating the trace of the dynamics' Jacobian along the flow. Our proposed OT-Flow approach tackles two critical computational challenges that limit a more widespread use of CNFs. First, OT-Flow leverages optimal transport (OT) theory to regularize the CNF and enforce straight trajectories that are easier to integrate. Second, OT-Flow features exact trace computation with time complexity equal to that of the trace estimators used in existing CNFs. On five high-dimensional density estimation and generative modeling tasks, OT-Flow performs competitively with state-of-the-art CNFs while on average requiring one-fourth of the number of weights, with an 8x speedup in training time and 24x speedup in inference.
|
Derek Onken, Samy Wu Fung, Xingjian Li, Lars Ruthotto
| null | null | 2,021 |
aaai
|
Learning Deep Generative Models for Queuing Systems
| null |
Modern society is heavily dependent on large-scale client-server systems, with applications ranging from Internet and communication services to sophisticated logistics and deployment of goods. To maintain and improve such a system, a careful study of client and server dynamics is needed – e.g., response/service times, average number of clients at given times, etc. To this end, one traditionally relies, within the queuing theory formalism, on parametric analysis and explicit distribution forms. However, parametric forms limit the model's expressiveness and could struggle on extensively large datasets. We propose a novel data-driven approach towards queuing systems: the Deep Generative Service Times. Our methodology delivers a flexible and scalable model for service and response times. We leverage the representation capabilities of Recurrent Marked Point Processes for the temporal dynamics of clients, as well as Wasserstein Generative Adversarial Network techniques, to learn deep generative models which are able to represent complex conditional service time distributions. We provide extensive experimental analysis on both empirical and synthetic datasets, showing the effectiveness of the proposed models.
|
Cesar Ojeda, Kostadin Cvejoski, Bodgan Georgiev, Christian Bauckhage, Jannis Schuecker, Ramses J. Sanchez
| null | null | 2,021 |
aaai
|
FC-GAGA: Fully Connected Gated Graph Architecture for Spatio-Temporal Traffic Forecasting
| null |
Forecasting of multivariate time-series is an important problem that has applications in traffic management, cellular network configuration, and quantitative finance. A special case of the problem arises when there is a graph available that captures the relationships between the time-series. In this paper we propose a novel learning architecture that achieves performance competitive with or better than the best existing algorithms, without requiring knowledge of the graph. The key element of our proposed architecture is the learnable fully connected hard graph gating mechanism that enables the use of the state-of-the-art and highly computationally efficient fully connected time-series forecasting architecture in traffic forecasting applications. Experimental results for two public traffic network datasets illustrate the value of our approach, and ablation studies confirm the importance of each element of the architecture. The code is available here: https://github.com/boreshkinai/fc-gaga.
|
Boris N. Oreshkin, Arezou Amini, Lucy Coyle, Mark Coates
| null | null | 2,021 |
aaai
|
NASTransfer: Analyzing Architecture Transferability in Large Scale Neural Architecture Search
| null |
Neural Architecture Search (NAS) is an open and challenging problem in machine learning. While NAS offers great promise, the prohibitive computational demand of most of the existing NAS methods makes it difficult to directly search the architectures on large-scale tasks. The typical way of conducting large scale NAS is to search for an architectural building block on a small dataset (either using a proxy set from the large dataset or a completely different small scale dataset) and then transfer the block to a larger dataset. Despite a number of recent results that show the promise of transfer from proxy datasets, a comprehensive evaluation of different NAS methods studying the impact of different source datasets has not yet been addressed. In this work, we propose to analyze the architecture transferability of different NAS methods by performing a series of experiments on large scale benchmarks such as ImageNet1K and ImageNet22K. We find that: (i) The size and domain of the proxy set do not seem to influence architecture performance on the target dataset. On average, architectures searched using completely different small datasets (e.g., CIFAR10) perform similarly after transfer to architectures searched directly on proxy target datasets. However, the design of proxy sets has considerable impact on the rankings of different NAS methods. (ii) While different NAS methods show similar performance on a source dataset (e.g., CIFAR10), they significantly differ in their transfer performance to a large dataset (e.g., ImageNet1K). (iii) Even on large datasets, the random sampling baseline is very competitive, but the choice of the appropriate combination of proxy set and search strategy can provide significant improvement over it. We believe that our extensive empirical analysis will prove useful for future design of NAS algorithms.
|
Rameswar Panda, Michele Merler, Mayoore S Jaiswal, Hui Wu, Kandan Ramakrishnan, Ulrich Finkler, Chun-Fu Richard Chen, Minsik Cho, Rogerio Feris, David Kung, Bishwaranjan Bhattacharjee
| null | null | 2,021 |
aaai
|
Meta-Learning Framework with Applications to Zero-Shot Time-Series Forecasting
| null |
Can meta-learning discover generic ways of processing time series (TS) from a diverse dataset so as to greatly improve generalization on new TS coming from different datasets? This work provides positive evidence for this using a broad meta-learning framework which we show subsumes many existing meta-learning algorithms. Our theoretical analysis suggests that residual connections act as a meta-learning adaptation mechanism, generating a subset of task-specific parameters based on a given TS input, thus gradually expanding the expressive power of the architecture on-the-fly. The same mechanism is shown via linearization analysis to have the interpretation of a sequential update of the final linear layer. Our empirical results on a wide range of data emphasize the importance of the identified meta-learning mechanisms for successful zero-shot univariate forecasting, suggesting that it is viable to train a neural network on a source TS dataset and deploy it on a different target TS dataset without retraining, resulting in performance that is at least as good as that of state-of-practice univariate forecasting models.
|
Boris N. Oreshkin, Dmitri Carpov, Nicolas Chapados, Yoshua Bengio
| null | null | 2,021 |
aaai
|
Second Order Techniques for Learning Time-series with Structural Breaks
| null |
We study fundamental problems in learning nonstationary time-series: how to effectively regularize time-series models and how to adaptively tune forgetting rates. The effectiveness of L2 regularization depends on the choice of coordinates, and the variables need to be appropriately normalized. In a nonstationary environment, however, what is appropriate can vary over time. The proposed regularization is invariant to invertible linear transformations of coordinates, eliminating the necessity of normalization. We also propose an ensemble learning approach to adaptively tuning the forgetting rate and regularization coefficient. We train multiple models with varying hyperparameters and evaluate their performance using multiple hyper forgetting rates. At each step, we choose the best performing model on the basis of the best performing hyper forgetting rate. The effectiveness of the proposed approaches is demonstrated with real time-series.
|
Takayuki Osogami
| null | null | 2,021 |
aaai
|
RT3D: Achieving Real-Time Execution of 3D Convolutional Neural Networks on Mobile Devices
| null |
Mobile devices are becoming an important carrier for deep learning tasks, as they are being equipped with powerful, high-end mobile CPUs and GPUs. However, it is still a challenging task to execute 3D Convolutional Neural Networks (CNNs) targeting real-time performance, besides high inference accuracy. The reason is that the more complex model structure and higher model dimensionality overwhelm the available computation/storage resources on mobile devices. A natural way may be turning to deep learning weight pruning techniques. However, the direct generalization of existing 2D CNN weight pruning methods to 3D CNNs is not ideal for fully exploiting mobile parallelism while achieving high inference accuracy. This paper proposes RT3D, a model compression and mobile acceleration framework for 3D CNNs, seamlessly integrating neural network weight pruning and compiler code generation techniques. We propose and investigate two structured sparsity schemes, i.e., the vanilla structured sparsity and kernel group structured (KGS) sparsity, that are mobile-acceleration friendly. The vanilla sparsity removes whole kernel groups, while KGS sparsity is a more fine-grained structured sparsity that enjoys higher flexibility while exploiting full on-device parallelism. We propose a reweighted regularization pruning algorithm to achieve the proposed sparsity schemes. The inference time speedup due to sparsity approaches the pruning rate of the whole model FLOPs (floating point operations). RT3D demonstrates up to 29.1x speedup in end-to-end inference time compared with current mobile frameworks supporting 3D CNNs, with moderate 1%~1.5% accuracy loss. The end-to-end inference time for 16 video frames can be within 150 ms when executing representative C3D and R(2+1)D models on a cellphone. For the first time, real-time execution of 3D CNNs is achieved on off-the-shelf mobiles.
|
Wei Niu, Mengshu Sun, Zhengang Li, Jou-An Chen, Jiexiong Guan, Xipeng Shen, Yanzhi Wang, Sijia Liu, Xue Lin, Bin Ren
| null | null | 2,021 |
aaai
|
Discovering Fully Oriented Causal Networks
| null |
We study the problem of inferring causal graphs from observational data. We are particularly interested in discovering graphs where all edges are oriented, as opposed to the partially directed graphs that state-of-the-art methods discover. To this end, we base our approach on the algorithmic Markov condition. Unlike the statistical Markov condition, it uniquely identifies the true causal network as the one that provides the simplest—as measured in Kolmogorov complexity—factorization of the joint distribution. Although Kolmogorov complexity is not computable, we can approximate it from above via the Minimum Description Length principle, which allows us to define a consistent and computable score based on non-parametric multivariate regression. To efficiently discover causal networks in practice, we introduce the GLOBE algorithm, which greedily adds, removes, and orients edges such that it minimizes the overall cost. Through an extensive set of experiments, we show that GLOBE performs very well in practice, beating the state of the art by a margin.
|
Osman A Mian, Alexander Marx, Jilles Vreeken
| null | null | 2,021 |
aaai
|
A General Class of Transfer Learning Regression without Implementation Cost
| null |
We propose a novel framework that unifies and extends existing methods of transfer learning (TL) for regression. To bridge a pretrained source model to the model on a target task, we introduce a density-ratio reweighting function, which is estimated through the Bayesian framework with a specific prior distribution. By changing two intrinsic hyperparameters and the choice of the density-ratio model, the proposed method can integrate three popular methods of TL: TL based on cross-domain similarity regularization, a probabilistic TL using the density-ratio estimation, and fine-tuning of pretrained neural networks. Moreover, the proposed method can benefit from its simple implementation without any additional cost; the regression model can be fully trained using off-the-shelf libraries for supervised learning in which the original output variable is simply transformed to a new output variable. We demonstrate its simplicity, generality, and applicability using various real data applications.
|
Shunya Minami, Song Liu, Stephen Wu, Kenji Fukumizu, Ryo Yoshida
| null | null | 2,021 |
aaai
|
Generative Semi-supervised Learning for Multivariate Time Series Imputation
| null |
Missing values, which widely exist in multivariate time series data, hinder effective data analysis. Existing time series imputation methods do not make full use of the label information in real-life time series data. In this paper, we propose a novel semi-supervised generative adversarial network model, named SSGAN, for missing value imputation in multivariate time series data. It consists of three players, i.e., a generator, a discriminator, and a classifier. The classifier predicts labels of time series data, and thus it drives the generator to estimate the missing values (or components), conditioned on observed components and data labels at the same time. We introduce a temporal reminder matrix to help the discriminator better distinguish the observed components from the imputed ones. Moreover, we theoretically prove that SSGAN, using the temporal reminder matrix and the classifier, learns to estimate missing values converging to the true data distribution when the Nash equilibrium is achieved. Extensive experiments on three public real-world datasets demonstrate that SSGAN yields a more than 15% gain in performance compared with the state-of-the-art methods.
|
Xiaoye Miao, Yangyang Wu, Jun Wang, Yunjun Gao, Xudong Mao, Jianwei Yin
| null | null | 2,021 |
aaai
|
Scheduling of Time-Varying Workloads Using Reinforcement Learning
| null |
Resource usage of production workloads running on shared compute clusters often fluctuates significantly across time. While simultaneous spikes in the resource usage of two workloads running on the same machine can cause performance degradation, unused resources in a machine result in wastage and undesirable operational characteristics for a compute cluster. Prior works did not consider such temporal resource fluctuations or their alignment when making scheduling decisions. Due to the variety of time-varying workloads and their complex resource usage characteristics, it is challenging to design well-defined heuristics for scheduling them optimally across different machines in a cluster. In this paper, we propose a Deep Reinforcement Learning (DRL) based approach to exploit various temporal resource usage patterns of time-varying workloads, as well as a technique for creating equivalence classes among a large number of production workloads to improve the scalability of our method. Validations with real production traces from Google and Alibaba show that our technique can significantly improve metrics for operational excellence (e.g., utilization, fragmentation, resource exhaustion, etc.) for a cluster, compared to the baselines.
|
Shanka Subhra Mondal, Nikhil Sheoran, Subrata Mitra
| null | null | 2,021 |
aaai
|
Task-Agnostic Exploration via Policy Gradient of a Non-Parametric State Entropy Estimate
| null |
In a reward-free environment, what is a suitable intrinsic objective for an agent to pursue so that it can learn an optimal task-agnostic exploration policy? In this paper, we argue that the entropy of the state distribution induced by finite-horizon trajectories is a sensible target. Specifically, we present a novel and practical policy-search algorithm, Maximum Entropy POLicy optimization (MEPOL), to learn a policy that maximizes a non-parametric, $k$-nearest neighbors estimate of the state distribution entropy. In contrast to known methods, MEPOL is completely model-free as it requires neither estimating the state distribution of any policy nor modeling the transition dynamics. Then, we empirically show that MEPOL allows learning a maximum-entropy exploration policy in high-dimensional, continuous-control domains, and how this policy facilitates learning meaningful reward-based tasks downstream.
|
Mirco Mutti, Lorenzo Pratissoli, Marcello Restelli
| null | null | 2,021 |
aaai
|
Improved Mutual Information Estimation
| null |
We propose to estimate the KL divergence using a relaxed likelihood ratio estimation in a Reproducing Kernel Hilbert space. We show that the dual of our ratio estimator for KL in the particular case of Mutual Information estimation corresponds to a lower bound on the MI that is related to the so-called Donsker-Varadhan lower bound. In this dual form, MI is estimated via learning a witness function discriminating between the joint density and the product of marginals, as well as an auxiliary scalar variable that enforces a normalization constraint on the likelihood ratio. By extending the function space to neural networks, we propose an efficient neural MI estimator, and validate its performance on synthetic examples, showing an advantage over existing baselines. We demonstrate its strength in large-scale self-supervised representation learning through MI maximization.
|
Youssef Mroueh, Igor Melnyk, Pierre Dognin, Jarret Ross, Tom Sercu
| null | null | 2,021 |
aaai
|
Text-based RL Agents with Commonsense Knowledge: New Challenges, Environments and Baselines
| null |
Text-based games have emerged as an important test-bed for Reinforcement Learning (RL) research, requiring RL agents to combine grounded language understanding with sequential decision making. In this paper, we examine the problem of infusing RL agents with commonsense knowledge. Such knowledge would allow agents to efficiently act in the world by pruning out implausible actions, and to perform look-ahead planning to determine how current actions might affect future world states. We design a new text-based gaming environment called TextWorld Commonsense (TWC) for training and evaluating RL agents with a specific kind of commonsense knowledge about objects, their attributes, and affordances. We also introduce several baseline RL agents which track the sequential context and dynamically retrieve the relevant commonsense knowledge from ConceptNet. We show that agents which incorporate commonsense knowledge in TWC perform better, while acting more efficiently. We conduct user studies to estimate human performance on TWC and show that there is ample room for future improvement.
|
Keerthiram Murugesan, Mattia Atzeni, Pavan Kapanipathi, Pushkar Shukla, Sadhana Kumaravel, Gerald Tesauro, Kartik Talamadupula, Mrinmaya Sachan, Murray Campbell
| null | null | 2,021 |
aaai
|
Elastic Consistency: A Practical Consistency Model for Distributed Stochastic Gradient Descent
| null |
One key element behind the recent progress of machine learning has been the ability to train machine learning models in large-scale distributed shared-memory and message-passing environments. Most of these models are trained employing variants of stochastic gradient descent (SGD) based optimization, but most methods involve some type of consistency relaxation relative to sequential SGD, to mitigate its large communication or synchronization costs at scale. In this paper, we introduce a general consistency condition covering communication-reduced and asynchronous distributed SGD implementations. Our framework, called elastic consistency, decouples the system-specific aspects of the implementation from the SGD convergence requirements, giving a general way to obtain convergence bounds for a wide variety of distributed SGD methods used in practice. Elastic consistency can be used to re-derive or improve several previous convergence bounds in message-passing and shared-memory settings, but also to analyze new models and distribution schemes. As a direct application, we propose and analyze a new synchronization-avoiding scheduling scheme for distributed SGD, and show that it can be used to efficiently train deep convolutional models for image classification.
|
Giorgi Nadiradze, Ilia Markov, Bapi Chatterjee, Vyacheslav Kungurtsev, Dan Alistarh
| null | null | 2,021 |
aaai
|
Advice-Guided Reinforcement Learning in a non-Markovian Environment
| null |
We study a class of reinforcement learning tasks in which the agent receives its reward for complex, temporally-extended behaviors sparsely. For such tasks, the problem is how to augment the state-space so as to make the reward function Markovian in an efficient way. While some existing solutions assume that the reward function is explicitly provided to the learning algorithm (e.g., in the form of a reward machine), others learn the reward function from interactions with the environment, assuming no prior knowledge provided by the user. In this paper, we generalize both approaches and enable the user to give advice to the agent, representing the user's best knowledge about the reward function, potentially fragmented, partial, or even incorrect. We formalize advice as a set of DFAs and present a reinforcement learning algorithm that takes advantage of such advice, with an optimal convergence guarantee. The experiments show that using well-chosen advice can reduce the number of training steps needed for convergence to the optimal policy, and can decrease the computation time to learn the reward function by up to two orders of magnitude.
|
Daniel Neider, Jean-Raphael Gaglione, Ivan Gavran, Ufuk Topcu, Bo Wu, Zhe Xu
| null | null | 2,021 |
aaai
|
Differentially Private k-Means via Exponential Mechanism and Max Cover
| null |
We introduce a new (ϵₚ, δₚ)-differentially private algorithm for the k-means clustering problem. Given a dataset in Euclidean space, the k-means clustering problem requires one to find k points in that space such that the sum of squares of Euclidean distances between each data point and its closest respective point among the k returned is minimised. Although there exist privacy-preserving methods with good theoretical guarantees to solve this problem, in practice it is the additive error that dictates the practical performance of these methods. By reducing the problem to a sequence of instances of maximum coverage on a grid, we are able to derive a new method that achieves lower additive error than previous works. For input datasets with cardinality n and diameter Δ, our algorithm has an O(Δ² (k log² n log(1/δₚ)/ϵₚ + k √(d log(1/δₚ))/ϵₚ)) additive error whilst maintaining constant multiplicative error. We conclude with some experiments and find an improvement over previously implemented work for this problem.
|
Huy L. Nguyen, Anamay Chaturvedi, Eric Z Xu
| null | null | 2,021 |
aaai
|
Modular Graph Transformer Networks for Multi-Label Image Classification
| null |
With the recent advances in graph neural networks, there is a rising number of studies on graph-based multi-label classification with the consideration of object dependencies within visual data. Nevertheless, graph representations can become indistinguishable due to the complex nature of label relationships. We propose a multi-label image classification framework based on graph transformer networks to fully exploit inter-label interactions. The paper presents a modular learning scheme to enhance the classification performance by segregating the computational graph into multiple sub-graphs based on modularity. The proposed approach, named Modular Graph Transformer Networks (MGTN), is capable of employing multiple backbones for better information propagation over different sub-graphs guided by graph transformers and convolutions. We validate our framework on MS-COCO and Fashion550K datasets to demonstrate improvements for multi-label image classification. The source code is available at https://github.com/ReML-AI/MGTN.
|
Hoang D. Nguyen, Xuan-Son Vu, Duc-Trong Le
| null | null | 2,021 |
aaai
|
Clinical Risk Prediction with Temporal Probabilistic Asymmetric Multi-Task Learning
| null |
Although recent multi-task learning methods have shown to be effective in improving the generalization of deep neural networks, they should be used with caution for safety-critical applications, such as clinical risk prediction. This is because even if they achieve improved task-average performance, they may still yield degraded performance on individual tasks, which may be critical (e.g., prediction of mortality risk). Existing asymmetric multi-task learning methods tackle this negative transfer problem by performing knowledge transfer from tasks with low loss to tasks with high loss. However, using loss as a measure of reliability is risky since low loss could result from overfitting. In the case of time-series prediction tasks, knowledge learned for one task (e.g., predicting the sepsis onset) at a specific timestep may be useful for learning another task (e.g., prediction of mortality) at a later timestep, but lack of loss at each timestep makes it difficult to measure the reliability at each timestep. To capture such dynamically changing asymmetric relationships between tasks in time-series data, we propose a novel temporal asymmetric multi-task learning model that performs knowledge transfer from certain tasks/timesteps to relevant uncertain tasks, based on the feature-level uncertainty. We validate our model on multiple clinical risk prediction tasks against various deep learning models for time-series prediction, which our model significantly outperforms without any sign of negative transfer. Further qualitative analysis of learned knowledge graphs by clinicians shows that they are helpful in analyzing the predictions of the model.
|
A. Tuan Nguyen, Hyewon Jeong, Eunho Yang, Sung Ju Hwang
| null | null | 2,021 |
aaai
|
Minimum Robust Multi-Submodular Cover for Fairness
| null |
In this paper, we study a novel problem, Minimum Robust Multi-Submodular Cover for Fairness (MinRF), as follows: given a ground set V; m monotone submodular functions f_1,...,f_m; m thresholds T_1,...,T_m; and a non-negative integer r; MinRF asks for the smallest set S such that f_i(S \ X) ≥ T_i for all i ∈ [m] and all X with |X| ≤ r. We prove that MinRF is inapproximable within (1 − ε) ln m; and no algorithm, taking fewer than an exponential number of queries in terms of r, is able to output a feasible set for MinRF with high certainty. Three bicriteria approximation algorithms with performance guarantees are proposed: one for r = 0, one for r = 1, and one for general r. We further investigate our algorithms' performance in two applications of MinRF, Information Propagation for Multiple Groups and Movie Recommendation for Multiple Users. Our algorithms are shown to outperform baseline heuristics in both solution quality and the number of queries in most cases.
|
Lan N. Nguyen, My T. Thai
| null | null | 2,021 |
aaai
|
Semi-supervised Medical Image Segmentation through Dual-task Consistency
| null |
Deep learning-based semi-supervised learning (SSL) algorithms have led to promising results in medical image segmentation and can alleviate doctors' expensive annotations by leveraging unlabeled data. However, most of the existing SSL algorithms in the literature tend to regularize the model training by perturbing networks and/or data. Observing that multi/dual-task learning attends to various levels of information which have inherent prediction perturbation, we ask the question in this work: can we explicitly build task-level regularization rather than implicitly constructing networks- and/or data-level perturbation and then regularization for SSL? To answer this question, we propose a novel dual-task-consistency semi-supervised framework for the first time. Concretely, we use a dual-task deep network that jointly predicts a pixel-wise segmentation map and a geometry-aware level set representation of the target. The level set representation is converted to an approximated segmentation map through a differentiable task transform layer. Simultaneously, we introduce a dual-task consistency regularization between the level set-derived segmentation maps and directly predicted segmentation maps for both labeled and unlabeled data. Extensive experiments on two public datasets show that our method can largely improve the performance by incorporating the unlabeled data. Meanwhile, our framework outperforms the state-of-the-art semi-supervised learning methods.
|
Xiangde Luo, Jieneng Chen, Tao Song, Guotai Wang
| null | null | 2,021 |
aaai
|
Temporal Latent Auto-Encoder: A Method for Probabilistic Multivariate Time Series Forecasting
| null |
Probabilistic forecasting of high dimensional multivariate time series is a notoriously challenging task, both in terms of computational burden and distribution modeling. Most previous work either makes simple distribution assumptions or abandons modeling cross-series correlations. A promising line of work exploits scalable matrix factorization for latent-space forecasting, but is limited to linear embeddings, unable to model distributions, and not trainable end-to-end when using deep learning forecasting. We introduce a novel temporal latent auto-encoder method which enables nonlinear factorization of multivariate time series, learned end-to-end with a temporal deep learning latent space forecast model. By imposing a probabilistic latent space model, complex distributions of the input series are modeled via the decoder. Extensive experiments demonstrate that our model achieves state-of-the-art performance on many popular multivariate datasets, with gains sometimes as high as 50% for several standard metrics.
|
Nam Nguyen, Brian Quanz
| null | null | 2,021 |
aaai
|
Revisiting Co-Occurring Directions: Sharper Analysis and Efficient Algorithm for Sparse Matrices
| null |
We study the streaming model for approximate matrix multiplication (AMM). We are interested in the scenario that the algorithm can only take one pass over the data with limited memory. The state-of-the-art deterministic sketching algorithm for streaming AMM is the co-occurring directions (COD), which has much smaller approximation errors than randomized algorithms and outperforms other deterministic sketching methods empirically. In this paper, we provide a tighter error bound for COD whose leading term considers the potential approximate low-rank structure and the correlation of input matrices. We prove COD is space optimal with respect to our improved error bound. We also propose a variant of COD for sparse matrices with theoretical guarantees. The experiments on real-world sparse datasets show that the proposed algorithm is more efficient than baseline methods.
|
Luo Luo, Cheng Chen, Guangzeng Xie, Haishan Ye
| null | null | 2,021 |
aaai
|
Adaptive Knowledge Driven Regularization for Deep Neural Networks
| null |
In many real-world applications, the amount of data available for training is often limited, and thus inductive bias and auxiliary knowledge are much needed for regularizing model training. One popular regularization method is to impose prior distribution assumptions on model parameters, and many recent works also attempt to regularize training by integrating external knowledge into specific neurons. However, existing regularization methods do not take into account the interaction between connected neuron pairs, which is invaluable internal knowledge for adaptive regularization for better representation learning as training progresses. In this paper, we explicitly take into account the interaction between connected neurons, and propose an adaptive internal knowledge driven regularization method, CORR-Reg. The key idea of CORR-Reg is to give a higher significance weight to connections of more correlated neuron pairs. The significance weights adaptively identify more important input neurons for each neuron. Instead of regularizing connection model parameters with a static strength such as weight decay, CORR-Reg imposes weaker regularization strength on more significant connections. As a consequence, neurons attend to more informative input features and thus learn more diversified and discriminative representations. We derive CORR-Reg within the Bayesian inference framework and propose a novel optimization algorithm with the Lagrange multiplier method and Stochastic Gradient Descent. Extensive evaluations on diverse benchmark datasets and neural network structures show that CORR-Reg achieves significant improvement over state-of-the-art regularization methods.
|
Zhaojing Luo, Shaofeng Cai, Can Cui, Beng Chin Ooi, Yang Yang
| null | null | 2,021 |
aaai
|
PULNS: Positive-Unlabeled Learning with Effective Negative Sample Selector
| null |
Positive-unlabeled learning (PU learning) is an important case of binary classification where the training data only contains positive and unlabeled samples. The current state-of-the-art approach for PU learning is the cost-sensitive approach, which casts PU learning as a cost-sensitive classification problem and relies on an unbiased risk estimator for correcting the bias introduced by the unlabeled samples. However, this approach requires knowledge of the class prior and is subject to potential label noise. In this paper, we propose a novel PU learning approach dubbed PULNS, equipped with an effective negative sample selector, which is optimized by reinforcement learning. Our PULNS approach employs an effective negative sample selector as the agent responsible for selecting negative samples from the unlabeled data. While the selected, likely negative samples can be used to improve the classifier, the performance of the classifier is also used as the reward to improve the selector through the REINFORCE algorithm. By alternating the updates of the selector and the classifier, the performance of both is improved. Extensive experimental studies on 7 real-world application benchmarks demonstrate that PULNS consistently outperforms the current state-of-the-art methods in PU learning, and our experimental results also confirm the effectiveness of the negative sample selector underlying PULNS.
|
Chuan Luo, Pu Zhao, Chen Chen, Bo Qiao, Chao Du, Hongyu Zhang, Wei Wu, Shaowei Cai, Bing He, Saravanakumar Rajmohan, Qingwei Lin
| null | null | 2,021 |
aaai
|
On the Adequacy of Untuned Warmup for Adaptive Optimization
| null |
Adaptive optimization algorithms such as Adam (Kingma and Ba, 2014) are widely used in deep learning. The stability of such algorithms is often improved with a warmup schedule for the learning rate. Motivated by the difficulty of choosing and tuning warmup schedules, recent work proposes automatic variance rectification of Adam's adaptive learning rate, claiming that this rectified approach ("RAdam") surpasses the vanilla Adam algorithm and reduces the need for expensive tuning of Adam with warmup. In this work, we refute this analysis and provide an alternative explanation for the necessity of warmup based on the magnitude of the update term, which is of greater relevance to training stability. We then provide some "rule-of-thumb" warmup schedules, and we demonstrate that simple untuned warmup of Adam performs more-or-less identically to RAdam in typical practical settings. We conclude by suggesting that practitioners stick to linear warmup with Adam, with a sensible default being linear warmup over 2 / (1 - β₂) training iterations.
|
Jerry Ma, Denis Yarats
| null | null | 2,021 |
aaai
|
Tailoring Embedding Function to Heterogeneous Few-Shot Tasks by Global and Local Feature Adaptors
| null |
Few-Shot Learning (FSL) is essential for visual recognition. Many methods tackle this challenging problem by learning an embedding function from seen classes and transferring it to unseen classes with a few labeled instances. Researchers recently found it beneficial to incorporate task-specific feature adaptation into FSL models, which produces the most representative features for each task. However, these methods ignore the diversity of classes and apply a global transformation to the task. In this paper, we propose Global and Local Feature Adaptor (GLoFA), a unifying framework that tailors the instance representation to specific tasks by global and local feature adaptors. We claim that class-specific local transformation helps to improve the representation ability of the feature adaptor. Global masks tend to capture sketchy patterns, while local masks focus on detailed characteristics. A strategy to measure the relationship between instances adaptively, based on the characteristics of both tasks and classes, endows GLoFA with the ability to handle mixed-grained tasks. GLoFA outperforms other methods on a heterogeneous task distribution and achieves competitive results on benchmark datasets.
|
Su Lu, Han-Jia Ye, De-Chuan Zhan
| null | null | 2,021 |
aaai
|
Joint-Label Learning by Dual Augmentation for Time Series Classification
| null |
Recently, deep neural networks (DNNs) have achieved excellent performance on time series classification. However, DNNs require large amounts of labeled data for supervised training. Although data augmentation can alleviate this problem, the standard approach assigns the same label to all augmented samples from the same source. This leads to the expansion of the data distribution such that the classification boundaries may be even harder to determine. In this paper, we propose Joint-label learning by Dual Augmentation (JobDA), which can enrich the training samples without expanding the distribution of the original data. Instead, we apply simple transformations to the time series and give these modified time series new labels, so that the model has to distinguish between these and the original data, as well as separating the original classes. This approach sharpens the boundaries around the original time series, and results in superior classification performance. We use time series warping for our transformations: we shrink and stretch different regions of the original time series, like a fun-house mirror. Experiments conducted on extensive time-series datasets show that JobDA can improve model performance on small datasets. Moreover, we verify that JobDA has better generalization ability compared with conventional data augmentation, and the visualization analysis further demonstrates that JobDA can learn more compact clusters.
|
Qianli Ma, Zhenjing Zheng, Jiawei Zheng, Sen Li, Wanqing Zhuang, Garrison W. Cottrell
| null | null | 2,021 |
aaai
|
Searching for Machine Learning Pipelines Using a Context-Free Grammar
| null |
AutoML automatically selects, composes, and parameterizes machine learning algorithms into a workflow or pipeline of operations that aims at maximizing performance on a given dataset. Although current AutoML methods have achieved impressive results, they mostly concentrate on optimizing fixed linear workflows. In this paper, we take a different approach and focus on generating and optimizing pipelines with complex directed acyclic graph shapes. These complex pipeline structures may lead to discovering hidden features and thus boost performance considerably. We explore the power of heuristic search and context-free grammars to search for and optimize these kinds of pipelines. Experiments on various benchmark datasets show that our approach is highly competitive and often outperforms existing AutoML systems.
|
Radu Marinescu, Akihiro Kishimoto, Parikshit Ram, Ambrish Rawat, Martin Wistuba, Paulito P. Palmes, Adi Botea
| null | null | 2,021 |
aaai
|
Infinite Gaussian Mixture Modeling with an Improved Estimation of the Number of Clusters
| null |
Infinite Gaussian mixture modeling (IGMM) is a modeling method that determines all the parameters of a Gaussian mixture model (GMM), including its order. It has been well documented that it is a consistent estimator for probability density functions in the sense that, given enough training data from sufficiently regular probability density functions, it will converge to the shape of the original density curve. It is also known, however, that IGMM provides an inconsistent estimation of the number of clusters. The current paper shows that the nature of this inconsistency is an overestimation, and we pinpoint that this problem is an inherent part of the training algorithm. It stems mostly from a "self-reinforcing feedback," which is a certain relation between the likelihood function of one of the model hyperparameters (alpha) and the probability of sampling the number of components, that sustains their mutual growth during the Gibbs iterations. We show that this problem can be resolved by using informative priors for alpha and propose a modified training procedure that uses the inverse chi-square for this purpose. The modified algorithm successfully recovers the "known" order in all the experiments with synthetic data sets. It also demonstrates good results when compared to other methods used to evaluate model order, using real-world databases. Furthermore, the improved performance is attained without undermining the fidelity of estimating the original PDFs and with a significant reduction in computational cost.
|
Avi Matza, Yuval Bistritz
| null | null | 2,021 |
aaai
|
Sequential Attacks on Kalman Filter-based Forward Collision Warning Systems
| null |
Kalman Filter (KF) is widely used in various domains to perform sequential learning or variable estimation. In the context of autonomous vehicles, KF constitutes the core component of many Advanced Driver Assistance Systems (ADAS), such as Forward Collision Warning (FCW). It tracks the states (distance, velocity, etc.) of relevant traffic objects based on sensor measurements. The tracking output of KF is often fed into downstream logic to produce alerts, which will then be used by human drivers to make driving decisions in near-collision scenarios. In this paper, we study adversarial attacks on KF as part of the more complex machine-human hybrid system of Forward Collision Warning. Our attack goal is to negatively affect human braking decisions by causing KF to output incorrect state estimations that lead to false or delayed alerts. We accomplish this by sequentially manipulating measurements fed into the KF, and propose a novel Model Predictive Control (MPC) approach to compute the optimal manipulation. Via experiments conducted in a simulated driving environment, we show that the attacker is able to successfully change FCW alert signals through planned manipulation of measurements prior to the desired target time. These results demonstrate that our attack can stealthily mislead a distracted human driver and cause vehicle collisions.
|
Yuzhe Ma, Jon A Sharp, Ruizhe Wang, Earlence Fernandes, Xiaojin Zhu
| null | null | 2,021 |
aaai
|
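To make the attack surface concrete, here is a minimal textbook linear Kalman filter for a constant-velocity tracker (a generic sketch, not the FCW system's actual implementation); the `z` argument is the sensor measurement that an attacker in this setting would manipulate:

```python
import numpy as np

def kf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    x: state estimate, P: covariance, z: (possibly manipulated) measurement."""
    # predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)     # innovation-weighted correction
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# constant-velocity model: state = [distance, velocity], measure distance only
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)
R = np.array([[1e-2]])

x, P = np.array([0.0, 0.0]), np.eye(2)
for t in range(100):
    true_dist = 5.0 + 2.0 * dt * t            # object moving at 2 m/s
    x, P = kf_step(x, P, np.array([true_dist]), F, H, Q, R)
print(x)  # velocity estimate approaches 2.0
```

Because every update blends the prediction with the measurement through the Kalman gain, a sequence of small measurement perturbations accumulates in the state estimate, which is the leverage the paper's MPC attack exploits.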
Deep Mutual Information Maximin for Cross-Modal Clustering
| null |
Cross-modal clustering (CMC) aims to enhance clustering performance by exploring complementary information from multiple modalities. However, the performance of existing CMC algorithms is still unsatisfactory due to the conflict between heterogeneous modalities and the high-dimensional non-linear nature of the individual modalities. In this paper, a novel deep mutual information maximin (DMIM) method for cross-modal clustering is proposed to maximally preserve the shared information of multiple modalities while eliminating the superfluous information of individual modalities in an end-to-end manner. Specifically, a multi-modal shared encoder is first built to align the latent feature distributions by sharing parameters across modalities. Then, DMIM formulates the complementarity of multi-modal representations as a mutual information maximin objective function, in which the shared information of multiple modalities and the superfluous information of individual modalities are identified by mutual information maximization and minimization, respectively. To solve the DMIM objective function, we propose a variational optimization method that ensures convergence to a locally optimal solution. Moreover, an auxiliary overclustering mechanism is employed to optimize the clustering structure by introducing more detailed clustering classes. Extensive experimental results demonstrate the superiority of the DMIM method over state-of-the-art cross-modal clustering methods on the IAPR-TC12, ESP-Game, MIRFlickr and NUS-Wide datasets.
|
Yiqiao Mao, Xiaoqiang Yan, Qiang Guo, Yangdong Ye
| null | null | 2,021 |
aaai
|
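The maximin objective above is built from mutual information terms. As a self-contained illustration (independent of the paper's neural estimator), the mutual information of a discrete joint distribution can be computed directly from its definition:

```python
import numpy as np

def mutual_information(p_xy):
    """I(X;Y) in nats for a discrete joint distribution given as a 2-D array."""
    p_xy = np.asarray(p_xy, dtype=float)
    p_x = p_xy.sum(axis=1, keepdims=True)     # marginal of X
    p_y = p_xy.sum(axis=0, keepdims=True)     # marginal of Y
    prod = p_x @ p_y                          # independence baseline
    mask = p_xy > 0
    return float((p_xy[mask] * np.log(p_xy[mask] / prod[mask])).sum())

# perfectly dependent binary variables carry log(2) nats; independent ones carry 0
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))      # ~0.693
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # ~0.0
```

Maximizing a term like this across modalities pulls shared structure into the representation; minimizing it against modality-specific nuisances squeezes the superfluous information out, which is the intuition behind the maximin formulation.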
Unsupervised Learning of Graph Hierarchical Abstractions with Differentiable Coarsening and Optimal Transport
| null |
Hierarchical abstractions are a methodology for solving large-scale graph problems in various disciplines. Coarsening is one such approach: it generates a pyramid of graphs in which each level is a structural summary of the prior one. With a long history in scientific computing, many coarsening strategies have been developed based on mathematically driven heuristics. Recently, there has been resurgent interest in deep learning for designing hierarchical methods that are learnable through differentiable parameterization. These approaches are paired with downstream tasks for supervised learning. In practice, however, supervised signals (e.g., labels) are scarce and often laborious to obtain. In this work, we propose an unsupervised approach, coined OTCoarsening, based on optimal transport. Both the coarsening matrix and the transport cost matrix are parameterized, so that an optimal coarsening strategy can be learned and tailored to a given set of graphs. We demonstrate that the proposed approach produces meaningful coarse graphs and yields competitive performance compared with supervised methods for graph classification and regression.
|
Tengfei Ma, Jie Chen
| null | null | 2,021 |
aaai
|
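OTCoarsening parameterizes a transport cost matrix; the underlying entropy-regularized optimal transport problem is commonly solved with Sinkhorn iterations. A minimal generic sketch (not the paper's parameterization) on two small histograms:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.5, iters=2000):
    """Entropy-regularized optimal transport between histograms a and b
    with cost matrix C; returns the transport plan."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(iters):
        u = a / (K @ v)      # scale rows toward marginal a
        v = b / (K.T @ u)    # scale columns toward marginal b
    return u[:, None] * K * v[None, :]

a = np.array([0.5, 0.5])
b = np.array([0.25, 0.25, 0.5])
C = np.abs(np.arange(2)[:, None] - np.arange(3)[None, :]).astype(float)
P = sinkhorn(a, b, C)
print(P.sum(axis=1), P.sum(axis=0))  # marginals match a and b
```

The alternating row/column scalings are differentiable, which is what lets a learned cost matrix receive gradients through the transport plan in approaches of this kind.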
Learning Representations for Incomplete Time Series Clustering
| null |
Time-series clustering is an essential unsupervised technique for data analysis, applied in many real-world fields such as medical analysis and DNA microarrays. Existing clustering methods are usually based on the assumption that the data is complete. However, time series in real-world applications often contain missing values. The traditional strategy (impute first, then cluster) does not optimize the imputation and clustering process as a whole, which not only makes performance dependent on the combination of imputation and clustering methods but also fails to achieve satisfactory results. How best to improve clustering performance on incomplete time series remains a challenge. This paper proposes a novel unsupervised temporal representation learning model, named Clustering Representation Learning on Incomplete time-series data (CRLI). CRLI jointly optimizes the imputation and clustering process to impute more discriminative values for clustering and to endow the learned representations with good clustering properties. Also, to reduce error propagation from imputation to clustering, we introduce a discriminator to make the distribution of imputed values close to the true one and train CRLI in an alternating manner. Experiments conducted on eight real-world incomplete time-series datasets show that CRLI outperforms existing methods. We demonstrate the effectiveness of the learned representations and the convergence of the model through visualization analysis. Moreover, we reveal that the joint training strategy imputes values close to the true ones in the important sub-sequences while imputing more discriminative values in the less important sub-sequences, making the imputed sequence cluster-friendly.
|
Qianli Ma, Chuxin Chen, Sen Li, Garrison W. Cottrell
| null | null | 2,021 |
aaai
|
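For contrast with CRLI's joint optimization, the "impute first, then cluster" baseline the abstract criticizes can be sketched as mean imputation followed by plain k-means (a generic baseline, not CRLI itself):

```python
import numpy as np

def mean_impute(X):
    """Replace NaNs with per-feature means: the naive 'impute first' step."""
    X = X.astype(float).copy()
    col_means = np.nanmean(X, axis=0)
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = col_means[cols]
    return X

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's k-means; returns cluster labels."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# two well-separated groups of short "series" with missing values
X = np.array([[0.0, 0.1, np.nan], [0.2, np.nan, 0.1], [0.1, 0.0, 0.2],
              [5.0, np.nan, 5.2], [np.nan, 5.1, 4.9], [5.2, 5.0, np.nan]])
labels = kmeans(mean_impute(X), k=2)
print(labels)
```

Note the weakness the paper targets: mean imputation fills every gap with a global column mean, blind to cluster structure, so the imputed values carry no discriminative signal for the subsequent clustering step.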
Composite Adversarial Attacks
| null |
Adversarial attack is a technique for deceiving Machine Learning (ML) models and provides a way to evaluate adversarial robustness. In practice, attack algorithms are manually selected and tuned by human experts to break an ML system. However, manual selection of attackers tends to be sub-optimal, leading to a mistaken assessment of model security. In this paper, a new procedure called Composite Adversarial Attack (CAA) is proposed for automatically searching for the best combination of attack algorithms and their hyper-parameters from a candidate pool of 32 base attackers. We design a search space where an attack policy is represented as an attacking sequence, i.e., the output of the previous attacker is used as the initialization input for its successor. The multi-objective NSGA-II genetic algorithm is adopted to find the strongest attack policy with minimum complexity. Experimental results show that CAA beats 10 top attackers on 11 diverse defenses with less elapsed time (6× faster than AutoAttack), and achieves a new state-of-the-art on linf, l2 and unrestricted adversarial attacks.
|
Xiaofeng Mao, Yuefeng Chen, Shuhui Wang, Hang Su, Yuan He, Hui Xue
| null | null | 2,021 |
aaai
|
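The key structural idea above is sequential composition: each attack starts from the previous attack's output. As a toy illustration (exhaustive search over two fixed-step attacks on a linear classifier, standing in for CAA's NSGA-II search over 32 attackers), a sketch under those assumptions:

```python
import itertools
import numpy as np

def margin(w, x, y):
    """Signed margin of a linear classifier sign(w.x); positive = correct."""
    return y * float(w @ x)

def linf_attack(w, x, y, eps=0.2):
    """One FGSM-like step under the l-infinity norm."""
    return x - eps * y * np.sign(w)

def l2_attack(w, x, y, eps=0.2):
    """One gradient step under the l2 norm."""
    return x - eps * y * w / np.linalg.norm(w)

def best_composite(w, x, y, pool, length=2):
    """Search attack sequences; each attack starts from the previous
    attack's output, mirroring CAA's sequential composition."""
    best_seq, best_m = None, np.inf
    for seq in itertools.product(pool, repeat=length):
        z = x
        for atk in seq:
            z = atk(w, z, y)
        m = margin(w, z, y)
        if m < best_m:
            best_seq, best_m = seq, m
    return best_seq, best_m

w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.2, 0.1])
y = 1
seq, m = best_composite(w, x, y, [linf_attack, l2_attack])
print(margin(w, x, y), m)  # the composite drives the margin below the original
```

CAA replaces this brute-force loop with a multi-objective genetic search that also penalizes the sequence's computational cost.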
Auto-Encoding Transformations in Reparameterized Lie Groups for Unsupervised Learning
| null |
Unsupervised training of deep representations has recently demonstrated remarkable potential in mitigating the prohibitive expense of annotating labeled data. Among these methods is predicting transformations as a pretext task for self-training representations, which has shown great promise for unsupervised learning. However, existing approaches in this category learn representations by either treating a discrete set of transformations as separate classes, or using the Euclidean distance as the metric to minimize the errors between transformations. None of them has been dedicated to revealing the vital role of the geometry of transformation groups in learning representations. Indeed, an image must continuously transform along the curved manifold of a transformation group rather than through a straight line in the forbidden ambient Euclidean space. This suggests using the geodesic distance to minimize the errors between the estimated and ground-truth transformations. In particular, we focus on homographies, a general group of planar transformations containing the Euclidean, similarity and affine transformations as special cases. To avoid explicitly computing the intractable Riemannian logarithm, we project homographies onto an alternative group of rotation transformations, SR(3), with a tractable form of geodesic distance. Experiments demonstrate that the proposed approach to Auto-Encoding Transformations exhibits superior performance on a variety of recognition problems.
|
Feng Lin, Haohang Xu, Houqiang Li, Hongkai Xiong, Guo-Jun Qi
| null | null | 2,021 |
aaai
|
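The abstract argues for geodesic rather than Euclidean error metrics on transformation groups. As a generic illustration on the ordinary rotation group SO(3) (not the paper's SR(3) construction), the geodesic distance between two rotations is the rotation angle of their relative rotation, which differs from the ambient Frobenius distance:

```python
import numpy as np

def geodesic_distance(R1, R2):
    """Geodesic distance on SO(3): the angle of the relative rotation R1^T R2."""
    cos_theta = (np.trace(R1.T @ R2) - 1.0) / 2.0
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def rot_z(theta):
    """Rotation by theta about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

I = np.eye(3)
theta = 1.2
print(geodesic_distance(I, rot_z(theta)))   # = theta, measured along the manifold
print(np.linalg.norm(I - rot_z(theta)))     # Frobenius chord, cuts through ambient space
```

The chordal (Frobenius) value equals 2*sqrt(2)*sin(theta/2), so the two metrics agree only for small angles; a loss built on the geodesic respects the group's curved geometry for all angles.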