arxiv_id : string (7-11 chars)
title : string (7-243 chars)
abstract : string (3-2.79k chars)
link : string (21-49 chars)
authors : sequence (1-451 entries)
updated : string (20 chars)
published : string (20 chars)
2006.05344
Real-time Neural Networks Implementation Proposal for Microcontrollers
The adoption of intelligent systems with Artificial Neural Networks (ANNs) embedded in hardware for real-time applications is in growing demand in fields like the Internet of Things (IoT) and Machine to Machine (M2M). However, applying ANNs in this type of system poses a significant challenge due to the high computational power required to process their basic operations. This paper presents an implementation strategy for a Multilayer Perceptron (MLP) neural network on a microcontroller (a low-cost, low-power platform). A modular, matrix-based MLP with the full classification process was implemented on the microcontroller, along with backpropagation training. Testing and validation were performed through Hardware in the Loop (HIL) evaluation of the Mean Squared Error (MSE) of the training process, the classification results, and the processing time of each implementation module. The results revealed a linear relationship between the hyperparameter values and the processing time required for classification, and showed that the processing time meets the requirements of many applications in the fields mentioned above. These findings show that this implementation strategy and this platform can be applied successfully to real-time applications that require the capabilities of ANNs.
http://arxiv.org/abs/2006.05344v1
[ "Caio J. B. V. Guimarães", "Marcelo A. C. Fernandes" ]
2020-06-08T03:51:14Z
2020-06-08T03:51:14Z
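As context for the matrix-based MLP described in this entry, the sketch below shows a minimal forward classification pass in NumPy. The layer sizes, sigmoid activation, and random parameters are illustrative assumptions rather than values from the paper, which targets a microcontroller implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, weights, biases):
    """Matrix-based forward pass through an MLP, one layer at a time."""
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)  # each layer is a matrix-vector product plus bias
    return a

# Illustrative 2-4-3 network with random parameters (all hyperparameters are assumptions)
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 2)), rng.standard_normal((3, 4))]
biases = [rng.standard_normal(4), rng.standard_normal(3)]
print(mlp_forward(np.array([0.5, -1.0]), weights, biases))
```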
1911.13219
Automated Coronary Artery Atherosclerosis Detection and Weakly Supervised Localization on Coronary CT Angiography with a Deep 3-Dimensional Convolutional Neural Network
We propose a fully automated algorithm based on a deep learning framework enabling screening of a coronary computed tomography angiography (CCTA) examination for confident detection of the presence or absence of coronary artery atherosclerosis. The system starts by extracting the coronary arteries and their branches from CCTA datasets and representing them with multi-planar reformatted volumes; pre-processing and augmentation techniques are then applied to increase the robustness and generalization ability of the system. A 3-dimensional convolutional neural network (3D-CNN) is utilized to model pathological changes (e.g., atherosclerotic plaques) in coronary vessels. The system learns the discriminatory features between vessels with and without atherosclerosis. The discriminative features at the final convolutional layer are visualized with a saliency map approach to provide visual clues related to atherosclerosis likelihood and location. We have evaluated the system on a reference dataset representing 247 patients with atherosclerosis and 246 patients free of atherosclerosis. With five-fold cross-validation, an Accuracy = 90.9%, Positive Predictive Value = 58.8%, Sensitivity = 68.9%, Specificity = 93.6%, and Negative Predictive Value (NPV) = 96.1% are achieved at the artery/branch level with a threshold of 0.5. The average area under the receiver operating characteristic curve is 0.91. The system indicates a high NPV, which may be potentially useful for assisting interpreting physicians in excluding coronary atherosclerosis in patients with acute chest pain.
http://arxiv.org/abs/1911.13219v3
[ "Sema Candemir", "Richard D. White", "Mutlu Demirer", "Vikash Gupta", "Matthew T. Bigelow", "Luciano M. Prevedello", "Barbaros S. Erdal" ]
2020-06-08T03:52:22Z
2019-11-26T23:23:29Z
2006.04349
Distributional Robustness with IPMs and links to Regularization and GANs
Robustness to adversarial attacks is an important concern due to the fragility of deep neural networks to small perturbations and has received an abundance of attention in recent years. Distributionally Robust Optimization (DRO), a particularly promising way of addressing this challenge, studies robustness via divergence-based uncertainty sets and has provided valuable insights into robustification strategies such as regularization. In the context of machine learning, the majority of existing results have chosen $f$-divergences, Wasserstein distances and more recently, the Maximum Mean Discrepancy (MMD) to construct uncertainty sets. We extend this line of work for the purposes of understanding robustness via regularization by studying uncertainty sets constructed with Integral Probability Metrics (IPMs) - a large family of divergences including the MMD, Total Variation and Wasserstein distances. Our main result shows that DRO under \textit{any} choice of IPM corresponds to a family of regularization penalties, which recover and improve upon existing results in the setting of MMD and Wasserstein distances. Due to the generality of our result, we show that other choices of IPMs correspond to other commonly used penalties in machine learning. Furthermore, we extend our results to shed light on adversarial generative modelling via $f$-GANs, constituting the first study of distributional robustness for the $f$-GAN objective. Our results unveil the inductive properties of the discriminator set with regards to robustness, allowing us to give positive comments for several penalty-based GAN methods such as Wasserstein-, MMD- and Sobolev-GANs. In summary, our results intimately link GANs to distributional robustness, extend previous results on DRO and contribute to our understanding of the link between regularization and robustness at large.
http://arxiv.org/pdf/2006.04349v1
[ "Hisham Husain" ]
2020-06-08T04:41:29Z
2020-06-08T04:41:29Z
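For orientation, the generic distributionally robust objective over an IPM ball that this line of work studies can be written as below; the notation is a standard formulation assumed here, not copied from the paper.

```latex
% Worst-case risk over an IPM ball of radius \epsilon around the reference distribution P
\sup_{Q \,:\, d_{\mathcal{F}}(P,\,Q) \le \epsilon} \; \mathbb{E}_{Q}\!\left[\ell(Z)\right],
\qquad
d_{\mathcal{F}}(P, Q) = \sup_{f \in \mathcal{F}} \left| \, \mathbb{E}_{P}[f(Z)] - \mathbb{E}_{Q}[f(Z)] \, \right|.
```

The paper's main result relates this worst-case risk, for any choice of the function class $\mathcal{F}$, to the ordinary risk plus a regularization penalty determined by $\mathcal{F}$.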
2002.06476
Follow the Neurally-Perturbed Leader for Adversarial Training
Game-theoretic models of learning are a powerful set of models that optimize multi-objective architectures. Among these models are zero-sum architectures that have inspired adversarial learning frameworks. An important shortcoming of these zero-sum architectures is that gradient-based training leads to weak convergence and cyclic dynamics. We propose a novel follow-the-leader training algorithm for zero-sum architectures that guarantees convergence to a mixed Nash equilibrium without cyclic behaviors. It is a special type of follow-the-perturbed-leader algorithm where perturbations are the result of a neural mediating agent. We validate our theoretical results by applying this training algorithm to games with convex and non-convex loss as well as generative adversarial architectures. Moreover, we customize the implementation of this algorithm for adversarial imitation learning applications. At every step of the training, the mediator agent perturbs the observations with generated codes. As a result of these mediating codes, the proposed algorithm is also efficient for learning in environments with various factors of variation. We validate our assertion by using a procedurally generated game environment as well as synthetic data. A GitHub implementation is available.
http://arxiv.org/pdf/2002.06476v2
[ "Ari Azarafrooz" ]
2020-06-08T04:54:53Z
2020-02-16T00:09:02Z
2006.04353
Stable Reinforcement Learning with Unbounded State Space
We consider the problem of reinforcement learning (RL) with an unbounded state space, motivated by the classical problem of scheduling in a queueing network. Traditional policies, as well as error metrics, that are designed for finite, bounded, or compact state spaces require infinitely many samples to provide any meaningful performance guarantee (e.g., $\ell_\infty$ error) for an unbounded state space. That is, we need a new notion of performance metric. As the main contribution of this work, inspired by the literature on queueing systems and control theory, we propose stability as the notion of "goodness": the state dynamics under the policy should remain in a bounded region with high probability. As a proof of concept, we propose an RL policy using a Sparse-Sampling-based Monte Carlo Oracle and argue that it satisfies the stability property as long as the system dynamics under the optimal policy respect a Lyapunov function. The assumption of the existence of a Lyapunov function is not restrictive as it is equivalent to the positive recurrence or stability property of any Markov chain, i.e., if there is any policy that can stabilize the system then it must possess a Lyapunov function. Moreover, our policy does not utilize the knowledge of the specific Lyapunov function. To make our method sample efficient, we provide an improved, sample-efficient Sparse-Sampling-based Monte Carlo Oracle with Lipschitz value function that may be of interest in its own right. Furthermore, we design an adaptive version of the algorithm, based on carefully constructed statistical tests, which finds the correct tuning parameter automatically.
http://arxiv.org/pdf/2006.04353v1
[ "Devavrat Shah", "Qiaomin Xie", "Zhi Xu" ]
2020-06-08T05:00:25Z
2020-06-08T05:00:25Z
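For readers unfamiliar with the stability notion used in this entry, a standard Foster-Lyapunov drift condition of the kind the abstract alludes to reads as follows; this is the textbook form and may differ in detail from the paper's exact assumption.

```latex
\mathbb{E}\!\left[ V(s_{t+1}) - V(s_t) \,\middle|\, s_t = s \right] \;\le\; -\delta
\quad \text{whenever } V(s) > B,
```

for some $\delta > 0$ and threshold $B$, so that $V$ drifts downward outside a bounded region and the state returns to that region with high probability.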
2006.04356
Associate-3Ddet: Perceptual-to-Conceptual Association for 3D Point Cloud Object Detection
Object detection from 3D point clouds remains a challenging task, though recent studies have pushed the envelope with deep learning techniques. Owing to severe spatial occlusion and the inherent variance of point density with distance to the sensor, the appearance of the same object varies considerably in point cloud data. Designing a feature representation robust to such appearance changes is hence the key issue in a 3D object detection method. In this paper, we propose a domain-adaptation-like approach to enhance the robustness of the feature representation. More specifically, we bridge the gap between the perceptual domain, where the feature comes from a real scene, and the conceptual domain, where the feature is extracted from an augmented scene consisting of non-occluded point clouds rich in detailed information. This domain adaptation approach mimics the functionality of the human brain when performing object perception. Extensive experiments demonstrate that our simple yet effective approach fundamentally boosts the performance of 3D point cloud object detection and achieves state-of-the-art results.
http://arxiv.org/pdf/2006.04356v1
[ "Liang Du", "Xiaoqing Ye", "Xiao Tan", "Jianfeng Feng", "Zhenbo Xu", "Errui Ding", "Shilei Wen" ]
2020-06-08T05:15:06Z
2020-06-08T05:15:06Z
2006.04363
Hallucinating Value: A Pitfall of Dyna-style Planning with Imperfect Environment Models
Dyna-style reinforcement learning (RL) agents improve sample efficiency over model-free RL agents by updating the value function with simulated experience generated by an environment model. However, it is often difficult to learn accurate models of environment dynamics, and even small errors may result in failure of Dyna agents. In this paper, we investigate one type of model error: hallucinated states. These are states generated by the model, but that are not real states of the environment. We present the Hallucinated Value Hypothesis (HVH): updating values of real states towards values of hallucinated states results in misleading state-action values which adversely affect the control policy. We discuss and evaluate four Dyna variants; three which update real states toward simulated -- and therefore potentially hallucinated -- states and one which does not. The experimental results provide evidence for the HVH thus suggesting a fruitful direction toward developing Dyna algorithms robust to model error.
http://arxiv.org/pdf/2006.04363v1
[ "Taher Jafferjee", "Ehsan Imani", "Erin Talvitie", "Martha White", "Micheal Bowling" ]
2020-06-08T05:30:09Z
2020-06-08T05:30:09Z
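For context, a generic tabular Dyna-Q loop is sketched below: real transitions update both the value estimates and a one-step model, and additional planning updates push real-state values toward model-generated (and therefore potentially hallucinated) successor states. The toy chain environment and all hyperparameters are illustrative assumptions; the paper's four Dyna variants are not reproduced here.

```python
import random
from collections import defaultdict

def dyna_q(env_step, n_actions, episodes=50, planning_steps=10,
           alpha=0.1, gamma=0.95, eps=0.1):
    """Generic tabular Dyna-Q: real updates plus planning updates from a learned model."""
    Q = defaultdict(float)
    model = {}  # (s, a) -> (r, s', done): a deterministic one-step model of the environment

    def greedy(state):
        return max(range(n_actions), key=lambda a: Q[(state, a)])

    def backup(state, action, reward, next_state, done):
        target = reward if done else reward + gamma * max(Q[(next_state, a)] for a in range(n_actions))
        Q[(state, action)] += alpha * (target - Q[(state, action)])

    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = random.randrange(n_actions) if random.random() < eps else greedy(s)
            r, s_next, done = env_step(s, a)
            backup(s, a, r, s_next, done)           # update from real experience
            model[(s, a)] = (r, s_next, done)
            for _ in range(planning_steps):         # value updates toward model-generated states
                (ps, pa), (pr, ps_next, pdone) = random.choice(list(model.items()))
                backup(ps, pa, pr, ps_next, pdone)
            s = s_next
    return Q

def chain_step(s, a):
    """Toy 5-state chain: action 1 moves right, action 0 moves left, reward at the right end."""
    s_next = min(s + 1, 4) if a == 1 else max(s - 1, 0)
    return (1.0 if s_next == 4 else 0.0), s_next, s_next == 4

Q = dyna_q(chain_step, n_actions=2)
print(max(Q[(0, a)] for a in range(2)))  # estimated value of the start state
```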
2004.13846
Character-level Japanese Text Generation with Attention Mechanism for Chest Radiography Diagnosis
Chest radiography is a general method for diagnosing a patient's condition and identifying important information; therefore, radiography is used extensively in routine medical practice in various situations, such as emergency medical care and medical checkups. However, a high level of expertise is required to interpret chest radiographs, and medical specialists spend considerable time diagnosing such huge numbers of radiographs. To address these problems, methods for generating findings have been proposed. However, the study of generating chest radiograph findings has primarily focused on the English language, and to the best of our knowledge, no studies have examined Japanese data on this subject. There are two challenges involved in generating findings in the Japanese language. The first is that word splitting is difficult because the boundaries of Japanese words are not clear. The second is that there are numerous orthographic variants. To deal with these two challenges, we propose an end-to-end model that generates Japanese findings at the character level from chest radiographs. In addition, we introduce an attention mechanism to improve not only the accuracy but also the interpretability of the results. We evaluated the proposed method using a public dataset with Japanese findings, and its effectiveness was confirmed using the Bilingual Evaluation Understudy (BLEU) score. The generated findings also confirmed that the proposed method is able to handle orthographic variants. Furthermore, we confirmed via visual inspection that the attention mechanism captures the features and positional information of radiographs.
http://arxiv.org/pdf/2004.13846v2
[ "Kenya Sakka", "Kotaro Nakayama", "Nisei Kimura", "Taiki Inoue", "Yusuke Iwasawa", "Ryohei Yamaguchi", "Yosimasa Kawazoe", "Kazuhiko Ohe", "Yutaka Matsuo" ]
2020-06-08T05:37:51Z
2020-04-06T18:19:27Z
1906.08898
Sparse Spectrum Gaussian Process for Bayesian Optimization
We propose a novel sparse spectrum approximation of Gaussian process (GP) tailored for Bayesian optimization. Whilst the current sparse spectrum methods provide desired approximations for regression problems, it is observed that this particular form of sparse approximation generates an overconfident GP, i.e. it produces less epistemic uncertainty than the original GP. Since the balance between the predictive mean and the predictive variance is the key determinant of the success of Bayesian optimization, the current sparse spectrum methods are less suitable for it. We derive a new regularized marginal likelihood for finding the optimal frequencies to fix this over-confidence issue, particularly for Bayesian optimization. The regularizer trades off the accuracy in the model fitting with a targeted increase in the predictive variance of the resultant GP. Specifically, we use the entropy of the global maximum distribution from the posterior GP as the regularizer that needs to be maximized. Since this distribution cannot be calculated analytically, we first propose a Thompson sampling based approach and then a more efficient sequential Monte Carlo based approach to estimate it. Later, we also show that the Expected Improvement acquisition function can be used as a proxy for the maximum distribution, thus making the whole process even more efficient. Experiments show considerable improvement to the Bayesian optimization convergence rate over the vanilla sparse spectrum method and over a full GP when its covariance matrix is ill-conditioned due to the presence of a large number of observations.
http://arxiv.org/pdf/1906.08898v2
[ "Ang Yang", "Cheng Li", "Santu Rana", "Sunil Gupta", "Svetha Venkatesh" ]
2020-06-08T06:51:09Z
2019-06-21T00:27:09Z
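For background, a plain sparse spectrum (random Fourier feature) GP regression step, which is the approximation this paper regularizes, is sketched below. The regularized marginal likelihood for choosing the frequencies is not shown; the kernel lengthscale, number of features, and toy data are assumptions for illustration.

```python
import numpy as np

def rff_features(X, freqs, phases):
    """Random Fourier features: phi(x) = sqrt(2/m) * cos(x W^T + b)."""
    m = freqs.shape[0]
    return np.sqrt(2.0 / m) * np.cos(X @ freqs.T + phases)

def rff_gp_posterior(X_train, y_train, X_test, freqs, phases, noise=1e-2):
    """Bayesian linear regression in feature space, i.e. a sparse spectrum GP."""
    Phi = rff_features(X_train, freqs, phases)          # (n, m)
    A = Phi.T @ Phi + noise * np.eye(Phi.shape[1])      # (m, m)
    mean_w = np.linalg.solve(A, Phi.T @ y_train)
    Phi_s = rff_features(X_test, freqs, phases)
    mean = Phi_s @ mean_w
    var = noise * np.einsum('ij,jk,ik->i', Phi_s, np.linalg.inv(A), Phi_s) + noise
    return mean, var

# Toy 1-D example; lengthscale and number of frequencies are illustrative assumptions
rng = np.random.default_rng(0)
lengthscale, m = 0.5, 100
freqs = rng.standard_normal((m, 1)) / lengthscale      # spectral samples of an RBF kernel
phases = rng.uniform(0, 2 * np.pi, m)
X = rng.uniform(-3, 3, (20, 1))
y = np.sin(X[:, 0])
mu, var = rff_gp_posterior(X, y, np.linspace(-3, 3, 5)[:, None], freqs, phases)
print(mu, var)
```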
2006.04380
Learning the Compositional Visual Coherence for Complementary Recommendations
Complementary recommendations, which aim at providing users with product suggestions that are supplementary to and compatible with their obtained items, have become a hot topic in both academia and industry in recent years. Existing work mainly focused on modeling the co-purchased relations between two items, but the compositional associations of item collections are largely unexplored. Actually, when a user chooses complementary items for the purchased products, it is intuitive that she will consider the visual semantic coherence (such as color collocations and texture compatibilities) in addition to global impressions. Towards this end, in this paper, we propose a novel Content Attentive Neural Network (CANN) to model the comprehensive compositional coherence on both global contents and semantic contents. Specifically, we first propose a \textit{Global Coherence Learning} (GCL) module based on multi-head attention to model the global compositional coherence. Then, we generate semantic-focal representations from different semantic regions and design a \textit{Focal Coherence Learning} (FCL) module to learn the focal compositional coherence from the different semantic-focal representations. Finally, we optimize CANN with a novel compositional optimization strategy. Extensive experiments on large-scale real-world data clearly demonstrate the effectiveness of CANN compared with several state-of-the-art methods.
http://arxiv.org/pdf/2006.04380v1
[ "Zhi Li", "Bo Wu", "Qi Liu", "Likang Wu", "Hongke Zhao", "Tao Mei" ]
2020-06-08T06:57:18Z
2020-06-08T06:57:18Z
2006.04381
Balance-Subsampled Stable Prediction
In machine learning, it is commonly assumed that training and test data share the same population distribution. However, this assumption is often violated in practice because the sample selection bias may induce the distribution shift from training data to test data. Such a model-agnostic distribution shift usually leads to prediction instability across unknown test data. In this paper, we propose a novel balance-subsampled stable prediction (BSSP) algorithm based on the theory of fractional factorial design. It isolates the clear effect of each predictor from the confounding variables. A design-theoretic analysis shows that the proposed method can reduce the confounding effects among predictors induced by the distribution shift, hence improve both the accuracy of parameter estimation and prediction stability. Numerical experiments on both synthetic and real-world data sets demonstrate that our BSSP algorithm significantly outperforms the baseline methods for stable prediction across unknown test data.
http://arxiv.org/pdf/2006.04381v1
[ "Kun Kuang", "Hengtao Zhang", "Fei Wu", "Yueting Zhuang", "Aijun Zhang" ]
2020-06-08T07:01:38Z
2020-06-08T07:01:38Z
2006.04386
Understanding Graph Neural Networks from Graph Signal Denoising Perspectives
Graph neural networks (GNNs) have attracted much attention because of their excellent performance on tasks such as node classification. However, there is an inadequate understanding of how and why GNNs work, especially for node representation learning. This paper aims to provide a theoretical framework to understand GNNs, specifically spectral graph convolutional networks and graph attention networks, from graph signal denoising perspectives. Our framework shows that GNNs are implicitly solving graph signal denoising problems: spectral graph convolutions work as denoising node features, while graph attentions work as denoising edge weights. We also show that a linear self-attention mechanism is able to compete with the state-of-the-art graph attention methods. Our theoretical results further lead to two new models, GSDN-F and GSDN-EF, which work effectively for graphs with noisy node features and/or noisy edges. We validate our theoretical findings, and the effectiveness of our new models, with experiments on benchmark datasets. The source code is available at \url{https://github.com/fuguoji/GSDN}.
http://arxiv.org/pdf/2006.04386v1
[ "Guoji Fu", "Yifan Hou", "Jian Zhang", "Kaili Ma", "Barakeel Fanseu Kamhoua", "James Cheng" ]
2020-06-08T07:10:39Z
2020-06-08T07:10:39Z
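As a point of reference, the canonical graph signal denoising problem that such frameworks connect GNN layers to is shown below; this is the standard formulation, and the paper's exact objectives for GSDN-F/GSDN-EF may differ.

```latex
\min_{F} \; \left\| F - X \right\|_F^2 \;+\; c \, \operatorname{tr}\!\left( F^{\top} L F \right),
```

where $X$ holds the noisy node features, $L$ is the graph Laplacian, and the trace term penalizes signals that change sharply across edges; a gradient step on this objective resembles feature propagation over the graph.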
2002.07530
Improved Optimistic Algorithms for Logistic Bandits
The generalized linear bandit framework has attracted a lot of attention in recent years by extending the well-understood linear setting and allowing richer reward structures to be modeled. It notably covers the logistic model, widely used when rewards are binary. For logistic bandits, the frequentist regret guarantees of existing algorithms are $\tilde{\mathcal{O}}(\kappa \sqrt{T})$, where $\kappa$ is a problem-dependent constant. Unfortunately, $\kappa$ can be arbitrarily large as it scales exponentially with the size of the decision set. This may lead to significantly loose regret bounds and poor empirical performance. In this work, we study the logistic bandit with a focus on the prohibitive dependencies introduced by $\kappa$. We propose a new optimistic algorithm based on a finer examination of the non-linearities of the reward function. We show that it enjoys a $\tilde{\mathcal{O}}(\sqrt{T})$ regret with no dependency on $\kappa$, except for a second-order term. Our analysis is based on a new tail inequality for self-normalized martingales, of independent interest.
http://arxiv.org/pdf/2002.07530v2
[ "Louis Faury", "Marc Abeille", "Clément Calauzènes", "Olivier Fercoq" ]
2020-06-08T07:36:22Z
2020-02-18T12:52:32Z
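For readers unfamiliar with the constant discussed above, a common definition of $\kappa$ in the logistic bandit literature is given below; the notation may differ slightly from the paper's.

```latex
\kappa \;=\; \sup_{x \in \mathcal{X},\, \theta \in \Theta} \frac{1}{\dot{\mu}\!\left(x^{\top}\theta\right)},
\qquad
\mu(z) \;=\; \frac{1}{1 + e^{-z}},
```

Since $\dot{\mu}$ decays exponentially away from zero, $\kappa$ can grow exponentially with the radius of the decision set, which is the dependency moved out of the leading regret term here.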
2006.04403
Global Robustness Verification Networks
The wide deployment of deep neural networks, though achieving great success in many domains, has severe safety and reliability concerns. Existing adversarial attack generation and automatic verification techniques cannot formally verify whether a network is globally robust, i.e., whether adversarial examples are absent from the entire input space. To address this problem, we develop a global robustness verification framework with three components: 1) a novel rule-based "back-propagation" that finds which input region is responsible for the class assignment by logic reasoning; 2) a new network architecture, Sliding Door Network (SDN), enabling feasible rule-based "back-propagation"; 3) a region-based global robustness verification (RGRV) approach. Moreover, we demonstrate the effectiveness of our approach on both synthetic and real datasets.
http://arxiv.org/pdf/2006.04403v1
[ "Weidi Sun", "Yuteng Lu", "Xiyue Zhang", "Zhanxing Zhu", "Meng Sun" ]
2020-06-08T08:09:20Z
2020-06-08T08:09:20Z
2001.11314
ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation
Current pre-training works in natural language generation pay little attention to the problem of exposure bias on downstream tasks. To address this issue, we propose an enhanced multi-flow sequence to sequence pre-training and fine-tuning framework named ERNIE-GEN, which bridges the discrepancy between training and inference with an infilling generation mechanism and a noise-aware generation method. To make generation closer to human writing patterns, this framework introduces a span-by-span generation flow that trains the model to predict semantically-complete spans consecutively rather than predicting word by word. Unlike existing pre-training methods, ERNIE-GEN incorporates multi-granularity target sampling to construct pre-training data, which enhances the correlation between encoder and decoder. Experimental results demonstrate that ERNIE-GEN achieves state-of-the-art results with a much smaller amount of pre-training data and parameters on a range of language generation tasks, including abstractive summarization (Gigaword and CNN/DailyMail), question generation (SQuAD), dialogue generation (Persona-Chat) and generative question answering (CoQA).
http://arxiv.org/pdf/2001.11314v3
[ "Dongling Xiao", "Han Zhang", "Yukun Li", "Yu Sun", "Hao Tian", "Hua Wu", "Haifeng Wang" ]
2020-06-08T08:09:33Z
2020-01-26T02:54:49Z
2006.04406
Passive Batch Injection Training Technique: Boosting Network Performance by Injecting Mini-Batches from a different Data Distribution
This work presents a novel training technique for deep neural networks that makes use of additional data from a distribution that is different from that of the original input data. This technique aims to reduce overfitting and improve the generalization performance of the network. Our proposed technique, namely Passive Batch Injection Training Technique (PBITT), even reduces the level of overfitting in networks that already use the standard techniques for reducing overfitting such as $L_2$ regularization and batch normalization, resulting in significant accuracy improvements. Passive Batch Injection Training Technique (PBITT) introduces a few passive mini-batches into the training process that contain data from a distribution that is different from the input data distribution. This technique does not increase the number of parameters in the final model and also does not increase the inference (test) time but still improves the performance of deep CNNs. To the best of our knowledge, this is the first work that makes use of different data distribution to aid the training of convolutional neural networks (CNNs). We thoroughly evaluate the proposed approach on standard architectures: VGG, ResNet, and WideResNet, and on several popular datasets: CIFAR-10, CIFAR-100, SVHN, and ImageNet. We observe consistent accuracy improvement by using the proposed technique. We also show experimentally that the model trained by our technique generalizes well to other tasks such as object detection on the MS-COCO dataset using Faster R-CNN. We present extensive ablations to validate the proposed approach. Our approach improves the accuracy of VGG-16 by a significant margin of 2.1% over the CIFAR-100 dataset.
http://arxiv.org/pdf/2006.04406v1
[ "Pravendra Singh", "Pratik Mazumder", "Vinay P. Namboodiri" ]
2020-06-08T08:17:32Z
2020-06-08T08:17:32Z
2006.04410
Propositionalization and Embeddings: Two Sides of the Same Coin
Data preprocessing is an important component of machine learning pipelines, which requires ample time and resources. An integral part of preprocessing is data transformation into the format required by a given learning algorithm. This paper outlines some of the modern data processing techniques used in relational learning that enable data fusion from different input data types and formats into a single table data representation, focusing on the propositionalization and embedding data transformation approaches. While both approaches aim at transforming data into tabular data format, they use different terminology and task definitions, are perceived to address different goals, and are used in different contexts. This paper contributes a unifying framework that allows for improved understanding of these two data transformation techniques by presenting their unified definitions, and by explaining the similarities and differences between the two approaches as variants of a unified complex data transformation task. In addition to the unifying framework, the novelty of this paper is a unifying methodology combining propositionalization and embeddings, which benefits from the advantages of both in solving complex data transformation and learning tasks. We present two efficient implementations of the unifying methodology: an instance-based PropDRM approach, and a feature-based PropStar approach to data transformation and learning, together with their empirical evaluation on several relational problems. The results show that the new algorithms can outperform existing relational learners and can solve much larger problems.
http://arxiv.org/abs/2006.04410v1
[ "Nada Lavrač", "Blaž Škrlj", "Marko Robnik-Šikonja" ]
2020-06-08T08:33:21Z
2020-06-08T08:33:21Z
1909.09436
CodeSearchNet Challenge: Evaluating the State of Semantic Code Search
Semantic code search is the task of retrieving relevant code given a natural language query. While related to other information retrieval tasks, it requires bridging the gap between the language used in code (often abbreviated and highly technical) and natural language more suitable to describe vague concepts and ideas. To enable evaluation of progress on code search, we are releasing the CodeSearchNet Corpus and are presenting the CodeSearchNet Challenge, which consists of 99 natural language queries with about 4k expert relevance annotations of likely results from CodeSearchNet Corpus. The corpus contains about 6 million functions from open-source code spanning six programming languages (Go, Java, JavaScript, PHP, Python, and Ruby). The CodeSearchNet Corpus also contains automatically generated query-like natural language for 2 million functions, obtained from mechanically scraping and preprocessing associated function documentation. In this article, we describe the methodology used to obtain the corpus and expert labels, as well as a number of simple baseline solutions for the task. We hope that CodeSearchNet Challenge encourages researchers and practitioners to study this interesting task further and will host a competition and leaderboard to track the progress on the challenge. We are also keen on extending CodeSearchNet Challenge to more queries and programming languages in the future.
http://arxiv.org/pdf/1909.09436v3
[ "Hamel Husain", "Ho-Hsiang Wu", "Tiferet Gazit", "Miltiadis Allamanis", "Marc Brockschmidt" ]
2020-06-08T09:09:28Z
2019-09-20T11:52:45Z
2006.06535
Privacy Adversarial Network: Representation Learning for Mobile Data Privacy
The remarkable success of machine learning has fostered a growing number of cloud-based intelligent services for mobile users. Such a service requires a user to send data, e.g. image, voice, and video, to the provider, which presents a serious challenge to user privacy. To address this, prior works either obfuscate the data, e.g. add noise and remove identity information, or send representations extracted from the data, e.g. anonymized features. They struggle to balance service utility and data privacy because obfuscated data reduces utility and extracted representations may still reveal sensitive information. This work departs from prior works in methodology: we leverage adversarial learning to achieve a better balance between privacy and utility. We design a \textit{representation encoder} that generates feature representations optimized against the privacy disclosure risk of sensitive information (a measure of privacy) assessed by the \textit{privacy adversaries}, and concurrently optimized for the task inference accuracy (a measure of utility) assessed by the \textit{utility discriminator}. The result is the Privacy Adversarial Network (PAN), a novel deep model with a new training algorithm that can automatically learn representations from the raw data. Intuitively, PAN adversarially forces the extracted representations to convey only the information required by the target task. Surprisingly, this constitutes an implicit regularization that actually improves task accuracy. As a result, PAN achieves better utility and better privacy at the same time! We report extensive experiments on six popular datasets and demonstrate the superiority of PAN compared with alternative methods reported in prior work.
http://arxiv.org/abs/2006.06535v1
[ "Sicong Liu", "Junzhao Du", "Anshumali Shrivastava", "Lin Zhong" ]
2020-06-08T09:42:04Z
2020-06-08T09:42:04Z
2006.04432
AdaDeep: A Usage-Driven, Automated Deep Model Compression Framework for Enabling Ubiquitous Intelligent Mobiles
Recent breakthroughs in Deep Neural Networks (DNNs) have fueled a tremendously growing demand for bringing DNN-powered intelligence into mobile platforms. While the potential of deploying DNNs on resource-constrained platforms has been demonstrated by DNN compression techniques, the current practice suffers from two limitations: 1) only stand-alone compression schemes are investigated, even though each compression technique only suits certain types of DNN layers; and 2) most compression techniques are optimized for DNNs' inference accuracy, without explicitly considering other application-driven system performance (e.g., latency and energy cost) and the varying resource availability across platforms (e.g., storage and processing capability). To this end, we propose AdaDeep, a usage-driven, automated DNN compression framework for systematically exploring the desired trade-off between performance and resource constraints, from a holistic system level. Specifically, in a layer-wise manner, AdaDeep automatically selects the most suitable combination of compression techniques and the corresponding compression hyperparameters for a given DNN. Thorough evaluations on six datasets and across twelve devices demonstrate that AdaDeep can achieve up to $18.6\times$ latency reduction, $9.8\times$ energy-efficiency improvement, and $37.3\times$ storage reduction in DNNs while incurring negligible accuracy loss. Furthermore, AdaDeep also uncovers multiple novel combinations of compression techniques.
http://arxiv.org/pdf/2006.04432v1
[ "Sicong Liu", "Junzhao Du", "Kaiming Nan", "ZimuZhou", "Atlas Wang", "Yingyan Lin" ]
2020-06-08T09:42:12Z
2020-06-08T09:42:12Z
2006.04435
CAST: A Correlation-based Adaptive Spectral Clustering Algorithm on Multi-scale Data
We study the problem of applying spectral clustering to cluster multi-scale data, which is data whose clusters are of various sizes and densities. Traditional spectral clustering techniques discover clusters by processing a similarity matrix that reflects the proximity of objects. For multi-scale data, distance-based similarity is not effective because objects of a sparse cluster could be far apart while those of a dense cluster have to be sufficiently close. Following [16], we solve the problem of spectral clustering on multi-scale data by integrating the concept of objects' "reachability similarity" with a given distance-based similarity to derive an objects' coefficient matrix. We propose the algorithm CAST that applies trace Lasso to regularize the coefficient matrix. We prove that the resulting coefficient matrix has the "grouping effect" and that it exhibits "sparsity". We show that these two characteristics imply very effective spectral clustering. We evaluate CAST and 10 other clustering methods on a wide range of datasets w.r.t. various measures. Experimental results show that CAST provides excellent performance and is highly robust across test cases of multi-scale data.
http://arxiv.org/pdf/2006.04435v1
[ "Xiang Li", "Ben Kao", "Caihua Shan", "Dawei Yin", "Martin Ester" ]
2020-06-08T09:46:35Z
2020-06-08T09:46:35Z
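For background, plain spectral clustering on a given similarity or coefficient matrix (without the trace-Lasso regularization that CAST adds) proceeds roughly as in the sketch below; the toy matrix is an illustrative assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(W, k):
    """Cluster objects from a similarity matrix W via the normalized graph Laplacian."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L_sym = np.eye(W.shape[0]) - D_inv_sqrt @ W @ D_inv_sqrt   # normalized Laplacian
    eigvals, eigvecs = np.linalg.eigh(L_sym)
    U = eigvecs[:, :k]                                          # k smallest eigenvectors
    U = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)

# Toy similarity matrix with two obvious blocks
W = np.array([[1.0, 0.9, 0.8, 0.1, 0.0],
              [0.9, 1.0, 0.85, 0.0, 0.1],
              [0.8, 0.85, 1.0, 0.05, 0.0],
              [0.1, 0.0, 0.05, 1.0, 0.9],
              [0.0, 0.1, 0.0, 0.9, 1.0]])
print(spectral_clustering(W, k=2))
```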
2006.09892
STAD: Spatio-Temporal Adjustment of Traffic-Oblivious Travel-Time Estimation
Travel time estimation is an important component in modern transportation applications. The state of the art techniques for travel time estimation use GPS traces to learn the weights of a road network, often modeled as a directed graph, then apply Dijkstra-like algorithms to find shortest paths. Travel time is then computed as the sum of edge weights on the returned path. In order to enable time-dependency, existing systems compute multiple weighted graphs corresponding to different time windows. These graphs are often optimized offline before they are deployed into production routing engines, causing a serious engineering overhead. In this paper, we present STAD, a system that adjusts - on the fly - travel time estimates for any trip request expressed in the form of origin, destination, and departure time. STAD uses machine learning and sparse trips data to learn the imperfections of any basic routing engine, before it turns it into a full-fledged time-dependent system capable of adjusting travel times to real traffic conditions in a city. STAD leverages the spatio-temporal properties of traffic by combining spatial features such as departing and destination geographic zones with temporal features such as departing time and day to significantly improve the travel time estimates of the basic routing engine. Experiments on real trip datasets from Doha, New York City, and Porto show a reduction in median absolute errors of 14% in the first two cities and 29% in the latter. We also show that STAD performs better than different commercial and research baselines in all three cities.
http://arxiv.org/pdf/2006.09892v1
[ "Sofiane Abbar", "Rade Stanojevic", "Mohamed Mokbel" ]
2020-06-08T09:47:55Z
2020-06-08T09:47:55Z
2006.04449
On Universalized Adversarial and Invariant Perturbations
Convolutional neural networks, or standard CNNs (StdCNNs), are translation-equivariant models that achieve translation invariance when trained on data augmented with sufficient translations. Recent work on models equivariant to a given group of transformations (e.g., rotations) has led to group-equivariant convolutional neural networks (GCNNs). GCNNs trained on data augmented with sufficient rotations achieve rotation invariance. Recent work (arXiv:2002.11318) studies a trade-off between invariance and robustness to adversarial attacks. In another related work (arXiv:2005.08632), given any model and any input-dependent attack that satisfies a certain spectral property, the authors propose a universalization technique called SVD-Universal to produce a universal adversarial perturbation by looking at very few test examples. In this paper, we study the effectiveness of SVD-Universal on GCNNs as they gain rotation invariance through a higher degree of training augmentation. We empirically observe that as GCNNs gain rotation invariance through training augmented with larger rotations, the fooling rate of SVD-Universal improves. To understand this phenomenon, we introduce universal invariant directions and study their relation to the universal adversarial direction produced by SVD-Universal.
http://arxiv.org/pdf/2006.04449v1
[ "Sandesh Kamath", "Amit Deshpande", "K V Subrahmanyam" ]
2020-06-08T10:08:20Z
2020-06-08T10:08:20Z
2006.04497
Learning under Invariable Bayesian Safety
A recent body of work addresses safety constraints in explore-and-exploit systems. Such constraints arise where, for example, exploration is carried out by individuals whose welfare should be balanced with overall welfare. In this paper, we adopt a model inspired by recent work on a bandit-like setting for recommendations. We contribute to this line of literature by introducing a safety constraint that should be respected in every round and determines that the expected value in each round is above a given threshold. Due to our modeling, the safe explore-and-exploit policy deserves careful planning, or otherwise, it will lead to sub-optimal welfare. We devise an asymptotically optimal algorithm for the setting and analyze its instance-dependent convergence rate.
http://arxiv.org/pdf/2006.04497v1
[ "Gal Bahar", "Omer Ben-Porat", "Kevin Leyton-Brown", "Moshe Tennenholtz" ]
2020-06-08T12:07:59Z
2020-06-08T12:07:59Z
1912.10558
Exploring Interpretability for Predictive Process Analytics
Modern predictive analytics underpinned by machine learning techniques has become a key enabler to the automation of data-driven decision making. In the context of business process management, predictive analytics has been applied to making predictions about the future state of an ongoing business process instance, for example, when will the process instance complete and what will be the outcome upon completion. Machine learning models can be trained on event log data recording historical process execution to build the underlying predictive models. Multiple techniques have been proposed so far which encode the information available in an event log and construct input features required to train a predictive model. While accuracy has been a dominant criterion in the choice of various techniques, they are often applied as a black-box in building predictive models. In this paper, we derive explanations using interpretable machine learning techniques to compare and contrast the suitability of multiple predictive models of high accuracy. The explanations allow us to gain an understanding of the underlying reasons for a prediction and highlight scenarios where accuracy alone may not be sufficient in assessing the suitability of techniques used to encode event log data to features used by a predictive model. Findings from this study motivate the need and importance to incorporate interpretability in predictive process analytics.
http://arxiv.org/pdf/1912.10558v3
[ "Renuka Sindhgatta", "Chun Ouyang", "Catarina Moreira" ]
2020-06-08T12:09:15Z
2019-12-22T23:09:34Z
2006.04504
Tricking Adversarial Attacks To Fail
Recent adversarial defense approaches have failed. Untargeted gradient-based attacks cause classifiers to choose any wrong class. Our novel white-box defense tricks untargeted attacks into becoming attacks targeted at designated target classes. From these target classes, we can derive the real classes. Our Target Training defense tricks the minimization at the core of untargeted, gradient-based adversarial attacks: minimize the sum of (1) perturbation and (2) classifier adversarial loss. Target Training changes the classifier minimally, and trains it with additional duplicated points (at 0 distance) labeled with designated classes. These differently-labeled duplicated samples minimize both terms (1) and (2) of the minimization, steering attack convergence to samples of designated classes, from which correct classification is derived. Importantly, Target Training eliminates the need to know the attack and the overhead of generating adversarial samples of attacks that minimize perturbations. We obtain an 86.2% accuracy for CW-L2 (confidence=0) in CIFAR10, exceeding even unsecured classifier accuracy on non-adversarial samples. Target Training presents a fundamental change in adversarial defense strategy.
http://arxiv.org/pdf/2006.04504v1
[ "Blerta Lindqvist" ]
2020-06-08T12:22:07Z
2020-06-08T12:22:07Z
2006.04513
Combining word embeddings and convolutional neural networks to detect duplicated questions
Detecting semantic similarities between sentences is still a challenge today due to the ambiguity of natural languages. In this work, we propose a simple approach to identifying semantically similar questions by combining the strengths of word embeddings and Convolutional Neural Networks (CNNs). In addition, we demonstrate how the cosine similarity metric can be used to effectively compare feature vectors. Our network is trained on the Quora dataset, which contains over 400k question pairs. We experiment with different embedding approaches such as Word2Vec, Fasttext, and Doc2Vec and investigate the effects these approaches have on model performance. Our model achieves competitive results on the Quora dataset and complements the well-established evidence that CNNs can be utilized for paraphrase detection tasks.
http://arxiv.org/pdf/2006.04513v1
[ "Yoan Dimitrov" ]
2020-06-08T12:30:25Z
2020-06-08T12:30:25Z
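To illustrate the cosine-similarity comparison mentioned in this entry, the sketch below compares two questions via averaged word vectors. The random toy embeddings stand in for Word2Vec/Fasttext vectors, and this averaging is a simplification: the paper compares CNN-derived feature vectors rather than simple averages.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def sentence_vector(tokens, embeddings, dim=50):
    """Average the word vectors of the tokens that have an embedding."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

# Toy embedding table (in practice these would come from Word2Vec/Fasttext/Doc2Vec)
rng = np.random.default_rng(0)
vocab = ["how", "do", "i", "learn", "python", "start", "learning"]
embeddings = {w: rng.standard_normal(50) for w in vocab}

q1 = sentence_vector("how do i learn python".split(), embeddings)
q2 = sentence_vector("how do i start learning python".split(), embeddings)
print(cosine_similarity(q1, q2))  # a threshold on this score flags likely duplicates
```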
1905.10768
Precision-Recall Curves Using Information Divergence Frontiers
Despite the tremendous progress in the estimation of generative models, the development of tools for diagnosing their failures and assessing their performance has advanced at a much slower pace. Recent developments have investigated metrics that quantify which parts of the true distribution are modeled well, and, on the contrary, what the model fails to capture, akin to precision and recall in information retrieval. In this paper, we present a general evaluation framework for generative models that measures the trade-off between precision and recall using Rényi divergences. Our framework provides a novel perspective on existing techniques and extends them to more general domains. As a key advantage, this formulation encompasses both continuous and discrete models and allows for the design of efficient algorithms that do not have to quantize the data. We further analyze the biases of the approximations used in practice.
http://arxiv.org/pdf/1905.10768v2
[ "Josip Djolonga", "Mario Lucic", "Marco Cuturi", "Olivier Bachem", "Olivier Bousquet", "Sylvain Gelly" ]
2020-06-08T12:54:32Z
2019-05-26T09:27:44Z
2006.04548
A Variational View on Bootstrap Ensembles as Bayesian Inference
In this paper, we employ variational arguments to establish a connection between ensemble methods for Neural Networks and Bayesian inference. We consider an ensemble-based scheme where each model/particle corresponds to a perturbation of the data by means of parametric bootstrap and a perturbation of the prior. We derive conditions under which any optimization step of the particles reduces the divergence of the associated distribution to the posterior over model parameters. Such conditions do not require any particular form for the approximation, and they are purely geometrical, giving insights into the behavior of the ensemble on a number of interesting models such as Neural Networks with ReLU activations. Experiments confirm that ensemble methods can be a valid alternative to approximate Bayesian inference; the theoretical developments in the paper seek to explain this behavior.
http://arxiv.org/pdf/2006.04548v1
[ "Dimitrios Milios", "Pietro Michiardi", "Maurizio Filippone" ]
2020-06-08T13:01:37Z
2020-06-08T13:01:37Z
2006.04595
Softwarization, Virtualization, & Machine Learning For Intelligent & Effective V2X Communications
The concept of the fifth generation (5G) mobile network system has emerged in recent years as telecommunication operators and service providers look to upgrade their infrastructure and delivery modes to meet the growing demand. Concepts such as softwarization, virtualization, and machine learning will be key components as innovative and flexible enablers of such networks. In particular, paradigms such as software-defined networks, software-defined perimeter, cloud & edge computing, and network function virtualization will play a major role in addressing several 5G network challenges, especially in terms of flexibility, programmability, scalability, and security. In this work, the role and potential of these paradigms in the context of V2X communication are discussed. To do so, the paper starts off by providing an overview and background of V2X communications. Then, the paper discusses in more detail the various challenges facing V2X communications and some of the previous work done to tackle them. Furthermore, the paper describes how softwarization, virtualization, and machine learning can be adapted to tackle the challenges of such networks.
http://arxiv.org/abs/2006.04595v1
[ "Abdallah Moubayed", "Abdallah Shami" ]
2020-06-08T13:43:43Z
2020-06-08T13:43:43Z
2003.03064
Transfer Learning for Information Extraction with Limited Data
This paper presents a practical approach to fine-grained information extraction. From the authors' extensive experience in practically applying information extraction to business process automation, two fundamental technical challenges emerge: (i) the availability of labeled data is usually limited and (ii) highly detailed classification is required. The main idea of our proposal is to leverage the concept of transfer learning, which is to reuse the pre-trained model of deep neural networks, in combination with common statistical classifiers to determine the class of each extracted term. To do that, we first exploit BERT to deal with the limitation of training data in real scenarios, then stack BERT with Convolutional Neural Networks to learn hidden representations for classification. To validate our approach, we applied our model to an actual case of document processing, namely the processing of competitive bids for government projects in Japan. We used 100 documents for training and testing and confirmed that the model is able to extract fine-grained named entities with a detailed level of preciseness specialized to the targeted business process, such as the department name of application receivers.
http://arxiv.org/pdf/2003.03064v2
[ "Minh-Tien Nguyen", "Viet-Anh Phan", "Le Thai Linh", "Nguyen Hong Son", "Le Tien Dung", "Miku Hirano", "Hajime Hotta" ]
2020-06-08T13:56:57Z
2020-03-06T08:08:20Z
2006.09272
Ensemble-based Feature Selection and Classification Model for DNS Typo-squatting Detection
The Domain Name System (DNS) plays an important role in the current IP-based Internet architecture because it performs the domain name to IP address resolution. However, the DNS protocol has several security vulnerabilities due to the lack of data integrity and origin authentication within it. This paper focuses on one particular security vulnerability, namely typo-squatting. Typo-squatting refers to the registration of a domain name that is extremely similar to that of an existing popular brand with the goal of redirecting users to malicious/suspicious websites. The danger of typo-squatting is that it can lead to information threats and corporate secret leakage, and can facilitate fraud. This paper builds on our previous work in [1], which only proposed a majority-voting-based classifier, by proposing an ensemble-based feature selection and bagging classification model to detect the DNS typo-squatting attack. Experimental results show that the proposed framework achieves high accuracy and precision in identifying the malicious/suspicious typo-squatting domains (a loss of at most 1.5% in accuracy and 5% in precision compared to the model that used the complete feature set) while having lower computational complexity due to the smaller feature set (a reduction of more than 50% in feature set size).
http://arxiv.org/pdf/2006.09272v1
[ "Abdallah Moubayed", "Emad Aqeeli", "Abdallah Shami" ]
2020-06-08T14:07:19Z
2020-06-08T14:07:19Z
2006.04611
A Comprehensive Survey on Aspect Based Sentiment Analysis
Aspect Based Sentiment Analysis (ABSA) is the sub-field of Natural Language Processing that deals with essentially splitting the data into aspects and finally extracting the sentiment information. ABSA is known to provide more information about the context than general sentiment analysis. In this study, our aim is to explore the various methodologies practiced while performing ABSA and to provide a comparative study. This survey paper discusses various solutions in depth and gives a comparison between them. It is conveniently divided into sections to give a holistic view of the process.
http://arxiv.org/pdf/2006.04611v1
[ "Kaustubh Yadav" ]
2020-06-08T14:07:58Z
2020-06-08T14:07:58Z
1905.08539
Universal Approximation with Deep Narrow Networks
The classical Universal Approximation Theorem holds for neural networks of arbitrary width and bounded depth. Here we consider the natural `dual' scenario for networks of bounded width and arbitrary depth. Precisely, let $n$ be the number of input neurons, $m$ be the number of output neurons, and let $\rho$ be any nonaffine continuous function with a continuous nonzero derivative at some point. Then we show that the class of neural networks of arbitrary depth, width $n + m + 2$, and activation function $\rho$ is dense in $C(K; \mathbb{R}^m)$ for $K \subseteq \mathbb{R}^n$ with $K$ compact. This covers every activation function possible to use in practice and also includes polynomial activation functions, unlike the classical version of the theorem, providing a qualitative difference between deep narrow networks and shallow wide networks. We then consider several extensions of this result. In particular, we consider nowhere differentiable activation functions, density in noncompact domains with respect to the $L^p$-norm, and how the width may be reduced to just $n + m + 1$ for `most' activation functions.
http://arxiv.org/pdf/1905.08539v2
[ "Patrick Kidger", "Terry Lyons" ]
2020-06-08T14:08:06Z
2019-05-21T10:47:55Z
2001.00526
Lightweight Residual Densely Connected Convolutional Neural Network
Extremely efficient convolutional neural network architectures are one of the most important requirements for limited-resource devices (such as embedded and mobile devices), whose computing power and memory size are two important constraints. Recently, some architectures have been proposed to overcome these limitations by considering specific hardware-software equipment. In this paper, lightweight residual densely connected blocks are proposed to guarantee the deep supervision, efficient gradient flow, and feature reuse abilities of the convolutional neural network. The proposed method decreases the cost of training and inference without using any special hardware-software equipment, by simply reducing the number of parameters and computational operations while achieving feasible accuracy. Extensive experimental results demonstrate that the proposed architecture is more efficient than AlexNet and VGGNet in terms of model size, required parameters, and even accuracy. The proposed model has been evaluated on ImageNet, MNIST, Fashion MNIST, SVHN, CIFAR-10, and CIFAR-100. It achieves state-of-the-art results on the Fashion MNIST dataset and reasonable results on the others. The obtained results show the superiority of the proposed method over efficient models such as SqueezeNet, and it is also comparable with state-of-the-art efficient models such as CondenseNet and ShuffleNet.
http://arxiv.org/abs/2001.00526v2
[ "Fahimeh Fooladgar", "Shohreh Kasaei" ]
2020-06-08T14:18:58Z
2020-01-02T17:15:32Z
2006.14054
Validating psychometric survey responses
We present an approach to classifying user validity in survey responses using machine learning techniques. The approach is based on collecting user mouse activity on web surveys and quickly predicting the validity of a survey as a whole, without analyzing specific answers. A rule-based approach as well as LSTM and HMM models are considered. The approach might be used in web-survey applications to detect suspicious user behaviour and to request proper answers from such users instead of recording false data.
http://arxiv.org/pdf/2006.14054v1
[ "Alberto Mastrotto", "Anderson Nelson", "Dev Sharma", "Ergeta Muca", "Kristina Liapchin", "Luis Losada", "Mayur Bansal", "Roman S. Samarev" ]
2020-06-08T14:33:10Z
2020-06-08T14:33:10Z
2006.04637
Graph Representation Learning Network via Adaptive Sampling
Graph Attention Network (GAT) and GraphSAGE are neural network architectures that operate on graph-structured data and have been widely studied for link prediction and node classification. One challenge raised by GraphSAGE is how to smartly combine neighbour features based on graph structure. GAT handles this problem through attention; however, the challenge with GAT is its scalability over large and dense graphs. In this work, we propose a new architecture to address these issues that is more efficient and is capable of incorporating different edge type information. It generates node representations by attending to neighbours sampled from weighted multi-step transition probabilities. We conduct experiments on both transductive and inductive settings and achieve comparable or better results on several graph benchmarks, including the Cora, Citeseer, Pubmed, PPI, Twitter, and YouTube datasets.
http://arxiv.org/pdf/2006.04637v1
[ "Anderson de Andrade", "Chen Liu" ]
2020-06-08T14:36:20Z
2020-06-08T14:36:20Z
1802.02736
Autonomous Power Allocation based on Distributed Deep Learning for Device-to-Device Communication Underlaying Cellular Network
For Device-to-Device (D2D) communication in Internet-of-Things (IoT) enabled 5G systems, there is a limit to how well resources can be allocated in a centralized manner, given the complicated interference between different links. If the D2D link is controlled by an enhanced node base station (eNB), it remains a burden on the eNB and causes additional latency. This paper proposes a fully autonomous power allocation method for IoT-D2D communication underlaying cellular networks using deep learning. In the proposed scheme, an IoT-D2D transmitter decides the transmit power independently from the eNB and other IoT-D2D devices. In addition, the power set can be nearly optimized by deep learning in a distributed manner to achieve higher cell throughput. We present a distributed deep learning architecture in which the devices are trained as a group but operate independently. The deep learning approach can attain near-optimal cell throughput while suppressing interference to the eNB.
http://arxiv.org/abs/1802.02736v3
[ "Jeehyeong Kim", "Joohan Park", "Jaewon Noh", "Sunghyun Cho" ]
2020-06-08T14:42:16Z
2018-02-08T08:06:39Z
2006.04641
The Dual Information Bottleneck
The Information Bottleneck (IB) framework is a general characterization of optimal representations obtained using a principled approach for balancing accuracy and complexity. Here we present a new framework, the Dual Information Bottleneck (dualIB), which resolves some of the known drawbacks of the IB. We provide a theoretical analysis of the dualIB framework: (i) solving for the structure of its solutions, (ii) unraveling its superiority in optimizing the mean prediction error exponent, and (iii) demonstrating its ability to preserve exponential forms of the original distribution. To approach large-scale problems, we present a novel variational formulation of the dualIB for Deep Neural Networks. In experiments on several datasets, we compare it to a variational form of the IB. This exposes superior Information Plane properties of the dualIB and its potential for improving the error.
http://arxiv.org/pdf/2006.04641v1
[ "Zoe Piran", "Ravid Shwartz-Ziv", "Naftali Tishby" ]
2020-06-08T14:43:11Z
2020-06-08T14:43:11Z
2006.04643
ColdGANs: Taming Language GANs with Cautious Sampling Strategies
Training regimes based on Maximum Likelihood Estimation (MLE) suffer from known limitations, often leading to poorly generated text sequences. At the root of these limitations is the mismatch between training and inference, i.e. the so-called exposure bias, exacerbated by considering only the reference texts as correct, while in practice several alternative formulations could be as good. Generative Adversarial Networks (GANs) can mitigate those limitations but the discrete nature of text has hindered their application to language generation: the approaches proposed so far, based on Reinforcement Learning, have been shown to underperform MLE. Departing from previous works, we analyze the exploration step in GANs applied to text generation, and show how classical sampling results in unstable training. We propose to consider alternative exploration strategies in a GAN framework that we name ColdGANs, where we force the sampling to be close to the distribution modes to get smoother learning dynamics. For the first time, to the best of our knowledge, the proposed language GANs compare favorably to MLE, and obtain improvements over the state-of-the-art on three generative tasks, namely unconditional text generation, question generation, and abstractive summarization.
http://arxiv.org/pdf/2006.04643v1
[ "Thomas Scialom", "Paul-Alexis Dray", "Sylvain Lamprier", "Benjamin Piwowarski", "Jacopo Staiano" ]
2020-06-08T14:48:14Z
2020-06-08T14:48:14Z
1906.04477
Causal Discovery with Reinforcement Learning
Discovering causal structure among a set of variables is a fundamental problem in many empirical sciences. Traditional score-based causal discovery methods rely on various local heuristics to search for a Directed Acyclic Graph (DAG) according to a predefined score function. While these methods, e.g., greedy equivalence search, may have attractive results with infinite samples and certain model assumptions, they are usually less satisfactory in practice due to finite data and possible violation of assumptions. Motivated by recent advances in neural combinatorial optimization, we propose to use Reinforcement Learning (RL) to search for the DAG with the best score. Our encoder-decoder model takes observable data as input and generates graph adjacency matrices that are used to compute rewards. The reward incorporates both the predefined score function and two penalty terms for enforcing acyclicity. In contrast with typical RL applications where the goal is to learn a policy, we use RL as a search strategy and our final output would be the graph, among all graphs generated during training, that achieves the best reward. We conduct experiments on both synthetic and real datasets, and show that the proposed approach not only has an improved search ability but also allows a flexible score function under the acyclicity constraint.
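The acyclicity term mentioned above can be made concrete. The sketch below shows one widely used smooth acyclicity penalty, h(A) = tr(exp(A∘A)) − d, which is zero exactly when the weighted adjacency matrix encodes a DAG; it is offered only as an illustrative example of such a penalty, and the paper's actual reward and penalty terms may be defined differently.

```python
# Hypothetical illustration of a smooth acyclicity penalty for scoring
# candidate adjacency matrices in score-based DAG search (NOTEARS-style
# h(A) = tr(exp(A*A)) - d). Not the paper's exact reward design.
import numpy as np
from scipy.linalg import expm

def acyclicity_penalty(adj: np.ndarray) -> float:
    """Returns 0 exactly when the weighted adjacency matrix encodes a DAG."""
    d = adj.shape[0]
    return float(np.trace(expm(adj * adj)) - d)

# A 3-node chain (a DAG) versus a 3-cycle.
dag = np.array([[0, 1, 0],
                [0, 0, 1],
                [0, 0, 0]], dtype=float)
cyc = np.array([[0, 1, 0],
                [0, 0, 1],
                [1, 0, 0]], dtype=float)
print(acyclicity_penalty(dag))  # ~0.0
print(acyclicity_penalty(cyc))  # > 0, the cycle is penalized
```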
http://arxiv.org/pdf/1906.04477v4
[ "Shengyu Zhu", "Ignavier Ng", "Zhitang Chen" ]
2020-06-08T14:48:29Z
2019-06-11T10:09:35Z
2006.03160
Hierarchical Optimal Transport for Robust Multi-View Learning
Traditional multi-view learning methods often rely on two assumptions: ($i$) the samples in different views are well-aligned, and ($ii$) their representations in latent space obey the same distribution. Unfortunately, these two assumptions may be questionable in practice, which limits the application of multi-view learning. In this work, we propose a hierarchical optimal transport (HOT) method to mitigate the dependency on these two assumptions. Given unaligned multi-view data, the HOT method penalizes the sliced Wasserstein distance between the distributions of different views. These sliced Wasserstein distances are used as the ground distance to calculate the entropic optimal transport across different views, which explicitly indicates the clustering structure of the views. The HOT method is applicable to both unsupervised and semi-supervised learning, and experimental results show that it performs robustly on both synthetic and real-world tasks.
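As a hedged illustration of the sliced Wasserstein distance used above as a ground distance, here is a minimal NumPy sketch (not the authors' code); the number of projections, equal sample sizes, and the synthetic Gaussian "views" are illustrative assumptions.

```python
# Minimal sliced Wasserstein distance between two unaligned sample sets:
# project onto random unit directions, sort the projections, and average
# the resulting one-dimensional Wasserstein distances.
import numpy as np

def sliced_wasserstein(x, y, n_projections=100, p=2, rng=None):
    rng = np.random.default_rng(rng)
    d = x.shape[1]
    dirs = rng.normal(size=(n_projections, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # unit directions
    total = 0.0
    for theta in dirs:
        px, py = np.sort(x @ theta), np.sort(y @ theta)    # 1-D quantiles
        # Assumes equal sample sizes; otherwise interpolate quantiles first.
        total += np.mean(np.abs(px - py) ** p)
    return (total / n_projections) ** (1.0 / p)

rng = np.random.default_rng(0)
view_a = rng.normal(0.0, 1.0, size=(500, 5))
view_b = rng.normal(0.5, 1.0, size=(500, 5))
print(sliced_wasserstein(view_a, view_b, rng=1))
```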
http://arxiv.org/pdf/2006.03160v2
[ "Dixin Luo", "Hongteng Xu", "Lawrence Carin" ]
2020-06-08T14:54:32Z
2020-06-04T22:24:45Z
1806.05139
High-Dimensional Inference for Cluster-Based Graphical Models
Motivated by modern applications in which one constructs graphical models based on a very large number of features, this paper introduces a new class of cluster-based graphical models, in which variable clustering is applied as an initial step for reducing the dimension of the feature space. We employ model assisted clustering, in which the clusters contain features that are similar to the same unobserved latent variable. Two different cluster-based Gaussian graphical models are considered: the latent variable graph, corresponding to the graphical model associated with the unobserved latent variables, and the cluster-average graph, corresponding to the vector of features averaged over clusters. Our study reveals that likelihood based inference for the latent graph, not analyzed previously, is analytically intractable. Our main contribution is the development and analysis of alternative estimation and inference strategies, for the precision matrix of an unobservable latent vector $Z$. We replace the likelihood of the data by an appropriate class of empirical risk functions, that can be specialized to the latent graphical model and to the simpler, but under-analyzed, cluster-average graphical model. The estimators thus derived can be used for inference on the graph structure, for instance on edge strength or pattern recovery. Inference is based on the asymptotic limits of the entry-wise estimates of the precision matrices associated with the conditional independence graphs under consideration. While taking the uncertainty induced by the clustering step into account, we establish Berry-Esseen central limit theorems for the proposed estimators. It is noteworthy that, although the clusters are estimated adaptively from the data, the central limit theorems regarding the entries of the estimated graphs are proved under the same conditions one would use if the clusters were known....
http://arxiv.org/pdf/1806.05139v2
[ "Carson Eisenach", "Florentina Bunea", "Yang Ning", "Claudiu Dinicu" ]
2020-06-08T14:57:30Z
2018-06-13T16:37:34Z
2006.04667
Dynamic Time Warping as a New Evaluation for Dst Forecast with Machine Learning
Models based on neural networks and machine learning are seeing a rise in popularity in space physics. In particular, the forecasting of geomagnetic indices with neural network models is becoming a popular field of study. These models are evaluated with metrics such as the root-mean-square error (RMSE) and Pearson correlation coefficient. However, these classical metrics sometimes fail to capture crucial behavior. To show where the classical metrics are lacking, we trained a neural network, using a long short-term memory network, to make a forecast of the disturbance storm time index at origin time $t$ with a forecasting horizon of 1 up to 6 hours, trained on OMNIWeb data. Inspection of the model's results with the correlation coefficient and RMSE indicated a performance comparable to the latest publications. However, visual inspection showed that the predictions made by the neural network were behaving similarly to the persistence model. In this work, a new method is proposed to measure whether two time series are shifted in time with respect to each other, such as the persistence model output versus the observation. The new measure, based on Dynamic Time Warping, is capable of identifying results made by the persistence model and shows promising results in confirming the visual observations of the neural network's output. Finally, different methodologies for training the neural network are explored in order to remove the persistence behavior from the results.
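For reference, a minimal NumPy sketch of a plain Dynamic Time Warping distance follows; it illustrates the shift-tolerant comparison the proposed measure builds on, while the authors' measure adds further machinery not shown here. The shifted signal and window sizes are illustrative.

```python
# Plain DTW distance between a forecast and an observation, contrasted with
# RMSE on a time-shifted copy of the same signal.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

obs = np.sin(np.linspace(0, 6, 60))
persistence_like = np.roll(obs, 3)                        # time-shifted copy
print(dtw_distance(obs, persistence_like))                # small despite shift
print(np.sqrt(np.mean((obs - persistence_like) ** 2)))    # RMSE penalizes shift
```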
http://arxiv.org/abs/2006.04667v1
[ "Brecht Laperre", "Jorge Amaya", "Giovanni Lapenta" ]
2020-06-08T15:14:13Z
2020-06-08T15:14:13Z
2006.04670
Traffic Flow Forecast of Road Networks with Recurrent Neural Networks
The interest in developing smart cities has increased dramatically in recent years. In this context, an intelligent transportation system is a major topic. The forecast of traffic flow is indispensable for an efficient intelligent transportation system. The traffic flow forecast is a difficult task due to its stochastic and non-linear nature. Besides classical statistical methods, neural networks are a promising approach for predicting future traffic flow. In our work, this prediction is performed with various recurrent neural networks. These are trained on measurements from induction loops, which are placed at intersections of the city. We utilized data from the beginning of January to the end of July 2018. Each model incorporates sequences of the measured traffic flow from all sensors and predicts the future traffic flow for each sensor simultaneously. A variety of model architectures, forecast horizons and input data were investigated. Most often, the vector-output model with gated recurrent units achieved the smallest error on the test set over all considered prediction scenarios. Due to the small amount of data, the generalization of the trained models is limited.
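A rough Keras sketch of a vector-output GRU forecaster is shown below; the sensor count, window length, layer size, and synthetic data are illustrative assumptions and not the configuration used in the paper.

```python
# Hedged sketch: one GRU layer reads a window of past flows from all sensors
# and a dense layer predicts the next flow value for every sensor at once.
import numpy as np
import tensorflow as tf

n_sensors, window = 8, 12                                     # assumed sizes
x = np.random.rand(256, window, n_sensors).astype("float32")  # past flows
y = np.random.rand(256, n_sensors).astype("float32")          # next step

model = tf.keras.Sequential([
    tf.keras.layers.GRU(64, input_shape=(window, n_sensors)),
    tf.keras.layers.Dense(n_sensors),          # one output per sensor
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(x[:1]).shape)              # (1, n_sensors)
```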
http://arxiv.org/pdf/2006.04670v1
[ "Ralf Rüther", "Andreas Klos", "Marius Rosenbaum", "Wolfram Schiffmann" ]
2020-06-08T15:17:58Z
2020-06-08T15:17:58Z
1812.06600
Double Deep Q-Learning for Optimal Execution
Optimal trade execution is an important problem faced by essentially all traders. Much research into optimal execution uses stringent model assumptions and applies continuous-time stochastic control to solve the resulting problems. Here, we instead take a model-free approach and develop a variation of Deep Q-Learning to estimate the optimal actions of a trader. The model is a fully connected Neural Network trained using Experience Replay and Double DQN with input features given by the current state of the limit order book, other trading signals, and available execution actions, while the output is the Q-value function estimating the future rewards under an arbitrary action. We apply our model to nine different stocks and find that it outperforms the standard benchmark approach on most stocks using the measures of (i) mean and median out-performance, (ii) probability of out-performance, and (iii) gain-loss ratios.
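The Double DQN target used when training such a model can be written compactly; the NumPy sketch below abstracts the online and target networks as plain Q-value arrays and uses an assumed discount factor, so it illustrates the target computation only, not the paper's trading setup.

```python
# Double DQN target: the online network selects the next action, the target
# network evaluates it.
import numpy as np

def double_dqn_targets(rewards, q_next_online, q_next_target, gamma=0.99, done=None):
    """rewards: (B,); q_next_*: (B, n_actions) Q-values for the next state."""
    if done is None:
        done = np.zeros_like(rewards, dtype=bool)
    best_actions = np.argmax(q_next_online, axis=1)                 # selection
    q_eval = q_next_target[np.arange(len(rewards)), best_actions]   # evaluation
    return rewards + gamma * q_eval * (~done)

rng = np.random.default_rng(0)
r = rng.normal(size=4)
print(double_dqn_targets(r, rng.normal(size=(4, 5)), rng.normal(size=(4, 5))))
```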
http://arxiv.org/pdf/1812.06600v2
[ "Brian Ning", "Franco Ho Ting Lin", "Sebastian Jaimungal" ]
2020-06-08T15:21:40Z
2018-12-17T03:33:55Z
2006.04682
An Ensemble Approach for Compressive Sensing with Quantum
We leverage the idea of a statistical ensemble to improve the quality of quantum annealing based binary compressive sensing. Since executing quantum machine instructions on a quantum annealer can result in an excited state, rather than the ground state of the given Hamiltonian, we use different penalty parameters to generate multiple distinct quadratic unconstrained binary optimization (QUBO) functions whose ground state(s) represent a potential solution of the original problem. We then employ the attained samples from minimizing all corresponding (different) QUBOs to estimate the solution of the problem of binary compressive sensing. Our experiments, on a D-Wave 2000Q quantum processor, demonstrated that the proposed ensemble scheme is notably less sensitive to the calibration of the penalty parameter that controls the trade-off between the feasibility and sparsity of recoveries.
http://arxiv.org/pdf/2006.04682v1
[ "Ramin Ayanzadeh", "Milton Halem", "Tim Finin" ]
2020-06-08T15:32:22Z
2020-06-08T15:32:22Z
2006.04696
Unsupervised Graph Representation by Periphery and Hierarchical Information Maximization
Deep representation learning on non-Euclidean data types, such as graphs, has gained significant attention in recent years. The advent of graph neural networks has improved the state-of-the-art for both node-level and entire-graph representations in a vector space. However, for entire-graph representation, most existing graph neural networks are trained with a graph classification loss in a supervised way, and obtaining labels for a large number of graphs is expensive in real-world applications. Thus, in this paper we aim to propose an unsupervised graph neural network that generates a vector representation of an entire graph. For this purpose, we combine the ideas of hierarchical graph neural networks and mutual information maximization into a single framework. We also propose and use the concept of the periphery representation of a graph and show its usefulness in the proposed algorithm, which is referred to as GraPHmax. We conduct thorough experiments on several real-world graph datasets and compare the performance of GraPHmax with a diverse set of both supervised and unsupervised baseline algorithms. Experimental results show that we are able to improve the state-of-the-art for multiple graph-level tasks on several real-world datasets, while remaining competitive on the others.
http://arxiv.org/pdf/2006.04696v1
[ "Sambaran Bandyopadhyay", "Manasvi Aggarwal", "M. Narasimha Murty" ]
2020-06-08T15:50:40Z
2020-06-08T15:50:40Z
2006.04697
Supervised Whole DAG Causal Discovery
We propose to address the task of causal structure learning from data in a supervised manner. Existing work on learning causal directions by supervised learning is restricted to learning pairwise relations and is not well suited for whole-DAG discovery. We propose a novel approach that models whole-DAG structure discovery as a supervised learning problem. To fit the problem at hand, we propose to use permutation-equivariant models that align well with the problem domain. We evaluate the proposed approach extensively on synthetic graphs of size 10, 20, 50, and 100 and on real data, and show promising results compared with a variety of previous approaches.
http://arxiv.org/pdf/2006.04697v1
[ "Hebi Li", "Qi Xiao", "Jin Tian" ]
2020-06-08T15:53:20Z
2020-06-08T15:53:20Z
2006.05352
Design Challenges of Neural Network Acceleration Using Stochastic Computing
The enormous and ever-increasing complexity of state-of-the-art neural networks (NNs) has impeded the deployment of deep learning on resource-limited devices such as the Internet of Things (IoTs). Stochastic computing exploits the inherent amenability to approximation characteristic of NNs to reduce their energy and area footprint, two critical requirements of small embedded devices suitable for the IoTs. This report evaluates and compares two recently proposed stochastic-based NN designs, referred to as BISC (Binary Interfaced Stochastic Computing) by Sim and Lee, 2017, and ESL (Extended Stochastic Logic) by Canals et al., 2016. Using analysis and simulation, we compare three distinct implementations of these designs in terms of performance, power consumption, area, and accuracy. We also discuss the overall challenges faced in adopting stochastic computing for building NNs. We find that BISC outperforms the other architectures when executing the LeNet-5 NN model applied to the MNIST digit recognition dataset. Our analysis and simulation experiments indicate that this architecture is around 50X faster, occupies 5.7X and 2.9X less area, and consumes 7.8X and 1.8X less power than the two ESL architectures.
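The underlying stochastic-computing primitive can be illustrated briefly: in unipolar coding, a value in [0, 1] is the ones-density of a random bitstream, and a single AND gate multiplies two such values. The sketch below shows this primitive only; it is not a model of either compared architecture, and the stream length is an illustrative assumption.

```python
# Unipolar stochastic-computing multiplication: the AND of two independent
# Bernoulli bitstreams has a ones-density equal to the product of the inputs.
import numpy as np

rng = np.random.default_rng(0)

def to_stream(p, n_bits=4096):
    return (rng.random(n_bits) < p).astype(np.uint8)

def from_stream(bits):
    return bits.mean()

a, b = 0.7, 0.4
prod_stream = to_stream(a) & to_stream(b)   # AND gate multiplies densities
print(from_stream(prod_stream), a * b)      # ~0.28 vs exact 0.28
```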
http://arxiv.org/pdf/2006.05352v1
[ "Alireza Khadem" ]
2020-06-08T16:06:56Z
2020-06-08T16:06:56Z
2006.13921
Determining Secondary Attributes for Credit Evaluation in P2P Lending
There has been an increased need for secondary means of credit evaluation by both traditional banking organizations as well as peer-to-peer lending entities. This is especially important in the present technological era where sticking with strict primary credit histories doesn't help distinguish between a 'good' and a 'bad' borrower, and ends up hurting both the individual borrower as well as the investor as a whole. We utilized machine learning classification and clustering algorithms to accurately predict a borrower's creditworthiness while identifying specific secondary attributes that contribute to this score. While extensive research has been done in predicting when a loan would be fully paid, the area of feature selection for lending is relatively new. We achieved 65% F1 and 73% AUC on the LendingClub data while identifying key secondary attributes.
http://arxiv.org/pdf/2006.13921v1
[ "Revathi Bhuvaneswari", "Antonio Segalini" ]
2020-06-08T16:12:00Z
2020-06-08T16:12:00Z
2006.04716
Energy Constraints Improve Liquid State Machine Performance
A model of metabolic energy constraints is applied to a liquid state machine in order to analyze its effects on network performance. It was found that, in certain combinations of energy constraints, a significant increase in testing accuracy emerged; an improvement of 4.25% was observed on a seizure detection task using a digital liquid state machine while reducing overall reservoir spiking activity by 6.9%. The accuracy improvements appear to be linked to the energy constraints' impact on the reservoir's dynamics, as measured through metrics such as the Lyapunov exponent and the separation of the reservoir.
http://arxiv.org/pdf/2006.04716v1
[ "Andrew Fountain", "Cory Merkel" ]
2020-06-08T16:13:16Z
2020-06-08T16:13:16Z
1905.12558
Limitations of the Empirical Fisher Approximation for Natural Gradient Descent
Natural gradient descent, which preconditions a gradient descent update with the Fisher information matrix of the underlying statistical model, is a way to capture partial second-order information. Several highly visible works have advocated an approximation known as the empirical Fisher, drawing connections between approximate second-order methods and heuristics like Adam. We dispute this argument by showing that the empirical Fisher---unlike the Fisher---does not generally capture second-order information. We further argue that the conditions under which the empirical Fisher approaches the Fisher (and the Hessian) are unlikely to be met in practice, and that, even on simple optimization problems, the pathologies of the empirical Fisher can have undesirable effects.
http://arxiv.org/pdf/1905.12558v3
[ "Frederik Kunstner", "Lukas Balles", "Philipp Hennig" ]
2020-06-08T16:20:22Z
2019-05-29T16:11:00Z
2006.04732
A Semiparametric Approach to Interpretable Machine Learning
Black box models in machine learning have demonstrated excellent predictive performance in complex problems and high-dimensional settings. However, their lack of transparency and interpretability restrict the applicability of such models in critical decision-making processes. In order to combat this shortcoming, we propose a novel approach to trading off interpretability and performance in prediction models using ideas from semiparametric statistics, allowing us to combine the interpretability of parametric regression models with performance of nonparametric methods. We achieve this by utilizing a two-piece model: the first piece is interpretable and parametric, to which a second, uninterpretable residual piece is added. The performance of the overall model is optimized using methods from the sufficient dimension reduction literature. Influence function based estimators are derived and shown to be doubly robust. This allows for use of approaches such as double Machine Learning in estimating our model parameters. We illustrate the utility of our approach via simulation studies and a data application based on predicting the length of stay in the intensive care unit among surgery patients.
http://arxiv.org/pdf/2006.04732v1
[ "Numair Sani", "Jaron Lee", "Razieh Nabi", "Ilya Shpitser" ]
2020-06-08T16:38:15Z
2020-06-08T16:38:15Z
1911.01468
Auditing and Achieving Intersectional Fairness in Classification Problems
Machine learning algorithms are extensively used to make increasingly more consequential decisions about people, so achieving optimal predictive performance can no longer be the only focus. A particularly important consideration is fairness with respect to race, gender, or any other sensitive attribute. This paper studies intersectional fairness, where intersections of multiple sensitive attributes are considered. Prior research has mainly focused on fairness with respect to a single sensitive attribute, with intersectional fairness being comparatively less studied despite its critical importance for the safety of modern machine learning systems. We present a comprehensive framework for auditing and achieving intersectional fairness in classification problems: we define a suite of metrics to assess intersectional fairness in the data or model outputs by extending known single-attribute fairness metrics, and propose methods for robustly estimating them even when some intersectional subgroups are underrepresented. Furthermore, we develop post-processing techniques to mitigate any detected intersectional bias in a classification model. Our techniques do not rely on any assumptions regarding the underlying model and preserve predictive performance at a guaranteed level of fairness. Finally, we give guidance on a practical implementation, showing how the proposed methods perform on a real-world dataset.
http://arxiv.org/pdf/1911.01468v2
[ "Giulio Morina", "Viktoriia Oliinyk", "Julian Waton", "Ines Marusic", "Konstantinos Georgatzis" ]
2020-06-08T16:41:23Z
2019-11-04T19:55:23Z
2006.06531
A Comparative Study of U-Net Topologies for Background Removal in Histopathology Images
During the last decade, the digitization of pathology has gained considerable momentum. Digital pathology offers many advantages including more efficient workflows, easier collaboration as well as a powerful venue for telepathology. At the same time, applying Computer-Aided Diagnosis (CAD) on Whole Slide Images (WSIs) has received substantial attention as a direct result of the digitization. The first step in any image analysis is to extract the tissue. Hence, background removal is an essential prerequisite for efficient and accurate results for many algorithms. Although this discrimination is straightforward for human operators, the identification of tissue regions in WSIs can be challenging for computers, mainly due to the existence of color variations and artifacts. Moreover, some cases such as alveolar tissue types, fatty tissues, and tissues with poor staining are difficult to detect. In this paper, we perform experiments on the U-Net architecture with different network backbones (different topologies) to remove the background as well as artifacts from WSIs in order to extract the tissue regions. We compare a wide range of backbone networks including MobileNet, VGG16, EfficientNet-B3, ResNet50, ResNext101 and DenseNet121. We trained and evaluated the network on a manually labeled subset of The Cancer Genome Atlas (TCGA) Dataset. EfficientNet-B3 and MobileNet reached the best results, with almost 99% sensitivity and specificity.
http://arxiv.org/pdf/2006.06531v1
[ "Abtin Riasatian", "Maral Rasoolijaberi", "Morteza Babaei", "H. R. Tizhoosh" ]
2020-06-08T16:41:44Z
2020-06-08T16:41:44Z
2006.04748
Serverless on FHIR: Deploying machine learning models for healthcare on the cloud
Machine Learning (ML) plays a vital role in implementing digital health. The advances in hardware and the democratization of software tools have revolutionized machine learning. However, the deployment of ML models -- the mathematical representation of the task to be performed -- for effective and efficient clinical decision support at the point of care is still a challenge. ML models undergo constant improvement of their accuracy and predictive power with a high turnover rate. Updating models consumed by downstream health information systems is essential for patient safety. We introduce a functional taxonomy and a four-tier architecture for cloud-based model deployment for digital health. The four tiers are containerized microservices for maintainability, serverless architecture for scalability, function as a service for portability and FHIR schema for discoverability. We call this architecture Serverless on FHIR and propose this as a standard to deploy digital health applications that can be consumed by downstream systems such as EMRs and visualization tools.
http://arxiv.org/pdf/2006.04748v1
[ "Bell Raj Eapen", "Kamran Sartipi", "Norm Archer" ]
2020-06-08T16:57:30Z
2020-06-08T16:57:30Z
2006.04750
Nonparametric Feature Impact and Importance
Practitioners use feature importance to rank and eliminate weak predictors during model development in an effort to simplify models and improve generality. Unfortunately, they also routinely conflate such feature importance measures with feature impact, the isolated effect of an explanatory variable on the response variable. This can lead to real-world consequences when importance is inappropriately interpreted as impact for business or medical insight purposes. The dominant approach for computing importances is through interrogation of a fitted model, which works well for feature selection, but gives distorted measures of feature impact. The same method applied to the same data set can yield different feature importances, depending on the model, leading us to conclude that impact should be computed directly from the data. While there are nonparametric feature selection algorithms, they typically provide feature rankings, rather than measures of impact or importance. They also typically focus on single-variable associations with the response. In this paper, we give mathematical definitions of feature impact and importance, derived from partial dependence curves, that operate directly on the data. To assess quality, we show that features ranked by these definitions are competitive with existing feature selection techniques using three real data sets for predictive tasks.
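A hedged sketch of the partial-dependence ingredient mentioned above follows; the model, synthetic data, and grid are illustrative assumptions, and the paper's data-direct impact and importance definitions differ in detail from this model-based curve.

```python
# Partial-dependence-style curve for one feature: sweep that feature over a
# grid while averaging model predictions over the rest of the data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + 0.1 * rng.normal(size=500)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

def partial_dependence_curve(model, X, feature, grid):
    curve = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v                 # force the feature to value v
        curve.append(model.predict(Xv).mean())
    return np.array(curve)

grid = np.linspace(-2, 2, 9)
print(partial_dependence_curve(model, X, feature=0, grid=grid))
```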
http://arxiv.org/pdf/2006.04750v1
[ "Terence Parr", "James D. Wilson", "Jeff Hamrick" ]
2020-06-08T17:07:35Z
2020-06-08T17:07:35Z
2006.04751
The Golden Ratio of Learning and Momentum
Gradient descent has been a central training principle for artificial neural networks from the early beginnings to today's deep learning networks. The most common implementation is the backpropagation algorithm for training feed-forward neural networks in a supervised fashion. Backpropagation involves computing the gradient of a loss function, with respect to the weights of the network, to update the weights and thus minimize loss. Although the mean square error is often used as a loss function, the general stochastic gradient descent principle does not immediately connect with a specific loss function. Another drawback of backpropagation has been the search for optimal values of two important training parameters, learning rate and momentum weight, which are determined empirically in most systems. The learning rate specifies the step size towards a minimum of the loss function when following the gradient, while the momentum weight considers previous weight changes when updating current weights. Using both parameters in conjunction with each other is generally accepted as a means to improving training, although their specific values do not follow immediately from standard backpropagation theory. This paper proposes a new information-theoretical loss function motivated by neural signal processing in a synapse. The new loss function implies a specific learning rate and momentum weight, leading to empirical parameters often used in practice. The proposed framework also provides a more formal explanation of the momentum term and its smoothing effect on the training process. All results taken together show that loss, learning rate, and momentum are closely connected. To support these theoretical findings, experiments for handwritten digit recognition show the practical usefulness of the proposed loss function and training parameters.
http://arxiv.org/pdf/2006.04751v1
[ "Stefan Jaeger" ]
2020-06-08T17:08:13Z
2020-06-08T17:08:13Z
2002.08563
The continuous categorical: a novel simplex-valued exponential family
Simplex-valued data appear throughout statistics and machine learning, for example in the context of transfer learning and compression of deep networks. Existing models for this class of data rely on the Dirichlet distribution or other related loss functions; here we show these standard choices suffer systematically from a number of limitations, including bias and numerical issues that frustrate the use of flexible network models upstream of these distributions. We resolve these limitations by introducing a novel exponential family of distributions for modeling simplex-valued data - the continuous categorical, which arises as a nontrivial multivariate generalization of the recently discovered continuous Bernoulli. Unlike the Dirichlet and other typical choices, the continuous categorical results in a well-behaved probabilistic loss function that produces unbiased estimators, while preserving the mathematical simplicity of the Dirichlet. As well as exploring its theoretical properties, we introduce sampling methods for this distribution that are amenable to the reparameterization trick, and evaluate their performance. Lastly, we demonstrate that the continuous categorical outperforms standard choices empirically, across a simulation study, an applied example on multi-party elections, and a neural network compression task.
http://arxiv.org/pdf/2002.08563v2
[ "Elliott Gordon-Rodriguez", "Gabriel Loaiza-Ganem", "John P. Cunningham" ]
2020-06-08T17:13:08Z
2020-02-20T04:28:02Z
2006.04760
Outlier Detection Using a Novel method: Quantum Clustering
We propose a new assumption for outlier detection: normal data instances are commonly located in areas where there is hardly any fluctuation in data density, while outliers often appear in areas where the data density fluctuates violently. Based on this hypothesis, we apply a novel density-based approach to unsupervised outlier detection. This approach, called Quantum Clustering (QC), deals with unlabeled data and constructs a potential function to find the centroids of clusters and the outliers. The experiments show that the potential function can clearly and effectively find the hidden outliers in the data. Besides, by using QC, we can find more subtle outliers by adjusting the parameter $\sigma$. Moreover, our approach is also evaluated on two datasets (Air Quality Detection and Darwin Correspondence Project) from two different research areas, and the results show the wide applicability of our method.
http://arxiv.org/pdf/2006.04760v1
[ "Ding Liu", "Hui Li" ]
2020-06-08T17:19:41Z
2020-06-08T17:19:41Z
2006.04762
Nonlinear Higher-Order Label Spreading
Label spreading is a general technique for semi-supervised learning with point cloud or network data, which can be interpreted as a diffusion of labels on a graph. While there are many variants of label spreading, nearly all of them are linear models, where the incoming information to a node is a weighted sum of information from neighboring nodes. Here, we add nonlinearity to label spreading through nonlinear functions of higher-order structure in the graph, namely triangles in the graph. For a broad class of nonlinear functions, we prove convergence of our nonlinear higher-order label spreading algorithm to the global solution of a constrained semi-supervised loss function. We demonstrate the efficiency and efficacy of our approach on a variety of point cloud and network datasets, where the nonlinear higher-order model compares favorably to classical label spreading, as well as hypergraph models and graph neural networks.
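For context, the classical linear label spreading iteration that the nonlinear higher-order method generalizes can be sketched in a few lines of NumPy; the toy graph, mixing parameter, and iteration count below are illustrative assumptions, and the triangle-based nonlinearity of the paper is not shown.

```python
# Classical label spreading: labels diffuse over a symmetrically normalized
# graph and are mixed with the initial seed labels at every step.
import numpy as np

def label_spreading(W, Y, alpha=0.9, n_iter=100):
    """W: (n, n) symmetric adjacency; Y: (n, c) one-hot seeds (zero rows if unlabeled)."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]   # D^-1/2 W D^-1/2
    F = Y.astype(float).copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F.argmax(axis=1)

# Two triangles joined by a single edge, with one seed label per triangle.
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    W[i, j] = W[j, i] = 1.0
Y = np.zeros((6, 2)); Y[0, 0] = 1.0; Y[5, 1] = 1.0
print(label_spreading(W, Y))   # nodes split between the two seed classes
```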
http://arxiv.org/pdf/2006.04762v1
[ "Francesco Tudisco", "Austin R. Benson", "Konstantin Prokopchik" ]
2020-06-08T17:29:40Z
2020-06-08T17:29:40Z
2006.04766
A Heuristically Self-Organised Linguistic Attribute Deep Learning in Edge Computing For IoT Intelligence
With the development of Internet of Things (IoT), IoT intelligence becomes emerging technology. "Curse of Dimensionality" is the barrier of data fusion in edge devices for the success of IoT intelligence. A Linguistic Attribute Hierarchy (LAH), embedded with Linguistic Decision Trees (LDTs), can represent a new attribute deep learning. In contrast to the conventional deep learning, an LAH could overcome the shortcoming of missing interpretation by providing transparent information propagation through the rules, produced by LDTs in the LAH. Similar to the conventional deep learning, the computing complexity of optimising LAHs blocks the applications of LAHs. In this paper, we propose a heuristic approach to constructing an LAH, embedded with LDTs for decision making or classification by utilising the distance correlations between attributes and between attributes and the goal variable. The set of attributes is divided to some attribute clusters, and then they are heuristically organised to form a linguistic attribute hierarchy. The proposed approach was validated with some benchmark decision making or classification problems from the UCI machine learning repository. The experimental results show that the proposed self-organisation algorithm can construct an effective and efficient linguistic attribute hierarchy. Such a self-organised linguistic attribute hierarchy embedded with LDTs can not only efficiently tackle "curse of dimensionality" in a single LDT for data fusion with massive attributes, but also achieve better or comparable performance on decision making or classification, compared to the single LDT for the problem to be solved. The self-organisation algorithm is much efficient than the Genetic Algorithm in Wrapper for the optimisation of LAHs. This makes it feasible to embed the self-organisation algorithm in edge devices for IoT intelligence.
http://arxiv.org/pdf/2006.04766v1
[ "Hongmei He", "Zhenhuan Zhu" ]
2020-06-08T17:36:05Z
2020-06-08T17:36:05Z
2006.04769
The Penalty Imposed by Ablated Data Augmentation
There is a set of data augmentation techniques that ablate parts of the input at random. These include input dropout, cutout, and random erasing. We term these techniques ablated data augmentation. Though these techniques seem similar in spirit and have shown success in improving model performance in a variety of domains, we do not yet have a mathematical understanding of the differences between them as we do for other regularization techniques such as L1 or L2. First, we study a formal model of mean ablated data augmentation and inverted dropout for linear regression. We prove that ablated data augmentation is equivalent to optimizing the ordinary least squares objective along with a penalty that we call the Contribution Covariance Penalty, and that inverted dropout, a more common implementation than dropout in popular frameworks, is equivalent to optimizing the ordinary least squares objective along with Modified L2. For deep networks, we demonstrate an empirical version of the result if we replace contributions with attributions and coefficients with average gradients: the Contribution Covariance Penalty and the Modified L2 Penalty drop as the corresponding ablated data augmentation increases, across a variety of networks.
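One of the ablation techniques named above, random erasing, can be sketched directly; the rectangle-size range and fill value below are illustrative assumptions, not the settings analyzed in the paper.

```python
# Random erasing: ablate a randomly placed rectangle of the input.
import numpy as np

def random_erase(image, scale=(0.1, 0.3), fill=0.0, rng=None):
    """image: (H, W) or (H, W, C) array; erases one random rectangle."""
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    area = rng.uniform(*scale) * h * w          # fraction of pixels to ablate
    eh = int(np.clip(np.sqrt(area), 1, h))
    ew = int(np.clip(area / eh, 1, w))
    top = rng.integers(0, h - eh + 1)
    left = rng.integers(0, w - ew + 1)
    out = image.copy()
    out[top:top + eh, left:left + ew] = fill
    return out

img = np.ones((32, 32))
print(random_erase(img, rng=0).mean())   # < 1.0: part of the image was ablated
```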
http://arxiv.org/pdf/2006.04769v1
[ "Frederick Liu", "Amir Najmi", "Mukund Sundararajan" ]
2020-06-08T17:38:21Z
2020-06-08T17:38:21Z
2006.04780
Lorentz Group Equivariant Neural Network for Particle Physics
We present a neural network architecture that is fully equivariant with respect to transformations under the Lorentz group, a fundamental symmetry of space and time in physics. The architecture is based on the theory of the finite-dimensional representations of the Lorentz group and the equivariant nonlinearity involves the tensor product. For classification tasks in particle physics, we demonstrate that such an equivariant architecture leads to drastically simpler models that have relatively few learnable parameters and are much more physically interpretable than leading approaches that use CNNs and point cloud approaches. The competitive performance of the network is demonstrated on a public classification dataset [27] for tagging top quark decays given energy-momenta of jet constituents produced in proton-proton collisions.
http://arxiv.org/pdf/2006.04780v1
[ "Alexander Bogatskiy", "Brandon Anderson", "Jan T. Offermann", "Marwah Roussi", "David W. Miller", "Risi Kondor" ]
2020-06-08T17:54:43Z
2020-06-08T17:54:43Z
2005.08067
Forecasting with sktime: Designing sktime's New Forecasting API and Applying It to Replicate and Extend the M4 Study
We present a new open-source framework for forecasting in Python. Our framework forms part of sktime, a more general machine learning toolbox for time series with scikit-learn compatible interfaces for different learning tasks. Our new framework provides dedicated forecasting algorithms and tools to build, tune and evaluate composite models. We use sktime to both replicate and extend key results from the M4 forecasting study. In particular, we further investigate the potential of simple off-the-shelf machine learning approaches for univariate forecasting. Our main results are that simple hybrid approaches can boost the performance of statistical models, and that simple pure approaches can achieve competitive performance on the hourly data set, outperforming the statistical algorithms and coming close to the M4 winner.
http://arxiv.org/pdf/2005.08067v2
[ "Markus Löning", "Franz Király" ]
2020-06-08T17:57:30Z
2020-05-16T19:15:09Z
2006.04788
tvGP-VAE: Tensor-variate Gaussian Process Prior Variational Autoencoder
Variational autoencoders (VAEs) are a powerful class of deep generative latent variable models for unsupervised representation learning on high-dimensional data. To ensure computational tractability, VAEs are often implemented with a univariate standard Gaussian prior and a mean-field Gaussian variational posterior distribution. This results in vector-valued latent variables that are agnostic to the original data structure, which might be highly correlated across and within multiple dimensions. We propose a tensor-variate extension to the VAE framework, the tensor-variate Gaussian process prior variational autoencoder (tvGP-VAE), which replaces the standard univariate Gaussian prior and posterior distributions with tensor-variate Gaussian processes. The tvGP-VAE is able to explicitly model correlation structures via the use of kernel functions over the dimensions of tensor-valued latent variables. Using spatiotemporally correlated image time series as an example, we show that the choice of which correlation structures to explicitly represent in the latent space has a significant impact on model performance in terms of reconstruction.
http://arxiv.org/pdf/2006.04788v1
[ "Alex Campbell", "Pietro Liò" ]
2020-06-08T17:59:13Z
2020-06-08T17:59:13Z
2006.04847
Procrustean Orthogonal Sparse Hashing
Hashing is one of the most popular methods for similarity search because of its speed and efficiency. Dense binary hashing is prevalent in the literature. Recently, insect olfaction was shown to be structurally and functionally analogous to sparse hashing [6]. Here, we prove that this biological mechanism is the solution to a well-posed optimization problem. Furthermore, we show that orthogonality increases the accuracy of sparse hashing. Next, we present a novel method, Procrustean Orthogonal Sparse Hashing (POSH), that unifies these findings, learning an orthogonal transform from training data compatible with the sparse hashing mechanism. We provide theoretical evidence of the shortcomings of Optimal Sparse Lifting (OSL) [22] and BioHash [30], two related olfaction-inspired methods, and propose two new methods, Binary OSL and SphericalHash, to address these deficiencies. We compare POSH, Binary OSL, and SphericalHash to several state-of-the-art hashing methods and provide empirical results for the superiority of the proposed methods across a wide range of standard benchmarks and parameter settings.
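A hedged sketch of the olfaction-style sparse hashing mechanism referenced above follows: a random expansion followed by a winner-take-all top-k binarization. The projection here is random and fixed; POSH's learned orthogonal transform and the compared methods are not shown, and the code sizes are illustrative.

```python
# Fly-hash-style sparse hashing: randomly expand, then keep only the top-k
# activations as a sparse binary code.
import numpy as np

def sparse_hash(X, n_bits=256, k=16, rng=None):
    rng = np.random.default_rng(rng)
    P = rng.normal(size=(X.shape[1], n_bits))          # random expansion
    act = X @ P
    codes = np.zeros_like(act, dtype=np.uint8)
    topk = np.argpartition(act, -k, axis=1)[:, -k:]    # winner-take-all
    np.put_along_axis(codes, topk, 1, axis=1)
    return codes

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 64))
H = sparse_hash(X, rng=1)
print(H.shape, H.sum(axis=1))   # (4, 256), exactly k ones per row
```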
http://arxiv.org/pdf/2006.04847v1
[ "Mariano Tepper", "Dipanjan Sengupta", "Ted Willke" ]
2020-06-08T18:09:33Z
2020-06-08T18:09:33Z
1912.06689
Data-driven confidence bands for distributed nonparametric regression
Gaussian Process Regression and Kernel Ridge Regression are popular nonparametric regression approaches. Unfortunately, they suffer from high computational complexity rendering them inapplicable to the modern massive datasets. To that end a number of approximations have been suggested, some of them allowing for a distributed implementation. One of them is the divide and conquer approach, splitting the data into a number of partitions, obtaining the local estimates and finally averaging them. In this paper we suggest a novel computationally efficient fully data-driven algorithm, quantifying uncertainty of this method, yielding frequentist $L_2$-confidence bands. We rigorously demonstrate validity of the algorithm. Another contribution of the paper is a minimax-optimal high-probability bound for the averaged estimator, complementing and generalizing the known risk bounds.
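The divide-and-conquer baseline described above can be sketched with scikit-learn's kernel ridge regression; the partition count and kernel hyperparameters are illustrative assumptions, and the paper's data-driven confidence bands are not reproduced here.

```python
# Divide and conquer for kernel ridge regression: split the data, fit a local
# estimator on each partition, and average the local predictions.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(3000, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=3000)

def distributed_krr_predict(X, y, X_test, n_parts=10, alpha=1e-2, gamma=1.0):
    parts = np.array_split(rng.permutation(len(X)), n_parts)
    preds = []
    for idx in parts:
        est = KernelRidge(alpha=alpha, kernel="rbf", gamma=gamma)
        est.fit(X[idx], y[idx])            # local estimate on one partition
        preds.append(est.predict(X_test))
    return np.mean(preds, axis=0)          # averaged estimator

X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
print(distributed_krr_predict(X, y, X_test))
```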
http://arxiv.org/pdf/1912.06689v2
[ "Valeriy Avanesov" ]
2020-06-08T18:17:00Z
2019-12-13T20:13:55Z
2006.02085
A Scalable and Cloud-Native Hyperparameter Tuning System
In this paper, we introduce Katib: a scalable, cloud-native, and production-ready hyperparameter tuning system that is agnostic of the underlying machine learning framework. Though there are multiple hyperparameter tuning systems available, this is the first one that caters to the needs of both users and administrators of the system. We present the motivation and design of the system and contrast it with existing hyperparameter tuning systems, especially in terms of multi-tenancy, scalability, fault-tolerance, and extensibility. It can be deployed on local machines, or hosted as a service in on-premise data centers, or in private/public clouds. We demonstrate the advantage of our system using experimental results as well as real-world, production use cases. Katib has active contributors from multiple companies and is open-sourced at https://github.com/kubeflow/katib under the Apache 2.0 license.
http://arxiv.org/pdf/2006.02085v2
[ "Johnu George", "Ce Gao", "Richard Liu", "Hou Gang Liu", "Yuan Tang", "Ramdoot Pydipaty", "Amit Kumar Saha" ]
2020-06-08T18:26:45Z
2020-06-03T07:44:25Z
2006.08344
Wat zei je? Detecting Out-of-Distribution Translations with Variational Transformers
We detect out-of-training-distribution sentences in Neural Machine Translation using the Bayesian Deep Learning equivalent of Transformer models. For this we develop a new measure of uncertainty designed specifically for long sequences of discrete random variables -- i.e. words in the output sentence. Our new measure of uncertainty solves a major intractability in the naive application of existing approaches on long sentences. We use our new measure on a Transformer model trained with dropout approximate inference. On the task of German-English translation using WMT13 and Europarl, we show that with dropout uncertainty our measure is able to identify when Dutch source sentences, sentences which use the same word types as German, are given to the model instead of German.
http://arxiv.org/pdf/2006.08344v1
[ "Tim Z. Xiao", "Aidan N. Gomez", "Yarin Gal" ]
2020-06-08T20:00:36Z
2020-06-08T20:00:36Z
2001.06728
Big-Data Science in Porous Materials: Materials Genomics and Machine Learning
By combining metal nodes with organic linkers we can potentially synthesize millions of possible metal organic frameworks (MOFs). At present, we have libraries of over ten thousand synthesized materials and millions of in-silico predicted materials. The fact that we have so many materials opens many exciting avenues to tailor-make a material that is optimal for a given application. However, from an experimental and computational point of view we simply have too many materials to screen using brute-force techniques. In this review, we show that having so many materials allows us to use big-data methods as a powerful technique to study these materials and to discover complex correlations. The first part of the review gives an introduction to the principles of big-data science. We emphasize the importance of data collection, methods to augment small data sets, and how to select appropriate training sets. An important part of this review is the different approaches that are used to represent these materials in feature space. The review also includes a general overview of the different ML techniques, but as most applications in porous materials use supervised ML, our review is focused on the different approaches for supervised ML. In particular, we review the different methods to optimize the ML process and how to quantify the performance of the different methods. In the second part, we review how the different approaches of ML have been applied to porous materials. In particular, we discuss applications in the field of gas storage and separation, the stability of these materials, their electronic properties, and their synthesis. The range of topics illustrates the large variety of questions that can be studied with big-data science. Given the increasing interest of the scientific community in ML, we expect this list to rapidly expand in the coming years.
http://arxiv.org/abs/2001.06728v3
[ "Kevin Maik Jablonka", "Daniele Ongari", "Seyed Mohamad Moosavi", "Berend Smit" ]
2020-06-08T20:05:47Z
2020-01-18T21:01:07Z
2006.04916
An Algorithmic Introduction to Clustering
This paper tries to present a more unified view of clustering, by identifying the relationships between five different clustering algorithms. Some of the results are not new, but they are presented in a cleaner, simpler and more concise way. To the best of my knowledge, the interpretation of DBSCAN as a climbing procedure, which introduces a theoretical connection between DBSCAN and Mean shift, is a novel result.
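As a small illustration of the mean-shift "climbing" procedure that the paper relates DBSCAN to, the NumPy sketch below repeatedly moves each point toward the kernel-weighted mean of the data, i.e. it climbs the estimated density; the bandwidth, iteration count, and toy data are illustrative assumptions.

```python
# Mean shift as a density-climbing procedure with a Gaussian kernel.
import numpy as np

def mean_shift(X, bandwidth=0.8, n_iter=30):
    Z = X.copy()
    for _ in range(n_iter):
        # Gaussian kernel weights between current positions and the data.
        d2 = ((Z[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2 * bandwidth ** 2))
        Z = (w @ X) / w.sum(axis=1, keepdims=True)
    return Z   # points collapse toward density modes

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
modes = mean_shift(X)
print(np.unique(np.round(modes, 1), axis=0))   # roughly two modes remain
```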
http://arxiv.org/pdf/2006.04916v1
[ "Bernardo A. Gonzalez-Torres" ]
2020-06-08T20:21:34Z
2020-06-08T20:21:34Z
1904.07612
Speech Denoising by Accumulating Per-Frequency Modeling Fluctuations
We present a method for audio denoising that combines processing done in both the time domain and the time-frequency domain. Given a noisy audio clip, the method trains a deep neural network to fit this signal. Since the fitting is only partly successful and is able to better capture the underlying clean signal than the noise, the output of the network helps to disentangle the clean audio from the rest of the signal. This is done by accumulating a fitting score per time-frequency bin and applying the time-frequency domain filtering based on the obtained scores. The method is completely unsupervised and only trains on the specific audio clip that is being denoised. Our experiments demonstrate favorable performance in comparison to the literature methods. Our code and samples are available at github.com/mosheman5/DNP and as supplementary. Index Terms: Audio denoising; Unsupervised learning
http://arxiv.org/pdf/1904.07612v3
[ "Michael Michelashvili", "Lior Wolf" ]
2020-06-08T20:38:08Z
2019-04-16T12:06:58Z
2006.04935
Calibrated neighborhood aware confidence measure for deep metric learning
Deep metric learning has seen promising improvements in recent years following the success of deep learning. It has been successfully applied to problems in few-shot learning, image retrieval, and open-set classification. However, measuring the confidence of a deep metric learning model and identifying unreliable predictions is still an open challenge. This paper focuses on defining a calibrated and interpretable confidence metric that closely reflects the model's classification accuracy. While performing similarity comparison directly in the latent space using the learned distance metric, our approach approximates the distribution of data points for each class using a Gaussian kernel smoothing function. The post-processing calibration algorithm with the proposed confidence metric on a held-out validation dataset improves the generalization and robustness of state-of-the-art deep metric learning models while providing an interpretable estimate of the confidence. Extensive tests on four popular benchmark datasets (Caltech-UCSD Birds, Stanford Online Product, Stanford Car-196, and In-shop Clothes Retrieval) show consistent improvements even in the presence of distribution shifts in test data related to additional noise or adversarial examples.
http://arxiv.org/pdf/2006.04935v1
[ "Maryna Karpusha", "Sunghee Yun", "Istvan Fehervari" ]
2020-06-08T21:05:38Z
2020-06-08T21:05:38Z
1812.07035
On the Continuity of Rotation Representations in Neural Networks
In neural networks, it is often desirable to work with various representations of the same space. For example, 3D rotations can be represented with quaternions or Euler angles. In this paper, we advance a definition of a continuous representation, which can be helpful for training deep neural networks. We relate this to topological concepts such as homeomorphism and embedding. We then investigate what are continuous and discontinuous representations for 2D, 3D, and n-dimensional rotations. We demonstrate that for 3D rotations, all representations are discontinuous in the real Euclidean spaces of four or fewer dimensions. Thus, widely used representations such as quaternions and Euler angles are discontinuous and difficult for neural networks to learn. We show that the 3D rotations have continuous representations in 5D and 6D, which are more suitable for learning. We also present continuous representations for the general case of the n-dimensional rotation group SO(n). While our main focus is on rotations, we also show that our constructions apply to other groups such as the orthogonal group and similarity transforms. We finally present empirical results, which show that our continuous rotation representations outperform discontinuous ones for several practical problems in graphics and vision, including a simple autoencoder sanity test, a rotation estimator for 3D point clouds, and an inverse kinematics solver for 3D human poses.
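The continuous 6D rotation representation discussed above maps two 3-vectors to a rotation matrix via Gram-Schmidt; a NumPy sketch of that mapping follows (the input vector is an arbitrary example).

```python
# 6D rotation representation: orthonormalize two 3-vectors and complete the
# frame with a cross product, yielding a valid rotation matrix.
import numpy as np

def rotation_from_6d(x):
    """x: length-6 array, i.e. two 3-vectors; returns a 3x3 rotation matrix."""
    a1, a2 = x[:3], x[3:]
    b1 = a1 / np.linalg.norm(a1)
    a2_orth = a2 - np.dot(b1, a2) * b1          # remove the component along b1
    b2 = a2_orth / np.linalg.norm(a2_orth)
    b3 = np.cross(b1, b2)                       # right-handed third axis
    return np.stack([b1, b2, b3], axis=1)       # columns form the frame

R = rotation_from_6d(np.array([1.0, 0.2, -0.1, 0.0, 1.0, 0.3]))
print(np.allclose(R @ R.T, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))
```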
http://arxiv.org/pdf/1812.07035v4
[ "Yi Zhou", "Connelly Barnes", "Jingwan Lu", "Jimei Yang", "Hao Li" ]
2020-06-08T21:29:08Z
2018-12-17T20:13:17Z
2006.04960
A Notion of Individual Fairness for Clustering
A common distinction in fair machine learning, in particular in fair classification, is between group fairness and individual fairness. In the context of clustering, group fairness has been studied extensively in recent years; however, individual fairness for clustering has hardly been explored. In this paper, we propose a natural notion of individual fairness for clustering. Our notion asks that every data point, on average, is closer to the points in its own cluster than to the points in any other cluster. We study several questions related to our proposed notion of individual fairness. On the negative side, we show that deciding whether a given data set allows for such an individually fair clustering in general is NP-hard. On the positive side, for the special case of a data set lying on the real line, we propose an efficient dynamic programming approach to find an individually fair clustering. For general data sets, we investigate heuristics aimed at minimizing the number of individual fairness violations and compare them to standard clustering approaches on real data sets.
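The fairness notion defined above can be checked directly from pairwise distances; the NumPy sketch below counts how many points are, on average, closer to some other cluster than to their own. The toy data and clustering are illustrative assumptions.

```python
# Count individual-fairness violations: a point violates the notion if its
# average distance to some other cluster is smaller than its average distance
# to the other members of its own cluster.
import numpy as np

def individual_fairness_violations(X, labels):
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    violations = 0
    for i in range(len(X)):
        own = labels == labels[i]
        own[i] = False                        # exclude the point itself
        if not own.any():
            continue
        own_avg = D[i, own].mean()
        for c in np.unique(labels):
            if c == labels[i]:
                continue
            other = labels == c
            if other.any() and D[i, other].mean() < own_avg:
                violations += 1
                break
    return violations

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(4, 1, (30, 2))])
labels = np.array([0] * 30 + [1] * 30)
print(individual_fairness_violations(X, labels))   # typically 0 here
```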
http://arxiv.org/pdf/2006.04960v1
[ "Matthäus Kleindessner", "Pranjal Awasthi", "Jamie Morgenstern" ]
2020-06-08T21:41:39Z
2020-06-08T21:41:39Z
1903.03630
Imputation estimators for unnormalized models with missing data
Several statistical models are given in the form of unnormalized densities, and calculation of the normalization constant is intractable. We propose estimation methods for such unnormalized models with missing data. The key concept is to combine imputation techniques with estimators for unnormalized models including noise contrastive estimation and score matching. In addition, we derive asymptotic distributions of the proposed estimators and construct confidence intervals. Simulation results with truncated Gaussian graphical models and the application to real data of wind direction reveal that the proposed methods effectively enable statistical inference with unnormalized models from missing data.
http://arxiv.org/pdf/1903.03630v2
[ "Masatoshi Uehara", "Takeru Matsuda", "Jae Kwang Kim" ]
2020-06-08T21:51:57Z
2019-03-08T19:01:45Z
2006.04972
Multi-Fidelity High-Order Gaussian Processes for Physical Simulation
The key task of physical simulation is to solve partial differential equations (PDEs) on discretized domains, which is known to be costly. In particular, high-fidelity solutions are much more expensive than low-fidelity ones. To reduce the cost, we consider novel Gaussian process (GP) models that leverage simulation examples of different fidelities to predict high-dimensional PDE solution outputs. Existing GP methods are either not scalable to high-dimensional outputs or lack effective strategies to integrate multi-fidelity examples. To address these issues, we propose Multi-Fidelity High-Order Gaussian Process (MFHoGP) that can capture complex correlations both between the outputs and between the fidelities to enhance solution estimation, and scale to large numbers of outputs. Based on a novel nonlinear coregionalization model, MFHoGP propagates bases throughout fidelities to fuse information, and places a deep matrix GP prior over the basis weights to capture the (nonlinear) relationships across the fidelities. To improve inference efficiency and quality, we use bases decomposition to largely reduce the model parameters, and layer-wise matrix Gaussian posteriors to capture the posterior dependency and to simplify the computation. Our stochastic variational learning algorithm successfully handles millions of outputs without extra sparse approximations. We show the advantages of our method in several typical applications.
http://arxiv.org/pdf/2006.04972v1
[ "Zheng Wang", "Wei Xing", "Robert Kirby", "Shandian Zhe" ]
2020-06-08T22:31:59Z
2020-06-08T22:31:59Z
2004.10899
What are We Depressed about When We Talk about COVID19: Mental Health Analysis on Tweets Using Natural Language Processing
The recent outbreak of coronavirus disease 2019 (COVID-19) has affected human life to a great extent. Besides direct physical and economic threats, the pandemic also indirectly impacts people's mental health, which can be overwhelming but difficult to measure. The problem may stem from various causes such as unemployment, stay-at-home policies, fear of the virus, and so forth. In this work, we focus on applying natural language processing (NLP) techniques to analyze tweets in terms of mental health. We trained deep models that classify each tweet into the following emotions: anger, anticipation, disgust, fear, joy, sadness, surprise and trust. We built the EmoCT (Emotion-Covid19-Tweet) dataset for training purposes by manually labeling 1,000 English tweets. Furthermore, we propose and compare two methods to identify the reasons causing sadness and fear.
http://arxiv.org/pdf/2004.10899v3
[ "Irene Li", "Yixin Li", "Tianxiao Li", "Sergio Alvarez-Napagao", "Dario Garcia-Gasulla", "Toyotaro Suzumura" ]
2020-06-08T23:06:46Z
2020-04-22T23:45:04Z
2006.04984
Making Convolutions Resilient via Algorithm-Based Error Detection Techniques
The ability of Convolutional Neural Networks (CNNs) to accurately process real-time telemetry has boosted their use in safety-critical and high-performance computing systems. As such systems require high levels of resilience to errors, CNNs must execute correctly in the presence of hardware faults. Full duplication provides the needed assurance but incurs a prohibitive 100% overhead. Algorithmic techniques are known to offer low-cost solutions, but the practical feasibility and performance of such techniques have never been studied for CNN deployment platforms (e.g., TensorFlow or TensorRT on GPUs). In this paper, we focus on algorithmically verifying Convolutions, which are the most resource-demanding operations in CNNs. We use checksums to verify convolutions, adding a small amount of redundancy, far less than full-duplication. We first identify the challenges that arise in employing Algorithm-Based Error Detection (ABED) for Convolutions in optimized inference platforms that fuse multiple network layers and use reduced-precision operations, and demonstrate how to overcome them. We propose and evaluate variations of ABED techniques that offer implementation complexity, runtime overhead, and coverage trade-offs. Results show that ABED can detect all transient hardware errors that might otherwise corrupt output and does so while incurring low runtime overheads (6-23%), offering at least 1.6X throughput to workloads compared to full duplication.
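The checksum idea behind ABED can be illustrated on a plain linear operation: one extra, cheaply precomputed dot product predicts the sum of the outputs, so a corrupted output changes the checksum. The NumPy sketch below shows only this general principle with illustrative sizes; adapting it to fused, reduced-precision convolutions is the paper's actual contribution and is not reproduced here.

```python
# Checksum-based error detection for y = W x: precompute the column sums of W
# once, then compare (1^T W) x against sum(y) after every computation.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 128))
x = rng.normal(size=128)
w_checksum = W.sum(axis=0)            # 1^T W, computed once offline

y = W @ x                             # the "real" computation
assert np.isclose(w_checksum @ x, y.sum())         # no fault: checksums agree

y_faulty = y.copy()
y_faulty[7] += 1.0                    # inject a transient output error
print(np.isclose(w_checksum @ x, y_faulty.sum()))  # False: fault detected
```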
http://arxiv.org/pdf/2006.04984v1
[ "Siva Kumar Sastry Hari", "Michael B. Sullivan", "Timothy Tsai", "Stephen W. Keckler" ]
2020-06-08T23:17:57Z
2020-06-08T23:17:57Z
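The checksum idea from the abstract above can be illustrated in plain NumPy for a single-channel convolution expressed as a matrix product: the sum of all outputs must equal the column-summed input patches multiplied by the filter, so a mismatch flags a fault. This is only a hedged sketch of the invariant, not the paper's fused-layer or reduced-precision TensorRT implementation; the helper names are illustrative.

```python
import numpy as np

def im2col(x, k):
    # x: (H, W) single-channel image; valid padding, stride 1.
    out_h, out_w = x.shape[0] - k + 1, x.shape[1] - k + 1
    cols = np.empty((out_h * out_w, k * k))
    for idx, (i, j) in enumerate((i, j) for i in range(out_h) for j in range(out_w)):
        cols[idx] = x[i:i + k, j:j + k].ravel()       # one receptive field per row
    return cols

def checked_conv2d(x, w):
    cols = im2col(x, w.shape[0])
    w_flat = w.ravel()
    y = cols @ w_flat                                  # the convolution itself
    # ABED-style invariant: sum(y) == (column checksum of cols) @ w_flat
    checksum = cols.sum(axis=0) @ w_flat
    if not np.isclose(y.sum(), checksum, rtol=1e-5):
        raise RuntimeError("checksum mismatch: possible transient hardware error")
    return y.reshape(x.shape[0] - w.shape[0] + 1, -1)

y = checked_conv2d(np.random.rand(8, 8), np.random.rand(3, 3))
print(y.shape)                                         # (6, 6)
```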
2006.04992
Deep Stock Predictions
Forecasting stock prices can be interpreted as a time series prediction problem, for which Long Short-Term Memory (LSTM) neural networks are often used because their architecture is specifically built to solve such problems. In this paper, we consider the design of a trading strategy that performs portfolio optimization using LSTM stock price predictions for four different companies. We then customize the loss function used to train the LSTM to increase the profit earned. Moreover, we propose a data-driven approach for the optimal selection of window length and multi-step prediction length, and consider the addition of analyst calls as technical indicators to a multi-stack Bidirectional LSTM strengthened by the addition of Attention units. We find that the LSTM model with the customized loss function improves the performance of the trading bot over a regressive baseline such as ARIMA, while the addition of analyst calls improves performance for certain datasets.
http://arxiv.org/pdf/2006.04992v1
[ "Akash Doshi", "Alexander Issa", "Puneet Sachdeva", "Sina Rafati", "Somnath Rakshit" ]
2020-06-08T23:37:47Z
2020-06-08T23:37:47Z
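The abstract states that the LSTM's loss function was customized to increase profit but does not spell out its form here. One hedged possibility, assuming the targets are next-step returns rather than raw prices, is a loss that adds a penalty whenever the predicted return has the wrong sign; the sketch below wires such a loss into a small Keras LSTM and is an illustration, not the authors' exact model.

```python
import tensorflow as tf

def direction_penalized_mse(y_true, y_pred):
    # Hypothetical trading-aware loss: ordinary MSE plus an extra term whenever the
    # sign of the predicted return disagrees with the realized return.
    mse = tf.reduce_mean(tf.square(y_true - y_pred))
    wrong_side = tf.cast(tf.not_equal(tf.sign(y_true), tf.sign(y_pred)), tf.float32)
    return mse + tf.reduce_mean(wrong_side * tf.abs(y_true))

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(30, 1)),   # 30-day window, 1 feature
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss=direction_penalized_mse)
```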
2006.04996
Implicit Class-Conditioned Domain Alignment for Unsupervised Domain Adaptation
We present an approach for unsupervised domain adaptation---with a strong focus on practical considerations of within-domain class imbalance and between-domain class distribution shift---from a class-conditioned domain alignment perspective. Current methods for class-conditioned domain alignment aim to explicitly minimize a loss function based on pseudo-label estimations of the target domain. However, these methods suffer from pseudo-label bias in the form of error accumulation. We propose a method that removes the need for explicit optimization of model parameters from pseudo-labels directly. Instead, we present a sampling-based implicit alignment approach, where the sample selection procedure is implicitly guided by the pseudo-labels. Theoretical analysis reveals the existence of a domain-discriminator shortcut in misaligned classes, which is addressed by the proposed implicit alignment approach to facilitate domain-adversarial learning. Empirical results and ablation studies confirm the effectiveness of the proposed approach, especially in the presence of within-domain class imbalance and between-domain class distribution shift.
http://arxiv.org/pdf/2006.04996v1
[ "Xiang Jiang", "Qicheng Lao", "Stan Matwin", "Mohammad Havaei" ]
2020-06-09T00:20:21Z
2020-06-09T00:20:21Z
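The key mechanism in the abstract above is that pseudo-labels guide the sampling procedure rather than entering a loss directly. A bare-bones NumPy sketch of such class-conditioned sampling (the batch sizes and class subset are arbitrary choices, not the authors' exact procedure) is shown below.

```python
import numpy as np

def implicit_alignment_batch(source_labels, target_pseudo, n_classes,
                             per_class=4, n_sampled_classes=8, seed=0):
    """Draw source/target index batches that share the same classes.

    target_pseudo holds (possibly noisy) pseudo-labels; they only decide which
    indices get sampled -- no loss is computed on them directly.
    """
    rng = np.random.default_rng(seed)
    classes = rng.choice(n_classes, size=min(n_classes, n_sampled_classes), replace=False)
    src_idx, tgt_idx = [], []
    for c in classes:
        src_c = np.flatnonzero(source_labels == c)
        tgt_c = np.flatnonzero(target_pseudo == c)
        if len(src_c) == 0 or len(tgt_c) == 0:
            continue                      # class currently absent under pseudo-labels
        src_idx.extend(rng.choice(src_c, per_class, replace=True))
        tgt_idx.extend(rng.choice(tgt_c, per_class, replace=True))
    return np.array(src_idx), np.array(tgt_idx)

src = np.random.randint(0, 10, size=1000)        # true source labels
tgt = np.random.randint(0, 10, size=800)         # pseudo-labels on the target domain
s_batch, t_batch = implicit_alignment_batch(src, tgt, n_classes=10)
print(len(s_batch), len(t_batch))
```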
2006.04803
A two-level solution to fight against dishonest opinions in recommendation-based trust systems
In this paper, we propose a mechanism to deal with dishonest opinions in recommendation-based trust models, at both the collection and processing levels. We consider a scenario in which an agent requests recommendations from multiple parties to build trust toward another agent. At the collection level, we propose to allow agents to self-assess the accuracy of their recommendations and autonomously decide whether to participate in the recommendation process or not. At the processing level, we propose a recommendation aggregation technique that is resilient to collusion attacks, followed by a credibility update mechanism for the participating agents. The originality of our work stems from its consideration of dishonest opinions at both the collection and processing levels, which allows for better and more persistent protection against dishonest recommenders. Experiments conducted on the Epinions dataset show that our solution yields better performance in protecting the recommendation process against Sybil attacks, in comparison with a competing model that derives the optimal network of advisors based on the agents' trust values.
http://arxiv.org/pdf/2006.04803v1
[ "Omar Abdel Wahab", "Jamal Bentahar", "Robin Cohen", "Hadi Otrok", "Azzam Mourad" ]
2020-06-09T00:34:11Z
2020-06-09T00:34:11Z
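As a hedged sketch of the processing-level idea in the abstract above (robust aggregation of recommendations followed by a credibility update), the snippet below uses a median as the robust reference and a simple convex credibility update; the exact aggregation and update rules of the paper are not reproduced, and all constants are placeholders.

```python
import numpy as np

def aggregate_and_update(recommendations, credibility, lr=0.1):
    """recommendations: dict agent -> reported trust score in [0, 1];
    credibility: dict agent -> weight in (0, 1], updated after aggregation."""
    agents = list(recommendations)
    scores = np.array([recommendations[a] for a in agents])
    weights = np.array([credibility[a] for a in agents])

    robust_center = np.median(scores)                 # robust to a colluding minority
    aggregated = np.average(scores, weights=weights)  # credibility-weighted opinion

    # Pull each agent's credibility toward its agreement with the robust center.
    for agent, score in zip(agents, scores):
        agreement = 1.0 - abs(score - robust_center)
        credibility[agent] = (1 - lr) * credibility[agent] + lr * agreement
    return aggregated, credibility

recs = {"a1": 0.80, "a2": 0.75, "a3": 0.10}   # a3 may be colluding
cred = {"a1": 0.90, "a2": 0.80, "a3": 0.70}
print(aggregate_and_update(recs, cred))
```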
2003.08978
Generating new concepts with hybrid neuro-symbolic models
Human conceptual knowledge supports the ability to generate novel yet highly structured concepts, and the form of this conceptual knowledge is of great interest to cognitive scientists. One tradition has emphasized structured knowledge, viewing concepts as embedded in intuitive theories or organized in complex symbolic knowledge structures. A second tradition has emphasized statistical knowledge, viewing conceptual knowledge as emerging from the rich correlational structure captured by training neural networks and other statistical models. In this paper, we explore a synthesis of these two traditions through a novel neuro-symbolic model for generating new concepts. Using simple visual concepts as a testbed, we bring together neural networks and symbolic probabilistic programs to learn a generative model of novel handwritten characters. Two alternative models are explored with more generic neural network architectures. We compare each of these three models for their likelihoods on held-out character classes and for the quality of their productions, finding that our hybrid model learns the most convincing representation and generalizes further from the training observations.
http://arxiv.org/pdf/2003.08978v3
[ "Reuben Feinman", "Brenden M. Lake" ]
2020-06-09T01:31:57Z
2020-03-19T18:45:56Z
2004.13843
Template-based Question Answering using Recursive Neural Networks
We propose a neural network-based approach to automatically learn and classify natural language questions into their corresponding templates using recursive neural networks. An obvious advantage of using neural networks is the elimination of the need for laborious feature engineering, which can be cumbersome and error-prone. The input question is encoded into a vector representation. The model is trained and evaluated on the LC-QuAD dataset (Large-scale Complex Question Answering Dataset). The LC-QuAD queries are annotated based on 38 unique templates that the model attempts to classify. The resulting model is evaluated against both the LC-QuAD dataset and the 7th Question Answering Over Linked Data (QALD-7) dataset. The recursive neural network achieves a template classification accuracy of 0.828 on the LC-QuAD dataset and an accuracy of 0.618 on the QALD-7 dataset. When the top-2 most likely templates are considered, the model achieves an accuracy of 0.945 on the LC-QuAD dataset and 0.786 on the QALD-7 dataset. After slot filling, the overall system achieves a macro F-score of 0.419 on the LC-QuAD dataset and a macro F-score of 0.417 on the QALD-7 dataset.
http://arxiv.org/pdf/2004.13843v3
[ "Ram G Athreya", "Srividya Bansal", "Axel-Cyrille Ngonga Ngomo", "Ricardo Usbeck" ]
2020-06-09T01:41:26Z
2020-04-03T18:14:39Z
2006.05018
Deep learning to estimate the physical proportion of infected region of lung for COVID-19 pneumonia with CT image set
Utilizing computed tomography (CT) images to quickly estimate the severity of COVID-19 cases is one of the most straightforward and efficacious methods. Two tasks were studied in this paper. One was to segment the mask of the intact lung in cases of pneumonia. The other was to generate the masks of regions infected by COVID-19. The masks of these two parts of the images were then converted to corresponding volumes to calculate the physical proportion of the infected region of the lung. A total of 129 CT image sets were collected and studied. The intrinsic Hounsfield values of the CT images were first utilized to generate initial, noisy versions of labeled masks for both the intact lung and the infected regions. Then, the samples were carefully adjusted and improved by two professional radiologists to generate the final training set and test benchmark. Two deep learning models were evaluated: UNet and 2.5D UNet. For the segmentation of infected regions, a deep-learning-based classifier was applied afterwards to remove unrelated blur-edged regions that were wrongly segmented, such as air tubes and blood vessel tissue. For the segmented masks of the intact lung and the infected regions, the best method achieved mean Dice similarity coefficients of 0.972 and 0.757, respectively, on our test benchmark. For the overall proportion of the infected region of the lung, the final result showed a Pearson's correlation coefficient of 0.961 and a mean absolute percent error of 11.7%. The computed proportion of infected regions of the lung could be used as visual evidence to assist clinical physicians in determining the severity of a case. Furthermore, a quantified report of infected regions can help predict the prognosis for COVID-19 cases that were scanned periodically within the treatment cycle.
http://arxiv.org/pdf/2006.05018v1
[ "Wei Wu", "Yu Shi", "Xukun Li", "Yukun Zhou", "Peng Du", "Shuangzhi Lv", "Tingbo Liang", "Jifang Sheng" ]
2020-06-09T02:38:40Z
2020-06-09T02:38:40Z
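Two quantities reported in the abstract above, the Dice similarity coefficient of the segmentation masks and the physical proportion of infected lung, are straightforward to compute once the masks exist. The NumPy sketch below assumes boolean 3-D masks and a placeholder voxel volume; it is an illustration of the metrics, not the paper's pipeline.

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    # pred, target: boolean masks of identical shape (e.g., thresholded UNet outputs).
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def infected_proportion(lung_mask, infection_mask, voxel_volume_mm3=1.0):
    # Physical proportion = infected volume inside the lung / intact-lung volume.
    lung_vol = lung_mask.sum() * voxel_volume_mm3
    infected_vol = np.logical_and(infection_mask, lung_mask).sum() * voxel_volume_mm3
    return infected_vol / max(lung_vol, 1e-7)

lung = np.zeros((16, 64, 64), dtype=bool); lung[:, 8:56, 8:56] = True
lesion = np.zeros_like(lung); lesion[4:8, 20:30, 20:30] = True
print(dice(lesion, lesion), infected_proportion(lung, lesion))
```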
1905.04833
Learning and Planning in the Feature Deception Problem
Today's high-stakes adversarial interactions feature attackers who constantly breach the ever-improving security measures. Deception mitigates the defender's loss by misleading the attacker to make suboptimal decisions. In order to formally reason about deception, we introduce the feature deception problem (FDP), a domain-independent model and present a learning and planning framework for finding the optimal deception strategy, taking into account the adversary's preferences which are initially unknown to the defender. We make the following contributions. (1) We show that we can uniformly learn the adversary's preferences using data from a modest number of deception strategies. (2) We propose an approximation algorithm for finding the optimal deception strategy given the learned preferences and show that the problem is NP-hard. (3) We perform extensive experiments to validate our methods and results. In addition, we provide a case study of the credit bureau network to illustrate how FDP implements deception on a real-world problem.
http://arxiv.org/pdf/1905.04833v2
[ "Zheyuan Ryan Shi", "Ariel D. Procaccia", "Kevin S. Chan", "Sridhar Venkatesan", "Noam Ben-Asher", "Nandi O. Leslie", "Charles Kamhoua", "Fei Fang" ]
2020-06-09T02:54:40Z
2019-05-13T02:18:45Z
2006.03736
GroupIM: A Mutual Information Maximization Framework for Neural Group Recommendation
We study the problem of making item recommendations to ephemeral groups, which comprise users with limited or no historical activities together. Existing studies target persistent groups with substantial activity history, while ephemeral groups lack historical interactions. To overcome group interaction sparsity, we propose data-driven regularization strategies to exploit both the preference covariance amongst users who are in the same group, as well as the contextual relevance of users' individual preferences to each group. We make two contributions. First, we present a recommender architecture-agnostic framework GroupIM that can integrate arbitrary neural preference encoders and aggregators for ephemeral group recommendation. Second, we regularize the user-group latent space to overcome group interaction sparsity by: maximizing mutual information between representations of groups and group members; and dynamically prioritizing the preferences of highly informative members through contextual preference weighting. Our experimental results on several real-world datasets indicate significant performance improvements (31-62% relative NDCG@20) over state-of-the-art group recommendation techniques.
http://arxiv.org/abs/2006.03736v2
[ "Aravind Sankar", "Yanhong Wu", "Yuhang Wu", "Wei Zhang", "Hao Yang", "Hari Sundaram" ]
2020-06-09T03:13:09Z
2020-06-05T23:18:19Z
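Mutual information between group and member representations, as described in the abstract above, is commonly maximized with an InfoNCE-style contrastive objective; the NumPy sketch below is such a generic estimator (true members scored against sampled non-members), not GroupIM's exact discriminator, and assumes L2-normalized embeddings.

```python
import numpy as np

def group_member_infonce(group_emb, member_embs, negative_embs, temperature=0.1):
    """Score each true member of a group against randomly drawn non-members."""
    pos = member_embs @ group_emb / temperature          # (n_members,)
    neg = negative_embs @ group_emb / temperature        # (n_negatives,)
    # For every true member: softmax over {that member} U {all negatives}.
    losses = -(pos - np.log(np.exp(pos) + np.exp(neg).sum()))
    return losses.mean()

rng = np.random.default_rng(0)
normalize = lambda v: v / np.linalg.norm(v, axis=-1, keepdims=True)
group = normalize(rng.normal(size=8))
members = normalize(rng.normal(size=(3, 8)))             # users in the group
negatives = normalize(rng.normal(size=(20, 8)))          # sampled non-members
print(group_member_infonce(group, members, negatives))
```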
2006.05028
Online Page Migration with ML Advice
We consider online algorithms for the \emph{page migration problem} that use predictions, potentially imperfect, to improve their performance. The best known online algorithms for this problem, due to Westbrook'94 and Bienkowski et al'17, have competitive ratios strictly bounded away from 1. In contrast, we show that if the algorithm is given a prediction of the input sequence, then it can achieve a competitive ratio that tends to $1$ as the prediction error rate tends to $0$. Specifically, the competitive ratio is equal to $1+O(q)$, where $q$ is the prediction error rate. We also design a ``fallback option'' that ensures that the competitive ratio of the algorithm for \emph{any} input sequence is at most $O(1/q)$. Our result adds to the recent body of work that uses machine learning to improve the performance of ``classic'' algorithms.
http://arxiv.org/pdf/2006.05028v1
[ "Piotr Indyk", "Frederik Mallmann-Trenn", "Slobodan Mitrović", "Ronitt Rubinfeld" ]
2020-06-09T03:15:34Z
2020-06-09T03:15:34Z
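One standard way to obtain the kind of worst-case "fallback" guarantee mentioned in the abstract above is to run the prediction-based strategy alongside a classical robust algorithm and switch permanently once the former's accumulated cost exceeds a constant multiple of the latter's. The sketch below shows that generic pattern only; it ignores the state-transfer cost of switching and is not the paper's actual construction. The step interface and the dummy algorithm class are assumptions for illustration.

```python
def serve_with_fallback(requests, predicted_alg, robust_alg, factor=4.0):
    """Follow predicted_alg while its running cost stays within `factor` times the
    running cost of robust_alg; otherwise fall back for good. Both algorithms are
    assumed to expose step(request) -> cost."""
    cost_pred = cost_robust = total = 0.0
    use_prediction = True
    for r in requests:
        c_pred, c_rob = predicted_alg.step(r), robust_alg.step(r)
        cost_pred += c_pred
        cost_robust += c_rob
        if use_prediction and cost_pred > factor * max(cost_robust, 1e-9):
            use_prediction = False            # permanent switch to the robust algorithm
        total += c_pred if use_prediction else c_rob
    return total

class ConstantCostAlg:
    """Stand-in online algorithm paying a fixed cost per request (demo only)."""
    def __init__(self, cost):
        self.cost = cost
    def step(self, request):
        return self.cost

print(serve_with_fallback(range(100), ConstantCostAlg(1.0), ConstantCostAlg(2.0)))
```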
2006.05030
High Tissue Contrast MRI Synthesis Using Multi-Stage Attention-GAN for Glioma Segmentation
Magnetic resonance imaging (MRI) provides varying tissue contrast images of internal organs based on a strong magnetic field. Despite the non-invasive advantage of MRI in frequent imaging, the low contrast MR images in the target area make tissue segmentation a challenging problem. This paper demonstrates the potential benefits of image-to-image translation techniques to generate synthetic high tissue contrast (HTC) images. Notably, we adopt a new cycle generative adversarial network (CycleGAN) with an attention mechanism to increase the contrast within underlying tissues. The attention block, as well as training on HTC images, guides our model to converge on certain tissues. To increase the resolution of HTC images, we employ a multi-stage architecture to focus on one particular tissue as the foreground and filter out the irrelevant background in each stage. This multi-stage structure also alleviates the common artifacts of synthetic images by decreasing the gap between source and target domains. We show the application of our method for synthesizing HTC images on brain MR scans, including glioma tumors. We also employ HTC MR images in both the end-to-end and two-stage segmentation structures to confirm the effectiveness of these images. The experiments over three competitive segmentation baselines on the BraTS 2018 dataset indicate that incorporating the synthetic HTC images in the multi-modal segmentation framework improves the average Dice scores by 0.8%, 0.6%, and 0.5% on the whole tumor, tumor core, and enhancing tumor, respectively, while eliminating one real MRI sequence from the segmentation procedure.
http://arxiv.org/pdf/2006.05030v1
[ "Mohammad Hamghalam", "Baiying Lei", "Tianfu Wang" ]
2020-06-09T03:21:30Z
2020-06-09T03:21:30Z
2006.05031
Multi-split Optimized Bagging Ensemble Model Selection for Multi-class Educational Data Mining
Predicting students' academic performance has been a research area of interest in recent years, with many institutions focusing on improving students' performance and education quality. The analysis and prediction of students' performance can be achieved using various data mining techniques. Moreover, such techniques allow instructors to determine possible factors that may affect the students' final marks. To that end, this work analyzes two different undergraduate datasets from two different universities. Furthermore, this work aims to predict the students' performance at two stages of course delivery (20% and 50%, respectively). This analysis allows for properly choosing the appropriate machine learning algorithms to use as well as optimizing the algorithms' parameters. Furthermore, this work adopts a systematic multi-split approach based on the Gini index and p-value. This is done by optimizing a suitable bagging ensemble learner that is built from any combination of six potential base machine learning algorithms. Experimental results show that the proposed bagging ensemble models achieve high accuracy for the target group on both datasets.
http://arxiv.org/abs/2006.05031v1
[ "MohammadNoor Injadat", "Abdallah Moubayed", "Ali Bou Nassif", "Abdallah Shami" ]
2020-06-09T03:22:33Z
2020-06-09T03:22:33Z
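The paper selects an optimized bagging ensemble built from combinations of six base learners. A hedged, much simplified sketch of that selection step (plain cross-validation with scikit-learn, three candidate base learners, no Gini/p-value multi-split) is shown below; dataset and hyperparameters are placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Placeholder data standing in for the two undergraduate course datasets.
X, y = make_classification(n_samples=500, n_classes=3, n_informative=6, random_state=0)

candidates = {
    "tree": DecisionTreeClassifier(random_state=0),
    "logreg": LogisticRegression(max_iter=1000),
    "naive_bayes": GaussianNB(),
}
scores = {name: cross_val_score(BaggingClassifier(base, n_estimators=25, random_state=0),
                                X, y, cv=5).mean()
          for name, base in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "-> selected base learner:", best)
```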
2006.04245
Adversarial Optimal Transport Through The Convolution Of Kernels With Evolving Measures
A novel algorithm is proposed to solve the sample-based optimal transport problem. An adversarial formulation of the push-forward condition uses a test function built as a convolution between an adaptive kernel and an evolving probability distribution $\nu$ over a latent variable $b$. Approximating this convolution by its simulation over evolving samples $b^i(t)$ of $\nu$, the parameterization of the test function reduces to determining the flow of these samples. This flow, discretized over discrete time steps $t_n$, is built from the composition of elementary maps. The optimal transport also follows a flow that, by duality, must follow the gradient of the test function. The representation of the test function as the Monte Carlo simulation of a distribution makes the algorithm robust to dimensionality, and its evolution under a memory-less flow produces rich, complex maps from simple parametric transformations. The algorithm is illustrated with numerical examples.
http://arxiv.org/pdf/2006.04245v2
[ "Daeyoung Kim", "Esteban G. Tabak" ]
2020-06-09T03:32:00Z
2020-06-07T19:42:50Z
2006.05038
Coverage probability in wireless networks with determinantal scheduling
We propose a new class of algorithms for randomly scheduling network transmissions. The idea is to use (discrete) determinantal point processes (subsets) to randomly assign medium access to various \emph{repulsive} subsets of potential transmitters. This approach can be seen as a natural extension of (spatial) Aloha, which schedules transmissions independently. Under a general path loss model and Rayleigh fading, we show that, similarly to Aloha, these schemes admit an elegant analysis of the coverage probabilities and transmission attempts (also known as local delay). This is mainly due to the explicit, determinantal form of the conditional (Palm) distribution and closed-form expressions for the Laplace functional of determinantal processes. Interestingly, the derived performance characteristics of the network are amenable to various optimizations of the scheduling parameters, which are determinantal kernels, allowing the use of techniques developed for statistical learning with determinantal processes. Well-established sampling algorithms for determinantal processes can be used to cope with implementation issues, which is beyond the scope of this paper but creates paths for further research.
http://arxiv.org/pdf/2006.05038v1
[ "Bartek Błaszczyszyn", "Antoine Brochard", "H. Paul Keeler" ]
2020-06-09T04:05:50Z
2020-06-09T04:05:50Z
2003.09372
One Neuron to Fool Them All
Despite vast research in adversarial examples, the root causes of model susceptibility are not well understood. Instead of looking at attack-specific robustness, we propose a notion that evaluates the sensitivity of individual neurons in terms of how robust the model's output is to direct perturbations of that neuron's output. Analyzing models from this perspective reveals distinctive characteristics of standard as well as adversarially-trained robust models, and leads to several curious results. In our experiments on CIFAR-10 and ImageNet, we find that attacks using a loss function that targets just a single sensitive neuron find adversarial examples nearly as effectively as ones that target the full model. We analyze the properties of these sensitive neurons to propose a regularization term that can help a model achieve robustness to a variety of different perturbation constraints while maintaining accuracy on natural data distributions. Code for all our experiments is available at https://github.com/iamgroot42/sauron .
http://arxiv.org/pdf/2003.09372v2
[ "Anshuman Suri", "David Evans" ]
2020-06-09T04:35:30Z
2020-03-20T16:49:38Z
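The neuron-sensitivity notion from the abstract above can be probed directly: perturb one hidden unit's activation with a forward hook and measure how much the model's outputs move. The PyTorch sketch below does this for a toy MLP; it is an illustrative proxy, not the paper's exact sensitivity measure or its single-neuron attack.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
x = torch.randn(64, 10)

def neuron_sensitivity(model, x, layer_idx=1, neuron=0, delta=1.0):
    """Add `delta` to one post-ReLU activation and return the mean change in logits."""
    with torch.no_grad():
        base = model(x)

    def hook(module, inputs, output):
        output = output.clone()
        output[:, neuron] += delta           # perturb a single neuron's output
        return output

    handle = model[layer_idx].register_forward_hook(hook)
    with torch.no_grad():
        perturbed = model(x)
    handle.remove()
    return (perturbed - base).abs().mean().item()

print(neuron_sensitivity(model, x, neuron=3))
```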
2006.05044
Neural Physicist: Learning Physical Dynamics from Image Sequences
We present a novel architecture named Neural Physicist (NeurPhy) to learn physical dynamics directly from image sequences using deep neural networks. For any physical system, given the global system parameters, the time evolution of states is governed by the underlying physical laws. How to learn meaningful system representations in an end-to-end way and estimate accurate state transition dynamics facilitating long-term prediction have been long-standing challenges. In this paper, by leveraging recent progress in representation learning and state space models (SSMs), we propose NeurPhy, which uses a variational auto-encoder (VAE) to extract the underlying Markovian dynamic state at each time step, a neural process (NP) to extract the global system parameters, and a non-linear non-recurrent stochastic state space model to learn the physical dynamic transition. We apply NeurPhy to two physical experimental environments, i.e., the damped pendulum and planetary orbital motion, and achieve promising results. Our model can not only extract physically meaningful state representations, but also learn the state transition dynamics enabling long-term predictions for unseen image sequences. Furthermore, from the manifold dimension of the latent state space, we can easily identify the degree of freedom (DoF) of the underlying physical systems.
http://arxiv.org/pdf/2006.05044v1
[ "Baocheng Zhu", "Shijun Wang", "James Zhang" ]
2020-06-09T04:36:51Z
2020-06-09T04:36:51Z
2006.04027
Efficient Architecture Search for Continual Learning
Continual learning with neural networks is an important learning framework in AI that aims to learn a sequence of tasks well. However, it is often confronted with three challenges: (1) overcoming the catastrophic forgetting problem, (2) adapting the current network to new tasks, and (3) controlling its model complexity. To reach these goals, we propose a novel approach named Continual Learning with Efficient Architecture Search, or CLEAS in short. CLEAS works closely with neural architecture search (NAS), which leverages reinforcement learning techniques to search for the best neural architecture that fits a new task. In particular, we design a neuron-level NAS controller that decides which old neurons from previous tasks should be reused (knowledge transfer) and which new neurons should be added (to learn new knowledge). Such a fine-grained controller allows one to find a very concise architecture that can fit each new task well. Meanwhile, since we do not alter the weights of the reused neurons, we perfectly preserve the knowledge learned from previous tasks. We evaluate CLEAS on numerous sequential classification tasks, and the results demonstrate that CLEAS outperforms other state-of-the-art alternative methods, achieving higher classification accuracy while using simpler neural architectures.
http://arxiv.org/pdf/2006.04027v2
[ "Qiang Gao", "Zhipeng Luo", "Diego Klabjan" ]
2020-06-09T04:54:11Z
2020-06-07T02:59:29Z
2006.01895
Learning to Branch for Multi-Task Learning
Training multiple tasks jointly in one deep network yields reduced latency during inference and better performance over the single-task counterpart by sharing certain layers of a network. However, over-sharing a network could erroneously enforce over-generalization, causing negative knowledge transfer across tasks. Prior works rely on human intuition or pre-computed task relatedness scores for ad hoc branching structures. They provide sub-optimal end results and often require huge efforts for the trial-and-error process. In this work, we present an automated multi-task learning algorithm that learns where to share or branch within a network, designing an effective network topology that is directly optimized for multiple objectives across tasks. Specifically, we propose a novel tree-structured design space that casts a tree branching operation as a gumbel-softmax sampling procedure. This enables differentiable network splitting that is end-to-end trainable. We validate the proposed method on controlled synthetic data, CelebA, and Taskonomy.
http://arxiv.org/pdf/2006.01895v2
[ "Pengsheng Guo", "Chen-Yu Lee", "Daniel Ulbricht" ]
2020-06-09T05:18:55Z
2020-06-02T19:23:21Z
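The branching operation described in the abstract above is cast as Gumbel-Softmax sampling; the NumPy sketch below shows the standard Gumbel-Softmax relaxation over a set of candidate branch points (the relaxation itself, not the paper's tree-structured controller).

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Relaxed one-hot sample over choices; as tau -> 0 it approaches a hard argmax."""
    rng = np.random.default_rng() if rng is None else rng
    gumbel = -np.log(-np.log(rng.uniform(1e-10, 1.0, size=logits.shape)))
    y = (logits + gumbel) / tau
    y = y - y.max()                          # numerical stability
    probs = np.exp(y)
    return probs / probs.sum()

branch_logits = np.array([1.2, 0.3, -0.5])   # scores for 3 candidate branch points
print(gumbel_softmax(branch_logits, tau=0.5))
```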
2006.05061
ProcData: An R Package for Process Data Analysis
Process data refer to data recorded in the log files of computer-based items. These data, represented as timestamped action sequences, keep track of respondents' response processes of solving the items. Process data analysis aims at enhancing educational assessment accuracy and serving other assessment purposes by utilizing the rich information contained in response processes. The R package ProcData presented in this article is designed to provide tools for processing, describing, and analyzing process data. We define an S3 class "proc" for organizing process data and extend generic methods summary and print for class "proc". Two feature extraction methods for process data are implemented in the package for compressing information in the irregular response processes into regular numeric vectors. ProcData also provides functions for fitting and making predictions from a neural-network-based sequence model. These functions call relevant functions in package keras for constructing and training neural networks. In addition, several response process generators and a real dataset of response processes of the climate control item in the 2012 Programme for International Student Assessment are included in the package.
http://arxiv.org/pdf/2006.05061v1
[ "Xueying Tang", "Susu Zhang", "Zhi Wang", "Jingchen Liu", "Zhiliang Ying" ]
2020-06-09T05:44:57Z
2020-06-09T05:44:57Z
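ProcData itself is an R package; as a language-consistent illustration only (not the package's API or its two feature extraction methods), the Python sketch below compresses one timestamped action sequence into a small fixed-length feature vector of action counts and timing summaries.

```python
from collections import Counter

def sequence_features(actions, timestamps, vocabulary):
    """Turn one response process (actions + timestamps) into a fixed-length vector:
    per-action counts over a fixed vocabulary, total response time, mean action gap."""
    counts = Counter(actions)
    count_vec = [counts.get(a, 0) for a in vocabulary]
    total_time = timestamps[-1] - timestamps[0] if len(timestamps) > 1 else 0.0
    gaps = [t2 - t1 for t1, t2 in zip(timestamps, timestamps[1:])]
    mean_gap = sum(gaps) / len(gaps) if gaps else 0.0
    return count_vec + [total_time, mean_gap]

vocab = ["start", "apply", "reset", "end"]       # hypothetical action vocabulary
print(sequence_features(["start", "apply", "apply", "end"], [0.0, 2.5, 4.0, 9.0], vocab))
```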
2006.05071
C-SL: Contrastive Sound Localization with Inertial-Acoustic Sensors
The human brain employs perceptual information about head and eye movements to update the spatial relationship between the individual and the surrounding environment. Based on this cognitive process, known as spatial updating, we introduce contrastive sound localization (C-SL) with mobile inertial-acoustic sensor arrays of arbitrary geometry. C-SL uses unlabeled multi-channel audio recordings and inertial measurement unit (IMU) readings collected during free rotational movements of the array to learn mappings from acoustical measurements to an array-centered direction-of-arrival (DOA) in a self-supervised manner. In contrast to conventional DOA estimation methods that require knowledge of either the array geometry or the source locations in the calibration stage, C-SL is agnostic to both and can be trained on data collected in minimally constrained settings. To achieve this capability, our proposed method utilizes a customized contrastive loss measuring the spatial contrast between source locations predicted for disjoint segments of the input to jointly update the estimated DOAs and the acoustic-spatial mapping in linear time. We provide quantitative and qualitative evaluations of C-SL comparing its performance with baseline DOA estimation methods in a wide range of conditions. We believe the relaxed calibration process offered by C-SL paves the way toward truly personalized augmented hearing applications.
http://arxiv.org/pdf/2006.05071v1
[ "Majid Mirbagheri", "Bardia Doosti" ]
2020-06-09T06:36:44Z
2020-06-09T06:36:44Z