Schema (fields appear in this order for each record below):
categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
cs.LG cs.DS math.ST stat.TH
null
1507.00710
null
null
http://arxiv.org/pdf/1507.00710v2
2015-11-11T17:14:21Z
2015-07-02T19:42:05Z
Fast, Provable Algorithms for Isotonic Regression in all $\ell_{p}$-norms
Given a directed acyclic graph $G,$ and a set of values $y$ on the vertices, the Isotonic Regression of $y$ is a vector $x$ that respects the partial order described by $G,$ and minimizes $||x-y||,$ for a specified norm. This paper gives improved algorithms for computing the Isotonic Regression for all weighted $\ell_{p}$-norms with rigorous performance guarantees. Our algorithms are quite practical, and their variants can be implemented to run fast in practice.
[ "['Rasmus Kyng' 'Anup Rao' 'Sushant Sachdeva']", "Rasmus Kyng and Anup Rao and Sushant Sachdeva" ]
cs.AI cs.LG stat.ML
null
1507.00814
null
null
http://arxiv.org/pdf/1507.00814v3
2015-11-19T22:40:30Z
2015-07-03T04:11:15Z
Incentivizing Exploration In Reinforcement Learning With Deep Predictive Models
Achieving efficient and scalable exploration in complex domains poses a major challenge in reinforcement learning. While Bayesian and PAC-MDP approaches to the exploration problem offer strong formal guarantees, they are often impractical in higher dimensions due to their reliance on enumerating the state-action space. Hence, exploration in complex domains is often performed with simple epsilon-greedy methods. In this paper, we consider the challenging Atari games domain, which requires processing raw pixel inputs and delayed rewards. We evaluate several more sophisticated exploration strategies, including Thompson sampling and Boltzmann exploration, and propose a new exploration method based on assigning exploration bonuses from a concurrently learned model of the system dynamics. By parameterizing our learned model with a neural network, we are able to develop a scalable and efficient approach to exploration bonuses that can be applied to tasks with complex, high-dimensional state spaces. In the Atari domain, our method provides the most consistent improvement across a range of games that pose a major challenge for prior methods. In addition to raw game-scores, we also develop an AUC-100 metric for the Atari Learning domain to evaluate the impact of exploration on this benchmark.
[ "['Bradly C. Stadie' 'Sergey Levine' 'Pieter Abbeel']", "Bradly C. Stadie, Sergey Levine, Pieter Abbeel" ]
cs.LG stat.ML
null
1507.00824
null
null
http://arxiv.org/pdf/1507.00824v1
2015-07-03T06:14:26Z
2015-07-03T06:14:26Z
D-MFVI: Distributed Mean Field Variational Inference using Bregman ADMM
Bayesian models provide a framework for probabilistic modelling of complex datasets. However, many such models are computationally demanding, especially in the presence of large datasets. On the other hand, in sensor network applications, statistical (Bayesian) parameter estimation usually needs distributed algorithms, in which both data and computation are distributed across the nodes of the network. In this paper we propose a general framework for distributed Bayesian learning using Bregman Alternating Direction Method of Multipliers (B-ADMM). We demonstrate the utility of our framework, with Mean Field Variational Bayes (MFVB) as the primitive, for distributed Matrix Factorization (MF) and distributed affine structure from motion (SfM).
[ "['Behnam Babagholami-Mohamadabadi' 'Sejong Yoon' 'Vladimir Pavlovic']", "Behnam Babagholami-Mohamadabadi, Sejong Yoon, Vladimir Pavlovic" ]
cs.LG stat.ML
null
1507.00825
null
null
http://arxiv.org/pdf/1507.00825v1
2015-07-03T06:18:36Z
2015-07-03T06:18:36Z
Ridge Regression, Hubness, and Zero-Shot Learning
This paper discusses the effect of hubness in zero-shot learning, when ridge regression is used to find a mapping from the example space to the label space. Contrary to the existing approach, which attempts to find a mapping from the example space to the label space, we show that mapping labels into the example space is desirable to suppress the emergence of hubs in the subsequent nearest neighbor search step. Assuming a simple data model, we prove that the proposed approach indeed reduces hubness. This was verified empirically on the tasks of bilingual lexicon extraction and image labeling: hubness was reduced with both of these tasks and the accuracy was improved accordingly.
[ "Yutaro Shigeto, Ikumi Suzuki, Kazuo Hara, Masashi Shimbo, Yuji\n Matsumoto", "['Yutaro Shigeto' 'Ikumi Suzuki' 'Kazuo Hara' 'Masashi Shimbo'\n 'Yuji Matsumoto']" ]
cs.CV cs.LG stat.ML
10.1155/2015/824289
1507.00908
null
null
http://arxiv.org/abs/1507.00908v1
2015-07-03T13:30:41Z
2015-07-03T13:30:41Z
LogDet Rank Minimization with Application to Subspace Clustering
A low-rank matrix is desired in many machine learning and computer vision problems. Most of the recent studies use the nuclear norm as a convex surrogate of the rank operator. However, all singular values are simply added together by the nuclear norm, and thus the rank may not be well approximated in practical problems. In this paper, we propose to use a log-determinant (LogDet) function as a smooth and closer, though non-convex, approximation to rank for obtaining a low-rank representation in subspace clustering. An augmented Lagrange multiplier strategy is applied to iteratively optimize the LogDet-based non-convex objective function on potentially large-scale data. By making use of the angular information of the principal directions of the resultant low-rank representation, an affinity graph matrix is constructed for spectral clustering. Experimental results on motion segmentation and face clustering data demonstrate that the proposed method often outperforms state-of-the-art subspace clustering algorithms.
[ "['Zhao Kang' 'Chong Peng' 'Jie Cheng' 'Qiang Chen']", "Zhao Kang, Chong Peng, Jie Cheng and Qiang Chen" ]
cs.CL cs.IR cs.LG stat.ME stat.ML
null
1507.00955
null
null
http://arxiv.org/pdf/1507.00955v3
2015-09-18T11:44:33Z
2015-07-03T15:46:55Z
Twitter Sentiment Analysis: Lexicon Method, Machine Learning Method and Their Combination
This paper covers the two approaches for sentiment analysis: i) the lexicon-based method; ii) the machine learning method. We describe several techniques to implement these approaches and discuss how they can be adopted for sentiment classification of Twitter messages. We present a comparative study of different lexicon combinations and show that enhancing sentiment lexicons with emoticons, abbreviations and social-media slang expressions increases the accuracy of lexicon-based classification for Twitter. We discuss the importance of feature generation and feature selection processes for machine learning sentiment classification. To quantify the performance of the main sentiment analysis methods over Twitter we run these algorithms on a benchmark Twitter dataset from the SemEval-2013 competition, task 2-B. The results show that the machine learning method based on SVM and Naive Bayes classifiers outperforms the lexicon method. We present a new ensemble method that uses a lexicon-based sentiment score as an input feature for the machine learning approach. The combined method proved to produce more precise classifications. We also show that employing a cost-sensitive classifier for highly unbalanced datasets yields an improvement of sentiment classification performance of up to 7%.
[ "Olga Kolchyna, Tharsis T. P. Souza, Philip Treleaven, Tomaso Aste", "['Olga Kolchyna' 'Tharsis T. P. Souza' 'Philip Treleaven' 'Tomaso Aste']" ]
cs.NE cs.CL cs.CV cs.LG
10.1109/TMM.2015.2477044
1507.01053
null
null
http://arxiv.org/abs/1507.01053v1
2015-07-04T01:06:16Z
2015-07-04T01:06:16Z
Describing Multimedia Content using Attention-based Encoder--Decoder Networks
Whereas deep neural networks were first mostly used for classification tasks, they are rapidly expanding in the realm of structured output problems, where the observed target is composed of multiple random variables that have a rich joint distribution, given the input. We focus in this paper on the case where the input also has a rich structure and the input and output structures are somehow related. We describe systems that learn to attend to different places in the input, for each element of the output, for a variety of tasks: machine translation, image caption generation, video clip description and speech recognition. All these systems are based on a shared set of building blocks: gated recurrent neural networks and convolutional neural networks, along with trained attention mechanisms. We report on experimental results with these systems, showing impressively good performance and the advantage of the attention mechanism.
[ "['Kyunghyun Cho' 'Aaron Courville' 'Yoshua Bengio']", "Kyunghyun Cho, Aaron Courville, Yoshua Bengio" ]
stat.ML cs.LG
null
1507.01073
null
null
http://arxiv.org/pdf/1507.01073v5
2016-08-10T01:23:56Z
2015-07-04T05:54:29Z
Convex Factorization Machine for Regression
We propose the convex factorization machine (CFM), which is a convex variant of the widely used Factorization Machines (FMs). Specifically, we employ a linear+quadratic model and regularize the linear term with the $\ell_2$-regularizer and the quadratic term with the trace norm regularizer. Then, we formulate the CFM optimization as a semidefinite programming problem and propose an efficient optimization procedure with Hazan's algorithm. A key advantage of CFM over existing FMs is that it can find a globally optimal solution, while FMs may get a poor locally optimal solution since the objective function of FMs is non-convex. In addition, the proposed algorithm is simple yet effective and can be implemented easily. Finally, CFM is a general factorization method and can also be used for other factorization problems including multi-view matrix factorization and tensor completion problems. Through synthetic and Movielens datasets, we first show that the proposed CFM achieves results competitive to FMs. Furthermore, in a toxicogenomics prediction task, we show that CFM outperforms a state-of-the-art tensor factorization method.
[ "Makoto Yamada, Wenzhao Lian, Amit Goyal, Jianhui Chen, Kishan\n Wimalawarne, Suleiman A Khan, Samuel Kaski, Hiroshi Mamitsuka, Yi Chang", "['Makoto Yamada' 'Wenzhao Lian' 'Amit Goyal' 'Jianhui Chen'\n 'Kishan Wimalawarne' 'Suleiman A Khan' 'Samuel Kaski' 'Hiroshi Mamitsuka'\n 'Yi Chang']" ]
math.OC cs.LG stat.ML
null
1507.01160
null
null
http://arxiv.org/pdf/1507.01160v2
2015-07-07T22:27:35Z
2015-07-05T02:16:25Z
Correlated Multiarmed Bandit Problem: Bayesian Algorithms and Regret Analysis
We consider the correlated multiarmed bandit (MAB) problem in which the rewards associated with each arm are modeled by a multivariate Gaussian random variable, and we investigate the influence of the assumptions in the Bayesian prior on the performance of the upper credible limit (UCL) algorithm and a new correlated UCL algorithm. We rigorously characterize the influence of accuracy, confidence, and correlation scale in the prior on the decision-making performance of the algorithms. Our results show how priors and correlation structure can be leveraged to improve performance.
[ "['Vaibhav Srivastava' 'Paul Reverdy' 'Naomi Ehrich Leonard']", "Vaibhav Srivastava, Paul Reverdy, Naomi Ehrich Leonard" ]
cs.CL cs.AI cs.LG
null
1507.01193
null
null
http://arxiv.org/pdf/1507.01193v1
2015-07-05T11:10:24Z
2015-07-05T11:10:24Z
Dependency Recurrent Neural Language Models for Sentence Completion
Recent work on language modelling has shifted focus from count-based models to neural models. In these works, the words in each sentence are always considered in a left-to-right order. In this paper we show how we can improve the performance of the recurrent neural network (RNN) language model by incorporating the syntactic dependencies of a sentence, which have the effect of bringing relevant contexts closer to the word being predicted. We evaluate our approach on the Microsoft Research Sentence Completion Challenge and show that the dependency RNN proposed improves over the RNN by about 10 points in accuracy. Furthermore, we achieve results comparable with the state-of-the-art models on this task.
[ "['Piotr Mirowski' 'Andreas Vlachos']", "Piotr Mirowski, Andreas Vlachos" ]
cs.LG
null
1507.01215
null
null
http://arxiv.org/pdf/1507.01215v2
2015-07-23T03:07:24Z
2015-07-05T12:49:20Z
Combining Models of Approximation with Partial Learning
In Gold's framework of inductive inference, the model of partial learning requires the learner to output exactly one correct index for the target object and only the target object infinitely often. Since infinitely many of the learner's hypotheses may be incorrect, it is not obvious whether a partial learner can be modified to "approximate" the target object. Fulk and Jain (Approximate inference and scientific method. Information and Computation 114(2):179--191, 1994) introduced a model of approximate learning of recursive functions. The present work extends their research and solves an open problem of Fulk and Jain by showing that there is a learner which approximates and partially identifies every recursive function by outputting a sequence of hypotheses which, in addition, are also almost all finite variants of the target function. The subsequent study is dedicated to the question of how these findings generalise to the learning of r.e. languages from positive data. Here three variants of approximate learning will be introduced and investigated with respect to the question of whether they can be combined with partial learning. Following the line of Fulk and Jain's research, further investigations provide conditions under which partial language learners can eventually output only finite variants of the target language. The combinability of other partial learning criteria will also be briefly studied.
[ "['Ziyuan Gao' 'Frank Stephan' 'Sandra Zilles']", "Ziyuan Gao, Frank Stephan and Sandra Zilles" ]
cs.CV cs.LG stat.ML
null
1507.01238
null
null
http://arxiv.org/pdf/1507.01238v3
2016-05-05T20:16:54Z
2015-07-05T16:29:31Z
Scalable Sparse Subspace Clustering by Orthogonal Matching Pursuit
Subspace clustering methods based on $\ell_1$, $\ell_2$ or nuclear norm regularization have become very popular due to their simplicity, theoretical guarantees and empirical success. However, the choice of the regularizer can greatly impact both theory and practice. For instance, $\ell_1$ regularization is guaranteed to give a subspace-preserving affinity (i.e., there are no connections between points from different subspaces) under broad conditions (e.g., arbitrary subspaces and corrupted data). However, it requires solving a large scale convex optimization problem. On the other hand, $\ell_2$ and nuclear norm regularization provide efficient closed form solutions, but require very strong assumptions to guarantee a subspace-preserving affinity, e.g., independent subspaces and uncorrupted data. In this paper we study a subspace clustering method based on orthogonal matching pursuit. We show that the method is both computationally efficient and guaranteed to give a subspace-preserving affinity under broad conditions. Experiments on synthetic data verify our theoretical analysis, and applications in handwritten digit and face clustering show that our approach achieves the best trade off between accuracy and efficiency.
[ "['Chong You' 'Daniel P. Robinson' 'Rene Vidal']", "Chong You, Daniel P. Robinson, Rene Vidal" ]
cs.LG cs.NE
null
1507.01239
null
null
http://arxiv.org/pdf/1507.01239v3
2018-07-01T00:21:59Z
2015-07-05T16:29:33Z
Experiments on Parallel Training of Deep Neural Network using Model Averaging
In this work we apply model averaging to the parallel training of deep neural networks (DNNs). Parallelization is done in a model averaging manner: data is partitioned and distributed to different nodes for local model updates, and model averaging across nodes is done every few minibatches. We use multiple GPUs for data parallelization, and the Message Passing Interface (MPI) for communication between nodes, which allows us to perform model averaging frequently without losing much time on communication. We investigate the effectiveness of Natural Gradient Stochastic Gradient Descent (NG-SGD) and Restricted Boltzmann Machine (RBM) pretraining for parallel training in the model-averaging framework, and explore the best setups in terms of different learning rate schedules, averaging frequencies and minibatch sizes. It is shown that NG-SGD and RBM pretraining benefit parameter-averaging based model training. On the 300h Switchboard dataset, a 9.3 times speedup is achieved using 16 GPUs and a 17 times speedup using 32 GPUs, with limited decoding accuracy loss.
[ "Hang Su, Haoyu Chen", "['Hang Su' 'Haoyu Chen']" ]
cs.IT cs.AI cs.LG math.IT
10.1109/ICASSP.2015.7178308
1507.01269
null
null
http://arxiv.org/abs/1507.01269v1
2015-07-05T20:23:22Z
2015-07-05T20:23:22Z
Semi-supervised Multi-sensor Classification via Consensus-based Multi-View Maximum Entropy Discrimination
In this paper, we consider multi-sensor classification when there is a large number of unlabeled samples. The problem is formulated under the multi-view learning framework and a Consensus-based Multi-View Maximum Entropy Discrimination (CMV-MED) algorithm is proposed. By iteratively maximizing the stochastic agreement between multiple classifiers on the unlabeled dataset, the algorithm simultaneously learns multiple high accuracy classifiers. We demonstrate that our proposed method can yield improved performance over previous multi-view learning approaches by comparing performance on three real multi-sensor data sets.
[ "['Tianpei Xie' 'Nasser M. Nasrabadi' 'Alfred O. Hero III']", "Tianpei Xie, Nasser M. Nasrabadi and Alfred O. Hero III" ]
cs.LG cs.RO
null
1507.01273
null
null
http://arxiv.org/pdf/1507.01273v2
2015-09-23T04:59:46Z
2015-07-05T20:54:57Z
Learning Deep Neural Network Policies with Continuous Memory States
Policy learning for partially observed control tasks requires policies that can remember salient information from past observations. In this paper, we present a method for learning policies with internal memory for high-dimensional, continuous systems, such as robotic manipulators. Our approach consists of augmenting the state and action space of the system with continuous-valued memory states that the policy can read from and write to. Learning general-purpose policies with this type of memory representation directly is difficult, because the policy must automatically figure out the most salient information to memorize at each time step. We show that, by decomposing this policy search problem into a trajectory optimization phase and a supervised learning phase through a method called guided policy search, we can acquire policies with effective memorization and recall strategies. Intuitively, the trajectory optimization phase chooses the values of the memory states that will make it easier for the policy to produce the right action in future states, while the supervised learning phase encourages the policy to use memorization actions to produce those memory states. We evaluate our method on tasks involving continuous control in manipulation and navigation settings, and show that our method can learn complex policies that successfully complete a range of tasks that require memory.
[ "Marvin Zhang, Zoe McCarthy, Chelsea Finn, Sergey Levine, Pieter Abbeel", "['Marvin Zhang' 'Zoe McCarthy' 'Chelsea Finn' 'Sergey Levine'\n 'Pieter Abbeel']" ]
cs.LG math.ST stat.ML stat.TH
null
1507.01279
null
null
http://arxiv.org/pdf/1507.01279v5
2018-11-12T20:57:24Z
2015-07-05T21:46:03Z
Scan $B$-Statistic for Kernel Change-Point Detection
Detecting the emergence of an abrupt change-point is a classic problem in statistics and machine learning. Kernel-based nonparametric statistics have been used for this task; they enjoy fewer assumptions on the distributions than the parametric approach and can handle high-dimensional data. In this paper we focus on the scenario when the amount of background data is large, and propose two related computationally efficient kernel-based statistics for change-point detection, which are inspired by the recently developed $B$-statistics. A novel theoretical result of the paper is the characterization of the tail probability of these statistics using the change-of-measure technique, which focuses on characterizing the tail of the detection statistics rather than obtaining its asymptotic distribution under the null distribution. Such approximations are crucial to control the false alarm rate, which corresponds to the significance level in offline change-point detection and the average run length in online change-point detection. Our approximations are shown to be highly accurate. Thus, they provide a convenient way to find detection thresholds for both offline and online cases without the need to resort to more expensive simulations or bootstrapping. We show that our methods perform well on both synthetic data and real data.
[ "['Shuang Li' 'Yao Xie' 'Hanjun Dai' 'Le Song']", "Shuang Li, Yao Xie, Hanjun Dai, and Le Song" ]
cs.CV cs.LG cs.NE
null
1507.01422
null
null
http://arxiv.org/pdf/1507.01422v1
2015-07-06T12:43:26Z
2015-07-06T12:43:26Z
End-to-end Convolutional Network for Saliency Prediction
The prediction of salient areas in images has been traditionally addressed with hand-crafted features based on neuroscience principles. This paper, however, addresses the problem with a completely data-driven approach by training a convolutional network. The learning process is formulated as a minimization of a loss function that measures the Euclidean distance between the predicted saliency map and the provided ground truth. The recent publication of large datasets for saliency prediction has provided enough data to train a not very deep architecture which is both fast and accurate. The convolutional network in this paper, named JuntingNet, won the LSUN 2015 challenge on saliency prediction with superior performance in all considered metrics.
[ "['Junting Pan' 'Xavier Giró-i-Nieto']", "Junting Pan and Xavier Gir\\'o-i-Nieto" ]
cs.DC cs.LG
null
1507.01461
null
null
http://arxiv.org/pdf/1507.01461v1
2015-07-06T13:50:26Z
2015-07-06T13:50:26Z
Revisiting Large Scale Distributed Machine Learning
Nowadays, with the widespread adoption of smartphones and other portable gadgets equipped with a variety of sensors, data is ubiquitously available, and the focus of machine learning has shifted from being able to infer from small training samples to dealing with large-scale, high-dimensional data. In domains such as personal healthcare applications, which motivate this survey, distributed machine learning is a promising line of research, both for scaling up learning algorithms and, mostly, for dealing with data that is inherently produced at different locations. This report offers a thorough overview of state-of-the-art algorithms for distributed machine learning, for both supervised and unsupervised learning, ranging from simple linear logistic regression to graphical models and clustering. We propose future directions for most categories, specific to the potential personal healthcare applications. With this in mind, the report focuses on how security and low communication overhead can be assured in the specific case of a strictly client-server architectural model. As particular directions, we provide an exhaustive presentation of an empirical clustering algorithm, k-windows, and propose an asynchronous distributed machine learning algorithm that would scale well and also would be computationally cheap and easy to implement.
[ "Radu Cristian Ionescu", "['Radu Cristian Ionescu']" ]
math.OC cs.LG
null
1507.01476
null
null
http://arxiv.org/pdf/1507.01476v1
2015-07-06T14:21:21Z
2015-07-06T14:21:21Z
Semi-proximal Mirror-Prox for Nonsmooth Composite Minimization
We propose a new first-order optimisation algorithm to solve high-dimensional non-smooth composite minimisation problems. Typical examples of such problems have an objective that decomposes into a non-smooth empirical risk part and a non-smooth regularisation penalty. The proposed algorithm, called Semi-Proximal Mirror-Prox, leverages the Fenchel-type representation of one part of the objective while handling the other part of the objective via linear minimization over the domain. The algorithm stands in contrast with more classical proximal gradient algorithms with smoothing, which require the computation of proximal operators at each iteration and can therefore be impractical for high-dimensional problems. We establish the theoretical convergence rate of Semi-Proximal Mirror-Prox, which exhibits the optimal complexity bounds, i.e. $O(1/\epsilon^2)$, for the number of calls to linear minimization oracle. We present promising experimental results showing the interest of the approach in comparison to competing methods.
[ "Niao He and Zaid Harchaoui", "['Niao He' 'Zaid Harchaoui']" ]
cs.NE cs.CL cs.LG
null
1507.01526
null
null
http://arxiv.org/pdf/1507.01526v3
2016-01-07T18:39:48Z
2015-07-06T16:30:05Z
Grid Long Short-Term Memory
This paper introduces Grid Long Short-Term Memory, a network of LSTM cells arranged in a multidimensional grid that can be applied to vectors, sequences or higher dimensional data such as images. The network differs from existing deep LSTM architectures in that the cells are connected between network layers as well as along the spatiotemporal dimensions of the data. The network provides a unified way of using LSTM for both deep and sequential computation. We apply the model to algorithmic tasks such as 15-digit integer addition and sequence memorization, where it is able to significantly outperform the standard LSTM. We then give results for two empirical tasks. We find that 2D Grid LSTM achieves 1.47 bits per character on the Wikipedia character prediction benchmark, which is state-of-the-art among neural approaches. In addition, we use the Grid LSTM to define a novel two-dimensional translation model, the Reencoder, and show that it outperforms a phrase-based reference system on a Chinese-to-English translation task.
[ "['Nal Kalchbrenner' 'Ivo Danihelka' 'Alex Graves']", "Nal Kalchbrenner, Ivo Danihelka, Alex Graves" ]
cs.LG
null
1507.01563
null
null
http://arxiv.org/pdf/1507.01563v1
2015-07-06T18:53:09Z
2015-07-06T18:53:09Z
A Simple Algorithm for Maximum Margin Classification, Revisited
In this note, we revisit the algorithm of Har-Peled et al. [HRZ07] for computing a linear maximum margin classifier. Our presentation is self-contained, and the algorithm itself is slightly simpler than the original algorithm. The algorithm is a simple Perceptron-like iterative algorithm. For more details and background, the reader is referred to the original paper.
[ "['Sariel Har-Peled']", "Sariel Har-Peled" ]
cs.LG cs.AI
null
1507.01569
null
null
http://arxiv.org/pdf/1507.01569v1
2015-07-06T19:28:36Z
2015-07-06T19:28:36Z
Emphatic Temporal-Difference Learning
Emphatic algorithms are temporal-difference learning algorithms that change their effective state distribution by selectively emphasizing and de-emphasizing their updates on different time steps. Recent works by Sutton, Mahmood and White (2015), and Yu (2015) show that by varying the emphasis in a particular way, these algorithms become stable and convergent under off-policy training with linear function approximation. This paper serves as a unified summary of the available results from both works. In addition, we demonstrate the empirical benefits from the flexibility of emphatic algorithms, including state-dependent discounting, state-dependent bootstrapping, and the user-specified allocation of function approximation resources.
[ "['A. Rupam Mahmood' 'Huizhen Yu' 'Martha White' 'Richard S. Sutton']", "A. Rupam Mahmood, Huizhen Yu, Martha White, Richard S. Sutton" ]
cs.SE cs.LG
null
1507.01698
null
null
http://arxiv.org/pdf/1507.01698v1
2015-07-07T08:04:56Z
2015-07-07T08:04:56Z
Learning Tractable Probabilistic Models for Fault Localization
In recent years, several probabilistic techniques have been applied to various debugging problems. However, most existing probabilistic debugging systems use relatively simple statistical models, and fail to generalize across multiple programs. In this work, we propose Tractable Fault Localization Models (TFLMs) that can be learned from data, and probabilistically infer the location of the bug. While most previous statistical debugging methods generalize over many executions of a single program, TFLMs are trained on a corpus of previously seen buggy programs, and learn to identify recurring patterns of bugs. Widely-used fault localization techniques such as TARANTULA evaluate the suspiciousness of each line in isolation; in contrast, a TFLM defines a joint probability distribution over buggy indicator variables for each line. Joint distributions with rich dependency structure are often computationally intractable; TFLMs avoid this by exploiting recent developments in tractable probabilistic models (specifically, Relational SPNs). Further, TFLMs can incorporate additional sources of information, including coverage-based features such as TARANTULA. We evaluate the fault localization performance of TFLMs that include TARANTULA scores as features in the probabilistic model. Our study shows that the learned TFLMs isolate bugs more effectively than previous statistical methods or using TARANTULA directly.
[ "['Aniruddh Nath' 'Pedro Domingos']", "Aniruddh Nath and Pedro Domingos" ]
stat.ML cs.LG
null
1507.01784
null
null
http://arxiv.org/pdf/1507.01784v2
2015-11-05T20:16:04Z
2015-07-07T12:48:30Z
Rethinking LDA: moment matching for discrete ICA
We consider moment matching techniques for estimation in Latent Dirichlet Allocation (LDA). By drawing explicit links between LDA and discrete versions of independent component analysis (ICA), we first derive a new set of cumulant-based tensors, with an improved sample complexity. Moreover, we reuse standard ICA techniques such as joint diagonalization of tensors to improve over existing methods based on the tensor power method. In an extensive set of experiments on both synthetic and real datasets, we show that our new combination of tensors and orthogonal joint diagonalization techniques outperforms existing moment matching methods.
[ "Anastasia Podosinnikova, Francis Bach, and Simon Lacoste-Julien", "['Anastasia Podosinnikova' 'Francis Bach' 'Simon Lacoste-Julien']" ]
cs.CL cs.AI cs.LG
null
1507.01839
null
null
http://arxiv.org/pdf/1507.01839v2
2015-08-03T15:36:45Z
2015-07-07T15:20:36Z
Dependency-based Convolutional Neural Networks for Sentence Embedding
In sentence modeling and classification, convolutional neural network approaches have recently achieved state-of-the-art results, but all such efforts process word vectors sequentially and neglect long-distance dependencies. To exploit both deep learning and linguistic structures, we propose a tree-based convolutional neural network model which exploits various long-distance relationships between words. Our model improves the sequential baselines on all three sentiment and question classification tasks, and achieves the highest published accuracy on TREC.
[ "['Mingbo Ma' 'Liang Huang' 'Bing Xiang' 'Bowen Zhou']", "Mingbo Ma and Liang Huang and Bing Xiang and Bowen Zhou" ]
cs.LG physics.data-an
null
1507.01892
null
null
http://arxiv.org/pdf/1507.01892v1
2015-07-06T11:42:16Z
2015-07-06T11:42:16Z
A linear approach for sparse coding by a two-layer neural network
Many approaches to transform classification problems from non-linear to linear by feature transformation have been recently presented in the literature. These notably include sparse coding methods and deep neural networks. However, many of these approaches require the repeated application of a learning process upon the presentation of unseen data input vectors, or else involve the use of large numbers of parameters and hyper-parameters, which must be chosen through cross-validation, thus increasing running time dramatically. In this paper, we propose and experimentally investigate a new approach for the purpose of overcoming limitations of both kinds. The proposed approach makes use of a linear auto-associative network (called SCNN) with just one hidden layer. The combination of this architecture with a specific error function to be minimized enables one to learn a linear encoder computing a sparse code which turns out to be as similar as possible to the sparse coding that one obtains by re-training the neural network. Importantly, the linearity of SCNN and the choice of the error function allow one to achieve reduced running time in the learning phase. The proposed architecture is evaluated on the basis of two standard machine learning tasks. Its performances are compared with those of recently proposed non-linear auto-associative neural networks. The overall results suggest that linear encoders can be profitably used to obtain sparse data representations in the context of machine learning problems, provided that an appropriate error function is used during the learning phase.
[ "Alessandro Montalto, Giovanni Tessitore, Roberto Prevete", "['Alessandro Montalto' 'Giovanni Tessitore' 'Roberto Prevete']" ]
stat.ML cs.LG
null
1507.01972
null
null
http://arxiv.org/pdf/1507.01972v1
2015-07-07T21:30:36Z
2015-07-07T21:30:36Z
Wasserstein Training of Boltzmann Machines
The Boltzmann machine provides a useful framework to learn highly complex, multimodal and multiscale data distributions that occur in the real world. The default method to learn its parameters consists of minimizing the Kullback-Leibler (KL) divergence from training samples to the Boltzmann model. We propose in this work a novel approach for Boltzmann training which assumes that a meaningful metric between observations is given. This metric can be represented by the Wasserstein distance between distributions, for which we derive a gradient with respect to the model parameters. Minimization of this new Wasserstein objective leads to generative models that are better when considering the metric and that have a cluster-like structure. We demonstrate the practical potential of these models for data completion and denoising, for which the metric between observations plays a crucial role.
[ "['Grégoire Montavon' 'Klaus-Robert Müller' 'Marco Cuturi']", "Gr\\'egoire Montavon, Klaus-Robert M\\\"uller, Marco Cuturi" ]
cs.LG stat.ML
null
1507.01978
null
null
http://arxiv.org/pdf/1507.01978v3
2016-11-02T18:52:54Z
2015-07-07T22:18:43Z
Learning Leading Indicators for Time Series Predictions
We consider the problem of learning models for forecasting multiple time-series systems together with discovering the leading indicators that serve as good predictors for the system. We model the systems by linear vector autoregressive models (VAR) and link the discovery of leading indicators to inferring sparse graphs of Granger-causality. We propose new problem formulations and develop two new methods to learn such models, gradually increasing the complexity of assumptions and approaches. While the first method assumes common structures across the whole system, our second method uncovers model clusters based on the Granger-causality and leading indicators together with learning the model parameters. We study the performance of our methods on a comprehensive set of experiments and confirm their efficacy and their advantages over state-of-the-art sparse VAR and graphical Granger learning methods.
[ "Magda Gregorova, Alexandros Kalousis, St\\'ephane Marchand-Maillet", "['Magda Gregorova' 'Alexandros Kalousis' 'Stéphane Marchand-Maillet']" ]
cs.LG
null
1507.02011
null
null
http://arxiv.org/pdf/1507.02011v1
2015-07-08T03:35:58Z
2015-07-08T03:35:58Z
A Bayesian Approach for Online Classifier Ensemble
We propose a Bayesian approach for recursively estimating the classifier weights in online learning of a classifier ensemble. In contrast with past methods, such as stochastic gradient descent or online boosting, our approach estimates the weights by recursively updating its posterior distribution. For a specified class of loss functions, we show that it is possible to formulate a suitably defined likelihood function and hence use the posterior distribution as an approximation to the global empirical loss minimizer. If the stream of training data is sampled from a stationary process, we can also show that our approach admits a superior rate of convergence to the expected loss minimizer than is possible with standard stochastic gradient descent. In experiments with real-world datasets, our formulation often performs better than state-of-the-art stochastic gradient descent and online boosting algorithms.
[ "['Qinxun Bai' 'Henry Lam' 'Stan Sclaroff']", "Qinxun Bai, Henry Lam, Stan Sclaroff" ]
cs.LG math.OC
null
1507.02030
null
null
http://arxiv.org/pdf/1507.02030v3
2015-10-28T07:00:56Z
2015-07-08T05:47:42Z
Beyond Convexity: Stochastic Quasi-Convex Optimization
Stochastic convex optimization is a basic and well studied primitive in machine learning. It is well known that convex and Lipschitz functions can be minimized efficiently using Stochastic Gradient Descent (SGD). The Normalized Gradient Descent (NGD) algorithm is an adaptation of Gradient Descent which updates according to the direction of the gradients, rather than the gradients themselves. In this paper we analyze a stochastic version of NGD and prove its convergence to a global minimum for a wider class of functions: we require the functions to be quasi-convex and locally-Lipschitz. Quasi-convexity broadens the concept of unimodality to multiple dimensions and allows for certain types of saddle points, which are a known hurdle for first-order optimization methods such as gradient descent. Locally-Lipschitz functions are only required to be Lipschitz in a small region around the optimum. This assumption circumvents gradient explosion, which is another known hurdle for gradient descent variants. Interestingly, unlike the vanilla SGD algorithm, the stochastic normalized gradient descent algorithm provably requires a minimal minibatch size.
[ "['Elad Hazan' 'Kfir Y. Levy' 'Shai Shalev-Shwartz']", "Elad Hazan, Kfir Y. Levy, Shai Shalev-Shwartz" ]
cs.LG cs.AI cs.CV
10.1016/j.patrec.2011.10.022
1507.02084
null
null
http://arxiv.org/abs/1507.02084v1
2015-07-08T09:58:06Z
2015-07-08T09:58:06Z
Shedding Light on the Asymmetric Learning Capability of AdaBoost
In this paper, we propose a different insight to analyze AdaBoost. This analysis reveals that, beyond some preconceptions, AdaBoost can be directly used as an asymmetric learning algorithm, preserving all its theoretical properties. A novel class-conditional description of AdaBoost, which models the actual asymmetric behavior of the algorithm, is presented.
[ "['Iago Landesa-Vázquez' 'José Luis Alba-Castro']", "Iago Landesa-V\\'azquez, Jos\\'e Luis Alba-Castro" ]
cs.CV cs.AI cs.LG
10.1016/j.neucom.2013.02.019
1507.02154
null
null
http://arxiv.org/abs/1507.02154v1
2015-07-08T13:44:34Z
2015-07-08T13:44:34Z
Double-Base Asymmetric AdaBoost
Based on the use of different exponential bases to define class-dependent error bounds, a new and highly efficient asymmetric boosting scheme, coined as AdaBoostDB (Double-Base), is proposed. Supported by a fully theoretical derivation procedure, unlike most of the other approaches in the literature, our algorithm preserves all the formal guarantees and properties of original (cost-insensitive) AdaBoost, similarly to the state-of-the-art Cost-Sensitive AdaBoost algorithm. However, the key advantage of AdaBoostDB is that our novel derivation scheme enables an extremely efficient conditional search procedure, dramatically improving and simplifying the training phase of the algorithm. Experiments, both over synthetic and real datasets, reveal that AdaBoostDB is able to save over 99% training time with regard to Cost-Sensitive AdaBoost, providing the same cost-sensitive results. This computational advantage of AdaBoostDB can make a difference in problems managing huge pools of weak classifiers in which boosting techniques are commonly used.
[ "['Iago Landesa-Vázquez' 'José Luis Alba-Castro']", "Iago Landesa-V\\'azquez, Jos\\'e Luis Alba-Castro" ]
cs.LG
null
1507.02158
null
null
http://arxiv.org/pdf/1507.02158v2
2016-07-20T11:02:46Z
2015-07-08T13:58:19Z
An Empirical Study on Budget-Aware Online Kernel Algorithms for Streams of Graphs
Kernel methods are considered an effective technique for on-line learning. Many approaches have been developed for compactly representing the dual solution of a kernel method when the problem imposes memory constraints. However, in the literature no work is specifically tailored to streams of graphs. Motivated by the fact that the size of the feature space representation of many state-of-the-art graph kernels is relatively small and thus explicitly computable, we study whether executing kernel algorithms in the feature space can be more effective than the classical dual approach. We study three different algorithms and various strategies for managing the budget. The efficiency and efficacy of the proposed approaches are experimentally assessed on relatively large graph streams exhibiting concept drift. It turns out that, when strict memory budget constraints have to be enforced, working in feature space, given the current state of the art on graph kernels, is more than a viable alternative to dual approaches, both in terms of speed and classification performance.
[ "Giovanni Da San Martino, Nicol\\`o Navarin, Alessandro Sperduti", "['Giovanni Da San Martino' 'Nicolò Navarin' 'Alessandro Sperduti']" ]
cs.LG
10.1007/978-3-319-26561-2_33
1507.02186
null
null
http://arxiv.org/abs/1507.02186v2
2015-09-03T10:23:38Z
2015-07-08T14:58:49Z
Extending local features with contextual information in graph kernels
Graph kernels are usually defined in terms of simpler kernels over local substructures of the original graphs. Different kernels consider different types of substructures. However, in some cases they have similar predictive performance, probably because the substructures can be interpreted as approximations of the subgraphs they induce. In this paper, we propose to associate with each feature a piece of information about the context in which the feature appears in the graph. A substructure appearing in two different graphs will match only if it appears with the same context in both graphs. We propose a kernel based on this idea that considers trees as substructures, and where the contexts are features too. The kernel is inspired by the framework in [6], even if it is not part of it. We give an efficient algorithm for computing the kernel and show promising results on real-world graph classification datasets.
[ "Nicol\\`o Navarin, Alessandro Sperduti, Riccardo Tesselli", "['Nicolò Navarin' 'Alessandro Sperduti' 'Riccardo Tesselli']" ]
stat.ML cs.LG
null
1507.02188
null
null
http://arxiv.org/pdf/1507.02188v1
2015-07-08T15:07:39Z
2015-07-08T15:07:39Z
AutoCompete: A Framework for Machine Learning Competition
In this paper, we propose AutoCompete, a highly automated machine learning framework for tackling machine learning competitions. This framework has been learned, validated and improved by us over a period of more than two years of participating in online machine learning competitions. It aims at minimizing the human interference required to build a first useful predictive model and to assess the practical difficulty of a given machine learning challenge. The proposed system helps in identifying data types, choosing a machine learning model, tuning hyper-parameters, avoiding over-fitting and optimizing for a provided evaluation metric. We also observe that the proposed system produces better (or comparable) results with less runtime as compared to other approaches.
[ "['Abhishek Thakur' 'Artus Krohn-Grimberghe']", "Abhishek Thakur and Artus Krohn-Grimberghe" ]
cs.LG stat.ML
null
1507.02189
null
null
http://arxiv.org/pdf/1507.02189v1
2015-07-08T15:07:40Z
2015-07-08T15:07:40Z
Intersecting Faces: Non-negative Matrix Factorization With New Guarantees
Non-negative matrix factorization (NMF) is a natural model of admixture and is widely used in science and engineering. A plethora of algorithms have been developed to tackle NMF, but due to the non-convex nature of the problem, there is little guarantee on how well these methods work. Recently, a surge of research has focused on a very restricted class of NMFs, called separable NMF, where provably correct algorithms have been developed. In this paper, we propose the notion of subset-separable NMF, which substantially generalizes the property of separability. We show that subset-separability is a natural necessary condition for the factorization to be unique or to have minimum volume. We developed the Face-Intersect algorithm, which provably and efficiently solves subset-separable NMF under natural conditions, and we prove that our algorithm is robust to small noise. We explored the performance of Face-Intersect on simulations and discuss settings where it empirically outperformed state-of-the-art methods. Our work is a step towards finding provably correct algorithms that solve large classes of NMF problems.
[ "['Rong Ge' 'James Zou']", "Rong Ge and James Zou" ]
stat.AP cs.LG stat.ML
10.1109/LSP.2015.2463232
1507.02216
null
null
http://arxiv.org/abs/1507.02216v2
2016-04-25T12:06:29Z
2015-07-08T16:50:26Z
Robust Sparse Blind Source Separation
Blind Source Separation is a widely used technique to analyze multichannel data. In many real-world applications, its results can be significantly hampered by the presence of unknown outliers. In this paper, a novel algorithm coined rGMCA (robust Generalized Morphological Component Analysis) is introduced to retrieve sparse sources in the presence of outliers. It explicitly estimates the sources, the mixing matrix, and the outliers. It also takes advantage of the estimation of the outliers to further implement a weighting scheme, which provides a highly robust separation procedure. Numerical experiments demonstrate the efficiency of rGMCA to estimate the mixing matrix in comparison with standard BSS techniques.
[ "Cecile Chenot, Jerome Bobin and Jeremy Rapin", "['Cecile Chenot' 'Jerome Bobin' 'Jeremy Rapin']" ]
cs.DS cs.LG stat.ML
null
1507.02268
null
null
http://arxiv.org/pdf/1507.02268v3
2016-03-02T12:58:32Z
2015-07-08T19:45:21Z
Optimal approximate matrix product in terms of stable rank
We prove, using the subspace embedding guarantee in a black box way, that one can achieve the spectral norm guarantee for approximate matrix multiplication with a dimensionality-reducing map having $m = O(\tilde{r}/\varepsilon^2)$ rows. Here $\tilde{r}$ is the maximum stable rank, i.e. squared ratio of Frobenius and operator norms, of the two matrices being multiplied. This is a quantitative improvement over previous work of [MZ11, KVZ14], and is also optimal for any oblivious dimensionality-reducing map. Furthermore, due to the black box reliance on the subspace embedding property in our proofs, our theorem can be applied to a much more general class of sketching matrices than what was known before, in addition to achieving better bounds. For example, one can apply our theorem to efficient subspace embeddings such as the Subsampled Randomized Hadamard Transform or sparse subspace embeddings, or even with subspace embedding constructions that may be developed in the future. Our main theorem, via connections with spectral error matrix multiplication shown in prior work, implies quantitative improvements for approximate least squares regression and low rank approximation. Our main result has also already been applied to improve dimensionality reduction guarantees for $k$-means clustering [CEMMP14], and implies new results for nonparametric regression [YPW15]. We also separately point out that the proof of the "BSS" deterministic row-sampling result of [BSS12] can be modified to show that for any matrices $A, B$ of stable rank at most $\tilde{r}$, one can achieve the spectral norm guarantee for approximate matrix multiplication of $A^T B$ by deterministically sampling $O(\tilde{r}/\varepsilon^2)$ rows that can be found in polynomial time. The original result of [BSS12] was for rank instead of stable rank. Our observation leads to a stronger version of a main theorem of [KMST10].
[ "Michael B. Cohen, Jelani Nelson, David P. Woodruff", "['Michael B. Cohen' 'Jelani Nelson' 'David P. Woodruff']" ]
stat.ML cs.IT cs.LG math.IT
null
1507.02284
null
null
http://arxiv.org/pdf/1507.02284v3
2016-06-09T00:12:24Z
2015-07-08T20:00:42Z
The Information Sieve
We introduce a new framework for unsupervised learning of representations based on a novel hierarchical decomposition of information. Intuitively, data is passed through a series of progressively fine-grained sieves. Each layer of the sieve recovers a single latent factor that is maximally informative about multivariate dependence in the data. The data is transformed after each pass so that the remaining unexplained information trickles down to the next layer. Ultimately, we are left with a set of latent factors explaining all the dependence in the original data and remainder information consisting of independent noise. We present a practical implementation of this framework for discrete variables and apply it to a variety of fundamental tasks in unsupervised learning including independent component analysis, lossy and lossless compression, and predicting missing values in data.
[ "Greg Ver Steeg and Aram Galstyan", "['Greg Ver Steeg' 'Aram Galstyan']" ]
cs.SI cs.LG physics.soc-ph stat.ML
null
1507.02293
null
null
http://arxiv.org/pdf/1507.02293v2
2016-04-01T13:51:32Z
2015-07-08T20:01:32Z
COEVOLVE: A Joint Point Process Model for Information Diffusion and Network Co-evolution
Information diffusion in online social networks is affected by the underlying network topology, but it also has the power to change it. Online users are constantly creating new links when exposed to new information sources, and in turn these links are altering the way information spreads. However, these two highly intertwined stochastic processes, information diffusion and network evolution, have been predominantly studied separately, ignoring their co-evolutionary dynamics. We propose a temporal point process model, COEVOLVE, for such joint dynamics, allowing the intensity of one process to be modulated by that of the other. This model allows us to efficiently simulate interleaved diffusion and network events, and generate traces obeying common diffusion and network patterns observed in real-world networks. Furthermore, we also develop a convex optimization framework to learn the parameters of the model from historical diffusion and network evolution traces. We experimented with both synthetic data and data gathered from Twitter, and show that our model provides a good fit to the data as well as more accurate predictions than alternatives.
[ "Mehrdad Farajtabar and Yichen Wang and Manuel Gomez Rodriguez and\n Shuang Li and Hongyuan Zha and Le Song", "['Mehrdad Farajtabar' 'Yichen Wang' 'Manuel Gomez Rodriguez' 'Shuang Li'\n 'Hongyuan Zha' 'Le Song']" ]
cs.AI cs.LG cs.RO
10.1109/HUMANOIDS.2015.7363448
1507.02347
null
null
http://arxiv.org/abs/1507.02347v1
2015-07-09T02:10:03Z
2015-07-09T02:10:03Z
Achieving Synergy in Cognitive Behavior of Humanoids via Deep Learning of Dynamic Visuo-Motor-Attentional Coordination
The current study examines how adequate coordination among different cognitive processes, including visual recognition, attention switching, and action preparation and generation, can be developed via learning in robots by introducing a novel model, the Visuo-Motor Deep Dynamic Neural Network (VMDNN). The proposed model is built on the coupling of a dynamic vision network, a motor generation network, and a higher level network allocated on top of these two. Simulation experiments using the iCub simulator were conducted for cognitive tasks including visual object manipulation responding to human gestures. The results showed that synergetic coordination can be developed via iterative learning through the whole network when a spatio-temporal hierarchy and a temporal one can be self-organized in the visual pathway and in the motor pathway, respectively, such that the higher level can manipulate them with abstraction.
[ "Jungsik Hwang, Minju Jung, Naveen Madapana, Jinhyung Kim, Minkyu Choi\n and Jun Tani", "['Jungsik Hwang' 'Minju Jung' 'Naveen Madapana' 'Jinhyung Kim'\n 'Minkyu Choi' 'Jun Tani']" ]
stat.ML cs.LG
null
1507.02356
null
null
http://arxiv.org/pdf/1507.02356v1
2015-07-09T02:52:19Z
2015-07-09T02:52:19Z
Intrinsic Non-stationary Covariance Function for Climate Modeling
Designing a covariance function that represents the underlying correlation is a crucial step in modeling complex natural systems, such as climate models. Geospatial datasets at a global scale usually suffer from non-stationarity and non-uniformly smooth spatial boundaries. A Gaussian process regression using a non-stationary covariance function has shown promise for this task, as this covariance function adapts to the variable correlation structure of the underlying distribution. In this paper, we generalize the non-stationary covariance function to address the aforementioned global scale geospatial issues. We define this generalized covariance function as an intrinsic non-stationary covariance function, because it uses intrinsic statistics of the symmetric positive definite matrices to represent the characteristic length scale and, thereby, models the local stochastic process. Experiments on a synthetic and real dataset of relative sea level changes across the world demonstrate improvements in the error metrics for the regression estimates using our newly proposed approach.
[ "['Chintan A. Dalal' 'Vladimir Pavlovic' 'Robert E. Kopp']", "Chintan A. Dalal, Vladimir Pavlovic, Robert E. Kopp" ]
cs.LG cs.IT math.IT
10.1109/TSIPN.2016.2612120
1507.02387
null
null
http://arxiv.org/abs/1507.02387v2
2015-12-18T10:58:08Z
2015-07-09T06:21:57Z
Decentralized Joint-Sparse Signal Recovery: A Sparse Bayesian Learning Approach
This work proposes a decentralized, iterative, Bayesian algorithm called CB-DSBL for in-network estimation of multiple jointly sparse vectors by a network of nodes, using noisy and underdetermined linear measurements. The proposed algorithm exploits the network-wide joint sparsity of the unknown sparse vectors to recover them from a significantly smaller number of local measurements compared to standalone sparse signal recovery schemes. To reduce the amount of inter-node communication and the associated overheads, the nodes exchange messages with only a small subset of their single-hop neighbors. Under this communication scheme, we separately analyze the convergence of the underlying Alternating Directions Method of Multipliers (ADMM) iterations used in our proposed algorithm and establish its linear convergence rate. The findings from the convergence analysis of decentralized ADMM are used to accelerate the convergence of the proposed CB-DSBL algorithm. Using Monte Carlo simulations, we demonstrate the superior signal reconstruction as well as support recovery performance of our proposed algorithm compared to existing decentralized algorithms: DRL-1, DCOMP and DCSP.
[ "Saurabh Khanna, Chandra R. Murthy", "['Saurabh Khanna' 'Chandra R. Murthy']" ]
cs.DS cs.CR cs.LG
null
1507.02482
null
null
http://arxiv.org/pdf/1507.02482v4
2017-08-21T21:30:27Z
2015-07-09T12:32:19Z
Differentially Private Ordinary Least Squares
Linear regression is one of the most prevalent techniques in machine learning; however, it is also common to use linear regression for its \emph{explanatory} capabilities rather than label prediction. Ordinary Least Squares (OLS) is often used in statistics to establish a correlation between an attribute (e.g. gender) and a label (e.g. income) in the presence of other (potentially correlated) features. OLS assumes a particular model that randomly generates the data, and derives \emph{$t$-values}, representing the likelihood of each real value being the true correlation. Using $t$-values, OLS can release a \emph{confidence interval}, which is an interval on the reals that is likely to contain the true correlation, and when this interval does not intersect the origin, we can \emph{reject the null hypothesis} as it is likely that the true correlation is non-zero. Our work aims at achieving similar guarantees on data under differentially private estimators. First, we show that for well-spread data, the Gaussian Johnson-Lindenstrauss Transform (JLT) gives a very good approximation of $t$-values; secondly, when the JLT approximates Ridge regression (linear regression with $l_2$-regularization) we derive, under certain conditions, confidence intervals using the projected data; lastly, we derive, under different conditions, confidence intervals for the "Analyze Gauss" algorithm (Dwork et al., STOC 2014).
[ "['Or Sheffet']", "Or Sheffet" ]
math.OC cs.LG
null
1507.02528
null
null
http://arxiv.org/pdf/1507.02528v2
2015-11-05T16:50:41Z
2015-07-09T14:32:55Z
Faster Convex Optimization: Simulated Annealing with an Efficient Universal Barrier
This paper explores a surprising equivalence between two seemingly-distinct convex optimization methods. We show that simulated annealing, a well-studied random walk algorithm, is directly equivalent, in a certain sense, to the central path interior point algorithm for the entropic universal barrier function. This connection exhibits several benefits. First, we are able to improve the state-of-the-art time complexity for convex optimization under the membership oracle model. We improve the analysis of the randomized algorithm of Kalai and Vempala by utilizing tools developed by Nesterov and Nemirovskii that underlie the central path following interior point algorithm. We are able to tighten the temperature schedule for simulated annealing, which gives an improved running time, reducing it by a factor of the square root of the dimension in certain instances. Second, we get an efficient randomized interior point method with an efficiently computable universal barrier for any convex set described by a membership oracle. Previously, efficiently computable barriers were known only for particular convex sets.
[ "['Jacob Abernethy' 'Elad Hazan']", "Jacob Abernethy, Elad Hazan" ]
math.PR cs.DS cs.LG
null
1507.02564
null
null
http://arxiv.org/pdf/1507.02564v1
2015-07-09T15:44:57Z
2015-07-09T15:44:57Z
Sampling from a log-concave distribution with Projected Langevin Monte Carlo
We extend the Langevin Monte Carlo (LMC) algorithm to compactly supported measures via a projection step, akin to projected Stochastic Gradient Descent (SGD). We show that (projected) LMC allows sampling in polynomial time from a log-concave distribution with smooth potential. This gives a new Markov chain to sample from a log-concave distribution. Our main result shows in particular that when the target distribution is uniform, LMC mixes in $\tilde{O}(n^7)$ steps (where $n$ is the dimension). We also provide preliminary experimental evidence that LMC performs at least as well as hit-and-run, for which a better mixing time of $\tilde{O}(n^4)$ was proved by Lov{\'a}sz and Vempala.
[ "['Sébastien Bubeck' 'Ronen Eldan' 'Joseph Lehec']", "S\\'ebastien Bubeck, Ronen Eldan, Joseph Lehec" ]
cs.CG cs.LG
null
1507.02574
null
null
http://arxiv.org/pdf/1507.02574v2
2017-06-29T20:58:22Z
2015-07-09T16:02:54Z
Sparse Approximation via Generating Point Sets
$ \newcommand{\kalg}{{k_{\mathrm{alg}}}} \newcommand{\kopt}{{k_{\mathrm{opt}}}} \newcommand{\algset}{{T}} \renewcommand{\Re}{\mathbb{R}} \newcommand{\eps}{\varepsilon} \newcommand{\pth}[2][\!]{#1\left({#2}\right)} \newcommand{\npoints}{n} \newcommand{\ballD}{\mathsf{b}} \newcommand{\dataset}{{P}} $ For a set $\dataset$ of $\npoints$ points in the unit ball $\ballD \subseteq \Re^d$, consider the problem of finding a small subset $\algset \subseteq \dataset$ such that its convex-hull $\eps$-approximates the convex-hull of the original set. We present an efficient algorithm to compute such an $\eps'$-approximation of size $\kalg$, where $\eps'$ is a function of $\eps$, and $\kalg$ is a function of the minimum size $\kopt$ of such an $\eps$-approximation. Surprisingly, there is no dependency on the dimension $d$ in either bound. Furthermore, every point of $\dataset$ can be $\eps$-approximated by a convex-combination of points of $\algset$ that is $O(1/\eps^2)$-sparse. Our result can be viewed as a method for sparse, convex autoencoding: approximately representing the data in a compact way using sparse combinations of a small subset $\algset$ of the original data. The new algorithm can be kernelized, and it preserves sparsity in the original input.
[ "['Avrim Blum' 'Sariel Har-Peled' 'Benjamin Raichel']", "Avrim Blum, Sariel Har-Peled and Benjamin Raichel" ]
cs.LG stat.ML
null
1507.02592
null
null
http://arxiv.org/pdf/1507.02592v2
2015-09-01T09:38:07Z
2015-07-09T16:53:30Z
Fast rates in statistical and online learning
The speed with which a learning algorithm converges as it is presented with more data is a central problem in machine learning --- a fast rate of convergence means less data is needed for the same level of performance. The pursuit of fast rates in online and statistical learning has led to the discovery of many conditions in learning theory under which fast learning is possible. We show that most of these conditions are special cases of a single, unifying condition, that comes in two forms: the central condition for 'proper' learning algorithms that always output a hypothesis in the given model, and stochastic mixability for online algorithms that may make predictions outside of the model. We show that under surprisingly weak assumptions both conditions are, in a certain sense, equivalent. The central condition has a re-interpretation in terms of convexity of a set of pseudoprobabilities, linking it to density estimation under misspecification. For bounded losses, we show how the central condition enables a direct proof of fast rates and we prove its equivalence to the Bernstein condition, itself a generalization of the Tsybakov margin condition, both of which have played a central role in obtaining fast rates in statistical learning. Yet, while the Bernstein condition is two-sided, the central condition is one-sided, making it more suitable to deal with unbounded losses. In its stochastic mixability form, our condition generalizes both a stochastic exp-concavity condition identified by Juditsky, Rigollet and Tsybakov and Vovk's notion of mixability. Our unifying conditions thus provide a substantial step towards a characterization of fast rates in statistical learning, similar to how classical mixability characterizes constant regret in the sequential prediction with expert advice setting.
[ "['Tim van Erven' 'Peter D. Grünwald' 'Nishant A. Mehta' 'Mark D. Reid'\n 'Robert C. Williamson']", "Tim van Erven and Peter D. Gr\\\"unwald and Nishant A. Mehta and Mark D.\n Reid and Robert C. Williamson" ]
cs.LG quant-ph
null
1507.02642
null
null
http://arxiv.org/pdf/1507.02642v1
2015-07-09T18:32:19Z
2015-07-09T18:32:19Z
Quantum Inspired Training for Boltzmann Machines
We present an efficient classical algorithm for training deep Boltzmann machines (DBMs) that uses rejection sampling in concert with variational approximations to estimate the gradients of the training objective function. Our algorithm is inspired by a recent quantum algorithm for training DBMs. We obtain rigorous bounds on the errors in the approximate gradients; in turn, we find that choosing the instrumental distribution to minimize the alpha=2 divergence with the Gibbs state minimizes the asymptotic algorithmic complexity. Our rejection sampling approach can yield more accurate gradients than low-order contrastive divergence training and the costs incurred in finding increasingly accurate gradients can be easily parallelized. Finally our algorithm can train full Boltzmann machines and scales more favorably with the number of layers in a DBM than greedy contrastive divergence training.
[ "Nathan Wiebe, Ashish Kapoor, Christopher Granade, Krysta M Svore", "['Nathan Wiebe' 'Ashish Kapoor' 'Christopher Granade' 'Krysta M Svore']" ]
cs.NE cs.LG stat.ML
null
1507.02672
null
null
http://arxiv.org/pdf/1507.02672v2
2015-11-24T09:22:23Z
2015-07-09T19:52:19Z
Semi-Supervised Learning with Ladder Networks
We combine supervised learning with unsupervised learning in deep neural networks. The proposed model is trained to simultaneously minimize the sum of supervised and unsupervised cost functions by backpropagation, avoiding the need for layer-wise pre-training. Our work builds on the Ladder network proposed by Valpola (2015), which we extend by combining the model with supervision. We show that the resulting model reaches state-of-the-art performance in semi-supervised MNIST and CIFAR-10 classification, in addition to permutation-invariant MNIST classification with all labels.
[ "['Antti Rasmus' 'Harri Valpola' 'Mikko Honkala' 'Mathias Berglund'\n 'Tapani Raiko']", "Antti Rasmus and Harri Valpola and Mikko Honkala and Mathias Berglund\n and Tapani Raiko" ]
cs.LG cs.IR math.OC stat.ML
null
1507.02743
null
null
http://arxiv.org/pdf/1507.02743v1
2015-07-09T23:29:10Z
2015-07-09T23:29:10Z
Locally Non-linear Embeddings for Extreme Multi-label Learning
The objective in extreme multi-label learning is to train a classifier that can automatically tag a novel data point with the most relevant subset of labels from an extremely large label set. Embedding based approaches make training and prediction tractable by assuming that the training label matrix is low-rank and hence the effective number of labels can be reduced by projecting the high dimensional label vectors onto a low dimensional linear subspace. Still, leading embedding approaches have been unable to deliver high prediction accuracies or scale to large problems as the low rank assumption is violated in most real world applications. This paper develops the X-One classifier to address both limitations. The main technical contribution in X-One is a formulation for learning a small ensemble of local distance preserving embeddings which can accurately predict infrequently occurring (tail) labels. This allows X-One to break free of the traditional low-rank assumption and boost classification accuracy by learning embeddings which preserve pairwise distances between only the nearest label vectors. We conducted extensive experiments on several real-world as well as benchmark data sets and compared our method against state-of-the-art methods for extreme multi-label classification. Experiments reveal that X-One can make significantly more accurate predictions than the state-of-the-art methods including both embeddings (by as much as 35%) as well as trees (by as much as 6%). X-One can also scale efficiently to data sets with a million labels which are beyond the pale of leading embedding methods.
[ "Kush Bhatia and Himanshu Jain and Purushottam Kar and Prateek Jain and\n Manik Varma", "['Kush Bhatia' 'Himanshu Jain' 'Purushottam Kar' 'Prateek Jain'\n 'Manik Varma']" ]
cs.LG
null
1507.02750
null
null
http://arxiv.org/pdf/1507.02750v2
2015-09-25T11:47:38Z
2015-07-10T00:05:38Z
Utility-based Dueling Bandits as a Partial Monitoring Game
Partial monitoring is a generic framework for sequential decision-making with incomplete feedback. It encompasses a wide class of problems such as dueling bandits, learning with expert advice, dynamic pricing, dark pools, and label efficient prediction. We study the utility-based dueling bandit problem as an instance of the partial monitoring problem and prove that it fits the time-regret partial monitoring hierarchy as an easy instance, i.e. one with Theta(sqrt{T}) regret. We survey some partial monitoring algorithms and see how they could be used to solve dueling bandits efficiently. Keywords: Online learning, Dueling Bandits, Partial Monitoring, Partial Feedback, Multiarmed Bandits
[ "['Pratik Gajane' 'Tanguy Urvoy']", "Pratik Gajane and Tanguy Urvoy" ]
stat.ML cs.IT cs.LG math.IT
null
1507.02801
null
null
http://arxiv.org/pdf/1507.02801v2
2015-10-22T19:50:02Z
2015-07-10T08:13:02Z
Adaptive Mixtures of Factor Analyzers
A mixture of factor analyzers is a semi-parametric density estimator that generalizes the well-known mixtures of Gaussians model by allowing each Gaussian in the mixture to be represented in a different lower-dimensional manifold. This paper presents a robust and parsimonious model selection algorithm for training a mixture of factor analyzers, carrying out simultaneous clustering and locally linear, globally nonlinear dimensionality reduction. Permitting a different number of factors per mixture component, the algorithm adapts the model complexity to the data complexity. We compare the proposed algorithm with related automatic model selection algorithms on a number of benchmarks. The results indicate the effectiveness of this fast and robust approach in clustering, manifold learning and class-conditional modeling.
[ "['Heysem Kaya' 'Albert Ali Salah']", "Heysem Kaya and Albert Ali Salah" ]
cs.LG
null
1507.03032
null
null
http://arxiv.org/pdf/1507.03032v2
2015-12-14T01:58:25Z
2015-07-10T20:52:35Z
Spectral Smoothing via Random Matrix Perturbations
We consider stochastic smoothing of spectral functions of matrices using perturbations commonly studied in random matrix theory. We show that a spectral function remains spectral when smoothed using a unitarily invariant perturbation distribution. We then derive state-of-the-art smoothing bounds for the maximum eigenvalue function using the Gaussian Orthogonal Ensemble (GOE). Smoothing the maximum eigenvalue function is important for applications in semidefinite optimization and online learning. As a direct consequence of our GOE smoothing results, we obtain an $O((N \log N)^{1/4} \sqrt{T})$ expected regret bound for the online variance minimization problem using an algorithm that performs only a single maximum eigenvector computation per time step. Here $T$ is the number of rounds and $N$ is the matrix dimension. Our algorithm and its analysis also extend to the more general online PCA problem where the learner has to output a rank $k$ subspace. The algorithm just requires computing $k$ maximum eigenvectors per step and enjoys an $O(k (N \log N)^{1/4} \sqrt{T})$ expected regret bound.
[ "['Jacob Abernethy' 'Chansoo Lee' 'Ambuj Tewari']", "Jacob Abernethy, Chansoo Lee, Ambuj Tewari" ]
stat.ML cs.LG
10.1134/S105466181604009X
1507.03040
null
null
http://arxiv.org/abs/1507.03040v3
2016-07-02T22:07:43Z
2015-07-10T22:19:17Z
Tight Risk Bounds for Multi-Class Margin Classifiers
We consider a problem of risk estimation for large-margin multi-class classifiers. We propose a novel risk bound for the multi-class classification problem. The bound involves the marginal distribution of the classifier and the Rademacher complexity of the hypothesis class. We prove that our bound is tight in the number of classes. Finally, we compare our bound with the related ones and provide a simplified version of the bound for the multi-class classification with kernel based hypotheses.
[ "Yury Maximov, Daria Reshetova", "['Yury Maximov' 'Daria Reshetova']" ]
cs.LG
null
1507.03125
null
null
http://arxiv.org/pdf/1507.03125v1
2015-07-11T16:46:37Z
2015-07-11T16:46:37Z
A new boosting algorithm based on dual averaging scheme
The fields of machine learning and mathematical optimization are increasingly intertwined. The special topic on supervised learning and convex optimization examines this interplay. The training part of most supervised learning algorithms can usually be reduced to an optimization problem that minimizes a loss between model predictions and training data. While most optimization techniques focus on accuracy and speed of convergence, the qualities of a good optimization algorithm from the machine learning perspective can be quite different, since machine learning is more than fitting the data. Better optimization algorithms that minimize the training loss can possibly give very poor generalization performance. In this paper, we examine a particular kind of machine learning algorithm, boosting, whose training process can be viewed as functional coordinate descent on the exponential loss. We study the relation between optimization techniques and machine learning by implementing a new boosting algorithm, DABoost, based on a dual-averaging scheme, and studying its generalization performance. We show that DABoost, although slower in reducing the training error, in general enjoys a better generalization error than AdaBoost.
[ "['Nan Wang']", "Nan Wang" ]
stat.ML cs.LG cs.NA
null
1507.03194
null
null
http://arxiv.org/pdf/1507.03194v2
2015-08-28T13:43:28Z
2015-07-12T07:14:16Z
A Review of Nonnegative Matrix Factorization Methods for Clustering
Nonnegative Matrix Factorization (NMF) was first introduced as a low-rank matrix approximation technique, and has enjoyed a wide range of applications. Although NMF does not seem related to the clustering problem at first, it was shown that they are closely linked. In this report, we provide a gentle introduction to clustering and NMF before reviewing the theoretical relationship between them. We then explore several NMF variants, namely Sparse NMF, Projective NMF, Nonnegative Spectral Clustering and Cluster-NMF, along with their clustering interpretations.
[ "Ali Caner T\\\"urkmen", "['Ali Caner Türkmen']" ]
stat.ML cs.LG
null
1507.03229
null
null
http://arxiv.org/pdf/1507.03229v1
2015-07-12T13:07:26Z
2015-07-12T13:07:26Z
Homotopy Continuation Approaches for Robust SV Classification and Regression
In support vector machine (SVM) applications with unreliable data that contains a portion of outliers, non-robustness of SVMs often causes considerable performance deterioration. Although many approaches for improving the robustness of SVMs have been studied, two major challenges remain in robust SVM learning. First, robust learning algorithms are essentially formulated as non-convex optimization problems. It is thus important to develop a non-convex optimization method for robust SVM that can find a good local optimal solution. The second practical issue is how one can tune the hyperparameter that controls the balance between robustness and efficiency. Unfortunately, due to the non-convexity, robust SVM solutions with slightly different hyper-parameter values can be significantly different, which makes model selection highly unstable. In this paper, we address these two issues simultaneously by introducing a novel homotopy approach to non-convex robust SVM learning. Our basic idea is to introduce parametrized formulations of robust SVM which bridge the standard SVM and fully robust SVM via the parameter that represents the influence of outliers. We characterize the necessary and sufficient conditions of the local optimal solutions of robust SVM, and develop an algorithm that can trace a path of local optimal solutions when the influence of outliers is gradually decreased. An advantage of our homotopy approach is that it can be interpreted as simulated annealing, a common approach for finding a good local optimal solution in non-convex optimization problems. In addition, our homotopy method allows stable and efficient model selection based on the path of local optimal solutions. Empirical performances of the proposed approach are demonstrated through intensive numerical experiments both on robust classification and regression problems.
[ "['Shinya Suzumura' 'Kohei Ogawa' 'Masashi Sugiyama' 'Masayuki Karasuyama'\n 'Ichiro Takeuchi']", "Shinya Suzumura, Kohei Ogawa, Masashi Sugiyama, Masayuki Karasuyama,\n Ichiro Takeuchi" ]
cs.LG cs.CC cs.DS stat.ML
null
1507.03269
null
null
http://arxiv.org/pdf/1507.03269v1
2015-07-12T20:30:09Z
2015-07-12T20:30:09Z
Tensor principal component analysis via sum-of-squares proofs
We study a statistical model for the tensor principal component analysis problem introduced by Montanari and Richard: Given an order-$3$ tensor $T$ of the form $T = \tau \cdot v_0^{\otimes 3} + A$, where $\tau \geq 0$ is a signal-to-noise ratio, $v_0$ is a unit vector, and $A$ is a random noise tensor, the goal is to recover the planted vector $v_0$. For the case that $A$ has iid standard Gaussian entries, we give an efficient algorithm to recover $v_0$ whenever $\tau \geq \omega(n^{3/4} \log(n)^{1/4})$, and certify that the recovered vector is close to a maximum likelihood estimator, all with high probability over the random choice of $A$. The previous best algorithms with provable guarantees required $\tau \geq \Omega(n)$. In the regime $\tau \leq o(n)$, natural tensor-unfolding-based spectral relaxations for the underlying optimization problem break down (in the sense that their integrality gap is large). To go beyond this barrier, we use convex relaxations based on the sum-of-squares method. Our recovery algorithm proceeds by rounding a degree-$4$ sum-of-squares relaxation of the maximum-likelihood-estimation problem for the statistical model. To complement our algorithmic results, we show that degree-$4$ sum-of-squares relaxations break down for $\tau \leq O(n^{3/4}/\log(n)^{1/4})$, which demonstrates that improving our current guarantees (by more than logarithmic factors) would require new techniques or might even be intractable. Finally, we show how to exploit additional problem structure in order to solve our sum-of-squares relaxations, up to some approximation, very efficiently. Our fastest algorithm runs in nearly-linear time using shifted (matrix) power iteration and has similar guarantees as above. The analysis of this algorithm also confirms a variant of a conjecture of Montanari and Richard about singular vectors of tensor unfoldings.
[ "['Samuel B. Hopkins' 'Jonathan Shi' 'David Steurer']", "Samuel B. Hopkins and Jonathan Shi and David Steurer" ]
cs.LG
null
1507.03292
null
null
http://arxiv.org/pdf/1507.03292v4
2016-01-21T21:44:54Z
2015-07-12T23:27:50Z
Cluster-Aided Mobility Predictions
Predicting the future location of users in wireless networks has numerous applications, and can help service providers to improve the quality of service perceived by their clients. The location predictors proposed so far estimate the next location of a specific user by inspecting the past individual trajectories of this user. As a consequence, when the training data collected for a given user is limited, the resulting prediction is inaccurate. In this paper, we develop cluster-aided predictors that exploit past trajectories collected from all users to predict the next location of a given user. These predictors rely on clustering techniques and extract from the training data similarities among the mobility patterns of the various users to improve the prediction accuracy. Specifically, we present CAMP (Cluster-Aided Mobility Predictor), a cluster-aided predictor whose design is based on recent non-parametric Bayesian statistical tools. CAMP is robust and adaptive in the sense that it exploits similarities in users' mobility only if such similarities are really present in the training data. We analytically prove the consistency of the predictions provided by CAMP, and investigate its performance using two large-scale datasets. CAMP significantly outperforms existing predictors, and in particular those that only exploit individual past trajectories.
[ "Jaeseong Jeong, Mathieu Leconte and Alexandre Proutiere", "['Jaeseong Jeong' 'Mathieu Leconte' 'Alexandre Proutiere']" ]
cs.LG cs.SI
null
1507.03340
null
null
http://arxiv.org/pdf/1507.03340v1
2015-07-13T07:15:06Z
2015-07-13T07:15:06Z
Quantitative Evaluation of Performance and Validity Indices for Clustering the Web Navigational Sessions
Clustering techniques are widely used in Web Usage Mining to capture similar interests and trends among users accessing a Web site. For this purpose, web access logs generated at a particular web site are preprocessed to discover the user navigational sessions. Clustering techniques are then applied to group the user session data into user session clusters, where inter-cluster similarities are minimized while intra-cluster similarities are maximized. Since the application of different clustering algorithms generally results in different sets of cluster formation, it is important to evaluate the performance of these methods in terms of accuracy and validity of the clusters, and also the time required to generate them, using appropriate performance measures. This paper describes various validity and accuracy measures including Dunn's Index, Davies-Bouldin Index, C Index, Rand Index, Jaccard Index, Silhouette Index, Fowlkes-Mallows Index and Sum of the Squared Error (SSE). We conducted the performance evaluation of the following clustering techniques: k-Means, k-Medoids, Leader, Single Link Agglomerative Hierarchical and DBSCAN. These techniques are implemented and tested against the Web user navigational data. Finally, their performance results are presented and compared.
[ "Zahid Ansari, M.F. Azeem, Waseem Ahmed and A.Vinaya Babu", "['Zahid Ansari' 'M. F. Azeem' 'Waseem Ahmed' 'A. Vinaya Babu']" ]
cs.LG
10.1016/j.neucom.2015.12.110
1507.03372
null
null
http://arxiv.org/abs/1507.03372v2
2015-12-28T14:03:57Z
2015-07-13T09:50:41Z
Ordered Decompositional DAG Kernels Enhancements
In this paper, we show how the Ordered Decomposition DAGs (ODD) kernel framework, which allows the definition of graph kernels from tree kernels, can be used to easily define new state-of-the-art graph kernels. Here we consider a fast graph kernel based on the Subtree kernel (ST), and we propose various enhancements to increase its expressiveness. The proposed DAG kernel has the same worst-case complexity as the one based on ST, but an improved expressivity due to an augmented set of features. Moreover, we propose a novel weighting scheme for the features, which can be applied to other kernels of the ODD framework. These improvements allow the proposed kernels to improve on the classification performances of the ST-based kernel for several real-world datasets, reaching state-of-the-art performances.
[ "Giovanni Da San Martino, Nicol\\`o Navarin, Alessandro Sperduti", "['Giovanni Da San Martino' 'Nicolò Navarin' 'Alessandro Sperduti']" ]
cs.IT cs.LG math.IT math.OC
null
1507.03707
null
null
http://arxiv.org/pdf/1507.03707v1
2015-07-14T02:48:09Z
2015-07-14T02:48:09Z
Projected Wirtinger Gradient Descent for Low-Rank Hankel Matrix Completion in Spectral Compressed Sensing
This paper considers reconstructing a spectrally sparse signal from a small number of randomly observed time-domain samples. The signal of interest is a linear combination of complex sinusoids at $R$ distinct frequencies. The frequencies can assume any continuous values in the normalized frequency domain $[0,1)$. After converting the spectrally sparse signal recovery into a low rank structured matrix completion problem, we propose an efficient feasible point approach, named projected Wirtinger gradient descent (PWGD) algorithm, to efficiently solve this structured matrix completion problem. We further accelerate our proposed algorithm by a scheme inspired by FISTA. We give the convergence analysis of our proposed algorithms. Extensive numerical experiments are provided to illustrate the efficiency of our proposed algorithm. Different from earlier approaches, our algorithm can solve problems of very large dimensions very efficiently.
[ "['Jian-Feng Cai' 'Suhui Liu' 'Weiyu Xu']", "Jian-Feng Cai, Suhui Liu, and Weiyu Xu" ]
cs.DS cs.AI cs.DC cs.LG
null
1507.03719
null
null
http://arxiv.org/pdf/1507.03719v2
2016-08-11T21:20:02Z
2015-07-14T04:46:01Z
A New Framework for Distributed Submodular Maximization
A wide variety of problems in machine learning, including exemplar clustering, document summarization, and sensor placement, can be cast as constrained submodular maximization problems. A lot of recent effort has been devoted to developing distributed algorithms for these problems. However, these results suffer from a high number of rounds, suboptimal approximation ratios, or both. We develop a framework for bringing existing algorithms in the sequential setting to the distributed setting, achieving near optimal approximation ratios for many settings in only a constant number of MapReduce rounds. Our techniques also give a fast sequential algorithm for non-monotone maximization subject to a matroid constraint.
[ "Rafael da Ponte Barbosa, Alina Ene, Huy L. Nguyen, Justin Ward", "['Rafael da Ponte Barbosa' 'Alina Ene' 'Huy L. Nguyen' 'Justin Ward']" ]
cs.CV cs.LG q-bio.NC
null
1507.03751
null
null
http://arxiv.org/pdf/1507.03751v1
2015-07-14T07:57:39Z
2015-07-14T07:57:39Z
Closed Curves and Elementary Visual Object Identification
For two closed curves on a plane (discrete version) and local criteria for similarity of points on the curves, one gets a potential which describes the similarity between curve points. This is the base for a global similarity measure of closed curves (Fr\'echet distance). I use borderlines of handwritten digits to demonstrate an area of application. I imagine that measuring the similarity of closed curves is an essential and elementary task performed by a visual system. This approach to similarity measures may be used by visual systems.
[ "['Manfred Harringer']", "Manfred Harringer" ]
cs.LG stat.ML
null
1507.03867
null
null
http://arxiv.org/pdf/1507.03867v1
2015-07-14T14:38:23Z
2015-07-14T14:38:23Z
Rich Component Analysis
In many settings, we have multiple data sets (also called views) that capture different and overlapping aspects of the same phenomenon. We are often interested in finding patterns that are unique to one or to a subset of the views. For example, we might have one set of molecular observations and one set of physiological observations on the same group of individuals, and we want to quantify molecular patterns that are uncorrelated with physiology. Despite being a common problem, this is highly challenging when the correlations come from complex distributions. In this paper, we develop the general framework of Rich Component Analysis (RCA) to model settings where the observations from different views are driven by different sets of latent components, and each component can be a complex, high-dimensional distribution. We introduce algorithms based on cumulant extraction that provably learn each of the components without having to model the other components. We show how to integrate RCA with stochastic gradient descent into a meta-algorithm for learning general models, and demonstrate substantial improvement in accuracy on several synthetic and real datasets in both supervised and unsupervised tasks. Our method makes it possible to learn latent variable models when we don't have samples from the true model but only samples after complex perturbations.
[ "['Rong Ge' 'James Zou']", "Rong Ge and James Zou" ]
cs.LG
null
1507.04029
null
null
http://arxiv.org/pdf/1507.04029v1
2015-07-14T21:16:23Z
2015-07-14T21:16:23Z
Training artificial neural networks to learn a nondeterministic game
It is well known that artificial neural networks (ANNs) can learn deterministic automata. Learning nondeterministic automata is another matter. This is important because much of the world is nondeterministic, taking the form of unpredictable or probabilistic events that must be acted upon. If ANNs are to engage such phenomena, then they must be able to learn how to deal with nondeterminism. In this project the game of Pong poses a nondeterministic environment. The learner is given an incomplete view of the game state and underlying deterministic physics, resulting in a nondeterministic game. Three models were trained and tested on the game: Mona, Elman, and Numenta's NuPIC.
[ "['Thomas E. Portegys']", "Thomas E. Portegys" ]
cs.LG cs.AI math.ST stat.TH
null
1507.04121
null
null
http://arxiv.org/pdf/1507.04121v1
2015-07-15T08:37:52Z
2015-07-15T08:37:52Z
Solomonoff Induction Violates Nicod's Criterion
Nicod's criterion states that observing a black raven is evidence for the hypothesis H that all ravens are black. We show that Solomonoff induction does not satisfy Nicod's criterion: there are time steps in which observing black ravens decreases the belief in H. Moreover, while observing any computable infinite string compatible with H, the belief in H decreases infinitely often when using the unnormalized Solomonoff prior, but only finitely often when using the normalized Solomonoff prior. We argue that the fault is not with Solomonoff induction; instead we should reject Nicod's criterion.
[ "Jan Leike and Marcus Hutter", "['Jan Leike' 'Marcus Hutter']" ]
cs.AI cs.LG
null
1507.04124
null
null
http://arxiv.org/pdf/1507.04124v1
2015-07-15T08:46:06Z
2015-07-15T08:46:06Z
On the Computability of Solomonoff Induction and Knowledge-Seeking
Solomonoff induction is held as a gold standard for learning, but it is known to be incomputable. We quantify its incomputability by placing various flavors of Solomonoff's prior M in the arithmetical hierarchy. We also derive computability bounds for knowledge-seeking agents, and give a limit-computable weakly asymptotically optimal reinforcement learning agent.
[ "Jan Leike and Marcus Hutter", "['Jan Leike' 'Marcus Hutter']" ]
cs.CV cs.AI cs.LG
null
1507.04125
null
null
http://arxiv.org/pdf/1507.04125v2
2016-07-22T17:44:11Z
2015-07-15T08:50:09Z
Untangling AdaBoost-based Cost-Sensitive Classification. Part I: Theoretical Perspective
Boosting algorithms have been widely used to tackle a plethora of problems. In the last few years, a lot of approaches have been proposed to provide standard AdaBoost with cost-sensitive capabilities, each with a different focus. However, for the researcher, these algorithms shape a tangled set with diffuse differences and properties, lacking a unifying analysis to jointly compare, classify, evaluate and discuss those approaches on a common basis. In this series of two papers we aim to revisit the various proposals, both from theoretical (Part I) and practical (Part II) perspectives, in order to analyze their specific properties and behavior, with the final goal of identifying the algorithm providing the best and soundest results.
[ "['Iago Landesa-Vázquez' 'José Luis Alba-Castro']", "Iago Landesa-V\\'azquez, Jos\\'e Luis Alba-Castro" ]
cs.CV cs.AI cs.LG
null
1507.04126
null
null
http://arxiv.org/pdf/1507.04126v2
2016-07-22T17:44:33Z
2015-07-15T08:51:18Z
Untangling AdaBoost-based Cost-Sensitive Classification. Part II: Empirical Analysis
A lot of approaches, each following a different strategy, have been proposed in the literature to provide AdaBoost with cost-sensitive properties. In the first part of this series of two papers, we have presented these algorithms in a homogeneous notational framework, proposed a clustering scheme for them and performed a thorough theoretical analysis of those approaches with a fully theoretical foundation. The present paper, in order to complete our analysis, is focused on the empirical study of all the algorithms previously presented over a wide range of heterogeneous classification problems. The results of our experiments, confirming the theoretical conclusions, seem to reveal that the simplest approach, just based on cost-sensitive weight initialization, is the one showing the best and soundest results, despite having been recurrently overlooked in the literature.
[ "['Iago Landesa-Vázquez' 'José Luis Alba-Castro']", "Iago Landesa-V\\'azquez, Jos\\'e Luis Alba-Castro" ]
cs.LG stat.ML
null
1507.04155
null
null
http://arxiv.org/pdf/1507.04155v1
2015-07-15T10:31:00Z
2015-07-15T10:31:00Z
ALEVS: Active Learning by Statistical Leverage Sampling
Active learning aims to obtain a classifier of high accuracy using fewer label requests than passive learning, by selecting effective queries. Many active learning methods have been developed in the past two decades, which sample queries based on informativeness or representativeness of unlabeled data points. In this work, we explore a novel querying criterion based on statistical leverage scores. The statistical leverage score of a row in a matrix is the squared row-norm of the matrix containing its (top) left singular vectors, and is a measure of the influence of the row on the matrix. Leverage scores have been used for detecting highly influential points in regression diagnostics and have recently been shown to be useful for data analysis and randomized low-rank matrix approximation algorithms. We explore how sampling data instances with high statistical leverage scores performs in active learning. Our empirical comparison on several binary classification datasets indicates that querying high-leverage points is an effective strategy.
[ "['Cem Orhan' 'Öznur Taştan']", "Cem Orhan and \\\"Oznur Ta\\c{s}tan" ]
stat.ML cs.LG
null
1507.04201
null
null
http://arxiv.org/pdf/1507.04201v3
2016-09-28T17:19:48Z
2015-07-15T13:08:11Z
Minimum Density Hyperplanes
Associating distinct groups of objects (clusters) with contiguous regions of high probability density (high-density clusters), is central to many statistical and machine learning approaches to the classification of unlabelled data. We propose a novel hyperplane classifier for clustering and semi-supervised classification which is motivated by this objective. The proposed minimum density hyperplane minimises the integral of the empirical probability density function along it, thereby avoiding intersection with high density clusters. We show that the minimum density and the maximum margin hyperplanes are asymptotically equivalent, thus linking this approach to maximum margin clustering and semi-supervised support vector classifiers. We propose a projection pursuit formulation of the associated optimisation problem which allows us to find minimum density hyperplanes efficiently in practice, and evaluate its performance on a range of benchmark datasets. The proposed approach is found to be very competitive with state of the art methods for clustering and semi-supervised classification.
[ "['Nicos G. Pavlidis' 'David P. Hofmeyr' 'Sotiris K. Tasoulis']", "Nicos G. Pavlidis, David P. Hofmeyr, Sotiris K. Tasoulis" ]
cs.LG stat.ML
null
1507.04208
null
null
http://arxiv.org/pdf/1507.04208v3
2015-11-17T20:27:44Z
2015-07-15T13:30:46Z
Combinatorial Cascading Bandits
We propose combinatorial cascading bandits, a class of partial monitoring problems where at each step a learning agent chooses a tuple of ground items subject to constraints and receives a reward if and only if the weights of all chosen items are one. The weights of the items are binary, stochastic, and drawn independently of each other. The agent observes the index of the first chosen item whose weight is zero. This observation model arises in network routing, for instance, where the learning agent may only observe the first link in the routing path which is down, and blocks the path. We propose a UCB-like algorithm for solving our problems, CombCascade; and prove gap-dependent and gap-free upper bounds on its $n$-step regret. Our proofs build on recent work in stochastic combinatorial semi-bandits but also address two novel challenges of our setting, a non-linear reward function and partial observability. We evaluate CombCascade on two real-world problems and show that it performs well even when our modeling assumptions are violated. We also demonstrate that our setting requires a new learning algorithm.
[ "['Branislav Kveton' 'Zheng Wen' 'Azin Ashkan' 'Csaba Szepesvari']", "Branislav Kveton, Zheng Wen, Azin Ashkan, and Csaba Szepesvari" ]
stat.ML cs.LG
10.1109/TSP.2015.2500889
1507.04230
null
null
http://arxiv.org/abs/1507.04230v1
2015-07-15T14:24:24Z
2015-07-15T14:24:24Z
The Role of Principal Angles in Subspace Classification
Subspace models play an important role in a wide range of signal processing tasks, and this paper explores how the pairwise geometry of subspaces influences the probability of misclassification. When the mismatch between the signal and the model is vanishingly small, the probability of misclassification is determined by the product of the sines of the principal angles between subspaces. When the mismatch is more significant, the probability of misclassification is determined by the sum of the squares of the sines of the principal angles. Reliability of classification is derived in terms of the distribution of signal energy across principal vectors. Larger principal angles lead to smaller classification error, motivating a linear transform that optimizes principal angles. The transform presented here (TRAIT) preserves some specific characteristic of each individual class, and this approach is shown to be complementary to a previously developed transform (LRT) that enlarges inter-class distance while suppressing intra-class dispersion. Theoretical results are supported by demonstration of superior classification accuracy on synthetic and measured data even in the presence of significant model mismatch.
[ "['Jiaji Huang' 'Qiang Qiu' 'Robert Calderbank']", "Jiaji Huang and Qiang Qiu and Robert Calderbank" ]
cs.LG cs.AI cs.LO
null
1507.04285
null
null
http://arxiv.org/pdf/1507.04285v1
2015-07-15T16:32:03Z
2015-07-15T16:32:03Z
Learning Action Models: Qualitative Approach
In dynamic epistemic logic, actions are described using action models. In this paper we introduce a framework for studying learnability of action models from observations. We present first results concerning propositional action models. First we check two basic learnability criteria: finite identifiability (conclusively inferring the appropriate action model in finite time) and identifiability in the limit (inconclusive convergence to the right action model). We show that deterministic actions are finitely identifiable, while non-deterministic actions require more learning power: they are identifiable in the limit. We then move on to a particular learning method, which proceeds via restriction of a space of events within a learning-specific action model. This way of learning closely resembles the well-known update method from dynamic epistemic logic. We introduce several different learning methods suited for finite identifiability of particular types of deterministic actions.
[ "['Thomas Bolander' 'Nina Gierasimczuk']", "Thomas Bolander and Nina Gierasimczuk" ]
cs.LG cs.AI cs.DC cs.NE
null
1507.04296
null
null
http://arxiv.org/pdf/1507.04296v2
2015-07-16T09:27:06Z
2015-07-15T16:56:56Z
Massively Parallel Methods for Deep Reinforcement Learning
We present the first massively distributed architecture for deep reinforcement learning. This architecture uses four main components: parallel actors that generate new behaviour; parallel learners that are trained from stored experience; a distributed neural network to represent the value function or behaviour policy; and a distributed store of experience. We used our architecture to implement the Deep Q-Network algorithm (DQN). Our distributed algorithm was applied to 49 Atari 2600 games from the Arcade Learning Environment, using identical hyperparameters. Our performance surpassed non-distributed DQN in 41 of the 49 games and also reduced the wall-time required to achieve these results by an order of magnitude on most games.
[ "Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory\n Fearon, Alessandro De Maria, Vedavyas Panneershelvam, Mustafa Suleyman,\n Charles Beattie, Stig Petersen, Shane Legg, Volodymyr Mnih, Koray\n Kavukcuoglu, David Silver", "['Arun Nair' 'Praveen Srinivasan' 'Sam Blackwell' 'Cagdas Alcicek'\n 'Rory Fearon' 'Alessandro De Maria' 'Vedavyas Panneershelvam'\n 'Mustafa Suleyman' 'Charles Beattie' 'Stig Petersen' 'Shane Legg'\n 'Volodymyr Mnih' 'Koray Kavukcuoglu' 'David Silver']" ]
cs.LG cs.IT math.FA math.IT
10.1117/12.2189112
1507.04319
null
null
http://arxiv.org/abs/1507.04319v1
2015-07-15T18:38:00Z
2015-07-15T18:38:00Z
Learning Boolean functions with concentrated spectra
This paper discusses the theory and application of learning Boolean functions that are concentrated in the Fourier domain. We first estimate the VC dimension of this function class in order to establish a small sample complexity of learning in this case. Next, we propose a computationally efficient method of empirical risk minimization, and we apply this method to the MNIST database of handwritten digits. These results demonstrate the effectiveness of our model for modern classification tasks. We conclude with a list of open problems for future investigation.
[ "Dustin G. Mixon, Jesse Peterson", "['Dustin G. Mixon' 'Jesse Peterson']" ]
cs.NA cs.LG stat.ML
null
1507.04396
null
null
http://arxiv.org/pdf/1507.04396v1
2015-07-15T21:19:25Z
2015-07-15T21:19:25Z
Parallel MMF: a Multiresolution Approach to Matrix Computation
Multiresolution Matrix Factorization (MMF) was recently introduced as a method for finding multiscale structure and defining wavelets on graphs/matrices. In this paper we derive pMMF, a parallel algorithm for computing the MMF factorization. Empirically, the running time of pMMF scales linearly in the dimension for sparse matrices. We argue that this makes pMMF a valuable new computational primitive in its own right, and present experiments on using pMMF for two distinct purposes: compressing matrices and preconditioning large sparse linear systems.
[ "Risi Kondor, Nedelina Teneva, Pramod K. Mudrakarta", "['Risi Kondor' 'Nedelina Teneva' 'Pramod K. Mudrakarta']" ]
stat.ML cs.LG
null
1507.04457
null
null
http://arxiv.org/pdf/1507.04457v1
2015-07-16T06:00:51Z
2015-07-16T06:00:51Z
Preference Completion: Large-scale Collaborative Ranking from Pairwise Comparisons
In this paper we consider the collaborative ranking setting: a pool of users each provides a small number of pairwise preferences between $d$ possible items; from these we need to predict preferences of the users for items they have not yet seen. We do so by fitting a rank $r$ score matrix to the pairwise data, and provide two main contributions: (a) we show that an algorithm based on convex optimization provides good generalization guarantees once each user provides as few as $O(r\log^2 d)$ pairwise comparisons -- essentially matching the sample complexity required in the related matrix completion setting (which uses actual numerical as opposed to pairwise information), and (b) we develop a large-scale non-convex implementation, which we call AltSVM, that trains a factored form of the matrix via alternating minimization (which we show reduces to alternating SVM problems), and scales and parallelizes very well to large problem settings. It also outperforms common baselines on many moderately large popular collaborative filtering datasets in both NDCG and in other measures of ranking performance.
[ "['Dohyung Park' 'Joe Neeman' 'Jin Zhang' 'Sujay Sanghavi'\n 'Inderjit S. Dhillon']", "Dohyung Park, Joe Neeman, Jin Zhang, Sujay Sanghavi, Inderjit S.\n Dhillon" ]
cs.LG
null
1507.04502
null
null
http://arxiv.org/pdf/1507.04502v1
2015-07-16T09:28:27Z
2015-07-16T09:28:27Z
Towards Predicting First Daily Departure Times: a Gaussian Modeling Approach for Load Shift Forecasting
This work provides two statistical Gaussian forecasting methods for predicting First Daily Departure Times (FDDTs) of everyday use electric vehicles. This is important in smart grid applications to understand disconnection times of such mobile storage units, for instance to forecast storage of non-dispatchable loads (e.g. wind and solar power). We provide a review of the relevant state-of-the-art driving behavior features towards FDDT prediction, and then propose an approximated Gaussian method which qualitatively forecasts how many vehicles will depart within a given time frame, by assuming that departure times follow a normal distribution. This method considers sampling sessions as Poisson distributions which are superimposed to obtain a single approximated Gaussian model. Given the Gaussian distribution assumption of the departure times, we also model the problem with Gaussian Mixture Models (GMM), in which the number of clusters, set a priori, represents the desired time granularity. Evaluation has shown that for the dataset tested, low error and high confidence ($\approx 95\%$) are possible for 15 and 10 minute intervals, and that GMM outperforms traditional modeling but is less generalizable across datasets, as it is a closer fit to the sampling data. We conclude by discussing future possibilities and practical applications of the discussed model.
[ "Nicholas H. Kirk and Ilya Dianov", "['Nicholas H. Kirk' 'Ilya Dianov']" ]
cs.LG
null
1507.04523
null
null
http://arxiv.org/pdf/1507.04523v1
2015-07-16T11:02:13Z
2015-07-16T11:02:13Z
Upper-Confidence-Bound Algorithms for Active Learning in Multi-Armed Bandits
In this paper, we study the problem of estimating uniformly well the mean values of several distributions given a finite budget of samples. If the variance of the distributions were known, one could design an optimal sampling strategy by collecting a number of independent samples per distribution that is proportional to their variance. However, in the more realistic case where the distributions are not known in advance, one needs to design adaptive sampling strategies in order to select which distribution to sample from according to the previously observed samples. We describe two strategies based on pulling the distributions a number of times that is proportional to a high-probability upper-confidence-bound on their variance (built from previous observed samples) and report a finite-sample performance analysis on the excess estimation error compared to the optimal allocation. We show that the performance of these allocation strategies depends not only on the variances but also on the full shape of the distributions.
[ "['Alexandra Carpentier' 'Alessandro Lazaric' 'Mohammad Ghavamzadeh'\n 'Rémi Munos' 'Peter Auer' 'András Antos']", "Alexandra Carpentier, Alessandro Lazaric, Mohammad Ghavamzadeh, R\\'emi\n Munos, Peter Auer, Andr\\'as Antos" ]
cs.LG cs.IT math.IT stat.ML
10.1109/ICASSP.2014.6854029
1507.04540
null
null
http://arxiv.org/abs/1507.04540v3
2016-02-22T18:18:15Z
2015-07-16T12:16:02Z
Learning to classify with possible sensor failures
In this paper, we propose a general framework to learn a robust large-margin binary classifier when corrupt measurements, called anomalies, caused by sensor failure might be present in the training set. The goal is to minimize the generalization error of the classifier on non-corrupted measurements while controlling the false alarm rate associated with anomalous samples. By incorporating a non-parametric regularizer based on an empirical entropy estimator, we propose a Geometric-Entropy-Minimization regularized Maximum Entropy Discrimination (GEM-MED) method to learn to classify and detect anomalies in a joint manner. We demonstrate our approach using simulated data and a real multimodal data set. Our GEM-MED method can yield improved performance over previous robust classification methods in terms of both classification accuracy and anomaly detection rate.
[ "Tianpei Xie, Nasser M. Nasrabadi and Alfred O. Hero", "['Tianpei Xie' 'Nasser M. Nasrabadi' 'Alfred O. Hero']" ]
cs.CL cs.LG cs.NE
null
1507.04646
null
null
http://arxiv.org/pdf/1507.04646v1
2015-07-16T16:43:55Z
2015-07-16T16:43:55Z
A Dependency-Based Neural Network for Relation Classification
Previous research on relation classification has verified the effectiveness of using dependency shortest paths or subtrees. In this paper, we further explore how to make full use of these two types of dependency information in combination. We first propose a new structure, termed augmented dependency path (ADP), which is composed of the shortest dependency path between two entities and the subtrees attached to the shortest path. To exploit the semantic representation behind the ADP structure, we develop dependency-based neural networks (DepNN): a recursive neural network designed to model the subtrees, and a convolutional neural network to capture the most important features on the shortest path. Experiments on the SemEval-2010 dataset show that our proposed method achieves state-of-the-art results.
[ "Yang Liu, Furu Wei, Sujian Li, Heng Ji, Ming Zhou, Houfeng Wang", "['Yang Liu' 'Furu Wei' 'Sujian Li' 'Heng Ji' 'Ming Zhou' 'Houfeng Wang']" ]
stat.ML cs.LG
null
1507.04717
null
null
http://arxiv.org/pdf/1507.04717v6
2016-03-17T16:27:36Z
2015-07-16T19:26:27Z
Less is More: Nystr\"om Computational Regularization
We study Nystr\"om type subsampling approaches to large scale kernel methods, and prove learning bounds in the statistical learning setting, where random sampling and high probability estimates are considered. In particular, we prove that these approaches can achieve optimal learning bounds, provided the subsampling level is suitably chosen. These results suggest a simple incremental variant of Nystr\"om Kernel Regularized Least Squares, where the subsampling level implements a form of computational regularization, in the sense that it controls at the same time regularization and computations. Extensive experimental analysis shows that the considered approach achieves state of the art performances on benchmark large scale datasets.
[ "['Alessandro Rudi' 'Raffaello Camoriano' 'Lorenzo Rosasco']", "Alessandro Rudi, Raffaello Camoriano, Lorenzo Rosasco" ]
math.OC cs.LG stat.ML
null
1507.04734
null
null
http://arxiv.org/pdf/1507.04734v3
2017-04-12T03:45:29Z
2015-07-16T19:51:39Z
Variational Gram Functions: Convex Analysis and Optimization
We propose a new class of convex penalty functions, called \emph{variational Gram functions} (VGFs), that can promote pairwise relations, such as orthogonality, among a set of vectors in a vector space. These functions can serve as regularizers in convex optimization problems arising from hierarchical classification, multitask learning, and estimating vectors with disjoint supports, among other applications. We study convexity for VGFs, and give efficient characterizations for their convex conjugates, subdifferentials, and proximal operators. We discuss efficient optimization algorithms for regularized loss minimization problems where the loss admits a common, yet simple, variational representation and the regularizer is a VGF. These algorithms enjoy a simple kernel trick, an efficient line search, as well as computational advantages over first order methods based on the subdifferential or proximal maps. We also establish a general representer theorem for such learning problems. Lastly, numerical experiments on a hierarchical classification problem are presented to demonstrate the effectiveness of VGFs and the associated optimization algorithms.
[ "Amin Jalali, Maryam Fazel, Lin Xiao", "['Amin Jalali' 'Maryam Fazel' 'Lin Xiao']" ]
cs.LG cs.NE cs.SD
null
1507.04761
null
null
http://arxiv.org/pdf/1507.04761v1
2015-07-16T20:24:18Z
2015-07-16T20:24:18Z
Deep Learning and Music Adversaries
An adversary is essentially an algorithm intent on making a classification system perform in some particular way given an input, e.g., increase the probability of a false negative. Recent work builds adversaries for deep learning systems applied to image object recognition, which exploits the parameters of the system to find the minimal perturbation of the input image such that the network misclassifies it with high confidence. We adapt this approach to construct and deploy an adversary of deep learning systems applied to music content analysis. In our case, however, the input to the systems is magnitude spectral frames, which requires special care in order to produce valid input audio signals from network-derived perturbations. For two different train-test partitionings of two benchmark datasets, and two different deep architectures, we find that this adversary is very effective in defeating the resulting systems. We find, however, that the convolutional networks are more robust than systems based on a majority vote over individually classified audio frames. Furthermore, we integrate the adversary into the training of new deep systems, but do not find that this improves their resilience against the same adversary.
[ "['Corey Kereliuk' 'Bob L. Sturm' 'Jan Larsen']", "Corey Kereliuk and Bob L. Sturm and Jan Larsen" ]
stat.ML cs.LG
10.1007/s10994-017-5652-6
1507.04777
null
null
http://arxiv.org/abs/1507.04777v4
2017-07-17T21:10:27Z
2015-07-16T21:33:48Z
Sparse Probit Linear Mixed Model
Linear Mixed Models (LMMs) are important tools in statistical genetics. When used for feature selection, they allow to find a sparse set of genetic traits that best predict a continuous phenotype of interest, while simultaneously correcting for various confounding factors such as age, ethnicity and population structure. Formulated as models for linear regression, LMMs have been restricted to continuous phenotypes. We introduce the Sparse Probit Linear Mixed Model (Probit-LMM), where we generalize the LMM modeling paradigm to binary phenotypes. As a technical challenge, the model no longer possesses a closed-form likelihood function. In this paper, we present a scalable approximate inference algorithm that lets us fit the model to high-dimensional data sets. We show on three real-world examples from different domains that in the setup of binary labels, our algorithm leads to better prediction accuracies and also selects features which show less correlation with the confounding factors.
[ "Stephan Mandt, Florian Wenzel, Shinichi Nakajima, John P. Cunningham,\n Christoph Lippert, and Marius Kloft", "['Stephan Mandt' 'Florian Wenzel' 'Shinichi Nakajima' 'John P. Cunningham'\n 'Christoph Lippert' 'Marius Kloft']" ]
cs.IT cs.LG math.IT math.OC math.ST stat.TH
null
1507.04793
null
null
http://arxiv.org/pdf/1507.04793v2
2016-01-05T06:04:43Z
2015-07-16T23:03:00Z
Sharp Time--Data Tradeoffs for Linear Inverse Problems
In this paper we characterize sharp time-data tradeoffs for optimization problems used for solving linear inverse problems. We focus on the minimization of a least-squares objective subject to a constraint defined as the sub-level set of a penalty function. We present a unified convergence analysis of the gradient projection algorithm applied to such problems. We sharply characterize the convergence rate associated with a wide variety of random measurement ensembles in terms of the number of measurements and structural complexity of the signal with respect to the chosen penalty function. The results apply to both convex and nonconvex constraints, demonstrating that a linear convergence rate is attainable even though the least squares objective is not strongly convex in these settings. When specialized to Gaussian measurements our results show that such linear convergence occurs when the number of measurements is merely 4 times the minimal number required to recover the desired signal at all (a.k.a. the phase transition). We also achieve a slower but geometric rate of convergence precisely above the phase transition point. Extensive numerical results suggest that the derived rates exactly match the empirical performance.
[ "Samet Oymak, Benjamin Recht, and Mahdi Soltanolkotabi", "['Samet Oymak' 'Benjamin Recht' 'Mahdi Soltanolkotabi']" ]
cs.IR cs.CL cs.LG
null
1507.04798
null
null
http://arxiv.org/pdf/1507.04798v1
2015-07-16T23:11:45Z
2015-07-16T23:11:45Z
Exploratory topic modeling with distributional semantics
As we continue to collect and store textual data in a multitude of domains, we are regularly confronted with material whose largely unknown thematic structure we want to uncover. With unsupervised, exploratory analysis, no prior knowledge about the content is required and highly open-ended tasks can be supported. In the past few years, probabilistic topic modeling has emerged as a popular approach to this problem. Nevertheless, the representation of the latent topics as aggregations of semi-coherent terms limits their interpretability and level of detail. This paper presents an alternative approach to topic modeling that maps topics as a network for exploration, based on distributional semantics using learned word vectors. From the granular level of terms and their semantic similarity relations, global topic structures emerge as clustered regions and gradients of concepts. Moreover, the paper discusses the visual interactive representation of the topic map, which plays an important role in supporting its exploration.
[ "Samuel R\\\"onnqvist", "['Samuel Rönnqvist']" ]
cs.CL cs.AI cs.LG cs.NE
null
1507.04808
null
null
http://arxiv.org/pdf/1507.04808v3
2016-04-06T23:20:41Z
2015-07-17T00:21:39Z
Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models
We investigate the task of building open domain, conversational dialogue systems based on large dialogue corpora using generative models. Generative models produce system responses that are autonomously generated word-by-word, opening up the possibility for realistic, flexible interactions. In support of this goal, we extend the recently proposed hierarchical recurrent encoder-decoder neural network to the dialogue domain, and demonstrate that this model is competitive with state-of-the-art neural language models and back-off n-gram models. We investigate the limitations of this and similar approaches, and show how its performance can be improved by bootstrapping the learning from a larger question-answer pair corpus and from pretrained word embeddings.
[ "['Iulian V. Serban' 'Alessandro Sordoni' 'Yoshua Bengio' 'Aaron Courville'\n 'Joelle Pineau']", "Iulian V. Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville\n and Joelle Pineau" ]
cs.CV cs.LG cs.MM cs.SD
10.1145/2733373.2806293
1507.04831
null
null
http://arxiv.org/abs/1507.04831v1
2015-07-17T04:13:12Z
2015-07-17T04:13:12Z
Deep Multimodal Speaker Naming
Automatic speaker naming is the problem of localizing as well as identifying each speaking character in a TV/movie/live show video. This problem is challenging mainly due to its multimodal nature: the face cue alone is insufficient to achieve good performance. Previous multimodal approaches to this problem usually process the data of different modalities individually and merge them using handcrafted heuristics. Such approaches work well for simple scenes, but fail to achieve high performance for speakers with large appearance variations. In this paper, we propose a novel convolutional neural network (CNN) based learning framework to automatically learn the fusion function of both face and audio cues. We show that without using face tracking, facial landmark localization or subtitle/transcript, our system with robust multimodal feature extraction is able to achieve state-of-the-art speaker naming performance evaluated on two diverse TV series. The dataset and implementation of our algorithm are publicly available online.
[ "Yongtao Hu, Jimmy Ren, Jingwen Dai, Chang Yuan, Li Xu, and Wenping\n Wang", "['Yongtao Hu' 'Jimmy Ren' 'Jingwen Dai' 'Chang Yuan' 'Li Xu'\n 'Wenping Wang']" ]
cs.LG
null
1507.04888
null
null
http://arxiv.org/pdf/1507.04888v3
2016-03-11T11:02:00Z
2015-07-17T09:30:03Z
Maximum Entropy Deep Inverse Reinforcement Learning
This paper presents a general framework for exploiting the representational capacity of neural networks to approximate complex, nonlinear reward functions in the context of solving the inverse reinforcement learning (IRL) problem. We show in this context that the Maximum Entropy paradigm for IRL lends itself naturally to the efficient training of deep architectures. At test time, the approach leads to a computational complexity independent of the number of demonstrations, which makes it especially well-suited for applications in life-long learning scenarios. Our approach achieves performance commensurate with the state of the art on existing benchmarks while exceeding it on an alternative benchmark based on highly varying reward structures. Finally, we extend the basic architecture - which is equivalent to a simplified subclass of Fully Convolutional Neural Networks (FCNNs) with width one - to include larger convolutions in order to eliminate dependency on precomputed spatial features and work on raw input representations.
[ "Markus Wulfmeier, Peter Ondruska, Ingmar Posner", "['Markus Wulfmeier' 'Peter Ondruska' 'Ingmar Posner']" ]
cs.LG
null
1507.04910
null
null
http://arxiv.org/pdf/1507.04910v1
2015-07-17T10:39:52Z
2015-07-17T10:39:52Z
Lower Bounds for Multi-armed Bandit with Non-equivalent Multiple Plays
We study the stochastic multi-armed bandit problem with non-equivalent multiple plays where, at each step, an agent chooses not only a set of arms, but also their order, which influences the reward distribution. In several problem formulations with different assumptions, we derive lower bounds for regret with the standard $O(\log{t})$ asymptotics but novel coefficients, and provide optimal algorithms, thus proving that these bounds cannot be improved.
[ "Aleksandr Vorobev and Gleb Gusev", "['Aleksandr Vorobev' 'Gleb Gusev']" ]
cs.LG cs.AI stat.ML
null
1507.04997
null
null
http://arxiv.org/pdf/1507.04997v1
2015-07-17T15:26:06Z
2015-07-17T15:26:06Z
FRULER: Fuzzy Rule Learning through Evolution for Regression
In regression problems, TSK fuzzy systems are widely used due to the precision of the obtained models. Moreover, the use of simple linear TSK models is a good choice in many real problems due to the easy understanding of the relationship between the output and input variables. In this paper we present FRULER, a new genetic fuzzy system for automatically learning accurate and simple linguistic TSK fuzzy rule bases for regression problems. In order to reduce the complexity of the learned models while keeping a high accuracy, the algorithm consists of three stages: instance selection, multi-granularity fuzzy discretization of the input variables, and evolutionary learning of the rule base that uses Elastic Net regularization to obtain the consequents of the rules. Each stage was validated using 28 real-world datasets, and FRULER was compared with three state-of-the-art genetic fuzzy systems. Experimental results show that FRULER achieves the most accurate and simple models, even compared with approximative approaches.
[ "['I. Rodríguez-Fdez' 'M. Mucientes' 'A. Bugarín']", "I. Rodr\\'iguez-Fdez, M. Mucientes, A. Bugar\\'in" ]
cs.CV cs.LG cs.NE
null
1507.05053
null
null
http://arxiv.org/pdf/1507.05053v1
2015-07-17T17:48:49Z
2015-07-17T17:48:49Z
Massively Deep Artificial Neural Networks for Handwritten Digit Recognition
Greedily trained Restricted Boltzmann Machines yield a fairly low 0.72% error rate on the famous MNIST database of handwritten digits. All that was required to achieve this result was a high number of hidden layers consisting of many neurons, and a graphics card to greatly speed up the rate of learning.
[ "[\"Keiron O'Shea\"]", "Keiron O'Shea" ]
cs.LG stat.ML
10.1109/TSP.2016.2546231
1507.05087
null
null
http://arxiv.org/abs/1507.05087v1
2015-07-17T19:57:38Z
2015-07-17T19:57:38Z
Type I and Type II Bayesian Methods for Sparse Signal Recovery using Scale Mixtures
In this paper, we propose a generalized scale mixture family of distributions, namely the Power Exponential Scale Mixture (PESM) family, to model the sparsity-inducing priors currently in use for sparse signal recovery (SSR). We show that successful and popular methods such as LASSO, Reweighted $\ell_1$ and Reweighted $\ell_2$ can be formulated in a unified manner in a maximum a posteriori (MAP) or Type I Bayesian framework using an appropriate member of the PESM family as the sparsity-inducing prior. In addition, exploiting the natural hierarchical framework induced by the PESM family, we utilize these priors in a Type II framework and develop the corresponding EM based estimation algorithms. Some insight into the differences between Type I and Type II methods is provided; of particular interest in the algorithmic development is the Type II variant of the popular and successful reweighted $\ell_1$ method. Extensive empirical results are provided, and they show that the Type II methods exhibit better support recovery than the corresponding Type I methods.
[ "['Ritwik Giri' 'Bhaskar D. Rao']", "Ritwik Giri, Bhaskar D. Rao" ]
stat.ML cs.LG
null
1507.05181
null
null
http://arxiv.org/pdf/1507.05181v1
2015-07-18T12:58:11Z
2015-07-18T12:58:11Z
The Mondrian Process for Machine Learning
This report is concerned with the Mondrian process and its applications in machine learning. The Mondrian process is a guillotine-partition-valued stochastic process that possesses an elegant self-consistency property. The first part of the report uses simple concepts from applied probability to define the Mondrian process and explore its properties. The Mondrian process has been used as the main building block of a clever online random forest classification algorithm that turns out to be equivalent to its batch counterpart. We outline a slight adaptation of this algorithm to regression, as the remainder of the report uses regression as a case study of how Mondrian processes can be utilized in machine learning. In particular, the Mondrian process will be used to construct a fast approximation to the computationally expensive kernel ridge regression problem with a Laplace kernel. The complexity of random guillotine partitions generated by a Mondrian process and hence the complexity of the resulting regression models is controlled by a lifetime hyperparameter. It turns out that these models can be efficiently trained and evaluated for all lifetimes in a given range at once, without needing to retrain them from scratch for each lifetime value. This leads to an efficient procedure for determining the right model complexity for a dataset at hand. The limitation of having a single lifetime hyperparameter will motivate the final Mondrian grid model, in which each input dimension is endowed with its own lifetime parameter. In this model we preserve the property that its hyperparameters can be tweaked without needing to retrain the modified model from scratch.
[ "['Matej Balog' 'Yee Whye Teh']", "Matej Balog and Yee Whye Teh" ]
stat.ML cs.LG
null
1507.05259
null
null
http://arxiv.org/pdf/1507.05259v5
2017-03-23T18:10:34Z
2015-07-19T07:34:25Z
Fairness Constraints: Mechanisms for Fair Classification
Algorithmic decision making systems are ubiquitous across a wide variety of online as well as offline services. These systems rely on complex learning methods and vast amounts of data to optimize the service functionality, satisfaction of the end user and profitability. However, there is a growing concern that these automated decisions can lead, even in the absence of intent, to a lack of fairness, i.e., their outcomes can disproportionately hurt (or, benefit) particular groups of people sharing one or more sensitive attributes (e.g., race, sex). In this paper, we introduce a flexible mechanism to design fair classifiers by leveraging a novel intuitive measure of decision boundary (un)fairness. We instantiate this mechanism with two well-known classifiers, logistic regression and support vector machines, and show on real-world data that our mechanism allows for a fine-grained control on the degree of fairness, often at a small cost in terms of accuracy.
[ "Muhammad Bilal Zafar and Isabel Valera and Manuel Gomez Rodriguez and\n Krishna P. Gummadi", "['Muhammad Bilal Zafar' 'Isabel Valera' 'Manuel Gomez Rodriguez'\n 'Krishna P. Gummadi']" ]
cs.LG
null
1507.05307
null
null
http://arxiv.org/pdf/1507.05307v1
2015-07-19T16:55:08Z
2015-07-19T16:55:08Z
2 Notes on Classes with Vapnik-Chervonenkis Dimension 1
The Vapnik-Chervonenkis dimension is a combinatorial parameter that reflects the "complexity" of a set of sets (a.k.a. a concept class). It was introduced by Vapnik and Chervonenkis in their seminal 1971 paper and has since found many applications, most notably in machine learning theory and in computational geometry. Arguably the most influential consequence of the VC analysis is the fundamental theorem of statistical machine learning, stating that a concept class is learnable (in some precise sense) if and only if its VC-dimension is finite. Furthermore, for such classes a most simple learning rule - empirical risk minimization (ERM) - is guaranteed to succeed. The simplest non-trivial structures, in terms of the VC-dimension, are the classes (i.e., sets of subsets) for which that dimension is 1. In this note we show a couple of curious results concerning such classes. The first result shows that such classes share a very simple structure, and, as a corollary, the labeling information contained in any sample labeled by such a class can be compressed into a single instance. The second result shows that, due to some subtle measurability issues, in spite of the above mentioned fundamental theorem, there are classes of dimension 1 for which an ERM learning rule fails miserably.
[ "Shai Ben-David", "['Shai Ben-David']" ]