categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
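The schema above reads like the column dtypes of a tabular arXiv-metadata dump. As a minimal, hedged sketch of how such records could be loaded with pandas, assuming the records are stored as JSON Lines (the path "arxiv_metadata.jsonl" and the read_json call are illustrative assumptions, not part of this dump):

```python
# Hedged sketch: assumes the records below sit in a JSON Lines file with the
# fields listed in the schema. The file name is a hypothetical placeholder.
import pandas as pd

df = pd.read_json("arxiv_metadata.jsonl", lines=True)

# Cast columns to the dtypes shown above; 'year' is float64 because null years
# become NaN, and 'authors' remains a Python list in each row.
string_cols = ["categories", "doi", "id", "venue", "link",
               "updated", "published", "title", "abstract"]
df[string_cols] = df[string_cols].astype("string")
df["year"] = df["year"].astype("float64")

print(df.dtypes)                    # should mirror the schema above
print(df.loc[0, ["id", "title"]])   # first record's id and title
```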
cs.AI cs.LG stat.ML
null
1510.04935
null
null
http://arxiv.org/pdf/1510.04935v2
2015-12-07T18:05:52Z
2015-10-16T16:29:07Z
Holographic Embeddings of Knowledge Graphs
Learning embeddings of entities and relations is an efficient and versatile method to perform machine learning on relational data such as knowledge graphs. In this work, we propose holographic embeddings (HolE) to learn compositional vector space representations of entire knowledge graphs. The proposed method is related to holographic models of associative memory in that it employs circular correlation to create compositional representations. By using correlation as the compositional operator HolE can capture rich interactions but simultaneously remains efficient to compute, easy to train, and scalable to very large datasets. In extensive experiments we show that holographic embeddings are able to outperform state-of-the-art methods for link prediction in knowledge graphs and relational learning benchmark datasets.
[ "Maximilian Nickel, Lorenzo Rosasco, Tomaso Poggio", "['Maximilian Nickel' 'Lorenzo Rosasco' 'Tomaso Poggio']" ]
stat.ML cs.LG cs.NE
null
1510.04953
null
null
http://arxiv.org/pdf/1510.04953v1
2015-10-16T17:16:14Z
2015-10-16T17:16:14Z
Optimizing and Contrasting Recurrent Neural Network Architectures
Recurrent Neural Networks (RNNs) have long been recognized for their potential to model complex time series. However, it remains to be determined what optimization techniques and recurrent architectures can be used to best realize this potential. The experiments presented take a deep look into Hessian free optimization, a powerful second order optimization method that has shown promising results, but still does not enjoy widespread use. This algorithm was used to train a number of RNN architectures including standard RNNs, long short-term memory, multiplicative RNNs, and stacked RNNs on the task of character prediction. The insights from these experiments led to the creation of a new multiplicative LSTM hybrid architecture that outperformed both LSTM and multiplicative RNNs. When tested on a larger scale, multiplicative LSTM achieved character level modelling results competitive with the state of the art for RNNs using very different methodology.
[ "['Ben Krause']", "Ben Krause" ]
cs.LG
null
1510.05034
null
null
http://arxiv.org/pdf/1510.05034v2
2015-10-30T01:00:33Z
2015-10-16T21:49:30Z
Improving the Speed of Response of Learning Algorithms Using Multiple Models
This is the first of a series of papers that the authors propose to write on the subject of improving the speed of response of learning systems using multiple models. During the past two decades, the first author has worked on numerous methods for improving the stability, robustness, and performance of adaptive systems using multiple models and the other authors have collaborated with him on some of them. Independently, they have also worked on several learning methods, and have considerable experience with their advantages and limitations. In particular, they are well aware that it is common knowledge that machine learning is in general very slow. Numerous attempts have been made by researchers to improve the speed of convergence of algorithms in different contexts. In view of the success of multiple model based methods in improving the speed of convergence in adaptive systems, the authors believe that the same approach will also prove fruitful in the domain of learning. In this paper, a first attempt is made to use multiple models for improving the speed of response of the simplest learning schemes that have been studied, i.e., Learning Automata.
[ "['Kumpati S. Narendra' 'Snehasis Mukhopadyhay' 'Yu Wang']", "Kumpati S. Narendra, Snehasis Mukhopadyhay, and Yu Wang" ]
cs.DS cs.LG stat.ML
null
1510.05043
null
null
http://arxiv.org/pdf/1510.05043v1
2015-10-16T22:48:28Z
2015-10-16T22:48:28Z
A cost function for similarity-based hierarchical clustering
The development of algorithms for hierarchical clustering has been hampered by a shortage of precise objective functions. To help address this situation, we introduce a simple cost function on hierarchies over a set of points, given pairwise similarities between those points. We show that this criterion behaves sensibly in canonical instances and that it admits a top-down construction procedure with a provably good approximation ratio.
[ "Sanjoy Dasgupta", "['Sanjoy Dasgupta']" ]
cs.LG
null
1510.05067
null
null
http://arxiv.org/pdf/1510.05067v4
2016-02-04T08:35:58Z
2015-10-17T03:49:05Z
How Important is Weight Symmetry in Backpropagation?
Gradient backpropagation (BP) requires symmetric feedforward and feedback connections -- the same weights must be used for forward and backward passes. This "weight transport problem" (Grossberg 1987) is thought to be one of the main reasons to doubt BP's biological plausibility. Using 15 different classification datasets, we systematically investigate to what extent BP really depends on weight symmetry. In a study that turned out to be surprisingly similar in spirit to Lillicrap et al.'s demonstration (Lillicrap et al. 2014) but orthogonal in its results, our experiments indicate that: (1) the magnitudes of feedback weights do not matter to performance; (2) the signs of feedback weights do matter -- the more concordant the signs between feedforward and their corresponding feedback connections, the better; (3) with feedback weights having random magnitudes and 100% concordant signs, we were able to achieve the same or even better performance than SGD; (4) some normalizations/stabilizations are indispensable for such asymmetric BP to work, namely Batch Normalization (BN) (Ioffe and Szegedy 2015) and/or a "Batch Manhattan" (BM) update rule.
[ "['Qianli Liao' 'Joel Z. Leibo' 'Tomaso Poggio']", "Qianli Liao, Joel Z. Leibo, Tomaso Poggio" ]
cs.LG stat.ML
null
1510.05214
null
null
http://arxiv.org/pdf/1510.05214v1
2015-10-18T09:41:50Z
2015-10-18T09:41:50Z
Clustering Noisy Signals with Structured Sparsity Using Time-Frequency Representation
We propose a simple and efficient time-series clustering framework particularly suited for low Signal-to-Noise Ratio (SNR), by simultaneous smoothing and dimensionality reduction aimed at preserving clustering information. We extend the sparse K-means algorithm by incorporating structured sparsity, and use it to exploit the multi-scale property of wavelets and group structure in multivariate signals. Finally, we extract features invariant to translation and scaling with the scattering transform, which corresponds to a convolutional network with filters given by a wavelet operator, and use the network's structure in sparse clustering. By promoting sparsity, this transform can yield a low-dimensional representation of signals that gives improved clustering results on several real datasets.
[ "Tom Hope, Avishai Wagner and Or Zuk", "['Tom Hope' 'Avishai Wagner' 'Or Zuk']" ]
cs.LG cs.NA cs.SI
10.1109/IPDPSW.2016.58
1510.05237
null
null
http://arxiv.org/abs/1510.05237v1
2015-10-18T12:53:38Z
2015-10-18T12:53:38Z
Large Enforced Sparse Non-Negative Matrix Factorization
Non-negative matrix factorization (NMF) is a common method for generating topic models from text data. NMF is widely accepted for producing good results despite its relative simplicity of implementation and ease of computation. One challenge with applying NMF to large datasets is that intermediate matrix products often become dense, stressing the memory and compute elements of a system. In this article, we investigate a simple but powerful modification of a common NMF algorithm that enforces the generation of sparse intermediate and output matrices. This method enables the application of NMF to large datasets through improved memory and compute performance. Further, we demonstrate empirically that this method of enforcing sparsity in the NMF either preserves or improves both the accuracy of the resulting topic model and the convergence rate of the underlying algorithm.
[ "Brendan Gavin and Vijay Gadepally and Jeremy Kepner", "['Brendan Gavin' 'Vijay Gadepally' 'Jeremy Kepner']" ]
cs.SI cs.LG physics.data-an physics.soc-ph
10.1145/2872427.2883031
1510.05318
null
null
http://arxiv.org/abs/1510.05318v1
2015-10-18T22:16:38Z
2015-10-18T22:16:38Z
Latent Space Model for Multi-Modal Social Data
With the emergence of social networking services, researchers enjoy the increasing availability of large-scale heterogeneous datasets capturing online user interactions and behaviors. Traditional analysis of techno-social systems data has focused mainly on describing either the dynamics of social interactions, or the attributes and behaviors of the users. However, overwhelming empirical evidence suggests that the two dimensions affect one another, and therefore they should be jointly modeled and analyzed in a multi-modal framework. The benefits of such an approach include the ability to build better predictive models, leveraging social network information as well as user behavioral signals. To this purpose, here we propose the Constrained Latent Space Model (CLSM), a generalized framework that combines Mixed Membership Stochastic Blockmodels (MMSB) and Latent Dirichlet Allocation (LDA) incorporating a constraint that forces the latent space to concurrently describe the multiple data modalities. We derive an efficient inference algorithm based on Variational Expectation Maximization that has a computational cost linear in the size of the network, thus making it feasible to analyze massive social datasets. We validate the proposed framework on two problems: prediction of social interactions from user attributes and behaviors, and behavior prediction exploiting network information. We perform experiments with a variety of multi-modal social systems, spanning location-based social networks (Gowalla), social media services (Instagram, Orkut), e-commerce and review sites (Amazon, Ciao), and finally citation networks (Cora). The results indicate significant improvement in prediction accuracy over state of the art methods, and demonstrate the flexibility of the proposed approach for addressing a variety of different learning problems commonly occurring with multi-modal social data.
[ "['Yoon-Sik Cho' 'Greg Ver Steeg' 'Emilio Ferrara' 'Aram Galstyan']", "Yoon-Sik Cho, Greg Ver Steeg, Emilio Ferrara, Aram Galstyan" ]
stat.ML cs.LG
null
1510.05336
null
null
http://arxiv.org/pdf/1510.05336v1
2015-10-19T02:40:33Z
2015-10-19T02:40:33Z
Clustering is Easy When ....What?
It is well known that most of the common clustering objectives are NP-hard to optimize. In practice, however, clustering is being routinely carried out. One approach for providing theoretical understanding of this seeming discrepancy is to come up with notions of clusterability that distinguish realistically interesting input data from worst-case data sets. The hope is that there will be clustering algorithms that are provably efficient on such "clusterable" instances. This paper addresses the thesis that the computational hardness of clustering tasks goes away for inputs that one really cares about. In other words, that "Clustering is difficult only when it does not matter" (the \emph{CDNM thesis} for short). I wish to present a critical bird's-eye overview of the results published on this issue so far and to call attention to the gap between available and desirable results on this issue. A longer, more detailed version of this note is available as arXiv:1507.05307. I discuss which requirements should be met in order to provide formal support to the CDNM thesis and then examine existing results in view of these requirements and list some significant unsolved research challenges in that direction.
[ "Shai Ben-David", "['Shai Ben-David']" ]
stat.ME cs.LG math.OC stat.ML
null
1510.05417
null
null
http://arxiv.org/pdf/1510.05417v1
2015-10-19T10:44:53Z
2015-10-19T10:44:53Z
Piecewise-Linear Approximation for Feature Subset Selection in a Sequential Logit Model
This paper concerns a method of selecting a subset of features for a sequential logit model. Tanaka and Nakagawa (2014) proposed a mixed integer quadratic optimization formulation for solving the problem based on a quadratic approximation of the logistic loss function. However, since there is a significant gap between the logistic loss function and its quadratic approximation, their formulation may fail to find a good subset of features. To overcome this drawback, we apply a piecewise-linear approximation to the logistic loss function. Accordingly, we frame the feature subset selection problem of minimizing an information criterion as a mixed integer linear optimization problem. The computational results demonstrate that our piecewise-linear approximation approach found a better subset of features than the quadratic approximation approach.
[ "['Toshiki Sato' 'Yuichi Takano' 'Ryuhei Miyashiro']", "Toshiki Sato, Yuichi Takano, Ryuhei Miyashiro" ]
cs.LG stat.ML
null
1510.05477
null
null
http://arxiv.org/pdf/1510.05477v1
2015-10-19T13:58:37Z
2015-10-19T13:58:37Z
Accelerometer based Activity Classification with Variational Inference on Sticky HDP-SLDS
As part of daily monitoring of human activities, wearable sensors and devices are becoming increasingly popular sources of data. With the advent of smartphones equipped with an accelerometer, gyroscope, and camera, it is now possible to develop activity classification platforms everyone can use conveniently. In this paper, we propose a fast inference method for an unsupervised non-parametric time series model, namely variational inference for the sticky HDP-SLDS (Hierarchical Dirichlet Process Switching Linear Dynamical System). We show that the proposed algorithm can differentiate various indoor activities such as sitting, walking, turning, going up/down the stairs and taking the elevator using only the accelerometer of an Android smartphone (Samsung Galaxy S4). We used the front camera of the smartphone to annotate activity types precisely. We compared the proposed method with Hidden Markov Models with Gaussian emission probabilities on a dataset of 10 subjects. We showed the efficacy of the stickiness property. We further compared the variational inference to the Gibbs sampler on the same model and show that variational inference is faster by an order of magnitude.
[ "Mehmet Emin Basbug, Koray Ozcan and Senem Velipasalar", "['Mehmet Emin Basbug' 'Koray Ozcan' 'Senem Velipasalar']" ]
cs.LG
null
1510.05491
null
null
http://arxiv.org/pdf/1510.05491v2
2017-01-06T23:11:39Z
2015-10-19T14:25:27Z
AdaCluster : Adaptive Clustering for Heterogeneous Data
Clustering algorithms start with a fixed divergence, which captures the possibly asymmetric distance between a sample and a centroid. In the mixture model setting, the sample distribution plays the same role. When all attributes have the same topology and dispersion, the data are said to be homogeneous. If the prior knowledge of the distribution is inaccurate or the set of plausible distributions is large, an adaptive approach is essential. The motivation is more compelling for heterogeneous data, where the dispersion or the topology differs among attributes. We propose an adaptive approach to clustering using classes of parametrized Bregman divergences. We first show that the density of a steep exponential dispersion model (EDM) can be represented with a Bregman divergence. We then propose AdaCluster, an expectation-maximization (EM) algorithm to cluster heterogeneous data using classes of steep EDMs. We compare AdaCluster with EM for a Gaussian mixture model on synthetic data and nine UCI data sets. We also propose an adaptive hard clustering algorithm based on Generalized Method of Moments. We compare the hard clustering algorithm with k-means on the UCI data sets. We empirically verified that adaptively learning the underlying topology yields better clustering of heterogeneous data.
[ "['Mehmet Emin Basbug' 'Barbara Engelhardt']", "Mehmet Emin Basbug and Barbara Engelhardt" ]
cs.LG
null
1510.05577
null
null
http://arxiv.org/pdf/1510.05577v1
2015-10-19T16:43:02Z
2015-10-19T16:43:02Z
Application of Machine Learning Techniques in Human Activity Recognition
Human activity detection has seen a tremendous growth in the last decade, playing a major role in the field of pervasive computing. This emerging popularity can be attributed to its myriad of real-life applications primarily dealing with human-centric problems like healthcare and elder care. Many research efforts using data mining and machine learning techniques have been undertaken to accurately detect human activities for e-health systems. This paper reviews some of the predictive data mining algorithms and compares the accuracy and performances of these models. A discussion on the future research directions is subsequently offered.
[ "Jitenkumar Babubhai Rana, Rashmi Shetty, Tanya Jha", "['Jitenkumar Babubhai Rana' 'Rashmi Shetty' 'Tanya Jha']" ]
stat.ML cs.IT cs.LG math.IT
null
1510.05610
null
null
http://arxiv.org/pdf/1510.05610v4
2016-09-28T02:14:25Z
2015-10-19T18:19:16Z
Stochastically Transitive Models for Pairwise Comparisons: Statistical and Computational Issues
There are various parametric models for analyzing pairwise comparison data, including the Bradley-Terry-Luce (BTL) and Thurstone models, but their reliance on strong parametric assumptions is limiting. In this work, we study a flexible model for pairwise comparisons, under which the probabilities of outcomes are required only to satisfy a natural form of stochastic transitivity. This class includes parametric models including the BTL and Thurstone models as special cases, but is considerably more general. We provide various examples of models in this broader stochastically transitive class for which classical parametric models provide poor fits. Despite this greater flexibility, we show that the matrix of probabilities can be estimated at the same rate as in standard parametric models. On the other hand, unlike in the BTL and Thurstone models, computing the minimax-optimal estimator in the stochastically transitive model is non-trivial, and we explore various computationally tractable alternatives. We show that a simple singular value thresholding algorithm is statistically consistent but does not achieve the minimax rate. We then propose and study algorithms that achieve the minimax rate over interesting sub-classes of the full stochastically transitive class. We complement our theoretical results with thorough numerical simulations.
[ "['Nihar B. Shah' 'Sivaraman Balakrishnan' 'Adityanand Guntuboyina'\n 'Martin J. Wainwright']", "Nihar B. Shah, Sivaraman Balakrishnan, Adityanand Guntuboyina and\n Martin J. Wainwright" ]
cs.CE cs.LG q-bio.BM
null
1510.05682
null
null
http://arxiv.org/pdf/1510.05682v1
2015-10-19T20:46:04Z
2015-10-19T20:46:04Z
Protein Structure Prediction by Protein Alignments
Proteins are the basic building blocks of life. They usually perform functions by folding to a particular structure. Understanding the folding process could help researchers understand the functions of proteins and could also help to develop supplemental proteins for people with deficiencies and gain more insight into diseases associated with troublesome folding proteins. Experimental methods are both expensive and time consuming. In this thesis I introduce a new machine learning based method to predict the protein structure. The new method improves the performance from two directions: creating accurate protein alignments and predicting accurate protein contacts. First, I present an alignment framework MRFalign which goes beyond state-of-the-art methods and uses Markov Random Fields to model a protein family and align two proteins by aligning two MRFs together. Compared to other methods, which can only model local-range residue correlations, MRFs can model long-range residue interactions and thus encode global information in a protein. Secondly, I present a Group Graphical Lasso method for contact prediction that integrates joint multi-family Evolutionary Coupling analysis and supervised learning to improve accuracy on proteins without many sequence homologs. Different from single-family EC analysis that uses residue co-evolution information in only the target protein family, our joint EC analysis uses residue co-evolution in both the target family and its related families, which may have divergent sequences but similar folds. Our method can also integrate supervised learning methods to further improve accuracy. We evaluate the performance of both methods, including each of their components, on large public benchmarks. Experiments show that our methods can achieve better accuracy than existing state-of-the-art methods under all the measurements on most of the protein classes.
[ "Jianzhu Ma", "['Jianzhu Ma']" ]
cs.NE cs.LG
null
1510.05711
null
null
http://arxiv.org/pdf/1510.05711v2
2015-10-28T08:42:54Z
2015-10-19T22:38:09Z
Qualitative Projection Using Deep Neural Networks
Deep neural networks (DNN) abstract by demodulating the output of linear filters. In this article, we refine this definition of abstraction to show that the inputs of a DNN are abstracted with respect to the filters. Or, to restate, the abstraction is qualified by the filters. This leads us to introduce the notion of qualitative projection. We use qualitative projection to abstract MNIST hand-written digits with respect to the various dogs, horses, planes and cars of the CIFAR dataset. We then classify the MNIST digits according to the magnitude of their dogness, horseness, planeness and carness qualities, illustrating the generality of qualitative projection.
[ "Andrew J.R. Simpson", "['Andrew J. R. Simpson']" ]
cs.LG stat.ML
null
1510.05830
null
null
http://arxiv.org/pdf/1510.05830v2
2016-02-23T20:50:55Z
2015-10-20T10:48:40Z
Unsupervised Ensemble Learning with Dependent Classifiers
In unsupervised ensemble learning, one obtains predictions from multiple sources or classifiers, yet without knowing the reliability and expertise of each source, and with no labeled data to assess it. The task is to combine these possibly conflicting predictions into an accurate meta-learner. Most works to date assumed perfect diversity between the different sources, a property known as conditional independence. In realistic scenarios, however, this assumption is often violated, and ensemble learners based on it can be severely sub-optimal. The key challenges we address in this paper are: (i) how to detect, in an unsupervised manner, strong violations of conditional independence; and (ii) how to construct a suitable meta-learner. To this end we introduce a statistical model that allows for dependencies between classifiers. Our main contributions are the development of novel unsupervised methods to detect strongly dependent classifiers, better estimate their accuracies, and construct an improved meta-learner. Using both artificial and real datasets, we showcase the importance of taking classifier dependencies into account and the competitive performance of our approach.
[ "Ariel Jaffe, Ethan Fetaya, Boaz Nadler, Tingting Jiang, Yuval Kluger", "['Ariel Jaffe' 'Ethan Fetaya' 'Boaz Nadler' 'Tingting Jiang'\n 'Yuval Kluger']" ]
cs.SD cs.LG
null
1510.05937
null
null
http://arxiv.org/pdf/1510.05937v2
2016-03-31T05:33:49Z
2015-10-20T15:49:59Z
Binary Speaker Embedding
The popular i-vector model represents speakers as low-dimensional continuous vectors (i-vectors), and hence it is a way of continuous speaker embedding. In this paper, we investigate binary speaker embedding, which transforms i-vectors to binary vectors (codes) by a hash function. We start from locality sensitive hashing (LSH), a simple binarization approach where binary codes are derived from a set of random hash functions. A potential problem of LSH is that the randomly sampled hash functions might be suboptimal. We therefore propose an improved Hamming distance learning approach, where the hash function is learned by a variable-sized block training that projects each dimension of the original i-vectors to variable-sized binary codes independently. Our experiments show that binary speaker embedding can deliver competitive or even better results on both speaker verification and identification tasks, while the memory usage and the computation cost are significantly reduced.
[ "['Lantian Li' 'Dong Wang' 'Chao Xing' 'Kaimin Yu' 'Thomas Fang Zheng']", "Lantian Li and Dong Wang and Chao Xing and Kaimin Yu and Thomas Fang\n Zheng" ]
cs.SD cs.LG
null
1510.05940
null
null
http://arxiv.org/pdf/1510.05940v2
2016-03-31T05:27:17Z
2015-10-20T16:01:05Z
Max-margin Metric Learning for Speaker Recognition
Probabilistic linear discriminant analysis (PLDA) is a popular normalization approach for the i-vector model, and has delivered state-of-the-art performance in speaker recognition. A potential problem of the PLDA model, however, is that it essentially assumes Gaussian distributions over speaker vectors, which is not always true in practice. Additionally, the objective function is not directly related to the goal of the task, e.g., discriminating true speakers and imposters. In this paper, we propose a max-margin metric learning approach to solve these problems. It learns a linear transform with the criterion that the margin between target and imposter trials is maximized. Experiments conducted on the SRE08 core test show that compared to PLDA, the new approach can obtain comparable or even better performance, though the scoring is simply a cosine computation.
[ "Lantian Li and Dong Wang and Chao Xing and Thomas Fang Zheng", "['Lantian Li' 'Dong Wang' 'Chao Xing' 'Thomas Fang Zheng']" ]
math.PR cs.LG cs.SI stat.ML
null
1510.05956
null
null
http://arxiv.org/pdf/1510.05956v6
2016-05-21T19:41:08Z
2015-10-20T16:47:27Z
Optimal Cluster Recovery in the Labeled Stochastic Block Model
We consider the problem of community detection or clustering in the labeled Stochastic Block Model (LSBM) with a finite number $K$ of clusters of sizes linearly growing with the global population of items $n$. Every pair of items is labeled independently at random, and label $\ell$ appears with probability $p(i,j,\ell)$ between two items in clusters indexed by $i$ and $j$, respectively. The objective is to reconstruct the clusters from the observation of these random labels. Clustering under the SBM and its extensions has attracted much attention recently. Most existing work aimed at characterizing the set of parameters such that it is possible to infer clusters either positively correlated with the true clusters, or with a vanishing proportion of misclassified items, or exactly matching the true clusters. We find the set of parameters such that there exists a clustering algorithm with at most $s$ misclassified items on average under the general LSBM and for any $s=o(n)$, which solves one open problem raised in \cite{abbe2015community}. We further develop an algorithm, based on simple spectral methods, that achieves this fundamental performance limit within $O(n \mbox{polylog}(n))$ computations and without the a-priori knowledge of the model parameters.
[ "['Se-Young Yun' 'Alexandre Proutiere']", "Se-Young Yun and Alexandre Proutiere" ]
cs.CV cs.LG cs.NE
null
1510.05970
null
null
http://arxiv.org/pdf/1510.05970v2
2016-05-18T19:53:41Z
2015-10-20T17:15:05Z
Stereo Matching by Training a Convolutional Neural Network to Compare Image Patches
We present a method for extracting depth information from a rectified image pair. Our approach focuses on the first stage of many stereo algorithms: the matching cost computation. We approach the problem by learning a similarity measure on small image patches using a convolutional neural network. Training is carried out in a supervised manner by constructing a binary classification data set with examples of similar and dissimilar pairs of patches. We examine two network architectures for this task: one tuned for speed, the other for accuracy. The output of the convolutional neural network is used to initialize the stereo matching cost. A series of post-processing steps follow: cross-based cost aggregation, semiglobal matching, a left-right consistency check, subpixel enhancement, a median filter, and a bilateral filter. We evaluate our method on the KITTI 2012, KITTI 2015, and Middlebury stereo data sets and show that it outperforms other approaches on all three data sets.
[ "Jure \\v{Z}bontar and Yann LeCun", "['Jure Žbontar' 'Yann LeCun']" ]
cs.LG
null
1510.05976
null
null
http://arxiv.org/pdf/1510.05976v1
2015-10-20T17:27:12Z
2015-10-20T17:27:12Z
Transductive Optimization of Top k Precision
Consider a binary classification problem in which the learner is given a labeled training set, an unlabeled test set, and is restricted to choosing exactly $k$ test points to output as positive predictions. Problems of this kind---{\it transductive precision@$k$}---arise in information retrieval, digital advertising, and reserve design for endangered species. Previous methods separate the training of the model from its use in scoring the test points. This paper introduces a new approach, Transductive Top K (TTK), that seeks to minimize the hinge loss over all training instances under the constraint that exactly $k$ test instances are predicted as positive. The paper presents two optimization methods for this challenging problem. Experiments and analysis confirm the importance of incorporating the knowledge of $k$ into the learning process. Experimental evaluations of the TTK approach show that the performance of TTK matches or exceeds existing state-of-the-art methods on 7 UCI datasets and 3 reserve design problem instances.
[ "['Li-Ping Liu' 'Thomas G. Dietterich' 'Nan Li' 'Zhi-Hua Zhou']", "Li-Ping Liu and Thomas G. Dietterich and Nan Li and Zhi-Hua Zhou" ]
cs.LG
null
1510.06002
null
null
http://arxiv.org/pdf/1510.06002v2
2015-10-27T22:25:33Z
2015-10-20T18:25:45Z
Fast and Scalable Structural SVM with Slack Rescaling
We present an efficient method for training slack-rescaled structural SVM. Although finding the most violating label in a margin-rescaled formulation is often easy since the target function decomposes with respect to the structure, this is not the case for a slack-rescaled formulation, and finding the most violated label might be very difficult. Our core contribution is an efficient method for finding the most-violating-label in a slack-rescaled formulation, given an oracle that returns the most-violating-label in a (slightly modified) margin-rescaled formulation. We show that our method enables accurate and scalable training for slack-rescaled SVMs, reducing runtime by an order of magnitude compared to previous approaches to slack-rescaled SVMs.
[ "['Heejin Choi' 'Ofer Meshi' 'Nathan Srebro']", "Heejin Choi, Ofer Meshi, Nathan Srebro" ]
cs.LG
null
1510.06024
null
null
http://arxiv.org/pdf/1510.06024v1
2015-10-19T22:33:30Z
2015-10-19T22:33:30Z
Robust Semi-Supervised Classification for Multi-Relational Graphs
Graph-regularized semi-supervised learning has been used effectively for classification when (i) instances are connected through a graph, and (ii) labeled data is scarce. If available, using multiple relations (or graphs) between the instances can improve the prediction performance. On the other hand, when these relations have varying levels of veracity and exhibit varying relevance for the task, very noisy and/or irrelevant relations may deteriorate the performance. As a result, an effective weighting scheme needs to be put in place. In this work, we propose a robust and scalable approach for multi-relational graph-regularized semi-supervised classification. Under a convex optimization scheme, we simultaneously infer weights for the multiple graphs as well as a solution. We provide a careful analysis of the inferred weights, based on which we devise an algorithm that filters out irrelevant and noisy graphs and produces weights proportional to the informativeness of the remaining graphs. Moreover, the proposed method is linearly scalable w.r.t. the number of edges in the union of the multiple graphs. Through extensive experiments we show that our method yields superior results under different noise models, and under increasing number of noisy graphs and intensity of noise, as compared to a list of baselines and state-of-the-art approaches.
[ "['Junting Ye' 'Leman Akoglu']", "Junting Ye, Leman Akoglu" ]
cs.LG math.NA math.OC stat.ML
null
1510.06083
null
null
http://arxiv.org/pdf/1510.06083v1
2015-10-20T22:55:48Z
2015-10-20T22:55:48Z
Regularization vs. Relaxation: A conic optimization perspective of statistical variable selection
Variable selection is a fundamental task in statistical data analysis. Sparsity-inducing regularization methods are a popular class of methods that simultaneously perform variable selection and model estimation. The central problem is a quadratic optimization problem with an l0-norm penalty. Exactly enforcing the l0-norm penalty is computationally intractable for larger scale problems, so different sparsity-inducing penalty functions that approximate the l0-norm have been introduced. In this paper, we show that viewing the problem from a convex relaxation perspective offers new insights. In particular, we show that a popular sparsity-inducing concave penalty function known as the Minimax Concave Penalty (MCP), and the reverse Huber penalty derived in a recent work by Pilanci, Wainwright and Ghaoui, can both be derived as special cases of a lifted convex relaxation called the perspective relaxation. The optimal perspective relaxation is a related minimax problem that balances the overall convexity and tightness of approximation to the l0 norm. We show it can be solved by a semidefinite relaxation. Moreover, a probabilistic interpretation of the semidefinite relaxation reveals connections with the boolean quadric polytope in combinatorial optimization. Finally by reformulating the l0-norm penalized problem as a two-level problem, with the inner level being a Max-Cut problem, our proposed semidefinite relaxation can be realized by replacing the inner level problem with its semidefinite relaxation studied by Goemans and Williamson. This interpretation suggests using the Goemans-Williamson rounding procedure to find approximate solutions to the l0-norm penalized problem. Numerical experiments demonstrate the tightness of our proposed semidefinite relaxation, and the effectiveness of finding approximate solutions by Goemans-Williamson rounding.
[ "['Hongbo Dong' 'Kun Chen' 'Jeff Linderoth']", "Hongbo Dong and Kun Chen and Jeff Linderoth" ]
cs.LG cs.AI
null
1510.06143
null
null
http://arxiv.org/pdf/1510.06143v4
2015-11-11T05:16:06Z
2015-10-21T06:23:55Z
High Performance Latent Variable Models
Latent variable models have accumulated a considerable amount of interest from the industry and academia for their versatility in a wide range of applications. A large amount of effort has been made to develop systems that are able to extend these models to a large scale, in the hope of making use of them on industry-scale data. In this paper, we describe a system that operates at a scale orders of magnitude higher than previous works, and an order of magnitude faster than the state-of-the-art system at the same scale, at the same time showing more robustness and more accurate results. Our system uses a number of advances in distributed inference: high performance in synchronization of sufficient statistics with a relaxed consistency model; fast sampling, using the Metropolis-Hastings-Walker method to overcome dense generative models; statistical modeling, moving beyond Latent Dirichlet Allocation (LDA) to Pitman-Yor distributions (PDP) and Hierarchical Dirichlet Process (HDP) models; sophisticated parameter projection schemes, to resolve the conflicts within the constraint between parameters arising from the relaxed consistency model. This work significantly extends the domain of applicability of what is commonly known as the Parameter Server. We obtain results with up to hundreds of billions of tokens, thousands of topics, and a vocabulary of a few million token-types, using up to 60,000 processor cores operating on a production cluster of a large Internet company. This demonstrates the feasibility of scaling to problems orders of magnitude larger than any previously published work.
[ "Aaron Q. Li, Amr Ahmed, Mu Li, Vanja Josifovski", "['Aaron Q. Li' 'Amr Ahmed' 'Mu Li' 'Vanja Josifovski']" ]
cs.IT cs.LG math.IT stat.ML
10.1109/JSTSP.2016.2548442
1510.06188
null
null
http://arxiv.org/abs/1510.06188v3
2016-03-28T15:32:26Z
2015-10-21T10:03:45Z
Learning-based Compressive Subsampling
The problem of recovering a structured signal $\mathbf{x} \in \mathbb{C}^p$ from a set of dimensionality-reduced linear measurements $\mathbf{b} = \mathbf {A}\mathbf {x}$ arises in a variety of applications, such as medical imaging, spectroscopy, Fourier optics, and computerized tomography. Due to computational and storage complexity or physical constraints imposed by the problem, the measurement matrix $\mathbf{A} \in \mathbb{C}^{n \times p}$ is often of the form $\mathbf{A} = \mathbf{P}_{\Omega}\boldsymbol{\Psi}$ for some orthonormal basis matrix $\boldsymbol{\Psi}\in \mathbb{C}^{p \times p}$ and subsampling operator $\mathbf{P}_{\Omega}: \mathbb{C}^{p} \rightarrow \mathbb{C}^{n}$ that selects the rows indexed by $\Omega$. This raises the fundamental question of how best to choose the index set $\Omega$ in order to optimize the recovery performance. Previous approaches to addressing this question rely on non-uniform \emph{random} subsampling using application-specific knowledge of the structure of $\mathbf{x}$. In this paper, we instead take a principled learning-based approach in which a \emph{fixed} index set is chosen based on a set of training signals $\mathbf{x}_1,\dotsc,\mathbf{x}_m$. We formulate combinatorial optimization problems seeking to maximize the energy captured in these signals in an average-case or worst-case sense, and we show that these can be efficiently solved either exactly or approximately via the identification of modularity and submodularity structures. We provide both deterministic and statistical theoretical guarantees showing how the resulting measurement matrices perform on signals differing from the training signals, and we provide numerical examples showing our approach to be effective on a variety of data sets.
[ "['Luca Baldassarre' 'Yen-Huan Li' 'Jonathan Scarlett' 'Baran Gözcü'\n 'Ilija Bogunovic' 'Volkan Cevher']", "Luca Baldassarre and Yen-Huan Li and Jonathan Scarlett and Baran\n G\\\"ozc\\\"u and Ilija Bogunovic and Volkan Cevher" ]
cs.AI cs.LG
null
1510.06335
null
null
http://arxiv.org/pdf/1510.06335v2
2016-04-18T21:09:58Z
2015-10-21T16:42:55Z
Time-Sensitive Bayesian Information Aggregation for Crowdsourcing Systems
Crowdsourcing systems commonly face the problem of aggregating multiple judgments provided by potentially unreliable workers. In addition, several aspects of the design of efficient crowdsourcing processes, such as defining workers' bonuses, fair prices and time limits of the tasks, involve knowledge of the likely duration of the task at hand. Bringing this together, in this work we introduce a new time-sensitive Bayesian aggregation method that simultaneously estimates a task's duration and obtains reliable aggregations of crowdsourced judgments. Our method, called BCCTime, builds on the key insight that the time taken by a worker to perform a task is an important indicator of the likely quality of the produced judgment. To capture this, BCCTime uses latent variables to represent the uncertainty about the workers' completion time, the tasks' duration and the workers' accuracy. To relate the quality of a judgment to the time a worker spends on a task, our model assumes that each task is completed within a latent time window within which all workers with a propensity to genuinely attempt the labelling task (i.e., no spammers) are expected to submit their judgments. In contrast, workers with a lower propensity to valid labeling, such as spammers, bots or lazy labelers, are assumed to perform tasks considerably faster or slower than the time required by normal workers. Specifically, we use efficient message-passing Bayesian inference to learn approximate posterior probabilities of (i) the confusion matrix of each worker, (ii) the propensity to valid labeling of each worker, (iii) the unbiased duration of each task and (iv) the true label of each task. Using two real-world public datasets for entity linking tasks, we show that BCCTime produces up to 11% more accurate classifications and up to 100% more informative estimates of a task's duration compared to state-of-the-art methods.
[ "Matteo Venanzi, John Guiver, Pushmeet Kohli, Nick Jennings", "['Matteo Venanzi' 'John Guiver' 'Pushmeet Kohli' 'Nick Jennings']" ]
quant-ph cs.LG stat.ML
null
1510.06356
null
null
http://arxiv.org/pdf/1510.06356v1
2015-10-21T18:21:39Z
2015-10-21T18:21:39Z
Application of Quantum Annealing to Training of Deep Neural Networks
In Deep Learning, a well-known approach for training a Deep Neural Network starts by training a generative Deep Belief Network model, typically using Contrastive Divergence (CD), then fine-tuning the weights using backpropagation or other discriminative techniques. However, the generative training can be time-consuming due to the slow mixing of Gibbs sampling. We investigated an alternative approach that estimates model expectations of Restricted Boltzmann Machines using samples from a D-Wave quantum annealing machine. We tested this method on a coarse-grained version of the MNIST data set. In our tests we found that the quantum sampling-based training approach achieves comparable or better accuracy with significantly fewer iterations of generative training than conventional CD-based training. Further investigation is needed to determine whether similar improvements can be achieved for other data sets, and to what extent these improvements can be attributed to quantum effects.
[ "Steven H. Adachi and Maxwell P. Henderson", "['Steven H. Adachi' 'Maxwell P. Henderson']" ]
stat.ML cs.LG
null
1510.06423
null
null
http://arxiv.org/pdf/1510.06423v4
2018-08-12T16:03:00Z
2015-10-21T20:35:13Z
Optimization as Estimation with Gaussian Processes in Bandit Settings
Recently, there has been rising interest in Bayesian optimization -- the optimization of an unknown function with assumptions usually expressed by a Gaussian Process (GP) prior. We study an optimization strategy that directly uses an estimate of the argmax of the function. This strategy offers both practical and theoretical advantages: no tradeoff parameter needs to be selected, and, moreover, we establish close connections to the popular GP-UCB and GP-PI strategies. Our approach can be understood as automatically and adaptively trading off exploration and exploitation in GP-UCB and GP-PI. We illustrate the effects of this adaptive tuning via bounds on the regret as well as an extensive empirical evaluation on robotics and vision tasks, demonstrating the robustness of this strategy for a range of performance criteria.
[ "Zi Wang, Bolei Zhou, Stefanie Jegelka", "['Zi Wang' 'Bolei Zhou' 'Stefanie Jegelka']" ]
cs.DS cs.LG
null
1510.06492
null
null
http://arxiv.org/pdf/1510.06492v1
2015-10-22T05:49:31Z
2015-10-22T05:49:31Z
Generalized Shortest Path Kernel on Graphs
We consider the problem of classifying graphs using graph kernels. We define a new graph kernel, called the generalized shortest path kernel, based on the number and length of shortest paths between nodes. For our example classification problem, we consider the task of classifying random graphs from two well-known families, by the number of clusters they contain. We verify empirically that the generalized shortest path kernel outperforms the original shortest path kernel on a number of datasets. We give a theoretical analysis for explaining our experimental results. In particular, we estimate distributions of the expected feature vectors for the shortest path kernel and the generalized shortest path kernel, and we show some evidence explaining why our graph kernel outperforms the shortest path kernel for our graph classification problem.
[ "['Linus Hermansson' 'Fredrik D. Johansson' 'Osamu Watanabe']", "Linus Hermansson, Fredrik D. Johansson, and Osamu Watanabe" ]
cs.CL cs.DC cs.LG
null
1510.06549
null
null
http://arxiv.org/pdf/1510.06549v1
2015-10-22T09:40:54Z
2015-10-22T09:40:54Z
Multi-GPU Distributed Parallel Bayesian Differential Topic Modelling
There is an explosion of data, documents, and other content, and people require tools to analyze and interpret these, tools to turn the content into information and knowledge. Topic models have been developed to solve these problems. Topic models such as LDA [Blei et al. 2003] allow salient patterns in data to be extracted automatically. When analyzing texts, these patterns are called topics. Among numerous extensions of LDA, few of them can reliably analyze multiple groups of documents and extract topic similarities. Recently, the introduction of differential topic modeling (SPDP) [Chen et al. 2012] performs uniformly better than many topic models in a discriminative setting. There is also a need to improve the sampling speed for topic models. While some effort has been made for distributed algorithms, there is no work currently done using graphics processing units (GPUs). Note the GPU framework has already become the most cost-efficient platform for many problems. In this thesis, I propose and implement a scalable multi-GPU distributed parallel framework which approximates SPDP. Through experiments, I have shown my algorithms have a gain in speed of about 50 times while being almost as accurate, with only one single cheap laptop GPU. Furthermore, I have shown the speed improvement is sublinearly scalable when multiple GPUs are used, while fairly maintaining the accuracy. Therefore, on a medium-sized GPU cluster, the speed improvement could potentially reach a factor of a thousand. Note SPDP is just a representative of other extensions of LDA. Although my algorithm is implemented to work with SPDP, it is designed to be general enough to work with other topic models. The speed-up on smaller collections (i.e., 1000s of documents) means that these more complex LDA extensions could now be done in real-time, thus opening up a new way of using these LDA models in industry.
[ "['Aaron Q Li']", "Aaron Q Li" ]
cs.LG math.OC stat.ML
null
1510.06567
null
null
http://arxiv.org/pdf/1510.06567v1
2015-10-22T10:19:52Z
2015-10-22T10:19:52Z
Generalized conditional gradient: analysis of convergence and applications
The objective of this technical report is to provide additional results on the generalized conditional gradient methods introduced by Bredies et al. [BLM05]. Indeed, when the objective function is smooth, we provide a novel certificate of optimality and we show that the algorithm has a linear convergence rate. Applications of this algorithm are also discussed.
[ "Alain Rakotomamonjy (LITIS), R\\'emi Flamary (LAGRANGE, OCA), Nicolas\n Courty (OBELIX)", "['Alain Rakotomamonjy' 'Rémi Flamary' 'Nicolas Courty']" ]
physics.soc-ph cs.CY cs.LG stat.ML
null
1510.06582
null
null
http://arxiv.org/pdf/1510.06582v1
2015-10-22T11:27:03Z
2015-10-22T11:27:03Z
Collective Prediction of Individual Mobility Traces with Exponential Weights
We present and test a sequential learning algorithm for the short-term prediction of human mobility. This novel approach pairs the Exponential Weights forecaster with a very large ensemble of experts. The experts are individual sequence prediction algorithms constructed from the mobility traces of 10 million roaming mobile phone users in a European country. Average prediction accuracy is significantly higher than that of individual sequence prediction algorithms, namely constant order Markov models derived from the user's own data, that have been shown to achieve high accuracy in previous studies of human mobility prediction. The algorithm uses only time stamped location data, and accuracy depends on the completeness of the expert ensemble, which should contain redundant records of typical mobility patterns. The proposed algorithm is applicable to the prediction of any sufficiently large dataset of sequences.
[ "['Bartosz Hawelka' 'Izabela Sitko' 'Pavlos Kazakopoulos' 'Euro Beinat']", "Bartosz Hawelka, Izabela Sitko, Pavlos Kazakopoulos and Euro Beinat" ]
cs.LG cs.CL stat.ML
null
1510.06646
null
null
http://arxiv.org/pdf/1510.06646v2
2016-02-27T12:29:43Z
2015-10-22T14:39:58Z
A 'Gibbs-Newton' Technique for Enhanced Inference of Multivariate Polya Parameters and Topic Models
Hyper-parameters play a major role in the learning and inference process of latent Dirichlet allocation (LDA). In order to begin the LDA latent variables learning process, these hyper-parameters values need to be pre-determined. We propose an extension for LDA that we call 'Latent Dirichlet allocation Gibbs Newton' (LDA-GN), which places non-informative priors over these hyper-parameters and uses Gibbs sampling to learn appropriate values for them. At the heart of LDA-GN is our proposed 'Gibbs-Newton' algorithm, which is a new technique for learning the parameters of multivariate Polya distributions. We report Gibbs-Newton performance results compared with two prominent existing approaches to the latter task: Minka's fixed-point iteration method and the Moments method. We then evaluate LDA-GN in two ways: (i) by comparing it with standard LDA in terms of the ability of the resulting topic models to generalize to unseen documents; (ii) by comparing it with standard LDA in its performance on a binary classification task.
[ "['Osama Khalifa' 'David Wolfe Corne' 'Mike Chantler']", "Osama Khalifa, David Wolfe Corne, Mike Chantler" ]
cs.ET cs.LG physics.optics
10.1109/ICASSP.2016.7472872
1510.06664
null
null
http://arxiv.org/abs/1510.06664v2
2015-10-25T11:19:23Z
2015-10-22T15:54:30Z
Random Projections through multiple optical scattering: Approximating kernels at the speed of light
Random projections have proven extremely useful in many signal processing and machine learning applications. However, they often require either to store a very large random matrix, or to use a different, structured matrix to reduce the computational and memory costs. Here, we overcome this difficulty by proposing an analog optical device that performs the random projections literally at the speed of light without having to store any matrix in memory. This is achieved using the physical properties of multiple coherent scattering of coherent light in random media. We use this device on a simple task of classification with a kernel machine, and we show that, on the MNIST database, the experimental results closely match the theoretical performance of the corresponding kernel. This framework can help make kernel methods practical for applications that have large training sets and/or require real-time prediction. We discuss possible extensions of the method in terms of a class of kernels, speed, memory consumption and different problems.
[ "Alaa Saade, Francesco Caltagirone, Igor Carron, Laurent Daudet,\n Ang\\'elique Dr\\'emeau, Sylvain Gigan and Florent Krzakala", "['Alaa Saade' 'Francesco Caltagirone' 'Igor Carron' 'Laurent Daudet'\n 'Angélique Drémeau' 'Sylvain Gigan' 'Florent Krzakala']" ]
math.OC cs.LG
null
1510.06684
null
null
http://arxiv.org/pdf/1510.06684v3
2018-05-24T00:43:53Z
2015-10-22T16:50:56Z
Dual Free Adaptive Mini-batch SDCA for Empirical Risk Minimization
In this paper we develop dual free mini-batch SDCA with adaptive probabilities for regularized empirical risk minimization. This work is motivated by recent work of Shai Shalev-Shwartz on the dual free SDCA method; however, we allow a non-uniform selection of "dual" coordinates in SDCA. Moreover, the probability can change over time, making it more efficient than fixed uniform or non-uniform selection. We also propose an efficient procedure to generate a random non-uniform mini-batch through an iterative process. The work is concluded with multiple numerical experiments to show the efficiency of the proposed algorithms.
[ "Xi He and Martin Tak\\'a\\v{c}", "['Xi He' 'Martin Takáč']" ]
math.OC cs.LG
null
1510.06688
null
null
http://arxiv.org/pdf/1510.06688v1
2015-10-22T17:03:04Z
2015-10-22T17:03:04Z
Partitioning Data on Features or Samples in Communication-Efficient Distributed Optimization?
In this paper we study the effect of the way that the data is partitioned in distributed optimization. The original DiSCO algorithm [Communication-Efficient Distributed Optimization of Self-Concordant Empirical Loss, Yuchen Zhang and Lin Xiao, 2015] partitions the input data based on samples. We describe how the original algorithm has to be modified to allow partitioning on features and show its efficiency both in theory and also in practice.
[ "Chenxin Ma and Martin Tak\\'a\\v{c}", "['Chenxin Ma' 'Martin Takáč']" ]
cs.NE cs.CV cs.DC cs.LG
10.1109/IPDPS.2016.119
1510.06706
null
null
http://arxiv.org/abs/1510.06706v1
2015-10-22T18:14:42Z
2015-10-22T18:14:42Z
ZNN - A Fast and Scalable Algorithm for Training 3D Convolutional Networks on Multi-Core and Many-Core Shared Memory Machines
Convolutional networks (ConvNets) have become a popular approach to computer vision. It is important to accelerate ConvNet training, which is computationally costly. We propose a novel parallel algorithm based on decomposition into a set of tasks, most of which are convolutions or FFTs. Applying Brent's theorem to the task dependency graph implies that linear speedup with the number of processors is attainable within the PRAM model of parallel computation, for wide network architectures. To attain such performance on real shared-memory machines, our algorithm computes convolutions converging on the same node of the network with temporal locality to reduce cache misses, and sums the convergent convolution outputs via an almost wait-free concurrent method to reduce time spent in critical sections. We implement the algorithm with a publicly available software package called ZNN. Benchmarking with multi-core CPUs shows that ZNN can attain speedup roughly equal to the number of physical cores. We also show that ZNN can attain over 90x speedup on a many-core CPU (Xeon Phi Knights Corner). These speedups are achieved for network architectures with widths that are in common use. The task parallelism of the ZNN algorithm is suited to CPUs, while the SIMD parallelism of previous algorithms is compatible with GPUs. Through examples, we show that ZNN can be either faster or slower than certain GPU implementations depending on specifics of the network architecture, kernel sizes, and density and size of the output patch. ZNN may be less costly to develop and maintain, due to the relative ease of general-purpose CPU programming.
[ "['Aleksandar Zlateski' 'Kisuk Lee' 'H. Sebastian Seung']", "Aleksandar Zlateski, Kisuk Lee and H. Sebastian Seung" ]
cs.CL cs.IR cs.LG
null
1510.06786
null
null
http://arxiv.org/pdf/1510.06786v2
2016-03-07T14:40:36Z
2015-10-22T22:53:10Z
Freshman or Fresher? Quantifying the Geographic Variation of Internet Language
We present a new computational technique to detect and analyze statistically significant geographic variation in language. Our meta-analysis approach captures statistical properties of word usage across geographical regions and uses statistical methods to identify significant changes specific to regions. While previous approaches have primarily focused on lexical variation between regions, our method identifies words that demonstrate semantic and syntactic variation as well. We extend recently developed techniques for neural language models to learn word representations which capture differing semantics across geographical regions. In order to quantify this variation and ensure robust detection of true regional differences, we formulate a null model to determine whether observed changes are statistically significant. Our method is the first such approach to explicitly account for random variation due to chance while detecting regional variation in word meaning. To validate our model, we study and analyze two different massive online data sets: millions of tweets from Twitter spanning not only four different countries but also fifty states, as well as millions of phrases contained in the Google Book Ngrams. Our analysis reveals interesting facets of language change at multiple scales of geographic resolution -- from neighboring states to distant continents. Finally, using our model, we propose a measure of semantic distance between languages. Our analysis of British and American English over a period of 100 years reveals that semantic variation between these dialects is shrinking.
[ "['Vivek Kulkarni' 'Bryan Perozzi' 'Steven Skiena']", "Vivek Kulkarni, Bryan Perozzi, Steven Skiena" ]
cs.LG cs.CV cs.NA
10.1109/TIP.2015.2511584
1510.06895
null
null
http://arxiv.org/abs/1510.06895v1
2015-10-23T11:28:06Z
2015-10-23T11:28:06Z
Nonconvex Nonsmooth Low-Rank Minimization via Iteratively Reweighted Nuclear Norm
The nuclear norm is widely used as a convex surrogate of the rank function in compressive sensing for low rank matrix recovery with its applications in image recovery and signal processing. However, solving the nuclear norm based relaxed convex problem usually leads to a suboptimal solution of the original rank minimization problem. In this paper, we propose to use a family of nonconvex surrogates of the $L_0$-norm on the singular values of a matrix to approximate the rank function. This leads to a nonconvex nonsmooth minimization problem. Then we propose to solve the problem by the Iteratively Reweighted Nuclear Norm (IRNN) algorithm. IRNN iteratively solves a Weighted Singular Value Thresholding (WSVT) problem, which has a closed form solution due to the special properties of the nonconvex surrogate functions. We also extend IRNN to solve the nonconvex problem with two or more blocks of variables. In theory, we prove that IRNN decreases the objective function value monotonically, and any limit point is a stationary point. Extensive experiments on both synthesized data and real images demonstrate that IRNN enhances the low-rank matrix recovery compared with state-of-the-art convex algorithms.
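As a rough illustration of the weighted singular value thresholding step that IRNN iterates, here is a minimal numerical sketch applied to matrix completion. The surrogate choice (a log-style penalty), the step size, and all parameter names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def weighted_svt(Y, w):
    """argmin_X 0.5*||X - Y||_F^2 + sum_i w[i]*sigma_i(X).
    Closed form when the weights are non-decreasing (singular values descending)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - w, 0.0)   # shrink each singular value by its weight
    return (U * s_shrunk) @ Vt

def irnn_matrix_completion(M, mask, lam=1.0, gamma=5.0, mu=1.1, n_iter=100):
    """IRNN-style iteration (illustrative) for matrix completion with the concave
    surrogate g(s) = lam*log(1 + s/gamma); mask is 1 on observed entries of M."""
    X = np.zeros_like(M)
    for _ in range(n_iter):
        Y = X - mask * (X - M) / mu              # gradient step on the data-fit term
        s = np.linalg.svd(X, compute_uv=False)
        w = lam / (gamma + s) / mu               # weights = g'(sigma_i) / mu
        X = weighted_svt(Y, w)
    return X
```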
[ "['Canyi Lu' 'Jinhui Tang' 'Shuicheng Yan' 'Zhouchen Lin']", "Canyi Lu, Jinhui Tang, Shuicheng Yan, Zhouchen Lin" ]
stat.ML cs.CC cs.LG
null
1510.06920
null
null
http://arxiv.org/pdf/1510.06920v2
2016-07-04T12:32:11Z
2015-10-23T12:45:29Z
On the complexity of switching linear regression
This technical note extends recent results on the computational complexity of globally minimizing the error of piecewise-affine models to the related problem of minimizing the error of switching linear regression models. In particular, we show that, on the one hand the problem is NP-hard, but on the other hand, it admits a polynomial-time algorithm with respect to the number of data points for any fixed data dimension and number of modes.
[ "['Fabien Lauer']", "Fabien Lauer (ABC)" ]
stat.ML cs.IR cs.LG
null
1510.07025
null
null
http://arxiv.org/pdf/1510.07025v2
2016-02-04T18:58:44Z
2015-10-23T19:39:38Z
Modeling User Exposure in Recommendation
Collaborative filtering analyzes user preferences for items (e.g., books, movies, restaurants, academic papers) by exploiting the similarity patterns across users. In implicit feedback settings, all the items, including the ones that a user did not consume, are taken into consideration. But this assumption does not accord with the common sense understanding that users have a limited scope and awareness of items. For example, a user might not have heard of a certain paper, or might live too far away from a restaurant to experience it. In the language of causal analysis, the assignment mechanism (i.e., the items that a user is exposed to) is a latent variable that may change for various user/item combinations. In this paper, we propose a new probabilistic approach that directly incorporates user exposure to items into collaborative filtering. The exposure is modeled as a latent variable and the model infers its value from data. In doing so, we recover one of the most successful state-of-the-art approaches as a special case of our model, and provide a plug-in method for conditioning exposure on various forms of exposure covariates (e.g., topics in text, venue locations). We show that our scalable inference algorithm outperforms existing benchmarks in four different domains both with and without exposure covariates.
[ "Dawen Liang, Laurent Charlin, James McInerney, David M. Blei", "['Dawen Liang' 'Laurent Charlin' 'James McInerney' 'David M. Blei']" ]
cs.LG cs.CL cs.DC cs.IR
null
1510.07035
null
null
http://arxiv.org/pdf/1510.07035v1
2015-10-23T05:26:09Z
2015-10-23T05:26:09Z
Fast Latent Variable Models for Inference and Visualization on Mobile Devices
In this project we outline Vedalia, a high performance distributed network for performing inference on latent variable models in the context of Amazon review visualization. We introduce a new model, RLDA, which extends Latent Dirichlet Allocation (LDA) [Blei et al., 2003] for the review space by incorporating auxiliary data available in online reviews to improve modeling while simultaneously remaining compatible with pre-existing fast sampling techniques such as [Yao et al., 2009; Li et al., 2014a] to achieve high performance. The network is designed such that computation is efficiently offloaded to the client devices using the Chital system [Robinson & Li, 2015], improving response times and reducing server costs. The resulting system is able to rapidly compute a large number of specialized latent variable models while requiring minimal server resources.
[ "['Joseph W Robinson' 'Aaron Q Li']", "Joseph W Robinson, Aaron Q Li" ]
physics.data-an cs.LG cs.NE
10.1016/j.ins.2016.12.015
1510.07146
null
null
http://arxiv.org/abs/1510.07146v2
2016-10-03T18:19:30Z
2015-10-24T13:38:13Z
Data-driven detrending of nonstationary fractal time series with echo state networks
In this paper, we propose a novel data-driven approach for removing trends (detrending) from nonstationary, fractal and multifractal time series. We consider real-valued time series relative to measurements of an underlying dynamical system that evolves through time. We assume that such a dynamical process is predictable to a certain degree by means of a class of recurrent networks called Echo State Networks (ESNs), which are capable of modeling a generic dynamical process. In order to isolate the superimposed (multi)fractal component of interest, we define a data-driven filter by leveraging the ESN prediction capability to identify the trend component of a given input time series. Specifically, the (estimated) trend is removed from the original time series and the residual signal is analyzed with the multifractal detrended fluctuation analysis procedure to verify the correctness of the detrending procedure. In order to demonstrate the effectiveness of the proposed technique, we consider several synthetic time series consisting of different types of trends and fractal noise components with known characteristics. We also process a real-world dataset, the sunspot time series, which is well-known for its multifractal features and has recently gained attention in the complex systems field. Results demonstrate the validity and generality of the proposed detrending method based on ESNs.
[ "['Enrico Maiorino' 'Filippo Maria Bianchi' 'Lorenzo Livi'\n 'Antonello Rizzi' 'Alireza Sadeghian']", "Enrico Maiorino, Filippo Maria Bianchi, Lorenzo Livi, Antonello Rizzi,\n Alireza Sadeghian" ]
stat.ML cs.LG math.OC
null
1510.07169
null
null
http://arxiv.org/pdf/1510.07169v1
2015-10-24T17:56:27Z
2015-10-24T17:56:27Z
Fast and Scalable Lasso via Stochastic Frank-Wolfe Methods with a Convergence Guarantee
Frank-Wolfe (FW) algorithms have been often proposed over the last few years as efficient solvers for a variety of optimization problems arising in the field of Machine Learning. The ability to work with cheap projection-free iterations and the incremental nature of the method make FW a very effective choice for many large-scale problems where computing a sparse model is desirable. In this paper, we present a high-performance implementation of the FW method tailored to solve large-scale Lasso regression problems, based on a randomized iteration, and prove that the convergence guarantees of the standard FW method are preserved in the stochastic setting. We show experimentally that our algorithm outperforms several existing state of the art methods, including the Coordinate Descent algorithm by Friedman et al. (one of the fastest known Lasso solvers), on several benchmark datasets with a very large number of features, without sacrificing the accuracy of the model. Our results illustrate that the algorithm is able to generate the complete regularization path on problems of size up to four million variables in less than one minute.
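To make the iteration concrete, below is a small sketch of a randomized Frank-Wolfe solver for the constrained Lasso, min ||Ax - b||^2 subject to ||x||_1 <= t. The particular randomization (scanning a random coordinate block in the linear-minimization step) and the step-size rule are assumptions for illustration, not necessarily the scheme analyzed in the paper.

```python
import numpy as np

def stochastic_fw_lasso(A, b, t, n_iter=500, sample_frac=0.1, seed=0):
    """Randomized Frank-Wolfe sketch for min ||Ax - b||^2 s.t. ||x||_1 <= t."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    residual = A @ x - b
    for k in range(n_iter):
        # Gradient of the quadratic loss (up to a constant factor),
        # restricted to a random coordinate block.
        idx = rng.choice(d, size=max(1, int(sample_frac * d)), replace=False)
        g = A[:, idx].T @ residual
        j_local = np.argmax(np.abs(g))
        j = idx[j_local]
        # Best vertex of the l1 ball among the sampled coordinates.
        s = np.zeros(d)
        s[j] = -np.sign(g[j_local]) * t
        # Standard diminishing step size.
        gamma = 2.0 / (k + 2.0)
        x = (1 - gamma) * x + gamma * s
        residual = A @ x - b
    return x
```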
[ "Emanuele Frandi, Ricardo Nanculef, Stefano Lodi, Claudio Sartori,\n Johan A. K. Suykens", "['Emanuele Frandi' 'Ricardo Nanculef' 'Stefano Lodi' 'Claudio Sartori'\n 'Johan A. K. Suykens']" ]
cs.LG cs.NE
null
1510.07208
null
null
http://arxiv.org/pdf/1510.07208v1
2015-10-25T05:52:59Z
2015-10-25T05:52:59Z
Vehicle Speed Prediction using Deep Learning
Global optimization of the energy consumption of dual power source vehicles such as hybrid electric vehicles, plug-in hybrid electric vehicles, and plug-in fuel cell electric vehicles requires knowledge of the complete route characteristics at the beginning of the trip. One of the main characteristics is the vehicle speed profile across the route. The profile will translate directly into energy requirements for a given vehicle. However, the vehicle speed that a given driver chooses will vary from driver to driver and from time to time, and may be slower, equal to, or faster than the average traffic flow. If the specific driver speed profile can be predicted, the energy usage can be optimized across the chosen route. The purpose of this paper is to research the application of Deep Learning techniques to this problem: identifying, at the beginning of a drive cycle, the driver-specific vehicle speed profile for an individual driver's repeated drive cycle, which can be used in an optimization algorithm to minimize the amount of fossil fuel energy used during the trip.
[ "['Joe Lemieux' 'Yuan Ma']", "Joe Lemieux, Yuan Ma" ]
cs.SE cs.LG
null
1510.07211
null
null
http://arxiv.org/pdf/1510.07211v1
2015-10-25T06:52:45Z
2015-10-25T06:52:45Z
On End-to-End Program Generation from User Intention by Deep Neural Networks
This paper envisions an end-to-end program generation scenario using recurrent neural networks (RNNs): Users can express their intention in natural language; an RNN then automatically generates corresponding code in a character-by-character fashion. We demonstrate its feasibility through a case study and empirical analysis. To make such a technique fully useful in practice, we also point out several cross-disciplinary challenges, including modeling user intention, providing datasets, improving model architectures, etc. Although much long-term research remains to be addressed in this new field, we believe end-to-end program generation will become a reality in future decades, and we look forward to its practice.
[ "['Lili Mou' 'Rui Men' 'Ge Li' 'Lu Zhang' 'Zhi Jin']", "Lili Mou, Rui Men, Ge Li, Lu Zhang, Zhi Jin" ]
cs.LG
null
1510.07303
null
null
http://arxiv.org/pdf/1510.07303v1
2015-10-25T21:04:12Z
2015-10-25T21:04:12Z
A Framework for Distributed Deep Learning Layer Design in Python
In this paper, a framework for testing Deep Neural Network (DNN) design in Python is presented. First, big data, machine learning (ML), and Artificial Neural Networks (ANNs) are discussed to familiarize the reader with the importance of such a system. Next, the benefits and detriments of implementing such a system in Python are presented. Lastly, the specifics of the system are explained, and some experimental results are presented to prove the effectiveness of the system.
[ "['Clay McLeod']", "Clay McLeod" ]
cs.LG cs.AI stat.ML
null
1510.07389
null
null
http://arxiv.org/pdf/1510.07389v3
2015-12-03T18:07:35Z
2015-10-26T07:39:47Z
The Human Kernel
Bayesian nonparametric models, such as Gaussian processes, provide a compelling framework for automatic statistical modelling: these models have a high degree of flexibility, and automatically calibrated complexity. However, automating human expertise remains elusive; for example, Gaussian processes with standard kernels struggle on function extrapolation problems that are trivial for human learners. In this paper, we create function extrapolation problems and acquire human responses, and then design a kernel learning framework to reverse engineer the inductive biases of human learners across a set of behavioral experiments. We use the learned kernels to gain psychological insights and to extrapolate in human-like ways that go beyond traditional stationary and polynomial kernels. Finally, we investigate Occam's razor in human and Gaussian process based function learning.
[ "['Andrew Gordon Wilson' 'Christoph Dann' 'Christopher G. Lucas'\n 'Eric P. Xing']", "Andrew Gordon Wilson, Christoph Dann, Christopher G. Lucas, Eric P.\n Xing" ]
null
null
1510.07471
null
null
http://arxiv.org/pdf/1510.07471v1
2015-10-26T13:23:48Z
2015-10-26T13:23:48Z
A Parallel algorithm for $\mathcal{X}$-Armed bandits
The goal of the $\mathcal{X}$-armed bandit problem is to find the global maximum of an unknown stochastic function $f$, given a finite budget of $n$ evaluations. Recently, $\mathcal{X}$-armed bandits have been widely used in many situations. Many of these applications need to deal with large-scale data sets. To handle such data, we study a distributed setting of $\mathcal{X}$-armed bandits, where $m$ players collaborate to find the maximum of the unknown function. We develop a novel anytime distributed $\mathcal{X}$-armed bandit algorithm. Compared with prior work on $\mathcal{X}$-armed bandits, our algorithm uses a quite different searching strategy so as to fit distributed learning scenarios. Our theoretical analysis shows that our distributed algorithm is $m$ times faster than the classical single-player algorithm. Moreover, the number of communication rounds of our algorithm is only logarithmic in $mn$. The numerical results show that our method can make effective use of every player to minimize the loss. Thus, our distributed approach is attractive and useful.
[ "['Cheng Chen' 'Shuang Liu' 'Zhihua Zhang' 'Wu-Jun Li']" ]
cs.CL cs.AI cs.LG
null
1510.07526
null
null
http://arxiv.org/pdf/1510.07526v3
2015-11-20T15:36:56Z
2015-10-26T16:03:27Z
Empirical Study on Deep Learning Models for Question Answering
In this paper we explore deep learning models with a memory component or attention mechanism for the question answering task. We combine and compare three models, Neural Machine Translation, Neural Turing Machine, and Memory Networks, on a simulated QA data set. This paper is the first to use Neural Machine Translation and Neural Turing Machines for solving QA tasks. Our results suggest that the combination of attention and memory has the potential to solve certain QA problems.
[ "['Yang Yu' 'Wei Zhang' 'Chung-Wei Hang' 'Bing Xiang' 'Bowen Zhou']", "Yang Yu, Wei Zhang, Chung-Wei Hang, Bing Xiang and Bowen Zhou" ]
cs.HC cs.IR cs.LG
null
1510.07545
null
null
http://arxiv.org/pdf/1510.07545v2
2016-02-08T12:16:23Z
2015-10-26T16:49:07Z
Using Shortlists to Support Decision Making and Improve Recommender System Performance
In this paper, we study shortlists as an interface component for recommender systems with the dual goal of supporting the user's decision process, as well as improving implicit feedback elicitation for increased recommendation quality. A shortlist is a temporary list of candidates that the user is currently considering, e.g., a list of a few movies the user is currently considering for viewing. From a cognitive perspective, shortlists serve as digital short-term memory where users can off-load the items under consideration -- thereby decreasing their cognitive load. From a machine learning perspective, adding items to the shortlist generates a new implicit feedback signal as a by-product of exploration and decision making which can improve recommendation quality. Shortlisting therefore provides additional data for training recommendation systems without the increases in cognitive load that requesting explicit feedback would incur. We perform a user study with a movie recommendation setup to compare interfaces that offer shortlist support with those that do not. From the user studies we conclude: (i) users make better decisions with a shortlist; (ii) users prefer an interface with shortlist support; and (iii) the additional implicit feedback from sessions with a shortlist improves the quality of recommendations by nearly a factor of two.
[ "['Tobias Schnabel' 'Paul N. Bennett' 'Susan T. Dumais' 'Thorsten Joachims']", "Tobias Schnabel, Paul N. Bennett, Susan T. Dumais and Thorsten\n Joachims" ]
stat.ML cs.LG
null
1510.07609
null
null
http://arxiv.org/pdf/1510.07609v1
2015-10-26T19:40:10Z
2015-10-26T19:40:10Z
Efficient Learning by Directed Acyclic Graph For Resource Constrained Prediction
We study the problem of reducing test-time acquisition costs in classification systems. Our goal is to learn decision rules that adaptively select sensors for each example as necessary to make a confident prediction. We model our system as a directed acyclic graph (DAG) where internal nodes correspond to sensor subsets and decision functions at each node choose whether to acquire a new sensor or classify using the available measurements. This problem can be naturally posed as an empirical risk minimization over training data. Rather than jointly optimizing such a highly coupled and non-convex problem over all decision nodes, we propose an efficient algorithm motivated by dynamic programming. We learn node policies in the DAG by reducing the global objective to a series of cost sensitive learning problems. Our approach is computationally efficient and has proven guarantees of convergence to the optimal system for a fixed architecture. In addition, we present an extension to map other budgeted learning problems with large number of sensors to our DAG architecture and demonstrate empirical performance exceeding state-of-the-art algorithms for data composed of both few and many sensors.
[ "Joseph Wang, Kirill Trapeznikov, Venkatesh Saligrama", "['Joseph Wang' 'Kirill Trapeznikov' 'Venkatesh Saligrama']" ]
cs.LG
null
1510.07641
null
null
http://arxiv.org/pdf/1510.07641v2
2017-03-21T21:28:55Z
2015-10-26T20:18:56Z
Phenotyping of Clinical Time Series with LSTM Recurrent Neural Networks
We present a novel application of LSTM recurrent neural networks to multilabel classification of diagnoses given variable-length time series of clinical measurements. Our method outperforms a strong baseline on a variety of metrics.
[ "['Zachary C. Lipton' 'David C. Kale' 'Randall C. Wetzel']", "Zachary C. Lipton, David C. Kale, Randall C. Wetzel" ]
stat.CO cs.LG stat.ML
null
1510.07727
null
null
http://arxiv.org/pdf/1510.07727v7
2017-04-11T02:23:00Z
2015-10-27T00:00:54Z
Statistically efficient thinning of a Markov chain sampler
It is common to subsample Markov chain output to reduce the storage burden. Geyer (1992) shows that discarding $k-1$ out of every $k$ observations will not improve statistical efficiency, as quantified through variance in a given computational budget. That observation is often taken to mean that thinning MCMC output cannot improve statistical efficiency. Here we suppose that it costs one unit of time to advance a Markov chain and then $\theta>0$ units of time to compute a sampled quantity of interest. For a thinned process, that cost $\theta$ is incurred less often, so it can be advanced through more stages. Here we provide examples to show that thinning will improve statistical efficiency if $\theta$ is large and the sample autocorrelations decay slowly enough. If the lag $\ell\ge1$ autocorrelations of a scalar measurement satisfy $\rho_\ell\ge\rho_{\ell+1}\ge0$, then there is always a $\theta<\infty$ at which thinning becomes more efficient for averages of that scalar. Many sample autocorrelation functions resemble first order AR(1) processes with $\rho_\ell =\rho^{|\ell|}$ for some $-1<\rho<1$. For an AR(1) process it is possible to compute the most efficient subsampling frequency $k$. The optimal $k$ grows rapidly as $\rho$ increases towards $1$. The resulting efficiency gain depends primarily on $\theta$, not $\rho$. Taking $k=1$ (no thinning) is optimal when $\rho\le0$. For $\rho>0$ it is optimal if and only if $\theta \le (1-\rho)^2/(2\rho)$. This efficiency gain never exceeds $1+\theta$. This paper also gives efficiency bounds for autocorrelations bounded between those of two AR(1) processes.
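The AR(1) case described above can be checked in a few lines. The sketch below assumes advancing the chain costs 1 unit, evaluating the quantity of interest costs theta units, and the lag-$\ell$ autocorrelation is rho**l; it picks the thinning frequency k minimizing cost per kept sample times the usual AR(1) asymptotic-variance factor (1+rho^k)/(1-rho^k). The exact objective used in the paper may differ in constants.

```python
import numpy as np

def optimal_thinning_k(theta, rho, k_max=10_000):
    """Pick the thinning frequency k minimizing (k + theta) * (1 + rho**k)/(1 - rho**k),
    an illustrative proxy for cost times asymptotic variance per kept sample."""
    ks = np.arange(1, k_max + 1)
    cost = ks + theta            # k chain advances plus one evaluation
    var = (1 + rho**ks) / (1 - rho**ks)
    return int(ks[np.argmin(cost * var)])

# Thinning only pays off when theta > (1 - rho)**2 / (2 * rho), as stated above.
print(optimal_thinning_k(theta=10.0, rho=0.9))
```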
[ "['Art B. Owen']", "Art B. Owen" ]
stat.ML cond-mat.stat-mech cs.CV cs.LG
null
1510.07740
null
null
http://arxiv.org/pdf/1510.07740v2
2015-11-11T22:28:55Z
2015-10-27T01:04:05Z
The Wilson Machine for Image Modeling
Learning the distribution of natural images is one of the hardest and most important problems in machine learning. The problem remains open, because the enormous complexity of the structures in natural images spans all length scales. We break down the complexity of the problem and show that the hierarchy of structures in natural images fuels a new class of learning algorithms based on the theory of critical phenomena and stochastic processes. We approach this problem from the perspective of the theory of critical phenomena, which was developed in condensed matter physics to address problems with infinite length-scale fluctuations, and build a framework to integrate the criticality of natural images into a learning algorithm. The problem is broken down by mapping images into a hierarchy of binary images, called bitplanes. In this representation, the top bitplane is critical, having fluctuations in structures over a vast range of scales. The bitplanes below go through a gradual stochastic heating process to disorder. We turn this representation into a directed probabilistic graphical model, transforming the learning problem into the unsupervised learning of the distribution of the critical bitplane and the supervised learning of the conditional distributions for the remaining bitplanes. We learnt the conditional distributions by logistic regression in a convolutional architecture. Conditioned on the critical binary image, this simple architecture can generate large, natural-looking images, with many shades of gray, without the use of hidden units, unprecedented in the studies of natural images. The framework presented here is a major step in bringing criticality and stochastic processes to machine learning and in studying natural image statistics.
[ "['Saeed Saremi' 'Terrence J. Sejnowski']", "Saeed Saremi, Terrence J. Sejnowski" ]
stat.ML cs.LG
null
1510.07925
null
null
http://arxiv.org/pdf/1510.07925v1
2015-10-27T14:58:17Z
2015-10-27T14:58:17Z
Exclusive Sparsity Norm Minimization with Random Groups via Cone Projection
Many practical applications such as gene expression analysis, multi-task learning, image recognition, signal processing, and medical data analysis pursue a sparse solution for the feature selection purpose and particularly favor the nonzeros \emph{evenly} distributed in different groups. The exclusive sparsity norm has been widely used to serve this purpose. However, there is still a lack of systematic studies of exclusive sparsity norm optimization. This paper offers two main contributions from the optimization perspective: 1) We provide several efficient algorithms to solve exclusive sparsity norm minimization with either smooth loss or hinge loss (non-smooth loss). All algorithms achieve the optimal convergence rate $O(1/k^2)$ ($k$ is the iteration number). To the best of our knowledge, this is the first time such a convergence rate has been guaranteed for general exclusive sparsity norm minimization; 2) When the group information is unavailable to define the exclusive sparsity norm, we propose to use a random grouping scheme to construct groups and prove that if the number of groups is appropriately chosen, the nonzeros (true features) would be grouped in the ideal way with high probability. Empirical studies validate the efficiency of the proposed algorithms, and the effectiveness of the random grouping scheme on the proposed exclusive SVM formulation.
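For concreteness, a small sketch of the exclusive sparsity (exclusive lasso) regularizer and the random grouping scheme referred to above; the regularizer is written here as the sum over groups of squared within-group l1 norms, which is its usual form, and the helper names are illustrative.

```python
import numpy as np

def exclusive_sparsity_norm(w, groups):
    """Exclusive sparsity regularizer: sum over groups of ||w_g||_1 ** 2.
    It favors spreading the nonzeros evenly across groups."""
    return sum(np.abs(w[g]).sum() ** 2 for g in groups)

def random_groups(dim, n_groups, seed=0):
    """Random grouping scheme: partition the coordinate indices uniformly at random."""
    perm = np.random.default_rng(seed).permutation(dim)
    return np.array_split(perm, n_groups)
```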
[ "Yijun Huang and Ji Liu", "['Yijun Huang' 'Ji Liu']" ]
stat.ML cs.LG
null
1510.08108
null
null
http://arxiv.org/pdf/1510.08108v1
2015-10-27T21:59:33Z
2015-10-27T21:59:33Z
Online Learning with Gaussian Payoffs and Side Observations
We consider a sequential learning problem with Gaussian payoffs and side information: after selecting an action $i$, the learner receives information about the payoff of every action $j$ in the form of Gaussian observations whose mean is the same as the mean payoff, but the variance depends on the pair $(i,j)$ (and may be infinite). The setup allows a more refined information transfer from one action to another than previous partial monitoring setups, including the recently introduced graph-structured feedback case. For the first time in the literature, we provide non-asymptotic problem-dependent lower bounds on the regret of any algorithm, which recover existing asymptotic problem-dependent lower bounds and finite-time minimax lower bounds available in the literature. We also provide algorithms that achieve the problem-dependent lower bound (up to some universal constant factor) or the minimax lower bounds (up to logarithmic factors).
[ "['Yifan Wu' 'András György' 'Csaba Szepesvári']", "Yifan Wu, Andr\\'as Gy\\\"orgy, Csaba Szepesv\\'ari" ]
cs.LG stat.ML
null
1510.08231
null
null
http://arxiv.org/pdf/1510.08231v3
2016-11-02T14:29:29Z
2015-10-28T09:18:50Z
Operator-valued Kernels for Learning from Functional Response Data
In this paper we consider the problems of supervised classification and regression in the case where attributes and labels are functions: a data point is represented by a set of functions, and the label is also a function. We focus on the use of reproducing kernel Hilbert space theory to learn from such functional data. Basic concepts and properties of kernel-based learning are extended to include the estimation of function-valued functions. In this setting, the representer theorem is restated, a set of rigorously defined infinite-dimensional operator-valued kernels that can be valuably applied when the data are functions is described, and a learning algorithm for nonlinear functional data analysis is introduced. The methodology is illustrated through speech and audio signal processing experiments.
[ "['Hachem Kadri' 'Emmanuel Duflos' 'Philippe Preux' 'Stéphane Canu'\n 'Alain Rakotomamonjy' 'Julien Audiffren']", "Hachem Kadri (LIF), Emmanuel Duflos (CRIStAL), Philippe Preux\n (CRIStAL, SEQUEL), St\\'ephane Canu (LITIS), Alain Rakotomamonjy (LITIS),\n Julien Audiffren (CMLA)" ]
stat.ML cs.LG
null
1510.08370
null
null
http://arxiv.org/pdf/1510.08370v1
2015-10-28T16:31:57Z
2015-10-28T16:31:57Z
Canonical Divergence Analysis
We aim to analyze the relation between two random vectors that may have both a different number of attributes and a different number of realizations, and which may not even have a joint distribution. This problem arises in many practical domains, including biology and architecture. Existing techniques assume the vectors to have the same domain or to be jointly distributed, and hence are not applicable. To address this, we propose Canonical Divergence Analysis (CDA). We introduce three instantiations, each of which permits practical implementation. Extensive empirical evaluation shows the potential of our method.
[ "['Hoang-Vu Nguyen' 'Jilles Vreeken']", "Hoang-Vu Nguyen and Jilles Vreeken" ]
stat.ML cs.LG
null
1510.08382
null
null
http://arxiv.org/pdf/1510.08382v1
2015-10-28T17:18:46Z
2015-10-28T17:18:46Z
Flexibly Mining Better Subgroups
In subgroup discovery, also known as supervised pattern mining, discovering high quality one-dimensional subgroups and refinements of these is a crucial task. For nominal attributes, this is relatively straightforward, as we can consider individual attribute values as binary features. For numerical attributes, the task is more challenging as individual numeric values are not reliable statistics. Instead, we can consider combinations of adjacent values, i.e. bins. Existing binning strategies, however, are not tailored for subgroup discovery. That is, they do not directly optimize for the quality of subgroups, therewith potentially degrading the mining result. To address this issue, we propose FLEXI. In short, with FLEXI we propose to use optimal binning to find high quality binary features for both numeric and ordinal attributes. We instantiate FLEXI with various quality measures and show how to achieve efficiency accordingly. Experiments on both synthetic and real-world data sets show that FLEXI outperforms state of the art with up to 25 times improvement in subgroup quality.
[ "['Hoang-Vu Nguyen' 'Jilles Vreeken']", "Hoang-Vu Nguyen and Jilles Vreeken" ]
stat.ML cs.LG
null
1510.08385
null
null
http://arxiv.org/pdf/1510.08385v1
2015-10-28T17:28:38Z
2015-10-28T17:28:38Z
Linear-time Detection of Non-linear Changes in Massively High Dimensional Time Series
Change detection in multivariate time series has applications in many domains, including health care and network monitoring. A common approach to detect changes is to compare the divergence between the distributions of a reference window and a test window. When the number of dimensions is very large, however, the naive approach has both quality and efficiency issues: to ensure robustness the window size needs to be large, which not only leads to missed alarms but also increases runtime. To this end, we propose LIGHT, a linear-time algorithm for robustly detecting non-linear changes in massively high dimensional time series. Importantly, LIGHT provides high flexibility in choosing the window size, allowing the domain expert to fit the level of details required. To do so, we 1) perform scalable PCA to reduce dimensionality, 2) perform scalable factorization of the joint distribution, and 3) scalably compute divergences between these lower dimensional distributions. Extensive empirical evaluation on both synthetic and real-world data shows that LIGHT outperforms state of the art with up to 100% improvement in both quality and efficiency.
[ "['Hoang-Vu Nguyen' 'Jilles Vreeken']", "Hoang-Vu Nguyen and Jilles Vreeken" ]
stat.ML cs.LG
null
1510.08389
null
null
http://arxiv.org/pdf/1510.08389v1
2015-10-28T17:40:18Z
2015-10-28T17:40:18Z
Universal Dependency Analysis
Most data is multi-dimensional. Discovering whether any subset of dimensions, or subspaces, of such data is significantly correlated is a core task in data mining. To do so, we require a measure that quantifies how correlated a subspace is. For practical use, such a measure should be universal in the sense that it captures correlation in subspaces of any dimensionality and allows meaningful comparison of correlation scores across different subspaces, regardless of how many dimensions they have and what specific statistical properties their dimensions possess. Further, the measure should non-parametrically and efficiently capture both linear and non-linear correlations. In this paper, we propose UDS, a multivariate correlation measure that fulfills all of these desiderata. In short, we define UDS based on cumulative entropy and propose a principled normalization scheme to bring its scores across different subspaces to the same domain, enabling universal correlation assessment. UDS is purely non-parametric as we make no assumption on data distributions nor types of correlation. To compute it on empirical data, we introduce an efficient and non-parametric method. Extensive experiments show that UDS outperforms state of the art.
[ "['Hoang-Vu Nguyen' 'Jilles Vreeken']", "Hoang-Vu Nguyen and Jilles Vreeken" ]
cs.LG cs.CV
null
1510.08520
null
null
http://arxiv.org/pdf/1510.08520v2
2015-11-18T07:11:42Z
2015-10-28T22:48:09Z
Learning with $\ell^{0}$-Graph: $\ell^{0}$-Induced Sparse Subspace Clustering
Sparse subspace clustering methods, such as Sparse Subspace Clustering (SSC) \cite{ElhamifarV13} and $\ell^{1}$-graph \cite{YanW09,ChengYYFH10}, are effective in partitioning the data that lie in a union of subspaces. Most of those methods use $\ell^{1}$-norm or $\ell^{2}$-norm with thresholding to impose the sparsity of the constructed sparse similarity graph, and certain assumptions, e.g. independence or disjointness, on the subspaces are required to obtain the subspace-sparse representation, which is the key to their success. Such assumptions are not guaranteed to hold in practice and they limit the application of sparse subspace clustering on subspaces with general location. In this paper, we propose a new sparse subspace clustering method named $\ell^{0}$-graph. In contrast to the required assumptions on subspaces for most existing sparse subspace clustering methods, it is proved that subspace-sparse representation can be obtained by $\ell^{0}$-graph for arbitrary distinct underlying subspaces almost surely under the mild i.i.d. assumption on the data generation. We develop a proximal method to obtain the sub-optimal solution to the optimization problem of $\ell^{0}$-graph with proved guarantee of convergence. Moreover, we propose a regularized $\ell^{0}$-graph that encourages nearby data to have similar neighbors so that the similarity graph is more aligned within each cluster and the graph connectivity issue is alleviated. Extensive experimental results on various data sets demonstrate the superiority of $\ell^{0}$-graph compared to other competing clustering methods, as well as the effectiveness of regularized $\ell^{0}$-graph.
[ "Yingzhen Yang, Jiashi Feng, Jianchao Yang, Thomas S. Huang", "['Yingzhen Yang' 'Jiashi Feng' 'Jianchao Yang' 'Thomas S. Huang']" ]
cs.LG
null
1510.08532
null
null
http://arxiv.org/pdf/1510.08532v1
2015-10-29T00:59:53Z
2015-10-29T00:59:53Z
The Singular Value Decomposition, Applications and Beyond
The singular value decomposition (SVD) is not only a classical theory in matrix computation and analysis, but is also a powerful tool in machine learning and modern data analysis. In this tutorial we first study the basic notion of SVD and then show the central role of SVD in matrix analysis. Using majorization theory, we consider variational principles of singular values and eigenvalues. Built on SVD and a theory of symmetric gauge functions, we discuss unitarily invariant norms, which are then used to formulate general results for matrix low rank approximation. We study the subdifferentials of unitarily invariant norms. These results are potentially useful in many machine learning problems such as matrix completion and matrix data classification. Finally, we discuss matrix low rank approximation and its recent developments such as randomized SVD, approximate matrix multiplication, CUR decomposition, and Nystrom approximation. Randomized algorithms are important approaches to large scale SVD as well as fast matrix computations.
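Of the randomized methods mentioned in the last sentence, randomized SVD is easy to sketch: capture the range of A with a random Gaussian test matrix, optionally sharpen it with a few power iterations, then compute an exact SVD of the small projected matrix. A minimal version follows; the oversampling and power-iteration defaults are illustrative.

```python
import numpy as np

def randomized_svd(A, rank, n_oversample=10, n_power_iter=2, seed=0):
    """Halko-Martinsson-Tropp style randomized SVD sketch."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.normal(size=(n, rank + n_oversample))   # random test matrix
    Y = A @ Omega
    for _ in range(n_power_iter):                       # power iterations sharpen the spectrum
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                              # orthonormal basis for the range of A
    B = Q.T @ A                                         # small (rank + p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank]
```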
[ "['Zhihua Zhang']", "Zhihua Zhang" ]
cs.NE cs.AI cs.HC cs.LG
null
1510.08565
null
null
http://arxiv.org/pdf/1510.08565v3
2015-11-05T07:26:01Z
2015-10-29T05:31:28Z
Attention with Intention for a Neural Network Conversation Model
In a conversation or a dialogue process, attention and intention play intrinsic roles. This paper proposes a neural network based approach that models the attention and intention processes. It essentially consists of three recurrent networks. The encoder network is a word-level model representing source side sentences. The intention network is a recurrent network that models the dynamics of the intention process. The decoder network is a recurrent network produces responses to the input from the source side. It is a language model that is dependent on the intention and has an attention mechanism to attend to particular source side words, when predicting a symbol in the response. The model is trained end-to-end without labeling data. Experiments show that this model generates natural responses to user inputs.
[ "['Kaisheng Yao' 'Geoffrey Zweig' 'Baolin Peng']", "Kaisheng Yao and Geoffrey Zweig and Baolin Peng" ]
stat.ML cs.DC cs.IR cs.LG
null
1510.08628
null
null
http://arxiv.org/pdf/1510.08628v2
2016-03-02T06:29:30Z
2015-10-29T10:33:20Z
WarpLDA: a Cache Efficient O(1) Algorithm for Latent Dirichlet Allocation
Developing efficient and scalable algorithms for Latent Dirichlet Allocation (LDA) is of wide interest for many applications. Previous work has developed an O(1) Metropolis-Hastings sampling method for each token. However, the performance is far from being optimal due to random accesses to the parameter matrices and frequent cache misses. In this paper, we first carefully analyze the memory access efficiency of existing algorithms for LDA in terms of the scope of random access, which is the size of the memory region in which random accesses fall within a short period of time. We then develop WarpLDA, an LDA sampler which achieves both the best O(1) time complexity per token and the best O(K) scope of random access. Our empirical results in a wide range of testing conditions demonstrate that WarpLDA is consistently 5-15x faster than the state-of-the-art Metropolis-Hastings based LightLDA, and is comparable or faster than the sparsity aware F+LDA. With WarpLDA, users can learn up to one million topics from hundreds of millions of documents in a few hours, at an unprecedented throughput of 11G tokens per second.
[ "['Jianfei Chen' 'Kaiwei Li' 'Jun Zhu' 'Wenguang Chen']", "Jianfei Chen, Kaiwei Li, Jun Zhu, Wenguang Chen" ]
cs.LG
null
1510.08660
null
null
http://arxiv.org/pdf/1510.08660v4
2016-04-28T07:32:03Z
2015-10-29T12:06:56Z
RATM: Recurrent Attentive Tracking Model
We present an attention-based modular neural framework for computer vision. The framework uses a soft attention mechanism allowing models to be trained with gradient descent. It consists of three modules: a recurrent attention module controlling where to look in an image or video frame, a feature-extraction module providing a representation of what is seen, and an objective module formalizing why the model learns its attentive behavior. The attention module allows the model to focus computation on task-related information in the input. We apply the framework to several object tracking tasks and explore various design choices. We experiment with three data sets, bouncing ball, moving digits and the real-world KTH data set. The proposed Recurrent Attentive Tracking Model performs well on all three tasks and can generalize to related but previously unseen sequences from a challenging tracking data set.
[ "['Samira Ebrahimi Kahou' 'Vincent Michalski' 'Roland Memisevic']", "Samira Ebrahimi Kahou, Vincent Michalski, Roland Memisevic" ]
stat.ML cs.LG
null
1510.08692
null
null
http://arxiv.org/pdf/1510.08692v2
2020-02-12T21:23:53Z
2015-10-29T13:57:11Z
Covariance-Controlled Adaptive Langevin Thermostat for Large-Scale Bayesian Sampling
Monte Carlo sampling for Bayesian posterior inference is a common approach used in machine learning. The Markov Chain Monte Carlo procedures that are used are often discrete-time analogues of associated stochastic differential equations (SDEs). These SDEs are guaranteed to leave invariant the required posterior distribution. An area of current research addresses the computational benefits of stochastic gradient methods in this setting. Existing techniques rely on estimating the variance or covariance of the subsampling error, and typically assume constant variance. In this article, we propose a covariance-controlled adaptive Langevin thermostat that can effectively dissipate parameter-dependent noise while maintaining a desired target distribution. The proposed method achieves a substantial speedup over popular alternative schemes for large-scale machine learning applications.
[ "Xiaocheng Shang, Zhanxing Zhu, Benedict Leimkuhler, Amos J. Storkey", "['Xiaocheng Shang' 'Zhanxing Zhu' 'Benedict Leimkuhler' 'Amos J. Storkey']" ]
cs.LG
null
1510.08713
null
null
http://arxiv.org/pdf/1510.08713v1
2015-10-26T08:34:07Z
2015-10-26T08:34:07Z
How good is good enough? Re-evaluating the bar for energy disaggregation
Since the early 1980s, the research community has developed ever more sophisticated algorithms for the problem of energy disaggregation, but despite decades of research, there is still a dearth of applications with demonstrated value. In this work, we explore a question that is highly pertinent to this research community: how good does energy disaggregation need to be in order to infer characteristics of a household? We present novel techniques that use unsupervised energy disaggregation to predict both household occupancy and static properties of the household such as size of the home and number of occupants. Results show that basic disaggregation approaches perform up to 30% better at occupancy estimation than using aggregate power data alone, and are up to 10% better at estimating static household characteristics. These results show that even rudimentary energy disaggregation techniques are sufficient for improved inference of household characteristics. To conclude, we re-evaluate the bar set by the community for energy disaggregation accuracy and try to answer the question "how good is good enough?"
[ "['Nipun Batra' 'Rishi Baijal' 'Amarjeet Singh' 'Kamin Whitehouse']", "Nipun Batra and Rishi Baijal and Amarjeet Singh and Kamin Whitehouse" ]
cs.LG cs.NE
null
1510.08829
null
null
http://arxiv.org/pdf/1510.08829v1
2015-10-29T19:24:03Z
2015-10-29T19:24:03Z
Spiking Deep Networks with LIF Neurons
We train spiking deep networks using leaky integrate-and-fire (LIF) neurons, and achieve state-of-the-art results for spiking networks on the CIFAR-10 and MNIST datasets. This demonstrates that biologically-plausible spiking LIF neurons can be integrated into deep networks and perform as well as other spiking models (e.g. integrate-and-fire). We achieved this result by softening the LIF response function, such that its derivative remains bounded, and by training the network with noise to provide robustness against the variability introduced by spikes. Our method is general and could be applied to other neuron types, including those used on modern neuromorphic hardware. Our work brings more biological realism into modern image classification models, with the hope that these models can inform how the brain performs this difficult task. It also provides new methods for training deep networks to run on neuromorphic hardware, with the aim of fast, power-efficient image classification for robotics applications.
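A rough sketch of a "softened" LIF response function of the kind described above: the standard steady-state LIF rate 1/(tau_ref + tau_rc*log(1 + 1/j)) for driving current j above a unit threshold, with the hard rectification of j replaced by a softplus so the derivative stays bounded near threshold. Parameter values and the exact smoothing are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def soft_lif_rate(current, tau_rc=0.02, tau_ref=0.002, gamma=0.02):
    """Smoothed LIF firing rate (sketch): rectification replaced by a softplus."""
    j = current - 1.0                                # current above a unit threshold
    j_soft = gamma * np.logaddexp(0.0, j / gamma)    # smooth, numerically stable max(j, 0)
    return 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / j_soft))
```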
[ "['Eric Hunsberger' 'Chris Eliasmith']", "Eric Hunsberger and Chris Eliasmith" ]
cs.DS cs.DM cs.LG
null
1510.08865
null
null
http://arxiv.org/pdf/1510.08865v2
2016-08-16T04:00:41Z
2015-10-29T20:07:32Z
Mixed Robust/Average Submodular Partitioning: Fast Algorithms, Guarantees, and Applications to Parallel Machine Learning and Multi-Label Image Segmentation
We study two mixed robust/average-case submodular partitioning problems that we collectively call Submodular Partitioning. These problems generalize both purely robust instances of the problem (namely max-min submodular fair allocation (SFA) and min-max submodular load balancing (SLB)) and also average-case instances (that is, the submodular welfare problem (SWP) and submodular multiway partition (SMP)). While the robust versions have been studied in the theory community, existing work has focused on tight approximation guarantees, and the resultant algorithms are not, in general, scalable to very large real-world applications. This is in contrast to the average case, where most of the algorithms are scalable. In the present paper, we bridge this gap, by proposing several new algorithms (including those based on greedy, majorization-minimization, minorization-maximization, and relaxation algorithms) that not only scale to large sizes but that also achieve theoretical approximation guarantees close to the state-of-the-art, and in some cases achieve new tight bounds. We also provide new scalable algorithms that apply to additive combinations of the robust and average-case extreme objectives. We show that these problems have many applications in machine learning (ML). This includes: 1) data partitioning and load balancing for distributed machine learning algorithms on parallel machines; 2) data clustering; and 3) multi-label image segmentation with (only) Boolean submodular functions via pixel partitioning. We empirically demonstrate the efficacy of our algorithms on real-world problems involving data partitioning for distributed optimization of standard machine learning objectives (including both convex and deep neural network objectives), and also on purely unsupervised (i.e., no supervised or semi-supervised learning, and no interactive segmentation) image segmentation.
[ "['Kai Wei' 'Rishabh Iyer' 'Shengjie Wang' 'Wenruo Bai' 'Jeff Bilmes']", "Kai Wei, Rishabh Iyer, Shengjie Wang, Wenruo Bai, Jeff Bilmes" ]
cs.DS cs.LG math.NA math.OC
null
1510.08896
null
null
http://arxiv.org/pdf/1510.08896v2
2016-05-30T02:40:00Z
2015-10-29T20:47:27Z
Robust Shift-and-Invert Preconditioning: Faster and More Sample Efficient Algorithms for Eigenvector Computation
We provide faster algorithms and improved sample complexities for approximating the top eigenvector of a matrix. Offline Setting: Given an $n \times d$ matrix $A$, we show how to compute an $\epsilon$ approximate top eigenvector in time $\tilde O ( [nnz(A) + \frac{d \cdot sr(A)}{gap^2}]\cdot \log 1/\epsilon )$ and $\tilde O([\frac{nnz(A)^{3/4} (d \cdot sr(A))^{1/4}}{\sqrt{gap}}]\cdot \log1/\epsilon )$. Here $sr(A)$ is the stable rank and $gap$ is the multiplicative eigenvalue gap. By separating the $gap$ dependence from $nnz(A)$ we improve on the classic power and Lanczos methods. We also improve prior work using fast subspace embeddings and stochastic optimization, giving significantly improved dependencies on $sr(A)$ and $\epsilon$. Our second running time improves this further when $nnz(A) \le \frac{d\cdot sr(A)}{gap^2}$. Online Setting: Given a distribution $D$ with covariance matrix $\Sigma$ and a vector $x_0$ which is an $O(gap)$ approximate top eigenvector for $\Sigma$, we show how to refine to an $\epsilon$ approximation using $\tilde O(\frac{v(D)}{gap^2} + \frac{v(D)}{gap \cdot \epsilon})$ samples from $D$. Here $v(D)$ is a natural variance measure. Combining our algorithm with previous work to initialize $x_0$, we obtain a number of improved sample complexity and runtime results. For general distributions, we achieve asymptotically optimal accuracy as a function of sample size as the number of samples grows large. Our results center around a robust analysis of the classic method of shift-and-invert preconditioning to reduce eigenvector computation to approximately solving a sequence of linear systems. We then apply fast SVRG based approximate system solvers to achieve our claims. We believe our results suggest the general effectiveness of shift-and-invert based approaches and imply that further computational gains may be reaped in practice.
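The core idea of shift-and-invert preconditioning can be sketched in a few lines: run power iteration on (shift*I - A)^{-1}, whose leading eigenvector matches that of a symmetric A when the shift sits just above the top eigenvalue, and replace each exact inverse application with a linear-system solve. The dense solve below merely stands in for the fast approximate solvers (e.g. SVRG) analyzed in the paper; parameter choices are illustrative.

```python
import numpy as np

def shift_invert_top_eigvec(A, shift, n_iter=20, seed=0):
    """Power iteration on (shift*I - A)^{-1} to approximate the top eigenvector of
    a symmetric matrix A; each step reduces to a linear-system solve."""
    d = A.shape[0]
    rng = np.random.default_rng(seed)
    x = rng.normal(size=d)
    x /= np.linalg.norm(x)
    B = shift * np.eye(d) - A
    for _ in range(n_iter):
        x = np.linalg.solve(B, x)   # one (here exact) application of the preconditioner
        x /= np.linalg.norm(x)
    return x
```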
[ "Chi Jin and Sham M. Kakade and Cameron Musco and Praneeth Netrapalli\n and Aaron Sidford", "['Chi Jin' 'Sham M. Kakade' 'Cameron Musco' 'Praneeth Netrapalli'\n 'Aaron Sidford']" ]
stat.ML cs.AI cs.LG
null
1510.08906
null
null
http://arxiv.org/pdf/1510.08906v3
2016-05-11T15:27:28Z
2015-10-29T21:14:42Z
Sample Complexity of Episodic Fixed-Horizon Reinforcement Learning
Recently, there has been significant progress in understanding reinforcement learning in discounted infinite-horizon Markov decision processes (MDPs) by deriving tight sample complexity bounds. However, in many real-world applications, an interactive learning agent operates for a fixed or bounded period of time, for example tutoring students for exams or handling customer service requests. Such scenarios can often be better treated as episodic fixed-horizon MDPs, for which only looser bounds on the sample complexity exist. A natural notion of sample complexity in this setting is the number of episodes required to guarantee a certain performance with high probability (PAC guarantee). In this paper, we derive an upper PAC bound $\tilde O(\frac{|\mathcal S|^2 |\mathcal A| H^2}{\epsilon^2} \ln\frac 1 \delta)$ and a lower PAC bound $\tilde \Omega(\frac{|\mathcal S| |\mathcal A| H^2}{\epsilon^2} \ln \frac 1 {\delta + c})$ that match up to log-terms and an additional linear dependency on the number of states $|\mathcal S|$. The lower bound is the first of its kind for this setting. Our upper bound leverages Bernstein's inequality to improve on previous bounds for episodic finite-horizon MDPs which have a time-horizon dependency of at least $H^3$.
[ "['Christoph Dann' 'Emma Brunskill']", "Christoph Dann, Emma Brunskill" ]
cs.LG
null
1510.08949
null
null
http://arxiv.org/pdf/1510.08949v1
2015-10-30T01:39:54Z
2015-10-30T01:39:54Z
Testing Visual Attention in Dynamic Environments
We investigate attention as the active pursuit of useful information. This contrasts with attention as a mechanism for the attenuation of irrelevant information. We also consider the role of short-term memory, whose use is critical to any model incapable of simultaneously perceiving all information on which its output depends. We present several simple synthetic tasks, which become considerably more interesting when we impose strong constraints on how a model can interact with its input, and on how long it can take to produce its output. We develop a model with a different structure from those seen in previous work, and we train it using stochastic variational inference with a learned proposal distribution.
[ "['Philip Bachman' 'David Krueger' 'Doina Precup']", "Philip Bachman and David Krueger and Doina Precup" ]
stat.ML cs.LG stat.ME
null
1510.08956
null
null
http://arxiv.org/pdf/1510.08956v1
2015-10-30T03:06:00Z
2015-10-30T03:06:00Z
Principal Differences Analysis: Interpretable Characterization of Differences between Distributions
We introduce principal differences analysis (PDA) for analyzing differences between high-dimensional distributions. The method operates by finding the projection that maximizes the Wasserstein divergence between the resulting univariate populations. Relying on the Cramer-Wold device, it requires no assumptions about the form of the underlying distributions, nor the nature of their inter-class differences. A sparse variant of the method is introduced to identify features responsible for the differences. We provide algorithms for both the original minimax formulation as well as its semidefinite relaxation. In addition to deriving some convergence results, we illustrate how the approach may be applied to identify differences between cell populations in the somatosensory cortex and hippocampus as manifested by single cell RNA-seq. Our broader framework extends beyond the specific choice of Wasserstein divergence.
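The projection-based divergence that PDA maximizes is easy to evaluate for a fixed direction; the sketch below uses SciPy's one-dimensional Wasserstein distance and only illustrates the objective, leaving out the optimization over directions and the sparse/semidefinite variants. The data and direction choices are illustrative.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def projected_wasserstein(X, Y, v):
    """Wasserstein divergence between two samples after projecting onto direction v."""
    v = v / np.linalg.norm(v)
    return wasserstein_distance(X @ v, Y @ v)

# Illustrative use: a random direction versus the mean-difference direction.
rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(500, 10))
Y = rng.normal(0.0, 1.0, size=(500, 10))
Y[:, 0] += 2.0                                              # groups differ in one feature
print(projected_wasserstein(X, Y, rng.normal(size=10)))     # small divergence
print(projected_wasserstein(X, Y, Y.mean(0) - X.mean(0)))   # larger divergence
```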
[ "['Jonas Mueller' 'Tommi Jaakkola']", "Jonas Mueller and Tommi Jaakkola" ]
cs.CV cs.AI cs.LG stat.ML
10.1145/2806416.2806506
1510.08971
null
null
http://arxiv.org/abs/1510.08971v1
2015-10-30T05:34:49Z
2015-10-30T05:34:49Z
Robust Subspace Clustering via Tighter Rank Approximation
Matrix rank minimization problem is in general NP-hard. The nuclear norm is used to substitute the rank function in many recent studies. Nevertheless, the nuclear norm approximation adds all singular values together and the approximation error may depend heavily on the magnitudes of singular values. This might restrict its capability in dealing with many practical problems. In this paper, an arctangent function is used as a tighter approximation to the rank function. We use it on the challenging subspace clustering problem. For this nonconvex minimization problem, we develop an effective optimization procedure based on a type of augmented Lagrange multipliers (ALM) method. Extensive experiments on face clustering and motion segmentation show that the proposed method is effective for rank approximation.
[ "['Zhao Kang' 'Chong Peng' 'Qiang Cheng']", "Zhao Kang, Chong Peng, Qiang Cheng" ]
cs.LG stat.ML
null
1510.08974
null
null
http://arxiv.org/pdf/1510.08974v1
2015-10-30T05:46:23Z
2015-10-30T05:46:23Z
CONQUER: Confusion Queried Online Bandit Learning
We present a new recommendation setting for picking out two items from a given set to be highlighted to a user, based on contextual input. These two items are presented to a user who chooses one of them, possibly stochastically, with a bias that favours the item with the higher value. We propose a framework of second-order algorithms in which some members use relative upper-confidence bounds to trade off exploration and exploitation, while others explore via sampling. We analyze one algorithm in this framework in an adversarial setting with only mild assumptions on the data, and prove a regret bound of $O(Q_T + \sqrt{TQ_T\log T} + \sqrt{T}\log T)$, where $T$ is the number of rounds and $Q_T$ is the cumulative approximation error of item values using a linear model. Experiments with product reviews from 33 domains show the advantage of our methods over algorithms designed for related settings, and that UCB based algorithms are inferior to greedy or sampling based algorithms.
[ "Daniel Barsky and Koby Crammer", "['Daniel Barsky' 'Koby Crammer']" ]
cs.NE cs.AI cs.CL cs.LG eess.AS
null
1510.08983
null
null
http://arxiv.org/pdf/1510.08983v2
2016-01-11T09:48:01Z
2015-10-30T06:40:14Z
Highway Long Short-Term Memory RNNs for Distant Speech Recognition
In this paper, we extend the deep long short-term memory (DLSTM) recurrent neural networks by introducing gated direct connections between memory cells in adjacent layers. These direct links, called highway connections, enable unimpeded information flow across different layers and thus alleviate the gradient vanishing problem when building deeper LSTMs. We further introduce the latency-controlled bidirectional LSTMs (BLSTMs) which can exploit the whole history while keeping the latency under control. Efficient algorithms are proposed to train these novel networks using both frame and sequence discriminative criteria. Experiments on the AMI distant speech recognition (DSR) task indicate that we can train deeper LSTMs and achieve better improvement from sequence training with highway LSTMs (HLSTMs). Our novel model obtains $43.9/47.7\%$ WER on AMI (SDM) dev and eval sets, outperforming all previous works. It beats the strong DNN and DLSTM baselines with $15.7\%$ and $5.3\%$ relative improvement respectively.
[ "Yu Zhang and Guoguo Chen and Dong Yu and Kaisheng Yao and Sanjeev\n Khudanpur and James Glass", "['Yu Zhang' 'Guoguo Chen' 'Dong Yu' 'Kaisheng Yao' 'Sanjeev Khudanpur'\n 'James Glass']" ]
cs.CL cs.LG cs.NE eess.AS
null
1510.08985
null
null
http://arxiv.org/pdf/1510.08985v1
2015-10-30T06:42:03Z
2015-10-30T06:42:03Z
Prediction-Adaptation-Correction Recurrent Neural Networks for Low-Resource Language Speech Recognition
In this paper, we investigate the use of prediction-adaptation-correction recurrent neural networks (PAC-RNNs) for low-resource speech recognition. A PAC-RNN is comprised of a pair of neural networks in which a {\it correction} network uses auxiliary information given by a {\it prediction} network to help estimate the state probability. The information from the correction network is also used by the prediction network in a recurrent loop. Our model outperforms other state-of-the-art neural networks (DNNs, LSTMs) on IARPA-Babel tasks. Moreover, transfer learning from a language that is similar to the target language can help improve performance further.
[ "Yu Zhang, Ekapol Chuangsuwanich, James Glass, Dong Yu", "['Yu Zhang' 'Ekapol Chuangsuwanich' 'James Glass' 'Dong Yu']" ]
cs.CG cs.LG
null
1510.09123
null
null
http://arxiv.org/pdf/1510.09123v1
2015-10-30T15:27:31Z
2015-10-30T15:27:31Z
Subsampling in Smoothed Range Spaces
We consider smoothed versions of geometric range spaces, so an element of the ground set (e.g. a point) can be contained in a range with a non-binary value in $[0,1]$. Similar notions have been considered for kernels; we extend them to more general types of ranges. We then consider approximations of these range spaces through $\varepsilon $-nets and $\varepsilon $-samples (aka $\varepsilon$-approximations). We characterize when size bounds for $\varepsilon $-samples on kernels can be extended to these more general smoothed range spaces. We also describe new generalizations for $\varepsilon $-nets to these range spaces and show when results from binary range spaces can carry over to these smoothed ones.
[ "['Jeff M. Phillips' 'Yan Zheng']", "Jeff M. Phillips and Yan Zheng" ]
cs.LG cs.NE
null
1510.09142
null
null
http://arxiv.org/pdf/1510.09142v1
2015-10-30T16:07:51Z
2015-10-30T16:07:51Z
Learning Continuous Control Policies by Stochastic Value Gradients
We present a unified framework for learning continuous control policies using backpropagation. It supports stochastic control by treating stochasticity in the Bellman equation as a deterministic function of exogenous noise. The product is a spectrum of general policy gradient algorithms that range from model-free methods with value functions to model-based methods without value functions. We use learned models but only require observations from the environment instead of observations from model-predicted trajectories, minimizing the impact of compounded model errors. We apply these algorithms first to a toy stochastic control problem and then to several physics-based control problems in simulation. One of these variants, SVG(1), shows the effectiveness of learning models, value functions, and policies simultaneously in continuous domains.
[ "['Nicolas Heess' 'Greg Wayne' 'David Silver' 'Timothy Lillicrap'\n 'Yuval Tassa' 'Tom Erez']", "Nicolas Heess, Greg Wayne, David Silver, Timothy Lillicrap, Yuval\n Tassa, and Tom Erez" ]
cs.LG stat.ML
null
1510.09161
null
null
http://arxiv.org/pdf/1510.09161v1
2015-10-30T17:04:33Z
2015-10-30T17:04:33Z
Streaming, Distributed Variational Inference for Bayesian Nonparametrics
This paper presents a methodology for creating streaming, distributed inference algorithms for Bayesian nonparametric (BNP) models. In the proposed framework, processing nodes receive a sequence of data minibatches, compute a variational posterior for each, and make asynchronous streaming updates to a central model. In contrast to previous algorithms, the proposed framework is truly streaming, distributed, asynchronous, learning-rate-free, and truncation-free. The key challenge in developing the framework, arising from the fact that BNP models do not impose an inherent ordering on their components, is finding the correspondence between minibatch and central BNP posterior components before performing each update. To address this, the paper develops a combinatorial optimization problem over component correspondences, and provides an efficient solution technique. The paper concludes with an application of the methodology to the DP mixture model, with experimental results demonstrating its practical scalability and performance.
[ "Trevor Campbell, Julian Straub, John W. Fisher III, Jonathan P. How", "['Trevor Campbell' 'Julian Straub' 'John W. Fisher III' 'Jonathan P. How']" ]
cs.CL cs.LG cs.NE
null
1510.09202
null
null
http://arxiv.org/pdf/1510.09202v1
2015-10-30T19:02:53Z
2015-10-30T19:02:53Z
Generating Text with Deep Reinforcement Learning
We introduce a novel schema for sequence-to-sequence learning with a Deep Q-Network (DQN), which decodes the output sequence iteratively. The aim is to let the decoder first tackle the easier portions of a sequence and then move on to the more difficult parts. Specifically, in each iteration an encoder-decoder Long Short-Term Memory (LSTM) network is employed to automatically create features from the input sequence that represent the internal states of the DQN and to formulate a list of potential actions for it. Taking the rephrasing of a natural sentence as an example, this list can contain ranked candidate words. The DQN then learns to decide which action (e.g., word) to select from the list to modify the current decoded sequence. The newly modified output sequence is subsequently used as the input to the DQN for the next decoding iteration. In each iteration, we also bias the reinforcement learner's attention toward sequence portions that were previously difficult to decode. For evaluation, the proposed strategy was trained to decode ten thousand natural sentences. Our experiments indicate that, compared to a left-to-right greedy beam search LSTM decoder, the proposed method performed competitively when decoding sentences from the training set but significantly outperformed the baseline on unseen sentences in terms of BLEU score.
[ "['Hongyu Guo']", "Hongyu Guo" ]
cs.AI cs.IT cs.LG math.IT stat.ML
null
1511.00041
null
null
http://arxiv.org/pdf/1511.00041v1
2015-10-30T22:24:13Z
2015-10-30T22:24:13Z
Learning Causal Graphs with Small Interventions
We consider the problem of learning causal networks with interventions, when each intervention is limited in size under Pearl's Structural Equation Model with independent errors (SEM-IE). The objective is to minimize the number of experiments to discover the causal directions of all the edges in a causal graph. Previous work has focused on the use of separating systems for complete graphs for this task. We prove that any deterministic adaptive algorithm needs to be a separating system in order to learn complete graphs in the worst case. In addition, we present a novel separating system construction, whose size is close to optimal and is arguably simpler than previous work in combinatorics. We also develop a novel information theoretic lower bound on the number of interventions that applies in full generality, including for randomized adaptive learning algorithms. For general chordal graphs, we derive worst case lower bounds on the number of interventions. Building on observations about induced trees, we give a new deterministic adaptive algorithm to learn directions on any chordal skeleton completely. In the worst case, our achievable scheme is an $\alpha$-approximation algorithm where $\alpha$ is the independence number of the graph. We also show that there exist graph classes for which the sufficient number of experiments is close to the lower bound. At the other extreme, there are graph classes for which the required number of experiments is multiplicatively $\alpha$ away from our lower bound. In simulations, our algorithm almost always performs very close to the lower bound, while the approach based on separating systems for complete graphs is significantly worse for random chordal graphs.
[ "['Karthikeyan Shanmugam' 'Murat Kocaoglu' 'Alexandros G. Dimakis'\n 'Sriram Vishwanath']", "Karthikeyan Shanmugam, Murat Kocaoglu, Alexandros G. Dimakis, Sriram\n Vishwanath" ]
cs.AI cs.GT cs.LG
null
1511.00043
null
null
http://arxiv.org/pdf/1511.00043v3
2015-11-20T08:34:43Z
2015-10-30T22:27:25Z
Learning Adversary Behavior in Security Games: A PAC Model Perspective
Recent applications of Stackelberg Security Games (SSG), from wildlife crime to urban crime, have employed machine learning tools to learn and predict adversary behavior using available data about defender-adversary interactions. Given these recent developments, this paper commits to an approach of directly learning the response function of the adversary. Using the PAC model, this paper lays a firm theoretical foundation for learning in SSGs (e.g., theoretically answering questions about the number of samples required to learn adversary behavior) and provides utility guarantees when the learned adversary model is used to plan the defender's strategy. The paper also aims to answer practical questions such as how much more data is needed to improve an adversary model's accuracy. Additionally, we explain a recently observed phenomenon that prediction accuracy of learned adversary behavior is not enough to discover the utility maximizing defender strategy. We provide four main contributions: (1) a PAC model of learning adversary response functions in SSGs; (2) PAC-model analysis of the learning of key, existing bounded rationality models in SSGs; (3) an entirely new approach to adversary modeling based on a non-parametric class of response functions with PAC-model analysis and (4) identification of conditions under which computing the best defender strategy against the learned adversary behavior is indeed the optimal strategy. Finally, we conduct experiments with real-world data from a national park in Uganda, showing the benefit of our new adversary modeling approach and verification of our PAC model predictions.
[ "['Arunesh Sinha' 'Debarun Kar' 'Milind Tambe']", "Arunesh Sinha, Debarun Kar, Milind Tambe" ]
cs.LG
null
1511.00048
null
null
http://arxiv.org/pdf/1511.00048v1
2015-10-30T23:30:30Z
2015-10-30T23:30:30Z
The Pareto Regret Frontier for Bandits
Given a multi-armed bandit problem, it may be desirable to achieve a smaller-than-usual worst-case regret for some special actions. I show that the price for such unbalanced worst-case regret guarantees is rather high. Specifically, if an algorithm enjoys a worst-case regret of $B$ with respect to some action, then there must exist another action for which the worst-case regret is at least $\Omega(nK/B)$, where $n$ is the horizon and $K$ the number of actions. I also give upper bounds in both the stochastic and adversarial settings showing that this result cannot be improved. For the stochastic case the Pareto regret frontier is characterised exactly up to constant factors.
[ "Tor Lattimore", "['Tor Lattimore']" ]
cs.LG stat.ML
null
1511.00054
null
null
http://arxiv.org/pdf/1511.00054v1
2015-10-31T01:02:14Z
2015-10-31T01:02:14Z
Gaussian Process Random Fields
Gaussian processes have been successful in both supervised and unsupervised machine learning tasks, but their computational complexity has constrained practical applications. We introduce a new approximation for large-scale Gaussian processes, the Gaussian Process Random Field (GPRF), in which local GPs are coupled via pairwise potentials. The GPRF likelihood is a simple, tractable, and parallelizable approximation to the full GP marginal likelihood, enabling latent variable modeling and hyperparameter selection on large datasets. We demonstrate its effectiveness on synthetic spatial data as well as a real-world application to seismic event location.
[ "['David A. Moore' 'Stuart J. Russell']", "David A. Moore and Stuart J. Russell" ]
cs.CL cs.LG
null
1511.00060
null
null
http://arxiv.org/pdf/1511.00060v3
2016-04-03T23:30:17Z
2015-10-31T02:05:28Z
Top-down Tree Long Short-Term Memory Networks
Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have been successfully applied to a variety of sequence modeling tasks. In this paper we develop Tree Long Short-Term Memory (TreeLSTM), a neural network model based on LSTM, which is designed to predict a tree rather than a linear sequence. TreeLSTM defines the probability of a sentence by estimating the generation probability of its dependency tree. At each time step, a node is generated based on the representation of the generated sub-tree. We further enhance the modeling power of TreeLSTM by explicitly representing the correlations between left and right dependents. Application of our model to the MSR sentence completion challenge achieves results beyond the current state of the art. We also report results on dependency parsing reranking achieving competitive performance.
[ "Xingxing Zhang, Liang Lu, Mirella Lapata", "['Xingxing Zhang' 'Liang Lu' 'Mirella Lapata']" ]
stat.ML cs.LG stat.CO
null
1511.00146
null
null
http://arxiv.org/pdf/1511.00146v3
2016-08-12T00:47:22Z
2015-10-31T15:56:32Z
Faster Stochastic Variational Inference using Proximal-Gradient Methods with General Divergence Functions
Several recent works have explored stochastic gradient methods for variational inference that exploit the geometry of the variational-parameter space. However, the theoretical properties of these methods are not well-understood and these methods typically only apply to conditionally-conjugate models. We present a new stochastic method for variational inference which exploits the geometry of the variational-parameter space and also yields simple closed-form updates even for non-conjugate models. We also give a convergence-rate analysis of our method and many other previous methods which exploit the geometry of the space. Our analysis generalizes existing convergence results for stochastic mirror-descent on non-convex objectives by using a more general class of divergence functions. Beyond giving a theoretical justification for a variety of recent methods, our experiments show that new algorithms derived in this framework lead to state of the art results on a variety of problems. Further, due to its generality, we expect that our theoretical analysis could also apply to other applications.
[ "['Mohammad Emtiyaz Khan' 'Reza Babanezhad' 'Wu Lin' 'Mark Schmidt'\n 'Masashi Sugiyama']", "Mohammad Emtiyaz Khan, Reza Babanezhad, Wu Lin, Mark Schmidt, Masashi\n Sugiyama" ]
stat.ML cs.LG
10.1109/TIT.2017.2672725
1511.00152
null
null
http://arxiv.org/abs/1511.00152v3
2016-09-20T00:35:14Z
2015-10-31T17:20:00Z
Preconditioned Data Sparsification for Big Data with Applications to PCA and K-means
We analyze a compression scheme for large data sets that randomly keeps a small percentage of the components of each data sample. The benefit is that the output is a sparse matrix and therefore subsequent processing, such as PCA or K-means, is significantly faster, especially in a distributed-data setting. Furthermore, the sampling is single-pass and applicable to streaming data. The sampling mechanism is a variant of previous methods proposed in the literature combined with a randomized preconditioning to smooth the data. We provide guarantees for PCA in terms of the covariance matrix, and guarantees for K-means in terms of the error in the center estimators at a given step. We present numerical evidence to show both that our bounds are nearly tight and that our algorithms provide a real benefit when applied to standard test data sets, as well as providing certain benefits over related sampling approaches.
[ "Farhad Pourkamali-Anaraki and Stephen Becker", "['Farhad Pourkamali-Anaraki' 'Stephen Becker']" ]
stat.ML cs.LG
null
1511.00158
null
null
http://arxiv.org/pdf/1511.00158v3
2018-06-20T14:03:25Z
2015-10-31T18:00:39Z
Prediction of Dynamical Time Series Using Kernel Based Regression and Smooth Splines
Prediction of dynamical time series with additive noise using support vector machines or kernel based regression has been proved to be consistent for certain classes of discrete dynamical systems. Consistency implies that these methods are effective at computing the expected value of a point at a future time given the present coordinates. However, the present coordinates themselves are noisy, and therefore, these methods are not necessarily effective at removing noise. In this article, we consider denoising and prediction as separate problems for flows, as opposed to discrete time dynamical systems, and show that the use of smooth splines is more effective at removing noise. Combination of smooth splines and kernel based regression yields predictors that are more accurate on benchmarks typically by a factor of 2 or more. We prove that kernel based regression in combination with smooth splines converges to the exact predictor for time series extracted from any compact invariant set of any sufficiently smooth flow. As a consequence of convergence, one can find examples where the combination of kernel based regression with smooth splines is superior by even a factor of $100$. The predictors that we compute operate on delay coordinate data and not the full state vector, which is typically not observable.
[ "['Raymundo Navarrete' 'Divakar Viswanath']", "Raymundo Navarrete and Divakar Viswanath" ]
cs.LG
null
1511.00213
null
null
http://arxiv.org/pdf/1511.00213v2
2015-11-13T09:28:34Z
2015-11-01T07:16:04Z
Large-scale probabilistic predictors with and without guarantees of validity
This paper studies theoretically and empirically a method of turning machine-learning algorithms into probabilistic predictors that automatically enjoy a property of validity (perfect calibration) and are computationally efficient. The price to pay for perfect calibration is that these probabilistic predictors produce imprecise (in practice, almost precise for large data sets) probabilities. When these imprecise probabilities are merged into precise probabilities, the resulting predictors, while losing the theoretical property of perfect calibration, are consistently more accurate than the existing methods in empirical studies.
[ "Vladimir Vovk, Ivan Petej, and Valentina Fedorova", "['Vladimir Vovk' 'Ivan Petej' 'Valentina Fedorova']" ]
cs.IR cs.LG
null
1511.00271
null
null
http://arxiv.org/pdf/1511.00271v1
2015-11-01T16:34:52Z
2015-11-01T16:34:52Z
Stochastic Top-k ListNet
ListNet is a well-known listwise learning to rank model and has gained much attention in recent years. A particular problem of ListNet, however, is the high computational complexity of model training, mainly due to the large number of object permutations involved in computing the gradients. This paper proposes a stochastic ListNet approach which computes the gradient within a bounded permutation subset. It significantly reduces the computational complexity of model training and allows extension to Top-k models, which is impossible with the conventional implementation based on full-set permutations. Meanwhile, the new approach utilizes partial ranking information from human labels, which helps improve model quality. Our experiments demonstrate that the stochastic ListNet method indeed leads to better ranking performance and speeds up model training remarkably.
[ "Tianyi Luo, Dong Wang, Rong Liu, Yiqiao Pan", "['Tianyi Luo' 'Dong Wang' 'Rong Liu' 'Yiqiao Pan']" ]
cs.LG cs.CL stat.ML
null
1511.00352
null
null
http://arxiv.org/pdf/1511.00352v3
2016-05-28T18:59:48Z
2015-11-02T01:45:41Z
Spatial Semantic Scan: Jointly Detecting Subtle Events and their Spatial Footprint
Many methods have been proposed for detecting emerging events in text streams using topic modeling. However, these methods have shortcomings that make them unsuitable for rapid detection of locally emerging events on massive text streams. We describe Spatially Compact Semantic Scan (SCSS) that has been developed specifically to overcome the shortcomings of current methods in detecting new spatially compact events in text streams. SCSS employs alternating optimization between using semantic scan to estimate contrastive foreground topics in documents, and discovering spatial neighborhoods with high occurrence of documents containing the foreground topics. We evaluate our method on Emergency Department chief complaints dataset (ED dataset) to verify the effectiveness of our method in detecting real-world disease outbreaks from free-text ED chief complaint data.
[ "Abhinav Maurya", "['Abhinav Maurya']" ]
cs.LG cs.CV cs.NE
null
1511.00363
null
null
http://arxiv.org/pdf/1511.00363v3
2016-04-18T13:11:45Z
2015-11-02T02:50:05Z
BinaryConnect: Training Deep Neural Networks with binary weights during propagations
Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations with simple accumulations, as multipliers are the most space- and power-hungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method that trains a DNN with binary weights during the forward and backward propagations while retaining the precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as a regularizer, and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN.
[ "['Matthieu Courbariaux' 'Yoshua Bengio' 'Jean-Pierre David']", "Matthieu Courbariaux, Yoshua Bengio and Jean-Pierre David" ]
cs.LG math.OC
null
1511.00394
null
null
http://arxiv.org/pdf/1511.00394v2
2016-02-23T19:46:11Z
2015-11-02T06:33:59Z
Submodular Functions: from Discrete to Continuous Domains
Submodular set-functions have many applications in combinatorial optimization, as they can be minimized and approximately maximized in polynomial time. A key element in many of the algorithms and analyses is the possibility of extending the submodular set-function to a convex function, which opens up tools from convex optimization. Submodularity goes beyond set-functions and has naturally been considered for problems with multiple labels or for functions defined on continuous domains, where it corresponds essentially to cross second-derivatives being nonpositive. In this paper, we show that most results relating submodularity and convexity for set-functions can be extended to all submodular functions. In particular, (a) we naturally define a continuous extension in a set of probability measures, (b) show that the extension is convex if and only if the original function is submodular, (c) prove that the problem of minimizing a submodular function is equivalent to a typically non-smooth convex optimization problem, and (d) propose another convex optimization problem with better computational properties (e.g., a smooth dual problem). Most of these extensions from the set-function situation are obtained by drawing links with the theory of multi-marginal optimal transport, which also provides a new interpretation of existing results for set-functions. We then provide practical algorithms to minimize generic submodular functions on discrete domains, with associated convergence rates.
[ "['Francis Bach']", "Francis Bach (LIENS, SIERRA)" ]
math.PR cs.LG cs.SI stat.ML
null
1511.00546
null
null
http://arxiv.org/pdf/1511.00546v3
2018-11-24T21:46:20Z
2015-11-02T15:30:40Z
An Impossibility Result for Reconstruction in a Degree-Corrected Planted-Partition Model
We consider the Degree-Corrected Stochastic Block Model (DC-SBM): a random graph on $n$ nodes, having i.i.d. weights $(\phi_u)_{u=1}^n$ (possibly heavy-tailed), partitioned into $q \geq 2$ asymptotically equal-sized clusters. The model parameters are two constants $a,b > 0$ and the finite second moment of the weights $\Phi^{(2)}$. Vertices $u$ and $v$ are connected by an edge with probability $\frac{\phi_u \phi_v}{n}a$ when they are in the same class and with probability $\frac{\phi_u \phi_v}{n}b$ otherwise. We prove that it is information-theoretically impossible to estimate the clusters in a way positively correlated with the true community structure when $(a-b)^2 \Phi^{(2)} \leq q(a+b)$. As by-products of our proof we obtain $(1)$ a precise coupling result for local neighbourhoods in DC-SBM's, that we use in a follow up paper [Gulikers et al., 2017] to establish a law of large numbers for local-functionals and $(2)$ that long-range interactions are weak in (power-law) DC-SBM's.
[ "Lennart Gulikers, Marc Lelarge, Laurent Massouli\\'e", "['Lennart Gulikers' 'Marc Lelarge' 'Laurent Massoulié']" ]
cs.CV cs.LG cs.NE
null
1511.00561
null
null
http://arxiv.org/pdf/1511.00561v3
2016-10-10T21:11:59Z
2015-11-02T15:51:03Z
SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation
We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network. The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN and also with the well known DeepLab-LargeFOV and DeconvNet architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. We show that SegNet provides good performance with competitive inference time and is more memory-efficient at inference than other architectures. We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet/.
[ "Vijay Badrinarayanan and Alex Kendall and Roberto Cipolla", "['Vijay Badrinarayanan' 'Alex Kendall' 'Roberto Cipolla']" ]