categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
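The fields above give the column layout for each record that follows. As a minimal, hedged sketch of how rows with this schema could be read, the Python below assumes the records are stored as JSON lines; the file name arxiv_ml_papers.jsonl and the JSON-lines layout are illustrative assumptions, not part of the original dump.

    # Minimal sketch. Assumption: records are JSON lines, one object per line,
    # with the fields listed in the schema above (file name is hypothetical).
    import json

    FIELDS = ["categories", "doi", "id", "year", "venue", "link",
              "updated", "published", "title", "abstract", "authors"]

    def load_records(path):
        """Yield one dict per record, keeping only the schema fields."""
        with open(path, encoding="utf-8") as f:
            for line in f:
                row = json.loads(line)
                yield {field: row.get(field) for field in FIELDS}

    if __name__ == "__main__":
        for rec in load_records("arxiv_ml_papers.jsonl"):
            # categories is a space-separated string of arXiv codes, e.g. "cs.LG cs.AI"
            print(rec["id"], "-", rec["title"], "-", (rec["categories"] or "").split())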
cs.LG cs.AI
null
1410.6414
null
null
http://arxiv.org/pdf/1410.6414v1
2014-10-22T01:04:00Z
2014-10-22T01:04:00Z
A Parallel and Efficient Algorithm for Learning to Match
Many tasks in data mining and related fields can be formalized as matching between objects in two heterogeneous domains, including collaborative filtering, link prediction, image tagging, and web search. Machine learning techniques, referred to as learning-to-match in this paper, have been successfully applied to these problems. Among them, a class of state-of-the-art methods, named feature-based matrix factorization, formalizes the task as an extension of matrix factorization by incorporating auxiliary features into the model. Unfortunately, making those algorithms scale to real-world problems is challenging, and simple parallelization strategies fail due to the complex cross-talk patterns between sub-tasks. In this paper, we tackle this challenge with a novel parallel and efficient algorithm for feature-based matrix factorization. Our algorithm, based on coordinate descent, can easily handle hundreds of millions of instances and features on a single machine. The key recipe of this algorithm is an iterative relaxation of the objective to facilitate parallel updates of parameters, with guaranteed convergence on minimizing the original objective function. Experimental results demonstrate that the proposed method is effective on a wide range of matching problems, with efficiency significantly improved over the baselines while accuracy remains unchanged.
[ "Jingbo Shang, Tianqi Chen, Hang Li, Zhengdong Lu, Yong Yu" ]
stat.ML cs.IR cs.LG stat.CO
null
1410.6466
null
null
http://arxiv.org/pdf/1410.6466v2
2015-02-17T01:39:14Z
2014-10-23T19:38:44Z
Model Selection for Topic Models via Spectral Decomposition
Topic models have achieved significant success in analyzing large-scale text corpora. In practical applications, we are always confronted with the challenge of model selection, i.e., how to appropriately set the number of topics. Following recent advances in topic model inference via tensor decomposition, we make a first attempt to provide a theoretical analysis of model selection in latent Dirichlet allocation. Under mild conditions, we derive upper and lower bounds on the number of topics given a text collection of finite size. Experimental results demonstrate that our bounds are accurate and tight. Furthermore, using the Gaussian mixture model as an example, we show that our methodology can be easily generalized to model selection analysis for other latent models.
[ "Dehua Cheng, Xinran He, Yan Liu" ]
cs.LG stat.ML
null
1410.6776
null
null
http://arxiv.org/pdf/1410.6776v1
2014-10-24T18:45:23Z
2014-10-24T18:45:23Z
Online and Stochastic Gradient Methods for Non-decomposable Loss Functions
Modern applications in sensitive domains such as biometrics and medicine frequently require the use of non-decomposable loss functions such as precision@k, F-measure etc. Compared to point loss functions such as hinge-loss, these offer much more fine grained control over prediction, but at the same time present novel challenges in terms of algorithm design and analysis. In this work we initiate a study of online learning techniques for such non-decomposable loss functions with an aim to enable incremental learning as well as design scalable solvers for batch problems. To this end, we propose an online learning framework for such loss functions. Our model enjoys several nice properties, chief amongst them being the existence of efficient online learning algorithms with sublinear regret and online to batch conversion bounds. Our model is a provable extension of existing online learning models for point loss functions. We instantiate two popular losses, prec@k and pAUC, in our model and prove sublinear regret bounds for both of them. Our proofs require a novel structural lemma over ranked lists which may be of independent interest. We then develop scalable stochastic gradient descent solvers for non-decomposable loss functions. We show that for a large family of loss functions satisfying a certain uniform convergence property (that includes prec@k, pAUC, and F-measure), our methods provably converge to the empirical risk minimizer. Such uniform convergence results were not known for these losses and we establish these using novel proof techniques. We then use extensive experimentation on real life and benchmark datasets to establish that our method can be orders of magnitude faster than a recently proposed cutting plane method.
[ "Purushottam Kar, Harikrishna Narasimhan, Prateek Jain" ]
cs.DS cs.LG
null
1410.6801
null
null
http://arxiv.org/pdf/1410.6801v3
2015-04-03T02:33:24Z
2014-10-24T19:43:16Z
Dimensionality Reduction for k-Means Clustering and Low Rank Approximation
We show how to approximate a data matrix $\mathbf{A}$ with a much smaller sketch $\mathbf{\tilde A}$ that can be used to solve a general class of constrained $k$-rank approximation problems to within $(1+\epsilon)$ error. Importantly, this class of problems includes $k$-means clustering and unconstrained low rank approximation (i.e. principal component analysis). By reducing data points to just $O(k)$ dimensions, our methods generically accelerate any exact, approximate, or heuristic algorithm for these ubiquitous problems. For $k$-means dimensionality reduction, we provide $(1+\epsilon)$ relative error results for many common sketching techniques, including random row projection, column selection, and approximate SVD. For approximate principal component analysis, we give a simple alternative to known algorithms that has applications in the streaming setting. Additionally, we extend recent work on column-based matrix reconstruction, giving column subsets that not only `cover' a good subspace for $\mathbf{A}$, but can be used directly to compute this subspace. Finally, for $k$-means clustering, we show how to achieve a $(9+\epsilon)$ approximation by Johnson-Lindenstrauss projecting data points to just $O(\log k/\epsilon^2)$ dimensions. This gives the first result that leverages the specific structure of $k$-means to achieve dimension independent of input size and sublinear in $k$.
[ "Michael B. Cohen, Sam Elder, Cameron Musco, Christopher Musco, Madalina Persu" ]
cs.CL cs.LG
null
1410.6830
null
null
http://arxiv.org/pdf/1410.6830v1
2014-10-24T20:34:01Z
2014-10-24T20:34:01Z
Clustering Words by Projection Entropy
We apply entropy agglomeration (EA), a recently introduced algorithm, to cluster the words of a literary text. EA is a greedy agglomerative procedure that minimizes projection entropy (PE), a function that can quantify the segmentedness of an element set. To apply it, the text is reduced to a feature allocation, a combinatorial object that represents the word occurrences in the text's paragraphs. The experimental results demonstrate that EA, despite its reduction and simplicity, is useful in capturing significant relationships among the words in the text. This procedure was implemented in Python and published as free software: REBUS.
[ "Işık Barış Fidaner, Ali Taylan Cemgil" ]
stat.ML cs.LG stat.ME
null
1410.6853
null
null
http://arxiv.org/pdf/1410.6853v2
2014-12-08T22:07:47Z
2014-10-24T23:31:44Z
Covariance Matrices for Mean Field Variational Bayes
Mean Field Variational Bayes (MFVB) is a popular posterior approximation method due to its fast runtime on large-scale data sets. However, it is well known that a major failing of MFVB is its (sometimes severe) underestimates of the uncertainty of model variables and lack of information about model variable covariance. We develop a fast, general methodology for exponential families that augments MFVB to deliver accurate uncertainty estimates for model variables -- both for individual variables and coherently across variables. MFVB for exponential families defines a fixed-point equation in the means of the approximating posterior, and our approach yields a covariance estimate by perturbing this fixed point. Inspired by linear response theory, we call our method linear response variational Bayes (LRVB). We demonstrate the accuracy of our method on simulated data sets.
[ "Ryan Giordano, Tamara Broderick" ]
stat.ML cs.LG
null
1410.6880
null
null
http://arxiv.org/pdf/1410.6880v1
2014-10-25T04:06:49Z
2014-10-25T04:06:49Z
Screening Rules for Overlapping Group Lasso
Recently, to solve large-scale lasso and group lasso problems, screening rules have been developed, the goal of which is to reduce the problem size by efficiently discarding zero coefficients using simple rules independently of the others. However, screening for overlapping group lasso remains an open challenge because the overlaps between groups make it infeasible to test each group independently. In this paper, we develop screening rules for overlapping group lasso. To address the challenge arising from groups with overlaps, we take into account overlapping groups only if they are inclusive of the group being tested, and then we derive screening rules, adopting the dual polytope projection approach. This strategy allows us to screen each group independently of each other. In our experiments, we demonstrate the efficiency of our screening rules on various datasets.
[ "Seunghak Lee, Eric P. Xing" ]
cs.LG
null
1410.6973
null
null
http://arxiv.org/pdf/1410.6973v2
2015-02-05T20:48:11Z
2014-10-26T00:16:16Z
Differentially- and non-differentially-private random decision trees
We consider supervised learning with random decision trees, where the tree construction is completely random. The method is popularly used and works well in practice despite the simplicity of the setting, but its statistical mechanism is not yet well-understood. In this paper we provide strong theoretical guarantees regarding learning with random decision trees. We analyze and compare three different variants of the algorithm that have minimal memory requirements: majority voting, threshold averaging and probabilistic averaging. The random structure of the tree enables us to adapt these methods to a differentially-private setting; thus, we also propose differentially-private versions of all three schemes. We give upper bounds on the generalization error and mathematically explain how the accuracy depends on the number of random decision trees. Furthermore, we prove that only a logarithmic (in the size of the dataset) number of independently selected random decision trees suffices to correctly classify most of the data, even when differential-privacy guarantees must be maintained. We empirically show that majority voting and threshold averaging give the best accuracy, even for conservative users requiring high privacy guarantees. Furthermore, we demonstrate that a simple majority voting rule is an especially good candidate for the differentially-private classifier since it is much less sensitive to the choice of forest parameters than other methods.
[ "Mariusz Bojarski, Anna Choromanska, Krzysztof Choromanski, Yann LeCun" ]
cs.LG
null
1410.6975
null
null
http://arxiv.org/pdf/1410.6975v1
2014-10-26T00:59:11Z
2014-10-26T00:59:11Z
Notes on using Determinantal Point Processes for Clustering with Applications to Text Clustering
In this paper, we compare three initialization schemes for the KMEANS clustering algorithm: 1) random initialization (KMEANSRAND), 2) KMEANS++, and 3) KMEANSD++. Both KMEANSRAND and KMEANS++ have a major drawback in that the value of k needs to be set by the user of the algorithms. (Kang 2013) recently proposed a novel use of determinantal point processes for sampling the initial centroids for the KMEANS algorithm (we call it KMEANSD++). They, however, do not provide any evaluation establishing that KMEANSD++ is better than other algorithms. In this paper, we show that the performance of KMEANSD++ is comparable to KMEANS++ (both of which are better than KMEANSRAND), with KMEANSD++ having the additional advantage that it can automatically approximate the value of k.
[ "Apoorv Agarwal, Anna Choromanska, Krzysztof Choromanski" ]
stat.ML cs.LG
null
1410.6990
null
null
http://arxiv.org/pdf/1410.6990v1
2014-10-26T05:52:33Z
2014-10-26T05:52:33Z
Local Rademacher Complexity for Multi-label Learning
We analyze the local Rademacher complexity of empirical risk minimization (ERM)-based multi-label learning algorithms, and in doing so propose a new algorithm for multi-label learning. Rather than using the trace norm to regularize the multi-label predictor, we instead minimize the tail sum of the singular values of the predictor in multi-label learning. Benefiting from the use of the local Rademacher complexity, our algorithm, therefore, has a sharper generalization error bound and a faster convergence rate. Compared to methods that minimize over all singular values, concentrating on the tail singular values results in better recovery of the low-rank structure of the multi-label predictor, which plays an important role in exploiting label correlations. We propose a new conditional singular value thresholding algorithm to solve the resulting objective function. Empirical studies on real-world datasets validate our theoretical results and demonstrate the effectiveness of the proposed algorithm.
[ "Chang Xu, Tongliang Liu, Dacheng Tao, Chao Xu" ]
stat.ML cs.LG
null
1410.6991
null
null
http://arxiv.org/pdf/1410.6991v3
2014-11-04T05:14:25Z
2014-10-26T06:00:36Z
A provable SVD-based algorithm for learning topics in dominant admixture corpus
Topic models, such as Latent Dirichlet Allocation (LDA), posit that documents are drawn from admixtures of distributions over words, known as topics. The inference problem of recovering topics from admixtures is NP-hard. Assuming separability, a strong assumption, [4] gave the first provable algorithm for inference. For the LDA model, [6] gave a provable algorithm using tensor methods. But [4,6] do not learn topic vectors with bounded $l_1$ error (a natural measure for probability vectors). Our aim is to develop a model which makes intuitive and empirically supported assumptions and to design an algorithm with natural, simple components such as SVD, which provably solves the inference problem for the model with bounded $l_1$ error. A topic in LDA and other models is essentially characterized by a group of co-occurring words. Motivated by this, we introduce topic-specific Catchwords, groups of words which occur with strictly greater frequency in a topic than any other topic individually and are required to have high frequency together rather than individually. A major contribution of the paper is to show that under this more realistic assumption, which is empirically verified on real corpora, a singular value decomposition (SVD) based algorithm with a crucial pre-processing step of thresholding can provably recover the topics from a collection of documents drawn from Dominant admixtures. Dominant admixtures are convex combinations of distributions in which one distribution has a significantly higher contribution than the others. Apart from the simplicity of the algorithm, the sample complexity has near-optimal dependence on $w_0$, the lowest probability that a topic is dominant, and is better than [4]. Empirical evidence shows that on several real world corpora, both the Catchwords and Dominant admixture assumptions hold and the proposed algorithm substantially outperforms the state of the art [5].
[ "Trapit Bansal, Chiranjib Bhattacharyya, Ravindran Kannan" ]
cs.DS cs.LG
null
1410.7050
null
null
http://arxiv.org/pdf/1410.7050v3
2015-06-25T06:28:49Z
2014-10-26T15:41:37Z
A PTAS for Agnostically Learning Halfspaces
We present a PTAS for agnostically learning halfspaces w.r.t. the uniform distribution on the $d$ dimensional sphere. Namely, we show that for every $\mu>0$ there is an algorithm that runs in time $\mathrm{poly}(d,\frac{1}{\epsilon})$, and is guaranteed to return a classifier with error at most $(1+\mu)\mathrm{opt}+\epsilon$, where $\mathrm{opt}$ is the error of the best halfspace classifier. This improves on Awasthi, Balcan and Long [ABL14] who showed an algorithm with an (unspecified) constant approximation ratio. Our algorithm combines the classical technique of polynomial regression (e.g. [LMN89, KKMS05]), together with the new localization technique of [ABL14].
[ "Amit Daniely" ]
cs.LG cs.DC cs.SY stat.ML
10.1109/ISCAS.2015.7168664
1410.7057
null
null
http://arxiv.org/abs/1410.7057v1
2014-10-26T16:38:38Z
2014-10-26T16:38:38Z
Sparse Distributed Learning via Heterogeneous Diffusion Adaptive Networks
In-network distributed estimation of sparse parameter vectors via diffusion LMS strategies has been studied and investigated in recent years. In all the existing works, some convex regularization approach has been used at each node of the network in order to achieve an overall network performance superior to that of the simple diffusion LMS, albeit at the cost of increased computational overhead. In this paper, we provide analytical as well as experimental results which show that the convex regularization can be selectively applied only to some chosen nodes, keeping the rest of the nodes sparsity-agnostic, while still enjoying the same optimum behavior as can be realized by deploying the convex regularization at all the nodes. Due to the incorporation of unregularized learning at a subset of nodes, less computational cost is needed in the proposed approach. We also provide a guideline for the selection of the sparsity-aware nodes and a closed-form expression for the optimum regularization parameter.
[ "Bijit Kumar Das, Mrityunjoy Chakraborty, Jerónimo Arenas-García" ]
cs.CY cs.LG stat.ME
null
1410.7074
null
null
http://arxiv.org/pdf/1410.7074v4
2015-08-21T16:18:15Z
2014-10-26T20:12:32Z
Random Sampling in an Age of Automation: Minimizing Expenditures through Balanced Collection and Annotation
Methods for automated collection and annotation are changing the cost-structures of sampling surveys for a wide range of applications. Digital samples in the form of images or audio recordings can be collected rapidly, and annotated by computer programs or crowd workers. We consider the problem of estimating a population mean under these new cost-structures, and propose a Hybrid-Offset sampling design. This design utilizes two annotators: a primary, which is accurate but costly (e.g. a human expert) and an auxiliary which is noisy but cheap (e.g. a computer program), in order to minimize total sampling expenditures. Our analysis gives necessary conditions for the Hybrid-Offset design and specifies optimal sample sizes for both annotators. Simulations on data from a coral reef survey program indicate that the Hybrid-Offset design outperforms several alternative sampling designs. In particular, sampling expenditures are reduced 50% compared to the Conventional design currently deployed by the coral ecologists.
[ "Oscar Beijbom" ]
cs.LG stat.AP
null
1410.7140
null
null
http://arxiv.org/pdf/1410.7140v5
2016-02-24T16:05:53Z
2014-10-27T07:32:36Z
A data-driven method for syndrome type identification and classification in traditional Chinese medicine
Objective: The efficacy of traditional Chinese medicine (TCM) treatments for Western medicine (WM) diseases relies heavily on the proper classification of patients into TCM syndrome types. We develop a data-driven method for solving the classification problem, where syndrome types are identified and quantified based on patterns detected in unlabeled symptom survey data. Method: Latent class analysis (LCA) has been applied in WM research to solve a similar problem, i.e., to identify subtypes of a patient population in the absence of a gold standard. A widely known weakness of LCA is that it makes an unrealistically strong independence assumption. We relax the assumption by first detecting symptom co-occurrence patterns from survey data and use those patterns instead of the symptoms as features for LCA. Results: The result of the investigation is a six-step method: Data collection, symptom co-occurrence pattern discovery, pattern interpretation, syndrome identification, syndrome type identification, and syndrome type classification. A software package called Lantern is developed to support the application of the method. The method is illustrated using a data set on Vascular Mild Cognitive Impairment (VMCI). Conclusions: A data-driven method for TCM syndrome identification and classification is presented. The method can be used to answer the following questions about a Western medicine disease: What TCM syndrome types are there among the patients with the disease? What is the prevalence of each syndrome type? What are the statistical characteristics of each syndrome type in terms of occurrence of symptoms? How can we determine the syndrome type(s) of a patient?
[ "Nevin L. Zhang, Chen Fu, Teng Fei Liu, Bao Xin Chen, Kin Man Poon, Pei Xian Chen, Yun Ling Zhang" ]
math.OC cs.DS cs.LG
null
1410.7171
null
null
http://arxiv.org/pdf/1410.7171v2
2015-02-05T07:58:42Z
2014-10-27T10:28:12Z
Exponentiated Subgradient Algorithm for Online Optimization under the Random Permutation Model
Online optimization problems arise in many resource allocation tasks, where the future demands for each resource and the associated utility functions change over time and are not known a priori, yet resources need to be allocated at every point in time despite the future uncertainty. In this paper, we consider online optimization problems with general concave utilities. We modify and extend an online optimization algorithm proposed by Devanur et al. for linear programming to this general setting. The model we use for the arrival of the utilities and demands is known as the random permutation model, where a fixed collection of utilities and demands are presented to the algorithm in random order. We prove that under this model the algorithm achieves a competitive ratio of $1-O(\epsilon)$ under a near-optimal assumption that the bid to budget ratio is $O (\frac{\epsilon^2}{\log({m}/{\epsilon})})$, where $m$ is the number of resources, while enjoying a significantly lower computational cost than the optimal algorithm proposed by Kesselheim et al. We draw a connection between the proposed algorithm and subgradient methods used in convex optimization. In addition, we present numerical experiments that demonstrate the performance and speed of this algorithm in comparison to existing algorithms.
[ "Reza Eghbali, Jon Swenson, Maryam Fazel" ]
cs.LG math.OC stat.ML
null
1410.7172
null
null
http://arxiv.org/pdf/1410.7172v2
2015-03-04T20:03:23Z
2014-10-27T10:28:36Z
Heteroscedastic Treed Bayesian Optimisation
Optimising black-box functions is important in many disciplines, such as tuning machine learning models, robotics, finance and mining exploration. Bayesian optimisation is a state-of-the-art technique for the global optimisation of black-box functions which are expensive to evaluate. At the core of this approach is a Gaussian process prior that captures our belief about the distribution over functions. However, in many cases a single Gaussian process is not flexible enough to capture non-stationarity in the objective function. Consequently, heteroscedasticity negatively affects performance of traditional Bayesian methods. In this paper, we propose a novel prior model with hierarchical parameter learning that tackles the problem of non-stationarity in Bayesian optimisation. Our results demonstrate substantial improvements in a wide range of applications, including automatic machine learning and mining exploration.
[ "John-Alexander M. Assael, Ziyu Wang, Bobak Shahriari, Nando de Freitas" ]
math.NA cs.LG cs.NA math.OC stat.ML
10.1137/140993272
1410.7220
null
null
http://arxiv.org/abs/1410.7220v3
2015-05-08T05:55:15Z
2014-10-27T13:09:29Z
Exact and Heuristic Algorithms for Semi-Nonnegative Matrix Factorization
Given a matrix $M$ (not necessarily nonnegative) and a factorization rank $r$, semi-nonnegative matrix factorization (semi-NMF) looks for a matrix $U$ with $r$ columns and a nonnegative matrix $V$ with $r$ rows such that $UV$ is the best possible approximation of $M$ according to some metric. In this paper, we study the properties of semi-NMF from which we develop exact and heuristic algorithms. Our contribution is threefold. First, we prove that the error of a semi-NMF of rank $r$ has to be smaller than the best unconstrained approximation of rank $r-1$. This leads us to a new initialization procedure based on the singular value decomposition (SVD) with a guarantee on the quality of the approximation. Second, we propose an exact algorithm (that is, an algorithm that finds an optimal solution), also based on the SVD, for a certain class of matrices (including nonnegative irreducible matrices) from which we derive an initialization for matrices not belonging to that class. Numerical experiments illustrate that this second approach performs extremely well, and allows us to compute optimal semi-NMF decompositions in many situations. Finally, we analyze the computational complexity of semi-NMF proving its NP-hardness, already in the rank-one case (that is, for $r = 1$), and we show that semi-NMF is sometimes ill-posed (that is, an optimal solution does not exist).
[ "Nicolas Gillis, Abhishek Kumar" ]
cs.LG
null
1410.7372
null
null
http://arxiv.org/pdf/1410.7372v1
2014-10-27T19:46:55Z
2014-10-27T19:46:55Z
Feature Selection through Minimization of the VC dimension
Feature selection involves identifying the most relevant subset of input features, with a view to improving generalization of predictive models by reducing overfitting. Directly searching for the most relevant combination of attributes is NP-hard. Variable selection is of critical importance in many applications, such as micro-array data analysis, where selecting a small number of discriminative features is crucial to developing useful models of disease mechanisms, as well as for prioritizing targets for drug discovery. The recently proposed Minimal Complexity Machine (MCM) provides a way to learn a hyperplane classifier by minimizing an exact ($\Theta$) bound on its VC dimension. It is well known that a lower VC dimension contributes to good generalization. For a linear hyperplane classifier in the input space, the VC dimension is upper bounded by the number of features; hence, a linear classifier with a small VC dimension is parsimonious in the set of features it employs. In this paper, we use the linear MCM to learn a classifier in which a large number of weights are zero; features with non-zero weights are the ones that are chosen. Selected features are used to learn a kernel SVM classifier. On a number of benchmark datasets, the features chosen by the linear MCM yield comparable or better test set accuracy than when methods such as ReliefF and FCBF are used for the task. The linear MCM typically chooses one-tenth the number of attributes chosen by the other methods; on some very high dimensional datasets, the MCM chooses about $0.6\%$ of the features; in comparison, ReliefF and FCBF choose 70 to 140 times more features, thus demonstrating that minimizing the VC dimension may provide a new, and very effective route for feature selection and for learning sparse representations.
[ "Jayadeva, Sanjit S. Batra, Siddharth Sabharwal" ]
stat.ML cs.LG physics.data-an
null
1410.7404
null
null
http://arxiv.org/pdf/1410.7404v2
2015-01-31T00:38:54Z
2014-10-27T20:00:40Z
Maximally Informative Hierarchical Representations of High-Dimensional Data
We consider a set of probabilistic functions of some input variables as a representation of the inputs. We present bounds on how informative a representation is about input data. We extend these bounds to hierarchical representations so that we can quantify the contribution of each layer towards capturing the information in the original data. The special form of these bounds leads to a simple, bottom-up optimization procedure to construct hierarchical representations that are also maximally informative about the data. This optimization has linear computational complexity and constant sample complexity in the number of variables. These results establish a new approach to unsupervised learning of deep representations that is both principled and practical. We demonstrate the usefulness of the approach on both synthetic and real-world data.
[ "Greg Ver Steeg, Aram Galstyan" ]
stat.ML cs.LG
null
1410.7414
null
null
http://arxiv.org/pdf/1410.7414v1
2014-10-27T20:15:18Z
2014-10-27T20:15:18Z
Fast Function to Function Regression
We analyze the problem of regression when both input covariates and output responses are functions from a nonparametric function class. Function to function regression (FFR) covers a large range of interesting applications including time-series prediction problems, and also more general tasks like studying a mapping between two separate types of distributions. However, previous nonparametric estimators for FFR type problems scale badly computationally with the number of input/output pairs in a data-set. Given the complexity of a mapping between general functions it may be necessary to consider large data-sets in order to achieve a low estimation risk. To address this issue, we develop a novel scalable nonparametric estimator, the Triple-Basis Estimator (3BE), which is capable of operating over datasets with many instances. To the best of our knowledge, the 3BE is the first nonparametric FFR estimator that can scale to massive datasets. We analyze the 3BE's risk and derive an upper bound on its rate. Furthermore, we show an improvement of several orders of magnitude in terms of prediction speed and a reduction in error over previous estimators on various real-world data-sets.
[ "Junier Oliva, Willie Neiswanger, Barnabas Poczos, Eric Xing, Jeff Schneider" ]
cs.CV cs.AI cs.LG
null
1410.7452
null
null
http://arxiv.org/pdf/1410.7452v2
2015-01-26T21:36:36Z
2014-10-27T22:40:52Z
Consensus Message Passing for Layered Graphical Models
Generative models provide a powerful framework for probabilistic reasoning. However, in many domains their use has been hampered by the practical difficulties of inference. This is particularly the case in computer vision, where models of the imaging process tend to be large, loopy and layered. For this reason bottom-up conditional models have traditionally dominated in such domains. We find that widely-used, general-purpose message passing inference algorithms such as Expectation Propagation (EP) and Variational Message Passing (VMP) fail on the simplest of vision models. With these models in mind, we introduce a modification to message passing that learns to exploit their layered structure by passing 'consensus' messages that guide inference towards good solutions. Experiments on a variety of problems show that the proposed technique leads to significantly more accurate inference results, not only when compared to standard EP and VMP, but also when compared to competitive bottom-up conditional models.
[ "Varun Jampani, S. M. Ali Eslami, Daniel Tarlow, Pushmeet Kohli, John Winn" ]
cs.NE cs.LG stat.ML
null
1410.7455
null
null
http://arxiv.org/pdf/1410.7455v8
2015-06-22T22:07:56Z
2014-10-27T22:45:41Z
Parallel training of DNNs with Natural Gradient and Parameter Averaging
We describe the neural-network training framework used in the Kaldi speech recognition toolkit, which is geared towards training DNNs with large amounts of training data using multiple GPU-equipped or multi-core machines. In order to be as hardware-agnostic as possible, we needed a way to use multiple machines without generating excessive network traffic. Our method is to average the neural network parameters periodically (typically every minute or two), and redistribute the averaged parameters to the machines for further training. Each machine sees different data. By itself, this method does not work very well. However, we have another method, an approximate and efficient implementation of Natural Gradient for Stochastic Gradient Descent (NG-SGD), which seems to allow our periodic-averaging method to work well, as well as substantially improving the convergence of SGD on a single machine.
[ "Daniel Povey, Xiaohui Zhang, Sanjeev Khudanpur" ]
stat.ML cs.LG cs.NE cs.SY
null
1410.7550
null
null
http://arxiv.org/pdf/1410.7550v1
2014-10-28T08:37:01Z
2014-10-28T08:37:01Z
Learning deep dynamical models from image pixels
Modeling dynamical systems is important in many disciplines, e.g., control, robotics, or neurotechnology. Commonly the state of these systems is not directly observed, but only available through noisy and potentially high-dimensional observations. In these cases, system identification, i.e., finding the measurement mapping and the transition mapping (system dynamics) in latent space, can be challenging. For linear system dynamics and measurement mappings, efficient solutions for system identification are available. However, in practical applications, the linearity assumption does not hold, requiring non-linear system identification techniques. If additionally the observations are high-dimensional (e.g., images), non-linear system identification is inherently hard. To address the problem of non-linear system identification from high-dimensional observations, we combine recent advances in deep learning and system identification. In particular, we jointly learn a low-dimensional embedding of the observation by means of deep auto-encoders and a predictive transition model in this low-dimensional space. We demonstrate that our model enables learning good predictive models of dynamical systems from pixel information only.
[ "Niklas Wahlström, Thomas B. Schön, Marc Peter Deisenroth" ]
cs.LG cs.DS math.OC
null
1410.7596
null
null
http://arxiv.org/pdf/1410.7596v1
2014-10-28T11:57:54Z
2014-10-28T11:57:54Z
Fast Algorithms for Online Stochastic Convex Programming
We introduce the online stochastic Convex Programming (CP) problem, a very general version of stochastic online problems which allows arbitrary concave objectives and convex feasibility constraints. Many well-studied problems like online stochastic packing and covering, online stochastic matching with concave returns, etc. form a special case of online stochastic CP. We present fast algorithms for these problems, which achieve near-optimal regret guarantees for both the i.i.d. and the random permutation models of stochastic inputs. When applied to the special case of online packing, our ideas yield a simpler and faster primal-dual algorithm for this well-studied problem, which achieves the optimal competitive ratio. Our techniques make explicit the connection of the primal-dual paradigm and online learning to online stochastic CP.
[ "Shipra Agrawal, Nikhil R. Devanur" ]
cs.LG cs.IT math.IT stat.CO stat.ML
null
1410.7659
null
null
http://arxiv.org/pdf/1410.7659v2
2014-11-29T02:31:20Z
2014-10-28T15:32:09Z
Learning graphical models from the Glauber dynamics
In this paper we consider the problem of learning undirected graphical models from data generated according to the Glauber dynamics. The Glauber dynamics is a Markov chain that sequentially updates individual nodes (variables) in a graphical model and it is frequently used to sample from the stationary distribution (to which it converges given sufficient time). Additionally, the Glauber dynamics is a natural dynamical model in a variety of settings. This work deviates from the standard formulation of graphical model learning in the literature, where one assumes access to i.i.d. samples from the distribution. Much of the research on graphical model learning has been directed towards finding algorithms with low computational cost. As the main result of this work, we establish that the problem of reconstructing binary pairwise graphical models is computationally tractable when we observe the Glauber dynamics. Specifically, we show that a binary pairwise graphical model on $p$ nodes with maximum degree $d$ can be learned in time $f(d)p^2\log p$, for a function $f(d)$, using nearly the information-theoretic minimum number of samples.
[ "Guy Bresler, David Gamarnik, Devavrat Shah" ]
cs.IT cs.LG math.IT stat.ML
null
1410.7660
null
null
http://arxiv.org/pdf/1410.7660v1
2014-10-28T15:33:13Z
2014-10-28T15:33:13Z
Non-convex Robust PCA
We propose a new method for robust PCA -- the task of recovering a low-rank matrix from sparse corruptions that are of unknown value and support. Our method involves alternating between projecting appropriate residuals onto the set of low-rank matrices, and the set of sparse matrices; each projection is {\em non-convex} but easy to compute. In spite of this non-convexity, we establish exact recovery of the low-rank matrix, under the same conditions that are required by existing methods (which are based on convex optimization). For an $m \times n$ input matrix ($m \leq n$), our method has a running time of $O(r^2mn)$ per iteration, and needs $O(\log(1/\epsilon))$ iterations to reach an accuracy of $\epsilon$. This is close to the running time of simple PCA via the power method, which requires $O(rmn)$ per iteration, and $O(\log(1/\epsilon))$ iterations. In contrast, existing methods for robust PCA, which are based on convex optimization, have $O(m^2n)$ complexity per iteration, and take $O(1/\epsilon)$ iterations, i.e., exponentially more iterations for the same accuracy. Experiments on both synthetic and real data establish the improved speed and accuracy of our method over existing convex implementations.
[ "Praneeth Netrapalli, U N Niranjan, Sujay Sanghavi, Animashree Anandkumar, Prateek Jain" ]
stat.ML cs.AI cs.LG stat.ME
null
1410.7690
null
null
http://arxiv.org/pdf/1410.7690v5
2016-06-04T17:03:24Z
2014-10-28T16:22:32Z
Trend Filtering on Graphs
We introduce a family of adaptive estimators on graphs, based on penalizing the $\ell_1$ norm of discrete graph differences. This generalizes the idea of trend filtering [Kim et al. (2009), Tibshirani (2014)], used for univariate nonparametric regression, to graphs. Analogous to the univariate case, graph trend filtering exhibits a level of local adaptivity unmatched by the usual $\ell_2$-based graph smoothers. It is also defined by a convex minimization problem that is readily solved (e.g., by fast ADMM or Newton algorithms). We demonstrate the merits of graph trend filtering through examples and theory.
[ "Yu-Xiang Wang, James Sharpnack, Alex Smola, Ryan J. Tibshirani" ]
cs.LG cs.CR
null
1410.7709
null
null
http://arxiv.org/pdf/1410.7709v1
2014-10-28T17:29:42Z
2014-10-28T17:29:42Z
Anomaly Detection Framework Using Rule Extraction for Efficient Intrusion Detection
Huge datasets in cyber security, such as network traffic logs, can be analyzed using machine learning and data mining methods. However, the amount of collected data is increasing, which makes analysis more difficult. Many machine learning methods have not been designed for big datasets, and consequently are slow and difficult to understand. We address the issue of efficient network traffic classification by creating an intrusion detection framework that applies dimensionality reduction and conjunctive rule extraction. The system can perform unsupervised anomaly detection and use this information to create conjunctive rules that classify huge amounts of traffic in real time. We test the implemented system with the widely used KDD Cup 99 dataset and real-world network logs to confirm that the performance is satisfactory. This system is transparent and does not work like a black box, making it intuitive for domain experts, such as network administrators.
[ "Antti Juvonen, Tuomo Sipola" ]
cs.LG cs.AI stat.ML
null
1410.7827
null
null
http://arxiv.org/pdf/1410.7827v2
2015-11-24T03:12:57Z
2014-10-28T22:04:30Z
Generalized Product of Experts for Automatic and Principled Fusion of Gaussian Process Predictions
In this work, we propose a generalized product of experts (gPoE) framework for combining the predictions of multiple probabilistic models. We identify four desirable properties that are important for scalability, expressiveness and robustness, when learning and inferring with a combination of multiple models. Through analysis and experiments, we show that the gPoE of Gaussian processes (GPs) has these qualities, while no other existing combination scheme satisfies all of them at the same time. The resulting GP-gPoE is highly scalable as individual GP experts can be independently learned in parallel; very expressive as the way experts are combined depends on the input rather than being fixed; the combined prediction is still a valid probabilistic model with natural interpretation; and finally robust to unreliable predictions from individual experts.
[ "Yanshuai Cao, David J. Fleet" ]
cs.LG
null
1410.7835
null
null
http://arxiv.org/pdf/1410.7835v2
2014-12-09T01:07:36Z
2014-10-28T23:14:56Z
Fast Learning of Relational Dependency Networks
A Relational Dependency Network (RDN) is a directed graphical model widely used for multi-relational data. These networks allow cyclic dependencies, necessary to represent relational autocorrelations. We describe an approach for learning both the RDN's structure and its parameters, given an input relational database: First learn a Bayesian network (BN), then transform the Bayesian network to an RDN. Thus fast Bayes net learning can provide fast RDN learning. The BN-to-RDN transform comprises a simple, local adjustment of the Bayes net structure and a closed-form transform of the Bayes net parameters. This method can learn an RDN for a dataset with a million tuples in minutes. We empirically compare our approach to state-of-the art RDN learning methods that use functional gradient boosting, on five benchmark datasets. Learning RDNs via BNs scales much better to large datasets than learning RDNs with boosting, and provides competitive accuracy in predictions.
[ "Oliver Schulte, Zhensong Qian, Arthur E. Kirkpatrick, Xiaoqian Yin, Yan Sun" ]
cs.LG cs.IR math.OC
null
1410.7852
null
null
http://arxiv.org/pdf/1410.7852v1
2014-10-29T01:15:20Z
2014-10-29T01:15:20Z
A Markov Decision Process Analysis of the Cold Start Problem in Bayesian Information Filtering
We consider the information filtering problem, in which we face a stream of items, and must decide which ones to forward to a user to maximize the number of relevant items shown, minus a penalty for each irrelevant item shown. Forwarding decisions are made separately in a personalized way for each user. We focus on the cold-start setting for this problem, in which we have limited historical data on the user's preferences, and must rely on feedback from forwarded articles to learn the fraction of items relevant to the user in each of several item categories. Performing well in this setting requires trading off exploration vs. exploitation, forwarding items that are likely to be irrelevant, to allow learning that will improve later performance. In a Bayesian setting, and using Markov decision processes, we show how the Bayes-optimal forwarding algorithm can be computed efficiently when the user will examine each forwarded article, and how an upper bound on the Bayes-optimal procedure and a heuristic index policy can be obtained for the setting when the user will examine only a limited number of forwarded items. We present results from simulation experiments using parameters estimated from historical data from arXiv.org.
[ "Xiaoting Zhao, Peter I. Frazier" ]
cs.CV cs.LG stat.ML
null
1410.7876
null
null
http://arxiv.org/pdf/1410.7876v2
2016-06-16T16:46:36Z
2014-10-29T05:25:44Z
Collaborative Multi-sensor Classification via Sparsity-based Representation
In this paper, we propose a general collaborative sparse representation framework for multi-sensor classification, which takes into account the correlations as well as complementary information between heterogeneous sensors simultaneously while considering joint sparsity within each sensor's observations. We also robustify our models to deal with the presence of sparse noise and low-rank interference signals. Specifically, we demonstrate that incorporating the noise or interference signal as a low-rank component in our models is essential in a multi-sensor classification problem when multiple co-located sources/sensors simultaneously record the same physical event. We further extend our frameworks to kernelized models which rely on sparsely representing a test sample in terms of all the training samples in a feature space induced by a kernel function. A fast and efficient algorithm based on the alternating direction method is proposed, and its convergence to an optimal solution is guaranteed. Extensive experiments are conducted on several real multi-sensor data sets and results are compared with conventional classifiers to verify the effectiveness of the proposed methods.
[ "Minh Dao, Nam H. Nguyen, Nasser M. Nasrabadi, Trac D. Tran" ]
cs.LG
null
1410.7890
null
null
http://arxiv.org/pdf/1410.7890v1
2014-10-29T07:05:21Z
2014-10-29T07:05:21Z
Global Bandits with Holder Continuity
Standard Multi-Armed Bandit (MAB) problems assume that the arms are independent. However, in many application scenarios, the information obtained by playing an arm provides information about the remainder of the arms. Hence, in such applications, this informativeness can and should be exploited to enable faster convergence to the optimal solution. In this paper, we introduce and formalize the Global MAB (GMAB), in which arms are globally informative through a global parameter, i.e., choosing an arm reveals information about all the arms. We propose a greedy policy for the GMAB which always selects the arm with the highest estimated expected reward, and prove that it achieves bounded parameter-dependent regret. Hence, this policy selects suboptimal arms only finitely many times, and after a finite number of initial time steps, the optimal arm is selected in all of the remaining time steps with probability one. In addition, we also study how the informativeness of the arms about each other's rewards affects the speed of learning. Specifically, we prove that the parameter-free (worst-case) regret is sublinear in time, and decreases with the informativeness of the arms. We also prove a sublinear in time Bayesian risk bound for the GMAB which reduces to the well-known Bayesian risk bound for linearly parameterized bandits when the arms are fully informative. GMABs have applications ranging from drug and treatment discovery to dynamic pricing.
[ "Onur Atan, Cem Tekin, Mihaela van der Schaar" ]
cs.AI cs.CL cs.CV cs.LG
null
1410.8027
null
null
http://arxiv.org/pdf/1410.8027v3
2015-05-05T18:03:56Z
2014-10-29T15:38:29Z
Towards a Visual Turing Challenge
As language and visual understanding by machines progresses rapidly, we are observing an increasing interest in holistic architectures that tightly interlink both modalities in a joint learning and inference process. This trend has allowed the community to progress towards more challenging and open tasks, and has refueled the hope of achieving the old AI dream of building machines that could pass a Turing test in open domains. In order to make steady progress towards this goal, we realize that quantifying performance becomes increasingly difficult. Therefore we ask how we can precisely define such challenges and how we can evaluate different algorithms on these open tasks. In this paper, we summarize and discuss such challenges as well as try to give answers where appropriate options are available in the literature. We exemplify some of the solutions on a recently presented question-answering dataset based on real-world indoor images that establishes a visual Turing challenge. Finally, we argue that, despite the success of unique ground-truth annotation, we likely have to step away from carefully curated datasets and rather rely on 'social consensus' as the main driving force to create suitable benchmarks. Providing coverage in this inherently ambiguous output space is an emerging challenge that we face in order to make quantifiable progress in this area.
[ "Mateusz Malinowski, Mario Fritz" ]
cs.LG cs.IR stat.ML
null
1410.8034
null
null
http://arxiv.org/pdf/1410.8034v1
2014-10-29T15:51:54Z
2014-10-29T15:51:54Z
Latent Feature Based FM Model For Rating Prediction
Rating Prediction is a basic problem in recommender systems, and one of the most widely used methods is Factorization Machines (FM). However, traditional matrix factorization methods fail to utilize the benefit of implicit feedback, which has been proved to be important in the Rating Prediction problem. In this work, we consider a specific situation, movie rating prediction, where we assume that a user's watching history has a strong influence on his/her rating behavior on an item. We introduce two models, Latent Dirichlet Allocation (LDA) and word2vec, both of which achieve state-of-the-art results in training latent features. Based on that, we propose two feature-based models. One is the Topic-based FM Model, which provides the implicit feedback to the matrix factorization. The other is the Vector-based FM Model, which expresses the ordering information of the watching history. Empirical results on three datasets demonstrate that our method performs better than the baseline model and confirm that the Vector-based FM Model usually works better, as it contains the ordering information.
[ "Xudong Liu, Bin Zhang, Ting Zhang, Chang Liu" ]
cs.LG stat.ML
null
1410.8043
null
null
http://arxiv.org/pdf/1410.8043v1
2014-10-29T16:19:21Z
2014-10-29T16:19:21Z
High-Performance Distributed ML at Scale through Parameter Server Consistency Models
As Machine Learning (ML) applications increase in data size and model complexity, practitioners turn to distributed clusters to satisfy the increased computational and memory demands. Unfortunately, effective use of clusters for ML requires considerable expertise in writing distributed code, while highly-abstracted frameworks like Hadoop have not, in practice, approached the performance seen in specialized ML implementations. The recent Parameter Server (PS) paradigm is a middle ground between these extremes, allowing easy conversion of single-machine parallel ML applications into distributed ones, while maintaining high throughput through relaxed "consistency models" that allow inconsistent parameter reads. However, due to insufficient theoretical study, it is not clear which of these consistency models can really ensure correct ML algorithm output; at the same time, there remain many theoretically-motivated but undiscovered opportunities to maximize computational throughput. Motivated by this challenge, we study both the theoretical guarantees and empirical behavior of iterative-convergent ML algorithms in existing PS consistency models. We then use the gleaned insights to improve a consistency model using an "eager" PS communication mechanism, and implement it as a new PS system that enables ML algorithms to reach their solution more quickly.
[ "Wei Dai, Abhimanu Kumar, Jinliang Wei, Qirong Ho, Garth Gibson, Eric P. Xing" ]
cs.CL cs.LG
null
1410.8149
null
null
http://arxiv.org/pdf/1410.8149v1
2014-10-29T20:07:21Z
2014-10-29T20:07:21Z
Detecting Structural Irregularity in Electronic Dictionaries Using Language Modeling
Dictionaries are often developed using tools that save to Extensible Markup Language (XML)-based standards. These standards often allow high-level repeating elements to represent lexical entries, and utilize descendants of these repeating elements to represent the structure within each lexical entry, in the form of an XML tree. In many cases, dictionaries are published that have errors and inconsistencies that are expensive to find manually. This paper discusses a method for dictionary writers to quickly audit structural regularity across entries in a dictionary by using statistical language modeling. The approach learns the patterns of XML nodes that could occur within an XML tree, and then calculates the probability of each XML tree in the dictionary against these patterns to look for entries that diverge from the norm.
[ "Paul Rodrigues, David Zajic, David Doermann, Michael Bloodgood, Peng Ye" ]
cs.CL cs.LG cs.NE
null
1410.8206
null
null
http://arxiv.org/pdf/1410.8206v4
2015-05-30T19:57:28Z
2014-10-30T00:20:31Z
Addressing the Rare Word Problem in Neural Machine Translation
Neural Machine Translation (NMT) is a new approach to machine translation that has shown promising results that are comparable to traditional approaches. A significant weakness in conventional NMT systems is their inability to correctly translate very rare words: end-to-end NMTs tend to have relatively small vocabularies with a single unk symbol that represents every possible out-of-vocabulary (OOV) word. In this paper, we propose and implement an effective technique to address this problem. We train an NMT system on data that is augmented by the output of a word alignment algorithm, allowing the NMT system to emit, for each OOV word in the target sentence, the position of its corresponding word in the source sentence. This information is later utilized in a post-processing step that translates every OOV word using a dictionary. Our experiments on the WMT14 English to French translation task show that this method provides a substantial improvement of up to 2.8 BLEU points over an equivalent NMT system that does not use this technique. With 37.5 BLEU points, our NMT system is the first to surpass the best result achieved on a WMT14 contest task.
[ "Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, Wojciech Zaremba" ]
cs.LG
null
1410.8251
null
null
http://arxiv.org/pdf/1410.8251v1
2014-10-30T04:33:36Z
2014-10-30T04:33:36Z
Notes on Noise Contrastive Estimation and Negative Sampling
Estimating the parameters of probabilistic models of language such as maxent models and probabilistic neural models is computationally difficult since it involves evaluating partition functions by summing over an entire vocabulary, which may be millions of word types in size. Two closely related strategies---noise contrastive estimation (Mnih and Teh, 2012; Mnih and Kavukcuoglu, 2013; Vaswani et al., 2013) and negative sampling (Mikolov et al., 2012; Goldberg and Levy, 2014)---have emerged as popular solutions to this computational problem, but some confusion remains as to which is more appropriate and when. This document explicates their relationships to each other and to other estimation techniques. The analysis shows that, although they are superficially similar, NCE is a general parameter estimation technique that is asymptotically unbiased, while negative sampling is best understood as a family of binary classification models that are useful for learning word representations but not as a general-purpose estimator.
[ "['Chris Dyer']", "Chris Dyer" ]
stat.ME cs.LG stat.ML
null
1410.8275
null
null
http://arxiv.org/pdf/1410.8275v3
2016-06-28T05:13:33Z
2014-10-30T07:22:33Z
Bootstrap-Based Regularization for Low-Rank Matrix Estimation
We develop a flexible framework for low-rank matrix estimation that allows us to transform noise models into regularization schemes via a simple bootstrap algorithm. Effectively, our procedure seeks an autoencoding basis for the observed matrix that is stable with respect to the specified noise model; we call the resulting procedure a stable autoencoder. In the simplest case, with an isotropic noise model, our method is equivalent to a classical singular value shrinkage estimator. For non-isotropic noise models, e.g., Poisson noise, the method does not reduce to singular value shrinkage, and instead yields new estimators that perform well in experiments. Moreover, by iterating our stable autoencoding scheme, we can automatically generate low-rank estimates without specifying the target rank as a tuning parameter.
[ "Julie Josse and Stefan Wager", "['Julie Josse' 'Stefan Wager']" ]
cs.LG cs.AI cs.CL cs.RO
null
1410.8326
null
null
http://arxiv.org/pdf/1410.8326v1
2014-10-30T11:02:39Z
2014-10-30T11:02:39Z
Towards Learning Object Affordance Priors from Technical Texts
Everyday activities performed by artificial assistants can potentially be executed naively and dangerously given their lack of common sense knowledge. This paper presents conceptual work towards obtaining prior knowledge on the usual modality (passive or active) of any given entity, together with estimates of its affordances, by extracting high-confidence ability modality semantic relations (the "X can Y" relationship) from non-figurative texts, through analysis of the co-occurrence of grammatical instances of subjects and verbs, and of verbs and objects. The discussion includes an outline of the concept, its potential and limitations, and possible choices of features and learning framework.
[ "Nicholas H. Kirk", "['Nicholas H. Kirk']" ]
cs.CC cs.DM cs.LG
null
1410.8420
null
null
http://arxiv.org/pdf/1410.8420v1
2014-10-30T16:10:26Z
2014-10-30T16:10:26Z
Learning circuits with few negations
Monotone Boolean functions, and the monotone Boolean circuits that compute them, have been intensively studied in complexity theory. In this paper we study the structure of Boolean functions in terms of the minimum number of negations in any circuit computing them, a complexity measure that interpolates between monotone functions and the class of all functions. We study this generalization of monotonicity from the vantage point of learning theory, giving near-matching upper and lower bounds on the uniform-distribution learnability of circuits in terms of the number of negations they contain. Our upper bounds are based on a new structural characterization of negation-limited circuits that extends a classical result of A. A. Markov. Our lower bounds, which employ Fourier-analytic tools from hardness amplification, give new results even for circuits with no negations (i.e. monotone functions).
[ "['Eric Blais' 'Clément L. Canonne' 'Igor C. Oliveira' 'Rocco A. Servedio'\n 'Li-Yang Tan']", "Eric Blais, Cl\\'ement L. Canonne, Igor C. Oliveira, Rocco A. Servedio\n and Li-Yang Tan" ]
cs.LG
null
1410.8516
null
null
http://arxiv.org/pdf/1410.8516v6
2015-04-10T12:27:56Z
2014-10-30T19:44:20Z
NICE: Non-linear Independent Components Estimation
We propose a deep learning framework for modeling complex high-dimensional densities called Non-linear Independent Component Estimation (NICE). It is based on the idea that a good representation is one in which the data has a distribution that is easy to model. For this purpose, a non-linear deterministic transformation of the data is learned that maps it to a latent space so as to make the transformed data conform to a factorized distribution, i.e., resulting in independent latent variables. We parametrize this transformation so that computing the Jacobian determinant and inverse transform is trivial, yet we maintain the ability to learn complex non-linear transformations, via a composition of simple building blocks, each based on a deep neural network. The training criterion is simply the exact log-likelihood, which is tractable. Unbiased ancestral sampling is also easy. We show that this approach yields good generative models on four image datasets and can be used for inpainting.
[ "Laurent Dinh, David Krueger and Yoshua Bengio", "['Laurent Dinh' 'David Krueger' 'Yoshua Bengio']" ]
cs.CL cs.LG stat.ML
null
1410.8553
null
null
http://arxiv.org/pdf/1410.8553v1
2014-10-30T20:52:48Z
2014-10-30T20:52:48Z
A random forest system combination approach for error detection in digital dictionaries
When digitizing a print bilingual dictionary, whether via optical character recognition or manual entry, it is inevitable that errors are introduced into the electronic version that is created. We investigate automating the process of detecting errors in an XML representation of a digitized print dictionary using a hybrid approach that combines rule-based, feature-based, and language model-based methods. We investigate combining methods and show that using random forests is a promising approach. We find that in isolation, unsupervised methods rival the performance of supervised methods. Random forests typically require training data so we investigate how we can apply random forests to combine individual base methods that are themselves unsupervised without requiring large amounts of training data. Experiments reveal empirically that a relatively small amount of data is sufficient and can potentially be further reduced through specific selection criteria.
[ "Michael Bloodgood, Peng Ye, Paul Rodrigues, David Zajic and David\n Doermann", "['Michael Bloodgood' 'Peng Ye' 'Paul Rodrigues' 'David Zajic'\n 'David Doermann']" ]
cs.CV cs.LG stat.AP stat.ML
10.1016/j.knosys.2013.12.023
1410.8576
null
null
http://arxiv.org/abs/1410.8576v1
2014-10-30T22:14:18Z
2014-10-30T22:14:18Z
An ensemble-based system for automatic screening of diabetic retinopathy
In this paper, an ensemble-based method for the screening of diabetic retinopathy (DR) is proposed. This approach is based on features extracted from the output of several retinal image processing algorithms, such as image-level (quality assessment, pre-screening, AM/FM), lesion-specific (microaneurysms, exudates) and anatomical (macula, optic disc) components. The actual decision about the presence of the disease is then made by an ensemble of machine learning classifiers. We have tested our approach on the publicly available Messidor database, where 90% sensitivity, 91% specificity, 90% accuracy and an AUC of 0.989 are achieved in a disease/no-disease setting. These results are highly competitive in this field and suggest that retinal image processing is a valid approach for automatic DR screening.
[ "['Balint Antal' 'Andras Hajdu']", "Balint Antal, Andras Hajdu" ]
q-bio.NC cs.LG
null
1410.8580
null
null
http://arxiv.org/pdf/1410.8580v1
2014-10-30T22:37:41Z
2014-10-30T22:37:41Z
An Online Algorithm for Learning Selectivity to Mixture Means
We develop a biologically-plausible learning rule called Triplet BCM that provably converges to the class means of general mixture models. This rule generalizes the classical BCM neural rule, and provides a novel interpretation of classical BCM as performing a kind of tensor decomposition. It achieves a substantial generalization over classical BCM by incorporating triplets of samples from the mixtures, which provides a novel information processing interpretation to spike-timing-dependent plasticity. We provide complete proofs of convergence of this learning rule, and an extended discussion of the connection between BCM and tensor learning.
[ "['Matthew Lawlor' 'Steven Zucker']", "Matthew Lawlor and Steven Zucker" ]
cs.CV cs.LG cs.MM cs.NE
null
1410.8586
null
null
http://arxiv.org/pdf/1410.8586v1
2014-10-30T22:57:12Z
2014-10-30T22:57:12Z
DeepSentiBank: Visual Sentiment Concept Classification with Deep Convolutional Neural Networks
This paper introduces a visual sentiment concept classification method based on deep convolutional neural networks (CNNs). The visual sentiment concepts are adjective noun pairs (ANPs) automatically discovered from the tags of web photos, and can be utilized as effective statistical cues for detecting emotions depicted in the images. Nearly one million Flickr images tagged with these ANPs are downloaded to train the classifiers of the concepts. We adopt the popular model of deep convolutional neural networks, which has recently shown great performance improvements in classifying large-scale web-based image datasets such as ImageNet. Our deep CNN model is trained with Caffe, a newly developed deep learning framework. To deal with the biased training data, which only contain images with strong sentiment, and to prevent overfitting, we initialize the model with the weights trained on ImageNet. Performance evaluation shows the newly trained deep CNN model, SentiBank 2.0 (also called DeepSentiBank), is significantly improved in both annotation accuracy and retrieval performance, compared to its predecessors, which mainly use binary SVM classification models.
[ "Tao Chen, Damian Borth, Trevor Darrell and Shih-Fu Chang", "['Tao Chen' 'Damian Borth' 'Trevor Darrell' 'Shih-Fu Chang']" ]
cs.LG cs.AI
null
1410.8620
null
null
http://arxiv.org/pdf/1410.8620v1
2014-10-31T02:19:19Z
2014-10-31T02:19:19Z
A Comparison of learning algorithms on the Arcade Learning Environment
Reinforcement learning agents have traditionally been evaluated on small toy problems. With advances in computing power and the advent of the Arcade Learning Environment, it is now possible to evaluate algorithms on diverse and difficult problems within a consistent framework. We discuss some challenges posed by the Arcade Learning Environment which do not manifest in simpler environments. We then provide a comparison of model-free, linear learning algorithms on this challenging problem set.
[ "Aaron Defazio and Thore Graepel", "['Aaron Defazio' 'Thore Graepel']" ]
stat.ML cs.LG
null
1410.8675
null
null
http://arxiv.org/pdf/1410.8675v1
2014-10-31T09:01:27Z
2014-10-31T09:01:27Z
Partition-wise Linear Models
Region-specific linear models are widely used in practical applications because of their non-linear but highly interpretable model representations. One of the key challenges in their use is non-convexity in simultaneous optimization of regions and region-specific models. This paper proposes novel convex region-specific linear models, which we refer to as partition-wise linear models. Our key ideas are 1) assigning linear models not to regions but to partitions (region-specifiers) and representing region-specific linear models by linear combinations of partition-specific models, and 2) optimizing regions via partition selection from a large number of given partition candidates by means of convex structured regularizations. In addition to providing initialization-free globally-optimal solutions, our convex formulation makes it possible to derive a generalization bound and to use such advanced optimization techniques as proximal methods and decomposition of the proximal maps for sparsity-inducing regularizations. Experimental results demonstrate that our partition-wise linear models perform better than or are at least competitive with state-of-the-art region-specific or locally linear models.
[ "['Hidekazu Oiwa' 'Ryohei Fujimaki']", "Hidekazu Oiwa, Ryohei Fujimaki" ]
cs.LG
null
1410.8750
null
null
http://arxiv.org/pdf/1410.8750v1
2014-10-31T14:31:54Z
2014-10-31T14:31:54Z
Learning Mixtures of Ranking Models
This work concerns learning probabilistic models for ranking data in a heterogeneous population. The specific problem we study is learning the parameters of a Mallows Mixture Model. Despite being widely studied, current heuristics for this problem do not have theoretical guarantees and can get stuck in bad local optima. We present the first polynomial time algorithm which provably learns the parameters of a mixture of two Mallows models. A key component of our algorithm is a novel use of tensor decomposition techniques to learn the top-k prefix in both the rankings. Before this work, even the question of identifiability in the case of a mixture of two Mallows models was unresolved.
[ "['Pranjal Awasthi' 'Avrim Blum' 'Or Sheffet' 'Aravindan Vijayaraghavan']", "Pranjal Awasthi, Avrim Blum, Or Sheffet, Aravindan Vijayaraghavan" ]
cs.CL cs.LG
null
1410.8783
null
null
http://arxiv.org/pdf/1410.8783v1
2014-10-31T15:53:49Z
2014-10-31T15:53:49Z
Supervised learning model for parsing Arabic language
Parsing the Arabic language is a difficult task given the specificities of this language and the scarcity of digital resources (grammars and annotated corpora). In this paper, we suggest a method for Arabic parsing based on supervised machine learning. We used the SVM algorithm to select the syntactic labels of the sentence. Furthermore, we evaluated our parser with cross-validation on the Penn Arabic Treebank. The obtained results are very encouraging.
[ "['Nabil Khoufi' 'Chafik Aloulou' 'Lamia Hadrich Belguith']", "Nabil Khoufi, Chafik Aloulou, Lamia Hadrich Belguith" ]
stat.ML cs.IT cs.LG math.IT
null
1410.8864
null
null
http://arxiv.org/pdf/1410.8864v1
2014-10-31T19:50:42Z
2014-10-31T19:50:42Z
Greedy Subspace Clustering
We consider the problem of subspace clustering: given points that lie on or near the union of many low-dimensional linear subspaces, recover the subspaces. To this end, one first identifies sets of points close to the same subspace and uses the sets to estimate the subspaces. As the geometric structure of the clusters (linear subspaces) forbids proper performance of general distance based approaches such as K-means, many model-specific methods have been proposed. In this paper, we provide new simple and efficient algorithms for this problem. Our statistical analysis shows that the algorithms are guaranteed exact (perfect) clustering performance under certain conditions on the number of points and the affinity between subspaces. These conditions are weaker than those considered in the standard statistical literature. Experimental results on synthetic data generated from the standard unions of subspaces model demonstrate our theory. We also show that our algorithm performs competitively against state-of-the-art algorithms on real-world applications such as motion segmentation and face clustering, with much simpler implementation and lower computational cost.
[ "Dohyung Park, Constantine Caramanis, Sujay Sanghavi", "['Dohyung Park' 'Constantine Caramanis' 'Sujay Sanghavi']" ]
cs.CL cs.LG stat.ML
null
1411.0007
null
null
http://arxiv.org/pdf/1411.0007v1
2014-10-31T20:04:09Z
2014-10-31T20:04:09Z
Rapid Adaptation of POS Tagging for Domain Specific Uses
Part-of-speech (POS) tagging is a fundamental component for performing natural language tasks such as parsing, information extraction, and question answering. When POS taggers are trained in one domain and applied in significantly different domains, their performance can degrade dramatically. We present a methodology for rapid adaptation of POS taggers to new domains. Our technique is unsupervised in that a manually annotated corpus for the new domain is not necessary. We use suffix information gathered from large amounts of raw text as well as orthographic information to increase the lexical coverage. We present an experiment in the Biological domain where our POS tagger achieves results comparable to POS taggers specifically trained to this domain.
[ "John E. Miller, Michael Bloodgood, Manabu Torii and K. Vijay-Shanker", "['John E. Miller' 'Michael Bloodgood' 'Manabu Torii' 'K. Vijay-Shanker']" ]
cs.LG stat.ML
null
1411.0023
null
null
http://arxiv.org/pdf/1411.0023v2
2016-04-11T04:59:59Z
2014-10-31T20:46:44Z
Validation of Matching
We introduce a technique to compute probably approximately correct (PAC) bounds on precision and recall for matching algorithms. The bounds require some verified matches, but those matches may be used to develop the algorithms. The bounds can be applied to network reconciliation or entity resolution algorithms, which identify nodes in different networks or values in a data set that correspond to the same entity. For network reconciliation, the bounds do not require knowledge of the network generation process.
[ "Ya Le, Eric Bax, Nicola Barbieri, David Garcia Soriano, Jitesh Mehta,\n James Li", "['Ya Le' 'Eric Bax' 'Nicola Barbieri' 'David Garcia Soriano'\n 'Jitesh Mehta' 'James Li']" ]
math.OC cs.LG cs.SY stat.ML
null
1411.0024
null
null
http://arxiv.org/pdf/1411.0024v1
2014-10-30T05:30:42Z
2014-10-30T05:30:42Z
Robust sketching for multiple square-root LASSO problems
Many learning tasks, such as cross-validation, parameter search, or leave-one-out analysis, involve multiple instances of similar problems, each instance sharing a large part of learning data with the others. We introduce a robust framework for solving multiple square-root LASSO problems, based on a sketch of the learning data that uses low-rank approximations. Our approach allows a dramatic reduction in computational effort, in effect reducing the number of observations from $m$ (the number of observations to start with) to $k$ (the number of singular values retained in the low-rank model), while not sacrificing---sometimes even improving---the statistical performance. Theoretical analysis, as well as numerical experiments on both synthetic and real data, illustrate the efficiency of the method in large scale applications.
[ "Vu Pham, Laurent El Ghaoui, Arturo Fernandez", "['Vu Pham' 'Laurent El Ghaoui' 'Arturo Fernandez']" ]
cs.IT cs.CV cs.LG cs.NE math.IT stat.ML
null
1411.0161
null
null
http://arxiv.org/pdf/1411.0161v1
2014-11-01T19:41:14Z
2014-11-01T19:41:14Z
Entropy of Overcomplete Kernel Dictionaries
In signal analysis and synthesis, linear approximation theory considers a linear decomposition of any given signal in a set of atoms, collected into a so-called dictionary. Relevant sparse representations are obtained by relaxing the orthogonality condition of the atoms, yielding overcomplete dictionaries with an extended number of atoms. More generally than the linear decomposition, overcomplete kernel dictionaries provide an elegant nonlinear extension by defining the atoms through a mapping kernel function (e.g., the Gaussian kernel). Models based on such kernel dictionaries are used in neural networks, Gaussian processes and online learning with kernels. The quality of an overcomplete dictionary is evaluated with a diversity measure, such as the distance, the approximation, the coherence and the Babel measures. In this paper, we develop a framework to examine overcomplete kernel dictionaries with the entropy from information theory. Indeed, a higher value of the entropy is associated with a more uniform spread of the atoms over the space. For each of the aforementioned diversity measures, we derive lower bounds on the entropy. Several definitions of the entropy are examined, with an extensive analysis in both the input space and the mapped feature space.
[ "['Paul Honeine']", "Paul Honeine" ]
cs.LG cs.DS math.ST stat.TH
null
1411.0169
null
null
http://arxiv.org/pdf/1411.0169v1
2014-11-01T21:03:59Z
2014-11-01T21:03:59Z
Near-Optimal Density Estimation in Near-Linear Time Using Variable-Width Histograms
Let $p$ be an unknown and arbitrary probability distribution over $[0,1)$. We consider the problem of {\em density estimation}, in which a learning algorithm is given i.i.d. draws from $p$ and must (with high probability) output a hypothesis distribution that is close to $p$. The main contribution of this paper is a highly efficient density estimation algorithm for learning using a variable-width histogram, i.e., a hypothesis distribution with a piecewise constant probability density function. In more detail, for any $k$ and $\epsilon$, we give an algorithm that makes $\tilde{O}(k/\epsilon^2)$ draws from $p$, runs in $\tilde{O}(k/\epsilon^2)$ time, and outputs a hypothesis distribution $h$ that is piecewise constant with $O(k \log^2(1/\epsilon))$ pieces. With high probability the hypothesis $h$ satisfies $d_{\mathrm{TV}}(p,h) \leq C \cdot \mathrm{opt}_k(p) + \epsilon$, where $d_{\mathrm{TV}}$ denotes the total variation distance (statistical distance), $C$ is a universal constant, and $\mathrm{opt}_k(p)$ is the smallest total variation distance between $p$ and any $k$-piecewise constant distribution. The sample size and running time of our algorithm are optimal up to logarithmic factors. The "approximation factor" $C$ in our result is inherent in the problem, as we prove that no algorithm with sample size bounded in terms of $k$ and $\epsilon$ can achieve $C<2$ regardless of what kind of hypothesis distribution it uses.
[ "Siu-On Chan, Ilias Diakonikolas, Rocco A. Servedio, Xiaorui Sun", "['Siu-On Chan' 'Ilias Diakonikolas' 'Rocco A. Servedio' 'Xiaorui Sun']" ]
cs.LG cs.DB
null
1411.0189
null
null
http://arxiv.org/pdf/1411.0189v1
2014-11-02T01:09:00Z
2014-11-02T01:09:00Z
Synchronization Clustering based on a Linearized Version of Vicsek model
This paper presents an effective synchronization clustering method based on a linearized version of the Vicsek model. This method can be represented by an Effective Synchronization Clustering algorithm (ESynC), an Improved version of the ESynC algorithm (IESynC), a Shrinking Synchronization Clustering algorithm based on another linear Vicsek model (SSynC), and an effective Multi-level Synchronization Clustering algorithm (MSynC). After some analysis and comparisons, we find that the ESynC algorithm, based on the linearized version of the Vicsek model, has a better synchronization effect than the SynC algorithm, which is based on an extensive Kuramoto model, and than a similar synchronization clustering algorithm based on the original Vicsek model. In simulation experiments on several artificial data sets, we observe that the ESynC, IESynC, and SSynC algorithms achieve a better synchronization effect while requiring fewer iterations and less time than the SynC algorithm. In some simulations, we also observe that the IESynC and SSynC algorithms improve on the time cost of the ESynC algorithm. Finally, we outline directions for further research on extending and popularizing this algorithm.
[ "['Xinquan Chen']", "Xinquan Chen" ]
stat.ML cs.LG
null
1411.0292
null
null
http://arxiv.org/pdf/1411.0292v2
2015-06-08T21:36:22Z
2014-11-02T18:50:14Z
Population Empirical Bayes
Bayesian predictive inference analyzes a dataset to make predictions about new observations. When a model does not match the data, predictive accuracy suffers. We develop population empirical Bayes (POP-EB), a hierarchical framework that explicitly models the empirical population distribution as part of Bayesian analysis. We introduce a new concept, the latent dataset, as a hierarchical variable and set the empirical population as its prior. This leads to a new predictive density that mitigates model mismatch. We efficiently apply this method to complex models by proposing a stochastic variational inference algorithm, called bumping variational inference (BUMP-VI). We demonstrate improved predictive accuracy over classical Bayesian inference in three models: a linear regression model of health data, a Bayesian mixture model of natural images, and a latent Dirichlet allocation topic model of scientific documents.
[ "['Alp Kucukelbir' 'David M. Blei']", "Alp Kucukelbir, David M. Blei" ]
cs.LG cs.CV
null
1411.0296
null
null
http://arxiv.org/pdf/1411.0296v2
2014-11-17T06:22:19Z
2014-11-02T19:08:14Z
Geodesic Exponential Kernels: When Curvature and Linearity Conflict
We consider kernel methods on general geodesic metric spaces and provide both negative and positive results. First we show that the common Gaussian kernel can only be generalized to a positive definite kernel on a geodesic metric space if the space is flat. As a result, for data on a Riemannian manifold, the geodesic Gaussian kernel is only positive definite if the Riemannian manifold is Euclidean. This implies that any attempt to design geodesic Gaussian kernels on curved Riemannian manifolds is futile. However, we show that for spaces with conditionally negative definite distances the geodesic Laplacian kernel can be generalized while retaining positive definiteness. This implies that geodesic Laplacian kernels can be generalized to some curved spaces, including spheres and hyperbolic spaces. Our theoretical results are verified empirically.
[ "['Aasa Feragen' 'Francois Lauze' 'Søren Hauberg']", "Aasa Feragen, Francois Lauze, S{\\o}ren Hauberg" ]
stat.ML cs.LG stat.CO
null
1411.0306
null
null
http://arxiv.org/pdf/1411.0306v3
2015-11-08T20:33:45Z
2014-11-02T19:57:31Z
Fast Randomized Kernel Methods With Statistical Guarantees
One approach to improving the running time of kernel-based machine learning methods is to build a small sketch of the input and use it in lieu of the full kernel matrix in the machine learning task of interest. Here, we describe a version of this approach that comes with running time guarantees as well as improved guarantees on its statistical performance. By extending the notion of \emph{statistical leverage scores} to the setting of kernel ridge regression, our main statistical result is to identify an importance sampling distribution that reduces the size of the sketch (i.e., the required number of columns to be sampled) to the \emph{effective dimensionality} of the problem. This quantity is often much smaller than previous bounds that depend on the \emph{maximal degrees of freedom}. Our main algorithmic result is to present a fast algorithm to compute approximations to these scores. This algorithm runs in time that is linear in the number of samples---more precisely, the running time is $O(np^2)$, where the parameter $p$ depends only on the trace of the kernel matrix and the regularization parameter---and it can be applied to the matrix of feature vectors, without having to form the full kernel matrix. This is obtained via a variant of length-squared sampling that we adapt to the kernel setting in a way that is of independent interest. Lastly, we provide empirical results illustrating our theory, and we discuss how this new notion of the statistical leverage of a data point captures in a fine way the difficulty of the original statistical learning problem.
[ "['Ahmed El Alaoui' 'Michael W. Mahoney']", "Ahmed El Alaoui, Michael W. Mahoney" ]
math.OC cs.IT cs.LG math.IT stat.ML
null
1411.0347
null
null
http://arxiv.org/pdf/1411.0347v1
2014-11-03T02:59:39Z
2014-11-03T02:59:39Z
Iterative Hessian sketch: Fast and accurate solution approximation for constrained least-squares
We study randomized sketching methods for approximately solving least-squares problem with a general convex constraint. The quality of a least-squares approximation can be assessed in different ways: either in terms of the value of the quadratic objective function (cost approximation), or in terms of some distance measure between the approximate minimizer and the true minimizer (solution approximation). Focusing on the latter criterion, our first main result provides a general lower bound on any randomized method that sketches both the data matrix and vector in a least-squares problem; as a surprising consequence, the most widely used least-squares sketch is sub-optimal for solution approximation. We then present a new method known as the iterative Hessian sketch, and show that it can be used to obtain approximations to the original least-squares problem using a projection dimension proportional to the statistical complexity of the least-squares minimizer, and a logarithmic number of iterations. We illustrate our general theory with simulations for both unconstrained and constrained versions of least-squares, including $\ell_1$-regularization and nuclear norm constraints. We also numerically demonstrate the practicality of our approach in a real face expression classification experiment.
[ "Mert Pilanci and Martin J. Wainwright", "['Mert Pilanci' 'Martin J. Wainwright']" ]
cs.LG cs.AI cs.DC cs.IR
null
1411.0541
null
null
http://arxiv.org/pdf/1411.0541v2
2016-06-27T16:32:35Z
2014-11-03T16:03:05Z
Distributed Submodular Maximization
Many large-scale machine learning problems--clustering, non-parametric learning, kernel machines, etc.--require selecting a small yet representative subset from a large dataset. Such problems can often be reduced to maximizing a submodular set function subject to various constraints. Classical approaches to submodular optimization require centralized access to the full dataset, which is impractical for truly large-scale problems. In this paper, we consider the problem of submodular function maximization in a distributed fashion. We develop a simple, two-stage protocol GreeDi, that is easily implemented using MapReduce style computations. We theoretically analyze our approach, and show that under certain natural conditions, performance close to the centralized approach can be achieved. We begin with monotone submodular maximization subject to a cardinality constraint, and then extend this approach to obtain approximation guarantees for (not necessarily monotone) submodular maximization subject to more general constraints including matroid or knapsack constraints. In our extensive experiments, we demonstrate the effectiveness of our approach on several applications, including sparse Gaussian process inference and exemplar based clustering on tens of millions of examples using Hadoop.
[ "Baharan Mirzasoleiman, Amin Karbasi, Rik Sarkar, and Andreas Krause", "['Baharan Mirzasoleiman' 'Amin Karbasi' 'Rik Sarkar' 'Andreas Krause']" ]
cs.LG cs.DS
null
1411.0547
null
null
http://arxiv.org/pdf/1411.0547v3
2015-05-22T16:41:01Z
2014-11-03T16:17:31Z
Correlation Clustering with Constrained Cluster Sizes and Extended Weights Bounds
We consider the problem of correlation clustering on graphs with constraints on both the cluster sizes and the positive and negative weights of edges. Our contributions are twofold: First, we introduce the problem of correlation clustering with bounded cluster sizes. Second, we extend the regime of weight values for which the clustering may be performed with constant approximation guarantees in polynomial time and apply the results to the bounded cluster size problem.
[ "Gregory J. Puleo, Olgica Milenkovic", "['Gregory J. Puleo' 'Olgica Milenkovic']" ]
cond-mat.stat-mech cs.LG stat.ML
null
1411.0591
null
null
http://arxiv.org/pdf/1411.0591v1
2014-11-03T18:15:29Z
2014-11-03T18:15:29Z
Bayesian feature selection with strongly-regularizing priors maps to the Ising Model
Identifying small subsets of features that are relevant for prediction and/or classification tasks is a central problem in machine learning and statistics. The feature selection task is especially important, and computationally difficult, for modern datasets where the number of features can be comparable to, or even exceed, the number of samples. Here, we show that feature selection with Bayesian inference takes a universal form and reduces to calculating the magnetizations of an Ising model, under some mild conditions. Our results exploit the observation that the evidence takes a universal form for strongly-regularizing priors --- priors that have a large effect on the posterior probability even in the infinite data limit. We derive explicit expressions for feature selection for generalized linear models, a large class of statistical techniques that include linear and logistic regression. We illustrate the power of our approach by analyzing feature selection in a logistic regression-based classifier trained to distinguish between the letters B and D in the notMNIST dataset.
[ "['Charles K. Fisher' 'Pankaj Mehta']", "Charles K. Fisher and Pankaj Mehta" ]
cs.LG
null
1411.0602
null
null
http://arxiv.org/pdf/1411.0602v1
2014-11-03T18:49:25Z
2014-11-03T18:49:25Z
Factorbird - a Parameter Server Approach to Distributed Matrix Factorization
We present Factorbird, a prototype of a parameter server approach for factorizing large matrices with Stochastic Gradient Descent-based algorithms. We designed Factorbird to meet the following desiderata: (a) scalability to tall and wide matrices with dozens of billions of non-zeros, (b) extensibility to different kinds of models and loss functions as long as they can be optimized using Stochastic Gradient Descent (SGD), and (c) adaptability to both batch and streaming scenarios. Factorbird uses a parameter server in order to scale to models that exceed the memory of an individual machine, and employs lock-free Hogwild!-style learning with a special partitioning scheme to drastically reduce conflicting updates. We also discuss other aspects of the design of our system, such as how to efficiently grid search for hyperparameters at scale. We present experiments with Factorbird on a matrix built from a subset of Twitter's interaction graph, consisting of more than 38 billion non-zeros and about 200 million rows and columns, which is to the best of our knowledge the largest matrix on which factorization results have been reported in the literature.
[ "['Sebastian Schelter' 'Venu Satuluri' 'Reza Zadeh']", "Sebastian Schelter, Venu Satuluri, Reza Zadeh" ]
stat.ML cond-mat.dis-nn cs.IT cs.LG math.IT
10.1007/s10955-015-1321-y
1411.0630
null
null
http://arxiv.org/abs/1411.0630v1
2014-11-03T19:46:07Z
2014-11-03T19:46:07Z
Active Inference for Binary Symmetric Hidden Markov Models
We consider the active maximum a posteriori (MAP) inference problem for Hidden Markov Models (HMMs), where, given an initial MAP estimate of the hidden sequence, we select certain states in the sequence to label so as to improve the estimation accuracy of the remaining states. We develop an analytical approach to this problem for the case of binary symmetric HMMs, and obtain a closed-form solution that relates the expected error reduction to model parameters under the specified active inference scheme. We then use this solution to determine the optimal active inference scheme in terms of error reduction, and examine the relation of those schemes to heuristic principles of uncertainty reduction and solution unicity.
[ "['Armen E. Allahverdyan' 'Aram Galstyan']", "Armen E. Allahverdyan and Aram Galstyan" ]
cs.SI cs.CY cs.LG physics.soc-ph
10.1007/s13278-014-0237-x
1411.0652
null
null
http://arxiv.org/abs/1411.0652v1
2014-11-03T20:41:00Z
2014-11-03T20:41:00Z
Clustering memes in social media streams
The problem of clustering content in social media has pervasive applications, including the identification of discussion topics, event detection, and content recommendation. Here we describe a streaming framework for online detection and clustering of memes in social media, specifically Twitter. A pre-clustering procedure, namely protomeme detection, first isolates atomic tokens of information carried by the tweets. Protomemes are thereafter aggregated, based on multiple similarity measures, to obtain memes as cohesive groups of tweets reflecting actual concepts or topics of discussion. The clustering algorithm takes into account various dimensions of the data and metadata, including natural language, the social network, and the patterns of information diffusion. As a result, our system can build clusters of semantically, structurally, and topically related tweets. The clustering process is based on a variant of Online K-means that incorporates a memory mechanism, used to "forget" old memes and replace them over time with the new ones. The evaluation of our framework is carried out by using a dataset of Twitter trending topics. Over a one-week period, we systematically determined whether our algorithm was able to recover the trending hashtags. We show that the proposed method outperforms baseline algorithms that only use content features, as well as a state-of-the-art event detection method that assumes full knowledge of the underlying follower network. We finally show that our online learning framework is flexible, due to its independence of the adopted clustering algorithm, and best suited to work in a streaming scenario.
[ "['Mohsen JafariAsbagh' 'Emilio Ferrara' 'Onur Varol' 'Filippo Menczer'\n 'Alessandro Flammini']", "Mohsen JafariAsbagh, Emilio Ferrara, Onur Varol, Filippo Menczer,\n Alessandro Flammini" ]
cs.LG cs.GT cs.SY math.OC
null
1411.0728
null
null
http://arxiv.org/pdf/1411.0728v3
2016-06-21T00:43:57Z
2014-11-03T22:59:11Z
Approachability in Stackelberg Stochastic Games with Vector Costs
The notion of approachability was introduced by Blackwell [1] in the context of vector-valued repeated games. The famous Blackwell's approachability theorem prescribes a strategy for approachability, i.e., for `steering' the average cost of a given agent towards a given target set, irrespective of the strategies of the other agents. In this paper, motivated by the multi-objective optimization/decision making problems in dynamically changing environments, we address the approachability problem in Stackelberg stochastic games with vector valued cost functions. We make two main contributions. Firstly, we give a simple and computationally tractable strategy for approachability for Stackelberg stochastic games along the lines of Blackwell's. Secondly, we give a reinforcement learning algorithm for learning the approachable strategy when the transition kernel is unknown. We also recover as a by-product Blackwell's necessary and sufficient condition for approachability for convex sets in this set up and thus a complete characterization. We also give sufficient conditions for non-convex sets.
[ "['Dileep Kalathil' 'Vivek Borkar' 'Rahul Jain']", "Dileep Kalathil, Vivek Borkar, Rahul Jain" ]
cs.LG
null
1411.0860
null
null
http://arxiv.org/pdf/1411.0860v1
2014-11-04T11:03:50Z
2014-11-04T11:03:50Z
CUR Algorithm for Partially Observed Matrices
CUR matrix decomposition computes the low rank approximation of a given matrix by using the actual rows and columns of the matrix. It has been a very useful tool for handling large matrices. One limitation with the existing algorithms for CUR matrix decomposition is that they need an access to the {\it full} matrix, a requirement that can be difficult to fulfill in many real world applications. In this work, we alleviate this limitation by developing a CUR decomposition algorithm for partially observed matrices. In particular, the proposed algorithm computes the low rank approximation of the target matrix based on (i) the randomly sampled rows and columns, and (ii) a subset of observed entries that are randomly sampled from the matrix. Our analysis shows the relative error bound, measured by spectral norm, for the proposed algorithm when the target matrix is of full rank. We also show that only $O(n r\ln r)$ observed entries are needed by the proposed algorithm to perfectly recover a rank $r$ matrix of size $n\times n$, which improves the sample complexity of the existing algorithms for matrix completion. Empirical studies on both synthetic and real-world datasets verify our theoretical claims and demonstrate the effectiveness of the proposed algorithm.
[ "Miao Xu, Rong Jin, Zhi-Hua Zhou", "['Miao Xu' 'Rong Jin' 'Zhi-Hua Zhou']" ]
math.OC cs.LG stat.ML
10.1109/MSP.2014.2329397
1411.0972
null
null
http://arxiv.org/abs/1411.0972v1
2014-11-04T17:14:27Z
2014-11-04T17:14:27Z
Convex Optimization for Big Data
This article reviews recent advances in convex optimization algorithms for Big Data, which aim to reduce the computational, storage, and communications bottlenecks. We provide an overview of this emerging field, describe contemporary approximation techniques like first-order methods and randomization for scalability, and survey the important role of parallel and distributed computation. The new Big Data algorithms are based on surprisingly simple principles and attain staggering accelerations even on classical problems.
[ "['Volkan Cevher' 'Stephen Becker' 'Mark Schmidt']", "Volkan Cevher and Stephen Becker and Mark Schmidt" ]
cs.LG stat.ML
null
1411.0997
null
null
http://arxiv.org/pdf/1411.0997v1
2014-11-04T18:46:34Z
2014-11-04T18:46:34Z
Iterated geometric harmonics for data imputation and reconstruction of missing data
The method of geometric harmonics is adapted to the situation of incomplete data by means of the iterated geometric harmonics (IGH) scheme. The method is tested on natural and synthetic data sets with 50--500 data points and dimensionality of 400--10,000. Experiments suggest that the algorithm converges to a near optimal solution within 4--6 iterations, at runtimes of less than 30 minutes on a medium-grade desktop computer. The imputation of missing data values is applied to collections of damaged images (suffering from data annihilation rates of up to 70\%) which are reconstructed with a surprising degree of accuracy.
[ "['Chad Eckman' 'Jonathan A. Lindgren' 'Erin P. J. Pearse' 'David J. Sacco'\n 'Zachariah Zhang']", "Chad Eckman, Jonathan A. Lindgren, Erin P. J. Pearse, David J. Sacco,\n Zachariah Zhang" ]
cs.LG cs.IT math.IT stat.ML
null
1411.1076
null
null
http://arxiv.org/pdf/1411.1076v1
2014-11-04T21:01:56Z
2014-11-04T21:01:56Z
A statistical model for tensor PCA
We consider the Principal Component Analysis problem for large tensors of arbitrary order $k$ under a single-spike (or rank-one plus noise) model. On the one hand, we use information theory, and recent results in probability theory, to establish necessary and sufficient conditions under which the principal component can be estimated using unbounded computational resources. It turns out that this is possible as soon as the signal-to-noise ratio $\beta$ becomes larger than $C\sqrt{k\log k}$ (and in particular $\beta$ can remain bounded as the problem dimensions increase). On the other hand, we analyze several polynomial-time estimation algorithms, based on tensor unfolding, power iteration and message passing ideas from graphical models. We show that, unless the signal-to-noise ratio diverges in the system dimensions, none of these approaches succeeds. This is possibly related to a fundamental limitation of computationally tractable estimators for this problem. We discuss various initializations for tensor power iteration, and show that a tractable initialization based on the spectrum of the matricized tensor outperforms significantly baseline methods, statistically and computationally. Finally, we consider the case in which additional side information is available about the unknown signal. We characterize the amount of side information that allows the iterative algorithms to converge to a good estimate.
[ "['Andrea Montanari' 'Emile Richard']", "Andrea Montanari and Emile Richard" ]
cs.NA cs.DS cs.IT cs.LG math.IT stat.ML
null
1411.1087
null
null
http://arxiv.org/pdf/1411.1087v1
2014-11-04T21:16:23Z
2014-11-04T21:16:23Z
Fast Exact Matrix Completion with Finite Samples
Matrix completion is the problem of recovering a low rank matrix by observing a small fraction of its entries. A series of recent works [KOM12,JNS13,HW14] have proposed fast non-convex optimization based iterative algorithms to solve this problem. However, the sample complexity in all these results is sub-optimal in its dependence on the rank, condition number and the desired accuracy. In this paper, we present a fast iterative algorithm that solves the matrix completion problem by observing $O(nr^5 \log^3 n)$ entries, which is independent of the condition number and the desired accuracy. The run time of our algorithm is $O(nr^7\log^3 n\log 1/\epsilon)$ which is near linear in the dimension of the matrix. To the best of our knowledge, this is the first near linear time algorithm for exact matrix completion with finite sample complexity (i.e. independent of $\epsilon$). Our algorithm is based on a well known projected gradient descent method, where the projection is onto the (non-convex) set of low rank matrices. There are two key ideas in our result: 1) our argument is based on a $\ell_{\infty}$ norm potential function (as opposed to the spectral norm) and provides a novel way to obtain perturbation bounds for it. 2) we prove and use a natural extension of the Davis-Kahan theorem to obtain perturbation bounds on the best low rank approximation of matrices with good eigen-gap. Both of these ideas may be of independent interest.
[ "['Prateek Jain' 'Praneeth Netrapalli']", "Prateek Jain and Praneeth Netrapalli" ]
stat.ML cs.LG
null
1411.1088
null
null
http://arxiv.org/pdf/1411.1088v1
2014-11-04T21:23:35Z
2014-11-04T21:23:35Z
Expectation-Maximization for Learning Determinantal Point Processes
A determinantal point process (DPP) is a probabilistic model of set diversity compactly parameterized by a positive semi-definite kernel matrix. To fit a DPP to a given task, we would like to learn the entries of its kernel matrix by maximizing the log-likelihood of the available data. However, log-likelihood is non-convex in the entries of the kernel matrix, and this learning problem is conjectured to be NP-hard. Thus, previous work has instead focused on more restricted convex learning settings: learning only a single weight for each row of the kernel matrix, or learning weights for a linear combination of DPPs with fixed kernel matrices. In this work we propose a novel algorithm for learning the full kernel matrix. By changing the kernel parameterization from matrix entries to eigenvalues and eigenvectors, and then lower-bounding the likelihood in the manner of expectation-maximization algorithms, we obtain an effective optimization procedure. We test our method on a real-world product recommendation task, and achieve relative gains of up to 16.5% in test log-likelihood compared to the naive approach of maximizing likelihood by projected gradient ascent on the entries of the kernel matrix.
[ "Jennifer Gillenwater, Alex Kulesza, Emily Fox, Ben Taskar", "['Jennifer Gillenwater' 'Alex Kulesza' 'Emily Fox' 'Ben Taskar']" ]
cs.CV cs.LG cs.NE
null
1411.1091
null
null
http://arxiv.org/pdf/1411.1091v1
2014-11-04T21:35:55Z
2014-11-04T21:35:55Z
Do Convnets Learn Correspondence?
Convolutional neural nets (convnets) trained from massive labeled datasets have substantially improved the state-of-the-art in image classification and object detection. However, visual understanding requires establishing correspondence on a finer level than object category. Given their large pooling regions and training from whole-image labels, it is not clear that convnets derive their success from an accurate correspondence model which could be used for precise localization. In this paper, we study the effectiveness of convnet activation features for tasks requiring correspondence. We present evidence that convnet features localize at a much finer scale than their receptive field sizes, that they can be used to perform intraclass alignment as well as conventional hand-engineered features, and that they outperform conventional features in keypoint prediction on objects from PASCAL VOC 2011.
[ "['Jonathan Long' 'Ning Zhang' 'Trevor Darrell']", "Jonathan Long, Ning Zhang, Trevor Darrell" ]
cs.LG stat.ML
null
1411.1119
null
null
http://arxiv.org/pdf/1411.1119v3
2014-11-12T00:05:12Z
2014-11-05T00:43:08Z
Projecting Markov Random Field Parameters for Fast Mixing
Markov chain Monte Carlo (MCMC) algorithms are simple and extremely powerful techniques to sample from almost arbitrary distributions. The flaw in practice is that it can take a large and/or unknown amount of time to converge to the stationary distribution. This paper gives sufficient conditions to guarantee that univariate Gibbs sampling on Markov Random Fields (MRFs) will be fast mixing, in a precise sense. Further, an algorithm is given to project onto this set of fast-mixing parameters in the Euclidean norm. Following recent work, we give an example use of this to project in various divergence measures, comparing univariate marginals obtained by sampling after projection to common variational methods and Gibbs sampling on the original parameters.
[ "['Xianghang Liu' 'Justin Domke']", "Xianghang Liu and Justin Domke" ]
cs.IT cs.LG math.IT
null
1411.1125
null
null
http://arxiv.org/pdf/1411.1125v1
2014-11-05T01:12:17Z
2014-11-05T01:12:17Z
Distributed Low-Rank Estimation Based on Joint Iterative Optimization in Wireless Sensor Networks
This paper proposes a novel distributed reduced-rank scheme and an adaptive algorithm for distributed estimation in wireless sensor networks. The proposed distributed scheme is based on a transformation that performs dimensionality reduction at each agent of the network followed by a reduced-dimension parameter vector. A distributed reduced-rank joint iterative estimation algorithm is developed, which has the ability to achieve significantly reduced communication overhead and improved performance when compared with existing techniques. Simulation results illustrate the advantages of the proposed strategy in terms of convergence rate and mean square error performance.
[ "S. Xu, R. C. de Lamare and H. V. Poor", "['S. Xu' 'R. C. de Lamare' 'H. V. Poor']" ]
cs.LG math.OC stat.ML
null
1411.1134
null
null
http://arxiv.org/pdf/1411.1134v3
2015-02-10T20:19:28Z
2014-11-05T03:05:43Z
Global Convergence of Stochastic Gradient Descent for Some Non-convex Matrix Problems
Stochastic gradient descent (SGD) on a low-rank factorization is commonly employed to speed up matrix problems including matrix completion, subspace tracking, and SDP relaxation. In this paper, we exhibit a step size scheme for SGD on a low-rank least-squares problem, and we prove that, under broad sampling conditions, our method converges globally from a random starting point within $O(\epsilon^{-1} n \log n)$ steps with constant probability for constant-rank problems. Our modification of SGD relates it to stochastic power iteration. We also show experiments to illustrate the runtime and convergence of the algorithm.
[ "Christopher De Sa, Kunle Olukotun, and Christopher R\\'e", "['Christopher De Sa' 'Kunle Olukotun' 'Christopher Ré']" ]
cs.LG cs.CL
null
1411.1147
null
null
http://arxiv.org/pdf/1411.1147v2
2014-11-10T05:58:04Z
2014-11-05T04:49:38Z
Conditional Random Field Autoencoders for Unsupervised Structured Prediction
We introduce a framework for unsupervised learning of structured predictors with overlapping, global features. Each input's latent representation is predicted conditional on the observable data using a feature-rich conditional random field. Then a reconstruction of the input is (re)generated, conditional on the latent structure, using models for which maximum likelihood estimation has a closed-form. Our autoencoder formulation enables efficient learning without making unrealistic independence assumptions or restricting the kinds of features that can be used. We illustrate insightful connections to traditional autoencoders, posterior regularization and multi-view learning. We show competitive results with instantiations of the model for two canonical NLP tasks: part-of-speech induction and bitext word alignment, and show that training our model can be substantially more efficient than comparable feature-rich baselines.
[ "Waleed Ammar, Chris Dyer, Noah A. Smith", "['Waleed Ammar' 'Chris Dyer' 'Noah A. Smith']" ]
cs.LG stat.ML
null
1411.1158
null
null
http://arxiv.org/pdf/1411.1158v1
2014-11-05T06:18:14Z
2014-11-05T06:18:14Z
On the Complexity of Learning with Kernels
A well-recognized limitation of kernel learning is the requirement to handle a kernel matrix, whose size is quadratic in the number of training examples. Many methods have been proposed to reduce this computational cost, mostly by using a subset of the kernel matrix entries, or some form of low-rank matrix approximation, or a random projection method. In this paper, we study lower bounds on the error attainable by such methods as a function of the number of entries observed in the kernel matrix or the rank of an approximate kernel matrix. We show that there are kernel learning problems where no such method will lead to non-trivial computational savings. Our results also quantify how the problem difficulty depends on parameters such as the nature of the loss function, the regularization parameter, the norm of the desired predictor, and the kernel matrix rank. Our results also suggest cases where more efficient kernel learning might be possible.
[ "['Nicolò Cesa-Bianchi' 'Yishay Mansour' 'Ohad Shamir']", "Nicol\\`o Cesa-Bianchi, Yishay Mansour and Ohad Shamir" ]
cs.HC cs.LG
null
1411.1316
null
null
http://arxiv.org/pdf/1411.1316v2
2014-11-06T13:04:25Z
2014-11-05T16:41:12Z
Rapid Skill Capture in a First-Person Shooter
Various aspects of computer game design, including adaptive elements of game levels, characteristics of 'bot' behavior, and player matching in multiplayer games, would ideally be sensitive to a player's skill level. Yet, while difficulty and player learning have been explored in the context of games, there has been little work analyzing skill per se, and how it pertains to a player's input. To this end, we present a data set of 476 game logs from over 40 players of a first-person shooter game (Red Eclipse) as a basis of a case study. We then analyze different metrics of skill and show that some of these can be predicted using only a few seconds of keyboard and mouse input. We argue that the techniques used here are useful for adapting games to match players' skill levels rapidly, perhaps more rapidly than solutions based on performance averaging such as TrueSkill.
[ "['David Buckley' 'Ke Chen' 'Joshua Knowles']", "David Buckley, Ke Chen, Joshua Knowles" ]
cs.LG
null
1411.1420
null
null
http://arxiv.org/pdf/1411.1420v6
2018-02-23T02:55:26Z
2014-11-05T21:07:20Z
Eigenvectors of Orthogonally Decomposable Functions
The Eigendecomposition of quadratic forms (symmetric matrices) guaranteed by the spectral theorem is a foundational result in applied mathematics. Motivated by a shared structure found in inferential problems of recent interest---namely orthogonal tensor decompositions, Independent Component Analysis (ICA), topic models, spectral clustering, and Gaussian mixture learning---we generalize the eigendecomposition from quadratic forms to a broad class of "orthogonally decomposable" functions. We identify a key role of convexity in our extension, and we generalize two traditional characterizations of eigenvectors: First, the eigenvectors of a quadratic form arise from the optima structure of the quadratic form on the sphere. Second, the eigenvectors are the fixed points of the power iteration. In our setting, we consider a simple first order generalization of the power method which we call gradient iteration. It leads to efficient and easily implementable methods for basis recovery. It includes influential Machine Learning methods such as cumulant-based FastICA and the tensor power iteration for orthogonally decomposable tensors as special cases. We provide a complete theoretical analysis of gradient iteration using the structure theory of discrete dynamical systems to show almost sure convergence and fast (super-linear) convergence rates. The analysis also extends to the case when the observed function is only approximately orthogonally decomposable, with bounds that are polynomial in dimension and other relevant parameters, such as perturbation size. Our perturbation results can be considered as a non-linear version of the classical Davis-Kahan theorem for perturbations of eigenvectors of symmetric matrices.
[ "Mikhail Belkin, Luis Rademacher, James Voss", "['Mikhail Belkin' 'Luis Rademacher' 'James Voss']" ]
cs.LG
null
1411.1434
null
null
http://arxiv.org/pdf/1411.1434v2
2014-12-05T22:02:55Z
2014-11-05T22:28:00Z
On the Information Theoretic Limits of Learning Ising Models
We provide a general framework for computing lower-bounds on the sample complexity of recovering the underlying graphs of Ising models, given i.i.d samples. While there have been recent results for specific graph classes, these involve fairly extensive technical arguments that are specialized to each specific graph class. In contrast, we isolate two key graph-structural ingredients that can then be used to specify sample complexity lower-bounds. Presence of these structural properties makes the graph class hard to learn. We derive corollaries of our main result that not only recover existing recent results, but also provide lower bounds for novel graph classes not considered previously. We also extend our framework to the random graph setting and derive corollaries for Erd\H{o}s-R\'{e}nyi graphs in a certain dense setting.
[ "Karthikeyan Shanmugam, Rashish Tandon, Alexandros G. Dimakis, Pradeep\n Ravikumar", "['Karthikeyan Shanmugam' 'Rashish Tandon' 'Alexandros G. Dimakis'\n 'Pradeep Ravikumar']" ]
cs.CV cs.LG
null
1411.1446
null
null
http://arxiv.org/pdf/1411.1446v1
2014-11-05T23:14:10Z
2014-11-05T23:14:10Z
Electrocardiography Separation of Mother and Baby
Extracting the electrocardiography (ECG or EKG) signals of mother and baby is a challenging task, because a single device is used and it records a mixture of multiple heartbeats. In this paper, we design a filter to separate the two signals from each other.
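The abstract does not specify the filter; a common baseline for this kind of source separation, shown here only as a hypothetical illustration, is independent component analysis over a multi-channel recording.

```python
import numpy as np
from sklearn.decomposition import FastICA

def separate_sources(X, n_sources=2, seed=0):
    """Hypothetical ICA baseline, not the paper's filter.

    X: array of shape (n_samples, n_channels) of mixed ECG recordings.
    Returns estimated source signals of shape (n_samples, n_sources).
    """
    ica = FastICA(n_components=n_sources, random_state=seed)
    return ica.fit_transform(X)
```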
[ "Wei Wang", "['Wei Wang']" ]
cs.LG stat.ML
null
1411.1488
null
null
http://arxiv.org/pdf/1411.1488v2
2015-09-14T20:56:57Z
2014-11-06T03:25:54Z
Analyzing Tensor Power Method Dynamics in Overcomplete Regime
We present a novel analysis of the dynamics of tensor power iterations in the overcomplete regime where the tensor CP rank is larger than the input dimension. Finding the CP decomposition of an overcomplete tensor is NP-hard in general. We consider the case where the tensor components are randomly drawn, and show that the simple power iteration recovers the components with bounded error under mild initialization conditions. We apply our analysis to unsupervised learning of latent variable models, such as multi-view mixture models and spherical Gaussian mixtures. Given the third-order moment tensor, we learn the parameters using tensor power iterations. We prove that this procedure correctly learns the model parameters when the number of hidden components $k$ is much larger than the data dimension $d$, up to $k = o(d^{1.5})$. We initialize the power iterations with data samples and prove their success under mild conditions on the signal-to-noise ratio of the samples. Our analysis significantly expands the class of latent variable models where spectral methods are applicable. Our analysis also handles noise in the input tensor, leading to a sample complexity result for the application to learning latent variable models.
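A compact numpy sketch of the tensor power update on a symmetric third-order tensor, u <- T(I, u, u) followed by normalization, with an overcomplete toy example (k > d) and a data-like initialization near one component; names and sizes are illustrative.

```python
import numpy as np

def tensor_power_iteration(T, u0, num_iters=50):
    """Tensor power update u <- T(I, u, u), normalized, for a symmetric 3rd-order tensor."""
    u = u0 / np.linalg.norm(u0)
    for _ in range(num_iters):
        v = np.einsum('ijk,j,k->i', T, u, u)   # contract T along its last two modes
        u = v / np.linalg.norm(v)
    return u

# Overcomplete toy example: k random unit components in dimension d with k > d.
d, k = 10, 20
rng = np.random.default_rng(0)
A = rng.standard_normal((d, k))
A /= np.linalg.norm(A, axis=0)
T = np.einsum('ia,ja,ka->ijk', A, A, A)        # T = sum_a a_a (x) a_a (x) a_a
u0 = A[:, 0] + 0.3 * rng.standard_normal(d)    # initialization near component 0
u_hat = tensor_power_iteration(T, u0)
```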
[ "['Anima Anandkumar' 'Rong Ge' 'Majid Janzamin']", "Anima Anandkumar, Rong Ge, Majid Janzamin" ]
cs.LG
null
1411.1490
null
null
http://arxiv.org/pdf/1411.1490v2
2014-12-04T22:59:04Z
2014-11-06T03:51:39Z
Efficient Representations for Life-Long Learning and Autoencoding
It has been a long-standing goal in machine learning, as well as in AI more generally, to develop life-long learning systems that learn many different tasks over time and reuse insights from tasks already learned, "learning to learn" as they do so. In this work we pose and provide efficient algorithms for several natural theoretical formulations of this goal. Specifically, we consider the problem of learning many different target functions over time that share certain commonalities which are initially unknown to the learning algorithm. Our aim is to learn, as the algorithm learns new target functions, new internal representations that capture this commonality and allow subsequent learning tasks to be solved more efficiently and from less data. We develop efficient algorithms for two very different kinds of commonalities that target functions might share: one based on learning common low-dimensional subspaces and unions of low-dimensional subspaces, and one based on learning nonlinear Boolean combinations of features. Our algorithms for learning Boolean feature combinations additionally have a dual interpretation, and can be viewed as giving an efficient procedure for constructing near-optimal sparse Boolean autoencoders under a natural "anchor-set" assumption.
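As a rough illustration of the shared low-dimensional-subspace setting (not the paper's algorithm): if linear predictors learned for earlier tasks are stacked as rows of a matrix, an SVD exposes a common subspace in which later tasks can be learned from less data.

```python
import numpy as np

def shared_subspace(W, r):
    """W: (num_tasks, dim) matrix whose rows are previously learned linear predictors.
    Returns an orthonormal basis of shape (r, dim) for an r-dimensional shared subspace."""
    _, _, Vt = np.linalg.svd(W, full_matrices=False)
    return Vt[:r]

# A new task can then be fit over the r basis coefficients rather than all dim raw features.
```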
[ "['Maria-Florina Balcan' 'Avrim Blum' 'Santosh Vempala']", "Maria-Florina Balcan, Avrim Blum, Santosh Vempala" ]
cs.CV cs.LG cs.NE
null
1411.1509
null
null
http://arxiv.org/pdf/1411.1509v1
2014-11-06T07:03:15Z
2014-11-06T07:03:15Z
Convolutional Neural Network-based Place Recognition
Recently, Convolutional Neural Networks (CNNs) have been shown to achieve state-of-the-art performance on various classification tasks. In this paper, we present for the first time a place recognition technique based on CNN models, by combining the powerful features learnt by CNNs with a spatial and sequential filter. Applying the system to a 70 km benchmark place recognition dataset, we achieve a 75% increase in recall at 100% precision, significantly outperforming all previous state-of-the-art techniques. We also conduct a comprehensive performance comparison of the utility of features from all 21 layers for place recognition, both for the benchmark dataset and for a second dataset with more significant viewpoint changes.
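A schematic of the matching stage on top of pre-extracted CNN features: build a cosine-similarity matrix between query and reference frames and score candidates along short diagonal sequences. This is only a rough stand-in for the paper's spatial and sequential filter; array names and the scoring rule are assumptions.

```python
import numpy as np

def sequence_match(F_query, F_ref, seq_len=5):
    """F_query, F_ref: (n_frames, feat_dim) arrays of L2-normalized CNN features.
    For each query frame, return the reference frame with the best sequence score."""
    S = F_query @ F_ref.T                       # cosine similarity matrix
    n_q, n_r = S.shape
    best = np.zeros(n_q, dtype=int)
    for i in range(n_q):
        scores = np.full(n_r, -np.inf)
        for j in range(n_r):
            steps = range(min(seq_len, i + 1, j + 1))
            scores[j] = np.mean([S[i - t, j - t] for t in steps])  # diagonal average
        best[i] = int(np.argmax(scores))
    return best
```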
[ "['Zetao Chen' 'Obadiah Lam' 'Adam Jacobson' 'Michael Milford']", "Zetao Chen, Obadiah Lam, Adam Jacobson and Michael Milford" ]
stat.ML cs.CV cs.LG
null
1411.1537
null
null
http://arxiv.org/pdf/1411.1537v2
2014-11-07T05:21:03Z
2014-11-06T09:14:02Z
Large-Margin Determinantal Point Processes
Determinantal point processes (DPPs) offer a powerful approach to modeling diversity in many applications where the goal is to select a diverse subset. We study the problem of learning the parameters (the kernel matrix) of a DPP from labeled training data. We make two contributions. First, we show how to reparameterize a DPP's kernel matrix with multiple kernel functions, thus enhancing modeling flexibility. Second, we propose a novel parameter estimation technique based on the principle of large margin separation. In contrast to the state-of-the-art method of maximum likelihood estimation, our large-margin loss function explicitly models errors in selecting the target subsets, and it can be customized to trade off different types of errors (precision vs. recall). Extensive empirical studies validate our contributions, including applications on challenging document and video summarization, where flexibility in modeling the kernel matrix and balancing different errors is indispensable.
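For reference, the likelihood a DPP with kernel matrix L assigns to a selected subset Y is det(L_Y) / det(L + I); the sketch below computes its log, using illustrative names and log-determinants for numerical stability (this is not the paper's large-margin training code).

```python
import numpy as np

def dpp_log_prob(L, subset):
    """log P(Y = subset) = log det(L_Y) - log det(L + I) for a DPP with kernel L."""
    idx = np.asarray(subset, dtype=int)
    _, logdet_sub = np.linalg.slogdet(L[np.ix_(idx, idx)])
    _, logdet_norm = np.linalg.slogdet(L + np.eye(L.shape[0]))
    return logdet_sub - logdet_norm
```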
[ "['Boqing Gong' 'Wei-lun Chao' 'Kristen Grauman' 'Fei Sha']", "Boqing Gong, Wei-lun Chao, Kristen Grauman and Fei Sha" ]
cs.LG
null
1411.1623
null
null
http://arxiv.org/pdf/1411.1623v1
2014-11-06T14:18:39Z
2014-11-06T14:18:39Z
A Hybrid Recurrent Neural Network For Music Transcription
We investigate the problem of incorporating higher-level symbolic score-like information into Automatic Music Transcription (AMT) systems to improve their performance. We use recurrent neural networks (RNNs) and their variants as music language models (MLMs) and present a generative architecture for combining these models with predictions from a frame level acoustic classifier. We also compare different neural network architectures for acoustic modeling. The proposed model computes a distribution over possible output sequences given the acoustic input signal and we present an algorithm for performing a global search for good candidate transcriptions. The performance of the proposed model is evaluated on piano music from the MAPS dataset and we observe that the proposed model consistently outperforms existing transcription methods.
[ "Siddharth Sigtia, Emmanouil Benetos, Nicolas Boulanger-Lewandowski,\n Tillman Weyde, Artur S. d'Avila Garcez, Simon Dixon", "['Siddharth Sigtia' 'Emmanouil Benetos' 'Nicolas Boulanger-Lewandowski'\n 'Tillman Weyde' \"Artur S. d'Avila Garcez\" 'Simon Dixon']" ]
cs.LG cs.AI cs.CV cs.IR stat.ML
null
1411.1752
null
null
http://arxiv.org/pdf/1411.1752v1
2014-11-06T20:07:37Z
2014-11-06T20:07:37Z
Submodular meets Structured: Finding Diverse Subsets in Exponentially-Large Structured Item Sets
To cope with the high level of ambiguity faced in domains such as Computer Vision or Natural Language processing, robust prediction methods often search for a diverse set of high-quality candidate solutions or proposals. In structured prediction problems, this becomes a daunting task, as the solution space (image labelings, sentence parses, etc.) is exponentially large. We study greedy algorithms for finding a diverse subset of solutions in structured-output spaces by drawing new connections between submodular functions over combinatorial item sets and High-Order Potentials (HOPs) studied for graphical models. Specifically, we show via examples that when marginal gains of submodular diversity functions allow structured representations, this enables efficient (sub-linear time) approximate maximization by reducing the greedy augmentation step to inference in a factor graph with appropriately constructed HOPs. We discuss benefits, tradeoffs, and show that our constructions lead to significantly better proposals.
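The greedy augmentation step the abstract refers to repeatedly adds the item with the largest marginal gain. A generic, unstructured version of that loop is sketched below with an assumed value oracle F; the paper's contribution is to carry out this step over exponentially large structured item sets via HOP inference, which this sketch does not attempt.

```python
def greedy_max(F, ground_set, budget):
    """Greedy maximization of a monotone submodular set function.

    F: callable mapping a frozenset of items to a real value.
    ground_set: set of candidate items; budget: number of items to select.
    """
    selected = frozenset()
    for _ in range(budget):
        remaining = ground_set - selected
        if not remaining:
            break
        # pick the item with the largest marginal gain F(S + item) - F(S)
        best_item = max(remaining, key=lambda item: F(selected | {item}) - F(selected))
        selected = selected | {best_item}
    return selected
```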
[ "['Adarsh Prasad' 'Stefanie Jegelka' 'Dhruv Batra']", "Adarsh Prasad, Stefanie Jegelka and Dhruv Batra" ]
cs.LG cs.AI cs.CV stat.ML
null
1411.1784
null
null
http://arxiv.org/pdf/1411.1784v1
2014-11-06T22:33:22Z
2014-11-06T22:33:22Z
Conditional Generative Adversarial Nets
Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.
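A minimal PyTorch-style sketch of the conditioning idea: the label y (e.g. one-hot) is concatenated with the noise in the generator and with the sample in the discriminator. Layer sizes and the fully connected architecture are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    def __init__(self, noise_dim=100, label_dim=10, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + label_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh(),
        )

    def forward(self, z, y):
        return self.net(torch.cat([z, y], dim=1))   # condition by concatenation

class CondDiscriminator(nn.Module):
    def __init__(self, in_dim=784, label_dim=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim + label_dim, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))   # same label fed to the discriminator
```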
[ "['Mehdi Mirza' 'Simon Osindero']", "Mehdi Mirza, Simon Osindero" ]
cs.LG cs.NE
null
1411.1792
null
null
http://arxiv.org/pdf/1411.1792v1
2014-11-06T23:09:37Z
2014-11-06T23:09:37Z
How transferable are features in deep neural networks?
Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset.
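A schematic PyTorch sketch of the transfer setup: copy the first n layers from a trained base network into a target network, then either freeze them (no fine-tuning) or leave them trainable. It assumes both networks are nn.Sequential stacks; the helper name is illustrative.

```python
import torch.nn as nn

def transfer_first_n(base, target, n, freeze=True):
    """Copy layers 0..n-1 from `base` into `target` (both nn.Sequential stacks).

    If freeze is True, the transferred layers are excluded from further training,
    mirroring the frozen-versus-fine-tuned comparison described above.
    """
    for i in range(n):
        target[i].load_state_dict(base[i].state_dict())
        if freeze:
            for p in target[i].parameters():
                p.requires_grad = False
    return target
```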
[ "Jason Yosinski, Jeff Clune, Yoshua Bengio, Hod Lipson", "['Jason Yosinski' 'Jeff Clune' 'Yoshua Bengio' 'Hod Lipson']" ]
stat.ML cs.LG
null
1411.1804
null
null
http://arxiv.org/pdf/1411.1804v2
2014-12-02T05:23:23Z
2014-11-07T00:51:03Z
Beta Process Non-negative Matrix Factorization with Stochastic Structured Mean-Field Variational Inference
The beta process is the standard nonparametric Bayesian prior for latent factor models. In this paper, we derive a structured mean-field variational inference algorithm for a beta process non-negative matrix factorization (NMF) model with Poisson likelihood. Unlike the linear Gaussian model, which is well-studied in the nonparametric Bayesian literature, the NMF model with a beta process prior does not enjoy conjugacy. We leverage the recently developed stochastic structured mean-field variational inference to relax the conjugacy constraint and restore the dependencies among the latent variables in the approximating variational distribution. Preliminary results on both synthetic and real examples demonstrate that the proposed inference algorithm can reasonably recover the hidden structure of the data.
[ "Dawen Liang, Matthew D. Hoffman", "['Dawen Liang' 'Matthew D. Hoffman']" ]
stat.ML cs.LG
null
1411.1810
null
null
http://arxiv.org/pdf/1411.1810v4
2016-05-28T19:58:17Z
2014-11-07T01:28:41Z
Variational Tempering
Variational inference (VI) combined with data subsampling enables approximate posterior inference over large data sets, but suffers from poor local optima. We first formulate a deterministic annealing approach for the generic class of conditionally conjugate exponential family models. This approach uses a decreasing temperature parameter which deterministically deforms the objective during the course of the optimization. A well-known drawback to this annealing approach is the choice of the cooling schedule. We therefore introduce variational tempering, a variational algorithm that introduces a temperature latent variable to the model. In contrast to related work in the Markov chain Monte Carlo literature, this algorithm results in adaptive annealing schedules. Lastly, we develop local variational tempering, which assigns a latent temperature to each data point; this allows for dynamic annealing that varies across data. Compared to the traditional VI, all proposed approaches find improved predictive likelihoods on held-out data.
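One common way to write a tempered variational objective, given here as a reference form rather than the paper's exact parameterization, scales the likelihood term by the inverse temperature and anneals T toward 1, which recovers the standard ELBO.

```latex
% Tempered evidence lower bound with temperature T >= 1 (T -> 1 recovers the usual ELBO)
\mathcal{L}_T(q) \;=\; \mathbb{E}_{q(z)}\!\left[\tfrac{1}{T}\log p(x \mid z) + \log p(z)\right]
\;-\; \mathbb{E}_{q(z)}\!\left[\log q(z)\right].
```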
[ "['Stephan Mandt' 'James McInerney' 'Farhan Abrol' 'Rajesh Ranganath'\n 'David Blei']", "Stephan Mandt, James McInerney, Farhan Abrol, Rajesh Ranganath, and\n David Blei" ]
cs.CV cs.LG stat.ML
null
1411.1971
null
null
http://arxiv.org/pdf/1411.1971v2
2014-11-25T21:41:07Z
2014-10-29T20:46:20Z
Power-Law Graph Cuts
Algorithms based on spectral graph cut objectives such as normalized cuts, ratio cuts and ratio association have become popular in recent years because they are widely applicable and simple to implement via standard eigenvector computations. Despite strong performance for a number of clustering tasks, spectral graph cut algorithms still suffer from several limitations: first, they require the number of clusters to be known in advance, but this information is often unknown a priori; second, they tend to produce clusters with uniform sizes. In some cases, the true clusters exhibit a known size distribution; in image segmentation, for instance, human-segmented images tend to yield segment sizes that follow a power-law distribution. In this paper, we propose a general framework of power-law graph cut algorithms that produce clusters whose sizes are power-law distributed and that do not fix the number of clusters upfront. To achieve our goals, we treat the Pitman-Yor exchangeable partition probability function (EPPF) as a regularizer for graph cut objectives. Because the resulting objectives cannot be solved by relaxing via eigenvectors, we derive a simple iterative algorithm to locally optimize the objectives. Moreover, we show that our proposed algorithm can be viewed as performing MAP inference on a particular Pitman-Yor mixture model. Our experiments on various data sets show the effectiveness of our algorithms.
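For reference, the Pitman-Yor EPPF used as the regularizer, stated in its standard form (notation may differ from the paper's): for a partition of n items into K blocks of sizes n_1, ..., n_K, with discount d in [0, 1) and concentration alpha > -d,

```latex
p(n_1,\dots,n_K) \;=\; \frac{\prod_{k=1}^{K-1} (\alpha + k d)}{(\alpha + 1)_{n-1}}
\prod_{k=1}^{K} (1 - d)_{n_k - 1},
\qquad (x)_m := x (x+1) \cdots (x+m-1) \ \text{(rising factorial)}.
```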
[ "Xiangyang Zhou, Jiaxin Zhang, Brian Kulis", "['Xiangyang Zhou' 'Jiaxin Zhang' 'Brian Kulis']" ]
cs.LG stat.ML
null
1411.1990
null
null
http://arxiv.org/pdf/1411.1990v2
2015-03-03T14:23:47Z
2014-11-07T17:24:30Z
A totally unimodular view of structured sparsity
This paper describes a simple framework for structured sparse recovery based on convex optimization. We show that many structured sparsity models can be naturally represented by linear matrix inequalities on the support of the unknown parameters, where the constraint matrix has a totally unimodular (TU) structure. For such structured models, tight convex relaxations can be obtained in polynomial time via linear programming. Our modeling framework unifies the prevalent structured sparsity norms in the literature, introduces new interesting ones, and renders their tightness and tractability arguments transparent.
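An illustrative template of the kind of model the framework covers (not the paper's exact formulation): the support of the parameters is encoded by an indicator vector s constrained by a totally unimodular system, so the 0-1 program and its linear programming relaxation attain the same optimal value at an integral vertex.

```latex
% Illustrative template: s encodes the support pattern, M is totally unimodular, b integral.
\min_{s \in \{0,1\}^p} \; c^\top s \quad \text{s.t.} \quad M s \le b
\qquad\text{and}\qquad
\min_{s \in [0,1]^p} \; c^\top s \quad \text{s.t.} \quad M s \le b
% have the same optimal value, since every vertex of \{s \in [0,1]^p : M s \le b\} is integral.
```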
[ "['Marwa El Halabi' 'Volkan Cevher']", "Marwa El Halabi and Volkan Cevher" ]
cs.DS cs.LG
null
1411.2021
null
null
http://arxiv.org/pdf/1411.2021v3
2017-01-31T15:07:33Z
2014-11-07T20:23:50Z
Partitioning Well-Clustered Graphs: Spectral Clustering Works!
In this paper we study variants of the widely used spectral clustering that partitions a graph into k clusters by (1) embedding the vertices of a graph into a low-dimensional space using the bottom eigenvectors of the Laplacian matrix, and (2) grouping the embedded points into k clusters via k-means algorithms. We show that, for a wide class of graphs, spectral clustering gives a good approximation of the optimal clustering. While this approach was proposed in the early 1990s and has comprehensive applications, prior to our work similar results were known only for graphs generated from stochastic models. We also give a nearly-linear time algorithm for partitioning well-clustered graphs based on computing a matrix exponential and approximate nearest neighbor data structures.
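The two-step pipeline studied in the paper, sketched compactly below with a dense eigensolver for clarity (the paper's nearly-linear-time algorithm replaces exactly this expensive step); function and variable names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(W, k, seed=0):
    """W: symmetric adjacency matrix (dense, for clarity).

    Step 1: embed vertices via the bottom k eigenvectors of the normalized Laplacian.
    Step 2: group the embedded points with k-means.
    """
    deg = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    L = np.eye(W.shape[0]) - D_inv_sqrt @ W @ D_inv_sqrt
    _, eigvecs = np.linalg.eigh(L)              # eigenvalues returned in ascending order
    X = eigvecs[:, :k]                          # bottom k eigenvectors
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
```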
[ "['Richard Peng' 'He Sun' 'Luca Zanetti']", "Richard Peng and He Sun and Luca Zanetti" ]
cs.LG
null
1411.2057
null
null
http://arxiv.org/pdf/1411.2057v1
2014-11-07T22:52:19Z
2014-11-07T22:52:19Z
Online Collaborative-Filtering on Graphs
A common phenomenon in modern recommendation systems is the use of feedback from one user to infer the `value' of an item to other users. This results in an exploration vs. exploitation trade-off, in which items of possibly low value have to be presented to users in order to ascertain their value. Existing approaches to solving this problem focus on the case where the number of items is small, or the items admit some underlying structure -- it is unclear, however, if good recommendation is possible when dealing with content-rich settings with unstructured content. We consider this problem under a simple natural model, wherein the number of items and the number of item-views are of the same order, and an `access-graph' constrains which user is allowed to see which item. Our main insight is that the presence of the access-graph in fact makes good recommendation possible -- however this requires the exploration policy to be designed to take advantage of the access-graph. Our results demonstrate the importance of `serendipity' in exploration, and how higher graph-expansion translates to a higher quality of recommendations; they also suggest a reason why in some settings, simple policies like Twitter's `Latest-First' policy achieve good performance. From a technical perspective, our model presents a way to study exploration-exploitation tradeoffs in settings where the number of `trials' and `strategies' are large (potentially infinite), and more importantly, of the same order. Our algorithms admit competitive-ratio guarantees which hold for the worst-case user, under both finite-population and infinite-horizon settings, and are parametrized in terms of properties of the underlying graph. Conversely, we also demonstrate that improperly-designed policies can be highly sub-optimal, and that in many settings, our results are order-wise optimal.
[ "Siddhartha Banerjee, Sujay Sanghavi, Sanjay Shakkottai", "['Siddhartha Banerjee' 'Sujay Sanghavi' 'Sanjay Shakkottai']" ]