categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
cs.LG
null
1501.03786
null
null
http://arxiv.org/pdf/1501.03786v2
2015-01-16T11:10:13Z
2015-01-15T19:33:34Z
Multi-view learning for multivariate performance measures optimization
In this paper, we pose the problem of optimizing multivariate performance measures from multi-view data and propose an effective method to solve it. This problem has two features: the data points are represented by multiple views, and the target of learning is to optimize complex multivariate performance measures. We propose to learn a linear discriminant function for each view and combine them to construct an overall multivariate mapping function for multi-view data. To learn the parameters of the linear discriminant functions of the different views so as to optimize multivariate performance measures, we formulate an optimization problem. In this problem, we propose to minimize the complexity of the linear discriminant function of each view, encourage the consistency of the responses of the different views over the same data points, and minimize the upper bound of a given multivariate performance measure. To solve this problem, we employ the cutting-plane method in an iterative algorithm. In each iteration, we update a set of constraints and optimize the mapping function parameters of each view one by one.
[ "['Jim Jing-Yan Wang']", "Jim Jing-Yan Wang" ]
cs.LG stat.ML
null
1501.03796
null
null
http://arxiv.org/pdf/1501.03796v1
2015-01-15T20:08:49Z
2015-01-15T20:08:49Z
The Fast Convergence of Incremental PCA
We consider a situation in which we see samples in $\mathbb{R}^d$ drawn i.i.d. from some distribution with mean zero and unknown covariance A. We wish to compute the top eigenvector of A in an incremental fashion - with an algorithm that maintains an estimate of the top eigenvector in O(d) space, and incrementally adjusts the estimate with each new data point that arrives. Two classical such schemes are due to Krasulina (1969) and Oja (1983). We give finite-sample convergence rates for both.
[ "['Akshay Balsubramani' 'Sanjoy Dasgupta' 'Yoav Freund']", "Akshay Balsubramani, Sanjoy Dasgupta, Yoav Freund" ]
cs.LG stat.ML
null
1501.03838
null
null
http://arxiv.org/pdf/1501.03838v1
2015-01-15T21:59:39Z
2015-01-15T21:59:39Z
PAC-Bayes with Minimax for Confidence-Rated Transduction
We consider using an ensemble of binary classifiers for transductive prediction, when unlabeled test data are known in advance. We derive minimax optimal rules for confidence-rated prediction in this setting. By using PAC-Bayes analysis on these rules, we obtain data-dependent performance guarantees without distributional assumptions on the data. Our analysis techniques are readily extended to a setting in which the predictor is allowed to abstain.
[ "Akshay Balsubramani, Yoav Freund", "['Akshay Balsubramani' 'Yoav Freund']" ]
physics.comp-ph cs.LG stat.ML
null
1501.03854
null
null
http://arxiv.org/pdf/1501.03854v2
2015-01-28T12:02:50Z
2015-01-16T00:00:46Z
Understanding Kernel Ridge Regression: Common behaviors from simple functions to density functionals
Accurate approximations to density functionals have recently been obtained via machine learning (ML). By applying ML to a simple function of one variable without any random sampling, we extract the qualitative dependence of errors on hyperparameters. We find universal features of the behavior in extreme limits, including both very small and very large length scales, and the noise-free limit. We show how such features arise in ML models of density functionals.
[ "Kevin Vu, John Snyder, Li Li, Matthias Rupp, Brandon F. Chen, Tarek\n Khelif, Klaus-Robert M\\\"uller, Kieron Burke", "['Kevin Vu' 'John Snyder' 'Li Li' 'Matthias Rupp' 'Brandon F. Chen'\n 'Tarek Khelif' 'Klaus-Robert Müller' 'Kieron Burke']" ]
physics.med-ph cs.CV cs.LG
10.1155/2015/814104
1501.03915
null
null
http://arxiv.org/abs/1501.03915v1
2015-01-16T08:45:55Z
2015-01-16T08:45:55Z
Feature Selection based on Machine Learning in MRIs for Hippocampal Segmentation
Neurodegenerative diseases are frequently associated with structural changes in the brain. Magnetic Resonance Imaging (MRI) scans can show these variations and therefore be used as a supportive feature for a number of neurodegenerative diseases. The hippocampus has been known to be a biomarker for Alzheimer's disease and other neurological and psychiatric diseases. However, this requires accurate, robust and reproducible delineation of hippocampal structures. Fully automatic methods usually take a voxel-based approach, in which a number of local features are calculated for each voxel. In this paper we compared four different techniques for feature selection from a set of 315 features extracted for each voxel: (i) a filter method based on the Kolmogorov-Smirnov test; two wrapper methods, namely (ii) Sequential Forward Selection and (iii) Sequential Backward Elimination; and (iv) an embedded method based on the Random Forest classifier. The methods were trained on a set of 10 T1-weighted brain MRIs and tested on an independent set of 25 subjects. The resulting segmentations were compared with manual reference labelling. By using only 23 features for each voxel (selected by Sequential Backward Elimination), we obtained performance comparable to the state-of-the-art standard tool FreeSurfer.
[ "Sabina Tangaro, Nicola Amoroso, Massimo Brescia, Stefano Cavuoti,\n Andrea Chincarini, Rosangela Errico, Paolo Inglese, Giuseppe Longo, Rosalia\n Maglietta, Andrea Tateo, Giuseppe Riccio, Roberto Bellotti", "['Sabina Tangaro' 'Nicola Amoroso' 'Massimo Brescia' 'Stefano Cavuoti'\n 'Andrea Chincarini' 'Rosangela Errico' 'Paolo Inglese' 'Giuseppe Longo'\n 'Rosalia Maglietta' 'Andrea Tateo' 'Giuseppe Riccio' 'Roberto Bellotti']" ]
cs.AI cs.LG stat.ML
null
1501.03959
null
null
http://arxiv.org/pdf/1501.03959v1
2015-01-16T12:02:51Z
2015-01-16T12:02:51Z
Value Iteration with Options and State Aggregation
This paper presents a way of solving Markov Decision Processes that combines state abstraction and temporal abstraction. Specifically, we combine state aggregation with the options framework and demonstrate that they work well together and indeed it is only after one combines the two that the full benefit of each is realized. We introduce a hierarchical value iteration algorithm where we first coarsely solve subgoals and then use these approximate solutions to exactly solve the MDP. This algorithm solved several problems faster than vanilla value iteration.
[ "Kamil Ciosek and David Silver", "['Kamil Ciosek' 'David Silver']" ]
cs.NE cs.LG cs.SY
null
1501.03975
null
null
http://arxiv.org/pdf/1501.03975v1
2015-01-16T13:18:34Z
2015-01-16T13:18:34Z
Stochastic Gradient Based Extreme Learning Machines For Online Learning of Advanced Combustion Engines
In this article, a stochastic gradient based online learning algorithm for Extreme Learning Machines (ELM) is developed (SG-ELM). A stability criterion based on Lyapunov approach is used to prove both asymptotic stability of estimation error and stability in the estimated parameters suitable for identification of nonlinear dynamic systems. The developed algorithm not only guarantees stability, but also reduces the computational demand compared to the OS-ELM approach based on recursive least squares. In order to demonstrate the effectiveness of the algorithm on a real-world scenario, an advanced combustion engine identification problem is considered. The algorithm is applied to two case studies: An online regression learning for system identification of a Homogeneous Charge Compression Ignition (HCCI) Engine and an online classification learning (with class imbalance) for identifying the dynamic operating envelope of the HCCI Engine. The results indicate that the accuracy of the proposed SG-ELM is comparable to that of the state-of-the-art but adds stability and a reduction in computational effort.
[ "['Vijay Manikandan Janakiraman' 'XuanLong Nguyen' 'Dennis Assanis']", "Vijay Manikandan Janakiraman and XuanLong Nguyen and Dennis Assanis" ]
cs.LG stat.ML
10.1016/j.cageo.2015.05.018
1501.04053
null
null
http://arxiv.org/abs/1501.04053v2
2015-05-05T13:03:58Z
2015-01-16T17:09:01Z
Stochastic Local Interaction (SLI) Model: Interfacing Machine Learning and Geostatistics
Machine learning and geostatistics are powerful mathematical frameworks for modeling spatial data. Both approaches, however, suffer from poor scaling of the required computational resources for large data applications. We present the Stochastic Local Interaction (SLI) model, which employs a local representation to improve computational efficiency. SLI combines geostatistics and machine learning with ideas from statistical physics and computational geometry. It is based on a joint probability density function defined by an energy functional which involves local interactions implemented by means of kernel functions with adaptive local kernel bandwidths. SLI is expressed in terms of an explicit, typically sparse, precision (inverse covariance) matrix. This representation leads to a semi-analytical expression for interpolation (prediction), which is valid in any number of dimensions and avoids the computationally costly covariance matrix inversion.
[ "Dionissios T. Hristopulos", "['Dionissios T. Hristopulos']" ]
cs.LG
null
1501.04244
null
null
http://arxiv.org/pdf/1501.04244v1
2015-01-17T23:42:08Z
2015-01-17T23:42:08Z
Generalised Random Forest Space Overview
Assuming a view of the Random Forest as a special case of a nested ensemble of interchangeable modules, we construct a generalisation space allowing one to easily develop novel methods based on this algorithm. We discuss the role and required properties of modules at each level, especially in context of some already proposed RF generalisations.
[ "Miron B. Kursa", "['Miron B. Kursa']" ]
cs.LG
null
1501.04267
null
null
http://arxiv.org/pdf/1501.04267v2
2015-01-20T03:41:28Z
2015-01-18T05:15:55Z
Comment on "Clustering by fast search and find of density peaks"
In [1], a clustering algorithm was given to find the centers of clusters quickly. However, the accuracy of this algorithm heavily depends on the threshold value d_c. Furthermore, [1] does not provide any efficient way to select this threshold; that is, one has to estimate the value of d_c based on one's subjective experience. In this paper, based on the data field [2], we propose a new way to automatically extract the threshold value of d_c from the original data set by using the potential entropy of the data field. For any data set to be clustered, the most reasonable value of d_c can be objectively calculated from the data set using our proposed method. The experiments in [1] are redone with our proposed method on the same experimental data sets used in [1]; the results show that the problem of choosing the threshold value of d_c in [1] is solved by our method.
[ "Shuliang Wang, Dakui Wang, Caoyuan Li, Yan Li", "['Shuliang Wang' 'Dakui Wang' 'Caoyuan Li' 'Yan Li']" ]
cs.LG
null
1501.04282
null
null
http://arxiv.org/pdf/1501.04282v1
2015-01-18T11:46:30Z
2015-01-18T11:46:30Z
Regularized maximum correntropy machine
In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels of the training samples, because the loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function that considers the parameter regularization and the MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. Experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
[ "Jim Jing-Yan Wang, Yunji Wang, Bing-Yi Jing, Xin Gao", "['Jim Jing-Yan Wang' 'Yunji Wang' 'Bing-Yi Jing' 'Xin Gao']" ]
cs.CV cs.LG
null
1501.04284
null
null
http://arxiv.org/pdf/1501.04284v1
2015-01-18T11:52:21Z
2015-01-18T11:52:21Z
Pairwise Constraint Propagation on Multi-View Data
This paper presents a graph-based learning approach to pairwise constraint propagation on multi-view data. Although pairwise constraint propagation has been studied extensively, pairwise constraints are usually defined over pairs of data points from a single view, i.e., only intra-view constraint propagation is considered for multi-view tasks. In fact, very little attention has been paid to inter-view constraint propagation, which is more challenging since pairwise constraints are now defined over pairs of data points from different views. In this paper, we propose to decompose the challenging inter-view constraint propagation problem into semi-supervised learning subproblems so that they can be efficiently solved based on graph-based label propagation. To the best of our knowledge, this is the first attempt to give an efficient solution to inter-view constraint propagation from a semi-supervised learning viewpoint. Moreover, since graph-based label propagation has been adopted for basic optimization, we develop two constrained graph construction methods for inter-view constraint propagation, which only differ in how the intra-view pairwise constraints are exploited. The experimental results in cross-view retrieval have shown the promising performance of our inter-view constraint propagation.
[ "['Zhiwu Lu' 'Liwei Wang']", "Zhiwu Lu and Liwei Wang" ]
cs.IT cs.LG math.IT
null
1501.04309
null
null
http://arxiv.org/pdf/1501.04309v1
2015-01-18T14:57:02Z
2015-01-18T14:57:02Z
Information Theory and its Relation to Machine Learning
In this position paper, I first describe a new perspective on machine learning (ML) in terms of four basic problems (or levels), namely, "What to learn?", "How to learn?", "What to evaluate?", and "What to adjust?". The paper focuses mainly on the first level, "What to learn?", or "Learning Target Selection". For this primary problem among the four levels, I briefly review the existing studies on the connection between information theoretic learning (ITL [1]) and machine learning. A theorem is given on the relation between the empirically-defined similarity measure and information measures. Finally, a conjecture is proposed for pursuing a unified mathematical interpretation of learning target selection.
[ "['Bao-Gang Hu']", "Bao-Gang Hu" ]
cs.LG cs.CV stat.ML
null
1501.04318
null
null
http://arxiv.org/pdf/1501.04318v2
2018-01-29T00:34:26Z
2015-01-18T15:34:19Z
Clustering based on the In-tree Graph Structure and Affinity Propagation
A recently proposed clustering method, called the Nearest Descent (ND), can organize the whole dataset into a sparsely connected graph, called the In-tree. This ND-based In-tree structure proves able to reveal the clustering structure underlying the dataset, except for one imperfection: there are some undesired edges in this In-tree which need to be removed. Here, we propose an effective way to automatically remove the undesired edges in the In-tree via an effective combination of the In-tree structure with affinity propagation (AP). The key to the combination is to add edges between the reachable nodes in the In-tree before using AP to remove the undesired edges. The experiments on both synthetic and real datasets demonstrate the effectiveness of the proposed method.
[ "Teng Qiu, Yongjie Li", "['Teng Qiu' 'Yongjie Li']" ]
cs.CL cs.LG stat.ML
null
1501.04325
null
null
http://arxiv.org/pdf/1501.04325v1
2015-01-18T17:12:59Z
2015-01-18T17:12:59Z
Deep Belief Nets for Topic Modeling
Applying traditional collaborative filtering to digital publishing is challenging because user data is very sparse due to the high volume of documents relative to the number of users. Content-based approaches, on the other hand, are attractive because textual content is often very informative. In this paper we describe large-scale content-based collaborative filtering for digital publishing. To solve the digital publishing recommender problem we compare two approaches: latent Dirichlet allocation (LDA) and deep belief nets (DBN), both of which find low-dimensional latent representations for documents. Efficient retrieval can be carried out in the latent representation. We work both on public benchmarks and on digital media content provided by Issuu, an online publishing platform. This article also comes with a newly developed deep belief nets toolbox for topic modeling, tailored towards performance evaluation of the DBN model and comparisons to the LDA model.
[ "['Lars Maaloe' 'Morten Arngren' 'Ole Winther']", "Lars Maaloe and Morten Arngren and Ole Winther" ]
stat.ML cs.AI cs.CL cs.LG
null
1501.04346
null
null
http://arxiv.org/pdf/1501.04346v1
2015-01-18T20:50:39Z
2015-01-18T20:50:39Z
Mathematical Language Processing: Automatic Grading and Feedback for Open Response Mathematical Questions
While computer and communication technologies have provided effective means to scale up many aspects of education, the submission and grading of assessments such as homework assignments and tests remains a weak link. In this paper, we study the problem of automatically grading the kinds of open response mathematical questions that figure prominently in STEM (science, technology, engineering, and mathematics) courses. Our data-driven framework for mathematical language processing (MLP) leverages solution data from a large number of learners to evaluate the correctness of their solutions, assign partial-credit scores, and provide feedback to each learner on the likely locations of any errors. MLP takes inspiration from the success of natural language processing for text data and comprises three main steps. First, we convert each solution to an open response mathematical question into a series of numerical features. Second, we cluster the features from several solutions to uncover the structures of correct, partially correct, and incorrect solutions. We develop two different clustering approaches, one that leverages generic clustering algorithms and one based on Bayesian nonparametrics. Third, we automatically grade the remaining (potentially large number of) solutions based on their assigned cluster and one instructor-provided grade per cluster. As a bonus, we can track the cluster assignment of each step of a multistep solution and determine when it departs from a cluster of correct solutions, which enables us to indicate the likely locations of errors to learners. We test and validate MLP on real-world MOOC data to demonstrate how it can substantially reduce the human effort required in large-scale educational platforms.
[ "['Andrew S. Lan' 'Divyanshu Vats' 'Andrew E. Waters' 'Richard G. Baraniuk']", "Andrew S. Lan and Divyanshu Vats and Andrew E. Waters and Richard G.\n Baraniuk" ]
cs.AI cs.LG stat.ML
null
1501.04370
null
null
http://arxiv.org/pdf/1501.04370v1
2015-01-19T01:32:43Z
2015-01-19T01:32:43Z
Structure Learning in Bayesian Networks of Moderate Size by Efficient Sampling
We study the Bayesian model averaging approach to learning Bayesian network structures (DAGs) from data. We develop new algorithms including the first algorithm that is able to efficiently sample DAGs according to the exact structure posterior. The DAG samples can then be used to construct estimators for the posterior of any feature. We theoretically prove good properties of our estimators and empirically show that our estimators considerably outperform the estimators from the previous state-of-the-art methods.
[ "Ru He, Jin Tian, Huaiqing Wu", "['Ru He' 'Jin Tian' 'Huaiqing Wu']" ]
stat.ML cond-mat.dis-nn cond-mat.stat-mech cs.AI cs.LG
10.7566/JPSJ.84.034003
1501.04413
null
null
http://arxiv.org/abs/1501.04413v1
2015-01-19T07:24:21Z
2015-01-19T07:24:21Z
Statistical-mechanical analysis of pre-training and fine tuning in deep learning
In this paper, we present a statistical-mechanical analysis of deep learning. We elucidate some of the essential components of deep learning---pre-training by unsupervised learning and fine tuning by supervised learning. We formulate the extraction of features from the training data as a margin criterion in a high-dimensional feature-vector space. The self-organized classifier is then supplied with small amounts of labelled data, as in deep learning. Although we employ a simple single-layer perceptron model, rather than directly analyzing a multi-layer neural network, we find a nontrivial phase transition that is dependent on the number of unlabelled data in the generalization error of the resultant classifier. In this sense, we evaluate the efficacy of the unsupervised learning component of deep learning. The analysis is performed by the replica method, which is a sophisticated tool in statistical mechanics. We validate our result in the manner of deep learning, using a simple iterative algorithm to learn the weight vector on the basis of belief propagation.
[ "Masayuki Ohzeki", "['Masayuki Ohzeki']" ]
q-bio.QM cs.LG q-bio.NC
null
1501.04621
null
null
http://arxiv.org/pdf/1501.04621v1
2015-01-19T16:11:03Z
2015-01-19T16:11:03Z
Sparse Bayesian Learning for EEG Source Localization
Purpose: Localizing the sources of electrical activity from electroencephalographic (EEG) data has gained considerable attention over the last few years. In this paper, we propose an innovative source localization method for EEG, based on Sparse Bayesian Learning (SBL). Methods: To better specify the sparsity profile and to ensure efficient source localization, the proposed approach considers grouping of the electrical current dipoles inside the human brain. SBL is used to solve the localization problem, together with the imposed constraint that the electric current dipoles associated with the brain activity are isotropic. Results: Numerical experiments are conducted on a realistic head model that is obtained by segmentation of MRI images of the head and includes four major components, namely the scalp, the skull, the cerebrospinal fluid (CSF) and the brain, with appropriate relative conductivity values. The results demonstrate that the isotropy constraint significantly improves the performance of SBL. In a noiseless environment, the proposed method was found to accurately (with accuracy of >75%) locate up to 6 simultaneously active sources, whereas for SBL without the isotropy constraint, the accuracy of finding just 3 simultaneously active sources was <75%. Conclusions: Compared to the state-of-the-art algorithms, the proposed method is potentially more consistent in specifying the sparsity profile of human brain activity and is able to produce better source localization for EEG.
[ "['Sajib Saha' 'Frank de Hoog' 'Ya. I. Nesterets' 'Rajib Rana' 'M. Tahtali'\n 'T. E. Gureyev']", "Sajib Saha, Frank de Hoog, Ya.I. Nesterets, Rajib Rana, M. Tahtali and\n T.E. Gureyev" ]
stat.ML cs.CV cs.LG q-bio.QM
null
1501.04656
null
null
http://arxiv.org/pdf/1501.04656v2
2015-01-27T21:13:28Z
2015-01-19T22:07:27Z
Microscopic Advances with Large-Scale Learning: Stochastic Optimization for Cryo-EM
Determining the 3D structures of biological molecules is a key problem for both biology and medicine. Electron Cryomicroscopy (Cryo-EM) is a promising technique for structure estimation which relies heavily on computational methods to reconstruct 3D structures from 2D images. This paper introduces the challenging Cryo-EM density estimation problem as a novel application for stochastic optimization techniques. Structure discovery is formulated as MAP estimation in a probabilistic latent-variable model, resulting in an optimization problem to which an array of seven stochastic optimization methods are applied. The methods are tested on both real and synthetic data, with some methods recovering reasonable structures in less than one epoch from a random initialization. Complex quasi-Newton methods are found to converge more slowly than simple gradient-based methods, but all stochastic methods are found to converge to similar optima. This method represents a major improvement over existing methods as it is significantly faster and is able to converge from a random initialization.
[ "['Ali Punjani' 'Marcus A. Brubaker']", "Ali Punjani and Marcus A. Brubaker" ]
cs.CV cs.LG
null
1501.04717
null
null
http://arxiv.org/pdf/1501.04717v1
2015-01-20T06:05:01Z
2015-01-20T06:05:01Z
Robust Face Recognition by Constrained Part-based Alignment
Developing a reliable and practical face recognition system is a long-standing goal in computer vision research. Existing literature suggests that pixel-wise face alignment is the key to achieve high-accuracy face recognition. By assuming a human face as piece-wise planar surfaces, where each surface corresponds to a facial part, we develop in this paper a Constrained Part-based Alignment (CPA) algorithm for face recognition across pose and/or expression. Our proposed algorithm is based on a trainable CPA model, which learns appearance evidence of individual parts and a tree-structured shape configuration among different parts. Given a probe face, CPA simultaneously aligns all its parts by fitting them to the appearance evidence with consideration of the constraint from the tree-structured shape configuration. This objective is formulated as a norm minimization problem regularized by graph likelihoods. CPA can be easily integrated with many existing classifiers to perform part-based face recognition. Extensive experiments on benchmark face datasets show that CPA outperforms or is on par with existing methods for robust face recognition across pose, expression, and/or illumination changes.
[ "Yuting Zhang, Kui Jia, Yueming Wang, Gang Pan, Tsung-Han Chan, Yi Ma", "['Yuting Zhang' 'Kui Jia' 'Yueming Wang' 'Gang Pan' 'Tsung-Han Chan'\n 'Yi Ma']" ]
cs.PL cs.LG
null
1501.04725
null
null
http://arxiv.org/pdf/1501.04725v1
2015-01-20T07:20:30Z
2015-01-20T07:20:30Z
Learning Invariants using Decision Trees
The problem of inferring an inductive invariant for verifying program safety can be formulated in terms of binary classification. This is a standard problem in machine learning: given a sample of good and bad points, one is asked to find a classifier that generalizes from the sample and separates the two sets. Here, the good points are the reachable states of the program, and the bad points are those that reach a safety property violation. Thus, a learned classifier is a candidate invariant. In this paper, we propose a new algorithm that uses decision trees to learn candidate invariants in the form of arbitrary Boolean combinations of numerical inequalities. We have used our algorithm to verify C programs taken from the literature. The algorithm is able to infer safe invariants for a range of challenging benchmarks and compares favorably to other ML-based invariant inference techniques. In particular, it scales well to large sample sets.
[ "Siddharth Krishna, Christian Puhrsch, Thomas Wies", "['Siddharth Krishna' 'Christian Puhrsch' 'Thomas Wies']" ]
cs.LO cs.DB cs.LG
10.23638/LMCS-15(1:10)2019
1501.04826
null
null
http://arxiv.org/abs/1501.04826v5
2019-02-05T11:44:54Z
2015-01-20T14:41:36Z
Relative Entailment Among Probabilistic Implications
We study a natural variant of the implicational fragment of propositional logic. Its formulas are pairs of conjunctions of positive literals, related together by an implicational-like connective; the semantics of this sort of implication is defined in terms of a threshold on a conditional probability of the consequent, given the antecedent: we are dealing with what the data analysis community calls confidence of partial implications or association rules. Existing studies of redundancy among these partial implications have characterized so far only entailment from one premise and entailment from two premises, both in the stand-alone case and in the case of presence of additional classical implications (this is what we call "relative entailment"). By exploiting a previously noted alternative view of the entailment in terms of linear programming duality, we characterize exactly the cases of entailment from arbitrary numbers of premises, again both in the stand-alone case and in the case of presence of additional classical implications. As a result, we obtain decision algorithms of better complexity; additionally, for each potential case of entailment, we identify a critical confidence threshold and show that it is, actually, intrinsic to each set of premises and antecedent of the conclusion.
[ "['Albert Atserias' 'José L. Balcázar' 'Marie Ely Piceno']", "Albert Atserias and Jos\\'e L. Balc\\'azar and Marie Ely Piceno" ]
stat.ML cs.CV cs.DS cs.LG stat.CO
10.1016/j.patcog.2015.01.004
1501.04870
null
null
http://arxiv.org/abs/1501.04870v1
2015-01-20T16:33:40Z
2015-01-20T16:33:40Z
Scalable Multi-Output Label Prediction: From Classifier Chains to Classifier Trellises
Multi-output inference tasks, such as multi-label classification, have become increasingly important in recent years. A popular method for multi-label classification is classifier chains, in which the predictions of individual classifiers are cascaded along a chain, thus taking into account inter-label dependencies and improving the overall performance. Several varieties of classifier chain methods have been introduced, and many of them perform very competitively across a wide range of benchmark datasets. However, scalability limitations become apparent on larger datasets when modeling a fully-cascaded chain. In particular, the methods' strategies for discovering and modeling a good chain structure constitute a major computational bottleneck. In this paper, we present the classifier trellis (CT) method for scalable multi-label classification. We compare CT with several recently proposed classifier chain methods to show that it occupies an important niche: it is highly competitive on standard multi-label problems, yet it can also scale up to thousands or even tens of thousands of labels.
[ "J. Read, L. Martino, P. Olmos, D. Luengo", "['J. Read' 'L. Martino' 'P. Olmos' 'D. Luengo']" ]
cs.DM cs.LG
null
1501.05141
null
null
http://arxiv.org/pdf/1501.05141v2
2015-01-22T12:18:56Z
2015-01-21T11:42:58Z
An Algebra to Merge Heterogeneous Classifiers
In distributed classification, each learner observes its environment and deduces a classifier. As a learner has only a local view of its environment, classifiers can be exchanged among the learners and integrated, or merged, to improve accuracy. However, the operation of merging is not defined for most classifiers. Furthermore, the classifiers that have to be merged may be of different types in settings such as ad-hoc networks in which several generations of sensors may be creating classifiers. We introduce decision spaces as a framework for merging possibly different classifiers. We formally study the merging operation as an algebra, and prove that it satisfies a desirable set of properties. The impact of time is discussed for the two main data mining settings. Firstly, decision spaces can naturally be used with non-stationary distributions, such as the data collected by sensor networks, as the impact of a model decays over time. Secondly, we introduce an approach for stationary distributions, such as homogeneous databases partitioned over different learners, which ensures that all models have the same impact. We also present a method that uses storage flexibly to achieve different types of decay for non-stationary distributions. Finally, we show that the algebraic approach developed for merging can also be used to analyze the behaviour of other operators.
[ "Philippe J. Giabbanelli and Joseph G. Peters", "['Philippe J. Giabbanelli' 'Joseph G. Peters']" ]
stat.ML cs.LG q-bio.QM
10.1371/journal.pone.0137278
1501.05194
null
null
http://arxiv.org/abs/1501.05194v2
2015-10-19T11:31:38Z
2015-01-21T15:22:13Z
A Bayesian alternative to mutual information for the hierarchical clustering of dependent random variables
The use of mutual information as a similarity measure in agglomerative hierarchical clustering (AHC) raises an important issue: some correction needs to be applied for the dimensionality of variables. In this work, we formulate the decision of merging dependent multivariate normal variables in an AHC procedure as a Bayesian model comparison. We found that the Bayesian formulation naturally shrinks the empirical covariance matrix towards a matrix set a priori (e.g., the identity), provides an automated stopping rule, and corrects for dimensionality using a term that scales up the measure as a function of the dimensionality of the variables. Also, the resulting log Bayes factor is asymptotically proportional to the plug-in estimate of mutual information, with an additive correction for dimensionality in agreement with the Bayesian information criterion. We investigated the behavior of these Bayesian alternatives (in exact and asymptotic forms) to mutual information on simulated and real data. An encouraging result was first derived on simulations: the hierarchical clustering based on the log Bayes factor outperformed off-the-shelf clustering techniques as well as raw and normalized mutual information in terms of classification accuracy. On a toy example, we found that the Bayesian approaches led to results that were similar to those of mutual information clustering techniques, with the advantage of an automated thresholding. On real functional magnetic resonance imaging (fMRI) datasets measuring brain activity, it identified clusters consistent with the established outcome of standard procedures. On this application, normalized mutual information had a highly atypical behavior, in the sense that it systematically favored very large clusters. These initial experiments suggest that the proposed Bayesian alternatives to mutual information are a useful new tool for hierarchical clustering.
[ "['Guillaume Marrelec' 'Arnaud Messé' 'Pierre Bellec']", "Guillaume Marrelec, Arnaud Mess\\'e, Pierre Bellec" ]
cs.DS cs.LG
null
1501.05222
null
null
http://arxiv.org/pdf/1501.05222v1
2015-01-21T16:39:43Z
2015-01-21T16:39:43Z
Plug-and-play dual-tree algorithm runtime analysis
Numerous machine learning algorithms contain pairwise statistical problems at their core---that is, tasks that require computations over all pairs of input points if implemented naively. Often, tree structures are used to solve these problems efficiently. Dual-tree algorithms can efficiently solve or approximate many of these problems. Using cover trees, rigorous worst-case runtime guarantees have been proven for some of these algorithms. In this paper, we present a problem-independent runtime guarantee for any dual-tree algorithm using the cover tree, separating out the problem-dependent and the problem-independent elements. This allows us to just plug in bounds for the problem-dependent elements to get runtime guarantees for dual-tree algorithms for any pairwise statistical problem without re-deriving the entire proof. We demonstrate this plug-and-play procedure for nearest-neighbor search and approximate kernel density estimation to get improved runtime guarantees. Under mild assumptions, we also present the first linear runtime guarantee for dual-tree based range search.
[ "['Ryan R. Curtin' 'Dongryeol Lee' 'William B. March' 'Parikshit Ram']", "Ryan R. Curtin, Dongryeol Lee, William B. March, Parikshit Ram" ]
cs.LG
null
1501.05279
null
null
http://arxiv.org/pdf/1501.05279v1
2015-01-21T19:54:26Z
2015-01-21T19:54:26Z
Extreme Entropy Machines: Robust information theoretic classification
Most of the existing classification methods are aimed at minimization of empirical risk (through some simple point-based error measured with a loss function) with added regularization. We propose to approach this problem in a more information theoretic way by investigating the applicability of entropy measures as a classification model objective function. We focus on quadratic Renyi's entropy and the connected Cauchy-Schwarz Divergence, which leads to the construction of Extreme Entropy Machines (EEM). The main contribution of this paper is proposing a model based on information theoretic concepts which on the one hand gives a new, entropic perspective on known linear classifiers and on the other leads to the construction of a very robust method competitive with state-of-the-art non-information theoretic ones (including Support Vector Machines and Extreme Learning Machines). Evaluation on numerous problems, spanning from small, simple ones from the UCI repository to large (hundreds of thousands of samples), extremely unbalanced (up to 100:1 class ratios) datasets, shows the wide applicability of the EEM to real life problems and that it scales well.
[ "Wojciech Marian Czarnecki, Jacek Tabor", "['Wojciech Marian Czarnecki' 'Jacek Tabor']" ]
cs.LG cs.CV math.OC stat.ML
null
1501.05352
null
null
http://arxiv.org/pdf/1501.05352v2
2016-02-05T01:25:26Z
2015-01-21T23:53:47Z
Optimizing affinity-based binary hashing using auxiliary coordinates
In supervised binary hashing, one wants to learn a function that maps a high-dimensional feature vector to a vector of binary codes, for application to fast image retrieval. This typically results in a difficult optimization problem, nonconvex and nonsmooth, because of the discrete variables involved. Much work has simply relaxed the problem during training, solving a continuous optimization, and truncating the codes a posteriori. This gives reasonable results but is quite suboptimal. Recent work has tried to optimize the objective directly over the binary codes and achieved better results, but the hash function was still learned a posteriori, which remains suboptimal. We propose a general framework for learning hash functions using affinity-based loss functions that uses auxiliary coordinates. This closes the loop and optimizes jointly over the hash functions and the binary codes so that they gradually match each other. The resulting algorithm can be seen as a corrected, iterated version of the procedure of optimizing first over the codes and then learning the hash function. Compared to this, our optimization is guaranteed to obtain better hash functions while being not much slower, as demonstrated experimentally in various supervised datasets. In addition, our framework facilitates the design of optimization algorithms for arbitrary types of loss and hash functions.
[ "Ramin Raziperchikolaei and Miguel \\'A. Carreira-Perpi\\~n\\'an", "['Ramin Raziperchikolaei' 'Miguel Á. Carreira-Perpiñán']" ]
cs.CL cs.LG
null
1501.05396
null
null
http://arxiv.org/pdf/1501.05396v1
2015-01-22T05:25:33Z
2015-01-22T05:25:33Z
Deep Multimodal Learning for Audio-Visual Speech Recognition
In this paper, we present methods in deep multimodal learning for fusing speech and visual modalities for Audio-Visual Automatic Speech Recognition (AV-ASR). First, we study an approach where uni-modal deep networks are trained separately and their final hidden layers fused to obtain a joint feature space in which another deep network is built. While the audio network alone achieves a phone error rate (PER) of $41\%$ under clean condition on the IBM large vocabulary audio-visual studio dataset, this fusion model achieves a PER of $35.83\%$ demonstrating the tremendous value of the visual channel in phone classification even in audio with high signal to noise ratio. Second, we present a new deep network architecture that uses a bilinear softmax layer to account for class specific correlations between modalities. We show that combining the posteriors from the bilinear networks with those from the fused model mentioned above results in a further significant phone error rate reduction, yielding a final PER of $34.03\%$.
[ "Youssef Mroueh, Etienne Marcheret, Vaibhava Goel", "['Youssef Mroueh' 'Etienne Marcheret' 'Vaibhava Goel']" ]
stat.ML cs.LG
10.1109/JSTSP.2015.2396477
1501.05590
null
null
http://arxiv.org/abs/1501.05590v1
2015-01-22T18:16:30Z
2015-01-22T18:16:30Z
Sketch and Validate for Big Data Clustering
In response to the need for learning tools tuned to big data analytics, the present paper introduces a framework for efficient clustering of huge sets of (possibly high-dimensional) data. Building on random sampling and consensus (RANSAC) ideas pursued earlier in a different (computer vision) context for robust regression, a suite of novel dimensionality and set-reduction algorithms is developed. The advocated sketch-and-validate (SkeVa) family includes two algorithms that rely on K-means clustering per iteration on reduced number of dimensions and/or feature vectors: The first operates in a batch fashion, while the second sequential one offers computational efficiency and suitability with streaming modes of operation. For clustering even nonlinearly separable vectors, the SkeVa family offers also a member based on user-selected kernel functions. Further trading off performance for reduced complexity, a fourth member of the SkeVa family is based on a divergence criterion for selecting proper minimal subsets of feature variables and vectors, thus bypassing the need for K-means clustering per iteration. Extensive numerical tests on synthetic and real data sets highlight the potential of the proposed algorithms, and demonstrate their competitive performance relative to state-of-the-art random projection alternatives.
[ "['Panagiotis A. Traganitis' 'Konstantinos Slavakis'\n 'Georgios B. Giannakis']", "Panagiotis A. Traganitis, Konstantinos Slavakis, Georgios B. Giannakis" ]
stat.ML cs.LG
null
1501.05624
null
null
http://arxiv.org/pdf/1501.05624v1
2015-01-22T20:24:32Z
2015-01-22T20:24:32Z
A Collaborative Kalman Filter for Time-Evolving Dyadic Processes
We present the collaborative Kalman filter (CKF), a dynamic model for collaborative filtering and related factorization models. Using the matrix factorization approach to collaborative filtering, the CKF accounts for time evolution by modeling each low-dimensional latent embedding as a multidimensional Brownian motion. Each observation is a random variable whose distribution is parameterized by the dot product of the relevant Brownian motions at that moment in time. This is naturally interpreted as a Kalman filter with multiple interacting state space vectors. We also present a method for learning a dynamically evolving drift parameter for each location by modeling it as a geometric Brownian motion. We handle posterior intractability via a mean-field variational approximation, which also preserves tractability for downstream calculations in a manner similar to the Kalman filter. We evaluate the model on several large datasets, providing quantitative evaluation on the 10 million Movielens and 100 million Netflix datasets and qualitative evaluation on a set of 39 million stock returns divided across roughly 6,500 companies from the years 1962-2014.
[ "['San Gultekin' 'John Paisley']", "San Gultekin and John Paisley" ]
stat.ML cs.CV cs.LG math.OC
null
1501.05684
null
null
http://arxiv.org/pdf/1501.05684v1
2015-01-22T22:59:47Z
2015-01-22T22:59:47Z
Bi-Objective Nonnegative Matrix Factorization: Linear Versus Kernel-Based Models
Nonnegative matrix factorization (NMF) is a powerful class of feature extraction techniques that has been successfully applied in many fields, namely in signal and image processing. Current NMF techniques have been limited to a single-objective problem in either its linear or nonlinear kernel-based formulation. In this paper, we propose to revisit the NMF as a multi-objective problem, in particular a bi-objective one, where the objective functions defined in both input and feature spaces are taken into account. By taking advantage of the weighted-sum method from the multi-objective optimization literature, the proposed bi-objective NMF determines a set of nondominated, Pareto optimal, solutions instead of a single optimal decomposition. Moreover, the corresponding Pareto front is studied and approximated. Experimental results on unmixing real hyperspectral images confirm the efficiency of the proposed bi-objective NMF compared with the state-of-the-art methods.
[ "Paul Honeine, Fei Zhu", "['Paul Honeine' 'Fei Zhu']" ]
stat.ML cs.LG cs.NA
null
1501.05740
null
null
http://arxiv.org/pdf/1501.05740v1
2015-01-23T08:52:35Z
2015-01-23T08:52:35Z
Bayesian Learning for Low-Rank matrix reconstruction
We develop latent variable models for Bayesian learning based low-rank matrix completion and reconstruction from linear measurements. For under-determined systems, the developed methods are shown to reconstruct low-rank matrices when neither the rank nor the noise power is known a-priori. We derive relations between the latent variable models and several low-rank promoting penalty functions. The relations justify the use of Kronecker structured covariance matrices in a Gaussian based prior. In the methods, we use evidence approximation and expectation-maximization to learn the model parameters. The performance of the methods is evaluated through extensive numerical simulations.
[ "Martin Sundin, Cristian R. Rojas, Magnus Jansson and Saikat Chatterjee", "['Martin Sundin' 'Cristian R. Rojas' 'Magnus Jansson' 'Saikat Chatterjee']" ]
stat.ML cs.LG
null
1501.06060
null
null
http://arxiv.org/pdf/1501.06060v1
2015-01-24T16:41:55Z
2015-01-24T16:41:55Z
Consistency Analysis of Nearest Subspace Classifier
The Nearest subspace classifier (NSS) finds an estimation of the underlying subspace within each class and assigns data points to the class that corresponds to its nearest subspace. This paper mainly studies how well NSS can be generalized to new samples. It is proved that NSS is strongly consistent under certain assumptions. For completeness, NSS is evaluated through experiments on various simulated and real data sets, in comparison with some other linear model based classifiers. It is also shown that NSS can obtain effective classification results and is very efficient, especially for large scale data sets.
[ "Yi Wang", "['Yi Wang']" ]
cs.DS cs.CR cs.LG
null
1501.06095
null
null
http://arxiv.org/pdf/1501.06095v1
2015-01-24T23:26:21Z
2015-01-24T23:26:21Z
Between Pure and Approximate Differential Privacy
We show a new lower bound on the sample complexity of $(\varepsilon, \delta)$-differentially private algorithms that accurately answer statistical queries on high-dimensional databases. The novelty of our bound is that it depends optimally on the parameter $\delta$, which loosely corresponds to the probability that the algorithm fails to be private, and is the first to smoothly interpolate between approximate differential privacy ($\delta > 0$) and pure differential privacy ($\delta = 0$). Specifically, we consider a database $D \in \{\pm1\}^{n \times d}$ and its \emph{one-way marginals}, which are the $d$ queries of the form "What fraction of individual records have the $i$-th bit set to $+1$?" We show that in order to answer all of these queries to within error $\pm \alpha$ (on average) while satisfying $(\varepsilon, \delta)$-differential privacy, it is necessary that $$ n \geq \Omega\left( \frac{\sqrt{d \log(1/\delta)}}{\alpha \varepsilon} \right), $$ which is optimal up to constant factors. To prove our lower bound, we build on the connection between \emph{fingerprinting codes} and lower bounds in differential privacy (Bun, Ullman, and Vadhan, STOC'14). In addition to our lower bound, we give new purely and approximately differentially private algorithms for answering arbitrary statistical queries that improve on the sample complexity of the standard Laplace and Gaussian mechanisms for achieving worst-case accuracy guarantees by a logarithmic factor.
[ "['Thomas Steinke' 'Jonathan Ullman']", "Thomas Steinke and Jonathan Ullman" ]
cs.LG cs.CV cs.NE
null
1501.06115
null
null
http://arxiv.org/pdf/1501.06115v2
2015-02-04T11:42:01Z
2015-01-25T05:11:34Z
Constrained Extreme Learning Machines: A Study on Classification Cases
Extreme learning machine (ELM) is an extremely fast learning method with powerful performance on pattern recognition tasks, as demonstrated by numerous studies and engineering applications. However, its good generalization ability is built on large numbers of hidden neurons, which is not beneficial to real-time response in the test process. In this paper, we propose new methods, named "constrained extreme learning machines" (CELMs), to randomly select hidden neurons based on the sample distribution. Compared to the completely random selection of hidden nodes in ELM, the CELMs randomly select hidden nodes from a constrained vector space containing some basic combinations of the original sample vectors. The experimental results show that the CELMs have better generalization ability than traditional ELM, SVM and some other related methods. Additionally, the CELMs retain the fast learning speed of ELM.
[ "['Wentao Zhu' 'Jun Miao' 'Laiyun Qing']", "Wentao Zhu, Jun Miao, Laiyun Qing" ]
stat.ML cs.DS cs.LG stat.CO
null
1501.06195
null
null
http://arxiv.org/pdf/1501.06195v1
2015-01-25T19:06:59Z
2015-01-25T19:06:59Z
Randomized sketches for kernels: Fast and optimal non-parametric regression
Kernel ridge regression (KRR) is a standard method for performing non-parametric regression over reproducing kernel Hilbert spaces. Given $n$ samples, the time and space complexity of computing the KRR estimate scale as $\mathcal{O}(n^3)$ and $\mathcal{O}(n^2)$ respectively, and so is prohibitive in many cases. We propose approximations of KRR based on $m$-dimensional randomized sketches of the kernel matrix, and study how small the projection dimension $m$ can be chosen while still preserving minimax optimality of the approximate KRR estimate. For various classes of randomized sketches, including those based on Gaussian and randomized Hadamard matrices, we prove that it suffices to choose the sketch dimension $m$ proportional to the statistical dimension (modulo logarithmic factors). Thus, we obtain fast and minimax optimal approximations to the KRR estimate for non-parametric regression.
[ "Yun Yang, Mert Pilanci, Martin J. Wainwright", "['Yun Yang' 'Mert Pilanci' 'Martin J. Wainwright']" ]
cs.CV cs.LG cs.MM cs.SI math.ST stat.TH
10.1109/TPAMI.2015.2456887
1501.06202
null
null
http://arxiv.org/abs/1501.06202v4
2015-07-27T14:42:17Z
2015-01-25T20:02:45Z
Robust Subjective Visual Property Prediction from Crowdsourced Pairwise Labels
The problem of estimating subjective visual properties from images and videos has attracted increasing interest. A subjective visual property is useful either on its own (e.g. image and video interestingness) or as an intermediate representation for visual recognition (e.g. a relative attribute). Due to its ambiguous nature, annotating the value of a subjective visual property for learning a prediction model is challenging. To make the annotation more reliable, recent studies employ crowdsourcing tools to collect pairwise comparison labels because human annotators are much better at ranking two images/videos (e.g. which one is more interesting) than giving an absolute value to each of them separately. However, using crowdsourced data also introduces outliers. Existing methods rely on majority voting to prune the annotation outliers/errors. They thus require a large amount of pairwise labels to be collected. More importantly, as a local outlier detection method, majority voting is ineffective in identifying outliers that can cause global ranking inconsistencies. In this paper, we propose a more principled way to identify annotation outliers by formulating the subjective visual property prediction task as a unified robust learning to rank problem, tackling both outlier detection and learning to rank jointly. Differing from existing methods, the proposed method integrates local pairwise comparison labels together to minimise a cost that corresponds to global inconsistency of ranking order. This not only leads to better detection of annotation outliers but also enables learning with extremely sparse annotations. Extensive experiments on various benchmark datasets demonstrate that our new approach significantly outperforms state-of-the-art alternatives.
[ "Yanwei Fu, Timothy M. Hospedales, Tao Xiang, Jiechao Xiong, Shaogang\n Gong, Yizhou Wang, and Yuan Yao", "['Yanwei Fu' 'Timothy M. Hospedales' 'Tao Xiang' 'Jiechao Xiong'\n 'Shaogang Gong' 'Yizhou Wang' 'Yuan Yao']" ]
cs.LG math.OC stat.ML
null
1501.06225
null
null
http://arxiv.org/pdf/1501.06225v1
2015-01-26T00:40:08Z
2015-01-26T00:40:08Z
Online Optimization : Competing with Dynamic Comparators
Recent literature on online learning has focused on developing adaptive algorithms that take advantage of a regularity of the sequence of observations, yet retain worst-case performance guarantees. A complementary direction is to develop prediction methods that perform well against complex benchmarks. In this paper, we address these two directions together. We present a fully adaptive method that competes with dynamic benchmarks in which regret guarantee scales with regularity of the sequence of cost functions and comparators. Notably, the regret bound adapts to the smaller complexity measure in the problem environment. Finally, we apply our results to drifting zero-sum, two-player games where both players achieve no regret guarantees against best sequences of actions in hindsight.
[ "['Ali Jadbabaie' 'Alexander Rakhlin' 'Shahin Shahrampour'\n 'Karthik Sridharan']", "Ali Jadbabaie, Alexander Rakhlin, Shahin Shahrampour and Karthik\n Sridharan" ]
cs.LG
null
1501.06237
null
null
http://arxiv.org/pdf/1501.06237v1
2015-01-26T02:28:18Z
2015-01-26T02:28:18Z
Deep Transductive Semi-supervised Maximum Margin Clustering
Semi-supervised clustering is a very important topic in machine learning and computer vision. The key challenge of this problem is how to learn a metric such that the instances sharing the same label are more likely to be close to each other in the embedded space. However, little attention has been paid to learning better representations when the data lie on a non-linear manifold. Fortunately, deep learning has recently led to great success in feature learning. Inspired by the advances of deep learning, we propose a deep transductive semi-supervised maximum margin clustering approach. More specifically, given pairwise constraints, we exploit both labeled and unlabeled data to learn a non-linear mapping under the maximum margin framework for clustering analysis. Thus, our model unifies transductive learning, feature learning and maximum margin techniques in the semi-supervised clustering framework. We pretrain the deep network structure with restricted Boltzmann machines (RBMs) layer by layer greedily, and optimize our objective function with gradient descent. By checking the most violated constraints, our approach updates the model parameters through error backpropagation, in which deep features are learned automatically. The experimental results show that our model is significantly better than the state of the art in semi-supervised clustering.
[ "['Gang Chen']", "Gang Chen" ]
stat.ML cs.IT cs.LG math.IT math.ST stat.TH
10.1109/ISIT.2015.7282736
1501.06241
null
null
http://arxiv.org/abs/1501.06241v2
2015-03-16T22:38:33Z
2015-01-26T02:51:13Z
Sequential Sensing with Model Mismatch
We characterize the performance of sequential information guided sensing, Info-Greedy Sensing, when there is a mismatch between the true signal model and the assumed model, which may be a sample estimate. In particular, we consider a setup where the signal is low-rank Gaussian and the measurements are taken in the directions of eigenvectors of the covariance matrix in a decreasing order of eigenvalues. We establish a set of performance bounds when a mismatched covariance matrix is used, in terms of the gap of signal posterior entropy, as well as the additional amount of power required to achieve the same signal recovery precision. Based on this, we further study how to choose an initialization for Info-Greedy Sensing using the sample covariance matrix, or using an efficient covariance sketching scheme.
[ "Ruiyang Song, Yao Xie, Sebastian Pokutta", "['Ruiyang Song' 'Yao Xie' 'Sebastian Pokutta']" ]
stat.ML cs.LG
null
1501.06243
null
null
http://arxiv.org/pdf/1501.06243v6
2015-03-25T18:03:15Z
2015-01-26T03:02:51Z
Poisson Matrix Completion
We extend the theory of matrix completion to the case where we make Poisson observations for a subset of entries of a low-rank matrix. We consider the (now) usual matrix recovery formulation through maximum likelihood with proper constraints on the matrix $M$, and establish theoretical upper and lower bounds on the recovery error. Our bounds are nearly optimal up to a factor on the order of $\mathcal{O}(\log(d_1 d_2))$. These bounds are obtained by adapting the arguments used for one-bit matrix completion \cite{davenport20121} (although these two problems are different in nature) and the adaptation requires new techniques exploiting properties of the Poisson likelihood function and tackling the difficulties posed by the locally sub-Gaussian characteristic of the Poisson distribution. Our results highlight a few important distinctions of Poisson matrix completion compared to the prior work in matrix completion including having to impose a minimum signal-to-noise requirement on each observed entry. We also develop an efficient iterative algorithm and demonstrate its good performance in recovering solar flare images.
[ "Yang Cao and Yao Xie", "['Yang Cao' 'Yao Xie']" ]
cs.CV cs.LG
null
1501.06272
null
null
http://arxiv.org/pdf/1501.06272v2
2015-04-19T04:28:58Z
2015-01-26T07:33:40Z
Deep Semantic Ranking Based Hashing for Multi-Label Image Retrieval
With the rapid growth of web images, hashing has received increasing interest in large scale image retrieval. Research efforts have been devoted to learning compact binary codes that preserve semantic similarity based on labels. However, most of these hashing methods are designed to handle simple binary similarity. The complex multilevel semantic structure of images associated with multiple labels has not yet been well explored. Here we propose a deep semantic ranking based method for learning hash functions that preserve multilevel semantic similarity between multi-label images. In our approach, a deep convolutional neural network is incorporated into the hash functions to jointly learn feature representations and mappings from them to hash codes, which avoids the limited semantic representation power of hand-crafted features. Meanwhile, a ranking list that encodes the multilevel similarity information is employed to guide the learning of such deep hash functions. An effective scheme based on surrogate loss is used to solve the intractable optimization problem of the nonsmooth and multivariate ranking measures involved in the learning procedure. Experimental results show the superiority of our proposed approach over several state-of-the-art hashing methods in terms of ranking evaluation metrics when tested on multi-label image datasets.
[ "['Fang Zhao' 'Yongzhen Huang' 'Liang Wang' 'Tieniu Tan']", "Fang Zhao, Yongzhen Huang, Liang Wang, Tieniu Tan" ]
cs.LG
null
1501.06284
null
null
http://arxiv.org/pdf/1501.06284v1
2015-01-26T08:30:55Z
2015-01-26T08:30:55Z
On a Family of Decomposable Kernels on Sequences
In many applications data is naturally presented in terms of orderings of some basic elements or symbols. Reasoning about such data requires a notion of similarity capable of handling sequences of different lengths. In this paper we describe a family of Mercer kernel functions for such sequentially structured data. The family is characterized by a decomposable structure in terms of symbol-level and structure-level similarities, representing a specific combination of kernels which allows for efficient computation. We provide an experimental evaluation on sequential classification tasks, comparing kernels from our family to a state-of-the-art sequence kernel, the Global Alignment kernel, which has been shown to outperform Dynamic Time Warping.
[ "Andrea Baisero, Florian T. Pokorny, Carl Henrik Ek", "['Andrea Baisero' 'Florian T. Pokorny' 'Carl Henrik Ek']" ]
stat.ML cs.CV cs.LG
null
1501.06450
null
null
http://arxiv.org/pdf/1501.06450v2
2015-03-18T14:48:42Z
2015-01-26T15:37:22Z
IT-map: an Effective Nonlinear Dimensionality Reduction Method for Interactive Clustering
Scientists in many fields have the common and basic need of dimensionality reduction: visualizing the underlying structure of massive multivariate data in a low-dimensional space. However, many dimensionality reduction methods confront the so-called "crowding problem": clusters tend to overlap with each other in the embedding. Previous work sought to avoid this problem by making clusters maximally separated in the embedding. In contrast, the proposed in-tree (IT) based method, called IT-map, allows clusters in the embedding to be locally overlapped, while seeking to make them distinguishable by some small yet key parts. IT-map provides a simple, effective and novel solution to cluster-preserving mapping, which makes it possible to cluster the original data points interactively, and thus should be of general relevance in science and engineering.
[ "Teng Qiu, Yongjie Li", "['Teng Qiu' 'Yongjie Li']" ]
cs.LG
null
1501.06478
null
null
http://arxiv.org/pdf/1501.06478v2
2015-02-02T16:24:38Z
2015-01-26T16:51:34Z
Compressed Support Vector Machines
Support vector machines (SVM) can classify data sets along highly non-linear decision boundaries because of the kernel trick. This expressiveness comes at a price: during test time, the SVM classifier needs to compute the kernel inner product between a test sample and all support vectors. With large training data sets, the time required for this computation can be substantial. In this paper, we introduce a post-processing algorithm, which compresses the learned SVM model by reducing and optimizing the support vectors. We evaluate our algorithm on several medium-scale real-world data sets, demonstrating that it maintains high test accuracy while reducing the test-time evaluation cost by several orders of magnitude---in some cases from hours to seconds. It is fair to say that most of the work in this paper was invented by Burges and Sch\"olkopf almost 20 years ago. For most of the time during which we conducted this research, we were unaware of this prior work. However, in the past two decades, computing power has increased drastically, and we can therefore provide empirical insights that were not possible in their original paper.
[ "['Zhixiang Xu' 'Jacob R. Gardner' 'Stephen Tyree' 'Kilian Q. Weinberger']", "Zhixiang Xu, Jacob R. Gardner, Stephen Tyree, Kilian Q. Weinberger" ]
cs.LG cs.DS stat.ML
null
1501.06521
null
null
http://arxiv.org/pdf/1501.06521v3
2016-02-18T16:37:39Z
2015-01-26T18:48:55Z
Noisy Tensor Completion via the Sum-of-Squares Hierarchy
In the noisy tensor completion problem we observe $m$ entries (whose location is chosen uniformly at random) from an unknown $n_1 \times n_2 \times n_3$ tensor $T$. We assume that $T$ is entry-wise close to being rank $r$. Our goal is to fill in its missing entries using as few observations as possible. Let $n = \max(n_1, n_2, n_3)$. We show that if $m = n^{3/2} r$ then there is a polynomial time algorithm based on the sixth level of the sum-of-squares hierarchy for completing it. Our estimate agrees with almost all of $T$'s entries almost exactly and works even when our observations are corrupted by noise. This is also the first algorithm for tensor completion that works in the overcomplete case when $r > n$, and in fact it works all the way up to $r = n^{3/2-\epsilon}$. Our proofs are short and simple and are based on establishing a new connection between noisy tensor completion (through the language of Rademacher complexity) and the task of refuting random constraint satisfaction problems. This connection seems to have gone unnoticed even in the context of matrix completion. Furthermore, we use this connection to show matching lower bounds. Our main technical result is in characterizing the Rademacher complexity of the sequence of norms that arise in the sum-of-squares relaxations to the tensor nuclear norm. These results point to an interesting new direction: Can we explore computational vs. sample complexity tradeoffs through the sum-of-squares hierarchy?
[ "Boaz Barak and Ankur Moitra", "['Boaz Barak' 'Ankur Moitra']" ]
cs.DL cs.CL cs.LG
10.1002/asi.23179
1501.06587
null
null
http://arxiv.org/abs/1501.06587v1
2015-01-26T21:06:02Z
2015-01-26T21:06:02Z
Measuring academic influence: Not all citations are equal
The importance of a research article is routinely measured by counting how many times it has been cited. However, treating all citations with equal weight ignores the wide variety of functions that citations perform. We want to automatically identify the subset of references in a bibliography that have a central academic influence on the citing paper. For this purpose, we examine the effectiveness of a variety of features for determining the academic influence of a citation. By asking authors to identify the key references in their own work, we created a data set in which citations were labeled according to their academic influence. Using automatic feature selection with supervised machine learning, we found a model for predicting academic influence that achieves good performance on this data set using only four features. The best features, among those we evaluated, were those based on the number of times a reference is mentioned in the body of a citing paper. The performance of these features inspired us to design an influence-primed h-index (the hip-index). Unlike the conventional h-index, it weights citations by how many times a reference is mentioned. According to our experiments, the hip-index is a better indicator of researcher performance than the conventional h-index.
[ "['Xiaodan Zhu' 'Peter Turney' 'Daniel Lemire' 'André Vellino']", "Xiaodan Zhu, Peter Turney, Daniel Lemire, Andr\\'e Vellino" ]
stat.ML cs.IT cs.LG math.IT
null
1501.06598
null
null
http://arxiv.org/pdf/1501.06598v1
2015-01-26T21:52:41Z
2015-01-26T21:52:41Z
Online Nonparametric Regression with General Loss Functions
This paper establishes minimax rates for online regression with arbitrary classes of functions and general losses. We show that below a certain threshold for the complexity of the function class, the minimax rates depend on both the curvature of the loss function and the sequential complexities of the class. Above this threshold, the curvature of the loss does not affect the rates. Furthermore, for the case of square loss, our results point to an interesting phenomenon: whenever sequential and i.i.d. empirical entropies match, the rates for statistical and online learning are the same. In addition to the study of minimax regret, we derive a generic forecaster that enjoys the established optimal rates. We also provide a recipe for designing online prediction algorithms that can be computationally efficient for certain problems. We illustrate the techniques by deriving existing and new forecasters for the case of finite experts and for online linear regression.
[ "['Alexander Rakhlin' 'Karthik Sridharan']", "Alexander Rakhlin and Karthik Sridharan" ]
cs.NE cs.DC cs.LG
null
1501.06633
null
null
http://arxiv.org/pdf/1501.06633v3
2015-01-30T23:50:49Z
2015-01-27T01:19:12Z
maxDNN: An Efficient Convolution Kernel for Deep Learning with Maxwell GPUs
This paper describes maxDNN, a computationally efficient convolution kernel for deep learning with the NVIDIA Maxwell GPU. maxDNN reaches 96.3% computational efficiency on typical deep learning network architectures. The design combines ideas from cuda-convnet2 with the Maxas SGEMM assembly code. We only address forward propagation (FPROP) operation of the network, but we believe that the same techniques used here will be effective for backward propagation (BPROP) as well.
[ "['Andrew Lavin']", "Andrew Lavin" ]
stat.ML cs.DS cs.LG
null
1501.06794
null
null
http://arxiv.org/pdf/1501.06794v1
2015-01-27T15:36:22Z
2015-01-27T15:36:22Z
Computing Functions of Random Variables via Reproducing Kernel Hilbert Space Representations
We describe a method to perform functional operations on probability distributions of random variables. The method uses reproducing kernel Hilbert space representations of probability distributions, and it is applicable to all operations which can be applied to points drawn from the respective distributions. We refer to our approach as {\em kernel probabilistic programming}. We illustrate it on synthetic data, and show how it can be used for nonparametric structural equation models, with an application to causal inference.
[ "['Bernhard Schölkopf' 'Krikamol Muandet' 'Kenji Fukumizu' 'Jonas Peters']", "Bernhard Sch\\\"olkopf, Krikamol Muandet, Kenji Fukumizu, Jonas Peters" ]
cs.LG
10.1109/TITB.2012.2227271
1501.07093
null
null
http://arxiv.org/abs/1501.07093v2
2015-01-31T03:22:15Z
2015-01-28T13:26:02Z
Novel Approaches for Predicting Risk Factors of Atherosclerosis
Coronary heart disease (CHD), caused by hardening of the artery walls due to cholesterol, known as atherosclerosis, is responsible for a large number of deaths worldwide. The disease progression is slow and asymptomatic and may lead to sudden cardiac arrest, stroke or myocardial infarction. Presently, imaging techniques are being employed to understand the molecular and metabolic activity of atherosclerotic plaques to estimate the risk. Though imaging methods are able to provide some information on plaque metabolism, they lack the required resolution and sensitivity for detection. In this paper we consider the clinical observations and habits of individuals for predicting the risk factors of CHD. The identification of risk factors helps in stratifying patients for further intensive tests such as nuclear imaging or coronary angiography. We present a novel approach for predicting the risk factors of atherosclerosis with an in-built imputation algorithm and particle swarm optimization (PSO). We compare the performance of our methodology with other machine learning techniques on the STULONG dataset, which is based on a longitudinal study of middle aged individuals lasting twenty years. Our methodology, powered by PSO search, has identified physical inactivity as one of the risk factors for the onset of atherosclerosis, in addition to other already known factors. The decision rules extracted by our methodology are able to predict the risk factors with an accuracy of $99.73\%$, which is higher than the accuracies obtained by applying the state-of-the-art machine learning techniques presently employed in atherosclerosis risk identification studies.
[ "['V. Sree Hari Rao' 'M. Naresh Kumar']", "V. Sree Hari Rao and M. Naresh Kumar" ]
cs.LG cs.NE stat.ML
null
1501.07227
null
null
http://arxiv.org/pdf/1501.07227v5
2016-02-10T22:29:26Z
2015-01-28T18:42:42Z
A Neural Network Anomaly Detector Using the Random Cluster Model
The random cluster model is used to define an upper bound on a distance measure as a function of the number of data points to be classified and the expected value of the number of classes to form in a hybrid K-means and regression classification methodology, with the intent of detecting anomalies. Conditions are given for the identification of classes which contain anomalies and individual anomalies within identified classes. A neural network model describes the decision region-separating surface for offline storage and recall in any new anomaly detection.
[ "['Robert A. Murphy']", "Robert A. Murphy" ]
cs.NA cs.LG math.OC
null
1501.07242
null
null
http://arxiv.org/pdf/1501.07242v2
2015-06-15T15:11:54Z
2015-01-28T19:12:40Z
Escaping the Local Minima via Simulated Annealing: Optimization of Approximately Convex Functions
We consider the problem of optimizing an approximately convex function over a bounded convex set in $\mathbb{R}^n$ using only function evaluations. The problem is reduced to sampling from an \emph{approximately} log-concave distribution using the Hit-and-Run method, which is shown to have the same $\mathcal{O}^*$ complexity as sampling from log-concave distributions. In addition to extending the analysis for log-concave distributions to approximately log-concave distributions, the implementation of the 1-dimensional sampler of the Hit-and-Run walk requires new methods and analysis. The algorithm is then based on simulated annealing, which does not rely on first-order conditions and is therefore essentially immune to local minima. We then apply the method to different motivating problems. In the context of zeroth order stochastic convex optimization, the proposed method produces an $\epsilon$-minimizer after $\mathcal{O}^*(n^{7.5}\epsilon^{-2})$ noisy function evaluations by inducing an $\mathcal{O}(\epsilon/n)$-approximately log-concave distribution. We also consider in detail the case when the "amount of non-convexity" decays towards the optimum of the function. Other applications of the method discussed in this work include private computation of empirical risk minimizers, two-stage stochastic programming, and approximate dynamic programming for online learning.
[ "['Alexandre Belloni' 'Tengyuan Liang' 'Hariharan Narayanan'\n 'Alexander Rakhlin']", "Alexandre Belloni, Tengyuan Liang, Hariharan Narayanan, Alexander\n Rakhlin" ]
cs.LG
null
1501.07315
null
null
http://arxiv.org/pdf/1501.07315v3
2017-01-26T15:49:01Z
2015-01-29T00:15:28Z
Per-Block-Convex Data Modeling by Accelerated Stochastic Approximation
Applications involving dictionary learning, non-negative matrix factorization, subspace clustering, and parallel factor tensor decomposition tasks strongly motivate algorithms for per-block-convex and non-smooth optimization problems. By leveraging the stochastic approximation paradigm and first-order acceleration schemes, this paper develops an online and modular learning algorithm for a large class of non-convex data models, where convexity is manifested only per-block of variables whenever the rest of them are held fixed. The advocated algorithm incurs computational complexity that scales linearly with the number of unknowns. Under minimal assumptions on the cost functions of the composite optimization task, without bounding constraints on the optimization variables, or any explicit information on bounds of Lipschitz coefficients, the expected cost evaluated online at the resultant iterates is provably convergent with quadratic rate to an accumulation point of the (per-block) minima, while subgradients of the expected cost asymptotically vanish in the mean-squared sense. The merits of the general approach are demonstrated in two online learning setups: (i) Robust linear regression using a sparsity-cognizant total least-squares criterion; and (ii) semi-supervised dictionary learning for network-wide link load tracking and imputation with missing entries. Numerical tests on synthetic and real data highlight the potential of the proposed framework for streaming data analytics by demonstrating superior performance over block coordinate descent, and reduced complexity relative to the popular alternating-direction method of multipliers.
[ "Konstantinos Slavakis and Georgios B. Giannakis", "['Konstantinos Slavakis' 'Georgios B. Giannakis']" ]
cs.LG stat.ML
null
1501.07320
null
null
http://arxiv.org/pdf/1501.07320v2
2015-05-18T21:53:34Z
2015-01-29T01:01:34Z
Tensor Factorization via Matrix Factorization
Tensor factorization arises in many machine learning applications, such as knowledge base modeling and parameter estimation in latent variable models. However, numerical methods for tensor factorization have not reached the level of maturity of matrix factorization methods. In this paper, we propose a new method for CP tensor factorization that uses random projections to reduce the problem to simultaneous matrix diagonalization. Our method is conceptually simple and also applies to non-orthogonal and asymmetric tensors of arbitrary order. We prove that a small number of random projections essentially preserves the spectral information in the tensor, allowing us to remove the dependence on the eigengap that plagued earlier tensor-to-matrix reductions. Experimentally, our method outperforms existing tensor factorization methods on both simulated data and two real datasets.
[ "['Volodymyr Kuleshov' 'Arun Tejasvi Chaganty' 'Percy Liang']", "Volodymyr Kuleshov and Arun Tejasvi Chaganty and Percy Liang" ]
cs.IT cs.LG math.IT stat.ML
null
1501.07340
null
null
http://arxiv.org/pdf/1501.07340v1
2015-01-29T04:02:39Z
2015-01-29T04:02:39Z
Sequential Probability Assignment with Binary Alphabets and Large Classes of Experts
We analyze the problem of sequential probability assignment for binary outcomes with side information and logarithmic loss, where regret---or, redundancy---is measured with respect to a (possibly infinite) class of experts. We provide upper and lower bounds for minimax regret in terms of sequential complexities of the class. These complexities were recently shown to give matching (up to logarithmic factors) upper and lower bounds for sequential prediction with general convex Lipschitz loss functions (Rakhlin and Sridharan, 2015). To deal with unbounded gradients of the logarithmic loss, we present a new analysis that employs a sequential chaining technique with a Bernstein-type bound. The introduced complexities are intrinsic to the problem of sequential probability assignment, as illustrated by our lower bound. We also consider an example of a large class of experts parametrized by vectors in a high-dimensional Euclidean ball (or a Hilbert ball). The typical discretization approach fails, while our techniques give a non-trivial bound. For this problem we also present an algorithm based on regularization with a self-concordant barrier. This algorithm is of an independent interest, as it requires a bound on the function values rather than gradients.
[ "['Alexander Rakhlin' 'Karthik Sridharan']", "Alexander Rakhlin and Karthik Sridharan" ]
cs.LG cs.NE
10.1016/j.knosys.2015.10.021
1501.07399
null
null
http://arxiv.org/abs/1501.07399v1
2015-01-29T10:18:46Z
2015-01-29T10:18:46Z
Particle swarm optimization for time series motif discovery
Efficiently finding similar segments or motifs in time series data is a fundamental task that, due to the ubiquity of these data, is present in a wide range of domains and situations. Because of this, countless solutions have been devised but, to date, none of them seems to be fully satisfactory and flexible. In this article, we propose an innovative standpoint and present a solution coming from it: an anytime multimodal optimization algorithm for time series motif discovery based on particle swarms. By considering data from a variety of domains, we show that this solution is extremely competitive when compared to the state-of-the-art, obtaining comparable motifs in considerably less time using minimal memory. In addition, we show that it is robust to different implementation choices and see that it offers an unprecedented degree of flexibility with regard to the task. All these qualities make the presented solution stand out as one of the most prominent candidates for motif discovery in long time series streams. Besides, we believe the proposed standpoint can be exploited in further time series analysis and mining tasks, widening the scope of research and potentially yielding novel effective solutions.
[ "Joan Serr\\`a and Josep Lluis Arcos", "['Joan Serrà' 'Josep Lluis Arcos']" ]
stat.ML cs.LG
null
1501.07430
null
null
http://arxiv.org/pdf/1501.07430v2
2015-06-03T00:45:09Z
2015-01-29T12:13:01Z
Bayesian Hierarchical Clustering with Exponential Family: Small-Variance Asymptotics and Reducibility
Bayesian hierarchical clustering (BHC) is an agglomerative clustering method, where a probabilistic model is defined and its marginal likelihoods are evaluated to decide which clusters to merge. While BHC provides a few advantages over traditional distance-based agglomerative clustering algorithms, successive evaluation of marginal likelihoods and careful hyperparameter tuning are cumbersome and limit the scalability. In this paper we relax BHC into a non-probabilistic formulation, exploring small-variance asymptotics in conjugate-exponential models. We develop a novel clustering algorithm, referred to as relaxed BHC (RBHC), from the asymptotic limit of the BHC model that exhibits the scalability of distance-based agglomerative clustering algorithms as well as the flexibility of Bayesian nonparametric models. We also investigate the reducibility of the dissimilarity measure that emerges from the asymptotic limit of the BHC model, allowing us to use scalable algorithms such as the nearest neighbor chain algorithm. Numerical experiments on both synthetic and real-world datasets demonstrate the validity and high performance of our method.
[ "Juho Lee and Seungjin Choi", "['Juho Lee' 'Seungjin Choi']" ]
cs.IR cs.LG
10.1145/2668067.2668077
1501.07467
null
null
http://arxiv.org/abs/1501.07467v1
2015-01-29T14:54:12Z
2015-01-29T14:54:12Z
Regression and Learning to Rank Aggregation for User Engagement Evaluation
User engagement refers to the amount of interaction an instance (e.g., a tweet, news article, or forum post) achieves. Ranking the items in social media websites based on the amount of user participation in them can be used in different applications, such as recommender systems. In this paper, we consider a tweet containing a rating for a movie as an instance and focus on ranking the instances of each user based on their engagement, i.e., the total number of retweets and favorites it will gain. For this task, we define several features which can be extracted from the meta-data of each tweet. The features are partitioned into three categories: user-based, movie-based, and tweet-based. We show that in order to obtain good results, features from all categories should be considered. We exploit regression and learning to rank methods to rank the tweets and propose to aggregate the results of the regression and learning to rank methods to achieve better performance. We have run our experiments on an extended version of the MovieTweeting dataset provided by the ACM RecSys Challenge 2014. The results show that the learning to rank approach outperforms most of the regression models and that the combination can improve the performance significantly.
[ "['Hamed Zamani' 'Azadeh Shakery' 'Pooya Moradi']", "Hamed Zamani, Azadeh Shakery, Pooya Moradi" ]
cs.LG
10.1109/JSYST.2015.2478800
1501.07584
null
null
http://arxiv.org/abs/1501.07584v1
2015-01-29T20:41:29Z
2015-01-29T20:41:29Z
Efficient Divide-And-Conquer Classification Based on Feature-Space Decomposition
This study presents a divide-and-conquer (DC) approach based on feature space decomposition for classification. When large-scale datasets are present, typical approaches employ truncated kernel methods on the feature space or DC approaches on the sample space. However, this does not guarantee separability between classes, owing to overfitting. To overcome such problems, this work proposes a novel DC approach on feature spaces consisting of three steps. First, we divide the feature space into several subspaces using the decomposition method proposed in this paper. Subsequently, these feature subspaces are sent to individual local classifiers for training. Finally, the outcomes of the local classifiers are fused together to generate the final classification results. Experiments on large-scale datasets are carried out for performance evaluation. The results show that the error rates of the proposed DC method decrease compared with the state-of-the-art fast SVM solvers, e.g., reducing error rates by 10.53% and 7.53% on the RCV1 and covtype datasets respectively.
[ "Qi Guo, Bo-Wei Chen, Feng Jiang, Xiangyang Ji, and Sun-Yuan Kung", "['Qi Guo' 'Bo-Wei Chen' 'Feng Jiang' 'Xiangyang Ji' 'Sun-Yuan Kung']" ]
cs.LG
null
1501.07627
null
null
http://arxiv.org/pdf/1501.07627v1
2015-01-29T22:13:02Z
2015-01-29T22:13:02Z
Representing Objects, Relations, and Sequences
Vector Symbolic Architectures (VSAs) are high-dimensional vector representations of objects (e.g., words, image parts), relations (e.g., sentence structures), and sequences for use with machine learning algorithms. They consist of a vector addition operator for representing a collection of unordered objects, a Binding operator for associating groups of objects, and a methodology for encoding complex structures. We first develop Constraints that machine learning imposes upon VSAs: for example, similar structures must be represented by similar vectors. The constraints suggest that current VSAs should represent phrases ("The smart Brazilian girl") by binding sums of terms, in addition to simply binding the terms directly. We show that matrix multiplication can be used as the binding operator for a VSA, and that matrix elements can be chosen at random. A consequence for living systems is that binding is mathematically possible without the need to specify, in advance, precise neuron-to-neuron connection properties for large numbers of synapses. A VSA that incorporates these ideas, MBAT (Matrix Binding of Additive Terms), is described that satisfies all Constraints. With respect to machine learning, for some types of problems appropriate VSA representations permit us to prove learnability, rather than relying on simulations. We also propose dividing machine (and neural) learning and representation into three Stages, with differing roles for learning in each stage. For neural modeling, we give "representational reasons" for nervous systems to have many recurrent connections, as well as for the importance of phrases in language processing. Sizing simulations and analyses suggest that VSAs in general, and MBAT in particular, are ready for real-world applications.
[ "['Stephen I. Gallant' 'T. Wendy Okaywe']", "Stephen I. Gallant and T. Wendy Okaywe" ]
cs.CV cs.LG
null
1501.07645
null
null
http://arxiv.org/pdf/1501.07645v2
2015-05-17T03:32:22Z
2015-01-30T02:08:51Z
Hyper-parameter optimization of Deep Convolutional Networks for object recognition
Recently, sequential model based optimization (SMBO) has emerged as a promising hyper-parameter optimization strategy in machine learning. In this work, we investigate SMBO to identify architecture hyper-parameters of deep convolution networks (DCNs) for object recognition. We propose a simple SMBO strategy that starts from a set of random initial DCN architectures to generate new architectures which, after training, perform well on a given dataset. Using the proposed SMBO strategy we are able to identify a number of DCN architectures that produce results comparable to state-of-the-art results on object recognition benchmarks.
[ "['Sachin S. Talathi']", "Sachin S. Talathi" ]
stat.ME cs.LG
null
1502.00060
null
null
http://arxiv.org/pdf/1502.00060v2
2015-09-15T05:40:40Z
2015-01-31T03:07:40Z
A Random Matrix Theoretical Approach to Early Event Detection in Smart Grid
Power systems are developing very fast nowadays, both in size and in complexity; this situation is a challenge for Early Event Detection (EED). This paper proposes a data-driven unsupervised learning method to handle this challenge. Specifically, random matrix theories (RMTs) are introduced as the statistical foundations for random matrix models (RMMs); based on the RMMs, linear eigenvalue statistics (LESs) are defined via test functions as the system indicators. By comparing the values of the LES between the experimental and the theoretical ones, anomaly detection is conducted. Furthermore, we develop a 3D power-map to visualize the LES; it provides a robust auxiliary decision-making mechanism for the operators. In this sense, the proposed method conducts EED with a purely statistical procedure, requiring no knowledge of system topologies, unit operation/control models, etc. The LES, as a key ingredient of this procedure, is a high dimensional indicator derived directly from raw data. As an unsupervised learning indicator, the LES is much more sensitive than low dimensional indicators obtained from supervised learning. With this statistical procedure, the proposed method is universal and fast; moreover, it is robust against traditional EED challenges (such as error accumulation, spurious correlations, and even bad data in the core area). Case studies, with both simulated data and real ones, validate the proposed method. To manage large-scale distributed systems, data fusion is mentioned as another data processing ingredient.
[ "['Xing He' 'Robert Caiming Qiu' 'Qian Ai' 'Yinshuang Cao' 'Jie Gu'\n 'Zhijian Jin']", "Xing He, Robert Caiming Qiu, Qian Ai, Yinshuang Cao, Jie Gu, Zhijian\n Jin" ]
stat.ML cs.AI cs.LG
10.1109/TITB.2011.2171978
1502.00062
null
null
http://arxiv.org/abs/1502.00062v1
2015-01-31T03:15:12Z
2015-01-31T03:15:12Z
A New Intelligence Based Approach for Computer-Aided Diagnosis of Dengue Fever
Identification of the influential clinical symptoms and laboratory features that help in the diagnosis of dengue fever in the early phase of the illness would aid in designing effective public health management and virological surveillance strategies. Keeping this as our main objective, we develop in this paper a new computational intelligence based methodology that predicts the diagnosis in real time, minimizing the number of false positives and false negatives. Our methodology consists of three major components: (i) a novel missing value imputation procedure that can be applied to any data set consisting of categorical (nominal) and/or numeric (real or integer) attributes; (ii) a wrapper based feature selection method with genetic search for extracting a subset of the most influential symptoms that can diagnose the illness; and (iii) an alternating decision tree method that employs boosting for generating highly accurate decision rules. The predictive models developed using our methodology are found to be more accurate than the state-of-the-art methodologies used in the diagnosis of dengue fever.
[ "Vadrevu Sree Hari Rao and Mallenahalli Naresh Kumar", "['Vadrevu Sree Hari Rao' 'Mallenahalli Naresh Kumar']" ]
cs.LG
null
1502.00064
null
null
http://arxiv.org/pdf/1502.00064v1
2015-01-31T03:40:17Z
2015-01-31T03:40:17Z
A Batchwise Monotone Algorithm for Dictionary Learning
We propose a batchwise monotone algorithm for dictionary learning. Unlike the state-of-the-art dictionary learning algorithms, which impose sparsity constraints on a sample-by-sample basis, we instead treat the samples as a batch and impose the sparsity constraint on the whole. The benefit of batchwise optimization is that the non-zeros can be better allocated across the samples, leading to a better approximation of the whole. To accomplish this, we propose procedures to switch non-zeros in both rows and columns in the support of the coefficient matrix to reduce the reconstruction error. We prove that, in the proposed support switching procedure, the objective of the algorithm, i.e., the reconstruction error, decreases monotonically and converges. Furthermore, we introduce a block orthogonal matching pursuit algorithm that also operates on sample batches to provide a warm start. Experiments on both natural image patches and UCI data sets show that the proposed algorithm produces a better approximation at the same sparsity levels compared to the state-of-the-art algorithms.
[ "['Huan Wang' 'John Wright' 'Daniel Spielman']", "Huan Wang, John Wright, Daniel Spielman" ]
cs.DB cs.DC cs.LG
null
1502.00068
null
null
http://arxiv.org/pdf/1502.00068v2
2015-03-08T22:02:24Z
2015-01-31T04:51:58Z
TuPAQ: An Efficient Planner for Large-scale Predictive Analytic Queries
The proliferation of massive datasets combined with the development of sophisticated analytical techniques have enabled a wide variety of novel applications such as improved product recommendations, automatic image tagging, and improved speech-driven interfaces. These and many other applications can be supported by Predictive Analytic Queries (PAQs). A major obstacle to supporting PAQs is the challenging and expensive process of identifying and training an appropriate predictive model. Recent efforts aiming to automate this process have focused on single node implementations and have assumed that model training itself is a black box, thus limiting the effectiveness of such approaches on large-scale problems. In this work, we build upon these recent efforts and propose an integrated PAQ planning architecture that combines advanced model search techniques, bandit resource allocation via runtime algorithm introspection, and physical optimization via batching. The result is TuPAQ, a component of the MLbase system, which solves the PAQ planning problem with comparable quality to exhaustive strategies but an order of magnitude more efficiently than the standard baseline approach, and can scale to models trained on terabytes of data across hundreds of machines.
[ "['Evan R. Sparks' 'Ameet Talwalkar' 'Michael J. Franklin'\n 'Michael I. Jordan' 'Tim Kraska']", "Evan R. Sparks, Ameet Talwalkar, Michael J. Franklin, Michael I.\n Jordan, Tim Kraska" ]
stat.ML cs.LG q-bio.NC
null
1502.00093
null
null
http://arxiv.org/pdf/1502.00093v1
2015-01-31T11:58:26Z
2015-01-31T11:58:26Z
Deep learning of fMRI big data: a novel approach to subject-transfer decoding
As a technology for reading brain states from measurable brain activities, brain decoding is widely applied in industry and medical science. In spite of the high demand in these applications for a universal decoder that can be applied to all individuals simultaneously, large variation in brain activities across individuals has limited the scope of many studies to the development of individual-specific decoders. In this study, we used a deep neural network (DNN), a nonlinear hierarchical model, to construct a subject-transfer decoder. Our decoder is the first successful DNN-based subject-transfer decoder. When applied to a large-scale functional magnetic resonance imaging (fMRI) database, our DNN-based decoder achieved higher decoding accuracy than other baseline methods, including support vector machines (SVM). In order to analyze the knowledge acquired by this decoder, we applied principal sensitivity analysis (PSA) to the decoder and visualized the discriminative features that are common to all subjects in the dataset. Our PSA successfully visualized the subject-independent features contributing to the subject-transferability of the trained decoder.
[ "['Sotetsu Koyamada' 'Yumi Shikauchi' 'Ken Nakae' 'Masanori Koyama'\n 'Shin Ishii']", "Sotetsu Koyamada and Yumi Shikauchi and Ken Nakae and Masanori Koyama\n and Shin Ishii" ]
cs.IR cs.LG
null
1502.00094
null
null
http://arxiv.org/pdf/1502.00094v1
2015-01-31T12:15:53Z
2015-01-31T12:15:53Z
Twitter Hash Tag Recommendation
The rise in popularity of microblogging services like Twitter has led to increased use of content annotation strategies like the hashtag. Hashtags provide users with a tagging mechanism to help organize, group, and create visibility for their posts. This is a simple idea, but it can be challenging for users in practice, which leads to infrequent usage. In this paper, we investigate various methods of recommending hashtags as new posts are created, to encourage more widespread adoption and usage. Hashtag recommendation comes with numerous challenges, including processing huge volumes of streaming data and content which is small and noisy. We investigate preprocessing methods to reduce noise in the data and determine an effective method of hashtag recommendation based on popular classification algorithms.
[ "Roman Dovgopol, Matt Nohelty", "['Roman Dovgopol' 'Matt Nohelty']" ]
stat.ML cs.LG
null
1502.00133
null
null
http://arxiv.org/pdf/1502.00133v1
2015-01-31T16:18:14Z
2015-01-31T16:18:14Z
Sparse Dueling Bandits
The dueling bandit problem is a variation of the classical multi-armed bandit in which the allowable actions are noisy comparisons between pairs of arms. This paper focuses on a new approach for finding the "best" arm according to the Borda criterion using noisy comparisons. We prove that in the absence of structural assumptions, the sample complexity of this problem is proportional to the sum of the inverse squared gaps between the Borda scores of each suboptimal arm and the best arm. We explore this dependence further and consider structural constraints on the pairwise comparison matrix (a particular form of sparsity natural to this problem) that can significantly reduce the sample complexity. This motivates a new algorithm called Successive Elimination with Comparison Sparsity (SECS) that exploits sparsity to find the Borda winner using fewer samples than standard algorithms. We also evaluate the new algorithm experimentally with synthetic and real data. The results show that the sparsity model and the new algorithm can provide significant improvements over standard approaches.
[ "Kevin Jamieson, Sumeet Katariya, Atul Deshpande and Robert Nowak", "['Kevin Jamieson' 'Sumeet Katariya' 'Atul Deshpande' 'Robert Nowak']" ]
cs.SI cond-mat.dis-nn cs.LG math.PR
10.1109/ISIT.2015.7282642
1502.00163
null
null
http://arxiv.org/abs/1502.00163v2
2015-06-10T20:50:30Z
2015-01-31T21:20:53Z
Spectral Detection in the Censored Block Model
We consider the problem of partially recovering hidden binary variables from the observation of (few) censored edge weights, a problem with applications in community detection, correlation clustering and synchronization. We describe two spectral algorithms for this task based on the non-backtracking and the Bethe Hessian operators. These algorithms are shown to be asymptotically optimal for the partial recovery problem, in that they detect the hidden assignment as soon as it is information theoretically possible to do so.
[ "Alaa Saade, Florent Krzakala, Marc Lelarge and Lenka Zdeborov\\'a", "['Alaa Saade' 'Florent Krzakala' 'Marc Lelarge' 'Lenka Zdeborová']" ]
cs.NA cs.DS cs.LG math.NA stat.ML
10.1109/TSP.2017.2649482
1502.00182
null
null
http://arxiv.org/abs/1502.00182v3
2017-03-16T06:41:34Z
2015-02-01T00:57:57Z
High Dimensional Low Rank plus Sparse Matrix Decomposition
This paper is concerned with the problem of low rank plus sparse matrix decomposition for big data. Conventional algorithms for matrix decomposition use the entire data to extract the low-rank and sparse components, and are based on optimization problems with complexity that scales with the dimension of the data, which limits their scalability. Furthermore, existing randomized approaches mostly rely on uniform random sampling, which is quite inefficient for many real world data matrices that exhibit additional structures (e.g. clustering). In this paper, a scalable subspace-pursuit approach that transforms the decomposition problem to a subspace learning problem is proposed. The decomposition is carried out using a small data sketch formed from sampled columns/rows. Even when the data is sampled uniformly at random, it is shown that the sufficient number of sampled columns/rows is roughly O(r\mu), where \mu is the coherency parameter and r the rank of the low rank component. In addition, adaptive sampling algorithms are proposed to address the problem of column/row sampling from structured data. We provide an analysis of the proposed method with adaptive sampling and show that adaptive sampling makes the required number of sampled columns/rows invariant to the distribution of the data. The proposed approach is amenable to online implementation and an online scheme is proposed.
[ "Mostafa Rahmani, George Atia", "['Mostafa Rahmani' 'George Atia']" ]
cond-mat.stat-mech cs.LG q-bio.NC stat.ML
10.1103/PhysRevE.91.050101
1502.00186
null
null
http://arxiv.org/abs/1502.00186v3
2015-05-02T03:54:30Z
2015-02-01T02:23:12Z
Advanced Mean Field Theory of Restricted Boltzmann Machine
Learning in restricted Boltzmann machine is typically hard due to the computation of gradients of log-likelihood function. To describe the network state statistics of the restricted Boltzmann machine, we develop an advanced mean field theory based on the Bethe approximation. Our theory provides an efficient message passing based method that evaluates not only the partition function (free energy) but also its gradients without requiring statistical sampling. The results are compared with those obtained by the computationally expensive sampling based method.
[ "['Haiping Huang' 'Taro Toyoizumi']", "Haiping Huang and Taro Toyoizumi" ]
cs.LG stat.ML
null
1502.00231
null
null
http://arxiv.org/pdf/1502.00231v1
2015-02-01T10:44:26Z
2015-02-01T10:44:26Z
Feature Selection with Redundancy-complementariness Dispersion
Feature selection has attracted significant attention in data mining and machine learning in the past decades. Many existing feature selection methods eliminate redundancy by measuring the pairwise inter-correlation of features, whereas the complementariness of features and higher inter-correlation among more than two features are ignored. In this study, a modification item concerning the complementariness of features is introduced into the evaluation criterion of features. Additionally, in order to identify the interference effect of already-selected False Positives (FPs), the redundancy-complementariness dispersion is also taken into account to adjust the measurement of pairwise inter-correlation of features. To illustrate the effectiveness of the proposed method, classification experiments are conducted with four frequently used classifiers on ten datasets. The classification results verify the superiority of the proposed method compared with five representative feature selection methods.
[ "Zhijun Chen, Chaozhong Wu, Yishi Zhang, Zhen Huang, Bin Ran, Ming\n Zhong, Nengchao Lyu", "['Zhijun Chen' 'Chaozhong Wu' 'Yishi Zhang' 'Zhen Huang' 'Bin Ran'\n 'Ming Zhong' 'Nengchao Lyu']" ]
cs.LG cs.AI
null
1502.00245
null
null
http://arxiv.org/pdf/1502.00245v1
2015-02-01T12:57:40Z
2015-02-01T12:57:40Z
Injury risk prediction for traffic accidents in Porto Alegre/RS, Brazil
This study describes the experimental application of Machine Learning techniques to build prediction models that can assess the injury risk associated with traffic accidents. This work uses a freely available data set of traffic accident records that took place in the city of Porto Alegre/RS (Brazil) during the year 2013. This study also provides an analysis of the most important attributes of a traffic accident that could produce an outcome of injury to the people involved in the accident.
[ "Christian S. Perone", "['Christian S. Perone']" ]
cs.LG cs.CV
null
1502.00363
null
null
http://arxiv.org/pdf/1502.00363v1
2015-02-02T05:30:44Z
2015-02-02T05:30:44Z
Iterated Support Vector Machines for Distance Metric Learning
Distance metric learning aims to learn from the given training data a valid distance metric, with which the similarity between data samples can be more effectively evaluated for classification. Metric learning is often formulated as a convex or nonconvex optimization problem, while many existing metric learning algorithms become inefficient for large scale problems. In this paper, we formulate metric learning as a kernel classification problem, and solve it by iterated training of support vector machines (SVM). The new formulation is easy to implement, efficient in training, and tractable for large-scale problems. Two novel metric learning models, namely Positive-semidefinite Constrained Metric Learning (PCML) and Nonnegative-coefficient Constrained Metric Learning (NCML), are developed. Both PCML and NCML can guarantee the global optimality of their solutions. Experimental results on UCI dataset classification, handwritten digit recognition, face verification and person re-identification demonstrate that the proposed metric learning methods achieve higher classification accuracy than state-of-the-art methods and they are significantly more efficient in training.
[ "['Wangmeng Zuo' 'Faqiang Wang' 'David Zhang' 'Liang Lin' 'Yuchi Huang'\n 'Deyu Meng' 'Lei Zhang']", "Wangmeng Zuo, Faqiang Wang, David Zhang, Liang Lin, Yuchi Huang, Deyu\n Meng, Lei Zhang" ]
cs.CL cs.LG
null
1502.00512
null
null
http://arxiv.org/pdf/1502.00512v1
2015-02-02T15:27:37Z
2015-02-02T15:27:37Z
Scaling Recurrent Neural Network Language Models
This paper investigates the scaling properties of Recurrent Neural Network Language Models (RNNLMs). We discuss how to train very large RNNs on GPUs and address the questions of how RNNLMs scale with respect to model size, training-set size, computational costs and memory. Our analysis shows that despite being more costly to train, RNNLMs obtain much lower perplexities on standard benchmarks than n-gram models. We train the largest known RNNs and present relative word error rate gains of 18% on an ASR task. We also present the new lowest perplexities on the recently released billion word language modelling benchmark, a 1 BLEU point gain on machine translation and a 17% relative hit rate gain in word prediction.
[ "['Will Williams' 'Niranjani Prasad' 'David Mrva' 'Tom Ash' 'Tony Robinson']", "Will Williams, Niranjani Prasad, David Mrva, Tom Ash, Tony Robinson" ]
cs.SD cs.IR cs.LG stat.ML
10.1109/TASLP.2016.2530409
1502.00524
null
null
http://arxiv.org/abs/1502.00524v2
2015-10-23T14:37:45Z
2015-02-02T15:45:38Z
Unsupervised Incremental Learning and Prediction of Music Signals
A system is presented that segments, clusters and predicts musical audio in an unsupervised manner, adjusting the number of (timbre) clusters instantaneously to the audio input. A sequence learning algorithm adapts its structure to a dynamically changing clustering tree. The flow of the system is as follows: 1) segmentation by onset detection, 2) timbre representation of each segment by Mel frequency cepstrum coefficients, 3) discretization by incremental clustering, yielding a tree of different sound classes (e.g. instruments) that can grow or shrink on the fly driven by the instantaneous sound events, resulting in a discrete symbol sequence, 4) extraction of statistical regularities of the symbol sequence, using hierarchical N-grams and the newly introduced conceptual Boltzmann machine, and 5) prediction of the next sound event in the sequence. The system's robustness is assessed with respect to complexity and noisiness of the signal. Clustering in isolation yields an adjusted Rand index (ARI) of 82.7% / 85.7% for data sets of singing voice and drums. Onset detection jointly with clustering achieve an ARI of 81.3% / 76.3% and the prediction of the entire system yields an ARI of 27.2% / 39.2%.
[ "['Ricard Marxer' 'Hendrik Purwins']", "Ricard Marxer and Hendrik Purwins" ]
cs.LG
null
1502.00598
null
null
http://arxiv.org/pdf/1502.00598v3
2016-01-12T08:20:21Z
2015-02-02T20:00:13Z
Lock in Feedback in Sequential Experiments
We often encounter situations in which an experimenter wants to find, by sequential experimentation, $x_{max} = \arg\max_{x} f(x)$, where $f(x)$ is a (possibly unknown) function of a well controllable variable $x$. Taking inspiration from physics and engineering, we have designed a new method to address this problem. In this paper, we first introduce the method in continuous time, and then present two algorithms for use in sequential experiments. Through a series of simulation studies, we show that the method is effective for finding maxima of unknown functions by experimentation, even when the maximum of the functions drifts or when the signal to noise ratio is low.
[ "['Maurits Kaptein' 'Davide Iannuzzi']", "Maurits Kaptein and Davide Iannuzzi" ]
cs.LG cs.NE
null
1502.00702
null
null
http://arxiv.org/pdf/1502.00702v2
2015-06-06T01:57:42Z
2015-02-03T01:38:19Z
Hybrid Orthogonal Projection and Estimation (HOPE): A New Framework to Probe and Learn Neural Networks
In this paper, we propose a novel model for high-dimensional data, called the Hybrid Orthogonal Projection and Estimation (HOPE) model, which combines a linear orthogonal projection and a finite mixture model under a unified generative modeling framework. The HOPE model itself can be learned unsupervised from unlabelled data based on the maximum likelihood estimation as well as discriminatively from labelled data. More interestingly, we have shown the proposed HOPE models are closely related to neural networks (NNs) in a sense that each hidden layer can be reformulated as a HOPE model. As a result, the HOPE framework can be used as a novel tool to probe why and how NNs work, more importantly, to learn NNs in either supervised or unsupervised ways. In this work, we have investigated the HOPE framework to learn NNs for several standard tasks, including image recognition on MNIST and speech recognition on TIMIT. Experimental results have shown that the HOPE framework yields significant performance gains over the current state-of-the-art methods in various types of NN learning problems, including unsupervised feature learning, supervised or semi-supervised learning.
[ "['Shiliang Zhang' 'Hui Jiang']", "Shiliang Zhang and Hui Jiang" ]
stat.ML cs.AI cs.LG stat.AP
null
1502.00725
null
null
http://arxiv.org/pdf/1502.00725v1
2015-02-03T03:45:48Z
2015-02-03T03:45:48Z
Cheaper and Better: Selecting Good Workers for Crowdsourcing
Crowdsourcing provides a popular paradigm for data collection at scale. We study the problem of selecting subsets of workers from a given worker pool to maximize accuracy under a budget constraint. One natural question is whether we should hire as many workers as the budget allows, or restrict ourselves to a small number of top-quality workers. By theoretically analyzing the error rate of a typical setting in crowdsourcing, we frame the worker selection problem as a combinatorial optimization problem and propose an algorithm to solve it efficiently. Empirical results on both simulated and real-world datasets show that our algorithm is able to select a small number of high-quality workers, and performs as well as, and sometimes even better than, the much larger crowds that the budget allows.
[ "Hongwei Li and Qiang Liu", "['Hongwei Li' 'Qiang Liu']" ]
cs.DB cs.CL cs.LG
null
1502.00731
null
null
http://arxiv.org/pdf/1502.00731v4
2015-06-15T22:24:05Z
2015-02-03T04:16:24Z
Incremental Knowledge Base Construction Using DeepDive
Populating a database with unstructured information is a long-standing problem in industry and research that encompasses problems of extraction, cleaning, and integration. Recent names used for this problem include dealing with dark data and knowledge base construction (KBC). In this work, we describe DeepDive, a system that combines database and machine learning ideas to help develop KBC systems, and we present techniques to make the KBC process more efficient. We observe that the KBC process is iterative, and we develop techniques to incrementally produce inference results for KBC systems. We propose two methods for incremental inference, based respectively on sampling and variational techniques. We also study the tradeoff space of these methods and develop a simple rule-based optimizer. DeepDive includes all of these contributions, and we evaluate DeepDive on five KBC systems, showing that it can speed up KBC inference tasks by up to two orders of magnitude with negligible impact on quality.
[ "['Jaeho Shin' 'Sen Wu' 'Feiran Wang' 'Christopher De Sa' 'Ce Zhang'\n 'Christopher Ré']", "Jaeho Shin, Sen Wu, Feiran Wang, Christopher De Sa, Ce Zhang,\n Christopher R\\'e" ]
cs.IR cs.LG
null
1502.01057
null
null
http://arxiv.org/pdf/1502.01057v1
2015-02-03T22:37:37Z
2015-02-03T22:37:37Z
Personalized Web Search
Personalization is important for search engines to improve user experience. Most existing work does pure feature engineering, extracting a large number of session-style features and then training a ranking model. Here we propose a novel way to model both long term and short term user behavior using a Multi-armed bandit algorithm. Our algorithm can generalize session information across users well, and, as an Explore-Exploit style algorithm, it can generalize to new URLs and new users well. Experiments show that our algorithm can improve performance over the default ranking and outperforms several popular Multi-armed bandit algorithms.
[ "['Li Zhou']", "Li Zhou" ]
stat.ML cs.CV cs.LG
10.1109/TIP.2015.2496275
1502.01094
null
null
http://arxiv.org/abs/1502.01094v2
2015-10-27T07:26:59Z
2015-02-04T05:17:50Z
Multimodal Task-Driven Dictionary Learning for Image Classification
Dictionary learning algorithms have been successfully used for both reconstructive and discriminative tasks, where an input signal is represented with a sparse linear combination of dictionary atoms. While these methods are mostly developed for single-modality scenarios, recent studies have demonstrated the advantages of feature-level fusion based on the joint sparse representation of the multimodal inputs. In this paper, we propose a multimodal task-driven dictionary learning algorithm under the joint sparsity constraint (prior) to enforce collaborations among multiple homogeneous/heterogeneous sources of information. In this task-driven formulation, the multimodal dictionaries are learned simultaneously with their corresponding classifiers. The resulting multimodal dictionaries can generate discriminative latent features (sparse codes) from the data that are optimized for a given task such as binary or multiclass classification. Moreover, we present an extension of the proposed formulation using a mixed joint and independent sparsity prior which facilitates more flexible fusion of the modalities at feature level. The efficacy of the proposed algorithms for multimodal classification is illustrated on four different applications -- multimodal face recognition, multi-view face recognition, multi-view action recognition, and multimodal biometric recognition. It is also shown that, compared to the counterpart reconstructive-based dictionary learning algorithms, the task-driven formulations are more computationally efficient in the sense that they can be equipped with more compact dictionaries and still achieve superior performance.
[ "['Soheil Bahrampour' 'Nasser M. Nasrabadi' 'Asok Ray' 'W. Kenneth Jenkins']", "Soheil Bahrampour, Nasser M. Nasrabadi, Asok Ray, W. Kenneth Jenkins" ]
cs.LG stat.ML
null
1502.01176
null
null
http://arxiv.org/pdf/1502.01176v1
2015-02-04T12:27:04Z
2015-02-04T12:27:04Z
Learning Local Invariant Mahalanobis Distances
For many tasks and data types, there are natural transformations to which the data should be invariant or insensitive. For instance, in visual recognition, natural images should be insensitive to rotation and translation. This requirement and its implications have been important in many machine learning applications, and tolerance for image transformations was primarily achieved by using robust feature vectors. In this paper we propose a novel and computationally efficient way to learn a local Mahalanobis metric per datum, and show how we can learn a local invariant metric to any transformation in order to improve performance.
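For reference, a Mahalanobis distance parameterized by a positive semidefinite matrix $M$ has the standard form below; the paper's contribution is to learn a separate, transformation-invariant matrix per datum, which this generic definition does not capture.

$$ d_M(x_1, x_2) = \sqrt{(x_1 - x_2)^\top M \, (x_1 - x_2)}, \qquad M \succeq 0. $$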
[ "['Ethan Fetaya' 'Shimon Ullman']", "Ethan Fetaya and Shimon Ullman" ]
cs.LG stat.ML
10.1109/JSTSP.2015.2402646
1502.01418
null
null
http://arxiv.org/abs/1502.01418v2
2015-02-07T08:29:14Z
2015-02-05T03:03:16Z
RELEAF: An Algorithm for Learning and Exploiting Relevance
Recommender systems, medical diagnosis, network security, etc., require ongoing learning and decision-making in real time. These -- and many others -- represent perfect examples of the opportunities and difficulties presented by Big Data: the available information often arrives from a variety of sources and has diverse features, so that learning from all the sources may be valuable, but integrating what is learned is subject to the curse of dimensionality. This paper develops and analyzes algorithms that allow efficient learning and decision-making while avoiding the curse of dimensionality. We formalize the information available to the learner/decision-maker at a particular time as a context vector which the learner should consider when taking actions. In general the context vector is very high dimensional, but in many settings, the most relevant information is embedded into only a few relevant dimensions. If these relevant dimensions were known in advance, the problem would be simple -- but they are not. Moreover, the relevant dimensions may be different for different actions. Our algorithm learns the relevant dimensions for each action, and makes decisions based on what it has learned. Formally, we build on the structure of a contextual multi-armed bandit by adding and exploiting a relevance relation. We prove a general regret bound for our algorithm whose time order depends only on the maximum number of relevant dimensions among all the actions, which in the special case where the relevance relation is single-valued (a function), reduces to $\tilde{O}(T^{2(\sqrt{2}-1)})$; in the absence of a relevance relation, the best known contextual bandit algorithms achieve regret $\tilde{O}(T^{(D+1)/(D+2)})$, where $D$ is the full dimension of the context vector.
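For concreteness, the exponent $2(\sqrt{2}-1) \approx 0.83$, whereas $(D+1)/(D+2)$ approaches $1$ as the context dimension $D$ grows, so the gap between the two regret rates widens with the dimensionality of the context.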
[ "Cem Tekin and Mihaela van der Schaar", "['Cem Tekin' 'Mihaela van der Schaar']" ]
stat.ML cs.LG stat.ME
null
1502.01493
null
null
http://arxiv.org/pdf/1502.01493v1
2015-02-05T10:45:54Z
2015-02-05T10:45:54Z
A mixture Cox-Logistic model for feature selection from survival and classification data
This paper presents an original approach for jointly fitting survival times and classifying samples into subgroups. The Coxlogit model is a generalized linear model with a common set of selected features for both tasks. Survival times and class labels are here assumed to be conditioned by a common risk score which depends on those features. Learning is then naturally expressed as maximizing the joint probability of subgroup labels and the ordering of survival events, conditioned on a common weight vector. The model is estimated by minimizing a regularized negative log-likelihood through a coordinate descent algorithm. Validation on synthetic and breast cancer data shows that the proposed approach outperforms a standard Cox model or logistic regression when both predicting the survival times and classifying new samples into subgroups. It is also better at selecting informative features for both tasks.
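For reference, the survival part of such a joint objective is typically built on the Cox partial likelihood over the ordering of events; a standard form (our notation, not necessarily the paper's) is

$$ L(\mathbf{w}) = \prod_{i:\,\delta_i = 1} \frac{\exp(\mathbf{w}^\top \mathbf{x}_i)}{\sum_{j \in R(t_i)} \exp(\mathbf{w}^\top \mathbf{x}_j)}, $$

where $R(t_i)$ is the set of samples still at risk at time $t_i$ and $\delta_i$ indicates an observed (uncensored) event; in the Coxlogit model this ordering term is coupled with a logistic term for the subgroup labels through the shared weight vector $\mathbf{w}$.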
[ "['Samuel Branders' \"Roberto D'Ambrosio\" 'Pierre Dupont']", "Samuel Branders, Roberto D'Ambrosio and Pierre Dupont" ]
stat.ML cs.LG math.OC
null
1502.01563
null
null
http://arxiv.org/pdf/1502.01563v1
2015-02-05T14:17:55Z
2015-02-05T14:17:55Z
A PARTAN-Accelerated Frank-Wolfe Algorithm for Large-Scale SVM Classification
Frank-Wolfe algorithms have recently regained the attention of the Machine Learning community. Their solid theoretical properties and sparsity guarantees make them a suitable choice for a wide range of problems in this field. In addition, several variants of the basic procedure exist that improve its theoretical properties and practical performance. In this paper, we investigate the application of some of these techniques to Machine Learning, focusing in particular on a Parallel Tangent (PARTAN) variant of the FW algorithm that has not been previously suggested or studied for this type of problem. We provide experiments both in a standard setting and using a stochastic speed-up technique, showing that the considered algorithms obtain promising results on several medium and large-scale benchmark datasets for SVM classification.
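For context, one iteration of the basic Frank-Wolfe method over a feasible set $\mathcal{D}$ solves a linear subproblem at the current gradient and takes a convex combination step; the PARTAN variant investigated in the paper adds an extrapolation step along previous iterates, which this generic sketch omits.

$$ s_k \in \arg\min_{s \in \mathcal{D}} \langle \nabla f(x_k), s \rangle, \qquad x_{k+1} = (1-\gamma_k)\, x_k + \gamma_k\, s_k, \quad \gamma_k \in [0,1]. $$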
[ "Emanuele Frandi, Ricardo Nanculef, Johan A. K. Suykens", "['Emanuele Frandi' 'Ricardo Nanculef' 'Johan A. K. Suykens']" ]
cs.LG math.PR
null
1502.01632
null
null
http://arxiv.org/pdf/1502.01632v1
2015-02-05T16:37:31Z
2015-02-05T16:37:31Z
A Simple Expression for Mill's Ratio of the Student's $t$-Distribution
I show a simple expression for the Mill's ratio of the Student's t-Distribution. I use it to prove Conjecture 1 in P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Mach. Learn., 47(2-3):235--256, May 2002.
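For readers unfamiliar with the quantity: for a distribution with density $f$ and cumulative distribution function $F$, the Mill's ratio is the upper tail divided by the density, $R(x) = (1 - F(x))/f(x)$; the paper derives a simple expression for this ratio when $f$ and $F$ are those of the Student's $t$-distribution (the generic definition given here is standard and not specific to the paper).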
[ "Francesco Orabona", "['Francesco Orabona']" ]
stat.ML cs.LG
null
1502.01664
null
null
http://arxiv.org/pdf/1502.01664v1
2015-02-05T18:13:34Z
2015-02-05T18:13:34Z
Estimating Optimal Active Learning via Model Retraining Improvement
A central question for active learning (AL) is: "what is the optimal selection?" Defining optimality by classifier loss produces a new characterisation of optimal AL behaviour, by treating expected loss reduction as a statistical target for estimation. This target forms the basis of model retraining improvement (MRI), a novel approach providing a statistical estimation framework for AL. This framework is constructed to address the central question of AL optimality, and to motivate the design of estimation algorithms. MRI allows the exploration of optimal AL behaviour, and the examination of AL heuristics, showing precisely how they make sub-optimal selections. The abstract formulation of MRI is used to provide a new guarantee for AL, that an unbiased MRI estimator should outperform random selection. This MRI framework reveals intricate estimation issues that in turn motivate the construction of new statistical AL algorithms. One new algorithm in particular performs strongly in a large-scale experimental study, compared to standard AL methods. This competitive performance suggests that practical efforts to minimise estimation bias may be important for AL applications.
[ "['Lewis P. G. Evans' 'Niall M. Adams' 'Christoforos Anagnostopoulos']", "Lewis P. G. Evans and Niall M. Adams and Christoforos Anagnostopoulos" ]
cs.CL cs.LG stat.ML
null
1502.01682
null
null
http://arxiv.org/pdf/1502.01682v1
2015-02-05T19:10:26Z
2015-02-05T19:10:26Z
Use of Modality and Negation in Semantically-Informed Syntactic MT
This paper describes the resource- and system-building efforts of an eight-week Johns Hopkins University Human Language Technology Center of Excellence Summer Camp for Applied Language Exploration (SCALE-2009) on Semantically-Informed Machine Translation (SIMT). We describe a new modality/negation (MN) annotation scheme, the creation of a (publicly available) MN lexicon, and two automated MN taggers that we built using the annotation scheme and lexicon. Our annotation scheme isolates three components of modality and negation: a trigger (a word that conveys modality or negation), a target (an action associated with modality or negation) and a holder (an experiencer of modality). We describe how our MN lexicon was semi-automatically produced and we demonstrate that a structure-based MN tagger results in precision around 86% (depending on genre) for tagging of a standard LDC data set. We apply our MN annotation scheme to statistical machine translation using a syntactic framework that supports the inclusion of semantic annotations. Syntactic tags enriched with semantic annotations are assigned to parse trees in the target-language training texts through a process of tree grafting. While the focus of our work is modality and negation, the tree grafting procedure is general and supports other types of semantic information. We exploit this capability by including named entities, produced by a pre-existing tagger, in addition to the MN elements produced by the taggers described in this paper. The resulting system significantly outperformed a linguistically naive baseline model (Hiero), and reached the highest scores yet reported on the NIST 2009 Urdu-English test set. This finding supports the hypothesis that both syntactic and semantic information can improve translation quality.
[ "['Kathryn Baker' 'Michael Bloodgood' 'Bonnie J. Dorr'\n 'Chris Callison-Burch' 'Nathaniel W. Filardo' 'Christine Piatko'\n 'Lori Levin' 'Scott Miller']", "Kathryn Baker, Michael Bloodgood, Bonnie J. Dorr, Chris\n Callison-Burch, Nathaniel W. Filardo, Christine Piatko, Lori Levin and Scott\n Miller" ]
cs.LG stat.ML
null
1502.01705
null
null
http://arxiv.org/pdf/1502.01705v1
2015-02-05T20:28:01Z
2015-02-05T20:28:01Z
A Confident Information First Principle for Parametric Reduction and Model Selection of Boltzmann Machines
Typical dimensionality reduction (DR) methods are often data-oriented, focusing on directly reducing the number of random variables (features) while retaining the maximal variations in the high-dimensional data. In unsupervised situations, one of the main limitations of these methods lies in their dependency on the scale of data features. This paper aims to address the problem from a new perspective and considers model-oriented dimensionality reduction in parameter spaces of binary multivariate distributions. Specifically, we propose a general parameter reduction criterion, called the Confident-Information-First (CIF) principle, to maximally preserve confident parameters and rule out less confident parameters. Formally, the confidence of each parameter can be assessed by its contribution to the expected Fisher information distance within the geometric manifold over the neighbourhood of the underlying real distribution. We then revisit Boltzmann machines (BM) from a model selection perspective and theoretically show that both the fully visible BM (VBM) and the BM with hidden units can be derived from the general binary multivariate distribution using the CIF principle. This can help us uncover and formalize the essential parts of the target density that BM aims to capture and the non-essential parts that BM should discard. Guided by the theoretical analysis, we develop a sample-specific CIF for model selection of BM that is adaptive to the observed samples. The method is studied in a series of density estimation experiments and is shown to be effective in terms of estimation accuracy.
[ "Xiaozhao Zhao, Yuexian Hou, Dawei Song, Wenjie Li", "['Xiaozhao Zhao' 'Yuexian Hou' 'Dawei Song' 'Wenjie Li']" ]
cs.LG cs.CL
null
1502.01710
null
null
http://arxiv.org/pdf/1502.01710v5
2016-04-04T02:40:48Z
2015-02-05T20:45:19Z
Text Understanding from Scratch
This article demonstrates that we can apply deep learning to text understanding from character-level inputs all the way up to abstract text concepts, using temporal convolutional networks (ConvNets). We apply ConvNets to various large-scale datasets, including ontology classification, sentiment analysis, and text categorization. We show that temporal ConvNets can achieve astonishing performance without knowledge of words, phrases, sentences, or any other syntactic or semantic structures of a human language. Evidence shows that our models can work for both English and Chinese.
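A minimal sketch of the character-level input encoding such a model consumes: each character is one-hot encoded over a fixed alphabet and documents are truncated or padded to a fixed length. The alphabet and frame length below are illustrative choices, not necessarily the configuration used in the paper.

```python
# Character quantization for a temporal ConvNet: one column per character
# position, one row per alphabet symbol; unknown characters map to all-zero
# columns. Alphabet and length are illustrative, not the paper's exact setup.
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789,;.!?:'\"/\\|_@#$%&*~`+-=<>()[]{}"
CHAR_INDEX = {c: i for i, c in enumerate(ALPHABET)}

def quantize(text, length=1014):
    """Return a (len(ALPHABET), length) one-hot matrix for a document."""
    x = np.zeros((len(ALPHABET), length), dtype=np.float32)
    for j, ch in enumerate(text.lower()[:length]):
        i = CHAR_INDEX.get(ch)
        if i is not None:
            x[i, j] = 1.0
    return x
```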
[ "Xiang Zhang, Yann LeCun", "['Xiang Zhang' 'Yann LeCun']" ]
cs.CE cs.LG
null
1502.01733
null
null
http://arxiv.org/pdf/1502.01733v1
2015-02-05T21:31:25Z
2015-02-05T21:31:25Z
Arrhythmia Detection using Mutual Information-Based Integration Method
The aim of this paper is to propose an application of mutual information-based ensemble methods to the analysis and classification of heart beats associated with different types of Arrhythmia. Models of multilayer perceptrons, support vector machines, and radial basis function neural networks were trained and tested using the MIT-BIH arrhythmia database. This research brings a focus to an ensemble method that, to our knowledge, is a novel application in the area of ECG Arrhythmia detection. The proposed classifier ensemble method showed improved performance, relative to either majority voting classifier integration or to individual classifier performance. The overall ensemble accuracy was 98.25%.
[ "Othman Soufan and Samer Arafat", "['Othman Soufan' 'Samer Arafat']" ]
cs.CL cs.LG cs.NE stat.ML
10.1109/IJCNN.2015.7280766
1502.01753
null
null
http://arxiv.org/abs/1502.01753v1
2015-02-05T22:51:45Z
2015-02-05T22:51:45Z
Monitoring Term Drift Based on Semantic Consistency in an Evolving Vector Field
Based on the Aristotelian concept of potentiality vs. actuality allowing for the study of energy and dynamics in language, we propose a field approach to lexical analysis. Falling back on the distributional hypothesis to statistically model word meaning, we used evolving fields as a metaphor to express time-dependent changes in a vector space model by a combination of random indexing and evolving self-organizing maps (ESOM). To monitor semantic drifts within the observation period, an experiment was carried out on the term space of a collection of 12.8 million Amazon book reviews. For evaluation, the semantic consistency of ESOM term clusters was compared with their respective neighbourhoods in WordNet, and contrasted with distances among term vectors by random indexing. We found that at 0.05 level of significance, the terms in the clusters showed a high level of semantic consistency. Tracking the drift of distributional patterns in the term space across time periods, we found that consistency decreased, but not at a statistically significant level. Our method is highly scalable, with interpretations in philosophy.
[ "['Peter Wittek' 'Sándor Darányi' 'Efstratios Kontopoulos'\n 'Theodoros Moysiadis' 'Ioannis Kompatsiaris']", "Peter Wittek, S\\'andor Dar\\'anyi, Efstratios Kontopoulos, Theodoros\n Moysiadis, Ioannis Kompatsiaris" ]
cs.LG stat.ML
null
1502.01783
null
null
http://arxiv.org/pdf/1502.01783v1
2015-02-06T03:36:51Z
2015-02-06T03:36:51Z
Learning Efficient Anomaly Detectors from $K$-NN Graphs
We propose a non-parametric anomaly detection algorithm for high dimensional data. We score each datapoint by its average $K$-NN distance, and rank them accordingly. We then train limited complexity models to imitate these scores based on the max-margin learning-to-rank framework. A test-point is declared as an anomaly at $\alpha$-false alarm level if the predicted score is in the $\alpha$-percentile. The resulting anomaly detector is shown to be asymptotically optimal in that for any false alarm rate $\alpha$, its decision region converges to the $\alpha$-percentile minimum volume level set of the unknown underlying density. In addition, we test both the statistical performance and computational efficiency of our algorithm on a number of synthetic and real-data experiments. Our results demonstrate the superiority of our algorithm over existing $K$-NN based anomaly detection algorithms, with significant computational savings.
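A minimal sketch of the $K$-NN scoring and $\alpha$-percentile decision rule described above, assuming scikit-learn is available; the max-margin learning-to-rank surrogate that imitates these scores with limited-complexity models is omitted, and all names are illustrative rather than taken from the paper.

```python
# Score test points by their average distance to the K nearest training points,
# then flag as anomalous any point whose score falls in the alpha upper
# percentile of the nominal (training) scores.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_scores(X_train, X_test, k=10):
    """Average distance of each test point to its k nearest training points."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    dist, _ = nn.kneighbors(X_test)   # shape (n_test, k)
    return dist.mean(axis=1)

def flag_anomalies(train_scores, test_scores, alpha=0.05):
    """Declare a test point anomalous at false-alarm level alpha if its score
    exceeds the (1 - alpha) quantile of the training scores."""
    threshold = np.quantile(train_scores, 1.0 - alpha)
    return test_scores > threshold
```

When scoring the training points against themselves, the self-neighbor (distance zero) should be excluded, e.g. by querying $k+1$ neighbors and dropping the first column.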
[ "Jing Qian, Jonathan Root, Venkatesh Saligrama", "['Jing Qian' 'Jonathan Root' 'Venkatesh Saligrama']" ]
cs.LG cs.CV
null
1502.01823
null
null
http://arxiv.org/pdf/1502.01823v1
2015-02-06T08:28:57Z
2015-02-06T08:28:57Z
Unsupervised Fusion Weight Learning in Multiple Classifier Systems
In this paper we present an unsupervised method to learn the weights with which the scores of multiple classifiers must be combined in classifier fusion settings. We also introduce a novel metric for ranking instances based on an index which depends upon the rank of weighted scores of test points among the weighted scores of training points. We show that the optimized index can be used for computing measures such as average precision. Unlike most classifier fusion methods, where a single weight is learned to weigh all examples, our method learns instance-specific weights. The problem is formulated as learning the weight which maximizes a clarity index; subsequently the index itself and the learned weights both are used separately to rank all the test points. Our approach thus provides an unsupervised way of optimizing performance on actual test data, unlike the well-known stacking-based methods where optimization is done over a labeled training set. Moreover, we show that our method is tolerant to noisy classifiers and can be used for selecting N-best classifiers.
[ "['Anurag Kumar' 'Bhiksha Raj']", "Anurag Kumar, Bhiksha Raj" ]
cs.LG cs.CV
null
1502.01827
null
null
http://arxiv.org/pdf/1502.01827v1
2015-02-06T08:37:55Z
2015-02-06T08:37:55Z
Hierarchical Maximum-Margin Clustering
We present a hierarchical maximum-margin clustering method for unsupervised data analysis. Our method extends beyond flat maximum-margin clustering, and performs clustering recursively in a top-down manner. We propose an effective greedy splitting criterion for selecting which cluster to split next, and employ regularizers that enforce feature sharing/competition for capturing data semantics. Experimental results obtained on four standard datasets show that our method outperforms flat and hierarchical clustering baselines, while forming clean and semantically meaningful cluster hierarchies.
[ "Guang-Tong Zhou, Sung Ju Hwang, Mark Schmidt, Leonid Sigal and Greg\n Mori", "['Guang-Tong Zhou' 'Sung Ju Hwang' 'Mark Schmidt' 'Leonid Sigal'\n 'Greg Mori']" ]
cs.CV cs.AI cs.LG
null
1502.01852
null
null
http://arxiv.org/pdf/1502.01852v1
2015-02-06T10:44:00Z
2015-02-06T10:44:00Z
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
Rectified activation units (rectifiers) are essential for state-of-the-art neural networks. In this work, we study rectifier neural networks for image classification from two aspects. First, we propose a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified unit. PReLU improves model fitting with nearly zero extra computational cost and little overfitting risk. Second, we derive a robust initialization method that particularly considers the rectifier nonlinearities. This method enables us to train extremely deep rectified models directly from scratch and to investigate deeper or wider network architectures. Based on our PReLU networks (PReLU-nets), we achieve 4.94% top-5 test error on the ImageNet 2012 classification dataset. This is a 26% relative improvement over the ILSVRC 2014 winner (GoogLeNet, 6.66%). To our knowledge, our result is the first to surpass human-level performance (5.1%, Russakovsky et al.) on this visual recognition challenge.
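A minimal numpy sketch of the two ingredients named in the abstract: the PReLU activation, whose negative-side slope $a$ is learned along with the other parameters, and the rectifier-aware initialization that draws weights with standard deviation $\sqrt{2/n_{in}}$; names and defaults are illustrative.

```python
# PReLU activation and rectifier-aware weight initialization; in a real network
# the slope `a` is a trainable parameter updated by backpropagation.
import numpy as np

def prelu(x, a):
    """PReLU: identity for positive inputs, learned slope `a` for negative inputs."""
    return np.where(x > 0, x, a * x)

def he_init(fan_in, fan_out, rng=None):
    """Draw weights from N(0, sqrt(2 / fan_in)), which keeps activation variance
    roughly stable through deep stacks of rectified layers."""
    if rng is None:
        rng = np.random.default_rng()
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))
```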
[ "Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun", "['Kaiming He' 'Xiangyu Zhang' 'Shaoqing Ren' 'Jian Sun']" ]