categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: sequence
stat.ML cs.LG
null
1303.6935
null
null
http://arxiv.org/pdf/1303.6935v1
2013-03-27T19:34:05Z
2013-03-27T19:34:05Z
Efficiently Using Second Order Information in Large l1 Regularization Problems
We propose a novel general algorithm LHAC that efficiently uses second-order information to train a class of large-scale l1-regularized problems. Our method executes cheap iterations while achieving fast local convergence rate by exploiting the special structure of a low-rank matrix, constructed via quasi-Newton approximation of the Hessian of the smooth loss function. A greedy active-set strategy, based on the largest violations in the dual constraints, is employed to maintain a working set that iteratively estimates the complement of the optimal active set. This allows for smaller size of subproblems and eventually identifies the optimal active set. Empirical comparisons confirm that LHAC is highly competitive with several recently proposed state-of-the-art specialized solvers for sparse logistic regression and sparse inverse covariance matrix selection.
[ "Xiaocheng Tang and Katya Scheinberg", "['Xiaocheng Tang' 'Katya Scheinberg']" ]
stat.ML cs.LG
null
1303.6977
null
null
http://arxiv.org/pdf/1303.6977v4
2013-06-28T11:18:26Z
2013-03-27T20:51:33Z
ABC Reinforcement Learning
This paper introduces a simple, general framework for likelihood-free Bayesian reinforcement learning, through Approximate Bayesian Computation (ABC). The main advantage is that we only require a prior distribution on a class of simulators (generative models). This is useful in domains where an analytical probabilistic model of the underlying process is too complex to formulate, but where detailed simulation models are available. ABC-RL allows the use of any Bayesian reinforcement learning technique, even in this case. In addition, it can be seen as an extension of rollout algorithms to the case where we do not know what the correct model to draw rollouts from is. We experimentally demonstrate the potential of this approach in a comparison with LSPI. Finally, we introduce a theorem showing that ABC is a sound methodology in principle, even when non-sufficient statistics are used.
[ "Christos Dimitrakakis, Nikolaos Tziortziotis", "['Christos Dimitrakakis' 'Nikolaos Tziortziotis']" ]
cs.LG
10.1109/CVPR.2013.205
1303.7043
null
null
http://arxiv.org/abs/1303.7043v1
2013-03-28T05:45:21Z
2013-03-28T05:45:21Z
Inductive Hashing on Manifolds
Learning based hashing methods have attracted considerable attention due to their ability to greatly increase the scale at which existing algorithms may operate. Most of these methods are designed to generate binary codes that preserve the Euclidean distance in the original space. Manifold learning techniques, in contrast, are better able to model the intrinsic structure embedded in the original high-dimensional data. The complexity of these models, and the problems with out-of-sample data, have previously rendered them unsuitable for application to large-scale embedding, however. In this work, we consider how to learn compact binary embeddings on their intrinsic manifolds. In order to address the above-mentioned difficulties, we describe an efficient, inductive solution to the out-of-sample data problem, and a process by which non-parametric manifold learning may be used as the basis of a hashing method. Our proposed approach thus allows the development of a range of new hashing techniques exploiting the flexibility of the wide variety of manifold learning approaches available. We particularly show that hashing on the basis of t-SNE outperforms state-of-the-art hashing methods on large-scale benchmark datasets, and is very effective for image classification with very short code lengths.
[ "['Fumin Shen' 'Chunhua Shen' 'Qinfeng Shi' 'Anton van den Hengel'\n 'Zhenmin Tang']", "Fumin Shen, Chunhua Shen, Qinfeng Shi, Anton van den Hengel, Zhenmin\n Tang" ]
stat.ML cs.LG
10.1007/978-3-642-39712-7_15
1303.7093
null
null
http://arxiv.org/abs/1303.7093v3
2013-04-08T14:26:49Z
2013-03-28T11:01:53Z
Relevance As a Metric for Evaluating Machine Learning Algorithms
In machine learning, the choice of a learning algorithm that is suitable for the application domain is critical. The performance metric used to compare different algorithms must also reflect the concerns of users in the application domain under consideration. In this work, we propose a novel probability-based performance metric called Relevance Score for evaluating supervised learning algorithms. We evaluate the proposed metric through empirical analysis on a dataset gathered from an intelligent lighting pilot installation. In comparison to the commonly used Classification Accuracy metric, the Relevance Score proves to be more appropriate for a certain class of applications.
[ "['Aravind Kota Gopalakrishna' 'Tanir Ozcelebi' 'Antonio Liotta'\n 'Johan J. Lukkien']", "Aravind Kota Gopalakrishna, Tanir Ozcelebi, Antonio Liotta, Johan J.\n Lukkien" ]
math.ST cs.CG cs.LG stat.TH
10.1214/14-AOS1252
1303.7117
null
null
http://arxiv.org/abs/1303.7117v3
2014-11-20T08:16:51Z
2013-03-28T12:59:00Z
Confidence sets for persistence diagrams
Persistent homology is a method for probing topological properties of point clouds and functions. The method involves tracking the birth and death of topological features as one varies a tuning parameter. Features with short lifetimes are informally considered to be "topological noise," and those with a long lifetime are considered to be "topological signal." In this paper, we bring some statistical ideas to persistent homology. In particular, we derive confidence sets that allow us to separate topological signal from topological noise.
[ "Brittany Terese Fasy, Fabrizio Lecci, Alessandro Rinaldo, Larry\n Wasserman, Sivaraman Balakrishnan, Aarti Singh", "['Brittany Terese Fasy' 'Fabrizio Lecci' 'Alessandro Rinaldo'\n 'Larry Wasserman' 'Sivaraman Balakrishnan' 'Aarti Singh']" ]
cs.SI cs.LG physics.soc-ph stat.ML
null
1303.7226
null
null
http://arxiv.org/pdf/1303.7226v1
2013-03-28T19:56:39Z
2013-03-28T19:56:39Z
Detecting Overlapping Temporal Community Structure in Time-Evolving Networks
We present a principled approach for detecting overlapping temporal community structure in dynamic networks. Our method is based on the following framework: find the overlapping temporal community structure that maximizes a quality function associated with each snapshot of the network subject to a temporal smoothness constraint. A novel quality function and a smoothness constraint are proposed to handle overlaps, and a new convex relaxation is used to solve the resulting combinatorial optimization problem. We provide theoretical guarantees as well as experimental results that reveal community structure in real and synthetic networks. Our main insight is that certain structures can be identified only when temporal correlation is considered and when communities are allowed to overlap. In general, discovering such overlapping temporal community structure can enhance our understanding of real-world complex networks by revealing the underlying stability behind their seemingly chaotic evolution.
[ "['Yudong Chen' 'Vikas Kawadia' 'Rahul Urgaonkar']", "Yudong Chen, Vikas Kawadia, Rahul Urgaonkar" ]
cs.LG cs.IR cs.SI physics.data-an stat.ML
10.1145/2487575.2487693
1303.7264
null
null
http://arxiv.org/abs/1303.7264v1
2013-03-28T22:34:51Z
2013-03-28T22:34:51Z
Scalable Text and Link Analysis with Mixed-Topic Link Models
Many data sets contain rich information about objects, as well as pairwise relations between them. For instance, in networks of websites, scientific papers, and other documents, each node has content consisting of a collection of words, as well as hyperlinks or citations to other nodes. In order to perform inference on such data sets, and make predictions and recommendations, it is useful to have models that are able to capture the processes which generate the text at each node and the links between them. In this paper, we combine classic ideas in topic modeling with a variant of the mixed-membership block model recently developed in the statistical physics community. The resulting model has the advantage that its parameters, including the mixture of topics of each document and the resulting overlapping communities, can be inferred with a simple and scalable expectation-maximization algorithm. We test our model on three data sets, performing unsupervised topic classification and link prediction. For both tasks, our model outperforms several existing state-of-the-art methods, achieving higher accuracy with significantly less computation, analyzing a data set with 1.3 million words and 44 thousand links in a few minutes.
[ "['Yaojia Zhu' 'Xiaoran Yan' 'Lise Getoor' 'Cristopher Moore']", "Yaojia Zhu, Xiaoran Yan, Lise Getoor and Cristopher Moore" ]
cs.IT cs.LG math.IT stat.ML
10.1109/LSP.2013.2260538
1303.7286
null
null
http://arxiv.org/abs/1303.7286v3
2014-01-22T05:35:12Z
2013-03-29T03:11:21Z
On the symmetrical Kullback-Leibler Jeffreys centroids
Due to the success of the bag-of-word modeling paradigm, clustering histograms has become an important ingredient of modern information processing. Clustering histograms can be performed using the celebrated $k$-means centroid-based algorithm. From the viewpoint of applications, it is usually required to deal with symmetric distances. In this letter, we consider the Jeffreys divergence that symmetrizes the Kullback-Leibler divergence, and investigate the computation of Jeffreys centroids. We first prove that the Jeffreys centroid can be expressed analytically using the Lambert $W$ function for positive histograms. We then show how to obtain a fast guaranteed approximation when dealing with frequency histograms. Finally, we conclude with some remarks on the $k$-means histogram clustering.
[ "['Frank Nielsen']", "Frank Nielsen" ]
stat.ML cs.LG math.PR
null
1303.7461
null
null
http://arxiv.org/pdf/1303.7461v2
2014-01-28T21:50:07Z
2013-03-29T19:15:04Z
Universal Approximation Depth and Errors of Narrow Belief Networks with Discrete Units
We generalize recent theoretical work on the minimal number of layers of narrow deep belief networks that can approximate any probability distribution on the states of their visible units arbitrarily well. We relax the setting of binary units (Sutskever and Hinton, 2008; Le Roux and Bengio, 2008, 2010; Montúfar and Ay, 2011) to units with arbitrary finite state spaces, and the vanishing approximation error to an arbitrary approximation error tolerance. For example, we show that a $q$-ary deep belief network with $L\geq 2+\frac{q^{\lceil m-\delta \rceil}-1}{q-1}$ layers of width $n \leq m + \log_q(m) + 1$ for some $m\in \mathbb{N}$ can approximate any probability distribution on $\{0,1,\ldots,q-1\}^n$ without exceeding a Kullback-Leibler divergence of $\delta$. Our analysis covers discrete restricted Boltzmann machines and naïve Bayes models as special cases.
[ "Guido F. Mont\\'ufar", "['Guido F. Montúfar']" ]
cs.LG cs.IT math.IT stat.ML
10.1109/TSP.2014.2333554
1303.7474
null
null
http://arxiv.org/abs/1303.7474v1
2013-03-29T19:52:31Z
2013-03-29T19:52:31Z
Independent Vector Analysis: Identification Conditions and Performance Bounds
Recently, an extension of independent component analysis (ICA) from one to multiple datasets, termed independent vector analysis (IVA), has been the subject of significant research interest. IVA has also been shown to be a generalization of Hotelling's canonical correlation analysis. In this paper, we provide the identification conditions for a general IVA formulation, which accounts for linear, nonlinear, and sample-to-sample dependencies. The identification conditions are a generalization of previous results for ICA and for IVA when samples are independently and identically distributed. Furthermore, a principal aim of IVA is the identification of dependent sources between datasets. Thus, we provide the additional conditions for when the arbitrary ordering of the sources within each dataset is common. Performance bounds in terms of the Cramer-Rao lower bound are also provided for the demixing matrices and the interference-to-source ratio. The performance of two IVA algorithms is compared to the theoretical bounds.
[ "Matthew Anderson, Geng-Shen Fu, Ronald Phlypo, and T\\\"ulay Adal{\\i}", "['Matthew Anderson' 'Geng-Shen Fu' 'Ronald Phlypo' 'Tülay Adalı']" ]
cs.CV cs.LG cs.SD
10.1016/j.sigpro.2013.06
1304.0035
null
null
http://arxiv.org/abs/1304.0035v1
2013-03-29T22:00:01Z
2013-03-29T22:00:01Z
Translation-Invariant Shrinkage/Thresholding of Group Sparse Signals
This paper addresses signal denoising when large-amplitude coefficients form clusters (groups). The L1-norm and other separable sparsity models do not capture the tendency of coefficients to cluster (group sparsity). This work develops an algorithm, called 'overlapping group shrinkage' (OGS), based on the minimization of a convex cost function involving a group-sparsity promoting penalty function. The groups are fully overlapping so the denoising method is translation-invariant and blocking artifacts are avoided. Based on the principle of majorization-minimization (MM), we derive a simple iterative minimization algorithm that reduces the cost function monotonically. A procedure for setting the regularization parameter, based on attenuating the noise to a specified level, is also described. The proposed approach is illustrated on speech enhancement, wherein the OGS approach is applied in the short-time Fourier transform (STFT) domain. The denoised speech produced by OGS does not suffer from musical noise.
[ "['Po-Yu Chen' 'Ivan W. Selesnick']", "Po-Yu Chen and Ivan W. Selesnick" ]
cs.LG cs.AI cs.GT
null
1304.0160
null
null
http://arxiv.org/pdf/1304.0160v8
2013-12-25T16:18:54Z
2013-03-31T06:45:47Z
Parallel Computation Is ESS
There is an enormous number of examples of computation in nature, exemplified across multiple species in biology. One crucial aim of these computations across all life forms is the ability to learn and thereby increase the chance of survival. In the current paper a formal definition of autonomous learning is proposed. From that definition we establish a Turing Machine model for learning, where rule tables can be added or deleted, but cannot be modified. Sequential and parallel implementations of this model are discussed. It is found that for general purpose learning based on this model, the implementations capable of parallel execution would be evolutionarily stable. This is proposed to be one of the reasons why in nature parallelism in computation is found in abundance.
[ "Nabarun Mondal and Partha P. Ghosh", "['Nabarun Mondal' 'Partha P. Ghosh']" ]
cs.IT cs.LG math.IT math.ST stat.ML stat.TH
10.1109/TIT.2016.2605122
1304.0682
null
null
http://arxiv.org/abs/1304.0682v8
2016-08-25T20:46:55Z
2013-04-02T16:35:28Z
Sparse Signal Processing with Linear and Nonlinear Observations: A Unified Shannon-Theoretic Approach
We derive fundamental sample complexity bounds for recovering sparse and structured signals for linear and nonlinear observation models including sparse regression, group testing, multivariate regression and problems with missing features. In general, sparse signal processing problems can be characterized in terms of the following Markovian property. We are given a set of $N$ variables $X_1,X_2,\ldots,X_N$, and there is an unknown subset of variables $S \subset \{1,\ldots,N\}$ that are relevant for predicting outcomes $Y$. More specifically, when $Y$ is conditioned on $\{X_n\}_{n\in S}$ it is conditionally independent of the other variables, $\{X_n\}_{n \not \in S}$. Our goal is to identify the set $S$ from samples of the variables $X$ and the associated outcomes $Y$. We characterize this problem as a version of the noisy channel coding problem. Using asymptotic information theoretic analyses, we establish mutual information formulas that provide sufficient and necessary conditions on the number of samples required to successfully recover the salient variables. These mutual information expressions unify conditions for both linear and nonlinear observations. We then compute sample complexity bounds for the aforementioned models, based on the mutual information expressions in order to demonstrate the applicability and flexibility of our results in general sparse signal processing models.
[ "['Cem Aksoylar' 'George Atia' 'Venkatesh Saligrama']", "Cem Aksoylar, George Atia, Venkatesh Saligrama" ]
cs.LG cs.CV stat.ML
null
1304.0725
null
null
http://arxiv.org/pdf/1304.0725v1
2013-03-11T05:28:06Z
2013-03-11T05:28:06Z
Improved Performance of Unsupervised Method by Renovated K-Means
Clustering is a separation of data into groups of similar objects. Every group, called a cluster, consists of objects that are similar to one another and dissimilar to objects of other groups. In this paper, the K-Means algorithm is implemented with three distance functions in order to identify the optimal distance function for clustering. The proposed K-Means algorithm is compared with the K-Means, Static Weighted K-Means (SWK-Means) and Dynamic Weighted K-Means (DWK-Means) algorithms by using the Davies-Bouldin index, execution time and iteration count. Experimental results show that the proposed K-Means algorithm performed better on the Iris and Wine datasets when compared with the other three clustering methods.
[ "['P. Ashok' 'G. M Kadhar Nawaz' 'E. Elayaraja' 'V. Vadivel']", "P. Ashok, G.M Kadhar Nawaz, E. Elayaraja, V. Vadivel" ]
cs.LG cs.CC cs.DS
null
1304.0730
null
null
http://arxiv.org/pdf/1304.0730v1
2013-04-02T18:37:35Z
2013-04-02T18:37:35Z
Representation, Approximation and Learning of Submodular Functions Using Low-rank Decision Trees
We study the complexity of approximate representation and learning of submodular functions over the uniform distribution on the Boolean hypercube $\{0,1\}^n$. Our main result is the following structural theorem: any submodular function is $\epsilon$-close in $\ell_2$ to a real-valued decision tree (DT) of depth $O(1/\epsilon^2)$. This immediately implies that any submodular function is $\epsilon$-close to a function of at most $2^{O(1/\epsilon^2)}$ variables and has a spectral $\ell_1$ norm of $2^{O(1/\epsilon^2)}$. It also implies the closest previous result that states that submodular functions can be approximated by polynomials of degree $O(1/\epsilon^2)$ (Cheraghchi et al., 2012). Our result is proved by constructing an approximation of a submodular function by a DT of rank $4/\epsilon^2$ and a proof that any rank-$r$ DT can be $\epsilon$-approximated by a DT of depth $\frac{5}{2}(r+\log(1/\epsilon))$. We show that these structural results can be exploited to give an attribute-efficient PAC learning algorithm for submodular functions running in time $\tilde{O}(n^2) \cdot 2^{O(1/\epsilon^{4})}$. The best previous algorithm for the problem requires $n^{O(1/\epsilon^{2})}$ time and examples (Cheraghchi et al., 2012) but works also in the agnostic setting. In addition, we give improved learning algorithms for a number of related settings. We also prove that our PAC and agnostic learning algorithms are essentially optimal via two lower bounds: (1) an information-theoretic lower bound of $2^{\Omega(1/\epsilon^{2/3})}$ on the complexity of learning monotone submodular functions in any reasonable model; (2) a computational lower bound of $n^{\Omega(1/\epsilon^{2/3})}$ based on a reduction to learning of sparse parities with noise, widely believed to be intractable. These are the first lower bounds for learning of submodular functions over the uniform distribution.
[ "Vitaly Feldman and Pravesh Kothari and Jan Vondrak", "['Vitaly Feldman' 'Pravesh Kothari' 'Jan Vondrak']" ]
cs.LG
null
1304.0740
null
null
http://arxiv.org/pdf/1304.0740v1
2013-04-02T19:11:23Z
2013-04-02T19:11:23Z
O(log T) Projections for Stochastic Optimization of Smooth and Strongly Convex Functions
Traditional algorithms for stochastic optimization require projecting the solution at each iteration into a given domain to ensure its feasibility. When facing complex domains, such as positive semi-definite cones, the projection operation can be expensive, leading to a high computational cost per iteration. In this paper, we present a novel algorithm that aims to reduce the number of projections for stochastic optimization. The proposed algorithm combines the strength of several recent developments in stochastic optimization, including mini-batch, extra-gradient, and epoch gradient descent, in order to effectively explore the smoothness and strong convexity. We show, both in expectation and with a high probability, that when the objective function is both smooth and strongly convex, the proposed algorithm achieves the optimal $O(1/T)$ rate of convergence with only $O(\log T)$ projections. Our empirical study verifies the theoretical result.
[ "Lijun Zhang, Tianbao Yang, Rong Jin, Xiaofei He", "['Lijun Zhang' 'Tianbao Yang' 'Rong Jin' 'Xiaofei He']" ]
cs.CV cs.LG
10.1109/CVPR.2013.173
1304.0840
null
null
http://arxiv.org/abs/1304.0840v1
2013-04-03T04:31:10Z
2013-04-03T04:31:10Z
A Fast Semidefinite Approach to Solving Binary Quadratic Problems
Many computer vision problems can be formulated as binary quadratic programs (BQPs). Two classic relaxation methods are widely used for solving BQPs, namely, spectral methods and semidefinite programming (SDP), each with their own advantages and disadvantages. Spectral relaxation is simple and easy to implement, but its bound is loose. Semidefinite relaxation has a tighter bound, but its computational complexity is high for large scale problems. We present a new SDP formulation for BQPs, with two desirable properties. First, it has a similar relaxation bound to conventional SDP formulations. Second, compared with conventional SDP methods, the new SDP formulation leads to a significantly more efficient and scalable dual optimization approach, which has the same degree of complexity as spectral methods. Extensive experiments on various applications including clustering, image segmentation, co-segmentation and registration demonstrate the usefulness of our SDP formulation for solving large-scale BQPs.
[ "Peng Wang, Chunhua Shen, Anton van den Hengel", "['Peng Wang' 'Chunhua Shen' 'Anton van den Hengel']" ]
cs.CV cs.AI cs.LG math.OC stat.ML
null
1304.1014
null
null
http://arxiv.org/pdf/1304.1014v2
2013-10-13T09:50:26Z
2013-04-03T17:15:43Z
A Novel Frank-Wolfe Algorithm. Analysis and Applications to Large-Scale SVM Training
Recently, there has been a renewed interest in the machine learning community for variants of a sparse greedy approximation procedure for concave optimization known as the Frank-Wolfe (FW) method. In particular, this procedure has been successfully applied to train large-scale instances of non-linear Support Vector Machines (SVMs). Specializing FW to SVM training has yielded not only efficient algorithms but also important theoretical results, including convergence analyses of training algorithms and new characterizations of model sparsity. In this paper, we present and analyze a novel variant of the FW method based on a new way to perform away steps, a classic strategy used to accelerate the convergence of the basic FW procedure. Our formulation and analysis are focused on a general concave maximization problem on the simplex. However, the specialization of our algorithm to quadratic forms is strongly related to some classic methods in computational geometry, namely the Gilbert and MDM algorithms. On the theoretical side, we demonstrate that the method matches the guarantees in terms of convergence rate and number of iterations obtained by using classic away steps. In particular, the method enjoys a linear rate of convergence, a result that has been recently proved for MDM on quadratic forms. On the practical side, we provide experiments on several classification datasets, and evaluate the results using statistical tests. Experiments show that our method is faster than the FW method with classic away steps, and works well even in the cases in which classic away steps slow down the algorithm. Furthermore, these improvements are obtained without sacrificing the predictive accuracy of the obtained SVM model.
[ "Hector Allende, Emanuele Frandi, Ricardo Nanculef, Claudio Sartori", "['Hector Allende' 'Emanuele Frandi' 'Ricardo Nanculef' 'Claudio Sartori']" ]
cs.LG cs.CL cs.NE
null
1304.1018
null
null
http://arxiv.org/pdf/1304.1018v2
2013-06-12T11:23:34Z
2013-04-03T17:20:41Z
Estimating Phoneme Class Conditional Probabilities from Raw Speech Signal using Convolutional Neural Networks
In hybrid hidden Markov model/artificial neural network (HMM/ANN) automatic speech recognition (ASR) systems, the phoneme class conditional probabilities are estimated by first extracting acoustic features from the speech signal based on prior knowledge such as speech perception and/or speech production knowledge, and then modeling the acoustic features with an ANN. Recent advances in machine learning techniques, more specifically in the fields of image processing and text processing, have shown that such a divide-and-conquer strategy (i.e., separating the feature extraction and modeling steps) may not be necessary. Motivated by these studies, in the framework of convolutional neural networks (CNNs), this paper investigates a novel approach where the input to the ANN is the raw speech signal and the output is phoneme class conditional probability estimates. On the TIMIT phoneme recognition task, we study different ANN architectures to show the benefit of CNNs and compare the proposed approach against the conventional approach, where spectral-based MFCC features are extracted and modeled by a multilayer perceptron. Our studies show that the proposed approach can yield comparable or better phoneme recognition performance compared to the conventional approach. This indicates that CNNs can learn features relevant for phoneme classification automatically from the raw speech signal.
[ "Dimitri Palaz, Ronan Collobert, Mathew Magimai.-Doss", "['Dimitri Palaz' 'Ronan Collobert' 'Mathew Magimai. -Doss']" ]
cs.LG
null
1304.1192
null
null
http://arxiv.org/pdf/1304.1192v1
2013-04-03T21:14:50Z
2013-04-03T21:14:50Z
Efficient Distance Metric Learning by Adaptive Sampling and Mini-Batch Stochastic Gradient Descent (SGD)
Distance metric learning (DML) is an important task that has found applications in many domains. The high computational cost of DML arises from the large number of variables to be determined and the constraint that a distance metric has to be a positive semi-definite (PSD) matrix. Although stochastic gradient descent (SGD) has been successfully applied to improve the efficiency of DML, it can still be computationally expensive because in order to ensure that the solution is a PSD matrix, it has to, at every iteration, project the updated distance metric onto the PSD cone, an expensive operation. We address this challenge by developing two strategies within SGD, i.e. mini-batch and adaptive sampling, to effectively reduce the number of updates (i.e., projections onto the PSD cone) in SGD. We also develop hybrid approaches that combine the strength of adaptive sampling with that of mini-batch online learning techniques to further improve the computational efficiency of SGD for DML. We prove the theoretical guarantees for both adaptive sampling and mini-batch based approaches for DML. We also conduct an extensive empirical study to verify the effectiveness of the proposed algorithms for DML.
[ "Qi Qian, Rong Jin, Jinfeng Yi, Lijun Zhang, Shenghuo Zhu", "['Qi Qian' 'Rong Jin' 'Jinfeng Yi' 'Lijun Zhang' 'Shenghuo Zhu']" ]
cs.LG
null
1304.1391
null
null
http://arxiv.org/pdf/1304.1391v1
2013-04-04T15:08:31Z
2013-04-04T15:08:31Z
Fast SVM training using approximate extreme points
Applications of non-linear kernel Support Vector Machines (SVMs) to large datasets is seriously hampered by its excessive training time. We propose a modification, called the approximate extreme points support vector machine (AESVM), that is aimed at overcoming this burden. Our approach relies on conducting the SVM optimization over a carefully selected subset, called the representative set, of the training dataset. We present analytical results that indicate the similarity of AESVM and SVM solutions. A linear time algorithm based on convex hulls and extreme points is used to compute the representative set in kernel space. Extensive computational experiments on nine datasets compared AESVM to LIBSVM \citep{LIBSVM}, CVM \citep{Tsang05}, BVM \citep{Tsang07}, LASVM \citep{Bordes05}, $\text{SVM}^{\text{perf}}$ \citep{Joachims09}, and the random features method \citep{rahimi07}. Our AESVM implementation was found to train much faster than the other methods, while its classification accuracy was similar to that of LIBSVM in all cases. In particular, for a seizure detection dataset, AESVM training was almost $10^3$ times faster than LIBSVM and LASVM and more than forty times faster than CVM and BVM. Additionally, AESVM also gave competitively fast classification times.
[ "['Manu Nandan' 'Pramod P. Khargonekar' 'Sachin S. Talathi']", "Manu Nandan, Pramod P. Khargonekar, Sachin S. Talathi" ]
cs.LG math.PR
null
1304.1574
null
null
http://arxiv.org/pdf/1304.1574v1
2013-04-04T22:34:55Z
2013-04-04T22:34:55Z
Generalization Bounds for Domain Adaptation
In this paper, we provide a new framework to obtain the generalization bounds of the learning process for domain adaptation, and then apply the derived bounds to analyze the asymptotical convergence of the learning process. Without loss of generality, we consider two kinds of representative domain adaptation: one is with multiple sources and the other is combining source and target data. In particular, we use the integral probability metric to measure the difference between two domains. For either kind of domain adaptation, we develop a related Hoeffding-type deviation inequality and a symmetrization inequality to achieve the corresponding generalization bound based on the uniform entropy number. We also generalize the classical McDiarmid's inequality to a more general setting where independent random variables can take values from different domains. By using this inequality, we then obtain generalization bounds based on the Rademacher complexity. Afterwards, we analyze the asymptotic convergence and the rate of convergence of the learning process for such kind of domain adaptation. Meanwhile, we discuss the factors that affect the asymptotic behavior of the learning process, and the numerical experiments support our theoretical findings as well.
[ "Chao Zhang, Lei Zhang, Jieping Ye", "['Chao Zhang' 'Lei Zhang' 'Jieping Ye']" ]
cs.SE cs.IR cs.LG
null
1304.1677
null
null
http://arxiv.org/pdf/1304.1677v1
2013-04-05T11:05:18Z
2013-04-05T11:05:18Z
Bug Classification: Feature Extraction and Comparison of Event Model using Naïve Bayes Approach
In software industries, individuals at different levels, from customers to engineers, apply diverse mechanisms to detect to which class a particular bug should be allocated. While sometimes a simple search on the Internet might help, in many other cases a lot of effort is spent in analyzing the bug report to classify the bug. So there is a great need for a structured mining algorithm where, given a crash log, the existing bug database can be mined to find out the class to which the bug should be allocated. This would involve mining patterns and applying different classification algorithms. This paper focuses on feature extraction, noise reduction in data, and classification of network bugs using the probabilistic Naïve Bayes approach. Different event models like Bernoulli and Multinomial are applied on the extracted features. When new, unseen bugs are given as input to the algorithms, the performance comparison of the different algorithms is done on the basis of accuracy and recall parameters.
[ "['Sunil Joy Dommati' 'Ruchi Agrawal' 'Ram Mohana Reddy G.'\n 'S. Sowmya Kamath']", "Sunil Joy Dommati, Ruchi Agrawal, Ram Mohana Reddy G. and S. Sowmya\n Kamath" ]
cs.CV cs.DB cs.LG
null
1304.1995
null
null
http://arxiv.org/pdf/1304.1995v2
2013-04-09T17:59:33Z
2013-04-07T13:15:17Z
Image Retrieval using Histogram Factorization and Contextual Similarity Learning
Image retrieval has long been a major topic in both computer vision and machine learning. Content-based image retrieval, which tries to retrieve images from a database that are visually similar to a query image, has attracted much attention. The two most important issues in image retrieval are the representation and the ranking of the images. Recently, the bag-of-words based method has shown its power as a representation method. Moreover, nonnegative matrix factorization is also a popular way to represent data samples. In addition, contextual similarity learning has been studied and proven to be an effective method for the ranking problem. However, these technologies have never been used together. In this paper, we develop an effective image retrieval system by representing each image as a histogram using the bag-of-words method, then applying nonnegative matrix factorization to factorize the histograms, and finally learning the ranking score using the contextual similarity learning method. The proposed novel system is evaluated on a large-scale image database and its effectiveness is shown.
[ "Liu Liang", "['Liu Liang']" ]
cs.LG cs.AI cs.MA stat.ML
null
1304.2024
null
null
http://arxiv.org/pdf/1304.2024v3
2014-03-16T15:10:35Z
2013-04-07T17:00:37Z
A General Framework for Interacting Bayes-Optimally with Self-Interested Agents using Arbitrary Parametric Model and Model Prior
Recent advances in Bayesian reinforcement learning (BRL) have shown that Bayes-optimality is theoretically achievable by modeling the environment's latent dynamics using Flat-Dirichlet-Multinomial (FDM) prior. In self-interested multi-agent environments, the transition dynamics are mainly controlled by the other agent's stochastic behavior for which FDM's independence and modeling assumptions do not hold. As a result, FDM does not allow the other agent's behavior to be generalized across different states nor specified using prior domain knowledge. To overcome these practical limitations of FDM, we propose a generalization of BRL to integrate the general class of parametric models and model priors, thus allowing practitioners' domain knowledge to be exploited to produce a fine-grained and compact representation of the other agent's behavior. Empirical evaluation shows that our approach outperforms existing multi-agent reinforcement learning algorithms.
[ "Trong Nghia Hoang and Kian Hsiang Low", "['Trong Nghia Hoang' 'Kian Hsiang Low']" ]
null
null
1304.2079
null
null
http://arxiv.org/pdf/1304.2079v3
2014-05-28T00:38:46Z
2013-04-08T00:06:26Z
Learning Coverage Functions and Private Release of Marginals
We study the problem of approximating and learning coverage functions. A function $c: 2^{[n]} \rightarrow \mathbf{R}^{+}$ is a coverage function, if there exists a universe $U$ with non-negative weights $w(u)$ for each $u \in U$ and subsets $A_1, A_2, \ldots, A_n$ of $U$ such that $c(S) = \sum_{u \in \cup_{i \in S} A_i} w(u)$. Alternatively, coverage functions can be described as non-negative linear combinations of monotone disjunctions. They are a natural subclass of submodular functions and arise in a number of applications. We give an algorithm that for any $\gamma,\delta>0$, given random and uniform examples of an unknown coverage function $c$, finds a function $h$ that approximates $c$ within factor $1+\gamma$ on all but $\delta$-fraction of the points in time $\mathrm{poly}(n,1/\gamma,1/\delta)$. This is the first fully-polynomial algorithm for learning an interesting class of functions in the demanding PMAC model of Balcan and Harvey (2011). Our algorithms are based on several new structural properties of coverage functions. Using the results in (Feldman and Kothari, 2014), we also show that coverage functions are learnable agnostically with excess $\ell_1$-error $\epsilon$ over all product and symmetric distributions in time $n^{\log(1/\epsilon)}$. In contrast, we show that, without assumptions on the distribution, learning coverage functions is at least as hard as learning polynomial-size disjoint DNF formulas, a class of functions for which the best known algorithm runs in time $2^{\tilde{O}(n^{1/3})}$ (Klivans and Servedio, 2004). As an application of our learning results, we give simple differentially-private algorithms for releasing monotone conjunction counting queries with low average error. In particular, for any $k \leq n$, we obtain private release of $k$-way marginals with average error $\bar{\alpha}$ in time $n^{O(\log(1/\bar{\alpha}))}$.
[ "['Vitaly Feldman' 'Pravesh Kothari']" ]
stat.ML cs.DC cs.LG
null
1304.2302
null
null
http://arxiv.org/pdf/1304.2302v1
2013-04-08T18:34:32Z
2013-04-08T18:34:32Z
ClusterCluster: Parallel Markov Chain Monte Carlo for Dirichlet Process Mixtures
The Dirichlet process (DP) is a fundamental mathematical tool for Bayesian nonparametric modeling, and is widely used in tasks such as density estimation, natural language processing, and time series modeling. Although MCMC inference methods for the DP often provide a gold standard in terms of asymptotic accuracy, they can be computationally expensive and are not obviously parallelizable. We propose a reparameterization of the Dirichlet process that induces conditional independencies between the atoms that form the random measure. This conditional independence enables many of the Markov chain transition operators for DP inference to be simulated in parallel across multiple cores. Applied to mixture modeling, our approach enables the Dirichlet process to simultaneously learn clusters that describe the data and superclusters that define the granularity of parallelization. Unlike previous approaches, our technique does not require alteration of the model and leaves the true posterior distribution invariant. It also naturally lends itself to a distributed software implementation in terms of Map-Reduce, which we test in cluster configurations of over 50 machines and 100 cores. We present experiments exploring the parallel efficiency and convergence properties of our approach on both synthetic and real-world data, including runs on 1MM data vectors in 256 dimensions.
[ "Dan Lovell, Jonathan Malmaud, Ryan P. Adams, Vikash K. Mansinghka", "['Dan Lovell' 'Jonathan Malmaud' 'Ryan P. Adams' 'Vikash K. Mansinghka']" ]
stat.AP cs.LG stat.ML
null
1304.2331
null
null
http://arxiv.org/pdf/1304.2331v1
2013-04-08T19:49:51Z
2013-04-08T19:49:51Z
The PAV algorithm optimizes binary proper scoring rules
There has been much recent interest in the application of the pool-adjacent-violators (PAV) algorithm for the purpose of calibrating the probabilistic outputs of automatic pattern recognition and machine learning algorithms. Special cost functions, known as proper scoring rules, form natural objective functions to judge the goodness of such calibration. We show that for binary pattern classifiers, the non-parametric optimization of calibration, subject to a monotonicity constraint, can be solved by PAV and that this solution is optimal for all regular binary proper scoring rules. This extends previous results which were limited to convex binary proper scoring rules. We further show that this result holds not only for calibration of probabilities, but also for calibration of log-likelihood-ratios, in which case optimality holds independently of the prior probabilities of the pattern classes.
[ "['Niko Brummer' 'Johan du Preez']", "Niko Brummer and Johan du Preez" ]
cs.LG cs.AI stat.ML
null
1304.2363
null
null
http://arxiv.org/pdf/1304.2363v1
2013-03-27T19:43:53Z
2013-03-27T19:43:53Z
Multiple decision trees
This paper describes experiments, on two domains, to investigate the effect of averaging over predictions of multiple decision trees, instead of using a single tree. Other authors have pointed out theoretical and commonsense reasons for preferring the multiple tree approach. Ideally, we would like to consider predictions from all trees, weighted by their probability. However, there is a vast number of different trees, and it is difficult to estimate the probability of each tree. We sidestep the estimation problem by using a modified version of the ID3 algorithm to build good trees, and average over only these trees. Our results are encouraging. For each domain, we managed to produce a small number of good trees. We find that it is best to average across sets of trees with different structure; this usually gives better performance than any of the constituent trees, including the ID3 tree.
[ "Suk Wah Kwok, Chris Carter", "['Suk Wah Kwok' 'Chris Carter']" ]
cs.CV cs.LG
null
1304.2490
null
null
http://arxiv.org/pdf/1304.2490v1
2013-04-09T08:45:57Z
2013-04-09T08:45:57Z
Kernel Reconstruction ICA for Sparse Representation
Independent Component Analysis (ICA) is an effective unsupervised tool for learning statistically independent representations. However, ICA is not only sensitive to whitening but also has difficulty learning an over-complete basis. Consequently, ICA with a soft reconstruction cost (RICA) was presented to learn sparse representations with an over-complete basis even on unwhitened data. However, RICA cannot represent data with nonlinear structure due to its intrinsic linearity. In addition, RICA is essentially an unsupervised method and cannot utilize class information. In this paper, we propose a kernel ICA model with a reconstruction constraint (kRICA) to capture nonlinear features. To bring in the class information, we further extend the unsupervised kRICA to a supervised one by introducing a discrimination constraint, namely d-kRICA. This constraint leads to learning a structured basis consisting of basis vectors from different basis subsets corresponding to different class labels. Each subset will then sparsely represent its own class well, but not the others. Furthermore, data samples belonging to the same class will have similar representations, and thereby the learned sparse representations can carry more discriminative power. Experimental results validate the effectiveness of kRICA and d-kRICA for image classification.
[ "Yanhui Xiao, Zhenfeng Zhu, Yao Zhao", "['Yanhui Xiao' 'Zhenfeng Zhu' 'Yao Zhao']" ]
cond-mat.dis-nn cond-mat.stat-mech cs.LG
10.1088/1751-8113/46/37/375002
1304.2850
null
null
http://arxiv.org/abs/1304.2850v2
2013-08-09T02:15:12Z
2013-04-10T06:17:07Z
Entropy landscape of solutions in the binary perceptron problem
The statistical picture of the solution space for a binary perceptron is studied. The binary perceptron learns a random classification of input random patterns by a set of binary synaptic weights. The learning of this network is difficult, especially when the pattern (constraint) density is close to the capacity, which is supposed to be intimately related to the structure of the solution space. The geometrical organization is elucidated by the entropy landscape from a reference configuration and of solution-pairs separated by a given Hamming distance in the solution space. We evaluate the entropy at the annealed level as well as the replica symmetric level, and the mean field result is confirmed by numerical simulations on single instances using the proposed message passing algorithms. From the first landscape (a random configuration as a reference), we see clearly how the solution space shrinks as more constraints are added. From the second landscape of solution-pairs, we deduce the coexistence of clustering and freezing in the solution space.
[ "Haiping Huang, K. Y. Michael Wong and Yoshiyuki Kabashima", "['Haiping Huang' 'K. Y. Michael Wong' 'Yoshiyuki Kabashima']" ]
stat.AP cs.LG stat.ML
null
1304.2865
null
null
http://arxiv.org/pdf/1304.2865v1
2013-04-10T07:32:31Z
2013-04-10T07:32:31Z
The BOSARIS Toolkit: Theory, Algorithms and Code for Surviving the New DCF
The change of two orders of magnitude in the 'new DCF' of NIST's SRE'10, relative to the 'old DCF' evaluation criterion, posed a difficult challenge for participants and evaluator alike. Initially, participants were at a loss as to how to calibrate their systems, while the evaluator underestimated the required number of evaluation trials. After the fact, it is now obvious that both calibration and evaluation require very large sets of trials. This poses the challenges of (i) how to decide what number of trials is enough, and (ii) how to process such large data sets with reasonable memory and CPU requirements. After SRE'10, at the BOSARIS Workshop, we built solutions to these problems into the freely available BOSARIS Toolkit. This paper explains the principles and algorithms behind this toolkit. The main contributions of the toolkit are: 1. The Normalized Bayes Error-Rate Plot, which analyses likelihood-ratio calibration over a wide range of DCF operating points. These plots also help in judging the adequacy of the sizes of calibration and evaluation databases. 2. Efficient algorithms to compute DCF and minDCF for large score files, over the range of operating points required by these plots. 3. A new score file format, which facilitates working with very large trial lists. 4. A faster logistic regression optimizer for fusion and calibration. 5. A principled way to define EER (equal error rate), which is of practical interest when the absolute error count is small.
[ "['Niko Brümmer' 'Edward de Villiers']", "Niko Br\\\"ummer and Edward de Villiers" ]
cs.LG
null
1304.2994
null
null
http://arxiv.org/pdf/1304.2994v3
2014-07-14T01:05:45Z
2013-04-10T15:26:13Z
A Generalized Online Mirror Descent with Applications to Classification and Regression
Online learning algorithms are fast, memory-efficient, easy to implement, and applicable to many prediction problems, including classification, regression, and ranking. Several online algorithms were proposed in the past few decades, some based on additive updates, like the Perceptron, and some on multiplicative updates, like Winnow. A unifying perspective on the design and the analysis of online algorithms is provided by online mirror descent, a general prediction strategy from which most first-order algorithms can be obtained as special cases. We generalize online mirror descent to time-varying regularizers with generic updates. Unlike standard mirror descent, our more general formulation also captures second order algorithms, algorithms for composite losses and algorithms for adaptive filtering. Moreover, we recover, and sometimes improve, known regret bounds as special cases of our analysis using specific regularizers. Finally, we show the power of our approach by deriving a new second order algorithm with a regret bound invariant with respect to arbitrary rescalings of individual features.
[ "['Francesco Orabona' 'Koby Crammer' 'Nicolò Cesa-Bianchi']", "Francesco Orabona, Koby Crammer, Nicol\\`o Cesa-Bianchi" ]
stat.ML cs.LG
null
1304.3285
null
null
http://arxiv.org/pdf/1304.3285v4
2013-07-24T19:20:15Z
2013-04-11T13:20:51Z
Scaling the Indian Buffet Process via Submodular Maximization
Inference for latent feature models is inherently difficult as the inference space grows exponentially with the size of the input data and number of latent features. In this work, we use Kurihara & Welling (2008)'s maximization-expectation framework to perform approximate MAP inference for linear-Gaussian latent feature models with an Indian Buffet Process (IBP) prior. This formulation yields a submodular function of the features that corresponds to a lower bound on the model evidence. By adding a constant to this function, we obtain a nonnegative submodular function that can be maximized via a greedy algorithm that obtains at least a one-third approximation to the optimal solution. Our inference method scales linearly with the size of the input data, and we show the efficacy of our method on the largest datasets currently analyzed using an IBP model.
[ "Colorado Reed and Zoubin Ghahramani", "['Colorado Reed' 'Zoubin Ghahramani']" ]
cs.LG math.ST stat.TH
null
1304.3345
null
null
http://arxiv.org/pdf/1304.3345v1
2013-04-11T15:44:18Z
2013-04-11T15:44:18Z
Probabilistic Classification using Fuzzy Support Vector Machines
In medical applications such as recognizing the type of a tumor as Malignant or Benign, a wrong diagnosis can be devastating. Methods like Fuzzy Support Vector Machines (FSVM) try to reduce the effect of misplaced training points by assigning a lower weight to the outliers. However, there are still uncertain points which are similar to both classes and assigning a class by the given information will cause errors. In this paper, we propose a two-phase classification method which probabilistically assigns the uncertain points to each of the classes. The proposed method is applied to the Breast Cancer Wisconsin (Diagnostic) Dataset which consists of 569 instances in 2 classes of Malignant and Benign. This method assigns certain instances to their appropriate classes with probability of one, and the uncertain instances to each of the classes with associated probabilities. Therefore, based on the degree of uncertainty, doctors can suggest further examinations before making the final diagnosis.
[ "Marzieh Parandehgheibi", "['Marzieh Parandehgheibi']" ]
cs.AI cs.CL cs.LG
null
1304.3432
null
null
http://arxiv.org/pdf/1304.3432v1
2013-03-27T19:56:55Z
2013-03-27T19:56:55Z
Machine Learning, Clustering, and Polymorphy
This paper describes a machine induction program (WITT) that attempts to model human categorization. Properties of categories to which human subjects are sensitive include best or prototypical members, relative contrasts between putative categories, and polymorphy (neither necessary nor sufficient features). This approach represents an alternative to the usual Artificial Intelligence approaches to generalization and conceptual clustering, which tend to focus on necessary and sufficient feature rules, equivalence classes, and simple search and match schemes. WITT is shown to be more consistent with human categorization while potentially including results produced by more traditional clustering schemes. Applications of this approach in the domains of expert systems and information retrieval are also discussed.
[ "Stephen Jose Hanson, Malcolm Bauer", "['Stephen Jose Hanson' 'Malcolm Bauer']" ]
stat.ML cs.LG stat.AP
null
1304.3568
null
null
http://arxiv.org/pdf/1304.3568v1
2013-04-12T08:47:38Z
2013-04-12T08:47:38Z
Distributed dictionary learning over a sensor network
We consider the problem of distributed dictionary learning, where a set of nodes is required to collectively learn a common dictionary from noisy measurements. This approach may be useful in several contexts including sensor networks. Diffusion cooperation schemes have been proposed to solve the distributed linear regression problem. In this work we focus on a diffusion-based adaptive dictionary learning strategy: each node records observations and cooperates with its neighbors by sharing its local dictionary. The resulting algorithm corresponds to a distributed block coordinate descent (alternate optimization). Beyond dictionary learning, this strategy could be adapted to many matrix factorization problems and generalized to various settings. This article presents our approach and illustrates its efficiency on some numerical examples.
[ "['Pierre Chainais' 'Cédric Richard']", "Pierre Chainais and C\\'edric Richard" ]
cs.LG stat.ML
null
1304.3708
null
null
http://arxiv.org/pdf/1304.3708v1
2013-04-12T19:09:56Z
2013-04-12T19:09:56Z
Advice-Efficient Prediction with Expert Advice
Advice-efficient prediction with expert advice (in analogy to label-efficient prediction) is a variant of prediction with expert advice game, where on each round of the game we are allowed to ask for advice of a limited number $M$ out of $N$ experts. This setting is especially interesting when asking for advice of every expert on every round is expensive. We present an algorithm for advice-efficient prediction with expert advice that achieves $O(\sqrt{\frac{N}{M}T\ln N})$ regret on $T$ rounds of the game.
[ "Yevgeny Seldin and Peter Bartlett and Koby Crammer", "['Yevgeny Seldin' 'Peter Bartlett' 'Koby Crammer']" ]
cs.LG stat.ML
10.5121/ijdkp.2013.3207
1304.3745
null
null
http://arxiv.org/abs/1304.3745v1
2013-04-12T22:23:53Z
2013-04-12T22:23:53Z
Towards more accurate clustering method by using dynamic time warping
An intrinsic problem of classifiers based on machine learning (ML) methods is that their learning time grows as the size and complexity of the training dataset increases. For this reason, it is important to have efficient computational methods and algorithms that can be applied to large datasets, such that it is still possible to complete the machine learning tasks in reasonable time. In this context, we present in this paper a more accurate, simple process to speed up ML methods. An unsupervised clustering algorithm is combined with the Expectation-Maximization (EM) algorithm to develop an efficient Hidden Markov Model (HMM) training procedure. The idea of the proposed process consists of two steps. In the first step, training instances with similar inputs are clustered and a weight factor which represents the frequency of these instances is assigned to each representative cluster. The Dynamic Time Warping technique is used as a dissimilarity function to cluster similar examples. In the second step, all formulas in the classical HMM training algorithm (EM) associated with the number of training instances are modified to include the weight factor in the appropriate terms. This process significantly accelerates HMM training while maintaining the same initial, transition and emission probability matrices as those obtained with the classical HMM training algorithm. Accordingly, the classification accuracy is preserved. Depending on the size of the training set, speedups of up to 2200 times are possible when the size is about 100,000 instances. The proposed approach is not limited to training HMMs, but can be employed for a large variety of ML methods.
[ "['Khadoudja Ghanem']", "Khadoudja Ghanem" ]
stat.ME cs.LG q-bio.QM stat.AP stat.ML
null
1304.3760
null
null
http://arxiv.org/pdf/1304.3760v3
2016-09-21T23:59:53Z
2013-04-13T02:15:20Z
Identification of relevant subtypes via preweighted sparse clustering
Cluster analysis methods are used to identify homogeneous subgroups in a data set. In biomedical applications, one frequently applies cluster analysis in order to identify biologically interesting subgroups. In particular, one may wish to identify subgroups that are associated with a particular outcome of interest. Conventional clustering methods generally do not identify such subgroups, particularly when there are a large number of high-variance features in the data set. Conventional methods may identify clusters associated with these high-variance features when one wishes to obtain secondary clusters that are more interesting biologically or more strongly associated with a particular outcome of interest. A modification of sparse clustering can be used to identify such secondary clusters or clusters associated with an outcome of interest. This method correctly identifies such clusters of interest in several simulation scenarios. The method is also applied to a large prospective cohort study of temporomandibular disorders and a leukemia microarray data set.
[ "Sheila Gaynor and Eric Bair", "['Sheila Gaynor' 'Eric Bair']" ]
cs.LG
10.5120/11267-6526
1304.3840
null
null
http://arxiv.org/abs/1304.3840v1
2013-04-13T20:19:25Z
2013-04-13T20:19:25Z
A New Homogeneity Inter-Clusters Measure in Semi-Supervised Clustering
Many studies in data mining have proposed a new type of learning called semi-supervised learning. This type of learning combines unlabeled data with labeled data, which are hard to obtain. In unsupervised methods, by contrast, only unlabeled data are used. The significance and effectiveness of semi-supervised clustering results are becoming of major importance. This paper pursues the thesis that much greater accuracy can be achieved in such clustering by improving the similarity computation. Hence, we introduce a new approach to semi-supervised clustering using an innovative new homogeneity measure of the generated clusters. Our experimental results demonstrate significantly improved accuracy as a result.
[ "Badreddine Meftahi, Ourida Ben Boubaker Saidi", "['Badreddine Meftahi' 'Ourida Ben Boubaker Saidi']" ]
stat.ME cs.CV cs.LG
null
1304.4077
null
null
http://arxiv.org/pdf/1304.4077v2
2013-05-31T16:57:33Z
2013-04-15T12:54:52Z
A new Bayesian ensemble of trees classifier for identifying multi-class labels in satellite images
Classification of satellite images is a key component of many remote sensing applications. One of the most important products of a raw satellite image is the classified map which labels the image pixels into meaningful classes. Though several parametric and non-parametric classifiers have been developed thus far, accurate labeling of the pixels still remains a challenge. In this paper, we propose a new reliable multiclass-classifier for identifying class labels of a satellite image in remote sensing applications. The proposed multiclass-classifier is a generalization of a binary classifier based on the flexible ensemble of regression trees model called Bayesian Additive Regression Trees (BART). We used three small areas from the LANDSAT 5 TM image, acquired on August 15, 2009 (path/row: 08/29, L1T product, UTM map projection) over Kings County, Nova Scotia, Canada to classify the land-use. Several prediction accuracy and uncertainty measures have been used to compare the reliability of the proposed classifier with the state-of-the-art classifiers in remote sensing.
[ "Reshu Agarwal, Pritam Ranjan, Hugh Chipman", "['Reshu Agarwal' 'Pritam Ranjan' 'Hugh Chipman']" ]
cs.LG cs.CV stat.ML
10.1007/978-3-642-33709-3_16
1304.4344
null
null
http://arxiv.org/abs/1304.4344v1
2013-04-16T06:47:03Z
2013-04-16T06:47:03Z
Sparse Coding and Dictionary Learning for Symmetric Positive Definite Matrices: A Kernel Approach
Recent advances suggest that a wide range of computer vision problems can be addressed more appropriately by considering non-Euclidean geometry. This paper tackles the problem of sparse coding and dictionary learning in the space of symmetric positive definite matrices, which form a Riemannian manifold. With the aid of the recently introduced Stein kernel (related to a symmetric version of Bregman matrix divergence), we propose to perform sparse coding by embedding Riemannian manifolds into reproducing kernel Hilbert spaces. This leads to a convex and kernel version of the Lasso problem, which can be solved efficiently. We furthermore propose an algorithm for learning a Riemannian dictionary (used for sparse coding), closely tied to the Stein kernel. Experiments on several classification tasks (face recognition, texture classification, person re-identification) show that the proposed sparse coding approach achieves notable improvements in discrimination accuracy, in comparison to state-of-the-art methods such as tensor sparse coding, Riemannian locality preserving projection, and symmetry-driven accumulation of local features.
[ "Mehrtash T. Harandi, Conrad Sanderson, Richard Hartley, Brian C.\n Lovell", "['Mehrtash T. Harandi' 'Conrad Sanderson' 'Richard Hartley'\n 'Brian C. Lovell']" ]
cs.IT cs.LG math.IT math.NA stat.ML
null
1304.4610
null
null
http://arxiv.org/pdf/1304.4610v2
2013-05-01T00:29:31Z
2013-04-16T20:26:15Z
Spectral Compressed Sensing via Structured Matrix Completion
The paper studies the problem of recovering a spectrally sparse object from a small number of time domain samples. Specifically, the object of interest with ambient dimension $n$ is assumed to be a mixture of $r$ complex multi-dimensional sinusoids, while the underlying frequencies can assume any value in the unit disk. Conventional compressed sensing paradigms suffer from the {\em basis mismatch} issue when imposing a discrete dictionary on the Fourier representation. To address this problem, we develop a novel nonparametric algorithm, called enhanced matrix completion (EMaC), based on structured matrix completion. The algorithm starts by arranging the data into a low-rank enhanced form with multi-fold Hankel structure, then attempts recovery via nuclear norm minimization. Under mild incoherence conditions, EMaC allows perfect recovery as soon as the number of samples exceeds the order of $\mathcal{O}(r\log^{2} n)$. We also show that, in many instances, accurate completion of a low-rank multi-fold Hankel matrix is possible when the number of observed entries is proportional to the information theoretical limits (except for a logarithmic gap). The robustness of EMaC against bounded noise and its applicability to super resolution are further demonstrated by numerical experiments.
[ "Yuxin Chen, Yuejie Chi", "['Yuxin Chen' 'Yuejie Chi']" ]
cs.DS cs.LG cs.LO
null
1304.4633
null
null
http://arxiv.org/pdf/1304.4633v1
2013-04-16T22:10:26Z
2013-04-16T22:10:26Z
PAC Quasi-automatizability of Resolution over Restricted Distributions
We consider principled alternatives to unsupervised learning in data mining by situating the learning task in the context of the subsequent analysis task. Specifically, we consider a query-answering (hypothesis-testing) task: In the combined task, we decide whether an input query formula is satisfied over a background distribution by using input examples directly, rather than invoking a two-stage process in which (i) rules over the distribution are learned by an unsupervised learning algorithm and (ii) a reasoning algorithm decides whether or not the query formula follows from the learned rules. In a previous work (2013), we observed that the learning task could satisfy numerous desirable criteria in this combined context -- effectively matching what could be achieved by agnostic learning of CNFs from partial information -- that are not known to be achievable directly. In this work, we show that likewise, there are reasoning tasks that are achievable in such a combined context that are not known to be achievable directly (and indeed, have been seriously conjectured to be impossible, cf. (Alekhnovich and Razborov, 2008)). Namely, we test for a resolution proof of the query formula of a given size in quasipolynomial time (that is, "quasi-automatizing" resolution). The learning setting we consider is a partial-information, restricted-distribution setting that generalizes learning parities over the uniform distribution from partial information, another task that is known not to be achievable directly in various models (cf. (Ben-David and Dichterman, 1998) and (Michael, 2010)).
[ "Brendan Juba", "['Brendan Juba']" ]
quant-ph cs.CC cs.LG
10.4230/LIPIcs.TQC.2013.50
1304.4642
null
null
http://arxiv.org/abs/1304.4642v1
2013-04-16T23:24:38Z
2013-04-16T23:24:38Z
Easy and hard functions for the Boolean hidden shift problem
We study the quantum query complexity of the Boolean hidden shift problem. Given oracle access to f(x+s) for a known Boolean function f, the task is to determine the n-bit string s. The quantum query complexity of this problem depends strongly on f. We demonstrate that the easiest instances of this problem correspond to bent functions, in the sense that an exact one-query algorithm exists if and only if the function is bent. We partially characterize the hardest instances, which include delta functions. Moreover, we show that the problem is easy for random functions, since two queries suffice. Our algorithm for random functions is based on performing the pretty good measurement on several copies of a certain state; its analysis relies on the Fourier transform. We also use this approach to improve the quantum rejection sampling approach to the Boolean hidden shift problem.
[ "['Andrew M. Childs' 'Robin Kothari' 'Maris Ozols' 'Martin Roetteler']", "Andrew M. Childs, Robin Kothari, Maris Ozols, Martin Roetteler" ]
cs.LG q-bio.QM stat.ML
10.1109/TIT.2019.2961814
1304.4806
null
null
http://arxiv.org/abs/1304.4806v4
2019-12-25T18:08:49Z
2013-04-17T13:06:59Z
Unsupervised model-free representation learning
Numerous control and learning problems face the situation where sequences of high-dimensional highly dependent data are available but no or little feedback is provided to the learner, which makes any inference rather challenging. To address this challenge, we formulate the following problem. Given a series of observations $X_0,\dots,X_n$ coming from a large (high-dimensional) space $\mathcal X$, find a representation function $f$ mapping $\mathcal X$ to a finite space $\mathcal Y$ such that the series $f(X_0),\dots,f(X_n)$ preserves as much information as possible about the original time-series dependence in $X_0,\dots,X_n$. We show that, for stationary time series, the function $f$ can be selected as the one maximizing a certain information criterion that we call time-series information. Some properties of this function are investigated, including its uniqueness and the consistency of its empirical estimates. Implications for the problem of optimal control are presented.
[ "Daniil Ryabko", "['Daniil Ryabko']" ]
cs.CV cs.LG cs.MM
null
1304.5063
null
null
http://arxiv.org/pdf/1304.5063v2
2013-04-26T11:49:10Z
2013-04-18T09:40:12Z
Combinaison d'information visuelle, conceptuelle, et contextuelle pour la construction automatique de hierarchies semantiques adaptees a l'annotation d'images
This paper proposes a new methodology to automatically build semantic hierarchies suitable for image annotation and classification. The building of the hierarchy is based on a new measure of semantic similarity. The proposed measure incorporates several sources of information: visual, conceptual and contextual, as defined in this paper. The aim is to provide a measure that best represents image semantics. We then propose rules based on this measure for the building of the final hierarchy, which explicitly encode hierarchical relationships between different concepts. The built hierarchy is then used in a semantic hierarchical classification framework for image annotation. Our experiments and results show that the built hierarchy improves classification results.
[ "Hichem Bannour and C\\'eline Hudelot", "['Hichem Bannour' 'Céline Hudelot']" ]
cs.IR cs.LG
null
1304.5168
null
null
http://arxiv.org/pdf/1304.5168v1
2013-04-18T15:57:34Z
2013-04-18T15:57:34Z
Image Retrieval based on Bag-of-Words model
This article gives a survey of the bag-of-words (BoW), or bag-of-features, model in image retrieval systems. In recent years, large-scale image retrieval has shown significant potential in both industry applications and research problems. As local descriptors like SIFT demonstrate great discriminative power in solving vision problems like object recognition, image classification and annotation, more and more state-of-the-art large-scale image retrieval systems rely on them. A common way to achieve this is to first quantize local descriptors into visual words, and then apply scalable textual indexing and retrieval schemes. We call this the bag-of-words or bag-of-features model. The goal of this survey is to give an overview of this model and to introduce different strategies for building systems based on it.
[ "['Jialu Liu']", "Jialu Liu" ]
cs.LG stat.ML
null
1304.5299
null
null
http://arxiv.org/pdf/1304.5299v4
2014-02-14T07:42:15Z
2013-04-19T02:51:52Z
Austerity in MCMC Land: Cutting the Metropolis-Hastings Budget
Can we make Bayesian posterior MCMC sampling more efficient when faced with very large datasets? We argue that computing the likelihood for N datapoints in the Metropolis-Hastings (MH) test to reach a single binary decision is computationally inefficient. We introduce an approximate MH rule based on a sequential hypothesis test that allows us to accept or reject samples with high confidence using only a fraction of the data required for the exact MH rule. While this method introduces an asymptotic bias, we show that this bias can be controlled and is more than offset by a decrease in variance due to our ability to draw more samples per unit of time.
[ "Anoop Korattikara, Yutian Chen, Max Welling", "['Anoop Korattikara' 'Yutian Chen' 'Max Welling']" ]
cs.LG stat.ML
10.1007/978-3-642-40988-2_15
1304.5350
null
null
http://arxiv.org/abs/1304.5350v3
2013-09-02T07:57:23Z
2013-04-19T09:11:34Z
Parallel Gaussian Process Optimization with Upper Confidence Bound and Pure Exploration
In this paper, we consider the challenge of maximizing an unknown function f for which evaluations are noisy and are acquired with high cost. An iterative procedure uses the previous measures to actively select the next estimation of f which is predicted to be the most useful. We focus on the case where the function can be evaluated in parallel with batches of fixed size and analyze the benefit compared to the purely sequential procedure in terms of cumulative regret. We introduce the Gaussian Process Upper Confidence Bound and Pure Exploration algorithm (GP-UCB-PE) which combines the UCB strategy and Pure Exploration in the same batch of evaluations along the parallel iterations. We prove theoretical upper bounds on the regret with batches of size K for this procedure which show the improvement of the order of sqrt{K} for fixed iteration cost over purely sequential versions. Moreover, the multiplicative constants involved have the property of being dimension-free. We also confirm empirically the efficiency of GP-UCB-PE on real and synthetic problems compared to state-of-the-art competitors.
[ "['Emile Contal' 'David Buffoni' 'Alexandre Robicquet' 'Nicolas Vayatis']", "Emile Contal and David Buffoni and Alexandre Robicquet and Nicolas\n Vayatis" ]
cs.IR cs.DL cs.LG
null
1304.5457
null
null
http://arxiv.org/pdf/1304.5457v1
2013-04-19T15:53:53Z
2013-04-19T15:53:53Z
Personalized Academic Research Paper Recommendation System
A huge number of academic papers are coming out of many conferences and journals these days. In these circumstances, most researchers rely on keyword-based search or on browsing through the proceedings of top conferences and journals to find related work. To ease this difficulty, we propose a Personalized Academic Research Paper Recommendation System, which recommends related articles that may be of interest to each researcher. In this paper, we first introduce our web crawler for retrieving research papers from the web. Then, we define similarity between two research papers based on the text similarity between them. Finally, we propose our recommender system, developed using collaborative filtering methods. Our evaluation results demonstrate that our system recommends good-quality research papers.
[ "Joonseok Lee, Kisung Lee, Jennifer G. Kim", "['Joonseok Lee' 'Kisung Lee' 'Jennifer G. Kim']" ]
cs.LG stat.ML
null
1304.5504
null
null
http://arxiv.org/pdf/1304.5504v6
2016-05-24T05:07:06Z
2013-04-19T18:51:07Z
Optimal Stochastic Strongly Convex Optimization with a Logarithmic Number of Projections
We consider stochastic strongly convex optimization with a complex inequality constraint. This complex inequality constraint may lead to computationally expensive projections in algorithmic iterations of the stochastic gradient descent (SGD) methods. To reduce the computation costs pertaining to the projections, we propose an Epoch-Projection Stochastic Gradient Descent (Epro-SGD) method. The proposed Epro-SGD method consists of a sequence of epochs; it applies SGD to an augmented objective function at each iteration within the epoch, and then performs a projection at the end of each epoch. Given a strongly convex optimization and for a total number of $T$ iterations, Epro-SGD requires only $\log(T)$ projections, and meanwhile attains an optimal convergence rate of $O(1/T)$, both in expectation and with a high probability. To exploit the structure of the optimization problem, we propose a proximal variant of Epro-SGD, namely Epro-ORDA, based on the optimal regularized dual averaging method. We apply the proposed methods on real-world applications; the empirical results demonstrate the effectiveness of our methods.
[ "['Jianhui Chen' 'Tianbao Yang' 'Qihang Lin' 'Lijun Zhang' 'Yi Chang']", "Jianhui Chen, Tianbao Yang, Qihang Lin, Lijun Zhang, Yi Chang" ]
cs.LG stat.ML
null
1304.5575
null
null
http://arxiv.org/pdf/1304.5575v2
2013-04-25T11:46:51Z
2013-04-20T00:57:35Z
Inverse Density as an Inverse Problem: The Fredholm Equation Approach
In this paper we address the problem of estimating the ratio $\frac{q}{p}$ where $p$ is a density function and $q$ is another density, or, more generally, an arbitrary function. Knowing or approximating this ratio is needed in various problems of inference and integration, in particular, when one needs to average a function with respect to one probability distribution, given a sample from another. It is often referred to as {\it importance sampling} in statistical inference and is also closely related to the problem of {\it covariate shift} in transfer learning as well as to various MCMC methods. It may also be useful for separating the underlying geometry of a space, say a manifold, from the density function defined on it. Our approach is based on reformulating the problem of estimating $\frac{q}{p}$ as an inverse problem in terms of an integral operator corresponding to a kernel, and thus reducing it to an integral equation, known as the Fredholm problem of the first kind. This formulation, combined with the techniques of regularization and kernel methods, leads to a principled kernel-based framework for constructing algorithms and for analyzing them theoretically. The resulting family of algorithms (FIRE, for Fredholm Inverse Regularized Estimator) is flexible, simple and easy to implement. We provide detailed theoretical analysis including concentration bounds and convergence rates for the Gaussian kernel in the case of densities defined on $\mathbb{R}^d$, compact domains in $\mathbb{R}^d$ and smooth $d$-dimensional sub-manifolds of the Euclidean space. We also show experimental results including applications to classification and semi-supervised learning within the covariate shift framework and demonstrate some encouraging experimental comparisons. We also show how the parameters of our algorithms can be chosen in a completely unsupervised manner.
[ "['Qichao Que' 'Mikhail Belkin']", "Qichao Que and Mikhail Belkin" ]
cs.CV cs.DC cs.LG stat.ML
null
1304.5583
null
null
http://arxiv.org/pdf/1304.5583v2
2013-10-16T02:55:18Z
2013-04-20T03:54:48Z
Distributed Low-rank Subspace Segmentation
Vision problems ranging from image clustering to motion segmentation to semi-supervised learning can naturally be framed as subspace segmentation problems, in which one aims to recover multiple low-dimensional subspaces from noisy and corrupted input data. Low-Rank Representation (LRR), a convex formulation of the subspace segmentation problem, is provably and empirically accurate on small problems but does not scale to the massive sizes of modern vision datasets. Moreover, past work aimed at scaling up low-rank matrix factorization is not applicable to LRR given its non-decomposable constraints. In this work, we propose a novel divide-and-conquer algorithm for large-scale subspace segmentation that can cope with LRR's non-decomposable constraints and maintains LRR's strong recovery guarantees. This has immediate implications for the scalability of subspace segmentation, which we demonstrate on a benchmark face recognition dataset and in simulations. We then introduce novel applications of LRR-based subspace segmentation to large-scale semi-supervised learning for multimedia event detection, concept detection, and image tagging. In each case, we obtain state-of-the-art results and order-of-magnitude speed ups.
[ "Ameet Talwalkar, Lester Mackey, Yadong Mu, Shih-Fu Chang, Michael I.\n Jordan", "['Ameet Talwalkar' 'Lester Mackey' 'Yadong Mu' 'Shih-Fu Chang'\n 'Michael I. Jordan']" ]
cs.LG
null
1304.5634
null
null
http://arxiv.org/pdf/1304.5634v1
2013-04-20T14:43:35Z
2013-04-20T14:43:35Z
A Survey on Multi-view Learning
In recent years, a great many methods of learning from multi-view data by considering the diversity of different views have been proposed. These views may be obtained from multiple sources or different feature subsets. In trying to organize and highlight similarities and differences between the variety of multi-view learning approaches, we review a number of representative multi-view learning algorithms in different areas and classify them into three groups: 1) co-training, 2) multiple kernel learning, and 3) subspace learning. Notably, co-training style algorithms train alternately to maximize the mutual agreement on two distinct views of the data; multiple kernel learning algorithms exploit kernels that naturally correspond to different views and combine kernels either linearly or non-linearly to improve learning performance; and subspace learning algorithms aim to obtain a latent subspace shared by multiple views by assuming that the input views are generated from this latent subspace. Though there is significant variance in the approaches to integrating multiple views to improve learning performance, they mainly exploit either the consensus principle or the complementary principle to ensure the success of multi-view learning. Since access to multiple views is fundamental to multi-view learning, in addition to studying how to learn a model from multiple views, it is also valuable to study how to construct multiple views and how to evaluate them. Overall, by exploring the consistency and complementary properties of different views, multi-view learning is rendered more effective and more promising, and has better generalization ability than single-view learning.
[ "Chang Xu, Dacheng Tao, Chao Xu", "['Chang Xu' 'Dacheng Tao' 'Chao Xu']" ]
cs.LG stat.ML
null
1304.5678
null
null
http://arxiv.org/pdf/1304.5678v1
2013-04-20T23:38:45Z
2013-04-20T23:38:45Z
Analytic Feature Selection for Support Vector Machines
Support vector machines (SVMs) rely on the inherent geometry of a data set to classify training data. Because of this, we believe SVMs are an excellent candidate to guide the development of an analytic feature selection algorithm, as opposed to the more commonly used heuristic methods. We propose a filter-based feature selection algorithm based on the inherent geometry of a feature set. Through observation, we identified six geometric properties that differ between optimal and suboptimal feature sets, and have statistically significant correlations to classifier performance. Our algorithm is based on logistic and linear regression models using these six geometric properties as predictor variables. The proposed algorithm achieves excellent results on high dimensional text data sets, with features that can be organized into a handful of feature types; for example, unigrams, bigrams or semantic structural features. We believe this algorithm is a novel and effective approach to solving the feature selection problem for linear SVMs.
[ "Carly Stambaugh, Hui Yang, Felix Breuer", "['Carly Stambaugh' 'Hui Yang' 'Felix Breuer']" ]
stat.ML cs.LG
null
1304.5758
null
null
http://arxiv.org/pdf/1304.5758v2
2013-10-03T00:48:53Z
2013-04-21T15:58:56Z
Prior-free and prior-dependent regret bounds for Thompson Sampling
We consider the stochastic multi-armed bandit problem with a prior distribution on the reward distributions. We are interested in studying prior-free and prior-dependent regret bounds, very much in the same spirit as the usual distribution-free and distribution-dependent bounds for the non-Bayesian stochastic bandit. Building on the techniques of Audibert and Bubeck [2009] and Russo and Van Roy [2013], we first show that Thompson Sampling attains an optimal prior-free bound in the sense that for any prior distribution its Bayesian regret is bounded from above by $14 \sqrt{n K}$. This result is unimprovable in the sense that there exists a prior distribution such that any algorithm has a Bayesian regret bounded from below by $\frac{1}{20} \sqrt{n K}$. We also study the case of priors for the setting of Bubeck et al. [2013] (where the optimal mean is known as well as a lower bound on the smallest gap) and we show that in this case the regret of Thompson Sampling is in fact uniformly bounded over time, thus showing that Thompson Sampling can greatly take advantage of the nice properties of these priors.
[ "['Sébastien Bubeck' 'Che-Yu Liu']", "S\\'ebastien Bubeck and Che-Yu Liu" ]
cs.LG
null
1304.5793
null
null
http://arxiv.org/pdf/1304.5793v4
2014-08-22T14:59:13Z
2013-04-21T20:03:23Z
Continuum armed bandit problem of few variables in high dimensions
We consider the stochastic and adversarial settings of continuum armed bandits where the arms are indexed by $[0,1]^d$. The reward functions $r:[0,1]^d \to \mathbb{R}$ are assumed to intrinsically depend on at most $k$ coordinate variables, implying $r(x_1,\dots,x_d) = g(x_{i_1},\dots,x_{i_k})$ for distinct and unknown $i_1,\dots,i_k \in \{1,\dots,d\}$ and some locally Hölder continuous $g:[0,1]^k \to \mathbb{R}$ with exponent $0 < \alpha \le 1$. Firstly, assuming $(i_1,\dots,i_k)$ to be fixed across time, we propose a simple modification of the CAB1 algorithm where we construct the discrete set of sampling points to obtain a bound of $O\big(n^{\frac{\alpha+k}{2\alpha+k}} (\log n)^{\frac{\alpha}{2\alpha+k}} C(k,d)\big)$ on the regret, with $C(k,d)$ depending at most polynomially in $k$ and sub-logarithmically in $d$. The construction is based on creating partitions of $\{1,\dots,d\}$ into $k$ disjoint subsets and is probabilistic, hence our result holds with high probability. Secondly, we extend our results to also handle the more general case where $(i_1,\dots,i_k)$ can change over time, and derive regret bounds for the same.
[ "['Hemant Tyagi' 'Bernd Gärtner']", "Hemant Tyagi and Bernd G\\\"artner" ]
cs.LG cs.SD stat.ML
null
1304.5862
null
null
http://arxiv.org/pdf/1304.5862v2
2013-05-29T17:36:07Z
2013-04-22T07:44:05Z
Multi-Label Classifier Chains for Bird Sound
Bird sound data collected with unattended microphones for automatic surveys, or mobile devices for citizen science, typically contain multiple simultaneously vocalizing birds of different species. However, few works have considered the multi-label structure in birdsong. We propose to use an ensemble of classifier chains combined with a histogram-of-segments representation for multi-label classification of birdsong. The proposed method is compared with binary relevance and three multi-instance multi-label learning (MIML) algorithms from prior work (which focus more on structure in the sound, and less on structure in the label sets). Experiments are conducted on two real-world birdsong datasets, and show that the proposed method usually outperforms binary relevance (using the same features and base-classifier), and is better in some cases and worse in others compared to the MIML algorithms.
[ "['Forrest Briggs' 'Xiaoli Z. Fern' 'Jed Irvine']", "Forrest Briggs, Xiaoli Z. Fern, Jed Irvine" ]
cs.CV cs.LG
null
1304.5894
null
null
http://arxiv.org/pdf/1304.5894v2
2013-04-23T09:00:01Z
2013-04-22T09:46:47Z
Bayesian crack detection in ultra high resolution multimodal images of paintings
The preservation of our cultural heritage is of paramount importance. Thanks to recent developments in digital acquisition techniques, powerful image analysis algorithms have been developed that can serve as useful non-invasive tools to assist in the restoration and preservation of art. In this paper we propose a semi-supervised crack detection method that can be used for high-dimensional acquisitions of paintings coming from different modalities. Our dataset consists of a recently acquired collection of images of the Ghent Altarpiece (1432), one of Northern Europe's most important art masterpieces. Our goal is to build a classifier that is able to discern crack pixels from the background consisting of non-crack pixels, making optimal use of the information provided by each modality. To accomplish this we employ a recently developed non-parametric Bayesian classifier that uses tensor factorizations to characterize any conditional probability. A prior is placed on the parameters of the factorization such that every possible interaction between predictors is allowed while still identifying a sparse subset among these predictors. The proposed Bayesian classifier, which we refer to as conditional Bayesian tensor factorization or CBTF, is assessed by visually comparing classification results with the Random Forest (RF) algorithm.
[ "Bruno Cornelis, Yun Yang, Joshua T. Vogelstein, Ann Dooms, Ingrid\n Daubechies, David Dunson", "['Bruno Cornelis' 'Yun Yang' 'Joshua T. Vogelstein' 'Ann Dooms'\n 'Ingrid Daubechies' 'David Dunson']" ]
cs.SI cs.LG physics.soc-ph stat.ME
10.1007/978-3-642-37210-0_22
1304.5974
null
null
http://arxiv.org/abs/1304.5974v1
2013-04-22T15:07:19Z
2013-04-22T15:07:19Z
Dynamic stochastic blockmodels: Statistical models for time-evolving networks
Significant efforts have gone into the development of statistical models for analyzing data in the form of networks, such as social networks. Most existing work has focused on modeling static networks, which represent either a single time snapshot or an aggregate view over time. There has been recent interest in statistical modeling of dynamic networks, which are observed at multiple points in time and offer a richer representation of many complex phenomena. In this paper, we propose a state-space model for dynamic networks that extends the well-known stochastic blockmodel for static networks to the dynamic setting. We then propose a procedure to fit the model using a modification of the extended Kalman filter augmented with a local search. We apply the procedure to analyze a dynamic social network of email communication.
[ "['Kevin S. Xu' 'Alfred O. Hero III']", "Kevin S. Xu and Alfred O. Hero III" ]
cs.LG cs.AI
null
1304.6383
null
null
http://arxiv.org/pdf/1304.6383v2
2014-01-25T10:46:53Z
2013-04-23T19:24:02Z
The Stochastic Gradient Descent for the Primal L1-SVM Optimization Revisited
We reconsider the stochastic (sub)gradient approach to the unconstrained primal L1-SVM optimization. We observe that if the learning rate is inversely proportional to the number of steps, i.e., the number of times any training pattern is presented to the algorithm, the update rule may be transformed into that of the classical perceptron with margin, in which the margin threshold increases linearly with the number of steps. Moreover, if we cycle repeatedly through the (possibly randomly permuted) training set, the dual variables, defined naturally via the expansion of the weight vector as a linear combination of the patterns on which margin errors were made, are shown to automatically satisfy, at the end of each complete cycle, the box constraints arising in the dual optimization. This renders the dual Lagrangian a running lower bound on the primal objective, which tends to it at the optimum, and makes available an upper bound on the relative accuracy achieved, providing a meaningful stopping criterion. In addition, we propose a mechanism for presenting the same pattern repeatedly to the algorithm that maintains the above properties. Finally, we give experimental evidence that algorithms constructed along these lines exhibit considerably improved performance.
[ "['Constantinos Panagiotakopoulos' 'Petroula Tsampouka']", "Constantinos Panagiotakopoulos and Petroula Tsampouka" ]
cs.LG stat.ME stat.ML
null
1304.6478
null
null
http://arxiv.org/pdf/1304.6478v1
2013-04-24T03:59:39Z
2013-04-24T03:59:39Z
The K-modes algorithm for clustering
Many clustering algorithms exist that estimate a cluster centroid, such as K-means, K-medoids or mean-shift, but no algorithm seems to exist that clusters data by returning exactly K meaningful modes. We propose a natural definition of a K-modes objective function by combining the notions of density and cluster assignment. The algorithm becomes K-means and K-medoids in the limit of very large and very small scales. Computationally, it is slightly slower than K-means but much faster than mean-shift or K-medoids. Unlike K-means, it is able to find centroids that are valid patterns, truly representative of a cluster, even with nonconvex clusters, and appears robust to outliers and misspecification of the scale and number of clusters.
[ "Miguel \\'A. Carreira-Perpi\\~n\\'an and Weiran Wang", "['Miguel Á. Carreira-Perpiñán' 'Weiran Wang']" ]
cs.LG cs.IR stat.ML
null
1304.6480
null
null
http://arxiv.org/pdf/1304.6480v1
2013-04-24T04:08:23Z
2013-04-24T04:08:23Z
A Theoretical Analysis of NDCG Type Ranking Measures
A central problem in ranking is to design a ranking measure for the evaluation of ranking functions. In this paper we study, from a theoretical perspective, the widely used Normalized Discounted Cumulative Gain (NDCG)-type ranking measures. Although there are extensive empirical studies of NDCG, little is known about its theoretical properties. We first show that, whatever the ranking function is, the standard NDCG, which adopts a logarithmic discount, converges to 1 as the number of items to rank goes to infinity. At first sight, this result is very surprising. It seems to imply that NDCG cannot differentiate good and bad ranking functions, contradicting the empirical success of NDCG in many applications. In order to have a deeper understanding of ranking measures in general, we propose a notion referred to as consistent distinguishability. This notion captures the intuition that a ranking measure should have the following property: for every pair of substantially different ranking functions, the ranking measure can decide which one is better in a consistent manner on almost all datasets. We show that NDCG with logarithmic discount has consistent distinguishability although it converges to the same limit for all ranking functions. We next characterize the set of all feasible discount functions for NDCG according to the concept of consistent distinguishability. Specifically, we show that whether NDCG has consistent distinguishability depends on how fast the discount decays, and $1/r$ is a critical point. We then turn to the cut-off version of NDCG, i.e., NDCG@k. We analyze the distinguishability of NDCG@k for various choices of $k$ and the discount functions. Experimental results on real Web search datasets agree well with the theory.
[ "['Yining Wang' 'Liwei Wang' 'Yuanzhi Li' 'Di He' 'Tie-Yan Liu' 'Wei Chen']", "Yining Wang, Liwei Wang, Yuanzhi Li, Di He, Tie-Yan Liu, Wei Chen" ]
cs.LG stat.ML
null
1304.6487
null
null
http://arxiv.org/pdf/1304.6487v2
2017-05-16T13:28:28Z
2013-04-24T06:37:07Z
Locally linear representation for image clustering
Constructing a similarity graph is a key step in graph-oriented subspace learning and clustering. In a similarity graph, each vertex denotes a data point and the edge weight represents the similarity between two points. There are two popular schemes for constructing a similarity graph, i.e., the pairwise distance based scheme and the linear representation based scheme. Most existing works involve only one of these schemes and suffer from some limitations. Specifically, pairwise distance based methods are sensitive to noise and outliers compared with linear representation based methods. On the other hand, linear representation based algorithms may wrongly select inter-subspace points to represent a point, which degrades performance. In this paper, we propose an algorithm, called Locally Linear Representation (LLR), which integrates pairwise distance with linear representation to address these problems. The proposed algorithm automatically encodes each data point over a set of points that not only represent the objective point with less residual error, but also are close to the point in Euclidean space. The experimental results show that our approach is promising in subspace learning and subspace clustering.
[ "['Liangli Zhen' 'Zhang Yi' 'Xi Peng' 'Dezhong Peng']", "Liangli Zhen, Zhang Yi, Xi Peng, Dezhong Peng" ]
math.OC cs.LG stat.ML
10.1109/CDC.2011.6160810
1304.6663
null
null
http://arxiv.org/abs/1304.6663v2
2013-04-25T09:26:19Z
2013-04-24T16:52:34Z
Low-rank optimization for distance matrix completion
This paper addresses the problem of low-rank distance matrix completion. This problem amounts to recover the missing entries of a distance matrix when the dimension of the data embedding space is possibly unknown but small compared to the number of considered data points. The focus is on high-dimensional problems. We recast the considered problem into an optimization problem over the set of low-rank positive semidefinite matrices and propose two efficient algorithms for low-rank distance matrix completion. In addition, we propose a strategy to determine the dimension of the embedding space. The resulting algorithms scale to high-dimensional problems and monotonically converge to a global solution of the problem. Finally, numerical experiments illustrate the good performance of the proposed algorithms on benchmarks.
[ "['B. Mishra' 'G. Meyer' 'R. Sepulchre']", "B. Mishra, G. Meyer and R. Sepulchre" ]
cs.AI cs.LG cs.LO
10.1017/S1471068414000076
1304.6810
null
null
http://arxiv.org/abs/1304.6810v1
2013-04-25T06:10:55Z
2013-04-25T06:10:55Z
Inference and learning in probabilistic logic programs using weighted Boolean formulas
Probabilistic logic programs are logic programs in which some of the facts are annotated with probabilities. This paper investigates how classical inference and learning tasks known from the graphical model community can be tackled for probabilistic logic programs. Several such tasks such as computing the marginals given evidence and learning from (partial) interpretations have not really been addressed for probabilistic logic programs before. The first contribution of this paper is a suite of efficient algorithms for various inference tasks. It is based on a conversion of the program and the queries and evidence to a weighted Boolean formula. This allows us to reduce the inference tasks to well-studied tasks such as weighted model counting, which can be solved using state-of-the-art methods known from the graphical model and knowledge compilation literature. The second contribution is an algorithm for parameter estimation in the learning from interpretations setting. The algorithm employs Expectation Maximization, and is built on top of the developed inference algorithms. The proposed approach is experimentally evaluated. The results show that the inference algorithms improve upon the state-of-the-art in probabilistic logic programming and that it is indeed possible to learn the parameters of a probabilistic logic program from interpretations.
[ "['Daan Fierens' 'Guy Van den Broeck' 'Joris Renkens' 'Dimitar Shterionov'\n 'Bernd Gutmann' 'Ingo Thon' 'Gerda Janssens' 'Luc De Raedt']", "Daan Fierens, Guy Van den Broeck, Joris Renkens, Dimitar Shterionov,\n Bernd Gutmann, Ingo Thon, Gerda Janssens, Luc De Raedt" ]
cs.LG cs.CV cs.MS
null
1304.6899
null
null
http://arxiv.org/pdf/1304.6899v1
2013-04-25T12:59:31Z
2013-04-25T12:59:31Z
An implementation of the relational k-means algorithm
A C# implementation of a generalized k-means variant called relational k-means is described here. Relational k-means is a generalization of the well-known k-means clustering method which works for non-Euclidean scenarios as well. The input is an arbitrary distance matrix, as opposed to the traditional k-means method, where the clustered objects need to be identified with vectors.
[ "['Balázs Szalkai']", "Bal\\'azs Szalkai" ]
cs.LG cs.AI stat.ML
null
1304.7045
null
null
http://arxiv.org/pdf/1304.7045v2
2014-02-20T13:14:59Z
2013-04-26T00:35:37Z
An Algorithm for Training Polynomial Networks
We consider deep neural networks, in which the output of each node is a quadratic function of its inputs. Similar to other deep architectures, these networks can compactly represent any function on a finite training set. The main goal of this paper is the derivation of an efficient layer-by-layer algorithm for training such networks, which we denote as the \emph{Basis Learner}. The algorithm is a universal learner in the sense that the training error is guaranteed to decrease at every iteration, and can eventually reach zero under mild conditions. We present practical implementations of this algorithm, as well as preliminary experimental results. We also compare our deep architecture to other shallow architectures for learning polynomials, in particular kernel learning.
[ "Roi Livni, Shai Shalev-Shwartz, Ohad Shamir", "['Roi Livni' 'Shai Shalev-Shwartz' 'Ohad Shamir']" ]
cs.LG
null
1304.7158
null
null
http://arxiv.org/pdf/1304.7158v1
2013-04-26T13:28:47Z
2013-04-26T13:28:47Z
Irreflexive and Hierarchical Relations as Translations
We consider the problem of embedding entities and relations of knowledge bases in low-dimensional vector spaces. Unlike most existing approaches, which are primarily efficient for modeling equivalence relations, our approach is designed to explicitly model irreflexive relations, such as hierarchies, by interpreting them as translations operating on the low-dimensional embeddings of the entities. Preliminary experiments show that, despite its simplicity and a smaller number of parameters than previous approaches, our approach achieves state-of-the-art performance according to standard evaluation protocols on data from WordNet and Freebase.
[ "Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston\n and Oksana Yakhnenko", "['Antoine Bordes' 'Nicolas Usunier' 'Alberto Garcia-Duran' 'Jason Weston'\n 'Oksana Yakhnenko']" ]
stat.ML cs.LG
null
1304.7230
null
null
http://arxiv.org/pdf/1304.7230v2
2013-04-29T21:16:46Z
2013-04-26T16:56:30Z
Learning Densities Conditional on Many Interacting Features
Learning a distribution conditional on a set of discrete-valued features is a commonly encountered task. This becomes more challenging with a high-dimensional feature set when there is the possibility of interaction between the features. In addition, many frequently applied techniques consider only prediction of the mean, but the complete conditional density is needed to answer more complex questions. We demonstrate a novel nonparametric Bayes method based upon a tensor factorization of feature-dependent weights for Gaussian kernels. The method makes use of multistage feature selection for dimension reduction. The resulting conditional density morphs flexibly with the selected features.
[ "David C. Kessler and Jack Taylor and David B. Dunson", "['David C. Kessler' 'Jack Taylor' 'David B. Dunson']" ]
cs.LG cs.CE stat.ML
null
1304.7284
null
null
http://arxiv.org/pdf/1304.7284v2
2013-10-16T07:04:04Z
2013-04-26T20:47:46Z
Supervised Heterogeneous Multiview Learning for Joint Association Study and Disease Diagnosis
Given genetic variations and various phenotypical traits, such as Magnetic Resonance Imaging (MRI) features, we consider two important and related tasks in biomedical research: i) to select genetic and phenotypical markers for disease diagnosis and ii) to identify associations between genetic and phenotypical data. These two tasks are tightly coupled because underlying associations between genetic variations and phenotypical features contain the biological basis for a disease. While a variety of sparse models have been applied for disease diagnosis, and canonical correlation analysis and its extensions have been widely used in association studies (e.g., eQTL analysis), these two tasks have been treated separately. To unify these two tasks, we present a new sparse Bayesian approach for joint association study and disease diagnosis. In this approach, common latent features are extracted from different data sources based on sparse projection matrices and used to predict multiple disease severity levels based on Gaussian process ordinal regression; in return, the disease status is used to guide the discovery of relationships between the data sources. The sparse projection matrices not only reveal interactions between data sources but also select groups of biomarkers related to the disease. To learn the model from data, we develop an efficient variational expectation maximization algorithm. Simulation results demonstrate that our approach achieves higher accuracy in both predicting ordinal labels and discovering associations between data sources than alternative methods. We apply our approach to an imaging genetics dataset for the study of Alzheimer's Disease (AD). Our method identifies biologically meaningful relationships between genetic variations, MRI features, and AD status, and achieves significantly higher accuracy for predicting ordinal AD stages than the competing methods.
[ "Shandian Zhe, Zenglin Xu, and Yuan Qi", "['Shandian Zhe' 'Zenglin Xu' 'Yuan Qi']" ]
cs.LG cs.CV
10.1142/S0218001412500188
1304.7465
null
null
http://arxiv.org/abs/1304.7465v1
2013-04-28T13:31:44Z
2013-04-28T13:31:44Z
Deterministic Initialization of the K-Means Algorithm Using Hierarchical Clustering
K-means is undoubtedly the most widely used partitional clustering algorithm. Unfortunately, due to its gradient descent nature, this algorithm is highly sensitive to the initial placement of the cluster centers. Numerous initialization methods have been proposed to address this problem. Many of these methods, however, have superlinear complexity in the number of data points, making them impractical for large data sets. On the other hand, linear methods are often random and/or order-sensitive, which renders their results unrepeatable. Recently, Su and Dy proposed two highly successful hierarchical initialization methods named Var-Part and PCA-Part that are not only linear, but also deterministic (non-random) and order-invariant. In this paper, we propose a discriminant analysis based approach that addresses a common deficiency of these two methods. Experiments on a large and diverse collection of data sets from the UCI Machine Learning Repository demonstrate that Var-Part and PCA-Part are highly competitive with one of the best random initialization methods to date, i.e., k-means++, and that the proposed approach significantly improves the performance of both hierarchical methods.
[ "M. Emre Celebi and Hassan A. Kingravi", "['M. Emre Celebi' 'Hassan A. Kingravi']" ]
cs.LG math.SP stat.ML
null
1304.7528
null
null
http://arxiv.org/pdf/1304.7528v1
2013-04-28T21:52:12Z
2013-04-28T21:52:12Z
Semi-supervised Eigenvectors for Large-scale Locally-biased Learning
In many applications, one has side information, e.g., labels that are provided in a semi-supervised manner, about a specific target region of a large data set, and one wants to perform machine learning and data analysis tasks "nearby" that prespecified target region. For example, one might be interested in the clustering structure of a data graph near a prespecified "seed set" of nodes, or one might be interested in finding partitions in an image that are near a prespecified "ground truth" set of pixels. Locally-biased problems of this sort are particularly challenging for popular eigenvector-based machine learning and data analysis tools. At root, the reason is that eigenvectors are inherently global quantities, thus limiting the applicability of eigenvector-based methods in situations where one is interested in very local properties of the data. In this paper, we address this issue by providing a methodology to construct semi-supervised eigenvectors of a graph Laplacian, and we illustrate how these locally-biased eigenvectors can be used to perform locally-biased machine learning. These semi-supervised eigenvectors capture successively-orthogonalized directions of maximum variance, conditioned on being well-correlated with an input seed set of nodes that is assumed to be provided in a semi-supervised manner. We show that these semi-supervised eigenvectors can be computed quickly as the solution to a system of linear equations; and we also describe several variants of our basic method that have improved scaling properties. We provide several empirical examples demonstrating how these semi-supervised eigenvectors can be used to perform locally-biased learning; and we discuss the relationship between our results and recent machine learning algorithms that use global eigenvectors of the graph Laplacian.
[ "['Toke J. Hansen' 'Michael W. Mahoney']", "Toke J. Hansen and Michael W. Mahoney" ]
cs.LG
null
1304.7576
null
null
http://arxiv.org/pdf/1304.7576v1
2013-04-29T07:16:54Z
2013-04-29T07:16:54Z
Fractal structures in Adversarial Prediction
Fractals are self-similar recursive structures that have been used in modeling several real world processes. In this work we study how "fractal-like" processes arise in a prediction game where an adversary is generating a sequence of bits and an algorithm is trying to predict them. We will see that under a certain formalization of the predictive payoff for the algorithm it is most optimal for the adversary to produce a fractal-like sequence to minimize the algorithm's ability to predict. Indeed it has been suggested before that financial markets exhibit a fractal-like behavior. We prove that a fractal-like distribution arises naturally out of an optimization from the adversary's perspective. In addition, we give optimal trade-offs between predictability and expected deviation (i.e. sum of bits) for our formalization of predictive payoff. This result is motivated by the observation that several time series data exhibit higher deviations than expected for a completely random walk.
[ "['Rina Panigrahy' 'Preyas Popat']", "Rina Panigrahy and Preyas Popat" ]
cs.LG cs.DS stat.ML
null
1304.7577
null
null
http://arxiv.org/pdf/1304.7577v1
2013-04-29T07:17:31Z
2013-04-29T07:17:31Z
Optimal amortized regret in every interval
Consider the classical problem of predicting the next bit in a sequence of bits. A standard performance measure is {\em regret} (loss in payoff) with respect to a set of experts. For example, if we measure performance with respect to two constant experts, one that always predicts 0's and another that always predicts 1's, it is well known that one can get regret $O(\sqrt T)$ with respect to the best expert by using, say, the weighted majority algorithm. But this algorithm does not provide a performance guarantee on individual intervals. There are other algorithms that ensure regret $O(\sqrt {x \log T})$ in any interval of length $x$. In this paper we show a randomized algorithm that, in an amortized sense, gets a regret of $O(\sqrt x)$ for any interval when the sequence is partitioned into intervals arbitrarily. We empirically estimated the constant in the $O()$ for $T$ up to 2000 and found it to be small, around 2.1. We also experimentally evaluate the efficacy of this algorithm in predicting high-frequency stock data.
[ "['Rina Panigrahy' 'Preyas Popat']", "Rina Panigrahy and Preyas Popat" ]
cs.SY cs.LG physics.soc-ph
null
1304.7710
null
null
http://arxiv.org/pdf/1304.7710v1
2013-04-29T16:48:02Z
2013-04-29T16:48:02Z
Learning Geo-Temporal Non-Stationary Failure and Recovery of Power Distribution
Smart energy grid is an emerging area for new applications of machine learning in a non-stationary environment. Such a non-stationary environment emerges when large-scale failures occur at power distribution networks due to external disturbances such as hurricanes and severe storms. Power distribution networks lie at the edge of the grid, and are especially vulnerable to external disruptions. Quantifiable approaches are lacking and needed to learn non-stationary behaviors of large-scale failure and recovery of power distribution. This work studies such non-stationary behaviors in three aspects. First, a novel formulation is derived for an entire life cycle of large-scale failure and recovery of power distribution. Second, spatial-temporal models of failure and recovery of power distribution are developed as geo-location based multivariate non-stationary GI(t)/G(t)/Infinity queues. Third, the non-stationary spatial-temporal models identify a small number of parameters to be learned. Learning is applied to two real-life examples of large-scale disruptions. One is from Hurricane Ike, where data from an operational network is exact on failures and recoveries. The other is from Hurricane Sandy, where aggregated data is used for inferring failure and recovery processes at one of the impacted areas. Model parameters are learned using real data. Two findings emerge as results of learning: (a) Failure rates behave similarly at the two different provider networks for two different hurricanes but differently at the geographical regions. (b) Both rapid- and slow-recovery are present for Hurricane Ike but only slow recovery is shown for a regional distribution network from Hurricane Sandy.
[ "Yun Wei and Chuanyi Ji and Floyd Galvan and Stephen Couvillon and\n George Orellana and James Momoh", "['Yun Wei' 'Chuanyi Ji' 'Floyd Galvan' 'Stephen Couvillon'\n 'George Orellana' 'James Momoh']" ]
cs.LG cs.SD
null
1304.7851
null
null
http://arxiv.org/pdf/1304.7851v2
2013-06-07T02:01:07Z
2013-04-30T03:41:14Z
North Atlantic Right Whale Contact Call Detection
The North Atlantic right whale (Eubalaena glacialis) is an endangered species. These whales continuously suffer from deadly vessel impacts along the eastern coast of North America. There have been countless efforts to save the remaining 350-400 of them. One of the most prominent efforts is by Marinexplore and Cornell University: a system of hydrophones linked to satellite-connected buoys has been deployed in the whales' habitat. These hydrophones record and transmit live sounds to a base station. These recordings might contain the right whale contact call as well as many other noises. The noise rate increases rapidly in vessel-busy areas such as near Boston harbor. This paper presents and studies the problem of detecting the North Atlantic right whale contact call in the presence of noise and other marine life sounds. A novel algorithm was developed to preprocess the sound waves before a tree-based hierarchical classifier is used to classify the data and provide a score. The developed model was trained with 30,000 data points made available through the Cornell University Whale Detection Challenge program. Results showed that the developed algorithm had a success rate of close to 85% in detecting the presence of the North Atlantic right whale.
[ "['Rami Abousleiman' 'Guangzhi Qu' 'Osamah Rawashdeh']", "Rami Abousleiman, Guangzhi Qu, Osamah Rawashdeh" ]
cs.LG stat.ML
null
1304.8020
null
null
http://arxiv.org/pdf/1304.8020v2
2013-05-01T11:53:05Z
2013-04-30T14:59:49Z
Semi-Supervised Information-Maximization Clustering
Semi-supervised clustering aims to introduce prior knowledge in the decision process of a clustering algorithm. In this paper, we propose a novel semi-supervised clustering algorithm based on the information-maximization principle. The proposed method is an extension of a previous unsupervised information-maximization clustering algorithm based on squared-loss mutual information to effectively incorporate must-links and cannot-links. The proposed method is computationally efficient because the clustering solution can be obtained analytically via eigendecomposition. Furthermore, the proposed method allows systematic optimization of tuning parameters such as the kernel width, given the degree of belief in the must-links and cannot-links. The usefulness of the proposed method is demonstrated through experiments.
[ "Daniele Calandriello, Gang Niu, Masashi Sugiyama", "['Daniele Calandriello' 'Gang Niu' 'Masashi Sugiyama']" ]
cs.DS cs.LG math.ST stat.TH
null
1304.8087
null
null
http://arxiv.org/pdf/1304.8087v1
2013-04-30T17:35:37Z
2013-04-30T17:35:37Z
Uniqueness of Tensor Decompositions with Applications to Polynomial Identifiability
We give a robust version of the celebrated result of Kruskal on the uniqueness of tensor decompositions: we prove that given a tensor whose decomposition satisfies a robust form of Kruskal's rank condition, it is possible to approximately recover the decomposition if the tensor is known up to a sufficiently small (inverse polynomial) error. Kruskal's theorem has found many applications in proving the identifiability of parameters for various latent variable models and mixture models such as Hidden Markov models, topic models etc. Our robust version immediately implies identifiability using only polynomially many samples in many of these settings. This polynomial identifiability is an essential first step towards efficient learning algorithms for these models. Recently, algorithms based on tensor decompositions have been used to estimate the parameters of various hidden variable models efficiently in special cases as long as they satisfy certain "non-degeneracy" properties. Our methods give a way to go beyond this non-degeneracy barrier, and establish polynomial identifiability of the parameters under much milder conditions. Given the importance of Kruskal's theorem in the tensor literature, we expect that this robust version will have several applications beyond the settings we explore in this work.
[ "['Aditya Bhaskara' 'Moses Charikar' 'Aravindan Vijayaraghavan']", "Aditya Bhaskara, Moses Charikar, Aravindan Vijayaraghavan" ]
null
null
1304.8132
null
null
http://arxiv.org/pdf/1304.8132v2
2013-11-07T18:25:15Z
2013-04-30T19:57:36Z
Local Graph Clustering Beyond Cheeger's Inequality
Motivated by applications of large-scale graph clustering, we study random-walk-based LOCAL algorithms whose running times depend only on the size of the output cluster, rather than the entire graph. All previously known such algorithms guarantee an output conductance of $\tilde{O}(\sqrt{\phi(A)})$ when the target set $A$ has conductance $\phi(A)\in[0,1]$. In this paper, we improve it to $$\tilde{O}\bigg( \min\Big\{\sqrt{\phi(A)},\ \frac{\phi(A)}{\sqrt{\mathsf{Conn}(A)}} \Big\} \bigg)\enspace,$$ where the internal connectivity parameter $\mathsf{Conn}(A) \in [0,1]$ is defined as the reciprocal of the mixing time of the random walk over the induced subgraph on $A$. For instance, using $\mathsf{Conn}(A) = \Omega(\lambda(A) / \log n)$, where $\lambda(A)$ is the second eigenvalue of the Laplacian of the induced subgraph on $A$, our conductance guarantee can be as good as $\tilde{O}(\phi(A)/\sqrt{\lambda(A)})$. This builds an interesting connection to the recent advance of the so-called improved Cheeger's Inequality [KKL+13], which says that global spectral algorithms can provide a conductance guarantee of $O(\phi_{\mathsf{opt}}/\sqrt{\lambda_3})$ instead of $O(\sqrt{\phi_{\mathsf{opt}}})$. In addition, we provide a theoretical guarantee on the clustering accuracy (in terms of precision and recall) of the output set. We also prove that our analysis is tight, and perform empirical evaluation to support our theory on both synthetic and real data. It is worth noting that our analysis outperforms prior work when the cluster is well-connected. In fact, the better connected the cluster is internally, the more significant the improvement (both in terms of conductance and accuracy) we can obtain. Our results shed light on why in practice some random-walk-based algorithms perform better than their previous theory suggests, and help guide future research on local clustering.
[ "['Zeyuan Allen Zhu' 'Silvio Lattanzi' 'Vahab Mirrokni']" ]
stat.ML cs.LG
null
1305.0015
null
null
http://arxiv.org/pdf/1305.0015v1
2013-04-30T20:12:01Z
2013-04-30T20:12:01Z
Inferring ground truth from multi-annotator ordinal data: a probabilistic approach
A popular approach for large scale data annotation tasks is crowdsourcing, wherein each data point is labeled by multiple noisy annotators. We consider the problem of inferring ground truth from noisy ordinal labels obtained from multiple annotators of varying and unknown expertise levels. Annotation models for ordinal data have been proposed mostly as extensions of their binary/categorical counterparts and have received little attention in the crowdsourcing literature. We propose a new model for crowdsourced ordinal data that accounts for instance difficulty as well as annotator expertise, and derive a variational Bayesian inference algorithm for parameter estimation. We analyze the ordinal extensions of several state-of-the-art annotator models for binary/categorical labels and evaluate the performance of all the models on two real world datasets containing ordinal query-URL relevance scores, collected through Amazon's Mechanical Turk. Our results indicate that the proposed model performs better or as well as existing state-of-the-art methods and is more resistant to `spammy' annotators (i.e., annotators who assign labels randomly without actually looking at the instance) than popular baselines such as mean, median, and majority vote which do not account for annotator expertise.
[ "Balaji Lakshminarayanan and Yee Whye Teh", "['Balaji Lakshminarayanan' 'Yee Whye Teh']" ]
cs.SI cs.LG physics.soc-ph stat.ML
10.1109/ICC.2009.5199418
1305.0051
null
null
http://arxiv.org/abs/1305.0051v1
2013-04-30T22:57:12Z
2013-04-30T22:57:12Z
Revealing social networks of spammers through spectral clustering
To date, most studies on spam have focused only on the spamming phase of the spam cycle and have ignored the harvesting phase, which consists of the mass acquisition of email addresses. It has been observed that spammers conceal their identity to a lesser degree in the harvesting phase, so it may be possible to gain new insights into spammers' behavior by studying the behavior of harvesters, which are individuals or bots that collect email addresses. In this paper, we reveal social networks of spammers by identifying communities of harvesters with high behavioral similarity using spectral clustering. The data analyzed was collected through Project Honey Pot, a distributed system for monitoring harvesting and spamming. Our main findings are (1) that most spammers either send only phishing emails or no phishing emails at all, (2) that most communities of spammers also send only phishing emails or no phishing emails at all, and (3) that several groups of spammers within communities exhibit coherent temporal behavior and have similar IP addresses. Our findings reveal some previously unknown behavior of spammers and suggest that there is indeed social structure between spammers to be discovered.
[ "Kevin S. Xu, Mark Kliger, Yilun Chen, Peter J. Woolf, and Alfred O.\n Hero III", "['Kevin S. Xu' 'Mark Kliger' 'Yilun Chen' 'Peter J. Woolf'\n 'Alfred O. Hero III']" ]
cs.LG
null
1305.0103
null
null
http://arxiv.org/pdf/1305.0103v1
2013-05-01T06:32:12Z
2013-05-01T06:32:12Z
Clustering Unclustered Data: Unsupervised Binary Labeling of Two Datasets Having Different Class Balances
We consider the unsupervised learning problem of assigning labels to unlabeled data. A naive approach is to use clustering methods, but this works well only when data is properly clustered and each cluster corresponds to an underlying class. In this paper, we first show that this unsupervised labeling problem in balanced binary cases can be solved if two unlabeled datasets having different class balances are available. More specifically, estimation of the sign of the difference between probability densities of two unlabeled datasets gives the solution. We then introduce a new method to directly estimate the sign of the density difference without density estimation. Finally, we demonstrate the usefulness of the proposed method against several clustering methods on various toy problems and real-world datasets.
[ "['Marthinus Christoffel du Plessis' 'Masashi Sugiyama']", "Marthinus Christoffel du Plessis and Masashi Sugiyama" ]
cs.LG
null
1305.0208
null
null
http://arxiv.org/pdf/1305.0208v2
2013-07-23T02:13:57Z
2013-05-01T15:45:34Z
Perceptron Mistake Bounds
We present a brief survey of existing mistake bounds and introduce novel bounds for the Perceptron or the kernel Perceptron algorithm. Our novel bounds generalize beyond standard margin-loss type bounds, allow for any convex and Lipschitz loss function, and admit a very simple proof.
[ "Mehryar Mohri, Afshin Rostamizadeh", "['Mehryar Mohri' 'Afshin Rostamizadeh']" ]
math.ST cs.IT cs.LG math.IT stat.ME stat.ML stat.TH
null
1305.0355
null
null
http://arxiv.org/pdf/1305.0355v1
2013-05-02T07:25:52Z
2013-05-02T07:25:52Z
Model Selection for High-Dimensional Regression under the Generalized Irrepresentability Condition
In the high-dimensional regression model a response variable is linearly related to $p$ covariates, but the sample size $n$ is smaller than $p$. We assume that only a small subset of covariates is `active' (i.e., the corresponding coefficients are non-zero), and consider the model-selection problem of identifying the active covariates. A popular approach is to estimate the regression coefficients through the Lasso ($\ell_1$-regularized least squares). This is known to correctly identify the active set only if the irrelevant covariates are roughly orthogonal to the relevant ones, as quantified through the so called `irrepresentability' condition. In this paper we study the `Gauss-Lasso' selector, a simple two-stage method that first solves the Lasso, and then performs ordinary least squares restricted to the Lasso active set. We formulate the `generalized irrepresentability condition' (GIC), an assumption that is substantially weaker than irrepresentability. We prove that, under GIC, the Gauss-Lasso correctly recovers the active set.
[ "['Adel Javanmard' 'Andrea Montanari']", "Adel Javanmard and Andrea Montanari" ]
cs.NA cs.LG q-bio.NC stat.ML
null
1305.0395
null
null
http://arxiv.org/pdf/1305.0395v1
2013-05-02T11:17:47Z
2013-05-02T11:17:47Z
Tensor Decompositions: A New Concept in Brain Data Analysis?
Matrix factorizations and their extensions to tensor factorizations and decompositions have become prominent techniques for linear and multilinear blind source separation (BSS), especially multiway Independent Component Analysis (ICA), Nonnegative Matrix and Tensor Factorization (NMF/NTF), Smooth Component Analysis (SmoCA) and Sparse Component Analysis (SCA). Moreover, tensor decompositions have many other potential applications beyond multilinear BSS, especially feature extraction, classification, dimensionality reduction and multiway clustering. In this paper, we briefly overview new and emerging models and approaches for tensor decompositions in applications to group and linked multiway BSS/ICA, feature extraction, classification and Multiway Partial Least Squares (MPLS) regression problems. Keywords: Multilinear BSS, linked multiway BSS/ICA, tensor factorizations and decompositions, constrained Tucker and CP models, Penalized Tensor Decompositions (PTD), feature extraction, classification, multiway PLS and CCA.
[ "Andrzej Cichocki", "['Andrzej Cichocki']" ]
cs.LG cs.AI stat.ML
null
1305.0423
null
null
http://arxiv.org/pdf/1305.0423v1
2013-05-02T13:03:53Z
2013-05-02T13:03:53Z
Testing Hypotheses by Regularized Maximum Mean Discrepancy
Do two data samples come from different distributions? Recent studies of this fundamental problem focused on embedding probability distributions into sufficiently rich characteristic Reproducing Kernel Hilbert Spaces (RKHSs), to compare distributions by the distance between their embeddings. We show that Regularized Maximum Mean Discrepancy (RMMD), our novel measure for kernel-based hypothesis testing, yields substantial improvements even when sample sizes are small, and excels at hypothesis tests involving multiple comparisons with power control. We derive asymptotic distributions under the null and alternative hypotheses, and assess power control. Outstanding results are obtained on: challenging EEG data, MNIST, the Berkeley Covertype, and the Flare-Solar dataset.
[ "Somayeh Danafar, Paola M.V. Rancoita, Tobias Glasmachers, Kevin\n Whittingstall, Juergen Schmidhuber", "['Somayeh Danafar' 'Paola M. V. Rancoita' 'Tobias Glasmachers'\n 'Kevin Whittingstall' 'Juergen Schmidhuber']" ]
cs.LG
null
1305.0445
null
null
http://arxiv.org/pdf/1305.0445v2
2013-06-07T02:35:21Z
2013-05-02T14:33:28Z
Deep Learning of Representations: Looking Forward
Deep learning research aims at discovering learning algorithms that discover multiple levels of distributed representations, with higher levels representing more abstract concepts. Although the study of deep learning has already led to impressive theoretical results, learning algorithms and breakthrough experiments, several challenges lie ahead. This paper proposes to examine some of these challenges, centering on the questions of scaling deep learning algorithms to much larger models and datasets, reducing optimization difficulties due to ill-conditioning or local minima, designing more efficient and powerful inference and sampling procedures, and learning to disentangle the factors of variation underlying the observed data. It also proposes a few forward-looking research directions aimed at overcoming these challenges.
[ "['Yoshua Bengio']", "Yoshua Bengio" ]
cs.LG cs.AI stat.ML
null
1305.0626
null
null
http://arxiv.org/pdf/1305.0626v1
2013-05-03T06:25:41Z
2013-05-03T06:25:41Z
An Improved EM algorithm
In this paper, we first give a brief introduction to the expectation maximization (EM) algorithm, and then discuss its sensitivity to initial values. Subsequently, we give a short proof of EM's convergence. We then run experiments with the expectation maximization algorithm (all experiments use a Gaussian mixture model (GMM)) in three settings: random initialization; initialization with the result of K-means; and initialization with the result of K-medoids. The experimental results show that the expectation maximization algorithm depends on its initial state or parameters, and that EM initialized with K-medoids performed better than both EM initialized with K-means and EM initialized randomly.
[ "Fuqiang Chen", "['Fuqiang Chen']" ]
cs.LG cs.IR stat.ML
null
1305.0638
null
null
http://arxiv.org/pdf/1305.0638v1
2013-05-03T08:26:05Z
2013-05-03T08:26:05Z
Feature Selection Based on Term Frequency and T-Test for Text Categorization
Much work has been done on feature selection. Existing methods are based on document frequency, such as the Chi-Square Statistic, Information Gain, etc. However, these methods have two shortcomings: one is that they are not reliable for low-frequency terms, and the other is that they only count whether a term occurs in a document and ignore the term frequency. Actually, high-frequency terms within a specific category are often regarded as discriminators. This paper focuses on how to construct the feature selection function based on term frequency, and proposes a new approach based on the $t$-test, which is used to measure the diversity of the distributions of a term between the specific category and the entire corpus. Extensive comparative experiments on two text corpora using three classifiers show that our new approach is comparable to or slightly better than the state-of-the-art feature selection methods (i.e., $\chi^2$ and IG) in terms of macro-$F_1$ and micro-$F_1$.
[ "['Deqing Wang' 'Hui Zhang' 'Rui Liu' 'Weifeng Lv']", "Deqing Wang, Hui Zhang, Rui Liu, Weifeng Lv" ]
cs.LG
null
1305.0665
null
null
http://arxiv.org/pdf/1305.0665v2
2013-10-13T01:03:56Z
2013-05-03T10:20:02Z
Spectral Classification Using Restricted Boltzmann Machine
In this study, a novel machine learning algorithm, the restricted Boltzmann machine (RBM), is introduced and applied to spectral classification in astronomy. An RBM is a bipartite generative graphical model with two separate layers (one visible layer and one hidden layer), which can extract higher-level features to represent the original data. Although generative, an RBM can be used for classification when modified with a free energy and a soft-max function. Before spectral classification, the original data is binarized according to some rule. We then use the binary RBM to classify cataclysmic variables (CVs) and non-CVs (one half of all the given data for training and the other half for testing). The experimental result shows a state-of-the-art accuracy of 100%, which indicates the efficiency of the binary RBM algorithm.
[ "['Fuqiang Chen' 'Yan Wu' 'Yude Bu' 'Guodong Zhao']", "Fuqiang Chen, Yan Wu, Yude Bu, Guodong Zhao" ]
cs.LG
null
1305.0698
null
null
http://arxiv.org/pdf/1305.0698v1
2013-05-03T13:26:24Z
2013-05-03T13:26:24Z
Learning from Imprecise and Fuzzy Observations: Data Disambiguation through Generalized Loss Minimization
Methods for analyzing or learning from "fuzzy data" have attracted increasing attention in recent years. In many cases, however, existing methods (for precise, non-fuzzy data) are extended to the fuzzy case in an ad-hoc manner, and without carefully considering the interpretation of a fuzzy set when being used for modeling data. Distinguishing between an ontic and an epistemic interpretation of fuzzy set-valued data, and focusing on the latter, we argue that a "fuzzification" of learning algorithms based on an application of the generic extension principle is not appropriate. In fact, the extension principle fails to properly exploit the inductive bias underlying statistical and machine learning methods, although this bias, at least in principle, offers a means for "disambiguating" the fuzzy data. Alternatively, we therefore propose a method which is based on the generalization of loss functions in empirical risk minimization, and which performs model identification and data disambiguation simultaneously. Elaborating on the fuzzification of specific types of losses, we establish connections to well-known loss functions in regression and classification. We compare our approach with related methods and illustrate its use in logistic regression for binary classification.
[ "['Eyke Hüllermeier']", "Eyke H\\\"ullermeier" ]
cs.NE cs.LG
null
1305.0922
null
null
http://arxiv.org/pdf/1305.0922v1
2013-05-04T14:06:48Z
2013-05-04T14:06:48Z
On Comparison between Evolutionary Programming Network-based Learning and Novel Evolution Strategy Algorithm-based Learning
This paper presents two different evolutionary systems - the Evolutionary Programming Network (EPNet) and the Novel Evolution Strategy (NES) algorithm. EPNet performs training and architecture evolution simultaneously, whereas NES uses a fixed network architecture and only trains the network. Five mutation operators are proposed in EPNet to reflect the emphasis on evolving ANN behaviors. Close behavioral links between parents and their offspring are maintained by various mutations, such as partial training and node splitting. On the other hand, NES uses two new genetic operators - subpopulation-based max-mean arithmetical crossover and time-variant mutation. The two algorithms have been tested on a number of benchmark problems, such as medical diagnosis problems (breast cancer, diabetes, and heart disease). The results and a comparison between them are also presented in this paper.
[ "M.A. Khayer Azad, Md. Shafiqul Islam and M.M.A. Hashem", "['M. A. Khayer Azad' 'Md. Shafiqul Islam' 'M. M. A. Hashem']" ]
cs.LG stat.ML
null
1305.1002
null
null
http://arxiv.org/pdf/1305.1002v1
2013-05-05T09:44:08Z
2013-05-05T09:44:08Z
Efficient Estimation of the number of neighbours in Probabilistic K Nearest Neighbour Classification
Probabilistic k-nearest neighbour (PKNN) classification has been introduced to improve the performance of the original k-nearest neighbour (KNN) classification algorithm by explicitly modelling uncertainty in the classification of each feature vector. However, an issue common to both KNN and PKNN is selecting the optimal number of neighbours, $k$. The contribution of this paper is to incorporate the uncertainty in $k$ into the decision making, and in so doing use Bayesian model averaging to provide improved classification. Indeed, the problem of assessing the uncertainty in $k$ can be viewed as one of statistical model selection, which is one of the most important technical issues in statistics and machine learning. In this paper, a new functional approximation algorithm is proposed to reconstruct the density of the model (order) without relying on time-consuming Monte Carlo simulations. In addition, this algorithm avoids cross-validation by adopting a Bayesian framework. The algorithm yielded very good performance on several real experimental datasets.
[ "Ji Won Yoon and Nial Friel", "['Ji Won Yoon' 'Nial Friel']" ]
cs.LG
null
1305.1019
null
null
http://arxiv.org/pdf/1305.1019v2
2014-01-02T23:37:26Z
2013-05-05T14:58:15Z
Simple Deep Random Model Ensemble
Representation learning and unsupervised learning are two central topics of machine learning and signal processing. Deep learning is one of the most effective unsupervised representation learning approaches. The main contributions of this paper to these topics are as follows. (i) We propose to view the representative deep learning approaches as special cases of the knowledge reuse framework of clustering ensemble. (ii) We propose to view sparse coding, when used as a feature encoder, as the consensus function of clustering ensemble, and to view dictionary learning as the training process of the base clusterings of clustering ensemble. (iii) Based on the above two views, we propose a very simple deep learning algorithm, named deep random model ensemble (DRME). It is a stack of random model ensembles. Each random model ensemble is a special k-means ensemble that discards the expectation-maximization optimization of each base k-means but only preserves the default initialization method of the base k-means. (iv) We propose to select the most powerful representation among the layers by applying DRME to clustering, where single-linkage is used as the clustering algorithm. Moreover, DRME-based clustering can also detect the number of natural clusters accurately. Extensive experimental comparisons with 5 representation learning methods on 19 benchmark data sets demonstrate the effectiveness of DRME.
[ "['Xiao-Lei Zhang' 'Ji Wu']", "Xiao-Lei Zhang, Ji Wu" ]
stat.ML cs.LG
null
1305.1027
null
null
http://arxiv.org/pdf/1305.1027v2
2013-07-17T21:07:37Z
2013-05-05T16:59:58Z
Regret Bounds for Reinforcement Learning with Policy Advice
In some reinforcement learning problems an agent may be provided with a set of input policies, perhaps learned from prior experience or provided by advisors. We present a reinforcement learning with policy advice (RLPA) algorithm which leverages this input set and learns to use the best policy in the set for the reinforcement learning task at hand. We prove that RLPA has a sub-linear regret of \tilde O(\sqrt{T}) relative to the best input policy, and that both this regret and its computational complexity are independent of the size of the state and action space. Our empirical simulations support our theoretical analysis. This suggests RLPA may offer significant advantages in large domains where some prior good policies are provided.
[ "Mohammad Gheshlaghi Azar and Alessandro Lazaric and Emma Brunskill", "['Mohammad Gheshlaghi Azar' 'Alessandro Lazaric' 'Emma Brunskill']" ]
stat.ML cs.LG
null
1305.1040
null
null
http://arxiv.org/pdf/1305.1040v1
2013-05-05T18:51:24Z
2013-05-05T18:51:24Z
On the Convergence and Consistency of the Blurring Mean-Shift Process
The mean-shift algorithm is a popular algorithm in computer vision and image processing. It can also be cast as minimum gamma-divergence estimation. In this paper we focus on the "blurring" mean-shift algorithm, a version of the mean-shift process that successively blurs the dataset. The analysis of the blurring mean-shift is more complicated than that of the nonblurring version, and neither the algorithm's convergence nor the estimation consistency has been well studied in the literature. In this paper we prove both the convergence and the consistency of the blurring mean-shift. We also perform simulation studies to compare the efficiency of the blurring and nonblurring versions of the mean-shift algorithm. Our results show that the blurring mean-shift is more efficient.
[ "Ting-Li Chen", "['Ting-Li Chen']" ]
cs.CG cs.LG math.MG
null
1305.1172
null
null
http://arxiv.org/pdf/1305.1172v1
2013-05-06T12:57:24Z
2013-05-06T12:57:24Z
Gromov-Hausdorff Approximation of Metric Spaces with Linear Structure
In many real-world applications, data come as discrete metric spaces sampled around 1-dimensional filamentary structures that can be seen as metric graphs. In this paper we address the problem of reconstructing the metric of such filamentary structures from data sampled around them. We prove that they can be approximated, with respect to the Gromov-Hausdorff distance, by well-chosen Reeb graphs (and some of their variants), and we provide an efficient and easy-to-implement algorithm to compute such approximations in almost linear time. We illustrate the performance of our algorithm on a few synthetic and real data sets.
[ "Fr\\'ed\\'eric Chazal (INRIA Sophia Antipolis / INRIA Saclay - Ile de\n France), Jian Sun", "['Frédéric Chazal' 'Jian Sun']" ]