Record fields: title, categories, abstract, authors, doi, id, year, venue.
Surrogate Losses in Passive and Active Learning
math.ST cs.LG stat.ML stat.TH
Active learning is a type of sequential design for supervised machine learning, in which the learning algorithm sequentially requests the labels of selected instances from a large pool of unlabeled data points. The objective is to produce a classifier of relatively low risk, as measured under the 0-1 loss, ideally using fewer label requests than the number of random labeled data points sufficient to achieve the same. This work investigates the potential uses of surrogate loss functions in the context of active learning. Specifically, it presents an active learning algorithm based on an arbitrary classification-calibrated surrogate loss function, along with an analysis of the number of label requests sufficient for the classifier returned by the algorithm to achieve a given risk under the 0-1 loss. Interestingly, these results cannot be obtained by simply optimizing the surrogate risk via active learning to an extent sufficient to provide a guarantee on the 0-1 loss, as is common practice in the analysis of surrogate losses for passive learning. Some of the results have additional implications for the use of surrogate losses in passive learning.
Steve Hanneke and Liu Yang
10.1214/19-EJS1635
1207.3772
null
null
Accuracy Measures for the Comparison of Classifiers
cs.LG
The selection of the best classification algorithm for a given dataset is a very widespread problem. It is also a complex one, in the sense that it requires several important methodological choices. Among them, in this work we focus on the measure used to assess the classification performance and rank the algorithms. We present the most popular measures and discuss their properties. Despite the numerous measures proposed over the years, many of them turn out to be equivalent in this specific case, to have interpretation problems, or to be unsuitable for our purpose. Consequently, the classic overall success rate or marginal rates should be preferred for this specific task.
Vincent Labatut (BIT Lab), Hocine Cherifi (Le2i)
null
1207.3790
null
null
Approximate Message Passing with Consistent Parameter Estimation and Applications to Sparse Learning
cs.IT cs.LG math.IT
We consider the estimation of an i.i.d. (possibly non-Gaussian) vector $\mathbf{x} \in \mathbb{R}^n$ from measurements $\mathbf{y} \in \mathbb{R}^m$ obtained by a general cascade model consisting of a known linear transform followed by a probabilistic componentwise (possibly nonlinear) measurement channel. A novel method, called adaptive generalized approximate message passing (Adaptive GAMP), that enables joint learning of the statistics of the prior and measurement channel along with estimation of the unknown vector $\mathbf{x}$ is presented. The proposed algorithm is a generalization of a recently developed EM-GAMP that uses expectation-maximization (EM) iterations where the posteriors in the E-steps are computed via approximate message passing. The methodology can be applied to a large class of learning problems including the learning of sparse priors in compressed sensing or identification of linear-nonlinear cascade models in dynamical systems and neural spiking processes. We prove that for large i.i.d. Gaussian transform matrices the asymptotic componentwise behavior of the adaptive GAMP algorithm is predicted by a simple set of scalar state evolution equations. In addition, we show that when a certain maximum-likelihood estimation can be performed in each step, the adaptive GAMP method can yield asymptotically consistent parameter estimates, which implies that the algorithm achieves a reconstruction quality equivalent to the oracle algorithm that knows the correct parameter values. Remarkably, this result applies to essentially arbitrary parametrizations of the unknown distributions, including ones that are nonlinear and non-Gaussian. The adaptive GAMP methodology thus provides a systematic, general and computationally efficient method applicable to a large range of complex linear-nonlinear models with provable guarantees.
Ulugbek S. Kamilov, Sundeep Rangan, Alyson K. Fletcher, Michael Unser
null
1207.3859
null
null
Ensemble Clustering with Logic Rules
stat.ML cs.LG
In this article, the logic rule ensembles approach to supervised learning is applied to unsupervised and semi-supervised clustering. Logic rules, obtained by combining simple conjunctive rules, are used to partition the input space, and an ensemble of these rules is used to define a similarity matrix. Similarity partitioning is used to partition the data in a hierarchical manner. We use internal and external measures of cluster validity to evaluate the quality of the clusterings and to identify the number of clusters.
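As a rough illustration of the ensemble-similarity idea in the abstract above (a hypothetical sketch, not the authors' implementation), one can score pairs of points by how often random conjunctive rules place them in the same region and hand the result to standard hierarchical clustering:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def rule_similarity(X, n_rules=200, seed=0):
    """Fraction of random conjunctive rules that put two points together."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    S = np.zeros((n, n))
    for _ in range(n_rules):
        feats = rng.choice(d, size=2, replace=False)   # two random features
        lo = X[:, feats].min(axis=0)
        hi = X[:, feats].max(axis=0)
        thresh = rng.uniform(lo, hi)                   # random cut points
        member = (X[:, feats] <= thresh).all(axis=1)   # conjunctive rule
        S += np.outer(member, member)
    return S / n_rules

X = np.random.default_rng(1).normal(size=(100, 5))
D = 1.0 - rule_similarity(X)                           # similarity -> distance
np.fill_diagonal(D, 0.0)
labels = fcluster(linkage(squareform(D), method="average"),
                  t=3, criterion="maxclust")
```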
Deniz Akdemir
null
1207.3961
null
null
A Two-Stage Combined Classifier in Scale Space Texture Classification
cs.CV cs.LG
Textures often show multiscale properties, and hence multiscale techniques are considered useful for texture analysis. Scale-space theory, as a biologically motivated approach, may be used to construct multiscale textures. In this paper, various ways of combining features at different scales are studied for texture classification of small image patches. We use the N-jet of derivatives up to the second order at different scales to generate distinct pattern representations (DPR) of feature subsets. Each feature subset in the DPR is given to a base classifier (BC) of a two-stage combined classifier. The decisions made by these BCs are combined in two stages, over scales and over derivatives. Various combining systems and their significances and differences are discussed. Learning curves are used to evaluate the performances. We found that, for small sample sizes, combining classifiers performs significantly better than combining feature spaces (CFS). It is also shown that combining classifiers performs better than the support vector machine on CFS in multiscale texture classification.
Mehrdad J. Gangeh, Robert P. W. Duin, Bart M. ter Haar Romeny, Mohamed S. Kamel
null
1207.4089
null
null
The Minimum Information Principle for Discriminative Learning
cs.LG stat.ML
Exponential models of distributions are widely used in machine learning for classification and modelling. It is well known that they can be interpreted as maximum entropy models under empirical expectation constraints. In this work, we argue that for classification tasks, mutual information is a more suitable information-theoretic measure to be optimized. We show how the principle of minimum mutual information generalizes that of maximum entropy, and provides a comprehensive framework for building discriminative classifiers. A game-theoretic interpretation of our approach is then given, and several generalization bounds are provided. We present iterative algorithms for solving the minimum information problem and its convex dual, and demonstrate their performance on various classification tasks. The results show that minimum information classifiers outperform the corresponding maximum entropy models.
Amir Globerson, Naftali Tishby
null
1207.4110
null
null
Algebraic Statistics in Model Selection
cs.LG stat.ML
We develop the necessary theory in computational algebraic geometry to place Bayesian networks into the realm of algebraic statistics. We present an algebra-statistics dictionary focused on statistical modeling. In particular, we link the notion of effective dimension of a Bayesian network with the notion of algebraic dimension of a variety. We also obtain the independence and non-independence constraints on the distributions over the observable variables implied by a Bayesian network with hidden variables, via a generating set of an ideal of polynomials associated to the network. These results extend previous work on the subject. Finally, the relevance of these results for model selection is discussed.
Luis David Garcia
null
1207.4112
null
null
On-line Prediction with Kernels and the Complexity Approximation Principle
cs.LG stat.ML
The paper describes an application of the Aggregating Algorithm to the problem of regression. It generalizes earlier results concerned with plain linear regression to kernel techniques and presents an on-line algorithm which performs nearly as well as any oblivious kernel predictor. The paper contains the derivation of an estimate on the performance of this algorithm. The estimate is then used to derive an application of the Complexity Approximation Principle to kernel methods.
Alex Gammerman, Yuri Kalnishkan, Vladimir Vovk
null
1207.4113
null
null
Iterative Conditional Fitting for Gaussian Ancestral Graph Models
stat.ME cs.LG stat.ML
Ancestral graph models, introduced by Richardson and Spirtes (2002), generalize both Markov random fields and Bayesian networks to a class of graphs with a global Markov property that is closed under conditioning and marginalization. By design, ancestral graphs encode precisely the conditional independence structures that can arise from Bayesian networks with selection and unobserved (hidden/latent) variables. Thus, ancestral graph models provide a potentially very useful framework for exploratory model selection when unobserved variables might be involved in the data-generating process but no particular hidden structure can be specified. In this paper, we present the Iterative Conditional Fitting (ICF) algorithm for maximum likelihood estimation in Gaussian ancestral graph models. The name reflects that in each step of the procedure a conditional distribution is estimated, subject to constraints, while a marginal distribution is held fixed. This approach is dual to the well-known Iterative Proportional Fitting algorithm, in which marginal distributions are fitted while conditional distributions are held fixed.
Mathias Drton, Thomas S. Richardson
null
1207.4118
null
null
Applying Discrete PCA in Data Analysis
cs.LG stat.ML
Methods for analysis of principal components in discrete data have existed for some time under various names such as grade of membership modelling, probabilistic latent semantic analysis, and genotype inference with admixture. In this paper we explore a number of extensions to the common theory, and present some applications of these methods to common statistical tasks. We show that these methods can be interpreted as a discrete version of ICA. We develop a hierarchical version yielding components at different levels of detail, and additional techniques for Gibbs sampling. We compare the algorithms on a text prediction task using support vector machines, and on an information retrieval task.
Wray L. Buntine, Aleks Jakulin
null
1207.4125
null
null
Exponential Families for Conditional Random Fields
cs.LG stat.ML
In this paper we define conditional random fields in reproducing kernel Hilbert spaces and show connections to Gaussian Process classification. More specifically, we prove decomposition results for undirected graphical models and we give constructions for kernels. Finally we present efficient means of solving the optimization problem using reduced rank decompositions and we show how stationarity can be exploited efficiently in the optimization process.
Yasemin Altun, Alex Smola, Thomas Hofmann
null
1207.4131
null
null
MOB-ESP and other Improvements in Probability Estimation
cs.LG cs.AI stat.ML
A key prerequisite to optimal reasoning under uncertainty in intelligent systems is to start with good class probability estimates. This paper improves on the current best probability estimation trees (Bagged-PETs) and also presents a new ensemble-based algorithm (MOB-ESP). Comparisons are made using several benchmark datasets and multiple metrics. These experiments show that MOB-ESP outputs significantly more accurate class probabilities than either the baseline B-PETs algorithm or the enhanced version presented here (EB-PETs). These results are based on metrics closely associated with the average accuracy of the predictions. MOB-ESP also provides much better probability rankings than B-PETs. The paper further suggests how these estimation techniques can be applied in concert with a broader category of classifiers.
Rodney Nielsen
null
1207.4132
null
null
"Ideal Parent" Structure Learning for Continuous Variable Networks
cs.LG stat.ML
In recent years, there has been growing interest in learning Bayesian networks with continuous variables. Learning the structure of such networks is a computationally expensive procedure, which limits most applications to parameter learning. This problem is even more acute when learning networks with hidden variables. We present a general method for significantly speeding up the structure search algorithm for continuous variable networks with common parametric distributions. Importantly, our method facilitates the efficient addition of new hidden variables into the network structure. We demonstrate the method on several data sets, both for learning structure on fully observable data, and for introducing new hidden variables during structure search.
Iftach Nachman, Gal Elidan, Nir Friedman
null
1207.4133
null
null
Bayesian Learning in Undirected Graphical Models: Approximate MCMC algorithms
cs.LG stat.ML
Bayesian learning in undirected graphical models - computing posterior distributions over parameters and predictive quantities - is exceptionally difficult. We conjecture that for general undirected models, there are no tractable MCMC (Markov Chain Monte Carlo) schemes giving the correct equilibrium distribution over parameters. While this intractability, due to the partition function, is familiar to those performing parameter optimisation, Bayesian learning of posterior distributions over undirected model parameters has been unexplored and poses novel challenges. We propose several approximate MCMC schemes and test them on fully observed binary models (Boltzmann machines) for a small coronary heart disease data set and larger artificial systems. While approximations must perform well on the model, their interaction with the sampling scheme is also important. Samplers based on variational mean-field approximations generally performed poorly; more advanced methods using loopy propagation, brief sampling and stochastic dynamics lead to acceptable parameter posteriors. Finally, we demonstrate these techniques on a Markov random field with hidden variables.
Iain Murray, Zoubin Ghahramani
null
1207.4134
null
null
Active Model Selection
cs.LG stat.ML
Classical learning assumes the learner is given a labeled data sample, from which it learns a model. The field of Active Learning deals with the situation where the learner begins not with a training sample, but instead with resources that it can use to obtain information to help identify the optimal model. To better understand this task, this paper presents and analyses the simplified "(budgeted) active model selection" version, which captures the pure exploration aspect of many active learning problems in a clean and simple problem formulation. Here the learner can use a fixed budget of "model probes" (where each probe evaluates the specified model on a random indistinguishable instance) to identify which of a given set of possible models has the highest expected accuracy. Our goal is a policy that sequentially determines which model to probe next, based on the information observed so far. We present a formal description of this task, and show that it is NP-hard in general. We then investigate a number of algorithms for this task, including several existing ones (e.g., "Round-Robin", "Interval Estimation", "Gittins") as well as some novel ones (e.g., "Biased-Robin"), describing first their approximation properties and then their empirical performance on various problem instances. We observe empirically that the simple Biased-Robin algorithm significantly outperforms the other algorithms in the case of identical costs and priors.
Omid Madani, Daniel J. Lizotte, Russell Greiner
null
1207.4138
null
null
An Extended Cencov-Campbell Characterization of Conditional Information Geometry
cs.LG stat.ML
We formulate and prove an axiomatic characterization of conditional information geometry, for both the normalized and the nonnormalized cases. This characterization extends the axiomatic derivation of the Fisher geometry by Cencov and Campbell to the cone of positive conditional models, and as a special case to the manifold of conditional distributions. Due to the close connection between the conditional I-divergence and the product Fisher information metric, the characterization provides a new axiomatic interpretation of the primal problems underlying logistic regression and AdaBoost.
Guy Lebanon
null
1207.4139
null
null
Conditional Chow-Liu Tree Structures for Modeling Discrete-Valued Vector Time Series
cs.LG stat.ML
We consider the problem of modeling discrete-valued vector time series data using extensions of Chow-Liu tree models to capture both dependencies across time and dependencies across variables. Conditional Chow-Liu tree models are introduced, as an extension to standard Chow-Liu trees, for modeling conditional rather than joint densities. We describe learning algorithms for such models and show how they can be used to learn parsimonious representations for the output distributions in hidden Markov models. These models are applied to the important problem of simulating and forecasting daily precipitation occurrence for networks of rain stations. To demonstrate the effectiveness of the models, we compare their performance versus a number of alternatives using historical precipitation data from Southwestern Australia and the Western United States. We illustrate how the structure and parameters of the models can be used to provide an improved meteorological interpretation of such data.
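For orientation, here is a minimal sketch of the standard Chow-Liu step that these conditional models extend: estimate pairwise mutual information from data and take a maximum-weight spanning tree. The conditional variant additionally conditions the densities on the previous time step, which this sketch omits.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from sklearn.metrics import mutual_info_score

def chow_liu_edges(X):
    """X: (n_samples, n_vars) discrete data -> edges of the Chow-Liu tree."""
    n_vars = X.shape[1]
    mi = np.zeros((n_vars, n_vars))
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            # Small epsilon keeps every candidate edge present in the graph.
            mi[i, j] = mutual_info_score(X[:, i], X[:, j]) + 1e-12
    # Maximum-weight spanning tree = minimum spanning tree of negated weights.
    tree = minimum_spanning_tree(-mi)
    return list(zip(*tree.nonzero()))

X = np.random.default_rng(0).integers(0, 2, size=(500, 5))
print(chow_liu_edges(X))
```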
Sergey Kirshner, Padhraic Smyth, Andrew Robertson
null
1207.4142
null
null
A Generative Bayesian Model for Aggregating Experts' Probabilities
cs.LG stat.ML
In order to improve forecasts, a decision-maker often combines probabilities given by various sources, such as human experts and machine learning classifiers. When few training data are available, aggregation can be improved by incorporating prior knowledge about the event being forecasted and about salient properties of the experts. To this end, we develop a generative Bayesian aggregation model for probabilistic classification. The model includes an event-specific prior, measures of individual experts' bias, calibration, accuracy, and a measure of dependence between experts. Rather than require absolute measures, we show that aggregation may be expressed in terms of relative accuracy between experts. The model results in a weighted logarithmic opinion pool (LogOps) that satisfies consistency criteria such as the external Bayesian property. We derive analytic solutions for independent and for exchangeable experts. Empirical tests demonstrate the model's use, comparing its accuracy with other aggregation methods.
Joseph Kahn
null
1207.4144
null
null
A Bayesian Approach toward Active Learning for Collaborative Filtering
cs.LG cs.IR stat.ML
Collaborative filtering is a useful technique for exploiting the preference patterns of a group of users to predict the utility of items for the active user. In general, the performance of collaborative filtering depends on the number of rated examples given by the active user. The more rated examples the active user provides, the more accurate the predicted ratings will be. Active learning provides an effective way to acquire the most informative rated examples from active users. Previous work on active learning for collaborative filtering only considers the expected loss function based on the estimated model, which can be misleading when the estimated model is inaccurate. This paper takes one step further by taking into account the posterior distribution of the estimated model, which results in a more robust active learning algorithm. Empirical studies with datasets of movie ratings show that when the number of ratings from the active user is restricted to be small, active learning methods based only on the estimated model do not perform well, while the active learning method using the model distribution achieves substantially better performance.
Rong Jin, Luo Si
null
1207.4146
null
null
Dynamical Systems Trees
cs.LG stat.ML
We propose dynamical systems trees (DSTs) as a flexible class of models for describing multiple processes that interact via a hierarchy of aggregating parent chains. DSTs extend Kalman filters, hidden Markov models and nonlinear dynamical systems to an interactive group scenario. Various individual processes interact as communities and sub-communities in a tree structure that is unrolled in time. To accommodate nonlinear temporal activity, each individual leaf process is modeled as a dynamical system containing discrete and/or continuous hidden states with discrete and/or Gaussian emissions. Subsequent higher level parent processes act like hidden Markov models and mediate the interaction between leaf processes or between other parent processes in the hierarchy. Aggregator chains are parents of child processes that they combine and mediate, yielding a compact overall parameterization. We provide tractable inference and learning algorithms for arbitrary DST topologies via an efficient structured mean-field algorithm. The diverse applicability of DSTs is demonstrated by experiments on gene expression data and by modeling group behavior in the setting of an American football game.
Andrew Howard, Tony S. Jebara
null
1207.4148
null
null
From Fields to Trees
stat.CO cs.LG
We present new MCMC algorithms for computing the posterior distributions and expectations of the unknown variables in undirected graphical models with regular structure. For demonstration purposes, we focus on Markov Random Fields (MRFs). By partitioning the MRFs into non-overlapping trees, it is possible to compute the posterior distribution of a particular tree exactly by conditioning on the remaining tree. These exact solutions allow us to construct efficient blocked and Rao-Blackwellised MCMC algorithms. We show empirically that tree sampling is considerably more efficient than other partitioned sampling schemes and the naive Gibbs sampler, even in cases where loopy belief propagation fails to converge. We prove that tree sampling exhibits lower variance than the naive Gibbs sampler and other naive partitioning schemes using the theoretical measure of maximal correlation. We also construct new information theory tools for comparing different MCMC schemes and show that, under these, tree sampling is more efficient.
Firas Hamze, Nando de Freitas
null
1207.4149
null
null
PAC-learning bounded tree-width Graphical Models
cs.LG cs.DS stat.ML
We show that the class of strongly connected graphical models with treewidth at most k can be properly and efficiently PAC-learnt with respect to the Kullback-Leibler divergence. Previous approaches to this problem, such as those of Chow ([1]) and H\"offgen ([7]), have shown that this class is PAC-learnable by reducing it to a combinatorial optimization problem. However, for k > 1, this problem is NP-complete ([15]), and so unless P=NP, these approaches will take exponential amounts of time. Our approach differs significantly from these, in that it first attempts to find approximate conditional independencies by solving (polynomially many) submodular optimization problems, and then uses a dynamic programming formulation to combine the approximate conditional independence information to derive a graphical model with underlying graph of the specified tree-width. This gives us an efficient (polynomial time in the number of random variables) PAC-learning algorithm which requires only a polynomial number of samples of the true distribution, and only polynomial running time.
Mukund Narasimhan, Jeff A. Bilmes
null
1207.4151
null
null
Maximum Entropy for Collaborative Filtering
cs.IR cs.LG
Within the task of collaborative filtering two challenges for computing conditional probabilities exist. First, the amount of training data available is typically sparse with respect to the size of the domain. Thus, support for higher-order interactions is generally not present. Second, the variables that we are conditioning upon vary for each query. That is, users label different variables during each query. For this reason, there is no consistent input to output mapping. To address these problems we propose a maximum entropy approach using a non-standard measure of entropy. This approach reduces to a set of linear equations that can be solved efficiently.
Lawrence Zitnick, Takeo Kanade
null
1207.4152
null
null
Similarity-Driven Cluster Merging Method for Unsupervised Fuzzy Clustering
cs.LG stat.ML
In this paper, a similarity-driven cluster merging method is proposed for unsupervised fuzzy clustering. The cluster merging method is used to resolve the problem of cluster validation. Starting with an overspecified number of clusters in the data, pairs of similar clusters are merged based on the proposed similarity-driven cluster merging criterion. The similarity between clusters is calculated by a fuzzy cluster similarity matrix, while an adaptive threshold is used for merging. In addition, a modified generalized objective function is used for prototype-based fuzzy clustering. The function includes the p-norm distance measure as well as principal components of the clusters. The number of the principal components is determined automatically from the data being clustered. The properties of this unsupervised fuzzy clustering algorithm are illustrated by several experiments.
Xuejian Xiong, Kap Chan, Kian Lee Tan
null
1207.4155
null
null
Graph partition strategies for generalized mean field inference
cs.LG stat.ML
An autonomous variational inference algorithm for arbitrary graphical models requires the ability to optimize variational approximations over the space of model parameters as well as over the choice of tractable families used for the variational approximation. In this paper, we present a novel combination of graph partitioning algorithms with a generalized mean field (GMF) inference algorithm. This combination optimizes over disjoint clustering of variables and performs inference using those clusters. We provide a formal analysis of the relationship between the graph cut and the GMF approximation, and explore several graph partition strategies empirically. Our empirical results provide rather clear support for a weighted version of MinCut as a useful clustering algorithm for GMF inference, which is consistent with the implications from the formal analysis.
Eric P. Xing, Michael I. Jordan, Stuart Russell
null
1207.4156
null
null
An Integrated, Conditional Model of Information Extraction and Coreference with Applications to Citation Matching
cs.LG cs.DL cs.IR stat.ML
Although information extraction and coreference resolution appear together in many applications, most current systems perform them as independent steps. This paper describes an approach to integrated inference for extraction and coreference based on conditionally-trained undirected graphical models. We discuss the advantages of conditional probability training, and of a coreference model structure based on graph partitioning. On a data set of research paper citations, we show significant reduction in error by using extraction uncertainty to improve coreference citation matching accuracy, and using coreference to improve the accuracy of the extracted fields.
Ben Wellner, Andrew McCallum, Fuchun Peng, Michael Hay
null
1207.4157
null
null
On the Choice of Regions for Generalized Belief Propagation
cs.AI cs.LG
Generalized belief propagation (GBP) has proven to be a promising technique for approximate inference tasks in AI and machine learning. However, the choice of a good set of clusters to be used in GBP has remained more of an art than a science until this day. This paper proposes a sequential approach to adding new clusters of nodes and their interactions (i.e. "regions") to the approximation. We first review and analyze the recently introduced region graphs and find that three kinds of operations ("split", "merge" and "death") leave the free energy and (under some conditions) the fixed points of GBP invariant. This leads to the notion of "weakly irreducible" regions as the natural candidates to be added to the approximation. Computational complexity of the GBP algorithm is controlled by restricting attention to regions with small "region-width". Combining the above with an efficient (i.e. local in the graph) measure to predict the improved accuracy of GBP leads to the sequential "region pursuit" algorithm for adding new regions bottom-up to the region graph. Experiments show that this algorithm can indeed perform close to optimally.
Max Welling
null
1207.4158
null
null
ARMA Time-Series Modeling with Graphical Models
stat.AP cs.LG stat.ME
We express the classic ARMA time-series model as a directed graphical model. In doing so, we find that the deterministic relationships in the model make it effectively impossible to use the EM algorithm for learning model parameters. To remedy this problem, we replace the deterministic relationships with Gaussian distributions having a small variance, yielding the stochastic ARMA ($\sigma$ARMA) model. This modification allows us to use the EM algorithm to learn parameters and to forecast, even in situations where some data is missing. This modification, in conjunction with the graphical-model approach, also allows us to include cross predictors in situations where there are multiple time series and/or additional nontemporal covariates. More surprisingly, experiments suggest that the move to stochastic ARMA yields improved accuracy through better smoothing. We demonstrate improvements afforded by cross prediction and better smoothing on real data.
Bo Thiesson, David Maxwell Chickering, David Heckerman, Christopher Meek
null
1207.4162
null
null
Factored Latent Analysis for far-field tracking data
cs.LG stat.ML
This paper uses Factored Latent Analysis (FLA) to learn a factorized, segmental representation for observations of tracked objects over time. Factored Latent Analysis is latent class analysis in which the observation space is subdivided and each aspect of the original space is represented by a separate latent class model. One could simply treat these factors as completely independent and ignore their interdependencies, or one could concatenate them together and attempt to learn latent class structure for the complete observation space. Alternatively, FLA allows the interdependencies to be exploited in estimating an effective model, which is also capable of representing a factored latent state. In this paper, FLA is used to learn a set of factored latent classes to represent different modalities of observations of tracked objects. Different characteristics of the state of tracked objects are each represented by separate latent class models, including normalized size, normalized speed, normalized direction, and position. This model also enables effective temporal segmentation of these sequences. This method is data-driven and unsupervised, using only pairwise observation statistics. This data-driven and unsupervised activity classification technique exhibits good performance in multiple challenging environments.
Chris Stauffer
null
1207.4164
null
null
Predictive State Representations: A New Theory for Modeling Dynamical Systems
cs.AI cs.LG
Modeling dynamical systems, both for control purposes and to make predictions about their behavior, is ubiquitous in science and engineering. Predictive state representations (PSRs) are a recently introduced class of models for discrete-time dynamical systems. The key idea behind PSRs and the closely related OOMs (Jaeger's observable operator models) is to represent the state of the system as a set of predictions of observable outcomes of experiments one can do in the system. This makes PSRs rather different from history-based models such as nth-order Markov models and hidden-state-based models such as HMMs and POMDPs. We introduce an interesting construct, the system-dynamics matrix, and show how PSRs can be derived simply from it. We also use this construct to show formally that PSRs are more general than both nth-order Markov models and HMMs/POMDPs. Finally, we discuss the main difference between PSRs and OOMs and conclude with directions for future work.
Satinder Singh, Michael James, Matthew Rudary
null
1207.4167
null
null
The Author-Topic Model for Authors and Documents
cs.IR cs.LG stat.ML
We introduce the author-topic model, a generative model for documents that extends Latent Dirichlet Allocation (LDA; Blei, Ng, & Jordan, 2003) to include authorship information. Each author is associated with a multinomial distribution over topics and each topic is associated with a multinomial distribution over words. A document with multiple authors is modeled as a distribution over topics that is a mixture of the distributions associated with the authors. We apply the model to a collection of 1,700 NIPS conference papers and 160,000 CiteSeer abstracts. Exact inference is intractable for these datasets and we use Gibbs sampling to estimate the topic and author distributions. We compare the performance with two other generative models for documents, which are special cases of the author-topic model: LDA (a topic model) and a simple author model in which each author is associated with a distribution over words rather than a distribution over topics. We show topics recovered by the author-topic model, and demonstrate applications to computing similarity between authors and entropy of author output.
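A toy collapsed Gibbs sampler in the spirit of this model might look as follows; the corpus and hyperparameters are made up, and each step resamples an (author, topic) pair for a token proportional to the usual count ratios:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy corpus: each document is (word ids, author ids); sizes are made up.
docs = [([0, 1, 2, 1], [0, 1]), ([2, 3, 3, 0], [1]), ([4, 4, 1, 3], [0, 2])]
V, A, T = 5, 3, 2             # vocabulary size, number of authors, topics
alpha, beta = 0.5, 0.1        # symmetric Dirichlet hyperparameters

n_at = np.zeros((A, T))       # author-topic counts
n_tw = np.zeros((T, V))       # topic-word counts
assign = []                   # (author, topic) assignment per token

for words, authors in docs:   # random initialization
    z = []
    for w in words:
        a, t = rng.choice(authors), rng.integers(T)
        n_at[a, t] += 1; n_tw[t, w] += 1
        z.append((a, t))
    assign.append(z)

for sweep in range(200):      # collapsed Gibbs sweeps
    for d, (words, authors) in enumerate(docs):
        for i, w in enumerate(words):
            a, t = assign[d][i]
            n_at[a, t] -= 1; n_tw[t, w] -= 1          # remove current token
            pairs = [(x, k) for x in authors for k in range(T)]
            p = np.array([(n_at[x, k] + alpha) / (n_at[x].sum() + T * alpha)
                          * (n_tw[k, w] + beta) / (n_tw[k].sum() + V * beta)
                          for x, k in pairs])
            a, t = pairs[rng.choice(len(pairs), p=p / p.sum())]
            n_at[a, t] += 1; n_tw[t, w] += 1          # add it back
            assign[d][i] = (a, t)

theta = (n_at + alpha) / (n_at.sum(1, keepdims=True) + T * alpha)
print(theta)                  # per-author topic distributions
```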
Michal Rosen-Zvi, Thomas Griffiths, Mark Steyvers, Padhraic Smyth
null
1207.4169
null
null
Variational Chernoff Bounds for Graphical Models
cs.LG stat.ML
Recent research has made significant progress on the problem of bounding log partition functions for exponential family graphical models. Such bounds have associated dual parameters that are often used as heuristic estimates of the marginal probabilities required in inference and learning. However these variational estimates do not give rigorous bounds on marginal probabilities, nor do they give estimates for probabilities of more general events than simple marginals. In this paper we build on this recent work by deriving rigorous upper and lower bounds on event probabilities for graphical models. Our approach is based on the use of generalized Chernoff bounds to express bounds on event probabilities in terms of convex optimization problems; these optimization problems, in turn, require estimates of generalized log partition functions. Simulations indicate that this technique can result in useful, rigorous bounds to complement the heuristic variational estimates, with comparable computational cost.
Pradeep Ravikumar, John Lafferty
null
1207.4172
null
null
A Hierarchical Graphical Model for Record Linkage
cs.LG cs.IR stat.ML
The task of matching co-referent records is known, among other names, as record linkage. For large record-linkage problems, often there is little or no labeled data available, but unlabeled data shows a reasonably clear structure. For such problems, unsupervised or semi-supervised methods are preferable to supervised methods. In this paper, we describe a hierarchical graphical model framework for the linkage problem in an unsupervised setting. In addition to proposing new methods, we also cast existing unsupervised probabilistic record-linkage methods in this framework. Some of the techniques we propose to minimize overfitting in the above model are of interest in the general graphical model setting. We describe a method for incorporating monotonicity constraints in a graphical model. We also outline a bootstrapping approach of using "single-field" classifiers to noisily label latent variables in a hierarchical model. Experimental results show that our proposed unsupervised methods perform quite competitively even with fully supervised record-linkage methods.
Pradeep Ravikumar, William Cohen
null
1207.4180
null
null
On the Statistical Efficiency of $\ell_{1,p}$ Multi-Task Learning of Gaussian Graphical Models
cs.LG stat.ML
In this paper, we present $\ell_{1,p}$ multi-task structure learning for Gaussian graphical models. We analyze the sufficient number of samples for the correct recovery of the support union and edge signs. We also analyze the necessary number of samples for any conceivable method by providing information-theoretic lower bounds. We compare the statistical efficiency of multi-task learning versus that of single-task learning. For experiments, we use a block coordinate descent method that is provably convergent and generates a sequence of positive definite solutions. We provide experimental validation on synthetic data as well as on two publicly available real-world data sets, including functional magnetic resonance imaging and gene expression data.
Jean Honorio, Tommi Jaakkola and Dimitris Samaras
null
1207.4255
null
null
Better Mixing via Deep Representations
cs.LG
It has previously been hypothesized, and supported with some experimental evidence, that deeper representations, when well trained, tend to do a better job at disentangling the underlying factors of variation. We study the following related conjecture: better representations, in the sense of better disentangling, can be exploited to produce faster-mixing Markov chains. Consequently, mixing would be more efficient at higher levels of representation. To better understand why and how this is happening, we propose a secondary conjecture: the higher-level samples fill more uniformly the space they occupy and the high-density manifolds tend to unfold when represented at higher levels. The paper discusses these hypotheses and tests them experimentally through visualization and measurements of mixing and interpolating between samples.
Yoshua Bengio, Gr\'egoire Mesnil, Yann Dauphin and Salah Rifai
null
1207.4404
null
null
Protein Function Prediction Based on Kernel Logistic Regression with 2-order Graphic Neighbor Information
q-bio.QM cs.LG q-bio.MN
To enhance the accuracy of protein-protein interaction function prediction, a 2-order graphic neighbor information feature extraction method based on undirected simple graphs is proposed in this paper, which extends the 1-order graphic neighbor feature extraction method. The chi-square test statistical method is also involved in feature combination. To demonstrate the effectiveness of our 2-order graphic neighbor feature, four logistic regression models (logistic regression (abbrev. LR), diffusion kernel logistic regression (abbrev. DKLR), polynomial kernel logistic regression (abbrev. PKLR), and radial basis function (RBF) based kernel logistic regression (abbrev. RBF KLR)) are investigated on the two feature sets. The experimental results of protein function prediction on the Yeast Proteome Database (YPD), using the protein-protein interaction data of the Munich Information Center for Protein Sequences (MIPS), show that 2-order graphic neighbor information of proteins can significantly improve the average overall percentage of protein function prediction, especially with RBF KLR. Moreover, with a new 5-top chi-square feature combination method, RBF KLR can achieve a 99.05% average overall percentage on the 2-order neighbor feature combination set.
Jingwei Liu
null
1207.4463
null
null
Local stability of Belief Propagation algorithm with multiple fixed points
stat.ML cs.LG
A number of problems in statistical physics and computer science can be expressed as the computation of marginal probabilities over a Markov random field. Belief propagation, an iterative message-passing algorithm, computes such marginals exactly when the underlying graph is a tree. But it has gained its popularity as an efficient way to approximate them in the more general case, even though it can exhibit multiple fixed points and is not guaranteed to converge. In this paper, we express a new sufficient condition for local stability of a belief propagation fixed point in terms of the graph structure and the belief values at the fixed point. This gives credence to the usual understanding that Belief Propagation performs better on sparse graphs.
Victorin Martin, Jean-Marc Lasgouttes and Cyril Furtlehner
10.3233/978-1-61499-096-3-180
1207.4597
null
null
Proceedings of the 29th International Conference on Machine Learning (ICML-12)
cs.LG stat.ML
This is an index to the papers that appear in the Proceedings of the 29th International Conference on Machine Learning (ICML-12). The conference was held in Edinburgh, Scotland, June 27th - July 3rd, 2012.
John Langford and Joelle Pineau (Editors)
null
1207.4676
null
null
Block-Coordinate Frank-Wolfe Optimization for Structural SVMs
cs.LG math.OC stat.ML
We propose a randomized block-coordinate variant of the classic Frank-Wolfe algorithm for convex optimization with block-separable constraints. Despite its lower iteration cost, we show that it achieves a similar convergence rate in duality gap as the full Frank-Wolfe algorithm. We also show that, when applied to the dual structural support vector machine (SVM) objective, this yields an online algorithm that has the same low iteration complexity as primal stochastic subgradient methods. However, unlike stochastic subgradient methods, the block-coordinate Frank-Wolfe algorithm allows us to compute the optimal step-size and yields a computable duality gap guarantee. Our experiments indicate that this simple algorithm outperforms competing structural SVM solvers.
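The algorithmic template (not the paper's structural-SVM solver) is easy to sketch on a toy problem: a quadratic objective over a product of simplices, where each iteration calls a linear minimization oracle on one randomly chosen block:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: minimize 0.5 * ||Ax - b||^2 over a product of k simplices.
k, d = 4, 5                               # blocks, coordinates per block
A = rng.normal(size=(20, k * d))
b = rng.normal(size=20)
x = np.tile(np.full(d, 1.0 / d), k)       # feasible start: uniform blocks

for it in range(2000):
    i = rng.integers(k)                   # sample one block at random
    sl = slice(i * d, (i + 1) * d)
    grad = A.T @ (A @ x - b)              # gradient of the full objective
    s = np.zeros(d)                       # block LMO over a simplex: a vertex
    s[np.argmin(grad[sl])] = 1.0
    gamma = 2.0 * k / (it + 2.0 * k)      # predefined BCFW-style step size
    x[sl] = (1 - gamma) * x[sl] + gamma * s   # update only this block

print(0.5 * np.sum((A @ x - b) ** 2))     # final objective value
```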
Simon Lacoste-Julien, Martin Jaggi, Mark Schmidt, Patrick Pletscher
null
1207.4747
null
null
Hierarchical Clustering using Randomly Selected Similarities
stat.ML cs.IT cs.LG math.IT
The problem of hierarchically clustering items from pairwise similarities is found across various scientific disciplines, from biology to networking. Often, applications of clustering techniques are limited by the cost of obtaining similarities between pairs of items. While prior work has shown how to reconstruct a clustering using a significantly reduced set of pairwise similarities via adaptive measurements, these techniques are only applicable when the choice of similarities is available to the user. In this paper, we examine reconstructing a hierarchical clustering from similarities observed at random. We derive precise bounds which show that a significant fraction of the hierarchical clustering can be recovered using fewer than all the pairwise similarities. We find that the correct hierarchical clustering down to a constant fraction of the total number of items (i.e., clusters sized O(N)) can be found using only O(N log N) randomly selected pairwise similarities in expectation.
Brian Eriksson
null
1207.4748
null
null
Automorphism Groups of Graphical Models and Lifted Variational Inference
cs.AI cs.LG math.CO stat.CO stat.ML
Using the theory of group action, we first introduce the concept of the automorphism group of an exponential family or a graphical model, thus formalizing the general notion of symmetry of a probabilistic model. This automorphism group provides a precise mathematical framework for lifted inference in the general exponential family. Its group action partitions the set of random variables and feature functions into equivalence classes (called orbits) having identical marginals and expectations. Then the inference problem is effectively reduced to that of computing marginals or expectations for each class, thus avoiding the need to deal with each individual variable or feature. We demonstrate the usefulness of this general framework in lifting two classes of variational approximation for MAP inference: local LP relaxation and local LP relaxation with cycle constraints; the latter yields the first lifted inference method that operates on a bound tighter than local constraints. Initial experimental results demonstrate that lifted MAP inference with cycle constraints achieves state-of-the-art performance, obtaining much better objective function values than local approximation while remaining relatively efficient.
Hung Hai Bui and Tuyen N. Huynh and Sebastian Riedel
null
1207.4814
null
null
Motion Planning Of an Autonomous Mobile Robot Using Artificial Neural Network
cs.RO cs.AI cs.LG cs.NE
The paper presents the electronic design and motion planning of a robot based on decision making regarding its straight motion and precise turns using an Artificial Neural Network (ANN). The ANN supports the learning of the robot so that it performs motion autonomously. The calculated weights are implemented in a microcontroller. The performance has been tested to be excellent.
G. N. Tripathi and V. Rihani
null
1207.4931
null
null
Fast nonparametric classification based on data depth
stat.ML cs.LG
A new procedure, called the DD$\alpha$-procedure, is developed to solve the problem of classifying d-dimensional objects into q >= 2 classes. The procedure is completely nonparametric; it uses q-dimensional depth plots and a very efficient algorithm for discrimination analysis in the depth space [0,1]^q. Specifically, the depth is the zonoid depth, and the algorithm is the $\alpha$-procedure. In the case of more than two classes, several binary classifications are performed and a majority rule is applied. Special treatments are discussed for 'outsiders', that is, data having zero depth vector. The DD$\alpha$-classifier is applied to simulated as well as real data, and the results are compared with those of similar procedures that have been recently proposed. In most cases the new procedure has comparable error rates, but is much faster than other classification approaches, including the SVM.
Tatjana Lange, Karl Mosler and Pavlo Mozharovskyi
null
1207.4992
null
null
Learning Probabilistic Systems from Tree Samples
cs.LO cs.LG
We consider the problem of learning a non-deterministic probabilistic system consistent with a given finite set of positive and negative tree samples. Consistency is defined with respect to strong simulation conformance. We propose learning algorithms that use traditional and a new "stochastic" state-space partitioning, the latter resulting in the minimum number of states. We then use them to solve the problem of "active learning", that uses a knowledgeable teacher to generate samples as counterexamples to simulation equivalence queries. We show that the problem is undecidable in general, but that it becomes decidable under a suitable condition on the teacher which comes naturally from the way samples are generated from failed simulation checks. The latter problem is shown to be undecidable if we impose an additional condition on the learner to always conjecture a "minimum state" hypothesis. We therefore propose a semi-algorithm using stochastic partitions. Finally, we apply the proposed (semi-) algorithms to infer intermediate assumptions in an automated assume-guarantee verification framework for probabilistic systems.
Anvesh Komuravelli, Corina S. Pasareanu and Edmund M. Clarke
10.1109/LICS.2012.54
1207.5091
null
null
Causal Inference on Time Series using Structural Equation Models
stat.ML cs.LG stat.ME
Causal inference uses observations to infer the causal structure of the data generating system. We study a class of functional models that we call Time Series Models with Independent Noise (TiMINo). These models require independent residual time series, whereas traditional methods like Granger causality exploit the variance of residuals. There are two main contributions: (1) Theoretical: By restricting the model class (e.g. to additive noise) we can provide a more general identifiability result than existing ones. This result incorporates lagged and instantaneous effects that can be nonlinear and do not need to be faithful, and non-instantaneous feedbacks between the time series. (2) Practical: If there are no feedback loops between time series, we propose an algorithm based on non-linear independence tests of time series. When the data are causally insufficient, or the data generating process does not satisfy the model assumptions, this algorithm may still give partial results, but mostly avoids incorrect answers. An extension to (non-instantaneous) feedbacks is possible, but not discussed. It outperforms existing methods on artificial and real data. Code can be provided upon request.
Jonas Peters, Dominik Janzing and Bernhard Sch\"olkopf
null
1207.5136
null
null
Meta-Learning of Exploration/Exploitation Strategies: The Multi-Armed Bandit Case
cs.AI cs.LG stat.ML
The exploration/exploitation (E/E) dilemma arises naturally in many subfields of Science. Multi-armed bandit problems formalize this dilemma in its canonical form. Most current research in this field focuses on generic solutions that can be applied to a wide range of problems. However, in practice, it is often the case that a form of prior information is available about the specific class of target problems. Prior knowledge is rarely used in current solutions due to the lack of a systematic approach to incorporate it into the E/E strategy. To address a specific class of E/E problems, we propose to proceed in three steps: (i) model prior knowledge in the form of a probability distribution over the target class of E/E problems; (ii) choose a large hypothesis space of candidate E/E strategies; and (iii) solve an optimization problem to find a candidate E/E strategy of maximal average performance over a sample of problems drawn from the prior distribution. We illustrate this meta-learning approach with two different hypothesis spaces: one where E/E strategies are numerically parameterized and another where E/E strategies are represented as small symbolic formulas. We propose appropriate optimization algorithms for both cases. Our experiments, with two-armed Bernoulli bandit problems and various playing budgets, show that the meta-learnt E/E strategies outperform generic strategies from the literature (UCB1, UCB1-Tuned, UCB-V, KL-UCB and epsilon-greedy); they also evaluate the robustness of the learnt E/E strategies, by tests carried out on arms whose rewards follow a truncated Gaussian distribution.
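The three-step recipe can be illustrated on a deliberately tiny instance, with a prior over two-armed Bernoulli bandits, epsilon-greedy as the (numerically parameterized) hypothesis space, and grid search standing in for the optimization; everything below is a hypothetical simplification:

```python
import numpy as np

rng = np.random.default_rng(0)

def run_eps_greedy(eps, means, horizon=100):
    """Total reward of epsilon-greedy on one Bernoulli bandit instance."""
    n, s, total = np.zeros(2), np.zeros(2), 0.0
    for t in range(horizon):
        if t < 2:
            a = t                          # pull each arm once to start
        elif rng.random() < eps:
            a = rng.integers(2)            # explore
        else:
            a = int(np.argmax(s / n))      # exploit empirical best
        r = float(rng.random() < means[a])
        n[a] += 1; s[a] += r; total += r
    return total

# (i) prior over problems, (ii) hypothesis space, (iii) optimize on samples.
problems = [rng.uniform(0, 1, size=2) for _ in range(200)]
candidates = [0.0, 0.01, 0.05, 0.1, 0.2, 0.5]
scores = {eps: np.mean([run_eps_greedy(eps, m) for m in problems])
          for eps in candidates}
best = max(scores, key=scores.get)
print(best, scores[best])                  # meta-learnt epsilon
```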
Francis Maes and Damien Ernst and Louis Wehenkel
null
1207.5208
null
null
Optimal discovery with probabilistic expert advice: finite time analysis and macroscopic optimality
cs.LG stat.ML
We consider an original problem that arises from the issue of security analysis of a power system and that we name optimal discovery with probabilistic expert advice. We address it with an algorithm based on the optimistic paradigm and on the Good-Turing missing mass estimator. We prove two different regret bounds on the performance of this algorithm under weak assumptions on the probabilistic experts. Under more restrictive hypotheses, we also prove a macroscopic optimality result, comparing the algorithm both with an oracle strategy and with uniform sampling. Finally, we provide numerical experiments illustrating these theoretical findings.
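The Good-Turing missing-mass estimator at the core of the approach is simple to state: the probability mass of not-yet-discovered items is estimated by the fraction of observations that occurred exactly once. A minimal sketch:

```python
from collections import Counter

def good_turing_missing_mass(samples):
    """Estimate the probability that the next draw is a new item."""
    counts = Counter(samples)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / len(samples)

print(good_turing_missing_mass(["a", "b", "a", "c", "d", "a"]))  # 3/6 = 0.5
```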
Sebastien Bubeck and Damien Ernst and Aurelien Garivier
null
1207.5259
null
null
A Robust Signal Classification Scheme for Cognitive Radio
cs.IT cs.LG cs.NI math.IT
This paper presents a robust signal classification scheme for achieving comprehensive spectrum sensing of multiple coexisting wireless systems. It is built upon a group of feature-based signal detection algorithms enhanced by the proposed dimension cancelation (DIC) method for mitigating the noise uncertainty problem. The classification scheme is implemented on our testbed consisting of real-world wireless devices. The simulation and experimental performances agree well with each other and show the effectiveness and robustness of the proposed scheme.
Hanwen Cao and J\"urgen Peissig
null
1207.5342
null
null
Generalization Bounds for Metric and Similarity Learning
cs.LG stat.ML
Recently, metric learning and similarity learning have attracted a large amount of interest. Many models and optimisation algorithms have been proposed. However, there is relatively little work on the generalization analysis of such methods. In this paper, we derive novel generalization bounds of metric and similarity learning. In particular, we first show that the generalization analysis reduces to the estimation of the Rademacher average over "sums-of-i.i.d." sample-blocks related to the specific matrix norm. Then, we derive generalization bounds for metric/similarity learning with different matrix-norm regularisers by estimating their specific Rademacher complexities. Our analysis indicates that sparse metric/similarity learning with $L^1$-norm regularisation could lead to significantly better bounds than those with Frobenius-norm regularisation. Our novel generalization analysis develops and refines the techniques of U-statistics and Rademacher complexity analysis.
Qiong Cao, Zheng-Chu Guo and Yiming Ying
null
1207.5437
null
null
MCTS Based on Simple Regret
cs.AI cs.LG
UCT, a state-of-the-art algorithm for Monte Carlo tree search (MCTS) in games and Markov decision processes, is based on UCB, a sampling policy for the Multi-armed Bandit problem (MAB) that minimizes the cumulative regret. However, search differs from MAB in that in MCTS it is usually only the final "arm pull" (the actual move selection) that collects a reward, rather than all "arm pulls". Therefore, it makes more sense to minimize the simple regret, as opposed to the cumulative regret. We begin by introducing policies for multi-armed bandits with lower finite-time and asymptotic simple regret than UCB, using them to develop a two-stage scheme (SR+CR) for MCTS which outperforms UCT empirically. Optimizing the sampling process is itself a metareasoning problem, a solution of which can use value of information (VOI) techniques. Although the theory of VOI for search exists, applying it to MCTS is non-trivial, as typical myopic assumptions fail. Lacking a complete working VOI theory for MCTS, we nevertheless propose a sampling scheme that is "aware" of VOI, achieving an algorithm that in empirical evaluation outperforms both UCT and the other proposed algorithms.
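The cumulative-versus-simple regret distinction is easy to make concrete on a plain bandit: UCB1 optimizes reward accrued during sampling, whereas for simple regret only the final recommendation matters, so even uniform allocation followed by picking the empirically best arm can recommend better on hard instances. A toy comparison (not the paper's SR+CR scheme):

```python
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.4, 0.45, 0.5])           # hidden arm means

def ucb1(budget):
    n = np.ones(3)
    s = (rng.random(3) < means).astype(float)  # one initial pull per arm
    for t in range(3, budget):
        a = int(np.argmax(s / n + np.sqrt(2 * np.log(t) / n)))
        n[a] += 1; s[a] += float(rng.random() < means[a])
    return int(np.argmax(s / n))             # final recommendation

def uniform(budget):
    n, s = np.zeros(3), np.zeros(3)
    for t in range(budget):
        a = t % 3                             # round-robin allocation
        n[a] += 1; s[a] += float(rng.random() < means[a])
    return int(np.argmax(s / n))

for policy in (ucb1, uniform):
    picks = [policy(90) for _ in range(2000)]
    simple_regret = np.mean([means.max() - means[a] for a in picks])
    print(policy.__name__, round(simple_regret, 4))
```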
David Tolpin and Solomon Eyal Shimony
null
1207.5536
null
null
Bellman Error Based Feature Generation using Random Projections on Sparse Spaces
cs.LG stat.ML
We address the problem of automatic generation of features for value function approximation. Bellman Error Basis Functions (BEBFs) have been shown to improve the error of policy evaluation with function approximation, with a convergence rate similar to that of value iteration. We propose a simple, fast and robust algorithm based on random projections to generate BEBFs for sparse feature spaces. We provide a finite sample analysis of the proposed method, and prove that projections logarithmic in the dimension of the original space are enough to guarantee contraction in the error. Empirical results demonstrate the strength of this method.
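A crude sketch of the BEBF loop under random projections (a hypothetical setup, not the paper's exact algorithm): fit the value function on the current basis, compute Bellman errors, regress them on randomly projected sparse features, and append the fitted predictor as a new basis function:

```python
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)

# Hypothetical batch of transitions with sparse binary state features.
n, D, d, gamma = 1000, 500, 30, 0.95
Phi = (rng.random((n, D)) < 0.02).astype(float)       # phi(s_t)
Phi_next = (rng.random((n, D)) < 0.02).astype(float)  # phi(s_{t+1})
r = rng.normal(size=n)                                # rewards

proj = rng.normal(size=(D, d)) / np.sqrt(d)           # random projection
Z, Z_next = Phi @ proj, Phi_next @ proj               # projected features

B, B_next = np.ones((n, 1)), np.ones((n, 1))          # initial basis
for _ in range(5):
    # Fit the value function on the current basis (Bellman residual fit).
    w, *_ = lstsq(B - gamma * B_next, r, rcond=None)
    e = r + gamma * (B_next @ w) - B @ w              # Bellman errors
    u, *_ = lstsq(Z, e, rcond=None)                   # new BEBF weights
    B = np.column_stack([B, Z @ u])                   # append the BEBF
    B_next = np.column_stack([B_next, Z_next @ u])
    print(float(np.mean(e ** 2)))                     # typically shrinks
```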
Mahdi Milani Fard, Yuri Grinberg, Amir-massoud Farahmand, Joelle Pineau, Doina Precup
null
1207.5554
null
null
VOI-aware MCTS
cs.AI cs.LG
UCT, a state-of-the-art algorithm for Monte Carlo tree search (MCTS) in games and Markov decision processes, is based on UCB1, a sampling policy for the Multi-armed Bandit problem (MAB) that minimizes the cumulative regret. However, search differs from MAB in that in MCTS it is usually only the final "arm pull" (the actual move selection) that collects a reward, rather than all "arm pulls". In this paper, an MCTS sampling policy based on Value of Information (VOI) estimates of rollouts is suggested. Empirical evaluation of the policy and comparison to UCB1 and UCT is performed on random MAB instances as well as on Computer Go.
David Tolpin and Solomon Eyal Shimony
null
1207.5589
null
null
A New Training Algorithm for Kanerva's Sparse Distributed Memory
cs.CV cs.LG cs.NE
The Sparse Distributed Memory proposed by Pentti Kanerva (SDM in short) was thought to be a model of human long term memory. The architecture of the SDM permits storing binary patterns and retrieving them using partially matching patterns. However, Kanerva's model is especially efficient only in handling random data. The purpose of this article is to introduce a new approach to training Kanerva's SDM that can efficiently handle non-random data, and to provide it with the capability to recognize inverted patterns. This approach uses a signal model which is different from the one proposed, for different purposes, by Hely, Willshaw and Hayes in [4]. This article additionally suggests a different way of creating hard locations in the memory, in contrast to Kanerva's static model.
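For readers unfamiliar with the baseline being extended, here is a minimal sketch of Kanerva's original read/write rules (the static model, not this article's new training approach); all sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, radius = 256, 1000, 115     # word length, hard locations, access radius

addresses = rng.integers(0, 2, size=(M, N))   # Kanerva's static hard locations
counters = np.zeros((M, N), dtype=int)

def activated(addr):
    """Hard locations within Hamming distance `radius` of the address."""
    return (addresses != addr).sum(axis=1) <= radius

def write(addr, data):
    sel = activated(addr)
    counters[sel] += np.where(data == 1, 1, -1)   # +1 for 1-bits, -1 for 0-bits

def read(addr):
    sel = activated(addr)
    return (counters[sel].sum(axis=0) > 0).astype(int)  # bit-wise majority

pattern = rng.integers(0, 2, size=N)
write(pattern, pattern)                        # autoassociative store
noisy = pattern.copy()
noisy[rng.choice(N, size=20, replace=False)] ^= 1
print((read(noisy) == pattern).mean())         # recovery from a noisy cue
```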
Lou Marvin Caraig
null
1207.5774
null
null
Equivalence of distance-based and RKHS-based statistics in hypothesis testing
stat.ME cs.LG math.ST stat.ML stat.TH
We provide a unifying framework linking two classes of statistics used in two-sample and independence testing: on the one hand, the energy distances and distance covariances from the statistics literature; on the other, maximum mean discrepancies (MMD), that is, distances between embeddings of distributions into reproducing kernel Hilbert spaces (RKHS), as established in machine learning. In the case where the energy distance is computed with a semimetric of negative type, a positive definite kernel, termed the distance kernel, may be defined such that the MMD corresponds exactly to the energy distance. Conversely, for any positive definite kernel, we can interpret the MMD as an energy distance with respect to some negative-type semimetric. This equivalence readily extends to distance covariance using kernels on the product space. We determine the class of probability distributions for which the test statistics are consistent against all alternatives. Finally, we investigate the performance of the family of distance kernels in two-sample and independence tests: we show in particular that the energy distance most commonly employed in statistics is just one member of a parametric family of kernels, and that other choices from this family can yield more powerful tests.
Dino Sejdinovic, Bharath Sriperumbudur, Arthur Gretton, Kenji Fukumizu
10.1214/13-AOS1140
1207.6076
null
null
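The claimed equivalence is easy to verify numerically. The sketch below, taking the Euclidean distance as the semimetric of negative type, builds the induced distance kernel k(x, y) = (d(x, z0) + d(y, z0) - d(x, y)) / 2 and checks that the biased (V-statistic) MMD^2 equals half the energy distance; the center z0 is arbitrary.
```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (100, 3))
Y = rng.normal(0.5, 1.0, (120, 3))

dxy, dxx, dyy = cdist(X, Y), cdist(X, X), cdist(Y, Y)

# Energy distance (V-statistic form, Euclidean semimetric).
energy = 2 * dxy.mean() - dxx.mean() - dyy.mean()

# Distance kernel induced by the same semimetric, centered at z0.
z0 = np.zeros((1, 3))
def k(A, B):
    return 0.5 * (cdist(A, z0) + cdist(B, z0).T - cdist(A, B))

mmd2 = k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()
print(energy, 2 * mmd2)   # the two values agree: MMD^2 is half the energy distance
```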
Determinantal point processes for machine learning
stat.ML cs.IR cs.LG
Determinantal point processes (DPPs) are elegant probabilistic models of repulsion that arise in quantum physics and random matrix theory. In contrast to traditional structured models like Markov random fields, which become intractable and hard to approximate in the presence of negative correlations, DPPs offer efficient and exact algorithms for sampling, marginalization, conditioning, and other inference tasks. We provide a gentle introduction to DPPs, focusing on the intuitions, algorithms, and extensions that are most relevant to the machine learning community, and show how DPPs can be applied to real-world applications like finding diverse sets of high-quality search results, building informative summaries by selecting diverse sentences from documents, modeling non-overlapping human poses in images or video, and automatically building timelines of important news stories.
Alex Kulesza, Ben Taskar
10.1561/2200000044
1207.6083
null
null
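A small sketch of the repulsion property mentioned above: for an L-ensemble DPP, the marginal kernel K = L(L + I)^{-1} gives inclusion probabilities, and the joint inclusion probability of two items is a 2x2 determinant, never exceeding the product of the marginals. The kernel here is random and purely illustrative.
```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical L-ensemble kernel over 5 items, PSD by construction.
B = rng.normal(size=(5, 8))
L = B @ B.T

# Marginal kernel: K_ii is the inclusion probability of item i, and
# P(i and j both selected) is the determinant of the 2x2 minor of K.
K = L @ np.linalg.inv(L + np.eye(5))

i, j = 0, 1
p_both = K[i, i] * K[j, j] - K[i, j] ** 2
print(p_both, "<=", K[i, i] * K[j, j])   # repulsion: joint <= product of marginals
```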
Touchalytics: On the Applicability of Touchscreen Input as a Behavioral Biometric for Continuous Authentication
cs.CR cs.LG
We investigate whether a classifier can continuously authenticate users based on the way they interact with the touchscreen of a smart phone. We propose a set of 30 behavioral touch features that can be extracted from raw touchscreen logs and demonstrate that different users populate distinct subspaces of this feature space. In a systematic experiment designed to test how this behavioral pattern exhibits consistency over time, we collected touch data from users interacting with a smart phone using basic navigation maneuvers, i.e., up-down and left-right scrolling. We propose a classification framework that learns the touch behavior of a user during an enrollment phase and is able to accept or reject the current user by monitoring interaction with the touch screen. The classifier achieves a median equal error rate of 0% for intra-session authentication, 2%-3% for inter-session authentication and below 4% when the authentication test was carried out one week after the enrollment phase. While our experimental findings disqualify this method as a standalone authentication mechanism for long-term authentication, it could be implemented as a means to extend screen-lock time or as a part of a multi-modal biometric authentication system.
Mario Frank, Ralf Biedert, Eugene Ma, Ivan Martinovic, Dawn Song
10.1109/TIFS.2012.2225048
1207.6231
null
null
On When and How to use SAT to Mine Frequent Itemsets
cs.AI cs.DB cs.LG
A new stream of research was born in the last decade with the goal of mining itemsets of interest using Constraint Programming (CP). This has promoted a natural way to combine complex constraints in a highly flexible manner. Although state-of-the-art CP solutions formulate the task using Boolean variables, the few attempts to adopt propositional Satisfiability (SAT) have yielded unsatisfactory performance. This work deepens the study of when and how to use SAT for the frequent itemset mining (FIM) problem by defining different encodings with multiple task-driven enumeration options and search strategies. Although for the majority of the scenarios SAT-based solutions appear to be non-competitive with their CP peers, results show a variety of interesting cases where SAT encodings are the best option.
Rui Henriques and In\^es Lynce and Vasco Manquinho
null
1207.6253
null
null
Identifying Users From Their Rating Patterns
cs.IR cs.LG stat.ML
This paper reports on our analysis of the 2011 CAMRa Challenge dataset (Track 2) for context-aware movie recommendation systems. The train dataset comprises 4,536,891 ratings provided by 171,670 users on 23,974 movies, as well as the household groupings of a subset of the users. The test dataset comprises 5,450 ratings for which the user label is missing, but the household label is provided. The challenge required identifying the user labels for the ratings in the test set. Our main finding is that temporal information (time labels of the ratings) is significantly more useful for achieving this objective than the user preferences (the actual ratings). Using a model that leverages this fact, we are able to identify users within a known household with an accuracy of approximately 96% (i.e. a misclassification rate around 4%).
Jos\'e Bento, Nadia Fawaz, Andrea Montanari, Stratis Ioannidis
null
1207.6379
null
null
Optimal Data Collection For Informative Rankings Expose Well-Connected Graphs
stat.ML cs.LG stat.AP
Given a graph where vertices represent alternatives and arcs represent pairwise comparison data, the statistical ranking problem is to find a potential function, defined on the vertices, such that the gradient of the potential function agrees with the pairwise comparisons. Our goal in this paper is to develop a method for collecting data for which the least squares estimator for the ranking problem has maximal Fisher information. Our approach, based on experimental design, is to view data collection as a bi-level optimization problem where the inner problem is the ranking problem and the outer problem is to identify data which maximizes the informativeness of the ranking. Under certain assumptions, the data collection problem decouples, reducing to a problem of finding multigraphs with large algebraic connectivity. This reduction of the data collection problem to graph-theoretic questions is one of the primary contributions of this work. As an application, we study the Yahoo! Movie user rating dataset and demonstrate that the addition of a small number of well-chosen pairwise comparisons can significantly increase the Fisher informativeness of the ranking. As another application, we study the 2011-12 NCAA football schedule and propose schedules with the same number of games which are significantly more informative. Using spectral clustering methods to identify highly-connected communities within the division, we argue that the NCAA could improve its notoriously poor rankings by simply scheduling more out-of-conference games.
Braxton Osting and Christoph Brune and Stanley J. Osher
null
1207.6430
null
null
Gaussian process regression as a predictive model for Quality-of-Service in Web service systems
cs.NI cs.LG
In this paper, we present Gaussian process regression as a predictive model for Quality-of-Service (QoS) attributes in Web service systems. The goal is to predict the performance of the execution system, expressed as QoS attributes, given the existing execution system, service repository, and inputs, e.g., streams of requests. In order to evaluate the performance of Gaussian process regression, a simulation environment was developed. Two quality indexes were used, namely Mean Absolute Error and Mean Squared Error. The results obtained within the experiment show that the Gaussian process performed best with a linear kernel, statistically significantly better compared to the Classification and Regression Trees (CART) method.
Jakub M. Tomczak, Jerzy Swiatek, Krzysztof Latawiec
null
1207.6910
null
null
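A hedged sketch of the comparison described above, on synthetic stand-in data (the paper's simulation environment and QoS traces are not available here): a Gaussian process with a linear (dot-product) kernel against CART, scored with MAE and MSE via scikit-learn.
```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import DotProduct, WhiteKernel
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
# Hypothetical QoS data: request rate and payload size -> response time.
X = rng.uniform(0, 1, (200, 2))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.05, 200)
Xtr, Xte, ytr, yte = X[:150], X[150:], y[:150], y[150:]

# GP with a linear kernel, the configuration favoured in the abstract above.
gp = GaussianProcessRegressor(kernel=DotProduct() + WhiteKernel()).fit(Xtr, ytr)
cart = DecisionTreeRegressor(max_depth=5).fit(Xtr, ytr)

for name, model in [("GP (linear kernel)", gp), ("CART", cart)]:
    pred = model.predict(Xte)
    print(name, "MAE:", mean_absolute_error(yte, pred),
          "MSE:", mean_squared_error(yte, pred))
```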
Supervised Laplacian Eigenmaps with Applications in Clinical Diagnostics for Pediatric Cardiology
cs.LG
Electronic health records contain rich textual data which possess critical predictive information for machine-learning based diagnostic aids. However many traditional machine learning methods fail to simultaneously integrate both vector space data and text. We present a supervised method using Laplacian eigenmaps to augment existing machine-learning methods with low-dimensional representations of textual predictors which preserve the local similarities. The proposed implementation performs alternating optimization using gradient descent. For the evaluation we applied our method to over 2,000 patient records from a large single-center pediatric cardiology practice to predict if patients were diagnosed with cardiac disease. Our method was compared with latent semantic indexing, latent Dirichlet allocation, and local Fisher discriminant analysis. The results were assessed using AUC, MCC, specificity, and sensitivity. Results indicate that supervised Laplacian eigenmaps (SLE) was the highest-performing method in our study, achieving 0.782 and 0.374 for AUC and MCC respectively. SLE showed an increase of 8.16% in AUC and of 20.6% in MCC over the baseline which excluded textual data, and a 2.69% and 5.35% increase in AUC and MCC respectively over unsupervised Laplacian eigenmaps. This method allows many existing machine learning predictors to effectively and efficiently utilize the potential of textual predictors.
Thomas Perry and Hongyuan Zha and Patricio Frias and Dadan Zeng and Mark Braunstein
null
1207.7035
null
null
Predicate Generation for Learning-Based Quantifier-Free Loop Invariant Inference
cs.LO cs.LG
We address the predicate generation problem in the context of loop invariant inference. Motivated by the interpolation-based abstraction refinement technique, we apply the interpolation theorem to synthesize predicates implicitly implied by program texts. Our technique is able to improve the effectiveness and efficiency of the learning-based loop invariant inference algorithm in [14]. We report experiment results of examples from Linux, SPEC2000, and Tar utility.
Wonchan Lee (Seoul National University), Yungbum Jung (Seoul National University), Bow-yaw Wang (Academia Sinica), Kwangkuen Yi (Seoul National University)
10.2168/LMCS-8(3:25)2012
1207.7167
null
null
Learning a peptide-protein binding affinity predictor with kernel ridge regression
q-bio.QM cs.LG q-bio.BM stat.ML
We propose a specialized string kernel for small bio-molecules, peptides and pseudo-sequences of binding interfaces. The kernel incorporates physico-chemical properties of amino acids and elegantly generalizes eight kernels, such as the Oligo, the Weighted Degree, the Blended Spectrum, and the Radial Basis Function. We provide a low-complexity dynamic programming algorithm for the exact computation of the kernel and a linear-time algorithm for its approximation. Combined with kernel ridge regression and SupCK, a novel binding pocket kernel, the proposed kernel yields biologically relevant and good prediction accuracy on the PepX database. For the first time, a machine learning predictor is capable of accurately predicting the binding affinity of any peptide to any protein. The method was also applied to both single-target and pan-specific Major Histocompatibility Complex class II benchmark datasets and three Quantitative Structure Affinity Model benchmark datasets. On all benchmarks, our method significantly (p-value < 0.057) outperforms the current state-of-the-art methods at predicting peptide-protein binding affinities. The proposed approach is flexible and can be applied to predict any quantitative biological activity. The method should be of value to a large segment of the research community with the potential to accelerate peptide-based drug and vaccine development.
S\'ebastien Gigu\`ere, Mario Marchand, Fran\c{c}ois Laviolette, Alexandre Drouin and Jacques Corbeil
10.1186/1471-2105-14-82
1207.7253
null
null
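The kernel ridge regression component above is standard and easy to sketch. The snippet below substitutes a plain k-mer spectrum feature map for the authors' specialized kernel (which additionally encodes physico-chemical properties and generalizes eight kernels); peptides, affinities, and the ridge parameter are all hypothetical.
```python
import numpy as np
from itertools import product

def spectrum_features(seq, k=2, alphabet="ACDEFGHIKLMNPQRSTVWY"):
    # Count k-mer occurrences: a simple stand-in feature map, not the
    # paper's kernel, which also models physico-chemical properties.
    idx = {"".join(p): i for i, p in enumerate(product(alphabet, repeat=k))}
    v = np.zeros(len(idx))
    for i in range(len(seq) - k + 1):
        v[idx[seq[i:i + k]]] += 1
    return v

peptides = ["ACDKL", "ACDKV", "WWYML", "KLACD"]       # hypothetical peptides
y = np.array([1.2, 1.1, 0.3, 0.9])                    # hypothetical affinities
Phi = np.stack([spectrum_features(p) for p in peptides])
K = Phi @ Phi.T                                       # Gram matrix

lam = 0.1                                             # ridge parameter
alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)  # KRR dual solution

query = spectrum_features("ACDKI")
print("predicted affinity:", (Phi @ query) @ alpha)   # k(x)^T alpha
```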
Oracle inequalities for computationally adaptive model selection
stat.ML cs.LG
We analyze general model selection procedures using penalized empirical loss minimization under computational constraints. While classical model selection approaches do not consider computational aspects of performing model selection, we argue that any practical model selection procedure must not only trade off estimation and approximation error, but also the computational effort required to compute empirical minimizers for different function classes. We provide a framework for analyzing such problems, and we give algorithms for model selection under a computational budget. These algorithms satisfy oracle inequalities that show that the risk of the selected model is not much worse than if we had devoted all of our computational budget to the optimal function class.
Alekh Agarwal, Peter L. Bartlett, John C. Duchi
null
1208.0129
null
null
Fast Planar Correlation Clustering for Image Segmentation
cs.CV cs.DS cs.LG stat.ML
We describe a new optimization scheme for finding high-quality correlation clusterings in planar graphs that uses weighted perfect matching as a subroutine. Our method provides lower-bounds on the energy of the optimal correlation clustering that are typically fast to compute and tight in practice. We demonstrate our algorithm on the problem of image segmentation where this approach outperforms existing global optimization techniques in minimizing the objective and is competitive with the state of the art in producing high-quality segmentations.
Julian Yarkony, Alexander T. Ihler, Charless C. Fowlkes
null
1208.0378
null
null
Multidimensional Membership Mixture Models
cs.LG stat.ML
We present the multidimensional membership mixture (M3) models where every dimension of the membership represents an independent mixture model and each data point is generated from the selected mixture components jointly. This is helpful when the data has a certain shared structure. For example, three unique means and three unique variances can effectively form a Gaussian mixture model with nine components, while requiring only six parameters to fully describe it. In this paper, we present three instantiations of M3 models (together with the learning and inference algorithms): infinite, finite, and hybrid, depending on whether the number of mixtures is fixed or not. They are built upon Dirichlet process mixture models, latent Dirichlet allocation, and a combination respectively. We then consider two applications: topic modeling and learning 3D object arrangements. Our experiments show that our M3 models achieve better performance using fewer topics than many classic topic models. We also observe that topics from the different dimensions of M3 models are meaningful and orthogonal to each other.
Yun Jiang, Marcus Lim and Ashutosh Saxena
null
1208.0402
null
null
Efficient Point-to-Subspace Query in $\ell^1$ with Application to Robust Object Instance Recognition
cs.CV cs.LG stat.ML
Motivated by vision tasks such as robust face and object recognition, we consider the following general problem: given a collection of low-dimensional linear subspaces in a high-dimensional ambient (image) space, and a query point (image), efficiently determine the nearest subspace to the query in $\ell^1$ distance. In contrast to the naive exhaustive search, which entails large-scale linear programs, we show that the computational burden can be cut down significantly by a simple two-stage algorithm: (1) projecting the query and database subspaces into a lower-dimensional space by a random Cauchy matrix, and solving small-scale distance evaluations (linear programs) in the projection space to locate candidate nearest subspaces; (2) with a few candidates from independent repetitions of (1), going back to the high-dimensional space and performing exhaustive search. To preserve the identity of the nearest subspace with nontrivial probability, the projection dimension is typically a low-order polynomial of the subspace dimension multiplied by the logarithm of the number of subspaces (Theorem 2.1). The reduced dimensionality, and hence complexity, renders the proposed algorithm particularly relevant to vision applications such as robust face and object instance recognition, which we investigate empirically.
Ju Sun and Yuqian Zhang and John Wright
10.1137/130936166
1208.0432
null
null
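A minimal sketch of the two-stage idea above: the $\ell^1$ point-to-subspace distance is a linear program, and a random Cauchy projection shrinks it to a small LP used to rank candidate subspaces (the sketched value is not on the same scale as the exact distance; the paper's Theorem 2.1 gives the precise guarantees). Dimensions and data are illustrative.
```python
import numpy as np
from scipy.optimize import linprog

def l1_dist_to_subspace(q, U):
    # min_c ||q - U c||_1 as an LP over variables [c, t]:
    # minimize sum(t) subject to -t <= q - U c <= t.
    m, d = U.shape
    obj = np.concatenate([np.zeros(d), np.ones(m)])
    A_ub = np.block([[ U, -np.eye(m)],    #  U c - t <=  q
                     [-U, -np.eye(m)]])   # -U c - t <= -q
    b_ub = np.concatenate([q, -q])
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * d + [(0, None)] * m)
    return res.fun

rng = np.random.default_rng(0)
m, d, p = 500, 5, 60                    # ambient dim, subspace dim, sketch dim
U = rng.normal(size=(m, d))
q = U @ rng.normal(size=d) + 0.1 * rng.normal(size=m)  # query near the subspace

# Stage 1: random Cauchy sketch, then a much smaller LP (used for ranking
# candidates only; its value is not calibrated to the exact distance).
P = rng.standard_cauchy((p, m))
print("sketched LP value:", l1_dist_to_subspace(P @ q, P @ U))
print("exact l1 distance:", l1_dist_to_subspace(q, U))
```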
Detection of Deviations in Mobile Applications Network Behavior
cs.CR cs.LG
In this paper a novel system for detecting meaningful deviations in a mobile application's network behavior is proposed. The main goal of the proposed system is to protect mobile device users and cellular infrastructure companies from malicious applications. The new system is capable of: (1) identifying malicious attacks or masquerading applications installed on a mobile device, and (2) identifying republishing of popular applications injected with a malicious code. The detection is performed based on the application's network traffic patterns only. For each application two types of models are learned. The first model, local, represents the personal traffic pattern for each user using an application and is learned on the device. The second model, collaborative, represents traffic patterns of numerous users using an application and is learned on the system server. Machine-learning methods are used for learning and detection purposes. This paper focuses on methods utilized for local (i.e., on mobile device) learning and detection of deviations from the normal application's behavior. These methods were implemented and evaluated on Android devices. The evaluation experiments demonstrate that: (1) various applications have specific network traffic patterns and certain application categories can be distinguishable by their network patterns, (2) different levels of deviations from normal behavior can be detected accurately, and (3) local learning is feasible and has a low performance overhead on mobile devices.
L. Chekina, D. Mimran, L. Rokach, Y. Elovici, B. Shapira
null
1208.0564
null
null
On the Consistency of AUC Pairwise Optimization
cs.LG stat.ML
AUC (area under ROC curve) is an important evaluation criterion, which has been popularly used in many learning tasks such as class-imbalance learning, cost-sensitive learning, learning to rank, etc. Many learning approaches try to optimize AUC, while owing to the non-convexity and discontinuity of AUC, almost all approaches work with surrogate loss functions. Thus, the consistency of AUC is crucial; however, it has been almost untouched before. In this paper, we provide a sufficient condition for the asymptotic consistency of learning approaches based on surrogate loss functions. Based on this result, we prove that exponential loss and logistic loss are consistent with AUC, but hinge loss is inconsistent. Then, we derive the $q$-norm hinge loss and general hinge loss that are consistent with AUC. We also derive the consistent bounds for exponential loss and logistic loss, and obtain the consistent bounds for many surrogate loss functions under the non-noise setting. Further, we disclose an equivalence between the exponential surrogate loss of AUC and the exponential surrogate loss of accuracy, and one straightforward consequence of this finding is that AdaBoost and RankBoost are equivalent.
Wei Gao and Zhi-Hua Zhou
null
1208.0645
null
null
Wisdom of the Crowd: Incorporating Social Influence in Recommendation Models
cs.IR cs.LG cs.SI physics.soc-ph
Recommendation systems have received considerable attention recently. However, most research has been focused on improving the performance of collaborative filtering (CF) techniques. Social networks, indispensably, provide us with extra information on people's preferences, and should be considered and deployed to improve the quality of recommendations. In this paper, we propose two recommendation models, for individuals and for groups respectively, based on social contagion and social influence network theory. In the recommendation model for individuals, we improve the result of collaborative filtering prediction with the social contagion outcome, which simulates the result of an information cascade in the decision-making process. In the recommendation model for groups, we apply social influence network theory to take interpersonal influence into account to form a settled pattern of disagreement, and then aggregate opinions of group members. By introducing the concepts of susceptibility and interpersonal influence, the settled rating results are flexible, and inclined towards members whose ratings are "essential".
Shang Shang, Pan Hui, Sanjeev R. Kulkarni and Paul W. Cuff
null
1208.0782
null
null
A Random Walk Based Model Incorporating Social Information for Recommendations
cs.IR cs.LG
Collaborative filtering (CF) is one of the most popular approaches to build a recommendation system. In this paper, we propose a hybrid collaborative filtering model based on a Markovian random walk to address the data sparsity and cold start problems in recommendation systems. More precisely, we construct a directed graph whose nodes consist of items and users, together with item content, user profile and social network information. We incorporate users' ratings into the edge settings in the graph model. The model provides personalized recommendations and predictions to individuals and groups. The proposed algorithms are evaluated on the MovieLens and Epinions datasets. Experimental results show that the proposed methods perform well compared with other graph-based methods, especially in the cold start case.
Shang Shang, Sanjeev R. Kulkarni, Paul W. Cuff and Pan Hui
null
1208.0787
null
null
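A toy sketch of the graph construction described above, assuming a tiny rating matrix: ratings become edge weights on a user-item bipartite graph, and a random walk with restart (personalized PageRank) scores unrated items for a target user. The item content, profile, and social edges the model also supports are omitted for brevity.
```python
import numpy as np

# Hypothetical ratings (rows: users, columns: items); 0 means unrated.
R = np.array([[5, 4, 0, 0],
              [4, 0, 0, 1],
              [0, 0, 5, 4],
              [0, 3, 4, 0]], dtype=float)

# Directed graph over users + items, row-normalized into a transition matrix.
n_u, n_i = R.shape
A = np.zeros((n_u + n_i, n_u + n_i))
A[:n_u, n_u:] = R
A[n_u:, :n_u] = R.T
P = A / A.sum(axis=1, keepdims=True)

# Random walk with restart (personalized PageRank) from user 0.
beta = 0.15
restart = np.zeros(n_u + n_i); restart[0] = 1.0
pi = restart.copy()
for _ in range(100):
    pi = (1 - beta) * pi @ P + beta * restart

scores = pi[n_u:] * (R[0] == 0)    # rank only items user 0 has not rated
print("recommend item:", int(np.argmax(scores)))
```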
Cross-conformal predictors
stat.ML cs.LG
This note introduces the method of cross-conformal prediction, which is a hybrid of the methods of inductive conformal prediction and cross-validation, and studies its validity and predictive efficiency empirically.
Vladimir Vovk
null
1208.0806
null
null
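A minimal sketch of cross-conformal prediction on synthetic data, assuming a k-NN regressor and absolute residuals as the nonconformity measure: each fold's held-out scores are pooled into a single p-value, and a prediction set is read off by thresholding p-values over a grid of candidate labels.
```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))
y = X[:, 0] ** 2 + rng.normal(0, 0.1, 200)
x_new, K = np.array([[0.3]]), 5

def p_value(y_cand):
    # Pool nonconformity scores from every fold's held-out part.
    hits, total = 0, 0
    for tr, cal in KFold(K, shuffle=True, random_state=0).split(X):
        model = KNeighborsRegressor(5).fit(X[tr], y[tr])
        cal_scores = np.abs(y[cal] - model.predict(X[cal]))
        new_score = abs(y_cand - model.predict(x_new)[0])
        hits += np.sum(cal_scores >= new_score)
        total += len(cal)
    return (hits + 1) / (total + 1)

# A crude 90% prediction set: candidate labels whose p-value exceeds 0.1.
grid = np.linspace(-0.5, 1.0, 151)
inside = [yc for yc in grid if p_value(yc) > 0.1]
print("interval ~ [%.2f, %.2f]" % (min(inside), max(inside)))
```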
Learning Theory Approach to Minimum Error Entropy Criterion
cs.LG stat.ML
We consider the minimum error entropy (MEE) criterion and an empirical risk minimization learning algorithm in a regression setting. A learning theory approach is presented for this MEE algorithm and explicit error bounds are provided in terms of the approximation ability and capacity of the involved hypothesis space when the MEE scaling parameter is large. Novel asymptotic analysis is conducted for the generalization error associated with Renyi's entropy and a Parzen window function, to overcome technical difficulties arising from the essential differences between the classical least squares problems and the MEE setting. A semi-norm and the involved symmetrized least squares error are introduced, which are related to some ranking algorithms.
Ting Hu, Jun Fan, Qiang Wu, Ding-Xuan Zhou
null
1208.0848
null
null
Statistical Results on Filtering and Epi-convergence for Learning-Based Model Predictive Control
math.OC cs.LG cs.SY
Learning-based model predictive control (LBMPC) is a technique that provides deterministic guarantees on robustness, while statistical identification tools are used to identify richer models of the system in order to improve performance. This technical note provides proofs that elucidate the reasons for our choice of measurement model, as well as giving proofs concerning the stochastic convergence of LBMPC. The first part of this note discusses simultaneous state estimation and statistical identification (or learning) of unmodeled dynamics, for dynamical systems that can be described by ordinary differential equations (ODE's). The second part provides proofs concerning the epi-convergence of different statistical estimators that can be used with the learning-based model predictive control (LBMPC) technique. In particular, we prove results on the statistical properties of a nonparametric estimator that we have designed to have the correct deterministic and stochastic properties for numerical implementation when used in conjunction with LBMPC.
Anil Aswani, Humberto Gonzalez, S. Shankar Sastry, Claire Tomlin
null
1208.0864
null
null
Recklessly Approximate Sparse Coding
cs.LG cs.CV stat.ML
It has recently been observed that certain extremely simple feature encoding techniques are able to achieve performance on several standard image classification benchmarks rivalling that of more complex systems, including deep belief networks, convolutional nets, factored RBMs, mcRBMs, convolutional RBMs, sparse autoencoders and several others. Moreover, these "triangle" or "soft threshold" encodings are extremely efficient to compute. Several intuitive arguments have been put forward to explain this remarkable performance, yet no mathematical justification has been offered. The main result of this report is to show that these features are realized as an approximate solution to a non-negative sparse coding problem. Using this connection we describe several variants of the soft threshold features and demonstrate their effectiveness on two image classification benchmark tasks.
Misha Denil and Nando de Freitas
null
1208.0959
null
null
APRIL: Active Preference-learning based Reinforcement Learning
cs.LG
This paper focuses on reinforcement learning (RL) with limited prior knowledge. In the domain of swarm robotics for instance, the expert can hardly design a reward function or demonstrate the target behavior, forbidding the use of both standard RL and inverse reinforcement learning. Although with limited expertise, the human expert is still often able to emit preferences and rank the agent demonstrations. Earlier work has presented an iterative preference-based RL framework: expert preferences are exploited to learn an approximate policy return, thus enabling the agent to achieve direct policy search. Iteratively, the agent selects a new candidate policy and demonstrates it; the expert ranks the new demonstration comparatively to the previous best one; the expert's ranking feedback enables the agent to refine the approximate policy return, and the process is iterated. In this paper, preference-based reinforcement learning is combined with active ranking in order to decrease the number of ranking queries to the expert needed to yield a satisfactory policy. Experiments on the mountain car and the cancer treatment testbeds show that a couple of dozen rankings suffice to learn a competent policy.
Riad Akrour (INRIA Saclay - Ile de France, LRI), Marc Schoenauer (INRIA Saclay - Ile de France, LRI), Mich\`ele Sebag (LRI)
null
1208.0984
null
null
Sequential Estimation Methods from Inclusion Principle
math.ST cs.LG math.PR stat.TH
In this paper, we propose new sequential estimation methods based on inclusion principle. The main idea is to reformulate the estimation problems as constructing sequential random intervals and use confidence sequences to control the associated coverage probabilities. In contrast to existing asymptotic sequential methods, our estimation procedures rigorously guarantee the pre-specified levels of confidence.
Xinjia Chen
null
1208.1056
null
null
Fast and Robust Recursive Algorithms for Separable Nonnegative Matrix Factorization
stat.ML cs.LG math.OC
In this paper, we study the nonnegative matrix factorization problem under the separability assumption (that is, there exists a cone spanned by a small subset of the columns of the input nonnegative data matrix containing all columns), which is equivalent to the hyperspectral unmixing problem under the linear mixing model and the pure-pixel assumption. We present a family of fast recursive algorithms, and prove they are robust under any small perturbations of the input data matrix. This family generalizes several existing hyperspectral unmixing algorithms and hence provides for the first time a theoretical justification of their better practical performance.
Nicolas Gillis and Stephen A. Vavasis
10.1109/TPAMI.2013.226
1208.1237
null
null
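One well-known member of the recursive family studied above is the successive projection algorithm (SPA); the sketch below runs it on a synthetic separable matrix and recovers the indices of the pure columns. This illustrates the algorithmic template, not the authors' generalized variants or robustness analysis.
```python
import numpy as np

def spa(M, r):
    # Successive Projection Algorithm: pick the column with the largest norm,
    # project all columns onto its orthogonal complement, repeat r times.
    R, idx = M.astype(float).copy(), []
    for _ in range(r):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))
        idx.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(u, u @ R)
    return sorted(idx)

rng = np.random.default_rng(0)
W = rng.random((20, 3))                    # pure columns ("endmembers")
H = rng.dirichlet(np.ones(3), size=40).T   # convex mixtures of the pure columns
M = W @ np.hstack([np.eye(3), H])          # separable input matrix

print(spa(M, 3))                           # recovers [0, 1, 2]
```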
One Permutation Hashing for Efficient Search and Learning
cs.LG cs.IR cs.IT math.IT stat.CO stat.ML
Recently, the method of b-bit minwise hashing has been applied to large-scale linear learning and sublinear time near-neighbor search. The major drawback of minwise hashing is the expensive preprocessing cost, as the method requires applying, e.g., k = 200 to 500 permutations on the data. The testing time can also be expensive if a new data point (e.g., a new document or image) has not been processed, which might be a significant issue in user-facing applications. We develop a very simple solution based on one permutation hashing. Conceptually, given a massive binary data matrix, we permute the columns only once and divide the permuted columns evenly into k bins; and we simply store, for each data vector, the smallest nonzero location in each bin. The interesting probability analysis (which is validated by experiments) reveals that our one permutation scheme should perform very similarly to the original (k-permutation) minwise hashing. In fact, the one permutation scheme can be even slightly more accurate, due to the "sample-without-replacement" effect. Our experiments with training linear SVM and logistic regression on the webspam dataset demonstrate that this one permutation hashing scheme can achieve the same (or even slightly better) accuracies compared to the original k-permutation scheme. To test the robustness of our method, we also experiment with the small news20 dataset, which is very sparse, with on average merely 500 nonzeros in each data vector. Interestingly, our one permutation scheme noticeably outperforms the k-permutation scheme when k is not too small on the news20 dataset. In summary, our method can achieve at least the same accuracy as the original k-permutation scheme, at merely 1/k of the original preprocessing cost.
Ping Li and Art Owen and Cun-Hui Zhang
null
1208.1259
null
null
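A minimal sketch of the scheme described above, with illustrative dimensions: permute the columns once, split them into k equal bins, and keep the smallest nonzero permuted location per bin; the fraction of matching non-empty bins then behaves like a minwise similarity estimate (the paper derives the exact estimator and its variance).
```python
import numpy as np

def one_permutation_hash(x, perm, k):
    # x: binary vector; perm: one shared permutation of the D columns.
    # Split the permuted columns into k equal bins; store, per bin, the
    # smallest nonzero location within the bin (-1 marks an empty bin).
    D = len(perm)
    positions = np.sort(perm[np.flatnonzero(x)])
    bins = np.full(k, -1)
    for pos in positions[::-1]:            # descending, so the smallest wins
        bins[pos * k // D] = pos % (D // k)
    return bins

rng = np.random.default_rng(0)
D, k = 1024, 16
perm = rng.permutation(D)
x = (rng.random(D) < 0.05).astype(int)
y = x.copy(); y[rng.choice(D, 20, replace=False)] ^= 1   # a perturbed copy

hx = one_permutation_hash(x, perm, k)
hy = one_permutation_hash(y, perm, k)
both = (hx != -1) | (hy != -1)             # ignore jointly empty bins
print("matching non-empty bins:", np.mean((hx == hy)[both]))
```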
Data Selection for Semi-Supervised Learning
cs.LG
The real challenge in pattern recognition tasks and machine learning is to train a discriminator using labeled data and use it to distinguish between future data as accurately as possible. However, most real-world problems involve large amounts of data for which labeling is cumbersome or even impossible. Semi-supervised learning is one approach to overcome these types of problems. It uses only a small set of labeled data, together with a large amount of unlabeled data, to train the discriminator. In semi-supervised learning, the choice of which data points are labeled is essential, and effectiveness changes depending on their position. In this paper, we propose an evolutionary approach called Artificial Immune System (AIS) to determine which data points should be labeled in order to obtain high-quality data. The experimental results demonstrate the effectiveness of this algorithm in finding these data points.
Shafigh Parsazad, Ehsan Saboori and Amin Allahyar
null
1208.1315
null
null
Guess Who Rated This Movie: Identifying Users Through Subspace Clustering
cs.LG
It is often the case that, within an online recommender system, multiple users share a common account. Can such shared accounts be identified solely on the basis of the user- provided ratings? Once a shared account is identified, can the different users sharing it be identified as well? Whenever such user identification is feasible, it opens the way to possible improvements in personalized recommendations, but also raises privacy concerns. We develop a model for composite accounts based on unions of linear subspaces, and use subspace clustering for carrying out the identification task. We show that a significant fraction of such accounts is identifiable in a reliable manner, and illustrate potential uses for personalized recommendation.
Amy Zhang, Nadia Fawaz, Stratis Ioannidis and Andrea Montanari
null
1208.1544
null
null
Self-Organizing Time Map: An Abstraction of Temporal Multivariate Patterns
cs.LG cs.DS
This paper adopts and adapts Kohonen's standard Self-Organizing Map (SOM) for exploratory temporal structure analysis. The Self-Organizing Time Map (SOTM) implements SOM-type learning on one-dimensional arrays for individual time units, preserves the orientation with short-term memory and arranges the arrays in an ascending order of time. The two-dimensional representation of the SOTM thus attempts a twofold topology preservation, where the horizontal direction preserves time topology and the vertical direction data topology. This enables discovering the occurrence and exploring the properties of temporal structural changes in data. For representing qualities and properties of SOTMs, we adapt measures and visualizations from the standard SOM paradigm, as well as introduce a measure of temporal structural changes. The functioning of the SOTM, and its visualizations and quality and property measures, are illustrated on artificial toy data. The usefulness of the SOTM in a real-world setting is shown on poverty, welfare and development indicators.
Peter Sarlin
10.1016/j.neucom.2012.07.011
1208.1819
null
null
Metric Learning across Heterogeneous Domains by Respectively Aligning Both Priors and Posteriors
cs.LG
In this paper, we attempt to learn a single metric across two heterogeneous domains, where the source domain is fully labeled and has many samples, while the target domain has only a few labeled samples but abundant unlabeled samples. To the best of our knowledge, this task has seldom been touched. The proposed learning model has a simple underlying motivation: all the samples in both the source and the target domains are mapped into a common space, where both their priors P(sample) and their posteriors P(label|sample) are forced to be respectively aligned as much as possible. We show that the two mappings, from both the source domain and the target domain to the common space, can be reparameterized into a single positive semi-definite (PSD) matrix. Then we develop an efficient Bregman projection algorithm to optimize the PSD matrix, regularized by a LogDet function. Furthermore, we also show that this model can be easily kernelized, and verify its effectiveness in a cross-language retrieval task and a cross-domain object recognition task.
Qiang Qian and Songcan Chen
null
1208.1829
null
null
Margin Distribution Controlled Boosting
cs.LG
Schapire's margin theory provides a theoretical explanation for the success of boosting-type methods and shows that a good margin distribution (MD) of training samples is essential for generalization. However, the statement that an MD is good is vague; consequently, many recently developed algorithms try to generate an MD in their own sense of goodness for boosting generalization. Unlike their indirect control over the MD, in this paper we propose an alternative boosting algorithm, termed Margin distribution Controlled Boosting (MCBoost), which directly controls the MD by introducing and optimizing a key adjustable margin parameter. MCBoost's optimization implementation adopts the column generation technique to ensure fast convergence and a small number of weak classifiers involved in the final MCBooster. We empirically demonstrate that: 1) AdaBoost is actually also an MD-controlled algorithm, with its iteration number acting as a parameter controlling the distribution, and 2) the generalization performance of MCBoost, evaluated on UCI benchmark datasets, is better than those of AdaBoost, L2Boost, LPBoost, AdaBoost-CG and MDBoost.
Guangxu Guo and Songcan Chen
null
1208.1846
null
null
Scaling Multiple-Source Entity Resolution using Statistically Efficient Transfer Learning
cs.DB cs.LG
We consider a serious, previously-unexplored challenge facing almost all approaches to scaling up entity resolution (ER) to multiple data sources: the prohibitive cost of labeling training data for supervised learning of similarity scores for each pair of sources. While there exists a rich literature describing almost all aspects of pairwise ER, this new challenge is arising now due to the unprecedented ability to acquire and store data from online sources, features driven by ER such as enriched search verticals, and the uniqueness of noisy and missing data characteristics for each source. We show on real-world and synthetic data that for state-of-the-art techniques, the reality of heterogeneous sources means that the number of labeled training data must scale quadratically in the number of sources, just to maintain constant precision/recall. We address this challenge with a brand new transfer learning algorithm which requires far less training data (or equivalently, achieves superior accuracy with the same data) and is trained using fast convex optimization. The intuition behind our approach is to adaptively share structure learned about one scoring problem with all other scoring problems sharing a data source in common. We demonstrate that our theoretically motivated approach incurs no runtime cost while it can maintain constant precision/recall with the cost of labeling increasing only linearly with the number of sources.
Sahand Negahban, Benjamin I. P. Rubinstein and Jim Gemmell
null
1208.1860
null
null
Sharp analysis of low-rank kernel matrix approximations
cs.LG math.ST stat.TH
We consider supervised learning problems within the positive-definite kernel framework, such as kernel ridge regression, kernel logistic regression or the support vector machine. With kernels leading to infinite-dimensional feature spaces, a common practical limiting difficulty is the necessity of computing the kernel matrix, which most frequently leads to algorithms with running time at least quadratic in the number of observations n, i.e., O(n^2). Low-rank approximations of the kernel matrix are often considered as they allow the reduction of running time complexities to O(p^2 n), where p is the rank of the approximation. The practicality of such methods thus depends on the required rank p. In this paper, we show that in the context of kernel ridge regression, for approximations based on a random subset of columns of the original kernel matrix, the rank p may be chosen to be linear in the degrees of freedom associated with the problem, a quantity which is classically used in the statistical analysis of such methods, and is often seen as the implicit number of parameters of non-parametric estimators. This result enables simple algorithms that have sub-quadratic running time complexity, but provably exhibit the same predictive performance as existing algorithms, for any given problem instance, and not only for worst-case situations.
Francis Bach (INRIA Paris - Rocquencourt, LIENS)
null
1208.2015
null
null
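A sketch of the quantities discussed above on synthetic data: the degrees of freedom tr(K(K + n*lambda*I)^{-1}) are computed from the full kernel matrix (formed here only for illustration; the point of the method is to avoid this), and a Nystrom approximation of rank proportional to that number is used for kernel ridge regression.
```python
import numpy as np

rng = np.random.default_rng(0)
n, lam, sigma = 1000, 1e-3, 0.5
X = rng.uniform(-1, 1, (n, 1))
y = np.sin(3 * X[:, 0]) + rng.normal(0, 0.1, n)

K = np.exp(-(X - X.T) ** 2 / (2 * sigma ** 2))   # full Gram, for illustration only

# Degrees of freedom of kernel ridge regression; the paper shows a Nystrom
# rank p linear in this quantity suffices.
dof = np.trace(K @ np.linalg.inv(K + n * lam * np.eye(n)))
p = int(np.ceil(2 * dof))

cols = rng.choice(n, p, replace=False)           # random column subset
C, W = K[:, cols], K[np.ix_(cols, cols)]
K_approx = C @ np.linalg.pinv(W) @ C.T           # Nystrom approximation

alpha = np.linalg.solve(K_approx + n * lam * np.eye(n), y)
print("dof = %.1f, rank p = %d, train MSE = %.4f"
      % (dof, p, np.mean((K_approx @ alpha - y) ** 2)))
```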
Inverse Reinforcement Learning with Gaussian Process
cs.LG
We present new algorithms for inverse reinforcement learning (IRL, or inverse optimal control) in convex optimization settings. We argue that finite-space IRL can be posed as a convex quadratic program under a Bayesian inference framework with the objective of maximum a posteriori estimation. To deal with problems in large or even infinite state space, we propose a Gaussian process model and use preference graphs to represent observations of decision trajectories. Our method is distinguished from other approaches to IRL in that it makes no assumptions about the form of the reward function and yet it retains the promise of computationally manageable implementations for potential real-world applications. In comparison with an established algorithm on small-scale numerical problems, our method demonstrated better accuracy in apprenticeship learning and a more robust dependence on the number of observations.
Qifeng Qiao and Peter A. Beling
null
1208.2112
null
null
Brain tumor MRI image classification with feature selection and extraction using linear discriminant analysis
cs.CV cs.LG
Feature extraction is a method of capturing the visual content of an image; it is the process of representing a raw image in reduced form to facilitate decision making such as pattern classification. We address the problem of classifying MRI brain images by creating a robust and more accurate classifier which can act as an expert assistant to medical practitioners. The objective of this paper is to present a novel method of feature selection and extraction. This approach combines intensity, texture, and shape-based features and classifies the tissue as white matter, gray matter, CSF, abnormal or normal area. The experiment is performed on 140 tumor-containing brain MR images from the Internet Brain Segmentation Repository. The proposed technique has been carried out over a larger database compared to any previous work and is more robust and effective. PCA and Linear Discriminant Analysis (LDA) were applied to the training sets. The Support Vector Machine (SVM) classifier served as a comparison of nonlinear techniques vs. linear ones. PCA and LDA methods are used to reduce the number of features used. The feature selection using the proposed technique is more beneficial, as it analyses the data according to a grouping class variable and gives a reduced feature set with high classification accuracy.
V. P. Gladis Pushpa Rathi and S. Palani
null
1208.2128
null
null
How to sample if you must: on optimal functional sampling
stat.ML cs.LG
We examine a fundamental problem that models various active sampling setups, such as network tomography. We analyze sampling of a multivariate normal distribution with an unknown expectation that needs to be estimated: in our setup it is possible to sample the distribution from a given set of linear functionals, and the difficulty addressed is how to optimally select the combinations to achieve low estimation error. Although this problem is at the heart of the field of optimal design, no efficient solutions for the case with many functionals exist. We present some bounds and an efficient sub-optimal solution for this problem for more structured sets such as binary functionals that are induced by graph walks.
Assaf Hallak and Shie Mannor
null
1208.2417
null
null
Path Integral Control by Reproducing Kernel Hilbert Space Embedding
cs.LG stat.ML
We present an embedding of stochastic optimal control problems, of the so called path integral form, into reproducing kernel Hilbert spaces. Using consistent, sample based estimates of the embedding leads to a model free, non-parametric approach for calculation of an approximate solution to the control problem. This formulation admits a decomposition of the problem into an invariant and task dependent component. Consequently, we make much more efficient use of the sample data compared to previous sample based approaches in this domain, e.g., by allowing sample re-use across tasks. Numerical examples on test problems, which illustrate the sample efficiency, are provided.
Konrad Rawlik and Marc Toussaint and Sethu Vijayakumar
null
1208.2523
null
null
Nonparametric sparsity and regularization
stat.ML cs.LG math.OC
In this work we are interested in the problems of supervised learning and variable selection when the input-output dependence is described by a nonlinear function depending on a few variables. Our goal is to consider a sparse nonparametric model, hence avoiding linear or additive models. The key idea is to measure the importance of each variable in the model by making use of partial derivatives. Based on this intuition we propose a new notion of nonparametric sparsity and a corresponding least squares regularization scheme. Using concepts and results from the theory of reproducing kernel Hilbert spaces and proximal methods, we show that the proposed learning algorithm corresponds to a minimization problem which can be provably solved by an iterative procedure. The consistency properties of the obtained estimator are studied both in terms of prediction and selection performance. An extensive empirical analysis shows that the proposed method performs favorably with respect to the state-of-the-art methods.
Lorenzo Rosasco, Silvia Villa, Sofia Mosci, Matteo Santoro, Alessandro Verri
null
1208.2572
null
null
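The key idea above, measuring variable importance through partial derivatives, can be sketched directly: fit a Gaussian kernel ridge regressor, differentiate it analytically in each coordinate, and take empirical norms. The regularization scheme and consistency analysis of the paper are not reproduced here, only the derivative-based sparsity measure.
```python
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma, lam = 300, 5, 1.0, 1e-2
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=n)  # vars 0, 1 relevant

# Gaussian kernel ridge regression.
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / (2 * sigma ** 2))
alpha = np.linalg.solve(K + n * lam * np.eye(n), y)

# Importance of variable j: empirical L2 norm of the analytic partial
# derivative df/dx_j(x_i) = sum_l alpha_l K[i,l] (X[l,j] - X[i,j]) / sigma^2.
imp = np.zeros(d)
for j in range(d):
    grad_j = (K * (X[None, :, j] - X[:, None, j])) @ alpha / sigma ** 2
    imp[j] = np.sqrt(np.mean(grad_j ** 2))
print(np.round(imp, 3))   # the first two coordinates dominate
```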
Analysis of a Statistical Hypothesis Based Learning Mechanism for Faster crawling
cs.LG cs.IR
The growth of the world-wide web (WWW) has taken it from an intangible quantity of web pages to a gigantic hub of web information, which gradually increases the complexity of the crawling process in a search engine. A search engine handles a large number of queries from various parts of the world, and its answers depend solely on the knowledge it gathers by means of crawling. Information sharing has become a common habit of society, carried out by publishing structured, semi-structured and unstructured resources on the web. This social practice leads to an exponential growth of web resources, and hence it becomes essential to crawl continuously to update web knowledge and track modifications of existing resources. In this paper, a statistical-hypothesis-based learning mechanism is incorporated to learn the behavior of crawling speed in different network environments, and to intelligently control the speed of the crawler. A scaling technique is used to compare the performance of the proposed method with a standard crawler. High-speed performance is observed after scaling, and the retrieval of relevant web resources at such high speed is analyzed.
Sudarshan Nandy, Partha Pratim Sarkar and Achintya Das
10.5121/ijaia.2012.3409
1208.2808
null
null
Detecting Events and Patterns in Large-Scale User Generated Textual Streams with Statistical Learning Methods
cs.LG cs.CL cs.IR cs.SI stat.AP stat.ML
A vast amount of textual web streams is influenced by events or phenomena emerging in the real world. The social web forms an excellent modern paradigm, where unstructured user generated content is published on a regular basis and in most occasions is freely distributed. The present Ph.D. Thesis deals with the problem of inferring information - or patterns in general - about events emerging in real life based on the contents of this textual stream. We show that it is possible to extract valuable information about social phenomena, such as an epidemic or even rainfall rates, by automatic analysis of the content published in Social Media, and in particular Twitter, using Statistical Machine Learning methods. An important intermediate task regards the formation and identification of features which characterise a target event; we select and use those textual features in several linear, non-linear and hybrid inference approaches achieving a significantly good performance in terms of the applied loss function. By examining further this rich data set, we also propose methods for extracting various types of mood signals revealing how affective norms - at least within the social web's population - evolve during the day and how significant events emerging in the real world are influencing them. Lastly, we present some preliminary findings showing several spatiotemporal characteristics of this textual information as well as the potential of using it to tackle tasks such as the prediction of voting intentions.
Vasileios Lampos
null
1208.2873
null
null
Using Program Synthesis for Social Recommendations
cs.LG cs.DB cs.PL cs.SI physics.soc-ph
This paper presents a new approach to select events of interest to a user in a social media setting where events are generated by the activities of the user's friends through their mobile devices. We argue that given the unique requirements of the social media setting, the problem is best viewed as an inductive learning problem, where the goal is to first generalize from the users' expressed "likes" and "dislikes" of specific events, then to produce a program that can be manipulated by the system and distributed to the collection devices to collect only data of interest. The key contribution of this paper is a new algorithm that combines existing machine learning techniques with new program synthesis technology to learn users' preferences. We show that when compared with the more standard approaches, our new algorithm provides up to order-of-magnitude reductions in model training time, and significantly higher prediction accuracies for our target application. The approach also improves on standard machine learning techniques in that it produces clear programs that can be manipulated to optimize data collection and filtering.
Alvin Cheung, Armando Solar-Lezama, Samuel Madden
null
1208.2925
null
null
Asymptotic Generalization Bound of Fisher's Linear Discriminant Analysis
stat.ML cs.LG
Fisher's linear discriminant analysis (FLDA) is an important dimension reduction method in statistical pattern recognition. It has been shown that FLDA is asymptotically Bayes optimal under the homoscedastic Gaussian assumption. However, this classical result has the following two major limitations: 1) it holds only for a fixed dimensionality $D$, and thus does not apply when $D$ and the training sample size $N$ are proportionally large; 2) it does not provide a quantitative description on how the generalization ability of FLDA is affected by $D$ and $N$. In this paper, we present an asymptotic generalization analysis of FLDA based on random matrix theory, in a setting where both $D$ and $N$ increase and $D/N\longrightarrow\gamma\in[0,1)$. The obtained lower bound of the generalization discrimination power overcomes both limitations of the classical result, i.e., it is applicable when $D$ and $N$ are proportionally large and provides a quantitative description of the generalization ability of FLDA in terms of the ratio $\gamma=D/N$ and the population discrimination power. Besides, the discrimination power bound also leads to an upper bound on the generalization error of binary-classification with FLDA.
Wei Bian and Dacheng Tao
null
1208.3030
null
null
Metric distances derived from cosine similarity and Pearson and Spearman correlations
stat.ME cs.LG
We investigate two classes of transformations of cosine similarity and Pearson and Spearman correlations into metric distances, utilising the simple tool of metric-preserving functions. The first class puts anti-correlated objects maximally far apart. Previously known transforms fall within this class. The second class collates correlated and anti-correlated objects. An example of such a transformation that yields a metric distance is the sine function when applied to centered data.
Stijn van Dongen and Anton J. Enright
null
1208.3145
null
null
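The two classes of transformations can be spot-checked numerically. Below, for centered data (where cosine similarity and Pearson correlation coincide), sqrt(1 - r) represents the first class (anti-correlated objects maximally apart) and the sine transform sqrt(1 - r^2) the second (correlated and anti-correlated objects collated); both are verified to satisfy the triangle inequality on sample triples.
```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)
data = rng.normal(size=(6, 50))
data -= data.mean(axis=1, keepdims=True)   # centered: cosine == Pearson

def r(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def d_apart(a, b):   return np.sqrt(1 - r(a, b))        # class 1
def d_collate(a, b): return np.sqrt(1 - r(a, b) ** 2)   # class 2 (sine)

# Check d(i,j) <= d(i,k) + d(k,j) over all ordered triples (small tolerance).
for dist in (d_apart, d_collate):
    ok = all(dist(data[i], data[j])
             <= dist(data[i], data[k]) + dist(data[k], data[j]) + 1e-12
             for i, j, k in permutations(range(6), 3))
    print(dist.__name__, "triangle inequality on sample triples:", ok)
```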
Structured Prediction Cascades
stat.ML cs.LG
Structured prediction tasks pose a fundamental trade-off between the need for model complexity to increase predictive power and the limited computational resources for inference in the exponentially-sized output spaces such models require. We formulate and develop the Structured Prediction Cascade architecture: a sequence of increasingly complex models that progressively filter the space of possible outputs. The key principle of our approach is that each model in the cascade is optimized to accurately filter and refine the structured output state space of the next model, speeding up both learning and inference in the next layer of the cascade. We learn cascades by optimizing a novel convex loss function that controls the trade-off between the filtering efficiency and the accuracy of the cascade, and provide generalization bounds for both accuracy and efficiency. We also extend our approach to intractable models using tree-decomposition ensembles, and provide algorithms and theory for this setting. We evaluate our approach on several large-scale problems, achieving state-of-the-art performance in handwriting recognition and human pose recognition. We find that structured prediction cascades allow tremendous speedups and the use of previously intractable features and models in both settings.
David Weiss, Benjamin Sapp, Ben Taskar
null
1208.3279
null
null
Distance Metric Learning for Kernel Machines
stat.ML cs.LG
Recent work in metric learning has significantly improved the state-of-the-art in k-nearest neighbor classification. Support vector machines (SVM), particularly with RBF kernels, are amongst the most popular classification algorithms that use distance metrics to compare examples. This paper provides an empirical analysis of the efficacy of three of the most popular Mahalanobis metric learning algorithms as pre-processing for SVM training. We show that none of these algorithms generate metrics that lead to particularly satisfying improvements for SVM-RBF classification. As a remedy we introduce support vector metric learning (SVML), a novel algorithm that seamlessly combines the learning of a Mahalanobis metric with the training of the RBF-SVM parameters. We demonstrate the capabilities of SVML on nine benchmark data sets of varying sizes and difficulties. In our study, SVML outperforms all alternative state-of-the-art metric learning algorithms in terms of accuracy and establishes itself as a serious alternative to the standard Euclidean metric with model selection by cross validation.
Zhixiang Xu, Kilian Q. Weinberger, Olivier Chapelle
null
1208.3422
null
null
Efficient Active Learning of Halfspaces: an Aggressive Approach
cs.LG
We study pool-based active learning of half-spaces. We revisit the aggressive approach for active learning in the realizable case, and show that it can be made efficient and practical, while also having theoretical guarantees under reasonable assumptions. We further show, both theoretically and experimentally, that it can be preferable to mellow approaches. Our efficient aggressive active learner of half-spaces has formal approximation guarantees that hold when the pool is separable with a margin. While our analysis is focused on the realizable setting, we show that a simple heuristic allows using the same algorithm successfully for pools with low error as well. We further compare the aggressive approach to the mellow approach, and prove that there are cases in which the aggressive approach results in significantly better label complexity compared to the mellow approach. We demonstrate experimentally that substantial improvements in label complexity can be achieved using the aggressive approach, for both realizable and low-error settings.
Alon Gonen, Sivan Sabato, Shai Shalev-Shwartz
null
1208.3561
null
null
An improvement direction for filter selection techniques using information theory measures and quadratic optimization
cs.LG cs.IT math.IT
Filter selection techniques are known for their simplicity and efficiency. However, this kind of method does not take feature inter-redundancy into consideration. Consequently, redundant features that are not removed remain in the final classification model, yielding lower generalization performance. In this paper, we propose to use a mathematical optimization method that reduces inter-feature redundancy and maximizes the relevance between each feature and the target variable.
Waad Bouaguel and Ghazi Bel Mufti
null
1208.3689
null
null