categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: sequence
cs.AI cs.LG cs.SY
null
1206.3285
null
null
http://arxiv.org/pdf/1206.3285v1
2012-06-13T15:45:04Z
2012-06-13T15:45:04Z
Dyna-Style Planning with Linear Function Approximation and Prioritized Sweeping
We consider the problem of efficiently learning optimal control policies and value functions over large state spaces in an online setting in which estimates must be available after each interaction with the world. This paper develops an explicitly model-based approach extending the Dyna architecture to linear function approximation. Dyna-style planning proceeds by generating imaginary experience from the world model and then applying model-free reinforcement learning algorithms to the imagined state transitions. Our main results are to prove that linear Dyna-style planning converges to a unique solution independent of the generating distribution, under natural conditions. In the policy evaluation setting, we prove that the limit point is the least-squares (LSTD) solution. An implication of our results is that prioritized sweeping can be soundly extended to the linear approximation case, backing up to preceding features rather than to preceding states. We introduce two versions of prioritized sweeping with linear Dyna and briefly illustrate their performance empirically on the Mountain Car and Boyan Chain problems.
[ "Richard S. Sutton, Csaba Szepesvari, Alborz Geramifard, Michael P.\n Bowling", "['Richard S. Sutton' 'Csaba Szepesvari' 'Alborz Geramifard'\n 'Michael P. Bowling']" ]
cs.LG stat.ME stat.ML
null
1206.3287
null
null
http://arxiv.org/pdf/1206.3287v1
2012-06-13T15:45:39Z
2012-06-13T15:45:39Z
Learning the Bayesian Network Structure: Dirichlet Prior versus Data
In the Bayesian approach to structure learning of graphical models, the equivalent sample size (ESS) in the Dirichlet prior over the model parameters was recently shown to have an important effect on the maximum-a-posteriori estimate of the Bayesian network structure. In our first contribution, we theoretically analyze the case of large ESS-values, which complements previous work: among other results, we find that the presence of an edge in a Bayesian network is favoured over its absence even if both the Dirichlet prior and the data imply independence, as long as the conditional empirical distribution is notably different from uniform. In our second contribution, we focus on realistic ESS-values, and provide an analytical approximation to the "optimal" ESS-value in a predictive sense (its accuracy is also validated experimentally): this approximation provides an understanding as to which properties of the data have the main effect determining the "optimal" ESS-value.
[ "['Harald Steck']", "Harald Steck" ]
cs.LG stat.ML
null
1206.3290
null
null
http://arxiv.org/pdf/1206.3290v1
2012-06-13T15:47:16Z
2012-06-13T15:47:16Z
Modelling local and global phenomena with sparse Gaussian processes
Much recent work has concerned sparse approximations to speed up the Gaussian process regression from the unfavorable $O(n^3)$ scaling in computational time to $O(nm^2)$. Thus far, work has concentrated on models with one covariance function. However, in many practical situations additive models with multiple covariance functions may perform better, since the data may contain both long and short length-scale phenomena. The long length-scales can be captured with global sparse approximations, such as fully independent conditional (FIC), and the short length-scales can be modeled naturally by covariance functions with compact support (CS). CS covariance functions lead to naturally sparse covariance matrices, which are computationally cheaper to handle than full covariance matrices. In this paper, we propose a new sparse Gaussian process model with two additive components: FIC for the long length-scales and CS covariance function for the short length-scales. We give theoretical and experimental results and show that under certain conditions the proposed model has the same computational complexity as FIC. We also compare the model performance of the proposed model to additive models approximated by fully and partially independent conditional (PIC). We use real data sets and show that our model outperforms FIC and PIC approximations for data sets with two additive phenomena.
[ "['Jarno Vanhatalo' 'Aki Vehtari']", "Jarno Vanhatalo, Aki Vehtari" ]
cs.LG stat.ML
null
1206.3294
null
null
http://arxiv.org/pdf/1206.3294v1
2012-06-13T15:52:35Z
2012-06-13T15:52:35Z
Flexible Priors for Exemplar-based Clustering
Exemplar-based clustering methods have been shown to produce state-of-the-art results on a number of synthetic and real-world clustering problems. They are appealing because they offer computational benefits over latent-mean models and can handle arbitrary pairwise similarity measures between data points. However, when trying to recover underlying structure in clustering problems, tailored similarity measures are often not enough; we also desire control over the distribution of cluster sizes. Priors such as Dirichlet process priors allow the number of clusters to be unspecified while expressing priors over data partitions. To our knowledge, they have not been applied to exemplar-based models. We show how to incorporate priors, including Dirichlet process priors, into the recently introduced affinity propagation algorithm. We develop an efficient max-product belief propagation algorithm for our new model and demonstrate experimentally how the expanded range of clustering priors allows us to better recover true clusterings in situations where we have some information about the generating process.
[ "Daniel Tarlow, Richard S. Zemel, Brendan J. Frey", "['Daniel Tarlow' 'Richard S. Zemel' 'Brendan J. Frey']" ]
cs.LG stat.ML
null
1206.3297
null
null
http://arxiv.org/pdf/1206.3297v1
2012-06-13T15:56:12Z
2012-06-13T15:56:12Z
Hybrid Variational/Gibbs Collapsed Inference in Topic Models
Variational Bayesian inference and (collapsed) Gibbs sampling are two important classes of inference algorithms for Bayesian networks. Both have their advantages and disadvantages: collapsed Gibbs sampling is unbiased but is also inefficient for large count values and requires averaging over many samples to reduce variance. On the other hand, variational Bayesian inference is efficient and accurate for large count values but suffers from bias for small counts. We propose a hybrid algorithm that combines the best of both worlds: it samples very small counts and applies variational updates to large counts. This hybridization is shown to significantly improve test-set perplexity relative to variational inference at no computational cost.
[ "Max Welling, Yee Whye Teh, Hilbert Kappen", "['Max Welling' 'Yee Whye Teh' 'Hilbert Kappen']" ]
cs.IR cs.LG stat.ML
null
1206.3298
null
null
http://arxiv.org/pdf/1206.3298v2
2015-05-16T22:57:04Z
2012-06-13T15:56:33Z
Continuous Time Dynamic Topic Models
In this paper, we develop the continuous time dynamic topic model (cDTM). The cDTM is a dynamic topic model that uses Brownian motion to model the latent topics through a sequential collection of documents, where a "topic" is a pattern of word use that we expect to evolve over the course of the collection. We derive an efficient variational approximate inference algorithm that takes advantage of the sparsity of observations in text, a property that lets us easily handle many time points. In contrast to the cDTM, the original discrete-time dynamic topic model (dDTM) requires that time be discretized. Moreover, the complexity of variational inference for the dDTM grows quickly as time granularity increases, a drawback which limits fine-grained discretization. We demonstrate the cDTM on two news corpora, reporting both predictive perplexity and the novel task of time stamp prediction.
[ "['Chong Wang' 'David Blei' 'David Heckerman']", "Chong Wang, David Blei, David Heckerman" ]
cs.AI cs.LG
null
1206.3382
null
null
http://arxiv.org/pdf/1206.3382v2
2012-12-19T08:48:44Z
2012-06-15T07:23:28Z
Simple Regret Optimization in Online Planning for Markov Decision Processes
We consider online planning in Markov decision processes (MDPs). In online planning, the agent focuses on its current state only, deliberates about the set of possible policies from that state onwards and, when interrupted, uses the outcome of that exploratory deliberation to choose what action to perform next. The performance of algorithms for online planning is assessed in terms of simple regret, which is the agent's expected performance loss when the chosen action, rather than an optimal one, is followed. To date, state-of-the-art algorithms for online planning in general MDPs are either best effort, or guarantee only polynomial-rate reduction of simple regret over time. Here we introduce a new Monte-Carlo tree search algorithm, BRUE, that guarantees exponential-rate reduction of simple regret and error probability. This algorithm is based on a simple yet non-standard state-space sampling scheme, MCTS2e, in which different parts of each sample are dedicated to different exploratory objectives. Our empirical evaluation shows that BRUE not only provides superior performance guarantees, but is also very effective in practice and compares favorably to the state of the art. We then extend BRUE with a variant of "learning by forgetting." The resulting set of algorithms, BRUE(alpha), generalizes BRUE, improves the exponential factor in the upper bound on its reduction rate, and exhibits even more attractive empirical performance.
[ "Zohar Feldman, Carmel Domshlak", "['Zohar Feldman' 'Carmel Domshlak']" ]
cs.LG q-bio.BM
null
1206.3509
null
null
http://arxiv.org/pdf/1206.3509v1
2012-06-15T16:31:45Z
2012-06-15T16:31:45Z
A Novel Approach for Protein Structure Prediction
The idea of this project is to study the protein structure and sequence relationship using the hidden Markov model and artificial neural network. In this context we have assumed two hidden Markov models. In the first model we have taken protein secondary structures as hidden and protein sequences as observed. In the second model we have taken protein sequences as hidden and protein structures as observed. The efficiencies of both hidden Markov models have been calculated. The results show that the efficiency of the first model is greater than that of the second one. These efficiencies are cross-validated using an artificial neural network. This signifies the importance of protein secondary structures as the main hidden controlling factors due to which we observe a particular amino acid sequence. This also signifies that protein secondary structure is more conserved in comparison to amino acid sequence.
[ "Saurabh Sarkar, Prateek Malhotra, Virender Guman", "['Saurabh Sarkar' 'Prateek Malhotra' 'Virender Guman']" ]
math.OC cs.LG cs.SY
10.1109/CDC.2012.6426587
1206.3582
null
null
http://arxiv.org/abs/1206.3582v1
2012-06-14T07:07:58Z
2012-06-14T07:07:58Z
Decentralized Learning for Multi-player Multi-armed Bandits
We consider the problem of distributed online learning with multiple players in multi-armed bandits (MAB) models. Each player can pick among multiple arms. When a player picks an arm, it gets a reward. We consider both the i.i.d. reward model and the Markovian reward model. In the i.i.d. model each arm is modelled as an i.i.d. process with an unknown distribution and an unknown mean. In the Markovian model, each arm is modelled as a finite, irreducible, aperiodic and reversible Markov chain with an unknown probability transition matrix and stationary distribution. The arms give different rewards to different players. If two players pick the same arm, there is a "collision", and neither of them gets any reward. There is no dedicated control channel for coordination or communication among the players. Any other communication between the users is costly and will add to the regret. We propose an online index-based distributed learning policy called ${\tt dUCB_4}$ algorithm that trades off \textit{exploration v. exploitation} in the right way, and achieves expected regret that grows at most as near-$O(\log^2 T)$. The motivation comes from opportunistic spectrum access by multiple secondary users in cognitive radio networks wherein they must pick among various wireless channels that look different to different users. To the best of our knowledge, this is the first distributed learning algorithm for multi-player MABs.
[ "['Dileep Kalathil' 'Naumaan Nayyar' 'Rahul Jain']", "Dileep Kalathil, Naumaan Nayyar and Rahul Jain" ]
cs.LG q-bio.NC
null
1206.3666
null
null
http://arxiv.org/pdf/1206.3666v1
2012-06-16T13:35:21Z
2012-06-16T13:35:21Z
Unsupervised adaptation of brain machine interface decoders
The performance of neural decoders can degrade over time due to nonstationarities in the relationship between neuronal activity and behavior. In this case, brain-machine interfaces (BMI) require adaptation of their decoders to maintain high performance across time. One way to achieve this is by the use of periodic calibration phases, during which the BMI system (or an external human demonstrator) instructs the user to perform certain movements or behaviors. This approach has two disadvantages: (i) calibration phases interrupt the autonomous operation of the BMI and (ii) between two calibration phases the BMI performance might not be stable but may continuously decrease. A better alternative would be for the BMI decoder to continuously adapt in an unsupervised manner during autonomous BMI operation, i.e. without knowing the movement intentions of the user. In the present article, we present an efficient method for such unsupervised training of BMI systems for continuous movement control. The proposed method utilizes a cost function derived from neuronal recordings, which guides a learning algorithm to evaluate the decoding parameters. We verify the performance of our adaptive method by simulating a BMI user with an optimal feedback control model and its interaction with our adaptive BMI decoder. The simulation results show that the cost function and the algorithm yield fast and precise trajectories towards targets at random orientations on a 2-dimensional computer screen. For initially unknown and non-stationary tuning parameters, our unsupervised method is still able to generate precise trajectories and to keep its performance stable in the long term. The algorithm can optionally also work with neuronal error signals instead of, or in conjunction with, the proposed unsupervised adaptation.
[ "Tayfun G\\\"urel, Carsten Mehring", "['Tayfun Gürel' 'Carsten Mehring']" ]
cs.LG cs.GT stat.ML
null
1206.3713
null
null
http://arxiv.org/pdf/1206.3713v4
2015-05-04T02:04:26Z
2012-06-16T23:20:09Z
Learning the Structure and Parameters of Large-Population Graphical Games from Behavioral Data
We consider learning, from strictly behavioral data, the structure and parameters of linear influence games (LIGs), a class of parametric graphical games introduced by Irfan and Ortiz (2014). LIGs facilitate causal strategic inference (CSI): Making inferences from causal interventions on stable behavior in strategic settings. Applications include the identification of the most influential individuals in large (social) networks. Such tasks can also support policy-making analysis. Motivated by the computational work on LIGs, we cast the learning problem as maximum-likelihood estimation (MLE) of a generative model defined by pure-strategy Nash equilibria (PSNE). Our simple formulation uncovers the fundamental interplay between goodness-of-fit and model complexity: good models capture equilibrium behavior within the data while controlling the true number of equilibria, including those unobserved. We provide a generalization bound establishing the sample complexity for MLE in our framework. We propose several algorithms including convex loss minimization (CLM) and sigmoidal approximations. We prove that the number of exact PSNE in LIGs is small, with high probability; thus, CLM is sound. We illustrate our approach on synthetic data and real-world U.S. congressional voting records. We briefly discuss our learning framework's generality and potential applicability to general graphical games.
[ "['Jean Honorio' 'Luis Ortiz']", "Jean Honorio and Luis Ortiz" ]
cs.CV cs.AI cs.LG
null
1206.3714
null
null
http://arxiv.org/pdf/1206.3714v1
2012-06-16T23:26:38Z
2012-06-16T23:26:38Z
How important are Deformable Parts in the Deformable Parts Model?
The main stated contribution of the Deformable Parts Model (DPM) detector of Felzenszwalb et al. (over the Histogram-of-Oriented-Gradients approach of Dalal and Triggs) is the use of deformable parts. A secondary contribution is the latent discriminative learning. Tertiary is the use of multiple components. A common belief in the vision community (including ours, before this study) is that their ordering of contributions reflects the performance of the detector in practice. However, what we have experimentally found is that the ordering of importance might actually be the reverse. First, we show that by increasing the number of components, and switching the initialization step from their aspect-ratio, left-right flipping heuristics to appearance-based clustering, considerable improvement in performance is obtained. But more intriguingly, we show that with these new components, the part deformations can now be completely switched off, yet obtaining results that are almost on par with the original DPM detector. Finally, we also show initial results for using multiple components on a different problem -- scene classification, suggesting that this idea might have wider applications in addition to object detection.
[ "['Santosh K. Divvala' 'Alexei A. Efros' 'Martial Hebert']", "Santosh K. Divvala and Alexei A. Efros and Martial Hebert" ]
cs.LG stat.ML
null
1206.3721
null
null
http://arxiv.org/pdf/1206.3721v1
2012-06-17T04:40:09Z
2012-06-17T04:40:09Z
Constraint-free Graphical Model with Fast Learning Algorithm
In this paper, we propose a simple, versatile model for learning the structure and parameters of multivariate distributions from a data set. Learning a Markov network from a given data set is not a simple problem, because Markov networks rigorously represent Markov properties, and this rigor imposes complex constraints on the design of the networks. Our proposed model removes these constraints, acquiring important aspects from the information geometry. The proposed parameter- and structure-learning algorithms are simple to execute as they are based solely on local computation at each node. Experiments demonstrate that our algorithms work appropriately.
[ "Kazuya Takabatake and Shotaro Akaho", "['Kazuya Takabatake' 'Shotaro Akaho']" ]
cs.LG stat.ML
null
1206.3881
null
null
http://arxiv.org/pdf/1206.3881v1
2012-06-18T10:33:29Z
2012-06-18T10:33:29Z
DANCo: Dimensionality from Angle and Norm Concentration
In the last decades the estimation of the intrinsic dimensionality of a dataset has gained considerable importance. Despite the great deal of research work devoted to this task, most of the proposed solutions prove to be unreliable when the intrinsic dimensionality of the input dataset is high and the manifold where the points lie is nonlinearly embedded in a higher dimensional space. In this paper we propose a novel robust intrinsic dimensionality estimator that exploits the twofold complementary information conveyed both by the normalized nearest neighbor distances and by the angles computed on pairs of neighboring points, also providing closed forms for the Kullback-Leibler divergences of the respective distributions. Experiments performed on both synthetic and real datasets highlight the robustness and the effectiveness of the proposed algorithm when compared to state-of-the-art methodologies.
[ "Claudio Ceruti and Simone Bassis and Alessandro Rozza and Gabriele\n Lombardi and Elena Casiraghi and Paola Campadelli", "['Claudio Ceruti' 'Simone Bassis' 'Alessandro Rozza' 'Gabriele Lombardi'\n 'Elena Casiraghi' 'Paola Campadelli']" ]
cs.LG cs.CV stat.ML
null
1206.4074
null
null
http://arxiv.org/pdf/1206.4074v3
2013-06-12T19:29:18Z
2012-06-18T21:05:16Z
A Linear Approximation to the chi^2 Kernel with Geometric Convergence
We propose a new analytical approximation to the $\chi^2$ kernel that converges geometrically. The analytical approximation is derived with elementary methods and adapts to the input distribution for optimal convergence rate. Experiments show the new approximation leads to improved performance in image classification and semantic segmentation tasks using a random Fourier feature approximation of the $\exp-\chi^2$ kernel. In addition, out-of-core principal component analysis (PCA) methods are introduced to reduce the dimensionality of the approximation and achieve better performance at the expense of only an additional constant factor in the time complexity. Moreover, when PCA is performed jointly on the training and unlabeled testing data, further performance improvements can be obtained. Experiments conducted on the PASCAL VOC 2010 segmentation and the ImageNet ILSVRC 2010 datasets show statistically significant improvements over alternative approximation methods.
[ "Fuxin Li, Guy Lebanon, Cristian Sminchisescu", "['Fuxin Li' 'Guy Lebanon' 'Cristian Sminchisescu']" ]
cs.LG cs.IR
null
1206.4110
null
null
http://arxiv.org/pdf/1206.4110v1
2012-06-19T02:24:55Z
2012-06-19T02:24:55Z
ConeRANK: Ranking as Learning Generalized Inequalities
We propose a new data mining approach in ranking documents based on the concept of cone-based generalized inequalities between vectors. A partial ordering between two vectors is made with respect to a proper cone and thus learning the preferences is formulated as learning proper cones. A pairwise learning-to-rank algorithm (ConeRank) is proposed to learn a non-negative subspace, formulated as a polyhedral cone, over document-pair differences. The algorithm is regularized by controlling the `volume' of the cone. The experimental studies on the latest and largest ranking dataset LETOR 4.0 show that ConeRank is competitive against other recent ranking approaches.
[ "Truyen T. Tran and Duc Son Pham", "['Truyen T. Tran' 'Duc Son Pham']" ]
cs.LG
null
1206.4169
null
null
http://arxiv.org/pdf/1206.4169v1
2012-06-19T10:26:45Z
2012-06-19T10:26:45Z
Clustered Bandits
We consider a multi-armed bandit setting that is inspired by real-world applications in e-commerce. In our setting, there are a few types of users, each with a specific response to the different arms. When a user enters the system, his type is unknown to the decision maker. The decision maker can either treat each user separately ignoring the previously observed users, or can attempt to take advantage of knowing that only a few types exist and cluster the users according to their response to the arms. We devise algorithms that combine the usual exploration-exploitation tradeoff with clustering of users and demonstrate the value of clustering. In the process of developing algorithms for the clustered setting, we propose and analyze simple algorithms for the setup where a decision maker knows that a user belongs to one of a few types, but does not know which one.
[ "['Loc Bui' 'Ramesh Johari' 'Shie Mannor']", "Loc Bui, Ramesh Johari, Shie Mannor" ]
cs.NA cs.LG
null
1206.4481
null
null
http://arxiv.org/pdf/1206.4481v2
2012-09-10T13:01:47Z
2012-06-20T12:49:48Z
Parsimonious Mahalanobis Kernel for the Classification of High Dimensional Data
The classification of high dimensional data with kernel methods is considered in this article. Exploiting the emptiness property of high dimensional spaces, a kernel based on the Mahalanobis distance is proposed. The computation of the Mahalanobis distance requires the inversion of a covariance matrix. In high dimensional spaces, the estimated covariance matrix is ill-conditioned and its inversion is unstable or impossible. Using a parsimonious statistical model, namely the High Dimensional Discriminant Analysis model, the specific signal and noise subspaces are estimated for each considered class making the inverse of the class specific covariance matrix explicit and stable, leading to the definition of a parsimonious Mahalanobis kernel. An SVM-based framework is used for selecting the hyperparameters of the parsimonious Mahalanobis kernel by optimizing the so-called radius-margin bound. Experimental results on three high dimensional data sets show that the proposed kernel is suitable for classifying high dimensional data, providing better classification accuracies than the conventional Gaussian kernel.
[ "M. Fauvel, A. Villa, J. Chanussot and J. A. Benediktsson", "['M. Fauvel' 'A. Villa' 'J. Chanussot' 'J. A. Benediktsson']" ]
cs.LG stat.ML
null
1206.4560
null
null
http://arxiv.org/pdf/1206.4560v1
2012-06-18T15:11:22Z
2012-06-18T15:11:22Z
Residual Component Analysis: Generalising PCA for more flexible inference in linear-Gaussian models
Probabilistic principal component analysis (PPCA) seeks a low dimensional representation of a data set in the presence of independent spherical Gaussian noise. The maximum likelihood solution for the model is an eigenvalue problem on the sample covariance matrix. In this paper we consider the situation where the data variance is already partially explained by other factors, for example sparse conditional dependencies between the covariates, or temporal correlations leaving some residual variance. We decompose the residual variance into its components through a generalised eigenvalue problem, which we call residual component analysis (RCA). We explore a range of new algorithms that arise from the framework, including one that factorises the covariance of a Gaussian density into a low-rank and a sparse-inverse component. We illustrate the ideas on the recovery of a protein-signaling network, a gene expression time-series data set and the recovery of the human skeleton from motion capture 3-D cloud data.
[ "Alfredo Kalaitzis (University of Sheffield), Neil Lawrence (University\n of Sheffield)", "['Alfredo Kalaitzis' 'Neil Lawrence']" ]
cs.LG stat.ML
null
1206.4599
null
null
http://arxiv.org/pdf/1206.4599v1
2012-06-18T14:39:39Z
2012-06-18T14:39:39Z
A Unified Robust Classification Model
A wide variety of machine learning algorithms such as support vector machine (SVM), minimax probability machine (MPM), and Fisher discriminant analysis (FDA) exist for binary classification. The purpose of this paper is to provide a unified classification model that includes the above models through a robust optimization approach. This unified model has several benefits. One is that the extensions and improvements intended for SVM become applicable to MPM and FDA, and vice versa. Another benefit is to provide theoretical results for the above learning methods at once by dealing with the unified model. We give a statistical interpretation of the unified classification model and propose a non-convex optimization algorithm that can be applied to non-convex variants of existing learning methods.
[ "['Akiko Takeda' 'Hiroyuki Mitsugi' 'Takafumi Kanamori']", "Akiko Takeda (Keio University), Hiroyuki Mitsugi (Keio University),\n Takafumi Kanamori (Nagoya University)" ]
cs.LG stat.ML
null
1206.4600
null
null
http://arxiv.org/pdf/1206.4600v1
2012-06-18T14:40:38Z
2012-06-18T14:40:38Z
Bayesian Nonexhaustive Learning for Online Discovery and Modeling of Emerging Classes
We present a framework for online inference in the presence of a nonexhaustively defined set of classes that incorporates supervised classification with class discovery and modeling. A Dirichlet process prior (DPP) model defined over class distributions ensures that both known and unknown class distributions originate according to a common base distribution. In an attempt to automatically discover potentially interesting class formations, the prior model is coupled with a suitably chosen data model, and sequential Monte Carlo sampling is used to perform online inference. Our research is driven by a biodetection application, where a new class of pathogen may suddenly appear, and the rapid increase in the number of samples originating from this class indicates the onset of an outbreak.
[ "['Murat Dundar' 'Ferit Akova' 'Alan Qi' 'Bartek Rajwa']", "Murat Dundar (IUPUI), Ferit Akova (IUPUI), Alan Qi (Purdue), Bartek\n Rajwa (Purdue)" ]
cs.LG stat.ML
null
1206.4601
null
null
http://arxiv.org/pdf/1206.4601v1
2012-06-18T14:40:55Z
2012-06-18T14:40:55Z
Convex Multitask Learning with Flexible Task Clusters
Traditionally, multitask learning (MTL) assumes that all the tasks are related. This can lead to negative transfer when tasks are indeed incoherent. Recently, a number of approaches have been proposed that alleviate this problem by discovering the underlying task clusters or relationships. However, they are limited to modeling these relationships at the task level, which may be restrictive in some applications. In this paper, we propose a novel MTL formulation that captures task relationships at the feature-level. Depending on the interactions among tasks and features, the proposed method constructs different task clusters for different features, without even needing to pre-specify the number of clusters. Computationally, the proposed formulation is strongly convex, and can be efficiently solved by accelerated proximal methods. Experiments are performed on a number of synthetic and real-world data sets. Under various degrees of task relationships, the accuracy of the proposed method is consistently among the best. Moreover, the feature-specific task clusters obtained agree with the known/plausible task structures of the data.
[ "['Wenliang Zhong' 'James Kwok']", "Wenliang Zhong (HKUST), James Kwok (HKUST)" ]
cs.NA cs.LG stat.ML
null
1206.4602
null
null
http://arxiv.org/pdf/1206.4602v1
2012-06-18T14:41:11Z
2012-06-18T14:41:11Z
Quasi-Newton Methods: A New Direction
Four decades after their invention, quasi-Newton methods are still state of the art in unconstrained numerical optimization. Although not usually interpreted thus, these are learning algorithms that fit a local quadratic approximation to the objective function. We show that many, including the most popular, quasi-Newton methods can be interpreted as approximations of Bayesian linear regression under varying prior assumptions. This new notion elucidates some shortcomings of classical algorithms, and lights the way to a novel nonparametric quasi-Newton method, which is able to make more efficient use of available information at computational cost similar to its predecessors.
[ "['Philipp Hennig' 'Martin Kiefel']", "Philipp Hennig (MPI Intelligent Systems), Martin Kiefel (MPI for\n Intelligent Systems)" ]
cs.LG cs.AI
null
1206.4604
null
null
http://arxiv.org/pdf/1206.4604v1
2012-06-18T14:42:16Z
2012-06-18T14:42:16Z
Learning the Experts for Online Sequence Prediction
Online sequence prediction is the problem of predicting the next element of a sequence given previous elements. This problem has been extensively studied in the context of individual sequence prediction, where no prior assumptions are made on the origin of the sequence. Individual sequence prediction algorithms work quite well for long sequences, where the algorithm has enough time to learn the temporal structure of the sequence. However, they might give poor predictions for short sequences. A possible remedy is to rely on the general model of prediction with expert advice, where the learner has access to a set of $r$ experts, each of which makes its own predictions on the sequence. It is well known that it is possible to predict almost as well as the best expert if the sequence length is on the order of $\log(r)$. But, without firm prior knowledge on the problem, it is not clear how to choose a small set of {\em good} experts. In this paper we describe and analyze a new algorithm that learns a good set of experts using a training set of previously observed sequences. We demonstrate the merits of our approach by applying it on the task of click prediction on the web.
[ "['Elad Eban' 'Aharon Birnbaum' 'Shai Shalev-Shwartz' 'Amir Globerson']", "Elad Eban (Hebrew University), Aharon Birnbaum (Hebrew University),\n Shai Shalev-Shwartz (Hebrew University), Amir Globerson (Hebrew University)" ]
cs.LG cs.AI stat.ML
null
1206.4606
null
null
http://arxiv.org/pdf/1206.4606v1
2012-06-18T14:43:42Z
2012-06-18T14:43:42Z
TrueLabel + Confusions: A Spectrum of Probabilistic Models in Analyzing Multiple Ratings
This paper revisits the problem of analyzing multiple ratings given by different judges. Different from previous work that focuses on distilling the true labels from noisy crowdsourcing ratings, we emphasize gaining diagnostic insights into our in-house well-trained judges. We generalize the well-known Dawid-Skene model (Dawid & Skene, 1979) to a spectrum of probabilistic models under the same "TrueLabel + Confusion" paradigm, and show that our proposed hierarchical Bayesian model, called HybridConfusion, consistently outperforms Dawid-Skene on both synthetic and real-world data sets.
[ "Chao Liu (Tencent Inc.), Yi-Min Wang (Microsoft Research)", "['Chao Liu' 'Yi-Min Wang']" ]
cs.LG stat.ML
null
1206.4607
null
null
http://arxiv.org/pdf/1206.4607v1
2012-06-18T14:44:09Z
2012-06-18T14:44:09Z
Distributed Tree Kernels
In this paper, we propose the distributed tree kernels (DTK) as a novel method to reduce time and space complexity of tree kernels. Using a linear complexity algorithm to compute vectors for trees, we embed feature spaces of tree fragments in low-dimensional spaces where the kernel computation is directly done with dot product. We show that DTKs are faster, correlate with tree kernels, and obtain a statistically similar performance in two natural language processing tasks.
[ "['Fabio Massimo Zanzotto' \"Lorenzo Dell'Arciprete\"]", "Fabio Massimo Zanzotto (University of Rome-Tor Vergata), Lorenzo\n Dell'Arciprete (University of Rome-Tor Vergata)" ]
cs.LG cs.DS cs.NA stat.ML
null
1206.4608
null
null
http://arxiv.org/pdf/1206.4608v1
2012-06-18T14:44:28Z
2012-06-18T14:44:28Z
A Hybrid Algorithm for Convex Semidefinite Optimization
We present a hybrid algorithm for optimizing a convex, smooth function over the cone of positive semidefinite matrices. Our algorithm converges to the global optimal solution and can be used to solve general large-scale semidefinite programs and hence can be readily applied to a variety of machine learning problems. We show experimental results on three machine learning problems (matrix completion, metric learning, and sparse PCA). Our approach outperforms state-of-the-art algorithms.
[ "['Soeren Laue']", "Soeren Laue (Friedrich-Schiller-University)" ]
cs.CV cs.LG stat.ML
null
1206.4609
null
null
http://arxiv.org/pdf/1206.4609v1
2012-06-18T14:45:17Z
2012-06-18T14:45:17Z
On multi-view feature learning
Sparse coding is a common approach to learning local features for object recognition. Recently, there has been an increasing interest in learning features from spatio-temporal, binocular, or other multi-observation data, where the goal is to encode the relationship between images rather than the content of a single image. We provide an analysis of multi-view feature learning, which shows that hidden variables encode transformations by detecting rotation angles in the eigenspaces shared among multiple image warps. Our analysis helps explain recent experimental results showing that transformation-specific features emerge when training complex cell models on videos. Our analysis also shows that transformation-invariant features can emerge as a by-product of learning representations of transformations.
[ "Roland Memisevic (University of Frankfurt)", "['Roland Memisevic']" ]
cs.LG cs.CV stat.ML
null
1206.4610
null
null
http://arxiv.org/pdf/1206.4610v1
2012-06-18T14:45:37Z
2012-06-18T14:45:37Z
Manifold Relevance Determination
In this paper we present a fully Bayesian latent variable model which exploits conditional nonlinear (in)dependence structures to learn an efficient latent representation. The latent space is factorized to represent shared and private information from multiple views of the data. In contrast to previous approaches, we introduce a relaxation to the discrete segmentation and allow for a "softly" shared latent space. Further, Bayesian techniques allow us to automatically estimate the dimensionality of the latent spaces. The model is capable of capturing structure underlying extremely high dimensional spaces. This is illustrated by modelling unprocessed images with tens of thousands of pixels. This also allows us to directly generate novel images from the trained model by sampling from the discovered latent spaces. We also demonstrate the model by prediction of human pose in an ambiguous setting. Our Bayesian framework allows us to perform disambiguation in a principled manner by including latent space priors which incorporate the dynamic nature of the data.
[ "Andreas Damianou (University of Sheffield), Carl Ek (KTH), Michalis\n Titsias (University of Oxford), Neil Lawrence (University of Sheffield)", "['Andreas Damianou' 'Carl Ek' 'Michalis Titsias' 'Neil Lawrence']" ]
cs.LG stat.ML
null
1206.4611
null
null
http://arxiv.org/pdf/1206.4611v1
2012-06-18T15:00:07Z
2012-06-18T15:00:07Z
A Convex Feature Learning Formulation for Latent Task Structure Discovery
This paper considers the multi-task learning problem in the setting where some relevant features may be shared across a few related tasks. Most of the existing methods assume the extent to which the given tasks are related or share a common feature space to be known a priori. In real-world applications however, it is desirable to automatically discover the groups of related tasks that share a feature space. In this paper we aim at searching the exponentially large space of all possible groups of tasks that may share a feature space. The main contribution is a convex formulation that employs a graph-based regularizer and simultaneously discovers a few groups of related tasks, having close-by task parameters, as well as the feature space shared within each group. The regularizer encodes an important structure among the groups of tasks leading to an efficient algorithm for solving it: if there is no feature space under which a group of tasks has close-by task parameters, then there does not exist such a feature space for any of its supersets. An efficient active set algorithm that exploits this simplification and performs a clever search in the exponentially large space is presented. The algorithm is guaranteed to solve the proposed formulation (within some precision) in a time polynomial in the number of groups of related tasks discovered. Empirical results on benchmark datasets show that the proposed formulation achieves good generalization and outperforms state-of-the-art multi-task learning algorithms in some cases.
[ "['Pratik Jawanpuria' 'J. Saketha Nath']", "Pratik Jawanpuria (IIT Bombay), J. Saketha Nath (IIT Bombay)" ]
cs.LG
null
1206.4612
null
null
http://arxiv.org/pdf/1206.4612v1
2012-06-18T15:00:20Z
2012-06-18T15:00:20Z
Exact Soft Confidence-Weighted Learning
In this paper, we propose a new Soft Confidence-Weighted (SCW) online learning scheme, which enables the conventional confidence-weighted learning method to handle non-separable cases. Unlike the previous confidence-weighted learning algorithms, the proposed soft confidence-weighted learning method enjoys all four salient properties: (i) large margin training, (ii) confidence weighting, (iii) capability to handle non-separable data, and (iv) adaptive margin. Our experimental results show that the proposed SCW algorithms significantly outperform the original CW algorithm. When comparing with a variety of state-of-the-art algorithms (including AROW, NAROW and NHERD), we found that SCW generally achieves better or at least comparable predictive accuracy, but enjoys a significant advantage in computational efficiency (i.e., a smaller number of updates and lower time cost).
[ "['Jialei Wang' 'Peilin Zhao' 'Steven C. H. Hoi']", "Jialei Wang (NTU), Peilin Zhao (NTU), Steven C.H. Hoi (NTU)" ]
cs.AI cs.LG stat.ML
null
1206.4613
null
null
http://arxiv.org/pdf/1206.4613v1
2012-06-18T15:00:40Z
2012-06-18T15:00:40Z
Near-Optimal BRL using Optimistic Local Transitions
Model-based Bayesian Reinforcement Learning (BRL) allows a sound formalization of the problem of acting optimally while facing an unknown environment, i.e., avoiding the exploration-exploitation dilemma. However, algorithms explicitly addressing BRL suffer from such a combinatorial explosion that a large body of work relies on heuristic algorithms. This paper introduces BOLT, a simple and (almost) deterministic heuristic algorithm for BRL which is optimistic about the transition function. We analyze BOLT's sample complexity, and show that under certain parameters, the algorithm is near-optimal in the Bayesian sense with high probability. Then, experimental results highlight the key differences of this method compared to previous work.
[ "['Mauricio Araya' 'Olivier Buffet' 'Vincent Thomas']", "Mauricio Araya (LORIA/INRIA), Olivier Buffet (LORIA/INRIA), Vincent\n Thomas (LORIA/INRIA)" ]
cs.LG stat.ML
null
1206.4614
null
null
http://arxiv.org/pdf/1206.4614v1
2012-06-18T15:01:43Z
2012-06-18T15:01:43Z
Information-theoretic Semi-supervised Metric Learning via Entropy Regularization
We propose a general information-theoretic approach called Seraph (SEmi-supervised metRic leArning Paradigm with Hyper-sparsity) for metric learning that does not rely upon the manifold assumption. Given the probability parameterized by a Mahalanobis distance, we maximize the entropy of that probability on labeled data and minimize it on unlabeled data following entropy regularization, which allows the supervised and unsupervised parts to be integrated in a natural and meaningful way. Furthermore, Seraph is regularized by encouraging a low-rank projection induced from the metric. The optimization of Seraph is solved efficiently and stably by an EM-like scheme with the analytical E-Step and convex M-Step. Experiments demonstrate that Seraph compares favorably with many well-known global and local metric learning methods.
[ "Gang Niu (Tokyo Institute of Technology), Bo Dai (Purdue University),\n Makoto Yamada (Tokyo Institute of Technology), Masashi Sugiyama (Tokyo\n Institute of Technology)", "['Gang Niu' 'Bo Dai' 'Makoto Yamada' 'Masashi Sugiyama']" ]
stat.ME cs.LG math.ST stat.TH
null
1206.4615
null
null
http://arxiv.org/pdf/1206.4615v1
2012-06-18T15:01:58Z
2012-06-18T15:01:58Z
Levy Measure Decompositions for the Beta and Gamma Processes
We develop new representations for the Levy measures of the beta and gamma processes. These representations are manifested in terms of an infinite sum of well-behaved (proper) beta and gamma distributions. Further, we demonstrate how these infinite sums may be truncated in practice, and explicitly characterize truncation errors. We also perform an analysis of the characteristics of posterior distributions, based on the proposed decompositions. The decompositions provide new insights into the beta and gamma processes (and their generalizations), and we demonstrate how the proposed representation unifies some properties of the two. This paper is meant to provide a rigorous foundation for and new perspectives on Levy processes, as these are of increasing importance in machine learning.
[ "Yingjian Wang (Duke University), Lawrence Carin (Duke University)", "['Yingjian Wang' 'Lawrence Carin']" ]
stat.AP cs.LG stat.ML
null
1206.4616
null
null
http://arxiv.org/pdf/1206.4616v1
2012-06-18T15:02:12Z
2012-06-18T15:02:12Z
A Hierarchical Dirichlet Process Model with Multiple Levels of Clustering for Human EEG Seizure Modeling
Driven by the multi-level structure of human intracranial electroencephalogram (iEEG) recordings of epileptic seizures, we introduce a new variant of a hierarchical Dirichlet Process---the multi-level clustering hierarchical Dirichlet Process (MLC-HDP)---that simultaneously clusters datasets on multiple levels. Our seizure dataset contains brain activity recorded in typically more than a hundred individual channels for each seizure of each patient. The MLC-HDP model clusters over channels-types, seizure-types, and patient-types simultaneously. We describe this model and its implementation in detail. We also present the results of a simulation study comparing the MLC-HDP to a similar model, the Nested Dirichlet Process and finally demonstrate the MLC-HDP's use in modeling seizures across multiple patients. We find the MLC-HDP's clustering to be comparable to independent human physician clusterings. To our knowledge, the MLC-HDP model is the first in the epilepsy literature capable of clustering seizures within and between patients.
[ "Drausin Wulsin (University of Pennsylvania), Shane Jensen (University\n of Pennsylvania), Brian Litt (University of Pennsylvania)", "['Drausin Wulsin' 'Shane Jensen' 'Brian Litt']" ]
cs.LG cs.AI stat.ML
null
1206.4617
null
null
http://arxiv.org/pdf/1206.4617v1
2012-06-18T15:02:28Z
2012-06-18T15:02:28Z
Continuous Inverse Optimal Control with Locally Optimal Examples
Inverse optimal control, also known as inverse reinforcement learning, is the problem of recovering an unknown reward function in a Markov decision process from expert demonstrations of the optimal policy. We introduce a probabilistic inverse optimal control algorithm that scales gracefully with task dimensionality, and is suitable for large, continuous domains where even computing a full policy is impractical. By using a local approximation of the reward function, our method can also drop the assumption that the demonstrations are globally optimal, requiring only local optimality. This allows it to learn from examples that are unsuitable for prior methods.
[ "Sergey Levine (Stanford University), Vladlen Koltun (Stanford\n University)", "['Sergey Levine' 'Vladlen Koltun']" ]
cs.LG stat.ML
null
1206.4618
null
null
http://arxiv.org/pdf/1206.4618v1
2012-06-18T15:03:10Z
2012-06-18T15:03:10Z
Compact Hyperplane Hashing with Bilinear Functions
Hyperplane hashing aims at rapidly searching nearest points to a hyperplane, and has shown practical impact in scaling up active learning with SVMs. Unfortunately, the existing randomized methods need long hash codes to achieve reasonable search accuracy and thus suffer from reduced search speed and large memory overhead. To this end, this paper proposes a novel hyperplane hashing technique which yields compact hash codes. The key idea is the bilinear form of the proposed hash functions, which leads to higher collision probability than the existing hyperplane hash functions when using random projections. To further increase the performance, we propose a learning based framework in which the bilinear functions are directly learned from the data. This results in short yet discriminative codes, and also boosts the search performance over the random projection based solutions. Large-scale active learning experiments carried out on two datasets with up to one million samples demonstrate the overall superiority of the proposed approach.
[ "Wei Liu (Columbia University), Jun Wang (IBM T. J. Watson Research\n Center), Yadong Mu (Columbia University), Sanjiv Kumar (Google), Shih-Fu\n Chang (Columbia University)", "['Wei Liu' 'Jun Wang' 'Yadong Mu' 'Sanjiv Kumar' 'Shih-Fu Chang']" ]
cs.LG
null
1206.4619
null
null
http://arxiv.org/pdf/1206.4619v1
2012-06-18T15:04:39Z
2012-06-18T15:04:39Z
Inductive Kernel Low-rank Decomposition with Priors: A Generalized Nystrom Method
Low-rank matrix decomposition has gained great popularity recently in scaling up kernel methods to large amounts of data. However, some limitations could prevent them from working effectively in certain domains. For example, many existing approaches are intrinsically unsupervised, which does not incorporate side information (e.g., class labels) to produce task specific decompositions; also, they typically work "transductively", i.e., the factorization does not generalize to new samples, so the complete factorization needs to be recomputed when new samples become available. To solve these problems, in this paper we propose an "inductive"-flavored method for low-rank kernel decomposition with priors. We achieve this by generalizing the Nystr\"om method in a novel way. On the one hand, our approach employs a highly flexible, nonparametric structure that allows us to generalize the low-rank factors to arbitrarily new samples; on the other hand, it has linear time and space complexities, which can be orders of magnitude faster than existing approaches and renders great efficiency in learning a low-rank kernel decomposition. Empirical results demonstrate the efficacy and efficiency of the proposed method.
[ "Kai Zhang (Siemens), Liang Lan (temple university), Jun Liu (Siemens),\n andreas Rauber (TU Wien), Fabian Moerchen (Siemens Corporate Research and\n Technology)", "['Kai Zhang' 'Liang Lan' 'Jun Liu' 'andreas Rauber' 'Fabian Moerchen']" ]
cs.LG stat.ML
null
1206.4620
null
null
http://arxiv.org/pdf/1206.4620v1
2012-06-18T15:04:54Z
2012-06-18T15:04:54Z
Improved Information Gain Estimates for Decision Tree Induction
Ensembles of classification and regression trees remain popular machine learning methods because they define flexible non-parametric models that predict well and are computationally efficient both during training and testing. During induction of decision trees one aims to find predicates that are maximally informative about the prediction target. To select good predicates most approaches estimate an information-theoretic scoring function, the information gain, both for classification and regression problems. We point out that the common estimation procedures are biased and show that by replacing them with improved estimators of the discrete and the differential entropy we can obtain better decision trees. In effect our modifications yield improved predictive performance and are simple to implement in any decision tree code.
[ "['Sebastian Nowozin']", "Sebastian Nowozin (Microsoft Research Cambridge)" ]
cs.LG
null
1206.4621
null
null
http://arxiv.org/pdf/1206.4621v1
2012-06-18T15:05:32Z
2012-06-18T15:05:32Z
Path Integral Policy Improvement with Covariance Matrix Adaptation
There has been a recent focus in reinforcement learning on addressing continuous state and action problems by optimizing parameterized policies. PI2 is a recent example of this approach. It combines a derivation from first principles of stochastic optimal control with tools from statistical estimation theory. In this paper, we consider PI2 as a member of the wider family of methods which share the concept of probability-weighted averaging to iteratively update parameters to optimize a cost function. We compare PI2 to other members of the same family - Cross-Entropy Methods and CMAES - at the conceptual level and in terms of performance. The comparison suggests the derivation of a novel algorithm which we call PI2-CMA for "Path Integral Policy Improvement with Covariance Matrix Adaptation". PI2-CMA's main advantage is that it determines the magnitude of the exploration noise automatically.
[ "Freek Stulp (Ecole Nationale Superieure de Techniques Avancees),\n Olivier Sigaud (Universite Pierre et Marie Curie)", "['Freek Stulp' 'Olivier Sigaud']" ]
cs.LG cs.IR stat.ML
null
1206.4622
null
null
http://arxiv.org/pdf/1206.4622v1
2012-06-18T15:05:52Z
2012-06-18T15:05:52Z
A Graphical Model Formulation of Collaborative Filtering Neighbourhood Methods with Fast Maximum Entropy Training
Item neighbourhood methods for collaborative filtering learn a weighted graph over the set of items, where each item is connected to those it is most similar to. The prediction of a user's rating on an item is then given by the ratings of neighbouring items, weighted by their similarity. This paper presents a new neighbourhood approach which we call item fields, whereby an undirected graphical model is formed over the item graph. The resulting prediction rule is a simple generalization of the classical approaches, which takes into account non-local information in the graph, allowing its best results to be obtained when using drastically fewer edges than other neighbourhood approaches. A fast approximate maximum entropy training method based on the Bethe approximation is presented, which uses a simple gradient ascent procedure. When using precomputed sufficient statistics on the Movielens datasets, our method is faster than maximum likelihood approaches by two orders of magnitude.
[ "['Aaron Defazio' 'Tiberio Caetano']", "Aaron Defazio (ANU), Tiberio Caetano (NICTA and Australian National\n University)" ]
cs.LG stat.ML
null
1206.4623
null
null
http://arxiv.org/pdf/1206.4623v1
2012-06-18T15:06:34Z
2012-06-18T15:06:34Z
On the Size of the Online Kernel Sparsification Dictionary
We analyze the size of the dictionary constructed from online kernel sparsification, using a novel formula that expresses the expected determinant of the kernel Gram matrix in terms of the eigenvalues of the covariance operator. Using this formula, we are able to connect the cardinality of the dictionary with the eigen-decay of the covariance operator. In particular, we show that under certain technical conditions, the size of the dictionary will always grow sub-linearly in the number of data points, and, as a consequence, the kernel linear regressor constructed from the resulting dictionary is consistent.
[ "Yi Sun (IDSIA), Faustino Gomez (IDSIA), Juergen Schmidhuber (IDSIA)", "['Yi Sun' 'Faustino Gomez' 'Juergen Schmidhuber']" ]
cs.LG stat.ML
null
1206.4624
null
null
http://arxiv.org/pdf/1206.4624v1
2012-06-18T15:06:49Z
2012-06-18T15:06:49Z
Robust Multiple Manifolds Structure Learning
We present a robust multiple manifolds structure learning (RMMSL) scheme to robustly estimate data structures under the multiple low intrinsic dimensional manifolds assumption. In the local learning stage, RMMSL efficiently estimates local tangent space by weighted low-rank matrix factorization. In the global learning stage, we propose a robust manifold clustering method based on local structure learning results. The proposed clustering method is designed to get the flattest manifolds clusters by introducing a novel curved-level similarity function. Our approach is evaluated and compared to state-of-the-art methods on synthetic data, handwritten digit images, human motion capture data and motorbike videos. We demonstrate the effectiveness of the proposed approach, which yields higher clustering accuracy, and produces promising results for challenging tasks of human motion segmentation and motion flow learning from videos.
[ "['Dian Gong' 'Xuemei Zhao' 'Gerard Medioni']", "Dian Gong (Univ. of Southern California), Xuemei Zhao (Univ of\n Southern California), Gerard Medioni (University of Southern California)" ]
cs.LG
null
1206.4625
null
null
http://arxiv.org/pdf/1206.4625v1
2012-06-18T15:07:04Z
2012-06-18T15:07:04Z
Optimizing F-measure: A Tale of Two Approaches
F-measures are popular performance metrics, particularly for tasks with imbalanced data sets. Algorithms for learning to maximize F-measures follow two approaches: the empirical utility maximization (EUM) approach learns a classifier having optimal performance on training data, while the decision-theoretic approach learns a probabilistic model and then predicts labels with maximum expected F-measure. In this paper, we investigate the theoretical justifications and connections for these two approaches, and we study the conditions under which one approach is preferable to the other using synthetic and real datasets. Given accurate models, our results suggest that the two approaches are asymptotically equivalent given large training and test sets. Nevertheless, empirically, the EUM approach appears to be more robust against model misspecification, and given a good model, the decision-theoretic approach appears to be better for handling rare classes and a common domain adaptation scenario.
[ "['Ye Nan' 'Kian Ming Chai' 'Wee Sun Lee' 'Hai Leong Chieu']", "Ye Nan (NUS), Kian Ming Chai (DSO National Laboratories), Wee Sun Lee\n (NUS), Hai Leong Chieu (DSO National Laboratories)" ]
cs.CE cs.LG q-fin.PM
null
1206.4626
null
null
http://arxiv.org/pdf/1206.4626v1
2012-06-18T15:07:23Z
2012-06-18T15:07:23Z
On-Line Portfolio Selection with Moving Average Reversion
On-line portfolio selection has attracted increasing interest in the machine learning and AI communities recently. Empirical evidence shows that stocks' high and low prices are temporary and stock price relatives are likely to follow the mean reversion phenomenon. While the existing mean reversion strategies are shown to achieve good empirical performance on many real datasets, they often make the single-period mean reversion assumption, which is not always satisfied in some real datasets, leading to poor performance when the assumption does not hold. To overcome the limitation, this article proposes a multiple-period mean reversion, or so-called Moving Average Reversion (MAR), and a new on-line portfolio selection strategy named "On-Line Moving Average Reversion" (OLMAR), which exploits MAR by applying powerful online learning techniques. From our empirical results, we found that OLMAR can overcome the drawback of existing mean reversion algorithms and achieve significantly better results, especially on the datasets where the existing mean reversion algorithms failed. In addition to superior trading performance, OLMAR also runs extremely fast, further supporting its practical applicability to a wide range of applications.
[ "Bin Li (NTU), Steven C.H. Hoi (NTU)", "['Bin Li' 'Steven C. H. Hoi']" ]
cs.LG stat.ML
null
1206.4627
null
null
http://arxiv.org/pdf/1206.4627v1
2012-06-18T15:07:39Z
2012-06-18T15:07:39Z
Convergence Rates of Biased Stochastic Optimization for Learning Sparse Ising Models
We study the convergence rate of stochastic optimization of exact (NP-hard) objectives, for which only biased estimates of the gradient are available. We motivate this problem in the context of learning the structure and parameters of Ising models. We first provide a convergence-rate analysis of deterministic errors for forward-backward splitting (FBS). We then extend our analysis to biased stochastic errors, by first characterizing a family of samplers and providing a high probability bound that allows understanding not only FBS, but also proximal gradient (PG) methods. We derive some interesting conclusions: FBS requires only a logarithmically increasing number of random samples in order to converge (although at a very low rate); the required number of random samples is the same for the deterministic and the biased stochastic setting for FBS and basic PG; accelerated PG is not guaranteed to converge in the biased stochastic setting.
[ "Jean Honorio (Stony Brook University)", "['Jean Honorio']" ]
cs.LG stat.ML
null
1206.4628
null
null
http://arxiv.org/pdf/1206.4628v1
2012-06-18T15:07:55Z
2012-06-18T15:07:55Z
Robust PCA in High-dimension: A Deterministic Approach
We consider principal component analysis for contaminated data sets in the high-dimensional regime, where the dimensionality of each observation is comparable to, or even larger than, the number of observations. We propose a deterministic high-dimensional robust PCA algorithm which inherits all theoretical properties of its randomized counterpart, i.e., it is tractable, robust to contaminated points, easily kernelizable, asymptotically consistent and achieves maximal robustness -- a breakdown point of 50%. More importantly, the proposed method exhibits significantly better computational efficiency, which makes it suitable for large-scale real applications.
[ "Jiashi Feng (NUS), Huan Xu (NUS), Shuicheng Yan (NUS)", "['Jiashi Feng' 'Huan Xu' 'Shuicheng Yan']" ]
cs.LG
null
1206.4629
null
null
http://arxiv.org/pdf/1206.4629v1
2012-06-18T15:08:22Z
2012-06-18T15:08:22Z
Multiple Kernel Learning from Noisy Labels by Stochastic Programming
We study the problem of multiple kernel learning from noisy labels. This is in contrast to most of the previous studies on multiple kernel learning that mainly focus on developing efficient algorithms and assume perfectly labeled training examples. Directly applying the existing multiple kernel learning algorithms to noisily labeled examples often leads to suboptimal performance due to the incorrect class assignments. We address this challenge by casting multiple kernel learning from noisy labels into a stochastic programming problem, and presenting a minimax formulation. We develop an efficient algorithm for solving the related convex-concave optimization problem with a fast convergence rate of $O(1/T)$ where $T$ is the number of iterations. Empirical studies on UCI data sets verify both the effectiveness of the proposed framework and the efficiency of the proposed optimization algorithm.
[ "['Tianbao Yang' 'Mehrdad Mahdavi' 'Rong Jin' 'Lijun Zhang' 'Yang Zhou']", "Tianbao Yang (Michigan State University), Mehrdad Mahdavi (Michigan\n State University), Rong Jin (Michigan State University), Lijun Zhang\n (Michigan State University), Yang Zhou (Yahoo! Labs)" ]
cs.LG
null
1206.4630
null
null
http://arxiv.org/pdf/1206.4630v1
2012-06-18T15:08:38Z
2012-06-18T15:08:38Z
Efficient Decomposed Learning for Structured Prediction
Structured prediction is the cornerstone of several machine learning applications. Unfortunately, in structured prediction settings with expressive inter-variable interactions, exact inference-based learning algorithms, e.g. Structural SVM, are often intractable. We present a new way, Decomposed Learning (DecL), which performs efficient learning by restricting the inference step to a limited part of the structured spaces. We provide characterizations based on the structure, target parameters, and gold labels, under which DecL is equivalent to exact learning. We then show that in real world settings, where our theoretical assumptions may not completely hold, DecL-based algorithms are significantly more efficient and as accurate as exact learning.
[ "Rajhans Samdani (University of Illinois, U-C), Dan Roth (University of\n Illinois, U-C)", "['Rajhans Samdani' 'Dan Roth']" ]
cs.LG cs.CL cs.IR stat.ME stat.ML
null
1206.4631
null
null
http://arxiv.org/pdf/1206.4631v3
2014-07-28T03:02:39Z
2012-06-18T15:11:38Z
A Poisson convolution model for characterizing topical content with word frequency and exclusivity
An ongoing challenge in the analysis of document collections is how to summarize content in terms of a set of inferred themes that can be interpreted substantively in terms of topics. The current practice of parametrizing the themes in terms of most frequent words limits interpretability by ignoring the differential use of words across topics. We argue that words that are both common and exclusive to a theme are more effective at characterizing topical content. We consider a setting where professional editors have annotated documents with a collection of topic categories, organized into a tree in which leaf nodes correspond to the most specific topics. Each document is annotated with multiple categories, at different levels of the tree. We introduce a hierarchical Poisson convolution model to analyze annotated documents in this setting. The model leverages the structure among categories defined by professional editors to infer a clear semantic description for each topic in terms of words that are both frequent and exclusive. We carry out a large randomized experiment on Amazon Turk to demonstrate that topic summaries based on the FREX score are more interpretable than currently established frequency-based summaries, and that the proposed model produces more efficient estimates of exclusivity than current models. We also develop a parallelized Hamiltonian Monte Carlo sampler that allows the inference to scale to millions of documents.
[ "Edoardo M Airoldi, Jonathan M Bischof", "['Edoardo M Airoldi' 'Jonathan M Bischof']" ]
null
null
1206.4632
null
null
http://arxiv.org/pdf/1206.4632v1
2012-06-18T15:12:01Z
2012-06-18T15:12:01Z
A Complete Analysis of the l_1,p Group-Lasso
The Group-Lasso is a well-known tool for joint regularization in machine learning methods. While the l_{1,2} and the l_{1,infty} version have been studied in detail and efficient algorithms exist, there are still open questions regarding other l_{1,p} variants. We characterize conditions for solutions of the l_{1,p} Group-Lasso for all p-norms with 1 <= p <= infty, and we present a unified active set algorithm. For all p-norms, a highly efficient projected gradient algorithm is presented. This new algorithm enables us to compare the prediction performance of many variants of the Group-Lasso in a multi-task learning setting, where the aim is to solve many learning problems in parallel which are coupled via the Group-Lasso constraint. We conduct large-scale experiments on synthetic data and on two real-world data sets. In accordance with theoretical characterizations of the different norms we observe that the weak-coupling norms with p between 1.5 and 2 consistently outperform the strong-coupling norms with p >> 2.
[ "['Julia Vogt' 'Volker Roth']" ]
cs.LG stat.ML
null
1206.4633
null
null
http://arxiv.org/pdf/1206.4633v1
2012-06-18T15:13:13Z
2012-06-18T15:13:13Z
Fast Bounded Online Gradient Descent Algorithms for Scalable Kernel-Based Online Learning
Kernel-based online learning has often shown state-of-the-art performance for many online learning tasks. It, however, suffers from a major shortcoming: the unbounded number of support vectors, making it non-scalable and unsuitable for applications with large-scale datasets. In this work, we study the problem of bounded kernel-based online learning, which aims to constrain the number of support vectors by a predefined budget. Although several algorithms have been proposed in the literature, they are neither computationally efficient, due to their intensive budget maintenance strategies, nor effective, due to their use of the simple Perceptron algorithm. To overcome these limitations, we propose a framework for bounded kernel-based online learning based on an online gradient descent approach. We propose two efficient bounded online gradient descent (BOGD) algorithms for scalable kernel-based online learning: (i) BOGD, which maintains support vectors using uniform sampling, and (ii) BOGD++, which maintains support vectors using non-uniform sampling. We present a theoretical analysis of the regret bound for both algorithms, and find promising empirical performance in terms of both efficacy and efficiency when comparing them to several well-known algorithms for bounded kernel-based online learning on large-scale datasets.
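As a rough sketch of the budgeted idea (under simplifying assumptions, and not the authors' exact update rules, which also rescale the retained coefficients), the following kernel online gradient descent on the hinge loss evicts a uniformly sampled support vector whenever the budget is exceeded; the class and parameter names are hypothetical.

```python
import numpy as np

def rbf(x, z, gamma=1.0):
    return np.exp(-gamma * np.sum((x - z) ** 2))

class BudgetedKernelOGD:
    """Kernel online gradient descent with a hard budget on support vectors.
    Budget maintenance here is plain uniform-sampling removal."""
    def __init__(self, budget=50, eta=0.1, lam=1e-3, gamma=1.0):
        self.budget, self.eta, self.lam, self.gamma = budget, eta, lam, gamma
        self.sv, self.alpha = [], []

    def predict(self, x):
        return sum(a * rbf(x, v, self.gamma) for a, v in zip(self.alpha, self.sv))

    def update(self, x, y):
        f = self.predict(x)
        # shrink existing coefficients (gradient of the l2 regularizer)
        self.alpha = [(1 - self.eta * self.lam) * a for a in self.alpha]
        if y * f < 1:                        # hinge-loss subgradient step
            self.sv.append(x)
            self.alpha.append(self.eta * y)
            if len(self.sv) > self.budget:   # budget maintenance by uniform sampling
                i = np.random.randint(len(self.sv))
                self.sv.pop(i)
                self.alpha.pop(i)
        return f
```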
[ "Peilin Zhao (NTU), Jialei Wang (NTU), Pengcheng Wu (NTU), Rong Jin\n (MSU), Steven C.H. Hoi (NTU)", "['Peilin Zhao' 'Jialei Wang' 'Pengcheng Wu' 'Rong Jin' 'Steven C. H. Hoi']" ]
cs.LG cs.GR stat.ML
10.1587/transinf.E96.D.1134
1206.4634
null
null
http://arxiv.org/abs/1206.4634v1
2012-06-18T15:14:24Z
2012-06-18T15:14:24Z
Artist Agent: A Reinforcement Learning Approach to Automatic Stroke Generation in Oriental Ink Painting
Oriental ink painting, called Sumi-e, is one of the most appealing painting styles that has attracted artists around the world. Major challenges in computer-based Sumi-e simulation are to abstract complex scene information and draw smooth and natural brush strokes. To automatically find such strokes, we propose to model the brush as a reinforcement learning agent, and learn desired brush-trajectories by maximizing the sum of rewards in the policy search framework. We also provide elaborate design of actions, states, and rewards tailored for a Sumi-e agent. The effectiveness of our proposed approach is demonstrated through simulated Sumi-e experiments.
[ "['Ning Xie' 'Hirotaka Hachiya' 'Masashi Sugiyama']", "Ning Xie (Tokyo Institute of Technology), Hirotaka Hachiya (Tokyo\n Institute of Technology), Masashi Sugiyama (Tokyo Institute of Technology)" ]
cs.LG stat.ML
null
1206.4635
null
null
http://arxiv.org/pdf/1206.4635v1
2012-06-18T15:14:57Z
2012-06-18T15:14:57Z
Deep Mixtures of Factor Analysers
An efficient way to learn deep density models that have many layers of latent variables is to learn one layer at a time using a model that has only one layer of latent variables. After learning each layer, samples from the posterior distributions for that layer are used as training data for learning the next layer. This approach is commonly used with Restricted Boltzmann Machines, which are undirected graphical models with a single hidden layer, but it can also be used with Mixtures of Factor Analysers (MFAs) which are directed graphical models. In this paper, we present a greedy layer-wise learning algorithm for Deep Mixtures of Factor Analysers (DMFAs). Even though a DMFA can be converted to an equivalent shallow MFA by multiplying together the factor loading matrices at different levels, learning and inference are much more efficient in a DMFA and the sharing of each lower-level factor loading matrix by many different higher level MFAs prevents overfitting. We demonstrate empirically that DMFAs learn better density models than both MFAs and two types of Restricted Boltzmann Machine on a wide variety of datasets.
[ "Yichuan Tang (University of Toronto), Ruslan Salakhutdinov (University\n of Toronto), Geoffrey Hinton (University of Toronto)", "['Yichuan Tang' 'Ruslan Salakhutdinov' 'Geoffrey Hinton']" ]
cs.LG cs.AI cs.CV
null
1206.4636
null
null
http://arxiv.org/pdf/1206.4636v1
2012-06-18T15:15:13Z
2012-06-18T15:15:13Z
Modeling Latent Variable Uncertainty for Loss-based Learning
We consider the problem of parameter estimation using weakly supervised datasets, where a training sample consists of the input and a partially specified annotation, which we refer to as the output. The missing information in the annotation is modeled using latent variables. Previous methods overburden a single distribution with two separate tasks: (i) modeling the uncertainty in the latent variables during training; and (ii) making accurate predictions for the output and the latent variables during testing. We propose a novel framework that separates the demands of the two tasks using two distributions: (i) a conditional distribution to model the uncertainty of the latent variables for a given input-output pair; and (ii) a delta distribution to predict the output and the latent variables for a given input. During learning, we encourage agreement between the two distributions by minimizing a loss-based dissimilarity coefficient. Our approach generalizes latent SVM in two important ways: (i) it models the uncertainty over latent variables instead of relying on a pointwise estimate; and (ii) it allows the use of loss functions that depend on latent variables, which greatly increases its applicability. We demonstrate the efficacy of our approach on two challenging problems---object detection and action detection---using publicly available datasets.
[ "['M. Pawan Kumar' 'Ben Packer' 'Daphne Koller']", "M. Pawan Kumar (Ecole Centrale Paris), Ben Packer (Stanford\n University), Daphne Koller (Stanford University)" ]
cs.LG cs.CL stat.ML
null
1206.4637
null
null
http://arxiv.org/pdf/1206.4637v1
2012-06-18T15:15:28Z
2012-06-18T15:15:28Z
Learning to Identify Regular Expressions that Describe Email Campaigns
This paper addresses the problem of inferring a regular expression from a given set of strings that resembles, as closely as possible, the regular expression that a human expert would have written to identify the language. This is motivated by our goal of automating the task of postmasters of an email service who use regular expressions to describe and blacklist email spam campaigns. Training data contains batches of messages and corresponding regular expressions that an expert postmaster feels confident to blacklist. We model this task as a learning problem with structured output spaces and an appropriate loss function, derive a decoder and the resulting optimization problem, and report on a case study conducted with an email service.
[ "['Paul Prasse' 'Christoph Sawade' 'Niels Landwehr' 'Tobias Scheffer']", "Paul Prasse (University of Potsdam), Christoph Sawade (University of\n Potsdam), Niels Landwehr (University of Potsdam), Tobias Scheffer (University\n of Potsdam)" ]
null
null
1206.4638
null
null
http://arxiv.org/pdf/1206.4638v1
2012-06-18T15:16:28Z
2012-06-18T15:16:28Z
Efficient Euclidean Projections onto the Intersection of Norm Balls
Using sparse-inducing norms to learn robust models has received increasing attention from many fields for its attractive properties. Projection-based methods have been widely applied to learning tasks constrained by such norms. As a key building block of these methods, an efficient operator for Euclidean projection onto the intersection of $\ell_1$ and $\ell_{1,q}$ norm balls ($q=2$ or $q=\infty$) is proposed in this paper. We prove that the projection can be reduced to finding the root of an auxiliary function which is piecewise smooth and monotonic. Hence, a bisection algorithm is sufficient to solve the problem. We show that the time complexity of our solution is $O(n + g\log g)$ for $q=2$ and $O(n\log n)$ for $q=\infty$, where $n$ is the dimensionality of the vector to be projected and $g$ is the number of disjoint groups; we confirm this complexity by experimentation. Empirical study reveals that our method achieves significantly better performance than classical methods in terms of running time and memory usage. We further show that embedded with our efficient projection operator, projection-based algorithms can solve regression problems with composite norm constraints more efficiently than other methods and give superior accuracy.
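The root-finding reduction is easiest to see on the plain $\ell_1$ ball, a special case of the operators considered above. The sketch below bisects on the soft-thresholding level of a monotone, piecewise-linear auxiliary function; it is an illustrative helper, not the paper's combined $\ell_1$/$\ell_{1,q}$ operator.

```python
import numpy as np

def project_l1_ball(v, c=1.0, tol=1e-10):
    """Project v onto the l1 ball of radius c by bisecting on the threshold
    theta of soft-thresholding; f(theta) below is monotone decreasing."""
    a = np.abs(v)
    if a.sum() <= c:
        return v.copy()
    f = lambda theta: np.maximum(a - theta, 0.0).sum() - c
    lo, hi = 0.0, a.max()          # f(lo) > 0 and f(hi) = -c < 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    theta = 0.5 * (lo + hi)
    return np.sign(v) * np.maximum(a - theta, 0.0)
```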
[ "['Adams Wei Yu' 'Hao Su' 'Li Fei-Fei']" ]
cs.LG cs.AI
null
1206.4639
null
null
http://arxiv.org/pdf/1206.4639v1
2012-06-18T15:17:49Z
2012-06-18T15:17:49Z
Adaptive Regularization for Weight Matrices
Algorithms for learning distributions over weight-vectors, such as AROW, were recently shown empirically to achieve state-of-the-art performance on various problems, with strong theoretical guarantees. Extending these algorithms to matrix models poses challenges since the number of free parameters in the covariance of the distribution scales as $n^4$ with the dimension $n$ of the matrix, and $n$ tends to be large in real applications. We describe, analyze and experiment with two new algorithms for learning distributions over matrix models. Our first algorithm maintains a diagonal covariance over the parameters and can handle large covariance matrices. The second algorithm factors the covariance to capture inter-feature correlations while keeping the number of parameters linear in the size of the original matrix. We analyze both algorithms in the mistake bound model and show a superior precision performance of our approach over other algorithms in two tasks: retrieving similar images, and ranking similar documents. The factored algorithm is shown to attain a faster convergence rate.
[ "Koby Crammer (The Technion), Gal Chechik (Bar Ilan University and\n Google research)", "['Koby Crammer' 'Gal Chechik']" ]
cs.NA cs.LG stat.ML
null
1206.4640
null
null
http://arxiv.org/pdf/1206.4640v1
2012-06-18T15:18:05Z
2012-06-18T15:18:05Z
Stability of matrix factorization for collaborative filtering
We study the stability, vis-a-vis adversarial noise, of the matrix factorization algorithm for matrix completion. In particular, our results include: (I) we bound the gap between the solution matrix of the factorization method and the ground truth in terms of root mean square error; (II) we treat the matrix factorization as a subspace fitting problem and analyze the difference between the solution subspace and the ground truth; (III) we analyze the prediction error of individual users based on the subspace stability. We apply these results to the problem of collaborative filtering under manipulator attack, which leads to useful insights and guidelines for collaborative filtering system design.
[ "Yu-Xiang Wang (National University of Singapore), Huan Xu (National\n University of Singapore)", "['Yu-Xiang Wang' 'Huan Xu']" ]
cs.LG cs.CV stat.ML
null
1206.4641
null
null
http://arxiv.org/pdf/1206.4641v1
2012-06-18T15:18:20Z
2012-06-18T15:18:20Z
Total Variation and Euler's Elastica for Supervised Learning
In recent years, total variation (TV) and Euler's elastica (EE) have been successfully applied to image processing tasks such as denoising and inpainting. This paper investigates how to extend TV and EE to the supervised learning settings on high dimensional data. The supervised learning problem can be formulated as an energy functional minimization under Tikhonov regularization scheme, where the energy is composed of a squared loss and a total variation smoothing (or Euler's elastica smoothing). Its solution via variational principles leads to an Euler-Lagrange PDE. However, the PDE is always high-dimensional and cannot be directly solved by common methods. Instead, radial basis functions are utilized to approximate the target function, reducing the problem to finding the linear coefficients of basis functions. We apply the proposed methods to supervised learning tasks (including binary classification, multi-class classification, and regression) on benchmark data sets. Extensive experiments have demonstrated promising results of the proposed methods.
[ "['Tong Lin' 'Hanlin Xue' 'Ling Wang' 'Hongbin Zha']", "Tong Lin (Peking University), Hanlin Xue (Peking University), Ling\n Wang (LTCI, Telecom ParisTech, Paris), Hongbin Zha (Peking University)" ]
cs.DS cs.LG stat.ML
null
1206.4642
null
null
http://arxiv.org/pdf/1206.4642v1
2012-06-18T15:18:51Z
2012-06-18T15:18:51Z
Fast Computation of Subpath Kernel for Trees
The kernel method is a potential approach to analyzing structured data such as sequences, trees, and graphs; however, unordered trees have not been investigated extensively. Kimura et al. (2011) proposed a kernel function for unordered trees on the basis of their subpaths, which are vertical substructures of trees responsible for hierarchical information in them. Their kernel exhibits practically good performance in terms of accuracy and speed; however, linear-time computation is not guaranteed theoretically, unlike the case of the other unordered tree kernel proposed by Vishwanathan and Smola (2003). In this paper, we propose a theoretically guaranteed linear-time kernel computation algorithm that is practically fast, and we present an efficient prediction algorithm whose running time depends only on the size of the input tree. Experimental results show that the proposed algorithms are quite efficient in practice.
[ "Daisuke Kimura (The University of Tokyo), Hisashi Kashima (The\n University of Tokyo)", "['Daisuke Kimura' 'Hisashi Kashima']" ]
cs.LG cs.GT cs.SY
null
1206.4643
null
null
http://arxiv.org/pdf/1206.4643v1
2012-06-18T15:19:07Z
2012-06-18T15:19:07Z
Lightning Does Not Strike Twice: Robust MDPs with Coupled Uncertainty
We consider Markov decision processes under parameter uncertainty. Previous studies all restrict to the case that uncertainties among different states are uncoupled, which leads to conservative solutions. In contrast, we introduce an intuitive concept, termed "Lightning Does not Strike Twice," to model coupled uncertain parameters. Specifically, we require that the system can deviate from its nominal parameters only a bounded number of times. We give probabilistic guarantees indicating that this model represents real life situations and devise tractable algorithms for computing optimal control policies using this concept.
[ "['Shie Mannor' 'Ofir Mebel' 'Huan Xu']", "Shie Mannor (Technion), Ofir Mebel (Technion), Huan Xu (National\n University of Singapore)" ]
cs.LG stat.ML
null
1206.4644
null
null
http://arxiv.org/pdf/1206.4644v1
2012-06-18T15:19:22Z
2012-06-18T15:19:22Z
Groupwise Constrained Reconstruction for Subspace Clustering
Reconstruction based subspace clustering methods compute a self reconstruction matrix over the samples and use it for spectral clustering to obtain the final clustering result. Their success largely relies on the assumption that the underlying subspaces are independent, which, however, does not always hold in the applications with increasing number of subspaces. In this paper, we propose a novel reconstruction based subspace clustering model without making the subspace independence assumption. In our model, certain properties of the reconstruction matrix are explicitly characterized using the latent cluster indicators, and the affinity matrix used for spectral clustering can be directly built from the posterior of the latent cluster indicators instead of the reconstruction matrix. Experimental results on both synthetic and real-world datasets show that the proposed model can outperform the state-of-the-art methods.
[ "['Ruijiang Li' 'Bin Li' 'Ke Zhang' 'Cheng Jin' 'Xiangyang Xue']", "Ruijiang Li (Fudan University), Bin Li (University of Technology,\n Sydney), Ke Zhang (Fudan Univ.), Cheng Jin (Fudan University), Xiangyang Xue\n (Fudan University)" ]
cs.LG cs.NA stat.ME stat.ML
null
1206.4645
null
null
http://arxiv.org/pdf/1206.4645v1
2012-06-18T15:19:58Z
2012-06-18T15:19:58Z
Ensemble Methods for Convex Regression with Applications to Geometric Programming Based Circuit Design
Convex regression is a promising area for bridging statistical estimation and deterministic convex optimization. New piecewise linear convex regression methods are fast and scalable, but can have instability when used to approximate constraints or objective functions for optimization. Ensemble methods, like bagging, smearing and random partitioning, can alleviate this problem and maintain the theoretical properties of the underlying estimator. We empirically examine the performance of ensemble methods for prediction and optimization, and then apply them to device modeling and constraint approximation for geometric programming based circuit design.
[ "['Lauren Hannah' 'David Dunson']", "Lauren Hannah (Duke University), David Dunson (Duke University)" ]
cs.LG stat.ML
null
1206.4646
null
null
http://arxiv.org/pdf/1206.4646v1
2012-06-18T15:20:14Z
2012-06-18T15:20:14Z
Partial-Hessian Strategies for Fast Learning of Nonlinear Embeddings
Stochastic neighbor embedding (SNE) and related nonlinear manifold learning algorithms achieve high-quality low-dimensional representations of similarity data, but are notoriously slow to train. We propose a generic formulation of embedding algorithms that includes SNE and other existing algorithms, and study their relation with spectral methods and graph Laplacians. This allows us to define several partial-Hessian optimization strategies, characterize their global and local convergence, and evaluate them empirically. We achieve up to two orders of magnitude speedup over existing training methods with a strategy (which we call the spectral direction) that adds nearly no overhead to the gradient and yet is simple, scalable and applicable to several existing and future embedding algorithms.
[ "Max Vladymyrov (UC Merced), Miguel Carreira-Perpinan (UC Merced)", "['Max Vladymyrov' 'Miguel Carreira-Perpinan']" ]
cs.LG cs.AI cs.IR
null
1206.4647
null
null
http://arxiv.org/pdf/1206.4647v1
2012-06-18T15:22:24Z
2012-06-18T15:22:24Z
Active Learning for Matching Problems
Effective learning of user preferences is critical to easing user burden in various types of matching problems. Equally important is active query selection to further reduce the amount of preference information users must provide. We address the problem of active learning of user preferences for matching problems, introducing a novel method for determining probabilistic matchings, and developing several new active learning strategies that are sensitive to the specific matching objective. Experiments with real-world data sets spanning diverse domains demonstrate that matching-sensitive active learning
[ "Laurent Charlin (University of Toronto), Rich Zemel (University of\n Toronto), Craig Boutilier (University of Toronto)", "['Laurent Charlin' 'Rich Zemel' 'Craig Boutilier']" ]
cs.LG
null
1206.4648
null
null
http://arxiv.org/pdf/1206.4648v1
2012-06-18T15:23:02Z
2012-06-18T15:23:02Z
Two-Manifold Problems with Applications to Nonlinear System Identification
Recently, there has been much interest in spectral approaches to learning manifolds---so-called kernel eigenmap methods. These methods have had some successes, but their applicability is limited because they are not robust to noise. To address this limitation, we look at two-manifold problems, in which we simultaneously reconstruct two related manifolds, each representing a different view of the same data. By solving these interconnected learning problems together, two-manifold algorithms are able to succeed where a non-integrated approach would fail: each view allows us to suppress noise in the other, reducing bias. We propose a class of algorithms for two-manifold problems, based on spectral decomposition of cross-covariance operators in Hilbert space, and discuss when two-manifold problems are useful. Finally, we demonstrate that solving a two-manifold problem can aid in learning a nonlinear dynamical system from limited data.
[ "Byron Boots (Carnegie Mellon University), Geoff Gordon (Carnegie\n Mellon University)", "['Byron Boots' 'Geoff Gordon']" ]
cs.LG cs.CV stat.ML
null
1206.4649
null
null
http://arxiv.org/pdf/1206.4649v1
2012-06-18T15:23:19Z
2012-06-18T15:23:19Z
Learning Efficient Structured Sparse Models
We present a comprehensive framework for structured sparse coding and modeling extending the recent ideas of using learnable fast regressors to approximate exact sparse codes. For this purpose, we develop a novel block-coordinate proximal splitting method for the iterative solution of hierarchical sparse coding problems, and show an efficient feed forward architecture derived from its iteration. This architecture faithfully approximates the exact structured sparse codes with a fraction of the complexity of the standard optimization methods. We also show that by using different training objective functions, learnable sparse encoders are no longer restricted to be mere approximants of the exact sparse code for a pre-given dictionary, as in earlier formulations, but can be rather used as full-featured sparse encoders or even modelers. A simple implementation shows several orders of magnitude speedup compared to the state-of-the-art at minimal performance degradation, making the proposed framework suitable for real time and large-scale applications.
[ "Alex Bronstein (Tel Aviv University), Pablo Sprechmann (University of\n Minnesota), Guillermo Sapiro (University of Minnesota)", "['Alex Bronstein' 'Pablo Sprechmann' 'Guillermo Sapiro']" ]
cs.LG stat.ML
null
1206.4650
null
null
http://arxiv.org/pdf/1206.4650v1
2012-06-18T15:23:37Z
2012-06-18T15:23:37Z
Analysis of Kernel Mean Matching under Covariate Shift
In real supervised learning scenarios, it is not uncommon for the training and test samples to follow different probability distributions, making it necessary to correct the sampling bias. Focusing on a particular covariate shift problem, we derive high probability confidence bounds for the kernel mean matching (KMM) estimator, whose convergence rate turns out to depend on some regularity measure of the regression function and also on some capacity measure of the kernel. By comparing KMM with the natural plug-in estimator, we establish the superiority of the former and hence provide concrete evidence for, and understanding of, the effectiveness of KMM under covariate shift.
[ "Yaoliang Yu (University of Alberta), Csaba Szepesvari (University of\n Alberta)", "['Yaoliang Yu' 'Csaba Szepesvari']" ]
cs.LG cs.CV stat.ML
null
1206.4651
null
null
http://arxiv.org/pdf/1206.4651v1
2012-06-18T15:24:01Z
2012-06-18T15:24:01Z
Is margin preserved after random projection?
Random projections have been applied in many machine learning algorithms. However, whether margin is preserved after random projection is non-trivial and not well studied. In this paper we analyse margin distortion after random projection, and give the conditions of margin preservation for binary classification problems. We also extend our analysis to margin for multiclass problems, and provide theoretical bounds on multiclass margin on the projected data.
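A quick empirical check of the question is easy to set up: compare normalized margins before and after a Gaussian random projection. The helper below is only an illustrative experiment under assumed inputs (a data matrix X, labels y, and a separating direction w), not the paper's theoretical analysis.

```python
import numpy as np

def margins_before_after(X, y, w, k=50, seed=0):
    """Compare normalized margins y_i <w, x_i> / (||w|| ||x_i||) before and
    after projecting both data and separator with a Gaussian matrix R/sqrt(k)."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((X.shape[1], k)) / np.sqrt(k)
    Xp, wp = X @ R, w @ R
    m = y * (X @ w) / (np.linalg.norm(X, axis=1) * np.linalg.norm(w))
    mp = y * (Xp @ wp) / (np.linalg.norm(Xp, axis=1) * np.linalg.norm(wp))
    return m.min(), mp.min()   # minimum margin before vs. after projection
```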
[ "Qinfeng Shi (The University of Adelaide), Chunhua Shen (The University\n of Adelaide), Rhys Hill (The University of Adelaide), Anton van den Hengel\n (the University of Adelaide)", "['Qinfeng Shi' 'Chunhua Shen' 'Rhys Hill' 'Anton van den Hengel']" ]
cs.LG cs.AI
null
1206.4652
null
null
http://arxiv.org/pdf/1206.4652v1
2012-06-18T15:24:31Z
2012-06-18T15:24:31Z
The Most Persistent Soft-Clique in a Set of Sampled Graphs
When searching for characteristic subpatterns in potentially noisy graph data, it appears self-evident that having multiple observations would be better than having just one. However, it turns out that the inconsistencies introduced when different graph instances have different edge sets pose a serious challenge. In this work we address this challenge for the problem of finding maximum weighted cliques. We introduce the concept of the most persistent soft-clique. This is a subset of vertices that 1) is almost fully or at least densely connected, 2) occurs in all or almost all graph instances, and 3) has the maximum weight. We present a measure of clique-ness that essentially counts the number of edges missing to make a subset of vertices into a clique. With this measure, we show that the problem of finding the most persistent soft-clique can be cast either as: a) a max-min two person game optimization problem, or b) a min-min soft margin optimization problem. Both formulations lead to the same solution when using a partial Lagrangian method to solve the optimization problems. By experiments on synthetic data and on real social network data, we show that the proposed method is able to reliably find soft cliques in graph data, even if the data are distorted by random noise or unreliable observations.
[ "['Novi Quadrianto' 'Chao Chen' 'Christoph Lampert']", "Novi Quadrianto (University of Cambridge), Chao Chen (IST Austria),\n Christoph Lampert (IST Austria)" ]
cs.LG cs.CV stat.ML
null
1206.4653
null
null
http://arxiv.org/pdf/1206.4653v1
2012-06-18T15:24:49Z
2012-06-18T15:24:49Z
Dimensionality Reduction by Local Discriminative Gaussians
We present local discriminative Gaussian (LDG) dimensionality reduction, a supervised dimensionality reduction technique for classification. The LDG objective function is an approximation to the leave-one-out training error of a local quadratic discriminant analysis classifier, and thus acts locally to each training point in order to find a mapping where similar data can be discriminated from dissimilar data. While other state-of-the-art linear dimensionality reduction methods require gradient descent or iterative solution approaches, LDG is solved with a single eigen-decomposition. Thus, it scales better for datasets with a large number of feature dimensions or training examples. We also adapt LDG to the transfer learning setting, and show that it achieves good performance when the test data distribution differs from that of the training data.
[ "Nathan Parrish (University of Washington), Maya Gupta (University of\n Washington)", "['Nathan Parrish' 'Maya Gupta']" ]
cs.AI cs.LG stat.ML
null
1206.4654
null
null
http://arxiv.org/pdf/1206.4654v1
2012-06-18T15:25:04Z
2012-06-18T15:25:04Z
A Generalized Loop Correction Method for Approximate Inference in Graphical Models
Belief Propagation (BP) is one of the most popular methods for inference in probabilistic graphical models. BP is guaranteed to return the correct answer for tree structures, but can be incorrect or non-convergent for loopy graphical models. Recently, several new approximate inference algorithms based on cavity distribution have been proposed. These methods can account for the effect of loops by incorporating the dependency between BP messages. Alternatively, region-based approximations (that lead to methods such as Generalized Belief Propagation) improve upon BP by considering interactions within small clusters of variables, thus taking small loops within these clusters into account. This paper introduces an approach, Generalized Loop Correction (GLC), that benefits from both of these types of loop correction. We show how GLC relates to these two families of inference methods, then provide empirical evidence that GLC works effectively in general, and can be significantly more accurate than both correction schemes.
[ "['Siamak Ravanbakhsh' 'Chun-Nam Yu' 'Russell Greiner']", "Siamak Ravanbakhsh (University of Alberta), Chun-Nam Yu (University of\n Alberta), Russell Greiner (University of Alberta)" ]
cs.LG
null
1206.4655
null
null
http://arxiv.org/pdf/1206.4655v1
2012-06-18T15:25:58Z
2012-06-18T15:25:58Z
Modelling transition dynamics in MDPs with RKHS embeddings
We propose a new, nonparametric approach to learning and representing transition dynamics in Markov decision processes (MDPs), which can be combined easily with dynamic programming methods for policy optimisation and value estimation. This approach makes use of a recently developed representation of conditional distributions as \emph{embeddings} in a reproducing kernel Hilbert space (RKHS). Such representations bypass the need for estimating transition probabilities or densities, and apply to any domain on which kernels can be defined. This avoids the need to calculate intractable integrals, since expectations are represented as RKHS inner products whose computation has linear complexity in the number of points used to represent the embedding. We provide guarantees for the proposed applications in MDPs: in the context of a value iteration algorithm, we prove convergence to either the optimal policy, or to the closest projection of the optimal policy in our model class (an RKHS), under reasonable assumptions. In experiments, we investigate a learning task in a typical classical control setting (the under-actuated pendulum), and on a navigation problem where only images from a sensor are observed. For policy optimisation we compare with least-squares policy iteration where a Gaussian process is used for value function estimation. For value estimation we also compare to the NPDP method. Our approach achieves better performance in all experiments.
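A minimal sketch of the underlying conditional mean embedding estimate, assuming Gaussian kernels and a ridge regularizer: from sampled transitions x_i -> x'_i it returns a function estimating E[f(X') | X = x] without fitting transition densities. The helper names are hypothetical, and the full method couples such estimates with dynamic programming, which is omitted here.

```python
import numpy as np

def rbf_gram(A, B, gamma=1.0):
    """Gaussian kernel matrix between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def conditional_expectation_operator(X, X_next, reg=1e-3, gamma=1.0):
    """Estimate E[f(X') | X = x] from sampled transitions using an RKHS
    conditional mean embedding: expectations become weighted sums of f(x'_i)."""
    n = X.shape[0]
    K = rbf_gram(X, X, gamma)
    W = np.linalg.solve(K + reg * n * np.eye(n), np.eye(n))

    def expect(f, x_query):
        k_x = rbf_gram(x_query, X, gamma)   # (m, n) kernel evaluations
        beta = k_x @ W                      # embedding weights per query point
        return beta @ f(X_next)             # plug-in estimate of E[f(X') | x]
    return expect
```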
[ "Steffen Grunewalder (University College London), Guy Lever (University\n College London), Luca Baldassarre (University College London), Massi Pontil\n (University College London), Arthur Gretton (MPI for Intelligent Systems)", "['Steffen Grunewalder' 'Guy Lever' 'Luca Baldassarre' 'Massi Pontil'\n 'Arthur Gretton']" ]
cs.LG cs.AI stat.ML
null
1206.4656
null
null
http://arxiv.org/pdf/1206.4656v1
2012-06-18T15:26:13Z
2012-06-18T15:26:13Z
Machine Learning that Matters
Much of current machine learning (ML) research has lost its connection to problems of import to the larger world of science and society. From this perspective, there exist glaring limitations in the data sets we investigate, the metrics we employ for evaluation, and the degree to which results are communicated back to their originating domains. What changes are needed to how we conduct research to increase the impact that ML has? We present six Impact Challenges to explicitly focus the field's energy and attention, and we discuss existing obstacles that must be addressed. We aim to inspire ongoing discussion and focus on ML that matters.
[ "['Kiri Wagstaff']", "Kiri Wagstaff (Jet Propulsion Laboratory)" ]
cs.LG cs.DS
null
1206.4657
null
null
http://arxiv.org/pdf/1206.4657v1
2012-06-18T15:26:34Z
2012-06-18T15:26:34Z
Projection-free Online Learning
The computational bottleneck in applying online learning to massive data sets is usually the projection step. We present efficient online learning algorithms that eschew projections in favor of much more efficient linear optimization steps using the Frank-Wolfe technique. We obtain a range of regret bounds for online convex optimization, with better bounds for specific cases such as stochastic online smooth convex optimization. Besides the computational advantage, other desirable features of our algorithms are that they are parameter-free in the stochastic case and produce sparse decisions. We apply our algorithms to computationally intensive applications of collaborative filtering, and show the theoretical improvements to be clearly visible on standard datasets.
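A toy sketch of the projection-free idea over the l1 ball: each round performs one linear optimization (which here touches a single coordinate) instead of a projection. This simplified loop only illustrates the mechanism and is not the regret-optimal variant analyzed in the paper.

```python
import numpy as np

def lmo_l1(g, radius=1.0):
    """Linear minimization oracle over the l1 ball: argmin_{||s||_1 <= r} <g, s>."""
    i = np.argmax(np.abs(g))
    s = np.zeros_like(g)
    s[i] = -radius * np.sign(g[i])
    return s

def online_frank_wolfe(grads, radius=1.0):
    """Replay a sequence of observed gradients with Frank-Wolfe style steps:
    one linear optimization per round, no projection ever performed."""
    x = np.zeros_like(grads[0], dtype=float)
    iterates = []
    for t, g in enumerate(grads, start=1):
        s = lmo_l1(g, radius)
        gamma = 2.0 / (t + 1)             # standard step-size schedule
        x = (1 - gamma) * x + gamma * s   # convex combination stays in the ball
        iterates.append(x.copy())
    return iterates
```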
[ "['Elad Hazan' 'Satyen Kale']", "Elad Hazan (Technion), Satyen Kale (IBM T.J. Watson Research Center)" ]
cs.LG stat.ML
null
1206.4658
null
null
http://arxiv.org/pdf/1206.4658v1
2012-06-18T15:27:40Z
2012-06-18T15:27:40Z
Dirichlet Process with Mixed Random Measures: A Nonparametric Topic Model for Labeled Data
We describe a nonparametric topic model for labeled data. The model uses a mixture of random measures (MRM) as a base distribution of the Dirichlet process (DP) of the HDP framework, so we call it the DP-MRM. To model labeled data, we define a DP distributed random measure for each label, and the resulting model generates an unbounded number of topics for each label. We apply DP-MRM on single-labeled and multi-labeled corpora of documents and compare the performance on label prediction with MedLDA, LDA-SVM, and Labeled-LDA. We further enhance the model by incorporating ddCRP and modeling multi-labeled images for image segmentation and object labeling, comparing the performance with nCuts and rddCRP.
[ "Dongwoo Kim (KAIST), Suin Kim (KAIST), Alice Oh (KAIST)", "['Dongwoo Kim' 'Suin Kim' 'Alice Oh']" ]
cs.LG stat.ML
null
1206.4659
null
null
http://arxiv.org/pdf/1206.4659v1
2012-06-18T15:27:56Z
2012-06-18T15:27:56Z
Max-Margin Nonparametric Latent Feature Models for Link Prediction
We present a max-margin nonparametric latent feature model, which unites the ideas of max-margin learning and Bayesian nonparametrics to discover discriminative latent features for link prediction and automatically infer the unknown latent social dimension. By minimizing a hinge-loss using the linear expectation operator, we can perform posterior inference efficiently without dealing with a highly nonlinear link likelihood function; by using a fully-Bayesian formulation, we can avoid tuning regularization constants. Experimental results on real datasets appear to demonstrate the benefits inherited from max-margin learning and fully-Bayesian nonparametric inference.
[ "['Jun Zhu']", "Jun Zhu (Tsinghua University)" ]
cs.LG
null
1206.4660
null
null
http://arxiv.org/pdf/1206.4660v1
2012-06-18T15:28:12Z
2012-06-18T15:28:12Z
Learning with Augmented Features for Heterogeneous Domain Adaptation
We propose a new learning method for heterogeneous domain adaptation (HDA), in which the data from the source domain and the target domain are represented by heterogeneous features with different dimensions. Using two different projection matrices, we first transform the data from two domains into a common subspace in order to measure the similarity between the data from two domains. We then propose two new feature mapping functions to augment the transformed data with their original features and zeros. The existing learning methods (e.g., SVM and SVR) can be readily incorporated with our newly proposed augmented feature representations to effectively utilize the data from both domains for HDA. Using the hinge loss function in SVM as an example, we introduce the detailed objective function in our method called Heterogeneous Feature Augmentation (HFA) for a linear case and also describe its kernelization in order to efficiently cope with the data with very high dimensions. Moreover, we also develop an alternating optimization algorithm to effectively solve the nontrivial optimization problem in our HFA method. Comprehensive experiments on two benchmark datasets clearly demonstrate that HFA outperforms the existing HDA methods.
[ "Lixin Duan (Nanyang Technological University), Dong Xu (Nanyang\n Technological University), Ivor Tsang (Nanyang Technological University)", "['Lixin Duan' 'Dong Xu' 'Ivor Tsang']" ]
cs.LG stat.ML
null
1206.4661
null
null
http://arxiv.org/pdf/1206.4661v1
2012-06-18T15:30:13Z
2012-06-18T15:30:13Z
Predicting accurate probabilities with a ranking loss
In many real-world applications of machine learning classifiers, it is essential to predict the probability of an example belonging to a particular class. This paper proposes a simple technique for predicting probabilities based on optimizing a ranking loss, followed by isotonic regression. This semi-parametric technique offers both good ranking and regression performance, and models a richer set of probability distributions than statistical workhorses such as logistic regression. We provide experimental results that show the effectiveness of this technique on real-world applications of probability prediction.
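The two-stage recipe is easy to emulate with off-the-shelf tools. In the sketch below, a plain logistic scorer stands in for the ranking-loss model, and isotonic regression maps held-out scores to probabilities; all names and the train/calibration split are illustrative assumptions.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.isotonic import IsotonicRegression

def fit_rank_then_calibrate(X_train, y_train, X_cal, y_cal):
    """Train any model whose scores rank well, then calibrate those scores
    into probabilities with isotonic regression on a held-out split."""
    scorer = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    iso = IsotonicRegression(out_of_bounds="clip", y_min=0.0, y_max=1.0)
    iso.fit(scorer.decision_function(X_cal), y_cal)

    def predict_proba(X):
        # monotone mapping from ranking scores to calibrated probabilities
        return iso.predict(scorer.decision_function(X))
    return predict_proba
```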
[ "Aditya Menon (UC San Diego), Xiaoqian Jiang (UC San Diego), Shankar\n Vembu (University of Toronto), Charles Elkan (UC San Diego), Lucila\n Ohno-Machado (UC San Diego)", "['Aditya Menon' 'Xiaoqian Jiang' 'Shankar Vembu' 'Charles Elkan'\n 'Lucila Ohno-Machado']" ]
cs.CR cs.LG cs.MM
null
1206.4662
null
null
http://arxiv.org/pdf/1206.4662v1
2012-06-18T15:30:35Z
2012-06-18T15:30:35Z
Bayesian Watermark Attacks
This paper presents an application of statistical machine learning to the field of watermarking. We propose a new attack model on additive spread-spectrum watermarking systems. The proposed attack is based on Bayesian statistics. We consider the scenario in which a watermark signal is repeatedly embedded in specific, possibly chosen based on a secret message bitstream, segments (signals) of the host data. The host signal can represent a patch of pixels from an image or a video frame. We propose a probabilistic model that infers the embedded message bitstream and watermark signal, directly from the watermarked data, without access to the decoder. We develop an efficient Markov chain Monte Carlo sampler for updating the model parameters from their conjugate full conditional posteriors. We also provide a variational Bayesian solution, which further increases the convergence speed of the algorithm. Experiments with synthetic and real image signals demonstrate that the attack model is able to correctly infer a large part of the message bitstream and obtain a very accurate estimate of the watermark signal.
[ "Ivo Shterev (Duke University), David Dunson (Duke University)", "['Ivo Shterev' 'David Dunson']" ]
cs.LG stat.ML
null
1206.4663
null
null
http://arxiv.org/pdf/1206.4663v1
2012-06-18T15:30:52Z
2012-06-18T15:30:52Z
The Convexity and Design of Composite Multiclass Losses
We consider composite loss functions for multiclass prediction comprising a proper (i.e., Fisher-consistent) loss over probability distributions and an inverse link function. We establish conditions for their (strong) convexity and explore the implications. We also show how the separation of concerns afforded by using this composite representation allows for the design of families of losses with the same Bayes risk.
[ "Mark Reid (The Australian National University and NICTA), Robert\n Williamson (The Australian National University and NICTA), Peng Sun (Tsinghua\n University)", "['Mark Reid' 'Robert Williamson' 'Peng Sun']" ]
cs.LG stat.ML
null
1206.4664
null
null
http://arxiv.org/pdf/1206.4664v1
2012-06-18T15:31:13Z
2012-06-18T15:31:13Z
Tighter Variational Representations of f-Divergences via Restriction to Probability Measures
We show that the variational representations for f-divergences currently used in the literature can be tightened. This has implications to a number of methods recently proposed based on this representation. As an example application we use our tighter representation to derive a general f-divergence estimator based on two i.i.d. samples and derive the dual program for this estimator that performs well empirically. We also point out a connection between our estimator and MMD.
[ "['Avraham Ruderman' 'Mark Reid' 'Dario Garcia-Garcia' 'James Petterson']", "Avraham Ruderman (Australian National University and NICTA), Mark Reid\n (Australian National University and NICTA), Dario Garcia-Garcia (Australian\n National University and NICTA), James Petterson (NICTA)" ]
cs.LG stat.ML
null
1206.4665
null
null
http://arxiv.org/pdf/1206.4665v1
2012-06-18T15:32:05Z
2012-06-18T15:32:05Z
Nonparametric variational inference
Variational methods are widely used for approximate posterior inference. However, their use is typically limited to families of distributions that enjoy particular conjugacy properties. To circumvent this limitation, we propose a family of variational approximations inspired by nonparametric kernel density estimation. The locations of these kernels and their bandwidth are treated as variational parameters and optimized to improve an approximate lower bound on the marginal likelihood of the data. Using multiple kernels allows the approximation to capture multiple modes of the posterior, unlike most other variational approximations. We demonstrate the efficacy of the nonparametric approximation with a hierarchical logistic regression model and a nonlinear matrix factorization model. We obtain predictive performance as good as or better than more specialized variational methods and sample-based approximations. The method is easy to apply to more general graphical models for which standard variational methods are difficult to derive.
[ "Samuel Gershman (Princeton University), Matt Hoffman (Princeton\n University), David Blei (Princeton University)", "['Samuel Gershman' 'Matt Hoffman' 'David Blei']" ]
stat.CO cs.LG stat.ME
null
1206.4666
null
null
http://arxiv.org/pdf/1206.4666v1
2012-06-18T15:32:46Z
2012-06-18T15:32:46Z
A Bayesian Approach to Approximate Joint Diagonalization of Square Matrices
We present a Bayesian scheme for the approximate diagonalisation of several square matrices which are not necessarily symmetric. A Gibbs sampler is derived to simulate samples of the common eigenvectors and the eigenvalues for these matrices. Several synthetic examples are used to illustrate the performance of the proposed Gibbs sampler, and we then provide comparisons to several other joint diagonalization algorithms, which show that the Gibbs sampler achieves state-of-the-art performance on the examples considered. As a byproduct, the output of the Gibbs sampler could be used to estimate the log marginal likelihood; however, we employ the approximation based on the Bayesian information criterion (BIC), which in the synthetic examples considered correctly located the number of common eigenvectors. We then successfully applied the sampler to the source separation problem as well as the common principal component analysis and common spatial pattern analysis problems.
[ "Mingjun Zhong (Dalian University of Tech.), Mark Girolami (University\n College London)", "['Mingjun Zhong' 'Mark Girolami']" ]
cs.LG cs.AI cs.IR
null
1206.4667
null
null
http://arxiv.org/pdf/1206.4667v2
2012-07-18T18:54:06Z
2012-06-18T15:33:05Z
Unachievable Region in Precision-Recall Space and Its Effect on Empirical Evaluation
Precision-recall (PR) curves and the areas under them are widely used to summarize machine learning results, especially for data sets exhibiting class skew. They are often used analogously to ROC curves and the area under ROC curves. It is known that PR curves vary as class skew changes. What was not recognized before this paper is that there is a region of PR space that is completely unachievable, and the size of this region depends only on the skew. This paper precisely characterizes the size of that region and discusses its implications for empirical evaluation methodology in machine learning.
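The lower boundary of the achievable region can be recovered from first principles: at recall r the number of true positives is fixed at r times the number of positives, and precision is minimized when every negative is also predicted positive. The helper below (illustrative code, not from the paper) makes the dependence on class skew explicit.

```python
def min_precision(recall, skew):
    """Lower boundary of achievable PR space: with positive-class fraction pi
    and recall r, precision cannot fall below pi*r / (pi*r + 1 - pi)."""
    assert 0.0 < skew < 1.0 and 0.0 <= recall <= 1.0
    if recall == 0.0:
        return 0.0
    return skew * recall / (skew * recall + 1.0 - skew)

# Example: with 10% positives, precision at recall 1.0 cannot drop below 0.1,
# and at recall 0.5 the floor is roughly 0.053.
print(min_precision(1.0, 0.1), min_precision(0.5, 0.1))
```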
[ "['Kendrick Boyd' 'Vitor Santos Costa' 'Jesse Davis' 'David Page']", "Kendrick Boyd (University of Wisconsin Madison), Vitor Santos Costa\n (University of Porto), Jesse Davis (KU Leuven), David Page (University of\n Wisconsin Madison)" ]
cs.LG cs.DS stat.ML
null
1206.4668
null
null
http://arxiv.org/pdf/1206.4668v1
2012-06-18T15:33:25Z
2012-06-18T15:33:25Z
Approximate Principal Direction Trees
We introduce a new spatial data structure for high dimensional data called the \emph{approximate principal direction tree} (APD tree) that adapts to the intrinsic dimension of the data. Our algorithm ensures vector-quantization accuracy similar to that of computationally-expensive PCA trees with similar time-complexity to that of lower-accuracy RP trees. APD trees use a small number of power-method iterations to find splitting planes for recursively partitioning the data. As such they provide a natural trade-off between the running-time and accuracy achieved by RP and PCA trees. Our theoretical results establish a) strong performance guarantees regardless of the convergence rate of the power-method and b) that $O(\log d)$ iterations suffice to establish the guarantee of PCA trees when the intrinsic dimension is $d$. We demonstrate this trade-off and the efficacy of our data structure on both the CPU and GPU.
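One node split of an APD-style tree can be sketched directly: a few power-method iterations approximate the top principal direction of the centered data, and the points are split at the median projection. The function below is an assumption-laden illustration, not the authors' implementation.

```python
import numpy as np

def apd_split(X, n_power_iters=3, seed=0):
    """Approximate-principal-direction split of one tree node: power iterations
    on the covariance yield a splitting direction, then split at the median."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    v = rng.standard_normal(X.shape[1])
    for _ in range(n_power_iters):
        v = Xc.T @ (Xc @ v)        # one covariance-times-vector product
        v /= np.linalg.norm(v)
    proj = Xc @ v
    left = proj <= np.median(proj)
    return X[left], X[~left], v    # child partitions and the splitting direction
```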
[ "['Mark McCartin-Lim' 'Andrew McGregor' 'Rui Wang']", "Mark McCartin-Lim (University of Massachusetts), Andrew McGregor\n (University of Massachusetts), Rui Wang (University of Massachusetts)" ]
cs.LG stat.ML
null
1206.4669
null
null
http://arxiv.org/pdf/1206.4669v1
2012-06-18T15:34:07Z
2012-06-18T15:34:07Z
Sparse Additive Functional and Kernel CCA
Canonical Correlation Analysis (CCA) is a classical tool for finding correlations among the components of two random vectors. In recent years, CCA has been widely applied to the analysis of genomic data, where it is common for researchers to perform multiple assays on a single set of patient samples. Recent work has proposed sparse variants of CCA to address the high dimensionality of such data. However, classical and sparse CCA are based on linear models, and are thus limited in their ability to find general correlations. In this paper, we present two approaches to high-dimensional nonparametric CCA, building on recent developments in high-dimensional nonparametric regression. We present estimation procedures for both approaches, and analyze their theoretical properties in the high-dimensional setting. We demonstrate the effectiveness of these procedures in discovering nonlinear correlations via extensive simulations, as well as through experiments with genomic data.
[ "Sivaraman Balakrishnan (Carnegie Mellon University), Kriti Puniyani\n (Carnegie Mellon University), John Lafferty (Carnegie Mellon University)", "['Sivaraman Balakrishnan' 'Kriti Puniyani' 'John Lafferty']" ]
cs.IT astro-ph.EP cs.LG math.IT physics.data-an
null
1206.4670
null
null
http://arxiv.org/pdf/1206.4670v1
2012-06-18T15:34:23Z
2012-06-18T15:34:23Z
State-Space Inference for Non-Linear Latent Force Models with Application to Satellite Orbit Prediction
Latent force models (LFMs) are flexible models that combine mechanistic modelling principles (i.e., physical models) with non-parametric data-driven components. Several key applications of LFMs need non-linearities, which results in analytically intractable inference. In this work we show how non-linear LFMs can be represented as non-linear white noise driven state-space models and present an efficient non-linear Kalman filtering and smoothing based method for approximate state and parameter inference. We illustrate the performance of the proposed methodology via two simulated examples, and apply it to a real-world problem of long-term prediction of GPS satellite orbits.
[ "Jouni Hartikainen (Aalto University), Mari Seppanen (Tampere\n University of Technology), Simo Sarkka (Aalto University)", "['Jouni Hartikainen' 'Mari Seppanen' 'Simo Sarkka']" ]
cs.LG stat.ML
null
1206.4671
null
null
http://arxiv.org/pdf/1206.4671v1
2012-06-18T15:35:02Z
2012-06-18T15:35:02Z
Dependent Hierarchical Normalized Random Measures for Dynamic Topic Modeling
We develop dependent hierarchical normalized random measures and apply them to dynamic topic modeling. The dependency arises via superposition, subsampling and point transition on the underlying Poisson processes of these measures. The measures used include normalised generalised Gamma processes that demonstrate power law properties, unlike Dirichlet processes used previously in dynamic topic modeling. Inference for the model includes adapting a recently developed slice sampler to directly manipulate the underlying Poisson process. Experiments performed on news, blogs, academic and Twitter collections demonstrate the technique gives superior perplexity over a number of previous models.
[ "Changyou Chen (ANU & NICTA), Nan Ding (Purdue University), Wray\n Buntine (NICTA)", "['Changyou Chen' 'Nan Ding' 'Wray Buntine']" ]
cs.LG stat.ML
null
1206.4672
null
null
http://arxiv.org/pdf/1206.4672v1
2012-06-18T15:35:20Z
2012-06-18T15:35:20Z
Efficient Active Algorithms for Hierarchical Clustering
Advances in sensing technologies and the growth of the internet have resulted in an explosion in the size of modern datasets, while storage and processing power continue to lag behind. This motivates the need for algorithms that are efficient, both in terms of the number of measurements needed and running time. To combat the challenges associated with large datasets, we propose a general framework for active hierarchical clustering that repeatedly runs an off-the-shelf clustering algorithm on small subsets of the data and comes with guarantees on performance, measurement complexity and runtime complexity. We instantiate this framework with a simple spectral clustering algorithm and provide concrete results on its performance, showing that, under some assumptions, this algorithm recovers all clusters of size Ω(log n) using O(n log^2 n) similarities and runs in O(n log^3 n) time for a dataset of n objects. Through extensive experimentation we also demonstrate that this framework is practically alluring.
[ "Akshay Krishnamurthy (Carnegie Mellon University), Sivaraman\n Balakrishnan (Carnegie Mellon University), Min Xu (Carnegie Mellon\n University), Aarti Singh (Carnegie Mellon University)", "['Akshay Krishnamurthy' 'Sivaraman Balakrishnan' 'Min Xu' 'Aarti Singh']" ]
cs.LG stat.ML
null
1206.4673
null
null
http://arxiv.org/pdf/1206.4673v1
2012-06-18T15:35:38Z
2012-06-18T15:35:38Z
Group Sparse Additive Models
We consider the problem of sparse variable selection in nonparametric additive models, with the prior knowledge of the structure among the covariates to encourage those variables within a group to be selected jointly. Previous works either study the group sparsity in the parametric setting (e.g., group lasso), or address the problem in the non-parametric setting without exploiting the structural information (e.g., sparse additive models). In this paper, we present a new method, called group sparse additive models (GroupSpAM), which can handle group sparsity in additive models. We generalize the l1/l2 norm to Hilbert spaces as the sparsity-inducing penalty in GroupSpAM. Moreover, we derive a novel thresholding condition for identifying the functional sparsity at the group level, and propose an efficient block coordinate descent algorithm for constructing the estimate. We demonstrate by simulation that GroupSpAM substantially outperforms the competing methods in terms of support recovery and prediction accuracy in additive models, and also conduct a comparative experiment on a real breast cancer dataset.
[ "Junming Yin (Carnegie Mellon University), Xi Chen (Carnegie Mellon\n University), Eric Xing (Carnegie Mellon University)", "['Junming Yin' 'Xi Chen' 'Eric Xing']" ]
cs.LG cs.DS stat.ML
null
1206.4674
null
null
http://arxiv.org/pdf/1206.4674v1
2012-06-18T15:36:16Z
2012-06-18T15:36:16Z
Comparison-Based Learning with Rank Nets
We consider the problem of search through comparisons, where a user is presented with two candidate objects and reveals which is closer to her intended target. We study adaptive strategies for finding the target, that require knowledge of rank relationships but not actual distances between objects. We propose a new strategy based on rank nets, and show that for target distributions with a bounded doubling constant, it finds the target in a number of comparisons close to the entropy of the target distribution and, hence, of the optimum. We extend these results to the case of noisy oracles, and compare this strategy to prior art over multiple datasets.
[ "Amin Karbasi (EPFL), Stratis Ioannidis (Technicolor), laurent\n Massoulie (Technicolor)", "['Amin Karbasi' 'Stratis Ioannidis' 'laurent Massoulie']" ]
cs.CR cs.DC cs.LG
null
1206.4675
null
null
http://arxiv.org/pdf/1206.4675v1
2012-06-18T15:36:32Z
2012-06-18T15:36:32Z
Finding Botnets Using Minimal Graph Clusterings
We study the problem of identifying botnets and the IP addresses which they comprise, based on the observation of a fraction of the global email spam traffic. Observed mailing campaigns constitute evidence for joint botnet membership; they are represented by cliques in the graph of all messages. No evidence against an association of nodes is ever available. We reduce the problem of identifying botnets to a problem of finding a minimal clustering of the graph of messages. We directly model the distribution of clusterings given the input graph; this avoids potential errors caused by distributional assumptions of a generative model. We report on a case study in which we evaluate the model by its ability to predict the spam campaign that a given IP address is going to participate in.
[ "Peter Haider (University of Potsdam), Tobias Scheffer (University of\n Potsdam)", "['Peter Haider' 'Tobias Scheffer']" ]
cs.LG cs.CV cs.NA stat.ML
null
1206.4676
null
null
http://arxiv.org/pdf/1206.4676v1
2012-06-18T15:36:49Z
2012-06-18T15:36:49Z
Clustering by Low-Rank Doubly Stochastic Matrix Decomposition
Clustering analysis by nonnegative low-rank approximations has achieved remarkable progress in the past decade. However, most approximation approaches in this direction are still restricted to matrix factorization. We propose a new low-rank learning method to improve the clustering performance, which is beyond matrix factorization. The approximation is based on a two-step bipartite random walk through virtual cluster nodes, where the approximation is formed by only cluster assigning probabilities. Minimizing the approximation error measured by Kullback-Leibler divergence is equivalent to maximizing the likelihood of a discriminative model, which endows our method with a solid probabilistic interpretation. The optimization is implemented by a relaxed Majorization-Minimization algorithm that is advantageous in finding good local minima. Furthermore, we point out that the regularized algorithm with Dirichlet prior only serves as initialization. Experimental results show that the new method has strong performance in clustering purity for various datasets, especially for large-scale manifold data.
[ "['Zhirong Yang' 'Erkki Oja']", "Zhirong Yang (Aalto University), Erkki Oja (Aalto University)" ]
cs.LG stat.ML
null
1206.4677
null
null
http://arxiv.org/pdf/1206.4677v1
2012-06-18T15:37:07Z
2012-06-18T15:37:07Z
Semi-Supervised Learning of Class Balance under Class-Prior Change by Distribution Matching
In real-world classification problems, the class balance in the training dataset does not necessarily reflect that of the test dataset, which can cause significant estimation bias. If the class ratio of the test dataset is known, instance re-weighting or resampling allows systematic bias correction. However, learning the class ratio of the test dataset is challenging when no labeled data is available from the test domain. In this paper, we propose to estimate the class ratio in the test dataset by matching probability distributions of training and test input data. We demonstrate the utility of the proposed approach through experiments.
[ "['Marthinus Du Plessis' 'Masashi Sugiyama']", "Marthinus Du Plessis (Tokyo Institute of Technology), Masashi Sugiyama\n (Tokyo Institute of Technology)" ]
cs.LG stat.ML
null
1206.4678
null
null
http://arxiv.org/pdf/1206.4678v1
2012-06-18T15:37:23Z
2012-06-18T15:37:23Z
Linear Regression with Limited Observation
We consider the most common variants of linear regression, including Ridge, Lasso and Support-vector regression, in a setting where the learner is allowed to observe only a fixed number of attributes of each example at training time. We present simple and efficient algorithms for these problems: to reach a given accuracy, our Lasso and Ridge regression algorithms need the same total number of attributes (up to constants) as full-information algorithms. For Support-vector regression, we require exponentially fewer attributes than the state of the art. In doing so, we resolve an open problem recently posed by Cesa-Bianchi et al. (2010). Experiments show the theoretical bounds to be justified by superior performance compared to the state of the art.
[ "['Elad Hazan' 'Tomer Koren']", "Elad Hazan (Technion), Tomer Koren (Technion)" ]
cs.LG stat.ML
null
1206.4679
null
null
http://arxiv.org/pdf/1206.4679v1
2012-06-18T15:37:59Z
2012-06-18T15:37:59Z
Factorized Asymptotic Bayesian Hidden Markov Models
This paper addresses the issue of model selection for hidden Markov models (HMMs). We generalize factorized asymptotic Bayesian inference (FAB), which has recently been developed for model selection on independent hidden variables (i.e., mixture models), to time-dependent hidden variables. As with FAB in mixture models, FAB for HMMs is derived as an iterative lower bound maximization algorithm of a factorized information criterion (FIC). It inherits, from FAB for mixture models, several desirable properties for learning HMMs, such as asymptotic consistency of FIC with the marginal log-likelihood, a shrinkage effect for hidden state selection, and a monotonic increase of the lower FIC bound through the iterative optimization. Further, it has no tunable hyper-parameters, and thus its model selection process can be fully automated. Experimental results show that FAB outperforms state-of-the-art variational Bayesian and non-parametric Bayesian HMMs in terms of model selection accuracy and computational efficiency.
[ "Ryohei Fujimaki (NEC Laboratories America), Kohei Hayashi (Nara\n Institute of Science and Technology)", "['Ryohei Fujimaki' 'Kohei Hayashi']" ]
cs.LG math.ST stat.TH
null
1206.4680
null
null
http://arxiv.org/pdf/1206.4680v1
2012-06-18T15:38:18Z
2012-06-18T15:38:18Z
Fast Prediction of New Feature Utility
We study the new feature utility prediction problem: statistically testing whether adding a new feature to the data representation can improve predictive accuracy on a supervised learning task. In many applications, identifying new informative features is the primary pathway for improving performance. However, evaluating every potential feature by re-training the predictor with it can be costly. The paper describes an efficient, learner-independent technique for estimating new feature utility without re-training, based on the current predictor's outputs. The method is obtained by deriving a connection between loss reduction potential and the new feature's correlation with the loss gradient of the current predictor. This leads to a simple yet powerful hypothesis testing procedure, for which we prove consistency. Our theoretical analysis is accompanied by empirical evaluation on standard benchmarks and a large-scale industrial dataset.
[ "Hoyt Koepke (University of Washington), Mikhail Bilenko (Microsoft\n Research)", "['Hoyt Koepke' 'Mikhail Bilenko']" ]
cs.LG stat.ML
null
1206.4681
null
null
http://arxiv.org/pdf/1206.4681v1
2012-06-18T15:40:11Z
2012-06-18T15:40:11Z
LPQP for MAP: Putting LP Solvers to Better Use
MAP inference for general energy functions remains a challenging problem. While most efforts are channeled towards improving the linear programming (LP) based relaxation, this work is motivated by the quadratic programming (QP) relaxation. We propose a novel MAP relaxation that penalizes the Kullback-Leibler divergence between the LP pairwise auxiliary variables and the QP-equivalent terms given by the product of the unaries. We develop two efficient algorithms based on variants of this relaxation. The algorithms minimize the non-convex objective using belief propagation and dual decomposition as building blocks. Experiments on synthetic and real-world data show that the solutions returned by our algorithms substantially improve over the LP relaxation.
[ "Patrick Pletscher (ETH Zurich), Sharon Wulff (ETH Zurich)", "['Patrick Pletscher' 'Sharon Wulff']" ]