categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: sequence
cs.SI cs.LG physics.soc-ph stat.ML
null
1210.4860
null
null
http://arxiv.org/pdf/1210.4860v1
2012-10-16T17:38:22Z
2012-10-16T17:38:22Z
Spectral Estimation of Conditional Random Graph Models for Large-Scale Network Data
Generative models for graphs have typically committed to strong prior assumptions concerning the form of the modeled distributions. Moreover, the vast majority of currently available models are either only suitable for characterizing some particular network properties (such as degree distribution or clustering coefficient), or they are aimed at estimating joint probability distributions, which is often intractable in large-scale networks. In this paper, we first propose a novel network statistic, based on the Laplacian spectrum of graphs, which allows one to dispense with any parametric assumption concerning the modeled network properties. Second, we use the defined statistic to develop the Fiedler random graph model, switching the focus from the estimation of joint probability distributions to a more tractable conditional estimation setting. After analyzing the dependence structure characterizing Fiedler random graphs, we evaluate them experimentally in edge prediction over several real-world networks, showing that they reach a much higher prediction accuracy than various alternative statistical models.
[ "Antonino Freno, Mikaela Keller, Gemma C. Garriga, Marc Tommasi", "['Antonino Freno' 'Mikaela Keller' 'Gemma C. Garriga' 'Marc Tommasi']" ]
cs.LG stat.ML
null
1210.4862
null
null
http://arxiv.org/pdf/1210.4862v1
2012-10-16T17:38:45Z
2012-10-16T17:38:45Z
Sample-efficient Nonstationary Policy Evaluation for Contextual Bandits
We present and prove properties of a new offline policy evaluator for an exploration learning setting which is superior to previous evaluators. In particular, it simultaneously and correctly incorporates techniques from importance weighting, doubly robust evaluation, and nonstationary policy evaluation approaches. In addition, our approach allows generating longer histories by careful control of a bias-variance tradeoff, and further decreases variance by incorporating information about randomness of the target policy. Empirical evidence from synthetic and real-world exploration learning problems shows the new evaluator successfully unifies previous approaches and uses information an order of magnitude more efficiently.
[ "Miroslav Dudik, Dumitru Erhan, John Langford, Lihong Li", "['Miroslav Dudik' 'Dumitru Erhan' 'John Langford' 'Lihong Li']" ]
cs.LG stat.ML
null
1210.4867
null
null
http://arxiv.org/pdf/1210.4867v1
2012-10-16T17:39:37Z
2012-10-16T17:39:37Z
Lifted Relational Variational Inference
Hybrid continuous-discrete models naturally represent many real-world applications in robotics, finance, and environmental engineering. Inference with large-scale models is challenging because relational structures deteriorate rapidly during inference with observations. The main contribution of this paper is an efficient relational variational inference algorithm that factors large-scale probability models into simpler variational models, composed of mixtures of iid (Bernoulli) random variables. The algorithm takes probabilistic relational models of large-scale hybrid systems and converts them to close-to-optimal variational models. Then, it efficiently calculates marginal probabilities on the variational models by using latent (or lifted) variable elimination or lifted stochastic sampling. This inference is unique because it maintains the relational structure upon individual observations and during inference steps.
[ "['Jaesik Choi' 'Eyal Amir']", "Jaesik Choi, Eyal Amir" ]
cs.LG cs.IR stat.ML
null
1210.4869
null
null
http://arxiv.org/pdf/1210.4869v1
2012-10-16T17:40:52Z
2012-10-16T17:40:52Z
Response Aware Model-Based Collaborative Filtering
Previous work on recommender systems has mainly focused on fitting the ratings provided by users. However, the response patterns, i.e., the fact that some items are rated while others are not, are generally ignored. We argue that failing to account for such response patterns can lead to biased parameter estimation and sub-optimal model performance. Although several pieces of work have tried to model users' response patterns, they lack the effectiveness and interpretability of the successful matrix factorization approaches to collaborative filtering. To bridge the gap, in this paper, we unify explicit response models and probabilistic matrix factorization (PMF) to establish the Response Aware Probabilistic Matrix Factorization (RAPMF) framework. We show that RAPMF subsumes PMF as a special case. Empirically, we demonstrate the merits of RAPMF from various aspects.
[ "Guang Ling, Haiqin Yang, Michael R. Lyu, Irwin King", "['Guang Ling' 'Haiqin Yang' 'Michael R. Lyu' 'Irwin King']" ]
cs.AI cs.LG
null
1210.4870
null
null
http://arxiv.org/pdf/1210.4870v1
2012-10-16T17:41:19Z
2012-10-16T17:41:19Z
Crowdsourcing Control: Moving Beyond Multiple Choice
To ensure quality results from crowdsourced tasks, requesters often aggregate worker responses and use one of a plethora of strategies to infer the correct answer from the set of noisy responses. However, all current models assume prior knowledge of all possible outcomes of the task. While not an unreasonable assumption for tasks that can be posited as multiple-choice questions (e.g. n-ary classification), we observe that many tasks do not naturally fit this paradigm, but instead demand a free-response formulation where the outcome space is of infinite size (e.g. audio transcription). We model such tasks with a novel probabilistic graphical model, and design and implement LazySusan, a decision-theoretic controller that dynamically requests responses as necessary in order to infer answers to these tasks. We also design an EM algorithm to jointly learn the parameters of our model while inferring the correct answers to multiple tasks at a time. Live experiments on Amazon Mechanical Turk demonstrate the superiority of LazySusan at solving SAT Math questions, eliminating 83.2% of the error and achieving greater net utility compared to the state-of-the-art strategy, majority-voting. We also show in live experiments that our EM algorithm outperforms majority-voting on a visualization task that we design.
[ "Christopher H. Lin, Mausam, Daniel Weld", "['Christopher H. Lin' 'Mausam' 'Daniel Weld']" ]
cs.LG cs.CL cs.IR stat.ML
null
1210.4871
null
null
http://arxiv.org/pdf/1210.4871v1
2012-10-16T17:41:30Z
2012-10-16T17:41:30Z
Learning Mixtures of Submodular Shells with Application to Document Summarization
We introduce a method to learn a mixture of submodular "shells" in a large-margin setting. A submodular shell is an abstract submodular function that can be instantiated with a ground set and a set of parameters to produce a submodular function. A mixture of such shells can then also be so instantiated to produce a more complex submodular function. What our algorithm learns are the mixture weights over such shells. We provide a risk bound guarantee when learning in a large-margin structured-prediction setting using a projected subgradient method when only approximate submodular optimization is possible (such as with submodular function maximization). We apply this method to the problem of multi-document summarization and produce the best results reported so far on the widely used NIST DUC-05 through DUC-07 document summarization corpora.
[ "Hui Lin, Jeff A. Bilmes", "['Hui Lin' 'Jeff A. Bilmes']" ]
cs.LG cs.CV stat.ML
null
1210.4872
null
null
http://arxiv.org/pdf/1210.4872v1
2012-10-16T17:41:42Z
2012-10-16T17:41:42Z
Nested Dictionary Learning for Hierarchical Organization of Imagery and Text
A tree-based dictionary learning model is developed for joint analysis of imagery and associated text. The dictionary learning may be applied directly to the imagery from patches, or to general feature vectors extracted from patches or superpixels (using any existing method for image feature extraction). Each image is associated with a path through the tree (from root to a leaf), and each of the multiple patches in a given image is associated with one node in that path. Nodes near the tree root are shared between multiple paths, representing image characteristics that are common among different types of images. Moving toward the leaves, nodes become specialized, representing details in image classes. If available, words (text) are also jointly modeled, with a path-dependent probability over words. The tree structure is inferred via a nested Dirichlet process, and a retrospective stick-breaking sampler is used to infer the tree depth and width.
[ "Lingbo Li, XianXing Zhang, Mingyuan Zhou, Lawrence Carin", "['Lingbo Li' 'XianXing Zhang' 'Mingyuan Zhou' 'Lawrence Carin']" ]
cs.LG stat.ML
null
1210.4876
null
null
http://arxiv.org/pdf/1210.4876v1
2012-10-16T17:43:04Z
2012-10-16T17:43:04Z
Active Imitation Learning via Reduction to I.I.D. Active Learning
In standard passive imitation learning, the goal is to learn a target policy by passively observing full execution trajectories of it. Unfortunately, generating such trajectories can require substantial expert effort and be impractical in some cases. In this paper, we consider active imitation learning with the goal of reducing this effort by querying the expert about the desired action at individual states, which are selected based on answers to past queries and the learner's interactions with an environment simulator. We introduce a new approach based on reducing active imitation learning to i.i.d. active learning, which can leverage progress in the i.i.d. setting. Our first contribution is to analyze reductions for both non-stationary and stationary policies, showing that the label complexity (number of queries) of active imitation learning can be substantially less than for passive learning. Our second contribution is to introduce a practical algorithm inspired by the reductions, which is shown to be highly effective in four test domains compared to a number of alternatives.
[ "Kshitij Judah, Alan Fern, Thomas G. Dietterich", "['Kshitij Judah' 'Alan Fern' 'Thomas G. Dietterich']" ]
cs.AI cs.GT cs.LG
null
1210.4880
null
null
http://arxiv.org/pdf/1210.4880v1
2012-10-16T17:43:47Z
2012-10-16T17:43:47Z
Inferring Strategies from Limited Reconnaissance in Real-time Strategy Games
In typical real-time strategy (RTS) games, enemy units are visible only when they are within sight range of a friendly unit. Knowledge of an opponent's disposition is limited to what can be observed through scouting. Information is costly, since units dedicated to scouting are unavailable for other purposes, and the enemy will resist scouting attempts. It is important to infer as much as possible about the opponent's current and future strategy from the available observations. We present a dynamic Bayes net model of strategies in the RTS game Starcraft that combines a generative model of how strategies relate to observable quantities with a principled framework for incorporating evidence gained via scouting. We demonstrate the model's ability to infer unobserved aspects of the game from realistic observations.
[ "Jesse Hostetler, Ethan W. Dereszynski, Thomas G. Dietterich, Alan Fern", "['Jesse Hostetler' 'Ethan W. Dereszynski' 'Thomas G. Dietterich'\n 'Alan Fern']" ]
cs.LG stat.ML
null
1210.4881
null
null
http://arxiv.org/pdf/1210.4881v1
2012-10-16T17:43:59Z
2012-10-16T17:43:59Z
Tightening Fractional Covering Upper Bounds on the Partition Function for High-Order Region Graphs
In this paper we present a new approach for tightening upper bounds on the partition function. Our upper bounds are based on fractional covering bounds on the entropy function, and result in a concave program to compute these bounds and a convex program to tighten them. To solve these programs effectively for general region graphs, we utilize the entropy barrier method, decomposing the original programs via their dual programs and solving them with a dual block optimization scheme. The entropy barrier method provides an elegant framework for generalizing the message-passing scheme to high-order region graphs, as well as for solving the dual block steps in closed form. This is key to computational tractability for large problems with thousands of regions.
[ "['Tamir Hazan' 'Jian Peng' 'Amnon Shashua']", "Tamir Hazan, Jian Peng, Amnon Shashua" ]
cs.LG cs.NA stat.ML
null
1210.4883
null
null
http://arxiv.org/pdf/1210.4883v1
2012-10-16T17:45:11Z
2012-10-16T17:45:11Z
A Model-Based Approach to Rounding in Spectral Clustering
In spectral clustering, one defines a similarity matrix for a collection of data points, transforms the matrix to get the Laplacian matrix, finds the eigenvectors of the Laplacian matrix, and obtains a partition of the data using the leading eigenvectors. The last step is sometimes referred to as rounding, where one needs to decide how many leading eigenvectors to use, to determine the number of clusters, and to partition the data points. In this paper, we propose a novel method for rounding. The method differs from previous methods in three ways. First, we relax the assumption that the number of clusters equals the number of eigenvectors used. Second, when deciding the number of leading eigenvectors to use, we not only rely on information contained in the leading eigenvectors themselves, but also use subsequent eigenvectors. Third, our method is model-based and solves all three subproblems of rounding using a class of graphical models called latent tree models. We evaluate our method on both synthetic and real-world data. The results show that our method works correctly in the ideal case where between-cluster similarity is 0, and degrades gracefully as one moves away from the ideal case.
[ "Leonard K. M. Poon, April H. Liu, Tengfei Liu, Nevin Lianwen Zhang", "['Leonard K. M. Poon' 'April H. Liu' 'Tengfei Liu' 'Nevin Lianwen Zhang']" ]
cs.LG stat.ML
null
1210.4884
null
null
http://arxiv.org/pdf/1210.4884v1
2012-10-16T17:45:30Z
2012-10-16T17:45:30Z
A Spectral Algorithm for Latent Junction Trees
Latent variable models are an elegant framework for capturing rich probabilistic dependencies in many applications. However, current approaches typically parametrize these models using conditional probability tables, and learning relies predominantly on local search heuristics such as Expectation Maximization. Using tensor algebra, we propose an alternative parameterization of latent variable models (where the model structures are junction trees) that still allows for computation of marginals among observed variables. While this novel representation leads to a moderate increase in the number of parameters for junction trees of low treewidth, it lets us design a local-minimum-free algorithm for learning this parameterization. The main computation of the algorithm involves only tensor operations and SVDs which can be orders of magnitude faster than EM algorithms for large datasets. To our knowledge, this is the first provably consistent parameter learning technique for a large class of low-treewidth latent graphical models beyond trees. We demonstrate the advantages of our method on synthetic and real datasets.
[ "Ankur P. Parikh, Le Song, Mariya Ishteva, Gabi Teodoru, Eric P. Xing", "['Ankur P. Parikh' 'Le Song' 'Mariya Ishteva' 'Gabi Teodoru'\n 'Eric P. Xing']" ]
cs.LG cs.AI stat.ML
null
1210.4887
null
null
http://arxiv.org/pdf/1210.4887v1
2012-10-16T17:46:07Z
2012-10-16T17:46:07Z
Hilbert Space Embeddings of POMDPs
A nonparametric approach for policy learning for POMDPs is proposed. The approach represents distributions over the states, observations, and actions as embeddings in feature spaces, which are reproducing kernel Hilbert spaces. Distributions over states given the observations are obtained by applying the kernel Bayes' rule to these distribution embeddings. Policies and value functions are defined on the feature space over states, which leads to a feature space expression for the Bellman equation. Value iteration may then be used to estimate the optimal value function and associated policy. Experimental results confirm that the correct policy is learned using the feature space representation.
[ "['Yu Nishiyama' 'Abdeslam Boularias' 'Arthur Gretton' 'Kenji Fukumizu']", "Yu Nishiyama, Abdeslam Boularias, Arthur Gretton, Kenji Fukumizu" ]
cs.LG cs.AI stat.ML
null
1210.4888
null
null
http://arxiv.org/pdf/1210.4888v1
2012-10-16T17:46:17Z
2012-10-16T17:46:17Z
Local Structure Discovery in Bayesian Networks
Learning a Bayesian network structure from data is an NP-hard problem and thus exact algorithms are feasible only for small data sets. Therefore, network structures for larger networks are usually learned with various heuristics. Another approach to scaling up the structure learning is local learning. In local learning, the modeler has one or more target variables that are of special interest; he wants to learn the structure near the target variables and is not interested in the rest of the variables. In this paper, we present a score-based local learning algorithm called SLL. We conjecture that our algorithm is theoretically sound in the sense that it is optimal in the limit of large sample size. Empirical results suggest that SLL is competitive when compared to the constraint-based HITON algorithm. We also study the prospects of constructing the network structure for the whole node set based on local results by presenting two algorithms and comparing them to several heuristics.
[ "Teppo Niinimaki, Pekka Parviainen", "['Teppo Niinimaki' 'Pekka Parviainen']" ]
cs.LG cs.AI stat.ML
null
1210.4889
null
null
http://arxiv.org/pdf/1210.4889v1
2012-10-16T17:46:26Z
2012-10-16T17:46:26Z
Learning STRIPS Operators from Noisy and Incomplete Observations
Agents learning to act autonomously in real-world domains must acquire a model of the dynamics of the domain in which they operate. Learning domain dynamics can be challenging, especially where an agent only has partial access to the world state, and/or noisy external sensors. Even in standard STRIPS domains, existing approaches cannot learn from noisy, incomplete observations typical of real-world domains. We propose a method which learns STRIPS action models in such domains, by decomposing the problem into first learning a transition function between states in the form of a set of classifiers, and then deriving explicit STRIPS rules from the classifiers' parameters. We evaluate our approach on simulated standard planning domains from the International Planning Competition, and show that it learns useful domain descriptions from noisy, incomplete observations.
[ "['Kira Mourao' 'Luke S. Zettlemoyer' 'Ronald P. A. Petrick'\n 'Mark Steedman']", "Kira Mourao, Luke S. Zettlemoyer, Ronald P. A. Petrick, Mark Steedman" ]
cs.LG stat.ML
null
1210.4892
null
null
http://arxiv.org/pdf/1210.4892v1
2012-10-16T17:47:18Z
2012-10-16T17:47:18Z
Unsupervised Joint Alignment and Clustering using Bayesian Nonparametrics
Joint alignment of a collection of functions is the process of independently transforming the functions so that they appear more similar to each other. Typically, such unsupervised alignment algorithms fail when presented with complex data sets arising from multiple modalities or make restrictive assumptions about the form of the functions or transformations, limiting their generality. We present a transformed Bayesian infinite mixture model that can simultaneously align and cluster a data set. Our model and associated learning scheme offer two key advantages: the optimal number of clusters is determined in a data-driven fashion through the use of a Dirichlet process prior, and it can accommodate any transformation function parameterized by a continuous parameter vector. As a result, it is applicable to a wide range of data types and transformation functions. We present positive results on synthetic two-dimensional data, on a set of one-dimensional curves, and on various image data sets, showing large improvements over previous work. We discuss several variations of the model and conclude with directions for future work.
[ "['Marwan A. Mattar' 'Allen R. Hanson' 'Erik G. Learned-Miller']", "Marwan A. Mattar, Allen R. Hanson, Erik G. Learned-Miller" ]
cs.LG stat.ML
null
1210.4893
null
null
http://arxiv.org/pdf/1210.4893v1
2012-10-16T17:47:32Z
2012-10-16T17:47:32Z
Sparse Q-learning with Mirror Descent
This paper explores a new framework for reinforcement learning based on online convex optimization, in particular mirror descent and related algorithms. Mirror descent can be viewed as an enhanced gradient method, particularly suited to minimization of convex functions in high-dimensional spaces. Unlike traditional gradient methods, mirror descent undertakes gradient updates of weights in both the dual space and primal space, which are linked together using a Legendre transform. Mirror descent can be viewed as a proximal algorithm where the distance generating function used is a Bregman divergence. A new class of proximal-gradient based temporal-difference (TD) methods is presented based on different Bregman divergences, which are more powerful than regular TD learning. Examples of Bregman divergences that are studied include p-norm functions, and the Mahalanobis distance based on the covariance of sample gradients. A new family of sparse mirror-descent reinforcement learning methods is proposed; these methods are able to find sparse fixed points of an l1-regularized Bellman equation at significantly lower computational cost than previous methods based on second-order matrix methods. An experimental study of mirror-descent reinforcement learning is presented using discrete and continuous Markov decision processes.
[ "Sridhar Mahadevan, Bo Liu", "['Sridhar Mahadevan' 'Bo Liu']" ]
cs.LG cs.AI stat.ML
null
1210.4896
null
null
http://arxiv.org/pdf/1210.4896v1
2012-10-16T17:48:08Z
2012-10-16T17:48:08Z
Closed-Form Learning of Markov Networks from Dependency Networks
Markov networks (MNs) are a powerful way to compactly represent a joint probability distribution, but most MN structure learning methods are very slow, due to the high cost of evaluating candidate structures. Dependency networks (DNs) represent a probability distribution as a set of conditional probability distributions. DNs are very fast to learn, but the conditional distributions may be inconsistent with each other, and few inference algorithms support DNs. In this paper, we present a closed-form method for converting a DN into an MN, allowing us to enjoy both the efficiency of DN learning and the convenience of the MN representation. When the DN is consistent, this conversion is exact. For inconsistent DNs, we present averaging methods that significantly improve the approximation. In experiments on 12 standard datasets, our methods are orders of magnitude faster than, and often more accurate than, combining conditional distributions using weight learning.
[ "['Daniel Lowd']", "Daniel Lowd" ]
cs.LG stat.ML
null
1210.4898
null
null
http://arxiv.org/pdf/1210.4898v1
2012-10-16T17:50:15Z
2012-10-16T17:50:15Z
Value Function Approximation in Noisy Environments Using Locally Smoothed Regularized Approximate Linear Programs
Recently, Petrik et al. demonstrated that L1-Regularized Approximate Linear Programming (RALP) could produce value functions and policies which compared favorably to established linear value function approximation techniques like LSPI. RALP's success primarily stems from the ability to solve the feature selection and value function approximation steps simultaneously. RALP's performance guarantees become looser if sampled next states are used. For very noisy domains, RALP requires an accurate model rather than samples, which can be unrealistic in some practical scenarios. In this paper, we demonstrate this weakness, and then introduce Locally Smoothed L1-Regularized Approximate Linear Programming (LS-RALP). We demonstrate that LS-RALP mitigates inaccuracies stemming from noise even without an accurate model. We show that, given some smoothness assumptions, as the number of samples increases, error from noise approaches zero, and provide experimental examples of LS-RALP's success on common reinforcement learning benchmark problems.
[ "Gavin Taylor, Ron Parr", "['Gavin Taylor' 'Ron Parr']" ]
cs.LG stat.ML
null
1210.4899
null
null
http://arxiv.org/pdf/1210.4899v1
2012-10-16T17:50:25Z
2012-10-16T17:50:25Z
Fast Exact Inference for Recursive Cardinality Models
Cardinality potentials are a generally useful class of high-order potentials that affect probabilities based on how many of D binary variables are active. Maximum a posteriori (MAP) inference for cardinality potential models is well-understood, with efficient computations taking O(D log D) time. Yet efficient marginalization and sampling have not been addressed as thoroughly in the machine learning community. We show that there exists a simple algorithm for computing marginal probabilities and drawing exact joint samples that runs in O(D log^2 D) time, and we show how to frame the algorithm as efficient belief propagation in a low-order tree-structured model that includes additional auxiliary variables. We then develop a new, more general class of models, termed Recursive Cardinality models, which take advantage of this efficiency. Finally, we show how to do efficient exact inference in models composed of a tree structure and a cardinality potential. We explore the expressive power of Recursive Cardinality models and empirically demonstrate their utility.
[ "Daniel Tarlow, Kevin Swersky, Richard S. Zemel, Ryan Prescott Adams,\n Brendan J. Frey", "['Daniel Tarlow' 'Kevin Swersky' 'Richard S. Zemel' 'Ryan Prescott Adams'\n 'Brendan J. Frey']" ]
cs.DS cs.LG stat.ML
null
1210.4902
null
null
http://arxiv.org/pdf/1210.4902v1
2012-10-16T17:51:21Z
2012-10-16T17:51:21Z
Efficiently Searching for Frustrated Cycles in MAP Inference
Dual decomposition provides a tractable framework for designing algorithms for finding the most probable (MAP) configuration in graphical models. However, for many real-world inference problems, the typical decomposition has a large integrality gap, due to frustrated cycles. One way to tighten the relaxation is to introduce additional constraints that explicitly enforce cycle consistency. Earlier work showed that cluster-pursuit algorithms, which iteratively introduce cycle and other higher-order consistency constraints, allow one to exactly solve many hard inference problems. However, these algorithms explicitly enumerate a candidate set of clusters, limiting them to triplets or other short cycles. We solve the search problem for cycle constraints, giving a nearly linear time algorithm for finding the most frustrated cycle of arbitrary length. We show how to use this search algorithm together with the dual decomposition framework and cluster-pursuit. The new algorithm exactly solves MAP inference problems arising from relational classification and stereo vision.
[ "['David Sontag' 'Do Kook Choe' 'Yitao Li']", "David Sontag, Do Kook Choe, Yitao Li" ]
stat.ML cs.LG
null
1210.4905
null
null
http://arxiv.org/pdf/1210.4905v1
2012-10-16T17:51:50Z
2012-10-16T17:51:50Z
Latent Composite Likelihood Learning for the Structured Canonical Correlation Model
Latent variable models are used to estimate quantities of interest that are observable only up to some measurement error. In many studies, such variables are known but not precisely quantifiable (such as "job satisfaction" in social sciences and marketing, "analytical ability" in educational testing, or "inflation" in economics). This leads to the development of measurement instruments (such as surveys, tests and price indexes) to record noisy indirect evidence for such unobserved variables. In such problems, there are postulated latent variables and a given measurement model. At the same time, other unanticipated latent variables can add further unmeasured confounding to the observed variables. The problem is how to deal with unanticipated latent variables. In this paper, we provide a method loosely inspired by canonical correlation that makes use of background information concerning the "known" latent variables. Given a partially specified structure, it provides a structure learning approach to detect "unknown unknowns," the confounding effect of potentially infinitely many other latent variables. This is done without explicitly modeling such extra latent factors. Because of the special structure of the problem, we are able to exploit a new variation of composite likelihood fitting to efficiently learn this structure. Validation is provided with experiments on synthetic data and the analysis of a large survey done with a sample of over 100,000 staff members of the National Health Service of the United Kingdom.
[ "Ricardo Silva", "['Ricardo Silva']" ]
cs.LG stat.ML
null
1210.4909
null
null
http://arxiv.org/pdf/1210.4909v1
2012-10-16T17:53:17Z
2012-10-16T17:53:17Z
Active Learning with Distributional Estimates
Active Learning (AL) is increasingly important in a broad range of applications. Two main AL principles to obtain accurate classification with few labeled data are refinement of the current decision boundary and exploration of poorly sampled regions. In this paper we derive a novel AL scheme that balances these two principles in a natural way. In contrast to many AL strategies, which are based on an estimated class-conditional probability $\hat{p}(y|x)$, a key component of our approach is to view this quantity as a random variable, hence explicitly considering the uncertainty in its estimated value. Our main contribution is a novel mathematical framework for uncertainty-based AL, and a corresponding AL scheme, where the uncertainty in $\hat{p}(y|x)$ is modeled by a second-order distribution. On the practical side, we show how to approximate such second-order distributions for kernel density classification. Finally, we find that over a large number of UCI, USPS and Caltech4 datasets, our AL scheme achieves significantly better learning curves than popular AL methods such as uncertainty sampling and error reduction sampling, when all use the same kernel density classifier.
[ "Jens Roeder, Boaz Nadler, Kevin Kunzmann, Fred A. Hamprecht", "['Jens Roeder' 'Boaz Nadler' 'Kevin Kunzmann' 'Fred A. Hamprecht']" ]
cs.AI cs.LG stat.ML
null
1210.4910
null
null
http://arxiv.org/pdf/1210.4910v1
2012-10-16T17:53:29Z
2012-10-16T17:53:29Z
New Advances and Theoretical Insights into EDML
EDML is a recently proposed algorithm for learning MAP parameters in Bayesian networks. In this paper, we present a number of new advances and insights on the EDML algorithm. First, we provide the multivalued extension of EDML, originally proposed for Bayesian networks over binary variables. Next, we identify a simplified characterization of EDML that further implies a simple fixed-point algorithm for the convex optimization problem that underlies it. This characterization further reveals a connection between EDML and EM: a fixed point of EDML is a fixed point of EM, and vice versa. We thus identify also a new characterization of EM fixed points, but in the semantics of EDML. Finally, we propose a hybrid EDML/EM algorithm that takes advantage of the improved empirical convergence behavior of EDML, while maintaining the monotonic improvement property of EM.
[ "Khaled S. Refaat, Arthur Choi, Adnan Darwiche", "['Khaled S. Refaat' 'Arthur Choi' 'Adnan Darwiche']" ]
cs.AI cs.LG stat.ML
null
1210.4913
null
null
http://arxiv.org/pdf/1210.4913v1
2012-10-16T17:55:57Z
2012-10-16T17:55:57Z
An Improved Admissible Heuristic for Learning Optimal Bayesian Networks
Recently two search algorithms, A* and breadth-first branch and bound (BFBnB), were developed based on a simple admissible heuristic for learning Bayesian network structures that optimize a scoring function. The heuristic represents a relaxation of the learning problem such that each variable chooses optimal parents independently. As a result, the heuristic may contain many directed cycles and result in a loose bound. This paper introduces an improved admissible heuristic that tries to avoid directed cycles within small groups of variables. A sparse representation is also introduced to store only the unique optimal parent choices. Empirical results show that the new techniques significantly improved the efficiency and scalability of A* and BFBnB on most of the datasets tested in this paper.
[ "['Changhe Yuan' 'Brandon Malone']", "Changhe Yuan, Brandon Malone" ]
cs.LG cs.IR stat.ML
null
1210.4914
null
null
http://arxiv.org/pdf/1210.4914v1
2012-10-16T17:56:08Z
2012-10-16T17:56:08Z
Latent Structured Ranking
Many latent (factorized) models have been proposed for recommendation tasks like collaborative filtering and for ranking tasks like document or image retrieval and annotation. Common to all those methods is that during inference the items are scored independently by their similarity to the query in the latent embedding space. The structure of the ranked list (i.e. considering the set of items returned as a whole) is not taken into account. This can be a problem because the set of top predictions can be either too diverse (containing results that contradict each other) or not diverse enough. In this paper we introduce a method for learning latent structured rankings that improves over existing methods by providing the right blend of predictions at the top of the ranked list. Particular emphasis is put on making this method scalable. Empirical results on large scale image annotation and music recommendation tasks show improvements over existing approaches.
[ "Jason Weston, John Blitzer", "['Jason Weston' 'John Blitzer']" ]
cs.LG stat.ML
null
1210.4917
null
null
http://arxiv.org/pdf/1210.4917v1
2012-10-16T17:56:43Z
2012-10-16T17:56:43Z
Fast Graph Construction Using Auction Algorithm
In practical machine learning systems, graph-based data representation has been widely used in various learning paradigms, ranging from unsupervised clustering to supervised classification. Besides those applications with natural graph or network structure data, such as social network analysis and relational learning, many other applications often involve a critical step in converting data vectors to an adjacency graph. In particular, a sparse subgraph extracted from the original graph is often required due to both theoretical and practical needs. Previous studies clearly show that the performance of different learning algorithms, e.g., clustering and classification, benefits from such sparse subgraphs with balanced node connectivity. However, the existing graph construction methods are either computationally expensive or deliver unsatisfactory performance. In this paper, we utilize a scalable method called the auction algorithm and its parallel extension to recover a sparse yet nearly balanced subgraph with significantly reduced computational cost. Empirical study and comparison with state-of-the-art approaches clearly demonstrate the superiority of the proposed method in both efficiency and accuracy.
[ "Jun Wang, Yinglong Xia", "['Jun Wang' 'Yinglong Xia']" ]
cs.LG cs.AI stat.ML
null
1210.4918
null
null
http://arxiv.org/pdf/1210.4918v1
2012-10-16T17:56:54Z
2012-10-16T17:56:54Z
Dynamic Teaching in Sequential Decision Making Environments
We describe theoretical bounds and a practical algorithm for teaching a model by demonstration in a sequential decision making environment. Unlike previous efforts that have optimized learners that watch a teacher demonstrate a static policy, we focus on the teacher as a decision maker who can dynamically choose different policies to teach different parts of the environment. We develop several teaching frameworks based on previously defined supervised protocols, such as Teaching Dimension, extending them to handle noise and sequences of inputs encountered in an MDP. We provide theoretical bounds on the learnability of several important model classes in this setting and suggest a practical algorithm for dynamic teaching.
[ "Thomas J. Walsh, Sergiu Goschin", "['Thomas J. Walsh' 'Sergiu Goschin']" ]
cs.LG cs.CE stat.ML
null
1210.4919
null
null
http://arxiv.org/pdf/1210.4919v1
2012-10-16T17:57:06Z
2012-10-16T17:57:06Z
Latent Dirichlet Allocation Uncovers Spectral Characteristics of Drought Stressed Plants
Understanding the adaptation process of plants to drought stress is essential in improving management practices, breeding strategies as well as engineering viable crops for a sustainable agriculture in the coming decades. Hyper-spectral imaging provides a particularly promising approach to gain such understanding since it allows one to discover, non-destructively, spectral characteristics of plants governed primarily by scattering and absorption characteristics of the leaf internal structure and biochemical constituents. Several drought stress indices have been derived using hyper-spectral imaging. However, they are typically based on only a few hyper-spectral images, rely on expert interpretation, and consider only a few wavelengths. In this study, we present the first data-driven approach to discovering spectral drought stress indices, treating it as an unsupervised labeling problem at massive scale. To make use of short-range dependencies of spectral wavelengths, we develop an online variational Bayes algorithm for latent Dirichlet allocation with a convolved Dirichlet regularizer. This approach scales to massive datasets and, hence, provides a more objective complement to plant physiological practices. The spectral topics found conform to plant physiological knowledge and can be computed in a fraction of the time compared to existing LDA approaches.
[ "Mirwaes Wahabzada, Kristian Kersting, Christian Bauckhage, Christoph\n Roemer, Agim Ballvora, Francisco Pinto, Uwe Rascher, Jens Leon, Lutz Ploemer", "['Mirwaes Wahabzada' 'Kristian Kersting' 'Christian Bauckhage'\n 'Christoph Roemer' 'Agim Ballvora' 'Francisco Pinto' 'Uwe Rascher'\n 'Jens Leon' 'Lutz Ploemer']" ]
cs.LG cs.IR stat.ML
null
1210.4920
null
null
http://arxiv.org/pdf/1210.4920v1
2012-10-16T17:57:22Z
2012-10-16T17:57:22Z
Factorized Multi-Modal Topic Model
Multi-modal data collections, such as corpora of paired images and text snippets, require analysis methods beyond single-view component and topic models. For continuous observations the current dominant approach is based on extensions of canonical correlation analysis, factorizing the variation into components shared by the different modalities and those private to each of them. For count data, multiple variants of topic models attempting to tie the modalities together have been presented. All of these, however, lack the ability to learn components private to one modality, and consequently will try to force dependencies even between minimally correlating modalities. In this work we combine the two approaches by presenting a novel HDP-based topic model that automatically learns both shared and private topics. The model is shown to be especially useful for querying the contents of one domain given samples of the other.
[ "['Seppo Virtanen' 'Yangqing Jia' 'Arto Klami' 'Trevor Darrell']", "Seppo Virtanen, Yangqing Jia, Arto Klami, Trevor Darrell" ]
cs.LG cs.CV cs.NA
null
1210.5034
null
null
http://arxiv.org/pdf/1210.5034v2
2012-10-21T06:17:08Z
2012-10-18T06:27:10Z
Optimal Computational Trade-Off of Inexact Proximal Methods
In this paper, we investigate the trade-off between convergence rate and computational cost when minimizing a composite functional with proximal-gradient methods, which are popular optimization tools in machine learning. We consider the case when the proximity operator is computed via an iterative procedure, which provides an approximation of the exact proximity operator. In that case, we obtain algorithms with two nested loops. We show that the strategy that minimizes the computational cost to reach a solution with a desired accuracy in finite time is to set the number of inner iterations to a constant, which differs from the strategy indicated by a convergence rate analysis. In the process, we also present a new procedure called SIP (Speedy Inexact Proximal-gradient algorithm) that is both computationally efficient and easy to implement. Our numerical experiments confirm the theoretical findings and suggest that SIP can be a very competitive alternative to the standard procedure.
[ "Pierre Machart (LIF, LSIS), Sandrine Anthoine (LATP), Luca Baldassarre\n (EPFL)", "['Pierre Machart' 'Sandrine Anthoine' 'Luca Baldassarre']" ]
cs.DC cs.LG
null
1210.5128
null
null
http://arxiv.org/pdf/1210.5128v1
2012-10-18T14:02:12Z
2012-10-18T14:02:12Z
A Novel Learning Algorithm for Bayesian Network and Its Efficient Implementation on GPU
Computational inference of causal relationships underlying complex networks, such as gene-regulatory pathways, is NP-complete due to its combinatorial nature when permuting all possible interactions. Markov chain Monte Carlo (MCMC) has been introduced to sample only part of the combinations while still guaranteeing convergence and traversability, and has therefore become widely used. However, MCMC cannot perform efficiently for networks with more than 15-20 nodes because of the computational complexity. In this paper, we use a general purpose processor (GPP) and a general purpose graphics processing unit (GPGPU) to implement and accelerate a novel Bayesian network learning algorithm. With a hash-table-based memory-saving strategy and a novel task assigning strategy, we achieve a 10-fold acceleration per iteration over a serial GPP implementation. Specifically, we use a greedy method to search for the best graph from a given order. We incorporate a prior component in the current scoring function, which further facilitates the search. Overall, we are able to apply this system to networks with more than 60 nodes, allowing inference and modeling of bigger and more complex networks than current methods permit.
[ "['Yu Wang' 'Weikang Qian' 'Shuchang Zhang' 'Bo Yuan']", "Yu Wang, Weikang Qian, Shuchang Zhang and Bo Yuan" ]
cs.LG stat.ML
null
1210.5135
null
null
http://arxiv.org/pdf/1210.5135v1
2012-10-18T14:15:40Z
2012-10-18T14:15:40Z
LSBN: A Large-Scale Bayesian Structure Learning Framework for Model Averaging
The motivation for this paper is to apply Bayesian structure learning using model averaging in large-scale networks. Currently, the Bayesian model averaging algorithm is applicable only to networks with tens of variables, constrained by its super-exponential complexity. We present a novel framework, called LSBN (Large-Scale Bayesian Network), that makes it possible to handle arbitrarily large networks by following the principle of divide-and-conquer. LSBN comprises three steps. First, it performs the partition using a second-order partition strategy, which achieves more robust results. Second, it conducts sampling and structure learning within each overlapping community after the community is isolated from the other variables by its Markov blanket. Finally, it employs an efficient algorithm to merge the structures of the overlapping communities into a whole. In comparison with four state-of-the-art large-scale network structure learning algorithms (ARACNE, PC, Greedy Search and MMHC), LSBN shows comparable results on five common benchmark datasets, evaluated by precision, recall and F-score. Moreover, LSBN makes it possible to learn large-scale Bayesian structure by model averaging, which used to be intractable. In summary, LSBN provides a scalable and parallel framework for the reconstruction of network structures. The complete information about overlapping communities comes as a byproduct, which could be used to mine meaningful clusters in biological networks, such as protein-protein interaction or gene regulatory networks, as well as in social networks.
[ "['Yang Lu' 'Mengying Wang' 'Menglu Li' 'Qili Zhu' 'Bo Yuan']", "Yang Lu, Mengying Wang, Menglu Li, Qili Zhu, Bo Yuan" ]
stat.ML cs.LG
null
1210.5196
null
null
http://arxiv.org/pdf/1210.5196v1
2012-10-18T17:30:43Z
2012-10-18T17:30:43Z
Matrix reconstruction with the local max norm
We introduce a new family of matrix norms, the "local max" norms, generalizing existing methods such as the max norm, the trace norm (nuclear norm), and the weighted or smoothed weighted trace norms, which have been extensively used in the literature as regularizers for matrix reconstruction problems. We show that this new family can be used to interpolate between the (weighted or unweighted) trace norm and the more conservative max norm. We test this interpolation on simulated data and on the large-scale Netflix and MovieLens ratings data, and find improved accuracy relative to the existing matrix norms. We also provide theoretical results showing learning guarantees for some of the new norms.
[ "['Rina Foygel' 'Nathan Srebro' 'Ruslan Salakhutdinov']", "Rina Foygel, Nathan Srebro, Ruslan Salakhutdinov" ]
cs.IT cs.LG math.IT math.NA
null
1210.5323
null
null
http://arxiv.org/pdf/1210.5323v3
2013-07-17T05:50:32Z
2012-10-19T06:03:05Z
The performance of orthogonal multi-matching pursuit under RIP
The orthogonal multi-matching pursuit (OMMP) is a natural extension of orthogonal matching pursuit (OMP). We denote the OMMP with parameter $M$ as OMMP(M), where $M\geq 1$ is an integer. The main difference between OMP and OMMP(M) is that OMMP(M) selects $M$ atoms per iteration, while OMP only adds one atom to the optimal atom set. In this paper, we study the performance of orthogonal multi-matching pursuit (OMMP) under RIP. In particular, we show that, when the measurement matrix $A$ satisfies the $(9s, 1/10)$-RIP, there exists an absolute constant $M_0\leq 8$ such that OMMP(M_0) can recover an $s$-sparse signal within $s$ iterations. We furthermore prove that, for slowly-decaying $s$-sparse signals, OMMP(M) can recover an $s$-sparse signal within $O(\frac{s}{M})$ iterations for a large class of $M$. In particular, for $M=s^a$ with $a\in [0,1/2]$, OMMP(M) can recover a slowly-decaying $s$-sparse signal within $O(s^{1-a})$ iterations. The result implies that OMMP can substantially reduce the computational complexity.
[ "Zhiqiang Xu", "['Zhiqiang Xu']" ]
cond-mat.dis-nn cond-mat.stat-mech cs.LG stat.ML
null
1210.5338
null
null
http://arxiv.org/pdf/1210.5338v2
2013-02-01T17:32:44Z
2012-10-19T08:08:55Z
Pairwise MRF Calibration by Perturbation of the Bethe Reference Point
We investigate different ways of generating approximate solutions to the pairwise Markov random field (MRF) selection problem. We focus mainly on the inverse Ising problem, but also discuss the somewhat related inverse Gaussian problem, because both types of MRF are suitable for inference tasks with the belief propagation algorithm (BP) under certain conditions. Our approach consists in taking a Bethe mean-field solution obtained with a maximum spanning tree (MST) of pairwise mutual information, referred to as the \emph{Bethe reference point}, as the starting point for further perturbation procedures. We consider three different ways following this idea: in the first one, we select and calibrate iteratively the optimal links to be added starting from the Bethe reference point; the second one is based on the observation that the natural gradient can be computed analytically at the Bethe point; in the third one, assuming no local field and using a low temperature expansion, we develop a dual loop joint model based on a well-chosen fundamental cycle basis. We indeed identify a subclass of planar models, which we refer to as \emph{Bethe-dual graph models}, having possibly many loops, but characterized by a singly connected dual factor graph, for which the partition function and the linear response can be computed exactly in respectively $O(N)$ and $O(N^2)$ operations, thanks to a dual weight propagation (DWP) message-passing procedure that we set up. When restricted to this subclass of models, the inverse Ising problem, being convex, becomes tractable at any temperature. Experimental tests on various datasets with refined $L_0$ or $L_1$ regularization procedures indicate that these approaches may be competitive and useful alternatives to existing ones.
[ "Cyril Furtlehner, Yufei Han, Jean-Marc Lasgouttes and Victorin Martin", "['Cyril Furtlehner' 'Yufei Han' 'Jean-Marc Lasgouttes' 'Victorin Martin']" ]
cs.LG
10.1109/TSP.2012.2226446
1210.5394
null
null
http://arxiv.org/abs/1210.5394v1
2012-10-19T12:15:28Z
2012-10-19T12:15:28Z
Bayesian Estimation for Continuous-Time Sparse Stochastic Processes
We consider continuous-time sparse stochastic processes from which we have only a finite number of noisy/noiseless samples. Our goal is to estimate the noiseless samples (denoising) and the signal in-between (interpolation problem). By relying on tools from the theory of splines, we derive the joint a priori distribution of the samples and show how this probability density function can be factorized. The factorization enables us to tractably implement the maximum a posteriori and minimum mean-square error (MMSE) criteria as two statistical approaches for estimating the unknowns. We compare the derived statistical methods with well-known techniques for the recovery of sparse signals, such as the $\ell_1$ norm and Log ($\ell_1$-$\ell_0$ relaxation) regularization methods. The simulation results show that, under certain conditions, the performance of the regularization techniques can be very close to that of the MMSE estimator.
[ "Arash Amini, Ulugbek S. Kamilov, Emrah Bostan and Michael Unser", "['Arash Amini' 'Ulugbek S. Kamilov' 'Emrah Bostan' 'Michael Unser']" ]
stat.ML cs.LG cs.NE
null
1210.5474
null
null
http://arxiv.org/pdf/1210.5474v1
2012-10-19T17:16:48Z
2012-10-19T17:16:48Z
Disentangling Factors of Variation via Generative Entangling
Here we propose a novel model family with the objective of learning to disentangle the factors of variation in data. Our approach is based on the spike-and-slab restricted Boltzmann machine, which we generalize to include higher-order interactions among multiple latent variables. Seen from a generative perspective, the multiplicative interactions emulate the entangling of factors of variation. Inference in the model can be seen as disentangling these generative factors. Unlike previous attempts at disentangling latent factors, the proposed model is trained using no supervised information regarding the latent factors. We apply our model to the task of facial expression classification.
[ "Guillaume Desjardins and Aaron Courville and Yoshua Bengio", "['Guillaume Desjardins' 'Aaron Courville' 'Yoshua Bengio']" ]
cs.LG
null
1210.5544
null
null
http://arxiv.org/pdf/1210.5544v1
2012-10-19T21:31:50Z
2012-10-19T21:31:50Z
Online Learning in Decentralized Multiuser Resource Sharing Problems
In this paper, we consider the general scenario of resource sharing in a decentralized system when the resource rewards/qualities are time-varying and unknown to the users, and using the same resource by multiple users leads to reduced quality due to resource sharing. Firstly, we consider a user-independent reward model with no communication between the users, where a user gets feedback about the congestion level in the resource it uses. Secondly, we consider user-specific rewards and allow costly communication between the users. The users have a cooperative goal of achieving the highest system utility. There are multiple obstacles in achieving this goal such as the decentralized nature of the system, unknown resource qualities, communication, computation and switching costs. We propose distributed learning algorithms with logarithmic regret with respect to the optimal allocation. Our logarithmic regret result holds under both i.i.d. and Markovian reward models, as well as under communication, computation and switching costs.
[ "['Cem Tekin' 'Mingyan Liu']", "Cem Tekin, Mingyan Liu" ]
stat.ML cs.LG
10.1002/sam.11184
1210.5631
null
null
http://arxiv.org/abs/1210.5631v2
2013-01-04T22:52:39Z
2012-10-20T14:39:39Z
Content-boosted Matrix Factorization Techniques for Recommender Systems
Many businesses are using recommender systems for marketing outreach. Recommendation algorithms can be either based on content or driven by collaborative filtering. We study different ways to incorporate content information directly into the matrix factorization approach of collaborative filtering. These content-boosted matrix factorization algorithms not only improve recommendation accuracy, but also provide useful insights about the contents, as well as make recommendations more easily interpretable.
[ "['Jennifer Nguyen' 'Mu Zhu']", "Jennifer Nguyen, Mu Zhu" ]
cs.CV cs.AI cs.LG
null
1210.5644
null
null
http://arxiv.org/pdf/1210.5644v1
2012-10-20T17:41:23Z
2012-10-20T17:41:23Z
Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials
Most state-of-the-art techniques for multi-class image segmentation and labeling use conditional random fields defined over pixels or image regions. While region-level models often feature dense pairwise connectivity, pixel-level models are considerably larger and have only permitted sparse graph structures. In this paper, we consider fully connected CRF models defined on the complete set of pixels in an image. The resulting graphs have billions of edges, making traditional inference algorithms impractical. Our main contribution is a highly efficient approximate inference algorithm for fully connected CRF models in which the pairwise edge potentials are defined by a linear combination of Gaussian kernels. Our experiments demonstrate that dense connectivity at the pixel level substantially improves segmentation and labeling accuracy.
[ "['Philipp Krähenbühl' 'Vladlen Koltun']", "Philipp Kr\\\"ahenb\\\"uhl and Vladlen Koltun" ]
math.ST cs.LG stat.TH
null
1210.5830
null
null
http://arxiv.org/pdf/1210.5830v3
2015-10-11T11:10:53Z
2012-10-22T08:22:57Z
Choice of V for V-Fold Cross-Validation in Least-Squares Density Estimation
This paper studies V-fold cross-validation for model selection in least-squares density estimation. The goal is to provide theoretical grounds for choosing V in order to minimize the least-squares loss of the selected estimator. We first prove a non-asymptotic oracle inequality for V-fold cross-validation and its bias-corrected version (V-fold penalization). In particular, this result implies that V-fold penalization is asymptotically optimal in the nonparametric case. Then, we compute the variance of V-fold cross-validation and related criteria, as well as the variance of key quantities for model selection performance. We show that these variances depend on V like 1+4/(V-1), at least in some particular cases, suggesting that the performance increases substantially from V=2 to V=5 or 10, and then is almost constant. Overall, this can explain the common advice to take V=5, at least in our setting when computational power is limited, as supported by some simulation experiments. An oracle inequality and exact formulas for the variance are also proved for Monte-Carlo cross-validation, also known as repeated cross-validation, where the parameter V is replaced by the number B of random splits of the data.
[ "Sylvain Arlot (SIERRA, DI-ENS), Matthieu Lerasle (JAD)", "['Sylvain Arlot' 'Matthieu Lerasle']" ]
cs.LG stat.ML
null
1210.5840
null
null
http://arxiv.org/pdf/1210.5840v1
2012-10-22T08:55:13Z
2012-10-22T08:55:13Z
Supervised Learning with Similarity Functions
We address the problem of general supervised learning when data can only be accessed through an (indefinite) similarity function between data points. Existing work on learning with indefinite kernels has concentrated solely on binary/multi-class classification problems. We propose a model that is generic enough to handle any supervised learning task and also subsumes the model previously proposed for classification. We give a "goodness" criterion for similarity functions w.r.t. a given supervised learning task and then adapt a well-known landmarking technique to provide efficient algorithms for supervised learning using "good" similarity functions. We demonstrate the effectiveness of our model on three important supervised learning problems: a) real-valued regression, b) ordinal regression and c) ranking, where we show that our method guarantees bounded generalization error. Furthermore, for the case of real-valued regression, we give a natural goodness definition that, when used in conjunction with a recent result in sparse vector recovery, guarantees a sparse predictor with bounded generalization error. Finally, we report results of our learning algorithms on regression and ordinal regression tasks using non-PSD similarity functions and demonstrate the effectiveness of our algorithms, especially that of the sparse landmark selection algorithm that achieves significantly higher accuracies than the baseline methods while offering reduced computational costs.
[ "['Purushottam Kar' 'Prateek Jain']", "Purushottam Kar and Prateek Jain" ]
stat.ML cs.LG
null
1210.5873
null
null
http://arxiv.org/pdf/1210.5873v1
2012-10-22T11:17:31Z
2012-10-22T11:17:31Z
Initialization of Self-Organizing Maps: Principal Components Versus Random Initialization. A Case Study
The performance of the Self-Organizing Map (SOM) algorithm is dependent on the initial weights of the map. The different initialization methods can broadly be classified into random and data-analysis-based approaches. In this paper, the performance of the random initialization (RI) approach is compared to that of principal component initialization (PCI), in which the initial map weights are chosen from the space of the principal components. Performance is evaluated by the fraction of variance unexplained (FVU). Datasets were classified into quasi-linear and non-linear, and it was observed that RI performed better for non-linear datasets; however, the performance of the PCI approach remains inconclusive for quasi-linear datasets.
[ "A. A. Akinduko and E. M. Mirkes", "['A. A. Akinduko' 'E. M. Mirkes']" ]
cs.LG stat.ML
null
1210.6001
null
null
http://arxiv.org/pdf/1210.6001v3
2013-06-07T09:45:45Z
2012-10-22T19:02:21Z
Reducing statistical time-series problems to binary classification
We show how binary classification methods developed to work on i.i.d. data can be used for solving statistical problems that are seemingly unrelated to classification and concern highly-dependent time series. Specifically, the problems of time-series clustering, homogeneity testing and the three-sample problem are addressed. The algorithms that we construct for solving these problems are based on a new metric between time-series distributions, which can be evaluated using binary classification methods. Universal consistency of the proposed algorithms is proven under the most general assumptions. The theoretical results are illustrated with experiments on synthetic and real-world data.
[ "['Daniil Ryabko' 'Jérémie Mary']", "Daniil Ryabko, J\\'er\\'emie Mary" ]
cs.DS cs.IR cs.LG
null
1210.6287
null
null
http://arxiv.org/pdf/1210.6287v2
2012-10-26T19:14:20Z
2012-10-23T16:51:31Z
Fast Exact Max-Kernel Search
The wide applicability of kernels makes the problem of max-kernel search ubiquitous and more general than the usual similarity search in metric spaces. We focus on solving this problem efficiently. We begin by characterizing the inherent hardness of the max-kernel search problem with a novel notion of directional concentration. Following that, we present an $O(n \log n)$ algorithm to index any set of objects (points in $\mathbb{R}^d$ or abstract objects) directly in the Hilbert space, without any explicit feature representation of the objects in this space. We present the first provably $O(\log n)$ algorithm for exact max-kernel search using this index. Empirical results for a variety of data sets as well as abstract objects demonstrate up to 4 orders of magnitude speedup in some cases. Extensions for approximate max-kernel search are also presented.
[ "['Ryan R. Curtin' 'Parikshit Ram' 'Alexander G. Gray']", "Ryan R. Curtin, Parikshit Ram, Alexander G. Gray" ]
cs.LG
null
1210.6292
null
null
http://arxiv.org/pdf/1210.6292v2
2014-02-06T11:29:49Z
2012-10-23T17:12:01Z
A density-sensitive hierarchical clustering method
We define a hierarchical clustering method: $\alpha$-unchaining single linkage, or $SL(\alpha)$. The input of this algorithm is a finite metric space and a certain parameter $\alpha$. This method is sensitive to the density of the distribution and offers a partial solution to the so-called chaining effect. We also define a modified version, $SL^*(\alpha)$, to treat chaining through points or small blocks. We study the theoretical properties of these methods and offer some theoretical background for the treatment of chaining effects.
[ "\\'Alvaro Mart\\'inez-P\\'erez", "['Álvaro Martínez-Pérez']" ]
cs.MS cs.CV cs.LG
null
1210.6293
null
null
http://arxiv.org/pdf/1210.6293v1
2012-10-23T17:15:03Z
2012-10-23T17:15:03Z
MLPACK: A Scalable C++ Machine Learning Library
MLPACK is a state-of-the-art, scalable, multi-platform C++ machine learning library released in late 2011 offering both a simple, consistent API accessible to novice users and high performance and flexibility to expert users by leveraging modern features of C++. MLPACK provides cutting-edge algorithms whose benchmarks exhibit far better performance than other leading machine learning libraries. MLPACK version 1.0.3, licensed under the LGPL, is available at http://www.mlpack.org.
[ "Ryan R. Curtin, James R. Cline, N.P. Slagle, William B. March,\n Parikshit Ram, Nishant A. Mehta, Alexander G. Gray", "['Ryan R. Curtin' 'James R. Cline' 'N. P. Slagle' 'William B. March'\n 'Parikshit Ram' 'Nishant A. Mehta' 'Alexander G. Gray']" ]
stat.ML cs.LG cs.SI physics.soc-ph q-fin.ST
10.1371/journal.pone.0064846
1210.6321
null
null
http://arxiv.org/abs/1210.6321v4
2013-03-23T14:34:36Z
2012-10-23T18:31:46Z
High quality topic extraction from business news explains abnormal financial market volatility
Understanding the mutual relationships between information flows and social activity in society today is one of the cornerstones of the social sciences. In financial economics, the key issue in this regard is understanding and quantifying how news of all possible types (geopolitical, environmental, social, financial, economic, etc.) affect trading and the pricing of firms in organized stock markets. In this article, we seek to address this issue by performing an analysis of more than 24 million news records provided by Thomson Reuters and of their relationship with trading activity for 206 major stocks in the S&P US stock index. We show that the whole landscape of news that affect stock price movements can be automatically summarized via simple regularized regressions between trading activity and news information pieces decomposed, with the help of simple topic modeling techniques, into their "thematic" features. Using these methods, we are able to estimate and quantify the impacts of news on trading. We introduce network-based visualization techniques to represent the whole landscape of news information associated with a basket of stocks. The examination of the words that are representative of the topic distributions confirms that our method is able to extract the significant pieces of information influencing the stock market. Our results show that one of the most puzzling stylized facts in financial economics, namely that at certain times trading volumes appear to be "abnormally large," can be partially explained by the flow of news. In this sense, our results prove that there is no "excess trading" when one restricts attention to times when the news is genuinely novel and provides relevant financial information.
[ "Ryohei Hisano, Didier Sornette, Takayuki Mizuno, Takaaki Ohnishi,\n Tsutomu Watanabe", "['Ryohei Hisano' 'Didier Sornette' 'Takayuki Mizuno' 'Takaaki Ohnishi'\n 'Tsutomu Watanabe']" ]
cs.SI cs.CY cs.LG
null
1210.6497
null
null
http://arxiv.org/pdf/1210.6497v1
2012-10-24T11:51:21Z
2012-10-24T11:51:21Z
Topic-Level Opinion Influence Model (TOIM): An Investigation Using Tencent Micro-Blogging
Mining user opinion from Micro-Blogging has been extensively studied on the most popular social networking sites such as Twitter and Facebook in the U.S., but few studies have been done on Micro-Blogging websites in other countries (e.g. China). In this paper, we analyze the social opinion influence on Tencent, one of the largest Micro-Blogging websites in China, endeavoring to unveil the behavior patterns of Chinese Micro-Blogging users. This paper proposes a Topic-Level Opinion Influence Model (TOIM) that simultaneously incorporates topic factors and direct social influence in a unified probabilistic framework. Based on TOIM, two topic-level opinion influence propagation and aggregation algorithms are developed to account for indirect influence: CP (Conservative Propagation) and NCP (Non-Conservative Propagation). Users' historical social interaction records are leveraged by TOIM to construct their progressive opinions and neighbors' opinion influence through a statistical learning process, which can be further utilized to predict users' future opinions on specific topics. To evaluate and test this proposed model, an experiment was designed using a sub-dataset from Tencent Micro-Blogging. The experimental results show that TOIM outperforms baseline methods in predicting users' opinions. CP and NCP show no significant difference from each other, but both significantly improve the recall and F1-measure of TOIM.
[ "Daifeng Li, Ying Ding, Xin Shuai, Golden Guo-zheng Sun, Jie Tang,\n Zhipeng Luo, Jingwei Zhang and Guo Zhang", "['Daifeng Li' 'Ying Ding' 'Xin Shuai' 'Golden Guo-zheng Sun' 'Jie Tang'\n 'Zhipeng Luo' 'Jingwei Zhang' 'Guo Zhang']" ]
cs.NE cs.LG stat.ML
10.1007/s13218-012-0207-2
1210.6511
null
null
http://arxiv.org/abs/1210.6511v1
2012-10-24T12:37:53Z
2012-10-24T12:37:53Z
Neural Networks for Complex Data
Artificial neural networks are simple and efficient machine learning tools. Defined originally in the traditional setting of simple vector data, neural network models have evolved to address more and more difficulties of complex real world problems, ranging from time evolving data to sophisticated data structures such as graphs and functions. This paper summarizes advances on those themes from the last decade, with a focus on results obtained by members of the SAMM team of Universit\'e Paris 1.
[ "Marie Cottrell (SAMM), Madalina Olteanu (SAMM), Fabrice Rossi (SAMM),\n Joseph Rynkiewicz (SAMM), Nathalie Villa-Vialaneix (SAMM)", "['Marie Cottrell' 'Madalina Olteanu' 'Fabrice Rossi' 'Joseph Rynkiewicz'\n 'Nathalie Villa-Vialaneix']" ]
cs.LG cs.CV stat.ML
null
1210.6707
null
null
http://arxiv.org/pdf/1210.6707v1
2012-10-24T23:57:35Z
2012-10-24T23:57:35Z
Clustering hidden Markov models with variational HEM
The hidden Markov model (HMM) is a widely-used generative model that copes with sequential data, assuming that each observation is conditioned on the state of a hidden Markov chain. In this paper, we derive a novel algorithm to cluster HMMs based on the hierarchical EM (HEM) algorithm. The proposed algorithm i) clusters a given collection of HMMs into groups of HMMs that are similar, in terms of the distributions they represent, and ii) characterizes each group by a "cluster center", i.e., a novel HMM that is representative for the group, in a manner that is consistent with the underlying generative model of the HMM. To cope with intractable inference in the E-step, the HEM algorithm is formulated as a variational optimization problem, and efficiently solved for the HMM case by leveraging an appropriate variational approximation. The benefits of the proposed algorithm, which we call variational HEM (VHEM), are demonstrated on several tasks involving time-series data, such as hierarchical clustering of motion capture sequences, and automatic annotation and retrieval of music and of online handwriting data, showing improvements over current methods. In particular, our variational HEM algorithm effectively leverages large amounts of data when learning annotation models by using an efficient hierarchical estimation procedure, which reduces learning times and memory requirements, while improving model robustness through better regularization.
[ "['Emanuele Coviello' 'Antoni B. Chan' 'Gert R. G. Lanckriet']", "Emanuele Coviello and Antoni B. Chan and Gert R.G. Lanckriet" ]
stat.ML cs.LG
10.1109/TPAMI.2014.2318728
1210.6738
null
null
http://arxiv.org/abs/1210.6738v4
2014-05-02T16:36:57Z
2012-10-25T04:25:00Z
Nested Hierarchical Dirichlet Processes
We develop a nested hierarchical Dirichlet process (nHDP) for hierarchical topic modeling. The nHDP is a generalization of the nested Chinese restaurant process (nCRP) that allows each word to follow its own path to a topic node according to a document-specific distribution on a shared tree. This alleviates the rigid, single-path formulation of the nCRP, allowing a document to more easily express thematic borrowings as a random effect. We derive a stochastic variational inference algorithm for the model, in addition to a greedy subtree selection method for each document, which allows for efficient inference using massive collections of text documents. We demonstrate our algorithm on 1.8 million documents from The New York Times and 3.3 million documents from Wikipedia.
[ "John Paisley, Chong Wang, David M. Blei and Michael I. Jordan", "['John Paisley' 'Chong Wang' 'David M. Blei' 'Michael I. Jordan']" ]
cs.LG cs.SD
null
1210.6766
null
null
http://arxiv.org/pdf/1210.6766v1
2012-10-25T09:22:59Z
2012-10-25T09:22:59Z
Structured Sparsity Models for Multiparty Speech Recovery from Reverberant Recordings
We tackle the multi-party speech recovery problem by modeling the acoustics of the reverberant chambers. Our approach exploits structured sparsity models to perform room modeling and speech recovery. We propose a scheme for characterizing the room acoustics from the unknown competing speech sources, relying on localization of the early images of the speakers by sparse approximation of the spatial spectra of the virtual sources in a free-space model. The images are then clustered by exploiting the low-rank structure of the spectro-temporal components belonging to each source. This enables us to identify the early support of the room impulse response function and its unique map to the room geometry. To further tackle the ambiguity of the reflection ratios, we propose a novel formulation of the reverberation model and estimate the absorption coefficients through a convex optimization exploiting a joint sparsity model formulated on the spatio-spectral sparsity of the concurrent speech representation. The acoustic parameters are then incorporated for separating individual speech signals, through either structured sparse recovery or inverse filtering of the acoustic channels. The experiments conducted on real data recordings demonstrate the effectiveness of the proposed approach for multi-party speech recovery and recognition.
[ "Afsaneh Asaei, Mohammad Golbabaee, Herv\\'e Bourlard, Volkan Cevher", "['Afsaneh Asaei' 'Mohammad Golbabaee' 'Hervé Bourlard' 'Volkan Cevher']" ]
cs.CE cs.LG
null
1210.6891
null
null
http://arxiv.org/pdf/1210.6891v1
2012-10-24T05:56:45Z
2012-10-24T05:56:45Z
Predicting Near-Future Churners and Win-Backs in the Telecommunications Industry
In this work, we present the strategies and techniques that we have developed for predicting near-future churners and win-backs for a telecom company. On a large-scale, real-world database containing customer profiles and some transaction data from a telecom company, we first analyzed the data schema, developed feature computation strategies, and then extracted a large set of relevant features that can be associated with customer churning and returning behaviors. Our features include both the original driver factors and some derived features. We evaluated our features on the imbalance-corrected (i.e., under-sampled) dataset and compared a large number of existing machine learning tools, especially decision-tree-based classifiers, for predicting churners and win-backs. We find that the RandomForest and SimpleCart learning algorithms generally perform well and tend to provide highly competitive prediction performance. Among the top-15 driver factors that signal churn behavior, we find that service utilization (e.g., the last two months' download and upload volume, the last three months' average upload and download) and payment-related factors are the most indicative features for predicting whether churn will happen soon. Such features can collectively reveal discrepancies between the service plans, payments, and the dynamically changing utilization needs of the customers. Our proposed features and their computation strategy exhibit reasonable precision in predicting churn behavior in the near future.
[ "Clifton Phua, Hong Cao, Jo\\~ao B\\'artolo Gomes, Minh Nhut Nguyen", "['Clifton Phua' 'Hong Cao' 'João Bártolo Gomes' 'Minh Nhut Nguyen']" ]
q-bio.MN cs.CE cs.LG q-bio.GN stat.ML
null
1210.6912
null
null
http://arxiv.org/pdf/1210.6912v1
2012-10-25T17:13:57Z
2012-10-25T17:13:57Z
Enhancing the functional content of protein interaction networks
Protein interaction networks are a promising type of data for studying complex biological systems. However, despite the rich information embedded in these networks, they face important data quality challenges of noise and incompleteness that adversely affect the results obtained from their analysis. Here, we explore the use of the concept of common neighborhood similarity (CNS), which is a form of local structure in networks, to address these issues. Although several CNS measures have been proposed in the literature, an understanding of their relative efficacies for the analysis of interaction networks has been lacking. We follow the framework of graph transformation to convert the given interaction network into a transformed network corresponding to a variety of CNS measures evaluated. The effectiveness of each measure is then estimated by comparing the quality of protein function predictions obtained from its corresponding transformed network with those from the original network. Using a large set of S. cerevisiae interactions, and a set of 136 GO terms, we find that several of the transformed networks produce more accurate predictions than those obtained from the original network. In particular, the $HC.cont$ measure proposed here performs particularly well for this task. Further investigation reveals that the two major factors contributing to this improvement are the abilities of CNS measures, especially $HC.cont$, to prune out noisy edges and introduce new links between functionally related proteins.
[ "['Gaurav Pandey' 'Sahil Manocha' 'Gowtham Atluri' 'Vipin Kumar']", "Gaurav Pandey and Sahil Manocha and Gowtham Atluri and Vipin Kumar" ]
cs.SI cs.CY cs.LG
null
1210.7047
null
null
http://arxiv.org/pdf/1210.7047v1
2012-10-26T03:04:34Z
2012-10-26T03:04:34Z
User-level Weibo Recommendation incorporating Social Influence based on Semi-Supervised Algorithm
Tencent Weibo, as one of the most popular micro-blogging services in China, has attracted millions of users, producing 30-60 million weibos (similar to tweets on Twitter) daily. With the overload problem of user-generated content, Tencent users find it increasingly hard to browse and find valuable information in time. In this paper, we propose a factor-graph-based weibo recommendation algorithm, TSI-WR (Topic-Level Social Influence based Weibo Recommendation), which helps Tencent users find the most suitable information. The main innovation is that we consider both direct and indirect social influence at the topic level, based on social balance theory. The main advantages of adopting this strategy are that it first builds a more accurate description of the latent relationship between two users with weak connections, which helps to solve the data sparsity problem, and second provides a more accurate recommendation for a given user from a wider range of candidates. Other meaningful contextual information is also combined into our model, including user profiles, user influence, the content of weibos, and the topic information of weibos. We also design a semi-supervised algorithm to further reduce the influence of data sparsity. The experiments show that all the selected variables are important and that the proposed model outperforms several baseline methods.
[ "Daifeng Li, Zhipeng Luo, Golden Guo-zheng Sun, Jie Tang, Jingwei Zhang", "['Daifeng Li' 'Zhipeng Luo' 'Golden Guo-zheng Sun' 'Jie Tang'\n 'Jingwei Zhang']" ]
stat.ML cs.LG math.OC
null
1210.7054
null
null
http://arxiv.org/pdf/1210.7054v1
2012-10-26T05:35:26Z
2012-10-26T05:35:26Z
Large-Scale Sparse Principal Component Analysis with Application to Text Data
Sparse PCA provides a linear combination of a small number of features that maximizes variance across data. Although sparse PCA has apparent advantages over PCA, such as better interpretability, it is generally thought to be computationally much more expensive. In this paper, we demonstrate the surprising fact that sparse PCA can be easier than PCA in practice, and that it can be reliably applied to very large data sets. This comes from a rigorous feature elimination pre-processing result, coupled with the favorable fact that features in real-life data typically have exponentially decreasing variances, which allows many features to be eliminated. We introduce a fast block coordinate ascent algorithm with much better computational complexity than existing first-order methods. We provide experimental results obtained on text corpora involving millions of documents and hundreds of thousands of features. These results illustrate how sparse PCA can help organize a large corpus of text data in a user-interpretable way, providing an attractive alternative to topic models.
[ "Youwei Zhang, Laurent El Ghaoui", "['Youwei Zhang' 'Laurent El Ghaoui']" ]
cs.LG cs.IR stat.ML
null
1210.7056
null
null
http://arxiv.org/pdf/1210.7056v1
2012-10-26T05:36:57Z
2012-10-26T05:36:57Z
Selective Transfer Learning for Cross Domain Recommendation
Collaborative filtering (CF) aims to predict users' ratings on items according to historical user-item preference data. In many real-world applications, preference data are usually sparse, which can make models overfit and fail to give accurate predictions. Recently, several research works have shown that by transferring knowledge from some manually selected source domains, the data sparseness problem can be mitigated. However, in most cases parts of the source domain data are not consistent with the observations in the target domain, which may misguide the target domain model building. In this paper, we propose a novel criterion based on the empirical prediction error and its variance to better capture the consistency across domains in CF settings. Consequently, we embed this criterion into a boosting framework to perform selective knowledge transfer. Compared with several state-of-the-art methods, our proposed selective transfer learning framework significantly improves the accuracy of rating prediction on several real-world recommendation tasks.
[ "Zhongqi Lu and Erheng Zhong and Lili Zhao and Wei Xiang and Weike Pan\n and Qiang Yang", "['Zhongqi Lu' 'Erheng Zhong' 'Lili Zhao' 'Wei Xiang' 'Weike Pan'\n 'Qiang Yang']" ]
cs.CV cs.LG math.OC stat.ML
null
1210.7070
null
null
http://arxiv.org/pdf/1210.7070v3
2012-11-02T10:11:10Z
2012-10-26T09:08:55Z
A Multiscale Framework for Challenging Discrete Optimization
Current state-of-the-art discrete optimization methods lag behind when it comes to challenging contrast-enhancing discrete energies (i.e., energies favoring different labels for neighboring variables). This work suggests a multiscale approach for these challenging problems. Deriving an algebraic representation allows us to coarsen any pairwise energy using any interpolation in a principled algebraic manner. Furthermore, we propose an energy-aware interpolation operator that efficiently exposes the multiscale landscape of the energy, yielding an effective coarse-to-fine optimization scheme. Results on challenging contrast-enhancing energies show significant improvement over state-of-the-art methods.
[ "Shai Bagon and Meirav Galun", "['Shai Bagon' 'Meirav Galun']" ]
cs.CV cs.LG math.OC stat.ML
null
1210.7362
null
null
http://arxiv.org/pdf/1210.7362v2
2012-11-07T21:09:53Z
2012-10-27T19:12:49Z
Discrete Energy Minimization, beyond Submodularity: Applications and Approximations
In this thesis I explore challenging discrete energy minimization problems that arise mainly in the context of computer vision tasks. This work motivates the use of such "hard-to-optimize" non-submodular functionals, and proposes methods and algorithms to cope with the NP-hardness of their optimization. Consequently, this thesis revolves around two axes: applications and approximations. The applications axis motivates the use of such "hard-to-optimize" energies by introducing new tasks. As the energies become less constrained and structured one gains more expressive power for the objective function achieving more accurate models. Results show how challenging, hard-to-optimize, energies are more adequate for certain computer vision applications. To overcome the resulting challenging optimization tasks the second axis of this thesis proposes approximation algorithms to cope with the NP-hardness of the optimization. Experiments show that these new methods yield good results for representative challenging problems.
[ "Shai Bagon", "['Shai Bagon']" ]
cs.CV cs.LG stat.ML
null
1210.7461
null
null
http://arxiv.org/pdf/1210.7461v1
2012-10-28T13:55:07Z
2012-10-28T13:55:07Z
Recognizing Static Signs from the Brazilian Sign Language: Comparing Large-Margin Decision Directed Acyclic Graphs, Voting Support Vector Machines and Artificial Neural Networks
In this paper, we explore and detail our experiments in a high-dimensionality, multi-class image classification problem often found in the automatic recognition of Sign Languages. Here, our efforts are directed towards comparing the characteristics, advantages and drawbacks of creating and training Support Vector Machines disposed in a Directed Acyclic Graph and Artificial Neural Networks to classify signs from the Brazilian Sign Language (LIBRAS). We explore how the different heuristics, hyperparameters and multi-class decision schemes affect the performance, efficiency and ease of use for each classifier. We provide hyperparameter surface maps capturing accuracy and efficiency, comparisons between DDAGs and 1-vs-1 SVMs, and effects of heuristics when training ANNs with Resilient Backpropagation. We report statistically significant results using Cohen's Kappa statistic for contingency tables.
[ "C\\'esar Roberto de Souza, Ednaldo Brigante Pizzolato, Mauro dos Santos\n Anjo", "['César Roberto de Souza' 'Ednaldo Brigante Pizzolato'\n 'Mauro dos Santos Anjo']" ]
cs.LG math.NA stat.ML
null
1210.7559
null
null
http://arxiv.org/pdf/1210.7559v4
2014-11-13T22:43:15Z
2012-10-29T04:38:41Z
Tensor decompositions for learning latent variable models
This work considers a computationally and statistically efficient parameter estimation method for a wide class of latent variable models---including Gaussian mixture models, hidden Markov models, and latent Dirichlet allocation---which exploits a certain tensor structure in their low-order observable moments (typically, of second- and third-order). Specifically, parameter estimation is reduced to the problem of extracting a certain (orthogonal) decomposition of a symmetric tensor derived from the moments; this decomposition can be viewed as a natural generalization of the singular value decomposition for matrices. Although tensor decompositions are generally intractable to compute, the decomposition of these specially structured tensors can be efficiently obtained by a variety of approaches, including power iterations and maximization approaches (similar to the case of matrices). A detailed analysis of a robust tensor power method is provided, establishing an analogue of Wedin's perturbation theorem for the singular vectors of matrices. This implies a robust and computationally tractable estimation approach for several popular latent variable models.
[ "['Anima Anandkumar' 'Rong Ge' 'Daniel Hsu' 'Sham M. Kakade'\n 'Matus Telgarsky']", "Anima Anandkumar and Rong Ge and Daniel Hsu and Sham M. Kakade and\n Matus Telgarsky" ]
cs.LG
null
1210.7657
null
null
http://arxiv.org/pdf/1210.7657v1
2012-10-29T13:30:27Z
2012-10-29T13:30:27Z
Text Classification with Compression Algorithms
This work concerns a comparison of SVM kernel methods in text categorization tasks. In particular, I define a kernel function that estimates the similarity between two objects from their compressed lengths. Compression algorithms can detect arbitrarily long dependencies within text strings. Text vectorization, by contrast, loses information during feature extraction and is highly sensitive to the language of the text; compression-based methods are language independent and require no text preprocessing. Moreover, the accuracy computed on the datasets (Web-KB, 20ng and Reuters-21578) is, in some cases, greater than that of Gaussian, linear and polynomial kernels. The method's limits are the computational time complexity of the Gram matrix and very poor performance on non-textual datasets.
[ "['Antonio Giuliano Zippo']", "Antonio Giuliano Zippo" ]
cs.LG cs.AI
null
1210.8291
null
null
http://arxiv.org/pdf/1210.8291v1
2012-10-31T10:42:32Z
2012-10-31T10:42:32Z
Learning in the Model Space for Fault Diagnosis
The emergence of large-scale sensor networks facilitates the collection of large amounts of real-time data to monitor and control complex engineering systems. However, in many cases the collected data may be incomplete or inconsistent, while the underlying environment may be time-varying or un-formulated. In this paper, we have developed an innovative cognitive fault diagnosis framework that tackles the above challenges. This framework investigates fault diagnosis in the model space instead of in the signal space. Learning in the model space is implemented by fitting a series of models using a series of signal segments selected with a rolling window. By investigating the learning techniques in the fitted model space, faulty models can be discriminated from healthy models using a one-class learning algorithm. The framework enables us to construct a fault library when unknown faults occur, which can be regarded as cognitive fault isolation. This paper also theoretically investigates how to measure the pairwise distance between two models in the model space and incorporates the model distance into the learning algorithm in the model space. The results on three benchmark applications and one simulated model for the Barcelona water distribution network have confirmed the effectiveness of the proposed framework.
[ "Huanhuan Chen, Peter Tino, Xin Yao, and Ali Rodan", "['Huanhuan Chen' 'Peter Tino' 'Xin Yao' 'Ali Rodan']" ]
stat.ML cs.AI cs.LG
null
1210.8353
null
null
http://arxiv.org/pdf/1210.8353v1
2012-10-31T14:55:50Z
2012-10-31T14:55:50Z
Temporal Autoencoding Restricted Boltzmann Machine
Much work has been done refining and characterizing the receptive fields learned by deep learning algorithms. A lot of this work has focused on the development of the Gabor-like filters learned when enforcing sparsity constraints on a natural image dataset. Little work, however, has investigated how these filters might extend to the temporal domain, namely through training on natural movies. Here we investigate exactly this problem in established temporal deep learning algorithms, as well as in a new learning paradigm suggested here, the Temporal Autoencoding Restricted Boltzmann Machine (TARBM).
[ "['Chris Häusler' 'Alex Susemihl']", "Chris H\\\"ausler, Alex Susemihl" ]
cs.AI cs.LG
null
1210.8385
null
null
http://arxiv.org/pdf/1210.8385v1
2012-10-31T16:41:37Z
2012-10-31T16:41:37Z
First Experiments with PowerPlay
Like a scientist or a playing child, PowerPlay not only learns new skills to solve given problems, but also invents new interesting problems by itself. By design, it continually comes up with the fastest to find, initially novel, but eventually solvable tasks. It also continually simplifies or compresses or speeds up solutions to previous tasks. Here we describe first experiments with PowerPlay. A self-delimiting recurrent neural network SLIM RNN is used as a general computational problem solving architecture. Its connection weights can encode arbitrary, self-delimiting, halting or non-halting programs affecting both environment (through effectors) and internal states encoding abstractions of event sequences. Our PowerPlay-driven SLIM RNN learns to become an increasingly general solver of self-invented problems, continually adding new problem solving procedures to its growing skill repertoire. Extending a recent conference paper, we identify interesting, emerging, developmental stages of our open-ended system. We also show how it automatically self-modularizes, frequently re-using code for previously invented skills, always trying to invent novel tasks that can be quickly validated because they do not require too many weight changes affecting too many previous tasks.
[ "Rupesh Kumar Srivastava, Bas R. Steunebrink and J\\\"urgen Schmidhuber", "['Rupesh Kumar Srivastava' 'Bas R. Steunebrink' 'Jürgen Schmidhuber']" ]
cs.LG stat.ML
null
1211.0025
null
null
http://arxiv.org/pdf/1211.0025v2
2014-06-21T13:47:44Z
2012-10-31T20:38:04Z
Venn-Abers predictors
This paper continues the study, both theoretical and empirical, of the method of Venn prediction, concentrating on binary prediction problems. Venn predictors produce probability-type predictions for the labels of test objects which are guaranteed to be well calibrated under the standard assumption that the observations are generated independently from the same distribution. We give a simple formalization and proof of this property. We also introduce Venn-Abers predictors, a new class of Venn predictors based on the idea of isotonic regression, and report promising empirical results both for Venn-Abers predictors and for their more computationally efficient simplified version.
[ "Vladimir Vovk and Ivan Petej", "['Vladimir Vovk' 'Ivan Petej']" ]
cs.SI cs.LG stat.ML
null
1211.0028
null
null
http://arxiv.org/pdf/1211.0028v1
2012-10-31T20:56:16Z
2012-10-31T20:56:16Z
Understanding the Interaction between Interests, Conversations and Friendships in Facebook
In this paper, we explore salient questions about user interests, conversations and friendships in the Facebook social network, using a novel latent space model that integrates several data types. A key challenge of studying Facebook's data is the wide range of data modalities such as text, network links, and categorical labels. Our latent space model seamlessly combines all three data modalities over millions of users, allowing us to study the interplay between user friendships, interests, and higher-order network-wide social trends on Facebook. The recovered insights not only answer our initial questions, but also reveal surprising facts about user interests in the context of Facebook's ecosystem. We also confirm that our results are significant with respect to evidential information from the study subjects.
[ "['Qirong Ho' 'Rong Yan' 'Rajat Raina' 'Eric P. Xing']", "Qirong Ho, Rong Yan, Rajat Raina, Eric P. Xing" ]
cs.DM cs.LG cs.SI
10.1109/MSP.2012.2235192
1211.0053
null
null
http://arxiv.org/abs/1211.0053v2
2013-03-10T15:04:40Z
2012-10-31T23:18:43Z
The Emerging Field of Signal Processing on Graphs: Extending High-Dimensional Data Analysis to Networks and Other Irregular Domains
In applications such as social, energy, transportation, sensor, and neuronal networks, high-dimensional data naturally reside on the vertices of weighted graphs. The emerging field of signal processing on graphs merges algebraic and spectral graph theoretic concepts with computational harmonic analysis to process such signals on graphs. In this tutorial overview, we outline the main challenges of the area, discuss different ways to define graph spectral domains, which are the analogues to the classical frequency domain, and highlight the importance of incorporating the irregular structures of graph data domains when processing signals on graphs. We then review methods to generalize fundamental operations such as filtering, translation, modulation, dilation, and downsampling to the graph setting, and survey the localized, multiscale transforms that have been proposed to efficiently extract information from high-dimensional data on graphs. We conclude with a brief discussion of open issues and possible extensions.
[ "David I Shuman, Sunil K. Narang, Pascal Frossard, Antonio Ortega, and\n Pierre Vandergheynst", "['David I Shuman' 'Sunil K. Narang' 'Pascal Frossard' 'Antonio Ortega'\n 'Pierre Vandergheynst']" ]
math.OC cs.LG math.NA stat.CO stat.ML
null
1211.0056
null
null
http://arxiv.org/pdf/1211.0056v2
2012-11-02T04:04:35Z
2012-10-31T23:47:04Z
Iterative Hard Thresholding Methods for $l_0$ Regularized Convex Cone Programming
In this paper we consider $l_0$ regularized convex cone programming problems. In particular, we first propose an iterative hard thresholding (IHT) method and its variant for solving $l_0$ regularized box constrained convex programming. We show that the sequence generated by these methods converges to a local minimizer. Also, we establish the iteration complexity of the IHT method for finding an $\epsilon$-local-optimal solution. We then propose a method for solving $l_0$ regularized convex cone programming by applying the IHT method to its quadratic penalty relaxation and establish its iteration complexity for finding an $\epsilon$-approximate local minimizer. Finally, we propose a variant of this method in which the associated penalty parameter is dynamically updated, and show that every accumulation point is a local minimizer of the problem.
[ "Zhaosong Lu", "['Zhaosong Lu']" ]
cs.LG
null
1211.0210
null
null
http://arxiv.org/pdf/1211.0210v1
2012-11-01T15:52:11Z
2012-11-01T15:52:11Z
Extension of TSVM to Multi-Class and Hierarchical Text Classification Problems With General Losses
Transductive SVM (TSVM) is a well known semi-supervised large margin learning method for binary text classification. In this paper we extend this method to multi-class and hierarchical classification problems. We point out that the determination of labels of unlabeled examples with fixed classifier weights is a linear programming problem. We devise an efficient technique for solving it. The method is applicable to general loss functions. We demonstrate the value of the new method using large margin loss on a number of multi-class and hierarchical classification datasets. For maxent loss we show empirically that our method is better than expectation regularization/constraint and posterior regularization methods, and competitive with the version of entropy regularization method which uses label constraints.
[ "['Sathiya Keerthi Selvaraj' 'Sundararajan Sellamanickam' 'Shirish Shevade']", "Sathiya Keerthi Selvaraj, Sundararajan Sellamanickam, Shirish Shevade" ]
stat.ML cs.LG math.PR
null
1211.0358
null
null
http://arxiv.org/pdf/1211.0358v2
2013-03-23T01:23:34Z
2012-11-02T03:13:08Z
Deep Gaussian Processes
In this paper we introduce deep Gaussian process (GP) models. Deep GPs are a deep belief network based on Gaussian process mappings. The data is modeled as the output of a multivariate GP. The inputs to that Gaussian process are then governed by another GP. A single layer model is equivalent to a standard GP or the GP latent variable model (GP-LVM). We perform inference in the model by approximate variational marginalization. This results in a strict lower bound on the marginal likelihood of the model which we use for model selection (number of layers and nodes per layer). Deep belief networks are typically applied to relatively large data sets using stochastic gradient descent for optimization. Our fully Bayesian treatment allows for the application of deep models even when data is scarce. Model selection by our variational bound shows that a five layer hierarchy is justified even when modelling a digit data set containing only 150 examples.
[ "['Andreas C. Damianou' 'Neil D. Lawrence']", "Andreas C. Damianou, Neil D. Lawrence" ]
cs.LG cond-mat.dis-nn stat.ML
null
1211.0439
null
null
http://arxiv.org/pdf/1211.0439v1
2012-11-02T12:46:24Z
2012-11-02T12:46:24Z
Learning curves for multi-task Gaussian process regression
We study the average case performance of multi-task Gaussian process (GP) regression as captured in the learning curve, i.e. the average Bayes error for a chosen task versus the total number of examples $n$ for all tasks. For GP covariances that are the product of an input-dependent covariance function and a free-form inter-task covariance matrix, we show that accurate approximations for the learning curve can be obtained for an arbitrary number of tasks $T$. We use these to study the asymptotic learning behaviour for large $n$. Surprisingly, multi-task learning can be asymptotically essentially useless, in the sense that examples from other tasks help only when the degree of inter-task correlation, $\rho$, is near its maximal value $\rho=1$. This effect is most extreme for learning of smooth target functions as described by e.g. squared exponential kernels. We also demonstrate that when learning many tasks, the learning curves separate into an initial phase, where the Bayes error on each task is reduced down to a plateau value by "collective learning" even though most tasks have not seen examples, and a final decay that occurs once the number of examples is proportional to the number of tasks.
[ "['Simon R. F. Ashton' 'Peter Sollich']", "Simon R. F. Ashton and Peter Sollich" ]
cs.NI cs.LG
null
1211.0447
null
null
http://arxiv.org/pdf/1211.0447v1
2012-11-02T13:21:48Z
2012-11-02T13:21:48Z
Ordinal Rating of Network Performance and Inference by Matrix Completion
This paper addresses the large-scale acquisition of end-to-end network performance. We make two distinct contributions: ordinal rating of network performance and inference by matrix completion. The former reduces measurement costs and unifies various metrics, which eases their processing in applications. The latter enables scalable and accurate inference, requiring neither structural information about the network nor geometric constraints. By combining both, the acquisition problem bears strong similarities to recommender systems. This paper investigates the applicability of various matrix factorization models used in recommender systems. We found that simple regularized matrix factorization is not only practical but also produces accurate results that are beneficial for peer selection.
[ "Wei Du and Yongjun Liao and and Pierre Geurts and Guy Leduc", "['Wei Du' 'Yongjun Liao' 'and Pierre Geurts' 'Guy Leduc']" ]
cs.IT cs.LG math.IT stat.ML
null
1211.0587
null
null
http://arxiv.org/pdf/1211.0587v2
2012-11-21T12:52:44Z
2012-11-03T00:41:46Z
Partition Tree Weighting
This paper introduces the Partition Tree Weighting technique, an efficient meta-algorithm for piecewise stationary sources. The technique works by performing Bayesian model averaging over a large class of possible partitions of the data into locally stationary segments. It uses a prior, closely related to the Context Tree Weighting technique of Willems, that is well suited to data compression applications. Our technique can be applied to any coding distribution at an additional time and space cost only logarithmic in the sequence length. We provide a competitive analysis of the redundancy of our method, and explore its application in a variety of settings. The order of the redundancy and the complexity of our algorithm matches those of the best competitors available in the literature, and the new algorithm exhibits a superior complexity-performance trade-off in our experiments.
[ "Joel Veness, Martha White, Michael Bowling, Andr\\'as Gy\\\"orgy", "['Joel Veness' 'Martha White' 'Michael Bowling' 'András György']" ]
cs.LG cs.DS
null
1211.0616
null
null
http://arxiv.org/pdf/1211.0616v4
2014-05-10T11:15:05Z
2012-11-03T15:14:40Z
The complexity of learning halfspaces using generalized linear methods
Many popular learning algorithms (e.g., regression, Fourier-transform-based algorithms, kernel SVM and kernel ridge regression) operate by reducing the problem to a convex optimization problem over a vector space of functions. These methods offer the currently best approach to several central problems such as learning halfspaces and learning DNFs. In addition they are widely used in numerous application domains. Despite their importance, there are still very few proof techniques to show limits on the power of these algorithms. We study the performance of this approach in the problem of (agnostically and improperly) learning halfspaces with margin $\gamma$. Let $\mathcal{D}$ be a distribution over labeled examples. The $\gamma$-margin error of a hyperplane $h$ is the probability of an example to fall on the wrong side of $h$ or at a distance $\le\gamma$ from it. The $\gamma$-margin error of the best $h$ is denoted $\mathrm{Err}_\gamma(\mathcal{D})$. An $\alpha(\gamma)$-approximation algorithm receives $\gamma,\epsilon$ as input and, using i.i.d. samples of $\mathcal{D}$, outputs a classifier with error rate $\le \alpha(\gamma)\mathrm{Err}_\gamma(\mathcal{D}) + \epsilon$. Such an algorithm is efficient if it uses $\mathrm{poly}(\frac{1}{\gamma},\frac{1}{\epsilon})$ samples and runs in time polynomial in the sample size. The best approximation ratio achievable by an efficient algorithm is $O\left(\frac{1/\gamma}{\sqrt{\log(1/\gamma)}}\right)$ and is achieved using an algorithm from the above class. Our main result shows that the approximation ratio of every efficient algorithm from this family must be $\ge \Omega\left(\frac{1/\gamma}{\mathrm{poly}\left(\log\left(1/\gamma\right)\right)}\right)$, essentially matching the best known upper bound.
[ "Amit Daniely and Nati Linial and Shai Shalev-Shwartz", "['Amit Daniely' 'Nati Linial' 'Shai Shalev-Shwartz']" ]
cs.LG math.OC stat.ML
null
1211.0632
null
null
http://arxiv.org/pdf/1211.0632v2
2013-01-22T17:07:37Z
2012-11-03T19:05:56Z
Stochastic ADMM for Nonsmooth Optimization
We present a stochastic setting for optimization problems with nonsmooth convex separable objective functions over linear equality constraints. To solve such problems, we propose a stochastic Alternating Direction Method of Multipliers (ADMM) algorithm. Our algorithm applies to a more general class of nonsmooth convex functions that does not necessarily have a closed-form solution when minimizing the augmented function directly. We also demonstrate the rates of convergence for our algorithm under various structural assumptions of the stochastic functions: $O(1/\sqrt{t})$ for convex functions and $O(\log t/t)$ for strongly convex functions. Compared to previous literature, we establish, for the first time, the convergence rate of the ADMM algorithm in terms of both the objective value and the feasibility violation.
[ "['Hua Ouyang' 'Niao He' 'Alexander Gray']", "Hua Ouyang, Niao He, Alexander Gray" ]
math.ST cs.LG stat.ML stat.TH
10.1214/12-AOS979
1211.0801
null
null
http://arxiv.org/abs/1211.0801v1
2012-11-05T09:36:40Z
2012-11-05T09:36:40Z
Discussion: Latent variable graphical model selection via convex optimization
Discussion of "Latent variable graphical model selection via convex optimization" by Venkat Chandrasekaran, Pablo A. Parrilo and Alan S. Willsky [arXiv:1008.1290].
[ "['Ming Yuan']", "Ming Yuan" ]
null
null
1211.0806
null
null
http://arxiv.org/abs/1211.0806v1
2012-11-05T09:51:07Z
2012-11-05T09:51:07Z
Discussion: Latent variable graphical model selection via convex optimization
Discussion of "Latent variable graphical model selection via convex optimization" by Venkat Chandrasekaran, Pablo A. Parrilo and Alan S. Willsky [arXiv:1008.1290].
[ "['Steffen Lauritzen' 'Nicolai Meinshausen']" ]
null
null
1211.0808
null
null
http://arxiv.org/abs/1211.0808v1
2012-11-05T09:59:33Z
2012-11-05T09:59:33Z
Discussion: Latent variable graphical model selection via convex optimization
Discussion of "Latent variable graphical model selection via convex optimization" by Venkat Chandrasekaran, Pablo A. Parrilo and Alan S. Willsky [arXiv:1008.1290].
[ "['Martin J. Wainwright']" ]
null
null
1211.0817
null
null
http://arxiv.org/abs/1211.0817v1
2012-11-05T10:32:57Z
2012-11-05T10:32:57Z
Discussion: Latent variable graphical model selection via convex optimization
Discussion of "Latent variable graphical model selection via convex optimization" by Venkat Chandrasekaran, Pablo A. Parrilo and Alan S. Willsky [arXiv:1008.1290].
[ "['Emmanuel J. Candés' 'Mahdi Soltanolkotabi']" ]
math.ST cs.LG stat.ML stat.TH
10.1214/12-AOS1020
1211.0835
null
null
http://arxiv.org/abs/1211.0835v1
2012-11-05T11:33:03Z
2012-11-05T11:33:03Z
Rejoinder: Latent variable graphical model selection via convex optimization
Rejoinder to "Latent variable graphical model selection via convex optimization" by Venkat Chandrasekaran, Pablo A. Parrilo and Alan S. Willsky [arXiv:1008.1290].
[ "['Venkat Chandrasekaran' 'Pablo A. Parrilo' 'Alan S. Willsky']", "Venkat Chandrasekaran, Pablo A. Parrilo, Alan S. Willsky" ]
stat.ML cs.LG
null
1211.0879
null
null
http://arxiv.org/pdf/1211.0879v1
2012-11-05T14:48:15Z
2012-11-05T14:48:15Z
Comparing K-Nearest Neighbors and Potential Energy Method in classification problem. A case study using KNN applet by E.M. Mirkes and real life benchmark data sets
The K-nearest neighbors (KNN) method is used in many supervised classification problems. The Potential Energy (PE) method has also been developed for classification problems, based on a physical metaphor. The energy potentials used in the experiments are the Yukawa potential and the Gaussian potential. In this paper, I use both the applet and a MATLAB program with real-life benchmark data to analyze the performance of the KNN and PE methods in classification problems. The results show that, in general, KNN and PE have similar performance. In particular, PE with the Yukawa potential performs worse than KNN in regions of the database where the data density is high. When the Gaussian potential is applied, PE and KNN behave similarly. The indicators used are correlation coefficients and information gain.
[ "['Yanshan Shi']", "Yanshan Shi" ]
stat.ML cs.LG
null
1211.0889
null
null
http://arxiv.org/pdf/1211.0889v3
2013-05-04T06:22:23Z
2012-11-02T07:42:54Z
APPLE: Approximate Path for Penalized Likelihood Estimators
In high-dimensional data analysis, penalized likelihood estimators are shown to provide superior results in both variable selection and parameter estimation. A new algorithm, APPLE, is proposed for calculating the Approximate Path for Penalized Likelihood Estimators. Both the convex penalty (such as LASSO) and the nonconvex penalty (such as SCAD and MCP) cases are considered. The APPLE efficiently computes the solution path for the penalized likelihood estimator using a hybrid of the modified predictor-corrector method and the coordinate-descent algorithm. APPLE is compared with several well-known packages via simulation and analysis of two gene expression data sets.
[ "['Yi Yu' 'Yang Feng']", "Yi Yu and Yang Feng" ]
cs.AI cs.LG cs.PF stat.ML
null
1211.0906
null
null
http://arxiv.org/pdf/1211.0906v2
2013-10-26T09:00:50Z
2012-11-05T16:15:16Z
Algorithm Runtime Prediction: Methods & Evaluation
Perhaps surprisingly, it is possible to predict how long an algorithm will take to run on a previously unseen input, using machine learning techniques to build a model of the algorithm's runtime as a function of problem-specific instance features. Such models have important applications to algorithm analysis, portfolio-based algorithm selection, and the automatic configuration of parameterized algorithms. Over the past decade, a wide variety of techniques have been studied for building such models. Here, we describe extensions and improvements of existing models, new families of models, and -- perhaps most importantly -- a much more thorough treatment of algorithm parameters as model inputs. We also comprehensively describe new and existing features for predicting algorithm runtime for propositional satisfiability (SAT), travelling salesperson (TSP) and mixed integer programming (MIP) problems. We evaluate these innovations through the largest empirical analysis of its kind, comparing to a wide range of runtime modelling techniques from the literature. Our experiments consider 11 algorithms and 35 instance distributions; they also span a very wide range of SAT, MIP, and TSP instances, with the least structured having been generated uniformly at random and the most structured having emerged from real industrial applications. Overall, we demonstrate that our new models yield substantially better runtime predictions than previous approaches in terms of their generalization to new problem instances, to new algorithms from a parameterized space, and to both simultaneously.
[ "Frank Hutter, Lin Xu, Holger H. Hoos, Kevin Leyton-Brown", "['Frank Hutter' 'Lin Xu' 'Holger H. Hoos' 'Kevin Leyton-Brown']" ]
cs.LG cs.AI
null
1211.0996
null
null
http://arxiv.org/pdf/1211.0996v2
2013-04-17T21:04:47Z
2012-11-05T20:42:16Z
Learning using Local Membership Queries
We introduce a new model of membership query (MQ) learning, where the learning algorithm is restricted to query points that are \emph{close} to random examples drawn from the underlying distribution. The learning model is intermediate between the PAC model (Valiant, 1984) and the PAC+MQ model (where the queries are allowed to be arbitrary points). Membership query algorithms are not popular among machine learning practitioners. Apart from the obvious difficulty of adaptively querying labelers, it has also been observed that querying \emph{unnatural} points leads to increased noise from human labelers (Lang and Baum, 1992). This motivates our study of learning algorithms that make queries that are close to examples generated from the data distribution. We restrict our attention to functions defined on the $n$-dimensional Boolean hypercube and say that a membership query is local if its Hamming distance from some example in the (random) training data is at most $O(\log(n))$. We show the following results in this model: (i) The class of sparse polynomials (with coefficients in R) over $\{0,1\}^n$ is polynomial time learnable under a large class of \emph{locally smooth} distributions using $O(\log(n))$-local queries. This class also includes the class of $O(\log(n))$-depth decision trees. (ii) The class of polynomial-sized decision trees is polynomial time learnable under product distributions using $O(\log(n))$-local queries. (iii) The class of polynomial size DNF formulas is learnable under the uniform distribution using $O(\log(n))$-local queries in time $n^{O(\log(\log(n)))}$. (iv) In addition we prove a number of results relating the proposed model to the traditional PAC model and the PAC+MQ model.
[ "Pranjal Awasthi, Vitaly Feldman, Varun Kanade", "['Pranjal Awasthi' 'Vitaly Feldman' 'Varun Kanade']" ]
cs.CC cs.DS cs.IT cs.LG math.IT
null
1211.1041
null
null
http://arxiv.org/pdf/1211.1041v3
2013-12-03T21:51:26Z
2012-11-05T21:39:22Z
Algorithms and Hardness for Robust Subspace Recovery
We consider a fundamental problem in unsupervised learning called \emph{subspace recovery}: given a collection of $m$ points in $\mathbb{R}^n$, if many but not necessarily all of these points are contained in a $d$-dimensional subspace $T$ can we find it? The points contained in $T$ are called {\em inliers} and the remaining points are {\em outliers}. This problem has received considerable attention in computer science and in statistics. Yet efficient algorithms from computer science are not robust to {\em adversarial} outliers, and the estimators from robust statistics are hard to compute in high dimensions. Are there algorithms for subspace recovery that are both robust to outliers and efficient? We give an algorithm that finds $T$ when it contains more than a $\frac{d}{n}$ fraction of the points. Hence, for say $d = n/2$ this estimator is both easy to compute and well-behaved when there are a constant fraction of outliers. We prove that it is Small Set Expansion hard to find $T$ when the fraction of errors is any larger, thus giving evidence that our estimator is an {\em optimal} compromise between efficiency and robustness. As it turns out, this basic problem has a surprising number of connections to other areas including small set expansion, matroid theory and functional analysis that we make use of here.
[ "['Moritz Hardt' 'Ankur Moitra']", "Moritz Hardt and Ankur Moitra" ]
cs.LG stat.ML
null
1211.1043
null
null
http://arxiv.org/pdf/1211.1043v1
2012-11-05T21:40:38Z
2012-11-05T21:40:38Z
Soft (Gaussian CDE) regression models and loss functions
Regression, unlike classification, has lacked a comprehensive and effective approach to deal with cost-sensitive problems by reusing (rather than re-training) general regression models. In this paper, a wide variety of cost-sensitive problems in regression (such as bids, asymmetric losses and rejection rules) can be solved effectively by a lightweight but powerful approach, consisting of: (1) the conversion of any traditional one-parameter crisp regression model into a two-parameter soft regression model, seen as a normal conditional density estimator, by the use of newly-introduced enrichment methods; and (2) the reframing of an enriched soft regression model to new contexts by an instance-dependent optimisation of the expected loss derived from the conditional normal distribution.
[ "['Jose Hernandez-Orallo']", "Jose Hernandez-Orallo" ]
cs.LG math.ST stat.ML stat.TH
null
1211.1082
null
null
http://arxiv.org/pdf/1211.1082v3
2013-04-26T17:50:21Z
2012-11-06T00:21:32Z
Active and passive learning of linear separators under log-concave distributions
We provide new results concerning label efficient, polynomial time, passive and active learning of linear separators. We prove that active learning provides an exponential improvement over PAC (passive) learning of homogeneous linear separators under nearly log-concave distributions. Building on this, we provide a computationally efficient PAC algorithm with optimal (up to a constant factor) sample complexity for such problems. This resolves an open question concerning the sample complexity of efficient PAC algorithms under the uniform distribution in the unit ball. Moreover, it provides the first bound for a polynomial-time PAC algorithm that is tight for an interesting infinite class of hypothesis functions under a general and natural class of data-distributions, providing significant progress towards a longstanding open question. We also provide new bounds for active and passive learning in the case that the data might not be linearly separable, both in the agnostic case and under the Tsybakov low-noise condition. To derive our results, we provide new structural results for (nearly) log-concave distributions, which might be of independent interest as well.
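The exponential improvement comes from margin-based active learning: fit a separator, then request labels only for points inside a shrinking band around it. The sketch below uses an ad hoc band schedule and a plain perceptron, not the paper's constants or algorithm.

```python
import numpy as np

def margin_based_active_learning(X, oracle, rounds=8, batch=50, band0=1.0):
    """Margin-based active learning for homogeneous linear separators.
    Each round halves the margin band and queries only points inside it.
    Schedule and batch sizes are illustrative, not the paper's."""
    rng = np.random.default_rng(0)
    # Seed with a few random labels.
    idx = rng.choice(len(X), size=batch, replace=False)
    Xl, yl = X[idx], oracle(X[idx])
    w = np.zeros(X.shape[1])
    for t in range(rounds):
        # Fit a homogeneous separator (simple perceptron here).
        w = np.zeros(X.shape[1])
        for _ in range(20):
            for xi, yi in zip(Xl, yl):
                if yi * (w @ xi) <= 0:
                    w += yi * xi
        band = band0 / 2**t
        margins = np.abs(X @ w) / (np.linalg.norm(w) + 1e-12)
        near = np.flatnonzero(margins < band)
        if len(near) == 0:
            break
        q = rng.choice(near, size=min(batch, len(near)), replace=False)
        Xl = np.vstack([Xl, X[q]])
        yl = np.concatenate([yl, oracle(X[q])])
    return w / np.linalg.norm(w)

# Toy run: isotropic Gaussian data (log-concave), true separator w*.
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 10))
w_star = rng.normal(size=10); w_star /= np.linalg.norm(w_star)
oracle = lambda Z: np.sign(Z @ w_star)
w_hat = margin_based_active_learning(X, oracle)
print(float(w_hat @ w_star))  # close to 1 means good recovery
```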
[ "['Maria Florina Balcan' 'Philip M. Long']", "Maria Florina Balcan and Philip M. Long" ]
cs.CV cs.LG
null
1211.1127
null
null
http://arxiv.org/pdf/1211.1127v1
2012-11-06T07:26:49Z
2012-11-06T07:26:49Z
Visual Transfer Learning: Informal Introduction and Literature Overview
Transfer learning techniques are important to handle small training sets and to allow for quick generalization even from only a few examples. The following paper is the introduction as well as the literature overview part of my thesis related to the topic of transfer learning for visual recognition problems.
[ "['Erik Rodner']", "Erik Rodner" ]
cs.LG cs.CV q-bio.NC
null
1211.1255
null
null
http://arxiv.org/pdf/1211.1255v1
2012-11-06T15:15:48Z
2012-11-06T15:15:48Z
Handwritten digit recognition by bio-inspired hierarchical networks
The human brain processes information showing learning and prediction abilities, but the underlying neuronal mechanisms still remain unknown. Recently, many studies have shown that neuronal networks are capable of both generalization and association of sensory inputs. In this paper, following a body of neurophysiological evidence, we propose a learning framework with strong biological plausibility that mimics prominent functions of cortical circuitries. We developed the Inductive Conceptual Network (ICN), a hierarchical bio-inspired network able to learn invariant patterns through Variable-order Markov Models implemented in its nodes. The outputs of the top-most node of the ICN hierarchy, representing the highest input generalization, allow for automatic classification of inputs. We found that the ICN clustered MNIST images with an error of 5.73% and USPS images with an error of 12.56%.
[ "Antonio G. Zippo, Giuliana Gelsomino, Sara Nencini, Gabriele E. M.\n Biella", "['Antonio G. Zippo' 'Giuliana Gelsomino' 'Sara Nencini'\n 'Gabriele E. M. Biella']" ]
stat.ML cond-mat.dis-nn cond-mat.stat-mech cs.LG
null
1211.1328
null
null
http://arxiv.org/pdf/1211.1328v2
2013-09-30T10:36:51Z
2012-11-06T17:58:39Z
Random walk kernels and learning curves for Gaussian process regression on random graphs
We consider learning on graphs, guided by kernels that encode similarity between vertices. Our focus is on random walk kernels, the analogues of squared exponential kernels in Euclidean spaces. We show that on large, locally treelike, graphs these have some counter-intuitive properties, specifically in the limit of large kernel lengthscales. We consider using these kernels as covariance matrices of e.g.\ Gaussian processes (GPs). In this situation one typically scales the prior globally to normalise the average of the prior variance across vertices. We demonstrate that, in contrast to the Euclidean case, this generically leads to significant variation in the prior variance across vertices, which is undesirable from the probabilistic modelling point of view. We suggest the random walk kernel should be normalised locally, so that each vertex has the same prior variance, and analyse the consequences of this by studying learning curves for Gaussian process regression. Numerical calculations as well as novel theoretical predictions for the learning curves using belief propagation make it clear that one obtains distinctly different probabilistic models depending on the choice of normalisation. Our method for predicting the learning curves using belief propagation is significantly more accurate than previous approximations and should become exact in the limit of large random graphs.
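A short sketch contrasting the two normalisations (our implementation of a standard random walk kernel; on a star graph, global normalisation leaves the hub and the leaves with very different prior variances, while the local normalisation advocated here sets every vertex's variance to 1):

```python
import numpy as np

def random_walk_kernel(A, a=0.9, p=10):
    """Random walk kernel K = ((1-a) I + a D^{-1/2} A D^{-1/2})^p,
    the graph analogue of a squared exponential kernel; p plays the
    role of an inverse lengthscale."""
    d = A.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    M = (1 - a) * np.eye(len(A)) + a * (Dinv @ A @ Dinv)
    return np.linalg.matrix_power(M, p)

def normalise_globally(K):
    """Scale so the *average* prior variance is 1; individual
    vertices may still have very different variances."""
    return K / np.mean(np.diag(K))

def normalise_locally(K):
    """Scale so *every* vertex has prior variance 1, as the paper
    recommends: K_ij / sqrt(K_ii K_jj)."""
    s = 1.0 / np.sqrt(np.diag(K))
    return K * np.outer(s, s)

# Star graph: hub vertex 0 connected to five leaves.
A = np.zeros((6, 6))
A[0, 1:] = A[1:, 0] = 1
Kg = normalise_globally(random_walk_kernel(A))
Kl = normalise_locally(random_walk_kernel(A))
print(np.diag(Kg).round(3))  # unequal variances across vertices
print(np.diag(Kl).round(3))  # all ones
```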
[ "Matthew Urry and Peter Sollich", "['Matthew Urry' 'Peter Sollich']" ]
cs.LG
10.1016/j.ins.2014.08.058
1211.1513
null
null
http://arxiv.org/abs/1211.1513v2
2013-03-27T09:00:24Z
2012-11-07T10:57:38Z
K-Plane Regression
In this paper, we present a novel algorithm for piecewise linear regression that can learn continuous as well as discontinuous piecewise linear functions. The main idea is to repeatedly partition the data and learn a linear model in each partition. While a simple algorithm incorporating this idea does not work well, an interesting modification results in a good algorithm. The proposed algorithm is similar in spirit to the $k$-means clustering algorithm. We show that our algorithm can also be viewed as an EM algorithm for maximum likelihood estimation of parameters under a reasonable probability model. We empirically demonstrate the effectiveness of our approach by comparing its performance with state-of-the-art regression learning algorithms on several real-world datasets.
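A minimal sketch of the naive alternation the abstract alludes to, i.e. the simple loop that the paper's modification improves upon (this bare version is known to work poorly on its own):

```python
import numpy as np

def k_plane_regression(X, y, k=3, iters=30, rng=0):
    """Naive k-plane regression: alternate between (1) assigning each
    point to the hyperplane with the smallest residual and (2) refitting
    a least-squares linear model inside each partition. This is the
    simple loop the paper improves upon, not their final algorithm."""
    rng = np.random.default_rng(rng)
    Xb = np.hstack([X, np.ones((len(X), 1))])     # affine models
    labels = rng.integers(0, k, size=len(X))
    W = np.zeros((k, Xb.shape[1]))
    for _ in range(iters):
        for j in range(k):                         # refit step
            mask = labels == j
            if mask.sum() > Xb.shape[1]:
                W[j], *_ = np.linalg.lstsq(Xb[mask], y[mask], rcond=None)
        resid = (Xb @ W.T - y[:, None]) ** 2       # assign step
        new = resid.argmin(axis=1)
        if (new == labels).all():
            break
        labels = new
    return W, labels

# Toy discontinuous piecewise linear target with two pieces.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(400, 1))
y = np.where(X[:, 0] < 0, 2 * X[:, 0] + 1, -3 * X[:, 0])
y = y + 0.05 * rng.normal(size=400)
W, lab = k_plane_regression(X, y, k=2)
print(W.round(2))  # rows approximate [2, 1] and [-3, 0]
```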
[ "['Naresh Manwani' 'P. S. Sastry']", "Naresh Manwani, P. S. Sastry" ]
cs.CE cs.LG
null
1211.1526
null
null
http://arxiv.org/pdf/1211.1526v2
2012-11-08T13:54:13Z
2012-11-07T12:47:57Z
Explosion prediction of oil gas using SVM and Logistic Regression
The prevention of dangerous chemical accidents is a primary problem of industrial manufacturing. Among accidents involving dangerous chemicals, oil gas explosions play an important role. The essential task of explosion prevention is to estimate the explosion limit of a given oil gas accurately. In this paper, Support Vector Machines (SVM) and Logistic Regression (LR) are used to predict the explosion of oil gas. LR yields an explicit probability formula for explosion, as well as the explosive range of oil gas concentrations as a function of the oxygen concentration. Meanwhile, SVM gives higher prediction accuracy. Furthermore, considering practical requirements, the effect of the penalty parameter on the distribution of the two types of errors is discussed.
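A hedged sketch of the modelling setup, with placeholder features and synthetic labels standing in for measured explosion data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Placeholder data: columns = [oil gas concentration (%), oxygen
# concentration (%)]; label 1 = explosive mixture. Real experiments
# would use measured explosion limits instead of this synthetic rule.
rng = np.random.default_rng(0)
X = rng.uniform([0.0, 5.0], [10.0, 21.0], size=(500, 2))
y = ((X[:, 0] > 1.5) & (X[:, 0] < 8.0) & (X[:, 1] > 10.0)).astype(int)

# LR gives an explicit probability formula:
# P(explosion) = 1 / (1 + exp(-(w1*gas + w2*oxygen + b))),
# from which an explosive concentration range can be read off for
# any fixed oxygen level.
lr = LogisticRegression(max_iter=1000).fit(X, y)
print(lr.coef_, lr.intercept_)

# The SVM typically predicts more accurately; raising the weight on
# the explosion class trades more false alarms for fewer missed
# explosions, mirroring the paper's penalty-parameter discussion.
svm = SVC(C=10.0, class_weight={0: 1.0, 1: 5.0}).fit(X, y)
print(svm.score(X, y))
```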
[ "['Xiaofei Wang' 'Mingming Zhang' 'Liyong Shen' 'Suixiang Gao']", "Xiaofei Wang, Mingming Zhang, Liyong Shen, Suixiang Gao" ]
cs.CV cs.LG
null
1211.1544
null
null
http://arxiv.org/pdf/1211.1544v3
2012-11-09T10:36:22Z
2012-11-07T13:35:52Z
Image denoising with multi-layer perceptrons, part 1: comparison with existing algorithms and with bounds
Image denoising can be described as the problem of mapping from a noisy image to a noise-free image. The best currently available denoising methods approximate this mapping with cleverly engineered algorithms. In this work we attempt to learn this mapping directly with plain multi-layer perceptrons (MLPs) applied to image patches. We show that by training on large image databases we are able to outperform the current state-of-the-art image denoising methods. In addition, our method achieves results that are superior to one type of theoretical bound and goes a long way toward closing the gap with a second type of theoretical bound. Our approach is easily adapted to less extensively studied types of noise, such as mixed Poisson-Gaussian noise, JPEG artifacts, salt-and-pepper noise and noise resembling stripes, for which we achieve excellent results as well. We show that combining a block-matching procedure with MLPs can further improve the results on certain images. In a second paper, we detail the training trade-offs and the inner mechanisms of our MLPs.
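A toy-scale sketch of the patch-to-patch mapping being learned (one small hidden layer and plain SGD; the paper trains far larger MLPs on millions of patches):

```python
import numpy as np

def train_denoising_mlp(clean_patches, sigma=25/255, hidden=512,
                        epochs=5, lr=0.01, seed=0):
    """Learn a direct mapping noisy patch -> clean patch with a plain
    one-hidden-layer MLP and SGD. Toy-scale stand-in for the paper's
    much larger networks and training sets."""
    rng = np.random.default_rng(seed)
    n = clean_patches.shape[1]                       # patch dimension
    W1 = rng.normal(0, 1/np.sqrt(n), (n, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1/np.sqrt(hidden), (hidden, n)); b2 = np.zeros(n)
    for _ in range(epochs):
        for x in clean_patches:
            noisy = x + rng.normal(0, sigma, n)      # fresh noise each pass
            h = np.tanh(noisy @ W1 + b1)             # forward pass
            out = h @ W2 + b2
            g = 2 * (out - x) / n                    # MSE gradient
            gW2 = np.outer(h, g); gb2 = g
            gh = (g @ W2.T) * (1 - h**2)             # backprop through tanh
            gW1 = np.outer(noisy, gh); gb1 = gh
            W2 -= lr * gW2; b2 -= lr * gb2           # SGD step
            W1 -= lr * gW1; b1 -= lr * gb1
    return lambda p: np.tanh(p @ W1 + b1) @ W2 + b2  # the denoiser

# Usage: 8x8 patches flattened to 64-vectors in [0, 1].
patches = np.random.default_rng(4).uniform(0, 1, size=(1000, 64))
denoise = train_denoising_mlp(patches)
```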
[ "['Harold Christopher Burger' 'Christian J. Schuler' 'Stefan Harmeling']", "Harold Christopher Burger, Christian J. Schuler, Stefan Harmeling" ]
cs.LG cs.NA math.OC
null
1211.1550
null
null
http://arxiv.org/pdf/1211.1550v2
2012-11-12T17:50:39Z
2012-11-07T13:49:26Z
A Riemannian geometry for low-rank matrix completion
We propose a new Riemannian geometry for fixed-rank matrices that is specifically tailored to the low-rank matrix completion problem. Exploiting the degrees of freedom of a quotient space, we tune the metric on our search space to the particular least-squares cost function. At one level, this illustrates in a novel way how to exploit the versatile framework of optimization on quotient manifolds. At another level, our algorithm can be considered an improved version of LMaFit, the state-of-the-art Gauss-Seidel algorithm. We develop the tools needed to perform both first-order and second-order optimization. In particular, we propose gradient descent schemes (steepest descent and conjugate gradient) and trust-region algorithms. We also show that, thanks to the simplicity of the cost function, it is numerically cheap to perform an exact linesearch given a search direction, which makes our algorithms competitive with the state of the art on standard low-rank matrix completion instances.
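As a point of reference for the geometry being proposed, here is plain gradient descent on the standard factorisation $X = GH^T$ over the observed entries, a generic baseline rather than the authors' quotient-manifold method:

```python
import numpy as np

def mc_gradient_descent(M, mask, r, steps=2000, lr=0.02, seed=0):
    """Baseline low-rank matrix completion: minimise
    ||P_Omega(G @ H.T - M)||_F^2 over rank-r factors by gradient
    descent. The paper instead optimises on a quotient manifold with
    a metric tuned to this least-squares cost."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    G = rng.normal(scale=0.1, size=(m, r))
    H = rng.normal(scale=0.1, size=(n, r))
    for _ in range(steps):
        R = mask * (G @ H.T - M)      # residual on observed entries only
        G, H = G - lr * (R @ H), H - lr * (R.T @ G)
    return G @ H.T

# Toy instance: rank-2 matrix with half the entries observed.
rng = np.random.default_rng(5)
M = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 40))
mask = rng.random(M.shape) < 0.5
X = mc_gradient_descent(M, mask, r=2)
# Relative error on the *unobserved* entries.
print(np.linalg.norm((1 - mask) * (X - M)) / np.linalg.norm((1 - mask) * M))
```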
[ "['B. Mishra' 'K. Adithya Apuroop' 'R. Sepulchre']", "B. Mishra, K. Adithya Apuroop and R. Sepulchre" ]
cs.CV cs.LG
null
1211.1552
null
null
http://arxiv.org/pdf/1211.1552v1
2012-11-07T13:50:19Z
2012-11-07T13:50:19Z
Image denoising with multi-layer perceptrons, part 2: training trade-offs and analysis of their mechanisms
Image denoising can be described as the problem of mapping from a noisy image to a noise-free image. In another paper, we show that multi-layer perceptrons can achieve outstanding image denoising performance for various types of noise (additive white Gaussian noise, mixed Poisson-Gaussian noise, JPEG artifacts, salt-and-pepper noise and noise resembling stripes). In this work we discuss in detail which trade-offs have to be considered during the training procedure. We will show how to achieve good results and which pitfalls to avoid. By analysing the activation patterns of the hidden units we are able to make observations regarding the functioning principle of multi-layer perceptrons trained for image denoising.
[ "['Harold Christopher Burger' 'Christian J. Schuler' 'Stefan Harmeling']", "Harold Christopher Burger, Christian J. Schuler, Stefan Harmeling" ]
cs.RO cs.CV cs.LG cs.SY
null
1211.1690
null
null
http://arxiv.org/pdf/1211.1690v1
2012-11-07T21:20:23Z
2012-11-07T21:20:23Z
Learning Monocular Reactive UAV Control in Cluttered Natural Environments
Autonomous navigation for large Unmanned Aerial Vehicles (UAVs) is fairly straightforward, as expensive sensors and monitoring devices can be employed. In contrast, obstacle avoidance remains a challenging task for Micro Aerial Vehicles (MAVs), which operate at low altitude in cluttered environments. Unlike large vehicles, MAVs can only carry very light sensors, such as cameras, making autonomous navigation through obstacles much more challenging. In this paper, we describe a system that navigates a small quadrotor helicopter autonomously at low altitude through natural forest environments. Using only a single cheap camera to perceive the environment, we are able to maintain a constant velocity of up to 1.5 m/s. Given a small set of human pilot demonstrations, we use recent state-of-the-art imitation learning techniques to train a controller that can avoid trees by adapting the MAV's heading. We demonstrate the performance of our system in a more controlled indoor environment, and in real natural forest environments outdoors.
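The imitation learning machinery referenced here follows the DAgger pattern: roll out a mixture of expert and learner, record the pilot's command on every visited frame, and retrain on the aggregated data. The simulator and pilot below are toy stand-ins, not the paper's system.

```python
import numpy as np

class ToyEnv:
    """Stand-in for the MAV simulator: the observation is a feature
    vector, the action a scalar heading command (purely illustrative)."""
    def __init__(self, rng): self.rng = rng
    def reset(self):
        self.s = self.rng.normal(size=4)
        return self.s
    def step(self, a):
        self.s = self.s + 0.1 * self.rng.normal(size=4)
        return self.s, False

class LinearLearner:
    """Least-squares regression from observations to heading commands."""
    def __init__(self): self.w = None
    def fit(self, X, y):
        self.w, *_ = np.linalg.lstsq(X, y, rcond=None)
    def predict(self, x):
        return float(x @ self.w) if self.w is not None else 0.0

def dagger(env, expert, learner, iters=5, horizon=100, beta0=1.0, seed=None):
    """DAgger-style imitation learning loop: each iteration executes a
    mixture of expert and learner, records the expert's action on every
    visited observation, and retrains on the aggregated dataset."""
    rng = np.random.default_rng(seed)
    D_obs, D_act = [], []
    for i in range(iters):
        beta = beta0 * 0.5 ** i          # decaying expert mixing weight
        obs = env.reset()
        for _ in range(horizon):
            a_exp = expert(obs)          # pilot's heading for this frame
            a = a_exp if rng.random() < beta else learner.predict(obs)
            D_obs.append(obs.copy()); D_act.append(a_exp)
            obs, _ = env.step(a)
        learner.fit(np.array(D_obs), np.array(D_act))
    return learner

rng = np.random.default_rng(6)
w_star = rng.normal(size=4)
learner = dagger(ToyEnv(rng), lambda o: float(o @ w_star),
                 LinearLearner(), seed=7)
```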
[ "Stephane Ross, Narek Melik-Barkhudarov, Kumar Shaurya Shankar, Andreas\n Wendel, Debadeepta Dey, J. Andrew Bagnell, Martial Hebert", "['Stephane Ross' 'Narek Melik-Barkhudarov' 'Kumar Shaurya Shankar'\n 'Andreas Wendel' 'Debadeepta Dey' 'J. Andrew Bagnell' 'Martial Hebert']" ]
cs.LG cs.DS stat.ML
null
1211.1716
null
null
http://arxiv.org/pdf/1211.1716v2
2013-06-09T04:43:53Z
2012-11-07T22:50:51Z
Blind Signal Separation in the Presence of Gaussian Noise
A prototypical blind signal separation problem is the so-called cocktail party problem, with n people talking simultaneously and n different microphones within a room. The goal is to recover each speech signal from the microphone inputs. Mathematically this can be modeled by assuming that we are given samples from an n-dimensional random variable X=AS, where S is a vector whose coordinates are independent random variables corresponding to each speaker. The objective is to recover the matrix A^{-1} given random samples from X. A range of techniques collectively known as Independent Component Analysis (ICA) have been proposed to address this problem in the signal processing and machine learning literature. Many of these techniques are based on using the kurtosis or other cumulants to recover the components. In this paper we propose a new algorithm for solving the blind signal separation problem in the presence of additive Gaussian noise, when we are given samples from X=AS+\eta, where \eta is drawn from an unknown, not necessarily spherical n-dimensional Gaussian distribution. Our approach is based on a method for decorrelating a sample with additive Gaussian noise under the assumption that the underlying distribution is a linear transformation of a distribution with independent components. Our decorrelation routine is based on the properties of cumulant tensors and can be combined with any standard cumulant-based method for ICA to get an algorithm that is provably robust in the presence of Gaussian noise. We derive polynomial bounds for the sample complexity and error propagation of our method.
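For context, a standard kurtosis-based ICA baseline of the kind the paper's decorrelation routine plugs into. Note the classical whitening step uses the empirical covariance, which is biased by additive non-spherical Gaussian noise; that bias is precisely what the cumulant-based decorrelation avoids.

```python
import numpy as np

def whiten(X):
    """Classical whitening via the empirical covariance. With additive
    Gaussian noise eta this step is biased, which is the failure mode
    the paper's cumulant-based decorrelation is designed to avoid."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(Xc)
    vals, vecs = np.linalg.eigh(cov)
    return Xc @ vecs @ np.diag(vals ** -0.5) @ vecs.T

def fastica_deflation(X, iters=200, seed=0):
    """Kurtosis-based ICA by deflation: find directions w maximising
    the fourth cumulant one at a time, orthogonalising against the
    directions already found."""
    rng = np.random.default_rng(seed)
    Z = whiten(X)
    n = Z.shape[1]
    W = np.zeros((n, n))
    for k in range(n):
        w = rng.normal(size=n); w /= np.linalg.norm(w)
        for _ in range(iters):
            # Fixed-point update from the kurtosis: E[(w.z)^3 z] - 3w.
            w_new = (Z @ w) ** 3 @ Z / len(Z) - 3 * w
            w_new -= W[:k].T @ (W[:k] @ w_new)   # deflation step
            w = w_new / np.linalg.norm(w_new)
        W[k] = w
    return Z @ W.T                                # recovered sources

# Toy cocktail party: two uniform (sub-Gaussian) speakers, mixed.
rng = np.random.default_rng(8)
S = rng.uniform(-1, 1, size=(5000, 2))
X = S @ rng.normal(size=(2, 2)).T
S_hat = fastica_deflation(X)
```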
[ "Mikhail Belkin, Luis Rademacher, James Voss", "['Mikhail Belkin' 'Luis Rademacher' 'James Voss']" ]