title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Online Learning, Stability, and Stochastic Gradient Descent | cs.LG | In batch learning, stability together with existence and uniqueness of the
solution corresponds to well-posedness of Empirical Risk Minimization (ERM)
methods; recently, it was proved that CV_loo stability is necessary and
sufficient for generalization and consistency of ERM. In this note, we
introduce CV_on stability, which plays a similar role in online learning. We
show that stochastic gradient descent (SGD) with the usual hypotheses is CV_on
stable, and we then discuss the implications of CV_on stability for convergence
of SGD.
| Tomaso Poggio, Stephen Voinea, Lorenzo Rosasco | null | 1105.4701 | null | null |
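To fix notation for the update analyzed in the abstract above, here is a minimal, illustrative sketch of plain stochastic gradient descent. It is not the note's contribution; `grad_loss`, `z_stream`, and the step-size schedule are placeholder assumptions.

```python
import numpy as np

def sgd(grad_loss, z_stream, w0, step=lambda t: 1.0 / np.sqrt(t + 1.0)):
    """Plain SGD: one example per update.

    grad_loss(w, z) is assumed to return the gradient of the loss on
    example z at the point w; z_stream yields examples in online fashion.
    """
    w = np.asarray(w0, dtype=float)
    for t, z in enumerate(z_stream):
        w = w - step(t) * grad_loss(w, z)   # single stochastic gradient step
    return w
```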
Robust approachability and regret minimization in games with partial
monitoring | math.ST cs.LG stat.TH | Approachability has become a standard tool in analyzing learning algorithms in
the adversarial online learning setup. We develop a variant of approachability
for games where there is ambiguity in the obtained reward: it belongs to a
set rather than being a single vector. Using this variant we tackle the
problem of approachability in games with partial monitoring and develop simple
and efficient algorithms (i.e., with constant per-step complexity) for this
setup. We finally consider external regret and internal regret in repeated
games with partial monitoring and derive regret-minimizing strategies based on
approachability theory.
| Shie Mannor (EE-Technion), Vianney Perchet (CMLA), Gilles Stoltz (DMA,
GREGH, INRIA Paris - Rocquencourt) | null | 1105.4995 | null | null |
Large-Scale Music Annotation and Retrieval: Learning to Rank in Joint
Semantic Spaces | cs.LG | Music prediction tasks range from predicting tags given a song or clip of
audio, to predicting the name of the artist, to predicting related songs given a
song, clip, artist name or tag. That is, we are interested in every semantic
relationship between the different musical concepts in our database. In
realistically sized databases, the number of songs is measured in the hundreds
of thousands or more, and the number of artists in the tens of thousands or
more, providing a considerable challenge to standard machine learning
techniques. In this work, we propose a method that scales to such datasets
which attempts to capture the semantic similarities between the database items
by modeling audio, artist names, and tags in a single low-dimensional semantic
space. This choice of space is learnt by optimizing the set of prediction tasks
of interest jointly using multi-task learning. Our method both outperforms
baseline methods and, in comparison to them, is faster and consumes less
memory. We then demonstrate how our method learns an interpretable model, where
the semantic space captures well the similarities of interest.
| Jason Weston, Samy Bengio, Philippe Hamel | null | 1105.5196 | null | null |
Parallel Coordinate Descent for L1-Regularized Loss Minimization | cs.LG cs.IT math.IT | We propose Shotgun, a parallel coordinate descent algorithm for minimizing
L1-regularized losses. Though coordinate descent seems inherently sequential,
we prove convergence bounds for Shotgun which predict linear speedups, up to a
problem-dependent limit. We present a comprehensive empirical study of Shotgun
for Lasso and sparse logistic regression. Our theoretical predictions on the
potential for parallelism closely match behavior on real data. Shotgun
outperforms other published solvers on a range of large problems, proving to be
one of the most scalable algorithms for L1.
| Joseph K. Bradley, Aapo Kyrola, Danny Bickson and Carlos Guestrin | null | 1105.5379 | null | null |
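To make the per-coordinate update that Shotgun parallelizes concrete, here is a hedged sketch of ordinary cyclic coordinate descent for the Lasso. Shotgun applies the same soft-thresholding update to several randomly chosen coordinates per round; this sequential version, the function names, and the iteration count are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator, the proximal map of the L1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_coordinate_descent(X, y, lam, n_iters=100):
    """Cyclic coordinate descent for min_w 0.5*||y - Xw||^2 + lam*||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)      # per-coordinate curvature ||x_j||^2
    residual = y - X @ w               # kept up to date after every coordinate step
    for _ in range(n_iters):
        for j in range(d):
            if col_sq[j] == 0.0:
                continue
            rho = X[:, j] @ residual + col_sq[j] * w[j]   # x_j' (y - X_{-j} w_{-j})
            w_new = soft_threshold(rho, lam) / col_sq[j]
            residual += X[:, j] * (w[j] - w_new)
            w[j] = w_new
    return w
```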
Learning to Order Things | cs.LG cs.AI | There are many applications in which it is desirable to order rather than
classify instances. Here we consider the problem of learning how to order
instances given feedback in the form of preference judgments, i.e., statements
to the effect that one instance should be ranked ahead of another. We outline a
two-stage approach in which one first learns by conventional means a binary
preference function indicating whether it is advisable to rank one instance
before another. Here we consider an on-line algorithm for learning preference
functions that is based on Freund and Schapire's 'Hedge' algorithm. In the
second stage, new instances are ordered so as to maximize agreement with the
learned preference function. We show that the problem of finding the ordering
that agrees best with a learned preference function is NP-complete.
Nevertheless, we describe simple greedy algorithms that are guaranteed to find
a good approximation. Finally, we show how metasearch can be formulated as an
ordering problem, and present experimental results on learning a combination of
'search experts', each of which is a domain-specific query expansion strategy
for a web search engine.
| W. W. Cohen, R. E. Schapire, Y. Singer | 10.1613/jair.587 | 1105.5464 | null | null |
Kernel Belief Propagation | cs.LG | We propose a nonparametric generalization of belief propagation, Kernel
Belief Propagation (KBP), for pairwise Markov random fields. Messages are
represented as functions in a reproducing kernel Hilbert space (RKHS), and
message updates are simple linear operations in the RKHS. KBP makes none of the
assumptions commonly required in classical BP algorithms: the variables need
not arise from a finite domain or a Gaussian distribution, nor must their
relations take any particular parametric form. Rather, the relations between
variables are represented implicitly, and are learned nonparametrically from
training data. KBP has the advantage that it may be used on any domain where
kernels are defined (R^d, strings, groups), even where explicit parametric
models are not known, or closed form expressions for the BP updates do not
exist. The computational cost of message updates in KBP is polynomial in the
training data size. We also propose a constant time approximate message update
procedure by representing messages using a small number of basis functions. In
experiments, we apply KBP to image denoising, depth prediction from still
images, and protein configuration prediction: KBP is faster than competing
classical and nonparametric approaches (by orders of magnitude, in some cases),
while providing significantly more accurate results.
| Le Song, Arthur Gretton, Danny Bickson, Yucheng Low, Carlos Guestrin | null | 1105.5592 | null | null |
A Philosophical Treatise of Universal Induction | cs.LG cs.IT math.IT | Understanding inductive reasoning is a problem that has engaged mankind for
thousands of years. This problem is relevant to a wide range of fields and is
integral to the philosophy of science. It has been tackled by many great minds
ranging from philosophers to scientists to mathematicians, and more recently
computer scientists. In this article we argue the case for Solomonoff
Induction, a formal inductive framework which combines algorithmic information
theory with the Bayesian framework. Although it achieves excellent theoretical
results and is based on solid philosophical foundations, the technical
knowledge required to understand this framework has caused it to
remain largely unknown and unappreciated in the wider scientific community. The
main contribution of this article is to convey Solomonoff induction and its
related concepts in a generally accessible form with the aim of bridging this
current technical gap. In the process we examine the major historical
contributions that have led to the formulation of Solomonoff Induction as well
as criticisms of Solomonoff and induction in general. In particular we examine
how Solomonoff induction addresses many issues that have plagued other
inductive systems, such as the black ravens paradox and the confirmation
problem, and compare this approach with other recent approaches.
| Samuel Rathmanner and Marcus Hutter | 10.3390/e13061076 | 1105.5721 | null | null |
Efficient sampling of high-dimensional Gaussian fields: the
non-stationary / non-sparse case | stat.CO cs.LG stat.AP | This paper is devoted to the problem of sampling Gaussian fields in high
dimension. Solutions exist for two specific structures of inverse covariance:
sparse and circulant. The proposed approach is valid in a more general case,
notably the one that emerges in inverse problems. It relies on a
perturbation-optimization principle: adequate stochastic perturbation of a
criterion and optimization of the perturbed criterion. It is shown that the
criterion minimizer is a sample of the target density. The motivation in
inverse problems is related to general (non-convolutive) linear observation
models and their resolution in a Bayesian framework implemented through
sampling algorithms when existing samplers are not feasible. It finds a direct
application in myopic and/or unsupervised inversion as well as in some
non-Gaussian inversion. An illustration focused on hyperparameter estimation
for super-resolution problems assesses the effectiveness of the proposed
approach.
| F. Orieux, and O. F\'eron, and J.-F. Giovannelli | null | 1105.5887 | null | null |
The Perceptron with Dynamic Margin | cs.LG | The classical perceptron rule provides a varying upper bound on the maximum
margin, namely the length of the current weight vector divided by the total
number of updates up to that time. Requiring that the perceptron updates its
internal state whenever the normalized margin of a pattern is found not to
exceed a certain fraction of this dynamic upper bound, we construct a new
approximate maximum margin classifier called the perceptron with dynamic margin
(PDM). We demonstrate that PDM converges in a finite number of steps and derive
an upper bound on them. We also compare experimentally PDM with other
perceptron-like algorithms and support vector machines on hard margin tasks
involving linear kernels which are equivalent to 2-norm soft margin.
| Constantinos Panagiotakopoulos and Petroula Tsampouka | null | 1105.6041 | null | null |
Evolutionary Algorithms for Reinforcement Learning | cs.LG cs.AI cs.NE | There are two distinct approaches to solving reinforcement learning problems,
namely, searching in value function space and searching in policy space.
Temporal difference methods and evolutionary algorithms are well-known examples
of these approaches. Kaelbling, Littman and Moore recently provided an
informative survey of temporal difference methods. This article focuses on the
application of evolutionary algorithms to the reinforcement learning problem,
emphasizing alternative policy representations, credit assignment methods, and
problem-specific genetic operators. Strengths and weaknesses of the
evolutionary approach to reinforcement learning are presented, along with a
survey of representative applications.
| J. J. Grefenstette, D. E. Moriarty, A. C. Schultz | 10.1613/jair.613 | 1106.0221 | null | null |
Learning Hierarchical Sparse Representations using Iterative Dictionary
Learning and Dimension Reduction | cs.LG cs.AI cs.CV | This paper introduces an elemental building block which combines Dictionary
Learning and Dimension Reduction (DRDL). We show how this foundational element
can be used to iteratively construct a Hierarchical Sparse Representation (HSR)
of a sensory stream. We compare our approach to existing models showing the
generality of our simple prescription. We then perform preliminary experiments
using this framework, illustrating with the example of an object recognition
task using standard datasets. This work introduces the very first steps towards
an integrated framework for designing and analyzing various computational tasks
from learning to attention to action. The ultimate goal is building a
mathematically rigorous, integrated theory of intelligence.
| Mohamad Tarifi, Meera Sitharam, Jeffery Ho | null | 1106.0357 | null | null |
Learning unbelievable marginal probabilities | cs.AI cs.LG | Loopy belief propagation performs approximate inference on graphical models
with loops. One might hope to compensate for the approximation by adjusting
model parameters. Learning algorithms for this purpose have been explored
previously, and the claim has been made that every set of locally consistent
marginals can arise from belief propagation run on a graphical model. On the
contrary, here we show that many probability distributions have marginals that
cannot be reached by belief propagation using any set of model parameters or
any learning algorithm. We call such marginals `unbelievable.' This problem
occurs whenever the Hessian of the Bethe free energy is not positive-definite
at the target marginals. All learning algorithms for belief propagation
necessarily fail in these cases, producing beliefs or sets of beliefs that may
even be worse than the pre-learning approximation. We then show that averaging
inaccurate beliefs, each obtained from belief propagation using model
parameters perturbed about some learned mean values, can achieve the
unbelievable marginals.
| Xaq Pitkow, Yashar Ahmadian, Ken D. Miller | null | 1106.0483 | null | null |
Submodular Functions Are Noise Stable | cs.LG cs.CC cs.GT | We show that all non-negative submodular functions have high {\em
noise-stability}. As a consequence, we obtain a polynomial-time learning
algorithm for this class with respect to any product distribution on
$\{-1,1\}^n$ (for any constant accuracy parameter $\epsilon$). Our algorithm
also succeeds in the agnostic setting. Previous work on learning submodular
functions required either query access or strong assumptions about the types of
submodular functions to be learned (and did not hold in the agnostic setting).
| Mahdi Cheraghchi, Adam Klivans, Pravesh Kothari, Homin K. Lee | null | 1106.0518 | null | null |
Experiments with Infinite-Horizon, Policy-Gradient Estimation | cs.AI cs.LG | In this paper, we present algorithms that perform gradient ascent of the
average reward in a partially observable Markov decision process (POMDP). These
algorithms are based on GPOMDP, an algorithm introduced in a companion paper
(Baxter and Bartlett, this volume), which computes biased estimates of the
performance gradient in POMDPs. The algorithm's chief advantages are that it
uses only one free parameter beta, which has a natural interpretation in terms
of bias-variance trade-off, it requires no knowledge of the underlying state,
and it can be applied to infinite state, control and observation spaces. We
show how the gradient estimates produced by GPOMDP can be used to perform
gradient ascent, both with a traditional stochastic-gradient algorithm, and
with an algorithm based on conjugate-gradients that utilizes gradient
information to bracket maxima in line searches. Experimental results are
presented illustrating both the theoretical results of (Baxter and Bartlett,
this volume) on a toy problem, and practical aspects of the algorithms on a
number of more realistic problems.
| J. Baxter, P. L. Bartlett, L. Weaver | 10.1613/jair.807 | 1106.0666 | null | null |
Optimizing Dialogue Management with Reinforcement Learning: Experiments
with the NJFun System | cs.LG cs.AI | Designing the dialogue policy of a spoken dialogue system involves many
nontrivial choices. This paper presents a reinforcement learning approach for
automatically optimizing a dialogue policy, which addresses the technical
challenges in applying reinforcement learning to a working dialogue system with
human users. We report on the design, construction and empirical evaluation of
NJFun, an experimental spoken dialogue system that provides users with access
to information about fun things to do in New Jersey. Our results show that by
optimizing its performance via reinforcement learning, NJFun measurably
improves system performance.
| M. Kearns, D. Litman, S. Singh, M. Walker | 10.1613/jair.859 | 1106.0676 | null | null |
Accelerating Reinforcement Learning through Implicit Imitation | cs.LG cs.AI | Imitation can be viewed as a means of enhancing learning in multiagent
environments. It augments an agent's ability to learn useful behaviors by
making intelligent use of the knowledge implicit in behaviors demonstrated by
cooperative teachers or other more experienced agents. We propose and study a
formal model of implicit imitation that can accelerate reinforcement learning
dramatically in certain cases. Roughly, by observing a mentor, a
reinforcement-learning agent can extract information about its own capabilities
in, and the relative value of, unvisited parts of the state space. We study two
specific instantiations of this model, one in which the learning agent and the
mentor have identical abilities, and one designed to deal with agents and
mentors with different action sets. We illustrate the benefits of implicit
imitation by integrating it with prioritized sweeping, and demonstrating
improved performance and convergence through observation of single and multiple
mentors. Though we make some stringent assumptions regarding observability and
possible interactions, we briefly comment on extensions of the model that relax
these restrictions.
| C. Boutilier, B. Price | 10.1613/jair.898 | 1106.0681 | null | null |
Efficient Reinforcement Learning Using Recursive Least-Squares Methods | cs.LG cs.AI | The recursive least-squares (RLS) algorithm is one of the most well-known
algorithms used in adaptive filtering, system identification and adaptive
control. Its popularity is mainly due to its fast convergence speed, which is
considered to be optimal in practice. In this paper, RLS methods are used to
solve reinforcement learning problems, where two new reinforcement learning
algorithms using linear value function approximators are proposed and analyzed.
The two algorithms are called RLS-TD(lambda) and Fast-AHC (Fast Adaptive
Heuristic Critic), respectively. RLS-TD(lambda) can be viewed as the extension
of RLS-TD(0) from lambda=0 to general lambda within interval [0,1], so it is a
multi-step temporal-difference (TD) learning algorithm using RLS methods. The
convergence with probability one and the limit of convergence of RLS-TD(lambda)
are proved for ergodic Markov chains. Compared to the existing LS-TD(lambda)
algorithm, RLS-TD(lambda) has advantages in computation and is more suitable
for online learning. The effectiveness of RLS-TD(lambda) is analyzed and
verified by learning prediction experiments of Markov chains with a wide range
of parameter settings. The Fast-AHC algorithm is derived by applying the
proposed RLS-TD(lambda) algorithm in the critic network of the adaptive
heuristic critic method. Unlike the conventional AHC algorithm, Fast-AHC makes use
of RLS methods to improve the learning-prediction efficiency in the critic.
Learning control experiments of the cart-pole balancing and the acrobot
swing-up problems are conducted to compare the data efficiency of Fast-AHC with
conventional AHC. From the experimental results, it is shown that the data
efficiency of learning control can also be improved by using RLS methods in the
learning-prediction process of the critic. The performance of Fast-AHC is also
compared with that of the AHC method using LS-TD(lambda). Furthermore, it is
demonstrated in the experiments that different initial values of the variance
matrix in RLS-TD(lambda) are required to get better performance not only in
learning prediction but also in learning control. The experimental results are
analyzed based on the existing theoretical work on the transient phase of
forgetting factor RLS methods.
| H. He, D. Hu, X. Xu | 10.1613/jair.946 | 1106.0707 | null | null |
Rademacher complexity of stationary sequences | stat.ML cs.LG | We show how to control the generalization error of time series models wherein
past values of the outcome are used to predict future values. The results are
based on a generalization of standard i.i.d. concentration inequalities to
dependent data without the mixing assumptions common in the time series
setting. Our proof and the result are simpler than previous analyses with
dependent data or stochastic adversaries which use sequential Rademacher
complexities rather than the expected Rademacher complexity for i.i.d.
processes. We also derive empirical Rademacher results without mixing
assumptions resulting in fully calculable upper bounds.
| Daniel J. McDonald and Cosma Rohilla Shalizi | null | 1106.0730 | null | null |
Optimal Reinforcement Learning for Gaussian Systems | stat.ML cs.LG | The exploration-exploitation trade-off is among the central challenges of
reinforcement learning. The optimal Bayesian solution is intractable in
general. This paper studies to what extent analytic statements about optimal
learning are possible if all beliefs are Gaussian processes. A first order
approximation of learning of both loss and dynamics, for nonlinear,
time-varying systems in continuous time and space, subject to a relatively weak
restriction on the dynamics, is described by an infinite-dimensional partial
differential equation. An approximate finite-dimensional projection gives an
impression of how this result may be helpful.
| Philipp Hennig | null | 1106.0800 | null | null |
Hashing Algorithms for Large-Scale Learning | stat.ML cs.LG | In this paper, we first demonstrate that b-bit minwise hashing, whose
estimators are positive definite kernels, can be naturally integrated with
learning algorithms such as SVM and logistic regression. We adopt a simple
scheme to transform the nonlinear (resemblance) kernel into linear (inner
product) kernel; and hence large-scale problems can be solved extremely
efficiently. Our method provides a simple effective solution to large-scale
learning in massive and extremely high-dimensional datasets, especially when
data do not fit in memory.
We then compare b-bit minwise hashing with the Vowpal Wabbit (VW) algorithm
(which is related to the Count-Min (CM) sketch). Interestingly, VW has the same
variances as random projections. Our theoretical and empirical comparisons
illustrate that usually $b$-bit minwise hashing is significantly more accurate
(at the same storage) than VW (and random projections) in binary data.
Furthermore, $b$-bit minwise hashing can be combined with VW to achieve further
improvements in terms of training speed, especially when $b$ is large.
| Ping Li, Anshumali Shrivastava, Joshua Moore, Arnd Christian Konig | null | 1106.0967 | null | null |
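A hedged sketch of the feature construction described in the abstract above, with an affine hash standing in for true random permutations; `num_perm`, `b`, the hash constants, and the function name are illustrative choices, not the paper's.

```python
import numpy as np

def b_bit_minwise_features(sets, num_perm=64, b=2, prime=2**31 - 1, seed=0):
    """Map binary (set) data to one-hot blocks via b-bit minwise hashing.

    Each set is min-hashed num_perm times; only the lowest b bits of each
    minimum are kept and one-hot encoded, so the result can be fed directly
    to a linear SVM or logistic regression, as the abstract proposes.
    """
    rng = np.random.default_rng(seed)
    a = rng.integers(1, prime, size=num_perm, dtype=np.int64)
    c = rng.integers(0, prime, size=num_perm, dtype=np.int64)
    block = 2 ** b
    out = np.zeros((len(sets), num_perm * block))
    for i, s in enumerate(sets):
        elems = np.fromiter(s, dtype=np.int64)
        for k in range(num_perm):
            h = (a[k] * elems + c[k]) % prime          # affine hash as a permutation proxy
            lowest_bits = int(h.min()) & (block - 1)   # keep the lowest b bits of the minimum
            out[i, k * block + lowest_bits] = 1.0
    return out

# e.g. X = b_bit_minwise_features([{1, 5, 9}, {2, 5, 7}]); train a linear classifier on X
```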
Nearest Prime Simplicial Complex for Object Recognition | cs.LG cs.AI cs.CG cs.CV | The structure representation of data distribution plays an important role in
understanding the underlying mechanism of generating data. In this paper, we
propose nearest prime simplicial complex approaches (NSC) by utilizing
persistent homology to capture such structures. Assuming that each class is
represented with a prime simplicial complex, we classify unlabeled samples
based on the nearest projection distances from the samples to the simplicial
complexes. We also extend the extrapolation ability of these complexes with a
projection constraint term. Experiments in simulated and practical datasets
indicate that compared with several published algorithms, the proposed NSC
approaches achieve promising performance without losing the structure
representation.
| Junping Zhang and Ziyu Xie and Stan Z. Li | null | 1106.0987 | null | null |
Complexity Analysis of Vario-eta through Structure | cs.LG cs.DM | Graph-based representations of images have recently acquired an important
role for classification purposes within the context of machine learning
approaches. The underlying idea is to consider that relevant information of an
image is implicitly encoded into the relationships between more basic entities
that compose by themselves the whole image. The classification problem is then
reformulated in terms of an optimization problem usually solved by a
gradient-based search procedure. Vario-eta through structure is an approximate
second order stochastic optimization technique that achieves a good trade-off
between speed of convergence and the computational effort required. However,
the robustness of this technique for large scale problems has not yet been
assessed. In this paper we first provide a theoretical justification of the
assumptions made by this optimization procedure. Second, a complexity
analysis of the algorithm is performed to prove its suitability for large scale
learning problems.
| Alejandro Chinea, Elka Korutcheva | null | 1106.1113 | null | null |
Bayesian and L1 Approaches to Sparse Unsupervised Learning | cs.LG cs.AI stat.ML | The use of L1 regularisation for sparse learning has generated immense
research interest, with successful application in such diverse areas as signal
acquisition, image coding, genomics and collaborative filtering. While existing
work highlights the many advantages of L1 methods, in this paper we find that
L1 regularisation often dramatically underperforms in terms of predictive
performance when compared with other methods for inferring sparsity. We focus
on unsupervised latent variable models, and develop L1 minimising factor
models, Bayesian variants of "L1", and Bayesian models with a stronger L0-like
sparsity induced through spike-and-slab distributions. These spike-and-slab
Bayesian factor models encourage sparsity while accounting for uncertainty in a
principled manner and avoiding unnecessary shrinkage of non-zero values. We
demonstrate on a number of data sets that in practice spike-and-slab Bayesian
methods outperform L1 minimisation, even on a computational budget. We thus
highlight the need to re-assess the wide use of L1 methods in sparsity-reliant
applications, particularly when we care about generalising to previously unseen
data, and provide an alternative that, over many varying conditions, provides
improved generalisation performance.
| Shakir Mohamed, Katherine Heller and Zoubin Ghahramani | null | 1106.1157 | null | null |
Using More Data to Speed-up Training Time | cs.LG stat.ML | In many recent applications, data is plentiful. By now, we have a rather
clear understanding of how more data can be used to improve the accuracy of
learning algorithms. Recently, there has been a growing interest in
understanding how more data can be leveraged to reduce the required training
runtime. In this paper, we study the runtime of learning as a function of the
number of available training examples, and underscore the main high-level
techniques. We provide some initial positive results showing that the runtime
can decrease exponentially while only requiring a polynomial growth of the
number of examples, and spell out several interesting open problems.
| Shai Shalev-Shwartz and Ohad Shamir and Eran Tromer | null | 1106.1216 | null | null |
A Unified Framework for Approximating and Clustering Data | cs.LG | Given a set $F$ of $n$ positive functions over a ground set $X$, we consider
the problem of computing $x^*$ that minimizes the expression $\sum_{f\in
F}f(x)$, over $x\in X$. A typical application is \emph{shape fitting}, where we
wish to approximate a set $P$ of $n$ elements (say, points) by a shape $x$ from
a (possibly infinite) family $X$ of shapes. Here, each point $p\in P$
corresponds to a function $f$ such that $f(x)$ is the distance from $p$ to $x$,
and we seek a shape $x$ that minimizes the sum of distances from each point in
$P$. In the $k$-clustering variant, each $x\in X$ is a tuple of $k$ shapes, and
$f(x)$ is the distance from $p$ to its closest shape in $x$.
Our main result is a unified framework for constructing {\em coresets} and
{\em approximate clustering} for such general sets of functions. To achieve our
results, we forge a link between the classic and well defined notion of
$\varepsilon$-approximations from the theory of PAC Learning and VC dimension,
to the relatively new (and not so consistent) paradigm of coresets, which are
some kind of "compressed representation" of the input set $F$. Using
traditional techniques, a coreset usually implies an LTAS (linear time
approximation scheme) for the corresponding optimization problem, which can be
computed in parallel, via one pass over the data, and using only
polylogarithmic space (i.e., in the streaming model).
We show how to generalize the results of our framework for squared distances
(as in $k$-means), distances to the $q$th power, and deterministic
constructions.
| Dan Feldman, Michael Langberg | null | 1106.1379 | null | null |
Large-Scale Convex Minimization with a Low-Rank Constraint | cs.LG stat.ML | We address the problem of minimizing a convex function over the space of
large matrices with low rank. While this optimization problem is hard in
general, we propose an efficient greedy algorithm and derive its formal
approximation guarantees. Each iteration of the algorithm involves
(approximately) finding the left and right singular vectors corresponding to
the largest singular value of a certain matrix, which can be calculated in
linear time. This leads to an algorithm which can scale to large matrices
arising in several applications such as matrix completion for collaborative
filtering and robust low rank matrix approximation.
| Shai Shalev-Shwartz and Alon Gonen and Ohad Shamir | null | 1106.1622 | null | null |
Sparse Principal Component of a Rank-deficient Matrix | cs.IT cs.LG cs.SY math.IT math.OC | We consider the problem of identifying the sparse principal component of a
rank-deficient matrix. We introduce auxiliary spherical variables and prove
that there exists a set of candidate index-sets (that is, sets of indices to
the nonzero elements of the vector argument) whose size is polynomially
bounded, in terms of rank, and contains the optimal index-set, i.e. the
index-set of the nonzero elements of the optimal solution. Finally, we develop
an algorithm that computes the optimal sparse principal component in polynomial
time for any sparsity degree.
| Megasthenis Asteris, Dimitris S. Papailiopoulos, and George N.
Karystinos | null | 1106.1651 | null | null |
Max-Margin Stacking and Sparse Regularization for Linear Classifier
Combination and Selection | cs.LG | The main principle of stacked generalization (or Stacking) is using a
second-level generalizer to combine the outputs of base classifiers in an
ensemble. In this paper, we investigate different combination types under the
stacking framework; namely weighted sum (WS), class-dependent weighted sum
(CWS) and linear stacked generalization (LSG). For learning the weights, we
propose using regularized empirical risk minimization with the hinge loss. In
addition, we propose using group sparsity for regularization to facilitate
classifier selection. We performed experiments using two different ensemble
setups with differing diversities on 8 real-world datasets. Results show the
power of regularized learning with the hinge loss function. Using sparse
regularization, we are able to reduce the number of selected classifiers of the
diverse ensemble without sacrificing accuracy. With the non-diverse ensembles,
we even gain accuracy on average by using sparse regularization.
| Mehmet Umut Sen and Hakan Erdogan | null | 1106.1684 | null | null |
Reinforcement learning based sensing policy optimization for energy
efficient cognitive radio networks | cs.LG | This paper introduces a machine learning based collaborative multi-band
spectrum sensing policy for cognitive radios. The proposed sensing policy
guides secondary users to focus the search of unused radio spectrum to those
frequencies that persistently provide them high data rate. The proposed policy
is based on machine learning, which makes it adaptive with the temporally and
spatially varying radio spectrum. Furthermore, there is no need for dynamic
modeling of the primary activity since it is implicitly learned over time.
Energy efficiency is achieved by minimizing the number of assigned sensors per
subband under a constraint on the miss detection probability. It is important
to control missed detections because they cause collisions with primary
transmissions and lead to retransmissions at both the primary and secondary
users. Simulations show that the proposed machine learning based sensing policy
improves the overall throughput of the secondary network and improves the
energy efficiency while controlling the miss detection probability.
| Jan Oksanen, Jarmo Lund\'en, Visa Koivunen | null | 1106.1770 | null | null |
Learning the Dependence Graph of Time Series with Latent Factors | cs.LG | This paper considers the problem of learning, from samples, the dependency
structure of a system of linear stochastic differential equations, when some of
the variables are latent. In particular, we observe the time evolution of some
variables, and never observe other variables; from this, we would like to find
the dependency structure between the observed variables - separating out the
spurious interactions caused by the (marginalizing out of the) latent
variables' time series. We develop a new method, based on convex optimization,
to do so in the case when the number of latent variables is smaller than the
number of observed ones. For the case when the dependency structure between the
observed variables is sparse, we theoretically establish a high-dimensional
scaling result for structure recovery. We verify our theoretical result with
both synthetic and real data (from the stock market).
| Ali Jalali and Sujay Sanghavi | null | 1106.1887 | null | null |
Ranking via Sinkhorn Propagation | stat.ML cs.IR cs.LG | It is of increasing importance to develop learning methods for ranking. In
contrast to many learning objectives, however, the ranking problem presents
difficulties due to the fact that the space of permutations is not smooth. In
this paper, we examine the class of rank-linear objective functions, which
includes popular metrics such as precision and discounted cumulative gain. In
particular, we observe that expectations of these gains are completely
characterized by the marginals of the corresponding distribution over
permutation matrices. Thus, the expectations of rank-linear objectives can
always be described through locations in the Birkhoff polytope, i.e.,
doubly-stochastic matrices (DSMs). We propose a technique for learning
DSM-based ranking functions using an iterative projection operator known as
Sinkhorn normalization. Gradients of this operator can be computed via
backpropagation, resulting in an algorithm we call Sinkhorn propagation, or
SinkProp. This approach can be combined with a wide range of gradient-based
approaches to rank learning. We demonstrate the utility of SinkProp on several
information retrieval data sets.
| Ryan Prescott Adams, Richard S. Zemel | null | 1106.1925 | null | null |
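To make the projection step in the abstract above concrete, here is a minimal sketch of Sinkhorn normalization (forward pass only). SinkProp additionally backpropagates gradients through these iterations, which is omitted here; the iteration count is an arbitrary assumption.

```python
import numpy as np

def sinkhorn(scores, n_iters=50):
    """Alternately normalize rows and columns of exp(scores) so the result
    approaches a doubly-stochastic matrix (a point in the Birkhoff polytope)."""
    M = np.exp(scores)                              # ensure strictly positive entries
    for _ in range(n_iters):
        M = M / M.sum(axis=1, keepdims=True)        # make rows sum to 1
        M = M / M.sum(axis=0, keepdims=True)        # make columns sum to 1
    return M

# e.g. P = sinkhorn(np.random.randn(5, 5)); P.sum(0) and P.sum(1) are both close to 1
```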
Lyapunov stochastic stability and control of robust dynamic coalitional
games with transferable utilities | cs.GT cs.LG cs.SY math.OC | This paper considers a dynamic game with transferable utilities (TU), where
the characteristic function is a continuous-time bounded mean ergodic process.
A central planner interacts continuously over time with the players by choosing
the instantaneous allocations subject to budget constraints. Before the game
starts, the central planner knows the nature of the process (bounded mean
ergodic), the bounded set from which the coalitions' values are sampled, and
the long run average coalitions' values. On the other hand, he has no knowledge
of the underlying probability function generating the coalitions' values. Our
goal is to find allocation rules that use a measure of the extra reward that a
coalition has received up to the current time by re-distributing the budget
among the players. The objective is two-fold: i) guaranteeing convergence of
the average allocations to the core (or a specific point in the core) of the
average game, ii) driving the coalitions' excesses to an a priori given cone.
The resulting allocation rules are robust as they guarantee the aforementioned
convergence properties despite the uncertain and time-varying nature of the
coalitions' values. We highlight three main contributions. First, we design an
allocation rule based on full observation of the extra reward so that the
average allocation approaches a specific point in the core of the average game,
while the coalitions' excesses converge to an a priori given direction. Second,
we design a new allocation rule based on partial observation of the extra
reward so that the average allocation converges to the core of the average
game, while the coalitions' excesses converge to an a priori given cone. And
third, we establish connections to approachability theory and attainability
theory.
| Dario Bauso, Puduru Viswanadha Reddy and Tamer Basar | null | 1106.1933 | null | null |
Clustering with Multi-Layer Graphs: A Spectral Perspective | cs.LG cs.CV cs.SI stat.ML | Observational data often has a multimodal nature, which means that
it can be naturally represented by a multi-layer graph whose layers share the
same set of vertices (users) with different edges (pairwise relationships). In
this paper, we address the problem of combining different layers of the
multi-layer graph for improved clustering of the vertices compared to using
layers independently. We propose two novel methods, which are based on joint
matrix factorization and graph regularization framework respectively, to
efficiently combine the spectrum of the multiple graph layers, namely the
eigenvectors of the graph Laplacian matrices. In each case, the resulting
combination, which we call a "joint spectrum" of multiple graphs, is used for
clustering the vertices. We evaluate our approaches by simulations with several
real world social network datasets. Results demonstrate the superior or
competitive performance of the proposed methods over state-of-the-art techniques
and common baseline methods, such as co-regularization and summation of
information from individual graphs.
| Xiaowen Dong, Pascal Frossard, Pierre Vandergheynst and Nikolai
Nefedov | 10.1109/TSP.2012.2212886 | 1106.2233 | null | null |
Random design analysis of ridge regression | math.ST cs.AI cs.LG stat.ML stat.TH | This work gives a simultaneous analysis of both the ordinary least squares
estimator and the ridge regression estimator in the random design setting under
mild assumptions on the covariate/response distributions. In particular, the
analysis provides sharp results on the ``out-of-sample'' prediction error, as
opposed to the ``in-sample'' (fixed design) error. The analysis also reveals
the effect of errors in the estimated covariance structure, as well as the
effect of modeling errors, neither of which is present in the fixed
design setting. The proofs of the main results are based on a simple
decomposition lemma combined with concentration inequalities for random vectors
and matrices.
| Daniel Hsu, Sham M. Kakade, Tong Zhang | null | 1106.2363 | null | null |
Efficient Optimal Learning for Contextual Bandits | cs.LG cs.AI stat.ML | We address the problem of learning in an online setting where the learner
repeatedly observes features, selects among a set of actions, and receives
reward for the action taken. We provide the first efficient algorithm with an
optimal regret. Our algorithm uses a cost sensitive classification learner as
an oracle and has a running time $\mathrm{polylog}(N)$, where $N$ is the number
of classification rules among which the oracle might choose. This is
exponentially faster than all previous algorithms that achieve optimal regret
in this setting. Our formulation also enables us to create an algorithm with
regret that is additive rather than multiplicative in feedback delay as in all
previous work.
| Miroslav Dudik, Daniel Hsu, Satyen Kale, Nikos Karampatziakis, John
Langford, Lev Reyzin, Tong Zhang | null | 1106.2369 | null | null |
Efficient Transductive Online Learning via Randomized Rounding | cs.LG stat.ML | Most traditional online learning algorithms are based on variants of mirror
descent or follow-the-leader. In this paper, we present an online algorithm
based on a completely different approach, tailored for transductive settings,
which combines "random playout" and randomized rounding of loss subgradients.
As an application of our approach, we present the first computationally
efficient online algorithm for collaborative filtering with trace-norm
constrained matrices. As a second application, we solve an open question
linking batch learning and transductive online learning.
| Nicol\`o Cesa-Bianchi and Ohad Shamir | null | 1106.2429 | null | null |
From Bandits to Experts: On the Value of Side-Observations | cs.LG stat.ML | We consider an adversarial online learning setting where a decision maker can
choose an action in every stage of the game. In addition to observing the
reward of the chosen action, the decision maker gets side observations on the
reward he would have obtained had he chosen some of the other actions. The
observation structure is encoded as a graph, where node i is linked to node j
if sampling i provides information on the reward of j. This setting naturally
interpolates between the well-known "experts" setting, where the decision maker
can view all rewards, and the multi-armed bandits setting, where the decision
maker can only view the reward of the chosen action. We develop practical
algorithms with provable regret guarantees, which depend on non-trivial
graph-theoretic properties of the information feedback structure. We also
provide partially-matching lower bounds.
| Shie Mannor and Ohad Shamir | null | 1106.2436 | null | null |
Learning Equilibria with Partial Information in Decentralized Wireless
Networks | cs.LG cs.AI cs.GT cs.MA | In this article, a survey of several important equilibrium concepts for
decentralized networks is presented. The term decentralized is used here to
refer to scenarios where decisions (e.g., choosing a power allocation policy)
are taken autonomously by devices interacting with each other (e.g., through
mutual interference). The iterative long-term interaction is characterized by
stable points of the wireless network called equilibria. The interest in these
equilibria stems from the relevance of network stability and the fact that they
can be achieved by letting radio devices repeatedly interact over time. To
achieve these equilibria, several learning techniques, namely, the best
response dynamics, fictitious play, smoothed fictitious play, reinforcement
learning algorithms, and regret matching, are discussed in terms of information
requirements and convergence properties. Most of the notions introduced here,
for both equilibria and learning schemes, are illustrated by a simple case
study, namely, an interference channel with two transmitter-receiver pairs.
| Luca Rose, Samir M. Perlaza, Samson Lasaulce, M\'erouane Debbah | 10.1109/MCOM.2011.5978427 | 1106.2662 | null | null |
Learning, investments and derivatives | q-fin.GN cs.LG | The recent crisis and the following flight to simplicity put most derivative
businesses around the world under considerable pressure. We argue that the
traditional modeling techniques must be extended to include product design. We
propose a quantitative framework for creating products which meet the challenge
of being optimal from the investor's point of view while remaining relatively
simple and transparent.
| Andrei N. Soklakov | null | 1106.2882 | null | null |
On epsilon-optimality of the pursuit learning algorithm | cs.LG | Estimator algorithms in learning automata are useful tools for adaptive,
real-time optimization in computer science and engineering applications. This
paper investigates theoretical convergence properties for a special case of
estimator algorithms: the pursuit learning algorithm. In this note, we identify
and fill a gap in existing proofs of probabilistic convergence for pursuit
learning. It is traditional to take the pursuit learning tuning parameter to be
fixed in practical applications, but our proof sheds light on the importance of
a vanishing sequence of tuning parameters in a theoretical convergence
analysis.
| Ryan Martin, Omkar Tilak | 10.1239/jap/1346955334 | 1106.3355 | null | null |
Decoding finger movements from ECoG signals using switching linear
models | cs.LG | One of the major challenges of ECoG-based Brain-Machine Interfaces is the
movement prediction of a human subject. Several methods exist to predict a 2-D
arm trajectory. The fourth BCI Competition gives a dataset in which the aim is
to predict individual finger movements (5-D trajectory). The difficulty lies in
the fact that there is no simple relation between ECoG signals and finger
movement. We propose in this paper to decode finger flexions using switching
models. This method simplifies the system, which is now described as an
ensemble of linear models depending on an internal state. We show that
interesting prediction accuracy can be obtained with such a model.
| R\'emi Flamary (LITIS), Alain Rakotomamonjy (LITIS) | null | 1106.3395 | null | null |
Large margin filtering for signal sequence labeling | cs.LG | Signal Sequence Labeling consists in predicting a sequence of labels given an
observed sequence of samples. A naive way is to filter the signal in order to
reduce the noise and to apply a classification algorithm on the filtered
samples. We propose in this paper to jointly learn the filter with the
classifier, leading to a large margin filtering for classification. This method
allows learning the optimal cutoff frequency and phase of the filter, which may
differ from zero. Two methods are proposed and tested on a toy dataset
and on a real life BCI dataset from BCI Competition III.
| R\'emi Flamary (LITIS), Benjamin Labb\'e (LITIS), Alain Rakotomamonjy
(LITIS) | 10.1109/ICASSP.2010.5495281 | 1106.3396 | null | null |
Handling uncertainties in SVM classification | cs.LG | This paper addresses the pattern classification problem arising when
available target data include some uncertainty information. Target data
considered here is either qualitative (a class label) or quantitative (an
estimation of the posterior probability). Our main contribution is an
SVM-inspired formulation of this problem that takes class labels into account
through a hinge loss and probability estimates through an epsilon-insensitive
cost function, together with a minimum norm (maximum margin) objective. This
formulation admits a dual form leading to a quadratic problem and allows the use
of a representer theorem and associated kernel. The solution provided can be
used for both decision and posterior probability estimation. Based on empirical
evidence our method outperforms regular SVM in terms of probability predictions
and classification performances.
| Emilie Niaf (CREATIS), R\'emi Flamary (LITIS), Carole Lartizien
(CREATIS), St\'ephane Canu (LITIS) | null | 1106.3397 | null | null |
Robust Bayesian reinforcement learning through tight lower bounds | cs.LG stat.ML | In the Bayesian approach to sequential decision making, exact calculation of
the (subjective) utility is intractable. This extends to most special cases of
interest, such as reinforcement learning problems. While utility bounds are
known to exist for this problem, so far none of them were particularly tight.
In this paper, we show how to efficiently calculate a lower bound, which
corresponds to the utility of a near-optimal memoryless policy for the decision
problem, which is generally different from both the Bayes-optimal policy and
the policy which is optimal for the expected MDP under the current belief. We
then show how these can be applied to obtain robust exploration policies in a
Bayesian reinforcement learning setting.
| Christos Dimitrakakis | null | 1106.3651 | null | null |
Prediction and Modularity in Dynamical Systems | nlin.AO cs.AI cs.IT cs.LG cs.SY math.IT q-bio.QM stat.ME | Identifying and understanding modular organizations is centrally important in
the study of complex systems. Several approaches to this problem have been
advanced, many framed in information-theoretic terms. Our treatment starts from
the complementary point of view of statistical modeling and prediction of
dynamical systems. It is known that for finite amounts of training data,
simpler models can have greater predictive power than more complex ones. We use
the trade-off between model simplicity and predictive accuracy to generate
optimal multiscale decompositions of dynamical networks into weakly-coupled,
simple modules. State-dependent and causal versions of our method are also
proposed.
| Artemy Kolchinsky, Luis M. Rocha | null | 1106.3703 | null | null |
Learning XML Twig Queries | cs.DB cs.LG | We investigate the problem of learning XML queries, path queries and tree
pattern queries, from examples given by the user. A learning algorithm takes on
the input a set of XML documents with nodes annotated by the user and returns a
query that selects the nodes in a manner consistent with the annotation. We
study two learning settings that differ in the types of annotations. In the
first setting the user may only indicate required nodes that the query must
return. In the second, more general, setting, the user may also indicate
forbidden nodes that the query must not return. The query may or may not return
any node with no annotation. We formalize what it means for a class of queries
to be \emph{learnable}. One requirement is the existence of a learning
algorithm that is sound, i.e., always returns a query consistent with the
examples given by the user. Furthermore, the learning algorithm should be
complete, i.e., able to produce every query with a sufficiently rich example.
Other requirements involve tractability of learning and its robustness to
nonessential examples. We show that the classes of simple path queries and
path-subsumption-free tree queries are learnable from positive examples. The
learnability of the full class of tree pattern queries (and the full class of
path queries) remains an open question. We show also that adding negative
examples to the picture renders the learning unfeasible.
Published in ICDT 2012, Berlin.
| S{\l}awomir Staworko, Piotr Wieczorek | null | 1106.3725 | null | null |
Algorithmic Programming Language Identification | cs.LG | Motivated by the amount of code that goes unidentified on the web, we
introduce a practical method for algorithmically identifying the programming
language of source code. Our work is based on supervised learning and
intelligent statistical features. We also explored, but abandoned, a
grammatical approach. In testing, our implementation greatly outperforms that
of an existing tool that relies on a Bayesian classifier. Code is written in
Python and available under an MIT license.
| David Klein, Kyle Murray and Simon Weber | null | 1106.4064 | null | null |
On the Inclusion Relation of Reproducing Kernel Hilbert Spaces | math.FA cs.LG | To help understand various reproducing kernels used in applied sciences, we
investigate the inclusion relation of two reproducing kernel Hilbert spaces.
Characterizations in terms of feature maps of the corresponding reproducing
kernels are established. A full table of inclusion relations among widely-used
translation invariant kernels is given. Concrete examples for Hilbert-Schmidt
kernels are presented as well. We also discuss the preservation of such a
relation under various operations of reproducing kernels. Finally, we briefly
discuss the special inclusion with a norm equivalence.
| Haizhang Zhang, Liang Zhao | null | 1106.4075 | null | null |
Learning with the Weighted Trace-norm under Arbitrary Sampling
Distributions | cs.LG stat.ML | We provide rigorous guarantees on learning with the weighted trace-norm under
arbitrary sampling distributions. We show that the standard weighted trace-norm
might fail when the sampling distribution is not a product distribution (i.e.
when row and column indexes are not selected independently), present a
corrected variant for which we establish strong learning guarantees, and
demonstrate that it works better in practice. We provide guarantees when
weighting by either the true or empirical sampling distribution, and suggest
that even if the true distribution is known (or is uniform), weighting by the
empirical distribution may be beneficial.
| Rina Foygel, Ruslan Salakhutdinov, Ohad Shamir, and Nathan Srebro | null | 1106.4251 | null | null |
Tight Measurement Bounds for Exact Recovery of Structured Sparse Signals | stat.ML cs.LG | Standard compressive sensing results state that to exactly recover an
s-sparse signal in R^p, one requires O(s log(p)) measurements. While this bound
is extremely useful in practice, often real world signals are not only sparse,
but also exhibit structure in the sparsity pattern. We focus on
group-structured patterns in this paper. Under this model, groups of signal
coefficients are active (or inactive) together. The groups are predefined, but
the particular set of groups that are active (i.e., in the signal support) must
be learned from measurements. We show that exploiting knowledge of groups can
further reduce the number of measurements required for exact signal recovery,
and derive universal bounds for the number of measurements needed. The bound is
universal in the sense that it only depends on the number of groups under
consideration, and not the particulars of the groups (e.g., compositions,
sizes, extents, overlaps, etc.). Experiments show that our result holds for a
variety of overlapping group configurations.
| Nikhil Rao, Benjamin Recht and Robert Nowak | null | 1106.4355 | null | null |
Specific-to-General Learning for Temporal Events with Application to
Learning Event Definitions from Video | cs.AI cs.LG | We develop, analyze, and evaluate a novel, supervised, specific-to-general
learner for a simple temporal logic and use the resulting algorithm to learn
visual event definitions from video sequences. First, we introduce a simple,
propositional, temporal, event-description language called AMA that is
sufficiently expressive to represent many events yet sufficiently restrictive
to support learning. We then give algorithms, along with lower and upper
complexity bounds, for the subsumption and generalization problems for AMA
formulas. We present a positive-examples-only specific-to-general learning
method based on these algorithms. We also present a polynomial-time-computable
``syntactic'' subsumption test that implies semantic subsumption without being
equivalent to it. A generalization algorithm based on syntactic subsumption can
be used in place of semantic generalization to improve the asymptotic
complexity of the resulting learning algorithm. Finally, we apply this
algorithm to the task of learning relational event definitions from video and
show that it yields definitions that are competitive with hand-coded ones.
| A. Fern, R. Givan, J. M. Siskind | 10.1613/jair.1050 | 1106.4572 | null | null |
Better Mini-Batch Algorithms via Accelerated Gradient Methods | cs.LG | Mini-batch algorithms have been proposed as a way to speed up stochastic
convex optimization problems. We study how such algorithms can be improved
using accelerated gradient methods. We provide a novel analysis, which shows
how standard gradient methods may sometimes be insufficient to obtain a
significant speed-up and propose a novel accelerated gradient algorithm, which
deals with this deficiency, enjoys a uniformly superior guarantee and works
well in practice.
| Andrew Cotter, Ohad Shamir, Nathan Srebro, Karthik Sridharan | null | 1106.4574 | null | null |
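For orientation, here is a generic sketch of a Nesterov-style accelerated update applied to mini-batches. It is not the paper's specific algorithm or analysis; `grad_batch`, the learning rate, and the momentum constant are all placeholder assumptions.

```python
import numpy as np

def accelerated_minibatch_sgd(grad_batch, batches, w0, lr=0.1, momentum=0.9):
    """Nesterov-style momentum on mini-batches: evaluate the stochastic
    gradient at a look-ahead point, then take a momentum-damped step.

    grad_batch(w, batch) is assumed to return the average gradient of the
    loss over the mini-batch at the point w.
    """
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)
    for batch in batches:
        g = grad_batch(w + momentum * v, batch)   # gradient at the look-ahead point
        v = momentum * v - lr * g
        w = w + v
    return w
```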
A General Framework for Structured Sparsity via Proximal Optimization | cs.LG stat.ML | We study a generalized framework for structured sparsity. It extends the
well-known methods of Lasso and Group Lasso by incorporating additional
constraints on the variables as part of a convex optimization problem. This
framework provides a straightforward way of favouring prescribed sparsity
patterns, such as orderings, contiguous regions and overlapping groups, among
others. Existing optimization methods are limited to specific constraint sets
and tend to not scale well with sample size and dimensionality. We propose a
novel first order proximal method, which builds upon results on fixed points
and successive approximations. The algorithm can be applied to a general class
of conic and norm constraint sets and relies on a proximity operator
subproblem which can be computed explicitly. Experiments on different
regression problems demonstrate the efficiency of the optimization algorithm
and its scalability with the size of the problem. They also demonstrate
state-of-the-art statistical performance, which improves over Lasso and StructOMP.
| Andreas Argyriou and Luca Baldassarre and Jean Morales and
Massimiliano Pontil | null | 1106.5236 | null | null |
Potential-Based Shaping and Q-Value Initialization are Equivalent | cs.LG | Shaping has proven to be a powerful but precarious means of improving
reinforcement learning performance. Ng, Harada, and Russell (1999) proposed the
potential-based shaping algorithm for adding shaping rewards in a way that
guarantees the learner will learn optimal behavior. In this note, we prove
certain similarities between this shaping algorithm and the initialization step
required for several reinforcement learning algorithms. More specifically, we
prove that a reinforcement learner with initial Q-values based on the shaping
algorithm's potential function makes the same updates throughout learning as a
learner receiving potential-based shaping rewards. We further prove that under
a broad category of policies, the behavior of these two learners is
indistinguishable. The comparison provides intuition on the theoretical
properties of the shaping algorithm as well as a suggestion for a simpler
method for capturing the algorithm's benefit. In addition, the equivalence
raises previously unaddressed issues concerning the efficiency of learning with
potential-based shaping.
| E. Wiewiora | 10.1613/jair.1190 | 1106.5267 | null | null |
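The record above states that potential-based shaping and potential-based Q-value initialization produce identical learning updates. The sketch below illustrates this on a toy tabular problem; the MDP, the step size, and the assumption that both learners see the same fixed experience stream are illustrative choices, not part of the paper.

```python
# Sketch: Q-learning with potential-based shaping vs. Q-values initialized
# to the potential. Both learners follow the same experience stream
# (an assumption for illustration) and end up differing exactly by phi.
import numpy as np

n_states, n_actions, gamma, alpha = 5, 2, 0.9, 0.5
rng = np.random.default_rng(1)
phi = rng.standard_normal(n_states)                      # shaping potential
P = rng.integers(n_states, size=(n_states, n_actions))   # deterministic transitions
R = rng.standard_normal((n_states, n_actions))           # rewards

Q_shaped = np.zeros((n_states, n_actions))
Q_init = phi[:, None] * np.ones((n_states, n_actions))   # Q initialized with potential

s = 0
for _ in range(1000):
    a = rng.integers(n_actions)                # shared exploratory action
    s2, r = P[s, a], R[s, a]
    F = gamma * phi[s2] - phi[s]               # potential-based shaping reward
    Q_shaped[s, a] += alpha * (r + F + gamma * Q_shaped[s2].max() - Q_shaped[s, a])
    Q_init[s, a] += alpha * (r + gamma * Q_init[s2].max() - Q_init[s, a])
    s = s2

# The two value tables differ exactly by the potential, so greedy behavior matches.
print(np.allclose(Q_init - phi[:, None], Q_shaped))
```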
Set systems: order types, continuous nondeterministic deformations, and
quasi-orders | cs.LO cs.GT cs.LG | By reformulating a learning process of a set system L as a game between
Teacher and Learner, we define the order type of L to be the order type of the
game tree, if the tree is well-founded. The features of the order type of L
(dim L in symbol) are (1) We can represent any well-quasi-order (wqo for short)
by the set system L of the upper-closed sets of the wqo such that the maximal
order type of the wqo is equal to dim L. (2) dim L is an upper bound of the
mind-change complexity of L. dim L is defined iff L has a finite elasticity (fe
for short), where, according to computational learning theory, if an indexed
family of recursive languages has fe then it is learnable by an algorithm from
positive data. Regarding set systems as subspaces of Cantor spaces, we prove
that fe of set systems is preserved by any continuous function which is
monotone with respect to the set-inclusion. By it, we prove that finite
elasticity is preserved by various (nondeterministic) language operators
(Kleene-closure, shuffle-closure, union, product, intersection, ...). The
monotone continuous functions represent nondeterministic computations. If a
monotone continuous function has a computation tree with each node followed by
at most n immediate successors and the order type of a set system L is
$\alpha$, then the direct image of L is a set system of order type at most the
n-adic diagonal Ramsey number of $\alpha$. Furthermore, we provide an
order-type-preserving contravariant embedding from the category of quasi-orders
and finitely branching simulations between them, into the complete category of
subspaces of Cantor spaces and monotone continuous functions having Girard's
linearity between them. Keywords: finite elasticity, shuffle-closure
| Yohji Akama | null | 1106.5294 | null | null |
Pose Estimation from a Single Depth Image for Arbitrary Kinematic
Skeletons | cs.CV cs.AI cs.LG | We present a method for estimating pose information from a single depth image
given an arbitrary kinematic structure without prior training. For an arbitrary
skeleton and depth image, an evolutionary algorithm is used to find the optimal
kinematic configuration to explain the observed image. Results show that our
approach can correctly estimate poses of 39 and 78 degree-of-freedom models
from a single depth image, even in cases of significant self-occlusion.
| Daniel L. Ly and Ashutosh Saxena and Hod Lipson | null | 1106.5341 | null | null |
HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient
Descent | math.OC cs.LG | Stochastic Gradient Descent (SGD) is a popular algorithm that can achieve
state-of-the-art performance on a variety of machine learning tasks. Several
researchers have recently proposed schemes to parallelize SGD, but all require
performance-destroying memory locking and synchronization. This work aims to
show, using novel theoretical analysis, algorithms, and implementation, that SGD
can be implemented without any locking. We present an update scheme called
HOGWILD! which allows processors access to shared memory with the possibility
of overwriting each other's work. We show that when the associated optimization
problem is sparse, meaning most gradient updates only modify small parts of the
decision variable, then HOGWILD! achieves a nearly optimal rate of convergence.
We demonstrate experimentally that HOGWILD! outperforms alternative schemes
that use locking by an order of magnitude.
| Feng Niu, Benjamin Recht, Christopher Re, Stephen J. Wright | null | 1106.5730 | null | null |
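The record above describes lock-free parallel SGD for sparse problems. The following minimal sketch runs several threads that update a shared weight vector without any locks, each touching only the coordinates in a sparse example; the problem sizes, the squared loss, and the step size are illustrative assumptions, not the authors' reference implementation.

```python
# Illustrative lock-free parallel SGD in the spirit of the abstract above;
# a sketch under assumed problem sizes, not a reference implementation.
import numpy as np
import threading

d, n, lr, n_threads = 1000, 5000, 0.01, 4
rng = np.random.default_rng(0)
# Sparse data: each example touches only a few coordinates.
supports = [rng.choice(d, size=10, replace=False) for _ in range(n)]
X = [rng.standard_normal(10) for _ in range(n)]
w_true = rng.standard_normal(d)
y = np.array([X[i] @ w_true[supports[i]] for i in range(n)])

w = np.zeros(d)  # shared parameter vector, updated without locks

def worker(seed):
    local_rng = np.random.default_rng(seed)
    for _ in range(n // n_threads * 20):
        i = local_rng.integers(n)
        idx, xi = supports[i], X[i]
        residual = xi @ w[idx] - y[i]    # read possibly stale coordinates
        w[idx] -= lr * residual * xi     # write only the touched coordinates

threads = [threading.Thread(target=worker, args=(s,)) for s in range(n_threads)]
for t in threads:
    t.start()
for t in threads:
    t.join()

loss = np.mean([(X[i] @ w[supports[i]] - y[i]) ** 2 for i in range(n)])
print("final training loss:", loss)
```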
A Dirty Model for Multiple Sparse Regression | cs.LG math.ST stat.ML stat.TH | Sparse linear regression -- finding an unknown vector from linear
measurements -- is now known to be possible with fewer samples than variables,
via methods like the LASSO. We consider the multiple sparse linear regression
problem, where several related vectors -- with partially shared support sets --
have to be recovered. A natural question in this setting is whether one can use
the sharing to further decrease the overall number of samples required. A line
of recent research has studied the use of \ell_1/\ell_q norm
block-regularizations with q>1 for such problems; however these could actually
perform worse in sample complexity -- vis a vis solving each problem separately
ignoring sharing -- depending on the level of sharing.
We present a new method for multiple sparse linear regression that can
leverage support and parameter overlap when it exists, but not pay a penalty
when it does not. A very simple idea: we decompose the parameters into two
components and regularize these differently. We show, both theoretically and
empirically, that our method strictly and noticeably outperforms both \ell_1 and
\ell_1/\ell_q methods, over the entire range of possible overlaps (except at
boundary cases, where we match the best method). We also provide theoretical
guarantees that the method performs well under high-dimensional scaling.
| Ali Jalali and Pradeep Ravikumar and Sujay Sanghavi | null | 1106.5826 | null | null |
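The record above describes decomposing the multi-task parameters into a shared component and an individual component that are regularized differently. The sketch below implements that decomposition with a simple proximal-gradient loop, using a row-wise group penalty for the shared part and an elementwise l1 penalty for the individual part; the specific norms, penalty weights, and step size are simplifying assumptions, not the paper's exact formulation.

```python
# Sketch of a "dirty" decomposition: Theta = B + S, with B row-structured
# (shared across tasks) and S elementwise sparse. Penalties and constants
# are illustrative assumptions, not the paper's exact choices.
import numpy as np

def soft(x, t):
    """Elementwise soft-thresholding (prox of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def group_soft(rows, t):
    """Row-wise group soft-thresholding (prox of the sum of row l2 norms)."""
    norms = np.linalg.norm(rows, axis=1, keepdims=True)
    return rows * np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)

def dirty_regression(Xs, ys, lam_b=0.08, lam_s=0.05, lr=0.5, n_iter=500):
    p, r = Xs[0].shape[1], len(Xs)
    B = np.zeros((p, r))   # shared component (row-structured sparsity)
    S = np.zeros((p, r))   # individual component (elementwise sparsity)
    for _ in range(n_iter):
        G = np.zeros((p, r))
        for k in range(r):  # per-task gradient of the normalized squared loss
            G[:, k] = Xs[k].T @ (Xs[k] @ (B[:, k] + S[:, k]) - ys[k]) / len(ys[k])
        B = group_soft(B - lr * G, lr * lam_b)
        S = soft(S - lr * G, lr * lam_s)
    return B, S

# Two tasks with partially overlapping supports.
rng = np.random.default_rng(0)
p, n = 30, 200
theta1 = np.zeros(p); theta1[[0, 1, 2]] = [1.0, -1.0, 0.5]
theta2 = np.zeros(p); theta2[[0, 1, 5]] = [1.0, -1.0, 0.8]
Xs = [rng.standard_normal((n, p)) for _ in range(2)]
ys = [Xs[0] @ theta1 + 0.1 * rng.standard_normal(n),
      Xs[1] @ theta2 + 0.1 * rng.standard_normal(n)]
B, S = dirty_regression(Xs, ys)
Theta = B + S
print("recovered support, task 1:", np.where(np.abs(Theta[:, 0]) > 0.05)[0])
print("recovered support, task 2:", np.where(np.abs(Theta[:, 1]) > 0.05)[0])
```

In this sketch the shared component tends to absorb rows active in both tasks while the individual component picks up the task-specific entries, which is the qualitative behavior the abstract describes.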
Deterministic Sequencing of Exploration and Exploitation for Multi-Armed
Bandit Problems | math.OC cs.LG cs.SY math.PR math.ST stat.TH | In the Multi-Armed Bandit (MAB) problem, there is a given set of arms with
unknown reward models. At each time, a player selects one arm to play, aiming
to maximize the total expected reward over a horizon of length T. An approach
based on a Deterministic Sequencing of Exploration and Exploitation (DSEE) is
developed for constructing sequential arm selection policies. It is shown that
for all light-tailed reward distributions, DSEE achieves the optimal
logarithmic order of the regret, where regret is defined as the total expected
reward loss against the ideal case with known reward models. For heavy-tailed
reward distributions, DSEE achieves $O(T^{1/p})$ regret when the moments of the
reward distributions exist up to the $p$th order for $1<p\leq 2$, and
$O(T^{1/(1+p/2)})$ regret for $p>2$. With the knowledge of an upper bound on a finite moment of the
heavy-tailed reward distributions, DSEE offers the optimal logarithmic regret
order. The proposed DSEE approach complements existing work on MAB by providing
corresponding results for general reward distributions. Furthermore, with a
clearly defined tunable parameter, the cardinality of the exploration sequence,
the DSEE approach is easily extendable to variations of MAB, including MAB with
various objectives, decentralized MAB with multiple players and incomplete
reward observations under collisions, MAB with unknown Markov dynamics, and
combinatorial MAB with dependent arms that often arise in network optimization
problems such as the shortest path, minimum spanning tree, and dominating set
problems under unknown random weights.
| Sattar Vakili, Keqin Liu, Qing Zhao | null | 1106.6104 | null | null |
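The record above describes deterministically scheduling exploration and exploitation epochs. The toy sketch below keeps a logarithmically growing exploration budget, explores arms round-robin whenever the budget lags, and otherwise plays the empirical best arm; the schedule constant and the Bernoulli arms are illustrative assumptions, not the paper's exact sequence.

```python
# Toy sketch of a DSEE-style policy: a deterministic schedule decides in
# advance which rounds explore (round-robin) and which exploit (empirical
# best arm). The constant c and the arms are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.3, 0.5, 0.7])   # Bernoulli arm means (unknown to the player)
K, T, c = len(means), 20000, 30.0

counts = np.zeros(K)
sums = np.zeros(K)
explore_rounds = 0
reward_total = 0.0

for t in range(1, T + 1):
    # Explore whenever the exploration budget lags behind c * log(t).
    if explore_rounds < c * np.log(t):
        arm = explore_rounds % K                            # deterministic round-robin
        explore_rounds += 1
    else:
        arm = int(np.argmax(sums / np.maximum(counts, 1)))  # exploit empirical best
    r = float(rng.random() < means[arm])
    counts[arm] += 1
    sums[arm] += r
    reward_total += r

regret = T * means.max() - reward_total
print(f"total regret approx {regret:.1f} over T={T} rounds")
```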
IBSEAD: - A Self-Evolving Self-Obsessed Learning Algorithm for Machine
Learning | cs.LG | We present IBSEAD or distributed autonomous entity systems based Interaction
- a learning algorithm for the computer to self-evolve in a self-obsessed
manner. This learning algorithm enables the computer to view the internal and
external environment as a series of independent entities, which interact with
each other, with and/or without the knowledge of the computer's
brain. When a learning algorithm interacts, it does so by detecting and
understanding the entities in the human algorithm. However, the problem with
this approach is that the algorithm does not consider the interaction of the
third party or unknown entities, which may be interacting with each other.
These unknown entities, through their interactions with the non-computer
entities, have an effect on the environment that influences the information and
the behaviour of the computer brain. Such details and the ability to process the
dynamic and unsettling nature of these interactions are absent in current
learning algorithms such as the decision tree learning algorithm. IBSEAD is able to
evaluate and consider such algorithms and thus give us a better accuracy in
simulation of the highly evolved nature of the human brain. Processes such as
dreams, imagination and novelty, that exist in humans are not fully simulated
by the existing learning algorithms. Also, Hidden Markov models (HMM) are
useful in finding "hidden" entities, which may be known or unknown. However,
this model fails to consider the case of unknown entities which may be unclear
or unknown. IBSEAD is better because it considers three types of entities:
known, unknown and invisible. We present our case with a comparison of existing
algorithms in known environments and cases and present the results of the
experiments using dry run of the simulated runs of the existing machine
learning algorithms versus IBSEAD.
| Jitesh Dundas and David Chik | null | 1106.6186 | null | null |
A Note on Improved Loss Bounds for Multiple Kernel Learning | cs.LG | In this paper, we correct an upper bound, presented in~\cite{hs-11}, on the
generalisation error of classifiers learned through multiple kernel learning.
The bound in~\cite{hs-11} uses Rademacher complexity and has an \emph{additive}
dependence on the logarithm of the number of kernels and the margin achieved by
the classifier. However, there are some errors in parts of the proof which are
corrected in this paper. Unfortunately, the final result turns out to be a risk
bound which has a \emph{multiplicative} dependence on the logarithm of the
number of kernels and the margin achieved by the classifier.
| Zakria Hussain and John Shawe-Taylor and Mario Marchand | null | 1106.6258 | null | null |
Abstraction Super-structuring Normal Forms: Towards a Theory of
Structural Induction | cs.AI cs.FL cs.LG | Induction is the process by which we obtain predictive laws or theories or
models of the world. We consider the structural aspect of induction. We answer
the question as to whether we can find a finite and minmalistic set of
operations on structural elements in terms of which any theory can be
expressed. We identify abstraction (grouping similar entities) and
super-structuring (combining topologically, e.g. spatio-temporally, close
entities) as the essential structural operations in the induction process. We
show that only two more structural operations, namely, reverse abstraction and
reverse super-structuring (the duals of abstraction and super-structuring
respectively) suffice in order to exploit the full power of Turing-equivalent
generative grammars in induction. We explore the implications of this theorem
with respect to the nature of hidden variables, radical positivism and the
two-century-old claim of David Hume about the principles of connexion among
ideas.
| Adrian Silvescu and Vasant Honavar | null | 1107.0434 | null | null |
"Memory foam" approach to unsupervised learning | nlin.AO cs.LG | We propose an alternative approach to construct an artificial learning
system, which naturally learns in an unsupervised manner. Its mathematical
prototype is a dynamical system, which automatically shapes its vector field in
response to the input signal. The vector field converges to a gradient of a
multi-dimensional probability density distribution of the input process, taken
with negative sign. The most probable patterns are represented by the stable
fixed points, whose basins of attraction are formed automatically. The
performance of this system is illustrated with musical signals.
| Natalia B. Janson and Christopher J. Marsden | null | 1107.0674 | null | null |
Distributed Matrix Completion and Robust Factorization | cs.LG cs.DS cs.NA math.NA stat.ML | If learning methods are to scale to the massive sizes of modern datasets, it
is essential for the field of machine learning to embrace parallel and
distributed computing. Inspired by the recent development of matrix
factorization methods with rich theory but poor computational complexity and by
the relative ease of mapping matrices onto distributed architectures, we
introduce a scalable divide-and-conquer framework for noisy matrix
factorization. We present a thorough theoretical analysis of this framework in
which we characterize the statistical errors introduced by the "divide" step
and control their magnitude in the "conquer" step, so that the overall
algorithm enjoys high-probability estimation guarantees comparable to those of
its base algorithm. We also present experiments in collaborative filtering and
video background modeling that demonstrate the near-linear to superlinear
speed-ups attainable with this approach.
| Lester Mackey, Ameet Talwalkar, Michael I. Jordan | null | 1107.0789 | null | null |
GraphLab: A Distributed Framework for Machine Learning in the Cloud | cs.LG | Machine Learning (ML) techniques are indispensable in a wide range of fields.
Unfortunately, the exponential increase of dataset sizes is rapidly extending
the runtime of sequential algorithms and threatening to slow future progress in
ML. With the promise of affordable large-scale parallel computing, Cloud
systems offer a viable platform to resolve the computational challenges in ML.
However, designing and implementing efficient, provably correct distributed ML
algorithms is often prohibitively challenging. To enable ML researchers to
easily and efficiently use parallel systems, we introduced the GraphLab
abstraction which is designed to represent the computational patterns in ML
algorithms while permitting efficient parallel and distributed implementations.
In this paper we provide a formal description of the GraphLab parallel
abstraction and present an efficient distributed implementation. We conduct a
comprehensive evaluation of GraphLab on three state-of-the-art ML algorithms
using real large-scale data and a 64 node EC2 cluster of 512 processors. We
find that GraphLab achieves orders of magnitude performance gains over Hadoop
while performing comparably or superior to hand-tuned MPI implementations.
| Yucheng Low, Joseph Gonzalez, Aapo Kyrola, Danny Bickson, Carlos
Guestrin | null | 1107.0922 | null | null |
High-Dimensional Gaussian Graphical Model Selection: Walk Summability
and Local Separation Criterion | cs.LG math.ST stat.TH | We consider the problem of high-dimensional Gaussian graphical model
selection. We identify a set of graphs for which an efficient estimation
algorithm exists, and this algorithm is based on thresholding of empirical
conditional covariances. Under a set of transparent conditions, we establish
structural consistency (or sparsistency) for the proposed algorithm, when the
number of samples $n=\Omega(J_{\min}^{-2} \log p)$, where $p$ is the number of
variables and $J_{\min}$ is the minimum (absolute) edge potential of the graphical
model. The sufficient conditions for sparsistency are based on the notion of
walk-summability of the model and the presence of sparse local vertex
separators in the underlying graph. We also derive novel non-asymptotic
necessary conditions on the number of samples required for sparsistency.
| Animashree Anandkumar, Vincent Y. F. Tan and Alan. S. Willsky | null | 1107.1270 | null | null |
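The record above estimates graph structure by thresholding empirical conditional covariances computed over small candidate separator sets. The rough sketch below applies that test to a toy Gaussian chain graph; the separator size, threshold, and sample size are illustrative assumptions, not the conditions analyzed in the paper.

```python
# Rough sketch: declare an edge (i, j) when the empirical conditional
# covariance stays above a threshold for every small candidate separator.
# Parameters (separator size eta, threshold xi) are illustrative assumptions.
import itertools
import numpy as np

def conditional_cov(S, i, j, cond):
    """Covariance of X_i and X_j conditioned on the variables in `cond`."""
    if not cond:
        return S[i, j]
    idx = list(cond)
    A = S[np.ix_([i, j], [i, j])]
    B = S[np.ix_([i, j], idx)]
    C = S[np.ix_(idx, idx)]
    return (A - B @ np.linalg.solve(C, B.T))[0, 1]

def select_edges(X, eta=2, xi=0.1):
    n, p = X.shape
    S = np.cov(X, rowvar=False)
    edges = set()
    for i, j in itertools.combinations(range(p), 2):
        others = [k for k in range(p) if k not in (i, j)]
        # Minimum absolute conditional covariance over all small separators.
        stat = min(abs(conditional_cov(S, i, j, cond))
                   for r in range(eta + 1)
                   for cond in itertools.combinations(others, r))
        if stat > xi:
            edges.add((i, j))
    return edges

# Tiny demo on a Gaussian random walk, whose graph is the chain 0 - 1 - 2 - 3.
rng = np.random.default_rng(0)
z = rng.standard_normal((2000, 4))
X = np.cumsum(z, axis=1)          # consecutive variables are dependent
print(select_edges(X))
```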
Spectral Methods for Learning Multivariate Latent Tree Structure | cs.LG stat.ML | This work considers the problem of learning the structure of multivariate
linear tree models, which include a variety of directed tree graphical models
with continuous, discrete, and mixed latent variables such as linear-Gaussian
models, hidden Markov models, Gaussian mixture models, and Markov evolutionary
trees. The setting is one where we only have samples from certain observed
variables in the tree, and our goal is to estimate the tree structure (i.e.,
the graph of how the underlying hidden variables are connected to each other
and to the observed variables). We propose the Spectral Recursive Grouping
algorithm, an efficient and simple bottom-up procedure for recovering the tree
structure from independent samples of the observed variables. Our finite sample
size bounds for exact recovery of the tree structure reveal certain natural
dependencies on underlying statistical and structural properties of the
underlying joint distribution. Furthermore, our sample complexity guarantees
have no explicit dependence on the dimensionality of the observed variables,
making the algorithm applicable to many high-dimensional settings. At the heart
of our algorithm is a spectral quartet test for determining the relative
topology of a quartet of variables from second-order statistics.
| Animashree Anandkumar, Kamalika Chaudhuri, Daniel Hsu, Sham M. Kakade,
Le Song, Tong Zhang | null | 1107.1283 | null | null |
Text Classification: A Sequential Reading Approach | cs.AI cs.IR cs.LG | We propose to model the text classification process as a sequential decision
process. In this process, an agent learns to classify documents into topics
while reading the document sentences sequentially and learns to stop as soon as
enough information has been read to make a decision. The proposed algorithm is
based on a modeling of Text Classification as a Markov Decision Process and learns by
using Reinforcement Learning. Experiments on four different classical
mono-label corpora show that the proposed approach performs comparably to
classical SVM approaches for large training sets, and better for small training
sets. In addition, the model automatically adapts its reading process to the
quantity of training information provided.
| Gabriel Dulac-Arnold, Ludovic Denoyer, Patrick Gallinari | 10.1007/978-3-642-20161-5_41 | 1107.1322 | null | null |
On the Furthest Hyperplane Problem and Maximal Margin Clustering | cs.CC cs.DS cs.LG | This paper introduces the Furthest Hyperplane Problem (FHP), which is an
unsupervised counterpart of Support Vector Machines. Given a set of n points in
R^d, the objective is to produce the hyperplane (passing through the origin)
which maximizes the separation margin, that is, the minimal distance between
the hyperplane and any input point. To the best of our knowledge, this is the
first paper achieving provable results regarding FHP. We provide both lower and
upper bounds to this NP-hard problem. First, we give a simple randomized
algorithm whose running time is $n^{O(1/\theta^2)}$, where $\theta$ is the optimal
separation margin. We show that its exponential dependency on $1/\theta^2$ is
tight, up to sub-polynomial factors, assuming SAT cannot be solved in
sub-exponential time. Next, we give an efficient approximation algorithm. For
any $\alpha \in [0, 1]$, the algorithm produces a hyperplane whose distance
from at least a $1 - 5\alpha$ fraction of the points is at least $\alpha$ times
the optimal separation margin. Finally, we show that FHP does not admit a PTAS
by presenting a gap preserving reduction from a particular version of the PCP
theorem.
| Zohar Karnin, Edo Liberty, Shachar Lovett, Roy Schwartz, Omri
Weinstein | null | 1107.1358 | null | null |
Polyceptron: A Polyhedral Learning Algorithm | cs.LG cs.NE | In this paper we propose a new algorithm for learning polyhedral classifiers
which we call Polyceptron. It is a Perceptron-like algorithm which updates
the parameters only when the current classifier misclassifies any training
data. We give both batch and online versions of the Polyceptron algorithm. Finally
we give experimental results to show the effectiveness of our approach.
| Naresh Manwani and P. S. Sastry | null | 1107.1564 | null | null |
High-dimensional structure estimation in Ising models: Local separation
criterion | stat.ML cs.LG math.ST stat.TH | We consider the problem of high-dimensional Ising (graphical) model
selection. We propose a simple algorithm for structure estimation based on the
thresholding of the empirical conditional variation distances. We introduce a
novel criterion for tractable graph families, where this method is efficient,
based on the presence of sparse local separators between node pairs in the
underlying graph. For such graphs, the proposed algorithm has a sample
complexity of $n=\Omega(J_{\min}^{-2}\log p)$, where $p$ is the number of
variables, and $J_{\min}$ is the minimum (absolute) edge potential in the
model. We also establish nonasymptotic necessary and sufficient conditions for
structure estimation.
| Animashree Anandkumar, Vincent Y. F. Tan, Furong Huang, Alan S.
Willsky | 10.1214/12-AOS1009 | 1107.1736 | null | null |
Stochastic convex optimization with bandit feedback | math.OC cs.LG cs.SY | This paper addresses the problem of minimizing a convex, Lipschitz function
$f$ over a convex, compact set $\mathcal{X}$ under a stochastic bandit feedback
model. In this model, the algorithm is allowed to observe noisy realizations of
the function value $f(x)$ at any query point $x \in \mathcal{X}$. The quantity of
interest is the regret of the algorithm, which is the sum of the function
values at the algorithm's query points minus the optimal function value. We
demonstrate a generalization of the ellipsoid algorithm that incurs
$\tilde{O}(\mathrm{poly}(d)\sqrt{T})$ regret. Since any algorithm has regret at least
$\Omega(\sqrt{T})$ on this problem, our algorithm is optimal in terms of the
scaling with $T$.
| Alekh Agarwal, Dean P. Foster, Daniel Hsu, Sham M. Kakade, Alexander
Rakhlin | null | 1107.1744 | null | null |
Multi-Instance Learning with Any Hypothesis Class | cs.LG stat.ML | In the supervised learning setting termed Multiple-Instance Learning (MIL),
the examples are bags of instances, and the bag label is a function of the
labels of its instances. Typically, this function is the Boolean OR. The
learner observes a sample of bags and the bag labels, but not the instance
labels that determine the bag labels. The learner is then required to emit a
classification rule for bags based on the sample. MIL has numerous
applications, and many heuristic algorithms have been used successfully on this
problem, each adapted to specific settings or applications. In this work we
provide a unified theoretical analysis for MIL, which holds for any underlying
hypothesis class, regardless of a specific application or problem domain. We
show that the sample complexity of MIL is only poly-logarithmically dependent
on the size of the bag, for any underlying hypothesis class. In addition, we
introduce a new PAC-learning algorithm for MIL, which uses a regular supervised
learning algorithm as an oracle. We prove that efficient PAC-learning for MIL
can be generated from any efficient non-MIL supervised learning algorithm that
handles one-sided error. The computational complexity of the resulting
algorithm is only polynomially dependent on the bag size.
| Sivan Sabato and Naftali Tishby | null | 1107.2021 | null | null |
Data Stability in Clustering: A Closer Look | cs.LG cs.DS | We consider the model introduced by Bilu and Linial (2010), who study
problems for which the optimal clustering does not change when distances are
perturbed. They show that even when a problem is NP-hard, it is sometimes
possible to obtain efficient algorithms for instances resilient to certain
multiplicative perturbations, e.g. on the order of $O(\sqrt{n})$ for max-cut
clustering. Awasthi et al. (2010) consider center-based objectives, and Balcan
and Liang (2011) analyze the $k$-median and min-sum objectives, giving
efficient algorithms for instances resilient to certain constant multiplicative
perturbations.
Here, we are motivated by the question of to what extent these assumptions
can be relaxed while allowing for efficient algorithms. We show there is little
room to improve these results by giving NP-hardness lower bounds for both the
$k$-median and min-sum objectives. On the other hand, we show that constant
multiplicative resilience parameters can be so strong as to make the clustering
problem trivial, leaving only a narrow range of resilience parameters for which
clustering is interesting. We also consider a model of additive perturbations
and give a correspondence between additive and multiplicative notions of
stability. Our results provide a close examination of the consequences of
assuming stability in data.
| Shalev Ben-David, Lev Reyzin | null | 1107.2379 | null | null |
Private Data Release via Learning Thresholds | cs.CC cs.LG | This work considers computationally efficient privacy-preserving data
release. We study the task of analyzing a database containing sensitive
information about individual participants. Given a set of statistical queries
on the data, we want to release approximate answers to the queries while also
guaranteeing differential privacy---protecting each participant's sensitive
data.
Our focus is on computationally efficient data release algorithms; we seek
algorithms whose running time is polynomial, or at least sub-exponential, in
the data dimensionality. Our primary contribution is a computationally
efficient reduction from differentially private data release for a class of
counting queries, to learning thresholded sums of predicates from a related
class.
We instantiate this general reduction with a variety of algorithms for
learning thresholds. These instantiations yield several new results for
differentially private data release. As two examples, taking {0,1}^d to be the
data domain (of dimension d), we obtain differentially private algorithms for:
(*) Releasing all k-way conjunctions. For any given k, the resulting data
release algorithm has bounded error as long as the database is of size at least
$d^{O(\sqrt{k\log(k\log d)})}$. The running time is polynomial in the database
size.
(*) Releasing a $(1-\gamma)$-fraction of all parity queries. For any $\gamma
\geq \mathrm{poly}(1/d)$, the algorithm has bounded error as long as the database is of
size at least $\mathrm{poly}(d)$. The running time is polynomial in the database size.
Several other instantiations yield further results for privacy-preserving
data release. Of the two results highlighted above, the first learning
algorithm uses techniques for representing thresholded sums of predicates as
low-degree polynomial threshold functions. The second learning algorithm is
based on Jackson's Harmonic Sieve algorithm [Jackson 1997].
| Moritz Hardt and Guy N. Rothblum and Rocco A. Servedio | null | 1107.2444 | null | null |
Statistical Topic Models for Multi-Label Document Classification | stat.ML cs.LG | Machine learning approaches to multi-label document classification have to
date largely relied on discriminative modeling techniques such as support
vector machines. A drawback of these approaches is that performance rapidly
drops off as the total number of labels and the number of labels per document
increase. This problem is amplified when the label frequencies exhibit the type
of highly skewed distributions that are often observed in real-world datasets.
In this paper we investigate a class of generative statistical topic models for
multi-label documents that associate individual word tokens with different
labels. We investigate the advantages of this approach relative to
discriminative models, particularly with respect to classification problems
involving large numbers of relatively rare labels. We compare the performance
of generative and discriminative approaches on document labeling tasks ranging
from datasets with several thousand labels to datasets with tens of labels. The
experimental results indicate that probabilistic generative models can achieve
competitive multi-label classification performance compared to discriminative
methods, and have advantages for datasets with many labels and skewed label
frequencies.
| Timothy N. Rubin, America Chambers, Padhraic Smyth and Mark Steyvers | null | 1107.2462 | null | null |
Provably Safe and Robust Learning-Based Model Predictive Control | math.OC cs.LG cs.SY math.ST stat.TH | Controller design faces a trade-off between robustness and performance, and
the reliability of linear controllers has caused many practitioners to focus on
the former. However, there is renewed interest in improving system performance
to deal with growing energy constraints. This paper describes a learning-based
model predictive control (LBMPC) scheme that provides deterministic guarantees
on robustness, while statistical identification tools are used to identify
richer models of the system in order to improve performance; the benefits of
this framework are that it handles state and input constraints, optimizes
system performance with respect to a cost function, and can be designed to use
a wide variety of parametric or nonparametric statistical tools. The main
insight of LBMPC is that safety and performance can be decoupled under
reasonable conditions in an optimization framework by maintaining two models of
the system. The first is an approximate model with bounds on its uncertainty,
and the second model is updated by statistical methods. LBMPC improves
performance by choosing inputs that minimize a cost subject to the learned
dynamics, and it ensures safety and robustness by checking whether these same
inputs keep the approximate model stable when it is subject to uncertainty.
Furthermore, we show that if the system is sufficiently excited, then the LBMPC
control action probabilistically converges to that of an MPC computed using the
true dynamics.
| Anil Aswani, Humberto Gonzalez, S. Shankar Sastry, Claire Tomlin | null | 1107.2487 | null | null |
Towards Optimal One Pass Large Scale Learning with Averaged Stochastic
Gradient Descent | cs.LG | For large scale learning problems, it is desirable if we can obtain the
optimal model parameters by going through the data in only one pass. Polyak and
Juditsky (1992) showed that asymptotically the test performance of the simple
average of the parameters obtained by stochastic gradient descent (SGD) is as
good as that of the parameters which minimize the empirical cost. However, to
our knowledge, despite its optimal asymptotic convergence rate, averaged SGD
(ASGD) received little attention in recent research on large scale learning.
One possible reason is that it may take a prohibitively large number of
training samples for ASGD to reach its asymptotic region for most real
problems. In this paper, we present a finite sample analysis for the method of
Polyak and Juditsky (1992). Our analysis shows that it indeed usually takes a
huge number of samples for ASGD to reach its asymptotic region for an improperly
chosen learning rate. More importantly, based on our analysis, we propose a
simple way to properly set the learning rate so that it takes a reasonable amount
of data for ASGD to reach its asymptotic region. We compare ASGD using our
proposed learning rate with other well known algorithms for training large
scale linear classifiers. The experiments clearly show the superiority of ASGD.
| Wei Xu | null | 1107.2490 | null | null |
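The record above concerns averaging the SGD iterates in the sense of Polyak and Juditsky. The sketch below runs plain SGD on least squares while maintaining a running average of the iterates; the constant step size is an illustrative choice, not the learning-rate setting proposed in the paper.

```python
# Sketch of averaged SGD (Polyak-Juditsky iterate averaging) on least
# squares; the constant step size is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
d, n = 20, 100000
w_true = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = X @ w_true + 0.5 * rng.standard_normal(n)

w = np.zeros(d)
w_avg = np.zeros(d)
lr = 0.01
for t in range(n):
    xi, yi = X[t], y[t]
    grad = (xi @ w - yi) * xi
    w -= lr * grad
    w_avg += (w - w_avg) / (t + 1)     # running average of the iterates

print("last iterate error:    ", np.linalg.norm(w - w_true))
print("averaged iterate error:", np.linalg.norm(w_avg - w_true))
```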
Learning $k$-Modal Distributions via Testing | cs.DS cs.LG math.ST stat.TH | A $k$-modal probability distribution over the discrete domain $\{1,...,n\}$
is one whose histogram has at most $k$ "peaks" and "valleys." Such
distributions are natural generalizations of monotone ($k=0$) and unimodal
($k=1$) probability distributions, which have been intensively studied in
probability theory and statistics.
In this paper we consider the problem of \emph{learning} (i.e., performing
density estimation of) an unknown $k$-modal distribution with respect to the
$L_1$ distance. The learning algorithm is given access to independent samples
drawn from an unknown $k$-modal distribution $p$, and it must output a
hypothesis distribution $\widehat{p}$ such that with high probability the total
variation distance between $p$ and $\widehat{p}$ is at most $\epsilon.$ Our
main goal is to obtain \emph{computationally efficient} algorithms for this
problem that use (close to) an information-theoretically optimal number of
samples.
We give an efficient algorithm for this problem that runs in time
$\mathrm{poly}(k,\log(n),1/\epsilon)$. For $k \leq \tilde{O}(\log n)$, the
number of samples used by our algorithm is very close (within an
$\tilde{O}(\log(1/\epsilon))$ factor) to being information-theoretically
optimal. Prior to this work computationally efficient algorithms were known
only for the cases $k=0,1$ \cite{Birge:87b,Birge:97}.
A novel feature of our approach is that our learning algorithm crucially uses
a new algorithm for \emph{property testing of probability distributions} as a
key subroutine. The learning algorithm uses the property tester to efficiently
decompose the $k$-modal distribution into $k$ (near-)monotone distributions,
which are easier to learn.
| Constantinos Daskalakis, Ilias Diakonikolas, Rocco A. Servedio | null | 1107.2700 | null | null |
Learning Poisson Binomial Distributions | cs.DS cs.LG math.ST stat.TH | We consider a basic problem in unsupervised learning: learning an unknown
\emph{Poisson Binomial Distribution}. A Poisson Binomial Distribution (PBD)
over $\{0,1,\dots,n\}$ is the distribution of a sum of $n$ independent
Bernoulli random variables which may have arbitrary, potentially non-equal,
expectations. These distributions were first studied by S. Poisson in 1837
\cite{Poisson:37} and are a natural $n$-parameter generalization of the
familiar Binomial Distribution. Surprisingly, prior to our work this basic
learning problem was poorly understood, and known results for it were far from
optimal.
We essentially settle the complexity of the learning problem for this basic
class of distributions. As our first main result we give a highly efficient
algorithm which learns to $\epsilon$-accuracy (with respect to the total variation
distance) using $\tilde{O}(1/\epsilon^3)$ samples \emph{independent of $n$}. The
running time of the algorithm is \emph{quasilinear} in the size of its input
data, i.e., $\tilde{O}(\log(n)/\epsilon^3)$ bit-operations. (Observe that each draw
from the distribution is a $\log(n)$-bit string.) Our second main result is a
\emph{proper} learning algorithm that learns to $\epsilon$-accuracy using
$\tilde{O}(1/\epsilon^2)$ samples, and runs in time
$(1/\epsilon)^{\mathrm{poly}(\log(1/\epsilon))} \cdot \log n$. This is nearly optimal, since any algorithm for this
problem must use $\Omega(1/\epsilon^2)$ samples. We also give positive and
negative results for some extensions of this learning problem to weighted sums
of independent Bernoulli random variables.
| Constantinos Daskalakis, Ilias Diakonikolas, Rocco A. Servedio | null | 1107.2702 | null | null |
Modelling Distributed Shape Priors by Gibbs Random Fields of Second
Order | cs.CV cs.LG | We analyse the potential of Gibbs Random Fields for shape prior modelling. We
show that the expressive power of second order GRFs is already sufficient to
express simple shapes and spatial relations between them simultaneously. This
allows to model and recognise complex shapes as spatial compositions of simpler
parts.
| Boris Flach and Dmitrij Schlesinger | null | 1107.2807 | null | null |
From Small-World Networks to Comparison-Based Search | cs.LG cs.DS cs.IT cs.SI math.IT stat.ML | The problem of content search through comparisons has recently received
considerable attention. In short, a user searching for a target object
navigates through a database in the following manner: the user is asked to
select the object most similar to her target from a small list of objects. A
new object list is then presented to the user based on her earlier selection.
This process is repeated until the target is included in the list presented, at
which point the search terminates. This problem is known to be strongly related
to the small-world network design problem.
However, contrary to prior work, which focuses on cases where objects in the
database are equally popular, we consider here the case where the demand for
objects may be heterogeneous. We show that, under heterogeneous demand, the
small-world network design problem is NP-hard. Given the above negative result,
we propose a novel mechanism for small-world design and provide an upper bound
on its performance under heterogeneous demand. The above mechanism has a
natural equivalent in the context of content search through comparisons, and we
establish both an upper bound and a lower bound for the performance of this
mechanism. These bounds are intuitively appealing, as they depend on the
entropy of the demand as well as its doubling constant, a quantity capturing
the topology of the set of target objects. They also illustrate interesting
connections between comparison-based search and classic results from information
theory. Finally, we propose an adaptive learning algorithm for content search
that meets the performance guarantees achieved by the above mechanisms.
| Amin Karbasi, Stratis Ioannidis, Laurent Massoulie | null | 1107.3059 | null | null |
On the Computational Complexity of Stochastic Controller Optimization in
POMDPs | cs.CC cs.LG cs.SY math.OC | We show that the problem of finding an optimal stochastic 'blind' controller
in a Markov decision process is an NP-hard problem. The corresponding decision
problem is NP-hard, in PSPACE, and SQRT-SUM-hard, hence placing it in NP would
imply breakthroughs in long-standing open problems in computer science. Our
result establishes that the more general problem of stochastic controller
optimization in POMDPs is also NP-hard. Nonetheless, we outline a special case
that is convex and admits efficient global solutions.
| Nikos Vlassis, Michael L. Littman, David Barber | null | 1107.3090 | null | null |
Robust Kernel Density Estimation | stat.ML cs.LG stat.ME | We propose a method for nonparametric density estimation that exhibits
robustness to contamination of the training sample. This method achieves
robustness by combining a traditional kernel density estimator (KDE) with ideas
from classical $M$-estimation. We interpret the KDE based on a radial, positive
semi-definite kernel as a sample mean in the associated reproducing kernel
Hilbert space. Since the sample mean is sensitive to outliers, we estimate it
robustly via $M$-estimation, yielding a robust kernel density estimator (RKDE).
An RKDE can be computed efficiently via a kernelized iteratively re-weighted
least squares (IRWLS) algorithm. Necessary and sufficient conditions are given
for kernelized IRWLS to converge to the global minimizer of the $M$-estimator
objective function. The robustness of the RKDE is demonstrated with a
representer theorem, the influence function, and experimental results for
density estimation and anomaly detection.
| JooSeuk Kim and Clayton D. Scott | null | 1107.3133 | null | null |
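The record above describes computing a robust kernel density estimate by kernelized iteratively re-weighted least squares. The sketch below estimates the kernel weights with an IRWLS-style loop using a Huber-type psi function and then evaluates the weighted KDE; the bandwidth, loss parameter, and iteration count are illustrative assumptions, not necessarily choices satisfying the paper's convergence conditions.

```python
# Sketch of a robust KDE: kernel weights from a kernelized IRWLS loop with
# a Huber-type psi function. Bandwidth and Huber parameter are illustrative.
import numpy as np

def gaussian_kernel(X, Y, sigma):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)) / ((2 * np.pi * sigma ** 2) ** (X.shape[1] / 2))

def rkde_weights(X, sigma=0.5, n_iter=30):
    n = len(X)
    K = gaussian_kernel(X, X, sigma)
    w = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        # RKHS distance of each Phi(x_i) to the current weighted mean.
        d2 = np.diag(K) - 2 * K @ w + w @ K @ w
        d = np.sqrt(np.maximum(d2, 1e-12))
        a = np.median(d)                              # data-driven Huber parameter
        psi_over_d = np.where(d <= a, 1.0, a / d)     # Huber: psi(d)/d
        w = psi_over_d / psi_over_d.sum()
    return w

# Data: inliers plus a few gross outliers.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(200, 1)), rng.normal(8, 0.1, size=(10, 1))])
w = rkde_weights(X)
print("mean weight on outliers vs inliers:", w[200:].mean(), w[:200].mean())

# The robust density estimate at x0 is the weighted sum of kernels at x0.
x0 = np.array([[0.0]])
print("RKDE at 0:", float(gaussian_kernel(x0, X, 0.5) @ w))
```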
On Learning Discrete Graphical Models Using Greedy Methods | cs.LG math.ST stat.ML stat.TH | In this paper, we address the problem of learning the structure of a pairwise
graphical model from samples in a high-dimensional setting. Our first main
result studies the sparsistency, or consistency in sparsity pattern recovery,
properties of a forward-backward greedy algorithm as applied to general
statistical models. As a special case, we then apply this algorithm to learn
the structure of a discrete graphical model via neighborhood estimation. As a
corollary of our general result, we derive sufficient conditions on the number
of samples n, the maximum node-degree d and the problem size p, as well as
other conditions on the model parameters, so that the algorithm recovers all
the edges with high probability. Our result guarantees graph selection for
samples scaling as $n = \Omega(d^2 \log p)$, in contrast to existing
convex-optimization based algorithms that require a sample complexity of
$\Omega(d^3 \log p)$. Further, the greedy algorithm only requires a restricted
strong convexity condition which is typically milder than irrepresentability
assumptions. We corroborate these results using numerical simulations at the
end.
| Ali Jalali and Chris Johnson and Pradeep Ravikumar | null | 1107.3258 | null | null |
Discovering Knowledge using a Constraint-based Language | cs.LG | Discovering pattern sets or global patterns is an attractive issue from the
pattern mining community in order to provide useful information. By combining
local patterns satisfying a joint meaning, this approach produces patterns of
higher level and thus more useful for the data analyst than the usual local
patterns, while reducing the number of patterns. In parallel, recent works
investigating relationships between data mining and constraint programming (CP)
show that the CP paradigm is a nice framework to model and mine such patterns
in a declarative and generic way. We present a constraint-based language which
enables us to define queries addressing pattern sets and global patterns. The
usefulness of such a declarative approach is highlighted by several examples
coming from the clustering based on associations. This language has been
implemented in the CP framework.
| Patrice Boizumault, Bruno Cr\'emilleux, Mehdi Khiari, Samir Loudni,
and Jean-Philippe M\'etivier | null | 1107.3407 | null | null |
Unsupervised K-Nearest Neighbor Regression | stat.ML cs.LG | In many scientific disciplines structures in high-dimensional data have to be
found, e.g., in stellar spectra, in genome data, or in face recognition tasks.
In this work we present a novel approach to non-linear dimensionality
reduction. It is based on fitting K-nearest neighbor regression to the
unsupervised regression framework for learning of low-dimensional manifolds.
Similar to related approaches that are mostly based on kernel methods,
unsupervised K-nearest neighbor (UNN) regression optimizes latent variables
w.r.t. the data space reconstruction error employing the K-nearest neighbor
heuristic. The problem of optimizing latent neighborhoods is difficult to
solve, but the UNN formulation allows the design of efficient strategies that
iteratively embed latent points into fixed neighborhood topologies. UNN is well
suited to sorting high-dimensional data. The iterative variants are
analyzed experimentally.
| Oliver Kramer | null | 1107.3600 | null | null |
Weakly Supervised Learning of Foreground-Background Segmentation using
Masked RBMs | cs.LG cs.CV | We propose an extension of the Restricted Boltzmann Machine (RBM) that allows
the joint shape and appearance of foreground objects in cluttered images to be
modeled independently of the background. We present a learning scheme that
learns this representation directly from cluttered images with only very weak
supervision. The model generates plausible samples and performs
foreground-background segmentation. We demonstrate that representing foreground
objects independently of the background can be beneficial in recognition tasks.
| Nicolas Heess (Informatics), Nicolas Le Roux (INRIA Paris -
Rocquencourt), John Winn | null | 1107.3823 | null | null |
Optimal Adaptive Learning in Uncontrolled Restless Bandit Problems | math.OC cs.LG | In this paper we consider the problem of learning the optimal policy for
uncontrolled restless bandit problems. In an uncontrolled restless bandit
problem, there is a finite set of arms, each of which when pulled yields a
positive reward. There is a player who sequentially selects one of the arms at
each time step. The goal of the player is to maximize its undiscounted reward
over a time horizon T. The reward process of each arm is a finite state Markov
chain, whose transition probabilities are unknown to the player. State
transitions of each arm are independent of the selection of the player. We
propose a learning algorithm with logarithmic regret uniformly over time with
respect to the optimal finite horizon policy. Our results extend the optimal
adaptive learning of MDPs to POMDPs.
| Cem Tekin, Mingyan Liu | null | 1107.4042 | null | null |
On the Universality of Online Mirror Descent | cs.LG | We show that for a general class of convex online learning problems, Mirror
Descent can always achieve a (nearly) optimal regret guarantee.
| Nathan Srebro, Karthik Sridharan, Ambuj Tewari | null | 1107.4080 | null | null |
Performance and Convergence of Multi-user Online Learning | cs.MA cs.LG | We study the problem of allocating multiple users to a set of wireless
channels in a decentralized manner when the channel qualities are
time-varying and unknown to the users, and accessing the same channel by
multiple users leads to reduced quality due to interference. In such a setting
the users need to learn not only the inherent channel qualities but also the
best allocation of users to channels so as to maximize the social
welfare. Assuming that the users adopt a certain online learning algorithm, we
investigate under what conditions the socially optimal allocation is
achievable. In particular we examine the effect of different levels of
knowledge the users may have and the amount of communications and cooperation.
The general conclusion is that when the cooperation of users decreases and the
uncertainty about channel payoffs increases it becomes harder to achieve the
socially optimal allocation.
| Cem Tekin, Mingyan Liu | null | 1107.4153 | null | null |
Analogy perception applied to seven tests of word comprehension | cs.AI cs.CL cs.LG | It has been argued that analogy is the core of cognition. In AI research,
algorithms for analogy are often limited by the need for hand-coded high-level
representations as input. An alternative approach is to use high-level
perception, in which high-level representations are automatically generated
from raw data. Analogy perception is the process of recognizing analogies using
high-level perception. We present PairClass, an algorithm for analogy
perception that recognizes lexical proportional analogies using representations
that are automatically generated from a large corpus of raw textual data. A
proportional analogy is an analogy of the form A:B::C:D, meaning "A is to B as
C is to D". A lexical proportional analogy is a proportional analogy with
words, such as carpenter:wood::mason:stone. PairClass represents the semantic
relations between two words using a high-dimensional feature vector, in which
the elements are based on frequencies of patterns in the corpus. PairClass
recognizes analogies by applying standard supervised machine learning
techniques to the feature vectors. We show how seven different tests of word
comprehension can be framed as problems of analogy perception and we then apply
PairClass to the seven resulting sets of analogy perception problems. We
achieve competitive results on all seven tests. This is the first time a
uniform approach has handled such a range of tests of word comprehension.
| Peter D. Turney (National Research Council of Canada) | null | 1107.4573 | null | null |
The Divergence of Reinforcement Learning Algorithms with Value-Iteration
and Function Approximation | cs.LG | This paper gives specific divergence examples of value-iteration for several
major Reinforcement Learning and Adaptive Dynamic Programming algorithms, when
using a function approximator for the value function. These divergence examples
differ from previous divergence examples in the literature, in that they are
applicable for a greedy policy, i.e. in a "value iteration" scenario. Perhaps
surprisingly, with a greedy policy, it is also possible to get divergence for
the algorithms TD(1) and Sarsa(1). In addition to these divergences, we also
achieve divergence for the Adaptive Dynamic Programming algorithms HDP, DHP and
GDHP.
| Michael Fairbank and Eduardo Alonso | null | 1107.4606 | null | null |
Lifted Graphical Models: A Survey | cs.AI cs.LG | This article presents a survey of work on lifted graphical models. We review
a general form for a lifted graphical model, a par-factor graph, and show how a
number of existing statistical relational representations map to this
formalism. We discuss inference algorithms, including lifted inference
algorithms, that efficiently compute the answers to probabilistic queries. We
also review work in learning lifted graphical models from data. It is our
belief that the need for statistical relational models (whether it goes by that
name or another) will grow in the coming decades, as we are inundated with data
which is a mix of structured and unstructured, with entities and relations
extracted in a noisy manner from text, and with the need to reason effectively
with this data. We hope that this synthesis of ideas from many different
research groups will provide an accessible starting point for new researchers
in this expanding field.
| Lilyana Mihalkova and Lise Getoor | null | 1107.4966 | null | null |
Normative design using inductive learning | cs.LO cs.AI cs.LG | In this paper we propose a use-case-driven iterative design methodology for
normative frameworks, also called virtual institutions, which are used to
govern open systems. Our computational model represents the normative framework
as a logic program under answer set semantics (ASP). By means of an inductive
logic programming approach, implemented using ASP, it is possible to synthesise
new rules and revise the existing ones. The learning mechanism is guided by the
designer who describes the desired properties of the framework through use
cases, comprising (i) event traces that capture possible scenarios, and (ii) a
state that describes the desired outcome. The learning process then proposes
additional rules, or changes to current rules, to satisfy the constraints
expressed in the use cases. Thus, the contribution of this paper is a process
for the elaboration and revision of a normative framework by means of a
semi-automatic and iterative process driven from specifications of
(un)desirable behaviour. The process integrates a novel and general methodology
for theory revision based on ASP.
| Domenico Corapi, Alessandra Russo, Marina De Vos, Julian Padget, Ken
Satoh | null | 1107.4967 | null | null |
Submodular Optimization for Efficient Semi-supervised Support Vector
Machines | cs.LG cs.AI | In this work we present a quadratic programming approximation of the
Semi-Supervised Support Vector Machine (S3VM) problem, namely approximate
QP-S3VM, which can be efficiently solved using off-the-shelf optimization
packages. We prove that this approximate formulation establishes a relation
between the low density separation and the graph-based models of
semi-supervised learning (SSL) which is important to develop a unifying
framework for semi-supervised learning methods. Furthermore, we propose the
novel idea of representing SSL problems as submodular set functions and use
efficient submodular optimization algorithms to solve them. Using this new idea
we develop a representation of the approximate QP-S3VM as a maximization of a
submodular set function which makes it possible to optimize using efficient
greedy algorithms. We demonstrate that the proposed methods are accurate and
provide significant improvement in time complexity over the state of the art in
the literature.
| Wael Emara and Mehmed Kantardzic | null | 1107.5236 | null | null |
Multi Layer Analysis | cs.CV cs.DS cs.LG q-bio.QM | This thesis presents a new methodology to analyze one-dimensional signals
through a new approach called Multi Layer Analysis (MLA for short). It also
provides some new insights on the relationship between one-dimensional signals
processed by MLA and tree kernels, test of randomness and signal processing
techniques. The MLA approach has a wide range of application to the fields of
pattern discovery and matching, computational biology and many other areas of
computer science and signal processing. This thesis also includes some
applications of this approach to real problems in biology and seismology.
| Luca Pinello | null | 1107.5349 | null | null |
Axioms for Rational Reinforcement Learning | cs.LG | We provide a formal, simple and intuitive theory of rational decision making
including sequential decisions that affect the environment. The theory has a
geometric flavor, which makes the arguments easy to visualize and understand.
Our theory is for complete decision makers, which means that they have a
complete set of preferences. Our main result shows that a complete rational
decision maker implicitly has a probabilistic model of the environment. We have
a countable version of this result that sheds light on the issue of countable
vs. finite additivity by showing how it depends on the geometry of the space
which we have preferences over. This is achieved through fruitfully connecting
rationality with the Hahn-Banach Theorem. The theory presented here can be
viewed as a formalization and extension of the betting odds approach to
probability of Ramsey and De Finetti.
| Peter Sunehag and Marcus Hutter | null | 1107.5520 | null | null |
Universal Prediction of Selected Bits | cs.LG cs.IT math.IT | Many learning tasks can be viewed as sequence prediction problems. For
example, online classification can be converted to sequence prediction with the
sequence being pairs of input/target data and where the goal is to correctly
predict the target data given input data and previous input/target pairs.
Solomonoff induction is known to solve the general sequence prediction problem,
but only if the entire sequence is sampled from a computable distribution. In
the case of classification and discriminative learning though, only the targets
need be structured (given the inputs). We show that the normalised version of
Solomonoff induction can still be used in this case, and more generally that it
can detect any recursive sub-pattern (regularity) within an otherwise
completely unstructured sequence. It is also shown that the unnormalised
version can fail to predict very simple recursive sub-patterns.
| Tor Lattimore and Marcus Hutter and Vaibhav Gavane | null | 1107.5531 | null | null |
Asymptotically Optimal Agents | cs.AI cs.LG | Artificial general intelligence aims to create agents capable of learning to
solve arbitrary interesting problems. We define two versions of asymptotic
optimality and prove that no agent can satisfy the strong version, while in some
cases, depending on discounting, there does exist a non-computable weak
asymptotically optimal agent.
| Tor Lattimore and Marcus Hutter | null | 1107.5537 | null | null |