categories | doi | id | year | venue | link | updated | published | title | abstract | authors
string | string | string | float64 | string | string | string | string | string | string | sequence |
---|---|---|---|---|---|---|---|---|---|---|
cs.LG cs.DS cs.NA math.NA stat.ML | null | 1107.0789 | null | null | http://arxiv.org/pdf/1107.0789v7 | 2013-10-28T06:02:12Z | 2011-07-05T06:03:44Z | Distributed Matrix Completion and Robust Factorization | If learning methods are to scale to the massive sizes of modern datasets, it
is essential for the field of machine learning to embrace parallel and
distributed computing. Inspired by the recent development of matrix
factorization methods with rich theory but poor computational complexity and by
the relative ease of mapping matrices onto distributed architectures, we
introduce a scalable divide-and-conquer framework for noisy matrix
factorization. We present a thorough theoretical analysis of this framework in
which we characterize the statistical errors introduced by the "divide" step
and control their magnitude in the "conquer" step, so that the overall
algorithm enjoys high-probability estimation guarantees comparable to those of
its base algorithm. We also present experiments in collaborative filtering and
video background modeling that demonstrate the near-linear to superlinear
speed-ups attainable with this approach.
| [
"Lester Mackey, Ameet Talwalkar, Michael I. Jordan",
"['Lester Mackey' 'Ameet Talwalkar' 'Michael I. Jordan']"
] |
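The divide-and-conquer recipe in the abstract above can be illustrated in a few lines. A minimal sketch, assuming numpy, truncated SVD as the base factorization, and a single shared column basis as the conquer step; the rank `r`, block count `t`, and the use of the first block's basis are illustrative choices, not the paper's exact DFC variants.

```python
import numpy as np

def base_factor(M, r):
    """Base noisy matrix factorization: rank-r truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def dfc_sketch(M, r, t):
    """Divide: factor t column blocks independently (parallelizable).
    Conquer: project every block estimate onto one shared column basis,
    so the concatenated estimate is again (approximately) rank r."""
    blocks = np.array_split(M, t, axis=1)
    ests = [base_factor(B, r) for B in blocks]
    Q = np.linalg.svd(ests[0], full_matrices=False)[0][:, :r]  # shared basis
    return np.hstack([Q @ (Q.T @ E) for E in ests])

rng = np.random.default_rng(0)
L = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 300))
M = L + 0.1 * rng.standard_normal(L.shape)
err = np.linalg.norm(dfc_sketch(M, r=5, t=4) - L) / np.linalg.norm(L)
print(f"relative error: {err:.3f}")
```

The block factorizations are the "divide" step the abstract says maps well onto distributed architectures; only the cheap projection has to see all the pieces.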
cs.LG | null | 1107.0922 | null | null | http://arxiv.org/pdf/1107.0922v1 | 2011-07-05T16:56:53Z | 2011-07-05T16:56:53Z | GraphLab: A Distributed Framework for Machine Learning in the Cloud | Machine Learning (ML) techniques are indispensable in a wide range of fields.
Unfortunately, the exponential growth of dataset sizes is rapidly extending
the runtime of sequential algorithms and threatening to slow future progress in
ML. With the promise of affordable large-scale parallel computing, Cloud
systems offer a viable platform to resolve the computational challenges in ML.
However, designing and implementing efficient, provably correct distributed ML
algorithms is often prohibitively challenging. To enable ML researchers to
easily and efficiently use parallel systems, we introduced the GraphLab
abstraction which is designed to represent the computational patterns in ML
algorithms while permitting efficient parallel and distributed implementations.
In this paper we provide a formal description of the GraphLab parallel
abstraction and present an efficient distributed implementation. We conduct a
comprehensive evaluation of GraphLab on three state-of-the-art ML algorithms
using real large-scale data and a 64 node EC2 cluster of 512 processors. We
find that GraphLab achieves orders of magnitude performance gains over Hadoop
while performing comparably to, or better than, hand-tuned MPI implementations.
| [
"['Yucheng Low' 'Joseph Gonzalez' 'Aapo Kyrola' 'Danny Bickson'\n 'Carlos Guestrin']",
"Yucheng Low, Joseph Gonzalez, Aapo Kyrola, Danny Bickson, Carlos\n Guestrin"
] |
cs.LG math.ST stat.TH | null | 1107.1270 | null | null | http://arxiv.org/pdf/1107.1270v3 | 2012-03-04T04:42:39Z | 2011-07-06T22:21:57Z | High-Dimensional Gaussian Graphical Model Selection: Walk Summability
and Local Separation Criterion | We consider the problem of high-dimensional Gaussian graphical model
selection. We identify a set of graphs for which an efficient estimation
algorithm exists, and this algorithm is based on thresholding of empirical
conditional covariances. Under a set of transparent conditions, we establish
structural consistency (or sparsistency) for the proposed algorithm, when the
number of samples $n=\Omega(J_{\min}^{-2}\log p)$, where $p$ is the number of
variables and $J_{\min}$ is the minimum (absolute) edge potential of the graphical
model. The sufficient conditions for sparsistency are based on the notion of
walk-summability of the model and the presence of sparse local vertex
separators in the underlying graph. We also derive novel non-asymptotic
necessary conditions on the number of samples required for sparsistency.
| [
"['Animashree Anandkumar' 'Vincent Y. F. Tan' 'Alan. S. Willsky']",
"Animashree Anandkumar, Vincent Y. F. Tan and Alan. S. Willsky"
] |
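A minimal sketch of the conditional-covariance thresholding idea from the abstract above, assuming numpy; the threshold `xi` and the separator-size bound `eta` are illustrative, and the brute-force search over small separators is only meant for small `p`.

```python
import numpy as np
from itertools import combinations

def cond_cov(Sig, i, j, S):
    """Empirical conditional covariance Cov(X_i, X_j | X_S)."""
    if not S:
        return Sig[i, j]
    S = list(S)
    return Sig[i, j] - Sig[i, S] @ np.linalg.solve(Sig[np.ix_(S, S)], Sig[S, j])

def select_edges(X, eta=2, xi=0.05):
    """Declare an edge (i,j) when no candidate separator of size <= eta
    drives the empirical conditional covariance below the threshold xi."""
    p = X.shape[1]
    Sig = np.cov(X, rowvar=False)
    edges = []
    for i, j in combinations(range(p), 2):
        rest = [k for k in range(p) if k not in (i, j)]
        stat = min(abs(cond_cov(Sig, i, j, S))
                   for r in range(eta + 1) for S in combinations(rest, r))
        if stat > xi:
            edges.append((i, j))
    return edges
```

The sparse-local-separator condition in the abstract is what justifies stopping the separator search at a small `eta`.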
cs.LG stat.ML | null | 1107.1283 | null | null | http://arxiv.org/pdf/1107.1283v2 | 2011-11-08T15:42:32Z | 2011-07-07T02:33:31Z | Spectral Methods for Learning Multivariate Latent Tree Structure | This work considers the problem of learning the structure of multivariate
linear tree models, which include a variety of directed tree graphical models
with continuous, discrete, and mixed latent variables such as linear-Gaussian
models, hidden Markov models, Gaussian mixture models, and Markov evolutionary
trees. The setting is one where we only have samples from certain observed
variables in the tree, and our goal is to estimate the tree structure (i.e.,
the graph of how the underlying hidden variables are connected to each other
and to the observed variables). We propose the Spectral Recursive Grouping
algorithm, an efficient and simple bottom-up procedure for recovering the tree
structure from independent samples of the observed variables. Our finite sample
size bounds for exact recovery of the tree structure reveal certain natural
dependencies on statistical and structural properties of the
underlying joint distribution. Furthermore, our sample complexity guarantees
have no explicit dependence on the dimensionality of the observed variables,
making the algorithm applicable to many high-dimensional settings. At the heart
of our algorithm is a spectral quartet test for determining the relative
topology of a quartet of variables from second-order statistics.
| [
"Animashree Anandkumar, Kamalika Chaudhuri, Daniel Hsu, Sham M. Kakade,\n Le Song, Tong Zhang",
"['Animashree Anandkumar' 'Kamalika Chaudhuri' 'Daniel Hsu'\n 'Sham M. Kakade' 'Le Song' 'Tong Zhang']"
] |
cs.AI cs.IR cs.LG | 10.1007/978-3-642-20161-5_41 | 1107.1322 | null | null | http://arxiv.org/abs/1107.1322v3 | 2011-08-29T17:45:53Z | 2011-07-07T09:09:19Z | Text Classification: A Sequential Reading Approach | We propose to model the text classification process as a sequential decision
process. In this process, an agent learns to classify documents into topics
while reading the document sentences sequentially and learns to stop as soon as
enough information has been read to decide. The proposed algorithm models
text classification as a Markov Decision Process and learns by
using Reinforcement Learning. Experiments on four different classical
mono-label corpora show that the proposed approach performs comparably to
classical SVM approaches for large training sets, and better for small training
sets. In addition, the model automatically adapts its reading process to the
quantity of training information provided.
| [
"['Gabriel Dulac-Arnold' 'Ludovic Denoyer' 'Patrick Gallinari']",
"Gabriel Dulac-Arnold, Ludovic Denoyer, Patrick Gallinari"
] |
cs.CC cs.DS cs.LG | null | 1107.1358 | null | null | http://arxiv.org/pdf/1107.1358v2 | 2012-02-02T21:40:04Z | 2011-07-07T11:58:52Z | On the Furthest Hyperplane Problem and Maximal Margin Clustering | This paper introduces the Furthest Hyperplane Problem (FHP), which is an
unsupervised counterpart of Support Vector Machines. Given a set of $n$ points in
$\mathbb{R}^d$, the objective is to produce the hyperplane (passing through the origin)
which maximizes the separation margin, that is, the minimal distance between
the hyperplane and any input point. To the best of our knowledge, this is the
first paper achieving provable results regarding FHP. We provide both lower and
upper bounds to this NP-hard problem. First, we give a simple randomized
algorithm whose running time is $n^{O(1/\theta^2)}$, where $\theta$ is the optimal
separation margin. We show that its exponential dependency on $1/\theta^2$ is
tight, up to sub-polynomial factors, assuming SAT cannot be solved in
sub-exponential time. Next, we give an efficient approximation algorithm. For
any $\alpha \in [0, 1]$, the algorithm produces a hyperplane whose distance
from at least a $1 - 5\alpha$ fraction of the points is at least $\alpha$ times
the optimal separation margin. Finally, we show that FHP does not admit a PTAS
by presenting a gap preserving reduction from a particular version of the PCP
theorem.
| [
"['Zohar Karnin' 'Edo Liberty' 'Shachar Lovett' 'Roy Schwartz'\n 'Omri Weinstein']",
"Zohar Karnin, Edo Liberty, Shachar Lovett, Roy Schwartz, Omri\n Weinstein"
] |
cs.LG cs.NE | null | 1107.1564 | null | null | http://arxiv.org/pdf/1107.1564v3 | 2014-03-12T07:08:13Z | 2011-07-08T06:26:03Z | Polyceptron: A Polyhedral Learning Algorithm | In this paper we propose a new algorithm for learning polyhedral classifiers
which we call Polyceptron. It is a Perceptron-like algorithm that updates
the parameters only when the current classifier misclassifies any training
data. We give both batch and online versions of the Polyceptron algorithm. Finally
we give experimental results to show the effectiveness of our approach.
| [
"['Naresh Manwani' 'P. S. Sastry']",
"Naresh Manwani and P. S. Sastry"
] |
stat.ML cs.LG math.ST stat.TH | 10.1214/12-AOS1009 | 1107.1736 | null | null | http://arxiv.org/abs/1107.1736v4 | 2012-08-20T05:38:19Z | 2011-07-08T21:35:48Z | High-dimensional structure estimation in Ising models: Local separation
criterion | We consider the problem of high-dimensional Ising (graphical) model
selection. We propose a simple algorithm for structure estimation based on the
thresholding of the empirical conditional variation distances. We introduce a
novel criterion for tractable graph families, where this method is efficient,
based on the presence of sparse local separators between node pairs in the
underlying graph. For such graphs, the proposed algorithm has a sample
complexity of $n=\Omega(J_{\min}^{-2}\log p)$, where $p$ is the number of
variables, and $J_{\min}$ is the minimum (absolute) edge potential in the
model. We also establish nonasymptotic necessary and sufficient conditions for
structure estimation.
| [
"['Animashree Anandkumar' 'Vincent Y. F. Tan' 'Furong Huang'\n 'Alan S. Willsky']",
"Animashree Anandkumar, Vincent Y. F. Tan, Furong Huang, Alan S.\n Willsky"
] |
math.OC cs.LG cs.SY | null | 1107.1744 | null | null | http://arxiv.org/pdf/1107.1744v2 | 2011-10-08T06:06:43Z | 2011-07-08T22:18:05Z | Stochastic convex optimization with bandit feedback | This paper addresses the problem of minimizing a convex, Lipschitz function
$f$ over a convex, compact set $\mathcal{X}$ under a stochastic bandit feedback
model. In this model, the algorithm is allowed to observe noisy realizations of
the function value $f(x)$ at any query point $x \in \mathcal{X}$. The quantity of
interest is the regret of the algorithm, which is the sum of the function
values at the algorithm's query points minus the optimal function value. We
demonstrate a generalization of the ellipsoid algorithm that incurs
$\tilde{O}(\mathrm{poly}(d)\sqrt{T})$ regret. Since any algorithm has regret at least
$\Omega(\sqrt{T})$ on this problem, our algorithm is optimal in terms of the
scaling with $T$.
| [
"['Alekh Agarwal' 'Dean P. Foster' 'Daniel Hsu' 'Sham M. Kakade'\n 'Alexander Rakhlin']",
"Alekh Agarwal, Dean P. Foster, Daniel Hsu, Sham M. Kakade, Alexander\n Rakhlin"
] |
cs.LG stat.ML | null | 1107.2021 | null | null | http://arxiv.org/pdf/1107.2021v3 | 2012-08-13T16:38:44Z | 2011-07-11T13:30:58Z | Multi-Instance Learning with Any Hypothesis Class | In the supervised learning setting termed Multiple-Instance Learning (MIL),
the examples are bags of instances, and the bag label is a function of the
labels of its instances. Typically, this function is the Boolean OR. The
learner observes a sample of bags and the bag labels, but not the instance
labels that determine the bag labels. The learner is then required to emit a
classification rule for bags based on the sample. MIL has numerous
applications, and many heuristic algorithms have been used successfully on this
problem, each adapted to specific settings or applications. In this work we
provide a unified theoretical analysis for MIL, which holds for any underlying
hypothesis class, regardless of a specific application or problem domain. We
show that the sample complexity of MIL is only poly-logarithmically dependent
on the size of the bag, for any underlying hypothesis class. In addition, we
introduce a new PAC-learning algorithm for MIL, which uses a regular supervised
learning algorithm as an oracle. We prove that efficient PAC-learning for MIL
can be generated from any efficient non-MIL supervised learning algorithm that
handles one-sided error. The computational complexity of the resulting
algorithm is only polynomially dependent on the bag size.
| [
"['Sivan Sabato' 'Naftali Tishby']",
"Sivan Sabato and Naftali Tishby"
] |
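The Boolean-OR bag semantics described above can be made concrete with a deliberately naive baseline (not the paper's oracle-based PAC construction): label every instance with its bag's label, train an ordinary supervised classifier, and OR the instance predictions. Assumes scikit-learn; the data generator is synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_mil_naive(bags, bag_labels):
    """Naive MIL baseline: give every instance its bag's label and train
    a plain supervised classifier on the pooled instances."""
    X = np.vstack(bags)
    y = np.concatenate([[lab] * len(b) for lab, b in zip(bag_labels, bags)])
    return LogisticRegression(max_iter=1000).fit(X, y)

def predict_bag(clf, bag):
    return int(clf.predict(bag).max())  # Boolean OR over instance predictions

rng = np.random.default_rng(0)
bags, labels = [], []
for _ in range(40):
    lab = int(rng.integers(2))
    b = rng.standard_normal((6, 2))
    if lab:                 # positive bag: at least one shifted instance
        b[0] += 3.0
    bags.append(b)
    labels.append(lab)
clf = fit_mil_naive(bags, labels)
print(np.mean([predict_bag(clf, b) == l for b, l in zip(bags, labels)]))
```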
cs.LG cs.DS | null | 1107.2379 | null | null | http://arxiv.org/pdf/1107.2379v5 | 2014-08-29T18:52:16Z | 2011-07-12T19:27:12Z | Data Stability in Clustering: A Closer Look | We consider the model introduced by Bilu and Linial (2010), who study
problems for which the optimal clustering does not change when distances are
perturbed. They show that even when a problem is NP-hard, it is sometimes
possible to obtain efficient algorithms for instances resilient to certain
multiplicative perturbations, e.g. on the order of $O(\sqrt{n})$ for max-cut
clustering. Awasthi et al. (2010) consider center-based objectives, and Balcan
and Liang (2011) analyze the $k$-median and min-sum objectives, giving
efficient algorithms for instances resilient to certain constant multiplicative
perturbations.
Here, we are motivated by the question of to what extent these assumptions
can be relaxed while allowing for efficient algorithms. We show there is little
room to improve these results by giving NP-hardness lower bounds for both the
$k$-median and min-sum objectives. On the other hand, we show that constant
multiplicative resilience parameters can be so strong as to make the clustering
problem trivial, leaving only a narrow range of resilience parameters for which
clustering is interesting. We also consider a model of additive perturbations
and give a correspondence between additive and multiplicative notions of
stability. Our results provide a close examination of the consequences of
assuming stability in data.
| [
"Shalev Ben-David, Lev Reyzin",
"['Shalev Ben-David' 'Lev Reyzin']"
] |
cs.CC cs.LG | null | 1107.2444 | null | null | http://arxiv.org/pdf/1107.2444v1 | 2011-07-13T00:53:23Z | 2011-07-13T00:53:23Z | Private Data Release via Learning Thresholds | This work considers computationally efficient privacy-preserving data
release. We study the task of analyzing a database containing sensitive
information about individual participants. Given a set of statistical queries
on the data, we want to release approximate answers to the queries while also
guaranteeing differential privacy---protecting each participant's sensitive
data.
Our focus is on computationally efficient data release algorithms; we seek
algorithms whose running time is polynomial, or at least sub-exponential, in
the data dimensionality. Our primary contribution is a computationally
efficient reduction from differentially private data release for a class of
counting queries, to learning thresholded sums of predicates from a related
class.
We instantiate this general reduction with a variety of algorithms for
learning thresholds. These instantiations yield several new results for
differentially private data release. As two examples, taking $\{0,1\}^d$ to be the
data domain (of dimension $d$), we obtain differentially private algorithms for:
(*) Releasing all $k$-way conjunctions. For any given $k$, the resulting data
release algorithm has bounded error as long as the database is of size at least
$d^{O(\sqrt{k\log(k\log d)})}$. The running time is polynomial in the database
size.
(*) Releasing a $(1-\gamma)$-fraction of all parity queries. For any $\gamma
\geq \mathrm{poly}(1/d)$, the algorithm has bounded error as long as the database is of
size at least $\mathrm{poly}(d)$. The running time is polynomial in the database size.
Several other instantiations yield further results for privacy-preserving
data release. Of the two results highlighted above, the first learning
algorithm uses techniques for representing thresholded sums of predicates as
low-degree polynomial threshold functions. The second learning algorithm is
based on Jackson's Harmonic Sieve algorithm [Jackson 1997].
| [
"Moritz Hardt and Guy N. Rothblum and Rocco A. Servedio",
"['Moritz Hardt' 'Guy N. Rothblum' 'Rocco A. Servedio']"
] |
stat.ML cs.LG | null | 1107.2462 | null | null | http://arxiv.org/pdf/1107.2462v2 | 2011-11-10T04:24:38Z | 2011-07-13T04:28:32Z | Statistical Topic Models for Multi-Label Document Classification | Machine learning approaches to multi-label document classification have to
date largely relied on discriminative modeling techniques such as support
vector machines. A drawback of these approaches is that performance rapidly
drops off as the total number of labels and the number of labels per document
increase. This problem is amplified when the label frequencies exhibit the type
of highly skewed distributions that are often observed in real-world datasets.
In this paper we investigate a class of generative statistical topic models for
multi-label documents that associate individual word tokens with different
labels. We investigate the advantages of this approach relative to
discriminative models, particularly with respect to classification problems
involving large numbers of relatively rare labels. We compare the performance
of generative and discriminative approaches on document labeling tasks ranging
from datasets with several thousand labels to datasets with tens of labels. The
experimental results indicate that probabilistic generative models can achieve
competitive multi-label classification performance compared to discriminative
methods, and have advantages for datasets with many labels and skewed label
frequencies.
| [
"['Timothy N. Rubin' 'America Chambers' 'Padhraic Smyth' 'Mark Steyvers']",
"Timothy N. Rubin, America Chambers, Padhraic Smyth and Mark Steyvers"
] |
math.OC cs.LG cs.SY math.ST stat.TH | null | 1107.2487 | null | null | http://arxiv.org/pdf/1107.2487v2 | 2012-08-04T00:13:11Z | 2011-07-13T08:34:50Z | Provably Safe and Robust Learning-Based Model Predictive Control | Controller design faces a trade-off between robustness and performance, and
the reliability of linear controllers has caused many practitioners to focus on
the former. However, there is renewed interest in improving system performance
to deal with growing energy constraints. This paper describes a learning-based
model predictive control (LBMPC) scheme that provides deterministic guarantees
on robustness, while statistical identification tools are used to identify
richer models of the system in order to improve performance; the benefits of
this framework are that it handles state and input constraints, optimizes
system performance with respect to a cost function, and can be designed to use
a wide variety of parametric or nonparametric statistical tools. The main
insight of LBMPC is that safety and performance can be decoupled under
reasonable conditions in an optimization framework by maintaining two models of
the system. The first is an approximate model with bounds on its uncertainty,
and the second model is updated by statistical methods. LBMPC improves
performance by choosing inputs that minimize a cost subject to the learned
dynamics, and it ensures safety and robustness by checking whether these same
inputs keep the approximate model stable when it is subject to uncertainty.
Furthermore, we show that if the system is sufficiently excited, then the LBMPC
control action probabilistically converges to that of an MPC computed using the
true dynamics.
| [
"['Anil Aswani' 'Humberto Gonzalez' 'S. Shankar Sastry' 'Claire Tomlin']",
"Anil Aswani, Humberto Gonzalez, S. Shankar Sastry, Claire Tomlin"
] |
cs.LG | null | 1107.2490 | null | null | http://arxiv.org/pdf/1107.2490v2 | 2011-12-22T06:43:31Z | 2011-07-13T08:57:29Z | Towards Optimal One Pass Large Scale Learning with Averaged Stochastic
Gradient Descent | For large scale learning problems, it is desirable to obtain the
optimal model parameters by going through the data in only one pass. Polyak and
Juditsky (1992) showed that asymptotically the test performance of the simple
average of the parameters obtained by stochastic gradient descent (SGD) is as
good as that of the parameters which minimize the empirical cost. However, to
our knowledge, despite its optimal asymptotic convergence rate, averaged SGD
(ASGD) received little attention in recent research on large scale learning.
One possible reason is that it may take a prohibitively large number of
training samples for ASGD to reach its asymptotic region for most real
problems. In this paper, we present a finite sample analysis for the method of
Polyak and Juditsky (1992). Our analysis shows that it indeed usually takes a
huge number of samples for ASGD to reach its asymptotic region for an improperly
chosen learning rate. More importantly, based on our analysis, we propose a
simple way to properly set the learning rate so that it takes a reasonable amount
of data for ASGD to reach its asymptotic region. We compare ASGD using our
proposed learning rate with other well known algorithms for training large
scale linear classifiers. The experiments clearly show the superiority of ASGD.
| [
"Wei Xu",
"['Wei Xu']"
] |
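A sketch of one-pass averaged SGD on least squares, assuming numpy. The step-size schedule $\eta_t = a(1+bt)^{-c}$ and its constants are illustrative stand-ins, not the paper's prescription; the point is only to show the Polyak-Juditsky iterate average alongside plain SGD.

```python
import numpy as np

def asgd(grad, w0, T, a=0.2, b=1e-2, c=0.75):
    """One pass of SGD with eta_t = a*(1 + b*t)**(-c), plus the
    Polyak-Juditsky running average of the iterates."""
    w, w_bar = w0.copy(), w0.copy()
    for t in range(1, T + 1):
        w = w - a * (1.0 + b * t) ** (-c) * grad(w, t - 1)
        w_bar += (w - w_bar) / t          # running average of w_1 .. w_t
    return w_bar

rng = np.random.default_rng(1)
n, d = 20000, 5
X = rng.standard_normal((n, d))
w_true = np.arange(1.0, d + 1)
y = X @ w_true + 0.1 * rng.standard_normal(n)
g = lambda w, i: (X[i] @ w - y[i]) * X[i]   # single-sample gradient
print(asgd(g, np.zeros(d), n).round(2))     # close to [1, 2, 3, 4, 5]
```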
cs.DS cs.LG math.ST stat.TH | null | 1107.2700 | null | null | http://arxiv.org/pdf/1107.2700v3 | 2014-09-14T21:20:37Z | 2011-07-13T23:26:53Z | Learning $k$-Modal Distributions via Testing | A $k$-modal probability distribution over the discrete domain $\{1,...,n\}$
is one whose histogram has at most $k$ "peaks" and "valleys." Such
distributions are natural generalizations of monotone ($k=0$) and unimodal
($k=1$) probability distributions, which have been intensively studied in
probability theory and statistics.
In this paper we consider the problem of \emph{learning} (i.e., performing
density estimation of) an unknown $k$-modal distribution with respect to the
$L_1$ distance. The learning algorithm is given access to independent samples
drawn from an unknown $k$-modal distribution $p$, and it must output a
hypothesis distribution $\widehat{p}$ such that with high probability the total
variation distance between $p$ and $\widehat{p}$ is at most $\epsilon.$ Our
main goal is to obtain \emph{computationally efficient} algorithms for this
problem that use (close to) an information-theoretically optimal number of
samples.
We give an efficient algorithm for this problem that runs in time
$\mathrm{poly}(k,\log(n),1/\epsilon)$. For $k \leq \tilde{O}(\log n)$, the
number of samples used by our algorithm is very close (within an
$\tilde{O}(\log(1/\epsilon))$ factor) to being information-theoretically
optimal. Prior to this work computationally efficient algorithms were known
only for the cases $k=0,1$ \cite{Birge:87b,Birge:97}.
A novel feature of our approach is that our learning algorithm crucially uses
a new algorithm for \emph{property testing of probability distributions} as a
key subroutine. The learning algorithm uses the property tester to efficiently
decompose the $k$-modal distribution into $k$ (near-)monotone distributions,
which are easier to learn.
| [
"['Constantinos Daskalakis' 'Ilias Diakonikolas' 'Rocco A. Servedio']",
"Constantinos Daskalakis, Ilias Diakonikolas, Rocco A. Servedio"
] |
cs.DS cs.LG math.ST stat.TH | null | 1107.2702 | null | null | http://arxiv.org/pdf/1107.2702v4 | 2015-02-17T01:45:53Z | 2011-07-13T23:30:39Z | Learning Poisson Binomial Distributions | We consider a basic problem in unsupervised learning: learning an unknown
\emph{Poisson Binomial Distribution}. A Poisson Binomial Distribution (PBD)
over $\{0,1,\dots,n\}$ is the distribution of a sum of $n$ independent
Bernoulli random variables which may have arbitrary, potentially non-equal,
expectations. These distributions were first studied by S. Poisson in 1837
\cite{Poisson:37} and are a natural $n$-parameter generalization of the
familiar Binomial Distribution. Surprisingly, prior to our work this basic
learning problem was poorly understood, and known results for it were far from
optimal.
We essentially settle the complexity of the learning problem for this basic
class of distributions. As our first main result we give a highly efficient
algorithm which learns to $\epsilon$-accuracy (with respect to the total variation
distance) using $\tilde{O}(1/\epsilon^3)$ samples \emph{independent of $n$}. The
running time of the algorithm is \emph{quasilinear} in the size of its input
data, i.e., $\tilde{O}(\log(n)/\epsilon^3)$ bit-operations. (Observe that each draw
from the distribution is a $\log(n)$-bit string.) Our second main result is a
\emph{proper} learning algorithm that learns to $\epsilon$-accuracy using
$\tilde{O}(1/\epsilon^2)$ samples, and runs in time
$(1/\epsilon)^{\mathrm{poly}(\log(1/\epsilon))} \cdot \log n$. This is nearly optimal, since any algorithm for this
problem must use $\Omega(1/\epsilon^2)$ samples. We also give positive and
negative results for some extensions of this learning problem to weighted sums
of independent Bernoulli random variables.
| [
"['Constantinos Daskalakis' 'Ilias Diakonikolas' 'Rocco A. Servedio']",
"Constantinos Daskalakis, Ilias Diakonikolas, Rocco A. Servedio"
] |
cs.CV cs.LG | null | 1107.2807 | null | null | http://arxiv.org/pdf/1107.2807v1 | 2011-07-14T12:51:10Z | 2011-07-14T12:51:10Z | Modelling Distributed Shape Priors by Gibbs Random Fields of Second
Order | We analyse the potential of Gibbs Random Fields for shape prior modelling. We
show that the expressive power of second order GRFs is already sufficient to
express simple shapes and spatial relations between them simultaneously. This
makes it possible to model and recognise complex shapes as spatial compositions of simpler
parts.
| [
"Boris Flach and Dmitrij Schlesinger",
"['Boris Flach' 'Dmitrij Schlesinger']"
] |
cs.LG cs.DS cs.IT cs.SI math.IT stat.ML | null | 1107.3059 | null | null | http://arxiv.org/pdf/1107.3059v3 | 2014-02-10T07:03:28Z | 2011-07-15T12:47:02Z | From Small-World Networks to Comparison-Based Search | The problem of content search through comparisons has recently received
considerable attention. In short, a user searching for a target object
navigates through a database in the following manner: the user is asked to
select the object most similar to her target from a small list of objects. A
new object list is then presented to the user based on her earlier selection.
This process is repeated until the target is included in the list presented, at
which point the search terminates. This problem is known to be strongly related
to the small-world network design problem.
However, contrary to prior work, which focuses on cases where objects in the
database are equally popular, we consider here the case where the demand for
objects may be heterogeneous. We show that, under heterogeneous demand, the
small-world network design problem is NP-hard. Given the above negative result,
we propose a novel mechanism for small-world design and provide an upper bound
on its performance under heterogeneous demand. The above mechanism has a
natural equivalent in the context of content search through comparisons, and we
establish both an upper bound and a lower bound for the performance of this
mechanism. These bounds are intuitively appealing, as they depend on the
entropy of the demand as well as its doubling constant, a quantity capturing
the topology of the set of target objects. They also illustrate interesting
connections between comparison-based search and classic results from information
theory. Finally, we propose an adaptive learning algorithm for content search
that meets the performance guarantees achieved by the above mechanisms.
| [
"Amin Karbasi, Stratis Ioannidis, Laurent Massoulie",
"['Amin Karbasi' 'Stratis Ioannidis' 'Laurent Massoulie']"
] |
cs.CC cs.LG cs.SY math.OC | null | 1107.3090 | null | null | http://arxiv.org/pdf/1107.3090v2 | 2012-10-04T13:54:42Z | 2011-07-15T15:33:15Z | On the Computational Complexity of Stochastic Controller Optimization in
POMDPs | We show that the problem of finding an optimal stochastic 'blind' controller
in a Markov decision process is an NP-hard problem. The corresponding decision
problem is NP-hard, in PSPACE, and SQRT-SUM-hard, hence placing it in NP would
imply breakthroughs in long-standing open problems in computer science. Our
result establishes that the more general problem of stochastic controller
optimization in POMDPs is also NP-hard. Nonetheless, we outline a special case
that is convex and admits efficient global solutions.
| [
"Nikos Vlassis, Michael L. Littman, David Barber",
"['Nikos Vlassis' 'Michael L. Littman' 'David Barber']"
] |
stat.ML cs.LG stat.ME | null | 1107.3133 | null | null | http://arxiv.org/pdf/1107.3133v2 | 2011-09-06T03:18:45Z | 2011-07-15T19:05:48Z | Robust Kernel Density Estimation | We propose a method for nonparametric density estimation that exhibits
robustness to contamination of the training sample. This method achieves
robustness by combining a traditional kernel density estimator (KDE) with ideas
from classical $M$-estimation. We interpret the KDE based on a radial, positive
semi-definite kernel as a sample mean in the associated reproducing kernel
Hilbert space. Since the sample mean is sensitive to outliers, we estimate it
robustly via $M$-estimation, yielding a robust kernel density estimator (RKDE).
An RKDE can be computed efficiently via a kernelized iteratively re-weighted
least squares (IRWLS) algorithm. Necessary and sufficient conditions are given
for kernelized IRWLS to converge to the global minimizer of the $M$-estimator
objective function. The robustness of the RKDE is demonstrated with a
representer theorem, the influence function, and experimental results for
density estimation and anomaly detection.
| [
"JooSeuk Kim and Clayton D. Scott",
"['JooSeuk Kim' 'Clayton D. Scott']"
] |
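A minimal numpy sketch of the kernelized IRWLS loop described in the abstract above: the Gaussian-kernel KDE is the mean of the kernel features, and a Huber-type loss turns it into a weighted mean computed entirely from the Gram matrix. Bandwidth, Huber threshold, and iteration count are illustrative.

```python
import numpy as np

def rkde_weights(X, sigma=1.0, huber_a=1.0, n_iter=30):
    """Kernelized IRWLS for a robust KDE. Returns sample weights w with
    f = sum_i w_i k(x_i, .); the standard KDE corresponds to w_i = 1/n."""
    n = len(X)
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-D2 / (2.0 * sigma ** 2))
    w = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        # RKHS distances ||Phi(x_i) - f|| via the kernel trick:
        # ||Phi(x_i) - f||^2 = K_ii - 2 (Kw)_i + w' K w
        d = np.sqrt(np.maximum(np.diag(K) - 2.0 * K @ w + w @ K @ w, 0.0))
        # Huber psi(d)/d: 1 inside the threshold, huber_a/d outside
        u = np.where(d > huber_a, huber_a / np.maximum(d, 1e-12), 1.0)
        w = u / u.sum()                   # IRWLS reweighting step
    return w

rng = np.random.default_rng(2)
X = np.vstack([rng.standard_normal((95, 2)), rng.uniform(8, 9, (5, 2))])
w = rkde_weights(X)
print(w[:95].mean() / w[95:].mean())  # > 1: outliers are down-weighted
```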
cs.LG math.ST stat.ML stat.TH | null | 1107.3258 | null | null | http://arxiv.org/pdf/1107.3258v1 | 2011-07-16T22:04:13Z | 2011-07-16T22:04:13Z | On Learning Discrete Graphical Models Using Greedy Methods | In this paper, we address the problem of learning the structure of a pairwise
graphical model from samples in a high-dimensional setting. Our first main
result studies the sparsistency, or consistency in sparsity pattern recovery,
properties of a forward-backward greedy algorithm as applied to general
statistical models. As a special case, we then apply this algorithm to learn
the structure of a discrete graphical model via neighborhood estimation. As a
corollary of our general result, we derive sufficient conditions on the number
of samples $n$, the maximum node-degree $d$ and the problem size $p$, as well as
other conditions on the model parameters, so that the algorithm recovers all
the edges with high probability. Our result guarantees graph selection for
samples scaling as $n = \Omega(d^2 \log p)$, in contrast to existing
convex-optimization based algorithms that require a sample complexity of
$\Omega(d^3 \log p)$. Further, the greedy algorithm only requires a restricted
strong convexity condition which is typically milder than irrepresentability
assumptions. We corroborate these results using numerical simulations at the
end.
| [
"Ali Jalali and Chris Johnson and Pradeep Ravikumar",
"['Ali Jalali' 'Chris Johnson' 'Pradeep Ravikumar']"
] |
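A sketch of a forward-backward greedy loop of the kind analyzed above, applied to sparse linear regression (as in neighborhood estimation), assuming numpy. The stopping threshold `eps` and backward factor `nu` are illustrative, and the backward criterion is simplified relative to the paper's.

```python
import numpy as np

def sq_loss(X, y, S):
    if not S:
        return 0.5 * float(np.mean(y ** 2))
    A = X[:, sorted(S)]
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return 0.5 * float(np.mean((y - A @ w) ** 2))

def fb_greedy(X, y, eps=1e-3, nu=0.5):
    S = set()
    while True:
        # forward step: add the single feature with the largest loss drop
        base = sq_loss(X, y, S)
        cand = set(range(X.shape[1])) - S
        j_best = min(cand, key=lambda j: sq_loss(X, y, S | {j}), default=None)
        if j_best is None or base - sq_loss(X, y, S | {j_best}) < eps:
            return S
        S.add(j_best)
        # backward steps: drop features whose removal is nearly free;
        # nu < 1 keeps the step just added from being removed again
        while len(S) > 1:
            j_worst = min(S, key=lambda j: sq_loss(X, y, S - {j}))
            if sq_loss(X, y, S - {j_worst}) - sq_loss(X, y, S) < nu * eps:
                S.remove(j_worst)
            else:
                break

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 20))
y = X[:, 3] - 2 * X[:, 7] + 0.05 * rng.standard_normal(500)
print(sorted(fb_greedy(X, y)))   # expect [3, 7]
```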
cs.LG | null | 1107.3407 | null | null | http://arxiv.org/pdf/1107.3407v1 | 2011-07-18T12:01:28Z | 2011-07-18T12:01:28Z | Discovering Knowledge using a Constraint-based Language | Discovering pattern sets or global patterns is an attractive issue from the
pattern mining community as a way to provide useful information. By combining
local patterns that satisfy a joint meaning, this approach produces higher-level
patterns that are more useful to the data analyst than the usual local
patterns, while reducing the number of patterns. In parallel, recent works
investigating relationships between data mining and constraint programming (CP)
show that the CP paradigm is a nice framework to model and mine such patterns
in a declarative and generic way. We present a constraint-based language which
enables us to define queries addressing pattern sets and global patterns. The
usefulness of such a declarative approach is highlighted by several examples
coming from the clustering based on associations. This language has been
implemented in the CP framework.
| [
"Patrice Boizumault, Bruno Cr\\'emilleux, Mehdi Khiari, Samir Loudni,\n and Jean-Philippe M\\'etivier",
"['Patrice Boizumault' 'Bruno Crémilleux' 'Mehdi Khiari' 'Samir Loudni'\n 'Jean-Philippe Métivier']"
] |
stat.ML cs.LG | null | 1107.3600 | null | null | http://arxiv.org/pdf/1107.3600v2 | 2011-09-26T10:02:43Z | 2011-07-19T00:48:41Z | Unsupervised K-Nearest Neighbor Regression | In many scientific disciplines structures in high-dimensional data have to be
found, e.g., in stellar spectra, in genome data, or in face recognition tasks.
In this work we present a novel approach to non-linear dimensionality
reduction. It is based on fitting K-nearest neighbor regression to the
unsupervised regression framework for learning low-dimensional manifolds.
Similar to related approaches that are mostly based on kernel methods,
unsupervised K-nearest neighbor (UNN) regression optimizes latent variables
w.r.t. the data space reconstruction error employing the K-nearest neighbor
heuristic. The problem of optimizing latent neighborhoods is difficult to
solve, but the UNN formulation allows the design of efficient strategies that
iteratively embed latent points into fixed neighborhood topologies. UNN is well
suited to sorting high-dimensional data. The iterative variants are
analyzed experimentally.
| [
"['Oliver Kramer']",
"Oliver Kramer"
] |
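A simplified sketch of the iterative embedding strategy described above, assuming numpy: latent positions live on a fixed 1-D grid, and each point greedily takes the slot minimizing the data-space reconstruction error of K-nearest-neighbor regression from latent space. The grid, K, and random insertion order are illustrative, not the paper's exact variants.

```python
import numpy as np

def unn_embed(X, K=2, seed=0):
    """Greedy UNN-style embedding onto a fixed 1-D latent grid
    (ties on the grid are allowed in this sketch)."""
    n = len(X)
    grid = np.linspace(0.0, 1.0, n)
    latent = np.full(n, np.nan)
    for i in np.random.default_rng(seed).permutation(n):
        placed = np.where(~np.isnan(latent))[0]
        if len(placed) < K:              # too few neighbors yet: just place
            latent[i] = grid[len(placed)]
            continue
        best_z, best_err = None, np.inf
        for z in grid:
            # reconstruct x_i from its K nearest already-placed latent points
            nbrs = placed[np.argsort(np.abs(latent[placed] - z))[:K]]
            err = np.linalg.norm(X[i] - X[nbrs].mean(axis=0))
            if err < best_err:
                best_z, best_err = z, err
        latent[i] = best_z
    return latent

X = np.c_[np.linspace(0, 2, 60), np.sin(np.linspace(0, 2, 60))]
print(unn_embed(X)[:5].round(2))
```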
cs.LG cs.CV | null | 1107.3823 | null | null | http://arxiv.org/pdf/1107.3823v1 | 2011-07-19T19:43:10Z | 2011-07-19T19:43:10Z | Weakly Supervised Learning of Foreground-Background Segmentation using
Masked RBMs | We propose an extension of the Restricted Boltzmann Machine (RBM) that allows
the joint shape and appearance of foreground objects in cluttered images to be
modeled independently of the background. We present a learning scheme that
learns this representation directly from cluttered images with only very weak
supervision. The model generates plausible samples and performs
foreground-background segmentation. We demonstrate that representing foreground
objects independently of the background can be beneficial in recognition tasks.
| [
"Nicolas Heess (Informatics), Nicolas Le Roux (INRIA Paris -\n Rocquencourt), John Winn",
"['Nicolas Heess' 'Nicolas Le Roux' 'John Winn']"
] |
math.OC cs.LG | null | 1107.4042 | null | null | http://arxiv.org/pdf/1107.4042v3 | 2015-01-29T10:15:00Z | 2011-07-20T17:33:43Z | Optimal Adaptive Learning in Uncontrolled Restless Bandit Problems | In this paper we consider the problem of learning the optimal policy for
uncontrolled restless bandit problems. In an uncontrolled restless bandit
problem, there is a finite set of arms, each of which when pulled yields a
positive reward. There is a player who sequentially selects one of the arms at
each time step. The goal of the player is to maximize its undiscounted reward
over a time horizon T. The reward process of each arm is a finite state Markov
chain, whose transition probabilities are unknown to the player. State
transitions of each arm are independent of the selections of the player. We
propose a learning algorithm with logarithmic regret uniformly over time with
respect to the optimal finite horizon policy. Our results extend the optimal
adaptive learning of MDPs to POMDPs.
| [
"['Cem Tekin' 'Mingyan Liu']",
"Cem Tekin, Mingyan Liu"
] |
cs.LG | null | 1107.4080 | null | null | http://arxiv.org/pdf/1107.4080v1 | 2011-07-20T19:34:00Z | 2011-07-20T19:34:00Z | On the Universality of Online Mirror Descent | We show that for a general class of convex online learning problems, Mirror
Descent can always achieve a (nearly) optimal regret guarantee.
| [
"Nathan Srebro, Karthik Sridharan, Ambuj Tewari",
"['Nathan Srebro' 'Karthik Sridharan' 'Ambuj Tewari']"
] |
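As a concrete instance of the general setup this abstract refers to, here is online mirror descent on the probability simplex with the negative-entropy mirror map, i.e. exponentiated gradient; the step size is an illustrative constant.

```python
import numpy as np

def omd_entropy(loss_grads, eta=0.1):
    """Online mirror descent on the simplex: multiplicative-weights /
    exponentiated-gradient updates, one loss gradient per round."""
    d = len(loss_grads[0])
    w = np.full(d, 1.0 / d)
    iterates = []
    for g in loss_grads:
        w = w * np.exp(-eta * np.asarray(g))  # gradient step in the dual
        w = w / w.sum()                       # Bregman projection to simplex
        iterates.append(w)
    return iterates

# For linear losses <g_t, w>, a suitably tuned eta gives the classic
# O(sqrt(T log d)) regret against the best fixed coordinate.
rng = np.random.default_rng(4)
gs = rng.uniform(0, 1, (1000, 10))
print(omd_entropy(gs)[-1].round(3))
```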
cs.MA cs.LG | null | 1107.4153 | null | null | http://arxiv.org/pdf/1107.4153v1 | 2011-07-21T04:15:25Z | 2011-07-21T04:15:25Z | Performance and Convergence of Multi-user Online Learning | We study the problem of allocating multiple users to a set of wireless
channels in a decentralized manner when the channel qualities are
time-varying and unknown to the users, and accessing the same channel by
multiple users leads to reduced quality due to interference. In such a setting
the users need to learn not only the inherent channel qualities but also the
best allocation of users to channels so as to maximize the social
welfare. Assuming that the users adopt a certain online learning algorithm, we
investigate under what conditions the socially optimal allocation is
achievable. In particular we examine the effect of different levels of
knowledge the users may have and the amount of communication and cooperation.
The general conclusion is that as the cooperation of users decreases and the
uncertainty about channel payoffs increases, it becomes harder to achieve the
socially optimal allocation.
| [
"['Cem Tekin' 'Mingyan Liu']",
"Cem Tekin, Mingyan Liu"
] |
cs.AI cs.CL cs.LG | null | 1107.4573 | null | null | http://arxiv.org/pdf/1107.4573v1 | 2011-07-22T16:54:11Z | 2011-07-22T16:54:11Z | Analogy perception applied to seven tests of word comprehension | It has been argued that analogy is the core of cognition. In AI research,
algorithms for analogy are often limited by the need for hand-coded high-level
representations as input. An alternative approach is to use high-level
perception, in which high-level representations are automatically generated
from raw data. Analogy perception is the process of recognizing analogies using
high-level perception. We present PairClass, an algorithm for analogy
perception that recognizes lexical proportional analogies using representations
that are automatically generated from a large corpus of raw textual data. A
proportional analogy is an analogy of the form A:B::C:D, meaning "A is to B as
C is to D". A lexical proportional analogy is a proportional analogy with
words, such as carpenter:wood::mason:stone. PairClass represents the semantic
relations between two words using a high-dimensional feature vector, in which
the elements are based on frequencies of patterns in the corpus. PairClass
recognizes analogies by applying standard supervised machine learning
techniques to the feature vectors. We show how seven different tests of word
comprehension can be framed as problems of analogy perception and we then apply
PairClass to the seven resulting sets of analogy perception problems. We
achieve competitive results on all seven tests. This is the first time a
uniform approach has handled such a range of tests of word comprehension.
| [
"['Peter D. Turney']",
"Peter D. Turney (National Research Council of Canada)"
] |
cs.LG | null | 1107.4606 | null | null | http://arxiv.org/pdf/1107.4606v2 | 2012-07-29T18:08:07Z | 2011-07-22T13:05:48Z | The Divergence of Reinforcement Learning Algorithms with Value-Iteration
and Function Approximation | This paper gives specific divergence examples of value-iteration for several
major Reinforcement Learning and Adaptive Dynamic Programming algorithms, when
using a function approximator for the value function. These divergence examples
differ from previous divergence examples in the literature, in that they are
applicable for a greedy policy, i.e. in a "value iteration" scenario. Perhaps
surprisingly, with a greedy policy, it is also possible to get divergence for
the algorithms TD(1) and Sarsa(1). In addition to these divergences, we also
achieve divergence for the Adaptive Dynamic Programming algorithms HDP, DHP and
GDHP.
| [
"['Michael Fairbank' 'Eduardo Alonso']",
"Michael Fairbank and Eduardo Alonso"
] |
cs.AI cs.LG | null | 1107.4966 | null | null | http://arxiv.org/pdf/1107.4966v2 | 2011-08-26T17:27:04Z | 2011-07-25T14:56:18Z | Lifted Graphical Models: A Survey | This article presents a survey of work on lifted graphical models. We review
a general form for a lifted graphical model, a par-factor graph, and show how a
number of existing statistical relational representations map to this
formalism. We discuss inference algorithms, including lifted inference
algorithms, that efficiently compute the answers to probabilistic queries. We
also review work in learning lifted graphical models from data. It is our
belief that the need for statistical relational models (whether it goes by that
name or another) will grow in the coming decades, as we are inundated with data
which is a mix of structured and unstructured, with entities and relations
extracted in a noisy manner from text, and with the need to reason effectively
with this data. We hope that this synthesis of ideas from many different
research groups will provide an accessible starting point for new researchers
in this expanding field.
| [
"['Lilyana Mihalkova' 'Lise Getoor']",
"Lilyana Mihalkova and Lise Getoor"
] |
cs.LO cs.AI cs.LG | null | 1107.4967 | null | null | http://arxiv.org/pdf/1107.4967v1 | 2011-07-25T15:01:50Z | 2011-07-25T15:01:50Z | Normative design using inductive learning | In this paper we propose a use-case-driven iterative design methodology for
normative frameworks, also called virtual institutions, which are used to
govern open systems. Our computational model represents the normative framework
as a logic program under answer set semantics (ASP). By means of an inductive
logic programming approach, implemented using ASP, it is possible to synthesise
new rules and revise the existing ones. The learning mechanism is guided by the
designer who describes the desired properties of the framework through use
cases, comprising (i) event traces that capture possible scenarios, and (ii) a
state that describes the desired outcome. The learning process then proposes
additional rules, or changes to current rules, to satisfy the constraints
expressed in the use cases. Thus, the contribution of this paper is a process
for the elaboration and revision of a normative framework by means of a
semi-automatic and iterative process driven from specifications of
(un)desirable behaviour. The process integrates a novel and general methodology
for theory revision based on ASP.
| [
"['Domenico Corapi' 'Alessandra Russo' 'Marina De Vos' 'Julian Padget'\n 'Ken Satoh']",
"Domenico Corapi, Alessandra Russo, Marina De Vos, Julian Padget, Ken\n Satoh"
] |
cs.LG cs.AI | null | 1107.5236 | null | null | http://arxiv.org/pdf/1107.5236v2 | 2011-08-23T17:42:35Z | 2011-07-26T15:11:10Z | Submodular Optimization for Efficient Semi-supervised Support Vector
Machines | In this work we present a quadratic programming approximation of the
Semi-Supervised Support Vector Machine (S3VM) problem, namely approximate
QP-S3VM, that can be efficiently solved using off-the-shelf optimization
packages. We prove that this approximate formulation establishes a relation
between the low density separation and the graph-based models of
semi-supervised learning (SSL), which is important for developing a unifying
framework for semi-supervised learning methods. Furthermore, we propose the
novel idea of representing SSL problems as submodular set functions and use
efficient submodular optimization algorithms to solve them. Using this new idea
we develop a representation of the approximate QP-S3VM as a maximization of a
submodular set function which makes it possible to optimize using efficient
greedy algorithms. We demonstrate that the proposed methods are accurate and
provide significant improvement in time complexity over the state of the art in
the literature.
| [
"['Wael Emara' 'Mehmed Kantardzic']",
"Wael Emara and Mehmed Kantardzic"
] |
cs.CV cs.DS cs.LG q-bio.QM | null | 1107.5349 | null | null | http://arxiv.org/pdf/1107.5349v1 | 2011-07-26T22:29:35Z | 2011-07-26T22:29:35Z | Multi Layer Analysis | This thesis presents a new methodology to analyze one-dimensional signals
through a new approach called Multi Layer Analysis (MLA for short). It also
provides some new insights on the relationship between one-dimensional signals
processed by MLA and tree kernels, tests of randomness, and signal processing
techniques. The MLA approach has a wide range of applications in the fields of
pattern discovery and matching, computational biology, and many other areas of
computer science and signal processing. This thesis also includes some
applications of this approach to real problems in biology and seismology.
| [
"['Luca Pinello']",
"Luca Pinello"
] |
cs.LG | null | 1107.5520 | null | null | http://arxiv.org/pdf/1107.5520v1 | 2011-07-27T16:29:06Z | 2011-07-27T16:29:06Z | Axioms for Rational Reinforcement Learning | We provide a formal, simple and intuitive theory of rational decision making
including sequential decisions that affect the environment. The theory has a
geometric flavor, which makes the arguments easy to visualize and understand.
Our theory is for complete decision makers, which means that they have a
complete set of preferences. Our main result shows that a complete rational
decision maker implicitly has a probabilistic model of the environment. We have
a countable version of this result that sheds light on the issue of countable
vs finite additivity by showing how it depends on the geometry of the space
which we have preferences over. This is achieved through fruitfully connecting
rationality with the Hahn-Banach Theorem. The theory presented here can be
viewed as a formalization and extension of the betting odds approach to
probability of Ramsey and De Finetti.
| [
"Peter Sunehag and Marcus Hutter",
"['Peter Sunehag' 'Marcus Hutter']"
] |
cs.LG cs.IT math.IT | null | 1107.5531 | null | null | http://arxiv.org/pdf/1107.5531v1 | 2011-07-27T16:44:41Z | 2011-07-27T16:44:41Z | Universal Prediction of Selected Bits | Many learning tasks can be viewed as sequence prediction problems. For
example, online classification can be converted to sequence prediction with the
sequence being pairs of input/target data and where the goal is to correctly
predict the target data given input data and previous input/target pairs.
Solomonoff induction is known to solve the general sequence prediction problem,
but only if the entire sequence is sampled from a computable distribution. In
the case of classification and discriminative learning though, only the targets
need be structured (given the inputs). We show that the normalised version of
Solomonoff induction can still be used in this case, and more generally that it
can detect any recursive sub-pattern (regularity) within an otherwise
completely unstructured sequence. It is also shown that the unnormalised
version can fail to predict very simple recursive sub-patterns.
| [
"['Tor Lattimore' 'Marcus Hutter' 'Vaibhav Gavane']",
"Tor Lattimore and Marcus Hutter and Vaibhav Gavane"
] |
cs.AI cs.LG | null | 1107.5537 | null | null | http://arxiv.org/pdf/1107.5537v1 | 2011-07-27T16:51:48Z | 2011-07-27T16:51:48Z | Asymptotically Optimal Agents | Artificial general intelligence aims to create agents capable of learning to
solve arbitrary interesting problems. We define two versions of asymptotic
optimality and prove that no agent can satisfy the strong version, while in some
cases, depending on discounting, there does exist a non-computable weak
asymptotically optimal agent.
| [
"Tor Lattimore and Marcus Hutter",
"['Tor Lattimore' 'Marcus Hutter']"
] |
cs.LG | null | 1107.5671 | null | null | http://arxiv.org/pdf/1107.5671v1 | 2011-07-28T10:36:30Z | 2011-07-28T10:36:30Z | Automatic Network Reconstruction using ASP | Building biological models by inferring functional dependencies from
experimental data is an important issue in Molecular Biology. To relieve the
biologist from this traditionally manual process, various approaches have been
proposed to increase the degree of automation. However, available approaches
often yield a single model only, rely on specific assumptions, and/or use
dedicated, heuristic algorithms that are intolerant to changing circumstances
or requirements in view of the rapid progress made in Biotechnology. Our
aim is to provide a declarative solution to the problem by appeal to Answer
Set Programming (ASP), overcoming these difficulties. We build upon an existing
approach to Automatic Network Reconstruction proposed by some of the authors.
This approach has firm mathematical foundations and is well suited for ASP due
to its combinatorial flavor, providing a characterization of all models
explaining a set of experiments. The usage of ASP has several benefits over
the existing heuristic algorithms. First, it is declarative and thus
transparent for biological experts. Second, it is elaboration tolerant and thus
allows for an easy exploration and incorporation of biological constraints.
Third, it allows for exploring the entire space of possible models. Finally,
our approach offers an excellent performance, matching existing,
special-purpose systems.
| [
"Max Ostrowski and Torsten Schaub and Markus Durzinsky and Wolfgang\n Marwan and Annegret Wagler",
"['Max Ostrowski' 'Torsten Schaub' 'Markus Durzinsky' 'Wolfgang Marwan'\n 'Annegret Wagler']"
] |
cs.LG cs.DB | null | 1108.0017 | null | null | http://arxiv.org/pdf/1108.0017v1 | 2011-07-29T21:07:51Z | 2011-07-29T21:07:51Z | Generating a Diverse Set of High-Quality Clusterings | We provide a new framework for generating multiple good quality partitions
(clusterings) of a single data set. Our approach decomposes this problem into
two components, generating many high-quality partitions, and then grouping
these partitions to obtain k representatives. The decomposition makes the
approach extremely modular and allows us to optimize various criteria that
control the choice of representative partitions.
| [
"Jeff M. Phillips, Parasaran Raman, and Suresh Venkatasubramanian",
"['Jeff M. Phillips' 'Parasaran Raman' 'Suresh Venkatasubramanian']"
] |
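A sketch of the two-stage framework summarized above, with assumed stand-ins: k-means restarts as the partition generator, 1 minus the adjusted Rand index as the partition distance, and farthest-point traversal to pick the k representatives. Assumes scikit-learn; the paper's criteria for choosing representatives are more general.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def diverse_clusterings(X, n_clusters=3, n_candidates=50, k=4, seed=0):
    """Stage 1: generate many good partitions (k-means restarts).
    Stage 2: pick k representatives by farthest-point traversal under
    the partition distance 1 - ARI."""
    parts = [KMeans(n_clusters, n_init=1, random_state=seed + i).fit_predict(X)
             for i in range(n_candidates)]
    dist = lambda a, b: 1.0 - adjusted_rand_score(a, b)
    reps = [0]
    while len(reps) < k:
        scores = [min(dist(p, parts[r]) for r in reps) for p in parts]
        reps.append(int(np.argmax(scores)))
    return [parts[r] for r in reps]

X = np.vstack([np.random.default_rng(6).standard_normal((30, 2)) + c
               for c in ([0, 0], [4, 0], [0, 4])])
reps = diverse_clusterings(X)
print(len(reps), reps[0][:10])
```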
cs.AI cs.LG | 10.1007/978-3-642-23291-6_28 | 1108.0039 | null | null | http://arxiv.org/abs/1108.0039v2 | 2011-10-12T20:11:51Z | 2011-07-30T05:12:17Z | CBR with Commonsense Reasoning and Structure Mapping: An Application to
Mediation | Mediation is an important method in dispute resolution. We implement a case
based reasoning approach to mediation integrating analogical and commonsense
reasoning components that allow an artificial mediation agent to satisfy
requirements expected from a human mediator, in particular: utilizing
experience with cases in different domains; and structurally transforming the
set of issues for a better solution. We utilize a case structure based on
ontologies reflecting the perceptions of the parties in dispute. The analogical
reasoning component, employing the Structure Mapping Theory from psychology,
provides a flexibility to respond innovatively in unusual circumstances, in
contrast with conventional approaches confined into specialized problem
domains. We aim to build a mediation case base incorporating real world
instances ranging from interpersonal or intergroup disputes to international
conflicts.
| [
"Atilim Gunes Baydin, Ramon Lopez de Mantaras, Simeon Simoff, Carles\n Sierra",
"['Atilim Gunes Baydin' 'Ramon Lopez de Mantaras' 'Simeon Simoff'\n 'Carles Sierra']"
] |
cs.LG math.OC stat.ML | null | 1108.0775 | null | null | http://arxiv.org/pdf/1108.0775v2 | 2011-11-22T09:59:21Z | 2011-08-03T07:55:19Z | Optimization with Sparsity-Inducing Penalties | Sparse estimation methods are aimed at using or obtaining parsimonious
representations of data or models. They were first dedicated to linear variable
selection but numerous extensions have now emerged such as structured sparsity
or kernel selection. It turns out that many of the related estimation problems
can be cast as convex optimization problems by regularizing the empirical risk
with appropriate non-smooth norms. The goal of this paper is to present from a
general perspective optimization tools and techniques dedicated to such
sparsity-inducing penalties. We cover proximal methods, block-coordinate
descent, reweighted $\ell_2$-penalized techniques, working-set and homotopy
methods, as well as non-convex formulations and extensions, and provide an
extensive set of experiments to compare various algorithms from a computational
point of view.
| [
"Francis Bach (LIENS, INRIA Paris - Rocquencourt), Rodolphe Jenatton\n (LIENS, INRIA Paris - Rocquencourt), Julien Mairal, Guillaume Obozinski\n (LIENS, INRIA Paris - Rocquencourt)",
"['Francis Bach' 'Rodolphe Jenatton' 'Julien Mairal' 'Guillaume Obozinski']"
] |
stat.ML cs.DB cs.IR cs.LG | null | 1108.0895 | null | null | http://arxiv.org/pdf/1108.0895v1 | 2011-08-03T17:08:11Z | 2011-08-03T17:08:11Z | Accurate Estimators for Improving Minwise Hashing and b-Bit Minwise
Hashing | Minwise hashing is the standard technique in the context of search and
databases for efficiently estimating set (e.g., high-dimensional 0/1 vector)
similarities. Recently, b-bit minwise hashing was proposed which significantly
improves upon the original minwise hashing in practice by storing only the
lowest b bits of each hashed value, as opposed to using 64 bits. b-bit hashing
is particularly effective in applications which mainly concern sets of high
similarities (e.g., the resemblance >0.5). However, there are other important
applications in which not just pairs of high similarities matter. For example,
many learning algorithms require all pairwise similarities and it is expected
that only a small fraction of the pairs are similar. Furthermore, many
applications care more about containment (e.g., how much one object is
contained by another object) than the resemblance. In this paper, we show that
the estimators for minwise hashing and b-bit minwise hashing used in the
current practice can be systematically improved and the improvements are most
significant for set pairs of low resemblance and high containment.
| [
"['Ping Li' 'Christian Konig']",
"Ping Li and Christian Konig"
] |
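A sketch of b-bit minwise hashing as described above, assuming numpy and using explicit random permutations as the hash functions. The resemblance estimator below uses the simplified collision model $p \approx R + (1-R)/2^b$, which drops the set-size correction terms of the exact estimator in the paper.

```python
import numpy as np

def bbit_signatures(sets, n_perm=500, b=2, universe=10**4, seed=0):
    """Store only the lowest b bits of each minhash value."""
    rng = np.random.default_rng(seed)
    mask = (1 << b) - 1
    sig = np.empty((n_perm, len(sets)), dtype=np.int64)
    for h in range(n_perm):
        perm = rng.permutation(universe)
        sig[h] = [min(perm[e] for e in s) & mask for s in sets]
    return sig

def est_resemblance(sig, i, j, b=2):
    """Invert the simplified collision rate p ~= R + (1 - R) / 2^b."""
    p = np.mean(sig[:, i] == sig[:, j])
    return (p - 2.0 ** -b) / (1.0 - 2.0 ** -b)

rng = np.random.default_rng(5)
A = set(rng.choice(10**4, 600, replace=False))
B = set(list(A)[:400]) | set(rng.choice(10**4, 200, replace=False))
sig = bbit_signatures([A, B])
print(round(est_resemblance(sig, 0, 1), 2),
      round(len(A & B) / len(A | B), 2))   # estimate vs. true resemblance
```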
cs.CV cs.LG | null | 1108.1636 | null | null | http://arxiv.org/pdf/1108.1636v1 | 2011-08-08T08:59:05Z | 2011-08-08T08:59:05Z | A new embedding quality assessment method for manifold learning | Manifold learning is a hot research topic in the field of computer science. A
crucial issue with current manifold learning methods is that they lack a
natural quantitative measure to assess the quality of learned embeddings, which
greatly limits their applications to real-world problems. In this paper, a new
embedding quality assessment method for manifold learning, named
Normalization Independent Embedding Quality Assessment (NIEQA), is proposed.
Compared with current assessment methods which are limited to isometric
embeddings, the NIEQA method has a much larger application range due to two
features. First, it is based on a new measure which can effectively evaluate
how well local neighborhood geometry is preserved under normalization, hence it
can be applied to both isometric and normalized embeddings. Second, it can
provide both local and global evaluations to output an overall assessment.
Therefore, NIEQA can serve as a natural tool in model selection and evaluation
tasks for manifold learning. Experimental results on benchmark data sets
validate the effectiveness of the proposed method.
| [
"['Peng Zhang' 'Yuanyuan Ren' 'Bo Zhang']",
"Peng Zhang, Yuanyuan Ren, and Bo Zhang"
] |
stat.ML cs.LG math.ST stat.TH | null | 1108.1766 | null | null | http://arxiv.org/pdf/1108.1766v1 | 2011-08-08T18:04:02Z | 2011-08-08T18:04:02Z | Activized Learning: Transforming Passive to Active with Improved Label
Complexity | We study the theoretical advantages of active learning over passive learning.
Specifically, we prove that, in noise-free classifier learning for VC classes,
any passive learning algorithm can be transformed into an active learning
algorithm with asymptotically strictly superior label complexity for all
nontrivial target functions and distributions. We further provide a general
characterization of the magnitudes of these improvements in terms of a novel
generalization of the disagreement coefficient. We also extend these results to
active learning in the presence of label noise, and find that even under broad
classes of noise distributions, we can typically guarantee strict improvements
over the known results for passive learning.
| [
"Steve Hanneke",
"['Steve Hanneke']"
] |
cs.LG cs.AI | null | 1108.2054 | null | null | http://arxiv.org/pdf/1108.2054v1 | 2011-08-09T21:28:42Z | 2011-08-09T21:28:42Z | Uncertain Nearest Neighbor Classification | This work deals with the problem of classifying uncertain data. With this aim
the Uncertain Nearest Neighbor (UNN) rule is here introduced, which represents
the generalization of the deterministic nearest neighbor rule to the case in
which uncertain objects are available. The UNN rule relies on the concept of
nearest neighbor class, rather than on that of nearest neighbor object. The
nearest neighbor class of a test object is the class that maximizes the
probability of providing its nearest neighbor. We provide evidence that the
former concept is much more powerful than the latter in the presence of
uncertainty, in that it correctly captures the semantics of the nearest
neighbor decision rule when applied to the uncertain scenario. An effective and
efficient algorithm to perform uncertain nearest neighbor classification of a
generic (un)certain test object is designed, based on properties that greatly
reduce the temporal cost associated with nearest neighbor class probability
computation. Experimental results are presented, showing that the UNN rule is
effective and efficient in classifying uncertain data.
| [
"['Fabrizio Angiulli' 'Fabio Fassetti']",
"Fabrizio Angiulli and Fabio Fassetti"
] |
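The nearest neighbor *class* concept can be illustrated with a naive Monte Carlo sketch, assuming Gaussian uncertainty around each recorded object; the paper's actual algorithm avoids this brute-force computation through pruning properties, so treat the following purely as illustration:

```python
import numpy as np

def unn_classify(test_point, objects, labels, n_trials=2000, noise=0.3, seed=0):
    """Naive Monte Carlo illustration of the Uncertain Nearest Neighbor idea:
    each uncertain object is modelled as a Gaussian around its recorded
    position; the predicted label is the class most likely to *provide* the
    nearest neighbor of the test point, not the label of the closest mean."""
    rng = np.random.default_rng(seed)
    objects = np.asarray(objects, dtype=float)
    labels = np.asarray(labels)
    wins = {}
    for _ in range(n_trials):
        sample = objects + rng.normal(scale=noise, size=objects.shape)
        d = np.linalg.norm(sample - test_point, axis=1)
        cls = labels[np.argmin(d)]
        wins[cls] = wins.get(cls, 0) + 1
    return max(wins, key=wins.get)

# One class has several moderately close objects, the other a single closer one.
X = [[1.0, 0.0], [1.1, 0.1], [0.9, -0.1], [0.7, 0.0]]
y = ["a", "a", "a", "b"]
print(unn_classify(np.array([0.0, 0.0]), X, y))
```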
cs.AI cs.LG | 10.1007/s10462-012-9346-y | 1108.2283 | null | null | http://arxiv.org/abs/1108.2283v2 | 2013-11-20T19:15:05Z | 2011-08-10T20:25:08Z | A survey on independence-based Markov networks learning | This work reports the most relevant technical aspects in the problem of
learning the \emph{Markov network structure} from data. This problem has
become increasingly important in machine learning and in many of its
application fields. Markov networks, together with Bayesian networks, are
probabilistic graphical models, a widely used formalism for handling
probability distributions in intelligent systems. Learning graphical models
from data has been studied extensively for Bayesian networks, but learning
Markov networks is often not tractable in practice. This situation is
changing, however, given the exponential growth of computing capacity, the
plethora of available digital data, and ongoing research on new learning
technologies. This work focuses on a technology called independence-based
learning, which recovers the independence structure of these networks from
data in an efficient and sound manner whenever the dataset is sufficiently
large and is a representative sample of the target distribution. In analyzing
this technology, the work surveys the current state-of-the-art algorithms for
learning Markov network structure, discusses their limitations, and proposes a
series of open problems whose solution may advance the area in terms of
quality and efficiency. The paper concludes by opening a discussion about how
to develop a general formalism for improving the quality of the learned
structures when data is scarce.
| [
"Federico Schl\\\"uter",
"['Federico Schlüter']"
] |
cs.LG | 10.1109/TNNLS.2012.2185811 | 1108.2486 | null | null | http://arxiv.org/abs/1108.2486v1 | 2011-08-11T18:54:02Z | 2011-08-11T18:54:02Z | Feature Extraction for Change-Point Detection using Stationary Subspace
Analysis | Detecting changes in high-dimensional time series is difficult because it
involves the comparison of probability densities that need to be estimated from
finite samples. In this paper, we present the first feature extraction method
tailored to change point detection, which is based on an extended version of
Stationary Subspace Analysis. We reduce the dimensionality of the data to the
most non-stationary directions, which are most informative for detecting state
changes in the time series. In extensive simulations on synthetic data we show
that the accuracy of three change point detection algorithms is significantly
increased by a prior feature extraction step. These findings are confirmed in
an application to industrial fault monitoring.
| [
"['Duncan Blythe' 'Paul von Bünau' 'Frank Meinecke' 'Klaus-Robert Müller']",
"Duncan Blythe, Paul von B\\\"unau, Frank Meinecke, Klaus-Robert M\\\"uller"
] |
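A rough sketch of the "most non-stationary directions" idea, assuming a simple second-order criterion (the paper's extended Stationary Subspace Analysis also matches epoch means and optimizes a proper objective; all names below are illustrative):

```python
import numpy as np

def most_nonstationary_directions(X, n_epochs=10, k=1):
    """Caricature of the idea in the abstract: split the series into epochs
    and find directions along which the epoch covariances deviate most from
    the global covariance, via a single eigendecomposition."""
    epochs = np.array_split(X, n_epochs)
    global_cov = np.cov(X.T)
    # Average squared deviation of epoch covariances from the global one.
    D = sum((np.cov(e.T) - global_cov) @ (np.cov(e.T) - global_cov).T
            for e in epochs) / n_epochs
    eigvals, eigvecs = np.linalg.eigh(D)
    return eigvecs[:, -k:]          # directions of largest deviation

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
stationary = rng.normal(size=2000)
drifting = rng.normal(size=2000) * (1 + 4 * (t > 0.5))   # variance jump
X = np.column_stack([stationary, drifting]) @ rng.normal(size=(2, 2))
print(most_nonstationary_directions(X))
```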
cs.LG cs.DC | null | 1108.2580 | null | null | http://arxiv.org/pdf/1108.2580v2 | 2011-08-17T11:14:42Z | 2011-08-12T07:22:08Z | Efficient Multicore Collaborative Filtering | This paper describes the solution method taken by LeBuSiShu team for track1
in ACM KDD CUP 2011 contest (resulting in the 5th place). We identified two
main challenges: the unique item taxonomy characteristics as well as the large
data set size. To handle the item taxonomy, we present a novel method called
Matrix Factorization Item Taxonomy Regularization (MFITR). MFITR obtained the
second-best prediction result out of more than ten implemented algorithms. For
rapidly computing multiple solutions of various algorithms, we have implemented
an open source parallel collaborative filtering library on top of the GraphLab
machine learning framework. We report some preliminary performance results
obtained using the BlackLight supercomputer.
| [
"['Yao Wu' 'Qiang Yan' 'Danny Bickson' 'Yucheng Low' 'Qing Yang']",
"Yao Wu, Qiang Yan, Danny Bickson, Yucheng Low, Qing Yang"
] |
cs.LG stat.ML | null | 1108.2820 | null | null | http://arxiv.org/pdf/1108.2820v2 | 2012-01-28T07:51:50Z | 2011-08-13T20:47:30Z | Ensemble Risk Modeling Method for Robust Learning on Scarce Data | In medical risk modeling, typical data are "scarce": they have relatively
few training instances (N), censoring, and high dimensionality (M).
We show that the problem may be effectively simplified by reducing it to
bipartite ranking, and we introduce a new bipartite ranking algorithm, Smooth Rank,
for robust learning on scarce data. The algorithm is based on ensemble learning
with unsupervised aggregation of predictors. The advantage of our approach is
confirmed in comparison with two "gold standard" risk modeling methods on 10
real life survival analysis datasets, where the new approach has the best
results on all but two datasets with the largest ratio N/M. For systematic
study of the effects of data scarcity on modeling by all three methods, we
conducted two types of computational experiments: on real life data with
randomly drawn training sets of different sizes, and on artificial data with
an increasing number of features. Both experiments demonstrated that Smooth
Rank has a critical advantage over the popular methods on scarce data; it does
not suffer from overfitting where other methods do.
| [
"['Marina Sapir']",
"Marina Sapir"
] |
q-bio.NC cs.LG stat.ML | null | 1108.2840 | null | null | http://arxiv.org/pdf/1108.2840v1 | 2011-08-14T03:47:14Z | 2011-08-14T03:47:14Z | Generalised elastic nets | The elastic net was introduced as a heuristic algorithm for combinatorial
optimisation and has been applied, among other problems, to biological
modelling. It has an energy function which trades off a fitness term against a
tension term. In the original formulation of the algorithm the tension term was
implicitly based on a first-order derivative. In this paper we generalise the
elastic net model to an arbitrary quadratic tension term, e.g. derived from a
discretised differential operator, and give an efficient learning algorithm. We
refer to these as generalised elastic nets (GENs). We give a theoretical
analysis of the tension term for 1D nets with periodic boundary conditions, and
show that the model is sensitive to the choice of finite difference scheme that
represents the discretised derivative. We illustrate some of these issues in
the context of cortical map models, by relating the choice of tension term to a
cortical interaction function. In particular, we prove that this interaction
takes the form of a Mexican hat for the original elastic net, and of
progressively more oscillatory Mexican hats for higher-order derivatives. The
results apply not only to generalised elastic nets but also to other methods
using discrete differential penalties, and are expected to be useful in other
areas, such as data analysis, computer graphics and optimisation problems.
| [
"Miguel \\'A. Carreira-Perpi\\~n\\'an, Geoffrey J. Goodhill",
"['Miguel Á. Carreira-Perpiñán' 'Geoffrey J. Goodhill']"
] |
cs.LG stat.ME stat.ML | null | 1108.3072 | null | null | http://arxiv.org/pdf/1108.3072v1 | 2011-08-15T19:53:55Z | 2011-08-15T19:53:55Z | Training Logistic Regression and SVM on 200GB Data Using b-Bit Minwise
Hashing and Comparisons with Vowpal Wabbit (VW) | We generated a dataset of 200 GB with 10^9 features, to test our recent b-bit
minwise hashing algorithms for training very large-scale logistic regression
and SVM. The results confirm our prior work that, compared with the VW hashing
algorithm (which has the same variance as random projections), b-bit minwise
hashing is substantially more accurate at the same storage. For example, with
merely 30 hashed values per data point, b-bit minwise hashing can achieve
similar accuracies as VW with 2^14 hashed values per data point.
We demonstrate that the preprocessing cost of b-bit minwise hashing is
roughly on the same order of magnitude as the data loading time. Furthermore,
by using a GPU, the preprocessing cost can be reduced to a small fraction of
the data loading time.
Minwise hashing has been widely used in industry, at least in the context of
search. One reason for its popularity is that one can efficiently simulate
permutations by, e.g., universal hashing. In other words, there is no need to
store the permutation matrix. In this paper, we empirically verify this
practice, by demonstrating that even using the simplest 2-universal hashing
does not degrade the learning performance.
| [
"Ping Li, Anshumali Shrivastava, Christian Konig",
"['Ping Li' 'Anshumali Shrivastava' 'Christian Konig']"
] |
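The point in the last paragraph -- that permutations can be simulated by universal hashing, so no permutation matrix needs to be stored -- is easy to make concrete. A sketch using the standard 2-universal family h(x) = (ax + b) mod p, with illustrative parameter choices:

```python
import random

P = (1 << 61) - 1   # a Mersenne prime larger than the feature universe

def make_2universal_hash(seed):
    """h(x) = (a*x + b) mod p, the simplest 2-universal family. Only the
    pair (a, b) is stored, never a permutation matrix."""
    rng = random.Random(seed)
    a = rng.randrange(1, P)
    b = rng.randrange(0, P)
    return lambda x: (a * x + b) % P

def minhash(feature_ids, num_perm=30):
    """One hashed value per simulated 'permutation': the minimum hash over
    the nonzero feature indices of a data point."""
    hs = [make_2universal_hash(k) for k in range(num_perm)]
    return [min(h(x) for x in feature_ids) for h in hs]

print(minhash({3, 17, 42, 1000003})[:5])
```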
cs.LG stat.ML | null | 1108.3154 | null | null | http://arxiv.org/pdf/1108.3154v2 | 2011-08-17T17:01:35Z | 2011-08-16T05:11:54Z | Stability Conditions for Online Learnability | Stability is a general notion that quantifies the sensitivity of a learning
algorithm's output to a small change in the training dataset (e.g. deletion or
replacement of a single training sample). Such conditions have recently been
shown to be more powerful than uniform convergence for characterizing
learnability in the general learning setting with i.i.d. samples: there,
uniform convergence is not necessary for learnability, but stability is both
sufficient and necessary. We here show that similar stability conditions are
also sufficient for online learnability, i.e. for the existence of a learning
algorithm that, under any sequence of examples (potentially chosen
adversarially), produces a sequence of hypotheses with no regret in the limit
with respect to the best hypothesis in hindsight. We introduce online
stability, a stability condition related to uniform-leave-one-out stability in
the batch setting, that is sufficient for online learnability. In particular
we show that popular classes of online learners, namely algorithms that fall
in the category of Follow-the-(Regularized)-Leader, Mirror Descent,
gradient-based methods and randomized algorithms like Weighted Majority and
Hedge, are guaranteed to have no regret if they have this online stability
property. We provide examples suggesting that the existence of an algorithm
with such a stability condition might in fact be necessary for online
learnability. For the more restricted binary classification setting, we
establish that such a stability condition is in fact both sufficient and
necessary. We also show that for a large class of online learnable problems in
the general learning setting, namely those with a notion of sub-exponential
covering, no-regret online algorithms with this stability condition exist.
| [
"Stephane Ross, J. Andrew Bagnell",
"['Stephane Ross' 'J. Andrew Bagnell']"
] |
stat.ML cs.AI cs.LG stat.AP | null | 1108.3259 | null | null | http://arxiv.org/pdf/1108.3259v1 | 2011-08-16T14:55:20Z | 2011-08-16T14:55:20Z | A review and comparison of strategies for multi-step ahead time series
forecasting based on the NN5 forecasting competition | Multi-step ahead forecasting is still an open challenge in time series
forecasting. Several approaches that deal with this complex problem have been
proposed in the literature but an extensive comparison on a large number of
tasks is still missing. This paper aims to fill this gap by reviewing existing
strategies for multi-step ahead forecasting and comparing them in theoretical
and practical terms. To attain such an objective, we performed a large scale
comparison of these different strategies using a large experimental benchmark
(namely the 111 series from the NN5 forecasting competition). In addition, we
considered the effects of deseasonalization, input variable selection, and
forecast combination on these strategies and on multi-step ahead forecasting at
large. The following three findings appear to be consistently supported by the
experimental results: Multiple-Output strategies are the best performing
approaches, deseasonalization leads to uniformly improved forecast accuracy,
and input selection is more effective when performed in conjunction with
deseasonalization.
| [
"Souhaib Ben Taieb and Gianluca Bontempi and Amir Atiya and Antti\n Sorjamaa",
"['Souhaib Ben Taieb' 'Gianluca Bontempi' 'Amir Atiya' 'Antti Sorjamaa']"
] |
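For readers unfamiliar with the strategies under comparison, here is a toy sketch of the two classical single-output strategies, Recursive and Direct, using plain least-squares autoregressions (the Multiple-Output strategies favored by the study predict the whole horizon jointly; nothing below is the benchmark code, and all names are illustrative):

```python
import numpy as np

def embed(series, lags):
    """Lagged design matrix: row t holds series[t-lags:t], target series[t]."""
    X = np.array([series[t - lags:t] for t in range(lags, len(series))])
    return X, series[lags:]

def fit_linear(X, y):
    Xb = np.column_stack([X, np.ones(len(X))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return lambda x: np.append(x, 1.0) @ w

def recursive_forecast(series, lags, horizon):
    """Recursive strategy: one one-step model, fed its own predictions."""
    model = fit_linear(*embed(series, lags))
    window, out = list(series[-lags:]), []
    for _ in range(horizon):
        yhat = model(np.array(window))
        out.append(yhat)
        window = window[1:] + [yhat]
    return out

def direct_forecast(series, lags, horizon):
    """Direct strategy: one dedicated model per forecast horizon h."""
    out = []
    for h in range(1, horizon + 1):
        X = np.array([series[t - lags:t] for t in range(lags, len(series) - h + 1)])
        y = series[lags + h - 1:]
        out.append(fit_linear(X, y)(series[-lags:]))
    return out

s = np.sin(np.arange(200) * 0.3) + 0.05 * np.random.default_rng(0).normal(size=200)
print(recursive_forecast(s, lags=8, horizon=5))
print(direct_forecast(s, lags=8, horizon=5))
```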
cs.LG cs.AI cs.CV cs.IR stat.ML | null | 1108.3298 | null | null | http://arxiv.org/pdf/1108.3298v1 | 2011-08-16T18:06:29Z | 2011-08-16T18:06:29Z | A Machine Learning Perspective on Predictive Coding with PAQ | PAQ8 is an open source lossless data compression algorithm that currently
achieves the best compression rates on many benchmarks. This report presents a
detailed description of PAQ8 from a statistical machine learning perspective.
It shows that it is possible to understand some of the modules of PAQ8 and use
this understanding to improve the method. However, intuitive statistical
explanations of the behavior of other modules remain elusive. We hope the
description in this report will be a starting point for discussions that will
increase our understanding, lead to improvements to PAQ8, and facilitate a
transfer of knowledge from PAQ8 to other machine learning methods, such as
recurrent neural networks and stochastic memoizers. Finally, the report
presents a broad range of new applications of PAQ to machine learning tasks
including language modeling and adaptive text prediction, adaptive game
playing, classification, and compression using features from the field of deep
learning.
| [
"['Byron Knoll' 'Nando de Freitas']",
"Byron Knoll, Nando de Freitas"
] |
stat.ML cs.AI cs.LG | null | 1108.3372 | null | null | http://arxiv.org/pdf/1108.3372v1 | 2011-08-16T23:46:59Z | 2011-08-16T23:46:59Z | Overlapping Mixtures of Gaussian Processes for the Data Association
Problem | In this work we introduce a mixture of GPs to address the data association
problem, i.e. to label a group of observations according to the sources that
generated them. Unlike several previously proposed GP mixtures, the novel
mixture has the distinct characteristic of using no gating function to
determine the association of samples and mixture components. Instead, all the
GPs in the mixture are global and samples are clustered following
"trajectories" across input space. We use a non-standard variational Bayesian
algorithm to efficiently recover sample labels and learn the hyperparameters.
We show how multi-object tracking problems can be disambiguated and also
explore the characteristics of the model in traditional regression settings.
| [
"['Miguel Lázaro-Gredilla' 'Steven Van Vaerenbergh' 'Neil Lawrence']",
"Miguel L\\'azaro-Gredilla, Steven Van Vaerenbergh, and Neil Lawrence"
] |
cs.LG cs.AI | 10.1007/s10817-013-9286-5 | 1108.3446 | null | null | http://arxiv.org/abs/1108.3446v2 | 2012-04-12T18:52:58Z | 2011-08-17T11:18:55Z | Premise Selection for Mathematics by Corpus Analysis and Kernel Methods | Smart premise selection is essential when using automated reasoning as a tool
for large-theory formal proof development. A good method for premise selection
in complex mathematical libraries is the application of machine learning to
large corpora of proofs. This work develops learning-based premise selection in
two ways. First, a newly available minimal dependency analysis of existing
high-level formal mathematical proofs is used to build a large knowledge base
of proof dependencies, providing precise data for ATP-based re-verification and
for training premise selection algorithms. Second, a new machine learning
algorithm for premise selection based on kernel methods is proposed and
implemented. To evaluate the impact of both techniques, a benchmark consisting
of 2078 large-theory mathematical problems is constructed, extending the older
MPTP Challenge benchmark. The combined effect of the techniques results in a
50% improvement on the benchmark over the Vampire/SInE state-of-the-art system
for automated reasoning in large theories.
| [
"['Jesse Alama' 'Tom Heskes' 'Daniel Kühlwein' 'Evgeni Tsivtsivadze'\n 'Josef Urban']",
"Jesse Alama, Tom Heskes, Daniel K\\\"uhlwein, Evgeni Tsivtsivadze, and\n Josef Urban"
] |
cs.LG stat.ML | null | 1108.3476 | null | null | http://arxiv.org/pdf/1108.3476v2 | 2011-09-02T08:47:01Z | 2011-08-17T13:36:11Z | Structured Sparsity and Generalization | We present a data dependent generalization bound for a large class of
regularized algorithms which implement structured sparsity constraints. The
bound can be applied to standard squared-norm regularization, the Lasso, the
group Lasso, some versions of the group Lasso with overlapping groups, multiple
kernel learning and other regularization schemes. In all these cases
competitive results are obtained. A novel feature of our bound is that it can
be applied in an infinite dimensional setting such as the Lasso in a separable
Hilbert space or multiple kernel learning with a countable number of kernels.
| [
"Andreas Maurer and Massimiliano Pontil",
"['Andreas Maurer' 'Massimiliano Pontil']"
] |
cs.GT cs.DS cs.LG | null | 1108.4142 | null | null | http://arxiv.org/pdf/1108.4142v3 | 2013-11-26T20:08:07Z | 2011-08-20T20:28:09Z | Dynamic Pricing with Limited Supply | We consider the problem of dynamic pricing with limited supply. A seller has
$k$ identical items for sale and is facing $n$ potential buyers ("agents") that
are arriving sequentially. Each agent is interested in buying one item. Each
agent's value for an item is an IID sample from some fixed distribution with
support $[0,1]$. The seller offers a take-it-or-leave-it price to each arriving
agent (possibly different for different agents), and aims to maximize his
expected revenue.
We focus on "prior-independent" mechanisms -- ones that do not use any
information about the distribution. They are desirable because knowing the
distribution is unrealistic in many practical scenarios. We study how the
revenue of such mechanisms compares to the revenue of the optimal offline
mechanism that knows the distribution ("offline benchmark").
We present a prior-independent dynamic pricing mechanism whose revenue is at
most $O((k \log n)^{2/3})$ less than the offline benchmark, for every
distribution that is regular. In fact, this guarantee holds without *any*
assumptions if the benchmark is relaxed to fixed-price mechanisms. Further, we
prove a matching lower bound. The performance guarantee for the same mechanism
can be improved to $O(\sqrt{k} \log n)$, with a distribution-dependent
constant, if $k/n$ is sufficiently small. We show that, in the worst case over
all demand distributions, this is essentially the best rate that can be
obtained with a distribution-specific constant.
On a technical level, we exploit the connection to multi-armed bandits (MAB).
While dynamic pricing with unlimited supply can easily be seen as an MAB
problem, the intuition behind MAB approaches breaks when applied to the setting
with limited supply. Our high-level conceptual contribution is that even the
limited supply setting can be fruitfully treated as a bandit problem.
| [
"['Moshe Babaioff' 'Shaddin Dughmi' 'Robert Kleinberg'\n 'Aleksandrs Slivkins']",
"Moshe Babaioff, Shaddin Dughmi, Robert Kleinberg and Aleksandrs\n Slivkins"
] |
cs.LG cs.CE | null | 1108.4545 | null | null | http://arxiv.org/pdf/1108.4545v1 | 2011-08-23T10:34:07Z | 2011-08-23T10:34:07Z | The fuzzy gene filter: A classifier performance assessment | The Fuzzy Gene Filter (FGF) is an optimised Fuzzy Inference System designed
to rank genes in order of differential expression, based on expression data
generated in a microarray experiment. This paper examines the effectiveness of
the FGF for feature selection using various classification architectures. The
FGF is compared to three of the most common gene ranking algorithms: t-test,
Wilcoxon test and ROC curve analysis. Four classification schemes are used to
compare the performance of the FGF vis-a-vis the standard approaches: K Nearest
Neighbour (KNN), Support Vector Machine (SVM), Naive Bayesian Classifier (NBC)
and Artificial Neural Network (ANN). A nested stratified Leave-One-Out Cross
Validation scheme is used to identify the optimal number of top-ranking genes, as
well as the optimal classifier parameters. Two microarray data sets are used
for the comparison: a prostate cancer data set and a lymphoma data set.
| [
"['Meir Perez' 'Tshilidzi Marwala']",
"Meir Perez and Tshilidzi Marwala"
] |
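As context for the comparison, the t-test baseline the FGF is measured against can be sketched in a few lines; data shapes and names below are illustrative assumptions:

```python
import numpy as np
from scipy import stats

def rank_genes_ttest(expr, labels):
    """Standard t-test gene ranking: order genes by the absolute two-sample
    t statistic between the two classes, most differentially expressed
    first."""
    a = expr[labels == 0]
    b = expr[labels == 1]
    t, _ = stats.ttest_ind(a, b, axis=0, equal_var=False)
    return np.argsort(-np.abs(t))

rng = np.random.default_rng(0)
expr = rng.normal(size=(40, 500))          # 40 samples x 500 genes
labels = np.repeat([0, 1], 20)
expr[labels == 1, :5] += 2.0               # make the first 5 genes differential
print(rank_genes_ttest(expr, labels)[:10])
```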
cs.LG cs.CE | null | 1108.4551 | null | null | http://arxiv.org/pdf/1108.4551v1 | 2011-08-23T10:52:18Z | 2011-08-23T10:52:18Z | Improving the performance of the ripper in insurance risk classification
: A comparative study using feature selection | The Ripper algorithm is designed to generate rule sets for large datasets
with many features. However, it was shown that the algorithm struggles with
classification performance in the presence of missing data. The algorithm
struggles to classify instances when the quality of the data deteriorates as a
result of increasing missing data. In this paper, a feature selection technique
is used to help improve the classification performance of the Ripper model.
Principal component analysis and evidence automatic relevance determination
techniques are used to improve the performance. A comparison is done to see
which technique helps the algorithm improve the most. Training datasets with
completely observable data were used to construct the model and testing
datasets with missing values were used for measuring accuracy. The results
showed that principal component analysis is the better feature selection
technique for improving the classification performance of the Ripper.
| [
"Mlungisi Duma, Bhekisipho Twala, Tshilidzi Marwala",
"['Mlungisi Duma' 'Bhekisipho Twala' 'Tshilidzi Marwala']"
] |
cs.LG | null | 1108.4559 | null | null | http://arxiv.org/pdf/1108.4559v2 | 2012-11-27T18:59:15Z | 2011-08-23T11:52:35Z | Optimal Algorithms for Ridge and Lasso Regression with Partially
Observed Attributes | We consider the most common variants of linear regression, including Ridge,
Lasso and Support-vector regression, in a setting where the learner is allowed
to observe only a fixed number of attributes of each example at training time.
We present simple and efficient algorithms for these problems: for Lasso and
Ridge regression they need the same total number of attributes (up to
constants) as do full-information algorithms, for reaching a certain accuracy.
For Support-vector regression, we require exponentially fewer attributes
compared to the state of the art. By that, we resolve an open problem recently
posed by Cesa-Bianchi et al. (2010). Experiments show the theoretical bounds to
be justified by superior performance compared to the state of the art.
| [
"['Elad Hazan' 'Tomer Koren']",
"Elad Hazan and Tomer Koren"
] |
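A caricature of the access model studied above, not the authors' algorithms: the learner sees only a few attributes per example and keeps its gradient estimates unbiased by sampling coordinates and rescaling. Two independent samples are used per example (so 2k attributes are observed per step) to avoid the bias a single sample would introduce:

```python
import numpy as np

def sample_attributes(x, k, rng):
    """Unbiased sparse estimate of x built from k uniformly sampled
    coordinates, rescaled by d/k."""
    d = len(x)
    x_hat = np.zeros(d)
    idx = rng.choice(d, size=k, replace=False)
    x_hat[idx] = x[idx] * (d / k)
    return x_hat

def partial_ridge_sgd(X, y, k, lam=0.1, lr=0.01, epochs=40, seed=0):
    """SGD for ridge regression when only a few attributes of each example
    may be observed. Two *independent* attribute samples keep the gradient
    estimate unbiased: E[(x1_hat @ w - y) * x2_hat] = (x @ w - y) * x."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            x1 = sample_attributes(X[i], k, rng)
            x2 = sample_attributes(X[i], k, rng)
            w -= lr * ((x1 @ w - y[i]) * x2 + lam * w)
    return w

rng = np.random.default_rng(1)
w_true = rng.normal(size=10)
X = rng.normal(size=(500, 10))
y = X @ w_true + 0.1 * rng.normal(size=500)
print(np.round(partial_ridge_sgd(X, y, k=4), 2))
print(np.round(w_true, 2))
```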
cs.LG | null | 1108.4961 | null | null | http://arxiv.org/pdf/1108.4961v1 | 2011-08-24T22:38:40Z | 2011-08-24T22:38:40Z | Non-trivial two-armed partial-monitoring games are bandits | We consider online learning in partial-monitoring games against an oblivious
adversary. We show that when the number of actions available to the learner is
two and the game is nontrivial then it is reducible to a bandit-like game and
thus the minimax regret is $\Theta(\sqrt{T})$.
| [
"Andr\\'as Antos, G\\'abor Bart\\'ok, Csaba Szepesv\\'ari",
"['András Antos' 'Gábor Bartók' 'Csaba Szepesvári']"
] |
stat.ML cs.LG q-bio.QM | null | 1108.5397 | null | null | http://arxiv.org/pdf/1108.5397v1 | 2011-08-26T21:21:51Z | 2011-08-26T21:21:51Z | Prediction of peptide bonding affinity: kernel methods for nonlinear
modeling | This paper presents regression models obtained from a process of blind
prediction of peptide binding affinity from provided descriptors for several
distinct datasets as part of the 2006 Comparative Evaluation of Prediction
Algorithms (COEPRA) contest. This paper finds that kernel partial least
squares, a nonlinear partial least squares (PLS) algorithm, outperforms PLS,
and that the incorporation of transferable atom equivalent features improves
predictive capability.
| [
"['Charles Bergeron' 'Theresa Hepburn' 'C. Matthew Sundling'\n 'Michael Krein' 'Bill Katt' 'Nagamani Sukumar' 'Curt M. Breneman'\n 'Kristin P. Bennett']",
"Charles Bergeron, Theresa Hepburn, C. Matthew Sundling, Michael Krein,\n Bill Katt, Nagamani Sukumar, Curt M. Breneman, Kristin P. Bennett"
] |
cs.IR cs.ET cs.LG physics.data-an | null | 1108.5491 | null | null | http://arxiv.org/pdf/1108.5491v1 | 2011-08-28T02:55:18Z | 2011-08-28T02:55:18Z | Improving Ranking Using Quantum Probability | The paper shows that ranking information units by quantum probability differs
from ranking them by classical probability, provided the same data are used
for parameter estimation. As probability of detection (also known as recall or
power) and probability of false alarm (also known as fallout or size) measure
the quality of ranking, we point out and show that ranking by quantum
probability yields a higher probability of detection than ranking by classical
probability, for a given probability of false alarm and the same parameter
estimation data. As quantum probability has provided more effective detectors
than classical probability in domains other than data management, we
conjecture that a system that can implement subspace-based detectors will be
more effective than a system which implements set-based detectors, the
effectiveness being calculated as expected recall estimated over the
probability of detection and expected fallout estimated over the probability
of false alarm.
| [
"['Massimo Melucci']",
"Massimo Melucci"
] |
cs.LG cs.GT cs.SI | null | 1108.5514 | null | null | http://arxiv.org/pdf/1108.5514v1 | 2011-08-29T04:18:19Z | 2011-08-29T04:18:19Z | Strategic Learning and Robust Protocol Design for Online Communities
with Selfish Users | This paper focuses on analyzing the free-riding behavior of self-interested
users in online communities. Hence, traditional optimization methods for
communities composed of compliant users such as network utility maximization
cannot be applied here. In our prior work, we show how social reciprocation
protocols can be designed in online communities which have populations
consisting of a continuum of users and are stationary under stochastic
permutations. Under these assumptions, we are able to prove that users
voluntarily comply with the pre-determined social norms and cooperate with
other users in the community by providing their services. In this paper, we
generalize the study by analyzing the interactions of self-interested users in
online communities that have finite populations and are not stationary. To optimize
their long-term performance based on their knowledge, users adapt their
strategies to play their best response by solving individual stochastic control
problems. The best-response dynamic introduces a stochastic dynamic process in
the community, in which the strategies of users evolve over time. We then
investigate the long-term evolution of a community, and prove that the
community will converge to stochastically stable equilibria which are stable
against stochastic permutations. Understanding the evolution of a community
provides protocol designers with guidelines for designing social norms in which
no user has incentives to adapt its strategy and deviate from the prescribed
protocol, thereby ensuring that the adopted protocol will enable the community
to achieve the optimal social welfare.
| [
"['Yu Zhang' 'Mihaela van der Schaar']",
"Yu Zhang, Mihaela van der Schaar"
] |
cs.IR cs.LG physics.data-an | null | 1108.5575 | null | null | http://arxiv.org/pdf/1108.5575v1 | 2011-08-29T14:37:39Z | 2011-08-29T14:37:39Z | Getting Beyond the State of the Art of Information Retrieval with
Quantum Theory | According to the probability ranking principle, the document set with the
highest values of probability of relevance optimizes information retrieval
effectiveness given the probabilities are estimated as accurately as possible.
The key point of this principle is the separation of the document set into two
subsets with a given level of fallout and with the highest recall. If subsets
of set measures are replaced by subspaces and space measures, we obtain an
alternative theory stemming from Quantum Theory. That theory is named vector
probability because vectors represent events as sets do in classical
probability. The paper shows that the separation into vector subspaces is more
effective than the separation into subsets with the same available evidence.
The result is proved mathematically and verified experimentally. In general,
the paper suggests that quantum theory is not only a source of rhetoric
inspiration, but is a sufficient condition to improve retrieval effectiveness
in a principled way.
| [
"['Massimo Melucci']",
"Massimo Melucci"
] |
cs.AI cs.LG | 10.1007/978-3-642-23780-5_34 | 1108.5668 | null | null | http://arxiv.org/abs/1108.5668v1 | 2011-08-29T17:46:08Z | 2011-08-29T17:46:08Z | Datum-Wise Classification: A Sequential Approach to Sparsity | We propose a novel classification technique whose aim is to select an
appropriate representation for each datapoint, in contrast to the usual
approach of selecting a representation encompassing the whole dataset. This
datum-wise representation is found by using a sparsity inducing empirical risk,
which is a relaxation of the standard $L_0$ regularized risk. The classification
problem is modeled as a sequential decision process that sequentially chooses,
for each datapoint, which features to use before classifying. Datum-Wise
Classification extends naturally to multi-class tasks, and we describe a
specific case where our inference has equivalent complexity to a traditional
linear classifier, while still using a variable number of features. We compare
our classifier to classical $L_1$ regularized linear models ($L_1$-SVM and LARS) on
a set of common binary and multi-class datasets and show that for an equal
average number of features used we can get improved performance using our
method.
| [
"['Gabriel Dulac-Arnold' 'Ludovic Denoyer' 'Philippe Preux'\n 'Patrick Gallinari']",
"Gabriel Dulac-Arnold, Ludovic Denoyer, Philippe Preux and Patrick\n Gallinari"
] |
cs.IR cs.LG | null | 1108.5784 | null | null | http://arxiv.org/pdf/1108.5784v1 | 2011-08-30T00:31:44Z | 2011-08-30T00:31:44Z | Probability Ranking in Vector Spaces | The Probability Ranking Principle states that the document set with the
highest values of probability of relevance optimizes information retrieval
effectiveness given the probabilities are estimated as accurately as possible.
The key point of the principle is the separation of the document set into two
subsets with a given level of fallout and with the highest recall. The paper
introduces the separation between two vector subspaces and shows that the
separation yields a more effective performance than the optimal separation into
subsets with the same available evidence, the performance being measured with
recall and fallout. The result is proved mathematically and exemplified
experimentally.
| [
"['Massimo Melucci']",
"Massimo Melucci"
] |
cs.LG cs.GT | null | 1108.6088 | null | null | http://arxiv.org/pdf/1108.6088v1 | 2011-08-30T21:48:37Z | 2011-08-30T21:48:37Z | No Internal Regret via Neighborhood Watch | We present an algorithm which attains O(\sqrt{T}) internal (and thus
external) regret for finite games with partial monitoring under the local
observability condition. Recently, this condition has been shown by (Bartok,
Pal, and Szepesvari, 2011) to imply the O(\sqrt{T}) rate for partial monitoring
games against an i.i.d. opponent, and the authors conjectured that the same
holds for non-stochastic adversaries. Our result settles this conjecture in the affirmative, and it
completes the characterization of possible rates for finite partial-monitoring
games, an open question stated by (Cesa-Bianchi, Lugosi, and Stoltz, 2006). Our
regret guarantees also hold for the more general model of partial monitoring
with random signals.
| [
"['Dean Foster' 'Alexander Rakhlin']",
"Dean Foster and Alexander Rakhlin"
] |
cs.AI cs.LG | null | 1108.6211 | null | null | http://arxiv.org/pdf/1108.6211v2 | 2011-09-01T09:19:00Z | 2011-08-31T12:46:11Z | Transfer from Multiple MDPs | Transfer reinforcement learning (RL) methods leverage on the experience
collected on a set of source tasks to speed up RL algorithms. A simple and
effective approach is to transfer samples from source tasks and include them
into the training set used to solve a given target task. In this paper, we
investigate the theoretical properties of this transfer method and we introduce
novel algorithms adapting the transfer process on the basis of the similarity
between source and target tasks. Finally, we report illustrative experimental
results in a continuous chain problem.
| [
"['Alessandro Lazaric' 'Marcello Restelli']",
"Alessandro Lazaric (INRIA Lille - Nord Europe), Marcello Restelli"
] |
cs.LG cs.NA | null | 1108.6296 | null | null | http://arxiv.org/pdf/1108.6296v2 | 2012-01-14T16:11:56Z | 2011-08-31T17:36:26Z | Infinite Tucker Decomposition: Nonparametric Bayesian Models for
Multiway Data Analysis | Tensor decomposition is a powerful computational tool for multiway data
analysis. Many popular tensor decomposition approaches---such as the Tucker
decomposition and CANDECOMP/PARAFAC (CP)---amount to multi-linear
factorization. They are insufficient to model (i) complex interactions between
data entities, (ii) various data types (e.g. missing data and binary data), and
(iii) noisy observations and outliers. To address these issues, we propose
tensor-variate latent nonparametric Bayesian models, coupled with efficient
inference methods, for multiway data analysis. We name these models InfTucker.
Using InfTucker, we conduct Tucker decomposition in an infinite feature
space. Unlike classical tensor decomposition models, our new approaches handle
both continuous and binary data in a probabilistic framework. Unlike previous
Bayesian models on matrices and tensors, our models are based on latent
Gaussian or $t$ processes with nonlinear covariance functions. To efficiently
learn the InfTucker from data, we develop a variational inference technique on
tensors. Compared with classical implementation, the new technique reduces both
time and space complexities by several orders of magnitude. Our experimental
results on chemometrics and social network datasets demonstrate that our new
models achieve significantly higher prediction accuracy than state-of-the-art
tensor decomposition approaches.
| [
"['Zenglin Xu' 'Feng Yan' 'Yuan' 'Qi']",
"Zenglin Xu, Feng Yan, Yuan (Alan) Qi"
] |
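For contrast with the infinite-feature-space construction, here is a sketch of the classical finite Tucker decomposition via higher-order SVD (HOSVD), the multi-linear baseline the abstract refers to; this is textbook material, not the paper's InfTucker:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Classical Tucker decomposition via higher-order SVD: one factor
    matrix per mode (leading singular vectors of the unfolding), plus a
    core tensor obtained by projecting T onto the factors."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def reconstruct(core, factors):
    T = core
    for mode, U in enumerate(factors):
        T = np.moveaxis(np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)
    return T

rng = np.random.default_rng(0)
T = rng.normal(size=(6, 5, 4))
core, factors = hosvd(T, ranks=(3, 3, 3))
err = np.linalg.norm(T - reconstruct(core, factors)) / np.linalg.norm(T)
print(f"relative reconstruction error: {err:.3f}")
```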
cs.LG | null | 1109.0093 | null | null | http://arxiv.org/pdf/1109.0093v4 | 2012-12-10T09:00:47Z | 2011-09-01T05:28:55Z | Local Component Analysis | Kernel density estimation, a.k.a. Parzen windows, is a popular density
estimation method, which can be used for outlier detection or clustering. With
multivariate data, its performance is heavily reliant on the metric used within
the kernel. Most earlier work has focused on learning only the bandwidth of the
kernel (i.e., a scalar multiplicative factor). In this paper, we propose to
learn a full Euclidean metric through an expectation-minimization (EM)
procedure, which can be seen as an unsupervised counterpart to neighbourhood
component analysis (NCA). In order to avoid overfitting with a fully
nonparametric density estimator in high dimensions, we also consider a
semi-parametric Gaussian-Parzen density model, where some of the variables are
modelled through a jointly Gaussian density, while others are modelled through
Parzen windows. For these two models, EM leads to simple closed-form updates
based on matrix inversions and eigenvalue decompositions. We show empirically
that our method leads to density estimators with higher test-likelihoods than
natural competing methods, and that the metrics may be used within most
unsupervised learning techniques that rely on such metrics, such as spectral
clustering or manifold learning methods. Finally, we present a stochastic
approximation scheme which allows for the use of this method in a large-scale
setting.
| [
"['Nicolas Le Roux' 'Francis Bach']",
"Nicolas Le Roux (INRIA Paris - Rocquencourt, LIENS), Francis Bach\n (INRIA Paris - Rocquencourt, LIENS)"
] |
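The objective being optimized can be stated concretely: the leave-one-out log-likelihood of a Parzen window estimator as a function of a full kernel metric M. The sketch below only evaluates that objective on toy data; the paper derives closed-form EM updates to maximize it, and the toy metric family here is an illustrative assumption:

```python
import numpy as np

def parzen_loo_loglik(X, M):
    """Leave-one-out log-likelihood of a Parzen window estimator with a
    full Gaussian kernel covariance M -- the kind of objective maximized
    over the metric in the abstract's EM procedure."""
    n, d = X.shape
    Minv = np.linalg.inv(M)
    _, logdet = np.linalg.slogdet(M)
    diffs = X[:, None, :] - X[None, :, :]               # (n, n, d)
    sq = np.einsum('ijk,kl,ijl->ij', diffs, Minv, diffs)
    logk = -0.5 * (sq + logdet + d * np.log(2 * np.pi))
    np.fill_diagonal(logk, -np.inf)                     # leave one out
    m = logk.max(axis=1, keepdims=True)
    loo = m.squeeze() + np.log(np.exp(logk - m).sum(axis=1)) - np.log(n - 1)
    return loo.sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)) @ np.array([[2.0, 0.0], [0.8, 0.3]])
for scale in (0.1, 0.5, 2.0):
    print(scale, parzen_loo_loglik(X, scale * np.cov(X.T)))
```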
cs.LG cs.CR stat.ML | null | 1109.0105 | null | null | http://arxiv.org/pdf/1109.0105v2 | 2011-09-16T17:10:18Z | 2011-09-01T06:43:23Z | Differentially Private Online Learning | In this paper, we consider the problem of preserving privacy in the online
learning setting. We study the problem in the online convex programming (OCP)
framework---a popular online learning setting with several interesting
theoretical and practical implications---while using differential privacy as
the formal privacy measure. For this problem, we distill two critical
attributes that a private OCP algorithm should have in order to provide
reasonable privacy as well as utility guarantees: 1) linearly decreasing
sensitivity, i.e., as new data points arrive their effect on the learning model
decreases, 2) sub-linear regret bound---regret bound is a popular
goodness/utility measure of an online learning algorithm.
Given an OCP algorithm that satisfies these two conditions, we provide a
general framework to convert the given algorithm into a privacy preserving OCP
algorithm with good (sub-linear) regret. We then illustrate our approach by
converting two popular online learning algorithms into their differentially
private variants while guaranteeing sub-linear regret ($O(\sqrt{T})$). Next, we
consider the special case of online linear regression problems, a practically
important class of online learning problems, for which we generalize an
approach by Dwork et al. to provide a differentially private algorithm with
just $O(\log^{1.5} T)$ regret. Finally, we show that our online learning
framework can be used to provide differentially private algorithms for offline
learning as well. For the offline learning problem, our approach obtains
better error bounds and can handle a larger class of problems than the
existing state-of-the-art methods of Chaudhuri et al.
| [
"['Prateek Jain' 'Pravesh Kothari' 'Abhradeep Thakurta']",
"Prateek Jain, Pravesh Kothari, Abhradeep Thakurta"
] |
quant-ph cs.LG | 10.1007/s11128-012-0506-4 | 1109.0325 | null | null | http://arxiv.org/abs/1109.0325v1 | 2011-09-01T23:10:31Z | 2011-09-01T23:10:31Z | Quantum adiabatic machine learning | We develop an approach to machine learning and anomaly detection via quantum
adiabatic evolution. In the training phase we identify an optimal set of weak
classifiers, to form a single strong classifier. In the testing phase we
adiabatically evolve one or more strong classifiers on a superposition of
inputs in order to find certain anomalous elements in the classification space.
Both the training and testing phases are executed via quantum adiabatic
evolution. We apply and illustrate this approach in detail to the problem of
software verification and validation.
| [
"['Kristen L. Pudenz' 'Daniel A. Lidar']",
"Kristen L. Pudenz, Daniel A. Lidar"
] |
stat.ML cs.LG | null | 1109.0455 | null | null | http://arxiv.org/pdf/1109.0455v1 | 2011-09-02T14:27:25Z | 2011-09-02T14:27:25Z | Gradient-based kernel dimension reduction for supervised learning | This paper proposes a novel kernel approach to linear dimension reduction for
supervised learning. The purpose of the dimension reduction is to find
directions in the input space to explain the output as effectively as possible.
The proposed method uses an estimator for the gradient of the regression function,
based on the covariance operators on reproducing kernel Hilbert spaces. In
comparison with other existing methods, the proposed one has wide applicability
without strong assumptions on the distributions or the type of variables, and
uses computationally simple eigendecomposition. Experimental results show that
the proposed method successfully finds the effective directions with efficient
computation.
| [
"Kenji Fukumizu and Chenlei Leng",
"['Kenji Fukumizu' 'Chenlei Leng']"
] |
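A finite-sample sketch of the gradient-based idea: fit the regression function by kernel ridge regression, average the outer products of its input-space gradients, and eigendecompose. The paper formulates its estimator with cross-covariance operators on RKHSs; the version below is an illustrative caricature with assumed names and hyperparameters:

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def gradient_kdr(X, y, n_dirs=1, sigma=1.0, eps=1e-3):
    """Fit f by kernel ridge regression, then eigendecompose the average
    outer product of the input gradients of f; the top eigenvectors span
    the effective directions for regression."""
    n, d = X.shape
    K = gaussian_kernel(X, X, sigma)
    alpha = np.linalg.solve(K + n * eps * np.eye(n), y)
    M = np.zeros((d, d))
    for j in range(n):
        # gradient of sum_i alpha_i k(x_j, x_i) with respect to x_j
        g = ((X[j] - X) * (alpha * K[j])[:, None]).sum(0) / (-sigma ** 2)
        M += np.outer(g, g) / n
    eigvals, eigvecs = np.linalg.eigh(M)
    return eigvecs[:, -n_dirs:]

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = np.sin(X[:, 0] + 0.5 * X[:, 1]) + 0.05 * rng.normal(size=300)
print(gradient_kdr(X, y, n_dirs=1).ravel())
```

On this toy problem the recovered direction should roughly align with (1, 0.5, 0, 0), up to sign and scale.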
stat.ME cs.LG | null | 1109.0486 | null | null | http://arxiv.org/pdf/1109.0486v3 | 2012-11-12T16:07:03Z | 2011-09-02T15:48:23Z | The Variational Garrote | In this paper, we present a new variational method for sparse regression
using $L_0$ regularization. The variational parameters appear in the
approximate model in a way that is similar to Breiman's Garrote model. We refer
to this method as the variational Garrote (VG). We show that the combination of
the variational approximation and $L_0$ regularization has the effect of making
the problem effectively of maximal rank even when the number of samples is
small compared to the number of variables. The VG is compared numerically with
the Lasso method, ridge regression and the recently introduced paired mean
field method (PMF) (M. Titsias & M. L\'azaro-Gredilla, NIPS 2012). Numerical
results show that the VG and PMF yield more accurate predictions and more
accurately reconstruct the true model than the other methods. It is shown that
the VG finds correct solutions when the Lasso solution is inconsistent due to
large input correlations. Globally, VG is significantly faster than PMF and
tends to perform better as the problems become denser and in problems with
strongly correlated inputs. The naive implementation of the VG scales cubic
with the number of features. By introducing Lagrange multipliers we obtain a
dual formulation of the problem that scales cubic in the number of samples, but
close to linear in the number of features.
| [
"Hilbert J. Kappen, Vicen\\c{c} G\\'omez",
"['Hilbert J. Kappen' 'Vicenç Gómez']"
] |
cs.CR cs.LG | null | 1109.0507 | null | null | http://arxiv.org/pdf/1109.0507v1 | 2011-09-02T17:35:50Z | 2011-09-02T17:35:50Z | How Open Should Open Source Be? | Many open-source projects land security fixes in public repositories before
shipping these patches to users. This paper presents attacks on such projects -
taking Firefox as a case-study - that exploit patch metadata to efficiently
search for security patches prior to shipping. Using access-restricted bug
reports linked from patch descriptions, security patches can be immediately
identified for 260 out of 300 days of Firefox 3 development. In response to
Mozilla obfuscating descriptions, we show that machine learning can exploit
metadata such as patch author to search for security patches, extending the
total window of vulnerability by 5 months in an 8 month period when examining
up to two patches daily. Finally we present strong evidence that further
metadata obfuscation is unlikely to prevent information leaks, and we argue
that open-source projects instead ought to keep security patches secret until
they are ready to be released.
| [
"Adam Barth, Saung Li, Benjamin I. P. Rubinstein, Dawn Song",
"['Adam Barth' 'Saung Li' 'Benjamin I. P. Rubinstein' 'Dawn Song']"
] |
cs.LG cs.AI cs.CV stat.ML | null | 1109.0820 | null | null | http://arxiv.org/pdf/1109.0820v1 | 2011-09-05T07:52:17Z | 2011-09-05T07:52:17Z | ShareBoost: Efficient Multiclass Learning with Feature Sharing | Multiclass prediction is the problem of classifying an object into a relevant
target class. We consider the problem of learning a multiclass predictor that
uses only a few features, and in particular, the number of features used should
increase sub-linearly with the number of possible classes. This implies that
features should be shared by several classes. We describe and analyze the
ShareBoost algorithm for learning a multiclass predictor that uses few shared
features. We prove that ShareBoost efficiently finds a predictor that uses few
shared features (if such a predictor exists) and that it has a small
generalization error. We also describe how to use ShareBoost for learning a
non-linear predictor that has a fast evaluation time. In a series of
experiments with natural data sets we demonstrate the benefits of ShareBoost
and evaluate its success relatively to other state-of-the-art approaches.
| [
"['Shai Shalev-Shwartz' 'Yonatan Wexler' 'Amnon Shashua']",
"Shai Shalev-Shwartz and Yonatan Wexler and Amnon Shashua"
] |
cs.LG stat.ML | 10.5121/ijwmn.2011.3412 | 1109.0895 | null | null | http://arxiv.org/abs/1109.0895v1 | 2011-08-31T21:40:00Z | 2011-08-31T21:40:00Z | Nonlinear Channel Estimation for OFDM System by Complex LS-SVM under
High Mobility Conditions | A nonlinear channel estimator using complex Least Square Support Vector
Machines (LS-SVM) is proposed for pilot-aided OFDM system and applied to Long
Term Evolution (LTE) downlink under high mobility conditions. The estimation
algorithm makes use of the reference signals to estimate the total frequency
response of the highly selective multipath channel in the presence of
non-Gaussian impulse noise interfering with pilot signals. Thus, the algorithm
maps the training data into a high-dimensional feature space and uses the structural
risk minimization (SRM) principle to carry out the regression estimation for
the frequency response function of the highly selective channel. The
simulations show the effectiveness of the proposed method which has good
performance and high precision to track the variations of the fading channels
compared to the conventional LS method and it is robust at high speed mobility.
| [
"Anis Charrada, Abdelaziz Samet",
"['Anis Charrada' 'Abdelaziz Samet']"
] |
cs.CE cs.ET cs.LG q-bio.QM | 10.5121/ijcses.2011.2302 | 1109.1062 | null | null | http://arxiv.org/abs/1109.1062v1 | 2011-09-06T04:42:55Z | 2011-09-06T04:42:55Z | Review on Feature Selection Techniques and the Impact of SVM for Cancer
Classification using Gene Expression Profile | The DNA microarray technology has modernized the approach of biology research
in such a way that scientists can now measure the expression levels of
thousands of genes simultaneously in a single experiment. Gene expression
profiles, which represent the state of a cell at a molecular level, have great
potential as a medical diagnosis tool. But compared to the number of genes
involved, available training data sets generally have a fairly small sample
size for classification. These training data limitations constitute a challenge
to certain classification methodologies. Feature selection techniques can be
used to extract the marker genes which influence the classification accuracy
effectively by eliminating the unwanted noisy and redundant genes. This paper
presents a review of feature selection techniques that have been employed in
microarray data based cancer classification and also the predominant role of
SVM for cancer classification.
| [
"['G. Victo Sudha George' 'V. Cyril Raj']",
"G. Victo Sudha George and V.Cyril Raj"
] |
cs.DM cs.CE cs.LG | null | 1109.1355 | null | null | http://arxiv.org/pdf/1109.1355v1 | 2011-09-07T05:10:58Z | 2011-09-07T05:10:58Z | Localization on low-order eigenvectors of data matrices | Eigenvector localization refers to the situation when most of the components
of an eigenvector are zero or near-zero. This phenomenon has been observed on
eigenvectors associated with extremal eigenvalues, and in many of those cases
it can be meaningfully interpreted in terms of "structural heterogeneities" in
the data. For example, the largest eigenvectors of adjacency matrices of large
complex networks often have most of their mass localized on high-degree nodes;
and the smallest eigenvectors of the Laplacians of such networks are often
localized on small but meaningful community-like sets of nodes. Here, we
describe localization associated with low-order eigenvectors, i.e.,
eigenvectors corresponding to eigenvalues that are not extremal but that are
"buried" further down in the spectrum. Although we have observed it in several
unrelated applications, this phenomenon of low-order eigenvector localization
defies common intuitions and simple explanations, and it creates serious
difficulties for the applicability of popular eigenvector-based machine
learning and data analysis tools. After describing two examples where low-order
eigenvector localization arises, we present a very simple model that
qualitatively reproduces several of the empirically-observed results. This
model suggests certain coarse structural similarities among the
seemingly-unrelated applications where we have observed low-order eigenvector
localization, and it may be used as a diagnostic tool to help extract insight
from data graphs when such low-order eigenvector localization is present.
| [
"Mihai Cucuringu and Michael W. Mahoney",
"['Mihai Cucuringu' 'Michael W. Mahoney']"
] |
cs.LG cs.DC | 10.1002/cpe.2858 | 1109.1396 | null | null | http://arxiv.org/abs/1109.1396v3 | 2012-06-06T09:26:30Z | 2011-09-07T09:16:37Z | Gossip Learning with Linear Models on Fully Distributed Data | Machine learning over fully distributed data poses an important problem in
peer-to-peer (P2P) applications. In this model we have one data record at each
network node, but without the possibility to move raw data due to privacy
considerations. For example, user profiles, ratings, history, or sensor
readings can represent this case. This problem is difficult because there is
no possibility of learning local models, the system model offers almost no
guarantees of reliability, and yet the communication cost needs to be kept low.
Here we propose gossip learning, a generic approach that is based on multiple
models taking random walks over the network in parallel, while applying an
online learning algorithm to improve themselves, and getting combined via
ensemble learning methods. We present an instantiation of this approach for the
case of classification with linear models. Our main contribution is an ensemble
learning method which---through the continuous combination of the models in the
network---implements a virtual weighted voting mechanism over an exponential
number of models at practically no extra cost as compared to independent random
walks. We prove the convergence of the method theoretically, and perform
extensive experiments on benchmark datasets. Our experimental analysis
demonstrates the performance and robustness of the proposed approach.
| [
"['Róbert Ormándi' 'István Hegedüs' 'Márk Jelasity']",
"R\\'obert Orm\\'andi, Istv\\'an Heged\\\"us, M\\'ark Jelasity"
] |
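A schematic sketch of the gossip learning loop for the linear-model instantiation: models walk over the nodes, are averaged with the model last cached at a node, and take an online step on that node's single record, so raw data never moves. The uniform-jump topology, averaging merge, and hinge-loss learner are simplifying assumptions, not the paper's exact protocol:

```python
import random

def update(w, x, y, lr=0.1):
    """Online hinge-loss step for a linear classifier on one local record."""
    if y * sum(wi * xi for wi, xi in zip(w, x)) < 1:
        return [wi + lr * y * xi for wi, xi in zip(w, x)]
    return list(w)

def gossip_learning(data, n_models=10, rounds=300, seed=0):
    """Several linear models take random walks over the nodes; at each hop
    a model is averaged with the model last seen at that node (a virtual
    ensemble) and then trained on the node's single record."""
    rng = random.Random(seed)
    dim = len(data[0][0])
    walkers = [[0.0] * dim for _ in range(n_models)]
    cached = {}                                   # last model seen per node
    for _ in range(rounds):
        for k in range(n_models):
            node = rng.randrange(len(data))
            if node in cached:                    # merge by averaging
                walkers[k] = [(a + b) / 2 for a, b in zip(walkers[k], cached[node])]
            x, y = data[node]
            walkers[k] = update(walkers[k], x, y)
            cached[node] = list(walkers[k])
    return walkers[0]

rng = random.Random(1)
data = [([rng.gauss(y * 2.0, 1.0), 1.0], y) for y in (1, -1) for _ in range(30)]
print(gossip_learning(data))
```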
cs.GT cs.LG cs.MA nlin.AO q-bio.PE | 10.1103/PhysRevE.85.041145 | 1109.1528 | null | null | http://arxiv.org/abs/1109.1528v3 | 2012-03-01T22:51:48Z | 2011-09-07T18:21:39Z | Dynamics of Boltzmann Q-Learning in Two-Player Two-Action Games | We consider the dynamics of Q-learning in two-player two-action games with a
Boltzmann exploration mechanism. For any non-zero exploration rate the dynamics
is dissipative, which guarantees that agent strategies converge to rest points
that are generally different from the game's Nash Equilibria (NE). We provide a
comprehensive characterization of the rest point structure for different games,
and examine the sensitivity of this structure with respect to the noise due to
exploration. Our results indicate that for a class of games with multiple NE
the asymptotic behavior of learning dynamics can undergo drastic changes at
critical exploration rates. Furthermore, we demonstrate that for certain games
with a single NE, it is possible to have additional rest points (not
corresponding to any NE) that persist for a finite range of the exploration
rates and disappear when the exploration rates of both players tend to zero.
| [
"['Ardeshir Kianercy' 'Aram Galstyan']",
"Ardeshir Kianercy, Aram Galstyan"
] |
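The dynamics can be reproduced in a few lines with two stateless Q-learners using Boltzmann (softmax) exploration in a repeated 2x2 game; the payoffs, learning rate, and temperature below are illustrative choices, not taken from the paper:

```python
import numpy as np

def boltzmann(q, temp):
    """Softmax (Boltzmann) action probabilities at exploration rate temp."""
    z = np.exp((q - q.max()) / temp)
    return z / z.sum()

def q_learning_2x2(payoff_a, payoff_b, temp=0.2, lr=0.05, steps=20000, seed=0):
    """Two Q-learners with Boltzmann exploration in a 2x2 game. For any
    nonzero temperature the strategies settle on rest points that, as the
    abstract notes, generally differ from the Nash equilibria."""
    rng = np.random.default_rng(seed)
    qa, qb = np.zeros(2), np.zeros(2)
    for _ in range(steps):
        pa, pb = boltzmann(qa, temp), boltzmann(qb, temp)
        i = rng.choice(2, p=pa)
        j = rng.choice(2, p=pb)
        qa[i] += lr * (payoff_a[i, j] - qa[i])
        qb[j] += lr * (payoff_b[i, j] - qb[j])
    return boltzmann(qa, temp), boltzmann(qb, temp)

# Prisoner's dilemma payoffs (row player; column player gets the transpose).
A = np.array([[3.0, 0.0], [5.0, 1.0]])
print(q_learning_2x2(A, A.T))
```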
math.OC cs.LG cs.NI cs.SY math.PR | null | 1109.1533 | null | null | http://arxiv.org/pdf/1109.1533v1 | 2011-09-07T18:33:59Z | 2011-09-07T18:33:59Z | The Non-Bayesian Restless Multi-Armed Bandit: A Case of Near-Logarithmic
Strict Regret | In the classic Bayesian restless multi-armed bandit (RMAB) problem, there are
$N$ arms, with rewards on all arms evolving at each time as Markov chains with
known parameters. A player seeks to activate $K \geq 1$ arms at each time in
order to maximize the expected total reward obtained over multiple plays. RMAB
is a challenging problem that is known to be PSPACE-hard in general. We
consider in this work the even harder non-Bayesian RMAB, in which the
parameters of the Markov chain are assumed to be unknown \emph{a priori}. We
develop an original approach to this problem that is applicable when the
corresponding Bayesian problem has the structure that, depending on the known
parameter values, the optimal solution is one of a prescribed finite set of
policies. In such settings, we propose to learn the optimal policy for the
non-Bayesian RMAB by employing a suitable meta-policy which treats each policy
from this finite set as an arm in a different non-Bayesian multi-armed bandit
problem for which a single-arm selection policy is optimal. We demonstrate this
approach by developing a novel sensing policy for opportunistic spectrum access
over unknown dynamic channels. We prove that our policy achieves
near-logarithmic regret (the difference in expected reward compared to a
model-aware genie), which leads to the same average reward that can be achieved
by the optimal policy under a known model. This is the first such result in the
literature for a non-Bayesian RMAB. For our proof, we also develop a novel
generalization of the Chernoff-Hoeffding bound.
| [
"Wenhan Dai, Yi Gai, Bhaskar Krishnamachari, Qing Zhao",
"['Wenhan Dai' 'Yi Gai' 'Bhaskar Krishnamachari' 'Qing Zhao']"
] |
cs.LG cs.NI cs.SY math.OC math.PR | null | 1109.1552 | null | null | http://arxiv.org/pdf/1109.1552v1 | 2011-09-07T19:54:30Z | 2011-09-07T19:54:30Z | Efficient Online Learning for Opportunistic Spectrum Access | The problem of opportunistic spectrum access in cognitive radio networks has
been recently formulated as a non-Bayesian restless multi-armed bandit problem.
In this problem, there are N arms (corresponding to channels) and one player
(corresponding to a secondary user). The state of each arm evolves as a
finite-state Markov chain with unknown parameters. At each time slot, the
player can select K < N arms to play and receives state-dependent rewards
(corresponding to the throughput obtained given the activity of primary users).
The objective is to maximize the expected total rewards (i.e., total
throughput) obtained over multiple plays. The performance of an algorithm for
such a multi-armed bandit problem is measured in terms of regret, defined as
the difference in expected reward compared to a model-aware genie who always
plays the best K arms. In this paper, we propose a new continuous exploration
and exploitation (CEE) algorithm for this problem. When no information is
available about the dynamics of the arms, CEE is the first algorithm to
guarantee near-logarithmic regret uniformly over time. When some bounds
corresponding to the stationary state distributions and the state-dependent
rewards are known, we show that CEE can be easily modified to achieve
logarithmic regret over time. In contrast, prior algorithms require additional
information concerning bounds on the second eigenvalues of the transition
matrices in order to guarantee logarithmic regret. Finally, we show through
numerical simulations that CEE is more efficient than prior algorithms.
| [
"Wenhan Dai, Yi Gai, Bhaskar Krishnamachari",
"['Wenhan Dai' 'Yi Gai' 'Bhaskar Krishnamachari']"
] |
cs.SI cs.LG physics.soc-ph | null | 1109.1605 | null | null | http://arxiv.org/pdf/1109.1605v1 | 2011-09-08T00:00:16Z | 2011-09-08T00:00:16Z | On Clustering on Graphs with Multiple Edge Types | We study clustering on graphs with multiple edge types. Our main motivation
is that similarities between objects can be measured in many different metrics.
For instance, the similarity between two papers can be based on common
authors, where they are published, keyword similarity, citations, etc. As
such, graphs with multiple edge types are a more accurate model for describing
similarities between
objects. Each edge/metric provides only partial information about the data;
recovering full information requires aggregation of all the similarity metrics.
Clustering becomes much more challenging in this context, since in addition to
the difficulties of the traditional clustering problem, we have to deal with a
space of clusterings. We generalize the concept of clustering in single-edge
graphs to multi-edged graphs and investigate problems such as: Can we find a
clustering that remains good, even if we change the relative weights of
metrics? How can we describe the space of clusterings efficiently? Can we find
unexpected clusterings (a good clustering that is distant from all given
clusterings)? If given the ground-truth clustering, can we recover how the
weights for edge types were aggregated? In this paper, we discuss these
problems and the underlying algorithmic challenges and propose some solutions.
We also present two case studies: one based on papers on Arxiv and one based on
CIA World Factbook.
| [
"['Matthew Rocklin' 'Ali Pinar']",
"Matthew Rocklin and Ali Pinar"
] |
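A hedged sketch of the setting this abstract describes: one similarity matrix per edge type, a weight vector that aggregates them, and a standard spectral bipartition on the aggregate. The matrices, weights, and the clustering method are illustrative assumptions, not the paper's algorithms.

```python
import numpy as np

# Two edge types -> two symmetric similarity matrices (made-up data).
rng = np.random.default_rng(0)
n = 8
def sym(M): return (M + M.T) / 2
sims = [sym(rng.random((n, n))), sym(rng.random((n, n))))] if False else \
       [sym(rng.random((n, n))), sym(rng.random((n, n)))]

def cluster(weights):
    S = sum(w * A for w, A in zip(weights, sims))   # aggregate the metrics
    L = np.diag(S.sum(axis=1)) - S                  # graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1] > 0                           # Fiedler bipartition

# Different relative weights can induce different clusterings -- the
# "space of clusterings" the abstract investigates.
print(cluster([0.9, 0.1]))
print(cluster([0.1, 0.9]))
```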
cs.LG cs.NI math.OC math.PR | null | 1109.1606 | null | null | http://arxiv.org/pdf/1109.1606v1 | 2011-09-08T00:43:42Z | 2011-09-08T00:43:42Z | Online Learning for Combinatorial Network Optimization with Restless
Markovian Rewards | Combinatorial network optimization algorithms that compute optimal structures
taking into account edge weights form the foundation for many network
protocols. Examples include shortest path routing, minimal spanning tree
computation, maximum weighted matching on bipartite graphs, etc. We present
CLRMR, the first online learning algorithm that efficiently solves the
stochastic version of these problems where the underlying edge weights vary as
independent Markov chains with unknown dynamics.
The performance of an online learning algorithm is characterized in terms of
regret, defined as the cumulative difference between the rewards obtained by a
suitably defined genie and those obtained by the given algorithm. We prove
that, compared to a genie that knows the Markov transition matrices and uses
the single-best structure at all times, CLRMR yields regret that is polynomial
in the number of edges and nearly-logarithmic in time.
| [
"['Yi Gai' 'Bhaskar Krishnamachari' 'Mingyan Liu']",
"Yi Gai, Bhaskar Krishnamachari, Mingyan Liu"
] |
cs.LG cs.DS | null | 1109.1729 | null | null | http://arxiv.org/pdf/1109.1729v1 | 2011-09-08T14:34:57Z | 2011-09-08T14:34:57Z | Anomaly Sequences Detection from Logs Based on Compression | Mining information from logs is an old and still active research topic. In
recent years, with the rapid emergence of cloud computing, log mining has
become increasingly important to industry. This paper focuses on one major
task of log mining: anomaly detection, and proposes a novel method for mining
abnormal sequences from large logs. Unlike previous anomaly detection systems
based on statistics, probabilities, and the Markov assumption, our approach
measures the strangeness of a sequence using compression. It first trains a
grammar describing normal behaviors using grammar-based compression, then
measures the information quantities and densities of questionable sequences
according to the increase in grammar length. We have applied our approach to
mining real bugs from fine-grained execution logs. We have also tested its
ability on intrusion detection using publicly available system call traces.
The experiments show that our method successfully selects the strange
sequences related to bugs or attacks.
| [
"['Nan Wang' 'Jizhong Han' 'Jinyun Fang']",
"Nan Wang and Jizhong Han and Jinyun Fang"
] |
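A minimal sketch of the compression-based strangeness idea, with zlib standing in for the paper's grammar-based compressor: a sequence is scored by the extra compressed length it adds once normal behavior has already been seen. The toy logs below are made up.

```python
import zlib

# zlib is only a generic stand-in for grammar-based compression here;
# this illustrates the principle, not the paper's method.
def strangeness(normal: bytes, seq: bytes) -> float:
    # Extra compressed bytes needed to encode `seq` after the compressor
    # has seen the normal behavior; higher means more anomalous.
    base = len(zlib.compress(normal, 9))
    joint = len(zlib.compress(normal + seq, 9))
    return (joint - base) / max(len(seq), 1)

normal_log = b"open read read write close " * 200
typical = b"open read write close "
weird = b"fork exec ptrace socket connect "

print("typical:", strangeness(normal_log, typical))
print("weird:  ", strangeness(normal_log, weird))
```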
cs.LG | null | 1109.1844 | null | null | http://arxiv.org/pdf/1109.1844v2 | 2016-10-04T08:33:09Z | 2011-09-08T20:53:54Z | Weighted Clustering | One of the most prominent challenges in clustering is "the user's dilemma,"
which is the problem of selecting an appropriate clustering algorithm for a
specific task. A formal approach for addressing this problem relies on the
identification of succinct, user-friendly properties that formally capture when
certain clustering methods are preferred over others.
Until now these properties focused on advantages of classical Linkage-Based
algorithms, failing to identify when other clustering paradigms, such as
popular center-based methods, are preferable. We present surprisingly simple
new properties that delineate the differences between common clustering
paradigms, clearly and formally demonstrating the advantages of center-based
approaches for some applications. These properties address how sensitive
algorithms are to changes in element frequencies, which we capture in a
generalized setting where every element is associated with a real-valued
weight.
| [
"Margareta Ackerman, Shai Ben-David, Simina Br\\^anzei, and David Loker",
"['Margareta Ackerman' 'Shai Ben-David' 'Simina Brânzei' 'David Loker']"
] |
cs.LG stat.ML | null | 1109.1990 | null | null | http://arxiv.org/pdf/1109.1990v1 | 2011-09-09T13:01:41Z | 2011-09-09T13:01:41Z | Trace Lasso: a trace norm regularization for correlated designs | Using the $\ell_1$-norm to regularize the estimation of the parameter vector
of a linear model leads to an unstable estimator when covariates are highly
correlated. In this paper, we introduce a new penalty function which takes into
account the correlation of the design matrix to stabilize the estimation. This
norm, called the trace Lasso, uses the trace norm, which is a convex surrogate
of the rank, of the selected covariates as the criterion of model complexity.
We analyze the properties of our norm, describe an optimization algorithm based
on reweighted least-squares, and illustrate the behavior of this norm on
synthetic data, showing that it is better adapted to strong correlations than
competing methods such as the elastic net.
| [
"['Edouard Grave' 'Guillaume Obozinski' 'Francis Bach']",
"Edouard Grave (LIENS, INRIA Paris - Rocquencourt), Guillaume Obozinski\n (LIENS, INRIA Paris - Rocquencourt), Francis Bach (LIENS, INRIA Paris -\n Rocquencourt)"
] |
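The trace Lasso penalty described above can be written, per the paper's definition as we understand it, as $\Omega(w) = \|X \mathrm{Diag}(w)\|_*$ (the nuclear norm of the design matrix with columns rescaled by $w$). A short sketch with made-up data also checks the norm's known interpolation property: it matches the $\ell_1$ norm for orthonormal designs and the $\ell_2$ norm when all covariates are identical.

```python
import numpy as np

# Trace Lasso penalty: nuclear norm of X @ diag(w). Data is illustrative.
def trace_lasso(X, w):
    return np.linalg.svd(X @ np.diag(w), compute_uv=False).sum()

rng = np.random.default_rng(0)
w = rng.standard_normal(4)

X_orth = np.linalg.qr(rng.standard_normal((10, 4)))[0]  # orthonormal columns
x = rng.standard_normal((10, 1))
X_dup = np.tile(x / np.linalg.norm(x), (1, 4))          # identical columns

# Uncorrelated columns -> penalty equals the l1 norm of w;
# perfectly correlated columns -> it collapses to the l2 norm.
print(trace_lasso(X_orth, w), np.abs(w).sum())
print(trace_lasso(X_dup, w), np.linalg.norm(w))
```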
cs.NE cs.LG | null | 1109.2034 | null | null | http://arxiv.org/pdf/1109.2034v2 | 2013-08-22T12:21:24Z | 2011-09-09T14:59:59Z | Learning Sequence Neighbourhood Metrics | Recurrent neural networks (RNNs) in combination with a pooling operator and
the neighbourhood components analysis (NCA) objective function are able to
detect the characterizing dynamics of sequences and embed them into a
fixed-length vector space of arbitrary dimensionality. The resulting
features are meaningful and can be used for visualization or nearest
neighbour classification in linear time. This kind of metric learning for
sequential data enables the use of algorithms tailored to fixed-length
vector spaces such as $\mathbb{R}^n$.
| [
"['Justin Bayer' 'Christian Osendorfer' 'Patrick van der Smagt']",
"Justin Bayer and Christian Osendorfer and Patrick van der Smagt"
] |
cs.LG | 10.1613/jair.1509 | 1109.2047 | null | null | http://arxiv.org/abs/1109.2047v1 | 2011-09-09T15:56:58Z | 2011-09-09T15:56:58Z | Learning From Labeled And Unlabeled Data: An Empirical Study Across
Techniques And Domains | There has been increased interest in devising learning techniques that
combine unlabeled data with labeled data -- i.e., semi-supervised learning.
However, to the best of our knowledge, no study has been performed across
various techniques and different types and amounts of labeled and unlabeled
data. Moreover, most of the published work on semi-supervised learning
techniques assumes that the labeled and unlabeled data come from the same
distribution. It is possible for the labeling process to be associated with a
selection bias such that the distributions of data points in the labeled and
unlabeled sets are different. Not correcting for such bias can result in biased
function approximation with potentially poor performance. In this paper, we
present an empirical study of various semi-supervised learning techniques on a
variety of datasets. We attempt to answer various questions such as the effect
of independence or relevance amongst features, the effect of the size of the
labeled and unlabeled sets and the effect of noise. We also investigate the
impact of sample-selection bias on the semi-supervised learning techniques
under study and implement a bivariate probit technique particularly designed to
correct for such bias.
| [
"N. V. Chawla, Grigoris Karakoulas",
"['N. V. Chawla' 'Grigoris Karakoulas']"
] |
cs.LG cs.NI cs.SY math.OC math.PR | null | 1109.2088 | null | null | http://arxiv.org/pdf/1109.2088v1 | 2011-09-09T18:42:42Z | 2011-09-09T18:42:42Z | Online Learning Algorithms for Stochastic Water-Filling | Water-filling is the term for the classic solution to the problem of
allocating constrained power to a set of parallel channels to maximize the
total data-rate. It is used widely in practice, for example, for power
allocation to sub-carriers in multi-user OFDM systems such as WiMax. The
classic water-filling algorithm is deterministic and requires perfect knowledge
of the channel gain to noise ratios. In this paper we consider how to do power
allocation over stochastically time-varying (i.i.d.) channels with unknown gain
to noise ratio distributions. We adopt an online learning framework based on
stochastic multi-armed bandits. We consider two variations of the problem, one
in which the goal is to find a power allocation to maximize $\sum\limits_i
\mathbb{E}[\log(1 + SNR_i)]$, and another in which the goal is to find a power
allocation to maximize $\sum\limits_i \log(1 + \mathbb{E}[SNR_i])$. For the
first problem, we propose a \emph{cognitive water-filling} algorithm that we
call CWF1. We show that CWF1 obtains a regret (defined as the cumulative gap
over time between the sum-rate obtained by a distribution-aware genie and this
policy) that grows polynomially in the number of channels and logarithmically
in time, implying that it asymptotically achieves the optimal time-averaged
rate that can be obtained when the gain distributions are known. For the second
problem, we present an algorithm called CWF2, which is, to our knowledge, the
first algorithm in the literature on stochastic multi-armed bandits to exploit
non-linear dependencies between the arms. We prove that the number of times
CWF2 picks the incorrect power allocation is bounded by a function that is
polynomial in the number of channels and logarithmic in time, implying that its
frequency of incorrect allocation tends to zero.
| [
"Yi Gai, Bhaskar Krishnamachari",
"['Yi Gai' 'Bhaskar Krishnamachari']"
] |
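A quick numeric illustration (not the CWF1/CWF2 algorithms themselves) of why the two objectives above differ: since $\log$ is concave, Jensen's inequality gives $\mathbb{E}[\log(1 + SNR)] \le \log(1 + \mathbb{E}[SNR])$ for any random SNR, so the two criteria can rank power allocations differently. The SNR distribution below is made up.

```python
import numpy as np

# Monte Carlo check of Jensen's inequality for a random channel SNR.
rng = np.random.default_rng(0)
snr = rng.exponential(scale=2.0, size=100_000)  # assumed SNR distribution

obj1 = np.mean(np.log(1 + snr))   # E[log(1 + SNR)]  (first objective)
obj2 = np.log(1 + np.mean(snr))   # log(1 + E[SNR])  (second objective)
print(obj1, obj2, obj1 <= obj2)   # Jensen: prints True
```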
cs.LG | 10.1613/jair.1655 | 1109.2141 | null | null | http://arxiv.org/abs/1109.2141v1 | 2011-09-09T20:31:05Z | 2011-09-09T20:31:05Z | Efficiency versus Convergence of Boolean Kernels for On-Line Learning
Algorithms | The paper studies machine learning problems where each example is described
using a set of Boolean features and where hypotheses are represented by linear
threshold elements. One method of increasing the expressiveness of learned
hypotheses in this context is to expand the feature set to include conjunctions
of basic features. This can be done explicitly or where possible by using a
kernel function. Focusing on the well known Perceptron and Winnow algorithms,
the paper demonstrates a tradeoff between the computational efficiency with
which the algorithm can be run over the expanded feature space and the
generalization ability of the corresponding learning algorithm. We first
describe several kernel functions which capture either limited forms of
conjunctions or all conjunctions. We show that these kernels can be used to
efficiently run the Perceptron algorithm over a feature space of exponentially
many conjunctions; however we also show that using such kernels, the Perceptron
algorithm can provably make an exponential number of mistakes even when
learning simple functions. We then consider the question of whether kernel
functions can analogously be used to run the multiplicative-update Winnow
algorithm over an expanded feature space of exponentially many conjunctions.
Known upper bounds imply that the Winnow algorithm can learn Disjunctive Normal
Form (DNF) formulae with a polynomial mistake bound in this setting. However,
we prove that it is computationally hard to simulate Winnow's behavior for
learning DNF over such a feature set. This implies that the kernel functions
which correspond to running Winnow for this problem are not efficiently
computable, and that there is no general construction that can run Winnow with
kernels.
| [
"['R. Khardon' 'D. Roth' 'R. A. Servedio']",
"R. Khardon, D. Roth, R. A. Servedio"
] |
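A small sketch of running the kernel Perceptron over all conjunctions, as discussed above: for Boolean examples, the number of conjunctions (over positive and negated literals) satisfied by both $x$ and $y$ is $2^{\mathrm{same}(x,y)}$, where $\mathrm{same}$ counts agreeing coordinates, so this kernel evaluates the exponential feature space implicitly. The toy target and data are assumptions for illustration.

```python
import numpy as np

# All-conjunctions kernel: 2^(number of coordinates where x and y agree).
def K(x, y):
    return 2.0 ** np.sum(x == y)

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(60, 5))
y = np.where((X[:, 0] == 1) & (X[:, 1] == 0), 1, -1)  # toy target: x0 AND NOT x1

alpha = np.zeros(len(X))            # dual weights (mistake counts)
for _ in range(10):                 # kernel Perceptron epochs
    for i, (xi, yi) in enumerate(zip(X, y)):
        score = sum(alpha[j] * y[j] * K(X[j], xi) for j in range(len(X)))
        if yi * score <= 0:
            alpha[i] += 1

pred = np.array([np.sign(sum(alpha[j] * y[j] * K(X[j], x)
                             for j in range(len(X)))) for x in X])
print("training accuracy:", np.mean(pred == y))
```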
cs.LG | 10.1613/jair.1666 | 1109.2147 | null | null | http://arxiv.org/abs/1109.2147v1 | 2011-09-09T20:32:41Z | 2011-09-09T20:32:41Z | Risk-Sensitive Reinforcement Learning Applied to Control under
Constraints | In this paper, we consider Markov Decision Processes (MDPs) with error
states. Error states are states that are undesirable or dangerous to enter.
We define the risk with respect to a policy as the probability of
entering such a state when the policy is pursued. We consider the problem of
finding good policies whose risk is smaller than some user-specified threshold,
and formalize it as a constrained MDP with two criteria. The first criterion
corresponds to the value function originally given. We will show that the risk
can be formulated as a second criterion function based on a cumulative return,
whose definition is independent of the original value function. We present a
model free, heuristic reinforcement learning algorithm that aims at finding
good deterministic policies. It is based on weighting the original value
function and the risk. The weight parameter is adapted in order to find a
feasible solution for the constrained problem that has a good performance with
respect to the value function. The algorithm was successfully applied to the
control of a feed tank with stochastic inflows that lies upstream of a
distillation column. This control task was originally formulated as an optimal
control problem with chance constraints, and it was solved under certain
assumptions on the model to obtain an optimal solution. The power of our
learning algorithm is that it can be used even when some of these restrictive
assumptions are relaxed.
| [
"['P. Geibel' 'F. Wysotzki']",
"P. Geibel, F. Wysotzki"
] |
cs.DS cs.CR cs.LG | null | 1109.2229 | null | null | http://arxiv.org/pdf/1109.2229v1 | 2011-09-10T15:23:14Z | 2011-09-10T15:23:14Z | A Learning Theory Approach to Non-Interactive Database Privacy | In this paper we demonstrate that, ignoring computational constraints, it is
possible to privately release synthetic databases that are useful for large
classes of queries -- much larger in size than the database itself.
Specifically, we give a mechanism that privately releases synthetic data for a
class of queries over a discrete domain with error that grows as a function of
the size of the smallest net approximately representing the answers to that
class of queries. We show that this in particular implies a mechanism for
counting queries that gives error guarantees that grow only with the
VC-dimension of the class of queries, which itself grows only logarithmically
with the size of the query class.
We also show that it is not possible to privately release even simple classes
of queries (such as intervals and their generalizations) over continuous
domains. Despite this, we give a privacy-preserving polynomial time algorithm
that releases information useful for all halfspace queries, given a slight
relaxation of the utility guarantee. This algorithm does not release synthetic
data, but instead another data structure capable of representing an answer for
each query. We also give an efficient algorithm for releasing synthetic data
for the class of interval queries and axis-aligned rectangles of constant
dimension.
Finally, inspired by learning theory, we introduce a new notion of data
privacy, which we call distributional privacy, and show that it is strictly
stronger than the prevailing privacy notion, differential privacy.
| [
"Avrim Blum, Katrina Ligett, Aaron Roth",
"['Avrim Blum' 'Katrina Ligett' 'Aaron Roth']"
] |
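For background (this is the standard Laplace mechanism, not the paper's net-based mechanism): a single counting query has sensitivity 1, since changing one record moves the count by at most 1, so adding Laplace noise of scale $1/\epsilon$ yields an $\epsilon$-differentially private answer. The toy database and query below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Laplace mechanism for one counting query (sensitivity 1).
def private_count(db, predicate, epsilon):
    true_count = sum(predicate(row) for row in db)
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

db = rng.integers(0, 100, size=1000)   # toy database of ages
q = lambda age: age >= 65              # one counting query
print(private_count(db, q, epsilon=0.5))
```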
cs.LG | null | 1109.2296 | null | null | http://arxiv.org/pdf/1109.2296v1 | 2011-09-11T09:00:53Z | 2011-09-11T09:00:53Z | Bandits with an Edge | We consider a bandit problem over a graph where the rewards are not directly
observed. Instead, the decision maker can compare two nodes and receive
(stochastic) information pertaining to the difference in their value. The graph
structure describes the set of possible comparisons. Consequently, comparing
between two nodes that are relatively far requires estimating the difference
between every pair of nodes on the path between them. We analyze this problem
from the perspective of sample complexity: How many queries are needed to find
an approximately optimal node with probability more than $1-\delta$ in the PAC
setup? We show that the topology of the graph plays a crucial role in defining the
sample complexity: graphs with a low diameter have a much better sample
complexity.
| [
"['Dotan Di Castro' 'Claudio Gentile' 'Shie Mannor']",
"Dotan Di Castro, Claudio Gentile, Shie Mannor"
] |
cs.LG cs.CV | null | 1109.2388 | null | null | http://arxiv.org/pdf/1109.2388v1 | 2011-09-12T07:31:34Z | 2011-09-12T07:31:34Z | MIS-Boost: Multiple Instance Selection Boosting | In this paper, we present a new multiple instance learning (MIL) method,
called MIS-Boost, which learns discriminative instance prototypes by explicit
instance selection in a boosting framework. Unlike previous instance selection
based MIL methods, we do not restrict the prototypes to a discrete set of
training instances but allow them to take arbitrary values in the instance
feature space. We also do not restrict the total number of prototypes and the
number of selected-instances per bag; these quantities are completely
data-driven. We show that MIS-Boost outperforms state-of-the-art MIL methods on
a number of benchmark datasets. We also apply MIS-Boost to large-scale image
classification, where we show that the automatically selected prototypes map to
visually meaningful image regions.
| [
"Emre Akbas, Bernard Ghanem, Narendra Ahuja",
"['Emre Akbas' 'Bernard Ghanem' 'Narendra Ahuja']"
] |
cs.CV cs.LG | null | 1109.2389 | null | null | http://arxiv.org/pdf/1109.2389v1 | 2011-09-12T07:45:03Z | 2011-09-12T07:45:03Z | A Probabilistic Framework for Discriminative Dictionary Learning | In this paper, we address the problem of discriminative dictionary learning
(DDL), where sparse linear representation and classification are combined in a
probabilistic framework. As such, a single discriminative dictionary and linear
binary classifiers are learned jointly. By encoding sparse representation and
discriminative classification models in a MAP setting, we propose a general
optimization framework that allows for a data-driven tradeoff between faithful
representation and accurate classification. As opposed to previous work, our
learning methodology is capable of incorporating a diverse family of
classification cost functions (including those used in popular boosting
methods), while avoiding the need for involved optimization techniques. We show
that DDL can be solved by a sequence of updates that make use of well-known and
well-studied sparse coding and dictionary learning algorithms from the
literature. To validate our DDL framework, we apply it to digit classification
and face recognition and test it on standard benchmarks.
| [
"['Bernard Ghanem' 'Narendra Ahuja']",
"Bernard Ghanem and Narendra Ahuja"
] |
cs.LG stat.ML | null | 1109.2397 | null | null | http://arxiv.org/pdf/1109.2397v2 | 2012-04-20T13:20:27Z | 2011-09-12T08:23:02Z | Structured sparsity through convex optimization | Sparse estimation methods are aimed at using or obtaining parsimonious
representations of data or models. While naturally cast as a combinatorial
optimization problem, variable or feature selection admits a convex relaxation
through the regularization by the $\ell_1$-norm. In this paper, we consider
situations where we are not only interested in sparsity, but where some
structural prior knowledge is available as well. We show that the $\ell_1$-norm
can then be extended to structured norms built on either disjoint or
overlapping groups of variables, leading to a flexible framework that can deal
with various structures. We present applications to unsupervised learning, for
structured sparse principal component analysis and hierarchical dictionary
learning, and to supervised learning in the context of non-linear variable
selection.
| [
"Francis Bach (LIENS, INRIA Paris - Rocquencourt), Rodolphe Jenatton\n (LIENS, INRIA Paris - Rocquencourt), Julien Mairal, Guillaume Obozinski\n (LIENS, INRIA Paris - Rocquencourt)",
"['Francis Bach' 'Rodolphe Jenatton' 'Julien Mairal' 'Guillaume Obozinski']"
] |
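A minimal sketch of the disjoint-group case of the structured norms described above: the group Lasso penalty sums the $\ell_2$ norms of predefined groups of variables, so entire groups are zeroed out together. The weights and grouping are illustrative.

```python
import numpy as np

# Group Lasso penalty over disjoint groups: sum of per-group l2 norms.
def group_lasso(w, groups):
    return sum(np.linalg.norm(w[g]) for g in groups)

w = np.array([0.0, 0.0, 0.0, 1.5, -2.0, 0.3])
groups = [np.array([0, 1, 2]), np.array([3, 4]), np.array([5])]
print(group_lasso(w, groups))  # the all-zero group contributes nothing
```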