categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (sequence)
---|---|---|---|---|---|---|---|---|---|---|
math.ST cs.LG math.OC stat.TH | null | 1111.3866 | null | null | http://arxiv.org/pdf/1111.3866v1 | 2011-11-16T16:46:48Z | 2011-11-16T16:46:48Z | Sequential search based on kriging: convergence analysis of some
algorithms | Let $\mathcal{F}$ be a set of real-valued functions on a set $\mathcal{X}$ and let $S:\mathcal{F} \to
$\mathcal{G}$ be an arbitrary mapping. We consider the problem of making inference about
$S(f)$, with $f\in\mathcal{F}$ unknown, from a finite set of pointwise evaluations of
$f$. We are mainly interested in the problems of approximation and
optimization. In this article, we make a brief review of results concerning
average error bounds of Bayesian search methods that use a random process prior
about $f$.
| [
"['Emmanuel Vazquez' 'Julien Bect']",
"Emmanuel Vazquez and Julien Bect"
] |
stat.CO cs.LG | null | 1111.4246 | null | null | http://arxiv.org/pdf/1111.4246v1 | 2011-11-18T00:39:32Z | 2011-11-18T00:39:32Z | The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian
Monte Carlo | Hamiltonian Monte Carlo (HMC) is a Markov chain Monte Carlo (MCMC) algorithm
that avoids the random walk behavior and sensitivity to correlated parameters
that plague many MCMC methods by taking a series of steps informed by
first-order gradient information. These features allow it to converge to
high-dimensional target distributions much more quickly than simpler methods
such as random walk Metropolis or Gibbs sampling. However, HMC's performance is
highly sensitive to two user-specified parameters: a step size $\epsilon$ and a
desired number of steps L. In particular, if L is too small then the algorithm
exhibits undesirable random walk behavior, while if L is too large the
algorithm wastes computation. We introduce the No-U-Turn Sampler (NUTS), an
extension to HMC that eliminates the need to set a number of steps L. NUTS uses
a recursive algorithm to build a set of likely candidate points that spans a
wide swath of the target distribution, stopping automatically when it starts to
double back and retrace its steps. Empirically, NUTS performs at least as
efficiently as, and sometimes more efficiently than, a well-tuned standard HMC
method, without requiring user intervention or costly tuning runs. We also
derive a method for adapting the step size parameter $\epsilon$ on the fly
based on primal-dual averaging. NUTS can thus be used with no hand-tuning at
all. NUTS is also suitable for applications such as BUGS-style automatic
inference engines that require efficient "turnkey" sampling algorithms.
| [
"['Matthew D. Hoffman' 'Andrew Gelman']",
"Matthew D. Hoffman and Andrew Gelman"
] |
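A minimal sketch of the ingredients named above, assuming a user-supplied `grad_log_p` for the gradient of the log target: one leapfrog step and the U-turn check on the trajectory endpoints (the full NUTS recursion and slice-sampling details are omitted).

```python
import numpy as np

def leapfrog(theta, r, grad_log_p, eps):
    """One leapfrog step of HMC: half momentum step, full position step,
    half momentum step."""
    r = r + 0.5 * eps * grad_log_p(theta)
    theta = theta + eps * r
    r = r + 0.5 * eps * grad_log_p(theta)
    return theta, r

def u_turn(theta_minus, theta_plus, r_minus, r_plus):
    """NUTS-style stopping criterion: stop doubling the trajectory once an
    endpoint's momentum points back across the span of the trajectory."""
    span = theta_plus - theta_minus
    return (span @ r_minus) < 0 or (span @ r_plus) < 0
```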
cs.LG | null | 1111.4460 | null | null | http://arxiv.org/pdf/1111.4460v1 | 2011-11-18T19:23:47Z | 2011-11-18T19:23:47Z | Parametrized Stochastic Multi-armed Bandits with Binary Rewards | In this paper, we consider the problem of multi-armed bandits with a large,
possibly infinite number of correlated arms. We assume that the arms have
Bernoulli distributed rewards, independent across time, where the probabilities
of success are parametrized by known attribute vectors for each arm, as well as
an unknown preference vector, each of dimension $n$. For this model, we seek an
algorithm with a total regret that is sub-linear in time and independent of the
number of arms. We present such an algorithm, which we call the Two-Phase
Algorithm, and analyze its performance. We show upper bounds on the total
regret which apply uniformly in time, for both the finite and infinite arm
cases. The asymptotics of the finite arm bound show that for any $f \in
\omega(\log(T))$, the total regret can be made to be $O(n \cdot f(T))$. In the
infinite arm case, the total regret is $O(\sqrt{n^3 T})$.
| [
"['Chong Jiang' 'R. Srikant']",
"Chong Jiang and R. Srikant"
] |
cs.LG | null | 1111.4470 | null | null | http://arxiv.org/pdf/1111.4470v3 | 2017-04-24T07:56:53Z | 2011-11-18T20:32:33Z | Efficient Regression in Metric Spaces via Approximate Lipschitz
Extension | We present a framework for performing efficient regression in general metric
spaces. Roughly speaking, our regressor predicts the value at a new point by
computing a Lipschitz extension --- the smoothest function consistent with the
observed data --- after performing structural risk minimization to avoid
overfitting. We obtain finite-sample risk bounds with minimal structural and
noise assumptions, and a natural speed-precision tradeoff. The offline
(learning) and online (prediction) stages can be solved by convex programming,
but this naive approach has runtime complexity $O(n^3)$, which is prohibitive
for large datasets. We design instead a regression algorithm whose speed and
generalization performance depend on the intrinsic dimension of the data, to
which the algorithm adapts. While our main innovation is algorithmic, the
statistical results may also be of independent interest.
| [
"['Lee-Ad Gottlieb' 'Aryeh Kontorovich' 'Robert Krauthgamer']",
"Lee-Ad Gottlieb and Aryeh Kontorovich and Robert Krauthgamer"
] |
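For concreteness, the pointwise Lipschitz extension such a regressor computes can be written as the textbook McShane formula (a generic construction, not the authors' efficient algorithm; `dist` and the Lipschitz constant `L` are assumed inputs):

```python
def lipschitz_predict(x, X_train, y_train, L, dist):
    """McShane extension: the largest L-Lipschitz function consistent with
    the data (assuming the labels themselves satisfy the Lipschitz
    condition), evaluated at x."""
    return min(y + L * dist(x, xi) for xi, y in zip(X_train, y_train))
```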
cs.LG | 10.1007/978-3-642-33492-4_4 | 1111.4541 | null | null | http://arxiv.org/abs/1111.4541v2 | 2012-02-29T04:19:56Z | 2011-11-19T08:39:34Z | Large Scale Spectral Clustering Using Approximate Commute Time Embedding | Spectral clustering is a novel clustering method which can detect complex
shapes of data clusters. However, it requires the eigendecomposition of the
graph Laplacian matrix, which costs $O(n^3)$ and thus is not
suitable for large-scale systems. Recently, many methods have been proposed to
accelerate the computational time of spectral clustering. These approximate
methods usually involve sampling techniques by which much of the information in the
original data may be lost. In this work, we propose a fast and accurate
spectral clustering approach using an approximate commute time embedding, which
is similar to the spectral embedding. The method does not require using any
sampling technique and computing any eigenvector at all. Instead it uses random
projection and a linear time solver to find the approximate embedding. The
experiments in several synthetic and real datasets show that the proposed
approach has better clustering quality and is faster than the state-of-the-art
approximate spectral clustering methods.
| [
"['Nguyen Lu Dang Khoa' 'Sanjay Chawla']",
"Nguyen Lu Dang Khoa and Sanjay Chawla"
] |
math.OC cs.LG stat.CO | null | 1111.4802 | null | null | http://arxiv.org/pdf/1111.4802v1 | 2011-11-21T09:47:51Z | 2011-11-21T09:47:51Z | Bayesian optimization using sequential Monte Carlo | We consider the problem of optimizing a real-valued continuous function $f$
using a Bayesian approach, where the evaluations of $f$ are chosen sequentially
by combining prior information about $f$, which is described by a random
process model, and past evaluation results. The main difficulty with this
approach is to be able to compute the posterior distributions of quantities of
interest which are used to choose evaluation points. In this article, we decide
to use a Sequential Monte Carlo (SMC) approach.
| [
"['Romain Benassi' 'Julien Bect' 'Emmanuel Vazquez']",
"Romain Benassi and Julien Bect and Emmanuel Vazquez"
] |
math.OC cs.LG stat.ML | 10.1109/TAC.2013.2254619 | 1111.5280 | null | null | http://arxiv.org/abs/1111.5280v4 | 2013-11-19T11:56:10Z | 2011-11-22T18:41:12Z | Stochastic gradient descent on Riemannian manifolds | Stochastic gradient descent is a simple approach to find the local minima of
a cost function whose evaluations are corrupted by noise. In this paper, we
develop a procedure extending stochastic gradient descent algorithms to the
case where the function is defined on a Riemannian manifold. We prove that, as
in the Euclidean case, the gradient descent algorithm converges to a critical
point of the cost function. The algorithm has numerous potential applications,
and is illustrated here by four examples. In particular, a novel gossip
algorithm on the set of covariance matrices is derived and tested numerically.
| [
"Silvere Bonnabel",
"['Silvere Bonnabel']"
] |
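A minimal sketch of one such update, taking the unit sphere as an illustrative manifold and renormalization as the retraction (the paper treats general Riemannian manifolds and exponential maps):

```python
import numpy as np

def riemannian_sgd_step_sphere(x, euclid_grad, lr):
    """One Riemannian SGD step on the unit sphere: project the (noisy)
    Euclidean gradient onto the tangent space at x, step, then retract
    back onto the manifold by renormalizing."""
    riem_grad = euclid_grad - (euclid_grad @ x) * x   # tangent projection
    x_new = x - lr * riem_grad                        # step in tangent space
    return x_new / np.linalg.norm(x_new)              # retraction
```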
stat.ML cs.LG | null | 1111.5479 | null | null | http://arxiv.org/pdf/1111.5479v2 | 2012-08-07T23:11:40Z | 2011-11-23T12:47:50Z | The Graphical Lasso: New Insights and Alternatives | The graphical lasso \citep{FHT2007a} is an algorithm for learning the
structure in an undirected Gaussian graphical model, using $\ell_1$
regularization to control the number of zeros in the precision matrix
$\Theta=\Sigma^{-1}$ \citep{BGA2008,yuan_lin_07}. The \texttt{R}
package \texttt{glasso} \citep{FHT2007a} is popular, fast, and allows one to efficiently
build a path of models for different values of the tuning parameter.
Convergence of \texttt{glasso} can be tricky; the converged precision matrix might not be
the inverse of the estimated covariance, and occasionally it fails to converge
with warm starts. In this paper we explain this behavior, and propose new
algorithms that appear to outperform \texttt{glasso}.
By studying the "normal equations" we see that \texttt{glasso} is solving the
\emph{dual} of the graphical lasso penalized likelihood, by block coordinate ascent;
a result which can also be found in \cite{BGA2008}.
In this dual, the target of estimation is $\Sigma$, the covariance matrix,
rather than the precision matrix $\Theta$. We propose similar primal
algorithms \texttt{P-glasso} and \texttt{DP-glasso}, that also operate by
block-coordinate descent, where $\Theta$ is the optimization target. We study
all of these algorithms, and in particular different approaches to solving
their coordinate sub-problems. We conclude that \texttt{DP-glasso} is superior
from several points of view.
| [
"['Rahul Mazumder' 'Trevor Hastie']",
"Rahul Mazumder, Trevor Hastie"
] |
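For a quick primal graphical-lasso fit in practice, scikit-learn ships an estimator; a short usage sketch with placeholder data and an arbitrary tuning parameter `alpha=0.1`:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))   # placeholder data, 200 samples x 10 variables
model = GraphicalLasso(alpha=0.1).fit(X)
precision = model.precision_         # sparse estimate of Theta = Sigma^{-1}
covariance = model.covariance_       # its inverse, the estimated Sigma
```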
stat.ML cs.IT cs.LG math.IT | null | 1111.5648 | null | null | http://arxiv.org/pdf/1111.5648v1 | 2011-11-23T23:25:57Z | 2011-11-23T23:25:57Z | Falsification and future performance | We information-theoretically reformulate two measures of capacity from
statistical learning theory: empirical VC-entropy and empirical Rademacher
complexity. We show these capacity measures count the number of hypotheses
about a dataset that a learning algorithm falsifies when it finds the
classifier in its repertoire minimizing empirical risk. It then follows
that the future performance of predictors on unseen data is controlled in part
by how many hypotheses the learner falsifies. As a corollary we show that
empirical VC-entropy quantifies the message length of the true hypothesis in
the optimal code of a particular probability distribution, the so-called actual
repertoire.
| [
"['David Balduzzi']",
"David Balduzzi"
] |
cs.LG | null | 1111.6082 | null | null | http://arxiv.org/pdf/1111.6082v3 | 2012-09-27T22:02:49Z | 2011-11-25T18:51:29Z | Trading Regret for Efficiency: Online Convex Optimization with Long Term
Constraints | In this paper we propose a framework for solving constrained online convex
optimization problems. Our motivation stems from the observation that most
algorithms proposed for online convex optimization require a projection onto
the convex set $\mathcal{K}$ from which the decisions are made. While for
simple shapes (e.g. Euclidean ball) the projection is straightforward, for
arbitrary complex sets this is the main computational challenge and may be
inefficient in practice. In this paper, we consider an alternative online
convex optimization problem. Instead of requiring that decisions belong to
$\mathcal{K}$ for all rounds, we only require that the constraints which define
the set $\mathcal{K}$ be satisfied in the long run. We show that our framework
can be utilized to solve a relaxed version of online learning with side
constraints addressed in \cite{DBLP:conf/colt/MannorT06} and
\cite{DBLP:conf/aaai/KvetonYTM08}. By turning the problem into an online
convex-concave optimization problem, we propose an efficient algorithm which
achieves an $\tilde{\mathcal{O}}(\sqrt{T})$ regret bound and an
$\tilde{\mathcal{O}}(T^{3/4})$ bound for the violation of constraints. Then we
modify the algorithm in order to guarantee that the constraints are satisfied
in the long run. This gain is achieved at the price of an
$\tilde{\mathcal{O}}(T^{3/4})$ regret bound. Our second algorithm is based on
the Mirror Prox method \citep{nemirovski-2005-prox} to solve variational
inequalities which achieves an $\tilde{\mathcal{O}}(T^{2/3})$ bound for
both regret and the violation of constraints when the domain $\mathcal{K}$ can be
described by a finite number of linear constraints. Finally, we extend the
result to the setting where we only have partial access to the convex set
$\mathcal{K}$ and propose a multipoint bandit feedback algorithm with the same
bounds in expectation as our first algorithm.
| [
"Mehrdad Mahdavi, Rong Jin, Tianbao Yang",
"['Mehrdad Mahdavi' 'Rong Jin' 'Tianbao Yang']"
] |
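The convex-concave idea can be caricatured in a few lines: trade the per-round projection onto $\mathcal{K} = \{x : g(x) \le 0\}$ for a Lagrange multiplier driven up by accumulated violation. A simplified sketch under assumed callables `grad_f`, `g`, `grad_g` with naive step sizes, not the paper's exact algorithm:

```python
import numpy as np

def oco_long_term_constraints(grad_f, g, grad_g, x0, T, eta):
    """Primal-dual online gradient sketch: instead of projecting onto the
    constraint set each round, maintain a multiplier lam and penalize
    violation, so constraints hold in the long run."""
    x, lam = x0.copy(), 0.0
    for t in range(T):
        gx = g(x)
        x = x - eta * (grad_f(x, t) + lam * grad_g(x))  # cheap primal step, no projection
        lam = max(0.0, lam + eta * gx)                  # dual ascent on violation
    return x
```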
cs.LG stat.ML | 10.1007/s10994-013-5345-8 | 1111.6201 | null | null | http://arxiv.org/abs/1111.6201v4 | 2013-02-24T05:01:59Z | 2011-11-26T23:36:40Z | Learning a Factor Model via Regularized PCA | We consider the problem of learning a linear factor model. We propose a
regularized form of principal component analysis (PCA) and demonstrate through
experiments with synthetic and real data the superiority of the resulting estimates
to those produced by pre-existing factor analysis approaches. We also establish
theoretical results that explain how our algorithm corrects the biases induced
by conventional approaches. An important feature of our algorithm is that its
computational requirements are similar to those of PCA, which enjoys wide use
in large part due to its efficiency.
| [
"Yi-Hao Kao and Benjamin Van Roy",
"['Yi-Hao Kao' 'Benjamin Van Roy']"
] |
cs.CE cs.LG math.OC | null | 1111.6214 | null | null | http://arxiv.org/pdf/1111.6214v1 | 2011-11-27T02:02:53Z | 2011-11-27T02:02:53Z | Robust Max-Product Belief Propagation | We study the problem of optimizing a graph-structured objective function
under \emph{adversarial} uncertainty. This problem can be modeled as a
two-person zero-sum game between an Engineer and Nature. The Engineer controls
a subset of the variables (nodes in the graph), and tries to assign their
values to maximize an objective function. Nature controls the complementary
subset of variables and tries to minimize the same objective. This setting
encompasses estimation and optimization problems under model uncertainty, and
strategic problems with a graph structure. Von Neumann's minimax theorem
guarantees the existence of a (minimax) pair of randomized strategies that
provide optimal robustness for each player against its adversary.
We prove several structural properties of this strategy pair in the case of
graph-structured payoff functions. In particular, the randomized minimax
strategies (distributions over variable assignments) can be chosen in such a
way as to satisfy the Markov property with respect to the graph. This
significantly reduces the problem dimensionality. Finally we introduce a
message passing algorithm to solve this minimax problem. The algorithm
generalizes max-product belief propagation to this new domain.
| [
"Morteza Ibrahimi, Adel Javanmard, Yashodhan Kanoria and Andrea\n Montanari",
"['Morteza Ibrahimi' 'Adel Javanmard' 'Yashodhan Kanoria'\n 'Andrea Montanari']"
] |
cs.LG | null | 1111.6337 | null | null | http://arxiv.org/pdf/1111.6337v4 | 2012-06-14T01:40:01Z | 2011-11-28T03:50:18Z | Regret Bound by Variation for Online Convex Optimization | In citep{Hazan-2008-extract}, the authors showed that the regret of online
linear optimization can be bounded by the total variation of the cost vectors.
In this paper, we extend this result to general online convex optimization. We
first analyze the limitations of the algorithm in \citep{Hazan-2008-extract}
when applied to online convex optimization. We then present two algorithms
for online convex optimization whose regrets are bounded by the variation of
cost functions. We finally consider the bandit setting, and present a
randomized algorithm for online bandit convex optimization with a
variation-based regret bound. We show that the regret bound for online bandit
convex optimization is optimal when the variation of cost functions is
independent of the number of trials.
| [
"['Tianbao Yang' 'Mehrdad Mahdavi' 'Rong Jin' 'Shenghuo Zhu']",
"Tianbao Yang, Mehrdad Mahdavi, Rong Jin, Shenghuo Zhu"
] |
cs.LG math.OC | null | 1111.6453 | null | null | http://arxiv.org/pdf/1111.6453v2 | 2013-10-08T07:22:08Z | 2011-11-28T14:45:01Z | Learning with Submodular Functions: A Convex Optimization Perspective | Submodular functions are relevant to machine learning for at least two
reasons: (1) some problems may be expressed directly as the optimization of
submodular functions and (2) the Lovász extension of submodular functions
provides a useful set of regularization functions for supervised and
unsupervised learning. In this monograph, we present the theory of submodular
functions from a convex analysis perspective, presenting tight links between
certain polyhedra, combinatorial optimization and convex optimization problems.
In particular, we show how submodular function minimization is equivalent to
solving a wide variety of convex optimization problems. This allows the
derivation of new efficient algorithms for approximate and exact submodular
function minimization with theoretical guarantees and good practical
performance. By listing many examples of submodular functions, we review
various applications to machine learning, such as clustering, experimental
design, sensor placement, graphical model structure learning or subset
selection, as well as a family of structured sparsity-inducing norms that can
be derived from submodular functions and put to use.
| [
"Francis Bach (LIENS, INRIA Paris - Rocquencourt)",
"['Francis Bach']"
] |
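Point (2) above is short to make concrete: the Lovász extension of any set function can be evaluated with a single sort. A sketch assuming `F` maps frozensets of indices to reals with $F(\emptyset) = 0$:

```python
import numpy as np

def lovasz_extension(F, w):
    """Evaluate the Lovász extension of a set function F at w in R^n:
    sort coordinates in decreasing order and sum F-increments over the
    resulting nested level sets."""
    order = np.argsort(-w)
    value, prev = 0.0, 0.0
    S = set()
    for i in order:
        S.add(int(i))
        fS = F(frozenset(S))
        value += w[i] * (fS - prev)   # w_i times the marginal gain of adding i
        prev = fS
    return value
```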
stat.ML cs.LG | 10.1109/TFUZZ.2012.2194151 | 1111.6473 | null | null | http://arxiv.org/abs/1111.6473v1 | 2011-11-28T15:28:53Z | 2011-11-28T15:28:53Z | A kernel-based framework for learning graded relations from data | Driven by a large number of potential applications in areas like
bioinformatics, information retrieval and social network analysis, the problem
setting of inferring relations between pairs of data objects has recently been
investigated quite intensively in the machine learning community. To this end,
current approaches typically consider datasets containing crisp relations, so
that standard classification methods can be adopted. However, relations between
objects like similarities and preferences are often expressed in a graded
manner in real-world applications. A general kernel-based framework for
learning relations from data is introduced here. It extends existing approaches
because both crisp and graded relations are considered, and it unifies existing
approaches because different types of graded relations can be modeled,
including symmetric and reciprocal relations. This framework establishes
important links between recent developments in fuzzy set theory and machine
learning. Its usefulness is demonstrated through various experiments on
synthetic and real-world data.
| [
"Willem Waegeman, Tapio Pahikkala, Antti Airola, Tapio Salakoski,\n Michiel Stock, Bernard De Baets",
"['Willem Waegeman' 'Tapio Pahikkala' 'Antti Airola' 'Tapio Salakoski'\n 'Michiel Stock' 'Bernard De Baets']"
] |
cs.DS cs.CR cs.LG | null | 1111.6842 | null | null | http://arxiv.org/pdf/1111.6842v1 | 2011-11-29T15:23:08Z | 2011-11-29T15:23:08Z | Fast Private Data Release Algorithms for Sparse Queries | We revisit the problem of accurately answering large classes of statistical
queries while preserving differential privacy. Previous approaches to this
problem have either been very general but have not had run-time polynomial in
the size of the database, have applied only to very limited classes of queries,
or have relaxed the notion of worst-case error guarantees. In this paper we
consider the large class of sparse queries, which take non-zero values on only
polynomially many universe elements. We give efficient query release algorithms
for this class, in both the interactive and the non-interactive setting. Our
algorithms also achieve better accuracy bounds than previous general techniques
do when applied to sparse queries: our bounds are independent of the universe
size. In fact, even the runtime of our interactive mechanism is independent of
the universe size, and so can be implemented in the "infinite universe" model
in which no finite universe need be specified by the data curator.
| [
"Avrim Blum and Aaron Roth",
"['Avrim Blum' 'Aaron Roth']"
] |
cs.IT cs.LG math.IT physics.data-an stat.AP | null | 1111.6857 | null | null | http://arxiv.org/pdf/1111.6857v5 | 2012-08-29T15:23:09Z | 2011-11-28T18:03:04Z | Multivariate information measures: an experimentalist's perspective | Information theory is widely accepted as a powerful tool for analyzing
complex systems and it has been applied in many disciplines. Recently, some
central components of information theory - multivariate information measures -
have found expanded use in the study of several phenomena. These information
measures differ in subtle yet significant ways. Here, we will review the
information theory behind each measure, as well as examine the differences
between these measures by applying them to several simple model systems. In
addition to these systems, we will illustrate the usefulness of the information
measures by analyzing neural spiking data from a dissociated culture through
early stages of its development. We hope that this work will aid other
researchers as they seek the best multivariate information measure for their
specific research goals and system. Finally, we have made software available
online which allows the user to calculate all of the information measures
discussed within this paper.
| [
"Nicholas Timme, Wesley Alford, Benjamin Flecker, and John M. Beggs",
"['Nicholas Timme' 'Wesley Alford' 'Benjamin Flecker' 'John M. Beggs']"
] |
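As one example of the measures such a review covers, total correlation compares the sum of marginal entropies to the joint entropy; a small sketch for a discrete joint distribution stored as a probability array:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a probability array p."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def total_correlation(p_joint):
    """One multivariate information measure: total correlation, the sum
    of marginal entropies minus the joint entropy."""
    marginals = [p_joint.sum(axis=tuple(j for j in range(p_joint.ndim) if j != i))
                 for i in range(p_joint.ndim)]
    return sum(entropy(m) for m in marginals) - entropy(p_joint)
```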
stat.ML cs.LG | null | 1111.6925 | null | null | http://arxiv.org/pdf/1111.6925v1 | 2011-11-29T18:33:01Z | 2011-11-29T18:33:01Z | Structure Learning of Probabilistic Graphical Models: A Comprehensive
Survey | Probabilistic graphical models combine graph theory and probability
theory to provide a framework for multivariate statistical modeling. They give a
unified description of uncertainty using probability and complexity using the
graphical model. In particular, graphical models offer several useful
properties:
- Graphical models provide a simple and intuitive interpretation of the
structures of probabilistic models. Moreover, they can be used to
design and motivate new models.
- Graphical models provide additional insights into the properties of the
model, including the conditional independence properties.
- Complex computations which are required to perform inference and learning
in sophisticated models can be expressed in terms of graphical manipulations,
in which the underlying mathematical expressions are carried along implicitly.
Graphical models have been applied to a large number of fields, including
bioinformatics, social science, control theory, image processing, marketing
analysis, among others. However, structure learning for graphical models
remains an open challenge, since one must cope with a combinatorial search over
the space of all possible structures.
In this paper, we present a comprehensive survey of the existing structure
learning algorithms.
| [
"['Yang Zhou']",
"Yang Zhou"
] |
cs.DS cs.DB cs.LG | null | 1111.6937 | null | null | http://arxiv.org/pdf/1111.6937v6 | 2013-02-22T14:32:31Z | 2011-11-29T19:11:50Z | Efficient Discovery of Association Rules and Frequent Itemsets through
Sampling with Tight Performance Guarantees | The tasks of extracting (top-$K$) Frequent Itemsets (FI's) and Association
Rules (AR's) are fundamental primitives in data mining and database
applications. Exact algorithms for these problems exist and are widely used,
but their running time is hindered by the need of scanning the entire dataset,
possibly multiple times. High quality approximations of FI's and AR's are
sufficient for most practical uses, and a number of recent works explored the
application of sampling for fast discovery of approximate solutions to the
problems. However, these works do not provide satisfactory performance
guarantees on the quality of the approximation, due to the difficulty of
bounding the probability of under- or over-sampling any one of an unknown
number of frequent itemsets. In this work we circumvent this issue by applying
the statistical concept of \emph{Vapnik-Chervonenkis (VC) dimension} to develop
a novel technique for providing tight bounds on the sample size that guarantees
approximation within user-specified parameters. Our technique applies both to
absolute and to relative approximations of (top-$K$) FI's and AR's. The
resulting sample size is linearly dependent on the VC-dimension of a range
space associated with the dataset to be mined. The main theoretical
contribution of this work is a proof that the VC-dimension of this range space
is upper bounded by an easy-to-compute characteristic quantity of the dataset
which we call \emph{d-index}, and is the maximum integer $d$ such that the
dataset contains at least $d$ transactions of length at least $d$, none of
which is a superset of or equal to another. We show that this bound is
strict for a large class of datasets.
| [
"['Matteo Riondato' 'Eli Upfal']",
"Matteo Riondato and Eli Upfal"
] |
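The shape of such a VC-based guarantee is the standard eps-approximation sample bound; a sketch with an illustrative constant `c` (the paper's exact constants and parametrization differ):

```python
import math

def sample_size(d_index, eps, delta, c=0.5):
    """Sample-size sketch for a range space of VC-dimension d_index: enough
    transactions so all itemset frequencies are within eps with probability
    at least 1 - delta. The constant c and exact form are illustrative
    assumptions, not the paper's theorem."""
    return math.ceil((c / eps**2) * (d_index + math.log(1.0 / delta)))
```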
cs.ET cs.LG cs.NE nlin.CD physics.optics | 10.1038/srep00287 | 1111.7219 | null | null | http://arxiv.org/abs/1111.7219v1 | 2011-11-30T15:50:58Z | 2011-11-30T15:50:58Z | Optoelectronic Reservoir Computing | Reservoir computing is a recently introduced, highly efficient bio-inspired
approach for processing time-dependent data. The basic scheme of reservoir
computing consists of a nonlinear recurrent dynamical system coupled to a
single input layer and a single output layer. Within these constraints many
implementations are possible. Here we report an optoelectronic implementation
of reservoir computing based on a recently proposed architecture consisting of
a single nonlinear node and a delay line. Our implementation is sufficiently
fast for real-time information processing. We illustrate its performance on
tasks of practical importance such as nonlinear channel equalization and speech
recognition, and obtain results comparable to state-of-the-art digital
implementations.
| [
"['Yvan Paquot' 'François Duport' 'Anteo Smerieri' 'Joni Dambre'\n 'Benjamin Schrauwen' 'Marc Haelterman' 'Serge Massar']",
"Yvan Paquot, Fran\\c{c}ois Duport, Anteo Smerieri, Joni Dambre,\n Benjamin Schrauwen, Marc Haelterman and Serge Massar"
] |
cs.DB cs.LG | null | 1111.7295 | null | null | http://arxiv.org/pdf/1111.7295v2 | 2011-12-02T16:01:50Z | 2011-11-30T20:17:29Z | A Learning Framework for Self-Tuning Histograms | In this paper, we consider the problem of estimating self-tuning histograms
using query workloads. To this end, we propose a general learning theoretic
formulation. Specifically, we use query feedback from a workload as training
data to estimate a histogram with a small memory footprint that minimizes the
expected error on future queries. Our formulation provides a framework in which
different approaches can be studied and developed. We first study the simple
class of equi-width histograms and present a learning algorithm, EquiHist, that
is competitive in many settings. We also provide formal guarantees for
equi-width histograms that highlight scenarios in which equi-width histograms
can be expected to succeed or fail. We then go beyond equi-width histograms and
present a novel learning algorithm, SpHist, for estimating general histograms.
Here we use Haar wavelets to reduce the problem of learning histograms to that
of learning a sparse vector. Both algorithms have multiple advantages over
existing methods: 1) simple and scalable extensions to multi-dimensional data,
2) scalability with number of histogram buckets and size of query feedback, 3)
natural extensions to incorporate new feedback and handle database updates. We
demonstrate these advantages over the current state-of-the-art, ISOMER, through
detailed experiments on real and synthetic data. In particular, we show that
SpHist obtains up to 50% less error than ISOMER on real-world multi-dimensional
datasets.
| [
"Raajay Viswanathan, Prateek Jain, Srivatsan Laxman, Arvind Arasu",
"['Raajay Viswanathan' 'Prateek Jain' 'Srivatsan Laxman' 'Arvind Arasu']"
] |
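The flavor of learning an equi-width histogram from range-query feedback can be sketched as a gradient-style correction of bucket heights toward each observed selectivity; this is a hypothetical simplified update, not the paper's EquiHist algorithm:

```python
import numpy as np

def equiwidth_update(heights, edges, query, observed, lr=0.1):
    """Nudge the buckets overlapping a range query toward the observed
    count. heights: float array of bucket counts; edges: bucket boundaries
    (len(heights) + 1); query: (lo, hi); observed: true result size."""
    lo, hi = query
    overlap = np.clip(np.minimum(edges[1:], hi) - np.maximum(edges[:-1], lo), 0, None)
    frac = overlap / (edges[1:] - edges[:-1])     # per-bucket overlap fraction
    estimate = float(frac @ heights)              # histogram's answer to the query
    heights += lr * (observed - estimate) * frac  # gradient-style correction
    return heights
```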
cs.LG cs.DS | null | 1112.0826 | null | null | http://arxiv.org/pdf/1112.0826v5 | 2016-12-11T21:41:33Z | 2011-12-05T03:42:07Z | Clustering under Perturbation Resilience | Motivated by the fact that distances between data points in many real-world
clustering instances are often based on heuristic measures, Bilu and
Linial~\cite{BL} proposed analyzing objective based clustering problems under
the assumption that the optimum clustering to the objective is preserved under
small multiplicative perturbations to distances between points. The hope is
that by exploiting the structure in such instances, one can overcome worst case
hardness results.
In this paper, we provide several results within this framework. For
center-based objectives, we present an algorithm that can optimally cluster
instances resilient to perturbations of factor $(1 + \sqrt{2})$, solving an
open problem of Awasthi et al.~\cite{ABS10}. For $k$-median, a center-based
objective of special interest, we additionally give algorithms for a more
relaxed assumption in which we allow the optimal solution to change in a small
$\epsilon$ fraction of the points after perturbation. We give the first bounds
known for $k$-median under this more realistic and more general assumption. We
also provide positive results for min-sum clustering which is typically a
harder objective than center-based objectives from an approximability standpoint.
Our algorithms are based on new linkage criteria that may be of independent
interest.
Additionally, we give sublinear-time algorithms that can
return an implicit clustering given access to only a small random sample.
| [
"['Maria Florina Balcan' 'Yingyu Liang']",
"Maria Florina Balcan, Yingyu Liang"
] |
cs.LG | null | 1112.1125 | null | null | http://arxiv.org/pdf/1112.1125v2 | 2011-12-09T22:58:58Z | 2011-12-06T00:13:44Z | Learning in embodied action-perception loops through exploration | Although exploratory behaviors are ubiquitous in the animal kingdom, their
computational underpinnings are still largely unknown. Behavioral Psychology
has identified learning as a primary drive underlying many exploratory
behaviors. Exploration is seen as a means for an animal to gather sensory data
useful for reducing its ignorance about the environment. While related problems
have been addressed in Data Mining and Reinforcement Learning, the
computational modeling of learning-driven exploration by embodied agents is
largely unrepresented.
Here, we propose a computational theory for learning-driven exploration based
on the concept of missing information that allows an agent to identify
informative actions using Bayesian inference. We demonstrate that when
embodiment constraints are high, agents must actively coordinate their actions
to learn efficiently. Compared to earlier approaches, our exploration policy
yields more efficient learning across a range of worlds with diverse
structures. The improved learning in turn affords greater success in general
tasks including navigation and reward gathering. We conclude by discussing how
the proposed theory relates to previous information-theoretic objectives of
behavior, such as predictive information and the free energy principle, and how
it might contribute to a general theory of exploratory behavior.
| [
"['Daniel Y. Little' 'Friedrich T. Sommer']",
"Daniel Y. Little and Friedrich T. Sommer"
] |
cs.LG cs.RO | null | 1112.1133 | null | null | http://arxiv.org/pdf/1112.1133v3 | 2012-06-08T20:39:30Z | 2011-12-06T00:45:28Z | Multi-timescale Nexting in a Reinforcement Learning Robot | The term "nexting" has been used by psychologists to refer to the propensity
of people and many other animals to continually predict what will happen next
in an immediate, local, and personal sense. The ability to "next" constitutes a
basic kind of awareness and knowledge of one's environment. In this paper we
present results with a robot that learns to next in real time, predicting
thousands of features of the world's state, including all sensory inputs, at
timescales from 0.1 to 8 seconds. This was achieved by treating each state
feature as a reward-like target and applying temporal-difference methods to
learn a corresponding value function with a discount rate corresponding to the
timescale. We show that two thousand predictions, each dependent on six
thousand state features, can be learned and updated online at better than 10Hz
on a laptop computer, using the standard TD(lambda) algorithm with linear
function approximation. We show that this approach is efficient enough to be
practical, with most of the learning complete within 30 minutes. We also show
that a single tile-coded feature representation suffices to accurately predict
many different signals at a significant range of timescales. Finally, we show
that the accuracy of our learned predictions compares favorably with the
optimal off-line solution.
| [
"['Joseph Modayil' 'Adam White' 'Richard S. Sutton']",
"Joseph Modayil, Adam White, Richard S. Sutton"
] |
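The per-prediction learning rule named in the abstract is the standard online TD(lambda) update with linear function approximation, one instance per timescale via its discount rate `gamma`; a sketch of a single update:

```python
import numpy as np

def td_lambda_step(w, z, phi, phi_next, reward, gamma, lam, alpha):
    """One online TD(lambda) update with linear function approximation:
    w are the weights, z the eligibility trace, phi/phi_next the feature
    vectors of the current and next state."""
    delta = reward + gamma * (w @ phi_next) - (w @ phi)  # TD error
    z = gamma * lam * z + phi                            # accumulating trace
    w = w + alpha * delta * z
    return w, z
```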
cs.LG | null | 1112.1390 | null | null | http://arxiv.org/pdf/1112.1390v1 | 2011-12-06T20:15:37Z | 2011-12-06T20:15:37Z | An Identity for Kernel Ridge Regression | This paper derives an identity connecting the square loss of ridge regression
in on-line mode with the loss of the retrospectively best regressor. Some
corollaries about the properties of the cumulative loss of on-line ridge
regression are also obtained.
| [
"['Fedor Zhdanov' 'Yuri Kalnishkan']",
"Fedor Zhdanov and Yuri Kalnishkan"
] |
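For reference, the batch form of the regressor whose on-line loss the identity concerns, with `K` the training Gram matrix and `K_test` the test/train kernel matrix:

```python
import numpy as np

def kernel_ridge_fit_predict(K, y, K_test, lam):
    """Batch kernel ridge regression: alpha solves (K + lam I) alpha = y,
    and predictions are K_test @ alpha."""
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return K_test @ alpha
```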
cs.LG stat.ML | null | 1112.1556 | null | null | http://arxiv.org/pdf/1112.1556v3 | 2012-02-24T08:07:54Z | 2011-12-07T13:34:25Z | Active Learning of Halfspaces under a Margin Assumption | We derive and analyze a new, efficient, pool-based active learning algorithm
for halfspaces, called ALuMA. Most previous algorithms show exponential
improvement in the label complexity assuming that the distribution over the
instance space is close to uniform. This assumption rarely holds in practical
applications. Instead, we study the label complexity under a large-margin
assumption -- a much more realistic condition, as evident by the success of
margin-based algorithms such as SVM. Our algorithm is computationally efficient
and comes with formal guarantees on its label complexity. It also naturally
extends to the non-separable case and to non-linear kernels. Experiments
illustrate the clear advantage of ALuMA over other active learning algorithms.
| [
"['Alon Gonen' 'Sivan Sabato' 'Shai Shalev-Shwartz']",
"Alon Gonen, Sivan Sabato and Shai Shalev-Shwartz"
] |
cs.NI cs.LG | 10.5121/ijcnc.2011.3413 | 1112.1615 | null | null | http://arxiv.org/abs/1112.1615v1 | 2011-12-07T16:34:20Z | 2011-12-07T16:34:20Z | SLA Establishment with Guaranteed QoS in the Interdomain Network: A
Stock Model | The new model that we present in this paper is introduced in the context of
guaranteed QoS and resources management in the inter-domain routing framework.
This model, called the stock model, is based on a reverse cascade approach and
is applied in a distributed context. So transit providers have to learn the
right capacities to buy and to stock; learning theory is therefore applied
through an iterative process. We show that transit providers manage to learn
how to strategically choose their capacities on each route in order to maximize
their benefits, despite the very incomplete information. Finally, we provide
and analyse some simulation results given by the application of the model in a
simple case where the model quickly converges to a stable state.
| [
"['Dominique Barth' 'Boubkeur Boudaoud' 'Thierry Mautor']",
"Dominique Barth, Boubkeur Boudaoud and Thierry Mautor"
] |
cs.DB cs.LG | null | 1112.1734 | null | null | http://arxiv.org/pdf/1112.1734v1 | 2011-12-07T23:33:15Z | 2011-12-07T23:33:15Z | Using Taxonomies to Facilitate the Analysis of the Association Rules | The Data Mining process enables the end users to analyze, understand and use
the extracted knowledge in an intelligent system or to support in the
decision-making processes. However, many algorithms used in the process
generate large quantities of patterns, complicating their analysis. This is
the case with association rules, a Data Mining technique that
tries to identify intrinsic patterns in large data sets. A method that can help
the analysis of the association rules is the use of taxonomies in the step of
post-processing knowledge. In this paper, the GART algorithm is proposed, which
uses taxonomies to generalize association rules, and the RulEE-GAR
computational module, that enables the analysis of the generalized rules.
| [
"['Marcos Aurélio Domingues' 'Solange Oliveira Rezende']",
"Marcos Aur\\'elio Domingues, Solange Oliveira Rezende"
] |
cs.IT cs.DM cs.LG math.IT | null | 1112.1757 | null | null | http://arxiv.org/pdf/1112.1757v2 | 2011-12-28T05:33:05Z | 2011-12-08T03:32:39Z | Recovery of a Sparse Integer Solution to an Underdetermined System of
Linear Equations | We consider a system of m linear equations in n variables Ax=b where A is a
given m x n matrix and b is a given m-vector known to be equal to Ax' for some
unknown solution x' that is integer and k-sparse: x' in {0,1}^n and exactly k
entries of x' are 1. We give necessary and sufficient conditions for recovering
the solution x' exactly using an LP relaxation that minimizes the l1 norm of x. When
A is drawn from a distribution that has exchangeable columns, we show an
interesting connection between the recovery probability and a well known
problem in geometry, namely the k-set problem. To the best of our knowledge,
this connection appears to be new in the compressive sensing literature. We
empirically show that for large n if the elements of A are drawn i.i.d. from
the normal distribution then the performance of the recovery LP exhibits a
phase transition, i.e., for each k there exists a value m' of m such that the
recovery always succeeds if m > m' and always fails if m < m'. Using the
empirical data we conjecture that m' = nH(k/n)/2 where H(x) = -(x)log_2(x) -
(1-x)log_2(1-x) is the binary entropy function.
| [
"T. S. Jayram, Soumitra Pal, Vijay Arya",
"['T. S. Jayram' 'Soumitra Pal' 'Vijay Arya']"
] |
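The recovery LP studied above is plain l1 minimization (basis pursuit); a minimal sketch via `scipy.optimize.linprog` using the usual positive/negative-part split:

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, b):
    """Basis pursuit: min ||x||_1 subject to Ax = b, as an LP by writing
    x = u - v with u, v >= 0, so the objective is sum(u) + sum(v)."""
    m, n = A.shape
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
    u, v = res.x[:n], res.x[n:]
    return u - v
```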
cs.LG math.PR math.ST stat.TH | null | 1112.1768 | null | null | null | null | null | The Extended UCB Policies for Frequentist Multi-armed Bandit Problems | The multi-armed bandit (MAB) problem is a widely studied model in the field
of reinforcement learning. This paper considers two cases of the classical MAB
model -- the light-tailed reward distributions and the heavy-tailed,
respectively. For the light-tailed (i.e. sub-Gaussian) case, we propose the
UCB1-LT policy, achieving the optimal $O(\log T)$ order of regret
growth. For the heavy-tailed case, we introduce the extended robust UCB policy,
which is an extension of the UCB policies proposed by Bubeck et al. (2013) and
Lattimore (2017). The previous UCB policies require the knowledge of an upper
bound on specific moments of reward distributions, which can be hard to acquire
in some practical situations. Our extended robust UCB eliminates this
requirement while still achieving the optimal regret growth order $O(\log T)$,
thus providing a broadened application area of the UCB policies for the
heavy-tailed reward distributions.
| [
"Keqin Liu, Haoran Chen, Weibing Deng, Ting Wu"
] |
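For context, the classical UCB1 policy that these extensions build on, in textbook form; `pull(i)` is an assumed environment callback returning a stochastic reward:

```python
import math

def ucb1(pull, n_arms, T):
    """UCB1: play each arm once, then pick the arm maximizing
    empirical mean + sqrt(2 ln t / n_i)."""
    counts = [0] * n_arms
    means = [0.0] * n_arms
    for i in range(n_arms):                       # initialization round
        counts[i], means[i] = 1, pull(i)
    for t in range(n_arms + 1, T + 1):
        i = max(range(n_arms),
                key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]))
        r = pull(i)
        counts[i] += 1
        means[i] += (r - means[i]) / counts[i]    # incremental mean update
    return means, counts
```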
cs.LG cs.AI cs.RO | 10.1109/DEVLRN.2011.6037329 | 1112.1937 | null | null | http://arxiv.org/abs/1112.1937v1 | 2011-12-08T20:27:31Z | 2011-12-08T20:27:31Z | Bootstrapping Intrinsically Motivated Learning with Human Demonstrations | This paper studies the coupling of internally guided learning and social
interaction, and more specifically the improvement that demonstrations bring to
learning driven by intrinsic motivation. We present Socially Guided Intrinsic
Motivation by Demonstration (SGIM-D), an algorithm for learning in continuous,
unbounded and non-preset environments. After introducing social learning and
intrinsic motivation, we describe the design of our algorithm, before showing
through a fishing experiment that SGIM-D efficiently combines the advantages of
social learning and intrinsic motivation to gain a wide repertoire while being
specialised in specific subspaces.
| [
"Sao Mai Nguyen (INRIA Bordeaux - Sud-Ouest), Adrien Baranes (INRIA\n Bordeaux - Sud-Ouest), Pierre-Yves Oudeyer (INRIA Bordeaux - Sud-Ouest)",
"['Sao Mai Nguyen' 'Adrien Baranes' 'Pierre-Yves Oudeyer']"
] |
cs.LG | null | 1112.1966 | null | null | http://arxiv.org/pdf/1112.1966v1 | 2011-12-08T21:33:38Z | 2011-12-08T21:33:38Z | Bipartite ranking algorithm for classification and survival analysis | Unsupervised aggregation of independently built univariate predictors is
explored as an alternative regularization approach for noisy, sparse datasets.
Bipartite ranking algorithm Smooth Rank implementing this approach is
introduced. The advantages of this algorithm are demonstrated on two types of
problems. First, Smooth Rank is applied to two-class problems from bio-medical
field, where ranking is often preferable to classification. In comparison
against SVMs with radial and linear kernels, Smooth Rank had the best
performance on 8 out of 12 benchmark datasets. The second area of application
is survival analysis, which is reduced here to bipartite ranking in a way which
allows one to use commonly accepted measures of method performance. In
comparison of Smooth Rank with Cox PH regression and CoxPath methods, Smooth
Rank proved to be the best on 9 out of 10 benchmark datasets.
| [
"['Marina Sapir']",
"Marina Sapir"
] |
cs.SI cs.LG | null | 1112.2187 | null | null | http://arxiv.org/pdf/1112.2187v2 | 2011-12-15T00:22:51Z | 2011-12-09T19:31:28Z | Chinese Restaurant Game - Part II: Applications to Wireless Networking,
Cloud Computing, and Online Social Networking | In Part I of this two-part paper [1], we proposed a new game, called Chinese
restaurant game, to analyze the social learning problem with negative network
externality. The best responses of agents in the Chinese restaurant game with
imperfect signals are constructed through a recursive method, and the influence
of both learning and network externality on the utilities of agents is studied.
In Part II of this two-part paper, we illustrate three applications of Chinese
restaurant game in wireless networking, cloud computing, and online social
networking. For each application, we formulate the corresponding problem as a
Chinese restaurant game and analyze how agents learn and make strategic
decisions in the problem. The proposed method is compared with four
common-sense methods in terms of agents' utilities and the overall system
performance through simulations. We find that the proposed Chinese restaurant
game theoretic approach indeed helps agents make better decisions and improves
the overall system performance. Furthermore, agents with different decision
orders have different advantages in terms of their utilities, which also
verifies the conclusions drawn in Part I of this two-part paper.
| [
"['Chih-Yu Wang' 'Yan Chen' 'K. J. Ray Liu']",
"Chih-Yu Wang and Yan Chen and K. J. Ray Liu"
] |
cs.SI cs.LG | null | 1112.2188 | null | null | http://arxiv.org/pdf/1112.2188v3 | 2012-02-13T07:20:48Z | 2011-12-09T19:33:06Z | Chinese Restaurant Game - Part I: Theory of Learning with Negative
Network Externality | In a social network, agents are intelligent and have the capability to make
decisions to maximize their utilities. They can either make wise decisions by
taking advantages of other agents' experiences through learning, or make
decisions earlier to avoid competitions from huge crowds. Both these two
effects, social learning and negative network externality, play important roles
in the decision process of an agent. While there are existing works on either
social learning or negative network externality, a general study considering
both of these contradictory effects is still limited. We find that the Chinese
restaurant process, a popular random process, provides a well-defined structure
to model the decision process of an agent under these two effects. By
introducing the strategic behavior into the non-strategic Chinese restaurant
process, in Part I of this two-part paper, we propose a new game, called
Chinese Restaurant Game, to formulate the social learning problem with negative
network externality. Through analyzing the proposed Chinese restaurant game, we
derive the optimal strategy of each agent and provide a recursive method to
achieve the optimal strategy. How social learning and negative network
externality influence each other under various settings is also studied through
simulations.
| [
"['Chih-Yu Wang' 'Yan Chen' 'K. J. Ray Liu']",
"Chih-Yu Wang and Yan Chen and K. J. Ray Liu"
] |
stat.ML cs.LG cs.MA | null | 1112.2315 | null | null | http://arxiv.org/pdf/1112.2315v1 | 2011-12-11T01:52:50Z | 2011-12-11T01:52:50Z | Adaptive Forgetting Factor Fictitious Play | It is now well known that decentralised optimisation can be formulated as a
potential game, and game-theoretical learning algorithms can be used to find an
optimum. One of the most common learning techniques in game theory is
fictitious play. However, fictitious play is founded on an implicit assumption
that opponents' strategies are stationary. We present a novel variation of
fictitious play that allows the use of a more realistic model of opponent
strategy. It uses a heuristic approach, from the online streaming data
literature, to adaptively update the weights assigned to recently observed
actions. We compare the results of the proposed algorithm with those of
stochastic and geometric fictitious play in a simple strategic form game, a
vehicle target assignment game and a disaster management problem. In all the
tests the rate of convergence of the proposed algorithm was similar or better
than the variations of fictitious play we compared it with. The new algorithm
therefore improves the performance of game-theoretical learning in
decentralised optimisation.
| [
"Michalis Smyrnakis and David S. Leslie",
"['Michalis Smyrnakis' 'David S. Leslie']"
] |
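The core modification can be sketched as geometric discounting of opponent action counts; a fixed forgetting factor is used here, whereas the paper's contribution is adapting it online from the observation stream:

```python
import numpy as np

def forgetting_beliefs(observed_actions, n_actions, forget=0.95):
    """Fictitious-play belief update with a forgetting factor: recent
    opponent actions get geometrically more weight than old ones.
    Assumes at least one observation."""
    weights = np.zeros(n_actions)
    for a in observed_actions:
        weights *= forget            # discount the past
        weights[a] += 1.0            # count the new observation
    return weights / weights.sum()   # belief over the opponent's actions
```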
math.OC cs.LG | null | 1112.2318 | null | null | http://arxiv.org/pdf/1112.2318v2 | 2013-06-03T04:58:28Z | 2011-12-11T04:00:57Z | Low-rank optimization with trace norm penalty | The paper addresses the problem of low-rank trace norm minimization. We
propose an algorithm that alternates between fixed-rank optimization and
rank-one updates. The fixed-rank optimization is characterized by an efficient
factorization that makes the trace norm differentiable in the search space and
the computation of duality gap numerically tractable. The search space is
nonlinear but is equipped with a particular Riemannian structure that leads to
efficient computations. We present a second-order trust-region algorithm with a
guaranteed quadratic rate of convergence. Overall, the proposed optimization
scheme converges super-linearly to the global solution while maintaining
complexity that is linear in the number of rows and columns of the matrix. To
compute a set of solutions efficiently for a grid of regularization parameters
we propose a predictor-corrector approach that outperforms the naive
warm-restart approach on the fixed-rank quotient manifold. The performance of
the proposed algorithm is illustrated on problems of low-rank matrix completion
and multivariate linear regression.
| [
"['B. Mishra' 'G. Meyer' 'F. Bach' 'R. Sepulchre']",
"B. Mishra, G. Meyer, F. Bach and R. Sepulchre"
] |
stat.ME cs.CR cs.LG | null | 1112.2680 | null | null | http://arxiv.org/pdf/1112.2680v1 | 2011-12-12T20:16:03Z | 2011-12-12T20:16:03Z | Random Differential Privacy | We propose a relaxed privacy definition called {\em random differential
privacy} (RDP). Differential privacy requires that adding any new observation
to a database will have small effect on the output of the data-release
procedure. Random differential privacy requires that adding a {\em randomly
drawn new observation} to a database will have small effect on the output. We
show an analog of the composition property of differentially private procedures
which applies to our new definition. We show how to release an RDP histogram
and we show that RDP histograms are much more accurate than histograms obtained
using ordinary differential privacy. We finally show an analog of the global
sensitivity framework for the release of functions under our privacy
definition.
| [
"Rob Hall, Alessandro Rinaldo, Larry Wasserman",
"['Rob Hall' 'Alessandro Rinaldo' 'Larry Wasserman']"
] |
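For comparison, the ordinary differentially private histogram that RDP histograms are measured against adds Laplace(1/epsilon) noise to each count; one record changes one bin by one, so per-bin sensitivity is 1:

```python
import numpy as np

def dp_histogram(data, bins, epsilon):
    """Standard Laplace-mechanism histogram: epsilon-DP noisy counts,
    clipped to be nonnegative (post-processing preserves privacy)."""
    counts, edges = np.histogram(data, bins=bins)
    noisy = counts + np.random.laplace(scale=1.0 / epsilon, size=counts.shape)
    return np.clip(noisy, 0, None), edges
```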
stat.ML cs.LG | null | 1112.2738 | null | null | http://arxiv.org/pdf/1112.2738v1 | 2011-12-12T22:33:55Z | 2011-12-12T22:33:55Z | Robust Learning via Cause-Effect Models | We consider the problem of function estimation in the case where the data
distribution may shift between training and test time, and additional
information about it may be available at test time. This relates to popular
scenarios such as covariate shift, concept drift, transfer learning and
semi-supervised learning. This working paper discusses how these tasks could be
tackled depending on the kind of changes of the distributions. It argues that
knowledge of an underlying causal direction can facilitate several of these
tasks.
| [
"['Bernhard Schölkopf' 'Dominik Janzing' 'Jonas Peters' 'Kun Zhang']",
"Bernhard Sch\\\"olkopf, Dominik Janzing, Jonas Peters, Kun Zhang"
] |
math.CO cs.LG | null | 1112.2801 | null | null | http://arxiv.org/pdf/1112.2801v3 | 2012-02-29T13:14:08Z | 2011-12-13T06:04:05Z | A new order theory of set systems and better quasi-orderings | By reformulating a learning process of a set system L as a game between
Teacher (presenter of data) and Learner (updater of the abstract independent
set), we define the order type dim L of L to be the order type of the game
tree. The theory of this new order type and continuous, monotone function
between set systems corresponds to the theory of well quasi-orderings (WQOs).
As Nash-Williams developed the theory of WQOs to the theory of better
quasi-orderings (BQOs), we introduce a set system that has order type and
corresponds to a BQO. We prove that the class of set systems corresponding to
BQOs is closed under any monotone function. In (Shinohara and Arimura, "Inductive
inference of unbounded unions of pattern languages from positive data."
Theoretical Computer Science, pp. 191-209, 2000), for any set system L, they
considered the class of arbitrary (finite) unions of members of L. From the
viewpoint of WQOs and BQOs, we characterize the set systems L such that the
class of arbitrary (finite) unions of members of L has order type. The
characterization shows that the order structure of the set system L with
respect to the set-inclusion is not important for the resulting set system
having order type. We point out that a continuous, monotone function of set systems
is similar to positive reduction to Jockusch-Owings' weakly semirecursive sets.
| [
"Yohji Akama",
"['Yohji Akama']"
] |
cs.LG | null | 1112.3712 | null | null | http://arxiv.org/pdf/1112.3712v1 | 2011-12-16T05:21:10Z | 2011-12-16T05:21:10Z | Analysis and Extension of Arc-Cosine Kernels for Large Margin
Classification | We investigate a recently proposed family of positive-definite kernels that
mimic the computation in large neural networks. We examine the properties of
these kernels using tools from differential geometry; specifically, we analyze
the geometry of surfaces in Hilbert space that are induced by these kernels.
When this geometry is described by a Riemannian manifold, we derive results for
the metric, curvature, and volume element. Interestingly, though, we find that
the simplest kernel in this family does not admit such an interpretation. We
explore two variations of these kernels that mimic computation in neural
networks with different activation functions. We experiment with these new
kernels on several data sets and highlight their general trends in performance
for classification.
| [
"Youngmin Cho and Lawrence K. Saul",
"['Youngmin Cho' 'Lawrence K. Saul']"
] |
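The degree-one member of this kernel family, the one mimicking rectified-linear threshold units, has a simple closed form; a sketch for row-wise data matrices:

```python
import numpy as np

def arc_cosine_kernel_deg1(X, Y):
    """Degree-1 arc-cosine kernel: k(x,y) = (1/pi) ||x|| ||y|| (sin t +
    (pi - t) cos t), with t the angle between x and y."""
    nx = np.linalg.norm(X, axis=1, keepdims=True)
    ny = np.linalg.norm(Y, axis=1, keepdims=True)
    cos_t = np.clip((X @ Y.T) / (nx @ ny.T), -1.0, 1.0)
    t = np.arccos(cos_t)
    return (nx @ ny.T) / np.pi * (np.sin(t) + (np.pi - t) * np.cos(t))
```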
cs.LG | null | 1112.3714 | null | null | http://arxiv.org/pdf/1112.3714v1 | 2011-12-16T05:33:59Z | 2011-12-16T05:33:59Z | Nonnegative Matrix Factorization for Semi-supervised Dimensionality
Reduction | We show how to incorporate information from labeled examples into nonnegative
matrix factorization (NMF), a popular unsupervised learning algorithm for
dimensionality reduction. In addition to mapping the data into a space of lower
dimensionality, our approach aims to preserve the nonnegative components of the
data that are important for classification. We identify these components from
the support vectors of large-margin classifiers and derive iterative updates to
preserve them in a semi-supervised version of NMF. These updates have a simple
multiplicative form like their unsupervised counterparts; they are also
guaranteed at each iteration to decrease their loss function---a weighted sum
of I-divergences that captures the trade-off between unsupervised and
supervised learning. We evaluate these updates for dimensionality reduction
when they are used as a precursor to linear classification. In this role, we
find that they yield much better performance than their unsupervised
counterparts. We also find one unexpected benefit of the low dimensional
representations discovered by our approach: often they yield more accurate
classifiers than both ordinary and transductive SVMs trained in the original
input space.
| [
"Youngmin Cho and Lawrence K. Saul",
"['Youngmin Cho' 'Lawrence K. Saul']"
] |
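The unsupervised updates being extended are the classic Lee-Seung multiplicative rules; a minimal baseline sketch under squared error (the paper's semi-supervised variant instead uses a weighted sum of I-divergences):

```python
import numpy as np

def nmf_multiplicative(V, k, iters=200, seed=0):
    """Unsupervised NMF via multiplicative updates for squared error,
    factoring V ~= W H with nonnegative W (n x k) and H (k x m)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # update H, W fixed
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # update W, H fixed
    return W, H
```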
cs.IT cs.LG math.IT | null | 1112.3946 | null | null | http://arxiv.org/pdf/1112.3946v2 | 2012-01-05T04:27:46Z | 2011-12-16T20:40:23Z | Strongly Convex Programming for Exact Matrix Completion and Robust
Principal Component Analysis | The common task in matrix completion (MC) and robust principal component
analysis (RPCA) is to recover a low-rank matrix from a given data matrix. These
problems have gained great attention from various areas in applied sciences
recently, especially after the publication of the pioneering works of Cand\`es
et al. One fundamental result in MC and RPCA is that nuclear-norm-based convex
optimizations lead to the exact low-rank matrix recovery under suitable
conditions. In this paper, we extend this result by showing that strongly
convex optimizations can guarantee the exact low-rank matrix recovery as well.
The result in this paper not only provides sufficient conditions under which
the strongly convex models lead to the exact low-rank matrix recovery, but also
guides us on how to choose suitable parameters in practical algorithms.
| [
"['Hui Zhang' 'Jian-Feng Cai' 'Lizhi Cheng' 'Jubo Zhu']",
"Hui Zhang, Jian-Feng Cai, Lizhi Cheng, Jubo Zhu"
] |
cs.LG | null | 1112.4020 | null | null | http://arxiv.org/pdf/1112.4020v1 | 2011-12-17T03:57:06Z | 2011-12-17T03:57:06Z | Clustering and Latent Semantic Indexing Aspects of the Nonnegative
Matrix Factorization | This paper provides theoretical support for the clustering aspect of
nonnegative matrix factorization (NMF). By utilizing the Karush-Kuhn-Tucker
optimality conditions, we show that the NMF objective is equivalent to a graph
clustering objective, so the clustering aspect of NMF has a solid
justification. Different from previous approaches which usually discard the
nonnegativity constraints, our approach guarantees the stationary point being
used in deriving the equivalence is located on the feasible region in the
nonnegative orthant. Additionally, since the clustering capability of a matrix
decomposition technique can sometimes imply its latent semantic indexing (LSI)
aspect, we will also evaluate the LSI aspect of the NMF by showing its capability
in solving the synonymy and polysemy problems in synthetic datasets. A more
extensive evaluation is then conducted by comparing the LSI performances of the NMF
and the singular value decomposition (SVD), the standard LSI method, using some
standard datasets.
| [
"Andri Mirzal",
"['Andri Mirzal']"
] |
cs.CG cs.DS cs.LG | null | 1112.4105 | null | null | http://arxiv.org/pdf/1112.4105v3 | 2012-04-03T22:46:53Z | 2011-12-18T01:19:25Z | epsilon-Samples of Kernels | We study the worst case error of kernel density estimates via subset
approximation. A kernel density estimate of a distribution is the convolution
of that distribution with a fixed kernel (e.g. Gaussian kernel). Given a subset
(i.e. a point set) of the input distribution, we can compare the kernel density
estimates of the input distribution with that of the subset and bound the worst
case error. If the maximum error is eps, then this subset can be thought of as
an eps-sample (aka an eps-approximation) of the range space defined with the
input distribution as the ground set and the fixed kernel representing the
family of ranges. Interestingly, in this case the ranges are not binary, but
have a continuous range (for simplicity we focus on kernels with range of
[0,1]); these allow for smoother notions of range spaces.
It turns out, the use of this smoother family of range spaces has an added
benefit of greatly decreasing the size required for eps-samples. For instance,
in the plane the size is O((1/eps^{4/3}) log^{2/3}(1/eps)) for disks (based on
VC-dimension arguments) but is only O((1/eps) sqrt{log (1/eps)}) for Gaussian
kernels and for kernels with bounded slope that only affect a bounded domain.
These bounds are accomplished by studying the discrepancy of these "kernel"
range spaces, and here the improvement in bounds is even more pronounced. In
the plane, we show the discrepancy is O(sqrt{log n}) for these kernels, whereas
for balls there is a lower bound of Omega(n^{1/4}).
| [
"Jeff M. Phillips",
"['Jeff M. Phillips']"
] |
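To make the eps-sample notion in the abstract above concrete, the following sketch (my own construction, not the paper's) empirically compares the Gaussian kernel density estimate of a large point set with that of a random subset. The true eps is a supremum over all query points, so checking random queries only lower-bounds it; the kernel normalization constant is omitted since it rescales both estimates identically.

```python
import numpy as np

def kde(points, query, h=0.5):
    # Unnormalized Gaussian kernel density estimate at each query location.
    d2 = ((query[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * h * h)).mean(axis=1)

rng = np.random.default_rng(1)
P = rng.normal(size=(2000, 2))                    # input point set
S = P[rng.choice(len(P), 100, replace=False)]     # candidate eps-sample
Q = rng.normal(size=(500, 2))                     # query locations
err = np.abs(kde(P, Q) - kde(S, Q)).max()
print(f"worst observed KDE error over queries: {err:.4f}")
```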
cs.LG | null | 1112.4133 | null | null | http://arxiv.org/pdf/1112.4133v1 | 2011-12-18T08:02:49Z | 2011-12-18T08:02:49Z | Evaluation of Performance Measures for Classifiers Comparison | The selection of the best classification algorithm for a given dataset is a
very widespread problem, occurring each time one has to choose a classifier to
solve a real-world problem. It is also a complex task with many important
methodological decisions to make. Among those, one of the most crucial is the
choice of an appropriate measure in order to properly assess the classification
performance and rank the algorithms. In this article, we focus on this specific
task. We present the most popular measures and compare their behavior through
discrimination plots. We then discuss their properties from a more theoretical
perspective. It turns out several of them are equivalent for classifier
comparison purposes. Furthermore, they can also lead to interpretation problems.
Among the numerous measures proposed over the years, it appears that the
classical overall success rate and marginal rates are the most suitable for
the classifier comparison task.
| [
"Vincent Labatut, Hocine Cherifi (Le2i)",
"['Vincent Labatut' 'Hocine Cherifi']"
] |
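For reference, here is a small sketch of the measures the abstract above singles out, computed from a confusion matrix; reading "marginal rates" as the per-class row and column rates (recall and precision) is my interpretation, not a definition from the paper.

```python
import numpy as np

def rates(conf):
    # conf[i, j]: count of class-i instances predicted as class j.
    conf = np.asarray(conf, dtype=float)
    overall = np.trace(conf) / conf.sum()          # overall success rate (accuracy)
    recall = np.diag(conf) / conf.sum(axis=1)      # row marginal rates
    precision = np.diag(conf) / conf.sum(axis=0)   # column marginal rates
    return overall, recall, precision

print(rates([[50, 10], [5, 35]]))
```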
cs.LG cs.MM | null | 1112.4243 | null | null | http://arxiv.org/pdf/1112.4243v1 | 2011-12-19T05:29:18Z | 2011-12-19T05:29:18Z | Online Learning for Classification of Low-rank Representation Features
and Its Applications in Audio Segment Classification | In this paper, a novel framework based on trace norm minimization for audio
segment classification is proposed. In this framework, both the feature
extraction and classification are obtained by solving corresponding convex
optimization problems with trace norm regularization. For feature extraction,
robust principal component analysis (robust PCA) via minimizing a combination of the
nuclear norm and the $\ell_1$-norm is used to extract low-rank features which
are robust to white noise and gross corruption for audio segments. These
low-rank features are fed to a linear classifier where the weight and bias are
learned by solving similar trace norm constrained problems. For this
classifier, most methods find the weight and bias in batch-mode learning, which
makes them inefficient for large-scale problems. In this paper, we propose an
online framework using accelerated proximal gradient method. This framework has
a main advantage in memory cost. In addition, as a result of the regularization
formulation of matrix classification, the Lipschitz constant is given
explicitly, and hence the step-size estimation of the general proximal gradient
method can be omitted in our approach. Experiments on real data sets for
laugh/non-laugh and applause/non-applause classification indicate that this
novel framework is effective and noise robust.
| [
"Ziqiang Shi and Jiqing Han and Tieran Zheng and Shiwen Deng",
"['Ziqiang Shi' 'Jiqing Han' 'Tieran Zheng' 'Shiwen Deng']"
] |
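The robust PCA step named in the abstract above is typically solved by principal component pursuit. Below is a hedged sketch of the standard inexact-ALM/ADMM iteration (singular value thresholding plus soft thresholding) with the usual default parameters; it is a generic solver, not the online learner the paper proposes.

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: prox of tau * nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def rpca(M, lam=None, mu=None, n_iter=100):
    """min ||L||_* + lam ||S||_1  s.t.  L + S = M  (principal component pursuit)."""
    m, n = M.shape
    if lam is None:
        lam = 1 / np.sqrt(max(m, n))
    if mu is None:
        mu = m * n / (4 * np.abs(M).sum())
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1 / mu)                       # low-rank update
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0)  # sparse update
        Y += mu * (M - L - S)                                 # dual update
    return L, S

rng = np.random.default_rng(0)
M = rng.normal(size=(60, 1)) @ rng.normal(size=(1, 80))  # rank-1 signal
M[rng.random(M.shape) < 0.05] += 10                      # gross sparse corruption
L, S = rpca(M)
```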
cs.IT cs.LG math.IT math.ST stat.ML stat.TH | 10.1214/12-AOS1034 | 1112.4258 | null | null | http://arxiv.org/abs/1112.4258v5 | 2013-01-30T14:20:53Z | 2011-12-19T07:42:21Z | A geometric analysis of subspace clustering with outliers | This paper considers the problem of clustering a collection of unlabeled data
points assumed to lie near a union of lower-dimensional planes. As is common in
computer vision or unsupervised learning applications, we do not know in
advance how many subspaces there are nor do we have any information about their
dimensions. We develop a novel geometric analysis of an algorithm named sparse
subspace clustering (SSC) [In IEEE Conference on Computer Vision and Pattern
Recognition, 2009. CVPR 2009 (2009) 2790-2797. IEEE], which significantly
broadens the range of problems where it is provably effective. For instance, we
show that SSC can recover multiple subspaces, each of dimension comparable to
the ambient dimension. We also prove that SSC can correctly cluster data points
even when the subspaces of interest intersect. Further, we develop an extension
of SSC that succeeds when the data set is corrupted with possibly
overwhelmingly many outliers. Underlying our analysis are clear geometric
insights, which may bear on other sparse recovery problems. A numerical study
complements our theoretical analysis and demonstrates the effectiveness of
these methods.
| [
"Mahdi Soltanolkotabi, Emmanuel J. Cand\\'es",
"['Mahdi Soltanolkotabi' 'Emmanuel J. Candés']"
] |
cs.LG cs.CE cs.DB | null | 1112.4261 | null | null | http://arxiv.org/pdf/1112.4261v1 | 2011-12-19T08:16:13Z | 2011-12-19T08:16:13Z | Performance Analysis of Enhanced Clustering Algorithm for Gene
Expression Data | Microarrays have made it possible to simultaneously monitor the expression
profiles of thousands of genes under various experimental conditions. They are
used to identify the co-expressed genes in specific cells or tissues that are
actively used to make proteins, and thereby to analyse gene expression, an
important task in bioinformatics research. Cluster analysis of
gene expression data has proved to be a useful tool for identifying
co-expressed genes and biologically relevant groupings of genes and samples. In
this paper we apply K-Means with Automatic Generation of Merge Factor for
ISODATA (AGMFI). Though AGMFI has been applied for clustering of gene expression
data, the proposed Enhanced Automatic Generation of Merge Factor for ISODATA
(EAGMFI) algorithm overcomes the drawbacks of AGMFI in terms of specifying the
optimal number of clusters and initializing good cluster centroids.
Experimental results on gene expression data show that the proposed EAGMFI
algorithm identifies compact clusters and performs well in terms of the
Silhouette Coefficient cluster measure.
| [
"T.Chandrasekhar, K.Thangavel and E.Elayaraja",
"['T. Chandrasekhar' 'K. Thangavel' 'E. Elayaraja']"
] |
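As an illustration of the evaluation criterion named in the abstract above (not of EAGMFI itself), this sketch sweeps plain k-means over candidate cluster counts on stand-in data and scores each partition with the Silhouette Coefficient.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X = np.random.default_rng(0).normal(size=(300, 50))  # stand-in for expression data
scores = {}
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)          # higher = more compact/separated
best_k = max(scores, key=scores.get)
print(best_k, scores[best_k])
```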
cs.LG cs.GT | null | 1112.4344 | null | null | http://arxiv.org/pdf/1112.4344v1 | 2011-12-19T14:21:00Z | 2011-12-19T14:21:00Z | A Scalable Multiclass Algorithm for Node Classification | We introduce a scalable algorithm, MUCCA, for multiclass node classification
in weighted graphs. Unlike previously proposed methods for the same task, MUCCA
works in time linear in the number of nodes. Our approach is based on a
game-theoretic formulation of the problem in which the test labels are
expressed as a Nash Equilibrium of a certain game. However, in order to achieve
scalability, we find the equilibrium on a spanning tree of the original graph.
Experiments on real-world data reveal that MUCCA is much faster than its
competitors while achieving a similar predictive performance.
| [
"Giovanni Zappella",
"['Giovanni Zappella']"
] |
stat.ML cs.LG | null | 1112.4394 | null | null | http://arxiv.org/pdf/1112.4394v1 | 2011-12-19T16:22:09Z | 2011-12-19T16:22:09Z | Additive Gaussian Processes | We introduce a Gaussian process model of functions which are additive. An
additive function is one which decomposes into a sum of low-dimensional
functions, each depending on only a subset of the input variables. Additive GPs
generalize both Generalized Additive Models, and the standard GP models which
use squared-exponential kernels. Hyperparameter learning in this model can be
seen as Bayesian Hierarchical Kernel Learning (HKL). We introduce an expressive
but tractable parameterization of the kernel function, which allows efficient
evaluation of all input interaction terms, whose number is exponential in the
input dimension. The additional structure discoverable by this model results in
increased interpretability, as well as state-of-the-art predictive power in
regression tasks.
| [
"['David Duvenaud' 'Hannes Nickisch' 'Carl Edward Rasmussen']",
"David Duvenaud, Hannes Nickisch, Carl Edward Rasmussen"
] |
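To make the additive structure in the abstract above concrete, here is a sketch of the *first-order* additive kernel only: a sum of one-dimensional squared-exponential kernels, one per input variable. The paper's parameterization covers all interaction orders efficiently; that machinery is omitted here, and all names are my own.

```python
import numpy as np

def se_1d(x, z, ell=1.0):
    # Squared-exponential kernel on a single input dimension.
    return np.exp(-(x[:, None] - z[None, :]) ** 2 / (2 * ell ** 2))

def additive_kernel_first_order(X, Z, ells):
    # First-order additive kernel: sum of one-dimensional SE kernels.
    return sum(se_1d(X[:, d], Z[:, d], ells[d]) for d in range(X.shape[1]))

X = np.random.default_rng(0).normal(size=(10, 3))
K = additive_kernel_first_order(X, X, ells=[1.0, 1.0, 1.0])
```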
cs.LG stat.ML | null | 1112.4607 | null | null | http://arxiv.org/pdf/1112.4607v1 | 2011-12-20T08:52:56Z | 2011-12-20T08:52:56Z | Alignment Based Kernel Learning with a Continuous Set of Base Kernels | The success of kernel-based learning methods depends on the choice of kernel.
Recently, kernel learning methods have been proposed that use data to select
the most appropriate kernel, usually by combining a set of base kernels. We
introduce a new algorithm for kernel learning that combines a {\em continuous
set of base kernels}, without the common step of discretizing the space of base
kernels. We demonstrate that our new method achieves state-of-the-art
performance across a variety of real-world datasets. Furthermore, we explicitly
demonstrate the importance of combining the right dictionary of kernels, which
is problematic for methods based on a finite set of base kernels chosen a
priori. Our method is not the first approach to work with continuously
parameterized kernels. However, we show that our method requires substantially
less computation than previous such approaches, and so is more amenable to
multi-dimensional parameterizations of base kernels, which we demonstrate.
| [
"Arash Afkanpour and Csaba Szepesvari and Michael Bowling",
"['Arash Afkanpour' 'Csaba Szepesvari' 'Michael Bowling']"
] |
cs.NE cs.AI cs.LG | null | 1112.4628 | null | null | http://arxiv.org/pdf/1112.4628v1 | 2011-12-20T09:50:53Z | 2011-12-20T09:50:53Z | Using Artificial Bee Colony Algorithm for MLP Training on Earthquake
Time Series Data Prediction | Nowadays, computer scientists have shown interest in the study of social
insects' behaviour in the neural networks area for solving different combinatorial
and statistical problems. Chief among these is the Artificial Bee Colony (ABC)
algorithm, which simulates the intelligent foraging behaviour of a honey bee
swarm; this paper investigates its use. A Multilayer Perceptron (MLP) is
normally trained with the computationally intensive standard back propagation
(BP) algorithm. One of the crucial problems with the BP algorithm is that it
can sometimes yield networks with suboptimal weights because of the presence
of many local optima in the solution space. To overcome this, the ABC
algorithm is used in this work to train an MLP to learn the complex behaviour
of earthquake time series data, and the performance of MLP-ABC is benchmarked
against an MLP trained with standard BP. The experimental results show that
MLP-ABC performs better than MLP-BP on time series data.
| [
"Habib Shah, Rozaida Ghazali, and Nazri Mohd Nawi",
"['Habib Shah' 'Rozaida Ghazali' 'Nazri Mohd Nawi']"
] |
cs.LG | null | 1112.4722 | null | null | http://arxiv.org/pdf/1112.4722v2 | 2012-10-18T15:27:25Z | 2011-12-20T15:21:26Z | Modeling transition dynamics in MDPs with RKHS embeddings of conditional
distributions | We propose a new, nonparametric approach to estimating the value function in
reinforcement learning. This approach makes use of a recently developed
representation of conditional distributions as functions in a reproducing
kernel Hilbert space. Such representations bypass the need for estimating
transition probabilities, and apply to any domain on which kernels can be
defined. Our approach avoids the need to approximate intractable integrals
since expectations are represented as RKHS inner products whose computation has
linear complexity in the sample size. Thus, we can efficiently perform value
function estimation in a wide variety of settings, including finite state
spaces, continuous state spaces, and partially observable tasks where only
sensor measurements are available. A second advantage of the approach is that
we learn the conditional distribution representation from a training sample,
and do not require an exhaustive exploration of the state space. We prove
convergence of our approach either to the optimal policy, or to the closest
projection of the optimal policy in our model class, under reasonable
assumptions. In experiments, we demonstrate the performance of our algorithm on
a learning task in a continuous state space (the under-actuated pendulum), and
on a navigation problem where only images from a sensor are observed. We
compare with least-squares policy iteration where a Gaussian process is used
for value function estimation. Our algorithm achieves better performance in
both tasks.
| [
"Steffen Gr\\\"unew\\\"alder, Luca Baldassarre, Massimiliano Pontil, Arthur\n Gretton, Guy Lever",
"['Steffen Grünewälder' 'Luca Baldassarre' 'Massimiliano Pontil'\n 'Arthur Gretton' 'Guy Lever']"
] |
cs.LG | null | 1112.5246 | null | null | http://arxiv.org/pdf/1112.5246v3 | 2013-07-21T12:08:43Z | 2011-12-22T08:07:56Z | Combining One-Class Classifiers via Meta-Learning | Selecting the best classifier among the available ones is a difficult task,
especially when only instances of one class exist. In this work we examine the
notion of combining one-class classifiers as an alternative for selecting the
best classifier. In particular, we propose two new one-class classification
performance measures to weigh classifiers and show that a simple ensemble that
implements these measures can outperform the most popular one-class ensembles.
Furthermore, we propose a new one-class ensemble scheme, TUPSO, which uses
meta-learning to combine one-class classifiers. Our experiments demonstrate the
superiority of TUPSO over all other tested ensembles and show that the TUPSO
performance is statistically indistinguishable from that of the hypothetical
best classifier.
| [
"Eitan Menahem, Lior Rokach and Yuval Elovici",
"['Eitan Menahem' 'Lior Rokach' 'Yuval Elovici']"
] |
cs.AI cs.LG | null | 1112.5309 | null | null | http://arxiv.org/pdf/1112.5309v2 | 2012-11-04T17:22:46Z | 2011-12-22T13:50:46Z | POWERPLAY: Training an Increasingly General Problem Solver by
Continually Searching for the Simplest Still Unsolvable Problem | Most of computer science focuses on automatically solving given computational
problems. I focus on automatically inventing or discovering problems in a way
inspired by the playful behavior of animals and humans, to train a more and
more general problem solver from scratch in an unsupervised fashion. Consider
the infinite set of all computable descriptions of tasks with possibly
computable solutions. The novel algorithmic framework POWERPLAY (2011)
continually searches the space of possible pairs of new tasks and modifications
of the current problem solver, until it finds a more powerful problem solver
that provably solves all previously learned tasks plus the new one, while the
unmodified predecessor does not. Wow-effects are achieved by continually making
previously learned skills more efficient such that they require less time and
space. New skills may (partially) re-use previously learned skills. POWERPLAY's
search orders candidate pairs of tasks and solver modifications by their
conditional computational (time & space) complexity, given the stored
experience so far. The new task and its corresponding task-solving skill are
those first found and validated. The computational costs of validating new
tasks need not grow with task repertoire size. POWERPLAY's ongoing search for
novelty keeps breaking the generalization abilities of its present solver. This
is related to Goedel's sequence of increasingly powerful formal theories based
on adding formerly unprovable statements to the axioms without affecting
previously provable theorems. The continually increasing repertoire of problem
solving procedures can be exploited by a parallel search for solutions to
additional externally posed tasks. POWERPLAY may be viewed as a greedy but
practical implementation of basic principles of creativity. A first
experimental analysis can be found in separate papers [53,54].
| [
"['Jürgen Schmidhuber']",
"J\\\"urgen Schmidhuber"
] |
cs.LG stat.ML | null | 1112.5404 | null | null | http://arxiv.org/pdf/1112.5404v1 | 2011-12-22T18:08:27Z | 2011-12-22T18:08:27Z | Similarity-based Learning via Data Driven Embeddings | We consider the problem of classification using similarity/distance functions
over data. Specifically, we propose a framework for defining the goodness of a
(dis)similarity function with respect to a given learning task and propose
algorithms that have guaranteed generalization properties when working with
such good functions. Our framework unifies and generalizes the frameworks
proposed by [Balcan-Blum ICML 2006] and [Wang et al ICML 2007]. An attractive
feature of our framework is its adaptability to data - we do not promote a
fixed notion of goodness but rather let data dictate it. We show, by giving
theoretical guarantees, that the goodness criterion best suited to a problem can
itself be learned, which makes our approach applicable to a variety of domains
and problems. We propose a landmarking-based approach to obtaining a classifier
from such learned goodness criteria. We then provide a novel diversity based
heuristic to perform task-driven selection of landmark points instead of random
selection. We demonstrate the effectiveness of our goodness criteria learning
method as well as the landmark selection heuristic on a variety of
similarity-based learning datasets and benchmark UCI datasets on which our
method consistently outperforms existing approaches by a significant margin.
| [
"['Purushottam Kar' 'Prateek Jain']",
"Purushottam Kar and Prateek Jain"
] |
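A sketch of the landmarking idea from the abstract above: map each point to its similarities to a set of landmark points and train a linear classifier on that representation. This sketch picks landmarks at random and uses a Gaussian similarity, whereas the paper uses a learned goodness criterion and a diversity-based landmark selection heuristic; all names here are my own.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def landmark_features(X, L, gamma=1.0):
    # Represent each point by its similarities to the landmark points.
    d2 = ((X[:, None, :] - L[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)
L = X[rng.choice(len(X), 20, replace=False)]  # random landmarks (the paper selects by diversity)
clf = LogisticRegression().fit(landmark_features(X, L), y)
```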
physics.comp-ph cs.LG physics.chem-ph stat.ML | 10.1103/PhysRevLett.108.253002 | 1112.5441 | null | null | http://arxiv.org/abs/1112.5441v1 | 2011-12-22T20:29:32Z | 2011-12-22T20:29:32Z | Finding Density Functionals with Machine Learning | Machine learning is used to approximate density functionals. For the model
problem of the kinetic energy of non-interacting fermions in 1d, mean absolute
errors below 1 kcal/mol on test densities similar to the training set are
reached with fewer than 100 training densities. A predictor identifies if a
test density is within the interpolation region. Via principal component
analysis, a projected functional derivative finds highly accurate
self-consistent densities. Challenges for application of our method to real
electronic structure problems are discussed.
| [
"['John C. Snyder' 'Matthias Rupp' 'Katja Hansen' 'Klaus-Robert Müller'\n 'Kieron Burke']",
"John C. Snyder, Matthias Rupp, Katja Hansen, Klaus-Robert M\\\"uller,\n and Kieron Burke"
] |
cs.DC cs.AI cs.LG cs.PF | null | 1112.5505 | null | null | http://arxiv.org/pdf/1112.5505v5 | 2013-01-18T03:54:34Z | 2011-12-23T02:38:42Z | A Study on Using Uncertain Time Series Matching Algorithms in MapReduce
Applications | In this paper, we study CPU utilization time patterns of several Map-Reduce
applications. After extracting running patterns of several applications, the
patterns with their statistical information are saved in a reference database
to be later used to tweak system parameters to efficiently execute unknown
applications in the future. To achieve this goal, the CPU utilization patterns of new
applications, along with their statistical information, are compared with the
already known ones in the reference database to find/predict their most
probable execution patterns. Because of different patterns lengths, the Dynamic
Time Warping (DTW) is utilized for such comparison; a statistical analysis is
then applied to DTWs' outcomes to select the most suitable candidates.
Moreover, under a hypothesis, another algorithm is proposed to classify
applications under similar CPU utilization patterns. Three widely used text
processing applications (WordCount, Distributed Grep, and Terasort) and another
application (Exim Mainlog parsing) are used to evaluate our hypothesis in
tweaking system parameters in executing similar applications. Results were very
promising and showed the effectiveness of our approach on a 5-node Map-Reduce
platform.
| [
"['Nikzad Babaii Rizvandi' 'Javid Taheri' 'Albert Y. Zomaya'\n 'Reza Moraveji']",
"Nikzad Babaii Rizvandi, Javid Taheri, Albert Y. Zomaya, Reza Moraveji"
] |
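Since pattern lengths differ, the abstract above relies on Dynamic Time Warping for the comparison. A textbook O(nm) implementation (my own, without the statistical post-processing the paper adds) looks like this:

```python
import numpy as np

def dtw(a, b):
    # Classic dynamic time warping distance between two 1-d patterns.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

print(dtw([0, 1, 2, 3, 2], [0, 1, 1, 2, 3, 2, 2]))
```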
stat.ML cs.LG | null | 1112.5627 | null | null | http://arxiv.org/pdf/1112.5627v1 | 2011-12-23T18:12:33Z | 2011-12-23T18:12:33Z | Minimax Rates for Homology Inference | Often, high dimensional data lie close to a low-dimensional submanifold and
it is of interest to understand the geometry of these submanifolds. The
homology groups of a manifold are important topological invariants that provide
an algebraic summary of the manifold. These groups contain rich topological
information, for instance, about the connected components, holes, tunnels and
sometimes the dimension of the manifold. In this paper, we consider the
statistical problem of estimating the homology of a manifold from noisy samples
under several different noise models. We derive upper and lower bounds on the
minimax risk for this problem. Our upper bounds are based on estimators which
are constructed from a union of balls of appropriate radius around carefully
selected points. In each case we establish complementary lower bounds using Le
Cam's lemma.
| [
"Sivaraman Balakrishnan, Alessandro Rinaldo, Don Sheehy, Aarti Singh,\n Larry Wasserman",
"['Sivaraman Balakrishnan' 'Alessandro Rinaldo' 'Don Sheehy' 'Aarti Singh'\n 'Larry Wasserman']"
] |
cs.IT cs.LG math.IT stat.ML | null | 1112.5629 | null | null | http://arxiv.org/pdf/1112.5629v2 | 2011-12-27T15:22:13Z | 2011-12-23T18:25:17Z | High-Rank Matrix Completion and Subspace Clustering with Missing Data | This paper considers the problem of completing a matrix with many missing
entries under the assumption that the columns of the matrix belong to a union
of multiple low-rank subspaces. This generalizes the standard low-rank matrix
completion problem to situations in which the matrix rank can be quite high or
even full rank. Since the columns belong to a union of subspaces, this problem
may also be viewed as a missing-data version of the subspace clustering
problem. Let X be an n x N matrix whose (complete) columns lie in a union of at
most k subspaces, each of rank <= r < n, and assume N >> kn. The main result of
the paper shows that under mild assumptions each column of X can be perfectly
recovered with high probability from an incomplete version so long as at least
CrNlog^2(n) entries of X are observed uniformly at random, with C>1 a constant
depending on the usual incoherence conditions, the geometrical arrangement of
subspaces, and the distribution of columns over the subspaces. The result is
illustrated with numerical experiments and an application to Internet distance
matrix completion and topology identification.
| [
"['Brian Eriksson' 'Laura Balzano' 'Robert Nowak']",
"Brian Eriksson and Laura Balzano and Robert Nowak"
] |
stat.ML cs.LG | null | 1112.5745 | null | null | http://arxiv.org/pdf/1112.5745v1 | 2011-12-24T17:53:19Z | 2011-12-24T17:53:19Z | Bayesian Active Learning for Classification and Preference Learning | Information theoretic active learning has been widely studied for
probabilistic models. For simple regression an optimal myopic policy is easily
tractable. However, for other tasks and with more complex models, such as
classification with nonparametric models, the optimal solution is harder to
compute. Current approaches make approximations to achieve tractability. We
propose an approach that expresses information gain in terms of predictive
entropies, and apply this method to the Gaussian Process Classifier (GPC). Our
approach makes minimal approximations to the full information theoretic
objective. Our experimental performance compares favourably to many popular
active learning algorithms, and has equal or lower computational complexity. We
compare well to decision theoretic approaches also, which are privy to more
information and require much more computational time. Secondly, by developing
further a reformulation of binary preference learning to a classification
problem, we extend our algorithm to Gaussian Process preference learning.
| [
"Neil Houlsby, Ferenc Husz\\'ar, Zoubin Ghahramani, M\\'at\\'e Lengyel",
"['Neil Houlsby' 'Ferenc Huszár' 'Zoubin Ghahramani' 'Máté Lengyel']"
] |
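The information gain "expressed in terms of predictive entropies" in the abstract above is the BALD objective I[y; θ | x] = H[E_θ p(y|x,θ)] − E_θ H[p(y|x,θ)]. Below is a Monte Carlo sketch for binary outputs; the paper instead derives analytic approximations for the Gaussian Process Classifier.

```python
import numpy as np

def bald_scores(probs):
    # probs: (n_draws, n_points) Monte Carlo draws of p(y=1 | x, theta).
    def H(p):  # binary entropy, numerically safe at 0/1
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -(p * np.log(p) + (1 - p) * np.log(1 - p))
    # Entropy of the mean prediction minus mean entropy of predictions.
    return H(probs.mean(axis=0)) - H(probs).mean(axis=0)

draws = np.random.default_rng(0).beta(2, 2, size=(64, 5))  # fake posterior draws
print(bald_scores(draws))  # query the point with the largest score next
```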
cs.LG | null | 1112.6209 | null | null | http://arxiv.org/pdf/1112.6209v5 | 2012-07-12T04:32:50Z | 2011-12-29T00:26:54Z | Building high-level features using large scale unsupervised learning | We consider the problem of building high-level, class-specific feature
detectors from only unlabeled data. For example, is it possible to learn a face
detector using only unlabeled images? To answer this, we train a 9-layered
locally connected sparse autoencoder with pooling and local contrast
normalization on a large dataset of images (the model has 1 billion
connections, the dataset has 10 million 200x200 pixel images downloaded from
the Internet). We train this network using model parallelism and asynchronous
SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to
what appears to be a widely-held intuition, our experimental results reveal
that it is possible to train a face detector without having to label images as
containing a face or not. Control experiments show that this feature detector
is robust not only to translation but also to scaling and out-of-plane
rotation. We also find that the same network is sensitive to other high-level
concepts such as cat faces and human bodies. Starting with these learned
features, we trained our network to obtain 15.8% accuracy in recognizing 20,000
object categories from ImageNet, a leap of 70% relative improvement over the
previous state-of-the-art.
| [
"['Quoc V. Le' \"Marc'Aurelio Ranzato\" 'Rajat Monga' 'Matthieu Devin'\n 'Kai Chen' 'Greg S. Corrado' 'Jeff Dean' 'Andrew Y. Ng']",
"Quoc V. Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai\n Chen, Greg S. Corrado, Jeff Dean, Andrew Y. Ng"
] |
cs.IT cs.LG cs.SY math.IT | null | 1112.6234 | null | null | http://arxiv.org/pdf/1112.6234v2 | 2013-01-05T10:41:17Z | 2011-12-29T06:07:43Z | Sparse Recovery from Nonlinear Measurements with Applications in Bad
Data Detection for Power Networks | In this paper, we consider the problem of sparse recovery from nonlinear
measurements, which has applications in state estimation and bad data detection
for power networks. An iterative mixed $\ell_1$ and $\ell_2$ convex program is
used to estimate the true state by locally linearizing the nonlinear
measurements. When the measurements are linear, through using the almost
Euclidean property for a linear subspace, we derive a new performance bound for
the state estimation error under sparse bad data and additive observation
noise. As a byproduct, in this paper we provide sharp bounds on the almost
Euclidean property of a linear subspace, using the "escape-through-the-mesh"
theorem from geometric functional analysis. When the measurements are
nonlinear, we give conditions under which the solution of the iterative
algorithm converges to the true state even though the locally linearized
measurements may not be the actual nonlinear measurements. We numerically
evaluate our iterative convex programming approach to perform bad data
detections in nonlinear electrical power networks problems. We are able to use
semidefinite programming to verify the conditions for convergence of the
proposed iterative sparse recovery algorithms from nonlinear measurements.
| [
"['Weiyu Xu' 'Meng Wang' 'Jianfeng Cai' 'Ao Tang']",
"Weiyu Xu, Meng Wang, Jianfeng Cai and Ao Tang"
] |
cs.LG | null | 1112.6399 | null | null | http://arxiv.org/pdf/1112.6399v1 | 2011-12-29T19:52:14Z | 2011-12-29T19:52:14Z | Two-Manifold Problems | Recently, there has been much interest in spectral approaches to learning
manifolds---so-called kernel eigenmap methods. These methods have had some
successes, but their applicability is limited because they are not robust to
noise. To address this limitation, we look at two-manifold problems, in which
we simultaneously reconstruct two related manifolds, each representing a
different view of the same data. By solving these interconnected learning
problems together and allowing information to flow between them, two-manifold
algorithms are able to succeed where a non-integrated approach would fail: each
view allows us to suppress noise in the other, reducing bias in the same way
that an instrumental variable allows us to remove bias in a {linear}
dimensionality reduction problem. We propose a class of algorithms for
two-manifold problems, based on spectral decomposition of cross-covariance
operators in Hilbert space. Finally, we discuss situations where two-manifold
problems are useful, and demonstrate that solving a two-manifold problem can
aid in learning a nonlinear dynamical system from limited data.
| [
"['Byron Boots' 'Geoffrey J. Gordon']",
"Byron Boots and Geoffrey J. Gordon"
] |
cs.LG math.ST stat.ML stat.TH | null | 1112.6411 | null | null | http://arxiv.org/pdf/1112.6411v1 | 2011-12-29T20:35:40Z | 2011-12-29T20:35:40Z | High-dimensional Sparse Inverse Covariance Estimation using Greedy
Methods | In this paper we consider the task of estimating the non-zero pattern of the
sparse inverse covariance matrix of a zero-mean Gaussian random vector from a
set of iid samples. Note that this is also equivalent to recovering the
underlying graph structure of a sparse Gaussian Markov Random Field (GMRF). We
present two novel greedy approaches to solving this problem. The first
estimates the non-zero covariates of the overall inverse covariance matrix
using a series of global forward and backward greedy steps. The second
estimates the neighborhood of each node in the graph separately, again using
greedy forward and backward steps, and combines the intermediate neighborhoods
to form an overall estimate. The principal contribution of this paper is a
rigorous analysis of the sparsistency, or consistency in recovering the
sparsity pattern of the inverse covariance matrix. Surprisingly, we show that
both the local and global greedy methods learn the full structure of the model
with high probability given just $O(d\log(p))$ samples, which is a
\emph{significant} improvement over state of the art $\ell_1$-regularized
Gaussian MLE (Graphical Lasso) that requires $O(d^2\log(p))$ samples. Moreover,
the restricted eigenvalue and smoothness conditions imposed by our greedy
methods are much weaker than the strong irrepresentable conditions required by
the $\ell_1$-regularization based methods. We corroborate our results with
extensive simulations and examples, comparing our local and global greedy
methods to the $\ell_1$-regularized Gaussian MLE as well as the Neighborhood
Greedy method to that of nodewise $\ell_1$-regularized linear regression
(Neighborhood Lasso).
| [
"Christopher C. Johnson, Ali Jalali and Pradeep Ravikumar",
"['Christopher C. Johnson' 'Ali Jalali' 'Pradeep Ravikumar']"
] |
cs.LG | null | 1201.0292 | null | null | http://arxiv.org/pdf/1201.0292v1 | 2011-12-31T17:29:08Z | 2011-12-31T17:29:08Z | T-Learning | Traditional Reinforcement Learning (RL) has focused on problems involving
many states and few actions, such as simple grid worlds. Most real world
problems, however, are of the opposite type, involving few relevant states and
many actions. For example, to return home from a conference, humans identify
only few subgoal states such as lobby, taxi, airport etc. Each valid behavior
connecting two such states can be viewed as an action, and there are trillions
of them. Assuming the subgoal identification problem is already solved, the
quality of any RL method---in real-world settings---depends less on how well it
scales with the number of states than on how well it scales with the number of
actions. This is where our new method T-Learning excels, by evaluating the
relatively few possible transits from one state to another in a
policy-independent way, rather than a huge number of state-action pairs, or
states in traditional policy-dependent ways. Illustrative experiments
demonstrate that performance improvements of T-Learning over Q-learning can be
arbitrarily large.
| [
"Vincent Graziano, Faustino Gomez, Mark Ring, Juergen Schmidhuber",
"['Vincent Graziano' 'Faustino Gomez' 'Mark Ring' 'Juergen Schmidhuber']"
] |
math.OC cs.LG math.ST stat.ML stat.TH | 10.1007/978-3-642-28551-6_31 | 1201.0341 | null | null | http://arxiv.org/abs/1201.0341v1 | 2012-01-01T09:05:33Z | 2012-01-01T09:05:33Z | Collaborative Filtering via Group-Structured Dictionary Learning | Structured sparse coding and the related structured dictionary learning
problems are novel research areas in machine learning. In this paper we present
a new application of structured dictionary learning for collaborative filtering
based recommender systems. Our extensive numerical experiments demonstrate that
the presented technique outperforms its state-of-the-art competitors and has
several advantages over approaches that do not put structured constraints on
the dictionary elements.
| [
"['Zoltan Szabo' 'Barnabas Poczos' 'Andras Lorincz']",
"Zoltan Szabo, Barnabas Poczos, Andras Lorincz"
] |
cs.LG cs.MS | null | 1201.0490 | null | null | http://arxiv.org/pdf/1201.0490v4 | 2018-06-05T13:41:07Z | 2012-01-02T16:42:40Z | Scikit-learn: Machine Learning in Python | Scikit-learn is a Python module integrating a wide range of state-of-the-art
machine learning algorithms for medium-scale supervised and unsupervised
problems. This package focuses on bringing machine learning to non-specialists
using a general-purpose high-level language. Emphasis is put on ease of use,
performance, documentation, and API consistency. It has minimal dependencies
and is distributed under the simplified BSD license, encouraging its use in
both academic and commercial settings. Source code, binaries, and documentation
can be downloaded from http://scikit-learn.org.
| [
"Fabian Pedregosa, Ga\\\"el Varoquaux, Alexandre Gramfort, Vincent\n Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Andreas M\\\"uller,\n Joel Nothman, Gilles Louppe, Peter Prettenhofer, Ron Weiss, Vincent Dubourg,\n Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher,\n Matthieu Perrot, \\'Edouard Duchesnay",
"['Fabian Pedregosa' 'Gaël Varoquaux' 'Alexandre Gramfort' 'Vincent Michel'\n 'Bertrand Thirion' 'Olivier Grisel' 'Mathieu Blondel' 'Andreas Müller'\n 'Joel Nothman' 'Gilles Louppe' 'Peter Prettenhofer' 'Ron Weiss'\n 'Vincent Dubourg' 'Jake Vanderplas' 'Alexandre Passos' 'David Cournapeau'\n 'Matthieu Brucher' 'Matthieu Perrot' 'Édouard Duchesnay']"
] |
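A minimal usage example of the estimator API the abstract above describes; this is written against the modern scikit-learn API (which postdates the paper), and the dataset and pipeline choices are purely illustrative.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Fit a scaled SVM classifier and report held-out accuracy.
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC()).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))
```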
stat.ML cs.LG | null | 1201.0610 | null | null | http://arxiv.org/pdf/1201.0610v1 | 2012-01-03T11:29:17Z | 2012-01-03T11:29:17Z | Random Forests for Metric Learning with Implicit Pairwise Position
Dependence | Metric learning makes it plausible to learn distances for complex
distributions of data from labeled data. However, to date, most metric learning
methods are based on a single Mahalanobis metric, which cannot handle
heterogeneous data well. Those that learn multiple metrics throughout the space
have demonstrated superior accuracy, but at the cost of computational
efficiency. Here, we take a new angle to the metric learning problem and learn
a single metric that is able to implicitly adapt its distance function
throughout the feature space. This metric adaptation is accomplished by using a
random forest-based classifier to underpin the distance function and
incorporate both absolute pairwise position and standard relative position into
the representation. We have implemented and tested our method against state of
the art global and multi-metric methods on a variety of data sets. Overall, the
proposed method outperforms both types of methods in terms of accuracy
(consistently ranked first) and is an order of magnitude faster than state of
the art multi-metric methods (16x faster in the worst case).
| [
"Caiming Xiong, David Johnson, Ran Xu and Jason J. Corso",
"['Caiming Xiong' 'David Johnson' 'Ran Xu' 'Jason J. Corso']"
] |
stat.ML cs.LG stat.ME | 10.1214/12-STS391 | 1201.0794 | null | null | http://arxiv.org/abs/1201.0794v2 | 2013-01-07T13:43:13Z | 2012-01-04T00:43:53Z | Sparse Nonparametric Graphical Models | We present some nonparametric methods for graphical modeling. In the discrete
case, where the data are binary or drawn from a finite alphabet, Markov random
fields are already essentially nonparametric, since the cliques can take only a
finite number of values. Continuous data are different. The Gaussian graphical
model is the standard parametric model for continuous data, but it makes
distributional assumptions that are often unrealistic. We discuss two
approaches to building more flexible graphical models. One allows arbitrary
graphs and a nonparametric extension of the Gaussian; the other uses kernel
density estimation and restricts the graphs to trees and forests. Examples of
both methods are presented. We also discuss possible future research directions
for nonparametric graphical modeling.
| [
"John Lafferty, Han Liu, Larry Wasserman",
"['John Lafferty' 'Han Liu' 'Larry Wasserman']"
] |
cs.LG | null | 1201.0838 | null | null | http://arxiv.org/pdf/1201.0838v2 | 2012-04-05T06:48:35Z | 2012-01-04T07:07:06Z | A Topic Modeling Toolbox Using Belief Propagation | Latent Dirichlet allocation (LDA) is an important hierarchical Bayesian model
for probabilistic topic modeling, which attracts worldwide interest and
touches on many important applications in text mining, computer vision and
computational biology. This paper introduces a topic modeling toolbox (TMBP)
based on the belief propagation (BP) algorithms. TMBP toolbox is implemented by
MEX C++/Matlab/Octave for either Windows 7 or Linux. Compared with existing
topic modeling packages, the novelty of this toolbox lies in the BP algorithms
for learning LDA-based topic models. The current version includes BP algorithms
for latent Dirichlet allocation (LDA), author-topic models (ATM), relational
topic models (RTM), and labeled LDA (LaLDA). This toolbox is an ongoing project
and more BP-based algorithms for various topic models will be added in the near
future. Interested users may also extend BP algorithms for learning more
complicated topic models. The source codes are freely available under the GNU
General Public Licence, Version 1.0 at https://mloss.org/software/view/399/.
| [
"Jia Zeng",
"['Jia Zeng']"
] |
stat.ML cs.LG | 10.1007/978-3-642-13312-1_46 | 1201.0959 | null | null | http://arxiv.org/abs/1201.0959v1 | 2012-01-04T18:39:37Z | 2012-01-04T18:39:37Z | Constrained variable clustering and the best basis problem in functional
data analysis | Functional data analysis involves data described by regular functions rather
than by a finite number of real valued variables. While some robust data
analysis methods can be applied directly to the very high dimensional vectors
obtained from a fine grid sampling of functional data, all methods benefit from
a prior simplification of the functions that reduces the redundancy induced by
the regularity. In this paper we propose to use a clustering approach that
targets variables rather than individuals to design a piecewise constant
representation of a set of functions. The contiguity constraint induced by the
functional nature of the variables allows a polynomial complexity algorithm to
give the optimal solution.
| [
"['Fabrice Rossi' 'Yves Lechevallier']",
"Fabrice Rossi (LTCI), Yves Lechevallier (INRIA Rocquencourt / INRIA\n Sophia Antipolis)"
] |
stat.ML cs.LG | 10.1007/978-3-540-88045-5 | 1201.0963 | null | null | http://arxiv.org/abs/1201.0963v1 | 2012-01-04T18:45:23Z | 2012-01-04T18:45:23Z | Clustering Dynamic Web Usage Data | Most classification methods are based on the assumption that data conforms to
a stationary distribution. The machine learning domain currently suffers from a
lack of classification techniques that are able to detect the occurrence of a
change in the underlying data distribution. Ignoring possible changes in the
underlying concept, also known as concept drift, may degrade the performance of
the classification model. Often these changes make the model inconsistent and
regular updates become necessary. Taking the temporal dimension into account
during the analysis of Web usage data is a necessity, since the way a site is
visited may indeed evolve due to modifications in the structure and content of
the site, or even due to changes in the behavior of certain user groups. One
solution to this problem, proposed in this article, is to update models using
summaries obtained by means of an evolutionary approach based on an intelligent
clustering approach. We carry out various clustering strategies that are
applied on time sub-periods. To validate our approach we apply two external
evaluation criteria which compare different partitions from the same data set.
Our experiments show that the proposed approach is efficient to detect the
occurrence of changes.
| [
"Alzennyr Da Silva (INRIA Rocquencourt / INRIA Sophia Antipolis), Yves\n Lechevallier (INRIA Rocquencourt / INRIA Sophia Antipolis), Fabrice Rossi\n (INRIA Rocquencourt / INRIA Sophia Antipolis), Francisco De A. T. De Carvahlo\n (CIn)",
"['Alzennyr Da Silva' 'Yves Lechevallier' 'Fabrice Rossi'\n 'Francisco De A. T. De Carvahlo']"
] |
cs.LG physics.data-an stat.ME | null | 1201.1384 | null | null | http://arxiv.org/pdf/1201.1384v2 | 2012-12-12T06:27:55Z | 2012-01-06T10:15:37Z | A Thermodynamical Approach for Probability Estimation | The issue of discrete probability estimation for samples of small size is
addressed in this study. The maximum likelihood method often suffers from
over-fitting when insufficient data is available. Although the Bayesian
approach can avoid over-fitting by using prior distributions, it still has
problems with objective analysis. In response to these drawbacks, a new
theoretical framework based on thermodynamics, where energy and temperature are
introduced, was developed. Entropy and likelihood are placed at the center of
this method. The key principle of inference for probability mass functions is
the minimum free energy, which is shown to unify the two principles of maximum
likelihood and maximum entropy. Our method can robustly estimate probability
functions from small size data.
| [
"Takashi Isozaki",
"['Takashi Isozaki']"
] |
stat.ML cs.LG | null | 1201.1450 | null | null | http://arxiv.org/pdf/1201.1450v1 | 2012-01-06T16:45:57Z | 2012-01-06T16:45:57Z | The Interaction of Entropy-Based Discretization and Sample Size: An
Empirical Study | An empirical investigation of the interaction of sample size and
discretization - in this case the entropy-based method CAIM (Class-Attribute
Interdependence Maximization) - was undertaken to evaluate the impact and
potential bias introduced into data mining performance metrics due to variation
in sample size as it impacts the discretization process. Of particular interest
was the effect of discretizing within cross-validation folds as opposed to
discretizing outside of them. Previous publications have suggested that discretizing
externally can bias performance results; however, a thorough review of the
literature found no empirical evidence to support such an assertion. This
investigation involved construction of over 117,000 models on seven distinct
datasets from the UCI (University of California-Irvine) Machine Learning
Library and multiple modeling methods across a variety of configurations of
sample size and discretization, with each unique "setup" being independently
replicated ten times. The analysis revealed a significant optimistic bias as
sample sizes decreased and discretization was employed. The study also revealed
that there may be a relationship between the interaction that produces such
bias and the numbers and types of predictor attributes, extending the "curse of
dimensionality" concept from feature selection into the discretization realm.
Directions for further exploration are laid out, as well some general
guidelines about the proper application of discretization in light of these
results.
| [
"Casey Bennett",
"['Casey Bennett']"
] |
cs.LG stat.ME stat.ML | null | 1201.1587 | null | null | http://arxiv.org/pdf/1201.1587v3 | 2012-03-21T06:31:53Z | 2012-01-07T21:15:32Z | Feature Selection via Regularized Trees | We propose a tree regularization framework, which enables many tree models to
perform feature selection efficiently. The key idea of the regularization
framework is to penalize selecting a new feature for splitting when its gain
(e.g. information gain) is similar to the features used in previous splits. The
regularization framework is applied on random forest and boosted trees here,
and can be easily applied to other tree models. Experimental studies show that
the regularized trees can select high-quality feature subsets with regard to
both strong and weak classifiers. Because tree models can naturally deal with
categorical and numerical variables, missing values, different scales between
variables, interactions and nonlinearities etc., the tree regularization
framework provides an effective and efficient feature selection solution for
many practical problems.
| [
"Houtao Deng and George Runger",
"['Houtao Deng' 'George Runger']"
] |
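A pseudocode-level sketch of the penalized split selection in the abstract above: the gain of a feature not yet used in previous splits is discounted by a factor 0 < λ ≤ 1. This simplifies the paper's rule, which also considers features whose gain is merely *similar* to already-used ones; names and the demo values are my own.

```python
def regularized_gain(gain, feature, used_features, lam=0.7):
    # Discount the gain of features not already used in previous splits.
    return gain if feature in used_features else lam * gain

def choose_split(candidates, used_features, lam=0.7):
    # candidates: iterable of (feature, gain) pairs evaluated at a tree node.
    best = max(candidates, key=lambda fg: regularized_gain(fg[1], fg[0], used_features, lam))
    used_features.add(best[0])
    return best

used = set()
print(choose_split([("f1", 0.30), ("f2", 0.35)], used))  # both new, so both penalized; f2 wins
print(choose_split([("f1", 0.30), ("f3", 0.32)], used))  # f3 is new and penalized; f1 loses anyway? no: 0.30*0.7 < 0.32*0.7, f3 wins
```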
cs.LG | 10.4156/AISS.vol3.issue9.31 | 1201.1670 | null | null | http://arxiv.org/abs/1201.1670v1 | 2012-01-08T23:59:27Z | 2012-01-08T23:59:27Z | Customers Behavior Modeling by Semi-Supervised Learning in Customer
Relationship Management | Leveraging the power of increasing amounts of data to analyze customer base
for attracting and retaining the most valuable customers is a major problem
facing companies in this information age. Data mining technologies extract
hidden information and knowledge from large data stored in databases or data
warehouses, thereby supporting the corporate decision making process. CRM uses
data mining (one of the elements of CRM) techniques to interact with customers.
This study investigates the use of a technique, semi-supervised learning, for
the management and analysis of customer-related data warehouse and information.
The idea of semi-supervised learning is to learn not only from the labeled
training data, but to exploit also the structural information in additionally
available unlabeled data. The proposed semi-supervised method is a model built
by means of a feed-forward neural network trained with a back propagation
algorithm (a multi-layer perceptron) in order to predict the category of
unknown (potential) customers. In addition, this technique can be used with
Rapid Miner tools for both labeled and unlabeled data.
| [
"['Siavash Emtiyaz' 'MohammadReza Keyvanpour']",
"Siavash Emtiyaz, MohammadReza Keyvanpour"
] |
cs.IT cs.LG math.IT | null | 1201.2056 | null | null | http://arxiv.org/pdf/1201.2056v1 | 2012-01-10T14:10:30Z | 2012-01-10T14:10:30Z | Adaptive Context Tree Weighting | We describe an adaptive context tree weighting (ACTW) algorithm, as an
extension to the standard context tree weighting (CTW) algorithm. Unlike the
standard CTW algorithm, which weights all observations equally regardless of
the depth, ACTW gives increasing weight to more recent observations, aiming to
improve performance in cases where the input sequence is from a non-stationary
distribution. Data compression results show ACTW variants improving over CTW on
merged files from standard compression benchmark tests while never being
significantly worse on any individual file.
| [
"[\"Alexander O'Neill\" 'Marcus Hutter' 'Wen Shao' 'Peter Sunehag']",
"Alexander O'Neill and Marcus Hutter and Wen Shao and Peter Sunehag"
] |
cs.LG | null | 1201.2173 | null | null | http://arxiv.org/pdf/1201.2173v1 | 2012-01-10T11:03:42Z | 2012-01-10T11:03:42Z | Automatic Detection of Diabetes Diagnosis using Feature Weighted Support
Vector Machines based on Mutual Information and Modified Cuckoo Search | Diabetes is a major health problem in both developing and developed countries
and its incidence is rising dramatically. In this study, we investigate a novel
automatic approach to diagnose Diabetes disease based on Feature Weighted
Support Vector Machines (FW-SVMs) and Modified Cuckoo Search (MCS). The
proposed model consists of three stages: Firstly, PCA is applied to select an
optimal subset of features out of the set of all features. Secondly, Mutual
Information is employed to construct the FWSVM by weighting different features
based on their degree of importance. Finally, since parameter selection plays a
vital role in classification accuracy of SVMs, MCS is applied to select the
best parameter values. The proposed MI-MCS-FWSVM method obtains 93.58% accuracy
on UCI dataset. The experimental results demonstrate that our method
outperforms the previous methods by not only giving more accurate results but
also significantly speeding up the classification procedure.
| [
"['Davar Giveki' 'Hamid Salimi' 'GholamReza Bahmanyar' 'Younes Khademian']",
"Davar Giveki, Hamid Salimi, GholamReza Bahmanyar, Younes Khademian"
] |
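To illustrate the mutual-information feature-weighting step in the abstract above (without the paper's PCA stage or the Modified Cuckoo Search parameter tuning), here is a sketch that scales each feature by its estimated mutual information with the label before fitting an SVM; the dataset is a stand-in, since the UCI Pima diabetes data is not bundled with scikit-learn.

```python
from sklearn.datasets import load_breast_cancer  # stand-in for the UCI diabetes data
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
w = mutual_info_classif(X, y, random_state=0)
w /= w.max()                                    # feature weights in [0, 1] by importance
print(cross_val_score(SVC(), X * w, y, cv=5).mean())
```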
cs.LG | null | 1201.2416 | null | null | http://arxiv.org/pdf/1201.2416v1 | 2012-01-11T21:03:55Z | 2012-01-11T21:03:55Z | Stochastic Low-Rank Kernel Learning for Regression | We present a novel approach to learn a kernel-based regression function. It
is based on the use of conical combinations of data-based parameterized kernels
and on a new stochastic convex optimization procedure of which we establish
convergence guarantees. The overall learning procedure has the nice properties
that a) the learned conical combination is automatically designed to perform
the regression task at hand and b) the updates implicated by the optimization
procedure are quite inexpensive. In order to shed light on the appositeness of
our learning strategy, we present empirical results from experiments conducted
on various benchmark datasets.
| [
"Pierre Machart (LIF), Thomas Peel (LIF, LATP), Liva Ralaivola (LIF),\n Sandrine Anthoine (LATP), Herv\\'e Glotin (LSIS)",
"['Pierre Machart' 'Thomas Peel' 'Liva Ralaivola' 'Sandrine Anthoine'\n 'Hervé Glotin']"
] |
cs.LG stat.ML | null | 1201.2555 | null | null | http://arxiv.org/pdf/1201.2555v2 | 2012-09-05T12:23:38Z | 2012-01-12T13:08:27Z | Sparse Reward Processes | We introduce a class of learning problems where the agent is presented with a
series of tasks. Intuitively, if there is a relation among those tasks, then the
information gained during execution of one task has value for the execution of
another task. Consequently, the agent is intrinsically motivated to explore its
environment beyond the degree necessary to solve the current task it has at
hand. We develop a decision theoretic setting that generalises standard
reinforcement learning tasks and captures this intuition. More precisely, we
consider a multi-stage stochastic game between a learning agent and an
opponent. We posit that the setting is a good model for the problem of
life-long learning in uncertain environments, where while resources must be
spent learning about currently important tasks, there is also the need to
allocate effort towards learning about aspects of the world which are not
relevant at the moment. This is due to the fact that unpredictable future
events may lead to a change of priorities for the decision maker. Thus, in some
sense, the model "explains" the necessity of curiosity. Apart from introducing
the general formalism, the paper provides algorithms. These are evaluated
experimentally in some exemplary domains. In addition, performance bounds are
proven for some cases of this problem.
| [
"['Christos Dimitrakakis']",
"Christos Dimitrakakis"
] |
cs.LG cs.NI | null | 1201.2575 | null | null | http://arxiv.org/pdf/1201.2575v2 | 2012-03-04T23:06:41Z | 2012-01-12T14:28:23Z | Joint Approximation of Information and Distributed Link-Scheduling
Decisions in Wireless Networks | For a large multi-hop wireless network, nodes are preferable to make
distributed and localized link-scheduling decisions with only interactions
among a small number of neighbors. However, for a slowly decaying channel and
densely populated interferers, a small size neighborhood often results in
nontrivial link outages and is thus insufficient for making optimal scheduling
decisions. A question arises how to deal with the information outside a
neighborhood in distributed link-scheduling. In this work, we develop joint
approximation of information and distributed link scheduling. We first apply
machine learning approaches to model distributed link-scheduling with complete
information. We then characterize the information outside a neighborhood in
form of residual interference as a random loss variable. The loss variable is
further characterized by either a Mean Field approximation or a normal
distribution based on the Lyapunov central limit theorem. The approximated
information outside a neighborhood is incorporated in a factor graph. This
results in joint approximation and distributed link-scheduling in an iterative
fashion. Link-scheduling decisions are first made at each individual node based
on the approximated loss variables. Loss variables are then updated and used
for next link-scheduling decisions. The algorithm repeats between these two
phases until convergence. Interactive iterations among these variables are
implemented with a message-passing algorithm over a factor graph. Simulation
results show that using learned information outside a neighborhood jointly with
distributed link-scheduling reduces the outage probability close to zero even
for a small neighborhood.
| [
"['Sung-eok Jeon' 'Chuanyi Ji']",
"Sung-eok Jeon, and Chuanyi Ji"
] |
cs.CV cs.LG | 10.1109/TPAMI.2014.2313126 | 1201.2605 | null | null | http://arxiv.org/abs/1201.2605v2 | 2012-07-02T12:42:01Z | 2012-01-12T16:09:10Z | Autonomous Cleaning of Corrupted Scanned Documents - A Generative
Modeling Approach | We study the task of cleaning scanned text documents that are strongly
corrupted by dirt such as manual line strokes, spilled ink etc. We aim at
autonomously removing dirt from a single letter-size page based only on the
information the page contains. Our approach, therefore, has to learn character
representations without supervision and requires a mechanism to distinguish
learned representations from irregular patterns. To learn character
representations, we use a probabilistic generative model parameterizing pattern
features, feature variances, the features' planar arrangements, and pattern
frequencies. The latent variables of the model describe pattern class, pattern
position, and the presence or absence of individual pattern features. The model
parameters are optimized using a novel variational EM approximation. After
learning, the parameters represent, independently of their absolute position,
planar feature arrangements and their variances. A quality measure defined
based on the learned representation then allows for an autonomous
discrimination between regular character patterns and the irregular patterns
making up the dirt. The irregular patterns can thus be removed to clean the
document. For a full Latin alphabet we found that a single page does not
contain sufficiently many character examples. However, even if heavily
corrupted by dirt, we show that a page containing a lower number of character
types can efficiently and autonomously be cleaned solely based on the
structural regularity of the characters it contains. In different examples
using characters from different alphabets, we demonstrate generality of the
approach and discuss its implications for future developments.
| [
"Zhenwen Dai and J\\\"org L\\\"ucke",
"['Zhenwen Dai' 'Jörg Lücke']"
] |
cs.LG | null | 1201.2902 | null | null | http://arxiv.org/pdf/1201.2902v1 | 2012-01-13T17:46:17Z | 2012-01-13T17:46:17Z | Acoustical Quality Assessment of the Classroom Environment | Teaching is one of the most important factors affecting any education system.
Many research efforts have been conducted to facilitate the presentation modes
used by instructors in classrooms as well as provide means for students to
review lectures through web browsers. Other studies have been made to provide
acoustical design recommendations for classrooms, such as room size and
reverberation times. However, using acoustical features of classrooms as a way
to provide education systems with feedback about the learning process was not
thoroughly investigated in any of these studies. We propose a system that
extracts different sound features of students and instructors, and then uses
machine learning techniques to evaluate the acoustical quality of any learning
environment. We infer conclusions about the students' satisfaction with the
quality of lectures. Using classifiers instead of surveys and other subjective
measures can simplify and speed up such experiments, enabling them to be
performed continuously. We believe our system enables education systems to
continuously review and improve their teaching strategies and acoustical
quality of classrooms.
| [
"['Marian George' 'Moustafa Youssef']",
"Marian George, Moustafa Youssef"
] |
cs.LG cs.DB | null | 1201.2925 | null | null | http://arxiv.org/pdf/1201.2925v2 | 2012-03-12T20:23:24Z | 2012-01-13T19:54:27Z | Combining Heterogeneous Classifiers for Relational Databases | Most enterprise data is distributed in multiple relational databases with
expert-designed schema. Using traditional single-table machine learning
techniques over such data not only incurs a computational penalty for converting
to a 'flat' form (mega-join), but also loses the human-specified semantic
information present in the relations. In this paper, we present a practical,
two-phase hierarchical meta-classification algorithm for relational databases
with a semantic divide and conquer approach. We propose a recursive, prediction
aggregation technique over heterogeneous classifiers applied on individual
database tables. The proposed algorithm was evaluated on three diverse
datasets, namely TPCH, PKDD and UCI benchmarks and showed considerable
reduction in classification time without any loss of prediction accuracy.
| [
"Geetha Manjunatha, M Narasimha Murty, Dinkar Sitaram",
"['Geetha Manjunatha' 'M Narasimha Murty' 'Dinkar Sitaram']"
] |
cs.NE cs.LG cs.RO | null | 1201.3249 | null | null | http://arxiv.org/pdf/1201.3249v1 | 2012-01-16T13:19:55Z | 2012-01-16T13:19:55Z | A Spiking Neural Learning Classifier System | Learning Classifier Systems (LCS) are population-based reinforcement learners
used in a wide variety of applications. This paper presents an LCS where each
traditional rule is represented by a spiking neural network, a type of network
with dynamic internal state. We employ a constructivist model of growth of both
neurons and dendrites that realise flexible learning by evolving structures of
sufficient complexity to solve a well-known problem involving continuous,
real-valued inputs. Additionally, we extend the system to enable temporal state
decomposition. By allowing our LCS to chain together sequences of heterogeneous
actions into macro-actions, it is shown to perform optimally in a problem where
traditional methods can fail to find a solution in a reasonable amount of time.
Our final system is tested on a simulated robotics platform.
| [
"Gerard Howard and Larry Bull and Pier-Luca Lanzi",
"['Gerard Howard' 'Larry Bull' 'Pier-Luca Lanzi']"
] |
stat.ML cs.LG | null | 1201.3382 | null | null | http://arxiv.org/pdf/1201.3382v2 | 2012-04-03T22:48:52Z | 2012-01-16T22:00:07Z | Spike-and-Slab Sparse Coding for Unsupervised Feature Discovery | We consider the problem of using a factor model we call {\em spike-and-slab
sparse coding} (S3C) to learn features for a classification task. The S3C model
resembles both the spike-and-slab RBM and sparse coding. Since exact inference
in this model is intractable, we derive a structured variational inference
procedure and employ a variational EM training algorithm. Prior work on
approximate inference for this model has not prioritized the ability to exploit
parallel architectures and scale to enormous problem sizes. We present an
inference procedure appropriate for use with GPUs which allows us to
dramatically increase both the training set size and the number of latent
factors.
We demonstrate that this approach improves upon the supervised learning
capabilities of both sparse coding and the ssRBM on the CIFAR-10 dataset. We
evaluate our approach's potential for semi-supervised learning on subsets of
CIFAR-10. We demonstrate state-of-the-art self-taught learning performance on
the STL-10 dataset and use our method to win the NIPS 2011 Workshop on
Challenges In Learning Hierarchical Models' Transfer Learning Challenge.
| [
"['Ian J. Goodfellow' 'Aaron Courville' 'Yoshua Bengio']",
"Ian J. Goodfellow and Aaron Courville and Yoshua Bengio"
] |
cs.CV cs.LG stat.ML | null | 1201.3674 | null | null | http://arxiv.org/pdf/1201.3674v1 | 2012-01-18T00:46:12Z | 2012-01-18T00:46:12Z | On the Lagrangian Biduality of Sparsity Minimization Problems | Recent results in Compressive Sensing have shown that, under certain
conditions, the solution to an underdetermined system of linear equations with
sparsity-based regularization can be accurately recovered by solving convex
relaxations of the original problem. In this work, we present a novel
primal-dual analysis on a class of sparsity minimization problems. We show that
the Lagrangian bidual (i.e., the Lagrangian dual of the Lagrangian dual) of the
sparsity minimization problems can be used to derive interesting convex
relaxations: the bidual of the $\ell_0$-minimization problem is the
$\ell_1$-minimization problem; and the bidual of the $\ell_{0,1}$-minimization
problem for enforcing group sparsity on structured data is the
$\ell_{1,\infty}$-minimization problem. The analysis provides a means to
compute per-instance non-trivial lower bounds on the (group) sparsity of the
desired solutions. In a real-world application, the bidual relaxation improves
the performance of a sparsity-based classification framework applied to robust
face recognition.
| [
"Dheeraj Singaraju, Ehsan Elhamifar, Roberto Tron, Allen Y. Yang, S.\n Shankar Sastry",
"['Dheeraj Singaraju' 'Ehsan Elhamifar' 'Roberto Tron' 'Allen Y. Yang'\n 'S. Shankar Sastry']"
] |
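The two biduality relations above can be written compactly. The following LaTeX sketch states them under an added box constraint $\|x\|_\infty \le M$, which is an assumption introduced here for illustration (some bound of this kind is needed for the bidual to be nontrivial), not a detail quoted from the abstract:

```latex
% Sketch of the biduality relations described above (not verbatim from the paper).
\begin{align*}
  \min_{x}\ \|x\|_{0}\ \text{ s.t. }\ Ax=b,\ \|x\|_{\infty}\le M
    \quad &\xrightarrow{\text{Lagrangian bidual}}\quad
  \min_{x}\ \tfrac{1}{M}\,\|x\|_{1}\ \text{ s.t. }\ Ax=b,\ \|x\|_{\infty}\le M, \\
  \min_{x}\ \|x\|_{0,1}\ \text{ s.t. }\ Ax=b,\ \|x\|_{\infty}\le M
    \quad &\xrightarrow{\text{Lagrangian bidual}}\quad
  \min_{x}\ \tfrac{1}{M}\,\|x\|_{1,\infty}\ \text{ s.t. }\ Ax=b,\ \|x\|_{\infty}\le M.
\end{align*}
```

By weak duality, the bidual's optimal value lower-bounds the primal's, which is where the per-instance (group) sparsity bounds mentioned above come from.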
stat.ML cs.LG math.OC | null | 1201.4002 | null | null | http://arxiv.org/pdf/1201.4002v1 | 2012-01-19T10:06:29Z | 2012-01-19T10:06:29Z | Adaptive Policies for Sequential Sampling under Incomplete Information
and a Cost Constraint | We consider the problem of sequential sampling from a finite number of
independent statistical populations to maximize the expected infinite horizon
average outcome per period, under a constraint that the expected average
sampling cost does not exceed an upper bound. The outcome distributions are not
known. We construct a class of consistent adaptive policies, under which the
average outcome converges with probability 1 to the true value under complete
information for all distributions with finite means. We also compare the rate
of convergence for various policies in this class using simulation.
| [
"Apostolos Burnetas and Odysseas Kanavetas",
"['Apostolos Burnetas' 'Odysseas Kanavetas']"
] |
cs.LG stat.ML | null | 1201.4714 | null | null | http://arxiv.org/pdf/1201.4714v1 | 2012-01-23T13:48:33Z | 2012-01-23T13:48:33Z | A metric learning perspective of SVM: on the relation of SVM and LMNN | Support Vector Machines, SVMs, and the Large Margin Nearest Neighbor
algorithm, LMNN, are two very popular learning algorithms with quite different
learning biases. In this paper we bring them into a unified view and show that
they have a much stronger relation than what is commonly thought. We analyze
SVMs from a metric learning perspective and cast them as a metric learning
problem, a view which helps us uncover the relations of the two algorithms. We
show that LMNN can be seen as learning a set of local SVM-like models in a
quadratic space. Along the way, inspired by the metric-based interpretation
of SVMs, we derive a novel variant of SVMs, epsilon-SVM, to which LMNN is even
more similar. We give a unified view of LMNN and the different SVM variants.
Finally, we provide preliminary experiments on a number of benchmark
datasets, which show that epsilon-SVM compares favorably with both
LMNN and SVM.
| [
"Huyen Do, Alexandros Kalousis, Jun Wang and Adam Woznica",
"['Huyen Do' 'Alexandros Kalousis' 'Jun Wang' 'Adam Woznica']"
] |
cs.AI cs.LG | null | 1201.4777 | null | null | http://arxiv.org/pdf/1201.4777v2 | 2013-02-28T20:22:47Z | 2012-01-23T17:25:34Z | A probabilistic methodology for multilabel classification | Multilabel classification is a relatively recent subfield of machine
learning. Unlike the classical approach, where instances are labeled with
only one category, in multilabel classification, an arbitrary number of
categories is chosen to label an instance. Due to the problem complexity (the
solution is one among an exponential number of alternatives), a common
workaround (the binary method) is frequently used: learning a binary classifier
for every category and combining them all afterwards. The independence
assumption underlying this solution is not realistic, and in this work we give examples where the
decisions for all the labels are not taken independently, and thus, a
supervised approach should learn those existing relationships among categories
to make a better classification. Therefore, we show here a generic methodology
that can improve the results obtained by a set of independent probabilistic
binary classifiers, by using a combination procedure with a classifier trained
on the co-occurrences of the labels. We present exhaustive experiments on
three different standard corpora of labeled documents (Reuters-21578,
Ohsumed-23 and RCV1), which show noticeable improvements in all of them
when our methodology is applied to three probabilistic base classifiers.
| [
"Alfonso E. Romero, Luis M. de Campos",
"['Alfonso E. Romero' 'Luis M. de Campos']"
] |
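A minimal sketch of the kind of two-stage scheme the abstract describes: independent probabilistic binary classifiers whose per-label scores are combined by a second classifier trained on label co-occurrences. scikit-learn and all names below are illustrative assumptions, not the paper's implementation:

```python
# Sketch: binary-relevance classifiers plus a co-occurrence-aware combiner.
# X: (n_samples, n_features); Y: 0/1 matrix of shape (n_samples, n_labels).
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_two_stage(X, Y):
    n_labels = Y.shape[1]
    # Stage 1: one independent probabilistic classifier per category.
    base = [LogisticRegression(max_iter=1000).fit(X, Y[:, j])
            for j in range(n_labels)]
    # Per-label probabilities become the features of the combiner.
    P = np.column_stack([clf.predict_proba(X)[:, 1] for clf in base])
    # Stage 2: each label's combiner sees *all* labels' scores, so it can
    # learn the co-occurrence relationships among categories.
    meta = [LogisticRegression(max_iter=1000).fit(P, Y[:, j])
            for j in range(n_labels)]
    return base, meta

def predict_two_stage(base, meta, X):
    P = np.column_stack([clf.predict_proba(X)[:, 1] for clf in base])
    return np.column_stack([clf.predict(P) for clf in meta])
```

In practice the stage-2 features should come from held-out (cross-validated) stage-1 predictions to avoid optimistic meta-training; that detail is omitted here for brevity.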
cs.NI cs.LG | null | 1201.4906 | null | null | http://arxiv.org/pdf/1201.4906v1 | 2012-01-24T02:45:58Z | 2012-01-24T02:45:58Z | Adaptive Shortest-Path Routing under Unknown and Stochastically Varying
Link States | We consider the adaptive shortest-path routing problem in wireless networks
under unknown and stochastically varying link states. In this problem, we aim
to optimize the quality of communication between a source and a destination
through adaptive path selection. Due to the randomness and uncertainties in the
network dynamics, the quality of each link varies over time according to a
stochastic process with unknown distributions. After a path is selected for
communication, the aggregated quality of all links on this path (e.g., total
path delay) is observed. The quality of each individual link is not observable.
We formulate this problem as a multi-armed bandit with dependent arms. We show
that by exploiting arm dependencies, a regret that is polynomial in the network
size can be achieved while maintaining the optimal logarithmic order in time.
This is in sharp contrast with the regret that grows exponentially with the
network size under a direct application of the classic MAB policies that ignore arm
dependencies. Furthermore, our results are obtained under a general model of
link-quality distributions (including heavy-tailed distributions) and find
applications in cognitive radio and ad hoc networks with unknown and dynamic
communication environments.
| [
"['Keqin Liu' 'Qing Zhao']",
"Keqin Liu, Qing Zhao"
] |
cs.LG cs.AI | 10.5120/677-952 | 1201.5217 | null | null | http://arxiv.org/abs/1201.5217v1 | 2012-01-25T09:44:06Z | 2012-01-25T09:44:06Z | Unsupervised Classification Using Immune Algorithm | Unsupervised classification algorithm based on clonal selection principle
named Unsupervised Clonal Selection Classification (UCSC) is proposed in this
paper. The proposed algorithm is data-driven and self-adaptive: it adjusts
its parameters to the data to make the classification operation as fast as
possible. The performance of UCSC is evaluated by comparing it with the
well-known K-means algorithm on several artificial and real-life data sets. The
experiments show that the proposed UCSC algorithm is more reliable and achieves
higher classification precision than traditional methods such as K-means.
| [
"['M. T. Al-Muallim' 'R. El-Kouatly']",
"M. T. Al-Muallim, R. El-Kouatly"
] |
cs.LG | null | 1201.5283 | null | null | http://arxiv.org/pdf/1201.5283v5 | 2013-07-26T05:03:51Z | 2012-01-24T04:09:54Z | An Efficient Primal-Dual Prox Method for Non-Smooth Optimization | We study the non-smooth optimization problems in machine learning, where both
the loss function and the regularizer are non-smooth functions. Previous
studies on efficient empirical loss minimization assume either a smooth loss
function or a strongly convex regularizer, making them unsuitable for
non-smooth optimization. We develop a simple yet efficient method for a family
of non-smooth optimization problems where the dual form of the loss function is
bilinear in primal and dual variables. We cast a non-smooth optimization
problem into a minimax optimization problem, and develop a primal-dual prox
method that solves the minimax optimization problem at a rate of $O(1/T)$
{assuming that the proximal step can be efficiently solved}, significantly
faster than a standard subgradient descent method that has an $O(1/\sqrt{T})$
convergence rate. Our empirical study verifies the efficiency of the proposed
method for various non-smooth optimization problems that arise ubiquitously in
machine learning by comparing it to the state-of-the-art first order methods.
| [
"['Tianbao Yang' 'Mehrdad Mahdavi' 'Rong Jin' 'Shenghuo Zhu']",
"Tianbao Yang, Mehrdad Mahdavi, Rong Jin, Shenghuo Zhu"
] |
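As a concrete (assumed) instance of the template above, the non-smooth problem $\min_x \|Ax-b\|_1 + \lambda\|x\|_1$ has the bilinear saddle form $\min_x \max_{\|y\|_\infty\le 1} y^\top(Ax-b) + \lambda\|x\|_1$, and a primal-dual prox iteration with averaged iterates looks roughly as follows; the step sizes and extrapolation step are standard choices for this class of methods, not details taken from the paper:

```python
# Sketch: primal-dual prox updates with iterate averaging for
#   min_x max_{||y||_inf <= 1}  y^T (A x - b) + lam * ||x||_1.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def primal_dual_prox(A, b, lam, T=1000):
    m, n = A.shape
    step = 0.9 / np.linalg.norm(A, 2)          # needs step^2 * ||A||^2 < 1
    x, y = np.zeros(n), np.zeros(m)
    x_sum, y_sum = np.zeros(n), np.zeros(m)
    for _ in range(T):
        # Primal prox step: soft-thresholding is the prox of lam * ||.||_1.
        x_new = soft_threshold(x - step * (A.T @ y), step * lam)
        # Dual prox step: gradient ascent, then projection onto ||y||_inf <= 1.
        y = np.clip(y + step * (A @ (2 * x_new - x) - b), -1.0, 1.0)
        x = x_new
        x_sum += x
        y_sum += y
    return x_sum / T, y_sum / T                # averaged iterates: O(1/T) rate
```

Each iteration costs two matrix-vector products plus cheap prox steps, which is where the advantage over an $O(1/\sqrt{T})$ subgradient method comes from.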
cs.LG stat.ML | 10.1007/s10618-012-0291-9 | 1201.5338 | null | null | http://arxiv.org/abs/1201.5338v2 | 2012-09-21T06:04:35Z | 2012-01-25T18:36:11Z | On Constrained Spectral Clustering and Its Applications | Constrained clustering has been well-studied for algorithms such as $K$-means
and hierarchical clustering. However, how to satisfy many constraints in these
algorithmic settings has been shown to be intractable. One alternative to
encode many constraints is to use spectral clustering, which remains a
developing area. In this paper, we propose a flexible framework for constrained
spectral clustering. In contrast to some previous efforts that implicitly
encode Must-Link and Cannot-Link constraints by modifying the graph Laplacian
or constraining the underlying eigenspace, we present a more natural and
principled formulation, which explicitly encodes the constraints as part of a
constrained optimization problem. Our method offers several practical
advantages: it can encode the degree of belief in Must-Link and Cannot-Link
constraints; it guarantees to lower-bound how well the given constraints are
satisfied using a user-specified threshold; it can be solved deterministically
in polynomial time through generalized eigendecomposition. Furthermore, by
inheriting the objective function from spectral clustering and encoding the
constraints explicitly, much of the existing analysis of unconstrained spectral
clustering techniques remains valid for our formulation. We validate the
effectiveness of our approach by empirical results on both artificial and real
datasets. We also demonstrate an innovative use of encoding a large number of
constraints: transfer learning via constraints.
| [
"['Xiang Wang' 'Buyue Qian' 'Ian Davidson']",
"Xiang Wang, Buyue Qian, Ian Davidson"
] |
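The computational core alluded to above, in rough form: encode Must-Link/Cannot-Link beliefs in a signed matrix Q and solve a generalized eigenvalue problem against the graph Laplacian. The sketch below captures the flavor only; the paper's exact formulation (normalization, the user-specified satisfaction threshold, and feasibility filtering of eigenvectors) is omitted, and the regularization of Q is an assumption made so the solver applies:

```python
# Sketch: constrained spectral embedding via a generalized eigenproblem.
# W: affinity matrix; Q: signed constraint matrix (Q_ij > 0 for Must-Link,
# Q_ij < 0 for Cannot-Link). Illustrative only, not the paper's exact method.
import numpy as np
from scipy.linalg import eigh

def constrained_spectral_embedding(W, Q, k=2):
    d = W.sum(axis=1)
    L = np.diag(d) - W                          # unnormalized graph Laplacian
    # Shift Q to be positive definite so eigh(a, b) applies (an assumption).
    shift = np.abs(np.linalg.eigvalsh(Q)).max() + 1e-6
    Q_pd = Q + shift * np.eye(len(Q))
    vals, vecs = eigh(L, Q_pd)                  # solves L v = lambda Q_pd v
    return vecs[:, :k]                          # smallest generalized eigenvectors
```

The key practical point carried over from the abstract: everything here is deterministic and polynomial-time, in contrast to satisfying many constraints exactly in K-means-style settings.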
cs.AI cs.LG cs.NE cs.SY math.OC | 10.1007/s00500-013-1044-4 | 1201.5604 | null | null | http://arxiv.org/abs/1201.5604v2 | 2015-01-25T15:34:57Z | 2012-01-26T18:54:42Z | Discrete and fuzzy dynamical genetic programming in the XCSF learning
classifier system | A number of representation schemes have been presented for use within
learning classifier systems, ranging from binary encodings to neural networks.
This paper presents results from an investigation into using discrete and fuzzy
dynamical system representations within the XCSF learning classifier system. In
particular, asynchronous random Boolean networks are used to represent the
traditional condition-action production system rules in the discrete case and
asynchronous fuzzy logic networks in the continuous-valued case. It is shown
to be possible to use self-adaptive, open-ended evolution to design an ensemble of
such dynamical systems within XCSF to solve a number of well-known test
problems.
| [
"['Richard J. Preen' 'Larry Bull']",
"Richard J. Preen and Larry Bull"
] |
cs.LG | null | 1201.6053 | null | null | http://arxiv.org/pdf/1201.6053v1 | 2012-01-29T16:23:54Z | 2012-01-29T16:23:54Z | A Comparison Between Data Mining Prediction Algorithms for Fault
Detection(Case study: Ahanpishegan co.) | In the current competitive world, industrial companies seek to manufacture
products of higher quality, which can be achieved by increasing reliability,
maintainability, and thus the availability of products. On the other hand,
improvement in the product lifecycle is necessary for achieving high reliability.
Typically, maintenance activities are aimed at reducing failures of industrial
machinery and at minimizing the consequences of such failures. Industrial
companies therefore try to improve their efficiency by using different fault detection
techniques. One strategy is to process and analyze previously generated data to
predict future failures. The purpose of this paper is to detect wasted parts
using different data mining algorithms and compare the accuracy of these
algorithms. A combination of thermal and physical characteristics has been used
and the algorithms were implemented on Ahanpishegan's current data to estimate
the availability of its produced parts.
Keywords: Data Mining, Fault Detection, Availability, Prediction Algorithms.
| [
"['Golriz Amooee' 'Behrouz Minaei-Bidgoli' 'Malihe Bagheri-Dehnavi']",
"Golriz Amooee, Behrouz Minaei-Bidgoli, Malihe Bagheri-Dehnavi"
] |
cs.HC cs.LG cs.SD | null | 1201.6251 | null | null | http://arxiv.org/pdf/1201.6251v1 | 2012-01-27T18:30:11Z | 2012-01-27T18:30:11Z | Real-time jam-session support system | We propose a method for the problem of real time chord accompaniment of
improvised music. Our implementation can learn an underlying structure of the
musical performance and predict the next chord. The system uses a Hidden Markov
Model to find the most probable chord sequence for the played melody, and a
Variable Order Markov Model is then used to a) learn the structure (if any) and
b) predict the next chord. We implemented our system in Java and Max/MSP and
evaluated it using objective (prediction accuracy) and subjective
(questionnaire) methods.
| [
"['Panagiotis Tigas']",
"Panagiotis Tigas"
] |
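The first stage described above is standard Viterbi decoding over an HMM whose hidden states are chords and whose observations are melody notes. A minimal sketch (the model tables are placeholders, not the system's learned parameters):

```python
# Sketch: Viterbi decoding of the most probable chord sequence for a melody.
# pi: (S,) initial chord probabilities; A: (S, S) chord transitions;
# B: (S, O) note-emission probabilities; obs: melody as observation indices.
import numpy as np

def viterbi(obs, pi, A, B):
    S, T = len(pi), len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])    # log-probabilities at t = 0
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)      # scores[i, j]: best path ending i -> j
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]                 # backtrack from the best final chord
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

The second stage (the Variable Order Markov Model that learns structure and predicts the next chord) would then be trained on the decoded chord sequence.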
cs.LG | null | 1201.6462 | null | null | http://arxiv.org/pdf/1201.6462v1 | 2012-01-31T07:46:08Z | 2012-01-31T07:46:08Z | Active Learning of Clustering with Side Information Using $\epsilon$-Smooth
Relative Regret Approximations | Clustering is considered an unsupervised learning setting, in which the goal
is to partition a collection of data points into disjoint clusters. Often a
bound $k$ on the number of clusters is given or assumed by the practitioner.
Many versions of this problem have been defined, most notably $k$-means and
$k$-median.
An underlying problem with the unsupervised nature of clustering is that of
determining a similarity function. One approach for alleviating this difficulty
is known as clustering with side information or, alternatively, semi-supervised
clustering. Here, the practitioner incorporates side information in the form of
"must be clustered" or "must be separated" labels for data point pairs. Each
such piece of information comes at a "query cost" (often involving human
response solicitation). The collection of labels is then incorporated in the
usual clustering algorithm as either strict or as soft constraints, possibly
adding a pairwise constraint penalty function to the chosen clustering
objective.
Our work is mostly related to clustering with side information. We ask how to
choose the pairs of data points. Our analysis gives rise to a method provably
better than simply choosing them uniformly at random. Roughly speaking, we show
that the distribution must be biased so that more weight is placed on pairs
incident to elements in smaller clusters in some optimal solution. Of course we
do not know the optimal solution, hence we don't know the bias. Using the
recently introduced method of $\epsilon$-smooth relative regret approximations of
Ailon, Begleiter and Ezra, we can show an iterative process that improves both
the clustering and the bias in tandem. The process provably converges to the
optimal solution faster (in terms of query cost) than an algorithm selecting
pairs uniformly.
| [
"['Nir Ailon' 'Ron Begleiter']",
"Nir Ailon and Ron Begleiter"
] |
cs.LG cs.CG math.FA stat.ML | null | 1201.6530 | null | null | http://arxiv.org/pdf/1201.6530v3 | 2012-03-26T10:56:00Z | 2012-01-31T12:59:50Z | Random Feature Maps for Dot Product Kernels | Approximating non-linear kernels using feature maps has gained a lot of
interest in recent years due to applications in reducing training and testing
times of SVM classifiers and other kernel based learning algorithms. We extend
this line of work and present low distortion embeddings for dot product kernels
into linear Euclidean spaces. We base our results on a classical result in
harmonic analysis characterizing all dot product kernels and use it to define
randomized feature maps into explicit low dimensional Euclidean spaces in which
the native dot product provides an approximation to the dot product kernel with
high confidence.
| [
"Purushottam Kar and Harish Karnick",
"['Purushottam Kar' 'Harish Karnick']"
] |
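For intuition, a sketch of one such randomized feature map for a dot product kernel $K(x,y)=\sum_{n\ge 0} a_n (x^\top y)^n$ with nonnegative Maclaurin coefficients $a_n$: sample a degree, take a product of random sign projections, and reweight so the features' inner product is unbiased for the kernel. The geometric degree distribution and truncation below are illustrative choices, not necessarily the paper's:

```python
# Sketch: random feature map for a dot product kernel K(x, y) = sum_n a_n <x, y>^n.
# Each output coordinate picks a degree n (prob 2^{-(n+1)}), multiplies n
# Rademacher projections, and rescales so that E[Z(x) @ Z(y)] = K(x, y).
import numpy as np

def random_maclaurin_features(X, coeffs, D=500, seed=0):
    """X: (m, d) data; coeffs[n] = a_n for n = 0..len(coeffs)-1; returns (m, D)."""
    rng = np.random.default_rng(seed)
    m, d = X.shape
    Z = np.empty((m, D))
    for j in range(D):
        n = min(rng.geometric(0.5) - 1, len(coeffs) - 1)  # sampled (truncated) degree
        p_n = 2.0 ** -(n + 1)
        proj = np.ones(m)
        for _ in range(n):                      # product of n random sign projections
            w = rng.choice([-1.0, 1.0], size=d)
            proj *= X @ w
        Z[:, j] = np.sqrt(coeffs[n] / p_n) * proj
    return Z / np.sqrt(D)
```

For example, $K(x,y)=\exp(x^\top y)$ has $a_n = 1/n!$; a linear model trained on the resulting features then approximates the corresponding kernel machine.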