categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (sequence) |
---|---|---|---|---|---|---|---|---|---|---|
cs.LG stat.ML | null | 1301.7393 | null | null | http://arxiv.org/pdf/1301.7393v1 | 2013-01-30T15:05:15Z | 2013-01-30T15:05:15Z | Mixture Representations for Inference and Learning in Boltzmann Machines | Boltzmann machines are undirected graphical models with two-state stochastic
variables, in which the logarithms of the clique potentials are quadratic
functions of the node states. They have been widely studied in the neural
computing literature, although their practical applicability has been limited
by the difficulty of finding an effective learning algorithm. One
well-established approach, known as mean field theory, represents the
stochastic distribution using a factorized approximation. However, the
corresponding learning algorithm often fails to find a good solution. We
conjecture that this is due to the implicit uni-modality of the mean field
approximation which is therefore unable to capture multi-modality in the true
distribution. In this paper we use variational methods to approximate the
stochastic distribution using multi-modal mixtures of factorized distributions.
We present results for both inference and learning to demonstrate the
effectiveness of this approach.
| [
"Neil D. Lawrence, Christopher M. Bishop, Michael I. Jordan",
"['Neil D. Lawrence' 'Christopher M. Bishop' 'Michael I. Jordan']"
] |
cs.LG stat.ML | null | 1301.7401 | null | null | http://arxiv.org/pdf/1301.7401v2 | 2015-05-16T23:17:06Z | 2013-01-30T15:05:55Z | An Experimental Comparison of Several Clustering and Initialization
Methods | We examine methods for clustering in high dimensions. In the first part of
the paper, we perform an experimental comparison between three batch clustering
algorithms: the Expectation-Maximization (EM) algorithm, a winner take all
version of the EM algorithm reminiscent of the K-means algorithm, and
model-based hierarchical agglomerative clustering. We learn naive-Bayes models
with a hidden root node, using high-dimensional discrete-variable data sets
(both real and synthetic). We find that the EM algorithm significantly
outperforms the other methods, and proceed to investigate the effect of various
initialization schemes on the final solution produced by the EM algorithm. The
initializations that we consider are (1) parameters sampled from an
uninformative prior, (2) random perturbations of the marginal distribution of
the data, and (3) the output of hierarchical agglomerative clustering. Although
the methods are substantially different, they lead to learned models that are
strikingly similar in quality.
| [
"['Marina Meila' 'David Heckerman']",
"Marina Meila, David Heckerman"
] |
cs.AI cs.LG | null | 1301.7403 | null | null | http://arxiv.org/pdf/1301.7403v1 | 2013-01-30T15:06:05Z | 2013-01-30T15:06:05Z | A Multivariate Discretization Method for Learning Bayesian Networks from
Mixed Data | In this paper we address the problem of discretization in the context of
learning Bayesian networks (BNs) from data containing both continuous and
discrete variables. We describe a new technique for *multivariate*
discretization, whereby each continuous variable is discretized while taking
into account its interaction with the other variables. The technique is based
on the use of a Bayesian scoring metric that scores the discretization policy
for a continuous variable given a BN structure and the observed data. Since the
metric is relative to the BN structure currently being evaluated, the
discretization of a variable needs to be dynamically adjusted as the BN
structure changes.
| [
"Stefano Monti, Gregory F. Cooper",
"['Stefano Monti' 'Gregory F. Cooper']"
] |
cs.LG stat.ML | null | 1301.7411 | null | null | http://arxiv.org/pdf/1301.7411v1 | 2013-01-30T15:06:43Z | 2013-01-30T15:06:43Z | On the Geometry of Bayesian Graphical Models with Hidden Variables | In this paper we investigate the geometry of the likelihood of the unknown
parameters in a simple class of Bayesian directed graphs with hidden variables.
This enables us, before any numerical algorithms are employed, to obtain
certain insights in the nature of the unidentifiability inherent in such
models, the way posterior densities will be sensitive to prior densities and
the typical geometrical form these posterior densities might take. Many of
these insights carry over into more complicated Bayesian networks with
systematic missing data.
| [
"['Raffaella Settimi' 'Jim Q. Smith']",
"Raffaella Settimi, Jim Q. Smith"
] |
cs.LG cs.AI stat.ML | null | 1301.7415 | null | null | http://arxiv.org/pdf/1301.7415v2 | 2015-05-16T23:27:23Z | 2013-01-30T15:07:02Z | Learning Mixtures of DAG Models | We describe computationally efficient methods for learning mixtures in which
each component is a directed acyclic graphical model (mixtures of DAGs or
MDAGs). We argue that simple search-and-score algorithms are infeasible for a
variety of problems, and introduce a feasible approach in which parameter and
structure search is interleaved and expected data is treated as real data. Our
approach can be viewed as a combination of (1) the Cheeseman--Stutz asymptotic
approximation for model posterior probability and (2) the
Expectation--Maximization algorithm. We evaluate our procedure for selecting
among MDAGs on synthetic and real examples.
| [
"['Bo Thiesson' 'Christopher Meek' 'David Maxwell Chickering'\n 'David Heckerman']",
"Bo Thiesson, Christopher Meek, David Maxwell Chickering, David\n Heckerman"
] |
cs.RO cs.IT cs.LG math.IT | 10.1371/journal.pone.0063400 | 1301.7473 | null | null | http://arxiv.org/abs/1301.7473v2 | 2013-03-27T21:22:18Z | 2013-01-30T23:44:25Z | Information driven self-organization of complex robotic behaviors | Information theory is a powerful tool to express principles to drive
autonomous systems because it is domain invariant and allows for an intuitive
interpretation. This paper studies the use of the predictive information (PI),
also called excess entropy or effective measure complexity, of the sensorimotor
process as a driving force to generate behavior. We study nonlinear and
nonstationary systems and introduce the time-local predictive information
(TiPI) which allows us to derive exact results together with explicit update
rules for the parameters of the controller in the dynamical systems framework.
In this way the information principle, formulated at the level of behavior, is
translated to the dynamics of the synapses. We underpin our results with a
number of case studies with high-dimensional robotic systems. We show the
spontaneous cooperativity in a complex physical system with decentralized
control. Moreover, a jointly controlled humanoid robot develops a high
behavioral variety depending on its physics and the environment it is
dynamically embedded into. The behavior can be decomposed into a succession of
low-dimensional modes that increasingly explore the behavior space. This is a
promising way to avoid the curse of dimensionality, which hinders learning
systems from scaling well.
| [
"Georg Martius, Ralf Der, Nihat Ay",
"['Georg Martius' 'Ralf Der' 'Nihat Ay']"
] |
cs.IT cs.LG math.IT stat.ML | 10.1109/TSP.2013.2278516 | 1301.7619 | null | null | http://arxiv.org/abs/1301.7619v1 | 2013-01-31T14:17:28Z | 2013-01-31T14:17:28Z | Rank regularization and Bayesian inference for tensor completion and
extrapolation | A novel regularizer of the PARAFAC decomposition factors capturing the
tensor's rank is proposed in this paper, as the key enabler for completion of
three-way data arrays with missing entries. Set in a Bayesian framework, the
tensor completion method incorporates prior information to enhance its
smoothing and prediction capabilities. This probabilistic approach can
naturally accommodate general models for the data distribution, lending itself
to various fitting criteria that yield optimum estimates in the
maximum-a-posteriori sense. In particular, two algorithms are devised for
Gaussian- and Poisson-distributed data, that minimize the rank-regularized
least-squares error and Kullback-Leibler divergence, respectively. The proposed
technique is able to recover the "ground-truth" tensor rank when tested on
synthetic data, and to complete brain imaging and yeast gene expression
datasets with 50% and 15% of missing entries respectively, resulting in
recovery errors at -10dB and -15dB.
| [
"Juan Andres Bazerque, Gonzalo Mateos, and Georgios B. Giannakis",
"['Juan Andres Bazerque' 'Gonzalo Mateos' 'Georgios B. Giannakis']"
] |
cs.LG cs.SI stat.ML | null | 1301.7724 | null | null | http://arxiv.org/pdf/1301.7724v2 | 2014-09-02T18:21:18Z | 2013-01-31T19:39:03Z | Axiomatic Construction of Hierarchical Clustering in Asymmetric Networks | This paper considers networks where relationships between nodes are
represented by directed dissimilarities. The goal is to study methods for the
determination of hierarchical clusters, i.e., a family of nested partitions
indexed by a connectivity parameter, induced by the given dissimilarity
structures. Our construction of hierarchical clustering methods is based on
defining admissible methods to be those methods that abide by the axioms of
value - nodes in a network with two nodes are clustered together at the maximum
of the two dissimilarities between them - and transformation - when
dissimilarities are reduced, the network may become more clustered but not
less. Several admissible methods are constructed and two particular methods,
termed reciprocal and nonreciprocal clustering, are shown to provide upper and
lower bounds in the space of admissible methods. Alternative clustering
methodologies and axioms are further considered. Allowing the outcome of
hierarchical clustering to be asymmetric, so that it matches the asymmetry of
the original data, leads to the inception of quasi-clustering methods. The
existence of a unique quasi-clustering method is shown. Allowing clustering in
a two-node network to proceed at the minimum of the two dissimilarities
generates an alternative axiomatic construction. There is a unique clustering
method in this case too. The paper also develops algorithms for the computation
of hierarchical clusters using matrix powers on a min-max dioid algebra and
studies the stability of the methods proposed. We prove that most of the
methods introduced in this paper are such that similar networks yield similar
hierarchical clustering results. Algorithms are exemplified through their
application to networks describing internal migration within states of the
United States (U.S.) and the interrelation between sectors of the U.S. economy.
| [
"Gunnar Carlsson, Facundo M\\'emoli, Alejandro Ribeiro and Santiago\n Segarra",
"['Gunnar Carlsson' 'Facundo Mémoli' 'Alejandro Ribeiro' 'Santiago Segarra']"
] |
stat.ML cs.LG math.ST stat.TH | null | 1302.0082 | null | null | http://arxiv.org/pdf/1302.0082v1 | 2013-02-01T05:35:48Z | 2013-02-01T05:35:48Z | Distribution-Free Distribution Regression | `Distribution regression' refers to the situation where a response Y depends
on a covariate P where P is a probability distribution. The model is Y=f(P) +
mu where f is an unknown regression function and mu is a random error.
Typically, we do not observe P directly, but rather, we observe a sample from
P. In this paper we develop theory and methods for distribution-free versions
of distribution regression. This means that we do not make distributional
assumptions about the error term mu and covariate P. We prove that when the
effective dimension is small enough (as measured by the doubling dimension),
then the excess prediction risk converges to zero with a polynomial rate.
| [
"Barnabas Poczos, Alessandro Rinaldo, Aarti Singh, Larry Wasserman",
"['Barnabas Poczos' 'Alessandro Rinaldo' 'Aarti Singh' 'Larry Wasserman']"
] |
cs.LG stat.ML | null | 1302.0315 | null | null | http://arxiv.org/pdf/1302.0315v1 | 2013-02-01T23:28:43Z | 2013-02-01T23:28:43Z | Sparse Multiple Kernel Learning with Geometric Convergence Rate | In this paper, we study the problem of sparse multiple kernel learning (MKL),
where the goal is to efficiently learn a combination of a fixed small number of
kernels from a large pool that could lead to a kernel classifier with a small
prediction error. We develop an efficient algorithm based on the greedy
coordinate descent algorithm, that is able to achieve a geometric convergence
rate under appropriate conditions. The convergence rate is achieved by
measuring the size of functional gradients by an empirical $\ell_2$ norm that
depends on the empirical data distribution. This is in contrast to previous
algorithms that use a functional norm to measure the size of gradients, which
is independent from the data samples. We also establish a generalization error
bound of the learned sparse kernel classifier using the technique of local
Rademacher complexity.
| [
"Rong Jin, Tianbao Yang, Mehrdad Mahdavi",
"['Rong Jin' 'Tianbao Yang' 'Mehrdad Mahdavi']"
] |
cs.RO cs.AI cs.LG | 10.1177/0278364913499192 | 1302.0386 | null | null | http://arxiv.org/abs/1302.0386v1 | 2013-02-02T14:53:05Z | 2013-02-02T14:53:05Z | Fast Damage Recovery in Robotics with the T-Resilience Algorithm | Damage recovery is critical for autonomous robots that need to operate for a
long time without assistance. Most current methods are complex and costly
because they require anticipating each potential damage in order to have a
contingency plan ready. As an alternative, we introduce the T-resilience
algorithm, a new algorithm that allows robots to quickly and autonomously
discover compensatory behaviors in unanticipated situations. This algorithm
equips the robot with a self-model and discovers new behaviors by learning to
avoid those that perform differently in the self-model and in reality. Our
algorithm thus does not identify the damaged parts but it implicitly searches
for efficient behaviors that do not use them. We evaluate the T-Resilience
algorithm on a hexapod robot that needs to adapt to leg removal, broken legs
and motor failures; we compare it to stochastic local search, policy gradient
and the self-modeling algorithm proposed by Bongard et al. The behavior of the
robot is assessed on-board thanks to an RGB-D sensor and a SLAM algorithm. Using
only 25 tests on the robot and an overall running time of 20 minutes,
T-Resilience consistently leads to substantially better results than the other
approaches.
| [
"Sylvain Koos, Antoine Cully, Jean-Baptiste Mouret",
"['Sylvain Koos' 'Antoine Cully' 'Jean-Baptiste Mouret']"
] |
cs.LG stat.ML | null | 1302.0406 | null | null | http://arxiv.org/pdf/1302.0406v1 | 2013-02-02T17:20:47Z | 2013-02-02T17:20:47Z | Generalization Guarantees for a Binary Classification Framework for
Two-Stage Multiple Kernel Learning | We present generalization bounds for the TS-MKL framework for two stage
multiple kernel learning. We also present bounds for sparse kernel learning
formulations within the TS-MKL framework.
| [
"['Purushottam Kar']",
"Purushottam Kar"
] |
cs.LG cs.CV | null | 1302.0435 | null | null | http://arxiv.org/pdf/1302.0435v2 | 2013-02-06T15:55:39Z | 2013-02-02T22:56:26Z | Parallel D2-Clustering: Large-Scale Clustering of Discrete Distributions | The discrete distribution clustering algorithm, namely D2-clustering, has
demonstrated its usefulness in image classification and annotation where each
object is represented by a bag of weighted vectors. The high computational
complexity of the algorithm, however, limits its applications to large-scale
problems. We present a parallel D2-clustering algorithm with substantially
improved scalability. A hierarchical structure for parallel computing is
devised to achieve a balance between the individual-node computation and the
integration process of the algorithm. Additionally, it is shown that even with
a single CPU, the hierarchical structure results in significant speed-up.
Experiments on real-world large-scale image data, YouTube video data, and
protein sequence data demonstrate the efficiency and wide applicability of the
parallel D2-clustering algorithm. The loss in clustering accuracy is minor in
comparison with the original sequential algorithm.
| [
"Yu Zhang, James Z. Wang and Jia Li",
"['Yu Zhang' 'James Z. Wang' 'Jia Li']"
] |
cs.LG | null | 1302.0540 | null | null | http://arxiv.org/pdf/1302.0540v1 | 2013-02-03T22:12:52Z | 2013-02-03T22:12:52Z | A game-theoretic framework for classifier ensembles using weighted
majority voting with local accuracy estimates | In this paper, a novel approach for the optimal combination of binary
classifiers is proposed. The classifier combination problem is approached from
a Game Theory perspective. The proposed framework of adapted weighted majority
rules (WMR) is tested against common rank-based, Bayesian and simple majority
models, as well as two soft-output averaging rules. Experiments with ensembles
of Support Vector Machines (SVM), Ordinary Binary Tree Classifiers (OBTC) and
weighted k-nearest-neighbor (w/k-NN) models on benchmark datasets indicate that
this new adaptive WMR model, employing local accuracy estimators and the
analytically computed optimal weights, outperforms all the other simple
combination rules.
| [
"['Harris V. Georgiou' 'Michael E. Mavroforakis']",
"Harris V. Georgiou, Michael E. Mavroforakis"
] |
cs.LG cs.AI cs.MA cs.RO | null | 1302.0723 | null | null | http://arxiv.org/pdf/1302.0723v2 | 2013-02-05T05:50:14Z | 2013-02-04T15:34:12Z | Multi-Robot Informative Path Planning for Active Sensing of
Environmental Phenomena: A Tale of Two Algorithms | A key problem of robotic environmental sensing and monitoring is that of
active sensing: How can a team of robots plan the most informative observation
paths to minimize the uncertainty in modeling and predicting an environmental
phenomenon? This paper presents two principled approaches to efficient
information-theoretic path planning based on entropy and mutual information
criteria for in situ active sensing of an important broad class of
widely-occurring environmental phenomena called anisotropic fields. Our
proposed algorithms are novel in addressing a trade-off between active sensing
performance and time efficiency. An important practical consequence is that our
algorithms can exploit the spatial correlation structure of Gaussian
process-based anisotropic fields to improve time efficiency while preserving
near-optimal active sensing performance. We analyze the time complexity of our
algorithms and prove analytically that they scale better than state-of-the-art
algorithms with increasing planning horizon length. We provide theoretical
guarantees on the active sensing performance of our algorithms for a class of
exploration tasks called transect sampling, which, in particular, can be
improved with longer planning time and/or lower spatial correlation along the
transect. Empirical evaluation on real-world anisotropic field data shows that
our algorithms can perform better or at least as well as the state-of-the-art
algorithms while often incurring a few orders of magnitude less computational
time, even when the field conditions are less favorable.
| [
"Nannan Cao, Kian Hsiang Low, John M. Dolan",
"['Nannan Cao' 'Kian Hsiang Low' 'John M. Dolan']"
] |
stat.ML cs.IT cs.LG math.IT math.ST stat.TH | null | 1302.0895 | null | null | http://arxiv.org/pdf/1302.0895v1 | 2013-02-04T22:51:56Z | 2013-02-04T22:51:56Z | Exact Sparse Recovery with L0 Projections | Many applications concern sparse signals, for example, detecting anomalies
from the differences between consecutive images taken by surveillance cameras.
This paper focuses on the problem of recovering a K-sparse signal x in N
dimensions. In the mainstream framework of compressed sensing (CS), the vector
x is recovered from M non-adaptive linear measurements y = xS, where S (of size
N x M) is typically a Gaussian (or Gaussian-like) design matrix, through some
optimization procedure such as linear programming (LP).
In our proposed method, the design matrix S is generated from an
$\alpha$-stable distribution with $\alpha\approx 0$. Our decoding algorithm
mainly requires one linear scan of the coordinates, followed by a few
iterations on a small number of coordinates which are "undetermined" in the
previous iteration. Comparisons with two strong baselines, linear programming
(LP) and orthogonal matching pursuit (OMP), demonstrate that our algorithm can
be significantly faster in decoding speed and more accurate in recovery
quality, for the task of exact sparse recovery. Our procedure is robust against
measurement noise. Even when there are insufficient measurements, our
algorithm can still reliably recover a significant portion of the nonzero
coordinates.
To provide the intuition for understanding our method, we also analyze the
procedure by assuming an idealistic setting. Interestingly, when K=2, the
"idealized" algorithm achieves exact recovery with merely 3 measurements,
regardless of N. For general K, the required sample size of the "idealized"
algorithm is about 5K.
| [
"Ping Li and Cun-Hui Zhang",
"['Ping Li' 'Cun-Hui Zhang']"
] |
cs.NE cs.LG | null | 1302.0962 | null | null | http://arxiv.org/pdf/1302.0962v1 | 2013-02-05T09:01:13Z | 2013-02-05T09:01:13Z | Improved Accuracy of PSO and DE using Normalization: an Application to
Stock Price Prediction | Data Mining has been actively applied to the stock market since the 1980s. It has
been used to predict stock prices, stock indexes, for portfolio management,
trend detection and for developing recommender systems. The various algorithms
which have been used for this purpose include ANN, SVM, ARIMA, GARCH, etc. Different
hybrid models have been developed by combining these algorithms with other
algorithms like rough sets, fuzzy logic, GA, PSO, DE, ACO, etc. to improve
efficiency. This paper proposes the DE-SVM model (Differential Evolution-Support
Vector Machine) for stock price prediction. DE has been used to select the best
combination of free parameters for the SVM to improve results. The paper also compares
the results of prediction with the outputs of SVM alone and PSO-SVM model
(Particle Swarm Optimization). The effect of normalization of data on the
accuracy of prediction has also been studied.
| [
"Savinderjit Kaur (Department of Information Technology, UIET, PU,\n Chandigarh, India), Veenu Mangat (Department of Information Technology, UIET,\n PU, Chandigarh, India)",
"['Savinderjit Kaur' 'Veenu Mangat']"
] |
cs.LG | null | 1302.0963 | null | null | http://arxiv.org/pdf/1302.0963v1 | 2013-02-05T09:04:25Z | 2013-02-05T09:04:25Z | RandomBoost: Simplified Multi-class Boosting through Randomization | We propose a novel boosting approach to multi-class classification problems,
in which multiple classes are distinguished by a set of random projection
matrices in essence. The approach uses random projections to alleviate the
proliferation of binary classifiers typically required to perform multi-class
classification. The result is a multi-class classifier with a single
vector-valued parameter, irrespective of the number of classes involved. Two
variants of this approach are proposed. The first method randomly projects the
original data into new spaces, while the second method randomly projects the
outputs of learned weak classifiers. These methods are not only conceptually
simple but also effective and easy to implement. A series of experiments on
synthetic, machine learning and visual recognition data sets demonstrate that
our proposed methods compare favorably to existing multi-class boosting
algorithms in terms of both the convergence rate and classification accuracy.
| [
"['Sakrapee Paisitkriangkrai' 'Chunhua Shen' 'Qinfeng Shi'\n 'Anton van den Hengel']",
"Sakrapee Paisitkriangkrai, Chunhua Shen, Qinfeng Shi, Anton van den\n Hengel"
] |
cs.LG | null | 1302.0974 | null | null | http://arxiv.org/pdf/1302.0974v1 | 2013-02-05T09:45:21Z | 2013-02-05T09:45:21Z | A Comparison of Relaxations of Multiset Canonical Correlation Analysis
and Applications | Canonical correlation analysis is a statistical technique that is used to
find relations between two sets of variables. An important extension in pattern
analysis is to consider more than two sets of variables. This problem can be
expressed as a quadratically constrained quadratic program (QCQP), commonly
referred to as Multi-set Canonical Correlation Analysis (MCCA). This is a
non-convex problem and so greedy algorithms converge to local optima without
any guarantees on global optimality. In this paper, we show that despite being
highly structured, finding the optimal solution is NP-Hard. This motivates our
relaxation of the QCQP to a semidefinite program (SDP). The SDP is convex, can
be solved reasonably efficiently and comes with both absolute and
output-sensitive approximation quality. In addition to theoretical guarantees,
we do an extensive comparison of the QCQP method and the SDP relaxation on a
variety of synthetic and real world data. Finally, we present two useful
extensions: incorporating kernel methods and computing multiple sets of
canonical vectors.
| [
"Jan Rupnik, Primoz Skraba, John Shawe-Taylor, Sabrina Guettes",
"['Jan Rupnik' 'Primoz Skraba' 'John Shawe-Taylor' 'Sabrina Guettes']"
] |
cs.LG | null | 1302.1043 | null | null | http://arxiv.org/pdf/1302.1043v2 | 2013-07-09T08:04:02Z | 2013-02-05T14:31:51Z | The price of bandit information in multiclass online classification | We consider two scenarios of multiclass online learning of a hypothesis class
$H\subseteq Y^X$. In the {\em full information} scenario, the learner is
exposed to instances together with their labels. In the {\em bandit} scenario,
the true label is not exposed, but rather an indication whether the learner's
prediction is correct or not. We show that the ratio between the error rates in
the two scenarios is at most $8\cdot|Y|\cdot \log(|Y|)$ in the realizable case,
and $\tilde{O}(\sqrt{|Y|})$ in the agnostic case. The results are tight up to a
logarithmic factor and essentially answer an open question from (Daniely et
al. - Multiclass learnability and the ERM principle).
We apply these results to the class of $\gamma$-margin multiclass linear
classifiers in $\mathbb{R}^d$. We show that the bandit error rate of this class is
$\tilde{\Theta}(\frac{|Y|}{\gamma^2})$ in the realizable case and
$\tilde{\Theta}(\frac{1}{\gamma}\sqrt{|Y|T})$ in the agnostic case. This
resolves an open question from (Kakade et al. - Efficient bandit algorithms
for online multiclass prediction).
| [
"Amit Daniely and Tom Helbertal",
"['Amit Daniely' 'Tom Helbertal']"
] |
math.ST cs.DS cs.IT cs.LG math.IT math.PR stat.TH | null | 1302.1232 | null | null | http://arxiv.org/pdf/1302.1232v1 | 2013-02-05T23:20:45Z | 2013-02-05T23:20:45Z | When are the most informative components for inference also the
principal components? | Which components of the singular value decomposition of a signal-plus-noise
data matrix are most informative for the inferential task of detecting or
estimating an embedded low-rank signal matrix? Principal component analysis
ascribes greater importance to the components that capture the greatest
variation, i.e., the singular vectors associated with the largest singular
values. This choice is often justified by invoking the Eckart-Young theorem
even though that work addresses the problem of how to best represent a
signal-plus-noise matrix using a low-rank approximation and not how to
best _infer_ the underlying low-rank signal component.
Here we take a first-principles approach in which we start with a
signal-plus-noise data matrix and show how the spectrum of the noise-only
component governs whether the principal or the middle components of the
singular value decomposition of the data matrix will be the informative
components for inference. Simply put, if the noise spectrum is supported on a
connected interval, in a sense we make precise, then the use of the principal
components is justified. When the noise spectrum is supported on multiple
intervals, then the middle components might be more informative than the
principal components.
The end result is a proper justification of the use of principal components
in the setting where the noise matrix is i.i.d. Gaussian and the identification
of scenarios, generically involving heterogeneous noise models such as mixtures
of Gaussians, where the middle components might be more informative than the
principal components so that they may be exploited to extract additional
processing gain. Our results show how the blind use of principal components can
lead to suboptimal or even faulty inference because of phase transitions that
separate a regime where the principal components are informative from a regime
where they are uninformative.
| [
"['Raj Rao Nadakuditi']",
"Raj Rao Nadakuditi"
] |
cs.DS cs.LG | null | 1302.1515 | null | null | http://arxiv.org/pdf/1302.1515v2 | 2013-07-10T15:21:41Z | 2013-02-06T20:53:35Z | A Polynomial Time Algorithm for Lossy Population Recovery | We give a polynomial time algorithm for the lossy population recovery
problem. In this problem, the goal is to approximately learn an unknown
distribution on binary strings of length $n$ from lossy samples: for some
parameter $\mu$ each coordinate of the sample is preserved with probability
$\mu$ and otherwise is replaced by a '?'. The running time and number of
samples needed for our algorithm is polynomial in $n$ and $1/\varepsilon$ for
each fixed $\mu>0$. This improves on the algorithm of Wigderson and Yehudayoff,
which runs in quasi-polynomial time for any $\mu > 0$, and the polynomial-time
algorithm of Dvir et al., which was shown to work for $\mu \gtrapprox 0.30$ by
Batman et al. In fact, our algorithm also works in the more general framework
of Batman et al. in which there is no a priori bound on the size of the support
of the distribution. The algorithm we analyze is implicit in previous work; our
main contribution is to analyze the algorithm by showing (via linear
programming duality and connections to complex analysis) that a certain matrix
associated with the problem has a robust local inverse even though its
condition number is exponentially small. A corollary of our result is the first
polynomial time algorithm for learning DNFs in the restriction access model of
Dvir et al.
| [
"Ankur Moitra, Michael Saks",
"['Ankur Moitra' 'Michael Saks']"
] |
cs.LG stat.ML | null | 1302.1519 | null | null | http://arxiv.org/pdf/1302.1519v1 | 2013-02-06T15:53:33Z | 2013-02-06T15:53:33Z | Update Rules for Parameter Estimation in Bayesian Networks | This paper re-examines the problem of parameter estimation in Bayesian
networks with missing values and hidden variables from the perspective of
recent work in on-line learning [Kivinen & Warmuth, 1994]. We provide a unified
framework for parameter estimation that encompasses both on-line learning,
where the model is continuously adapted to new data cases as they arrive, and
the more traditional batch learning, where a pre-accumulated set of samples is
used in a one-time model selection process. In the batch case, our framework
encompasses both the gradient projection algorithm and the EM algorithm for
Bayesian networks. The framework also leads to new on-line and batch parameter
update schemes, including a parameterized version of EM. We provide both
empirical and theoretical results indicating that parameterized EM allows
faster convergence to the maximum likelihood parameters than does standard EM.
| [
"Eric Bauer, Daphne Koller, Yoram Singer",
"['Eric Bauer' 'Daphne Koller' 'Yoram Singer']"
] |
cs.LG cs.AI stat.ML | null | 1302.1528 | null | null | http://arxiv.org/pdf/1302.1528v2 | 2015-05-16T23:29:15Z | 2013-02-06T15:54:25Z | A Bayesian Approach to Learning Bayesian Networks with Local Structure | Recently several researchers have investigated techniques for using data to
learn Bayesian networks containing compact representations for the conditional
probability distributions (CPDs) stored at each node. The majority of this work
has concentrated on using decision-tree representations for the CPDs. In
addition, researchers typically apply non-Bayesian (or asymptotically Bayesian)
scoring functions such as MDL to evaluate the goodness-of-fit of networks to
the data. In this paper we investigate a Bayesian approach to learning Bayesian
networks that contain the more general decision-graph representations of the
CPDs. First, we describe how to evaluate the posterior probability (that is, the
Bayesian score) of such a network, given a database of observed cases. Second,
we describe various search spaces that can be used, in conjunction with a
scoring function and a search procedure, to identify one or more high-scoring
networks. Finally, we present an experimental evaluation of the search spaces,
using a greedy algorithm and a Bayesian scoring function.
| [
"David Maxwell Chickering, David Heckerman, Christopher Meek",
"['David Maxwell Chickering' 'David Heckerman' 'Christopher Meek']"
] |
cs.AI cs.LG | null | 1302.1529 | null | null | http://arxiv.org/pdf/1302.1529v1 | 2013-02-06T15:54:31Z | 2013-02-06T15:54:31Z | Exploring Parallelism in Learning Belief Networks | It has been shown that a class of probabilistic domain models cannot be
learned correctly by several existing algorithms which employ a single-link
look ahead search. When a multi-link look ahead search is used, the
computational complexity of the learning algorithm increases. We study how to
use parallelism to tackle the increased complexity in learning such models and
to speed up learning in large domains. An algorithm is proposed to decompose
the learning task for parallel processing. A further task decomposition is used
to balance load among processors and to increase the speed-up and efficiency.
For learning from very large datasets, we present a regrouping of the available
processors such that slow data access through files can be replaced by fast
memory access. Our implementation in a parallel computer demonstrates the
effectiveness of the algorithm.
| [
"TongSheng Chu, Yang Xiang",
"['TongSheng Chu' 'Yang Xiang']"
] |
cs.AI cs.LG | null | 1302.1538 | null | null | http://arxiv.org/pdf/1302.1538v1 | 2013-02-06T15:55:21Z | 2013-02-06T15:55:21Z | Sequential Update of Bayesian Network Structure | There is an obvious need for improving the performance and accuracy of a
Bayesian network as new data is observed. Because of errors in model
construction and changes in the dynamics of the domains, we cannot afford to
ignore the information in new data. While sequential update of parameters for a
fixed structure can be accomplished using standard techniques, sequential
update of network structure is still an open problem. In this paper, we
investigate sequential update of Bayesian networks where both parameters and
structure are expected to change. We introduce a new approach that allows for
the flexible manipulation of the tradeoff between the quality of the learned
networks and the amount of information that is maintained about past
observations. We formally describe our approach including the necessary
modifications to the scoring functions for learning Bayesian networks, evaluate
its effectiveness through an empirical study, and extend it to the case of
missing data.
| [
"['Nir Friedman' 'Moises Goldszmidt']",
"Nir Friedman, Moises Goldszmidt"
] |
cs.AI cs.LG | null | 1302.1542 | null | null | http://arxiv.org/pdf/1302.1542v1 | 2013-02-06T15:55:43Z | 2013-02-06T15:55:43Z | Learning Bayesian Nets that Perform Well | A Bayesian net (BN) is more than a succinct way to encode a probabilistic
distribution; it also corresponds to a function used to answer queries. A BN
can therefore be evaluated by the accuracy of the answers it returns. Many
algorithms for learning BNs, however, attempt to optimize another criterion
(usually likelihood, possibly augmented with a regularizing term), which is
independent of the distribution of queries that are posed. This paper takes the
"performance criteria" seriously, and considers the challenge of computing the
BN whose performance - read "accuracy over the distribution of queries" - is
optimal. We show that many aspects of this learning task are more difficult
than the corresponding subtasks in the standard model.
| [
"['Russell Greiner' 'Adam J. Grove' 'Dale Schuurmans']",
"Russell Greiner, Adam J. Grove, Dale Schuurmans"
] |
cs.LG stat.ML | null | 1302.1545 | null | null | http://arxiv.org/pdf/1302.1545v1 | 2013-02-06T15:56:07Z | 2013-02-06T15:56:07Z | Models and Selection Criteria for Regression and Classification | When performing regression or classification, we are interested in the
conditional probability distribution for an outcome or class variable Y given a
set of explanatory or input variables X. We consider Bayesian models for this
task. In particular, we examine a special class of models, which we call
Bayesian regression/classification (BRC) models, that can be factored into
independent conditional (y|x) and input (x) models. These models are
convenient, because the conditional model (the portion of the full model that
we care about) can be analyzed by itself. We examine the practice of
transforming arbitrary Bayesian models to BRC models, and argue that this
practice is often inappropriate because it ignores prior knowledge that may be
important for learning. In addition, we examine Bayesian methods for learning
models from data. We discuss two criteria for Bayesian model selection that are
appropriate for regression/classification: one described by Spiegelhalter et
al. (1993), and another by Buntine (1993). We contrast these two criteria using
the prequential framework of Dawid (1984), and give sufficient conditions under
which the criteria agree.
| [
"David Heckerman, Christopher Meek",
"['David Heckerman' 'Christopher Meek']"
] |
cs.AI cs.LG | null | 1302.1549 | null | null | http://arxiv.org/pdf/1302.1549v1 | 2013-02-06T15:56:57Z | 2013-02-06T15:56:57Z | Learning Belief Networks in Domains with Recursively Embedded Pseudo
Independent Submodels | A pseudo independent (PI) model is a probabilistic domain model (PDM) where
proper subsets of a set of collectively dependent variables display marginal
independence. PI models cannot be learned correctly by many algorithms that
rely on a single link search. Earlier work on learning PI models has suggested
a straightforward multi-link search algorithm. However, when a domain contains
recursively embedded PI submodels, it may escape the detection of such an
algorithm. In this paper, we propose an improved algorithm that ensures the
learning of all embedded PI submodels whose sizes are upper bounded by a
predetermined parameter. We show that this improved learning capability only
increases the complexity slightly beyond that of the previous algorithm. The
performance of the new algorithm is demonstrated through experiment.
| [
"Jun Hu, Yang Xiang",
"['Jun Hu' 'Yang Xiang']"
] |
cs.LG stat.ML | null | 1302.1552 | null | null | http://arxiv.org/pdf/1302.1552v1 | 2013-02-06T15:57:20Z | 2013-02-06T15:57:20Z | An Information-Theoretic Analysis of Hard and Soft Assignment Methods
for Clustering | Assignment methods are at the heart of many algorithms for unsupervised
learning and clustering - in particular, the well-known K-means and
Expectation-Maximization (EM) algorithms. In this work, we study several
different methods of assignment, including the "hard" assignments used by
K-means and the "soft" assignments used by EM. While it is known that K-means
minimizes the distortion on the data and EM maximizes the likelihood, little is
known about the systematic differences of behavior between the two algorithms.
Here we shed light on these differences via an information-theoretic analysis.
The cornerstone of our results is a simple decomposition of the expected
distortion, showing that K-means (and its extension for inferring general
parametric densities from unlabeled sample data) must implicitly manage a
trade-off between how similar the data assigned to each cluster are, and how
the data are balanced among the clusters. How well the data are balanced is
measured by the entropy of the partition defined by the hard assignments. In
addition to letting us predict and verify systematic differences between
K-means and EM on specific examples, the decomposition allows us to give a
rather general argument showing that K-means will consistently find densities
with less "overlap" than EM. We also study a third natural assignment method
that we call posterior assignment, that is close in spirit to the soft
assignments of EM, but leads to a surprisingly different algorithm.
| [
"Michael Kearns, Yishay Mansour, Andrew Y. Ng",
"['Michael Kearns' 'Yishay Mansour' 'Andrew Y. Ng']"
] |
cs.AI cs.LG | null | 1302.1561 | null | null | http://arxiv.org/pdf/1302.1561v2 | 2015-05-16T23:30:56Z | 2013-02-06T15:58:24Z | Structure and Parameter Learning for Causal Independence and Causal
Interaction Models | This paper discusses causal independence models and a generalization of these
models called causal interaction models. Causal interaction models are models
that have independent mechanisms where a mechanism can have several causes. In
addition to introducing several particular types of causal interaction models,
we show how we can apply the Bayesian approach to learning causal interaction
models, obtaining approximate posterior distributions for the models and
MAP and ML estimates for the parameters. We illustrate the approach with a
simulation study of learning model posteriors.
| [
"['Christopher Meek' 'David Heckerman']",
"Christopher Meek, David Heckerman"
] |
cs.AI cs.LG | null | 1302.1565 | null | null | http://arxiv.org/pdf/1302.1565v1 | 2013-02-06T15:58:45Z | 2013-02-06T15:58:45Z | Learning Bayesian Networks from Incomplete Databases | Bayesian approaches to learn the graphical structure of Bayesian Belief
Networks (BBNs) from databases share the assumption that the database is
complete, that is, no entry is reported as unknown. Attempts to relax this
assumption involve the use of expensive iterative methods to discriminate among
different structures. This paper introduces a deterministic method to learn the
graphical structure of a BBN from a possibly incomplete database. Experimental
evaluations show a significant robustness of this method and a remarkable
independence of its execution time from the number of missing data.
| [
"Marco Ramoni, Paola Sebastiani",
"['Marco Ramoni' 'Paola Sebastiani']"
] |
math.ST cs.LG stat.ML stat.TH | null | 1302.1611 | null | null | http://arxiv.org/pdf/1302.1611v2 | 2013-02-12T15:48:55Z | 2013-02-06T23:20:20Z | Bounded regret in stochastic multi-armed bandits | We study the stochastic multi-armed bandit problem when one knows the value
$\mu^{(\star)}$ of an optimal arm, as well as a positive lower bound on the
smallest positive gap $\Delta$. We propose a new randomized policy that attains
a regret {\em uniformly bounded over time} in this setting. We also prove
several lower bounds, which show in particular that bounded regret is not
possible if one only knows $\Delta$, and bounded regret of order $1/\Delta$ is
not possible if one only knows $\mu^{(\star)}$.
| [
"S\\'ebastien Bubeck, Vianney Perchet and Philippe Rigollet",
"['Sébastien Bubeck' 'Vianney Perchet' 'Philippe Rigollet']"
] |
q-bio.QM cs.CE cs.LG stat.ML | null | 1302.1733 | null | null | http://arxiv.org/pdf/1302.1733v1 | 2013-02-07T12:49:57Z | 2013-02-07T12:49:57Z | Feature Selection for Microarray Gene Expression Data using Simulated
Annealing guided by the Multivariate Joint Entropy | In this work a new way to calculate the multivariate joint entropy is
presented. This measure is the basis for a fast information-theoretic based
evaluation of gene relevance in a Microarray Gene Expression data context. Its
low complexity is based on the reuse of previous computations to calculate
current feature relevance. The mu-TAFS algorithm --named as such to
differentiate it from previous TAFS algorithms-- implements a simulated
annealing technique specially designed for feature subset selection. The
algorithm is applied to the maximization of gene subset relevance in several
public-domain microarray data sets. The experimental results show notably
high classification performance and small subsets formed by biologically
meaningful genes.
| [
"['Fernando González' 'Lluís A. Belanche']",
"Fernando Gonz\\'alez, Llu\\'is A. Belanche"
] |
cs.LG cs.CV cs.SD | 10.5120/10089-4722 | 1302.1772 | null | null | http://arxiv.org/abs/1302.1772v1 | 2013-02-07T15:03:24Z | 2013-02-07T15:03:24Z | An ANN-based Method for Detecting Vocal Fold Pathology | There are different algorithms for vocal fold pathology diagnosis. These
algorithms usually have three stages which are Feature Extraction, Feature
Reduction and Classification. While the third stage implies a choice of a
variety of machine learning methods, the first and second stages play a
critical role in performance and accuracy of the classification system. In this
paper we present initial study of feature extraction and feature reduction in
the task of vocal fold pathology diagnosis. A new type of feature vector, based
on wavelet packet decomposition and Mel-Frequency-Cepstral-Coefficients
(MFCCs), is proposed. Also Principal Component Analysis (PCA) is used for
feature reduction. An Artificial Neural Network is used as a classifier for
evaluating the performance of our proposed method.
| [
"['Vahid Majidnezhad' 'Igor Kheidorov']",
"Vahid Majidnezhad and Igor Kheidorov"
] |
cs.LG | null | 1302.2157 | null | null | http://arxiv.org/pdf/1302.2157v2 | 2013-05-19T00:39:52Z | 2013-02-08T21:18:24Z | Passive Learning with Target Risk | In this paper we consider learning in passive setting but with a slight
modification. We assume that the target expected loss, also referred to as
target risk, is provided in advance for learner as prior knowledge. Unlike most
studies in the learning theory that only incorporate the prior knowledge into
the generalization bounds, we are able to explicitly utilize the target risk in
the learning process. Our analysis reveals a surprising result on the sample
complexity of learning: by exploiting the target risk in the learning
algorithm, we show that when the loss function is both strongly convex and
smooth, the sample complexity reduces to $O(\log(\frac{1}{\epsilon}))$, an
exponential improvement compared to the sample complexity
$O(\frac{1}{\epsilon})$ for learning with strongly convex loss functions.
Furthermore, our proof is constructive and is based on a computationally
efficient stochastic optimization algorithm for such settings, which demonstrates
that the proposed algorithm is practically useful.
| [
"['Mehrdad Mahdavi' 'Rong Jin']",
"Mehrdad Mahdavi and Rong Jin"
] |
cs.LG | null | 1302.2176 | null | null | http://arxiv.org/pdf/1302.2176v1 | 2013-02-08T23:16:04Z | 2013-02-08T23:16:04Z | Minimax Optimal Algorithms for Unconstrained Linear Optimization | We design and analyze minimax-optimal algorithms for online linear
optimization games where the player's choice is unconstrained. The player
strives to minimize regret, the difference between his loss and the loss of a
post-hoc benchmark strategy. The standard benchmark is the loss of the best
strategy chosen from a bounded comparator set. When the comparison set and
the adversary's gradients satisfy L_infinity bounds, we give the value of the
game in closed form and prove it approaches sqrt(2T/pi) as T -> infinity.
Interesting algorithms result when we consider soft constraints on the
comparator, rather than restricting it to a bounded set. As a warmup, we
analyze the game with a quadratic penalty. The value of this game is exactly
T/2, and this value is achieved by perhaps the simplest online algorithm of
all: unprojected gradient descent with a constant learning rate.
We then derive a minimax-optimal algorithm for a much softer penalty
function. This algorithm achieves good bounds under the standard notion of
regret for any comparator point, without needing to specify the comparator set
in advance. The value of this game converges to sqrt(e) as T -> infinity; we
give a closed-form for the exact value as a function of T. The resulting
algorithm is natural in unconstrained investment or betting scenarios, since it
guarantees at worst constant loss, while allowing for exponential reward
against an "easy" adversary.
| [
"['H. Brendan McMahan']",
"H. Brendan McMahan"
] |
cs.PL cs.FL cs.LG | null | 1302.2273 | null | null | http://arxiv.org/pdf/1302.2273v1 | 2013-02-09T21:41:03Z | 2013-02-09T21:41:03Z | Learning Universally Quantified Invariants of Linear Data Structures | We propose a new automaton model, called quantified data automata over words,
that can model quantified invariants over linear data structures, and build
poly-time active learning algorithms for them, where the learner is allowed to
query the teacher with membership and equivalence queries. In order to express
invariants in decidable logics, we invent a decidable subclass of QDAs, called
elastic QDAs, and prove that every QDA has a unique
minimally-over-approximating elastic QDA. We then give an application of these
theoretically sound and efficient active learning algorithms in a passive
learning framework and show that we can efficiently learn quantified linear
data structure invariants from samples obtained from dynamic runs for a large
class of programs.
| [
"['Pranav Garg' 'Christof Loding' 'P. Madhusudan' 'Daniel Neider']",
"Pranav Garg, Christof Loding, P. Madhusudan, Daniel Neider"
] |
cs.LG | null | 1302.2277 | null | null | http://arxiv.org/pdf/1302.2277v2 | 2013-02-18T00:10:56Z | 2013-02-09T22:56:45Z | A Time Series Forest for Classification and Feature Extraction | We propose a tree ensemble method, referred to as time series forest (TSF),
for time series classification. TSF employs a combination of the entropy gain
and a distance measure, referred to as the Entrance (entropy and distance)
gain, for evaluating the splits. Experimental studies show that the Entrance
gain criterion improves the accuracy of TSF. TSF randomly samples features at
each tree node and has a computational complexity linear in the length of a
time series and can be built using parallel computing techniques such as
multi-core computing used here. The temporal importance curve is also proposed
to capture the important temporal characteristics useful for classification.
Experimental studies show that TSF using simple features such as mean,
deviation and slope outperforms strong competitors such as one-nearest-neighbor
classifiers with dynamic time warping, is computationally efficient, and can
provide insights into the temporal characteristics.
| [
"Houtao Deng, George Runger, Eugene Tuv, Martyanov Vladimir",
"['Houtao Deng' 'George Runger' 'Eugene Tuv' 'Martyanov Vladimir']"
] |
cs.LG | 10.5121/ijist.2013.3103 | 1302.2436 | null | null | http://arxiv.org/abs/1302.2436v1 | 2013-02-11T10:29:17Z | 2013-02-11T10:29:17Z | Extracting useful rules through improved decision tree induction using
information entropy | Classification is widely used technique in the data mining domain, where
scalability and efficiency are the immediate problems in classification
algorithms for large databases. We suggest improvements to the existing C4.5
decision tree algorithm. In this paper, attribute-oriented induction (AOI) and
relevance analysis are incorporated with concept hierarchy knowledge and
the HeightBalancePriority algorithm for construction of the decision tree along with
multi-level mining. The assignment of priorities to attributes is done by
evaluating information entropy, at different levels of abstraction for building
decision tree using HeightBalancePriority algorithm. Modified DMQL queries are
used to understand and explore the shortcomings of the decision trees generated
by C4.5 classifier for education dataset and the results are compared with the
proposed approach.
| [
"Mohd Mahmood Ali, Mohd S Qaseem, Lakshmi Rajamani, A Govardhan",
"['Mohd Mahmood Ali' 'Mohd S Qaseem' 'Lakshmi Rajamani' 'A Govardhan']"
] |
cs.LG | null | 1302.2550 | null | null | http://arxiv.org/pdf/1302.2550v1 | 2013-02-11T17:44:10Z | 2013-02-11T17:44:10Z | Online Regret Bounds for Undiscounted Continuous Reinforcement Learning | We derive sublinear regret bounds for undiscounted reinforcement learning in
continuous state space. The proposed algorithm combines state aggregation with
the use of upper confidence bounds for implementing optimism in the face of
uncertainty. Beside the existence of an optimal policy which satisfies the
Poisson equation, the only assumptions made are Hölder continuity of rewards
and transition probabilities.
| [
"Ronald Ortner and Daniil Ryabko",
"['Ronald Ortner' 'Daniil Ryabko']"
] |
cs.LG | null | 1302.2552 | null | null | http://arxiv.org/pdf/1302.2552v1 | 2013-02-11T17:49:38Z | 2013-02-11T17:49:38Z | Selecting the State-Representation in Reinforcement Learning | The problem of selecting the right state-representation in a reinforcement
learning problem is considered. Several models (functions mapping past
observations to a finite set) of the observations are given, and it is known
that for at least one of these models the resulting state dynamics are indeed
Markovian. Without knowing which of the models is the correct one, or what
the probabilistic characteristics of the resulting MDP are, it is required
to obtain as much reward as the optimal policy for the correct model (or for
the best of the correct models, if there are several). We propose an algorithm
that achieves that, with a regret of order T^{2/3} where T is the horizon time.
| [
"Odalric-Ambrym Maillard, R\\'emi Munos, Daniil Ryabko",
"['Odalric-Ambrym Maillard' 'Rémi Munos' 'Daniil Ryabko']"
] |
cs.LG | null | 1302.2553 | null | null | http://arxiv.org/pdf/1302.2553v2 | 2013-03-18T09:11:15Z | 2013-02-11T17:55:49Z | Optimal Regret Bounds for Selecting the State Representation in
Reinforcement Learning | We consider an agent interacting with an environment in a single stream of
actions, observations, and rewards, with no reset. This process is not assumed
to be a Markov Decision Process (MDP). Rather, the agent has several
representations (mapping histories of past interactions to a discrete state
space) of the environment with unknown dynamics, only some of which result in
an MDP. The goal is to minimize the average regret criterion against an agent
who knows an MDP representation giving the highest optimal reward, and acts
optimally in it. Recent regret bounds for this setting are of order
$O(T^{2/3})$ with an additive term that is constant in $T$ yet exponential in some
characteristics of the optimal MDP. We propose an algorithm whose regret after
$T$ time steps is $O(\sqrt{T})$, with all constants reasonably small. This is
optimal in $T$ since $O(\sqrt{T})$ is the optimal regret in the setting of
learning in a (single discrete) MDP.
| [
"Odalric-Ambrym Maillard, Phuong Nguyen, Ronald Ortner, Daniil Ryabko",
"['Odalric-Ambrym Maillard' 'Phuong Nguyen' 'Ronald Ortner' 'Daniil Ryabko']"
] |
cs.LG stat.ML | null | 1302.2576 | null | null | http://arxiv.org/pdf/1302.2576v1 | 2013-02-11T19:16:25Z | 2013-02-11T19:16:25Z | The trace norm constrained matrix-variate Gaussian process for multitask
bipartite ranking | We propose a novel hierarchical model for multitask bipartite ranking. The
proposed approach combines a matrix-variate Gaussian process with a generative
model for task-wise bipartite ranking. In addition, we employ a novel trace
constrained variational inference approach to impose low rank structure on the
posterior matrix-variate Gaussian process. The resulting posterior covariance
function is derived in closed form, and the posterior mean function is the
solution to a matrix-variate regression with a novel spectral elastic net
regularizer. Further, we show that variational inference for the trace
constrained matrix-variate Gaussian process combined with maximum likelihood
parameter estimation for the bipartite ranking model is jointly convex. Our
motivating application is the prioritization of candidate disease genes. The
goal of this task is to aid the identification of unobserved associations
between human genes and diseases using a small set of observed associations as
well as kernels induced by gene-gene interaction networks and disease
ontologies. Our experimental results illustrate the performance of the proposed
model on real world datasets. Moreover, we find that the resulting low rank
solution improves the computational scalability of training and testing as
compared to baseline models.
| [
"Oluwasanmi Koyejo and Cheng Lee and Joydeep Ghosh",
"['Oluwasanmi Koyejo' 'Cheng Lee' 'Joydeep Ghosh']"
] |
stat.ML cs.LG | 10.1007/978-3-642-38679-4_50 | 1302.2645 | null | null | http://arxiv.org/abs/1302.2645v2 | 2013-05-04T01:22:48Z | 2013-02-11T21:14:43Z | Geometrical complexity of data approximators | There are many methods developed to approximate a cloud of vectors embedded
in high-dimensional space by simpler objects: starting from principal points
and linear manifolds to self-organizing maps, neural gas, elastic maps, various
types of principal curves and principal trees, and so on. For each type of
approximators the measure of the approximator complexity was developed too.
These measures are necessary to find the balance between accuracy and
complexity and to define the optimal approximations of a given type. We propose
a measure of complexity (geometrical complexity) which is applicable to
approximators of several types and which allows comparing data approximations
of different types.
| [
"E. M. Mirkes, A. Zinovyev, A. N. Gorban",
"['E. M. Mirkes' 'A. Zinovyev' 'A. N. Gorban']"
] |
cs.SI cs.LG stat.ML | 10.3934/dcdsb.2014.19.1335 | 1302.2671 | null | null | http://arxiv.org/abs/1302.2671v3 | 2014-04-30T23:42:52Z | 2013-02-12T00:01:02Z | Latent Self-Exciting Point Process Model for Spatial-Temporal Networks | We propose a latent self-exciting point process model that describes
geographically distributed interactions between pairs of entities. In contrast
to most existing approaches that assume fully observable interactions, here we
consider a scenario where certain interaction events lack information about
participants. Instead, this information needs to be inferred from the available
observations. We develop an efficient approximate algorithm based on
variational expectation-maximization to infer unknown participants in an event
given the location and the time of the event. We validate the model on
synthetic as well as real-world data, and obtain very promising results on the
identity-inference task. We also use our model to predict the timing and
participants of future events, and demonstrate that it compares favorably with
baseline approaches.
| [
"['Yoon-Sik Cho' 'Aram Galstyan' 'P. Jeffrey Brantingham' 'George Tita']",
"Yoon-Sik Cho, Aram Galstyan, P. Jeffrey Brantingham, George Tita"
] |
stat.ML cs.GT cs.LG | null | 1302.2672 | null | null | http://arxiv.org/pdf/1302.2672v1 | 2013-02-12T00:14:44Z | 2013-02-12T00:14:44Z | Competing With Strategies | We study the problem of online learning with a notion of regret defined with
respect to a set of strategies. We develop tools for analyzing the minimax
rates and for deriving regret-minimization algorithms in this scenario. While
the standard methods for minimizing the usual notion of regret fail, through
our analysis we demonstrate existence of regret-minimization methods that
compete with such sets of strategies as: autoregressive algorithms, strategies
based on statistical models, regularized least squares, and follow the
regularized leader strategies. In several cases we also derive efficient
learning algorithms.
| [
"['Wei Han' 'Alexander Rakhlin' 'Karthik Sridharan']",
"Wei Han, Alexander Rakhlin, Karthik Sridharan"
] |
cs.LG cs.SI stat.ML | null | 1302.2684 | null | null | http://arxiv.org/pdf/1302.2684v4 | 2013-10-24T21:30:08Z | 2013-02-12T01:48:14Z | A Tensor Approach to Learning Mixed Membership Community Models | Community detection is the task of detecting hidden communities from observed
interactions. Guaranteed community detection has so far been mostly limited to
models with non-overlapping communities such as the stochastic block model. In
this paper, we remove this restriction, and provide guaranteed community
detection for a family of probabilistic network models with overlapping
communities, termed as the mixed membership Dirichlet model, first introduced
by Airoldi et al. This model allows for nodes to have fractional memberships in
multiple communities and assumes that the community memberships are drawn from
a Dirichlet distribution. Moreover, it contains the stochastic block model as a
special case. We propose a unified approach to learning these models via a
tensor spectral decomposition method. Our estimator is based on low-order
moment tensor of the observed network, consisting of 3-star counts. Our
learning method is fast and is based on simple linear algebraic operations,
e.g. singular value decomposition and tensor power iterations. We provide
guaranteed recovery of community memberships and model parameters and present a
careful finite sample analysis of our learning method. As an important special
case, our results match the best known scaling requirements for the
(homogeneous) stochastic block model.
| [
"['Anima Anandkumar' 'Rong Ge' 'Daniel Hsu' 'Sham M. Kakade']",
"Anima Anandkumar, Rong Ge, Daniel Hsu, Sham M. Kakade"
] |
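As background for the entry above (arXiv:1302.2684), the following is a minimal, hypothetical sketch of the tensor power iteration primitive that such spectral methods rely on, run on a synthetic orthogonally decomposable tensor. It is not the authors' full learning method (moment estimation and whitening are omitted), and all names and the toy tensor are illustrative assumptions.

```python
import numpy as np

def tensor_apply(T, v):
    """Contract a symmetric 3rd-order tensor T with v twice: returns T(I, v, v)."""
    return np.einsum('ijk,j,k->i', T, v, v)

def tensor_power_iteration(T, n_iter=100, seed=0):
    """Recover one (eigenvector, eigenvalue) pair of a symmetric 3rd-order tensor."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=T.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        w = tensor_apply(T, v)
        v = w / np.linalg.norm(w)
    lam = np.einsum('ijk,i,j,k->', T, v, v, v)
    return v, lam

# Toy example: T = sum_r lam_r * (u_r outer u_r outer u_r) with orthonormal u_r.
U = np.linalg.qr(np.random.default_rng(1).normal(size=(5, 2)))[0]
lams = np.array([3.0, 1.5])
T = sum(l * np.einsum('i,j,k->ijk', u, u, u) for l, u in zip(lams, U.T))
v, lam = tensor_power_iteration(T)
print(lam, np.abs(U.T @ v))  # lam matches one of lams; v aligns with the matching column of U
```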
cs.LG cs.DS stat.ML | null | 1302.2752 | null | null | http://arxiv.org/pdf/1302.2752v3 | 2015-03-25T12:18:55Z | 2013-02-12T10:20:21Z | Adaptive Metric Dimensionality Reduction | We study adaptive data-dependent dimensionality reduction in the context of
supervised learning in general metric spaces. Our main statistical contribution
is a generalization bound for Lipschitz functions in metric spaces that are
doubling, or nearly doubling. On the algorithmic front, we describe an analogue
of PCA for metric spaces: namely an efficient procedure that approximates the
data's intrinsic dimension, which is often much lower than the ambient
dimension. Our approach thus leverages the dual benefits of low dimensionality:
(1) more efficient algorithms, e.g., for proximity search, and (2) more
optimistic generalization bounds.
| [
"Lee-Ad Gottlieb, Aryeh Kontorovich, Robert Krauthgamer",
"['Lee-Ad Gottlieb' 'Aryeh Kontorovich' 'Robert Krauthgamer']"
] |
cs.LG cs.IT math.AG math.IT stat.ML | null | 1302.2767 | null | null | http://arxiv.org/pdf/1302.2767v2 | 2013-11-02T14:02:46Z | 2013-02-12T12:15:20Z | Coherence and sufficient sampling densities for reconstruction in
compressed sensing | We give a new, very general, formulation of the compressed sensing problem in
terms of coordinate projections of an analytic variety, and derive sufficient
sampling rates for signal reconstruction. Our bounds are linear in the
coherence of the signal space, a geometric parameter independent of the
specific signal and measurement, and logarithmic in the ambient dimension where
the signal is presented. We exemplify our approach by deriving sufficient
sampling densities for low-rank matrix completion and distance matrix
completion which are independent of the true matrix.
| [
"['Franz J. Király' 'Louis Theran']",
"Franz J. Kir\\'aly and Louis Theran"
] |
cs.LG | null | 1302.3219 | null | null | http://arxiv.org/pdf/1302.3219v1 | 2013-02-13T08:48:53Z | 2013-02-13T08:48:53Z | An Efficient Dual Approach to Distance Metric Learning | Distance metric learning is of fundamental interest in machine learning
because the distance metric employed can significantly affect the performance
of many learning methods. Quadratic Mahalanobis metric learning is a popular
approach to the problem, but typically requires solving a semidefinite
programming (SDP) problem, which is computationally expensive. Standard
interior-point SDP solvers typically have a complexity of $O(D^{6.5})$ (with
$D$ the dimension of input data), and can thus only practically solve problems
exhibiting less than a few thousand variables. Since the number of variables is
$D (D+1) / 2 $, this implies a limit upon the size of problem that can
practically be solved of around a few hundred dimensions. The complexity of the
popular quadratic Mahalanobis metric learning approach thus limits the size of
problem to which metric learning can be applied. Here we propose a
significantly more efficient approach to the metric learning problem based on
the Lagrange dual formulation of the problem. The proposed formulation is much
simpler to implement, and therefore allows much larger Mahalanobis metric
learning problems to be solved. The time complexity of the proposed method is
$O (D ^ 3) $, which is significantly lower than that of the SDP approach.
Experiments on a variety of datasets demonstrate that the proposed method
achieves an accuracy comparable to the state-of-the-art, but is applicable to
significantly larger problems. We also show that the proposed method can be
applied to solve more general Frobenius-norm regularized SDP problems
approximately.
| [
"Chunhua Shen, Junae Kim, Fayao Liu, Lei Wang, Anton van den Hengel",
"['Chunhua Shen' 'Junae Kim' 'Fayao Liu' 'Lei Wang' 'Anton van den Hengel']"
] |
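To make the object being learned in the entry above (arXiv:1302.3219) concrete, here is a short sketch, not the paper's dual algorithm, of the quadratic Mahalanobis distance parameterized by a positive semidefinite matrix M, and of why such a metric has D(D+1)/2 free variables. The random M and all names are illustrative assumptions.

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance d_M^2(x, y) = (x - y)^T M (x - y)."""
    d = x - y
    return float(d @ M @ d)

D = 4
rng = np.random.default_rng(0)
A = rng.normal(size=(D, D))
M = A @ A.T                      # a random PSD matrix standing in for a learned metric
x, y = rng.normal(size=D), rng.normal(size=D)

print("free parameters:", D * (D + 1) // 2)   # M is symmetric, hence D(D+1)/2 variables
print("d_M^2(x, y)    :", mahalanobis_sq(x, y, M))
# Writing M = L L^T shows the metric is a Euclidean distance after the linear map L^T.
L = np.linalg.cholesky(M + 1e-9 * np.eye(D))
print("check          :", np.sum((L.T @ (x - y)) ** 2))
```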
cs.LG | null | 1302.3268 | null | null | http://arxiv.org/pdf/1302.3268v2 | 2013-05-20T15:02:15Z | 2013-02-13T22:42:44Z | Adaptive Crowdsourcing Algorithms for the Bandit Survey Problem | Very recently crowdsourcing has become the de facto platform for distributing
and collecting human computation for a wide range of tasks and applications
such as information retrieval, natural language processing and machine
learning. Current crowdsourcing platforms have some limitations in the area of
quality control. Most of the effort to ensure good quality has to be done by
the experimenter who has to manage the number of workers needed to reach good
results.
We propose a simple model for adaptive quality control in crowdsourced
multiple-choice tasks which we call the \emph{bandit survey problem}. This
model is related to, but technically different from the well-known multi-armed
bandit problem. We present several algorithms for this problem, and support
them with analysis and simulations. Our approach is based on our experience
conducting relevance evaluation for a large commercial search engine.
| [
"Ittai Abraham, Omar Alonso, Vasilis Kandylas and Aleksandrs Slivkins",
"['Ittai Abraham' 'Omar Alonso' 'Vasilis Kandylas' 'Aleksandrs Slivkins']"
] |
cs.LG | 10.1109/TPAMI.2014.2315792 | 1302.3283 | null | null | http://arxiv.org/abs/1302.3283v4 | 2020-03-09T03:33:35Z | 2013-02-14T01:01:24Z | StructBoost: Boosting Methods for Predicting Structured Output Variables | Boosting is a method for learning a single accurate predictor by linearly
combining a set of less accurate weak learners. Recently, structured learning
has found many applications in computer vision. Inspired by structured support
vector machines (SSVM), here we propose a new boosting algorithm for structured
output prediction, which we refer to as StructBoost. StructBoost supports
nonlinear structured learning by combining a set of weak structured learners.
As SSVM generalizes SVM, our StructBoost generalizes standard boosting
approaches such as AdaBoost, or LPBoost to structured learning. The resulting
optimization problem of StructBoost is more challenging than SSVM in the sense
that it may involve exponentially many variables and constraints. In contrast,
for SSVM one usually has an exponential number of constraints and a
cutting-plane method is used. In order to efficiently solve StructBoost, we
formulate an equivalent $ 1 $-slack formulation and solve it using a
combination of cutting planes and column generation. We show the versatility
and usefulness of StructBoost on a range of problems such as optimizing the
tree loss for hierarchical multi-class classification, optimizing the Pascal
overlap criterion for robust visual tracking and learning conditional random
field parameters for image segmentation.
| [
"Chunhua Shen, Guosheng Lin, Anton van den Hengel",
"['Chunhua Shen' 'Guosheng Lin' 'Anton van den Hengel']"
] |
stat.ML cs.IT cs.LG math.IT math.ST stat.TH | null | 1302.3407 | null | null | http://arxiv.org/pdf/1302.3407v1 | 2013-02-14T14:15:14Z | 2013-02-14T14:15:14Z | A consistent clustering-based approach to estimating the number of
change-points in highly dependent time-series | The problem of change-point estimation is considered under a general
framework where the data are generated by unknown stationary ergodic process
distributions. In this context, the consistent estimation of the number of
change-points is provably impossible. However, it is shown that a consistent
clustering method may be used to estimate the number of change points, under
the additional constraint that the correct number of process distributions that
generate the data is provided. This additional parameter has a natural
interpretation in many real-world applications. An algorithm is proposed that
estimates the number of change-points and locates the changes. The proposed
algorithm is shown to be asymptotically consistent; its empirical evaluations
are provided.
| [
"['Azaden Khaleghi' 'Daniil Ryabko']",
"Azaden Khaleghi and Daniil Ryabko"
] |
math.ST cs.LG cs.NA math.PR stat.TH | null | 1302.3447 | null | null | http://arxiv.org/pdf/1302.3447v1 | 2013-02-13T18:13:23Z | 2013-02-13T18:13:23Z | Exact Methods for Multistage Estimation of a Binomial Proportion | We first review existing sequential methods for estimating a binomial
proportion. Afterward, we propose a new family of group sequential sampling
schemes for estimating a binomial proportion with prescribed margin of error
and confidence level. In particular, we establish the uniform controllability
of coverage probability and the asymptotic optimality for such a family of
sampling schemes. Our theoretical results establish the possibility that the
parameters of this family of sampling schemes can be determined so that the
prescribed level of confidence is guaranteed with little waste of samples.
Analytic bounds for the cumulative distribution functions and expectations of
sample numbers are derived. Moreover, we discuss the inherent connection of
various sampling schemes. Numerical issues are addressed for improving the
accuracy and efficiency of computation. Computational experiments are conducted
for comparing sampling schemes. Illustrative examples are given for
applications in clinical trials.
| [
"Zhengjia Chen and Xinjia Chen",
"['Zhengjia Chen' 'Xinjia Chen']"
] |
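For context on the entry above (arXiv:1302.3447), the following is a minimal sketch of the classical fixed-sample-size baseline for estimating a binomial proportion with a prescribed margin of error and confidence level, using the worst-case variance p(1-p) <= 1/4 and a normal-approximation quantile. The multistage schemes in the paper aim to guarantee coverage with fewer samples on average; the function below is an assumption for illustration only.

```python
import math
from statistics import NormalDist

def fixed_sample_size(eps, delta):
    """Worst-case sample size so that the sample proportion is within eps of the
    true proportion with probability at least 1 - delta (normal approximation)."""
    z = NormalDist().inv_cdf(1 - delta / 2)
    return math.ceil(z ** 2 / (4 * eps ** 2))

print(fixed_sample_size(eps=0.05, delta=0.05))  # about 385 samples
```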
cs.AI cs.LG stat.ML | null | 1302.3566 | null | null | http://arxiv.org/pdf/1302.3566v1 | 2013-02-13T14:12:58Z | 2013-02-13T14:12:58Z | Learning Equivalence Classes of Bayesian Networks Structures | Approaches to learning Bayesian networks from data typically combine a
scoring function with a heuristic search procedure. Given a Bayesian network
structure, many of the scoring functions derived in the literature return a
score for the entire equivalence class to which the structure belongs. When
using such a scoring function, it is appropriate for the heuristic search
algorithm to search over equivalence classes of Bayesian networks as opposed to
individual structures. We present the general formulation of a search space for
which the states of the search correspond to equivalence classes of structures.
Using this space, any one of a number of heuristic search algorithms can easily
be applied. We compare greedy search performance in the proposed search space
to greedy search performance in a search space for which the states correspond
to individual Bayesian network structures.
| [
"David Maxwell Chickering",
"['David Maxwell Chickering']"
] |
cs.LG cs.AI stat.ML | null | 1302.3567 | null | null | http://arxiv.org/pdf/1302.3567v2 | 2015-05-17T00:07:34Z | 2013-02-13T14:13:03Z | Efficient Approximations for the Marginal Likelihood of Incomplete Data
Given a Bayesian Network | We discuss Bayesian methods for learning Bayesian networks when data sets are
incomplete. In particular, we examine asymptotic approximations for the
marginal likelihood of incomplete data given a Bayesian network. We consider
the Laplace approximation and the less accurate but more efficient BIC/MDL
approximation. We also consider approximations proposed by Draper (1993) and
Cheeseman and Stutz (1995). These approximations are as efficient as BIC/MDL,
but their accuracy has not been studied in any depth. We compare the accuracy
of these approximations under the assumption that the Laplace approximation is
the most accurate. In experiments using synthetic data generated from discrete
naive-Bayes models having a hidden root node, we find that the CS measure is
the most accurate.
| [
"['David Maxwell Chickering' 'David Heckerman']",
"David Maxwell Chickering, David Heckerman"
] |
cs.AI cs.LG stat.ML | null | 1302.3577 | null | null | http://arxiv.org/pdf/1302.3577v1 | 2013-02-13T14:14:02Z | 2013-02-13T14:14:02Z | Learning Bayesian Networks with Local Structure | In this paper we examine a novel addition to the known methods for learning
Bayesian networks from data that improves the quality of the learned networks.
Our approach explicitly represents and learns the local structure in the
conditional probability tables (CPTs) that quantify these networks. This
increases the space of possible models, enabling the representation of CPTs
with a variable number of parameters that depends on the learned local
structures. The resulting learning procedure is capable of inducing models that
better emulate the real complexity of the interactions present in the data. We
describe the theoretical foundations and practical aspects of learning local
structures, as well as an empirical evaluation of the proposed method. This
evaluation indicates that learning curves characterizing the procedure that
exploits the local structure converge faster than those of the standard
procedure. Our results also show that networks learned with local structure
tend to be more complex (in terms of arcs), yet require fewer parameters.
| [
"['Nir Friedman' 'Moises Goldszmidt']",
"Nir Friedman, Moises Goldszmidt"
] |
cs.LG stat.ML | null | 1302.3579 | null | null | http://arxiv.org/pdf/1302.3579v1 | 2013-02-13T14:14:13Z | 2013-02-13T14:14:13Z | On the Sample Complexity of Learning Bayesian Networks | In recent years there has been an increasing interest in learning Bayesian
networks from data. One of the most effective methods for learning such
networks is based on the minimum description length (MDL) principle. Previous
work has shown that this learning procedure is asymptotically successful: with
probability one, it will converge to the target distribution, given a
sufficient number of samples. However, the rate of this convergence has been
hitherto unknown. In this work we examine the sample complexity of MDL based
learning procedures for Bayesian networks. We show that the number of samples
needed to learn an epsilon-close approximation (in terms of entropy distance)
with confidence delta is O((1/epsilon)^(4/3) log(1/epsilon) log(1/delta) loglog(1/delta)).
This means that the sample complexity is a low-order polynomial in
the error threshold and sub-linear in the confidence bound. We also discuss how
the constants in this term depend on the complexity of the target distribution.
Finally, we address questions of asymptotic minimality and propose a method for
using the sample complexity results to speed up the learning process.
| [
"Nir Friedman, Zohar Yakhini",
"['Nir Friedman' 'Zohar Yakhini']"
] |
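For reference, the sample-complexity bound stated in the entry above (arXiv:1302.3579), written in display form with $\epsilon$ the entropy-distance error and $\delta$ the confidence parameter:

$$ N(\epsilon,\delta) \;=\; O\!\Big( (1/\epsilon)^{4/3}\,\log(1/\epsilon)\,\log(1/\delta)\,\log\log(1/\delta) \Big). $$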
cs.LG cs.AI stat.ML | null | 1302.3580 | null | null | http://arxiv.org/pdf/1302.3580v2 | 2015-05-16T23:35:58Z | 2013-02-13T14:14:19Z | Asymptotic Model Selection for Directed Networks with Hidden Variables | We extend the Bayesian Information Criterion (BIC), an asymptotic
approximation for the marginal likelihood, to Bayesian networks with hidden
variables. This approximation can be used to select models given large samples
of data. The standard BIC as well as our extension punishes the complexity of a
model according to the dimension of its parameters. We argue that the dimension
of a Bayesian network with hidden variables is the rank of the Jacobian matrix
of the transformation between the parameters of the network and the parameters
of the observable variables. We compute the dimensions of several networks
including the naive Bayes model with a hidden root node.
| [
"['Dan Geiger' 'David Heckerman' 'Christopher Meek']",
"Dan Geiger, David Heckerman, Christopher Meek"
] |
cs.LG q-bio.NC stat.AP stat.ML | null | 1302.3590 | null | null | http://arxiv.org/pdf/1302.3590v1 | 2013-02-13T14:15:20Z | 2013-02-13T14:15:20Z | Bayesian Learning of Loglinear Models for Neural Connectivity | This paper presents a Bayesian approach to learning the connectivity
structure of a group of neurons from data on configuration frequencies. A major
objective of the research is to provide statistical tools for detecting changes
in firing patterns with changing stimuli. Our framework is not restricted to
the well-understood case of pair interactions, but generalizes the Boltzmann
machine model to allow for higher order interactions. The paper applies a
Markov Chain Monte Carlo Model Composition (MC3) algorithm to search over
connectivity structures and uses Laplace's method to approximate posterior
probabilities of structures. Performance of the methods was tested on synthetic
data. The models were also applied to data obtained by Vaadia on multi-unit
recordings of several neurons in the visual cortex of a rhesus monkey in two
different attentional states. Results confirmed the experimenters' conjecture
that different attentional states were associated with different interaction
structures.
| [
"['Kathryn Blackmond Laskey' 'Laura Martignon']",
"Kathryn Blackmond Laskey, Laura Martignon"
] |
stat.ML cs.LG cs.SI | null | 1302.3639 | null | null | http://arxiv.org/pdf/1302.3639v5 | 2013-12-13T04:20:34Z | 2013-02-14T22:12:40Z | A Latent Source Model for Nonparametric Time Series Classification | For classifying time series, a nearest-neighbor approach is widely used in
practice with performance often competitive with or better than more elaborate
methods such as neural networks, decision trees, and support vector machines.
We develop theoretical justification for the effectiveness of
nearest-neighbor-like classification of time series. Our guiding hypothesis is
that in many applications, such as forecasting which topics will become trends
on Twitter, there aren't actually that many prototypical time series to begin
with, relative to the number of time series we have access to, e.g., topics
become trends on Twitter only in a few distinct manners whereas we can collect
massive amounts of Twitter data. To operationalize this hypothesis, we propose
a latent source model for time series, which naturally leads to a "weighted
majority voting" classification rule that can be approximated by a
nearest-neighbor classifier. We establish nonasymptotic performance guarantees
of both weighted majority voting and nearest-neighbor classification under our
model accounting for how much of the time series we observe and the model
complexity. Experimental results on synthetic data show weighted majority
voting achieving the same misclassification rate as nearest-neighbor
classification while observing less of the time series. We then use weighted
majority to forecast which news topics on Twitter become trends, where we are
able to detect such "trending topics" in advance of Twitter 79% of the time,
with a mean early advantage of 1 hour and 26 minutes, a true positive rate of
95%, and a false positive rate of 4%.
| [
"['George H. Chen' 'Stanislav Nikolov' 'Devavrat Shah']",
"George H. Chen, Stanislav Nikolov, Devavrat Shah"
] |
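As a minimal illustration of the classifiers discussed in the entry above (arXiv:1302.3639), here is a sketch of plain nearest-neighbor time-series classification together with a Gaussian-weighted "majority voting" variant of the kind the paper's latent source model motivates. The Euclidean distance, the bandwidth beta, the synthetic data, and all names are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def weighted_vote(query, train_series, train_labels, beta=1.0):
    """Label by weighted voting: closer training series contribute larger votes."""
    d2 = np.array([np.sum((query - s) ** 2) for s in train_series])
    w = np.exp(-beta * d2)
    scores = {c: w[train_labels == c].sum() for c in np.unique(train_labels)}
    return max(scores, key=scores.get)

def nearest_neighbor(query, train_series, train_labels):
    """Label by the single closest training series."""
    d2 = np.array([np.sum((query - s) ** 2) for s in train_series])
    return train_labels[np.argmin(d2)]

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
train = np.array([np.sin(2 * np.pi * t) + 0.3 * rng.normal(size=50) for _ in range(20)]
                 + [np.cos(2 * np.pi * t) + 0.3 * rng.normal(size=50) for _ in range(20)])
labels = np.array([0] * 20 + [1] * 20)
query = np.cos(2 * np.pi * t) + 0.3 * rng.normal(size=50)
print(weighted_vote(query, train, labels), nearest_neighbor(query, train, labels))  # both 1
```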
cs.LG q-bio.QM stat.ML | null | 1302.3668 | null | null | http://arxiv.org/pdf/1302.3668v1 | 2013-02-15T03:54:53Z | 2013-02-15T03:54:53Z | Bio-inspired data mining: Treating malware signatures as biosequences | The application of machine learning to bioinformatics problems is well
established. Less well understood is the application of bioinformatics
techniques to machine learning and, in particular, the representation of
non-biological data as biosequences. The aim of this paper is to explore the
effects of giving amino acid representation to problematic machine learning
data and to evaluate the benefits of supplementing traditional machine learning
with bioinformatics tools and techniques. The signatures of 60 computer viruses
and 60 computer worms were converted into amino acid representations and first
multiply aligned separately to identify conserved regions across different
families within each class (virus and worm). This was followed by a second
alignment of all 120 aligned signatures together so that non-conserved regions
were identified prior to input to a number of machine learning techniques.
Differences in length between virus and worm signatures after the first
alignment were resolved by the second alignment. Our first set of experiments
indicates that representing computer malware signatures as amino acid sequences
followed by alignment leads to greater classification and prediction accuracy.
Our second set of experiments indicates that checking the results of data
mining from artificial virus and worm data against known proteins can lead to
generalizations being made from the domain of naturally occurring proteins to
malware signatures. However, further work is needed to determine the advantages
and disadvantages of different representations and sequence alignment methods
for handling problematic machine learning data.
| [
"Ajit Narayanan and Yi Chen",
"['Ajit Narayanan' 'Yi Chen']"
] |
stat.ML cs.LG | null | 1302.3700 | null | null | http://arxiv.org/pdf/1302.3700v1 | 2013-02-15T08:16:14Z | 2013-02-15T08:16:14Z | Density Ratio Hidden Markov Models | Hidden Markov models and their variants are the predominant sequential
classification method in such domains as speech recognition, bioinformatics and
natural language processing. However, because they are generative rather than
discriminative models, their classification performance is a drawback. In this paper
we apply ideas from the field of density ratio estimation to bypass the
difficult step of learning likelihood functions in HMMs. By reformulating
inference and model fitting in terms of density ratios and applying a fast
kernel-based estimation method, we show that it is possible to obtain a
striking increase in discriminative performance while retaining the
probabilistic qualities of the HMM. We demonstrate experimentally that this
formulation makes more efficient use of training data than alternative
approaches.
| [
"John A. Quinn, Masashi Sugiyama",
"['John A. Quinn' 'Masashi Sugiyama']"
] |
cs.LG | null | 1302.3721 | null | null | http://arxiv.org/pdf/1302.3721v1 | 2013-02-15T10:48:57Z | 2013-02-15T10:48:57Z | Thompson Sampling in Switching Environments with Bayesian Online Change
Point Detection | Thompson Sampling has recently been shown to be optimal in the Bernoulli
Multi-Armed Bandit setting[Kaufmann et al., 2012]. This bandit problem assumes
stationary distributions for the rewards. It is often unrealistic to model the
real world as a stationary distribution. In this paper we derive and evaluate
algorithms using Thompson Sampling for a Switching Multi-Armed Bandit Problem.
We propose a Thompson Sampling strategy equipped with a Bayesian change point
mechanism to tackle this problem. We develop algorithms for a variety of cases
with constant switching rate: when switching occurs all arms change (Global
Switching), when switching occurs independently for each arm (Per-Arm
Switching), and when the switching rate is known or must be inferred from data. This
leads to a family of algorithms we collectively term Change-Point Thompson
Sampling (CTS). We show empirical results of the algorithm in 4 artificial
environments, and 2 derived from real world data; news click-through[Yahoo!,
2011] and foreign exchange data[Dukascopy, 2012], comparing them to some other
bandit algorithms. In real world data CTS is the most effective.
| [
"Joseph Mellor, Jonathan Shapiro",
"['Joseph Mellor' 'Jonathan Shapiro']"
] |
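For background on the entry above (arXiv:1302.3721), the following is a sketch of standard Bernoulli Thompson Sampling with Beta posteriors, the stationary baseline that the Change-Point Thompson Sampling (CTS) strategy augments with Bayesian online change-point detection. The change-point machinery itself is omitted; the toy reward probabilities and all names are assumptions for illustration.

```python
import numpy as np

def thompson_sampling(true_means, horizon=2000, seed=0):
    rng = np.random.default_rng(seed)
    k = len(true_means)
    alpha = np.ones(k)   # Beta(1, 1) prior successes
    beta = np.ones(k)    # Beta(1, 1) prior failures
    total_reward = 0.0
    for _ in range(horizon):
        samples = rng.beta(alpha, beta)          # one posterior sample per arm
        arm = int(np.argmax(samples))            # play the arm with the best sample
        reward = rng.random() < true_means[arm]  # Bernoulli reward
        alpha[arm] += reward
        beta[arm] += 1 - reward
        total_reward += reward
    return total_reward

print(thompson_sampling([0.3, 0.5, 0.7]))  # should be close to 0.7 * 2000
```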
cs.NE cs.LG stat.ML | null | 1302.3931 | null | null | http://arxiv.org/pdf/1302.3931v7 | 2013-10-09T16:55:26Z | 2013-02-16T05:49:15Z | Understanding Boltzmann Machine and Deep Learning via A Confident
Information First Principle | Typical dimensionality reduction methods focus on directly reducing the
number of random variables while retaining maximal variations in the data. In
this paper, we consider the dimensionality reduction in parameter spaces of
binary multivariate distributions. We propose a general
Confident-Information-First (CIF) principle to maximally preserve parameters
with confident estimates and rule out unreliable or noisy parameters. Formally,
the confidence of a parameter can be assessed by its Fisher information, which
establishes a connection with the inverse variance of any unbiased estimate for
the parameter via the Cram\'{e}r-Rao bound. We then revisit Boltzmann machines
(BM) and theoretically show that both single-layer BM without hidden units
(SBM) and restricted BM (RBM) can be solidly derived using the CIF principle.
This can not only help us uncover and formalize the essential parts of the
target density that SBM and RBM capture, but also suggest that the deep neural
network consisting of several layers of RBM can be seen as the layer-wise
application of CIF. Guided by the theoretical analysis, we develop a
sample-specific CIF-based contrastive divergence (CD-CIF) algorithm for SBM and
a CIF-based iterative projection procedure (IP) for RBM. Both CD-CIF and IP are
studied in a series of density estimation experiments.
| [
"Xiaozhao Zhao and Yuexian Hou and Qian Yu and Dawei Song and Wenjie Li",
"['Xiaozhao Zhao' 'Yuexian Hou' 'Qian Yu' 'Dawei Song' 'Wenjie Li']"
] |
cs.LG stat.ML | null | 1302.3956 | null | null | http://arxiv.org/pdf/1302.3956v1 | 2013-02-16T11:10:17Z | 2013-02-16T11:10:17Z | Clustering validity based on the most similarity | One basic requirement of many studies is the necessity of classifying data.
Clustering is a proposed method for summarizing networks. Clustering methods
can be divided into two categories named model-based approaches and algorithmic
approaches. Since the most of clustering methods depend on their input
parameters, it is important to evaluate the result of a clustering algorithm
with its different input parameters, to choose the most appropriate one. There
are several clustering validity techniques based on inner density and outer
density of clusters that represent different metrics to choose the most
appropriate clustering independent of the input parameters. Because previous
methods depend on the input parameters, one challenge when facing large systems
is that the data must be completed incrementally, which affects the final
choice of the most appropriate clustering. Those methods take high density
within a cluster and low density among different clusters as the measure for
choosing the optimal clustering. This measure has a serious problem: not all
data are available at the first stage. In this paper, we introduce an efficient
measure based on the clustering that is repeated most often across various
initial values.
| [
"Raheleh Namayandeh, Farzad Didehvar, Zahra Shojaei",
"['Raheleh Namayandeh' 'Farzad Didehvar' 'Zahra Shojaei']"
] |
cs.NE cs.LG stat.ML | null | 1302.4141 | null | null | http://arxiv.org/pdf/1302.4141v1 | 2013-02-18T00:28:31Z | 2013-02-18T00:28:31Z | Canonical dual solutions to nonconvex radial basis neural network
optimization problem | Radial Basis Functions Neural Networks (RBFNNs) are tools widely used in
regression problems. One of their principal drawbacks is that the formulation
corresponding to the training with the supervision of both the centers and the
weights is a highly non-convex optimization problem, which leads to some
fundamentally difficulties for traditional optimization theory and methods.
This paper presents a generalized canonical duality theory for solving this
challenging problem. We demonstrate that by sequential canonical dual
transformations, the nonconvex optimization problem of the RBFNN can be
reformulated as a canonical dual problem (without duality gap). Both global
optimal solution and local extrema can be classified. Several applications to
one of the most used Radial Basis Functions, the Gaussian function, are
illustrated. Our results show that even for one-dimensional case, the global
minimizer of the nonconvex problem may not be the best solution to the RBFNNs,
and the canonical dual theory is a promising tool for solving general neural
networks training problems.
| [
"['Vittorio Latorre' 'David Yang Gao']",
"Vittorio Latorre and David Yang Gao"
] |
cs.LG stat.ML | 10.1109/ICASSP.2014.6854993 | 1302.4242 | null | null | http://arxiv.org/abs/1302.4242v2 | 2013-02-26T09:19:18Z | 2013-02-18T12:25:07Z | Metrics for Multivariate Dictionaries | Overcomplete representations and dictionary learning algorithms kept
attracting a growing interest in the machine learning community. This paper
addresses the emerging problem of comparing multivariate overcomplete
representations. Despite a recurrent need to rely on a distance for learning or
assessing multivariate overcomplete representations, no metrics in their
underlying spaces have yet been proposed. Henceforth we propose to study
overcomplete representations from the perspective of frame theory and matrix
manifolds. We consider distances between multivariate dictionaries as distances
between their spans which reveal to be elements of a Grassmannian manifold. We
introduce Wasserstein-like set-metrics defined on Grassmannian spaces and study
their properties both theoretically and numerically. Indeed a deep experimental
study based on tailored synthetic datasets and real EEG signals for
Brain-Computer Interfaces (BCI) has been conducted. In particular, the
introduced metrics have been embedded in a clustering algorithm and applied to
BCI Competition IV-2a for dataset quality assessment. Besides, a principled
connection is made between three close but still disjoint research fields,
namely, Grassmannian packing, dictionary learning and compressed sensing.
| [
"['Sylvain Chevallier' 'Quentin Barthélemy' 'Jamal Atif']",
"Sylvain Chevallier and Quentin Barth\\'elemy and Jamal Atif"
] |
cs.LG stat.ML | null | 1302.4297 | null | null | http://arxiv.org/pdf/1302.4297v3 | 2013-05-14T21:35:25Z | 2013-02-18T15:00:47Z | Feature Multi-Selection among Subjective Features | When dealing with subjective, noisy, or otherwise nebulous features, the
"wisdom of crowds" suggests that one may benefit from multiple judgments of the
same feature on the same object. We give theoretically-motivated `feature
multi-selection' algorithms that choose, among a large set of candidate
features, not only which features to judge but how many times to judge each
one. We demonstrate the effectiveness of this approach for linear regression on
a crowdsourced learning task of predicting people's height and weight from
photos, using features such as 'gender' and 'estimated weight' as well as
culturally fraught ones such as 'attractive'.
| [
"Sivan Sabato and Adam Kalai",
"['Sivan Sabato' 'Adam Kalai']"
] |
math.FA cs.LG stat.ML | null | 1302.4343 | null | null | http://arxiv.org/pdf/1302.4343v1 | 2013-02-18T16:42:27Z | 2013-02-18T16:42:27Z | On Translation Invariant Kernels and Screw Functions | We explore the connection between Hilbertian metrics and positive definite
kernels on the real line. In particular, we look at a well-known
characterization of translation invariant Hilbertian metrics on the real line
by von Neumann and Schoenberg (1941). Using this result we are able to give an
alternate proof of Bochner's theorem for translation invariant positive
definite kernels on the real line (Rudin, 1962).
| [
"Purushottam Kar and Harish Karnick",
"['Purushottam Kar' 'Harish Karnick']"
] |
cs.LG stat.ML | null | 1302.4387 | null | null | http://arxiv.org/pdf/1302.4387v2 | 2013-06-01T09:35:15Z | 2013-02-18T18:46:37Z | Online Learning with Switching Costs and Other Adaptive Adversaries | We study the power of different types of adaptive (nonoblivious) adversaries
in the setting of prediction with expert advice, under both full-information
and bandit feedback. We measure the player's performance using a new notion of
regret, also known as policy regret, which better captures the adversary's
adaptiveness to the player's behavior. In a setting where losses are allowed to
drift, we characterize ---in a nearly complete manner--- the power of adaptive
adversaries with bounded memories and switching costs. In particular, we show
that with switching costs, the attainable rate with bandit feedback is
$\widetilde{\Theta}(T^{2/3})$. Interestingly, this rate is significantly worse
than the $\Theta(\sqrt{T})$ rate attainable with switching costs in the
full-information case. Via a novel reduction from experts to bandits, we also
show that a bounded memory adversary can force $\widetilde{\Theta}(T^{2/3})$
regret even in the full information case, proving that switching costs are
easier to control than bounded memory adversaries. Our lower bounds rely on a
new stochastic adversary strategy that generates loss processes with strong
dependencies.
| [
"Nicolo Cesa-Bianchi, Ofer Dekel and Ohad Shamir",
"['Nicolo Cesa-Bianchi' 'Ofer Dekel' 'Ohad Shamir']"
] |
stat.ML cs.LG | null | 1302.4389 | null | null | http://arxiv.org/pdf/1302.4389v4 | 2013-09-20T08:54:35Z | 2013-02-18T18:59:07Z | Maxout Networks | We consider the problem of designing models to leverage a recently introduced
approximate model averaging technique called dropout. We define a simple new
model called maxout (so named because its output is the max of a set of inputs,
and because it is a natural companion to dropout) designed to both facilitate
optimization by dropout and improve the accuracy of dropout's fast approximate
model averaging technique. We empirically verify that the model successfully
accomplishes both of these tasks. We use maxout and dropout to demonstrate
state of the art classification performance on four benchmark datasets: MNIST,
CIFAR-10, CIFAR-100, and SVHN.
| [
"['Ian J. Goodfellow' 'David Warde-Farley' 'Mehdi Mirza' 'Aaron Courville'\n 'Yoshua Bengio']",
"Ian J. Goodfellow and David Warde-Farley and Mehdi Mirza and Aaron\n Courville and Yoshua Bengio"
] |
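To make the model in the entry above (arXiv:1302.4389) concrete: a maxout unit outputs the maximum of a set of affine functions of its input, so a maxout layer computes h_j = max_i (x W[:, j, i] + b[j, i]). Below is a minimal sketch of the forward pass only; the shapes, names, and random parameters are illustrative assumptions, not the paper's trained networks.

```python
import numpy as np

def maxout_layer(x, W, b):
    """x: (batch, d_in); W: (d_in, d_out, k); b: (d_out, k) -> (batch, d_out)."""
    z = np.einsum('nd,dok->nok', x, W) + b   # all k affine pieces for every output unit
    return z.max(axis=-1)                    # maxout: keep the largest piece per unit

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 10))
W = rng.normal(size=(10, 3, 2))              # 3 maxout units with k = 2 pieces each
b = rng.normal(size=(3, 2))
print(maxout_layer(x, W, b).shape)           # (4, 3)
```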
cs.LG stat.ML | null | 1302.4549 | null | null | http://arxiv.org/pdf/1302.4549v2 | 2013-02-20T08:35:39Z | 2013-02-19T09:21:09Z | Breaking the Small Cluster Barrier of Graph Clustering | This paper investigates graph clustering in the planted cluster model in the
presence of {\em small clusters}. Traditional results dictate that for an
algorithm to provably correctly recover the clusters, {\em all} clusters must
be sufficiently large (in particular, $\tilde{\Omega}(\sqrt{n})$ where $n$ is
the number of nodes of the graph). We show that this is not really a
restriction: by a more refined analysis of the trace-norm based recovery
approach proposed in Jalali et al. (2011) and Chen et al. (2012), we prove that
small clusters, under certain mild assumptions, do not hinder recovery of large
ones.
Based on this result, we further devise an iterative algorithm to recover
{\em almost all clusters} via a "peeling strategy", i.e., recover large
clusters first, leading to a reduced problem, and repeat this procedure. These
results are extended to the {\em partial observation} setting, in which only a
(chosen) part of the graph is observed. The peeling strategy gives rise to an
active learning algorithm, in which edges adjacent to smaller clusters are
queried more often as large clusters are learned (and removed).
From a high level, this paper sheds novel insights on high-dimensional
statistics and learning structured data, by presenting a structured matrix
learning problem for which a one shot convex relaxation approach necessarily
fails, but a carefully constructed sequence of convex relaxations does the job.
| [
"Nir Ailon and Yudong Chen and Xu Huan",
"['Nir Ailon' 'Yudong Chen' 'Xu Huan']"
] |
stat.ML cs.LG cs.PF | 10.1109/LCOMM.2013.082113.131131 | 1302.4773 | null | null | http://arxiv.org/abs/1302.4773v1 | 2013-02-19T22:59:44Z | 2013-02-19T22:59:44Z | Optimal Discriminant Functions Based On Sampled Distribution Distance
for Modulation Classification | In this letter, we derive the optimal discriminant functions for modulation
classification based on the sampled distribution distance. The proposed method
classifies various candidate constellations using a low complexity approach
based on the distribution distance at specific testpoints along the cumulative
distribution function. This method, based on the Bayesian decision criteria,
asymptotically provides the minimum classification error possible given a set
of testpoints. Testpoint locations are also optimized to improve classification
performance. The method provides significant gains over existing approaches
that also use the distribution of the signal features.
| [
"Paulo Urriza, Eric Rebeiz, Danijela Cabric",
"['Paulo Urriza' 'Eric Rebeiz' 'Danijela Cabric']"
] |
cs.CL cs.LG | null | 1302.4874 | null | null | http://arxiv.org/pdf/1302.4874v1 | 2013-02-20T11:06:25Z | 2013-02-20T11:06:25Z | A Labeled Graph Kernel for Relationship Extraction | In this paper, we propose an approach for Relationship Extraction (RE) based
on labeled graph kernels. The kernel we propose is a particularization of a
random walk kernel that exploits two properties previously studied in the RE
literature: (i) the words between the candidate entities or connecting them in
a syntactic representation are particularly likely to carry information
regarding the relationship; and (ii) combining information from distinct
sources in a kernel may help the RE system make better decisions. We performed
experiments on a dataset of protein-protein interactions and the results show
that our approach obtains effectiveness values that are comparable with the
state-of-the art kernel methods. Moreover, our approach is able to outperform
the state-of-the-art kernels when combined with other kernel methods.
| [
"Gon\\c{c}alo Sim\\~oes, Helena Galhardas, David Matos",
"['Gonçalo Simões' 'Helena Galhardas' 'David Matos']"
] |
stat.ML cs.LG | null | 1302.4886 | null | null | http://arxiv.org/pdf/1302.4886v3 | 2014-03-05T10:29:18Z | 2013-02-20T12:31:30Z | Fast methods for denoising matrix completion formulations, with
applications to robust seismic data interpolation | Recent SVD-free matrix factorization formulations have enabled rank
minimization for systems with millions of rows and columns, paving the way for
matrix completion in extremely large-scale applications, such as seismic data
interpolation.
In this paper, we consider matrix completion formulations designed to hit a
target data-fitting error level provided by the user, and propose an algorithm
called LR-BPDN that is able to exploit factorized formulations to solve the
corresponding optimization problem. Since practitioners typically have strong
prior knowledge about target error level, this innovation makes it easy to
apply the algorithm in practice, leaving only the factor rank to be determined.
Within the established framework, we propose two extensions that are highly
relevant to solving practical challenges of data interpolation. First, we
propose a weighted extension that allows known subspace information to improve
the results of matrix completion formulations. We show how this weighting can
be used in the context of frequency continuation, an essential aspect to
seismic data interpolation. Second, we propose matrix completion formulations
that are robust to large measurement errors in the available data.
We illustrate the advantages of LR-BPDN on the collaborative filtering
problem using the MovieLens 1M, 10M, and Netflix 100M datasets. Then, we use
the new method, along with its robust and subspace re-weighted extensions, to
obtain high-quality reconstructions for large scale seismic interpolation
problems with real data, even in the presence of data contamination.
| [
"Aleksandr Y. Aravkin and Rajiv Kumar and Hassan Mansour and Ben Recht\n and Felix J. Herrmann",
"['Aleksandr Y. Aravkin' 'Rajiv Kumar' 'Hassan Mansour' 'Ben Recht'\n 'Felix J. Herrmann']"
] |
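As background for the entry above (arXiv:1302.4886), here is a sketch of the SVD-free, factorized view of matrix completion that such methods build on: fit X ~= L R^T on the observed entries only, here with simple alternating ridge regressions. This is not LR-BPDN; the rank, regularization, and all names are illustrative assumptions.

```python
import numpy as np

def als_completion(B, mask, rank=3, n_iter=30, reg=1e-3, seed=0):
    """Alternating least squares on observed entries (mask is boolean)."""
    m, n = B.shape
    rng = np.random.default_rng(seed)
    L = rng.normal(size=(m, rank))
    R = rng.normal(size=(n, rank))
    I = reg * np.eye(rank)
    for _ in range(n_iter):
        for i in range(m):                      # update row i of L from its observed entries
            obs = mask[i]
            L[i] = np.linalg.solve(R[obs].T @ R[obs] + I, R[obs].T @ B[i, obs])
        for j in range(n):                      # update row j of R symmetrically
            obs = mask[:, j]
            R[j] = np.linalg.solve(L[obs].T @ L[obs] + I, L[obs].T @ B[obs, j])
    return L @ R.T

rng = np.random.default_rng(1)
true = rng.normal(size=(30, 3)) @ rng.normal(size=(3, 40))   # rank-3 ground truth
mask = rng.random(size=true.shape) < 0.5                     # observe about half the entries
X = als_completion(mask * true, mask)
print(np.linalg.norm((X - true)[~mask]) / np.linalg.norm(true[~mask]))  # small relative error
```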
stat.ML cs.LG stat.ME | null | 1302.4922 | null | null | http://arxiv.org/pdf/1302.4922v4 | 2013-05-13T13:10:31Z | 2013-02-20T14:53:13Z | Structure Discovery in Nonparametric Regression through Compositional
Kernel Search | Despite its importance, choosing the structural form of the kernel in
nonparametric regression remains a black art. We define a space of kernel
structures which are built compositionally by adding and multiplying a small
number of base kernels. We present a method for searching over this space of
structures which mirrors the scientific discovery process. The learned
structures can often decompose functions into interpretable components and
enable long-range extrapolation on time-series datasets. Our structure search
method outperforms many widely used kernels and kernel combination methods on a
variety of prediction tasks.
| [
"['David Duvenaud' 'James Robert Lloyd' 'Roger Grosse'\n 'Joshua B. Tenenbaum' 'Zoubin Ghahramani']",
"David Duvenaud, James Robert Lloyd, Roger Grosse, Joshua B. Tenenbaum,\n Zoubin Ghahramani"
] |
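The entry above (arXiv:1302.4922) searches over kernels built compositionally by adding and multiplying base kernels. The sketch below shows that composition operation using scikit-learn's Gaussian process kernels; the particular composite, the synthetic data, and the hyperparameters are arbitrary assumptions, and this is not the authors' search procedure.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, WhiteKernel

# A smooth trend plus a locally periodic component, with observation noise:
kernel = (RBF(length_scale=10.0)
          + RBF(length_scale=5.0) * ExpSineSquared(periodicity=1.0)
          + WhiteKernel(noise_level=0.1))

X = np.linspace(0, 10, 100)[:, None]
y = 0.3 * X.ravel() + np.sin(2 * np.pi * X.ravel()) \
    + 0.1 * np.random.default_rng(0).normal(size=100)

gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
print(gp.kernel_)                      # hyperparameters fitted by marginal likelihood
print(gp.predict(np.array([[10.5]])))  # extrapolation beyond the training range
```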
cs.AI cs.LG | null | 1302.4949 | null | null | http://arxiv.org/pdf/1302.4949v1 | 2013-02-20T15:20:41Z | 2013-02-20T15:20:41Z | A Characterization of the Dirichlet Distribution with Application to
Learning Bayesian Networks | We provide a new characterization of the Dirichlet distribution. This
characterization implies that under assumptions made by several previous
authors for learning belief networks, a Dirichlet prior on the parameters is
inevitable.
| [
"Dan Geiger, David Heckerman",
"['Dan Geiger' 'David Heckerman']"
] |
cs.LG cs.AI stat.ML | null | 1302.4964 | null | null | http://arxiv.org/pdf/1302.4964v1 | 2013-02-20T15:22:01Z | 2013-02-20T15:22:01Z | Estimating Continuous Distributions in Bayesian Classifiers | When modeling a probability distribution with a Bayesian network, we are
faced with the problem of how to handle continuous variables. Most previous
work has either solved the problem by discretizing, or assumed that the data
are generated by a single Gaussian. In this paper we abandon the normality
assumption and instead use statistical methods for nonparametric density
estimation. For a naive Bayesian classifier, we present experimental results on
a variety of natural and artificial domains, comparing two methods of density
estimation: assuming normality and modeling each conditional distribution with
a single Gaussian; and using nonparametric kernel density estimation. We
observe large reductions in error on several natural and artificial data sets,
which suggests that kernel estimation is a useful tool for learning Bayesian
models.
| [
"George H. John, Pat Langley",
"['George H. John' 'Pat Langley']"
] |
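The entry above (arXiv:1302.4964) replaces the single-Gaussian assumption in a naive Bayes classifier with nonparametric kernel density estimates. Below is a minimal sketch of that idea, one univariate Gaussian KDE per (class, feature) pair via SciPy; the class name, the synthetic data, and the small numerical floor are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.stats import gaussian_kde

class KDENaiveBayes:
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_ = {c: np.mean(y == c) for c in self.classes_}
        # One univariate KDE per (class, feature) pair -- the naive independence assumption.
        self.kdes_ = {c: [gaussian_kde(X[y == c, j]) for j in range(X.shape[1])]
                      for c in self.classes_}
        return self

    def predict(self, X):
        scores = []
        for c in self.classes_:
            log_lik = sum(np.log(self.kdes_[c][j](X[:, j]) + 1e-300)
                          for j in range(X.shape[1]))
            scores.append(np.log(self.priors_[c]) + log_lik)
        return self.classes_[np.argmax(np.vstack(scores), axis=0)]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(50, 2)), rng.normal(3, 1, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(np.mean(KDENaiveBayes().fit(X, y).predict(X) == y))  # training accuracy near 1.0
```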
cs.CV cs.LG stat.ML | null | 1302.5010 | null | null | http://arxiv.org/pdf/1302.5010v2 | 2014-12-24T00:14:31Z | 2013-02-20T16:09:38Z | Matching Pursuit LASSO Part II: Applications and Sparse Recovery over
Batch Signals | In Part I \cite{TanPMLPart1}, a Matching Pursuit LASSO
({MPL}) algorithm has been presented for solving large-scale sparse recovery
(SR) problems. In this paper, we present a subspace search to further improve
the performance of MPL, and then continue to address another major challenge of
SR -- batch SR with many signals, a consideration which is absent from most
previous $\ell_1$-norm methods. As a result, a batch-mode {MPL} is developed to
vastly speed up sparse recovery of many signals simultaneously. Comprehensive
numerical experiments on compressive sensing and face recognition tasks
demonstrate the superior performance of MPL and BMPL over other methods
considered in this paper, in terms of sparse recovery ability and efficiency.
In particular, BMPL is up to 400 times faster than existing $\ell_1$-norm
methods considered to be state-of-the-art.
| [
"['Mingkui Tan' 'Ivor W. Tsang' 'Li Wang']",
"Mingkui Tan and Ivor W. Tsang and Li Wang"
] |
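For context on the entry above (arXiv:1302.5010), here is a sketch of a standard greedy matching-pursuit baseline, scikit-learn's orthogonal matching pursuit, on a synthetic sparse-recovery instance. It is shown only as a conventional point of comparison, not as the MPL/BMPL algorithms; the problem sizes and names are assumptions.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, d, k = 100, 300, 5                           # measurements, dictionary size, sparsity
A = rng.normal(size=(n, d)) / np.sqrt(n)        # random sensing/dictionary matrix
x_true = np.zeros(d)
support = rng.choice(d, size=k, replace=False)
x_true[support] = rng.normal(size=k)
y = A @ x_true + 0.01 * rng.normal(size=n)      # noisy measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(A, y)
print(sorted(np.flatnonzero(omp.coef_)), sorted(support))  # recovered vs. true support
```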
cs.CV cs.LG | null | 1302.5056 | null | null | http://arxiv.org/pdf/1302.5056v1 | 2013-01-15T18:47:11Z | 2013-01-15T18:47:11Z | Pooling-Invariant Image Feature Learning | Unsupervised dictionary learning has been a key component in state-of-the-art
computer vision recognition architectures. While highly effective methods exist
for patch-based dictionary learning, these methods may learn redundant features
after the pooling stage in a given early vision architecture. In this paper, we
offer a novel dictionary learning scheme to efficiently take into account the
invariance of learned features after the spatial pooling stage. The algorithm
is built on simple clustering, and thus enjoys efficiency and scalability. We
discuss the underlying mechanism that justifies the use of clustering
algorithms, and empirically show that the algorithm finds better dictionaries
than patch-based methods with the same dictionary size.
| [
"['Yangqing Jia' 'Oriol Vinyals' 'Trevor Darrell']",
"Yangqing Jia, Oriol Vinyals, Trevor Darrell"
] |
stat.ML cs.LG | null | 1302.5125 | null | null | http://arxiv.org/pdf/1302.5125v1 | 2013-02-20T21:20:30Z | 2013-02-20T21:20:30Z | High-Dimensional Probability Estimation with Deep Density Models | One of the fundamental problems in machine learning is the estimation of a
probability distribution from data. Many techniques have been proposed to study
the structure of data, most often building around the assumption that
observations lie on a lower-dimensional manifold of high probability. It has
been more difficult, however, to exploit this insight to build explicit,
tractable density models for high-dimensional data. In this paper, we introduce
the deep density model (DDM), a new approach to density estimation. We exploit
insights from deep learning to construct a bijective map to a representation
space, under which the transformation of the distribution of the data is
approximately factorized and has identical and known marginal densities. The
simplicity of the latent distribution under the model allows us to feasibly
explore it, and the invertibility of the map to characterize contraction of
measure across it. This enables us to compute normalized densities for
out-of-sample data. This combination of tractability and flexibility allows us
to tackle a variety of probabilistic tasks on high-dimensional datasets,
including: rapid computation of normalized densities at test-time without
evaluating a partition function; generation of samples without MCMC; and
characterization of the joint entropy of the data.
| [
"Oren Rippel, Ryan Prescott Adams",
"['Oren Rippel' 'Ryan Prescott Adams']"
] |
cs.SI cs.LG | null | 1302.5145 | null | null | http://arxiv.org/pdf/1302.5145v2 | 2013-03-05T03:35:09Z | 2013-02-20T23:15:57Z | Prediction and Clustering in Signed Networks: A Local to Global
Perspective | The study of social networks is a burgeoning research area. However, most
existing work deals with networks that simply encode whether relationships
exist or not. In contrast, relationships in signed networks can be positive
("like", "trust") or negative ("dislike", "distrust"). The theory of social
balance shows that signed networks tend to conform to some local patterns that,
in turn, induce certain global characteristics. In this paper, we exploit both
local as well as global aspects of social balance theory for two fundamental
problems in the analysis of signed networks: sign prediction and clustering.
Motivated by local patterns of social balance, we first propose two families of
sign prediction methods: measures of social imbalance (MOIs), and supervised
learning using high order cycles (HOCs). These methods predict signs of edges
based on triangles and \ell-cycles for relatively small values of \ell.
Interestingly, by examining measures of social imbalance, we show that the
classic Katz measure, which is used widely in unsigned link prediction,
actually has a balance theoretic interpretation when applied to signed
networks. Furthermore, motivated by the global structure of balanced networks,
we propose an effective low rank modeling approach for both sign prediction and
clustering. For the low rank modeling approach, we provide theoretical
performance guarantees via convex relaxations, scale it up to large problem
sizes using a matrix factorization based algorithm, and provide extensive
experimental validation including comparisons with local approaches. Our
experimental results indicate that, by adopting a more global viewpoint of
balance structure, we get significant performance and computational gains in
prediction and clustering tasks on signed networks. Our work therefore
highlights the usefulness of the global aspect of balance theory for the
analysis of signed networks.
| [
"Kai-Yang Chiang, Cho-Jui Hsieh, Nagarajan Natarajan, Ambuj Tewari and\n Inderjit S. Dhillon",
"['Kai-Yang Chiang' 'Cho-Jui Hsieh' 'Nagarajan Natarajan' 'Ambuj Tewari'\n 'Inderjit S. Dhillon']"
] |
cs.LG | null | 1302.5348 | null | null | http://arxiv.org/pdf/1302.5348v3 | 2013-05-31T21:12:47Z | 2013-02-21T17:30:42Z | Graph-based Generalization Bounds for Learning Binary Relations | We investigate the generalizability of learned binary relations: functions
that map pairs of instances to a logical indicator. This problem has
application in numerous areas of machine learning, such as ranking, entity
resolution and link prediction. Our learning framework incorporates an example
labeler that, given a sequence $X$ of $n$ instances and a desired training size
$m$, subsamples $m$ pairs from $X \times X$ without replacement. The challenge
in analyzing this learning scenario is that pairwise combinations of random
variables are inherently dependent, which prevents us from using traditional
learning-theoretic arguments. We present a unified, graph-based analysis, which
allows us to analyze this dependence using well-known graph identities. We are
then able to bound the generalization error of learned binary relations using
Rademacher complexity and algorithmic stability. The rate of uniform
convergence is partially determined by the labeler's subsampling process. We
thus examine how various assumptions about subsampling affect generalization;
under a natural random subsampling process, our bounds guarantee
$\tilde{O}(1/\sqrt{n})$ uniform convergence.
| [
"Ben London and Bert Huang and Lise Getoor",
"['Ben London' 'Bert Huang' 'Lise Getoor']"
] |
cs.LG cs.CV cs.IT math.IT stat.ML | null | 1302.5449 | null | null | http://arxiv.org/pdf/1302.5449v1 | 2013-02-21T22:59:12Z | 2013-02-21T22:59:12Z | Nonparametric Basis Pursuit via Sparse Kernel-based Learning | Signal processing tasks as fundamental as sampling, reconstruction, minimum
mean-square error interpolation and prediction can be viewed under the prism of
reproducing kernel Hilbert spaces. Endowing this vantage point with
contemporary advances in sparsity-aware modeling and processing, promotes the
nonparametric basis pursuit advocated in this paper as the overarching
framework for the confluence of kernel-based learning (KBL) approaches
leveraging sparse linear regression, nuclear-norm regularization, and
dictionary learning. The novel sparse KBL toolbox goes beyond translating
sparse parametric approaches to their nonparametric counterparts, to
incorporate new possibilities such as multi-kernel selection and matrix
smoothing. The impact of sparse KBL to signal processing applications is
illustrated through test cases from cognitive radio sensing, microarray data
imputation, and network traffic prediction.
| [
"['Juan Andres Bazerque' 'Georgios B. Giannakis']",
"Juan Andres Bazerque and Georgios B. Giannakis"
] |
cs.LG | null | 1302.5565 | null | null | http://arxiv.org/pdf/1302.5565v1 | 2013-02-22T12:11:42Z | 2013-02-22T12:11:42Z | The Importance of Clipping in Neurocontrol by Direct Gradient Descent on
the Cost-to-Go Function and in Adaptive Dynamic Programming | In adaptive dynamic programming, neurocontrol and reinforcement learning, the
objective is for an agent to learn to choose actions so as to minimise a total
cost function. In this paper we show that when discretized time is used to
model the motion of the agent, it can be very important to do "clipping" on the
motion of the agent in the final time step of the trajectory. By clipping we
mean that the final time step of the trajectory is to be truncated such that
the agent stops exactly at the first terminal state reached, and no distance
further. We demonstrate that when clipping is omitted, learning performance can
fail to reach the optimum; and when clipping is done properly, learning
performance can improve significantly.
The clipping problem we describe affects algorithms which use explicit
derivatives of the model functions of the environment to calculate a learning
gradient. These include Backpropagation Through Time for Control, and methods
based on Dual Heuristic Dynamic Programming. However the clipping problem does
not significantly affect methods based on Heuristic Dynamic Programming,
Temporal Differences or Policy Gradient Learning algorithms. Similarly, the
clipping problem does not affect fixed-length finite-horizon problems.
| [
"['Michael Fairbank']",
"Michael Fairbank"
] |
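A scalar sketch of the "clipping" idea described above: the final time step is truncated so the trajectory stops exactly at the terminal state. The one-dimensional dynamics and function names are illustrative assumptions, not the paper's setting.

```python
def step_with_clipping(x, v, dt, x_terminal):
    """Advance state x by velocity v for one time step of length dt,
    truncating the step so the trajectory stops exactly at the terminal
    boundary x_terminal (scalar illustration of the clipping idea)."""
    x_next = x + v * dt
    crosses = (x < x_terminal <= x_next) or (x_next <= x_terminal < x)
    if crosses and v != 0:
        dt_clipped = (x_terminal - x) / v   # fractional final step
        return x_terminal, dt_clipped, True
    return x_next, dt, False

# Example: the unclipped step would overshoot the terminal state at x = 1.0.
print(step_with_clipping(x=0.9, v=1.0, dt=0.2, x_terminal=1.0))
# -> (1.0, 0.1, True): the final step is truncated to length 0.1.
```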
stat.ML cs.LG | null | 1302.5608 | null | null | http://arxiv.org/pdf/1302.5608v1 | 2013-02-22T14:36:59Z | 2013-02-22T14:36:59Z | Accelerated Linear SVM Training with Adaptive Variable Selection
Frequencies | Support vector machine (SVM) training has been an active research area since the
dawn of the method. In recent years there has been increasing interest in
specialized solvers for the important case of linear models. The algorithm
presented by Hsieh et al., probably best known under the name of the
"liblinear" implementation, marks a major breakthrough. The method is analog to
established dual decomposition algorithms for training of non-linear SVMs, but
with greatly reduced computational complexity per update step. This comes at
the cost of no longer keeping track of the gradient of the objective, which
precludes the use of highly developed working set selection algorithms.
We present an algorithmic improvement to this method. We replace uniform
working set selection with an online adaptation of selection frequencies. The
adaptation criterion is inspired by modern second order working set selection
methods. The same mechanism replaces the shrinking heuristic. This novel
technique speeds up training in some cases by more than an order of magnitude.
| [
"Tobias Glasmachers and \\\"Ur\\\"un Dogan",
"['Tobias Glasmachers' 'Ürün Dogan']"
] |
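A hedged sketch of dual coordinate descent for a linear L1-loss SVM with non-uniform coordinate sampling. The single-coordinate update is the standard liblinear-style step; the frequency-adaptation rule (boost a coordinate's preference when its update makes progress, decay it otherwise) is an illustrative stand-in for the paper's second-order-inspired criterion.

```python
import numpy as np

def dcd_adaptive(X, y, C=1.0, epochs=10, seed=0):
    """Dual coordinate descent for a linear L1-loss SVM with non-uniform
    coordinate sampling. The preference-adaptation rule below is an
    assumed, simplified version of adaptive selection frequencies."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    q_diag = np.einsum("ij,ij->i", X, X) + 1e-12   # diagonal of Q
    pref = np.ones(n)                              # selection preferences
    for _ in range(epochs):
        probs = pref / pref.sum()
        for i in rng.choice(n, size=n, p=probs):
            grad = y[i] * (w @ X[i]) - 1.0
            new_alpha = np.clip(alpha[i] - grad / q_diag[i], 0.0, C)
            delta = new_alpha - alpha[i]
            w += delta * y[i] * X[i]
            alpha[i] = new_alpha
            # Adapt the sampling frequency to the observed progress.
            pref[i] = np.clip(pref[i] * (1.2 if abs(delta) > 1e-6 else 0.8),
                              0.05, 20.0)
    return w

# Toy usage: two separable clusters with labels in {-1, +1}.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(2, 1, (20, 2)), rng.normal(-2, 1, (20, 2))])
y = np.array([1.0] * 20 + [-1.0] * 20)
print(dcd_adaptive(X, y))
```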
cs.LG stat.ML | 10.1109/TSP.2014.2298839 | 1302.5729 | null | null | http://arxiv.org/abs/1302.5729v3 | 2014-01-03T15:48:14Z | 2013-02-22T22:36:08Z | Sparse Signal Estimation by Maximally Sparse Convex Optimization | This paper addresses the problem of sparsity penalized least squares for
applications in sparse signal processing, e.g. sparse deconvolution. This paper
aims to induce sparsity more strongly than L1 norm regularization, while
avoiding non-convex optimization. For this purpose, this paper describes the
design and use of non-convex penalty functions (regularizers) constrained so as
to ensure the convexity of the total cost function, F, to be minimized. The
method is based on parametric penalty functions, the parameters of which are
constrained to ensure convexity of F. It is shown that optimal parameters can
be obtained by semidefinite programming (SDP). This maximally sparse convex
(MSC) approach yields maximally non-convex sparsity-inducing penalty functions
constrained such that the total cost function, F, is convex. It is demonstrated
that iterative MSC (IMSC) can yield solutions substantially more sparse than
the standard convex sparsity-inducing approach, i.e., L1 norm minimization.
| [
"Ivan W. Selesnick and Ilker Bayram",
"['Ivan W. Selesnick' 'Ilker Bayram']"
] |
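A scalar illustration of the design principle (a maximally non-convex penalty constrained so the total cost stays convex), using the logarithmic penalty as an assumed example; the paper's general construction selects the penalty parameters via SDP.

```latex
% Scalar denoising illustration (assumed example penalty; the paper's
% general construction chooses penalty parameters by SDP):
\[
  F(x) = \tfrac{1}{2}(y - x)^2 + \lambda\,\phi(x;a),
  \qquad
  \phi(x;a) = \tfrac{1}{a}\log(1 + a|x|),\ a > 0 .
\]
% Since \phi''(x) = -a/(1+a|x|)^2 \ge -a for x \ne 0, we get
% F''(x) \ge 1 - \lambda a, so F is convex whenever 0 < a \le 1/\lambda:
% the penalty is made as non-convex as possible (largest a) subject to
% keeping the total cost F convex.
```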
cs.LG | null | 1302.5797 | null | null | http://arxiv.org/pdf/1302.5797v1 | 2013-02-23T13:33:09Z | 2013-02-23T13:33:09Z | Prediction by Random-Walk Perturbation | We propose a version of the follow-the-perturbed-leader online prediction
algorithm in which the cumulative losses are perturbed by independent symmetric
random walks. The forecaster is shown to achieve an expected regret of the
optimal order O(sqrt(n log N)) where n is the time horizon and N is the number
of experts. More importantly, it is shown that the forecaster changes its
prediction at most O(sqrt(n log N)) times, in expectation. We also extend the
analysis to online combinatorial optimization and show that even in this more
general setting, the forecaster rarely switches between experts while having a
regret of near-optimal order.
| [
"Luc Devroye, G\\'abor Lugosi, Gergely Neu",
"['Luc Devroye' 'Gábor Lugosi' 'Gergely Neu']"
] |
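A minimal sketch of follow-the-perturbed-leader in which the cumulative losses are perturbed by independent symmetric random walks. The ±1 increment law and the toy data are illustrative assumptions.

```python
import numpy as np

def fpl_random_walk(loss_matrix, seed=0):
    """Follow-the-perturbed-leader where each expert's cumulative loss is
    perturbed by an independent symmetric random walk (here, +/-1 steps).
    Returns the sequence of chosen experts."""
    rng = np.random.default_rng(seed)
    n_rounds, n_experts = loss_matrix.shape
    cum_loss = np.zeros(n_experts)
    walk = np.zeros(n_experts)
    choices = []
    for t in range(n_rounds):
        walk += rng.choice([-1.0, 1.0], size=n_experts)   # random-walk step
        choices.append(int(np.argmin(cum_loss + walk)))    # perturbed leader
        cum_loss += loss_matrix[t]
    return choices

# Toy usage: 3 experts over 1000 rounds; expert 0 is slightly better.
losses = np.random.default_rng(1).random((1000, 3))
losses[:, 0] *= 0.9
picks = fpl_random_walk(losses)
print("switches:", sum(p != q for p, q in zip(picks, picks[1:])))
```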
cs.LG math.ST stat.ML stat.TH | null | 1302.6009 | null | null | http://arxiv.org/pdf/1302.6009v1 | 2013-02-25T07:20:19Z | 2013-02-25T07:20:19Z | On learning parametric-output HMMs | We present a novel approach for learning an HMM whose outputs are distributed
according to a parametric family. This is done by {\em decoupling} the learning
task into two steps: first estimating the output parameters, and then
estimating the hidden states transition probabilities. The first step is
accomplished by fitting a mixture model to the output stationary distribution.
Given the parameters of this mixture model, the second step is formulated as
the solution of an easily solvable convex quadratic program. We provide an
error analysis for the estimated transition probabilities and show they are
robust to small perturbations in the estimates of the mixture parameters.
Finally, we support our analysis with some encouraging empirical results.
| [
"Aryeh Kontorovich, Boaz Nadler, Roi Weiss",
"['Aryeh Kontorovich' 'Boaz Nadler' 'Roi Weiss']"
] |
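A simplified sketch of the two-step decoupled approach: first fit a mixture to the stationary output distribution, then estimate the transition matrix from the fitted responsibilities. The soft-count step (2) below is an illustrative stand-in for the paper's convex quadratic program, not the method itself.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def decoupled_hmm_estimate(obs, n_states=2, seed=0):
    """(1) Fit a mixture model to the pooled outputs (stationary
    distribution); (2) estimate transitions from consecutive soft
    responsibilities. Step (2) is a simplified stand-in for the QP."""
    gmm = GaussianMixture(n_components=n_states, random_state=seed)
    gmm.fit(obs.reshape(-1, 1))
    resp = gmm.predict_proba(obs.reshape(-1, 1))      # T x K responsibilities
    counts = resp[:-1].T @ resp[1:]                   # soft transition counts
    trans = counts / counts.sum(axis=1, keepdims=True)
    return gmm.means_.ravel(), trans

rng = np.random.default_rng(2)
obs = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
means, trans = decoupled_hmm_estimate(obs)
print(means, trans, sep="\n")
```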
cs.SD cs.LG stat.ML | null | 1302.6194 | null | null | http://arxiv.org/pdf/1302.6194v1 | 2013-02-25T18:56:49Z | 2013-02-25T18:56:49Z | Phoneme discrimination using $KS$-algebra II | $KS$-algebra consists of expressions constructed with four kinds of operations,
the minimum, maximum, difference and additively homogeneous generalized means.
Five families of $Z$-classifiers are investigated on binary classification
tasks between English phonemes. It is shown that the classifiers are able to
reflect well-known formant characteristics of vowels, while having very small
Kolmogorov complexity.
| [
"['Ondrej Such' 'Lenka Mackovicova']",
"Ondrej Such and Lenka Mackovicova"
] |
cs.NE cs.LG | 10.5120/3913-5505 | 1302.6210 | null | null | http://arxiv.org/abs/1302.6210v1 | 2013-02-25T20:09:19Z | 2013-02-25T20:09:19Z | A Homogeneous Ensemble of Artificial Neural Networks for Time Series
Forecasting | Enhancing the robustness and accuracy of time series forecasting models is an
active area of research. Recently, Artificial Neural Networks (ANNs) have found
extensive applications in many practical forecasting problems. However, the
standard backpropagation ANN training algorithm has some critical issues, e.g.
a slow convergence rate, frequent convergence to local minima on complex error
surfaces, and the lack of principled methods for selecting training parameters.
To overcome these drawbacks, various improved training methods have been
developed in the literature, but none of them can be guaranteed to be the best
for all problems. In this paper, we propose a novel weighted ensemble
scheme which intelligently combines multiple training algorithms to increase
the ANN forecast accuracies. The weight for each training algorithm is
determined from the performance of the corresponding ANN model on the
validation dataset. Experimental results on four important time series show
that our proposed technique reduces the aforementioned shortcomings of
individual ANN training algorithms to a great extent. It also achieves
significantly better forecast accuracies than two other popular statistical
models.
| [
"['Ratnadip Adhikari' 'R. K. Agrawal']",
"Ratnadip Adhikari, R. K. Agrawal"
] |
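A compact sketch of a validation-weighted ensemble in the spirit of the abstract above. The use of scikit-learn MLPs trained with different solvers as ensemble members, and inverse validation MSE as the weighting rule, are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def weighted_ensemble_forecast(X_tr, y_tr, X_val, y_val, X_test, seed=0):
    """Combine networks trained with different algorithms, weighting each by
    its validation accuracy (inverse validation MSE, normalized to sum to 1).
    The solver list is illustrative, standing in for the paper's set of
    ANN training algorithms."""
    solvers = ["lbfgs", "adam", "sgd"]
    models, weights = [], []
    for s in solvers:
        m = MLPRegressor(hidden_layer_sizes=(10,), solver=s,
                         max_iter=2000, random_state=seed).fit(X_tr, y_tr)
        val_mse = np.mean((m.predict(X_val) - y_val) ** 2)
        models.append(m)
        weights.append(1.0 / (val_mse + 1e-12))
    weights = np.array(weights) / np.sum(weights)
    preds = np.column_stack([m.predict(X_test) for m in models])
    return preds @ weights
```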
cs.IT cs.LG math.IT | null | 1302.6315 | null | null | http://arxiv.org/pdf/1302.6315v1 | 2013-02-26T05:17:55Z | 2013-02-26T05:17:55Z | Rate-Distortion Bounds for an Epsilon-Insensitive Distortion Measure | Direct evaluation of the rate-distortion function has rarely been achieved
when it is strictly greater than its Shannon lower bound. In this paper, we
consider the rate-distortion function for the distortion measure defined by an
epsilon-insensitive loss function. We first present the Shannon lower bound
applicable to any source distribution with finite differential entropy. Then,
focusing on the Laplacian and Gaussian sources, we prove that the
rate-distortion functions of these sources are strictly greater than their
Shannon lower bounds and obtain analytically evaluable upper bounds for the
rate-distortion functions. The small-distortion limit and numerical evaluation of
the bounds suggest that the Shannon lower bound provides a good approximation
to the rate-distortion function for the epsilon-insensitive distortion measure.
| [
"Kazuho Watanabe",
"['Kazuho Watanabe']"
] |
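For reference, the generic Shannon lower bound for a difference distortion measure, with the epsilon-insensitive loss written out; the evaluations for Laplacian and Gaussian sources are the paper's contribution and are not reproduced here.

```latex
% Generic Shannon lower bound for a difference distortion measure
% d(x, \hat{x}) = \rho(x - \hat{x}); the epsilon-insensitive case takes
% \rho(z) = \max(|z| - \epsilon, 0):
\[
  R(D) \;\ge\; R_{\mathrm{SLB}}(D)
  \;=\; h(X) \;-\; \max_{p_Z:\ \mathbb{E}[\rho(Z)] \le D} h(Z).
\]
```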
stat.ME cs.LG | null | 1302.6390 | null | null | http://arxiv.org/pdf/1302.6390v1 | 2013-02-26T10:50:38Z | 2013-02-26T10:50:38Z | The adaptive Gril estimator with a diverging number of parameters | We consider the problem of variable selection and estimation in the linear
regression model in situations where the number of parameters diverges with the
sample size. We propose the adaptive Generalized Ridge-Lasso (\mbox{AdaGril}),
which is an extension of the adaptive Elastic Net. AdaGril incorporates
information redundancy among correlated variables for model selection and
estimation. It combines the strengths of the quadratic regularization and the
adaptively weighted Lasso shrinkage. In this paper, we highlight the grouped
selection property for AdaCnet method (one type of AdaGril) in the equal
correlation case. Under weak conditions, we establish the oracle property of
AdaGril, which ensures optimal large-sample performance when the dimension is high.
Consequently, it achieves both goals: handling the problem of collinearity in
high dimensions and enjoying the oracle property. Moreover, we show that AdaGril
estimator achieves a Sparsity Inequality, i.e., a bound in terms of the number
of non-zero components of the 'true' regression coefficient. This bound is
obtained under a similar weak Restricted Eigenvalue (RE) condition used for
Lasso. Simulations studies show that some particular cases of AdaGril
outperform its competitors.
| [
"Mohammed El Anbari and Abdallah Mkhadri",
"['Mohammed El Anbari' 'Abdallah Mkhadri']"
] |
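A hedged sketch of the adaptive Elastic Net (the special case underlying AdaGril-type estimators), implemented by rescaling columns with first-stage ridge-based weights before a standard elastic-net fit; this is an illustration of the adaptive-weighting idea, not the AdaGril estimator itself.

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Ridge

def adaptive_elastic_net(X, y, alpha=0.1, l1_ratio=0.5, gamma=1.0):
    """Two-stage adaptive elastic-net sketch: a first-stage ridge fit
    supplies adaptive weights, applied by rescaling the design columns
    before a standard elastic-net fit (so both penalty terms become
    adaptively weighted in this simplified version)."""
    beta_init = Ridge(alpha=1.0).fit(X, y).coef_
    w = 1.0 / (np.abs(beta_init) ** gamma + 1e-8)   # adaptive weights
    X_scaled = X / w                                 # column-wise rescaling
    model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio).fit(X_scaled, y)
    return model.coef_ / w                           # undo the rescaling
```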
cs.LO cs.LG | null | 1302.6421 | null | null | http://arxiv.org/pdf/1302.6421v3 | 2013-05-24T08:01:20Z | 2013-02-26T13:02:36Z | ML4PG in Computer Algebra verification | ML4PG is a machine-learning extension that provides statistical proof hints
during the process of Coq/SSReflect proof development. In this paper, we use
ML4PG to find proof patterns in the CoqEAL library -- a library that was
devised to verify the correctness of Computer Algebra algorithms. In
particular, we use ML4PG to help us in the formalisation of an efficient
algorithm to compute the inverse of triangular matrices.
| [
"J\\'onathan Heras and Ekaterina Komendantskaya",
"['Jónathan Heras' 'Ekaterina Komendantskaya']"
] |
stat.ML cs.LG | null | 1302.6452 | null | null | http://arxiv.org/pdf/1302.6452v1 | 2013-02-26T15:16:32Z | 2013-02-26T15:16:32Z | A Conformal Prediction Approach to Explore Functional Data | This paper applies conformal prediction techniques to compute simultaneous
prediction bands and clustering trees for functional data. These tools can be
used to detect outliers and clusters. Both our prediction bands and clustering
trees provide prediction sets for the underlying stochastic process with a
guaranteed finite sample behavior, under no distributional assumptions. The
prediction sets are also informative in that they correspond to the high
density region of the underlying process. While ordinary conformal prediction
has high computational cost for functional data, we use the inductive conformal
predictor, together with several novel choices of conformity scores, to
simplify the computation. Our methods are illustrated on some real data
examples.
| [
"Jing Lei, Alessandro Rinaldo, Larry Wasserman",
"['Jing Lei' 'Alessandro Rinaldo' 'Larry Wasserman']"
] |
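A minimal split (inductive) conformal band for functional data observed on a common grid, using the pointwise mean as the fitted curve and sup-norm residuals as conformity scores; the sup-norm score is one simple choice, and the paper studies richer, more informative conformity scores.

```python
import numpy as np

def split_conformal_band(curves, alpha=0.1, seed=0):
    """Split conformal band: fit on one half, use sup-norm residuals on the
    other half as conformity scores, widen the fit by their quantile."""
    rng = np.random.default_rng(seed)
    n = curves.shape[0]
    idx = rng.permutation(n)
    train, calib = idx[: n // 2], idx[n // 2 :]
    center = curves[train].mean(axis=0)                      # fitted curve
    scores = np.max(np.abs(curves[calib] - center), axis=1)  # sup-norm scores
    k = int(np.ceil((len(calib) + 1) * (1 - alpha)))
    q = np.sort(scores)[min(k, len(calib)) - 1]              # conformal quantile
    return center - q, center + q                            # lower/upper band

# Toy usage: 200 noisy sine curves observed on a common grid of 50 points.
t = np.linspace(0, 1, 50)
noise = 0.3 * np.random.default_rng(1).standard_normal((200, 50))
lo, hi = split_conformal_band(np.sin(2 * np.pi * t) + noise)
print(hi[:5] - lo[:5])   # constant band width under the sup-norm score
```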
cs.LG | null | 1302.6523 | null | null | http://arxiv.org/pdf/1302.6523v1 | 2013-02-26T18:18:35Z | 2013-02-26T18:18:35Z | Sparse Frequency Analysis with Sparse-Derivative Instantaneous Amplitude
and Phase Functions | This paper addresses the problem of expressing a signal as a sum of frequency
components (sinusoids) wherein each sinusoid may exhibit abrupt changes in its
amplitude and/or phase. The Fourier transform of a narrow-band signal, with a
discontinuous amplitude and/or phase function, exhibits spectral and temporal
spreading. The proposed method aims to avoid such spreading by explicitly
modeling the signal of interest as a sum of sinusoids with time-varying
amplitudes. So as to accommodate abrupt changes, it is further assumed that the
amplitude/phase functions are approximately piecewise constant (i.e., their
time-derivatives are sparse). The proposed method is based on a convex
variational (optimization) approach wherein the total variation (TV) of the
amplitude functions is regularized subject to a perfect (or approximate)
reconstruction constraint. A computationally efficient algorithm is derived
based on convex optimization techniques. The proposed technique can be used to
perform band-pass filtering that is relatively insensitive to narrow-band
amplitude/phase jumps present in data, which normally pose a challenge (due to
transients, leakage, etc.). The method is illustrated using both synthetic
signals and human EEG data for the purpose of band-pass filtering and the
estimation of phase synchrony indexes.
| [
"['Yin Ding' 'Ivan W. Selesnick']",
"Yin Ding and Ivan W. Selesnick"
] |
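A compact convex sketch of the model described above: time-varying in-phase/quadrature amplitudes with a total-variation penalty, written with cvxpy for clarity. The paper derives a dedicated efficient algorithm rather than calling a generic solver, and the candidate-frequency grid here is an assumption.

```python
import numpy as np
import cvxpy as cp

def tv_sparse_frequency(y, freqs, t, lam=1.0):
    """Estimate time-varying amplitudes a_k(t), b_k(t) for the candidate
    frequencies by penalizing the total variation of the amplitude
    functions subject to approximate reconstruction of the signal y."""
    n, K = len(y), len(freqs)
    A = cp.Variable((n, K))
    B = cp.Variable((n, K))
    cosm = np.cos(2 * np.pi * np.outer(t, freqs))
    sinm = np.sin(2 * np.pi * np.outer(t, freqs))
    recon = cp.sum(cp.multiply(A, cosm) + cp.multiply(B, sinm), axis=1)
    tv = cp.sum(cp.abs(cp.diff(A, axis=0))) + cp.sum(cp.abs(cp.diff(B, axis=0)))
    prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(y - recon) + lam * tv))
    prob.solve()
    return A.value, B.value
```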
null | null | 1302.6584 | null | null | http://arxiv.org/pdf/1302.6584v3 | 2013-07-18T00:29:57Z | 2013-02-26T20:58:59Z | Variational Algorithms for Marginal MAP | The marginal maximum a posteriori probability (MAP) estimation problem, which calculates the mode of the marginal posterior distribution of a subset of variables with the remaining variables marginalized, is an important inference problem in many models, such as those with hidden variables or uncertain parameters. Unfortunately, marginal MAP can be NP-hard even on trees, and has attracted less attention in the literature compared to the joint MAP (maximization) and marginalization problems. We derive a general dual representation for marginal MAP that naturally integrates the marginalization and maximization operations into a joint variational optimization problem, making it possible to easily extend most or all variational-based algorithms to marginal MAP. In particular, we derive a set of "mixed-product" message passing algorithms for marginal MAP, whose form is a hybrid of max-product, sum-product and a novel "argmax-product" message updates. We also derive a class of convergent algorithms based on proximal point methods, including one that transforms the marginal MAP problem into a sequence of standard marginalization problems. Theoretically, we provide guarantees under which our algorithms give globally or locally optimal solutions, and provide novel upper bounds on the optimal objectives. Empirically, we demonstrate that our algorithms significantly outperform the existing approaches, including a state-of-the-art algorithm based on local search methods. | [
"['Qiang Liu' 'Alexander Ihler']"
] |
cs.LG stat.ML | null | 1302.6613 | null | null | http://arxiv.org/pdf/1302.6613v1 | 2013-02-26T22:18:55Z | 2013-02-26T22:18:55Z | An Introductory Study on Time Series Modeling and Forecasting | Time series modeling and forecasting has fundamental importance to various
practical domains, and it has consequently been an active area of research for
several years. Many important models have been proposed in the literature for
improving the accuracy and effectiveness of time series
forecasting. The aim of this dissertation work is to present a concise
description of some popular time series forecasting models used in practice,
with their salient features. In this thesis, we have described three important
classes of time series models, viz. the stochastic, neural networks and SVM
based models, together with their inherent forecasting strengths and
weaknesses. We have also discussed the basic issues related to time
series modeling, such as stationarity, parsimony, overfitting, etc. Our
discussion about different time series models is supported by giving the
experimental forecast results, performed on six real time series datasets.
While fitting a model to a dataset, special care is taken to select the most
parsimonious one. To evaluate forecast accuracy as well as to compare among
different models fitted to a time series, we have used five performance
measures, viz. MSE, MAD, RMSE, MAPE and Theil's U-statistic. For each of the
six datasets, we have shown the obtained forecast diagram which graphically
depicts the closeness between the original and forecasted observations. To lend
authenticity as well as clarity to our discussion of time series modeling and
forecasting, we have drawn on various published research works from reputed
journals and some standard books.
| [
"['Ratnadip Adhikari' 'R. K. Agrawal']",
"Ratnadip Adhikari, R. K. Agrawal"
] |
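The five accuracy measures named in the abstract above can be computed as follows; Theil's U is given in its common U1 form, which is one of several variants and is an assumption here.

```python
import numpy as np

def forecast_accuracy(actual, forecast):
    """MSE, MAD, RMSE, MAPE and Theil's U (U1 form) for a forecast."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    err = actual - forecast
    mse = np.mean(err ** 2)
    mad = np.mean(np.abs(err))                 # mean absolute deviation
    rmse = np.sqrt(mse)
    mape = 100.0 * np.mean(np.abs(err / actual))
    theils_u = rmse / (np.sqrt(np.mean(actual ** 2)) +
                       np.sqrt(np.mean(forecast ** 2)))
    return {"MSE": mse, "MAD": mad, "RMSE": rmse,
            "MAPE": mape, "TheilU": theils_u}

print(forecast_accuracy([100, 110, 120], [102, 108, 125]))
```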