title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Auto-WEKA: Combined Selection and Hyperparameter Optimization of
Classification Algorithms | cs.LG | Many different machine learning algorithms exist; taking into account each
algorithm's hyperparameters, there is a staggeringly large number of possible
alternatives overall. We consider the problem of simultaneously selecting a
learning algorithm and setting its hyperparameters, going beyond previous work
that addresses these issues in isolation. We show that this problem can be
addressed by a fully automated approach, leveraging recent innovations in
Bayesian optimization. Specifically, we consider a wide range of feature
selection techniques (combining 3 search and 8 evaluator methods) and all
classification approaches implemented in WEKA, spanning 2 ensemble methods, 10
meta-methods, 27 base classifiers, and hyperparameter settings for each
classifier. On each of 21 popular datasets from the UCI repository, the KDD Cup
09, variants of the MNIST dataset and CIFAR-10, we show classification
performance often much better than using standard selection/hyperparameter
optimization methods. We hope that our approach will help non-expert users to
more effectively identify machine learning algorithms and hyperparameter
settings appropriate to their applications, and hence to achieve improved
performance.
| Chris Thornton and Frank Hutter and Holger H. Hoos and Kevin
Leyton-Brown | null | 1208.3719 | null | null |
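A minimal sketch of the combined algorithm selection and hyperparameter optimization (CASH) problem described above, assuming scikit-learn and substituting plain random search for the SMAC-based Bayesian optimization Auto-WEKA actually uses; the candidate algorithms and grids are illustrative assumptions, not WEKA's:

```python
# CASH sketch: jointly pick a learning algorithm and its hyperparameters.
# Random search stands in for Bayesian optimization here.
import random
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
search_space = [
    (SVC, {"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.1]}),
    (RandomForestClassifier, {"n_estimators": [50, 200], "max_depth": [3, None]}),
    (LogisticRegression, {"C": [0.01, 0.1, 1], "max_iter": [1000]}),
]

random.seed(0)
best = (None, None, -1.0)
for _ in range(30):  # one joint budget over (algorithm, hyperparameters)
    cls, grid = random.choice(search_space)                  # select the algorithm
    params = {k: random.choice(v) for k, v in grid.items()}  # and its settings
    score = cross_val_score(cls(**params), X, y, cv=5).mean()
    if score > best[2]:
        best = (cls.__name__, params, score)
print(best)
```

A Bayesian optimizer would replace the uniform `random.choice` draws with a model of configuration quality, but the joint search space and objective are the same.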
Online Learning with Predictable Sequences | stat.ML cs.LG | We present methods for online linear optimization that take advantage of
benign (as opposed to worst-case) sequences. Specifically, if the sequence
encountered by the learner is described well by a known "predictable process",
the algorithms presented enjoy tighter bounds as compared to the typical worst
case bounds. Additionally, the methods achieve the usual worst-case regret
bounds if the sequence is not benign. Our approach can be seen as a way of
adding prior knowledge about the sequence within the paradigm of online
learning. The setting is shown to encompass partial and side information.
Variance and path-length bounds can be seen as particular examples of online
learning with simple predictable sequences.
We further extend our methods and results to include competing with a set of
possible predictable processes (models), that is "learning" the predictable
process itself concurrently with using it to obtain better regret guarantees.
We show that such model selection is possible under various assumptions on the
available feedback. Our results suggest a promising direction of further
research with potential applications to stock market and time series
prediction.
| Alexander Rakhlin and Karthik Sridharan | null | 1208.3728 | null | null |
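A minimal sketch of this idea for online linear optimization over the Euclidean unit ball, assuming NumPy: optimistic gradient descent plays a point shifted by a prediction M_t of the coming gradient and reduces to plain online gradient descent when M_t = 0. The drifting gradient sequence below is an illustrative assumption:

```python
# Optimistic online gradient descent with a predictable gradient sequence M_t.
import numpy as np

def proj_ball(v):
    """Project onto the Euclidean unit ball (the decision set)."""
    n = np.linalg.norm(v)
    return v if n <= 1.0 else v / n

def optimistic_ogd(gradients, predictions, eta=0.5):
    x_half = np.zeros(gradients.shape[1])     # secondary ("lazy") iterate
    plays = []
    for g, M in zip(gradients, predictions):
        x = proj_ball(x_half - eta * M)       # act using the predicted gradient
        plays.append(x)
        x_half = proj_ball(x_half - eta * g)  # update with the observed gradient
    return np.array(plays)

# Slowly drifting linear-loss gradients are well predicted by the previous
# gradient (M_t = g_{t-1}), the setting behind path-length bounds.
rng = np.random.default_rng(0)
g_seq = np.cumsum(0.01 * rng.standard_normal((100, 2)), axis=0)
M_seq = np.vstack([np.zeros(2), g_seq[:-1]])
plays = optimistic_ogd(g_seq, M_seq)
print(sum(g @ x for g, x in zip(g_seq, plays)))  # cumulative linear loss
```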
Multiple graph regularized protein domain ranking | cs.LG cs.CE cs.IR q-bio.QM | Background: Protein domain ranking is a fundamental task in structural
biology. Most protein domain ranking methods rely on the pairwise comparison of
protein domains while neglecting the global manifold structure of the protein
domain database. Recently, graph regularized ranking that exploits the global
structure of the graph defined by the pairwise similarities has been proposed.
However, the existing graph regularized ranking methods are very sensitive to
the choice of the graph model and parameters, and this remains a difficult
problem for most of the protein domain ranking methods.
Results: To tackle this problem, we have developed the Multiple Graph
regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to
regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold
of the protein domain distribution by combining multiple initial graphs for the
regularization. Graph weights are learned jointly and automatically with the
ranking scores by alternately minimizing an objective function in an
iterative algorithm. Experimental results on a subset of the ASTRAL SCOP
protein domain database demonstrate that MultiG-Rank achieves a better ranking
performance than single graph regularized ranking methods and pairwise
similarity based ranking methods.
Conclusion: The problem of graph model and parameter selection in graph
regularized protein domain ranking can be solved effectively by combining
multiple graphs. This aspect of generalization introduces a new frontier in
applying multiple graphs to solving protein domain ranking applications.
| Jim Jing-Yan Wang, Halima Bensmail and Xin Gao | 10.1186/1471-2105-13-307 | 1208.3779 | null | null |
Discriminative Sparse Coding on Multi-Manifold for Data Representation
and Classification | cs.CV cs.LG stat.ML | Sparse coding has been popularly used as an effective data representation
method in various applications, such as computer vision, medical imaging and
bioinformatics, etc. However, the conventional sparse coding algorithms and their
manifold regularized variants (graph sparse coding and Laplacian sparse
coding) learn the codebook and codes in an unsupervised manner and neglect the
class information available in the training set. To address this problem, in
this paper we propose a novel discriminative sparse coding method based on
multi-manifold, by learning discriminative class-conditional codebooks and
sparse codes from both data feature space and class labels. First, the entire
training set is partitioned into multiple manifolds according to the class
labels. Then, we formulate the sparse coding as a manifold-manifold matching
problem and learn class-conditional codebooks and codes to maximize the
manifold margins of different classes. Lastly, we present a data point-manifold
matching error based strategy to classify the unlabeled data point.
Experimental results on somatic mutation identification and breast tumor
classification in ultrasonic images demonstrate the efficacy of the
proposed data representation and classification approach.
| Jing-Yan Wang | null | 1208.3839 | null | null |
Adaptive Graph via Multiple Kernel Learning for Nonnegative Matrix
Factorization | cs.LG cs.CV stat.ML | Nonnegative Matrix Factorization (NMF) has been continuously evolving in
several areas such as pattern recognition and information retrieval. It
factorizes a matrix into a product of two low-rank non-negative matrices that
define a parts-based, linear representation of nonnegative data.
Recently, Graph regularized NMF (GrNMF) was proposed to find a compact
representation, which uncovers the hidden semantics and simultaneously respects
the intrinsic geometric structure. In GrNMF, an affinity graph is constructed
from the original data space to encode the geometrical information. In this
paper, we propose a novel idea which engages a Multiple Kernel Learning
approach into refining the graph structure that reflects the factorization of
the matrix and the new data space. The GrNMF is improved by utilizing the graph
refined by the kernel learning, and then a novel kernel learning method is
introduced under the GrNMF framework. Our approach shows encouraging results
in comparison to state-of-the-art clustering
algorithms such as NMF, GrNMF, and SVD.
| Jing-Yan Wang and Mustafa AbdulJabbar | null | 1208.3845 | null | null |
Performance Tuning Of J48 Algorithm For Prediction Of Soil Fertility | cs.LG cs.DB cs.PF stat.ML | Data mining involves the systematic analysis of large data sets, and data
mining in agricultural soil datasets is an exciting and modern research area. The
productive capacity of a soil depends on soil fertility. Achieving and
maintaining appropriate levels of soil fertility is of utmost importance if
agricultural land is to remain capable of nourishing crop production. In this
research, the steps for building a predictive model of soil fertility are
explained.
This paper aims at predicting the soil fertility class using decision tree
algorithms in data mining. Further, it focuses on performance tuning of the J48
decision tree algorithm with the help of meta-techniques such as attribute
selection and boosting.
| Jay Gholap | null | 1208.3943 | null | null |
Semi-supervised Clustering Ensemble by Voting | cs.LG stat.ML | Clustering ensemble is one of the most recent advances in unsupervised
learning. It aims to combine the clustering results obtained using different
algorithms, or from different runs of the same clustering algorithm on the same
data set; this is accomplished using a consensus function. The efficiency
and accuracy of this approach have been proven in many works in the literature. In the
first part of this paper we make a comparison among current approaches to
clustering ensemble in literature. All of these approaches consist of two main
steps: the ensemble generation and consensus function. In the second part of
the paper, we suggest engaging supervision in the clustering ensemble procedure
to get more enhancements on the clustering results. Supervision can be applied
in two places: either by using semi-supervised algorithms in the clustering
ensemble generation step or in the form of a feedback used by the consensus
function stage. Also, we introduce a flexible two-parameter weighting
mechanism: the first parameter describes the compatibility between the datasets
under study and the semi-supervised clustering algorithms used to generate the
base partitions, while the second parameter is used to provide user feedback on
these partitions. The two parameters are engaged in a "relabeling and
voting" based consensus function to produce the final clustering.
| Ashraf Mohammed Iqbal, Abidalrahman Moh'd, Zahoor Khan | null | 1208.4138 | null | null |
Generating ordered list of Recommended Items: a Hybrid Recommender
System of Microblog | cs.IR cs.LG cs.SI | Precise recommendation of followers helps in improving the user experience
and maintaining the prosperity of Twitter and microblog platforms. In this
paper, we design a hybrid recommender system for microblogs as a solution to the KDD
Cup 2012 track 1 task, which requires predicting which users a user might follow in
Tencent Microblog. We describe the background of the problem and present an
algorithm consisting of keyword analysis, user taxonomy, (potential) interest
extraction and item recommendation. Experimental results show the high
performance of our algorithm. Some possible improvements are discussed, which
point to further study.
| Yingzhen Li and Ye Zhang | null | 1208.4147 | null | null |
A Learning Theoretic Approach to Energy Harvesting Communication System
Optimization | cs.LG cs.NI | A point-to-point wireless communication system in which the transmitter is
equipped with an energy harvesting device and a rechargeable battery, is
studied. Both the energy and the data arrivals at the transmitter are modeled
as Markov processes. Delay-limited communication is considered assuming that
the underlying channel is block fading with memory, and the instantaneous
channel state information is available at both the transmitter and the
receiver. The expected total transmitted data during the transmitter's
activation time is maximized under three different sets of assumptions
regarding the information available at the transmitter about the underlying
stochastic processes. A learning theoretic approach is introduced, which does
not assume any a priori information on the Markov processes governing the
communication system. In addition, online and offline optimization problems are
studied for the same setting. Full statistical knowledge and causal information
on the realizations of the underlying stochastic processes are assumed in the
online optimization problem, while the offline optimization problem assumes
non-causal knowledge of the realizations in advance. Comparing the optimal
solutions in all three frameworks, the performance loss due to the lack of the
transmitter's information regarding the behaviors of the underlying Markov
processes is quantified.
| Pol Blasco, Deniz G\"und\"uz and Mischa Dohler | 10.1109/TWC.2013.030413.121120 | 1208.4290 | null | null |
Optimized Look-Ahead Tree Policies: A Bridge Between Look-Ahead Tree
Policies and Direct Policy Search | cs.SY cs.AI cs.LG | Direct policy search (DPS) and look-ahead tree (LT) policies are two widely
used classes of techniques to produce high performance policies for sequential
decision-making problems. To make DPS approaches work well, one crucial issue
is to select an appropriate space of parameterized policies with respect to the
targeted problem. A fundamental issue in LT approaches is that, to take good
decisions, such policies must develop very large look-ahead trees which may
require excessive online computational resources. In this paper, we propose a
new hybrid policy learning scheme that lies at the intersection of DPS and LT,
in which the policy is an algorithm that develops a small look-ahead tree in a
directed way, guided by a node scoring function that is learned through DPS.
The LT-based representation is shown to be a versatile way of representing
policies in a DPS scheme, while at the same time, DPS makes it possible to significantly
reduce the size of the look-ahead trees that are required to take high-quality
decisions.
We experimentally compare our method with two other state-of-the-art DPS
techniques and four common LT policies on four benchmark domains and show that
it combines the advantages of the two techniques from which it originates. In
particular, we show that our method: (1) produces overall better performing
policies than both pure DPS and pure LT policies, (2) requires a substantially
smaller number of policy evaluations than other DPS techniques, (3) is easy to
tune and (4) results in policies that are quite robust with respect to
perturbations of the initial conditions.
| Tobias Jung, Louis Wehenkel, Damien Ernst, Francis Maes | null | 1208.4773 | null | null |
Identification of Probabilities of Languages | cs.LG math.PR | We consider the problem of inferring the probability distribution associated
with a language, given data consisting of an infinite sequence of elements of
the language. We do this under two assumptions on the algorithms concerned: (i)
like a real-life algorithm it has round-off errors, and (ii) it has no
round-off errors. Assuming (i) we (a) consider a probability mass function of
the elements of the language if the data are drawn independent identically
distributed (i.i.d.), provided the probability mass function is computable and
has a finite expectation. We give an effective procedure to almost surely
identify in the limit the target probability mass function using the Strong Law
of Large Numbers. Second, (b) we treat the case of possibly incomputable
probability mass functions in the above setting. In this case we can only
converge pointwise to the target probability mass function almost surely.
Third (c) we consider the case where the data are dependent assuming they are
typical for at least one computable measure and the language is finite. There
is an effective procedure to identify by infinite recurrence a nonempty subset
of the computable measures according to which the data is typical. Here we use
the theory of Kolmogorov complexity. Assuming (ii) we obtain the weaker result
for (a) that the target distribution is identified by infinite recurrence
almost surely; (b) stays the same as under assumption (i). We consider the
associated predictions.
| Paul M. B. Vitanyi (CWI and University of Amsterdam) and Nick Chater
(Behavioural Science Group, Warwick Business School, University of Warwick) | null | 1208.5003 | null | null |
Changepoint detection for high-dimensional time series with missing data | stat.ML cs.LG | This paper describes a novel approach to change-point detection when the
observed high-dimensional data may have missing elements. The performance of
classical methods for change-point detection typically scales poorly with the
dimensionality of the data, so that a large number of observations are
collected after the true change-point before it can be reliably detected.
Furthermore, missing components in the observed data handicap conventional
approaches. The proposed method addresses these challenges by modeling the
dynamic distribution underlying the data as lying close to a time-varying
low-dimensional submanifold embedded within the ambient observation space.
Specifically, streaming data is used to track a submanifold approximation,
measure deviations from this approximation, and calculate a series of
statistics of the deviations for detecting when the underlying manifold has
changed in a sharp or unexpected manner. The approach described in this paper
leverages several recent results in the field of high-dimensional data
analysis, including subspace tracking with missing data, multiscale analysis
techniques for point clouds, online optimization, and change-point detection
performance analysis. Simulations and experiments highlight the robustness and
efficacy of the proposed approach in detecting an abrupt change in an otherwise
slowly varying low-dimensional manifold.
| Yao Xie, Jiaji Huang, Rebecca Willett | 10.1109/JSTSP.2012.2234082 | 1208.5062 | null | null |
Vector Field k-Means: Clustering Trajectories by Fitting Multiple Vector
Fields | cs.LG | Scientists study trajectory data to understand trends in movement patterns,
such as human mobility for traffic analysis and urban planning. There is a
pressing need for scalable and efficient techniques for analyzing this data and
discovering the underlying patterns. In this paper, we introduce a novel
technique which we call vector-field $k$-means.
The central idea of our approach is to use vector fields to induce a
similarity notion between trajectories. Other clustering algorithms seek a
representative trajectory that best describes each cluster, much like $k$-means
identifies a representative "center" for each cluster. Vector-field $k$-means,
on the other hand, recognizes that in all but the simplest examples, no single
trajectory adequately describes a cluster. Our approach is based on the premise
that movement trends in trajectory data can be modeled as flows within multiple
vector fields, and the vector field itself is what defines each of the
clusters. We also show how vector-field $k$-means connects techniques for
scalar field design on meshes and $k$-means clustering.
We present an algorithm that finds a locally optimal clustering of
trajectories into vector fields, and demonstrate how vector-field $k$-means can
be used to mine patterns from trajectory data. We present experimental evidence
of its effectiveness and efficiency using several datasets, including
historical hurricane data, GPS tracks of people and vehicles, and anonymous
call records from a large phone company. We compare our results to previous
trajectory clustering techniques, and find that our algorithm performs faster
in practice than the current state-of-the-art in trajectory clustering, in some
examples by a large margin.
| Nivan Ferreira, James T. Klosowski, Carlos Scheidegger, Claudio Silva | null | 1208.5801 | null | null |
Link Prediction via Generalized Coupled Tensor Factorisation | cs.LG | This study deals with the missing link prediction problem: the problem of
predicting the existence of missing connections between entities of interest.
We address link prediction using coupled analysis of relational datasets
represented as heterogeneous data, i.e., datasets in the form of matrices and
higher-order tensors. We propose to use an approach based on probabilistic
interpretation of tensor factorisation models, i.e., Generalised Coupled Tensor
Factorisation, which can simultaneously fit a large class of tensor models to
higher-order tensors/matrices with common latent factors using different loss
functions. Numerical experiments demonstrate that joint analysis of data from
multiple sources via coupled factorisation improves the link prediction
performance, and that the selection of the right loss function and tensor model is
crucial for accurately predicting missing links.
| Beyza Ermi\c{s} and Evrim Acar and A. Taylan Cemgil | null | 1208.6231 | null | null |
Automated Marble Plate Classification System Based On Different Neural
Network Input Training Sets and PLC Implementation | cs.NE cs.LG | The process of sorting marble plates according to their surface texture is an
important task in the automated marble plate production. Nowadays some
inspection systems in marble industry that automate the classification tasks
are too expensive and are compatible only with specific technological equipment
in the plant. In this paper a new approach to the design of an Automated Marble
Plate Classification System (AMPCS), based on different neural network input
training sets, is proposed, aiming at high classification accuracy using simple
processing and application of only standard devices. It is based on training a
classification MLP neural network with three different input training sets:
extracted texture histograms, Discrete Cosine and Wavelet Transform over the
histograms. The algorithm is implemented in a PLC for real-time operation. The
performance of the system is assessed with each one of the input training sets.
The experimental test results regarding classification accuracy and quick
operation are presented and discussed.
| Irina Topalova | null | 1208.6310 | null | null |
Comparative Study and Optimization of Feature-Extraction Techniques for
Content based Image Retrieval | cs.CV cs.AI cs.IR cs.LG cs.MM | The aim of a Content-Based Image Retrieval (CBIR) system, also known as Query
by Image Content (QBIC), is to help users to retrieve relevant images based on
their contents. CBIR technologies provide a method to find images in large
databases by using unique descriptors from a trained image. The image
descriptors include texture, color, intensity and shape of the object inside an
image. Several feature-extraction techniques, viz. Average RGB, Color Moments,
Co-occurrence, Local Color Histogram, Global Color Histogram and Geometric
Moment have been critically compared in this paper. However, individually these
techniques result in poor performance. So, combinations of these techniques
have also been evaluated and results for the most efficient combination of
techniques have been presented and optimized for each class of image query. We
also propose an improvement in image retrieval performance by introducing the
idea of Query modification through image cropping. It enables the user to
identify a region of interest and modify the initial query to refine and
personalize the image retrieval results.
| Aman Chadha, Sushmit Mallik and Ravdeep Johar | 10.5120/8320-1959 | 1208.6335 | null | null |
A Widely Applicable Bayesian Information Criterion | cs.LG stat.ML | A statistical model or a learning machine is called regular if the map taking
a parameter to a probability distribution is one-to-one and if its Fisher
information matrix is always positive definite. If otherwise, it is called
singular. In regular statistical models, the Bayes free energy, which is
defined by the minus logarithm of Bayes marginal likelihood, can be
asymptotically approximated by the Schwarz Bayes information criterion (BIC),
whereas in singular models such approximation does not hold.
Recently, it was proved that the Bayes free energy of a singular model is
asymptotically given by a generalized formula using a birational invariant, the
real log canonical threshold (RLCT), instead of half the number of parameters
in BIC. Theoretical values of RLCTs in several statistical models are now being
discovered based on algebraic geometrical methodology. However, it has been
difficult to estimate the Bayes free energy using only training samples,
because an RLCT depends on an unknown true distribution.
In the present paper, we define a widely applicable Bayesian information
criterion (WBIC) by the average log likelihood function over the posterior
distribution with the inverse temperature $1/\log n$, where $n$ is the number
of training samples. We mathematically prove that WBIC has the same asymptotic
expansion as the Bayes free energy, even if a statistical model is singular for
and unrealizable by the true distribution. Since WBIC can be numerically
calculated without any information about the true distribution, it is a
generalized version of BIC to singular statistical models.
| Sumio Watanabe | null | 1208.6338 | null | null |
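In symbols, following the definition in the abstract: with prior $\varphi(w)$, model $p(x \mid w)$, and training samples $X_1, \dots, X_n$,

```latex
\mathrm{WBIC} = \mathbb{E}_w^{\beta}\Bigl[\sum_{i=1}^{n}\log p(X_i \mid w)\Bigr],
\qquad
\mathbb{E}_w^{\beta}[f(w)] =
\frac{\int f(w)\,\varphi(w)\prod_{i=1}^{n}p(X_i \mid w)^{\beta}\,dw}
     {\int \varphi(w)\prod_{i=1}^{n}p(X_i \mid w)^{\beta}\,dw},
\qquad
\beta = \frac{1}{\log n},
```

so the criterion is the posterior-averaged log likelihood at inverse temperature $1/\log n$ and can be estimated from tempered posterior samples.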
Statistically adaptive learning for a general class of cost functions
(SA L-BFGS) | cs.LG stat.ML | We present a system that enables rapid model experimentation for tera-scale
machine learning with trillions of non-zero features, billions of training
examples, and millions of parameters. Our contribution to the literature is a
new method (SA L-BFGS) for changing batch L-BFGS to perform in near real-time
by using statistical tools to balance the contributions of previous weights,
old training examples, and new training examples to achieve fast convergence
with few iterations. The result is, to our knowledge, the most scalable and
flexible linear learning system reported in the literature, beating standard
practice with the current best system (Vowpal Wabbit and AllReduce). Using the
KDD Cup 2012 data set from Tencent, Inc. we provide experimental results to
verify the performance of this method.
| Stephen Purpura, Dustin Hillard, Mark Hubenthal, Jim Walsh, Scott
Golder, Scott Smith | null | 1209.0029 | null | null |
Learning implicitly in reasoning in PAC-Semantics | cs.AI cs.DS cs.LG cs.LO | We consider the problem of answering queries about formulas of propositional
logic based on background knowledge partially represented explicitly as other
formulas, and partially represented as partially obscured examples
independently drawn from a fixed probability distribution, where the queries
are answered with respect to a weaker semantics than usual -- PAC-Semantics,
introduced by Valiant (2000) -- that is defined using the distribution of
examples. We describe a fairly general, efficient reduction to limited versions
of the decision problem for a proof system (e.g., bounded space treelike
resolution, bounded degree polynomial calculus, etc.) from corresponding
versions of the reasoning problem where some of the background knowledge is not
explicitly given as formulas, only learnable from the examples. Crucially, we
do not generate an explicit representation of the knowledge extracted from the
examples, and so the "learning" of the background knowledge is only done
implicitly. As a consequence, this approach can utilize formulas as background
knowledge that are not perfectly valid over the distribution---essentially the
analogue of agnostic learning here.
| Brendan Juba | null | 1209.0056 | null | null |
Estimating the historical and future probabilities of large terrorist
events | physics.data-an cs.LG physics.soc-ph stat.AP stat.ME | Quantities with right-skewed distributions are ubiquitous in complex social
systems, including political conflict, economics and social networks, and these
systems sometimes produce extremely large events. For instance, the 9/11
terrorist events produced nearly 3000 fatalities, nearly six times more than
the next largest event. But, was this enormous loss of life statistically
unlikely given modern terrorism's historical record? Accurately estimating the
probability of such an event is complicated by the large fluctuations in the
empirical distribution's upper tail. We present a generic statistical algorithm
for making such estimates, which combines semi-parametric models of tail
behavior and a nonparametric bootstrap. Applied to a global database of
terrorist events, we estimate the worldwide historical probability of observing
at least one 9/11-sized or larger event since 1968 to be 11-35%. These results
are robust to conditioning on global variations in economic development,
domestic versus international events, the type of weapon used and a truncated
history that stops at 1998. We then use this procedure to make a data-driven
statistical forecast of at least one similar event over the next decade.
| Aaron Clauset, Ryan Woodard | 10.1214/12-AOAS614 | 1209.0089 | null | null |
A History of Cluster Analysis Using the Classification Society's
Bibliography Over Four Decades | cs.DL cs.LG stat.ML | The Classification Literature Automated Search Service, an annual
bibliography based on citation of one or more of a set of around 80 book or
journal publications, ran from 1972 to 2012. We analyze here the years 1994 to
2011. The Classification Society's Service, as it was termed, has been produced
by the Classification Society. In earlier decades it was distributed as a
diskette or CD with the Journal of Classification. Among our findings are the
following: an enormous increase in scholarly production post approximately
2000; a very major increase in quantity, coupled with work in different
disciplines, from approximately 2004; and a major shift also from cluster
analysis in earlier times having mathematics and psychology as disciplines of
the journals published in, and affiliations of authors, contrasted with, in
more recent times, a "centre of gravity" in management and engineering.
| Fionn Murtagh and Michael J. Kurtz | null | 1209.0125 | null | null |
Autoregressive short-term prediction of turning points using support
vector regression | cs.LG cs.CE cs.NE | This work is concerned with autoregressive prediction of turning points in
financial price sequences. Such turning points are critical local extrema
points along a series, which mark the start of new swings. Predicting the
future time of such turning points or even their early or late identification
slightly before or after the fact has useful applications in economics and
finance. Building on a recently proposed neural network model for turning point
prediction, we propose and study a new autoregressive model for predicting
turning points of small swings. Our method relies on a known turning point
indicator, a Fourier enriched representation of price histories, and support
vector regression. We empirically examine the performance of the proposed
method over a long history of the Dow Jones Industrial Average. Our study shows
that the proposed method is superior to the previous neural network model in
terms of the trading performance of a simple trading application, and also exhibits
a quantifiable advantage over the buy-and-hold benchmark.
| Ran El-Yaniv, Alexandra Faynburd | null | 1209.0127 | null | null |
Proximal methods for the latent group lasso penalty | math.OC cs.LG stat.ML | We consider a regularized least squares problem, with regularization by
structured sparsity-inducing norms, which extend the usual $\ell_1$ and the
group lasso penalty, by allowing the subsets to overlap. Such regularizations
lead to nonsmooth problems that are difficult to optimize, and we propose in
this paper a suitable version of an accelerated proximal method to solve them.
We prove convergence of a nested procedure, obtained composing an accelerated
proximal method with an inner algorithm for computing the proximity operator.
By exploiting the geometrical properties of the penalty, we devise a new active
set strategy, thanks to which the inner iteration is relatively fast, thus
guaranteeing good computational performances of the overall algorithm. Our
approach allows us to deal with high-dimensional problems without pre-processing
for dimensionality reduction, leading to better computational and prediction
performance with respect to state-of-the-art methods, as shown empirically
both on toy and real data.
| Silvia Villa, Lorenzo Rosasco, Sofia Mosci, Alessandro Verri | null | 1209.0368 | null | null |
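A minimal sketch of the accelerated proximal scheme for the non-overlapping case, assuming NumPy. Duplicating each variable once per group it belongs to reduces the latent (overlapping) group lasso to this case; the paper instead computes the proximity operator of the overlapping penalty with an inner iteration and an active set strategy:

```python
# FISTA for 0.5*||Ax - b||^2 + tau * sum_g ||x_g||_2 with disjoint groups.
import numpy as np

def prox_group_lasso(v, groups, tau):
    """Group soft-thresholding: prox of the (non-overlapping) group lasso norm."""
    out = v.copy()
    for g in groups:
        norm = np.linalg.norm(v[g])
        out[g] = 0.0 if norm <= tau else (1.0 - tau / norm) * v[g]
    return out

def fista(A, b, groups, tau, lr, iters=300):
    x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
    for _ in range(iters):
        grad = A.T @ (A @ z - b)                 # gradient of the smooth part
        x_new = prox_group_lasso(z - lr * grad, groups, lr * tau)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2  # Nesterov momentum
        z = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 12))
b = A[:, :3].sum(axis=1)                          # only the first group is active
groups = [np.arange(0, 3), np.arange(3, 6), np.arange(6, 12)]
x_hat = fista(A, b, groups, tau=1.0, lr=1.0 / np.linalg.norm(A, 2) ** 2)
print(np.round(x_hat, 2))
```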
Fixed-rank matrix factorizations and Riemannian low-rank optimization | cs.LG math.OC | Motivated by the problem of learning a linear regression model whose
parameter is a large fixed-rank non-symmetric matrix, we consider the
optimization of a smooth cost function defined on the set of fixed-rank
matrices. We adopt the geometric framework of optimization on Riemannian
quotient manifolds. We study the underlying geometries of several well-known
fixed-rank matrix factorizations and then exploit the Riemannian quotient
geometry of the search space in the design of a class of gradient descent and
trust-region algorithms. The proposed algorithms generalize our previous
results on fixed-rank symmetric positive semidefinite matrices, apply to a
broad range of applications, scale to high-dimensional problems and confer a
geometric basis to recent contributions on the learning of fixed-rank
non-symmetric matrices. We make connections with existing algorithms in the
context of low-rank matrix completion and discuss relative usefulness of the
proposed framework. Numerical experiments suggest that the proposed algorithms
compete with the state-of-the-art and that manifold optimization offers an
effective and versatile framework for the design of machine learning algorithms
that learn a fixed-rank matrix.
| B. Mishra, G. Meyer, S. Bonnabel and R. Sepulchre | null | 1209.0430 | null | null |
Efficient EM Training of Gaussian Mixtures with Missing Data | cs.LG stat.ML | In data-mining applications, we are frequently faced with a large fraction of
missing entries in the data matrix, which is problematic for most discriminant
machine learning algorithms. A solution that we explore in this paper is the
use of a generative model (a mixture of Gaussians) to compute the conditional
expectation of the missing variables given the observed variables. Since
training a Gaussian mixture with many different patterns of missing values can
be computationally very expensive, we introduce a spanning-tree based algorithm
that significantly speeds up training in these conditions. We also observe that
good results can be obtained by using the generative model to fill-in the
missing values for a separate discriminant learning algorithm.
| Olivier Delalleau and Aaron Courville and Yoshua Bengio | null | 1209.0521 | null | null |
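A minimal sketch of the fill-in step for a single Gaussian component, assuming NumPy; for a mixture, this conditional expectation is averaged over components with posterior responsibilities as weights, and the paper's spanning-tree speed-up of EM is not shown:

```python
# Conditional expectation of missing entries under a Gaussian:
# E[x_m | x_o] = mu_m + S_mo S_oo^{-1} (x_o - mu_o).
import numpy as np

def impute_gaussian(x, mu, Sigma):
    m = np.isnan(x)          # missing coordinates
    o = ~m                   # observed coordinates
    S_oo = Sigma[np.ix_(o, o)]
    S_mo = Sigma[np.ix_(m, o)]
    x_imp = x.copy()
    x_imp[m] = mu[m] + S_mo @ np.linalg.solve(S_oo, x[o] - mu[o])
    return x_imp

mu = np.array([0.0, 1.0, -1.0])
Sigma = np.array([[1.0, 0.5, 0.2],
                  [0.5, 1.0, 0.3],
                  [0.2, 0.3, 1.0]])
print(impute_gaussian(np.array([0.4, np.nan, np.nan]), mu, Sigma))
```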
Sparse coding for multitask and transfer learning | cs.LG stat.ML | We investigate the use of sparse coding and dictionary learning in the
context of multitask and transfer learning. The central assumption of our
learning method is that the task parameters are well approximated by sparse
linear combinations of the atoms of a dictionary on a high or infinite
dimensional space. This assumption, together with the large quantity of
available data in the multitask and transfer learning settings, allows a
principled choice of the dictionary. We provide bounds on the generalization
error of this approach, for both settings. Numerical experiments on one
synthetic and two real datasets show the advantage of our method over single
task learning, a previous method based on orthogonal and dense representation
of the tasks and a related method learning task grouping.
| Andreas Maurer, Massimiliano Pontil, Bernardino Romera-Paredes | null | 1209.0738 | null | null |
Improving the K-means algorithm using improved downhill simplex search | cs.LG | The k-means algorithm is one of the well-known and most popular clustering
algorithms. K-means seeks an optimal partition of the data by minimizing the
sum of squared error with an iterative optimization procedure, which belongs to
the category of hill climbing algorithms. As we know hill climbing searches are
famous for converging to local optima. Since k-means can converge to a local
optimum, different initial points generally lead to different convergence
centroids, which makes it important to start with a reasonable initial
partition in order to achieve high quality clustering solutions. However, in
theory, there exist no efficient and universal methods for determining such
initial partitions. In this paper we try to find an optimal initial
partitioning for the k-means algorithm. To achieve this goal we propose a new
improved version of downhill simplex search and use it to find an optimal
initial partition for the clustering approach; we then compare this algorithm
with Genetic Algorithm-based (GA), Genetic K-Means (GKM), Improved Genetic
K-Means (IGKM) and k-means algorithms.
| Ehsan Saboori, Shafigh Parsazad, Anoosheh Sadeghi | 10.1109/ICSTE.2010.5608792 | 1209.0853 | null | null |
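A minimal sketch of the idea, assuming SciPy and scikit-learn and using SciPy's standard Nelder-Mead in place of the paper's improved downhill simplex variant: search for initial centers that minimize the sum of squared error, then hand them to k-means. The toy data are an illustrative assumption:

```python
# Use a downhill simplex (Nelder-Mead) search to choose k-means initial centers.
import numpy as np
from scipy.optimize import minimize
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
k, d = 3, X.shape[1]

def sse(flat_centers):
    """Sum of squared error of assigning each point to its nearest center."""
    centers = flat_centers.reshape(k, d)
    dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return dists.min(axis=1).sum()

x0 = X[np.random.default_rng(0).choice(len(X), k, replace=False)].ravel()
res = minimize(sse, x0, method="Nelder-Mead", options={"maxiter": 2000})
km = KMeans(n_clusters=k, init=res.x.reshape(k, d), n_init=1).fit(X)
print(km.inertia_)
```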
Structuring Relevant Feature Sets with Multiple Model Learning | cs.LG | Feature selection is one of the most prominent learning tasks, especially in
high-dimensional datasets in which the goal is to understand the mechanisms
that underlie the learning dataset. However, most feature selection methods deliver just
a flat set of relevant features and provide no further information on what kind
of structures, e.g. feature groupings, might underlie the set of relevant
features. In this paper we propose a new learning paradigm in which our goal is
to uncover the structures that underlie the set of relevant features for a given
learning problem. We uncover two types of feature sets: non-replaceable
features that contain important information about the target variable and
cannot be replaced by other features, and functionally similar features sets
that can be used interchangeably in learned models, given the presence of the
non-replaceable features, with no change in the predictive performance. To do
so we propose a new learning algorithm that learns a number of disjoint models
using a model disjointness regularization constraint together with a constraint
on the predictive agreement of the disjoint models. We explore the behavior of
our approach on a number of high-dimensional datasets, and show that, as
expected by their construction, these satisfy a number of properties. Namely,
model disjointness, a high predictive agreement, and a similar predictive
performance to models learned on the full set of relevant features. The ability
to structure the set of relevant features in such a manner can become a
valuable tool in different applications of scientific knowledge discovery.
| Jun Wang and Alexandros Kalousis | null | 1209.0913 | null | null |
The Annealing Sparse Bayesian Learning Algorithm | cs.IT cs.LG math.IT | In this paper we propose a two-level hierarchical Bayesian model and an
annealing schedule to re-enable the noise variance learning capability of the
fast marginalized Sparse Bayesian Learning algorithms. Performance measures such as
NMSE and F-measure can be greatly improved by the annealing technique. The
algorithm tends to produce the sparsest solution under moderate SNR
scenarios and can outperform most concurrent SBL algorithms while retaining a
small computational load.
| Benyuan Liu and Hongqi Fan and Zaiqi Lu and Qiang Fu | null | 1209.1033 | null | null |
Learning Probability Measures with respect to Optimal Transport Metrics | cs.LG stat.ML | We study the problem of estimating, in the sense of optimal transport
metrics, a measure which is assumed supported on a manifold embedded in a
Hilbert space. By establishing a precise connection between optimal transport
metrics, optimal quantization, and learning theory, we derive new probabilistic
bounds for the performance of a classic algorithm in unsupervised learning
(k-means), when used to produce a probability measure derived from the data. In
the course of the analysis, we arrive at new lower bounds, as well as
probabilistic upper bounds on the convergence rate of the empirical law of
large numbers, which, unlike existing bounds, are applicable to a wide class of
measures.
| Guillermo D. Canas and Lorenzo Rosasco | null | 1209.1077 | null | null |
Robustness and Generalization for Metric Learning | cs.LG cs.AI stat.ML | Metric learning has attracted a lot of interest over the last decade, but the
generalization ability of such methods has not been thoroughly studied. In this
paper, we introduce an adaptation of the notion of algorithmic robustness
(previously introduced by Xu and Mannor) that can be used to derive
generalization bounds for metric learning. We further show that a weak notion
of robustness is in fact a necessary and sufficient condition for a metric
learning algorithm to generalize. To illustrate the applicability of the
proposed framework, we derive generalization results for a large family of
existing metric learning algorithms, including some sparse formulations that
are not covered by previous results.
| Aur\'elien Bellet and Amaury Habrard | 10.1016/j.neucom.2014.09.044 | 1209.1086 | null | null |
Learning Manifolds with K-Means and K-Flats | cs.LG stat.ML | We study the problem of estimating a manifold from random samples. In
particular, we consider piecewise constant and piecewise linear estimators
induced by k-means and k-flats, and analyze their performance. We extend
previous results for k-means in two separate directions. First, we provide new
results for k-means reconstruction on manifolds and, secondly, we prove
reconstruction bounds for higher-order approximation (k-flats), for which no
known results were previously available. While the results for k-means are
novel, some of the technical tools are well-established in the literature. In
the case of k-flats, both the results and the mathematical tools are new.
| Guillermo D. Canas and Tomaso Poggio and Lorenzo Rosasco | null | 1209.1121 | null | null |
Multiclass Learning with Simplex Coding | stat.ML cs.LG | In this paper we discuss a novel framework for multiclass learning, defined
by a suitable coding/decoding strategy, namely the simplex coding, that allows
to generalize to multiple classes a relaxation approach commonly used in binary
classification. In this framework, a relaxation error analysis can be developed
avoiding constraints on the considered hypotheses class. Moreover, we show that
in this setting it is possible to derive the first provably consistent
regularized method with training/tuning complexity which is independent to the
number of classes. Tools from convex analysis are introduced that can be used
beyond the scope of this paper.
| Youssef Mroueh, Tomaso Poggio, Lorenzo Rosasco, Jean-Jacques Slotine | null | 1209.1360 | null | null |
On spatial selectivity and prediction across conditions with fMRI | stat.ML cs.LG | Researchers in functional neuroimaging mostly use activation coordinates to
formulate their hypotheses. Instead, we propose to use the full statistical
images to define regions of interest (ROIs). This paper presents two machine
learning approaches, transfer learning and selection transfer, that are
compared upon their ability to identify the common patterns between brain
activation maps related to two functional tasks. We provide some preliminary
quantification of these similarities, and show that selection transfer makes it
possible to set a spatial scale yielding ROIs that are more specific to the
context of interest than with transfer learning. In particular, selection
transfer outlines well known regions such as the Visual Word Form Area when
discriminating between different visual tasks.
| Yannick Schwartz (INRIA Saclay - Ile de France, LNAO), Ga\"el
Varoquaux (INRIA Saclay - Ile de France, LNAO), Bertrand Thirion (INRIA
Saclay - Ile de France, LNAO) | null | 1209.1450 | null | null |
Learning Model-Based Sparsity via Projected Gradient Descent | stat.ML cs.LG math.OC | Several convex formulation methods have been proposed previously for
statistical estimation with structured sparsity as the prior. These methods
often require a carefully tuned regularization parameter, often a cumbersome or
heuristic exercise. Furthermore, the estimate that these methods produce might
not belong to the desired sparsity model, albeit accurately approximating the
true parameter. Therefore, greedy-type algorithms could often be more desirable
in estimating structured-sparse parameters. So far, these greedy methods have
mostly focused on linear statistical models. In this paper we study the
projected gradient descent with non-convex structured-sparse parameter model as
the constraint set. Should the cost function have a Stable Model-Restricted
Hessian, the algorithm produces an approximation of the desired minimizer. As
an example we elaborate on the application of the main results to estimation in
Generalized Linear Models.
| Sohail Bahmani, Petros T. Boufounos, and Bhiksha Raj | 10.1109/TIT.2016.2515078 | 1209.1557 | null | null |
Rank Centrality: Ranking from Pair-wise Comparisons | cs.LG stat.ML | The question of aggregating pair-wise comparisons to obtain a global ranking
over a collection of objects has been of interest for a very long time: be it
ranking of online gamers (e.g. MSR's TrueSkill system) and chess players,
aggregating social opinions, or deciding which product to sell based on
transactions. In most settings, in addition to obtaining a ranking, finding
`scores' for each object (e.g. player's rating) is of interest for
understanding the intensity of the preferences.
In this paper, we propose Rank Centrality, an iterative rank aggregation
algorithm for discovering scores for objects (or items) from pair-wise
comparisons. The algorithm has a natural random walk interpretation over the
graph of objects with an edge present between a pair of objects if they are
compared; the score, which we call Rank Centrality, of an object turns out to
be its stationary probability under this random walk. To study the efficacy of
the algorithm, we consider the popular Bradley-Terry-Luce (BTL) model
(equivalent to the Multinomial Logit (MNL) for pair-wise comparisons) in which
each object has an associated score which determines the probabilistic outcomes
of pair-wise comparisons between objects. In terms of the pair-wise marginal
probabilities, which is the main subject of this paper, the MNL model and the
BTL model are identical. We bound the finite sample error rates between the
scores assumed by the BTL model and those estimated by our algorithm. In
particular, the number of samples required to learn the score well with high
probability depends on the structure of the comparison graph. When the
Laplacian of the comparison graph has a strictly positive spectral gap, e.g.
each item is compared to a subset of randomly chosen items, this leads to
dependence on the number of samples that is nearly order-optimal.
| Sahand Negahban, Sewoong Oh, Devavrat Shah | null | 1209.1688 | null | null |
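A minimal NumPy sketch of Rank Centrality as described above: build the random walk from empirical pairwise outcomes and take its stationary distribution as the scores; the win-count matrix below is an illustrative assumption:

```python
# Rank Centrality: scores are the stationary distribution of a random walk
# that moves from i to j in proportion to how often j beat i.
import numpy as np

def rank_centrality(wins, iters=1000):
    """wins[i, j] = number of times item i beat item j."""
    n = wins.shape[0]
    comps = wins + wins.T                     # total comparisons per pair
    d_max = (comps > 0).sum(axis=1).max()     # maximum comparison degree
    P = np.zeros((n, n))
    mask = comps > 0
    P[mask] = wins.T[mask] / comps[mask] / d_max
    np.fill_diagonal(P, 1.0 - P.sum(axis=1))  # lazy self-loops
    pi = np.ones(n) / n
    for _ in range(iters):                    # power iteration
        pi = pi @ P
    return pi / pi.sum()

wins = np.array([[0, 8, 6],
                 [2, 0, 7],
                 [4, 3, 0]])
print(rank_centrality(wins))   # higher score = stronger item
```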
Bandits with heavy tail | stat.ML cs.LG | The stochastic multi-armed bandit problem is well understood when the reward
distributions are sub-Gaussian. In this paper we examine the bandit problem
under the weaker assumption that the distributions have moments of order
$1+\epsilon$, for some $\epsilon \in (0,1]$. Surprisingly, moments of order 2
(i.e., finite variance) are sufficient to obtain regret bounds of the same
order as under sub-Gaussian reward distributions. In order to achieve such
regret, we define sampling strategies based on refined estimators of the mean
such as the truncated empirical mean, Catoni's M-estimator, and the
median-of-means estimator. We also derive matching lower bounds showing
that the best achievable regret deteriorates when $\epsilon < 1$.
| S\'ebastien Bubeck, Nicol\`o Cesa-Bianchi and G\'abor Lugosi | null | 1209.1727 | null | null |
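A minimal NumPy sketch of one refined mean estimator mentioned above, the median-of-means; a heavy-tailed bandit strategy would use it (or the truncated mean, or Catoni's M-estimator) in place of the empirical mean inside a UCB-style index:

```python
# Median-of-means: split the sample into blocks, average each block, and
# take the median of the block means; robust to heavy-tailed rewards.
import numpy as np

def median_of_means(x, n_blocks=8):
    x = np.asarray(x)
    blocks = np.array_split(x, min(n_blocks, len(x)))
    return np.median([b.mean() for b in blocks])

rng = np.random.default_rng(0)
heavy = rng.standard_t(df=2.1, size=10_000)  # variance barely finite
print(heavy.mean(), median_of_means(heavy))
```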
Design of Spectrum Sensing Policy for Multi-user Multi-band Cognitive
Radio Network | cs.LG cs.NI | Finding an optimal sensing policy for a particular access policy and sensing
scheme is a laborious combinatorial problem that requires the system model
parameters to be known. In practise the parameters or the model itself may not
be completely known making reinforcement learning methods appealing. In this
paper a non-parametric reinforcement learning-based method is developed for
sensing and accessing multi-band radio spectrum in multi-user cognitive radio
networks. A suboptimal sensing policy search algorithm is proposed for a
particular multi-user multi-band access policy and the randomized
Chair-Varshney rule. The randomized Chair-Varshney rule is used to reduce the
probability of false alarms under a constraint on the probability of detection
that protects the primary user. The simulation results show that the proposed
method achieves a sum profit (e.g. data rate) close to the optimal sensing
policy while achieving the desired probability of detection.
| Jan Oksanen, Jarmo Lund\'en and Visa Koivunen | null | 1209.1739 | null | null |
Securing Your Transactions: Detecting Anomalous Patterns In XML
Documents | cs.CR cs.LG | XML transactions are used in many information systems to store data and
interact with other systems. Abnormal transactions, the result of either an
on-going cyber attack or the actions of a benign user, can potentially harm the
interacting systems and therefore they are regarded as a threat. In this paper
we address the problem of anomaly detection and localization in XML
transactions using machine learning techniques. We present a new XML anomaly
detection framework, XML-AD. Within this framework, an automatic method for
extracting features from XML transactions was developed as well as a practical
method for transforming XML features into vectors of fixed dimensionality. With
these two methods in place, the XML-AD framework makes it possible to utilize
general learning algorithms for anomaly detection. Central to the functioning
of the framework is a novel multi-univariate anomaly detection algorithm,
ADIFA. The framework was evaluated on four XML transactions datasets, captured
from real information systems, in which it achieved over 89% true positive
detection rate with less than a 0.2% false positive rate.
| Eitan Menahem, Alon Schclar, Lior Rokach, Yuval Elovici | null | 1209.1797 | null | null |
An Empirical Study of MAUC in Multi-class Problems with Uncertain Cost
Matrices | cs.LG | Cost-sensitive learning relies on the availability of a known and fixed cost
matrix. However, in some scenarios, the cost matrix is uncertain during
training, and re-train a classifier after the cost matrix is specified would
not be an option. For binary classification, this issue can be successfully
addressed by methods maximizing the Area Under the ROC Curve (AUC) metric,
since the AUC measures the performance of base classifiers independently of cost
during training, and a larger AUC is more likely to lead to a smaller total
cost in testing using the threshold moving method. As an extension of AUC to
multi-class problems, MAUC has attracted a lot of attention and has been widely
used. Although MAUC also measures the performance of base classifiers independently
of cost, it is unclear whether a larger MAUC of classifiers is more likely to
lead to a smaller total cost. In fact, it is also unclear what kinds of
post-processing methods should be used in multi-class problems to convert base
classifiers into discrete classifiers such that the total cost is as small as
possible. In the paper, we empirically explore the relationship between MAUC
and the total cost of classifiers by applying two categories of post-processing
methods. Our results suggest that a larger MAUC is also beneficial.
Interestingly, simple calibration methods that convert the output matrix into
posterior probabilities perform better than existing sophisticated post
re-optimization methods.
| Rui Wang, Ke Tang | null | 1209.1800 | null | null |
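For reference, the MAUC of Hand and Till (2001) averages pairwise AUCs over all class pairs; a minimal sketch assuming scikit-learn, with a synthetic score matrix as an illustrative assumption:

```python
# MAUC: for each class pair (i, j), compute the AUC of column i separating
# class i from class j and vice versa, then average over all pairs.
from itertools import combinations
import numpy as np
from sklearn.metrics import roc_auc_score

def mauc(y_true, scores):
    """scores[n, c] are class-membership scores; y_true takes values 0..c-1."""
    aucs = []
    for i, j in combinations(np.unique(y_true), 2):
        mask = np.isin(y_true, [i, j])
        yi = (y_true[mask] == i).astype(int)
        a_ij = roc_auc_score(yi, scores[mask, i])      # i vs j, scored by column i
        a_ji = roc_auc_score(1 - yi, scores[mask, j])  # j vs i, scored by column j
        aucs.append((a_ij + a_ji) / 2)
    return float(np.mean(aucs))

rng = np.random.default_rng(0)
y = rng.integers(0, 3, 300)
S = rng.random((300, 3))
S[np.arange(300), y] += 0.5   # make the scores informative
print(mauc(y, S))
```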
Stochastic Dual Coordinate Ascent Methods for Regularized Loss
Minimization | stat.ML cs.LG math.OC | Stochastic Gradient Descent (SGD) has become popular for solving large scale
supervised machine learning optimization problems such as SVM, due to its
strong theoretical guarantees. While the closely related Dual Coordinate Ascent
(DCA) method has been implemented in various software packages, it has so far
lacked good convergence analysis. This paper presents a new analysis of
Stochastic Dual Coordinate Ascent (SDCA) showing that this class of methods
enjoys strong theoretical guarantees that are comparable to or better than those of SGD.
This analysis justifies the effectiveness of SDCA for practical applications.
| Shai Shalev-Shwartz and Tong Zhang | null | 1209.1873 | null | null |
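A minimal NumPy sketch of SDCA for the L2-regularized SVM, where each randomly chosen dual coordinate is maximized in closed form (the hinge-loss update); this is a plain-vanilla variant under those assumptions, not the paper's exact pseudocode:

```python
# SDCA for min_w (1/n) sum_i max(0, 1 - y_i w.x_i) + (lam/2)||w||^2,
# maintaining the primal-dual link w = (1/(lam*n)) sum_i alpha_i x_i.
import numpy as np

def sdca_svm(X, y, lam=0.01, epochs=20, seed=0):
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    rng = np.random.default_rng(seed)
    for _ in range(epochs * n):
        i = rng.integers(n)
        # Closed-form maximization of the dual along coordinate i.
        q = (1.0 - y[i] * X[i] @ w) / (X[i] @ X[i] / (lam * n))
        delta = y[i] * max(0.0, min(1.0, q + alpha[i] * y[i])) - alpha[i]
        alpha[i] += delta
        w += delta * X[i] / (lam * n)
    return w

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = np.sign(X @ np.array([1.0, -1.0, 0.5, 0.0, 0.0]) + 0.1 * rng.standard_normal(200))
w = sdca_svm(X, y)
print((np.sign(X @ w) == y).mean())   # training accuracy
```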
A Comparative Study of Efficient Initialization Methods for the K-Means
Clustering Algorithm | cs.LG cs.CV | K-means is undoubtedly the most widely used partitional clustering algorithm.
Unfortunately, due to its gradient descent nature, this algorithm is highly
sensitive to the initial placement of the cluster centers. Numerous
initialization methods have been proposed to address this problem. In this
paper, we first present an overview of these methods with an emphasis on their
computational efficiency. We then compare eight commonly used linear time
complexity initialization methods on a large and diverse collection of data
sets using various performance criteria. Finally, we analyze the experimental
results using non-parametric statistical tests and provide recommendations for
practitioners. We demonstrate that popular initialization methods often perform
poorly and that there are in fact strong alternatives to these methods.
| M. Emre Celebi, Hassan A. Kingravi, Patricio A. Vela | 10.1016/j.eswa.2012.07.021 | 1209.1960 | null | null |
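A minimal sketch of such a comparison for two linear time complexity initializations available in scikit-learn; the synthetic data and number of runs are illustrative assumptions:

```python
# Compare random initialization against k-means++ by the final sum of
# squared error over repeated single-init runs.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=8, cluster_std=1.5, random_state=1)
for init in ("random", "k-means++"):
    sse = [KMeans(n_clusters=8, init=init, n_init=1, random_state=s).fit(X).inertia_
           for s in range(20)]
    print(f"{init:10s} mean SSE {np.mean(sse):8.1f}   worst {np.max(sse):8.1f}")
```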
Fused Multiple Graphical Lasso | cs.LG stat.ML | In this paper, we consider the problem of estimating multiple graphical
models simultaneously using the fused lasso penalty, which encourages adjacent
graphs to share similar structures. A motivating example is the analysis of
brain networks of Alzheimer's disease using neuroimaging data. Specifically, we
may wish to estimate a brain network for the normal controls (NC), a brain
network for the patients with mild cognitive impairment (MCI), and a brain
network for Alzheimer's patients (AD). We expect the two brain networks for NC
and MCI to share common structures but not to be identical to each other;
similarly for the two brain networks for MCI and AD. The proposed formulation
can be solved using a second-order method. Our key technical contribution is to
establish the necessary and sufficient condition for the graphs to be
decomposable. Based on this key property, a simple screening rule is presented,
which decomposes the large graphs into small subgraphs and allows an efficient
estimation of multiple independent (small) subgraphs, dramatically reducing the
computational cost. We perform experiments on both synthetic and real data; our
results demonstrate the effectiveness and efficiency of the proposed approach.
| Sen Yang, Zhaosong Lu, Xiaotong Shen, Peter Wonka, Jieping Ye | null | 1209.2139 | null | null |
Cooperative learning in multi-agent systems from intermittent
measurements | math.OC cs.LG cs.MA cs.SY | Motivated by the problem of tracking a direction in a decentralized way, we
consider the general problem of cooperative learning in multi-agent systems
with time-varying connectivity and intermittent measurements. We propose a
distributed learning protocol capable of learning an unknown vector $\mu$ from
noisy measurements made independently by autonomous nodes. Our protocol is
completely distributed and able to cope with the time-varying, unpredictable,
and noisy nature of inter-agent communication, and intermittent noisy
measurements of $\mu$. Our main result bounds the learning speed of our
protocol in terms of the size and combinatorial features of the (time-varying)
networks connecting the nodes.
| Naomi Ehrich Leonard, Alex Olshevsky | null | 1209.2194 | null | null |
Counterfactual Reasoning and Learning Systems | cs.LG cs.AI cs.IR math.ST stat.TH | This work shows how to leverage causal inference to understand the behavior
of complex learning systems interacting with their environment and predict the
consequences of changes to the system. Such predictions allow both humans and
algorithms to select changes that improve both the short-term and long-term
performance of such systems. This work is illustrated by experiments carried
out on the ad placement system associated with the Bing search engine.
| L\'eon Bottou, Jonas Peters, Joaquin Qui\~nonero-Candela, Denis X.
Charles, D. Max Chickering, Elon Portugaly, Dipankar Ray, Patrice Simard, Ed
Snelson | null | 1209.2355 | null | null |
On the Complexity of Bandit and Derivative-Free Stochastic Convex
Optimization | cs.LG math.OC stat.ML | The problem of stochastic convex optimization with bandit feedback (in the
learning community) or without knowledge of gradients (in the optimization
community) has received much attention in recent years, in the form of
algorithms and performance upper bounds. However, much less is known about the
inherent complexity of these problems, and there are few lower bounds in the
literature, especially for nonlinear functions. In this paper, we investigate
the attainable error/regret in the bandit and derivative-free settings, as a
function of the dimension d and the available number of queries T. We provide a
precise characterization of the attainable performance for strongly-convex and
smooth functions, which also imply a non-trivial lower bound for more general
problems. Moreover, we prove that in both the bandit and derivative-free
setting, the required number of queries must scale at least quadratically with
the dimension. Finally, we show that on the natural class of quadratic
functions, it is possible to obtain a "fast" O(1/T) error rate in terms of T,
under mild assumptions, even without having access to gradients. To the best of
our knowledge, this is the first such rate in a derivative-free stochastic
setting, and holds despite previous results which seem to imply the contrary.
| Ohad Shamir | null | 1209.2388 | null | null |
Query Complexity of Derivative-Free Optimization | stat.ML cs.LG | This paper provides lower bounds on the convergence rate of Derivative Free
Optimization (DFO) with noisy function evaluations, exposing a fundamental and
unavoidable gap between the performance of algorithms with access to gradients
and those with access to only function evaluations. However, there are
situations in which DFO is unavoidable, and for such situations we propose a
new DFO algorithm that is proved to be near optimal for the class of strongly
convex objective functions. A distinctive feature of the algorithm is that it
uses only Boolean-valued function comparisons, rather than function
evaluations. This makes the algorithm useful in an even wider range of
applications, such as optimization based on paired comparisons from human
subjects, for example. We also show that regardless of whether DFO is based on
noisy function evaluations or Boolean-valued function comparisons, the
convergence rate is the same.
| Kevin G. Jamieson, Robert D. Nowak, Benjamin Recht | null | 1209.2434 | null | null |
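To illustrate optimization from Boolean-valued comparisons only, here is a one-dimensional sketch: a ternary search that shrinks the interval containing the minimizer of a unimodal function using answers to "is f(a) < f(b)?" queries, never the function values themselves. The test function is hypothetical.

```python
# Sketch of derivative-free optimization from Boolean comparisons only:
# for a 1-D unimodal f, each query asks "is f(a) < f(b)?" and a
# ternary search shrinks the interval containing the minimizer.
def argmin_by_comparisons(less, lo, hi, iters=60):
    """less(a, b) returns True iff f(a) < f(b); never sees f's values."""
    for _ in range(iters):
        a = lo + (hi - lo) / 3
        b = hi - (hi - lo) / 3
        if less(a, b):
            hi = b        # minimizer lies in [lo, b]
        else:
            lo = a        # minimizer lies in [a, hi]
    return (lo + hi) / 2

f = lambda x: (x - 1.7) ** 2
print(argmin_by_comparisons(lambda a, b: f(a) < f(b), -10, 10))
```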
Performance Evaluation of Predictive Classifiers For Knowledge Discovery
From Engineering Materials Data Sets | cs.LG | In this paper, naive Bayesian and C4.5 Decision Tree Classifiers (DTC) are
applied to materials informatics to classify engineering materials into
different classes, supporting the selection of materials that suit the input
design specifications. The classifiers are analyzed individually, their
performance is evaluated with confusion-matrix predictive parameters and
standard measures, and the classification results are analyzed across
different classes of materials. The comparison finds that the naive Bayesian
classifier is more accurate than the C4.5 DTC. The knowledge discovered by the
naive Bayesian classifier can be employed for decision making in materials
selection in manufacturing industries.
| Hemanth K. S Doreswamy | null | 1209.2501 | null | null |
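A rough sketch of this kind of comparison, using scikit-learn's GaussianNB and a CART-style DecisionTreeClassifier as a stand-in for C4.5; the paper's materials data set is not public, so the iris data serves as a placeholder.

```python
# Compare a naive Bayes classifier with a decision tree and report
# accuracy plus a confusion matrix, mirroring the evaluation above.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix, accuracy_score

X, y = load_iris(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

for clf in (GaussianNB(), DecisionTreeClassifier(random_state=0)):
    pred = clf.fit(Xtr, ytr).predict(Xte)
    print(type(clf).__name__, accuracy_score(yte, pred))
    print(confusion_matrix(yte, pred))
```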
Probabilities on Sentences in an Expressive Logic | cs.LO cs.AI cs.LG math.LO math.PR | Automated reasoning about uncertain knowledge has many applications. One
difficulty when developing such systems is the lack of a completely
satisfactory integration of logic and probability. We address this problem
directly. Expressive languages like higher-order logic are ideally suited for
representing and reasoning about structured knowledge. Uncertain knowledge can
be modeled by using graded probabilities rather than binary truth-values. The
main technical problem studied in this paper is the following: Given a set of
sentences, each having some probability of being true, what probability should
be ascribed to other (query) sentences? A natural wish-list, among others, is
that the probability distribution (i) is consistent with the knowledge base,
(ii) allows for a consistent inference procedure and in particular (iii)
reduces to deductive logic in the limit of probabilities being 0 and 1, (iv)
allows (Bayesian) inductive reasoning and (v) learning in the limit and in
particular (vi) allows confirmation of universally quantified
hypotheses/sentences. We translate this wish-list into technical requirements
for a prior probability and show that probabilities satisfying all our criteria
exist. We also give explicit constructions and several general
characterizations of probabilities that satisfy some or all of the criteria and
various (counter) examples. We also derive necessary and sufficient conditions
for extending beliefs about finitely many sentences to suitable probabilities
over all sentences, and in particular least dogmatic or least biased ones. We
conclude with a brief outlook on how the developed theory might be used and
approximated in autonomous reasoning agents. Our theory is a step towards a
globally consistent and empirically satisfactory unification of probability and
logic.
| Marcus Hutter and John W. Lloyd and Kee Siong Ng and William T. B.
Uther | null | 1209.2620 | null | null |
Conditional validity of inductive conformal predictors | cs.LG | Conformal predictors are set predictors that are automatically valid in the
sense of having coverage probability equal to or exceeding a given confidence
level. Inductive conformal predictors are a computationally efficient version
of conformal predictors satisfying the same property of validity. However,
inductive conformal predictors have so far only been known to control unconditional
coverage probability. This paper explores various versions of conditional
validity and various ways to achieve them using inductive conformal predictors
and their modifications.
| Vladimir Vovk | null | 1209.2673 | null | null |
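For concreteness, a minimal sketch of a split (inductive) conformal predictor for regression, assuming absolute residuals as the nonconformity score and an underlying linear model; the data are synthetic.

```python
# Minimal inductive conformal predictor for regression: split off a
# calibration set, use |y - yhat| as the nonconformity score, and
# return intervals with the requested coverage level.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=500)

# proper training set / calibration set split
X_train, X_cal = X[:300], X[300:]
y_train, y_cal = y[:300], y[300:]

model = LinearRegression().fit(X_train, y_train)
scores = np.abs(y_cal - model.predict(X_cal))   # nonconformity scores

alpha = 0.1                                     # 90% confidence
n_cal = len(scores)
level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
q = np.quantile(scores, level)

x_new = rng.normal(size=(1, 3))
pred = model.predict(x_new)[0]
print("prediction interval:", (pred - q, pred + q))
```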
Regret Bounds for Restless Markov Bandits | cs.LG math.OC stat.ML | We consider the restless Markov bandit problem, in which the state of each
arm evolves according to a Markov process independently of the learner's
actions. We suggest an algorithm that after $T$ steps achieves
$\tilde{O}(\sqrt{T})$ regret with respect to the best policy that knows the
distributions of all arms. No assumptions on the Markov chains are made except
that they are irreducible. In addition, we show that index-based policies are
necessarily suboptimal for the considered problem.
| Ronald Ortner, Daniil Ryabko, Peter Auer, R\'emi Munos | null | 1209.2693 | null | null |
Multi-track Map Matching | cs.LG cs.DS stat.AP | We study algorithms for matching user tracks, consisting of time-ordered
location points, to paths in the road network. Previous work has focused on the
scenario where the location data is linearly ordered and consists of fairly
dense and regular samples. In this work, we consider the \emph{multi-track map
matching}, where the location data comes from different trips on the same
route, each with very sparse samples. This captures the realistic scenario
where users repeatedly travel on regular routes and samples are sparsely
collected, either due to energy consumption constraints or because samples are
only collected when the user actively uses a service. In the multi-track
problem, the total set of combined locations is only partially ordered, rather
than globally ordered as required by previous map-matching algorithms. We
propose two methods, the iterative projection scheme and the graph Laplacian
scheme, to solve the multi-track problem by using a single-track map-matching
subroutine. We also propose a boosting technique which may be applied to either
approach to improve the accuracy of the estimated paths. In addition, in order
to deal with variable sampling rates in single-track map matching, we propose a
method based on a particular regularized cost function that can be adapted for
different sampling rates and measurement errors. We evaluate the effectiveness
of our techniques for reconstructing tracks under several different
configurations of sampling error and sampling rate.
| Adel Javanmard, Maya Haridasan and Li Zhang | null | 1209.2759 | null | null |
Minimax Multi-Task Learning and a Generalized Loss-Compositional
Paradigm for MTL | cs.LG stat.ML | Since its inception, the modus operandi of multi-task learning (MTL) has been
to minimize the task-wise mean of the empirical risks. We introduce a
generalized loss-compositional paradigm for MTL that includes a spectrum of
formulations as a subfamily. One endpoint of this spectrum is minimax MTL: a
new MTL formulation that minimizes the maximum of the tasks' empirical risks.
Via a certain relaxation of minimax MTL, we obtain a continuum of MTL
formulations spanning minimax MTL and classical MTL. The full paradigm itself
is loss-compositional, operating on the vector of empirical risks. It
incorporates minimax MTL, its relaxations, and many new MTL formulations as
special cases. We show theoretically that minimax MTL tends to avoid worst case
outcomes on newly drawn test tasks in the learning to learn (LTL) test setting.
The results of several MTL formulations on synthetic and real problems in the
MTL and LTL test settings are encouraging.
| Nishant A. Mehta, Dongryeol Lee, Alexander G. Gray | null | 1209.2784 | null | null |
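A toy sketch of the minimax endpoint of this spectrum: minimize the maximum of the tasks' empirical risks by subgradient descent. For simplicity a single parameter vector is shared across tasks, which is an assumption of this sketch rather than the paper's formulation.

```python
# Minimax MTL sketch: at each step, take a gradient step on the task
# currently attaining the maximum empirical risk (a subgradient of the
# max of the task risks). Data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
T, n, d = 5, 40, 3
Xs = [rng.normal(size=(n, d)) for _ in range(T)]
ys = [X @ rng.normal(size=d) + 0.1 * rng.normal(size=n) for X in Xs]

w = np.zeros(d)                   # one shared parameter vector, for simplicity
for it in range(500):
    risks = [np.mean((X @ w - y) ** 2) for X, y in zip(Xs, ys)]
    k = int(np.argmax(risks))     # subgradient of max = gradient of worst task
    g = 2 * Xs[k].T @ (Xs[k] @ w - ys[k]) / n
    w -= 0.01 * g

risks = [np.mean((X @ w - y) ** 2) for X, y in zip(Xs, ys)]
print("per-task risks:", [round(r, 3) for r in risks])
```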
Improving Energy Efficiency in Femtocell Networks: A Hierarchical
Reinforcement Learning Framework | cs.LG | This paper investigates energy efficiency for two-tier femtocell networks
through combining game theory and stochastic learning. With the Stackelberg
game formulation, a hierarchical reinforcement learning framework is applied to
study the joint average utility maximization of macrocells and femtocells
subject to the minimum signal-to-interference-plus-noise-ratio requirements.
The macrocells behave as the leaders and the femtocells are followers during
the learning procedure. At each time step, the leaders commit to dynamic
strategies based on the best responses of the followers, while the followers
compete against each other with no further information but the leaders'
strategy information. In this paper, we propose two learning algorithms to
schedule each cell's stochastic power levels, led by the macrocells.
Numerical experiments are presented to validate the proposed studies and show
that the two learning algorithms substantially improve the energy efficiency of
the femtocell networks.
| Xianfu Chen, Honggang Zhang, Tao Chen, and Mika Lasanen | null | 1209.2790 | null | null |
Community Detection in the Labelled Stochastic Block Model | cs.SI cs.LG math.PR physics.soc-ph | We consider the problem of community detection from observed interactions
between individuals, in the context where multiple types of interaction are
possible. We use labelled stochastic block models to represent the observed
data, where labels correspond to interaction types. Focusing on a two-community
scenario, we conjecture a threshold for the problem of reconstructing the
hidden communities in a way that is correlated with the true partition. To
substantiate the conjecture, we prove that the given threshold correctly
identifies a transition on the behaviour of belief propagation from insensitive
to sensitive. We further prove that the same threshold corresponds to the
transition in a related inference problem on a tree model from infeasible to
feasible. Finally, numerical results using belief propagation for community
detection give further support to the conjecture.
| Simon Heimlicher, Marc Lelarge, Laurent Massouli\'e | null | 1209.2910 | null | null |
Parametric Local Metric Learning for Nearest Neighbor Classification | cs.LG | We study the problem of learning local metrics for nearest neighbor
classification. Most previous works on local metric learning learn a number of
unrelated local metrics. While this "independence" approach delivers
increased flexibility, its downside is the considerable risk of overfitting. We
present a new parametric local metric learning method in which we learn a
smooth metric matrix function over the data manifold. Using an approximation
error bound of the metric matrix function we learn local metrics as linear
combinations of basis metrics defined on anchor points over different regions
of the instance space. We constrain the metric matrix function by imposing on
the linear combinations manifold regularization which makes the learned metric
matrix function vary smoothly along the geodesics of the data manifold. Our
metric learning method has excellent performance both in terms of predictive
power and scalability. We experimented with several large-scale classification
problems, with tens of thousands of instances, and compared it with several
state-of-the-art metric learning methods, both global and local, as well as to SVM with
automatic kernel selection, all of which it outperforms in a significant
manner.
| Jun Wang, Adam Woznica, Alexandros Kalousis | null | 1209.3056 | null | null |
Analog readout for optical reservoir computers | cs.ET cs.LG cs.NE physics.optics | Reservoir computing is a new, powerful and flexible machine learning
technique that is easily implemented in hardware. Recently, by using a
time-multiplexed architecture, hardware reservoir computers have reached
performance comparable to digital implementations. Operating speeds allowing
for real time information operation have been reached using optoelectronic
systems. At present the main performance bottleneck is the readout layer which
uses slow, digital postprocessing. We have designed an analog readout suitable
for time-multiplexed optoelectronic reservoir computers, capable of working in
real time. The readout has been built and tested experimentally on a standard
benchmark task. Its performance is better than non-reservoir methods, with
ample room for further improvement. The present work thereby overcomes one of
the major limitations for the future development of hardware reservoir
computers.
| Anteo Smerieri, Fran\c{c}ois Duport, Yvan Paquot, Benjamin Schrauwen,
Marc Haelterman, Serge Massar | null | 1209.3129 | null | null |
Thompson Sampling for Contextual Bandits with Linear Payoffs | cs.LG cs.DS stat.ML | Thompson Sampling is one of the oldest heuristics for multi-armed bandit
problems. It is a randomized algorithm based on Bayesian ideas, and has
recently generated significant interest after several studies demonstrated it
to have better empirical performance compared to the state-of-the-art methods.
However, many questions regarding its theoretical performance remained open. In
this paper, we design and analyze a generalization of Thompson Sampling
algorithm for the stochastic contextual multi-armed bandit problem with linear
payoff functions, when the contexts are provided by an adaptive adversary. This
is among the most important and widely studied versions of the contextual
bandits problem. We provide the first theoretical guarantees for the contextual
version of Thompson Sampling. We prove a high probability regret bound of
$\tilde{O}(d^{3/2}\sqrt{T})$ (or $\tilde{O}(d\sqrt{T \log(N)})$), which is the
best regret bound achieved by any computationally efficient algorithm available
for this problem in the current literature, and is within a factor of
$\sqrt{d}$ (or $\sqrt{\log(N)}$) of the information-theoretic lower bound for
this problem.
| Shipra Agrawal, Navin Goyal | null | 1209.3352 | null | null |
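A minimal sketch in the spirit of the algorithm analyzed above: maintain a Gaussian posterior over the parameter, sample it, and play the arm whose context has the highest sampled payoff. The contexts, noise level, and the scaling constant v below are hypothetical choices, not the paper's tuned values.

```python
# Thompson Sampling with linear payoffs (sketch): Gaussian posterior
# over theta, one posterior draw per round, greedy arm choice.
import numpy as np

rng = np.random.default_rng(4)
d, K, T, v = 5, 10, 2000, 0.5
theta_true = rng.normal(size=d)

B = np.eye(d)            # posterior precision
f = np.zeros(d)          # running sum of context * reward
for t in range(T):
    contexts = rng.normal(size=(K, d))
    mu = np.linalg.solve(B, f)
    cov = v ** 2 * np.linalg.inv(B)
    theta_s = rng.multivariate_normal(mu, cov)   # posterior sample
    a = int(np.argmax(contexts @ theta_s))
    r = contexts[a] @ theta_true + 0.1 * rng.normal()
    B += np.outer(contexts[a], contexts[a])
    f += contexts[a] * r

print("estimation error:", np.linalg.norm(np.linalg.solve(B, f) - theta_true))
```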
Further Optimal Regret Bounds for Thompson Sampling | cs.LG cs.DS stat.ML | Thompson Sampling is one of the oldest heuristics for multi-armed bandit
problems. It is a randomized algorithm based on Bayesian ideas, and has
recently generated significant interest after several studies demonstrated it
to have better empirical performance compared to state-of-the-art methods.
In this paper, we provide a novel regret analysis for Thompson Sampling that
simultaneously proves both the optimal problem-dependent bound of
$(1+\epsilon)\sum_i \frac{\ln T}{\Delta_i}+O(\frac{N}{\epsilon^2})$ and the
first near-optimal problem-independent bound of $O(\sqrt{NT\ln T})$ on the
expected regret of this algorithm. Our near-optimal problem-independent bound
solves a COLT 2012 open problem of Chapelle and Li. The optimal
problem-dependent regret bound for this problem was first proven recently by
Kaufmann et al. [ALT 2012]. Our novel martingale-based analysis techniques are
conceptually simple, easily extend to distributions other than the Beta
distribution, and also extend to the more general contextual bandits setting
[Manuscript, Agrawal and Goyal, 2012].
| Shipra Agrawal, Navin Goyal | null | 1209.3353 | null | null |
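The Beta-Bernoulli version of Thompson Sampling that these bounds concern is only a few lines of code; the arm means below are toy values.

```python
# Thompson Sampling for Bernoulli arms with Beta(1,1) priors: draw one
# posterior sample per arm, play the argmax, update the winning arm.
import numpy as np

rng = np.random.default_rng(5)
means = np.array([0.3, 0.5, 0.7])   # hypothetical arm means
S = np.ones(3)                       # successes + 1 (Beta alpha)
F = np.ones(3)                       # failures + 1 (Beta beta)
regret = 0.0
for t in range(5000):
    samples = rng.beta(S, F)         # one posterior draw per arm
    a = int(np.argmax(samples))
    reward = rng.random() < means[a]
    S[a] += reward
    F[a] += 1 - reward
    regret += means.max() - means[a]
print("cumulative regret:", round(regret, 1))
```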
A Hajj And Umrah Location Classification System For Video Crowded Scenes | cs.CV cs.CY cs.LG | In this paper, a new automatic system for classifying ritual locations in
diverse Hajj and Umrah video scenes is investigated. This challenging subject
has mostly been ignored in the past due to several problems, one of which is
the lack of realistic annotated video datasets. The HUER dataset is defined to
model six different Hajj and Umrah ritual locations [26].
The proposed Hajj and Umrah ritual location classification system consists of
four main phases: preprocessing, segmentation, feature extraction, and
location classification. Shot boundary detection and background/foreground
segmentation algorithms are applied to prepare the input video scenes for the
KNN, ANN, and SVM classifiers. The system improves on state-of-the-art results
for Hajj and Umrah location classification, and successfully recognizes the
six Hajj rituals with more than 90% accuracy. The demonstrated experiments
show promising results.
| Hossam M. Zawbaa, Salah A. Aly, Adnan A. Gutub | null | 1209.3433 | null | null |
Active Learning for Crowd-Sourced Databases | cs.LG cs.DB | Crowd-sourcing has become a popular means of acquiring labeled data for a
wide variety of tasks where humans are more accurate than computers, e.g.,
labeling images, matching objects, or analyzing sentiment. However, relying
solely on the crowd is often impractical even for data sets with thousands of
items, due to time and cost constraints of acquiring human input (which cost
pennies and minutes per label). In this paper, we propose algorithms for
integrating machine learning into crowd-sourced databases, with the goal of
allowing crowd-sourcing applications to scale, i.e., to handle larger datasets
at lower costs. The key observation is that, in many of the above tasks, humans
and machine learning algorithms can be complementary, as humans are often more
accurate but slow and expensive, while algorithms are usually less accurate,
but faster and cheaper.
Based on this observation, we present two new active learning algorithms to
combine humans and algorithms together in a crowd-sourced database. Our
algorithms are based on the theory of non-parametric bootstrap, which makes our
results applicable to a broad class of machine learning models. Our results, on
three real-life datasets collected with Amazon's Mechanical Turk, and on 15
well-known UCI data sets, show that our methods on average ask humans to label
one to two orders of magnitude fewer items to achieve the same accuracy as a
baseline that labels random images, and two to eight times fewer questions than
previous active learning schemes.
| Barzan Mozafari, Purnamrita Sarkar, Michael J. Franklin, Michael I.
Jordan, Samuel Madden | null | 1209.3686 | null | null |
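A hedged sketch of the underlying bootstrap idea (not the paper's exact algorithm): train models on bootstrap resamples and route the items on which the resampled models disagree most to the crowd. The data and model choice below are hypothetical.

```python
# Bootstrap-based uncertainty for crowd routing: fit B models on
# bootstrap resamples, rank pool items by prediction disagreement,
# and send the most uncertain items to human labelers.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(9)
X = rng.normal(size=(300, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_pool = rng.normal(size=(1000, 2))

votes = np.zeros((10, len(X_pool)))
for b in range(10):
    idx = rng.integers(0, len(X), size=len(X))   # bootstrap resample
    clf = DecisionTreeClassifier().fit(X[idx], y[idx])
    votes[b] = clf.predict(X_pool)

p = votes.mean(axis=0)
disagreement = p * (1 - p)                       # variance of the votes
ask_humans = np.argsort(-disagreement)[:50]      # most uncertain items
print("items sent to the crowd:", ask_humans[:10])
```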
Submodularity in Batch Active Learning and Survey Problems on Gaussian
Random Fields | cs.LG cs.AI cs.DS | Many real-world datasets can be represented in the form of a graph whose edge
weights designate similarities between instances. A discrete Gaussian random
field (GRF) model is a finite-dimensional Gaussian process (GP) whose prior
covariance is the inverse of a graph Laplacian. Minimizing the trace of the
predictive covariance $\Sigma$ (V-optimality) on GRFs has proven successful in
batch active learning classification problems with budget constraints. However,
its worst-case bound has been missing. We show that the V-optimality on GRFs as
a function of the batch query set is submodular and hence its greedy selection
algorithm guarantees an (1-1/e) approximation ratio. Moreover, GRF models have
the absence-of-suppressor (AofS) condition. For active survey problems, we
propose a similar survey criterion which minimizes $\mathbf{1}^\top \Sigma \mathbf{1}$. In practice,
V-optimality criterion performs better than GPs with mutual information gain
criteria and allows nonuniform costs for different nodes.
| Yifei Ma, Roman Garnett, Jeff Schneider | null | 1209.3694 | null | null |
Generalized Canonical Correlation Analysis for Disparate Data Fusion | stat.ML cs.LG | Manifold matching works to identify embeddings of multiple disparate data
spaces into the same low-dimensional space, where joint inference can be
pursued. It is an enabling methodology for fusion and inference from multiple
and massive disparate data sources. In this paper we focus on a method called
Canonical Correlation Analysis (CCA) and its generalization Generalized
Canonical Correlation Analysis (GCCA), which belong to the more general Reduced
Rank Regression (RRR) framework. We present an efficiency investigation of CCA
and GCCA under different training conditions for a particular text document
classification task.
| Ming Sun, Carey E. Priebe, Minh Tang | null | 1209.3761 | null | null |
Evolution and the structure of learning agents | cs.AI cs.LG | This paper presents the thesis that all learning agents of finite information
size are limited by their informational structure in what goals they can
efficiently learn to achieve in a complex environment. Evolutionary change is
critical for creating the required structure for all learning agents in any
complex environment. The thesis implies that there is no efficient universal
learning algorithm. An agent can go past the learning limits imposed by its
structure only through slow evolutionary change or blind search, which in a very
complex environment can yield only an inefficient universal learning
capability that works on evolutionary timescales or by improbable luck.
| Alok Raj | null | 1209.3818 | null | null |
Transferring Subspaces Between Subjects in Brain-Computer Interfacing | stat.ML cs.HC cs.LG | Compensating for changes between a subject's training and testing sessions in
Brain Computer Interfacing (BCI) is challenging but of great importance for a
robust BCI operation. We show that such changes are very similar between
subjects, thus can be reliably estimated using data from other users and
utilized to construct an invariant feature space. This novel approach to
learning from other subjects aims to reduce the adverse effects of common
non-stationarities, but does not transfer discriminative information. This is
an important conceptual difference to standard multi-subject methods that e.g.
improve the covariance matrix estimation by shrinking it towards the average of
other users or construct a global feature space. These methods do not reduce
the shift between training and test data and may produce poor results when
subjects have very different signal characteristics. In this paper we compare
our approach to two state-of-the-art multi-subject methods on toy data and two
data sets of EEG recordings from subjects performing motor imagery. We show
that it can not only achieve a significant increase in performance, but also
that the extracted change patterns allow for a neurophysiologically meaningful
interpretation.
| Wojciech Samek, Frank C. Meinecke, Klaus-Robert M\"uller | 10.1109/TBME.2013.2253608 | 1209.4115 | null | null |
Communication-Efficient Algorithms for Statistical Optimization | stat.ML cs.LG stat.CO | We analyze two communication-efficient algorithms for distributed statistical
optimization on large-scale data sets. The first algorithm is a standard
averaging method that distributes the $N$ data samples evenly to $m$
machines, performs separate minimization on each subset, and then averages the
estimates. We provide a sharp analysis of this average mixture algorithm,
showing that under a reasonable set of conditions, the combined parameter
achieves mean-squared error that decays as $O(N^{-1}+(N/m)^{-2})$.
Whenever $m \le \sqrt{N}$, this guarantee matches the best possible rate
achievable by a centralized algorithm having access to all $N$
samples. The second algorithm is a novel method, based on an appropriate form
of bootstrap subsampling. Requiring only a single round of communication, it
has mean-squared error that decays as $O(N^{-1} + (N/m)^{-3})$, and so is
more robust to the amount of parallelization. In addition, we show that a
stochastic gradient-based method attains mean-squared error decaying as
$O(N^{-1} + (N/ m)^{-3/2})$, easing computation at the expense of penalties in
the rate of convergence. We also provide experimental evaluation of our
methods, investigating their performance both on simulated data and on a
large-scale regression problem from the internet search domain. In particular,
we show that our methods can be used to efficiently solve an advertisement
prediction problem from the Chinese SoSo Search Engine, which involves logistic
regression with $N \approx 2.4 \times 10^8$ samples and $d \approx 740,000$
covariates.
| Yuchen Zhang and John C. Duchi and Martin Wainwright | null | 1209.4129 | null | null |
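The first (average mixture) algorithm is simple enough to sketch directly; the bias-corrected bootstrap variant is not shown. Synthetic least-squares data stand in for the real workloads.

```python
# Average mixture sketch: split the N samples over m machines, solve a
# least-squares problem on each, and average the m local estimates.
import numpy as np

rng = np.random.default_rng(6)
N, m, d = 100000, 10, 5
theta = rng.normal(size=d)
X = rng.normal(size=(N, d))
y = X @ theta + rng.normal(size=N)

estimates = []
for Xi, yi in zip(np.array_split(X, m), np.array_split(y, m)):
    # each "machine" solves its local minimization independently
    estimates.append(np.linalg.lstsq(Xi, yi, rcond=None)[0])
theta_avg = np.mean(estimates, axis=0)
print("error of averaged estimate:", np.linalg.norm(theta_avg - theta))
```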
Efficient Regularized Least-Squares Algorithms for Conditional Ranking
on Relational Data | cs.LG stat.ML | In domains like bioinformatics, information retrieval and social network
analysis, one can find learning tasks where the goal consists of inferring a
ranking of objects, conditioned on a particular target object. We present a
general kernel framework for learning conditional rankings from various types
of relational data, where rankings can be conditioned on unseen data objects.
We propose efficient algorithms for conditional ranking by optimizing squared
regression and ranking loss functions. We show theoretically that learning
with the ranking loss is likely to generalize better than with the regression
loss. Further, we prove that symmetry or reciprocity properties of relations
can be efficiently enforced in the learned models. Experiments on synthetic and
real-world data illustrate that the proposed methods deliver state-of-the-art
performance in terms of predictive power and computational efficiency.
Moreover, we also show empirically that incorporating symmetry or reciprocity
properties can improve the generalization performance.
| Tapio Pahikkala, Antti Airola, Michiel Stock, Bernard De Baets, Willem
Waegeman | null | 1209.4825 | null | null |
On the Sensitivity of Shape Fitting Problems | cs.CG cs.LG | In this article, we study shape fitting problems, $\epsilon$-coresets, and
total sensitivity. We focus on the $(j,k)$-projective clustering problems,
including $k$-median/$k$-means, $k$-line clustering, $j$-subspace
approximation, and the integer $(j,k)$-projective clustering problem. We derive
upper bounds of total sensitivities for these problems, and obtain
$\epsilon$-coresets using these upper bounds. Using a dimension-reduction type
argument, we are able to greatly simplify earlier results on total sensitivity
for the $k$-median/$k$-means clustering problems, and obtain
positively-weighted $\epsilon$-coresets for several variants of the
$(j,k)$-projective clustering problem. We also extend an earlier result on
$\epsilon$-coresets for the integer $(j,k)$-projective clustering problem in
fixed dimension to the case of high dimension.
| Kasturi Varadarajan and Xin Xiao | null | 1209.4893 | null | null |
An efficient model-free estimation of multiclass conditional probability | stat.ML cs.LG stat.ME | Conventional multiclass conditional probability estimation methods, such as
Fisher's discriminant analysis and logistic regression, often require
restrictive distributional model assumptions. In this paper, a model-free
estimation method is proposed to estimate multiclass conditional probability
through a series of conditional quantile regression functions. Specifically,
the conditional class probability is formulated as the difference of corresponding
cumulative distribution functions, where the cumulative distribution functions
can be converted from the estimated conditional quantile regression functions.
The proposed estimation method is also efficient as its computation cost does
not increase exponentially with the number of classes. The theoretical and
numerical studies demonstrate that the proposed estimation method is highly
competitive against the existing competitors, especially when the number of
classes is relatively large.
| Tu Xu and Junhui Wang | null | 1209.4951 | null | null |
A Bayesian Nonparametric Approach to Image Super-resolution | cs.LG stat.ML | Super-resolution methods form high-resolution images from low-resolution
images. In this paper, we develop a new Bayesian nonparametric model for
super-resolution. Our method uses a beta-Bernoulli process to learn a set of
recurring visual patterns, called dictionary elements, from the data. Because
it is nonparametric, the number of elements found is also determined from the
data. We test the results on both benchmark and natural images, comparing with
several other models from the research literature. We perform large-scale human
evaluation experiments to assess the visual quality of the results. In a first
implementation, we use Gibbs sampling to approximate the posterior. However,
this algorithm is not feasible for large-scale data. To circumvent this, we
then develop an online variational Bayes (VB) algorithm. This algorithm finds
high quality dictionaries in a fraction of the time needed by the Gibbs
sampler.
| Gungor Polatkan and Mingyuan Zhou and Lawrence Carin and David Blei
and Ingrid Daubechies | null | 1209.5019 | null | null |
Fast Randomized Model Generation for Shapelet-Based Time Series
Classification | cs.LG | Time series classification is a field which has drawn much attention over the
past decade. A new approach for classification of time series uses
classification trees based on shapelets. A shapelet is a subsequence extracted
from one of the time series in the dataset. A disadvantage of this approach is
the time required for building the shapelet-based classification tree. The
search for the best shapelet requires examining all subsequences of all lengths
from all time series in the training set.
A key goal of this work was to find an evaluation order of the shapelets
space which enables fast convergence to an accurate model. The comparative
analysis we conducted clearly indicates that a random evaluation order yields
the best results. Our empirical analysis of the distribution of high-quality
shapelets within the shapelets space provides insights into why randomized
shapelets sampling is superior to alternative evaluation orders.
We present an algorithm for randomized model generation for shapelet-based
classification that converges extremely quickly to a model with surprisingly
high accuracy after evaluating only an exceedingly small fraction of the
shapelets space.
| Daniel Gordon, Danny Hendler, Lior Rokach | null | 1209.5038 | null | null |
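A rough sketch of randomized shapelet search on toy data, assuming a simple threshold-accuracy score in place of the information gain used in the paper.

```python
# Randomized shapelet search sketch: sample random (series, start,
# length) candidates, score how well the distance to each candidate
# separates the two classes, and keep the best seen so far.
import numpy as np

rng = np.random.default_rng(8)
n, L = 40, 100
labels = rng.integers(0, 2, size=n)
series = rng.normal(size=(n, L))
series[labels == 1, 30:40] += 2.0            # class-1 series share a bump

def dist_to_shapelet(s, shp):
    m = len(shp)
    return min(np.linalg.norm(s[i:i + m] - shp) for i in range(L - m + 1))

best_score, best = -1.0, None
for _ in range(200):                         # random evaluation order
    i = rng.integers(n)
    m = rng.integers(10, 30)
    start = rng.integers(L - m)
    shp = series[i, start:start + m]
    d = np.array([dist_to_shapelet(s, shp) for s in series])
    thr = np.median(d)
    # threshold-accuracy score (a stand-in for information gain)
    score = max(np.mean((d < thr) == labels), np.mean((d >= thr) == labels))
    if score > best_score:
        best_score, best = score, (int(i), int(start), int(m))
print("best candidate (series, start, length):", best, "score:", round(best_score, 3))
```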
On Move Pattern Trends in a Large Go Games Corpus | cs.AI cs.LG | We process a large corpus of game records of the board game of Go and propose
a way of extracting summary information on played moves. We then apply several
basic data-mining methods on the summary information to identify the most
differentiating features within the summary information, and discuss their
correspondence with traditional Go knowledge. We show statistically significant
mappings of the features to player attributes such as playing strength or
informally perceived "playing style" (e.g. territoriality or aggressivity),
describe accurate classifiers for these attributes, and propose applications
including seeding real-world ranks of internet players, aiding in Go study and
tuning of Go-playing programs, or contribution to Go-theoretical discussion on
the scope of "playing style".
| Petr Baudi\v{s}, Josef Moud\v{r}\'ik | null | 1209.5251 | null | null |
Towards Ultrahigh Dimensional Feature Selection for Big Data | cs.LG | In this paper, we present a new adaptive feature scaling scheme for
ultrahigh-dimensional feature selection on Big Data. To solve this problem
effectively, we first reformulate it as a convex semi-infinite programming
(SIP) problem and then propose an efficient \emph{feature generating paradigm}.
In contrast with traditional gradient-based approaches that conduct
optimization on all input features, the proposed method iteratively activates a
group of features and solves a sequence of multiple kernel learning (MKL)
subproblems of much reduced scale. To further speed up the training, we propose
to solve the MKL subproblems in their primal forms through a modified
accelerated proximal gradient approach. Due to such an optimization scheme,
some efficient cache techniques are also developed. The feature generating
paradigm can guarantee that the solution converges globally under mild
conditions and achieve lower feature selection bias. Moreover, the proposed
method can tackle two challenging tasks in feature selection: 1) group-based
feature selection with complex structures and 2) nonlinear feature selection
with explicit feature mappings. Comprehensive experiments on a wide range of
synthetic and real-world datasets containing tens of millions of data points with
$O(10^{14})$ features demonstrate the competitive performance of the proposed
method over state-of-the-art feature selection methods in terms of
generalization performance and training efficiency.
| Mingkui Tan and Ivor W. Tsang and Li Wang | null | 1209.5260 | null | null |
BPRS: Belief Propagation Based Iterative Recommender System | cs.LG | In this paper we introduce the first application of the Belief Propagation
(BP) algorithm in the design of recommender systems. We formulate the
recommendation problem as an inference problem and aim to compute the marginal
probability distributions of the variables which represent the ratings to be
predicted. However, computing these marginal probability functions is
computationally prohibitive for large-scale systems. Therefore, we utilize the
BP algorithm to efficiently compute these functions. Recommendations for each
active user are then iteratively computed by probabilistic message passing. As
opposed to the previous recommender algorithms, BPRS does not require solving
the recommendation problem for all the users if it wishes to update the
recommendations for only a single active user. Further, BPRS computes the
recommendations for each user with linear complexity and without requiring a
training period. Via computer simulations (using the 100K MovieLens dataset),
we verify that BPRS iteratively reduces the error in the predicted ratings of
the users until it converges. Finally, we confirm that BPRS is comparable to
state-of-the-art methods such as the correlation-based neighborhood model (CorNgbr)
and Singular Value Decomposition (SVD) in terms of rating and precision
accuracy. Therefore, we believe that the BP-based recommendation algorithm is a
new promising approach which offers a significant advantage on scalability
while providing competitive accuracy for the recommender systems.
| Erman Ayday, Arash Einolghozati, Faramarz Fekri | null | 1209.5335 | null | null |
Learning Topic Models and Latent Bayesian Networks Under Expansion
Constraints | stat.ML cs.LG stat.AP | Unsupervised estimation of latent variable models is a fundamental problem
central to numerous applications of machine learning and statistics. This work
presents a principled approach for estimating broad classes of such models,
including probabilistic topic models and latent linear Bayesian networks, using
only second-order observed moments. The sufficient conditions for
identifiability of these models are primarily based on weak expansion
constraints on the topic-word matrix, for topic models, and on the directed
acyclic graph, for Bayesian networks. Because no assumptions are made on the
distribution among the latent variables, the approach can handle arbitrary
correlations among the topics or latent factors. In addition, a tractable
learning method via $\ell_1$ optimization is proposed and studied in numerical
experiments.
| Animashree Anandkumar, Daniel Hsu, Adel Javanmard, Sham M. Kakade | null | 1209.5350 | null | null |
Minimizing inter-subject variability in fNIRS based Brain Computer
Interfaces via multiple-kernel support vector learning | stat.ML cs.LG | Brain signal variability in the measurements obtained from different subjects
during different sessions significantly deteriorates the accuracy of most
brain-computer interface (BCI) systems. Moreover these variabilities, also
known as inter-subject or inter-session variabilities, require lengthy
calibration sessions before the BCI system can be used. Furthermore, the
calibration session has to be repeated for each subject independently and
before use of the BCI due to the inter-session variability. In this study, we
present an algorithm in order to minimize the above-mentioned variabilities and
to avoid the time-consuming and usually error-prone calibration sessions. Our
algorithm is based on linear programming support-vector machines and their
extensions to a multiple kernel learning framework. We tackle the inter-subject
or -session variability in the feature spaces of the classifiers. This is done
by incorporating each subject- or session-specific feature spaces into much
richer feature spaces with a set of optimal decision boundaries. Each decision
boundary represents the subject- or a session specific spatio-temporal
variabilities of neural signals. Consequently, a single classifier with
multiple feature spaces will generalize well to new unseen test patterns even
without the calibration steps. We demonstrate that classifiers maintain good
performances even under the presence of a large degree of BCI variability. The
present study analyzes BCI variability related to oxy-hemoglobin neural signals
measured using a functional near-infrared spectroscopy.
| Berdakh Abibullaev, Jinung An, Seung-Hyun Lee, Sang-Hyeon Jin, Jeon-Il
Moon | null | 1209.5467 | null | null |
Optimal Weighting of Multi-View Data with Low Dimensional Hidden States | stat.ML cs.LG | In Natural Language Processing (NLP) tasks, data often has the following two
properties: First, data can be chopped into multiple views, which has been
successfully exploited for dimension reduction purposes. For example, in topic
classification, every paper can be chopped into the title, the main text and
the references. However, it is common that some of the views are less noisy
than others for supervised learning problems. Second, unlabeled data are
easy to obtain while labeled data are relatively rare. For example, articles
that appeared in the New York Times over the past 10 years are easy to grab,
but having them classified as 'Politics', 'Finance' or 'Sports' requires human
labor. Hence, less noisy features are preferred before running supervised
learning methods. In
this paper we propose an unsupervised algorithm which optimally weights
features from different views when these views are generated from a low
dimensional hidden state, which occurs in widely used models like Mixture
Gaussian Model, Hidden Markov Model (HMM) and Latent Dirichlet Allocation
(LDA).
| Yichao Lu and Dean P. Foster | null | 1209.5477 | null | null |
Towards a learning-theoretic analysis of spike-timing dependent
plasticity | q-bio.NC cs.LG stat.ML | This paper suggests a learning-theoretic perspective on how synaptic
plasticity benefits global brain functioning. We introduce a model, the
selectron, that (i) arises as the fast time constant limit of leaky
integrate-and-fire neurons equipped with spiking timing dependent plasticity
(STDP) and (ii) is amenable to theoretical analysis. We show that the selectron
encodes reward estimates into spikes and that an error bound on spikes is
controlled by a spiking margin and the sum of synaptic weights. Moreover, the
efficacy of spikes (their usefulness to other reward maximizing selectrons)
also depends on total synaptic strength. Finally, based on our analysis, we
propose a regularized version of STDP, and show the regularization improves the
robustness of neuronal learning when faced with multiple stimuli.
| David Balduzzi and Michel Besserve | null | 1209.5549 | null | null |
Supervised Blockmodelling | cs.LG cs.SI stat.ML | Collective classification models attempt to improve classification
performance by taking into account the class labels of related instances.
However, they tend not to learn patterns of interactions between classes and/or
make the assumption that instances of the same class link to each other
(assortativity assumption). Blockmodels provide a solution to these issues,
being capable of modelling assortative and disassortative interactions, and
learning the pattern of interactions in the form of a summary network. The
Supervised Blockmodel provides good classification performance using link
structure alone, whilst simultaneously providing an interpretable summary of
network interactions to allow a better understanding of the data. This work
explores three variants of supervised blockmodels of varying complexity and
tests them on four structurally different real world networks.
| Leto Peel | null | 1209.5561 | null | null |
Feature selection with test cost constraint | cs.AI cs.LG | Feature selection is an important preprocessing step in machine learning and
data mining. In real-world applications, costs, including money, time and other
resources, are required to acquire the features. In some cases, there is a test
cost constraint due to limited resources. We shall deliberately select an
informative and cheap feature subset for classification. This paper proposes
the feature selection with test cost constraint problem for this issue. The new
problem has a simple form while described as a constraint satisfaction problem
(CSP). Backtracking is a general algorithm for CSP, and it is efficient in
solving the new problem on medium-sized data. As the backtracking algorithm is
not scalable to large datasets, a heuristic algorithm is also developed.
Experimental results show that the heuristic algorithm can find the optimal
solution in most cases. We also redefine some existing feature selection
problems in rough sets, especially in decision-theoretic rough sets, from the
viewpoint of CSP. These new definitions provide insight to some new research
directions.
| Fan Min, Qinghua Hu, William Zhu | 10.1016/j.ijar.2013.04.003 | 1209.5601 | null | null |
Locality-Sensitive Hashing with Margin Based Feature Selection | cs.LG cs.IR | We propose a learning method with feature selection for Locality-Sensitive
Hashing. Locality-Sensitive Hashing converts feature vectors into bit arrays.
These bit arrays can be used to perform similarity searches and personal
authentication. The proposed method uses bit arrays longer than those used in
the end for similarity and other searches and by learning selects the bits that
will be used. We demonstrated this method can effectively perform optimization
for cases such as fingerprint images with a large number of labels and
extremely few data that share the same labels, as well as verifying that it is
also effective for natural images, handwritten digits, and speech features.
| Makiko Konoshima and Yui Noma | null | 1209.5833 | null | null |
Subset Selection for Gaussian Markov Random Fields | cs.LG stat.ML | Given a Gaussian Markov random field, we consider the problem of selecting a
subset of variables to observe which minimizes the total expected squared
prediction error of the unobserved variables. We first show that finding an
exact solution is NP-hard even for a restricted class of Gaussian Markov random
fields, called Gaussian free fields, which arise in semi-supervised learning
and computer vision. We then give a simple greedy approximation algorithm for
Gaussian free fields on arbitrary graphs. Finally, we give a message passing
algorithm for general Gaussian Markov random fields on bounded tree-width
graphs.
| Satyaki Mahalanabis, Daniel Stefankovic | null | 1209.5991 | null | null |
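A minimal sketch of the greedy approximation on a toy Gaussian Markov random field: at each step, add the node whose observation most reduces the total posterior variance of the unobserved nodes. This shows the greedy idea only, not the paper's message-passing algorithm.

```python
# Greedy subset selection on a toy GMRF: pick k nodes to observe so as
# to (approximately) minimize the total conditional variance of the rest.
import numpy as np

rng = np.random.default_rng(7)
n, k = 8, 3
A = rng.normal(size=(n, n))
precision = A @ A.T + n * np.eye(n)     # a valid GMRF precision matrix
Sigma = np.linalg.inv(precision)

def posterior_trace(S):
    """Total conditional variance of unobserved nodes given observed set S."""
    U = [i for i in range(n) if i not in S]
    if not S:
        return np.trace(Sigma[np.ix_(U, U)])
    cond = (Sigma[np.ix_(U, U)]
            - Sigma[np.ix_(U, S)] @ np.linalg.solve(Sigma[np.ix_(S, S)],
                                                    Sigma[np.ix_(S, U)]))
    return np.trace(cond)

selected = []
for _ in range(k):
    best = min((i for i in range(n) if i not in selected),
               key=lambda i: posterior_trace(selected + [i]))
    selected.append(best)
print("greedy subset:", selected)
```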
Bayesian Mixture Models for Frequent Itemset Discovery | cs.LG cs.IR stat.ML | In binary-transaction data-mining, traditional frequent itemset mining often
produces results which are not straightforward to interpret. To overcome this
problem, probability models are often used to produce more compact and
conclusive results, albeit with some loss of accuracy. Bayesian statistics have
been widely used in the development of probability models in machine learning
in recent years and these methods have many advantages, including their
abilities to avoid overfitting. In this paper, we develop two Bayesian mixture
models with the Dirichlet distribution prior and the Dirichlet process (DP)
prior to improve the previous non-Bayesian mixture model developed for
transaction dataset mining. We implement the inference of both mixture models
using two methods: a collapsed Gibbs sampling scheme and a variational
approximation algorithm. Experiments in several benchmark problems have shown
that both mixture models achieve better performance than a non-Bayesian mixture
model. The variational algorithm is the faster of the two approaches while the
Gibbs sampling method achieves more accurate results. The Dirichlet process
mixture model can automatically grow to a proper complexity for a better
approximation. Once the model is built, it can be very fast to query and run
analysis on (typically 10 times faster than Eclat, as we will show in the
experiment section). However, these approaches also show that mixture models
underestimate the probabilities of frequent itemsets. Consequently, these
models have a higher sensitivity but a lower specificity.
| Ruefei He and Jonathan Shapiro | null | 1209.6001 | null | null |
The Issue-Adjusted Ideal Point Model | stat.ML cs.LG stat.AP | We develop a model of issue-specific voting behavior. This model can be used
to explore lawmakers' personal patterns of voting by issue area,
providing an exploratory window into how the language of the law is correlated
with political support. We derive approximate posterior inference algorithms
based on variational methods. Across 12 years of legislative data, we
demonstrate both improvement in heldout prediction performance and the model's
utility in interpreting an inherently multi-dimensional space.
| Sean M. Gerrish and David M. Blei | null | 1209.6004 | null | null |
Movie Popularity Classification based on Inherent Movie Attributes using
C4.5,PART and Correlation Coefficient | cs.LG cs.DB cs.IR | Abundance of movie data across the internet makes it an obvious candidate for
machine learning and knowledge discovery. But most researches are directed
towards bi-polar classification of movie or generation of a movie
recommendation system based on reviews given by viewers on various internet
sites. Classification of movie popularity based solely on attributes of a
movie, i.e., actor, actress, director rating, language, country, budget, etc.,
has been less highlighted due to the large number of attributes associated
with each movie and their differences in dimensions. In this paper, we propose
a classification scheme for pre-release movie popularity based on inherent
attributes using the C4.5 and PART classifier algorithms, and define the
relation between attributes of post-release movies using the correlation
coefficient.
| Khalid Ibnal Asad, Tanvir Ahmed, Md. Saiedur Rahman | null | 1209.6070 | null | null |
More Is Better: Large Scale Partially-supervised Sentiment
Classification - Appendix | cs.LG | We describe a bootstrapping algorithm to learn from partially labeled data,
and the results of an empirical study for using it to improve performance of
sentiment classification using up to 15 million unlabeled Amazon product
reviews. Our experiments cover semi-supervised learning, domain adaptation and
weakly supervised learning. In some cases our methods were able to reduce test
error by more than half using such large amount of data.
NOTICE: This is only the supplementary material.
| Yoav Haimovitch, Koby Crammer, Shie Mannor | null | 1209.6329 | null | null |
Sparse Ising Models with Covariates | stat.ML cs.LG | There has been a lot of work fitting Ising models to multivariate binary data
in order to understand the conditional dependency relationships between the
variables. However, additional covariates are frequently recorded together with
the binary data, and may influence the dependence relationships. Motivated by
such a dataset on genomic instability collected from tumor samples of several
types, we propose a sparse covariate dependent Ising model to study both the
conditional dependency within the binary data and its relationship with the
additional covariates. This results in subject-specific Ising models, where the
subject's covariates influence the strength of association between the genes.
As in all exploratory data analysis, interpretability of results is important,
and we use L1 penalties to induce sparsity in the fitted graphs and in the
number of selected covariates. Two algorithms to fit the model are proposed and
compared on a set of simulated data, and asymptotic results are established.
The results on the tumor dataset and their biological significance are
discussed in detail.
| Jie Cheng, Elizaveta Levina, Pei Wang and Ji Zhu | null | 1209.6342 | null | null |
Learning Robust Low-Rank Representations | cs.LG math.OC | In this paper we present a comprehensive framework for learning robust
low-rank representations by combining and extending recent ideas for learning
fast sparse coding regressors with structured non-convex optimization
techniques. This approach connects robust principal component analysis (RPCA)
with dictionary learning techniques and allows its approximation via trainable
encoders. We propose an efficient feed-forward architecture derived from an
optimization algorithm designed to exactly solve robust low dimensional
projections. This architecture, in combination with different training
objective functions, allows the regressors to be used as online approximants of
the exact offline RPCA problem or as RPCA-based neural networks. Simple
modifications of these encoders can handle challenging extensions, such as the
inclusion of geometric data transformations. We present several examples with
real data from image, audio, and video processing. When used to approximate
RPCA, our basic implementation shows several orders of magnitude speedup
compared to the exact solvers with almost no performance degradation. We show
the strength of the inclusion of learning to the RPCA approach on a music
source separation application, where the encoders outperform the exact RPCA
algorithms, which are already reported to produce state-of-the-art results on a
benchmark database. Our preliminary implementation on an iPad shows
faster-than-real-time performance with minimal latency.
| Pablo Sprechmann, Alex M. Bronstein, Guillermo Sapiro | null | 1209.6393 | null | null |
A Deterministic Analysis of an Online Convex Mixture of Expert
Algorithms | cs.LG | We analyze an online learning algorithm that adaptively combines outputs of
two constituent algorithms (or the experts) running in parallel to model an
unknown desired signal. This online learning algorithm is shown to achieve (and
in some cases outperform) the mean-square error (MSE) performance of the best
constituent algorithm in the mixture in the steady-state. However, the MSE
analysis of this algorithm in the literature uses approximations and relies on
statistical models on the underlying signals and systems. Hence, such an
analysis may not be useful or valid for signals generated by various real life
systems that show high degrees of nonstationarity, limit cycles and, in many
cases, that are even chaotic. In this paper, we produce results in an
individual sequence manner. In particular, we relate the time-accumulated
squared estimation error of this online algorithm at any time over any interval
to the time-accumulated squared estimation error of the optimal convex mixture
of the constituent algorithms directly tuned to the underlying signal in a
deterministic sense without any statistical assumptions. In this sense, our
analysis provides the transient, steady-state and tracking behavior of this
algorithm in a strong sense without any approximations in the derivations or
statistical assumptions on the underlying signals such that our results are
guaranteed to hold. We illustrate the introduced results through examples.
| Mehmet A. Donmez, Sait Tunc, Suleyman S. Kozat | null | 1209.6409 | null | null |
Partial Gaussian Graphical Model Estimation | cs.LG cs.IT math.IT stat.ML | This paper studies the partial estimation of Gaussian graphical models from
high-dimensional empirical observations. We derive a convex formulation for
this problem using $\ell_1$-regularized maximum-likelihood estimation, which
can be solved via a block coordinate descent algorithm. Statistical estimation
performance can be established for our method. The proposed approach has
competitive empirical performance compared to existing methods, as demonstrated
by various experiments on synthetic and real datasets.
| Xiao-Tong Yuan and Tong Zhang | null | 1209.6419 | null | null |
Gene selection with guided regularized random forest | cs.LG cs.CE | The regularized random forest (RRF) was recently proposed for feature
selection by building only one ensemble. In RRF the features are evaluated on a
part of the training data at each tree node. We derive an upper bound for the
number of distinct Gini information gain values in a node, and show that many
features can share the same information gain at a node with a small number of
instances and a large number of features. Therefore, in a node with a small
number of instances, RRF is likely to select a feature not strongly relevant.
Here an enhanced RRF, referred to as the guided RRF (GRRF), is proposed. In
GRRF, the importance scores from an ordinary random forest (RF) are used to
guide the feature selection process in RRF. Experiments on 10 gene data sets
show that the accuracy performance of GRRF is, in general, more robust than RRF
when their parameters change. GRRF is computationally efficient, can select
compact feature subsets, and has competitive accuracy performance, compared to
RRF, varSelRF and LASSO logistic regression (with evaluations from an RF
classifier). Also, RF applied to the features selected by RRF with the minimal
regularization outperforms RF applied to all the features for most of the data
sets considered here. Therefore, if accuracy is considered more important than
the size of the feature subset, RRF with the minimal regularization may be
considered. We use the accuracy performance of RF, a strong classifier, to
evaluate feature selection methods, and illustrate that weak classifiers are
less capable of capturing the information contained in a feature subset. Both
RRF and GRRF were implemented in the "RRF" R package available at CRAN, the
official R package archive.
| Houtao Deng and George Runger | null | 1209.6425 | null | null |
A Complete System for Candidate Polyps Detection in Virtual Colonoscopy | cs.CV cs.LG | Computer tomographic colonography, combined with computer-aided detection, is
a promising emerging technique for colonic polyp analysis. We present a
complete pipeline for polyp detection, starting with a simple colon
segmentation technique that enhances polyps, followed by an adaptive-scale
candidate polyp delineation and classification based on new texture and
geometric features that consider both the information in the candidate polyp
location and its immediate surrounding area. The proposed system is tested with
ground truth data, including flat and small polyps which are hard to detect
even with optical colonoscopy. For polyps larger than 6mm in size we achieve
100% sensitivity with just 0.9 false positives per case, and for polyps larger
than 3mm in size we achieve 93% sensitivity with 2.8 false positives per case.
| Marcelo Fiori, Pablo Mus\'e, Guillermo Sapiro | null | 1209.6525 | null | null |
Scoring and Searching over Bayesian Networks with Causal and Associative
Priors | cs.AI cs.LG stat.ML | A significant theoretical advantage of search-and-score methods for learning
Bayesian Networks is that they can accept informative prior beliefs for each
possible network, thus complementing the data. In this paper, a method is
presented for assigning priors based on beliefs on the presence or absence of
certain paths in the true network. Such beliefs correspond to knowledge about
the possible causal and associative relations between pairs of variables. This
type of knowledge naturally arises from prior experimental and observational
data, among others. In addition, a novel search operator is proposed to take
advantage of such prior knowledge. Experiments show that using path beliefs
improves the learning of the skeleton, as well as the edge directions in the
network.
| Giorgos Borboudakis and Ioannis Tsamardinos | null | 1209.6561 | null | null |
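A schematic sketch of how such path beliefs could enter a search-and-score loop, under the simplifying assumption (made here for illustration, not necessarily by the paper) that the beliefs are independent: each belief about a directed path contributes a log-probability term to the network prior. The names `log_prior` and `path_beliefs` are hypothetical.

```python
import math
import networkx as nx

def log_prior(dag, path_beliefs):
    """Log-prior from beliefs about directed paths.

    path_beliefs maps (u, v) to the believed probability, in (0, 1),
    that a directed path u ~> v exists in the true network. A path
    present in `dag` contributes log p; an absent one, log(1 - p).
    """
    lp = 0.0
    for (u, v), p in path_beliefs.items():
        lp += math.log(p) if nx.has_path(dag, u, v) else math.log(1.0 - p)
    return lp

# In search-and-score, the prior simply adds to the data score, e.g.:
#   score(dag) = bdeu_score(dag, data) + log_prior(dag, path_beliefs)
```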
Iterative Reweighted Minimization Methods for $l_p$ Regularized
Unconstrained Nonlinear Programming | math.OC cs.LG stat.CO stat.ML | In this paper we study general $l_p$ regularized unconstrained minimization
problems. In particular, we derive lower bounds for nonzero entries of first-
and second-order stationary points, and hence also of local minimizers of the
$l_p$ minimization problems. We extend some existing iterative reweighted $l_1$
(IRL1) and $l_2$ (IRL2) minimization methods to solve these problems and
propose new variants of them in which each subproblem has a closed-form
solution. Also, we provide a unified convergence analysis for these methods. In
addition, we propose a novel Lipschitz continuous $\epsilon$-approximation to
$\|x\|^p_p$. Using this result, we develop new IRL1 methods for the $l_p$
minimization problems and show that any accumulation point of the sequence
generated by these methods is a first-order stationary point, provided that the
approximation parameter $\epsilon$ is below a computable threshold value. This
is a remarkable result since all existing iterative reweighted minimization
methods require that $\epsilon$ be dynamically updated and approach zero. Our
computational results demonstrate that the new IRL1 method is generally more
stable than the existing IRL1 methods [21,18] in terms of objective function
value and CPU time.
| Zhaosong Lu | null | 1210.0066 | null | null |
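For concreteness, here is a generic IRL1 sketch for the least-squares instance $f(x) = \frac{1}{2}\|Ax-b\|^2$, with the approximation parameter eps held fixed in line with the paper's observation; the inner weighted-$l_1$ solver (plain ISTA) and all names are illustrative and do not reproduce the paper's specific closed-form variants.

```python
import numpy as np

def irl1_lp(A, b, lam=0.1, p=0.5, eps=1e-3, outer=50, inner=200):
    """IRL1 for min 0.5*||Ax - b||^2 + lam * sum_i (|x_i| + eps)^p.

    Each outer pass linearizes the concave penalty, giving per-entry
    l1 weights w_i = lam * p * (|x_i| + eps)^(p-1); the resulting
    weighted-l1 subproblem is solved by proximal gradient (ISTA).
    """
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad f
    for _ in range(outer):
        w = lam * p * (np.abs(x) + eps) ** (p - 1.0)   # reweighting
        for _ in range(inner):             # ISTA on the subproblem
            z = x - A.T @ (A @ x - b) / L
            x = np.sign(z) * np.maximum(np.abs(z) - w / L, 0.0)
    return x
```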
Optimistic Agents are Asymptotically Optimal | cs.AI cs.LG | We use optimism to introduce generic asymptotically optimal reinforcement
learning agents. For an arbitrary finite or compact class of environments,
they achieve asymptotically optimal behavior. Furthermore, in the finite
deterministic case we provide finite error bounds.
| Peter Sunehag and Marcus Hutter | null | 1210.0077 | null | null |
Memory Constraint Online Multitask Classification | cs.LG | We investigate online kernel algorithms which simultaneously process multiple
classification tasks while a fixed constraint is imposed on the size of their
active sets. We focus in particular on the design of algorithms that can
efficiently deal with problems where the number of tasks is extremely high and
the task data are large scale. Two new projection-based algorithms are
introduced to efficiently tackle those issues while presenting different
trade-offs in how the available memory is managed with respect to the prior
information about the learning tasks. Theoretically sound budget algorithms are
devised by coupling the Randomized Budget Perceptron and the Forgetron
algorithms with the multitask kernel. We show how the two seemingly contrasting
properties of learning from multiple tasks and keeping a constant memory
footprint can be balanced, and how the sharing of the available space among
different tasks is automatically taken care of. We present and discuss new
insights into the multitask kernel. Experiments show that online kernel multitask
algorithms running on a budget can efficiently tackle real world learning
problems involving multiple tasks.
| Giovanni Cavallanti, Nicol\`o Cesa-Bianchi | null | 1210.0473 | null | null |
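The budget mechanics can be sketched in a few lines of Python; the task-coupling factor below is a deliberately simple placeholder, not the paper's multitask kernel, and the eviction rule follows the Randomized Budget Perceptron (uniformly random removal on a mistake once the active set is full).

```python
import numpy as np

def mt_kernel(x1, t1, x2, t2, b=1.0):
    """Toy multitask kernel: linear kernel, scaled up when tasks match."""
    return (1.0 + (b if t1 == t2 else 0.0)) * float(x1 @ x2)

def budget_mt_perceptron(stream, budget=50, seed=0):
    """Kernel perceptron over (instance, task) pairs on a fixed budget.

    `stream` yields (x, task_id, y) with y in {-1, +1}. The active set
    never exceeds `budget`; a uniformly random support is evicted to
    make room for each new mistake after the budget is reached.
    """
    rng = np.random.default_rng(seed)
    active = []                                   # list of (x, task, y)
    for x, t, y in stream:
        score = sum(yi * mt_kernel(xi, ti, x, t) for xi, ti, yi in active)
        if y * score <= 0:                        # mistake-driven update
            if len(active) >= budget:
                active.pop(rng.integers(len(active)))
            active.append((x, t, y))
    return active
```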
Inference algorithms for pattern-based CRFs on sequence data | cs.LG cs.DS | We consider Conditional Random Fields (CRFs) with pattern-based potentials
defined on a chain. In this model the energy of a string (labeling) $x_1...x_n$
is the sum of terms over intervals $[i,j]$ where each term is non-zero only if
the substring $x_i...x_j$ equals a prespecified pattern $\alpha$. Such CRFs can
be naturally applied to many sequence tagging problems.
We present efficient algorithms for the three standard inference tasks in a
CRF, namely computing (i) the partition function, (ii) marginals, and (iii)
the MAP. Their complexities are respectively $O(n L)$, $O(n L
\ell_{max})$ and $O(n L \min\{|D|,\log (\ell_{max}+1)\})$ where $L$ is the
combined length of input patterns, $\ell_{max}$ is the maximum length of a
pattern, and $D$ is the input alphabet. This improves on the previous
algorithms of (Ye et al., 2009) whose complexities are respectively $O(n L
|D|)$, $O(n |\Gamma| L^2 \ell_{max}^2)$ and $O(n L |D|)$, where $|\Gamma|$ is
the number of input patterns.
In addition, we give an efficient algorithm for sampling. Finally, we
consider the case of non-positive weights. (Komodakis & Paragios, 2009) gave an
$O(n L)$ algorithm for computing the MAP. We present a modification that has
the same worst-case complexity but can beat it in the best case.
| Rustem Takhanov and Vladimir Kolmogorov | 10.1007/s00453-015-0017-7 | 1210.0508 | null | null |
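As a reference point for the model itself (not the paper's algorithms), here is a brute-force Python sketch of the partition function under the energy convention above; it enumerates all $|D|^n$ labelings and is usable only for tiny instances, which is exactly what the $O(nL)$-type algorithms avoid.

```python
from itertools import product
import math

def energy(x, patterns):
    """Energy of labeling x: summed weights of all matching patterns.
    `patterns` maps a label tuple alpha to its weight."""
    n = len(x)
    return sum(patterns.get(tuple(x[i:j + 1]), 0.0)
               for i in range(n) for j in range(i, n))

def partition_function(n, alphabet, patterns):
    """Z = sum over all labelings of exp(-energy); exponential in n."""
    return sum(math.exp(-energy(x, patterns))
               for x in product(alphabet, repeat=n))
```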
Sparse LMS via Online Linearized Bregman Iteration | cs.IT cs.LG math.IT stat.ML | We propose a version of the least-mean-square (LMS) algorithm for sparse system
identification. Our algorithm, called online linearized Bregman iteration (OLBI),
is derived from minimizing the cumulative prediction error squared along with
an l1-l2 norm regularizer. By systematically treating the non-differentiable
regularizer, we arrive at a simple two-step iteration. We demonstrate that OLBI
is bias-free and compare its operation with existing sparse LMS algorithms by
rederiving them in the online convex optimization framework. We perform
convergence analysis of OLBI for white input signals and derive theoretical
expressions for both the steady state and instantaneous mean square deviations
(MSD). We demonstrate numerically that OLBI improves the performance of LMS
type algorithms for signals generated from sparse tap weights.
| Tao Hu and Dmitri B. Chklovskii | null | 1210.0563 | null | null |
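A minimal Python sketch consistent with the linearized Bregman template the abstract describes (a gradient step on an auxiliary variable followed by soft-thresholding); the step size `mu`, threshold `lam`, and scaling `delta` are illustrative, and the paper's exact normalization may differ.

```python
import numpy as np

def olbi(samples, n, mu=0.01, lam=1.0, delta=1.0):
    """Online linearized Bregman iteration for sparse system identification.

    `samples` yields (u, d): input vector u and desired output d.
    Step 1 accumulates the LMS gradient in an auxiliary variable z;
    step 2 soft-thresholds z (the l1 term) and scales by delta (the
    l2 term) to produce the sparse weight estimate x.
    """
    z = np.zeros(n)
    x = np.zeros(n)
    for u, d in samples:
        e = d - x @ u                                   # prediction error
        z += mu * e * u                                 # step 1: gradient
        x = delta * np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)  # step 2
    return x
```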
Nonparametric Unsupervised Classification | cs.LG stat.ML | Unsupervised classification methods learn a discriminative classifier from
unlabeled data, which has been proven to be an effective way of simultaneously
clustering the data and training a classifier on it. Various
unsupervised classification methods obtain appealing results by the classifiers
learned in an unsupervised manner. However, with the exception of unsupervised
SVM, existing methods do not consider the misclassification error of the
unsupervised classifiers, so their performance is not fully evaluated.
In this work, we study the misclassification error of two popular classifiers,
i.e. the nearest neighbor classifier (NN) and the plug-in classifier, in the
setting of unsupervised classification.
| Yingzhen Yang, Thomas S. Huang | null | 1210.0645 | null | null |
Local stability and robustness of sparse dictionary learning in the
presence of noise | stat.ML cs.LG | A popular approach within the signal processing and machine learning
communities consists in modelling signals as sparse linear combinations of
atoms selected from a learned dictionary. While this paradigm has led to
numerous empirical successes in various fields ranging from image to audio
processing, there have only been a few theoretical arguments supporting this
evidence. In particular, sparse coding, or sparse dictionary learning, relies
on a non-convex procedure whose local minima have not been fully analyzed yet.
In this paper, we consider a probabilistic model of sparse signals, and show
that, with high probability, sparse coding admits a local minimum around the
reference dictionary generating the signals. Our study takes into account the
case of over-complete dictionaries and noisy signals, thus extending previous
work limited to noiseless settings and/or under-complete dictionaries. The
analysis we conduct is non-asymptotic and makes it possible to understand how
the key quantities of the problem, such as the coherence or the level of noise,
can scale with respect to the dimension of the signals, the number of atoms,
the sparsity and the number of observations.
| Rodolphe Jenatton (CMAP), R\'emi Gribonval (INRIA - IRISA), Francis
Bach (LIENS, INRIA Paris - Rocquencourt) | null | 1210.0685 | null | null |