categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (sequence)
---|---|---|---|---|---|---|---|---|---|---
cs.LG | null | 1305.1359 | null | null | http://arxiv.org/pdf/1305.1359v1 | 2013-05-07T00:02:51Z | 2013-05-07T00:02:51Z | A Differential Equations Approach to Optimizing Regret Trade-offs | We consider the classical question of predicting binary sequences and study
the {\em optimal} algorithms for obtaining the best possible regret and payoff
functions for this problem. The question turns out to be also equivalent to the
problem of optimal trade-offs between the regrets of two experts in an "experts
problem", studied before by \cite{kearns-regret}. While, say, a regret of
$\Theta(\sqrt{T})$ is known, we argue that it is important to ask what the
provably optimal algorithm for this problem is --- both because it leads to
natural algorithms and because regret is in fact often comparable in
magnitude to the final payoffs and hence is a non-negligible term.
In the basic setting, the result essentially follows from a classical result
of Cover from '65. Here instead, we focus on another standard setting, of
time-discounted payoffs, where the final "stopping time" is not specified. We
exhibit an explicit characterization of the optimal regret for this setting.
To obtain our main result, we show that the optimal payoff functions have to
satisfy the Hermite differential equation, and hence are given by the solutions
to this equation. It turns out that the characterization of the payoff function
is qualitatively different from the classical (non-discounted) setting: namely,
there is essentially a unique optimal solution.
| [
"['Alexandr Andoni' 'Rina Panigrahy']",
"Alexandr Andoni and Rina Panigrahy"
] |
cs.LG | null | 1305.1363 | null | null | http://arxiv.org/pdf/1305.1363v2 | 2013-05-16T13:24:37Z | 2013-05-07T00:30:32Z | One-Pass AUC Optimization | AUC is an important performance measure and many algorithms have been devoted
to AUC optimization, mostly by minimizing a surrogate convex loss on a training
data set. In this work, we focus on one-pass AUC optimization that requires
only going through the training data once without storing the entire training
dataset, where conventional online learning algorithms cannot be applied
directly because AUC is measured by a sum of losses defined over pairs of
instances from different classes. We develop a regression-based algorithm which
only needs to maintain the first and second order statistics of training data
in memory, resulting in a storage requirement independent of the size of the
training data. To efficiently handle high dimensional data, we develop a
randomized algorithm that approximates the covariance matrices by low rank
matrices. We verify, both theoretically and empirically, the effectiveness of
the proposed algorithm.
| [
"['Wei Gao' 'Rong Jin' 'Shenghuo Zhu' 'Zhi-Hua Zhou']",
"Wei Gao and Rong Jin and Shenghuo Zhu and Zhi-Hua Zhou"
] |
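
The one-pass design in the abstract above comes down to keeping running per-class moments while streaming over the data once. Below is a minimal sketch of that idea, assuming a Fisher-style least-squares direction computed from the stored statistics; the class name, regularizer, and closed-form solve are illustrative assumptions, not the paper's exact OPAUC update.

```python
import numpy as np

# Minimal sketch: keep only per-class first- and second-order statistics
# (O(d^2) memory) during a single pass, then derive a linear ranking
# direction from them. The solve below is a Fisher-style least-squares
# surrogate, not the paper's exact update rule.
class OnePassScorer:
    def __init__(self, dim, reg=1.0):
        self.n = {0: 0, 1: 0}
        self.mean = {0: np.zeros(dim), 1: np.zeros(dim)}
        self.scatter = {0: np.zeros((dim, dim)), 1: np.zeros((dim, dim))}
        self.reg = reg

    def update(self, x, y):
        self.n[y] += 1
        self.mean[y] += (x - self.mean[y]) / self.n[y]   # running mean
        self.scatter[y] += np.outer(x, x)                # running second moment

    def direction(self):
        dim = self.mean[0].size
        cov = sum(self.scatter[y] / max(self.n[y], 1)
                  - np.outer(self.mean[y], self.mean[y]) for y in (0, 1))
        return np.linalg.solve(cov + self.reg * np.eye(dim),
                               self.mean[1] - self.mean[0])

rng = np.random.default_rng(0)
model = OnePassScorer(dim=5)
for _ in range(2000):                      # single pass, no examples stored
    y = int(rng.integers(2))
    model.update(rng.normal(loc=0.5 * y, size=5), y)
w = model.direction()                      # score x @ w ranks positives higher
```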
cs.CV cs.LG stat.ML | 10.1016/j.patcog.2013.01.006 | 1305.1396 | null | null | http://arxiv.org/abs/1305.1396v2 | 2013-09-12T16:09:55Z | 2013-05-07T04:05:24Z | A new framework for optimal classifier design | The use of alternative measures to evaluate classifier performance is gaining
attention, especially for imbalanced problems. However, the use of these
measures in the classifier design process remains an open problem. In this work we
propose a classifier designed specifically to optimize one of these alternative
measures, namely, the so-called F-measure. Nevertheless, the technique is
general, and it can be used to optimize other evaluation measures. An algorithm
to train the novel classifier is proposed, and the numerical scheme is tested
with several databases, showing the optimality and robustness of the presented
classifier.
| [
"Mat\\'ias Di Martino, Guzman Hern\\'andez, Marcelo Fiori, Alicia\n Fern\\'andez",
"['Matías Di Martino' 'Guzman Hernández' 'Marcelo Fiori' 'Alicia Fernández']"
] |
cs.AI cs.LG | null | 1305.1679 | null | null | http://arxiv.org/pdf/1305.1679v1 | 2013-05-07T23:40:08Z | 2013-05-07T23:40:08Z | High Level Pattern Classification via Tourist Walks in Networks | Complex networks refer to large-scale graphs with nontrivial connection
patterns. The salient and interesting features that the study of complex
networks offers in comparison to graph theory are the emphasis on the dynamical
properties of the networks and the ability to inherently uncover pattern
formation among the vertices. In this paper, we present a hybrid data
classification technique combining a low level and a high level classifier. The
low level term can be equipped with any traditional classification techniques,
which realize the classification task considering only physical features (e.g.,
geometrical or statistical features) of the input data. On the other hand, the
high level term has the ability of detecting data patterns with semantic
meanings. In this way, the classification is realized by means of the
extraction of the underlying network's features constructed from the input
data. As a result, the high level classification process measures the
compliance of the test instances with the pattern formation of the training
data. Out of various high level perspectives that can be utilized to capture
semantic meaning, we utilize the dynamical features that are generated from a
tourist walker in a networked environment. Specifically, a weighted combination
of transient and cycle lengths generated by the tourist walk is employed for
that end. Interestingly, our study shows that the proposed technique is able to
further improve the already optimized performance of traditional classification
techniques.
| [
"Thiago Christiano Silva and Liang Zhao",
"['Thiago Christiano Silva' 'Liang Zhao']"
] |
cs.LG | null | 1305.1707 | null | null | http://arxiv.org/pdf/1305.1707v1 | 2013-05-08T03:39:17Z | 2013-05-08T03:39:17Z | Class Imbalance Problem in Data Mining Review | In the last few years, the classification of data has undergone major changes
and evolution. As the application areas of technology grow, so does the size of
the data, and classification becomes difficult because of the unbounded size
and imbalanced nature of the data. The class imbalance problem has become a
major issue in data mining. An imbalance problem arises when one of the two
classes has more samples than the other. Most algorithms focus on classifying
the majority samples while ignoring or misclassifying the minority samples. The
minority samples are those that occur rarely but are very important. The
methods available for classifying imbalanced data sets fall into three main
categories: the algorithmic approach, the data-preprocessing approach, and the
feature selection approach. Each of these techniques has its own advantages and
disadvantages. In this paper, a systematic study of each approach is presented,
which gives the right direction for research on the class imbalance problem.
| [
"['Rushi Longadge' 'Snehalata Dongre']",
"Rushi Longadge and Snehalata Dongre"
] |
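
To make the data-preprocessing category in the survey above concrete, here is a minimal sketch of its simplest representative, random oversampling of the minority class; the function name and toy data are illustrative only.

```python
import numpy as np

# Simplest member of the data-preprocessing family described above:
# duplicate random minority-class examples until the classes are balanced.
def random_oversample(X, y, rng=None):
    rng = np.random.default_rng(rng)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    Xs, ys = [X], [y]
    for c, cnt in zip(classes, counts):
        if cnt < target:
            idx = rng.choice(np.flatnonzero(y == c), size=target - cnt,
                             replace=True)
            Xs.append(X[idx])
            ys.append(y[idx])
    return np.concatenate(Xs), np.concatenate(ys)

X = np.arange(20).reshape(10, 2)
y = np.array([0] * 8 + [1] * 2)              # 8:2 imbalance
Xb, yb = random_oversample(X, y, rng=0)      # now 8:8
```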
stat.ML cs.LG | null | 1305.1809 | null | null | http://arxiv.org/pdf/1305.1809v2 | 2014-05-02T09:44:45Z | 2013-05-08T13:11:52Z | Cover Tree Bayesian Reinforcement Learning | This paper proposes an online tree-based Bayesian approach for reinforcement
learning. For inference, we employ a generalised context tree model. This
defines a distribution on multivariate Gaussian piecewise-linear models, which
can be updated in closed form. The tree structure itself is constructed using
the cover tree method, which remains efficient in high dimensional spaces. We
combine the model with Thompson sampling and approximate dynamic programming to
obtain effective exploration policies in unknown environments. The flexibility
and computational simplicity of the model render it suitable for many
reinforcement learning problems in continuous state spaces. We demonstrate this
in an experimental comparison with least squares policy iteration.
| [
"Nikolaos Tziortziotis and Christos Dimitrakakis and Konstantinos\n Blekas",
"['Nikolaos Tziortziotis' 'Christos Dimitrakakis' 'Konstantinos Blekas']"
] |
stat.ML cs.LG | null | 1305.1956 | null | null | http://arxiv.org/pdf/1305.1956v2 | 2013-05-10T01:05:09Z | 2013-05-08T20:44:55Z | Joint Topic Modeling and Factor Analysis of Textual Information and
Graded Response Data | Modern machine learning methods are critical to the development of
large-scale personalized learning systems that cater directly to the needs of
individual learners. The recently developed SPARse Factor Analysis (SPARFA)
framework provides a new statistical model and algorithms for machine
learning-based learning analytics, which estimate a learner's knowledge of the
latent concepts underlying a domain, and content analytics, which estimate the
relationships among a collection of questions and the latent concepts. SPARFA
estimates these quantities given only the binary-valued graded responses to a
collection of questions. In order to better interpret the estimated latent
concepts, SPARFA relies on a post-processing step that utilizes user-defined
tags (e.g., topics or keywords) available for each question. In this paper, we
relax the need for user-defined tags by extending SPARFA to jointly process
both graded learner responses and the text of each question and its associated
answer(s) or other feedback. Our purely data-driven approach (i) enhances the
interpretability of the estimated latent concepts without the need of
explicitly generating a set of tags or performing a post-processing step, (ii)
improves the prediction performance of SPARFA, and (iii) scales to large
test/assessments where human annotation would prove burdensome. We demonstrate
the efficacy of the proposed approach on two real educational datasets.
| [
"Andrew S. Lan, Christoph Studer, Andrew E. Waters and Richard G.\n Baraniuk",
"['Andrew S. Lan' 'Christoph Studer' 'Andrew E. Waters'\n 'Richard G. Baraniuk']"
] |
null | null | 1305.2218 | null | null | http://arxiv.org/pdf/1305.2218v1 | 2013-05-09T21:31:47Z | 2013-05-09T21:31:47Z | Stochastic gradient descent algorithms for strongly convex functions at
O(1/T) convergence rates | With a weighting scheme proportional to t, a traditional stochastic gradient descent (SGD) algorithm achieves a high-probability convergence rate of $O(\kappa/T)$ for strongly convex functions, instead of $O(\kappa \ln(T)/T)$. We also prove that an accelerated SGD algorithm achieves a rate of $O(\kappa/T)$. | [
"['Shenghuo Zhu']"
] |
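
The weighting scheme in the abstract above can be sketched as SGD with the classic $1/(\lambda t)$ step and a running average that weights iterate $t$ proportionally to $t$. This is a standard construction consistent with the abstract; the paper's accelerated variant is not reproduced here, and the toy objective is an assumption.

```python
import numpy as np

# Sketch of t-weighted SGD for a strongly convex objective
# f(w) = E[0.5 * ||w - x||^2] (so lambda = 1, optimum at E[x]).
rng = np.random.default_rng(0)
w = np.zeros(3)
w_bar, weight_sum = np.zeros(3), 0.0
lam = 1.0
for t in range(1, 10001):
    x = rng.normal(loc=1.0, size=3)            # stochastic sample
    grad = w - x                               # gradient of 0.5*||w - x||^2
    w -= grad / (lam * t)                      # classic 1/(lambda t) step
    weight_sum += t
    w_bar += (t / weight_sum) * (w - w_bar)    # running t-weighted average
print(w_bar)                                   # close to [1, 1, 1]
```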
stat.ML cs.LG | null | 1305.2238 | null | null | http://arxiv.org/pdf/1305.2238v2 | 2016-07-28T05:05:18Z | 2013-05-10T01:08:36Z | Calibrated Multivariate Regression with Application to Neural Semantic
Basis Discovery | We propose a calibrated multivariate regression method named CMR for fitting
high dimensional multivariate regression models. Compared with existing
methods, CMR calibrates regularization for each regression task with respect to
its noise level so that it simultaneously attains improved finite-sample
performance and tuning insensitiveness. Theoretically, we provide sufficient
conditions under which CMR achieves the optimal rate of convergence in
parameter estimation. Computationally, we propose an efficient smoothed
proximal gradient algorithm with a worst-case numerical rate of convergence
$\mathcal{O}(1/\epsilon)$, where $\epsilon$ is a pre-specified accuracy of the
objective function value. We conduct thorough numerical simulations to
illustrate that CMR consistently outperforms other high dimensional
multivariate regression methods. We also apply CMR to solve a brain activity
prediction problem and find that it is as competitive as a handcrafted model
created by human experts. The R package \texttt{camel} implementing the
proposed method is available on the Comprehensive R Archive Network
\url{http://cran.r-project.org/web/packages/camel/}.
| [
"Han Liu and Lie Wang and Tuo Zhao",
"['Han Liu' 'Lie Wang' 'Tuo Zhao']"
] |
cs.CV cs.LG stat.ML | null | 1305.2362 | null | null | http://arxiv.org/pdf/1305.2362v1 | 2013-05-10T15:09:11Z | 2013-05-10T15:09:11Z | Revisiting Bayesian Blind Deconvolution | Blind deconvolution involves the estimation of a sharp signal or image given
only a blurry observation. Because this problem is fundamentally ill-posed,
strong priors on both the sharp image and blur kernel are required to
regularize the solution space. While this naturally leads to a standard MAP
estimation framework, performance is compromised by unknown trade-off parameter
settings, optimization heuristics, and convergence issues stemming from
non-convexity and/or poor prior selections. To mitigate some of these problems,
a number of authors have recently proposed substituting a variational Bayesian
(VB) strategy that marginalizes over the high-dimensional image space leading
to better estimates of the blur kernel. However, the underlying cost function
now involves both integrals with no closed-form solution and complex,
function-valued arguments, thus losing the transparency of MAP. Beyond standard
Bayesian-inspired intuitions, it thus remains unclear by exactly what mechanism
these methods are able to operate, rendering understanding, improvements and
extensions more difficult. To elucidate these issues, we demonstrate that the
VB methodology can be recast as an unconventional MAP problem with a very
particular penalty/prior that couples the image, blur kernel, and noise level
in a principled way. This unique penalty has a number of useful characteristics
pertaining to relative concavity, local minima avoidance, and scale-invariance
that allow us to rigorously explain the success of VB including its existing
implementational heuristics and approximations. It also provides strict
criteria for choosing the optimal image prior that, perhaps
counter-intuitively, need not reflect the statistics of natural scenes. In so
doing we challenge the prevailing notion of why VB is successful for blind
deconvolution while providing a transparent platform for introducing
enhancements.
| [
"David Wipf and Haichao Zhang",
"['David Wipf' 'Haichao Zhang']"
] |
cs.CR cs.LG | null | 1305.2388 | null | null | http://arxiv.org/pdf/1305.2388v1 | 2013-04-01T05:27:47Z | 2013-04-01T05:27:47Z | Fast Feature Reduction in intrusion detection datasets | In most intrusion detection systems (IDS), the system tries to learn the
characteristics of different types of attacks by analyzing packets sent or
received over the network. These packets have many features, but not all of
them need to be analyzed to detect a specific type of attack. Detection speed
and computational cost are also vital concerns here, because datasets in this
type of problem are typically very large. In this paper we propose a very
simple and fast feature selection method that eliminates features carrying no
helpful information, resulting in faster learning through the omission of
redundant features. We compared our proposed method with three of the most
successful similarity-based feature selection algorithms: Correlation
Coefficient, Least Square Regression Error, and Maximal Information Compression
Index. We then used the features recommended by each of these algorithms in
two popular classifiers, Bayes and KNN, to measure the quality of the
recommendations. Experimental results show that although the proposed method
cannot outperform the evaluated algorithms by a large margin in accuracy, it
has a huge advantage over them in computational cost.
| [
"['Shafigh Parsazad' 'Ehsan Saboori' 'Amin Allahyar']",
"Shafigh Parsazad, Ehsan Saboori, Amin Allahyar"
] |
cs.LG | null | 1305.2452 | null | null | http://arxiv.org/pdf/1305.2452v1 | 2013-05-10T23:06:47Z | 2013-05-10T23:06:47Z | Stochastic Collapsed Variational Bayesian Inference for Latent Dirichlet
Allocation | In the internet era there has been an explosion in the amount of digital text
information available, leading to difficulties of scale for traditional
inference algorithms for topic models. Recent advances in stochastic
variational inference algorithms for latent Dirichlet allocation (LDA) have
made it feasible to learn topic models on large-scale corpora, but these
methods do not currently take full advantage of the collapsed representation of
the model. We propose a stochastic algorithm for collapsed variational Bayesian
inference for LDA, which is simpler and more efficient than the state of the
art method. We show connections between collapsed variational Bayesian
inference and MAP estimation for LDA, and leverage these connections to prove
convergence properties of the proposed algorithm. In experiments on large-scale
text corpora, the algorithm was found to converge faster and often to a better
solution than the previous method. Human-subject experiments also demonstrated
that the method can learn coherent topics in seconds on small corpora,
facilitating the use of topic models in interactive document analysis software.
| [
"['James Foulds' 'Levi Boyles' 'Christopher Dubois' 'Padhraic Smyth'\n 'Max Welling']",
"James Foulds, Levi Boyles, Christopher Dubois, Padhraic Smyth, Max\n Welling"
] |
cs.LG stat.ML | null | 1305.2505 | null | null | http://arxiv.org/pdf/1305.2505v1 | 2013-05-11T13:52:37Z | 2013-05-11T13:52:37Z | On the Generalization Ability of Online Learning Algorithms for Pairwise
Loss Functions | In this paper, we study the generalization properties of online learning
based stochastic methods for supervised learning problems where the loss
function is dependent on more than one training sample (e.g., metric learning,
ranking). We present a generic decoupling technique that enables us to provide
Rademacher complexity-based generalization error bounds. Our bounds are in
general tighter than those obtained by Wang et al (COLT 2012) for the same
problem. Using our decoupling technique, we are further able to obtain fast
convergence rates for strongly convex pairwise loss functions. We are also able
to analyze a class of memory efficient online learning algorithms for pairwise
learning problems that use only a bounded subset of past training samples to
update the hypothesis at each step. Finally, in order to complement our
generalization bounds, we propose a novel memory efficient online learning
algorithm for higher order learning problems with bounded regret guarantees.
| [
"['Purushottam Kar' 'Bharath K Sriperumbudur' 'Prateek Jain'\n 'Harish C Karnick']",
"Purushottam Kar, Bharath K Sriperumbudur, Prateek Jain and Harish C\n Karnick"
] |
cs.LG stat.ML | null | 1305.2532 | null | null | http://arxiv.org/pdf/1305.2532v1 | 2013-05-11T18:09:52Z | 2013-05-11T18:09:52Z | Learning Policies for Contextual Submodular Prediction | Many prediction domains, such as ad placement, recommendation, trajectory
prediction, and document summarization, require predicting a set or list of
options. Such lists are often evaluated using submodular reward functions that
measure both quality and diversity. We propose a simple, efficient, and
provably near-optimal approach to optimizing such prediction problems based on
no-regret learning. Our method leverages a surprising result from online
submodular optimization: a single no-regret online learner can compete with an
optimal sequence of predictions. Compared to previous work, which either learn
a sequence of classifiers or rely on stronger assumptions such as
realizability, we ensure both data-efficiency as well as performance guarantees
in the fully agnostic setting. Experiments validate the efficiency and
applicability of the approach on a wide range of problems including manipulator
trajectory optimization, news recommendation and document summarization.
| [
"Stephane Ross, Jiaji Zhou, Yisong Yue, Debadeepta Dey, J. Andrew\n Bagnell",
"['Stephane Ross' 'Jiaji Zhou' 'Yisong Yue' 'Debadeepta Dey'\n 'J. Andrew Bagnell']"
] |
cs.DS cs.LG | 10.1109/FOCS.2013.30 | 1305.2545 | null | null | http://arxiv.org/abs/1305.2545v8 | 2017-09-05T14:00:33Z | 2013-05-11T21:50:46Z | Bandits with Knapsacks | Multi-armed bandit problems are the predominant theoretical model of
exploration-exploitation tradeoffs in learning, and they have countless
applications ranging from medical trials, to communication networks, to Web
search and advertising. In many of these application domains the learner may be
constrained by one or more supply (or budget) limits, in addition to the
customary limitation on the time horizon. The literature lacks a general model
encompassing these sorts of problems. We introduce such a model, called
"bandits with knapsacks", that combines aspects of stochastic integer
programming with online learning. A distinctive feature of our problem, in
comparison to the existing regret-minimization literature, is that the optimal
policy for a given latent distribution may significantly outperform the policy
that plays the optimal fixed arm. Consequently, achieving sublinear regret in
the bandits-with-knapsacks problem is significantly more challenging than in
conventional bandit problems.
We present two algorithms whose reward is close to the information-theoretic
optimum: one is based on a novel "balanced exploration" paradigm, while the
other is a primal-dual algorithm that uses multiplicative updates. Further, we
prove that the regret achieved by both algorithms is optimal up to
polylogarithmic factors. We illustrate the generality of the problem by
presenting applications in a number of different domains including electronic
commerce, routing, and scheduling. As one example of a concrete application, we
consider the problem of dynamic posted pricing with limited supply and obtain
the first algorithm whose regret, with respect to the optimal dynamic policy,
is sublinear in the supply.
| [
"['Ashwinkumar Badanidiyuru' 'Robert Kleinberg' 'Aleksandrs Slivkins']",
"Ashwinkumar Badanidiyuru, Robert Kleinberg and Aleksandrs Slivkins"
] |
stat.ML cs.LG | null | 1305.2581 | null | null | http://arxiv.org/pdf/1305.2581v1 | 2013-05-12T12:46:25Z | 2013-05-12T12:46:25Z | Accelerated Mini-Batch Stochastic Dual Coordinate Ascent | Stochastic dual coordinate ascent (SDCA) is an effective technique for
solving regularized loss minimization problems in machine learning. This paper
considers an extension of SDCA under the mini-batch setting that is often used
in practice. Our main contribution is to introduce an accelerated mini-batch
version of SDCA and prove a fast convergence rate for this method. We discuss
an implementation of our method over a parallel computing system, and compare
the results to both the vanilla stochastic dual coordinate ascent and to the
accelerated deterministic gradient descent method of
\cite{nesterov2007gradient}.
| [
"Shai Shalev-Shwartz and Tong Zhang",
"['Shai Shalev-Shwartz' 'Tong Zhang']"
] |
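
As background for the abstract above, here is a minimal sketch of the vanilla (single-coordinate) SDCA baseline it extends, using the standard closed-form dual update for the squared loss; the problem sizes are illustrative, and the paper's accelerated mini-batch variant with its extrapolation step is not reproduced.

```python
import numpy as np

# Vanilla SDCA for ridge regression:
# min_w (1/n) sum_i 0.5*(x_i.w - y_i)^2 + (lam/2)*||w||^2,
# maintaining the primal-dual link w = X^T alpha / (lam * n).
rng = np.random.default_rng(0)
n, d, lam = 200, 5, 0.1
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.01 * rng.normal(size=n)

alpha = np.zeros(n)
w = np.zeros(d)
for _ in range(30 * n):
    i = rng.integers(n)
    # closed-form coordinate maximization of the dual for the squared loss
    delta = (y[i] - X[i] @ w - alpha[i]) / (1.0 + X[i] @ X[i] / (lam * n))
    alpha[i] += delta
    w += delta * X[i] / (lam * n)
```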
cs.LG stat.ML | null | 1305.2648 | null | null | http://arxiv.org/pdf/1305.2648v1 | 2013-05-13T00:15:14Z | 2013-05-13T00:15:14Z | Boosting with the Logistic Loss is Consistent | This manuscript provides optimization guarantees, generalization bounds, and
statistical consistency results for AdaBoost variants which replace the
exponential loss with the logistic and similar losses (specifically, twice
differentiable convex losses which are Lipschitz and tend to zero on one side).
The heart of the analysis is to show that, in lieu of explicit regularization
and constraints, the structure of the problem is fairly rigidly controlled by
the source distribution itself. The first control of this type is in the
separable case, where a distribution-dependent relaxed weak learning rate
induces speedy convergence with high probability over any sample. Otherwise, in
the nonseparable case, the convex surrogate risk itself exhibits
distribution-dependent levels of curvature, and consequently the algorithm's
output has small norm with high probability.
| [
"['Matus Telgarsky']",
"Matus Telgarsky"
] |
cs.LG | null | 1305.2732 | null | null | http://arxiv.org/pdf/1305.2732v1 | 2013-05-13T10:39:47Z | 2013-05-13T10:39:47Z | An efficient algorithm for learning with semi-bandit feedback | We consider the problem of online combinatorial optimization under
semi-bandit feedback. The goal of the learner is to sequentially select its
actions from a combinatorial decision set so as to minimize its cumulative
loss. We propose a learning algorithm for this problem based on combining the
Follow-the-Perturbed-Leader (FPL) prediction method with a novel loss
estimation procedure called Geometric Resampling (GR). Contrary to previous
solutions, the resulting algorithm can be efficiently implemented for any
decision set where efficient offline combinatorial optimization is possible at
all. Assuming that the elements of the decision set can be described with
d-dimensional binary vectors with at most m non-zero entries, we show that the
expected regret of our algorithm after T rounds is O(m sqrt(dT log d)). As a
side result, we also improve the best known regret bounds for FPL in the full
information setting to O(m^(3/2) sqrt(T log d)), gaining a factor of sqrt(d/m)
over previous bounds for this algorithm.
| [
"Gergely Neu and G\\'abor Bart\\'ok",
"['Gergely Neu' 'Gábor Bartók']"
] |
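
For the simplest decision set (single arms, m = 1), the FPL-plus-Geometric-Resampling scheme in the abstract above can be sketched directly: perturb the estimated cumulative losses, play the argmin, then redraw perturbations until the same arm wins again and use the number of redraws K as an importance weight, since E[K] = 1/P(arm chosen). The learning rate and the resampling cap below are illustrative choices.

```python
import numpy as np

# Sketch of Follow-the-Perturbed-Leader with Geometric Resampling (GR)
# for a plain multi-armed bandit. eta and the cap M are illustrative.
rng = np.random.default_rng(0)
N, T, eta, M = 5, 5000, 0.05, 500
true_means = np.linspace(0.2, 0.8, N)     # per-arm Bernoulli loss rates
L_hat = np.zeros(N)                       # estimated cumulative losses

def fpl_choice():
    # fresh exponential perturbations each draw, play the perturbed leader
    return int(np.argmin(eta * L_hat - rng.exponential(size=N)))

for t in range(T):
    i = fpl_choice()
    loss = float(rng.random() < true_means[i])   # bandit feedback for arm i
    K = 1
    while fpl_choice() != i and K < M:           # geometric resampling
        K += 1
    L_hat[i] += K * loss                         # nearly unbiased loss estimate
```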
cs.LG stat.AP | null | 1305.2788 | null | null | http://arxiv.org/pdf/1305.2788v1 | 2013-05-13T14:19:24Z | 2013-05-13T14:19:24Z | HRF estimation improves sensitivity of fMRI encoding and decoding models | Extracting activation patterns from functional Magnetic Resonance Images
(fMRI) datasets remains challenging in rapid-event designs due to the inherent
delay of the blood oxygen level-dependent (BOLD) signal. The general linear model
(GLM) allows one to estimate the activation from a design matrix and a fixed
hemodynamic response function (HRF). However, the HRF is known to vary
substantially between subjects and brain regions. In this paper, we propose a
model for jointly estimating the hemodynamic response function (HRF) and the
activation patterns via a low-rank representation of task effects. This model is
based on the linearity assumption behind the GLM and can be computed using
standard gradient-based solvers. We use the activation patterns computed by our
model as input data for encoding and decoding studies and report performance
improvement in both settings.
| [
"Fabian Pedregosa (INRIA Paris - Rocquencourt, INRIA Saclay - Ile de\n France), Michael Eickenberg (INRIA Saclay - Ile de France, LNAO), Bertrand\n Thirion (INRIA Saclay - Ile de France, LNAO), Alexandre Gramfort (LTCI)",
"['Fabian Pedregosa' 'Michael Eickenberg' 'Bertrand Thirion'\n 'Alexandre Gramfort']"
] |
cs.LG | null | 1305.2982 | null | null | http://arxiv.org/pdf/1305.2982v1 | 2013-05-14T00:29:42Z | 2013-05-14T00:29:42Z | Estimating or Propagating Gradients Through Stochastic Neurons | Stochastic neurons can be useful for a number of reasons in deep learning
models, but in many cases they pose a challenging problem: how to estimate the
gradient of a loss function with respect to the input of such stochastic
neurons, i.e., can we "back-propagate" through these stochastic neurons? We
examine this question, existing approaches, and present two novel families of
solutions, applicable in different settings. In particular, it is demonstrated
that a simple biologically plausible formula gives rise to an unbiased (but
noisy) estimator of the gradient with respect to a binary stochastic neuron
firing probability. Unlike other estimators which view the noise as a small
perturbation in order to estimate gradients by finite differences, this
estimator is unbiased even without assuming that the stochastic perturbation is
small. This estimator is also interesting because it can be applied in very
general settings which do not allow gradient back-propagation, including the
estimation of the gradient with respect to future rewards, as required in
reinforcement learning setups. We also propose an approach to approximating
this unbiased but high-variance estimator by learning to predict it using a
biased estimator. The second approach we propose assumes that an estimator of
the gradient can be back-propagated and it provides an unbiased estimator of
the gradient, but it can only work with non-linearities that, unlike the hard
threshold but like the rectifier, are not flat over all of their range. This is
similar to traditional sigmoidal units but has the advantage that for many
inputs, a hard decision (e.g., a 0 output) can be produced, which would be
convenient for conditional computation and achieving sparse representations and
sparse gradients.
| [
"['Yoshua Bengio']",
"Yoshua Bengio"
] |
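
The "unbiased but noisy" estimator family discussed in the abstract above can be illustrated with the textbook likelihood-ratio (score-function) construction for a Bernoulli neuron; this generic form is an assumption of the sketch and may differ from the paper's own derivation.

```python
import numpy as np

# Score-function gradient for a binary stochastic neuron h ~ Bernoulli(p):
# d/dp E[L(h)] = E[ L(h) * (h - p) / (p * (1 - p)) ]  (unbiased, high variance)
rng = np.random.default_rng(0)
p = 0.3
L = lambda h: (h - 1.0) ** 2          # example loss; E[L] = 1 - p, dE/dp = -1

h = (rng.random(200000) < p).astype(float)       # samples of the neuron
grad_estimates = L(h) * (h - p) / (p * (1 - p))
print(grad_estimates.mean())          # approx -1.0: unbiased
print(grad_estimates.std())           # large: the high variance noted above
```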
cs.GT cs.LG | null | 1305.3011 | null | null | http://arxiv.org/pdf/1305.3011v1 | 2013-05-14T03:39:45Z | 2013-05-14T03:39:45Z | Real Time Bid Optimization with Smooth Budget Delivery in Online
Advertising | Today, billions of display ad impressions are purchased on a daily basis
through a public auction hosted by real time bidding (RTB) exchanges. A
decision has to be made for advertisers to submit a bid for each selected RTB
ad request in milliseconds. Restricted by the budget, the goal is to buy a set
of ad impressions to reach as many targeted users as possible. A desired action
(conversion) is advertiser-specific and may include purchasing a product, filling out a
form, signing up for emails, etc. In addition, advertisers typically prefer to
spend their budget smoothly over time in order to reach a wider range of
audience accessible throughout a day and have a sustainable impact. However,
since the conversions occur rarely and the occurrence feedback is normally
delayed, it is very challenging to achieve both budget and performance goals at
the same time. In this paper, we present an online approach to the smooth
budget delivery while optimizing for the conversion performance. Our algorithm
tries to select high quality impressions and adjust the bid price based on the
prior performance distribution in an adaptive manner by distributing the budget
optimally across time. Our experimental results from real advertising campaigns
demonstrate the effectiveness of our proposed approach.
| [
"Kuang-Chih Lee, Ali Jalali and Ali Dasdan",
"['Kuang-Chih Lee' 'Ali Jalali' 'Ali Dasdan']"
] |
cs.LG cs.DB | null | 1305.3014 | null | null | http://arxiv.org/pdf/1305.3014v1 | 2013-05-14T03:48:09Z | 2013-05-14T03:48:09Z | Scalable Audience Reach Estimation in Real-time Online Advertising | Online advertising has emerged as one of the most efficient methods of
advertising in recent years. Yet, advertisers are concerned
about the efficiency of their online advertising campaigns and consequently,
would like to restrict their ad impressions to certain websites and/or certain
groups of audience. These restrictions, known as targeting criteria, limit the
reachability for better performance. This trade-off between reachability and
performance illustrates a need for a forecasting system that can quickly
predict/estimate (with good accuracy) this trade-off. Designing such a system
is challenging due to (a) the huge amount of data to process, and, (b) the need
for fast and accurate estimates. In this paper, we propose a distributed fault
tolerant system that can generate such estimates fast with good accuracy. The
main idea is to keep a small representative sample in memory across multiple
machines and formulate the forecasting problem as queries against the sample.
The key challenge is to find the best strata across the past data, perform
multivariate stratified sampling while ensuring fuzzy fall-back to cover the
small minorities. Our results show a significant improvement over the uniform
and simple stratified sampling strategies which are currently widely used in
the industry.
| [
"['Ali Jalali' 'Santanu Kolay' 'Peter Foldes' 'Ali Dasdan']",
"Ali Jalali, Santanu Kolay, Peter Foldes and Ali Dasdan"
] |
stat.ML cs.LG math.OC | null | 1305.3120 | null | null | http://arxiv.org/pdf/1305.3120v1 | 2013-05-14T11:49:34Z | 2013-05-14T11:49:34Z | Optimization with First-Order Surrogate Functions | In this paper, we study optimization methods consisting of iteratively
minimizing surrogates of an objective function. By proposing several
algorithmic variants and simple convergence analyses, we make two main
contributions. First, we provide a unified viewpoint for several first-order
optimization techniques such as accelerated proximal gradient, block coordinate
descent, or Frank-Wolfe algorithms. Second, we introduce a new incremental
scheme that experimentally matches or outperforms state-of-the-art solvers for
large-scale optimization problems typically arising in machine learning.
| [
"Julien Mairal (INRIA Grenoble Rh\\^one-Alpes / LJK Laboratoire Jean\n Kuntzmann)",
"['Julien Mairal']"
] |
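
A minimal instance of the surrogate-minimization viewpoint in the abstract above is ISTA for the lasso: each step exactly minimizes a quadratic upper bound (a first-order surrogate) of the smooth part around the current iterate. Problem sizes and the Lipschitz-based step below are illustrative.

```python
import numpy as np

# ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1, viewed as iteratively
# minimizing a quadratic majorizing surrogate of the smooth term.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
b = rng.normal(size=50)
lam = 0.5
Lip = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient

x = np.zeros(20)
for _ in range(500):
    g = A.T @ (A @ x - b)             # gradient of the smooth part
    z = x - g / Lip                   # minimize the quadratic surrogate...
    x = np.sign(z) * np.maximum(np.abs(z) - lam / Lip, 0.0)  # ...plus l1 prox
```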
cs.CE cs.LG | null | 1305.3149 | null | null | http://arxiv.org/pdf/1305.3149v1 | 2013-05-14T13:23:19Z | 2013-05-14T13:23:19Z | Qualitative detection of oil adulteration with machine learning
approaches | The study focused on the machine learning analysis approaches to identify the
adulteration of 9 kinds of edible oil qualitatively and answered the following
three questions: Is the oil sample adulterated? What does it consist of? What is
the main ingredient of the adulterant oil? After extracting the
high-performance liquid chromatography (HPLC) data on triglyceride from 370 oil
samples, we applied the adaptive boosting with multi-class Hamming loss
(AdaBoost.MH) to distinguish the oil adulteration in contrast with the support
vector machine (SVM). Further, we regarded the adulterant oil and the pure oil
samples as ones with multiple labels and with only one label, respectively.
Then multi-label AdaBoost.MH and multi-label learning vector quantization
(ML-LVQ) model were built to determine the ingredients and their relative ratio
in the adulteration oil. The experimental results on six measures show that
ML-LVQ achieves better performance than multi-label AdaBoost.MH.
| [
"['Xiao-Bo Jin' 'Qiang Lu' 'Feng Wang' 'Quan-gong Huo']",
"Xiao-Bo Jin, Qiang Lu, Feng Wang, Quan-gong Huo"
] |
cs.LG cs.DS stat.ML | null | 1305.3207 | null | null | http://arxiv.org/pdf/1305.3207v1 | 2013-05-14T16:54:10Z | 2013-05-14T16:54:10Z | Efficient Density Estimation via Piecewise Polynomial Approximation | We give a highly efficient "semi-agnostic" algorithm for learning univariate
probability distributions that are well approximated by piecewise polynomial
density functions. Let $p$ be an arbitrary distribution over an interval $I$
which is $\tau$-close (in total variation distance) to an unknown probability
distribution $q$ that is defined by an unknown partition of $I$ into $t$
intervals and $t$ unknown degree-$d$ polynomials specifying $q$ over each of
the intervals. We give an algorithm that draws $\tilde{O}(t(d+1)/\epsilon^2)$
samples from $p$, runs in time $\mathrm{poly}(t,d,1/\epsilon)$, and with high probability
outputs a piecewise polynomial hypothesis distribution $h$ that is
$(O(\tau)+\epsilon)$-close (in total variation distance) to $p$. This sample
complexity is essentially optimal; we show that even for $\tau=0$, any
algorithm that learns an unknown $t$-piecewise degree-$d$ probability
distribution over $I$ to accuracy $\epsilon$ must use
$\Omega\bigl(\frac{t(d+1)}{\mathrm{poly}(1+\log(d+1))} \cdot \frac{1}{\epsilon^2}\bigr)$ samples from the
distribution, regardless of its running time. Our algorithm combines tools from
approximation theory, uniform convergence, linear programming, and dynamic
programming.
We apply this general algorithm to obtain a wide range of results for many
natural problems in density estimation over both continuous and discrete
domains. These include state-of-the-art results for learning mixtures of
log-concave distributions; mixtures of $t$-modal distributions; mixtures of
Monotone Hazard Rate distributions; mixtures of Poisson Binomial Distributions;
mixtures of Gaussians; and mixtures of $k$-monotone densities. Our general
technique yields computationally efficient algorithms for all these problems,
in many cases with provably optimal sample complexities (up to logarithmic
factors) in all parameters.
| [
"Siu-On Chan, Ilias Diakonikolas, Rocco A. Servedio, Xiaorui Sun",
"['Siu-On Chan' 'Ilias Diakonikolas' 'Rocco A. Servedio' 'Xiaorui Sun']"
] |
cs.LG cs.GT math.OC stat.ML | null | 1305.3334 | null | null | http://arxiv.org/pdf/1305.3334v1 | 2013-05-15T01:22:34Z | 2013-05-15T01:22:34Z | Online Learning in a Contract Selection Problem | In an online contract selection problem there is a seller which offers a set
of contracts to sequentially arriving buyers whose types are drawn from an
unknown distribution. If there exists a profitable contract for the buyer in
the offered set, i.e., a contract with payoff higher than the payoff of not
accepting any contracts, the buyer chooses the contract that maximizes its
payoff. In this paper we consider the online contract selection problem to
maximize the seller's profit. Assuming that a structural property called ordered
preferences holds for the buyer's payoff function, we propose online learning
algorithms that have sub-linear regret with respect to the best set of
contracts given the distribution over the buyer's type. This problem has many
applications including spectrum contracts, wireless service provider data plans
and recommendation systems.
| [
"Cem Tekin and Mingyan Liu",
"['Cem Tekin' 'Mingyan Liu']"
] |
cs.LG cs.IR | null | 1305.3384 | null | null | http://arxiv.org/pdf/1305.3384v1 | 2013-05-15T08:00:54Z | 2013-05-15T08:00:54Z | Transfer Learning for Content-Based Recommender Systems using Tree
Matching | In this paper we present a new approach to content-based transfer learning
for solving the data sparsity problem in cases when the users' preferences in
the target domain are either scarce or unavailable, but the necessary
information on the preferences exists in another domain. We show that training
a system to use such information across domains can produce better performance.
Specifically, we represent users' behavior patterns based on topological graph
structures. Each behavior pattern represents the behavior of a set of users,
when the users' behavior is defined as the items they rated and the items'
rating values. In the next step we find a correlation between behavior patterns
in the source domain and behavior patterns in the target domain. This mapping
is considered a bridge between the two domains. Based on the correlation and
content-attributes of the items, we train a machine learning model to predict
users' ratings in the target domain. When we compare our approach to the
popularity approach and KNN-cross-domain on a real world dataset, the results
show that in 83% of the cases, on average, our approach outperforms both
methods.
| [
"['Naseem Biadsy' 'Lior Rokach' 'Armin Shmilovici']",
"Naseem Biadsy, Lior Rokach, Armin Shmilovici"
] |
cs.IT cs.LG math.IT math.ST stat.ML stat.TH | null | 1305.3486 | null | null | http://arxiv.org/pdf/1305.3486v2 | 2013-07-18T11:04:58Z | 2013-05-15T14:12:50Z | Noisy Subspace Clustering via Thresholding | We consider the problem of clustering noisy high-dimensional data points into
a union of low-dimensional subspaces and a set of outliers. The number of
subspaces, their dimensions, and their orientations are unknown. A
probabilistic performance analysis of the thresholding-based subspace
clustering (TSC) algorithm introduced recently in [1] shows that TSC succeeds
in the noisy case, even when the subspaces intersect. Our results reveal an
explicit tradeoff between the allowed noise level and the affinity of the
subspaces. We furthermore find that the simple outlier detection scheme
introduced in [1] provably succeeds in the noisy case.
| [
"['Reinhard Heckel' 'Helmut Bölcskei']",
"Reinhard Heckel and Helmut B\\\"olcskei"
] |
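
The TSC procedure analyzed in the abstract above is simple enough to sketch: normalize the points, keep for each point its q largest absolute inner products with the other points, and spectrally cluster the resulting affinity graph. The value of q and the toy data below are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Thresholding-based subspace clustering (TSC) sketch on toy data:
# two 2-D subspaces inside R^6, 40 noisy points each.
rng = np.random.default_rng(0)
bases = [np.linalg.qr(rng.normal(size=(6, 2)))[0] for _ in range(2)]
X = np.vstack([(B @ rng.normal(size=(2, 40))).T for B in bases])
X += 0.05 * rng.normal(size=X.shape)

Xn = X / np.linalg.norm(X, axis=1, keepdims=True)   # normalize points
C = np.abs(Xn @ Xn.T)                               # pairwise correlations
np.fill_diagonal(C, 0.0)
q = 5
A = np.zeros_like(C)
for i in range(len(X)):
    nn = np.argsort(C[i])[-q:]        # threshold: keep q strongest neighbors
    A[i, nn] = C[i, nn]
A = np.maximum(A, A.T)                # symmetrize the affinity graph
labels = SpectralClustering(n_clusters=2, affinity='precomputed',
                            random_state=0).fit_predict(A)
```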
cs.NE cs.LG stat.ML | null | 1305.3794 | null | null | http://arxiv.org/pdf/1305.3794v2 | 2013-05-22T09:28:04Z | 2013-05-16T13:25:20Z | Evolution of Covariance Functions for Gaussian Process Regression using
Genetic Programming | In this contribution we describe an approach to evolve composite covariance
functions for Gaussian processes using genetic programming. A critical aspect
of Gaussian processes and similar kernel-based models such as SVM is that the
covariance function should be adapted to the modeled data. Frequently, the
squared exponential covariance function is used as a default. However, this can
lead to a misspecified model, which does not fit the data well. In the proposed
approach we use a grammar for the composition of covariance functions and
genetic programming to search over the space of sentences that can be derived
from the grammar. We tested the proposed approach on synthetic data from
two-dimensional test functions, and on the Mauna Loa CO2 time series. The
results show that our approach is feasible, finding covariance functions that
perform much better than a default covariance function. For the CO2 data set a
composite covariance function is found, that matches the performance of a
hand-tuned covariance function.
| [
"Gabriel Kronberger and Michael Kommenda",
"['Gabriel Kronberger' 'Michael Kommenda']"
] |
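
The abstract above searches a kernel grammar with genetic programming; as a much simpler stand-in, the sketch below scores random sums and products of basic kernels by GP log marginal likelihood. The tiny grammar (three primitives, two operators, depth two) and random search replacing genetic programming are assumptions of this sketch.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, DotProduct, ExpSineSquared

rng = np.random.default_rng(0)
X = np.linspace(0, 10, 60)[:, None]
y = np.sin(X).ravel() + 0.1 * X.ravel() + 0.05 * rng.normal(size=60)

primitives = [RBF(), DotProduct(), ExpSineSquared()]

def random_kernel():
    # one random sentence from a depth-2 grammar: k = k_a (+|*) k_b
    a, b = rng.choice(len(primitives), size=2)
    if rng.random() < 0.5:
        return primitives[a] + primitives[b]
    return primitives[a] * primitives[b]

best, best_ll = None, -np.inf
for _ in range(10):
    gp = GaussianProcessRegressor(kernel=random_kernel(),
                                  normalize_y=True).fit(X, y)
    ll = gp.log_marginal_likelihood_value_   # fitness of this composition
    if ll > best_ll:
        best, best_ll = gp.kernel_, ll
print(best)
```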
cs.IR cs.LG | null | 1305.3814 | null | null | http://arxiv.org/pdf/1305.3814v2 | 2013-07-24T05:21:16Z | 2013-05-16T14:11:02Z | Multi-View Learning for Web Spam Detection | Spam pages are designed to maliciously appear among the top search results by
excessive usage of popular terms. Therefore, spam pages should be removed using
an effective and efficient spam detection system. Previous methods for web spam
classification used several features from various information sources (page
contents, web graph, access logs, etc.) to detect web spam. In this paper, we
follow page-level classification approach to build fast and scalable spam
filters. We show that each web page can be classified with satisfactory accuracy
using only its own HTML content. In order to design a multi-view classification
system, we used state-of-the-art spam classification methods with distinct
feature sets (views) as the base classifiers. Then, a fusion model is learned
to combine the output of the base classifiers and make final prediction.
Results show that multi-view learning significantly improves the classification
performance, namely AUC by 22%, while providing linear speedup for parallel
execution.
| [
"['Ali Hadian' 'Behrouz Minaei-Bidgoli']",
"Ali Hadian, Behrouz Minaei-Bidgoli"
] |
cs.SI cs.HC cs.LG | 10.1145/2531602.2531607 | 1305.3932 | null | null | http://arxiv.org/abs/1305.3932v3 | 2013-11-16T00:06:38Z | 2013-05-16T20:47:05Z | Inferring the Origin Locations of Tweets with Quantitative Confidence | Social Internet content plays an increasingly critical role in many domains,
including public health, disaster management, and politics. However, its
utility is limited by missing geographic information; for example, fewer than
1.6% of Twitter messages (tweets) contain a geotag. We propose a scalable,
content-based approach to estimate the location of tweets using a novel yet
simple variant of gaussian mixture models. Further, because real-world
applications depend on quantified uncertainty for such estimates, we propose
novel metrics of accuracy, precision, and calibration, and we evaluate our
approach accordingly. Experiments on 13 million global, comprehensively
multi-lingual tweets show that our approach yields reliable, well-calibrated
results competitive with previous computationally intensive methods. We also
show that a relatively small number of training data are required for good
estimates (roughly 30,000 tweets) and models are quite time-invariant
(effective on tweets many weeks newer than the training set). Finally, we show
that toponyms and languages with small geographic footprint provide the most
useful location signals.
| [
"Reid Priedhorsky (1), Aron Culotta (2), Sara Y. Del Valle (1) ((1) Los\n Alamos National Laboratory, (2) Illinois Institute of Technology)",
"['Reid Priedhorsky' 'Aron Culotta' 'Sara Y. Del Valle']"
] |
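
The content-based estimator in the abstract above can be sketched with an off-the-shelf Gaussian mixture: fit the known coordinates of tweets containing a given token, then use the mixture both as a point estimate and as a density-based confidence for new tweets. The coordinates and component count below are illustrative assumptions, not the paper's model.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# pretend geotagged training tweets containing some token, two hotspots
coords = np.vstack([rng.normal([40.7, -74.0], 0.5, size=(300, 2)),  # NYC-ish
                    rng.normal([51.5, -0.1], 0.5, size=(150, 2))])  # London-ish
gmm = GaussianMixture(n_components=2, random_state=0).fit(coords)

point_estimate = gmm.means_[np.argmax(gmm.weights_)]       # dominant mode
log_density = gmm.score_samples(np.array([[40.8, -73.9]])) # confidence proxy
```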
cs.LG | null | 1305.4076 | null | null | http://arxiv.org/pdf/1305.4076v5 | 2014-04-23T11:40:12Z | 2013-05-17T13:42:49Z | Contractive De-noising Auto-encoder | Auto-encoder is a special kind of neural network based on reconstruction.
De-noising auto-encoder (DAE) is an improved auto-encoder which is robust to
the input by corrupting the original data first and then reconstructing the
original input by minimizing the reconstruction error function. Contractive
auto-encoder (CAE) is another improved auto-encoder that learns robust
features by penalizing the Frobenius norm of the Jacobian matrix of the learned
features with respect to the original input. In this paper, we combine the
de-noising auto-encoder and the contractive auto-encoder, and propose another
improved auto-encoder, the contractive de-noising auto-encoder (CDAE), which is
robust to both the original input and the learned feature. We stack CDAEs to
extract more abstract features and apply SVM for classification. The experimental
results on the benchmark dataset MNIST show that our proposed CDAE performs better
than both DAE and CAE, proving the effectiveness of our method.
| [
"Fu-qiang Chen, Yan Wu, Guo-dong Zhao, Jun-ming Zhang, Ming Zhu, Jing\n Bai",
"['Fu-qiang Chen' 'Yan Wu' 'Guo-dong Zhao' 'Jun-ming Zhang' 'Ming Zhu'\n 'Jing Bai']"
] |
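
The combined objective described above can be sketched in one forward pass: reconstruct the clean input from a corrupted copy (de-noising term) plus the Frobenius norm of the encoder Jacobian at the corrupted input (contractive term). For a sigmoid encoder h = s(Wx + b), the Jacobian norm has the closed form used below; weights, corruption level, and lambda are illustrative, and no training loop is shown.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, lam, noise = 20, 8, 0.1, 0.3
W, b = rng.normal(scale=0.1, size=(k, d)), np.zeros(k)
W2, b2 = rng.normal(scale=0.1, size=(d, k)), np.zeros(d)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

x = rng.random(d)
x_tilde = x * (rng.random(d) > noise)       # masking corruption (de-noising)
h = sigmoid(W @ x_tilde + b)                # encoder
x_hat = sigmoid(W2 @ h + b2)                # decoder

recon = np.sum((x - x_hat) ** 2)            # reconstruct the *clean* input
# ||J||_F^2 for J = diag(h*(1-h)) @ W, in closed form:
jac_frob2 = np.sum((h * (1 - h)) ** 2 * np.sum(W ** 2, axis=1))
loss = recon + lam * jac_frob2              # CDAE objective for this example
```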
cs.LG cs.NA math.OC | null | 1305.4081 | null | null | http://arxiv.org/pdf/1305.4081v1 | 2013-05-17T13:53:17Z | 2013-05-17T13:53:17Z | Conditions for Convergence in Regularized Machine Learning Objectives | Analysis of the convergence rates of modern convex optimization algorithms
can be achieved through two means: analysis of empirical convergence, or
analysis of theoretical convergence. These two pathways of capturing
information diverge in efficacy when moving to the world of distributed
computing, due to the introduction of non-intuitive, non-linear slowdowns
associated with broadcasting, and in some cases, gathering operations. Despite
these nuances in the rates of convergence, we can still show the existence of
convergence, and lower bounds for the rates. This paper will serve as a helpful
cheat-sheet for machine learning practitioners encountering this problem class
in the field.
| [
"Patrick Hop, Xinghao Pan",
"['Patrick Hop' 'Xinghao Pan']"
] |
cs.LG cs.CV | null | 1305.4204 | null | null | http://arxiv.org/pdf/1305.4204v1 | 2013-05-17T22:40:14Z | 2013-05-17T22:40:14Z | Machine learning on images using a string-distance | We present a new method for image feature-extraction which is based on
representing an image by a finite-dimensional vector of distances that measure
how different the image is from a set of image prototypes. We use the recently
introduced Universal Image Distance (UID) \cite{RatsabyChesterIEEE2012} to
compare the similarity between an image and a prototype image. The advantage in
using the UID is the fact that no domain knowledge nor any image analysis needs
to be done. Each image is represented by a finite dimensional feature vector
whose components are the UID values between the image and a finite set of image
prototypes from each of the feature categories. The method is automatic since
once the user selects the prototype images, the feature vectors are
automatically calculated without the need to do any image analysis. The
prototype images can be of different size, in particular, different than the
image size. Based on a collection of such cases any supervised or unsupervised
learning algorithm can be used to train and produce an image classifier or
image cluster analysis. In this paper we present the image feature-extraction
method and use it on several supervised and unsupervised learning experiments
for satellite image data.
| [
"['Uzi Chester' 'Joel Ratsaby']",
"Uzi Chester, Joel Ratsaby"
] |
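
The representation described above reduces each image to the vector of its distances to a few prototypes. The UID itself is not reproduced here; as a stand-in, the sketch below uses a zlib-based normalized compression distance on raw bytes, which is an assumption of this sketch only.

```python
import zlib
import numpy as np

def ncd(a: bytes, b: bytes) -> float:
    # normalized compression distance, a generic stand-in for the UID
    Ca, Cb = len(zlib.compress(a)), len(zlib.compress(b))
    Cab = len(zlib.compress(a + b))
    return (Cab - min(Ca, Cb)) / max(Ca, Cb)

def feature_vector(image: bytes, prototypes: list) -> np.ndarray:
    # the image becomes a fixed-length vector of distances to prototypes
    return np.array([ncd(image, p) for p in prototypes])

protos = [bytes([i % 7]) * 400 for i in range(1, 4)]  # toy "prototype images"
img = bytes([2]) * 200 + bytes([5]) * 200
print(feature_vector(img, protos))   # usable by any supervised learner
```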
cs.LG stat.ML | null | 1305.4324 | null | null | http://arxiv.org/pdf/1305.4324v1 | 2013-05-19T04:56:05Z | 2013-05-19T04:56:05Z | Horizon-Independent Optimal Prediction with Log-Loss in Exponential
Families | We study online learning under logarithmic loss with regular parametric
models. Hedayati and Bartlett (2012b) showed that a Bayesian prediction
strategy with Jeffreys prior and sequential normalized maximum likelihood
(SNML) coincide and are optimal if and only if the latter is exchangeable, and
if and only if the optimal strategy can be calculated without knowing the time
horizon in advance. They put forward the question of which families have
exchangeable SNML strategies. This paper fully answers this open problem for
one-dimensional exponential families. The exchangeability can happen only for
three classes of natural exponential family distributions, namely the Gaussian,
Gamma, and the Tweedie exponential family of order 3/2. Keywords: SNML
Exchangeability, Exponential Family, Online Learning, Logarithmic Loss,
Bayesian Strategy, Jeffreys Prior, Fisher Information
| [
"Peter Bartlett, Peter Grunwald, Peter Harremoes, Fares Hedayati,\n Wojciech Kotlowski",
"['Peter Bartlett' 'Peter Grunwald' 'Peter Harremoes' 'Fares Hedayati'\n 'Wojciech Kotlowski']"
] |
q-bio.QM cs.LG | null | 1305.4339 | null | null | http://arxiv.org/pdf/1305.4339v1 | 2013-05-19T07:50:14Z | 2013-05-19T07:50:14Z | Generalized Centroid Estimators in Bioinformatics | In a number of estimation problems in bioinformatics, accuracy measures of
the target problem are usually given, and it is important to design estimators
that are suitable to those accuracy measures. However, there is often a
discrepancy between an employed estimator and a given accuracy measure of the
problem. In this study, we introduce a general class of efficient estimators
for estimation problems on high-dimensional binary spaces, which represent many
fundamental problems in bioinformatics. Theoretical analysis reveals that the
proposed estimators generally fit with commonly-used accuracy measures (e.g.
sensitivity, PPV, MCC and F-score), can be computed efficiently in
many cases, and cover a wide range of problems in bioinformatics from the
viewpoint of the principle of maximum expected accuracy (MEA). It is also shown
that some important algorithms in bioinformatics can be interpreted in a
unified manner. Not only does the concept presented in this paper give a useful
framework for designing MEA-based estimators, but it is also highly extendable and
sheds new light on many problems in bioinformatics.
| [
"['Michiaki Hamada' 'Hisanori Kiryu' 'Wataru Iwasaki' 'Kiyoshi Asai']",
"Michiaki Hamada, Hisanori Kiryu, Wataru Iwasaki and Kiyoshi Asai"
] |
cs.LG | null | 1305.4345 | null | null | http://arxiv.org/pdf/1305.4345v1 | 2013-05-19T10:24:06Z | 2013-05-19T10:24:06Z | Ensembles of Classifiers based on Dimensionality Reduction | We present a novel approach for the construction of ensemble classifiers
based on dimensionality reduction. Dimensionality reduction methods represent
datasets using a small number of attributes while preserving the information
conveyed by the original dataset. The ensemble members are trained based on
dimension-reduced versions of the training set. These versions are obtained by
applying dimensionality reduction to the original training set using different
values of the input parameters. This construction meets both the diversity and
accuracy criteria which are required to construct an ensemble classifier where
the former criterion is obtained by the various input parameter values and the
latter is achieved due to the decorrelation and noise reduction properties of
dimensionality reduction. In order to classify a test sample, it is first
embedded into the dimension reduced space of each individual classifier by
using an out-of-sample extension algorithm. Each classifier is then applied to
the embedded sample and the classification is obtained via a voting scheme. We
present three variations of the proposed approach based on the Random
Projections, the Diffusion Maps and the Random Subspaces dimensionality
reduction algorithms. We also present a multi-strategy ensemble which combines
AdaBoost and Diffusion Maps. A comparison is made with the Bagging, AdaBoost,
Rotation Forest ensemble classifiers and also with the base classifier which
does not incorporate dimensionality reduction. Our experiments used seventeen
benchmark datasets from the UCI repository. The results obtained by the
proposed algorithms were superior in many cases to other algorithms.
| [
"['Alon Schclar' 'Lior Rokach' 'Amir Amit']",
"Alon Schclar and Lior Rokach and Amir Amit"
] |
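
Below is a minimal sketch of the Random Projections variant described above: each ensemble member trains on a differently-seeded random projection of the data, and test points are embedded with each member's own projection before voting. Ensemble size, target dimension, and base learner are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.random_projection import GaussianRandomProjection
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
members = []
for seed in range(11):
    # diversity comes from the differently-seeded projections
    rp = GaussianRandomProjection(n_components=3, random_state=seed)
    Xr = rp.fit_transform(X)
    clf = DecisionTreeClassifier(random_state=seed).fit(Xr, y)
    members.append((rp, clf))

def predict(X_new):
    # embed the test points into each member's reduced space, then vote
    votes = np.stack([clf.predict(rp.transform(X_new)) for rp, clf in members])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

print((predict(X) == y).mean())
```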
cs.LG stat.ML | null | 1305.4433 | null | null | http://arxiv.org/pdf/1305.4433v1 | 2013-05-20T04:05:23Z | 2013-05-20T04:05:23Z | Meta Path-Based Collective Classification in Heterogeneous Information
Networks | Collective classification has been intensively studied due to its impact in
many important applications, such as web mining, bioinformatics and citation
analysis. Collective classification approaches exploit the dependencies of a
group of linked objects whose class labels are correlated and need to be
predicted simultaneously. In this paper, we focus on studying the collective
classification problem in heterogeneous networks, which involves multiple types
of data objects interconnected by multiple types of links. Intuitively, two
objects are correlated if they are linked by many paths in the network.
However, most existing approaches measure the dependencies among objects
through direct or indirect links without considering the different
semantic meanings behind different paths. In this paper, we study the
collective classification problem that is defined among the same type of
objects in heterogeneous networks. Moreover, by considering different linkage
paths in the network, one can capture the subtlety of different types of
dependencies among objects. We introduce the concept of meta-path based
dependencies among objects, where a meta path is a path consisting of a certain
sequence of link types. We show that the quality of collective classification
results strongly depends upon the meta paths used. To accommodate the large
network size, a novel solution, called HCC (meta-path based Heterogeneous
Collective Classification), is developed to effectively assign labels to a
group of instances that are interconnected through different meta-paths. The
proposed HCC model can capture different types of dependencies among objects
with respect to different meta paths. Empirical studies on real-world networks
demonstrate the effectiveness of the proposed meta path-based collective
classification approach.
| [
"['Xiangnan Kong' 'Bokai Cao' 'Philip S. Yu' 'Ying Ding' 'David J. Wild']",
"Xiangnan Kong, Bokai Cao, Philip S. Yu, Ying Ding and David J. Wild"
] |
cs.LG q-bio.QM | null | 1305.4525 | null | null | http://arxiv.org/pdf/1305.4525v3 | 2013-10-18T15:30:45Z | 2013-05-20T13:39:03Z | Robustness of Random Forest-based gene selection methods | Gene selection is an important part of microarray data analysis because it
provides information that can lead to a better mechanistic understanding of an
investigated phenomenon. At the same time, gene selection is very difficult
because of the noisy nature of microarray data. As a consequence, gene
selection is often performed with machine learning methods. The Random Forest
method is particularly well suited for this purpose. In this work, four
state-of-the-art Random Forest-based feature selection methods were compared in
a gene selection context. The analysis focused on the stability of selection
because, although it is necessary for determining the significance of results,
it is often ignored in similar studies.
The comparison of post-selection accuracy in the validation of Random Forest
classifiers revealed that all investigated methods were equivalent in this
context. However, the methods substantially differed with respect to the number
of selected genes and the stability of selection. Of the analysed methods, the
Boruta algorithm predicted the most genes as potentially important.
The post-selection classifier error rate, which is a frequently used measure,
was found to be a potentially deceptive measure of gene selection quality. When
the number of consistently selected genes was considered, the Boruta algorithm
was clearly the best. Although it was also the most computationally intensive
method, the Boruta algorithm's computational demands could be reduced to levels
comparable to those of other algorithms by replacing the Random Forest
importance with a comparable measure from Random Ferns (a similar but
simplified classifier). Despite their design assumptions, the minimal-optimal
selection methods were found to select a high fraction of false positives.
| [
"Miron B. Kursa",
"['Miron B. Kursa']"
] |
math.OC cs.LG cs.NA math.NA stat.ML | null | 1305.4723 | null | null | http://arxiv.org/pdf/1305.4723v1 | 2013-05-21T06:12:42Z | 2013-05-21T06:12:42Z | On the Complexity Analysis of Randomized Block-Coordinate Descent
Methods | In this paper we analyze the randomized block-coordinate descent (RBCD)
methods proposed in [8,11] for minimizing the sum of a smooth convex function
and a block-separable convex function. In particular, we extend Nesterov's
technique developed in [8] for analyzing the RBCD method for minimizing a
smooth convex function over a block-separable closed convex set to the
aforementioned more general problem and obtain a sharper expected-value type of
convergence rate than the one implied in [11]. Also, we obtain a better
high-probability type of iteration complexity, which improves upon the one in
[11] by at least the amount $O(n/\epsilon)$, where $\epsilon$ is the target
solution accuracy and $n$ is the number of problem blocks. In addition, for
unconstrained smooth convex minimization, we develop a new technique called
{\it randomized estimate sequence} to analyze the accelerated RBCD method
proposed by Nesterov [11] and establish a sharper expected-value type of
convergence rate than the one given in [11].
| [
"['Zhaosong Lu' 'Lin Xiao']",
"Zhaosong Lu and Lin Xiao"
] |
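To illustrate the scheme being analyzed, the following is a minimal sketch of randomized block-coordinate descent on a smooth convex quadratic with per-block Lipschitz step sizes. It is illustrative only; the paper's setting also allows a block-separable nonsmooth term, which this sketch omits.

```python
# A minimal sketch of randomized block-coordinate descent (RBCD) on a
# smooth convex quadratic f(x) = 0.5 x'Ax - b'x (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n, block = 100, 10                       # 10 blocks of size 10
M = rng.standard_normal((n, n))
A = M.T @ M + np.eye(n)                  # positive definite Hessian
b = rng.standard_normal(n)
x = np.zeros(n)

blocks = [np.arange(i, i + block) for i in range(0, n, block)]
# Block Lipschitz constants: largest eigenvalue of each diagonal block.
L = [np.linalg.eigvalsh(A[np.ix_(B, B)]).max() for B in blocks]

for it in range(5000):
    i = int(rng.integers(len(blocks)))   # sample a block uniformly
    B = blocks[i]
    grad_B = A[B] @ x - b[B]             # partial gradient on block B
    x[B] -= grad_B / L[i]                # block gradient step

x_star = np.linalg.solve(A, b)
print("distance to optimum:", np.linalg.norm(x - x_star))
```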
cs.LG cs.CG | null | 1305.4757 | null | null | http://arxiv.org/pdf/1305.4757v1 | 2013-05-21T08:51:30Z | 2013-05-21T08:51:30Z | Power to the Points: Validating Data Memberships in Clusterings | A clustering is an implicit assignment of labels to points, based on
proximity to other points. It is these labels that are then used for downstream
analysis (either focusing on individual clusters, or identifying
representatives of clusters and so on). Thus, in order to trust a clustering as
a first step in exploratory data analysis, we must trust the labels assigned to
individual data. Without supervision, how can we validate this assignment? In
this paper, we present a method to attach affinity scores to the implicit
labels of individual points in a clustering. The affinity scores capture the
confidence level of the cluster that claims to "own" the point. This method is
very general: it can be used with clusterings derived from Euclidean data,
kernelized data, or even data derived from information spaces. It smoothly
incorporates importance functions on clusters, allowing us to weight different
clusters differently. It is also efficient: assigning an affinity score to a
point depends only polynomially on the number of clusters and is independent of
the number of points in the data. The dimensionality of the underlying space
only appears in preprocessing. We demonstrate the value of our approach with an
experimental study that illustrates the use of these scores in different data
analysis tasks, as well as the efficiency and flexibility of the method. We
also demonstrate useful visualizations of these scores; these might prove
useful within an interactive analytics framework.
| [
"Parasaran Raman and Suresh Venkatasubramanian",
"['Parasaran Raman' 'Suresh Venkatasubramanian']"
] |
math.OC cs.LG | 10.1214/14-AOP997 | 1305.4778 | null | null | http://arxiv.org/abs/1305.4778v4 | 2016-03-15T10:55:10Z | 2013-05-21T10:32:29Z | Zero-sum repeated games: Counterexamples to the existence of the
asymptotic value and the conjecture
$\operatorname{maxmin}=\operatorname{lim}v_n$ | Mertens [In Proceedings of the International Congress of Mathematicians
(Berkeley, Calif., 1986) (1987) 1528-1577 Amer. Math. Soc.] proposed two
general conjectures about repeated games: the first one is that, in any
two-person zero-sum repeated game, the asymptotic value exists, and the second
one is that, when Player 1 is more informed than Player 2, in the long run
Player 1 is able to guarantee the asymptotic value. We disprove these two
long-standing conjectures by providing an example of a zero-sum repeated game
with public signals and perfect observation of the actions, where the value of
the $\lambda$-discounted game does not converge when $\lambda$ goes to 0. The
aforementioned example involves seven states, two actions and two signals for
each player. Remarkably, players observe the payoffs, and play in turn.
| [
"Bruno Ziliotto",
"['Bruno Ziliotto']"
] |
cs.AI cs.LG | 10.1109/IJCNN.2009.5178616 | 1305.4955 | null | null | http://arxiv.org/abs/1305.4955v2 | 2013-06-26T21:59:35Z | 2013-05-21T20:29:02Z | A Data Mining Approach to Solve the Goal Scoring Problem | In soccer, scoring goals is a fundamental objective which depends on many
conditions and constraints. Considering the RoboCup soccer 2D-simulator, this
paper presents a data mining-based decision system to identify the best time
and direction to kick the ball towards the goal to maximize the overall chances
of scoring during a simulated soccer match. Following the CRISP-DM methodology,
data for modeling were extracted from matches of major international
tournaments (10691 kicks), knowledge about soccer was embedded via
transformation of variables and a Multilayer Perceptron was used to estimate
the scoring chance. Experimental performance assessment to compare this
approach against a previous LDA-based approach was conducted over 100 matches.
Several statistical metrics were used to analyze the performance of the system
and the results showed an increase of 7.7% in the number of kicks, producing an
overall increase of 78% in the number of goals scored.
| [
"Renato Oliveira and Paulo Adeodato and Arthur Carvalho and Icamaan\n Viegas and Christian Diego and Tsang Ing-Ren",
"['Renato Oliveira' 'Paulo Adeodato' 'Arthur Carvalho' 'Icamaan Viegas'\n 'Christian Diego' 'Tsang Ing-Ren']"
] |
cs.AI cs.LG stat.ML | null | 1305.4987 | null | null | http://arxiv.org/pdf/1305.4987v2 | 2014-04-29T07:32:58Z | 2013-05-21T23:36:18Z | Robust Logistic Regression using Shift Parameters (Long Version) | Annotation errors can significantly hurt classifier performance, yet datasets
are only growing noisier with the increased use of Amazon Mechanical Turk and
techniques like distant supervision that automatically generate labels. In this
paper, we present a robust extension of logistic regression that incorporates
the possibility of mislabelling directly into the objective. Our model can be
trained through nearly the same means as logistic regression, and retains its
efficiency on high-dimensional datasets. Through named entity recognition
experiments, we demonstrate that our approach can provide a significant
improvement over the standard model when annotation errors are present.
| [
"Julie Tibshirani and Christopher D. Manning",
"['Julie Tibshirani' 'Christopher D. Manning']"
] |
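A minimal sketch of the idea above: logistic regression augmented with one L1-penalized shift parameter per training example, so that persistent errors can be absorbed by the shifts rather than distorting the weights. The toy data, plain subgradient solver, and hyperparameters are illustrative assumptions, not the authors' exact training procedure.

```python
# Logistic regression with per-example shift parameters (sketch).
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = (X @ w_true > 0).astype(float)
flip = rng.random(n) < 0.1                # corrupt 10% of the labels
y[flip] = 1 - y[flip]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)
gamma = np.zeros(n)                       # one shift per training example
lam, lr = 0.1, 0.5
for it in range(3000):
    p = sigmoid(X @ w + gamma)            # shifts absorb mislabelled points
    err = p - y
    w -= lr * (X.T @ err) / n
    gamma -= lr * (err / n + (lam / n) * np.sign(gamma))  # L1-penalised

# Points with the largest |shifts| should be the mislabelled ones.
suspect = np.argsort(-np.abs(gamma))[:flip.sum()]
print("flipped labels among the largest |shifts|:", flip[suspect].mean())
```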
math.ST cs.LG stat.ML stat.TH | null | 1305.5029 | null | null | http://arxiv.org/pdf/1305.5029v2 | 2014-04-29T22:02:35Z | 2013-05-22T06:30:46Z | Divide and Conquer Kernel Ridge Regression: A Distributed Algorithm with
Minimax Optimal Rates | We establish optimal convergence rates for a decomposition-based scalable
approach to kernel ridge regression. The method is simple to describe: it
randomly partitions a dataset of size N into m subsets of equal size, computes
an independent kernel ridge regression estimator for each subset, then averages
the local solutions into a global predictor. This partitioning leads to a
substantial reduction in computation time versus the standard approach of
performing kernel ridge regression on all N samples. Our two main theorems
establish that despite the computational speed-up, statistical optimality is
retained: as long as m is not too large, the partition-based estimator achieves
the statistical minimax rate over all estimators using the set of N samples. As
concrete examples, our theory guarantees that the number of processors m may
grow nearly linearly for finite-rank kernels and Gaussian kernels and
polynomially in N for Sobolev spaces, which in turn allows for substantial
reductions in computational cost. We conclude with experiments on both
simulated data and a music-prediction task that complement our theoretical
results, exhibiting the computational and statistical benefits of our approach.
| [
"Yuchen Zhang and John C. Duchi and Martin J. Wainwright",
"['Yuchen Zhang' 'John C. Duchi' 'Martin J. Wainwright']"
] |
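The method is simple enough to sketch directly: partition the data, fit one kernel ridge regressor per subset, and average the predictors. The Gaussian kernel and all hyperparameters below are illustrative choices.

```python
# A minimal sketch of divide-and-conquer kernel ridge regression.
import numpy as np

def krr_fit(X, y, lam, gamma):
    """Fit kernel ridge regression; returns dual coefficients."""
    K = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    return np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)

def krr_predict(Xtr, alpha, Xte, gamma):
    K = np.exp(-gamma * ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1))
    return K @ alpha

rng = np.random.default_rng(0)
N, m = 2000, 4                            # N samples split into m subsets
X = rng.uniform(-3, 3, size=(N, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(N)
Xte = np.linspace(-3, 3, 200)[:, None]

parts = np.array_split(rng.permutation(N), m)
preds = []
for idx in parts:                         # fit one local KRR per subset ...
    alpha = krr_fit(X[idx], y[idx], lam=1e-3, gamma=1.0)
    preds.append(krr_predict(X[idx], alpha, Xte, gamma=1.0))
yhat = np.mean(preds, axis=0)             # ... and average into a global predictor
print("test RMSE:", np.sqrt(np.mean((yhat - np.sin(Xte[:, 0])) ** 2)))
```

Each local fit costs O((N/m)^3) instead of O(N^3), which is the computational saving the abstract describes; the theorems bound how large m can be before the minimax rate is lost.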
cs.LG cs.IR cs.SD | null | 1305.5078 | null | null | http://arxiv.org/pdf/1305.5078v1 | 2013-05-22T10:43:25Z | 2013-05-22T10:43:25Z | A Comparison of Random Forests and Ferns on Recognition of Instruments
in Jazz Recordings | In this paper, we first apply random ferns for classification of real music
recordings of a jazz band. No initial segmentation of audio data is assumed,
i.e., no onset, offset, nor pitch data are needed. The notion of random ferns
is described in the paper, to familiarize the reader with this classification
algorithm, which was introduced quite recently and applied so far in image
recognition tasks. The performance of random ferns is compared with random
forests for the same data. The results of experiments are presented in the
paper, and conclusions are drawn.
| [
"['Alicja A. Wieczorkowska' 'Miron B. Kursa']",
"Alicja A. Wieczorkowska, Miron B. Kursa"
] |
cs.CV cs.LG stat.ML | null | 1305.5306 | null | null | http://arxiv.org/pdf/1305.5306v1 | 2013-05-23T03:35:31Z | 2013-05-23T03:35:31Z | A Supervised Neural Autoregressive Topic Model for Simultaneous Image
Classification and Annotation | Topic modeling based on latent Dirichlet allocation (LDA) has been a
framework of choice to perform scene recognition and annotation. Recently, a
new type of topic model called the Document Neural Autoregressive Distribution
Estimator (DocNADE) was proposed and demonstrated state-of-the-art performance
for document modeling. In this work, we show how to successfully apply and
extend this model to the context of visual scene modeling. Specifically, we
propose SupDocNADE, a supervised extension of DocNADE, that increases the
discriminative power of the hidden topic features by incorporating label
information into the training objective of the model. We also describe how to
leverage information about the spatial position of the visual words and how to
embed additional image annotations, so as to simultaneously perform image
classification and annotation. We test our model on the Scene15, LabelMe and
UIUC-Sports datasets and show that it compares favorably to other topic models
such as the supervised variant of LDA.
| [
"['Yin Zheng' 'Yu-Jin Zhang' 'Hugo Larochelle']",
"Yin Zheng, Yu-Jin Zhang, Hugo Larochelle"
] |
math.OC cs.GT cs.LG stat.ML | null | 1305.5399 | null | null | http://arxiv.org/pdf/1305.5399v1 | 2013-05-23T12:44:29Z | 2013-05-23T12:44:29Z | A Primal Condition for Approachability with Partial Monitoring | In approachability with full monitoring there are two types of conditions
that are known to be equivalent for convex sets: a primal and a dual condition.
The primal one is of the form: a set C is approachable if and only if all
containing half-spaces are approachable in the one-shot game; while the dual
one is of the form: a convex set C is approachable if and only if it intersects
all payoff sets of a certain form. We consider approachability in games with
partial monitoring. In previous works (Perchet 2011; Mannor et al. 2011) we
provided a dual characterization of approachable convex sets; we also exhibited
efficient strategies in the case where C is a polytope. In this paper we
provide primal conditions on a convex set to be approachable with partial
monitoring. They depend on a modified reward function and lead to
approachability strategies, based on modified payoff functions, that proceed by
projections similarly to Blackwell's (1956) strategy; this is in contrast with
previously studied strategies in this context that relied mostly on the
signaling structure and aimed at estimating well the distributions of the
signals received. Our results generalize classical results by Kohlberg 1975
(see also Mertens et al. 1994) and apply to games with arbitrary signaling
structure as well as to arbitrary convex sets.
| [
"['Shie Mannor' 'Vianney Perchet' 'Gilles Stoltz']",
"Shie Mannor (EE-Technion), Vianney Perchet (LPMA), Gilles Stoltz\n (INRIA Paris - Rocquencourt, DMA, GREGH)"
] |
stat.ML cs.LG | null | 1305.5734 | null | null | http://arxiv.org/pdf/1305.5734v1 | 2013-05-24T13:51:20Z | 2013-05-24T13:51:20Z | Characterizing A Database of Sequential Behaviors with Latent Dirichlet
Hidden Markov Models | This paper proposes a generative model, the latent Dirichlet hidden Markov
model (LDHMM), for characterizing a database of sequential behaviors
(sequences). LDHMMs posit that each sequence is generated by an underlying
Markov chain process, which is controlled by the corresponding parameters
(i.e., the initial state vector, transition matrix and the emission matrix).
These sequence-level latent parameters for each sequence are modeled as latent
Dirichlet random variables and parameterized by a set of deterministic
database-level hyper-parameters. In this way, we expect to model the
sequences at two levels: the database level by deterministic hyper-parameters
and the sequence level by latent parameters. To learn the deterministic
hyper-parameters and approximate posteriors of parameters in LDHMMs, we propose
an iterative algorithm under the variational EM framework, which consists of E
and M steps. We examine two different schemes, the fully-factorized and
partially-factorized forms, for the framework, based on different assumptions.
We present empirical results of behavior modeling and sequence classification
on three real-world data sets, and compare them to other related models. The
experimental results show that the proposed LDHMMs produce better
generalization performance in terms of log-likelihood and deliver competitive
results on the sequence classification problem.
| [
"Yin Song, Longbing Cao, Xuhui Fan, Wei Cao and Jian Zhang",
"['Yin Song' 'Longbing Cao' 'Xuhui Fan' 'Wei Cao' 'Jian Zhang']"
] |
stat.ML cs.LG cs.SI physics.data-an | null | 1305.5782 | null | null | http://arxiv.org/pdf/1305.5782v1 | 2013-05-24T16:32:10Z | 2013-05-24T16:32:10Z | Adapting the Stochastic Block Model to Edge-Weighted Networks | We generalize the stochastic block model to the important case in which edges
are annotated with weights drawn from an exponential family distribution. This
generalization introduces several technical difficulties for model estimation,
which we solve using a Bayesian approach. We introduce a variational algorithm
that efficiently approximates the model's posterior distribution for dense
graphs. In specific numerical experiments on edge-weighted networks, this
weighted stochastic block model outperforms the common approach of first
applying a single threshold to all weights and then applying the classic
stochastic block model, which can obscure latent block structure in networks.
This model will enable the recovery of latent structure in a broader range of
network data than was previously possible.
| [
"Christopher Aicher, Abigail Z. Jacobs, Aaron Clauset",
"['Christopher Aicher' 'Abigail Z. Jacobs' 'Aaron Clauset']"
] |
stat.ML cs.DC cs.LG | null | 1305.5826 | null | null | http://arxiv.org/pdf/1305.5826v1 | 2013-05-24T19:00:28Z | 2013-05-24T19:00:28Z | Parallel Gaussian Process Regression with Low-Rank Covariance Matrix
Approximations | Gaussian processes (GP) are Bayesian non-parametric models that are widely
used for probabilistic regression. Unfortunately, they cannot scale well to
large data nor perform real-time predictions due to their cubic time cost in the
data size. This paper presents two parallel GP regression methods that exploit
low-rank covariance matrix approximations for distributing the computational
load among parallel machines to achieve time efficiency and scalability. We
theoretically guarantee the predictive performances of our proposed parallel
GPs to be equivalent to that of some centralized approximate GP regression
methods: The computation of their centralized counterparts can be distributed
among parallel machines, hence achieving greater time efficiency and
scalability. We analytically compare the properties of our parallel GPs such as
time, space, and communication complexity. Empirical evaluation on two
real-world datasets in a cluster of 20 computing nodes shows that our parallel
GPs are significantly more time-efficient and scalable than their centralized
counterparts and exact/full GP while achieving predictive performances
comparable to full GP.
| [
"['Jie Chen' 'Nannan Cao' 'Kian Hsiang Low' 'Ruofei Ouyang'\n 'Colin Keng-Yan Tan' 'Patrick Jaillet']",
"Jie Chen, Nannan Cao, Kian Hsiang Low, Ruofei Ouyang, Colin Keng-Yan\n Tan, Patrick Jaillet"
] |
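To give a flavor of why low-rank covariance approximations parallelize, here is a sketch in which each data partition contributes two small sufficient statistics that are simply summed before one central solve (a subset-of-regressors / Nyström-style approximation). This is a hedged stand-in for the general idea, not the authors' specific parallel GPs.

```python
# Low-rank (subset-of-regressors) GP regression whose key statistics
# decompose over data partitions (illustrative sketch).
import numpy as np

def rbf(A, B, ls=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

rng = np.random.default_rng(0)
N, m, noise = 4000, 8, 0.1
X = rng.uniform(-3, 3, (N, 1))
y = np.sin(2 * X[:, 0]) + noise * rng.standard_normal(N)
Z = np.linspace(-3, 3, 30)[:, None]       # low-rank inducing set

# Each "machine" holds one partition and computes two small statistics.
parts = np.array_split(np.arange(N), m)
A = np.zeros((len(Z), len(Z)))            # sum_i K_ZXi @ K_XiZ
b = np.zeros(len(Z))                      # sum_i K_ZXi @ y_i
for idx in parts:
    Kzx = rbf(Z, X[idx])
    A += Kzx @ Kzx.T
    b += Kzx @ y[idx]

# Combine once, centrally: subset-of-regressors predictive mean.
Kzz = rbf(Z, Z)
w = np.linalg.solve(A + noise**2 * Kzz, b)
Xte = np.linspace(-3, 3, 100)[:, None]
mean = rbf(Xte, Z) @ w
print("test RMSE:", np.sqrt(np.mean((mean - np.sin(2 * Xte[:, 0])) ** 2)))
```

The per-partition statistics are small (size of the inducing set), so only they need to be communicated, which is the source of the scalability claimed above.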
math.NA cs.LG cs.NA | null | 1305.5829 | null | null | http://arxiv.org/pdf/1305.5829v1 | 2013-05-24T19:09:02Z | 2013-05-24T19:09:02Z | A Symmetric Rank-one Quasi Newton Method for Non-negative Matrix
Factorization | As is well known, nonnegative matrix factorization (NMF) is a dimension
reduction method that has been widely used in image processing, text
compression and signal processing, etc. In this paper, an algorithm for
nonnegative matrix approximation is proposed. This method is mainly based on the
active set and the quasi-Newton type algorithm, by using the symmetric rank-one
and negative curvature direction technologies to approximate the Hessian
matrix. Our method improves the recent results of those methods in [Pattern
Recognition, 45(2012)3557-3565; SIAM J. Sci. Comput., 33(6)(2011)3261-3281;
Neural Computation, 19(10)(2007)2756-2779, etc.]. Moreover, the objective function
decreases faster than many other NMF methods. In addition, some numerical
experiments are presented on synthetic data, image processing and text
clustering. By comparing with six other nonnegative matrix approximation
methods, our experiments confirm our analysis.
| [
"Shu-Zhen Lai, Hou-Biao Li, Zu-Tao Zhang",
"['Shu-Zhen Lai' 'Hou-Biao Li' 'Zu-Tao Zhang']"
] |
cs.LG cs.CE | 10.5121/csit.2013.3305 | 1305.6046 | null | null | http://arxiv.org/abs/1305.6046v1 | 2013-05-26T18:16:52Z | 2013-05-26T18:16:52Z | Supervised Feature Selection for Diagnosis of Coronary Artery Disease
Based on Genetic Algorithm | Feature Selection (FS) has become the focus of much research on decision
support systems areas for which data sets with tremendous number of variables
are analyzed. In this paper we present a new method for the diagnosis of
Coronary Artery Disease (CAD) founded on Genetic Algorithm (GA)-wrapped Naive
Bayes (NB) based FS. Basically, the CAD dataset contains two classes defined with
13 features. In the GA-NB algorithm, GA generates in each iteration a subset of
attributes that is evaluated using NB in the second step of the
selection procedure. The final set of attributes contains the most relevant
features, which increase the accuracy. The algorithm in this case produces
85.50% classification accuracy in the diagnosis of CAD. The performance of the
algorithm is then compared with that of Support Vector Machine (SVM),
MultiLayer Perceptron (MLP) and the C4.5 decision tree algorithm. The
classification accuracies for those algorithms are, respectively, 83.5%, 83.16%
and 80.85%. The GA-wrapped NB algorithm is also compared with other FS
algorithms. The obtained results show very promising
outcomes for the diagnosis of CAD.
| [
"['Sidahmed Mokeddem' 'Baghdad Atmani' 'Mostefa Mokaddem']",
"Sidahmed Mokeddem, Baghdad Atmani and Mostefa Mokaddem"
] |
cs.LG cs.AI cs.MA cs.RO | null | 1305.6129 | null | null | http://arxiv.org/pdf/1305.6129v1 | 2013-05-27T07:28:05Z | 2013-05-27T07:28:05Z | Information-Theoretic Approach to Efficient Adaptive Path Planning for
Mobile Robotic Environmental Sensing | Recent research in robot exploration and mapping has focused on sampling
environmental hotspot fields. This exploration task is formalized by Low,
Dolan, and Khosla (2008) in a sequential decision-theoretic planning under
uncertainty framework called MASP. The time complexity of solving MASP
approximately depends on the map resolution, which limits its use in
large-scale, high-resolution exploration and mapping. To alleviate this
computational difficulty, this paper presents an information-theoretic approach
to MASP (iMASP) for efficient adaptive path planning; by reformulating the
cost-minimizing iMASP as a reward-maximizing problem, its time complexity
becomes independent of map resolution and is less sensitive to increasing robot
team size as demonstrated both theoretically and empirically. Using the
reward-maximizing dual, we derive a novel adaptive variant of maximum entropy
sampling, thus improving the induced exploration policy performance. It also
allows us to establish theoretical bounds quantifying the performance advantage
of optimal adaptive over non-adaptive policies and the performance quality of
approximately optimal vs. optimal adaptive policies. We show analytically and
empirically the superior performance of iMASP-based policies for sampling the
log-Gaussian process to that of policies for the widely-used Gaussian process
in mapping the hotspot field. Lastly, we provide sufficient conditions that,
when met, guarantee adaptivity has no benefit under an assumed environment
model.
| [
"Kian Hsiang Low, John M. Dolan, Pradeep Khosla",
"['Kian Hsiang Low' 'John M. Dolan' 'Pradeep Khosla']"
] |
cs.CL cs.IR cs.LG | 10.1007/978-3-642-41278-3_24 | 1305.6143 | null | null | http://arxiv.org/abs/1305.6143v2 | 2013-09-16T05:36:29Z | 2013-05-27T08:37:26Z | Fast and accurate sentiment classification using an enhanced Naive Bayes
model | We have explored different methods of improving the accuracy of a Naive Bayes
classifier for sentiment analysis. We observed that a combination of methods
like negation handling, word n-grams and feature selection by mutual
information results in a significant improvement in accuracy. This implies that
a highly accurate and fast sentiment classifier can be built using a simple
Naive Bayes model that has linear training and testing time complexities. We
achieved an accuracy of 88.80% on the popular IMDB movie reviews dataset.
| [
"['Vivek Narayanan' 'Ishan Arora' 'Arjun Bhatia']",
"Vivek Narayanan, Ishan Arora, Arjun Bhatia"
] |
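A minimal sketch of such a pipeline, assuming scikit-learn: crude negation handling (prefixing tokens after a negation word), word n-grams, and mutual-information feature selection feeding a multinomial Naive Bayes. The toy corpus and all parameters are illustrative.

```python
# Enhanced Naive Bayes sentiment pipeline (illustrative sketch).
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def handle_negation(text):
    """Prefix tokens after a negation word with NOT_ until punctuation."""
    out, negate = [], False
    for tok in re.findall(r"[\w']+|[.,!?;]", text.lower()):
        if tok in {"not", "no", "never", "n't"}:
            negate = True
        elif tok in ".,!?;":
            negate = False
        elif negate:
            tok = "NOT_" + tok
        out.append(tok)
    return " ".join(out)

texts = ["a great movie", "not a great movie", "boring and slow",
         "never boring", "I loved it", "I did not love it"]
labels = [1, 0, 0, 1, 1, 0]

clf = make_pipeline(
    CountVectorizer(preprocessor=handle_negation, ngram_range=(1, 2)),
    SelectKBest(mutual_info_classif, k=10),   # mutual-information selection
    MultinomialNB(),
)
clf.fit(texts, labels)
print(clf.predict(["not boring", "not great"]))
```

All three stages have linear (in corpus size) training and prediction costs, which is what makes the overall classifier fast, as the abstract emphasizes.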
math.ST cs.CG cs.LG math.GT stat.TH | null | 1305.6239 | null | null | http://arxiv.org/pdf/1305.6239v1 | 2013-05-27T14:37:29Z | 2013-05-27T14:37:29Z | Optimal rates of convergence for persistence diagrams in Topological
Data Analysis | Computational topology has recently seen important developments toward
data analysis, giving birth to the field of topological data analysis.
Topological persistence, or persistent homology, appears as a fundamental tool
in this field. In this paper, we study topological persistence in general
metric spaces, with a statistical approach. We show that the use of persistent
homology can be naturally considered in general statistical frameworks and
persistence diagrams can be used as statistics with interesting convergence
properties. Some numerical experiments are performed in various contexts to
illustrate our results.
| [
"['Frédéric Chazal' 'Marc Glisse' 'Catherine Labruère' 'Bertrand Michel']",
"Fr\\'ed\\'eric Chazal and Marc Glisse and Catherine Labru\\`ere and\n Bertrand Michel"
] |
cs.LG cs.RO stat.ML | 10.1109/CIG.2011.6031994 | 1305.6568 | null | null | http://arxiv.org/abs/1305.6568v1 | 2013-05-28T17:47:08Z | 2013-05-28T17:47:08Z | Reinforcement Learning for the Soccer Dribbling Task | We propose a reinforcement learning solution to the \emph{soccer dribbling
task}, a scenario in which a soccer agent has to go from the beginning to the
end of a region keeping possession of the ball, as an adversary attempts to
gain possession. While the adversary uses a stationary policy, the dribbler
learns the best action to take at each decision point. After defining
meaningful variables to represent the state space, and high-level macro-actions
to incorporate domain knowledge, we describe our application of the
reinforcement learning algorithm \emph{Sarsa} with CMAC for function
approximation. Our experiments show that, after the training period, the
dribbler is able to accomplish its task against a strong adversary around 58%
of the time.
| [
"Arthur Carvalho and Renato Oliveira",
"['Arthur Carvalho' 'Renato Oliveira']"
] |
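For readers unfamiliar with the ingredients, here is a minimal sketch of Sarsa with CMAC-style coarse tile coding on a hypothetical one-dimensional corridor task, with a random pushback standing in for the adversary; the soccer state variables and macro-actions above are replaced by toy stand-ins.

```python
# Sarsa with linear function approximation over coarse tiles (sketch).
import numpy as np

rng = np.random.default_rng(0)
n_pos, n_tilings, width = 20, 4, 4
tiles_per = (n_pos + width) // width + 1
actions = [-1, +1]                         # toy stand-ins for macro-actions
w = np.zeros((len(actions), n_tilings * tiles_per))
alpha, gamma, eps = 0.1 / n_tilings, 0.95, 0.1

def phi(pos):
    """CMAC-style coarse coding: one active tile per offset tiling."""
    f = np.zeros(n_tilings * tiles_per)
    for t in range(n_tilings):
        f[t * tiles_per + (pos + t) // width] = 1.0
    return f

def choose(pos):
    if rng.random() < eps:
        return int(rng.integers(len(actions)))
    q = np.array([w[a] @ phi(pos) for a in range(len(actions))])
    return int(rng.choice(np.flatnonzero(q == q.max())))  # random tie-break

for episode in range(200):
    pos, a = 0, choose(0)
    while pos < n_pos - 1:
        nxt = max(0, pos + actions[a])
        if rng.random() < 0.2:             # crude "adversary": pushed back
            nxt = max(0, nxt - 1)
        r = 1.0 if nxt == n_pos - 1 else 0.0
        a2 = choose(nxt)
        q_next = 0.0 if nxt == n_pos - 1 else w[a2] @ phi(nxt)
        td = r + gamma * q_next - w[a] @ phi(pos)   # Sarsa TD error
        w[a] += alpha * td * phi(pos)
        pos, a = nxt, a2

q0 = [w[a] @ phi(0) for a in range(len(actions))]
print("greedy action at the start state:", actions[int(np.argmax(q0))])
```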
cs.LG stat.ML | null | 1305.6646 | null | null | http://arxiv.org/pdf/1305.6646v1 | 2013-05-28T22:12:59Z | 2013-05-28T22:12:59Z | Normalized Online Learning | We introduce online learning algorithms which are independent of feature
scales, proving regret bounds that depend on the ratio of scales present in the
data rather than on the absolute scale. This has several useful effects: there is
no need to pre-normalize data, the test-time and test-space complexity are
reduced, and the algorithms are more robust.
| [
"Stephane Ross and Paul Mineiro and John Langford",
"['Stephane Ross' 'Paul Mineiro' 'John Langford']"
] |
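A simplified sketch of the scale-invariant idea (not the paper's exact algorithm): track a running per-feature scale online and divide each gradient coordinate by the squared scale, so no pre-normalization pass over the data is needed.

```python
# Scale-invariant online gradient descent (simplified sketch).
import numpy as np

rng = np.random.default_rng(0)
d, T, eta = 5, 5000, 0.5
scales = np.array([1.0, 10.0, 100.0, 0.01, 1000.0])  # wildly mixed units
w_true = rng.standard_normal(d) / scales
w = np.zeros(d)
s = np.full(d, 1e-12)                      # running max |x_i| per feature

for t in range(T):
    x = rng.standard_normal(d) * scales
    y = float(x @ w_true > 0)
    s = np.maximum(s, np.abs(x))           # update scales before the step
    p = 1.0 / (1.0 + np.exp(-x @ w))
    g = (p - y) * x                        # logistic loss gradient
    w -= eta * g / (s ** 2 * np.sqrt(t + 1))  # scale-corrected step

acc = 0
for _ in range(1000):
    x = rng.standard_normal(d) * scales
    acc += (x @ w > 0) == (x @ w_true > 0)
print("test accuracy:", acc / 1000)
```

Rescaling feature i by a constant c rescales both the gradient coordinate (by c) and the squared scale (by c^2), so the prediction x @ w is unchanged, which is the invariance the abstract claims.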
cs.LG stat.ML | null | 1305.6659 | null | null | http://arxiv.org/pdf/1305.6659v2 | 2013-11-01T18:25:39Z | 2013-05-28T23:59:16Z | Dynamic Clustering via Asymptotics of the Dependent Dirichlet Process
Mixture | This paper presents a novel algorithm, based upon the dependent Dirichlet
process mixture model (DDPMM), for clustering batch-sequential data containing
an unknown number of evolving clusters. The algorithm is derived via a
low-variance asymptotic analysis of the Gibbs sampling algorithm for the DDPMM,
and provides a hard clustering with convergence guarantees similar to those of
the k-means algorithm. Empirical results from a synthetic test with moving
Gaussian clusters and a test with real ADS-B aircraft trajectory data
demonstrate that the algorithm requires orders of magnitude less computational
time than contemporary probabilistic and hard clustering algorithms, while
providing higher accuracy on the examined datasets.
| [
"Trevor Campbell, Miao Liu, Brian Kulis, Jonathan P. How, Lawrence\n Carin",
"['Trevor Campbell' 'Miao Liu' 'Brian Kulis' 'Jonathan P. How'\n 'Lawrence Carin']"
] |
cs.LG | null | 1305.6663 | null | null | http://arxiv.org/pdf/1305.6663v4 | 2013-11-11T02:27:55Z | 2013-05-29T00:25:54Z | Generalized Denoising Auto-Encoders as Generative Models | Recent work has shown how denoising and contractive autoencoders implicitly
capture the structure of the data-generating density, in the case where the
corruption noise is Gaussian, the reconstruction error is the squared error,
and the data is continuous-valued. This has led to various proposals for
sampling from this implicitly learned density function, using Langevin and
Metropolis-Hastings MCMC. However, it remained unclear how to connect the
training procedure of regularized auto-encoders to the implicit estimation of
the underlying data-generating distribution when the data are discrete, or
using other forms of corruption process and reconstruction errors. Another
issue is the mathematical justification which is only valid in the limit of
small corruption noise. We propose here a different attack on the problem,
which deals with all these issues: arbitrary (but noisy enough) corruption,
arbitrary reconstruction loss (seen as a log-likelihood), handling both
discrete and continuous-valued variables, and removing the bias due to
non-infinitesimal corruption noise (or non-infinitesimal contractive penalty).
| [
"Yoshua Bengio, Li Yao, Guillaume Alain, and Pascal Vincent",
"['Yoshua Bengio' 'Li Yao' 'Guillaume Alain' 'Pascal Vincent']"
] |
cs.LG stat.ML | null | 1305.7057 | null | null | http://arxiv.org/pdf/1305.7057v1 | 2013-05-30T10:44:41Z | 2013-05-30T10:44:41Z | Predicting the Severity of Breast Masses with Data Mining Methods | Mammography is the most effective and available tool for breast cancer
screening. However, the low positive predictive value of breast biopsy
resulting from mammogram interpretation leads to approximately 70% unnecessary
biopsies with benign outcomes. Data mining algorithms could be used to help
physicians in their decisions to perform a breast biopsy on a suspicious lesion
seen in a mammogram image or to perform a short term follow-up examination
instead. In this research paper, the data mining classification algorithms
Decision Tree (DT), Artificial Neural Network (ANN), and Support Vector Machine
(SVM) are analyzed on a mammographic masses data set. The purpose of this study is to
increase the ability of physicians to determine the severity (benign or
malignant) of a mammographic mass lesion from BI-RADS attributes and the
patient's age. The whole data set is divided into training and test sets in a
70:30 ratio, and the performances of the classification
algorithms are compared through three statistical measures: sensitivity,
specificity, and classification accuracy. The accuracies of DT, ANN and SVM are
78.12%, 80.56% and 81.25% on test samples, respectively. Our analysis shows that
out of these three classification models, SVM predicts the severity of breast
cancer with the lowest error rate and highest accuracy.
| [
"['Sahar A. Mokhtar' 'Alaa. M. Elsayad']",
"Sahar A. Mokhtar and Alaa. M. Elsayad"
] |
cs.LG | null | 1305.7111 | null | null | http://arxiv.org/pdf/1305.7111v1 | 2013-05-30T13:52:32Z | 2013-05-30T13:52:32Z | Test cost and misclassification cost trade-off using reframing | Many solutions to cost-sensitive classification (and regression) rely on some
or all of the following assumptions: we have complete knowledge about the cost
context at training time, we can easily re-train whenever the cost context
changes, and we have technique-specific methods (such as cost-sensitive
decision trees) that can take advantage of that information. In this paper we
address the problem of selecting models and minimising joint cost (integrating
both misclassification cost and test costs) without any of the above
assumptions. We introduce methods and plots (such as the so-called JROC plots)
that can work with any off-the-shelf predictive technique, including ensembles,
such that we reframe the model to use the appropriate subset of attributes (the
feature configuration) during deployment time. In other words, models are
trained with the available attributes (once and for all) and then deployed by
setting missing values on the attributes that are deemed ineffective for
reducing the joint cost. As the number of feature configuration combinations
grows exponentially with the number of features we introduce quadratic methods
that are able to approximate the optimal configuration and model choices, as
shown by the experimental results.
| [
"Celestine Periale Maguedong-Djoumessi, Jos\\'e Hern\\'andez-Orallo",
"['Celestine Periale Maguedong-Djoumessi' 'José Hernández-Orallo']"
] |
cs.LG q-bio.QM stat.AP | null | 1305.7331 | null | null | http://arxiv.org/pdf/1305.7331v2 | 2013-06-05T04:56:15Z | 2013-05-31T09:15:47Z | Alternating Decision trees for early diagnosis of dengue fever | Dengue fever is a flu-like illness spread by the bite of an infected mosquito
which is fast emerging as a major health problem. Timely and cost effective
diagnosis using clinical and laboratory features would reduce the mortality
rates besides providing better grounds for clinical management and disease
surveillance. We wish to develop a robust and effective decision tree based
approach for predicting dengue disease. Our analysis is based on the clinical
characteristics and laboratory measurements of the diseased individuals. We
have developed and trained an alternating decision tree with boosting and
compared its performance with the C4.5 algorithm for dengue disease diagnosis. Of
the 65 patient records, 53 individuals were
confirmed to have dengue fever. The alternating decision tree based algorithm
was able to differentiate dengue fever using the clinical and laboratory
data, with 89% correctly classified instances, an F-measure of 0.86
and a receiver operating characteristic (ROC) of 0.826, compared to C4.5 with
78% correctly classified instances, an F-measure of 0.738 and an ROC of 0.617,
respectively. The alternating decision tree based approach with boosting has been
able to predict dengue fever with a higher degree of accuracy than C4.5 based
decision tree using simple clinical and laboratory features. Further analysis
on larger data sets is required to improve the sensitivity and specificity of
the alternating decision trees.
| [
"M. Naresh Kumar",
"['M. Naresh Kumar']"
] |
cs.LG stat.ML | null | 1305.7454 | null | null | http://arxiv.org/pdf/1305.7454v1 | 2013-05-31T15:28:44Z | 2013-05-31T15:28:44Z | Privileged Information for Data Clustering | Many machine learning algorithms assume that all input samples are
independently and identically distributed from some common distribution on
either the input space X, in the case of unsupervised learning, or the input
and output space X x Y in the case of supervised and semi-supervised learning.
In recent years, the relaxation of this assumption has been explored
and the importance of incorporation of additional information within machine
learning algorithms became more apparent. Traditionally such fusion of
information was the domain of semi-supervised learning. More recently the
inclusion of knowledge from separate hypothetical spaces has been proposed by
Vapnik as part of the supervised setting. In this work we are interested in
exploring Vapnik's idea of master-class learning and the associated learning
using privileged information, however within the unsupervised setting. Adoption
of the advanced supervised learning paradigm for the unsupervised setting
instigates investigation into the difference between privileged and technical
data. By means of our proposed aRi-MAX method, the stability of the KMeans
algorithm is improved and identification of the best clustering solution is achieved on
an artificial dataset. Subsequently an information theoretic dot product based
algorithm called P-Dot is proposed. This method has the ability to utilize a
wide variety of clustering techniques, individually or in combination, while
fusing privileged and technical data for improved clustering. Application of
the P-Dot method to the task of digit recognition confirms our findings in a
real-world scenario.
| [
"Jan Feyereisl, Uwe Aickelin",
"['Jan Feyereisl' 'Uwe Aickelin']"
] |
math.ST cs.LG math.OC stat.ME stat.ML stat.TH | null | 1305.7477 | null | null | http://arxiv.org/pdf/1305.7477v8 | 2014-10-11T05:54:58Z | 2013-05-31T16:24:17Z | On model selection consistency of regularized M-estimators | Regularized M-estimators are used in diverse areas of science and engineering
to fit high-dimensional models with some low-dimensional structure. Usually the
low-dimensional structure is encoded by the presence of the (unknown)
parameters in some low-dimensional model subspace. In such settings, it is
desirable for estimates of the model parameters to be \emph{model selection
consistent}: the estimates also fall in the model subspace. We develop a
general framework for establishing consistency and model selection consistency
of regularized M-estimators and show how it applies to some special cases of
interest in statistical learning. Our analysis identifies two key properties of
regularized M-estimators, referred to as geometric decomposability and
irrepresentability, that ensure the estimators are consistent and model
selection consistent.
| [
"['Jason D. Lee' 'Yuekai Sun' 'Jonathan E. Taylor']",
"Jason D. Lee, Yuekai Sun, Jonathan E. Taylor"
] |
cs.LG | null | 1306.0125 | null | null | http://arxiv.org/pdf/1306.0125v1 | 2013-06-01T15:48:58Z | 2013-06-01T15:48:58Z | Understanding ACT-R - an Outsider's Perspective | The ACT-R theory of cognition developed by John Anderson and colleagues
endeavors to explain how humans recall chunks of information and how they solve
problems. ACT-R also serves as a theoretical basis for "cognitive tutors",
i.e., automatic tutoring systems that help students learn mathematics, computer
programming, and other subjects. The official ACT-R definition is distributed
across a large body of literature spanning many articles and monographs, and
hence it is difficult for an "outsider" to learn the most important aspects of
the theory. This paper aims to provide a tutorial to the core components of the
ACT-R theory.
| [
"Jacob Whitehill",
"['Jacob Whitehill']"
] |
cs.LG cs.DS | null | 1306.0155 | null | null | http://arxiv.org/pdf/1306.0155v1 | 2013-06-01T22:00:03Z | 2013-06-01T22:00:03Z | Dynamic Ad Allocation: Bandits with Budgets | We consider an application of multi-armed bandits to internet advertising
(specifically, to dynamic ad allocation in the pay-per-click model, with
uncertainty on the click probabilities). We focus on an important practical
issue: advertisers are constrained in how much money they can spend on
their ad campaigns. This issue has not been considered in the prior work on
bandit-based approaches for ad allocation, to the best of our knowledge.
We define a simple, stylized model where an algorithm picks one ad to display
in each round, and each ad has a \emph{budget}: the maximal amount of money
that can be spent on this ad. This model admits a natural variant of UCB1, a
well-known algorithm for multi-armed bandits with stochastic rewards. We derive
strong provable guarantees for this algorithm.
| [
"Aleksandrs Slivkins",
"['Aleksandrs Slivkins']"
] |
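A minimal sketch of the stylized model with a UCB1-style rule: each ad pays a fixed price per click and drops out of the candidate set once its budget can no longer cover another click. The click rates, prices, and budgets are toy values; this illustrates the model, not the paper's exact algorithm or guarantees.

```python
# Budgeted UCB1-style ad allocation (illustrative sketch).
import math, random

random.seed(0)
ctr    = [0.10, 0.05, 0.02]    # unknown click probabilities
budget = [5.0, 5.0, 5.0]       # maximum spend per ad
price  = [1.0, 1.0, 1.0]       # payment per click
spend, clicks, pulls = [0.0] * 3, [0] * 3, [0] * 3
revenue, t = 0.0, 0

def ucb(i):
    """Standard UCB1 index; untried ads get priority."""
    if pulls[i] == 0:
        return float("inf")
    return clicks[i] / pulls[i] + math.sqrt(2 * math.log(t) / pulls[i])

while True:
    live = [i for i in range(3) if spend[i] + price[i] <= budget[i]]
    if not live:                            # every budget is exhausted
        break
    t += 1
    i = max(live, key=ucb)                  # display the best live ad
    pulls[i] += 1
    if random.random() < ctr[i]:            # the displayed ad was clicked
        clicks[i] += 1
        spend[i] += price[i]
        revenue += price[i]

print("rounds:", t, "revenue:", revenue, "pulls per ad:", pulls)
```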
stat.ML cs.IT cs.LG math.IT | null | 1306.0160 | null | null | http://arxiv.org/pdf/1306.0160v2 | 2015-06-12T11:45:50Z | 2013-06-02T00:45:12Z | Phase Retrieval using Alternating Minimization | Phase retrieval problems involve solving linear equations, but with missing
sign (or phase, for complex numbers) information. More than four decades after
it was first proposed, the seminal error reduction algorithm of (Gerchberg and
Saxton 1972) and (Fienup 1982) is still the popular choice for solving many
variants of this problem. The algorithm is based on alternating minimization;
i.e. it alternates between estimating the missing phase information, and the
candidate solution. Despite its wide usage in practice, no global convergence
guarantees for this algorithm are known. In this paper, we show that a
(resampling) variant of this approach converges geometrically to the solution
of one such problem -- finding a vector $\mathbf{x}$ from
$\mathbf{y},\mathbf{A}$, where $\mathbf{y} =
\left|\mathbf{A}^{\top}\mathbf{x}\right|$ and $|\mathbf{z}|$ denotes a vector
of element-wise magnitudes of $\mathbf{z}$ -- under the assumption that
$\mathbf{A}$ is Gaussian.
Empirically, we demonstrate that alternating minimization performs similarly to
recently proposed convex techniques for this problem (which are based on
"lifting" to a convex matrix problem) in sample complexity and robustness to
noise. However, it is much more efficient and can scale to large problems.
Analytically, for a resampling version of alternating minimization, we show
geometric convergence to the solution, and sample complexity that is off by log
factors from obvious lower bounds. We also establish close to optimal scaling
for the case when the unknown vector is sparse. Our work represents the first
theoretical guarantee for alternating minimization (albeit with resampling) for
any variant of phase retrieval problems in the non-convex setting.
| [
"Praneeth Netrapalli and Prateek Jain and Sujay Sanghavi",
"['Praneeth Netrapalli' 'Prateek Jain' 'Sujay Sanghavi']"
] |
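The alternating scheme is short enough to sketch. Below is a minimal real-valued toy instance: alternately estimate the missing signs and solve the induced least-squares problem. For simplicity it uses a random initialization and no resampling, whereas the paper's guarantees rely on a spectral initialization and resampling.

```python
# Alternating minimization for (real-valued) phase retrieval (sketch).
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 400
A = rng.standard_normal((m, n))           # Gaussian measurement matrix
x_true = rng.standard_normal(n)
y = np.abs(A @ x_true)                    # magnitudes only: signs are lost

x = rng.standard_normal(n)                # (the paper uses a spectral init)
for it in range(100):
    signs = np.sign(A @ x)                # 1) estimate the missing signs
    x, *_ = np.linalg.lstsq(A, signs * y, rcond=None)  # 2) least squares

err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
print("relative error (up to global sign):", err / np.linalg.norm(x_true))
```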
stat.ML cs.LG | null | 1306.0186 | null | null | http://arxiv.org/pdf/1306.0186v2 | 2014-01-09T11:14:27Z | 2013-06-02T09:37:53Z | RNADE: The real-valued neural autoregressive density-estimator | We introduce RNADE, a new model for joint density estimation of real-valued
vectors. Our model calculates the density of a datapoint as the product of
one-dimensional conditionals modeled using mixture density networks with shared
parameters. RNADE learns a distributed representation of the data, while having
a tractable expression for the calculation of densities. A tractable likelihood
allows direct comparison with other methods and training by standard
gradient-based optimizers. We compare the performance of RNADE on several
datasets of heterogeneous and perceptual data, finding it outperforms mixture
models in all but one case.
| [
"Benigno Uria, Iain Murray, Hugo Larochelle",
"['Benigno Uria' 'Iain Murray' 'Hugo Larochelle']"
] |
cs.LG | null | 1306.0237 | null | null | http://arxiv.org/pdf/1306.0237v3 | 2013-11-18T08:52:49Z | 2013-06-02T18:30:45Z | Guided Random Forest in the RRF Package | Random Forest (RF) is a powerful supervised learner and has been popularly
used in many applications such as bioinformatics.
In this work we propose the guided random forest (GRF) for feature selection.
Similar to a feature selection method called guided regularized random forest
(GRRF), GRF is built using the importance scores from an ordinary RF. However,
the trees in GRRF are built sequentially, are highly correlated and do not
allow for parallel computing, while the trees in GRF are built independently
and can be implemented in parallel. Experiments on 10 high-dimensional gene
data sets show that, with a fixed parameter value (without tuning the
parameter), RF applied to features selected by GRF outperforms RF applied to
all features on 9 data sets and 7 of them have significant differences at the
0.05 level. Therefore, both accuracy and interpretability are significantly
improved. GRF selects more features than GRRF, but leads to better
classification accuracy. Note that in this work the guided random forest is guided
by the importance scores from an ordinary random forest; however, it can also
be guided by other methods such as human insights (by specifying $\lambda_i$).
GRF can be used in "RRF" v1.4 (and later versions), a package that also
includes the regularized random forest methods.
| [
"Houtao Deng",
"['Houtao Deng']"
] |
cs.LG stat.ML | null | 1306.0239 | null | null | http://arxiv.org/pdf/1306.0239v4 | 2015-02-21T16:58:39Z | 2013-06-02T18:46:58Z | Deep Learning using Linear Support Vector Machines | Recently, fully-connected and convolutional neural networks have been trained
to achieve state-of-the-art performance on a wide variety of tasks such as
speech recognition, image classification, natural language processing, and
bioinformatics. For classification tasks, most of these "deep learning" models
employ the softmax activation function for prediction and minimize
cross-entropy loss. In this paper, we demonstrate a small but consistent
advantage of replacing the softmax layer with a linear support vector machine.
Learning minimizes a margin-based loss instead of the cross-entropy loss. While
there have been various combinations of neural nets and SVMs in prior art, our
results using L2-SVMs show that simply replacing softmax with linear SVMs
gives significant gains on the popular deep learning datasets MNIST, CIFAR-10, and
the ICML 2013 Representation Learning Workshop's face expression recognition
challenge.
| [
"Yichuan Tang",
"['Yichuan Tang']"
] |
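A minimal sketch of the swap described above: train a linear SVM with the squared (L2) hinge loss on top of features. To stay self-contained, the "deep features" here are a fixed random nonlinear projection, an illustrative stand-in for a learned network.

```python
# Linear L2-SVM on top of (stand-in) deep features (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
n, d, h = 500, 10, 32
X = rng.standard_normal((n, d))
y = np.sign(X[:, 0] + X[:, 1])            # labels in {-1, +1}

P = rng.standard_normal((d, h))
H = np.tanh(X @ P)                        # stand-in for learned features

w, b, C, lr = np.zeros(h), 0.0, 1.0, 0.1
for it in range(500):
    margin = y * (H @ w + b)
    slack = np.maximum(0.0, 1.0 - margin) # only violated margins contribute
    # L2-SVM objective: 0.5*||w||^2 + C * mean(max(0, 1 - y f(x))^2)
    grad_w = w - 2 * C * (H.T @ (y * slack)) / n
    grad_b = -2 * C * np.sum(y * slack) / n
    w -= lr * grad_w
    b -= lr * grad_b

print("training accuracy:", np.mean(np.sign(H @ w + b) == y))
```

In the paper's setting the gradient of this margin-based loss is backpropagated into the lower layers in place of the cross-entropy gradient; here the features are frozen only to keep the example short.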
cs.LG cs.IR | null | 1306.0271 | null | null | http://arxiv.org/pdf/1306.0271v1 | 2013-06-03T01:44:28Z | 2013-06-03T01:44:28Z | KERT: Automatic Extraction and Ranking of Topical Keyphrases from
Content-Representative Document Titles | We introduce KERT (Keyphrase Extraction and Ranking by Topic), a framework
for topical keyphrase generation and ranking. By shifting from the
unigram-centric traditional methods of unsupervised keyphrase extraction to a
phrase-centric approach, we are able to directly compare and rank phrases of
different lengths. We construct a topical keyphrase ranking function which
implements the four criteria that represent high quality topical keyphrases
(coverage, purity, phraseness, and completeness). The effectiveness of our
approach is demonstrated on two collections of content-representative titles in
the domains of Computer Science and Physics.
| [
"Marina Danilevsky, Chi Wang, Nihit Desai, Jingyi Guo, Jiawei Han",
"['Marina Danilevsky' 'Chi Wang' 'Nihit Desai' 'Jingyi Guo' 'Jiawei Han']"
] |
stat.ML cs.LG math.NA | null | 1306.0308 | null | null | http://arxiv.org/pdf/1306.0308v2 | 2014-02-12T12:51:32Z | 2013-06-03T06:56:47Z | Probabilistic Solutions to Differential Equations and their Application
to Riemannian Statistics | We study a probabilistic numerical method for the solution of both boundary
and initial value problems that returns a joint Gaussian process posterior over
the solution. Such methods have concrete value in the statistics on Riemannian
manifolds, where non-analytic ordinary differential equations are involved in
virtually all computations. The probabilistic formulation permits marginalising
the uncertainty of the numerical solution such that statistics are less
sensitive to inaccuracies. This leads to new Riemannian algorithms for mean
value computations and principal geodesic analysis. Marginalisation also means
results can be less precise than point estimates, enabling a noticeable
speed-up over the state of the art. Our approach is an argument for a wider
point that uncertainty caused by numerical calculations should be tracked
throughout the pipeline of machine learning algorithms.
| [
"Philipp Hennig and S{\\o}ren Hauberg",
"['Philipp Hennig' 'Søren Hauberg']"
] |
cs.LG stat.ML | null | 1306.0393 | null | null | http://arxiv.org/pdf/1306.0393v3 | 2017-02-18T00:34:19Z | 2013-06-03T13:10:35Z | Learning from networked examples in a k-partite graph | Many machine learning algorithms are based on the assumption that training
examples are drawn independently. However, this assumption does not hold
anymore when learning from a networked sample where two or more training
examples may share common features. We propose an efficient weighting method
for learning from networked examples and show a sample error bound which is
better than previous work.
| [
"Yuyi Wang, Jan Ramon and Zheng-Chu Guo",
"['Yuyi Wang' 'Jan Ramon' 'Zheng-Chu Guo']"
] |
cs.NE cs.LG | null | 1306.0514 | null | null | http://arxiv.org/pdf/1306.0514v4 | 2015-02-03T18:35:36Z | 2013-06-03T17:36:14Z | Riemannian metrics for neural networks II: recurrent networks and
learning symbolic data sequences | Recurrent neural networks are powerful models for sequential data, able to
represent complex dependencies in the sequence that simpler models such as
hidden Markov models cannot handle. Yet they are notoriously hard to train.
Here we introduce a training procedure using a gradient ascent in a Riemannian
metric: this produces an algorithm independent from design choices such as the
encoding of parameters and unit activities. This metric gradient ascent is
designed to have an algorithmic cost close to backpropagation through time for
sparsely connected networks. We use this procedure on gated leaky neural
networks (GLNNs), a variant of recurrent neural networks with an architecture
inspired by finite automata and an evolution equation inspired by
continuous-time networks. GLNNs trained with a Riemannian gradient are
demonstrated to effectively capture a variety of structures in synthetic
problems: basic block nesting as in context-free grammars (an important feature
of natural languages, but difficult to learn), intersections of multiple
independent Markov-type relations, or long-distance relationships such as the
distant-XOR problem. This method does not require adjusting the network
structure or initial parameters: the network used is a sparse random graph and
the initialization is identical for all problems considered.
| [
"Yann Ollivier",
"['Yann Ollivier']"
] |
cs.AI cs.LG | null | 1306.0539 | null | null | http://arxiv.org/pdf/1306.0539v1 | 2013-06-03T19:13:53Z | 2013-06-03T19:13:53Z | On the Performance Bounds of some Policy Search Dynamic Programming
Algorithms | We consider the infinite-horizon discounted optimal control problem
formalized by Markov Decision Processes. We focus on Policy Search algorithms
that compute an approximately optimal policy by following the standard Policy
Iteration (PI) scheme via an $\epsilon$-approximate greedy operator (Kakade and Langford,
2002; Lazaric et al., 2010). We describe existing and a few new performance
bounds for Direct Policy Iteration (DPI) (Lagoudakis and Parr, 2003; Fern et
al., 2006; Lazaric et al., 2010) and Conservative Policy Iteration (CPI)
(Kakade and Langford, 2002). By paying particular attention to the
concentrability constants involved in such guarantees, we notably argue that
the guarantee of CPI is much better than that of DPI, but this comes at the
cost of a relative (exponential in $\frac{1}{\epsilon}$) increase of time
complexity. We then describe an algorithm, Non-Stationary Direct Policy
Iteration (NSDPI), that can either be seen as 1) a variation of Policy Search
by Dynamic Programming by Bagnell et al. (2003) to the infinite horizon
situation or 2) a simplified version of the Non-Stationary PI with growing
period of Scherrer and Lesner (2012). We provide an analysis of this algorithm,
that shows in particular that it enjoys the best of both worlds: its
performance guarantee is similar to that of CPI, but within a time complexity
similar to that of DPI.
| [
"['Bruno Scherrer']",
"Bruno Scherrer (INRIA Nancy - Grand Est / LORIA)"
] |
cs.LG cs.CE | null | 1306.0541 | null | null | http://arxiv.org/pdf/1306.0541v1 | 2013-05-12T22:00:09Z | 2013-05-12T22:00:09Z | Identifying Pairs in Simulated Bio-Medical Time-Series | The paper presents a time-series-based classification approach to identify
similarities in pairs of simulated human-generated patterns. An example for a
pattern is a time-series representing a heart rate during a specific
time-range, wherein the time-series is a sequence of data points that represent
the changes in the heart rate values. A bio-medical simulator system was
developed to acquire a collection of 7,871 price patterns of financial
instruments. The financial instruments traded in real-time on three American
stock exchanges, NASDAQ, NYSE, and AMEX, simulate bio-medical measurements. The
system simulates a human in which each price pattern represents one bio-medical
sensor. Data provided during trading hours from the stock exchanges allowed
real-time classification. Classification is based on new machine learning
techniques: self-labeling, which allows the application of supervised learning
methods on unlabeled time-series, and similarity ranking, which is applied to a
decision tree learning algorithm to classify time-series regardless of type and
quantity.
| [
"Uri Kartoun",
"['Uri Kartoun']"
] |
cs.LG cs.NE stat.ML | null | 1306.0543 | null | null | http://arxiv.org/pdf/1306.0543v2 | 2014-10-27T11:49:08Z | 2013-06-03T19:16:26Z | Predicting Parameters in Deep Learning | We demonstrate that there is significant redundancy in the parameterization
of several deep learning models. Given only a few weight values for each
feature it is possible to accurately predict the remaining values. Moreover, we
show that not only can the parameter values be predicted, but many of them need
not be learned at all. We train several different architectures by learning
only a small number of weights and predicting the rest. In the best case we are
able to predict more than 95% of the weights of a network without any drop in
accuracy.
| [
"['Misha Denil' 'Babak Shakibi' 'Laurent Dinh' \"Marc'Aurelio Ranzato\"\n 'Nando de Freitas']",
"Misha Denil, Babak Shakibi, Laurent Dinh, Marc'Aurelio Ranzato, Nando\n de Freitas"
] |
cs.LG cs.DC stat.ML | null | 1306.0604 | null | null | http://arxiv.org/pdf/1306.0604v4 | 2020-01-25T23:23:11Z | 2013-06-03T21:49:19Z | Distributed k-Means and k-Median Clustering on General Topologies | This paper provides new algorithms for distributed clustering for two popular
center-based objectives, k-median and k-means. These algorithms have provable
guarantees and improve communication complexity over existing approaches.
Following a classic approach in clustering by \cite{har2004coresets}, we reduce
the problem of finding a clustering with low cost to the problem of finding a
coreset of small size. We provide a distributed method for constructing a
global coreset which improves over the previous methods by reducing the
communication complexity, and which works over general communication
topologies. Experimental results on large scale data sets show that this
approach outperforms other coreset-based distributed clustering algorithms.
| [
"['Maria Florina Balcan' 'Steven Ehrlich' 'Yingyu Liang']",
"Maria Florina Balcan, Steven Ehrlich, Yingyu Liang"
] |
stat.ML cs.LG | null | 1306.0618 | null | null | http://arxiv.org/pdf/1306.0618v3 | 2014-02-12T22:01:18Z | 2013-06-03T22:57:20Z | Prediction with Missing Data via Bayesian Additive Regression Trees | We present a method for incorporating missing data in non-parametric
statistical learning without the need for imputation. We focus on a tree-based
method, Bayesian Additive Regression Trees (BART), enhanced with "Missingness
Incorporated in Attributes," an approach recently proposed incorporating
missingness into decision trees (Twala, 2008). This procedure takes advantage
of the partitioning mechanisms found in tree-based models. Simulations on
generated models and real data indicate that our proposed method can forecast
well on complicated missing-at-random and not-missing-at-random models as well
as models where missingness itself influences the response. Our procedure has
higher predictive performance and is more stable than competitors in many
cases. We also illustrate BART's abilities to incorporate missingness into
uncertainty intervals and to detect the influence of missingness on the model
fit.
| [
"['Adam Kapelner' 'Justin Bleich']",
"Adam Kapelner and Justin Bleich"
] |
cs.LG cs.IT math.IT stat.ML | null | 1306.0626 | null | null | http://arxiv.org/pdf/1306.0626v1 | 2013-06-04T00:38:17Z | 2013-06-04T00:38:17Z | Provable Inductive Matrix Completion | Consider a movie recommendation system where apart from the ratings
information, side information such as user's age or movie's genre is also
available. Unlike standard matrix completion, in this setting one should be
able to predict inductively on new users/movies. In this paper, we study the
problem of inductive matrix completion in the exact recovery setting. That is,
we assume that the ratings matrix is generated by applying feature vectors to a
low-rank matrix and the goal is to recover back the underlying matrix.
Furthermore, we generalize the problem to that of low-rank matrix estimation
using rank-1 measurements. We study this generic problem and provide conditions
that the set of measurements should satisfy so that the alternating
minimization method (which otherwise is a non-convex method with no convergence
guarantees) is able to recover back the {\em exact} underlying low-rank matrix.
In addition to inductive matrix completion, we show that two other low-rank
estimation problems can be studied in our framework: a) general low-rank matrix
sensing using rank-1 measurements, and b) multi-label regression with missing
labels. For both the problems, we provide novel and interesting bounds on the
number of measurements required by alternating minimization to provably
converge to the {\em exact} low-rank matrix. In particular, our analysis for
the general low rank matrix sensing problem significantly improves the required
storage and computational cost over those required by the RIP-based matrix
sensing methods \cite{RechtFP2007}. Finally, we provide empirical validation of
our approach and demonstrate that alternating minimization is able to recover
the true matrix for the above mentioned problems using a small number of
measurements.
| [
"Prateek Jain and Inderjit S. Dhillon",
"['Prateek Jain' 'Inderjit S. Dhillon']"
] |
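A minimal sketch of alternating minimization for inductive matrix completion under the exact, noiseless model above: observed entries are $x_i^\top (UV^\top) y_j$, and each factor update is a least-squares solve. Sizes, sampling rate, and the random initialization are illustrative.

```python
# Alternating minimization for inductive matrix completion (sketch).
import numpy as np

rng = np.random.default_rng(0)
n1, n2, d1, d2, k = 60, 60, 8, 8, 2
X = rng.standard_normal((n1, d1))          # row (user) features
Y = rng.standard_normal((n2, d2))          # column (movie) features
W_true = rng.standard_normal((d1, k)) @ rng.standard_normal((k, d2))
M = X @ W_true @ Y.T                       # noiseless ratings

mask = rng.random((n1, n2)) < 0.3          # observe 30% of entries
obs = np.argwhere(mask)
vals = M[mask]

U = rng.standard_normal((d1, k))
V = rng.standard_normal((d2, k))
for it in range(30):
    # Solve for U with V fixed: each observation is linear in vec(U).
    D = np.stack([np.kron(V.T @ Y[j], X[i]) for i, j in obs])
    U = np.linalg.lstsq(D, vals, rcond=None)[0].reshape(k, d1).T
    # Solve for V with U fixed, symmetrically.
    D = np.stack([np.kron(U.T @ X[i], Y[j]) for i, j in obs])
    V = np.linalg.lstsq(D, vals, rcond=None)[0].reshape(k, d2).T

W_hat = U @ V.T
print("relative recovery error:",
      np.linalg.norm(W_hat - W_true) / np.linalg.norm(W_true))
```

Each unknown factor has only d*k entries, so far fewer observations are needed than in standard matrix completion, which is the sample-complexity advantage the abstract points to.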
cs.LG cs.AI stat.ML | null | 1306.0686 | null | null | http://arxiv.org/pdf/1306.0686v2 | 2013-06-05T01:01:04Z | 2013-06-04T07:39:21Z | Online Learning under Delayed Feedback | Online learning with delayed feedback has received increasing attention
recently due to its several applications in distributed, web-based learning
problems. In this paper we provide a systematic study of the topic, and analyze
the effect of delay on the regret of online learning algorithms. Somewhat
surprisingly, it turns out that delay increases the regret in a multiplicative
way in adversarial problems, and in an additive way in stochastic problems. We
give meta-algorithms that transform, in a black-box fashion, algorithms
developed for the non-delayed case into ones that can handle the presence of
delays in the feedback loop. Modifications of the well-known UCB algorithm are
also developed for the bandit problem with delayed feedback, with the advantage
over the meta-algorithms that they can be implemented with lower complexity.
| [
"Pooria Joulani, Andr\\'as Gy\\\"orgy, Csaba Szepesv\\'ari",
"['Pooria Joulani' 'András György' 'Csaba Szepesvári']"
] |
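To give a flavor of the black-box reduction: one simple strategy keeps a pool of independent copies of the non-delayed base algorithm and routes each delayed observation back to the copy that produced the corresponding action. The sketch below assumes a base learner exposing `act`/`update`; it is a simplification for illustration, not the paper's exact meta-algorithm.

```python
class DelayedFeedbackWrapper:
    """Minimal sketch: run several copies of a non-delayed online learner
    so each copy only ever sees feedback for its own last action.
    `base_factory` and the act/update interface are assumptions made
    for illustration."""

    def __init__(self, base_factory):
        self.base_factory = base_factory   # () -> fresh base learner
        self.idle = []                     # copies not awaiting feedback
        self.pending = {}                  # round id -> copy that acted

    def act(self, round_id, context):
        learner = self.idle.pop() if self.idle else self.base_factory()
        self.pending[round_id] = learner
        return learner.act(context)

    def feedback(self, round_id, loss):
        learner = self.pending.pop(round_id)
        learner.update(loss)               # copy becomes available again
        self.idle.append(learner)
```

Note that the pool grows with the maximum number of outstanding rounds, which matches the intuition that delay inflates adversarial regret multiplicatively.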
cs.LG stat.ML | null | 1306.0733 | null | null | http://arxiv.org/pdf/1306.0733v1 | 2013-06-04T11:28:32Z | 2013-06-04T11:28:32Z | Fast Gradient-Based Inference with Continuous Latent Variable Models in
Auxiliary Form | We propose a technique for increasing the efficiency of gradient-based
inference and learning in Bayesian networks with multiple layers of continuous
latent vari- ables. We show that, in many cases, it is possible to express such
models in an auxiliary form, where continuous latent variables are
conditionally deterministic given their parents and a set of independent
auxiliary variables. Variables of mod- els in this auxiliary form have much
larger Markov blankets, leading to significant speedups in gradient-based
inference, e.g. rapid mixing Hybrid Monte Carlo and efficient gradient-based
optimization. The relative efficiency is confirmed in ex- periments.
| [
"Diederik P Kingma",
"['Diederik P Kingma']"
] |
cs.LG cs.SI stat.ML | null | 1306.0811 | null | null | http://arxiv.org/pdf/1306.0811v3 | 2013-11-04T10:07:42Z | 2013-06-04T14:24:31Z | A Gang of Bandits | Multi-armed bandit problems are receiving a great deal of attention because
they adequately formalize the exploration-exploitation trade-offs arising in
several industrially relevant applications, such as online advertisement and,
more generally, recommendation systems. In many cases, however, these
applications have a strong social component, whose integration in the bandit
algorithm could lead to a dramatic performance increase. For instance, we may
want to serve content to a group of users by taking advantage of an underlying
network of social relationships among them. In this paper, we introduce novel
algorithmic approaches to the solution of such networked bandit problems. More
specifically, we design and analyze a global strategy which allocates a bandit
algorithm to each network node (user) and allows it to "share" signals
(contexts and payoffs) with the neighboring nodes. We then derive two more
scalable variants of this strategy based on different ways of clustering the
graph nodes. We experimentally compare the algorithm and its variants to
state-of-the-art methods for contextual bandits that do not use the relational
information. Our experiments, carried out on synthetic and real-world datasets,
show a marked increase in prediction performance obtained by exploiting the
network structure.
| [
"['Nicolò Cesa-Bianchi' 'Claudio Gentile' 'Giovanni Zappella']",
"Nicol\\`o Cesa-Bianchi, Claudio Gentile and Giovanni Zappella"
] |
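A toy rendition of the global strategy, with one linear bandit estimator per node and each observed (context, payoff) pair also used to update the node's neighbours, might look as follows. The graph, noise level, and the choice of LinUCB as the per-node algorithm are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, d, T, alpha = 4, 5, 2000, 1.0
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # toy path graph of users
theta = rng.standard_normal((n_users, d))      # hidden user preferences

A = np.stack([np.eye(d)] * n_users)            # per-node ridge matrices
b = np.zeros((n_users, d))

for t in range(T):
    u = int(rng.integers(n_users))             # user served this round
    arms = rng.standard_normal((10, d))        # candidate content contexts
    Ainv = np.linalg.inv(A[u])
    w = Ainv @ b[u]
    ucb = arms @ w + alpha * np.sqrt(np.einsum('ij,jk,ik->i', arms, Ainv, arms))
    x = arms[int(np.argmax(ucb))]
    r = float(x @ theta[u] + 0.1 * rng.standard_normal())
    for v in [u] + adj[u]:                     # share signal with neighbours
        A[v] += np.outer(x, x)
        b[v] += r * x
```

The clustered variants in the paper replace per-neighbour sharing with shared estimators over groups of graph nodes, trading statistical sharing for scalability.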
stat.ML cs.LG math.ST stat.TH | null | 1306.0842 | null | null | http://arxiv.org/pdf/1306.0842v2 | 2013-06-06T17:18:25Z | 2013-06-04T16:09:20Z | Kernel Mean Estimation and Stein's Effect | A mean function in reproducing kernel Hilbert space, or a kernel mean, is an
important part of many applications ranging from kernel principal component
analysis to Hilbert-space embedding of distributions. Given finite samples, an
empirical average is the standard estimate for the true kernel mean. We show
that this estimator can be improved via a well-known phenomenon in statistics
called Stein's phenomenon. Indeed, our theoretical analysis
reveals the existence of a wide class of estimators that are better than the
standard one. Focusing on a subset of this class, we propose efficient shrinkage
estimators for the kernel mean. Empirical evaluations on several benchmark
applications clearly demonstrate that the proposed estimators outperform the
standard kernel mean estimator.
| [
"['Krikamol Muandet' 'Kenji Fukumizu' 'Bharath Sriperumbudur'\n 'Arthur Gretton' 'Bernhard Schölkopf']",
"Krikamol Muandet, Kenji Fukumizu, Bharath Sriperumbudur, Arthur\n Gretton, Bernhard Sch\\\"olkopf"
] |
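To make the shrinkage idea concrete: starting from the empirical average, one can shrink toward zero in the RKHS by an amount chosen from a plug-in risk estimate. The James-Stein-style choice below is our assumed illustration, not necessarily the estimator family proposed in the paper.

```python
import numpy as np

def shrinkage_kernel_mean_weights(K):
    """Sketch of a Stein-type shrinkage of the empirical kernel mean
    toward zero in the RKHS. The plug-in shrinkage amount is an assumed
    James-Stein-style choice. K: (n, n) Gram matrix; returns weights w
    so the estimate is sum_i w_i k(x_i, .)."""
    n = K.shape[0]
    mu_norm2 = K.mean()                       # ||empirical mean||^2 in RKHS
    diag_mean = np.trace(K) / n
    off_mean = (K.sum() - np.trace(K)) / (n * (n - 1))
    risk = (diag_mean - off_mean) / n         # estimate of E||muhat - mu||^2
    a = max(0.0, min(1.0, risk / (risk + mu_norm2)))
    return (1.0 - a) * np.full(n, 1.0 / n)

# Toy usage with a Gaussian kernel.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 2))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * sq)
w = shrinkage_kernel_mean_weights(K)
```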
cs.LG stat.ML | null | 1306.0886 | null | null | http://arxiv.org/pdf/1306.0886v1 | 2013-06-04T19:35:31Z | 2013-06-04T19:35:31Z | $\propto$SVM for learning with label proportions | We study the problem of learning with label proportions in which the training
data is provided in groups and only the proportion of each class in each group
is known. We propose a new method called proportion-SVM, or $\propto$SVM, which
explicitly models the latent unknown instance labels together with the known
group label proportions in a large-margin framework. Unlike existing works,
our approach avoids making restrictive assumptions about the data. The
$\propto$SVM model leads to a non-convex integer programming problem. In order
to solve it efficiently, we propose two algorithms: one based on simple
alternating optimization and the other based on a convex relaxation. Extensive
experiments on standard datasets show that $\propto$SVM outperforms the
state-of-the-art, especially for larger group sizes.
| [
"Felix X. Yu, Dong Liu, Sanjiv Kumar, Tony Jebara, Shih-Fu Chang",
"['Felix X. Yu' 'Dong Liu' 'Sanjiv Kumar' 'Tony Jebara' 'Shih-Fu Chang']"
] |
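The simple alternating scheme can be sketched as follows: fit a linear scorer to the current label guesses, then within each group reassign labels to match its known positive proportion. A ridge fit on ±1 targets stands in here for the paper's large-margin solver, and the input interface (`groups`, `props`) is assumed for illustration.

```python
import numpy as np

def prop_svm_alt(X, groups, props, n_iter=20, lam=1e-2):
    """Sketch of simple alternating optimization for learning with label
    proportions. X: (n, d); groups: (n,) group ids; props: dict mapping
    group id -> known positive proportion."""
    n, d = X.shape
    y = np.ones(n)
    for g, p in props.items():                # init labels from proportions
        idx = np.where(groups == g)[0]
        y[idx[int(round(p * len(idx))):]] = -1
    for _ in range(n_iter):
        w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
        s = X @ w
        for g, p in props.items():            # re-match group proportions
            idx = np.where(groups == g)[0]
            order = idx[np.argsort(-s[idx])]  # highest scores -> positive
            k = int(round(p * len(idx)))
            y[order[:k]], y[order[k:]] = 1, -1
    return w, y
```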
stat.ML cs.LG | null | 1306.0940 | null | null | http://arxiv.org/pdf/1306.0940v5 | 2013-12-26T09:20:29Z | 2013-06-04T23:00:56Z | (More) Efficient Reinforcement Learning via Posterior Sampling | Most provably-efficient learning algorithms introduce optimism about
poorly-understood states and actions to encourage exploration. We study an
alternative approach for efficient exploration, posterior sampling for
reinforcement learning (PSRL). This algorithm proceeds in repeated episodes of
known duration. At the start of each episode, PSRL updates a prior distribution
over Markov decision processes and takes one sample from this posterior. PSRL
then follows the policy that is optimal for this sample during the episode. The
algorithm is conceptually simple, computationally efficient and allows an agent
to encode prior knowledge in a natural way. We establish an $\tilde{O}(\tau S
\sqrt{AT})$ bound on the expected regret, where $T$ is time, $\tau$ is the
episode length and $S$ and $A$ are the cardinalities of the state and action
spaces. This bound is one of the first for an algorithm not based on optimism,
and close to the state of the art for any reinforcement learning algorithm. We
show through simulation that PSRL significantly outperforms existing algorithms
with similar regret bounds.
| [
"Ian Osband, Daniel Russo, Benjamin Van Roy",
"['Ian Osband' 'Daniel Russo' 'Benjamin Van Roy']"
] |
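The episode structure of PSRL is easy to render in code. The sketch below samples transitions from a Dirichlet posterior and uses a crude Gaussian-style draw for mean rewards (the reward prior and all sizes are illustrative assumptions), then solves the sampled MDP by finite-horizon value iteration.

```python
import numpy as np

def sample_mdp(trans_counts, rew_sum, rew_cnt, rng):
    """Posterior sample: Dirichlet(1 + counts) transitions per (s, a);
    a simple Gaussian-style draw for mean rewards (assumed prior)."""
    S, A, _ = trans_counts.shape
    P = np.zeros((S, A, S))
    for s in range(S):
        for a in range(A):
            P[s, a] = rng.dirichlet(trans_counts[s, a] + 1.0)
    R = (rew_sum + rng.standard_normal((S, A))) / (rew_cnt + 1.0)
    return P, R

def greedy_policy(P, R, H):
    """Finite-horizon value iteration on the sampled MDP."""
    S, A, _ = P.shape
    V, pi = np.zeros(S), np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = R + P @ V        # Q[s, a] = R[s, a] + sum_s' P[s, a, s'] V[s']
        pi[h], V = Q.argmax(axis=1), Q.max(axis=1)
    return pi

# PSRL loop: once per episode, draw (P, R) from the posterior, compute
# pi = greedy_policy(P, R, H), execute pi for H steps, and add the observed
# transitions/rewards to trans_counts, rew_sum, rew_cnt before resampling.
```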
stat.ML cs.LG | null | 1306.1066 | null | null | http://arxiv.org/pdf/1306.1066v5 | 2016-12-23T12:28:36Z | 2013-06-05T11:38:46Z | Bayesian Differential Privacy through Posterior Sampling | Differential privacy formalises privacy-preserving mechanisms that provide
access to a database. We pose the question of whether Bayesian inference itself
can be used directly to provide private access to data, with no modification.
The answer is affirmative: under certain conditions on the prior, sampling from
the posterior distribution can be used to achieve a desired level of privacy
and utility. To do so, we generalise differential privacy to arbitrary dataset
metrics, outcome spaces and distribution families. This allows us to also deal
with non-i.i.d or non-tabular datasets. We prove bounds on the sensitivity of
the posterior to the data, which gives a measure of robustness. We also show
how to use posterior sampling to provide differentially private responses to
queries, within a decision-theoretic framework. Finally, we provide bounds on
the utility and on the distinguishability of datasets. The latter are
complemented by a novel use of Le Cam's method to obtain lower bounds. All our
general results hold for arbitrary database metrics, including those for the
common definition of differential privacy. For specific choices of the metric,
we give a number of examples satisfying our assumptions.
| [
"Christos Dimitrakakis and Blaine Nelson and and Zuhe Zhang and\n Aikaterini Mitrokotsa and Benjamin Rubinstein",
"['Christos Dimitrakakis' 'Blaine Nelson' 'and Zuhe Zhang'\n 'Aikaterini Mitrokotsa' 'Benjamin Rubinstein']"
] |
cs.CV cs.LG | null | 1306.1083 | null | null | http://arxiv.org/pdf/1306.1083v1 | 2013-06-05T12:48:02Z | 2013-06-05T12:48:02Z | Discriminative Parameter Estimation for Random Walks Segmentation:
Technical Report | The Random Walks (RW) algorithm is one of the most efficient and easy-to-use
probabilistic segmentation methods. By combining contrast terms with prior
terms, it provides accurate segmentations of medical images in a fully
automated manner. However, one of the main drawbacks of using the RW algorithm
is that its parameters have to be hand-tuned. We propose a novel discriminative
learning framework that estimates the parameters using a training dataset. The
main challenge we face is that the training samples are not fully supervised.
Specifically, they provide a hard segmentation of the images, instead of a
probabilistic segmentation. We overcome this challenge by treating the optimal
probabilistic segmentation that is compatible with the given hard segmentation
as a latent variable. This allows us to employ the latent support vector
machine formulation for parameter estimation. We show that our approach
significantly outperforms the baseline methods on a challenging dataset
consisting of real clinical 3D MRI volumes of skeletal muscles.
| [
"['Pierre-Yves Baudin' 'Danny Goodman' 'Puneet Kumar' 'Noura Azzabou'\n 'Pierre G. Carlier' 'Nikos Paragios' 'M. Pawan Kumar']",
"Pierre-Yves Baudin (INRIA Saclay - Ile de France), Danny Goodman,\n Puneet Kumar (INRIA Saclay - Ile de France, CVN), Noura Azzabou (MIRCEN,\n UPMC), Pierre G. Carlier (UPMC), Nikos Paragios (INRIA Saclay - Ile de\n France, LIGM, ENPC, MAS), M. Pawan Kumar (INRIA Saclay - Ile de France, CVN)"
] |
cs.LG | null | 1306.1091 | null | null | http://arxiv.org/pdf/1306.1091v5 | 2014-05-24T00:05:18Z | 2013-06-05T13:01:14Z | Deep Generative Stochastic Networks Trainable by Backprop | We introduce a novel training principle for probabilistic models that is an
alternative to maximum likelihood. The proposed Generative Stochastic Networks
(GSN) framework is based on learning the transition operator of a Markov chain
whose stationary distribution estimates the data distribution. The transition
distribution of the Markov chain is conditional on the previous state,
generally involving a small move, so this conditional distribution has fewer
dominant modes, being unimodal in the limit of small moves. Thus, it is easier
to learn because it is easier to approximate its partition function, more like
learning to perform supervised function approximation, with gradients that can
be obtained by backprop. We provide theorems that generalize recent work on the
probabilistic interpretation of denoising autoencoders and obtain along the way
an interesting justification for dependency networks and generalized
pseudolikelihood, along with a definition of an appropriate joint distribution
and sampling mechanism even when the conditionals are not consistent. GSNs can
be used with missing inputs and can be used to sample subsets of variables
given the rest. We validate these theoretical results with experiments on two
image datasets using an architecture that mimics the Deep Boltzmann Machine
Gibbs sampler but allows training to proceed with simple backprop, without the
need for layerwise pretraining.
| [
"Yoshua Bengio, \\'Eric Thibodeau-Laufer, Guillaume Alain and Jason\n Yosinski",
"['Yoshua Bengio' 'Éric Thibodeau-Laufer' 'Guillaume Alain'\n 'Jason Yosinski']"
] |
stat.ML cs.LG math.OC | null | 1306.1185 | null | null | http://arxiv.org/pdf/1306.1185v1 | 2013-06-05T17:42:57Z | 2013-06-05T17:42:57Z | Multiclass Total Variation Clustering | Ideas from the image processing literature have recently motivated a new set
of clustering algorithms that rely on the concept of total variation. While
these algorithms perform well for bi-partitioning tasks, their recursive
extensions yield unimpressive results for multiclass clustering tasks. This
paper presents a general framework for multiclass total variation clustering
that does not rely on recursion. The results greatly outperform previous total
variation algorithms and compare well with state-of-the-art NMF approaches.
| [
"Xavier Bresson, Thomas Laurent, David Uminsky and James H. von Brecht",
"['Xavier Bresson' 'Thomas Laurent' 'David Uminsky' 'James H. von Brecht']"
] |
stat.ML cs.LG math.ST physics.data-an stat.TH | null | 1306.1298 | null | null | http://arxiv.org/pdf/1306.1298v1 | 2013-06-06T05:32:00Z | 2013-06-06T05:32:00Z | Multiclass Semi-Supervised Learning on Graphs using Ginzburg-Landau
Functional Minimization | We present a graph-based variational algorithm for classification of
high-dimensional data, generalizing the binary diffuse interface model to the
case of multiple classes. Motivated by total variation techniques, the method
involves minimizing an energy functional made up of three terms. The first two
terms promote a stepwise continuous classification function with sharp
transitions between classes, while preserving symmetry among the class labels.
The third term is a data fidelity term, allowing us to incorporate prior
information into the model in a semi-supervised framework. The performance of
the algorithm on synthetic data, as well as on the COIL and MNIST benchmark
datasets, is competitive with state-of-the-art graph-based multiclass
segmentation methods.
| [
"['Cristina Garcia-Cardona' 'Arjuna Flenner' 'Allon G. Percus']",
"Cristina Garcia-Cardona, Arjuna Flenner, Allon G. Percus"
] |
cs.LG cs.CE stat.ML | null | 1306.1323 | null | null | http://arxiv.org/pdf/1306.1323v1 | 2013-06-06T07:26:06Z | 2013-06-06T07:26:06Z | Verdict Accuracy of Quick Reduct Algorithm using Clustering and
Classification Techniques for Gene Expression Data | In most gene expression data, the number of training samples is very small
compared to the large number of genes involved in the experiments. However,
among the large amount of genes, only a small fraction is effective for
performing a certain task. Furthermore, a small subset of genes is desirable in
developing gene expression based diagnostic tools for delivering reliable and
understandable results. With the gene selection results, the cost of biological
experiments and decision-making can be greatly reduced by analyzing only the marker
genes. An important application of gene expression data in functional genomics
is to classify samples according to their gene expression profiles. Feature
selection (FS) is a process which attempts to select more informative features.
It is one of the important steps in knowledge discovery. Conventional
supervised FS methods evaluate various feature subsets using an evaluation
function or metric to select only those features which are related to the
decision classes of the data under consideration. This paper studies a feature
selection method based on rough set theory. Further, the K-Means and Fuzzy
C-Means (FCM) algorithms are applied to the reduced feature set without
considering class labels, and the resulting clusters are compared with the
original class labels. A Back Propagation Network (BPN) has also been used for
classification. The performance of K-Means, FCM, and BPN is then analyzed
through the confusion matrix. It is found that the BPN performs comparatively
well.
| [
"T. Chandrasekhar, K. Thangavel, E.N. Sathishkumar",
"['T. Chandrasekhar' 'K. Thangavel' 'E. N. Sathishkumar']"
] |
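After the rough-set reduction, the clustering-versus-labels comparison is straightforward to reproduce. Below is a plain Lloyd's K-Means and a cluster-vs-class confusion matrix; both are generic stand-ins for the implementations used in the paper.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's algorithm (stand-in for the K-Means step above);
    X is assumed to be a float array of reduced features."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (lab == j).any():
                C[j] = X[lab == j].mean(axis=0)
    return lab

def confusion(pred, true, k):
    """Cluster-vs-class confusion matrix used to judge agreement."""
    M = np.zeros((k, k), dtype=int)
    for p, t in zip(pred, true):
        M[p, t] += 1
    return M
```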
cs.LG | 10.1109/ICCCA.2012.6179181 | 1306.1326 | null | null | http://arxiv.org/abs/1306.1326v1 | 2013-06-06T07:42:33Z | 2013-06-06T07:42:33Z | Performance analysis of unsupervised feature selection methods | Feature selection (FS) is a process which attempts to select more informative
features. In some cases, too many redundant or irrelevant features may
overpower main features for classification. Feature selection can remedy this
problem and therefore improve the prediction accuracy and reduce the
computational overhead of classification algorithms. The main aim of feature
selection is to determine a minimal feature subset from a problem domain while
retaining a suitably high accuracy in representing the original features. In
this paper, Principal Component Analysis (PCA), Rough PCA, Unsupervised Quick
Reduct (USQR) algorithm and Empirical Distribution Ranking (EDR) approaches are
applied to discover discriminative features that will be the most adequate ones
for classification. Efficiency of the approaches is evaluated using standard
classification metrics.
| [
"A. Nisthana Parveen, H. Hannah Inbarani, E.N. Sathishkumar",
"['A. Nisthana Parveen' 'H. Hannah Inbarani' 'E. N. Sathishkumar']"
] |
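Of the listed approaches, the PCA-based one admits a particularly short sketch: score each original feature by its variance-weighted squared loadings on the leading principal components and keep the top scorers. This scoring rule is a common variant we assume for illustration, not necessarily the exact criterion evaluated in the paper.

```python
import numpy as np

def pca_feature_ranking(X, n_components=2):
    """Rank original features by their squared loadings on the leading
    principal components (unsupervised: no class labels used)."""
    Xc = X - X.mean(axis=0)
    # Rows of Vt are the principal axes in feature space.
    _, svals, Vt = np.linalg.svd(Xc, full_matrices=False)
    load = (Vt[:n_components].T * svals[:n_components]) ** 2
    return np.argsort(-load.sum(axis=1))  # most informative features first
```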
cs.CE cs.LG stat.ML | 10.1109/MLSP.2013.6661923 | 1306.1350 | null | null | http://arxiv.org/abs/1306.1350v4 | 2013-09-27T08:58:30Z | 2013-06-06T09:29:25Z | Diffusion map for clustering fMRI spatial maps extracted by independent
component analysis | Functional magnetic resonance imaging (fMRI) produces data about activity
inside the brain, from which spatial maps can be extracted by independent
component analysis (ICA). In datasets, there are n spatial maps that contain p
voxels. The number of voxels is very high compared to the number of analyzed
spatial maps. Clustering of the spatial maps is usually based on correlation
matrices. This usually works well, although such a similarity matrix inherently
can explain only a certain amount of the total variance contained in the
high-dimensional data where n is relatively small but p is large. For
high-dimensional space, it is reasonable to perform dimensionality reduction
before clustering. In this research, we used the recently developed diffusion
map for dimensionality reduction in conjunction with spectral clustering. This
research revealed that the diffusion map based clustering worked as well as the
more traditional methods, and produced more compact clusters when needed.
| [
"['Tuomo Sipola' 'Fengyu Cong' 'Tapani Ristaniemi' 'Vinoo Alluri'\n 'Petri Toiviainen' 'Elvira Brattico' 'Asoke K. Nandi']",
"Tuomo Sipola, Fengyu Cong, Tapani Ristaniemi, Vinoo Alluri, Petri\n Toiviainen, Elvira Brattico, Asoke K. Nandi"
] |
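A compact version of the dimensionality-reduction step: build Gaussian affinities between the n spatial maps, form the diffusion operator, and embed using its leading non-trivial eigenvectors; the resulting coordinates can then be fed to a spectral or k-means-style clusterer. The kernel width and scaling choices below are assumptions for illustration.

```python
import numpy as np

def diffusion_map(X, eps, n_dims, t=1):
    """Sketch: Gaussian affinities -> diffusion operator -> embedding by
    the leading non-trivial eigenvectors (kernel width eps assumed given).
    Uses the symmetric conjugate of D^-1 W so eigh applies."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / eps)
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))            # symmetric conjugate of D^-1 W
    evals, evecs = np.linalg.eigh(S)
    order = np.argsort(-evals)[1:n_dims + 1]   # skip the trivial eigenvalue 1
    phi = evecs[:, order] / np.sqrt(d)[:, None]
    return phi * evals[order] ** t             # diffusion coordinates

# The returned coordinates can be clustered with any standard method
# (e.g. k-means), as in spectral clustering.
```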
cs.LG stat.ML | null | 1306.1433 | null | null | http://arxiv.org/pdf/1306.1433v3 | 2013-11-11T04:43:10Z | 2013-06-06T15:15:07Z | Tight Lower Bound on the Probability of a Binomial Exceeding its
Expectation | We give the proof of a tight lower bound on the probability that a binomial
random variable exceeds its expected value. The inequality plays an important
role in a variety of contexts, including the analysis of relative deviation
bounds in learning theory and generalization bounds for unbounded loss
functions.
| [
"Spencer Greenberg, Mehryar Mohri",
"['Spencer Greenberg' 'Mehryar Mohri']"
] |
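The bounded quantity is easy to explore numerically. A quick Monte Carlo of $\Pr[B \ge \mathbb{E}[B]]$ for a few $(n, p)$ pairs (the specific pairs are arbitrary choices; the simulation only visualizes the probability the paper lower-bounds):

```python
import numpy as np

rng = np.random.default_rng(0)
for n, p in [(10, 0.3), (50, 0.1), (200, 0.02)]:
    B = rng.binomial(n, p, size=200_000)          # B ~ Binomial(n, p)
    print(n, p, (B >= n * p).mean())              # P[B >= E[B]], empirically
```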
cs.DC cs.LG | null | 1306.1467 | null | null | http://arxiv.org/pdf/1306.1467v1 | 2013-06-06T16:38:26Z | 2013-06-06T16:38:26Z | Highly Scalable, Parallel and Distributed AdaBoost Algorithm using Light
Weight Threads and Web Services on a Network of Multi-Core Machines | AdaBoost is an important algorithm in machine learning and is being widely
used in object detection. AdaBoost works by iteratively selecting the best
amongst weak classifiers, and then combines several weak classifiers to obtain
a strong classifier. Even though AdaBoost has proven to be very effective, its
learning execution time can be quite large depending upon the application e.g.,
in face detection, the learning time can be several days. Due to its increasing
use in computer vision applications, the learning time needs to be drastically
reduced so that an adaptive, near real-time object detection system can be
incorporated. In this paper, we develop a hybrid parallel and distributed
AdaBoost algorithm that exploits the multiple cores in a CPU via light weight
threads, and also uses multiple machines via a web service software
architecture to achieve high scalability. We present a novel hierarchical web
services based distributed architecture and achieve nearly linear speedup up to
the number of processors available to us. In comparison with the previously
published work, which used a single level master-slave parallel and distributed
implementation [1] and only achieved a speedup of 2.66 on four nodes, we
achieve a speedup of 95.1 on 31 workstations each having a quad-core processor,
resulting in a learning time of only 4.8 seconds per feature.
| [
"Munther Abualkibash, Ahmed ElSayed, Ausif Mahmood",
"['Munther Abualkibash' 'Ahmed ElSayed' 'Ausif Mahmood']"
] |
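The multi-core level of the hybrid scheme amounts to scoring candidate weak classifiers in parallel within each boosting round. Below is a thread-based sketch with decision stumps; the stumps and the thread pool are our stand-ins (the paper combines lightweight threads with a web-service layer across machines), and note that in CPython true speedups for pure-Python loops would require processes or native code.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def best_stump(X, y, w, feature):
    """Weighted-error-minimizing decision stump on one feature
    (y in {-1, +1}, w a nonnegative weight vector summing to 1)."""
    best = (0.5, 0.0, 1)                     # (error, threshold, polarity)
    for thr in np.unique(X[:, feature]):
        for pol in (1, -1):
            pred = np.where(pol * (X[:, feature] - thr) > 0, 1, -1)
            err = w[pred != y].sum()
            if err < best[0]:
                best = (err, thr, pol)
    return feature, best

def parallel_round(X, y, w, n_workers=4):
    """One boosting round (sketch): score all candidate stumps in
    parallel and return the one with the lowest weighted error."""
    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        results = list(ex.map(lambda f: best_stump(X, y, w, f),
                              range(X.shape[1])))
    return min(results, key=lambda r: r[1][0])
```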
cs.RO cs.DC cs.LG cs.MA | null | 1306.1491 | null | null | http://arxiv.org/pdf/1306.1491v1 | 2013-06-02T14:05:49Z | 2013-06-02T14:05:49Z | Gaussian Process-Based Decentralized Data Fusion and Active Sensing for
Mobility-on-Demand System | Mobility-on-demand (MoD) systems have recently emerged as a promising
paradigm of one-way vehicle sharing for sustainable personal urban mobility in
densely populated cities. In this paper, we enhance the capability of a MoD
system by deploying robotic shared vehicles that can autonomously cruise the
streets to be hailed by users. A key challenge to managing the MoD system
effectively is that of real-time, fine-grained mobility demand sensing and
prediction. This paper presents a novel decentralized data fusion and active
sensing algorithm for real-time, fine-grained mobility demand sensing and
prediction with a fleet of autonomous robotic vehicles in a MoD system. Our
Gaussian process (GP)-based decentralized data fusion algorithm can achieve a
fine balance between predictive power and time efficiency. We theoretically
guarantee its predictive performance to be equivalent to that of a
sophisticated centralized sparse approximation for the GP model: The
computation of such a sparse approximate GP model can thus be distributed among
the MoD vehicles, hence achieving efficient and scalable demand prediction.
Though our decentralized active sensing strategy is devised to gather the most
informative demand data for demand prediction, it can achieve a dual effect of
fleet rebalancing to service the mobility demands. Empirical evaluation on
real-world mobility demand data shows that our proposed algorithm can achieve a
better balance between predictive accuracy and time efficiency than
state-of-the-art algorithms.
| [
"['Jie Chen' 'Kian Hsiang Low' 'Colin Keng-Yan Tan']",
"Jie Chen, Kian Hsiang Low, Colin Keng-Yan Tan"
] |
cs.LG cs.AI cs.RO math.OC | null | 1306.1520 | null | null | http://arxiv.org/pdf/1306.1520v1 | 2013-06-06T19:27:01Z | 2013-06-06T19:27:01Z | Policy Search: Any Local Optimum Enjoys a Global Performance Guarantee | Local Policy Search is a popular reinforcement learning approach for handling
large state spaces. Formally, it searches locally in a parameterized policy
space in order to maximize the associated value function averaged over some
predefined distribution. It is probably commonly believed that the best one
can hope in general from such an approach is to get a local optimum of this
criterion. In this article, we show the following surprising result:
\emph{any} (approximate) \emph{local optimum} enjoys a \emph{global performance
guarantee}. We compare this guarantee with the one that is satisfied by Direct
Policy Iteration, an approximate dynamic programming algorithm that does some
form of Policy Search: while the approximation error of Local Policy Search may
generally be bigger (because local search requires considering a space of
stochastic policies), we argue that the concentrability coefficient that appears
in the performance bound is much nicer. Finally, we discuss several practical
and theoretical consequences of our analysis.
| [
"Bruno Scherrer (INRIA Nancy - Grand Est / LORIA), Matthieu Geist",
"['Bruno Scherrer' 'Matthieu Geist']"
] |
cs.LG cs.DS math.NA stat.ML | null | 1306.1716 | null | null | http://arxiv.org/pdf/1306.1716v1 | 2013-06-07T13:14:50Z | 2013-06-07T13:14:50Z | Fast greedy algorithm for subspace clustering from corrupted and
incomplete data | We describe the Fast Greedy Sparse Subspace Clustering (FGSSC) algorithm
providing an efficient method for clustering data belonging to a few
low-dimensional linear or affine subspaces. The main difference of our
algorithm from predecessors is its ability to work with noisy data having a
high rate of erasures (missed entries with the known coordinates) and errors
(corrupted entries with unknown coordinates). We discuss here how to implement
the fast version of the greedy algorithm with maximum efficiency, where the
greedy strategy is incorporated into the iterations of the basic algorithm.
We provide numerical evidence that, in subspace clustering capability,
the fast greedy algorithm outperforms not only the existing state-of-the-art
SSC algorithm, taken by the authors as the basic algorithm, but also the recent
GSSC algorithm. At the same time, its computational cost is only slightly
higher than the cost of SSC.
Numerical evidence of the algorithm's significant advantage is presented
for a few synthetic models as well as for the Extended Yale B dataset of facial
images. In particular, the face recognition misclassification rate turned out
to be 6-20 times lower than for the SSC algorithm. We provide also the
numerical evidence that the FGSSC algorithm is able to perform clustering of
corrupted data efficiently even when the sum of subspace dimensions
significantly exceeds the dimension of the ambient space.
| [
"['Alexander Petukhov' 'Inna Kozlov']",
"Alexander Petukhov and Inna Kozlov"
] |