categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (sequence)
---|---|---|---|---|---|---|---|---|---|---
cs.LG math.OC | null | 1109.2415 | null | null | http://arxiv.org/pdf/1109.2415v2 | 2011-12-01T16:06:06Z | 2011-09-12T09:45:02Z | Convergence Rates of Inexact Proximal-Gradient Methods for Convex
Optimization | We consider the problem of optimizing the sum of a smooth convex function and
a non-smooth convex function using proximal-gradient methods, where an error is
present in the calculation of the gradient of the smooth term or in the
proximity operator with respect to the non-smooth term. We show that both the
basic proximal-gradient method and the accelerated proximal-gradient method
achieve the same convergence rate as in the error-free case, provided that the
errors decrease at appropriate rates. Using these rates, we perform as well as
or better than a carefully chosen fixed error level on a set of structured
sparsity problems.
| [
"Mark Schmidt (INRIA Paris - Rocquencourt, LIENS), Nicolas Le Roux\n (INRIA Paris - Rocquencourt, LIENS), Francis Bach (INRIA Paris -\n Rocquencourt, LIENS)",
"['Mark Schmidt' 'Nicolas Le Roux' 'Francis Bach']"
] |
cs.IT cs.LG cs.SI math.IT physics.soc-ph stat.ML | null | 1109.3240 | null | null | http://arxiv.org/pdf/1109.3240v1 | 2011-09-15T02:10:26Z | 2011-09-15T02:10:26Z | Active Learning for Node Classification in Assortative and
Disassortative Networks | In many real-world networks, nodes have class labels, attributes, or
variables that affect the network's topology. If the topology of the network is
known but the labels of the nodes are hidden, we would like to select a small
subset of nodes such that, if we knew their labels, we could accurately predict
the labels of all the other nodes. We develop an active learning algorithm for
this problem which uses information-theoretic techniques to choose which nodes
to explore. We test our algorithm on networks from three different domains: a
social network, a network of English words that appear adjacently in a novel,
and a marine food web. Our algorithm makes no initial assumptions about how the
groups connect, and performs well even when faced with quite general types of
network structure. In particular, we do not assume that nodes of the same class
are more likely to be connected to each other---only that they connect to the
rest of the network in similar ways.
| [
"Cristopher Moore, Xiaoran Yan, Yaojia Zhu, Jean-Baptiste Rouquier,\n Terran Lane",
"['Cristopher Moore' 'Xiaoran Yan' 'Yaojia Zhu' 'Jean-Baptiste Rouquier'\n 'Terran Lane']"
] |
cs.LG stat.ML | null | 1109.3248 | null | null | http://arxiv.org/pdf/1109.3248v1 | 2011-09-15T03:12:36Z | 2011-09-15T03:12:36Z | Reconstruction of sequential data with density models | We introduce the problem of reconstructing a sequence of multidimensional
real vectors where some of the data are missing. This problem contains
regression and mapping inversion as particular cases where the pattern of
missing data is independent of the sequence index. The problem is hard because
it involves possibly multivalued mappings at each vector in the sequence, where
the missing variables can take more than one value given the present variables;
and the set of missing variables can vary from one vector to the next. To solve
this problem, we propose an algorithm based on two redundancy assumptions:
vector redundancy (the data live in a low-dimensional manifold), so that the
present variables constrain the missing ones; and sequence redundancy (e.g.
continuity), so that consecutive vectors constrain each other. We capture the
low-dimensional nature of the data in a probabilistic way with a joint density
model, here the generative topographic mapping, which results in a Gaussian
mixture. Candidate reconstructions at each vector are obtained as all the modes
of the conditional distribution of missing variables given present variables.
The reconstructed sequence is obtained by minimising a global constraint, here
the sequence length, by dynamic programming. We present experimental results
for a toy problem and for inverse kinematics of a robot arm.
| [
"['Miguel Á. Carreira-Perpiñán']",
"Miguel \\'A. Carreira-Perpi\\~n\\'an"
] |
cs.LG | null | 1109.3318 | null | null | http://arxiv.org/pdf/1109.3318v2 | 2013-04-22T09:11:23Z | 2011-09-15T11:31:31Z | Distributed User Profiling via Spectral Methods | User profiling is a useful primitive for constructing personalised services,
such as content recommendation. In the present paper we investigate the
feasibility of user profiling in a distributed setting, with no central
authority and only local information exchanges between users. We compute a
profile vector for each user (i.e., a low-dimensional vector that characterises
her taste) via spectral transformation of observed user-produced ratings for
items. Our two main contributions follow: i) We consider a low-rank
probabilistic model of user taste. More specifically, we consider that users
and items are partitioned into a constant number of classes, such that users and
items within the same class are statistically identical. We prove that without
prior knowledge of the compositions of the classes, based solely on a few random
observed ratings (namely $O(N\log N)$ such ratings for $N$ users), we can
predict user preference with high probability for unrated items by running a
local vote among users with similar profile vectors. In addition, we provide
empirical evaluations characterising the way in which spectral profiling
performance depends on the dimension of the profile space. Such evaluations are
performed on a data set of real user ratings provided by Netflix. ii) We
develop distributed algorithms which provably achieve an embedding of users
into a low-dimensional space, based on spectral transformation. These involve
simple message passing among users, and provably converge to the desired
embedding. Our method essentially relies on a novel combination of gossiping
and the algorithm proposed by Oja and Karhunen.
| [
"['Dan-Cristian Tomozei' 'Laurent Massoulié']",
"Dan-Cristian Tomozei, Laurent Massouli\\'e"
] |
cs.LG | 10.1109/TPAMI.2012.185 | 1109.3437 | null | null | http://arxiv.org/abs/1109.3437v4 | 2012-03-24T12:47:02Z | 2011-09-15T19:20:48Z | Learning Topic Models by Belief Propagation | Latent Dirichlet allocation (LDA) is an important hierarchical Bayesian model
for probabilistic topic modeling, which attracts worldwide interests and
touches on many important applications in text mining, computer vision and
computational biology. This paper represents LDA as a factor graph within the
Markov random field (MRF) framework, which enables the classic loopy belief
propagation (BP) algorithm for approximate inference and parameter estimation.
Although two commonly-used approximate inference methods, namely variational
Bayes (VB) and collapsed Gibbs sampling (GS), have achieved great success in
learning LDA, the proposed BP is competitive in both speed and accuracy, as
validated by encouraging experimental results on four large-scale document data
sets. Furthermore, the BP algorithm has the potential to become a generic
learning scheme for variants of LDA-based topic models. To this end, we show
how to learn two typical variants of LDA-based topic models, namely
author-topic models (ATM) and relational topic models (RTM), using BP based on
the factor graph representation.
| [
"Jia Zeng and William K. Cheung and Jiming Liu",
"['Jia Zeng' 'William K. Cheung' 'Jiming Liu']"
] |
cs.LG cs.IT math.IT stat.ML | null | 1109.3701 | null | null | http://arxiv.org/pdf/1109.3701v2 | 2011-12-10T01:02:14Z | 2011-09-16T19:35:13Z | Active Ranking using Pairwise Comparisons | This paper examines the problem of ranking a collection of objects using
pairwise comparisons (rankings of two objects). In general, the ranking of $n$
objects can be identified by standard sorting methods using $n \log_2 n$
pairwise comparisons. We are interested in natural situations in which
relationships among the objects may allow for ranking using far fewer pairwise
comparisons. Specifically, we assume that the objects can be embedded into a
$d$-dimensional Euclidean space and that the rankings reflect their relative
distances from a common reference point in $\mathbb{R}^d$. We show that under this
assumption the number of possible rankings grows like $n^{2d}$ and demonstrate
an algorithm that can identify a randomly selected ranking using just slightly
more than $d \log n$ adaptively selected pairwise comparisons, on average. If
instead the comparisons are chosen at random, then almost all pairwise
comparisons must be made in order to identify any ranking. In addition, we
propose a robust, error-tolerant algorithm that only requires that the pairwise
comparisons are probably correct. Experimental studies with synthetic and real
datasets support the conclusions of our theoretical analysis.
| [
"['Kevin G. Jamieson' 'Robert D. Nowak']",
"Kevin G. Jamieson and Robert D. Nowak"
] |
cs.DS cs.DM cs.LG | null | 1109.3843 | null | null | http://arxiv.org/pdf/1109.3843v2 | 2012-12-05T00:13:53Z | 2011-09-18T04:38:12Z | Fast approximation of matrix coherence and statistical leverage | The statistical leverage scores of a matrix $A$ are the squared row-norms of
the matrix containing its (top) left singular vectors and the coherence is the
largest leverage score. These quantities are of interest in recently-popular
problems such as matrix completion and Nystr\"{o}m-based low-rank matrix
approximation as well as in large-scale statistical data analysis applications
more generally; moreover, they are of interest since they define the key
structural nonuniformity that must be dealt with in developing fast randomized
matrix algorithms. Our main result is a randomized algorithm that takes as
input an arbitrary $n \times d$ matrix $A$, with $n \gg d$, and that returns as
output relative-error approximations to all $n$ of the statistical leverage
scores. The proposed algorithm runs (under assumptions on the precise values of
$n$ and $d$) in $O(n d \log n)$ time, as opposed to the $O(nd^2)$ time required
by the na\"{i}ve algorithm that involves computing an orthogonal basis for the
range of $A$. Our analysis may be viewed in terms of computing a relative-error
approximation to an underconstrained least-squares approximation problem, or,
relatedly, it may be viewed as an application of Johnson-Lindenstrauss type
ideas. Several practically-important extensions of our basic result are also
described, including the approximation of so-called cross-leverage scores, the
extension of these ideas to matrices with $n \approx d$, and the extension to
streaming environments.
| [
"['Petros Drineas' 'Malik Magdon-Ismail' 'Michael W. Mahoney'\n 'David P. Woodruff']",
"Petros Drineas and Malik Magdon-Ismail and Michael W. Mahoney and\n David P. Woodruff"
] |
cs.LG cs.AI stat.ME stat.ML | null | 1109.3940 | null | null | http://arxiv.org/pdf/1109.3940v1 | 2011-09-19T04:19:30Z | 2011-09-19T04:19:30Z | Learning Discriminative Metrics via Generative Models and Kernel
Learning | Metrics specifying distances between data points can be learned in a
discriminative manner or from generative models. In this paper, we show how to
unify generative and discriminative learning of metrics via a kernel learning
framework. Specifically, we learn local metrics optimized from parametric
generative models. These are then used as base kernels to construct a global
kernel that minimizes a discriminative training criterion. We consider both
linear and nonlinear combinations of local metric kernels. Our empirical
results show that these combinations significantly improve performance on
classification tasks. The proposed learning algorithm is also very efficient,
achieving order of magnitude speedup in training time compared to previous
discriminative baseline methods.
| [
"Yuan Shi, Yung-Kyun Noh, Fei Sha, Daniel D. Lee",
"['Yuan Shi' 'Yung-Kyun Noh' 'Fei Sha' 'Daniel D. Lee']"
] |
math.CO cs.LG stat.ML | null | 1109.4347 | null | null | http://arxiv.org/pdf/1109.4347v1 | 2011-09-20T16:53:29Z | 2011-09-20T16:53:29Z | VC dimension of ellipsoids | We will establish that the VC dimension of the class of d-dimensional
ellipsoids is (d^2+3d)/2, and that the maximum likelihood estimate with N-component
d-dimensional Gaussian mixture models induces a geometric class having VC
dimension at least N(d^2+3d)/2.
Keywords: VC dimension; finite dimensional ellipsoid; Gaussian mixture model
| [
"['Yohji Akama' 'Kei Irie']",
"Yohji Akama and Kei Irie"
] |
math.ST cs.LG stat.ML stat.TH | 10.1214/12-AOS994 | 1109.4540 | null | null | http://arxiv.org/abs/1109.4540v2 | 2012-06-05T13:37:56Z | 2011-09-21T14:29:33Z | Manifold estimation and singular deconvolution under Hausdorff loss | We find lower and upper bounds for the risk of estimating a manifold in
Hausdorff distance under several models. We also show that there are close
connections between manifold estimation and the problem of deconvolving a
singular measure.
| [
"['Christopher R. Genovese' 'Marco Perone-Pacifico' 'Isabella Verdinelli'\n 'Larry Wasserman']",
"Christopher R. Genovese, Marco Perone-Pacifico, Isabella Verdinelli,\n Larry Wasserman"
] |
cs.NE cs.AI cs.AR cs.LG | null | 1109.4609 | null | null | http://arxiv.org/pdf/1109.4609v1 | 2011-09-21T18:45:03Z | 2011-09-21T18:45:03Z | Memristive fuzzy edge detector | Fuzzy inference systems always suffer from the lack of efficient structures
or platforms for their hardware implementation. In this paper, we tried to
overcome this problem by proposing a new method for the implementation of those
fuzzy inference systems which use fuzzy rule base to make inference. To achieve
this goal, we have designed a multi-layer neuro-fuzzy computing system based on
the memristor crossbar structure by introducing some new concepts like fuzzy
minterms. Although many applications can be realized through the use of our
proposed system, in this study we show how the fuzzy XOR function can be
constructed and how it can be used to extract edges from grayscale images. Our
memristive fuzzy edge detector (implemented in analog form) has the advantage
over other common edge detectors that it can extract the edges of any given
image all at once in real time.
| [
"['Farnood Merrikh-Bayat' 'Saeed Bagheri Shouraki']",
"Farnood Merrikh-Bayat and Saeed Bagheri Shouraki"
] |
math.PR cs.LG math.ST q-bio.PE stat.TH | null | 1109.4668 | null | null | http://arxiv.org/pdf/1109.4668v1 | 2011-09-21T22:34:12Z | 2011-09-21T22:34:12Z | Robust estimation of latent tree graphical models: Inferring hidden
states with inexact parameters | Latent tree graphical models are widely used in computational biology, signal
and image processing, and network tomography. Here we design a new, efficient
estimation procedure for latent tree models, including Gaussian and discrete,
reversible models, that significantly improves on previous sample requirement
bounds. Our techniques are based on a new hidden state estimator which is
robust to inaccuracies in estimated parameters. More precisely, we prove that
latent tree models can be estimated with high probability in the so-called
Kesten-Stigum regime with $O(\log^2 n)$ samples where $n$ is the number of
nodes.
| [
"['Elchanan Mossel' 'Sebastien Roch' 'Allan Sly']",
"Elchanan Mossel, Sebastien Roch, Allan Sly"
] |
cs.AI cs.LG | 10.1007/s11263-012-0602-z | 1109.4684 | null | null | http://arxiv.org/abs/1109.4684v1 | 2011-09-22T00:56:22Z | 2011-09-22T00:56:22Z | Exhaustive and Efficient Constraint Propagation: A Semi-Supervised
Learning Perspective and Its Applications | This paper presents a novel pairwise constraint propagation approach by
decomposing the challenging constraint propagation problem into a set of
independent semi-supervised learning subproblems which can be solved in
quadratic time using label propagation based on k-nearest neighbor graphs.
Considering that this time cost is proportional to the number of all possible
pairwise constraints, our approach actually provides an efficient solution for
exhaustively propagating pairwise constraints throughout the entire dataset.
The resulting exhaustive set of propagated pairwise constraints are further
used to adjust the similarity matrix for constrained spectral clustering. Other
than the traditional constraint propagation on single-source data, our approach
is also extended to more challenging constraint propagation on multi-source
data where each pairwise constraint is defined over a pair of data points from
different sources. This multi-source constraint propagation has an important
application to cross-modal multimedia retrieval. Extensive results have shown
the superior performance of our approach.
| [
"['Zhiwu Lu' 'Horace H. S. Ip' 'Yuxin Peng']",
"Zhiwu Lu, Horace H.S. Ip, Yuxin Peng"
] |
cs.MM cs.AI cs.LG | 10.1016/j.patcog.2012.09.027 | 1109.4979 | null | null | http://arxiv.org/abs/1109.4979v1 | 2011-09-23T00:39:51Z | 2011-09-23T00:39:51Z | Latent Semantic Learning with Structured Sparse Representation for Human
Action Recognition | This paper proposes a novel latent semantic learning method for extracting
high-level features (i.e. latent semantics) from a large vocabulary of abundant
mid-level features (i.e. visual keywords) with structured sparse
representation, which can help to bridge the semantic gap in the challenging
task of human action recognition. To discover the manifold structure of
mid-level features, we develop a spectral embedding approach to latent semantic
learning based on L1-graph, without the need to tune any parameter for graph
construction as a key step of manifold learning. More importantly, we construct
the L1-graph with structured sparse representation, which can be obtained by
structured sparse coding with its structured sparsity ensured by novel L1-norm
hypergraph regularization over mid-level features. In the new embedding space,
we learn latent semantics automatically from abundant mid-level features
through spectral clustering. The learnt latent semantics can be readily used
for human action recognition with SVM by defining a histogram intersection
kernel. Different from the traditional latent semantic analysis based on topic
models, our latent semantic learning method can explore the manifold structure
of mid-level features in both L1-graph construction and spectral embedding,
which results in compact but discriminative high-level features. The
experimental results on the commonly used KTH action dataset and unconstrained
YouTube action dataset show the superior performance of our method.
| [
"Zhiwu Lu, Yuxin Peng",
"['Zhiwu Lu' 'Yuxin Peng']"
] |
cs.LG | null | 1109.5078 | null | null | http://arxiv.org/pdf/1109.5078v1 | 2011-09-23T13:51:31Z | 2011-09-23T13:51:31Z | Application of distances between terms for flat and hierarchical data | In machine learning, distance-based algorithms and other approaches use
information that is represented by propositional data. However, this kind of
representation can be quite restrictive and, in many cases, it requires more
complex structures in order to represent data in a more natural way. Terms are
the basis for functional and logic programming representation. Distances
between terms are a useful tool not only to compare terms, but also to
determine the search space in many of these applications. This dissertation
applies distances between terms, exploiting the features of each distance and
the possibility to compare from propositional data types to hierarchical
representations. The distances between terms are applied through the k-NN
(k-nearest neighbor) classification algorithm using XML as a common language
representation. To be able to represent these data in an XML structure and to
take advantage of the benefits of distance between terms, it is necessary to
apply some transformations. These transformations allow the conversion of flat
data into hierarchical data represented in XML, using some techniques based on
intuitive associations between the names and values of variables and
associations based on attribute similarity.
Several experiments with the distances between terms of Nienhuys-Cheng and
Estruch et al. were performed. In the case of originally propositional data,
these distances are compared to the Euclidean distance. In all cases, the
experiments were performed with the distance-weighted k-nearest neighbor
algorithm, using several exponents for the attraction function (weighted
distance). It can be seen that in some cases, the term distances can
significantly improve the results on approaches applied to flat
representations.
| [
"['Jorge-Alonso Bedoya-Puerta' 'Jose Hernandez-Orallo']",
"Jorge-Alonso Bedoya-Puerta and Jose Hernandez-Orallo"
] |
cs.LG | 10.1109/TSMCB.2012.2223460 | 1109.5231 | null | null | http://arxiv.org/abs/1109.5231v4 | 2012-10-13T11:14:22Z | 2011-09-24T04:50:55Z | Noise Tolerance under Risk Minimization | In this paper we explore noise tolerant learning of classifiers. We formulate
the problem as follows. We assume that there is an ${\bf unobservable}$
training set which is noise-free. The actual training set given to the learning
algorithm is obtained from this ideal data set by corrupting the class label of
each example. The probability that the class label of an example is corrupted
is a function of the feature vector of the example. This would account for most
kinds of noisy data one encounters in practice. We say that a learning method
is noise tolerant if the classifiers learnt with the ideal noise-free data and
with noisy data, both have the same classification accuracy on the noise-free
data. In this paper we analyze the noise tolerance properties of risk
minimization (under different loss functions), which is a generic method for
learning classifiers. We show that risk minimization under 0-1 loss function
has impressive noise tolerance properties and that under squared error loss is
tolerant only to uniform noise; risk minimization under other loss functions is
not noise tolerant. We conclude the paper with some discussion on implications
of these theoretical results.
| [
"['Naresh Manwani' 'P. S. Sastry']",
"Naresh Manwani, P. S. Sastry"
] |
cs.LG cs.IT math.IT | 10.1109/TSP.2012.2215026 | 1109.5302 | null | null | http://arxiv.org/abs/1109.5302v3 | 2012-04-18T21:58:48Z | 2011-09-24T20:32:42Z | Simultaneous Codeword Optimization (SimCO) for Dictionary Update and
Learning | We consider the data-driven dictionary learning problem. The goal is to seek
an over-complete dictionary from which every training signal can be best
approximated by a linear combination of only a few codewords. This task is
often achieved by iteratively executing two operations: sparse coding and
dictionary update. In the literature, there are two benchmark mechanisms to
update a dictionary. The first approach, such as the MOD algorithm, is
characterized by searching for the optimal codewords while fixing the sparse
coefficients. In the second approach, represented by the K-SVD method, one
codeword and the related sparse coefficients are simultaneously updated while
all other codewords and coefficients remain unchanged. We propose a novel
framework that generalizes the aforementioned two methods. The unique feature
of our approach is that one can update an arbitrary set of codewords and the
corresponding sparse coefficients simultaneously: when sparse coefficients are
fixed, the underlying optimization problem is similar to that in the MOD
algorithm; when only one codeword is selected for update, it can be proved that
the proposed algorithm is equivalent to the K-SVD method; and more importantly,
our method allows us to update all codewords and all sparse coefficients
simultaneously, hence the term simultaneous codeword optimization (SimCO).
Under the proposed framework, we design two algorithms, namely, primitive and
regularized SimCO. We implement these two algorithms based on a simple gradient
descent mechanism. Simulations are provided to demonstrate the performance of
the proposed algorithms, as compared with two baseline algorithms MOD and
K-SVD. Results show that regularized SimCO is particularly appealing in terms
of both learning performance and running speed.
| [
"['Wei Dai' 'Tao Xu' 'Wenwu Wang']",
"Wei Dai, Tao Xu, Wenwu Wang"
] |
cs.LG stat.ML | null | 1109.5311 | null | null | http://arxiv.org/pdf/1109.5311v1 | 2011-09-24T22:14:46Z | 2011-09-24T22:14:46Z | Bias Plus Variance Decomposition for Survival Analysis Problems | Bias-variance decomposition of the expected error defined for regression
and classification problems is an important tool to study and compare different
algorithms and to find the best areas for their application. Here the
decomposition is introduced for the survival analysis problem. In our
experiments, we study bias-variance parts of the expected error for two
algorithms: original Cox proportional hazard regression and CoxPath, path
algorithm for L1-regularized Cox regression, on the series of increased
training sets. The experiments demonstrate that, contrary to expectations, CoxPath
does not necessarily have an advantage over Cox regression.
| [
"['Marina Sapir']",
"Marina Sapir"
] |
cs.CV cs.AI cs.IR cs.LG | null | 1109.5370 | null | null | http://arxiv.org/pdf/1109.5370v1 | 2011-09-25T16:58:06Z | 2011-09-25T16:58:06Z | Higher-Order Markov Tag-Topic Models for Tagged Documents and Images | This paper studies the topic modeling problem of tagged documents and images.
Higher-order relations among tagged documents and images are a major and
ubiquitous characteristic, and play a positive role in extracting reliable and
interpretable topics. In this paper, we propose the tag-topic models (TTM) to
depict such higher-order topic structural dependencies within the Markov random
field (MRF) framework. First, we use the novel factor graph representation of
latent Dirichlet allocation (LDA)-based topic models from the MRF perspective,
and present an efficient loopy belief propagation (BP) algorithm for
approximate inference and parameter estimation. Second, we propose the factor
hypergraph representation of TTM, and focus on both pairwise and higher-order
relation modeling among tagged documents and images. An efficient loopy BP
algorithm is developed to learn TTM, which encourages the topic labeling
smoothness among tagged documents and images. Extensive experimental results
confirm the incorporation of higher-order relations to be effective in
enhancing the overall topic modeling performance, when compared with current
state-of-the-art topic models, in many text and image mining tasks of broad
interests such as word and link prediction, document classification, and tag
recommendation.
| [
"Jia Zeng, Wei Feng, William K. Cheung, Chun-Hung Li",
"['Jia Zeng' 'Wei Feng' 'William K. Cheung' 'Chun-Hung Li']"
] |
cs.LG math.OC | null | 1109.5647 | null | null | http://arxiv.org/pdf/1109.5647v7 | 2012-12-09T21:19:27Z | 2011-09-26T17:24:52Z | Making Gradient Descent Optimal for Strongly Convex Stochastic
Optimization | Stochastic gradient descent (SGD) is a simple and popular method to solve
stochastic optimization problems which arise in machine learning. For strongly
convex problems, its convergence rate was known to be O(\log(T)/T), by running
SGD for T iterations and returning the average point. However, recent results
showed that using a different algorithm, one can get an optimal O(1/T) rate.
This might lead one to believe that standard SGD is suboptimal, and maybe
should even be replaced as a method of choice. In this paper, we investigate
the optimality of SGD in a stochastic setting. We show that for smooth
problems, the algorithm attains the optimal O(1/T) rate. However, for
non-smooth problems, the convergence rate with averaging might really be
\Omega(\log(T)/T), and this is not just an artifact of the analysis. On the
flip side, we show that a simple modification of the averaging step suffices to
recover the O(1/T) rate, and no other change of the algorithm is necessary. We
also present experimental results which support our findings, and point out
open problems.
| [
"Alexander Rakhlin, Ohad Shamir, Karthik Sridharan",
"['Alexander Rakhlin' 'Ohad Shamir' 'Karthik Sridharan']"
] |
cs.LG cs.DS | 10.1109/TIT.2013.2255021 | 1109.5664 | null | null | http://arxiv.org/abs/1109.5664v4 | 2013-06-21T20:52:27Z | 2011-09-26T18:44:00Z | Deterministic Feature Selection for $k$-means Clustering | We study feature selection for $k$-means clustering. Although the literature
contains many methods with good empirical performance, algorithms with provable
theoretical behavior have only recently been developed. Unfortunately, these
algorithms are randomized and fail with, say, a constant probability. We
address this issue by presenting a deterministic feature selection algorithm
for $k$-means with theoretical guarantees. At the heart of our algorithm lies a
deterministic method for decompositions of the identity.
| [
"Christos Boutsidis, Malik Magdon-Ismail",
"['Christos Boutsidis' 'Malik Magdon-Ismail']"
] |
cs.LG stat.ML | null | 1109.5894 | null | null | http://arxiv.org/pdf/1109.5894v1 | 2011-09-27T13:58:39Z | 2011-09-27T13:58:39Z | Learning Item Trees for Probabilistic Modelling of Implicit Feedback | User preferences for items can be inferred from either explicit feedback,
such as item ratings, or implicit feedback, such as rental histories. Research
in collaborative filtering has concentrated on explicit feedback, resulting in
the development of accurate and scalable models. However, since explicit
feedback is often difficult to collect it is important to develop effective
models that take advantage of the more widely available implicit feedback. We
introduce a probabilistic approach to collaborative filtering with implicit
feedback based on modelling the user's item selection process. In the interests
of scalability, we restrict our attention to tree-structured distributions over
items and develop a principled and efficient algorithm for learning item trees
from data. We also identify a problem with a widely used protocol for
evaluating implicit feedback models and propose a way of addressing it using a
small quantity of explicit feedback data.
| [
"['Andriy Mnih' 'Yee Whye Teh']",
"Andriy Mnih, Yee Whye Teh"
] |
cs.LG cs.CL | 10.1613/jair.1872 | 1109.6341 | null | null | http://arxiv.org/abs/1109.6341v1 | 2011-09-28T20:18:30Z | 2011-09-28T20:18:30Z | Domain Adaptation for Statistical Classifiers | The most basic assumption used in statistical learning theory is that
training data and test data are drawn from the same underlying distribution.
Unfortunately, in many applications, the "in-domain" test data is drawn from a
distribution that is related, but not identical, to the "out-of-domain"
distribution of the training data. We consider the common case in which labeled
out-of-domain data is plentiful, but labeled in-domain data is scarce. We
introduce a statistical formulation of this problem in terms of a simple
mixture model and present an instantiation of this framework to maximum entropy
classifiers and their linear chain counterparts. We present efficient inference
algorithms for this special case based on the technique of conditional
expectation maximization. Our experimental results show that our approach leads
to improved performance on three real world tasks on four different data sets
from the natural language processing domain.
| [
"['H. Daume III' 'D. Marcu']",
"H. Daume III, D. Marcu"
] |
cs.LG cs.CV | 10.1007/978-3-642-24031-7_17 | 1110.0061 | null | null | http://arxiv.org/abs/1110.0061v1 | 2011-10-01T01:07:03Z | 2011-10-01T01:07:03Z | Learning image transformations without training examples | The use of image transformations is essential for efficient modeling and
learning of visual data. But the class of relevant transformations is large:
affine transformations, projective transformations, elastic deformations, ...
the list goes on. Therefore, learning these transformations, rather than hand
coding them, is of great conceptual interest. To the best of our knowledge, all
the related work so far has been concerned with either supervised or weakly
supervised learning (from correlated sequences, video streams, or
image-transform pairs). In this paper, on the contrary, we present a simple
method for learning affine and elastic transformations when no examples of
these transformations are explicitly given, and no prior knowledge of space
(such as ordering of pixels) is included either. The system has only access to
a moderately large database of natural images arranged in no particular order.
| [
"['Sergey Pankov']",
"Sergey Pankov"
] |
cs.LG cs.AI cs.CV cs.NE | null | 1110.0214 | null | null | http://arxiv.org/pdf/1110.0214v1 | 2011-10-02T18:59:42Z | 2011-10-02T18:59:42Z | Eclectic Extraction of Propositional Rules from Neural Networks | Artificial Neural Network is among the most popular algorithm for supervised
learning. However, Neural Networks have a well-known drawback of being a "Black
Box" learner that is not comprehensible to the Users. This lack of transparency
makes it unsuitable for many high risk tasks such as medical diagnosis that
requires a rational justification for making a decision. Rule Extraction
methods attempt to curb this limitation by extracting comprehensible rules from
a trained Network. Many such extraction algorithms have been developed over the
years with their respective strengths and weaknesses. They have been broadly
categorized into three types based on their approach to use internal model of
the Network. Eclectic Methods are hybrid algorithms that combine the other
approaches to attain more performance. In this paper, we present an Eclectic
method called HERETIC. Our algorithm uses Inductive Decision Tree learning
combined with information of the neural network structure for extracting
logical rules. Experiments and theoretical analysis show HERETIC to be better
in terms of speed and performance.
| [
"['Ridwan Al Iqbal']",
"Ridwan Al Iqbal"
] |
stat.ML cs.LG | null | 1110.0413 | null | null | http://arxiv.org/pdf/1110.0413v1 | 2011-10-03T16:49:45Z | 2011-10-03T16:49:45Z | Group Lasso with Overlaps: the Latent Group Lasso approach | We study a norm for structured sparsity which leads to sparse linear
predictors whose supports are unions of predefined overlapping groups of
variables. We call the obtained formulation latent group Lasso, since it is
based on applying the usual group Lasso penalty on a set of latent variables. A
detailed analysis of the norm and its properties is presented and we
characterize conditions under which the set of groups associated with latent
variables are correctly identified. We motivate and discuss the delicate choice
of weights associated with each group, and illustrate this approach on simulated
data and on the problem of breast cancer prognosis from gene expression data.
| [
"['Guillaume Obozinski' 'Laurent Jacob' 'Jean-Philippe Vert']",
"Guillaume Obozinski (LIENS, INRIA Paris - Rocquencourt), Laurent\n Jacob, Jean-Philippe Vert (CBIO)"
] |
cs.LG cs.AI | null | 1110.0593 | null | null | http://arxiv.org/pdf/1110.0593v1 | 2011-10-04T07:34:13Z | 2011-10-04T07:34:13Z | Two Projection Pursuit Algorithms for Machine Learning under
Non-Stationarity | This thesis derives, tests and applies two linear projection algorithms for
machine learning under non-stationarity. The first finds a direction in a
linear space upon which a data set is maximally non-stationary. The second aims
to robustify two-way classification against non-stationarity. The algorithm is
tested on a key application scenario, namely Brain Computer Interfacing.
| [
"['Duncan A. J. Blythe']",
"Duncan A. J. Blythe"
] |
cs.IT cs.LG cs.SY math.IT | null | 1110.0718 | null | null | http://arxiv.org/pdf/1110.0718v1 | 2011-10-04T15:15:08Z | 2011-10-04T15:15:08Z | Directed information and Pearl's causal calculus | Probabilistic graphical models are a fundamental tool in statistics, machine
learning, signal processing, and control. When such a model is defined on a
directed acyclic graph (DAG), one can assign a partial ordering to the events
occurring in the corresponding stochastic system. Based on the work of Judea
Pearl and others, these DAG-based "causal factorizations" of joint probability
measures have been used for characterization and inference of functional
dependencies (causal links). This mostly expository paper focuses on several
connections between Pearl's formalism (and in particular his notion of
"intervention") and information-theoretic notions of causality and feedback
(such as causal conditioning, directed stochastic kernels, and directed
information). As an application, we show how conditional directed information
can be used to develop an information-theoretic version of Pearl's "back-door"
criterion for identifiability of causal effects from passive observations. This
suggests that the back-door criterion can be thought of as a causal analog of
statistical sufficiency.
| [
"Maxim Raginsky",
"['Maxim Raginsky']"
] |
cs.CV cs.AI cs.LG | null | 1110.0879 | null | null | http://arxiv.org/pdf/1110.0879v1 | 2011-10-05T02:11:38Z | 2011-10-05T02:11:38Z | Linearized Additive Classifiers | We revisit the additive model learning literature and adapt a penalized
spline formulation due to Eilers and Marx, to train additive classifiers
efficiently. We also propose two new embeddings based on two classes of orthogonal
bases with orthogonal derivatives, which can also be used to efficiently learn
additive classifiers. This paper follows the popular theme in the current
literature where kernel SVMs are learned much more efficiently using an
approximate embedding and a linear machine. In this paper we show that spline
bases are especially well suited for learning additive models because of their
sparsity structure and the ease of computing the embedding which enables one to
train these models in an online manner, without incurring the memory overhead
of precomputing and storing the embeddings. We show interesting connections
between B-Spline basis and histogram intersection kernel and show that for a
particular choice of regularization and degree of the B-Splines, our proposed
learning algorithm closely approximates the histogram intersection kernel SVM.
This enables one to learn additive models with almost no memory overhead
compared to a fast linear solver, such as LIBLINEAR, while being only 5-6X
slower on average. On two large scale image classification datasets, MNIST and
Daimler Chrysler pedestrians, the proposed additive classifiers are as accurate
as the kernel SVM, while being two orders of magnitude faster to train.
| [
"['Subhransu Maji']",
"Subhransu Maji"
] |
cs.LG cs.CV | null | 1110.0957 | null | null | http://arxiv.org/pdf/1110.0957v1 | 2011-10-05T11:49:09Z | 2011-10-05T11:49:09Z | Dictionary Learning for Deblurring and Digital Zoom | This paper proposes a novel approach to image deblurring and digital zooming
using sparse local models of image appearance. These models, where small image
patches are represented as linear combinations of a few elements drawn from
some large set (dictionary) of candidates, have proven well adapted to several
image restoration tasks. A key to their success has been to learn dictionaries
adapted to the reconstruction of small image patches. In contrast, recent works
have proposed instead to learn dictionaries which are not only adapted to data
reconstruction, but also tuned for a specific task. We introduce here such an
approach to deblurring and digital zoom, using pairs of blurry/sharp (or
low-/high-resolution) images for training, as well as an effective stochastic
gradient algorithm for solving the corresponding optimization task. Although
this learning problem is not convex, once the dictionaries have been learned,
the sharp/high-resolution image can be recovered via convex optimization at
test time. Experiments with synthetic and real data demonstrate the
effectiveness of the proposed approach, leading to state-of-the-art performance
for non-blind image deblurring and digital zoom.
| [
"Florent Couzinie-Devy and Julien Mairal and Francis Bach and Jean\n Ponce",
"['Florent Couzinie-Devy' 'Julien Mairal' 'Francis Bach' 'Jean Ponce']"
] |
cs.LG | 10.1613/jair.2005 | 1110.1073 | null | null | http://arxiv.org/abs/1110.1073v1 | 2011-10-05T18:59:49Z | 2011-10-05T18:59:49Z | Active Learning with Multiple Views | Active learners alleviate the burden of labeling large amounts of data by
detecting and asking the user to label only the most informative examples in
the domain. We focus here on active learning for multi-view domains, in which
there are several disjoint subsets of features (views), each of which is
sufficient to learn the target concept. In this paper we make several
contributions. First, we introduce Co-Testing, which is the first approach to
multi-view active learning. Second, we extend the multi-view learning framework
by also exploiting weak views, which are adequate only for learning a concept
that is more general/specific than the target concept. Finally, we empirically
show that Co-Testing outperforms existing active learners on a variety of real
world domains such as wrapper induction, Web page classification, advertisement
removal, and discourse tree parsing.
| [
"['C. A. Knoblock' 'S. Minton' 'I. Muslea']",
"C. A. Knoblock, S. Minton, I. Muslea"
] |
cs.LG | 10.1109/TSP.2012.2200479 | 1110.1075 | null | null | http://arxiv.org/abs/1110.1075v1 | 2011-10-05T19:03:35Z | 2011-10-05T19:03:35Z | The Augmented Complex Kernel LMS | Recently, a unified framework for adaptive kernel based signal processing of
complex data was presented by the authors, which, besides offering techniques
to map the input data to complex Reproducing Kernel Hilbert Spaces, developed a
suitable Wirtinger-like Calculus for general Hilbert Spaces. In this short
paper, the extended Wirtinger's calculus is adopted to derive complex
kernel-based widely-linear estimation filters. Furthermore, we illuminate
several important characteristics of the widely linear filters. We show that,
although in many cases the gains from adopting widely linear estimation
filters, as alternatives to ordinary linear ones, are rudimentary, for the case
of kernel based widely linear filters significant performance improvements can
be obtained.
| [
"['Pantelis Bouboulis' 'Sergios Theodoridis' 'Michael Mavroforakis']",
"Pantelis Bouboulis, Sergios Theodoridis, Michael Mavroforakis"
] |
cs.GT cs.LG | null | 1110.1514 | null | null | http://arxiv.org/pdf/1110.1514v1 | 2011-10-07T13:04:14Z | 2011-10-07T13:04:14Z | Blackwell Approachability and Minimax Theory | This manuscript investigates the relationship between Blackwell
Approachability, a stochastic vector-valued repeated game, and minimax theory,
a single-play scalar-valued scenario. First, it is established in a general
setting --- one not permitting invocation of minimax theory --- that
Blackwell's Approachability Theorem and its generalization due to Hou are still
valid. Second, minimax structure grants a result in the spirit of Blackwell's
weak-approachability conjecture, later resolved by Vieille, that any set is
either approachable by one player, or avoidable by the opponent. This analysis
also reveals a strategy for the opponent.
| [
"['Matus Telgarsky']",
"Matus Telgarsky"
] |
stat.ML cs.LG physics.data-an | null | 1110.1769 | null | null | http://arxiv.org/pdf/1110.1769v1 | 2011-10-08T21:24:36Z | 2011-10-08T21:24:36Z | On the trade-off between complexity and correlation decay in structural
learning algorithms | We consider the problem of learning the structure of Ising models (pairwise
binary Markov random fields) from i.i.d. samples. While several methods have
been proposed to accomplish this task, their relative merits and limitations
remain somewhat obscure. By analyzing a number of concrete examples, we show
that low-complexity algorithms often fail when the Markov random field develops
long-range correlations. More precisely, this phenomenon appears to be related
to the Ising model phase transition (although it does not coincide with it).
| [
"['José Bento' 'Andrea Montanari']",
"Jos\\'e Bento, Andrea Montanari"
] |
cs.LG cs.SY | null | 1110.1781 | null | null | http://arxiv.org/pdf/1110.1781v1 | 2011-10-09T02:18:50Z | 2011-10-09T02:18:50Z | A Study of Unsupervised Adaptive Crowdsourcing | We consider unsupervised crowdsourcing performance based on the model wherein
the responses of end-users are essentially rated according to how their
responses correlate with the majority of other responses to the same
subtasks/questions. In one setting, we consider an independent sequence of
identically distributed crowdsourcing assignments (meta-tasks), while in the
other we consider a single assignment with a large number of component
subtasks. Both problems yield intuitive results in which the overall
reliability of the crowd is a factor.
| [
"G. Kesidis and A. Kurve",
"['G. Kesidis' 'A. Kurve']"
] |
cs.RO cs.LG | null | 1110.1796 | null | null | http://arxiv.org/pdf/1110.1796v1 | 2011-10-09T06:16:57Z | 2011-10-09T06:16:57Z | A Behavior-based Approach for Multi-agent Q-learning for Autonomous
Exploration | The use of mobile robots is becoming popular worldwide, mainly for
autonomous exploration in hazardous/toxic or unknown environments. This
exploration will be more effective and efficient if exploration in unknown
environments can be aided by learning from past experiences. Currently,
reinforcement learning is gaining acceptance for implementing learning in
robots from system-environment interactions. This learning can be
implemented using the concept of both single-agent and multiagent. This paper
describes such a multiagent approach for implementing a type of reinforcement
learning using a priority-based, behaviour-based architecture. This proposed
methodology has been successfully tested in both indoor and outdoor
environments.
| [
"Dip Narayan Ray, Somajyoti Majumder, Sumit Mukhopadhyay",
"['Dip Narayan Ray' 'Somajyoti Majumder' 'Sumit Mukhopadhyay']"
] |
cs.LG | null | 1110.2098 | null | null | http://arxiv.org/pdf/1110.2098v3 | 2012-08-04T22:11:49Z | 2011-10-10T16:35:51Z | Dynamic Matrix Factorization: A State Space Approach | Matrix factorization from a small number of observed entries has recently
garnered much attention as the key ingredient of successful recommendation
systems. One unresolved problem in this area is how to adapt current methods to
handle changing user preferences over time. Recent proposals to address this
issue are heuristic in nature and do not fully exploit the time-dependent
structure of the problem. As a principled and general temporal formulation, we
propose a dynamical state space model of matrix factorization. Our proposal
builds upon probabilistic matrix factorization, a Bayesian model with Gaussian
priors. We utilize results in state tracking, such as the Kalman filter, to
provide accurate recommendations in the presence of both process and
measurement noise. We show how system parameters can be learned via
expectation-maximization and provide comparisons to current published
techniques.
| [
"John Z. Sun, Kush R. Varshney and Karthik Subbian",
"['John Z. Sun' 'Kush R. Varshney' 'Karthik Subbian']"
] |
cs.LG | null | 1110.2136 | null | null | http://arxiv.org/pdf/1110.2136v3 | 2012-06-20T13:56:24Z | 2011-10-10T18:32:32Z | Active Learning Using Smooth Relative Regret Approximations with
Applications | The disagreement coefficient of Hanneke has become a central data-independent
invariant in proving active learning rates. It has been shown in various ways
that a concept class with low complexity together with a bound on the
disagreement coefficient at an optimal solution allows active learning rates
that are superior to passive learning ones.
We present a different tool for pool based active learning which follows from
the existence of a certain uniform version of low disagreement coefficient, but
is not equivalent to it. In fact, we present two fundamental active learning
problems of significant interest for which our approach allows nontrivial
active learning bounds. However, any general purpose method relying on the
disagreement coefficient bounds only fails to guarantee any useful bounds for
these problems.
The tool we use is based on the learner's ability to compute an estimator of
the difference between the loss of any hypotheses and some fixed "pivotal"
hypothesis to within an absolute error of at most $\epsilon$ times the
| [
"Nir Ailon and Ron Begleiter and Esther Ezra",
"['Nir Ailon' 'Ron Begleiter' 'Esther Ezra']"
] |
cs.AI cs.CL cs.LG | null | 1110.2162 | null | null | http://arxiv.org/pdf/1110.2162v2 | 2011-10-13T17:51:20Z | 2011-10-10T19:54:57Z | Large-Margin Learning of Submodular Summarization Methods | In this paper, we present a supervised learning approach to training
submodular scoring functions for extractive multi-document summarization. By
taking a structured prediction approach, we provide a large-margin method that
directly optimizes a convex relaxation of the desired performance measure. The
learning method applies to all submodular summarization methods, and we
demonstrate its effectiveness for both pairwise as well as coverage-based
scoring functions on multiple datasets. Compared to state-of-the-art functions
that were tuned manually, our method significantly improves performance and
enables high-fidelity models with numbers of parameters well beyond what could
reasonably be tuned by hand.
| [
"['Ruben Sipos' 'Pannaga Shivaswamy' 'Thorsten Joachims']",
"Ruben Sipos, Pannaga Shivaswamy, Thorsten Joachims"
] |
cs.LG cs.AI | 10.1613/jair.2113 | 1110.2211 | null | null | http://arxiv.org/abs/1110.2211v1 | 2011-10-10T21:58:58Z | 2011-10-10T21:58:58Z | Learning Symbolic Models of Stochastic Domains | In this article, we work towards the goal of developing agents that can learn
to act in complex worlds. We develop a probabilistic, relational planning rule
representation that compactly models noisy, nondeterministic action effects,
and show how such rules can be effectively learned. Through experiments in
simple planning domains and a 3D simulated blocks world with realistic physics,
we demonstrate that this learning algorithm allows agents to effectively model
world dynamics.
| [
"L. P. Kaelbling, H. M. Pasula, L. S. Zettlemoyer",
"['L. P. Kaelbling' 'H. M. Pasula' 'L. S. Zettlemoyer']"
] |
stat.ML cs.CV cs.LG | null | 1110.2306 | null | null | http://arxiv.org/pdf/1110.2306v1 | 2011-10-11T09:04:56Z | 2011-10-11T09:04:56Z | Ground Metric Learning | Transportation distances have been used for more than a decade now in machine
learning to compare histograms of features. They have one parameter: the ground
metric, which can be any metric between the features themselves. As is the case
for all parameterized distances, transportation distances can only prove useful
in practice when this parameter is carefully chosen. To date, the only option
available to practitioners to set the ground metric parameter was to rely on a
priori knowledge of the features, which limited considerably the scope of
application of transportation distances. We propose to lift this limitation and
consider instead algorithms that can learn the ground metric using only a
training set of labeled histograms. We call this approach ground metric
learning. We formulate the problem of learning the ground metric as the
minimization of the difference of two polyhedral convex functions over a convex
set of distance matrices. We follow the presentation of our algorithms with
promising experimental results on binary classification tasks using GIST
descriptors of images taken in the Caltech-256 set.
| [
"['Marco Cuturi' 'David Avis']",
"Marco Cuturi, David Avis"
] |
cs.LG math.PR | null | 1110.2392 | null | null | http://arxiv.org/pdf/1110.2392v2 | 2011-10-13T19:04:19Z | 2011-10-11T14:53:35Z | A Variant of Azuma's Inequality for Martingales with Subgaussian Tails | We provide a variant of Azuma's concentration inequality for martingales, in
which the standard boundedness requirement is replaced by the milder
requirement of a subgaussian tail.
| [
"['Ohad Shamir']",
"Ohad Shamir"
] |
cs.LG | null | 1110.2416 | null | null | http://arxiv.org/pdf/1110.2416v1 | 2011-10-11T16:19:06Z | 2011-10-11T16:19:06Z | Supervised learning of short and high-dimensional temporal sequences for
life science measurements | The analysis of physiological processes over time is often based on
spectrometric or gene expression profiles with only a few time points
but a large number of measured variables. The analysis of such temporal
sequences is challenging, and only few methods have been proposed. The
information can be encoded time-independently, by means of classical expression
differences for a single time point, or in expression profiles over time.
Available methods are limited to unsupervised and semi-supervised settings. The
predictive variables can be identified only by means of wrapper or
post-processing techniques. This is complicated due to the small number of
samples for such studies. Here, we present a supervised learning approach,
termed Supervised Topographic Mapping Through Time (SGTM-TT). It learns a
supervised mapping of the temporal sequences onto a low dimensional grid. We
utilize a hidden Markov model (HMM) to account for the time domain and
relevance learning to identify the relevant feature dimensions most predictive
over time. The learned mapping can be used to visualize the temporal sequences
and to predict the class of a new sequence. The relevance learning permits the
identification of discriminating masses or gene expressions and prunes
dimensions which are unnecessary for the classification task or encode mainly
noise. In this way we obtain a very efficient learning system for temporal
sequences. The results indicate that using simultaneous supervised learning and
metric adaptation significantly improves the prediction accuracy for
synthetic and real-life data in comparison to the standard techniques. The
discriminating features, identified by relevance learning, compare favorably
with the results of alternative methods. Our method permits the visualization
of the data on a low dimensional grid, highlighting the observed temporal
structure.
| [
"['F. -M. Schleif' 'A. Gisbrecht' 'B. Hammer']",
"F.-M. Schleif, A. Gisbrecht, B. Hammer"
] |
stat.ML cs.LG math.OC | null | 1110.2529 | null | null | http://arxiv.org/pdf/1110.2529v2 | 2012-06-07T03:12:48Z | 2011-10-11T23:27:42Z | The Generalization Ability of Online Algorithms for Dependent Data | We study the generalization performance of online learning algorithms trained
on samples coming from a dependent source of data. We show that the
generalization error of any stable online algorithm concentrates around its
regret--an easily computable statistic of the online performance of the
algorithm--when the underlying ergodic process is $\beta$- or $\phi$-mixing. We
show high probability error bounds assuming the loss function is convex, and we
also establish sharp convergence rates and deviation bounds for strongly convex
losses and several linear prediction problems such as linear and logistic
regression, least-squares SVM, and boosting on dependent data. In addition, our
results have straightforward applications to stochastic optimization with
dependent data, and our analysis requires only martingale convergence
arguments; we need not rely on more powerful statistical tools such as
empirical process theory.
| [
"Alekh Agarwal and John C. Duchi",
"['Alekh Agarwal' 'John C. Duchi']"
] |
cs.IR cs.LG | null | 1110.2610 | null | null | http://arxiv.org/pdf/1110.2610v1 | 2011-10-12T09:27:58Z | 2011-10-12T09:27:58Z | Issues, Challenges and Tools of Clustering Algorithms | Clustering is an unsupervised technique of Data Mining. It means grouping
similar objects together and separating the dissimilar ones. Each object in the
data set is assigned a class label in the clustering process using a distance
measure. This paper captures the problems that are faced in practice when
clustering algorithms are implemented. It also considers the most extensively
used tools which are readily available and support functions which ease the
programming. Once algorithms have been implemented, they also need to be tested
for its validity. There exist several validation indexes for testing the
performance and accuracy which have also been discussed here.
| [
"['Parul Agarwal' 'M. Afshar Alam' 'Ranjit Biswas']",
"Parul Agarwal, M.Afshar Alam, Ranjit Biswas"
] |
cs.LG cs.DB | 10.5121/ijdkp.2011.1501 | 1110.2626 | null | null | http://arxiv.org/abs/1110.2626v1 | 2011-10-12T10:56:29Z | 2011-10-12T10:56:29Z | Analysis of Heart Diseases Dataset using Neural Network Approach | One of the important techniques of Data mining is Classification. Many real
world problems in various fields such as business, science, industry and
medicine can be solved by using a classification approach. Neural Networks have
emerged as an important tool for classification. The advantages of Neural
Networks help in the efficient classification of given data. In this study a
heart disease dataset is analyzed using a Neural Network approach. To increase
the efficiency of the classification process, a parallel approach is also
adopted in the training phase.
| [
"['K. Usha Rani']",
"K. Usha Rani"
] |
cs.LG cs.IT math.IT | null | 1110.2755 | null | null | http://arxiv.org/pdf/1110.2755v3 | 2012-07-10T23:24:32Z | 2011-10-12T18:48:09Z | Efficient Tracking of Large Classes of Experts | In the framework of prediction of individual sequences, sequential prediction
methods are to be constructed that perform nearly as well as the best expert
from a given class. We consider prediction strategies that compete with the
class of switching strategies that can segment a given sequence into several
blocks, and follow the advice of a different "base" expert in each block. As
usual, the performance of the algorithm is measured by the regret defined as
the excess loss relative to the best switching strategy selected in hindsight
for the particular sequence to be predicted. In this paper we construct
prediction strategies of low computational cost for the case where the set of
base experts is large. In particular we provide a method that can transform any
prediction algorithm $\A$ that is designed for the base class into a tracking
algorithm. The resulting tracking algorithm can take advantage of the
prediction performance and potential computational efficiency of $\A$ in the
sense that it can be implemented with time and space complexity only
$O(n^{\gamma} \ln n)$ times larger than that of $\A$, where $n$ is the time
horizon and $\gamma \ge 0$ is a parameter of the algorithm. With $\A$ properly
chosen, our algorithm achieves a regret bound of optimal order for $\gamma>0$,
and only $O(\ln n)$ times larger than the optimal order for $\gamma=0$ for all
typical regret bound types we examined. For example, for predicting binary
sequences with switching parameters under the logarithmic loss, our method
achieves the optimal $O(\ln n)$ regret rate with time complexity
$O(n^{1+\gamma}\ln n)$ for any $\gamma\in (0,1)$.
| [
"Andr\\'as Gyorgy, Tam\\'as Linder, G\\'abor Lugosi",
"['András Gyorgy' 'Tamás Linder' 'Gábor Lugosi']"
] |
math.PR cs.LG | null | 1110.2842 | null | null | http://arxiv.org/pdf/1110.2842v1 | 2011-10-13T04:56:17Z | 2011-10-13T04:56:17Z | A tail inequality for quadratic forms of subgaussian random vectors | We prove an exponential probability tail inequality for positive semidefinite
quadratic forms in a subgaussian random vector. The bound is analogous to one
that holds when the vector has independent Gaussian entries.
| [
"Daniel Hsu and Sham M. Kakade and Tong Zhang",
"['Daniel Hsu' 'Sham M. Kakade' 'Tong Zhang']"
] |
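For orientation, the bound described in the abstract above has roughly the following shape, where $x$ is a subgaussian random vector with parameter $\sigma$, $A$ is a fixed matrix, and $\Sigma = A^\top A$. This is a reconstruction from memory, not a quotation, so the exact constants should be treated as an assumption and checked against the paper:

```latex
% Sketch of the bound's shape; the exact constants are an assumption here.
\[
\Pr\Big[\, \|Ax\|^2 > \sigma^2\big(\operatorname{tr}(\Sigma)
   + 2\sqrt{\operatorname{tr}(\Sigma^2)\,t} + 2\|\Sigma\|\,t\big) \Big]
   \;\le\; e^{-t}, \qquad t > 0,
\]
% which mirrors the classical chi-squared tail bound that holds when the
% vector has independent Gaussian entries.
```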
cs.LG cs.CV stat.ML | 10.1109/CVPR.2011.5995636 | 1110.2855 | null | null | http://arxiv.org/abs/1110.2855v1 | 2011-10-13T07:35:05Z | 2011-10-13T07:35:05Z | Sparse Image Representation with Epitomes | Sparse coding, which is the decomposition of a vector using only a few basis
elements, is widely used in machine learning and image processing. The basis
set, also called dictionary, is learned to adapt to specific data. This
approach has proven to be very effective in many image processing tasks.
Traditionally, the dictionary is an unstructured "flat" set of atoms. In this
paper, we study structured dictionaries which are obtained from an epitome, or
a set of epitomes. The epitome is itself a small image, and the atoms are all
the patches of a chosen size inside this image. This considerably reduces the
number of parameters to learn and provides sparse image decompositions with
shift-invariance properties. We propose a new formulation and an algorithm for
learning the structured dictionaries associated with epitomes, and illustrate
their use in image denoising tasks.
| [
"Louise Beno\\^it (INRIA Paris - Rocquencourt, LIENS, INRIA Paris -\n Rocquencourt), Julien Mairal (INRIA Paris - Rocquencourt, LIENS), Francis\n Bach (INRIA Paris - Rocquencourt), Jean Ponce (INRIA Paris - Rocquencourt)",
"['Louise Benoît' 'Julien Mairal' 'Francis Bach' 'Jean Ponce']"
] |
cs.DS cs.LG | null | 1110.2897 | null | null | http://arxiv.org/pdf/1110.2897v3 | 2014-11-04T19:40:43Z | 2011-10-13T11:24:59Z | Randomized Dimensionality Reduction for k-means Clustering | We study the topic of dimensionality reduction for $k$-means clustering.
Dimensionality reduction encompasses the union of two approaches: \emph{feature
selection} and \emph{feature extraction}. A feature selection based algorithm
for $k$-means clustering selects a small subset of the input features and then
applies $k$-means clustering on the selected features. A feature extraction
based algorithm for $k$-means clustering constructs a small set of new
artificial features and then applies $k$-means clustering on the constructed
features. Despite the significance of $k$-means clustering as well as the
wealth of heuristic methods addressing it, provably accurate feature selection
methods for $k$-means clustering are not known. On the other hand, two provably
accurate feature extraction methods for $k$-means clustering are known in the
literature; one is based on random projections and the other is based on the
singular value decomposition (SVD).
This paper makes further progress towards a better understanding of
dimensionality reduction for $k$-means clustering. Namely, we present the first
provably accurate feature selection method for $k$-means clustering and, in
addition, we present two feature extraction methods. The first feature
extraction method is based on random projections and it improves upon the
existing results in terms of time complexity and number of features needed to
be extracted. The second feature extraction method is based on fast approximate
SVD factorizations and it also improves upon the existing results in terms of
time complexity. The proposed algorithms are randomized and provide
constant-factor approximation guarantees with respect to the optimal $k$-means
objective value.
| [
"['Christos Boutsidis' 'Anastasios Zouzias' 'Michael W. Mahoney'\n 'Petros Drineas']",
"Christos Boutsidis and Anastasios Zouzias and Michael W. Mahoney and\n Petros Drineas"
] |
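To make the feature-extraction route in the abstract above concrete, here is a minimal Python sketch of the generic random-projection approach: project the data to a low dimension, then run k-means there. It illustrates the technique in general, not the paper's specific algorithm or its guarantees; the target dimension `r` below is an arbitrary illustrative choice.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_via_random_projection(X, k, r, seed=0):
    """Cluster X (n x d) with k-means after projecting to r dimensions.

    Generic random-projection sketch; the paper's method comes with
    specific guarantees on r that are not reproduced here.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Dense Gaussian projection matrix, scaled to roughly preserve norms.
    R = rng.standard_normal((d, r)) / np.sqrt(r)
    X_low = X @ R
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X_low)

# Example: 500 points in 1000 dimensions, clustered in a 50-dimensional sketch.
X = np.random.randn(500, 1000)
print(kmeans_via_random_projection(X, k=5, r=50)[:10])
```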
stat.ML cs.LG cs.SI physics.soc-ph | null | 1110.2899 | null | null | http://arxiv.org/pdf/1110.2899v1 | 2011-10-13T11:34:21Z | 2011-10-13T11:34:21Z | Discovering Emerging Topics in Social Streams via Link Anomaly Detection | Detection of emerging topics is now receiving renewed interest, motivated by
the rapid growth of social networks. Conventional term-frequency-based
approaches may not be appropriate in this context, because the information
exchanged is not only text but also images, URLs, and videos. We focus on the
social aspects of these networks, that is, the links between users that are
generated dynamically, intentionally or unintentionally, through replies,
mentions, and retweets. We propose a probability model of the mentioning
behaviour of a social network user, and propose to detect the emergence of a
new topic from the anomaly measured through the model. We combine the proposed
mention anomaly score with a recently proposed change-point detection technique
based on the Sequentially Discounting Normalized Maximum Likelihood (SDNML), or
with Kleinberg's burst model. Aggregating anomaly scores from hundreds of
users, we show that we can detect emerging topics only based on the
reply/mention relationships in social network posts. We demonstrate our
technique in a number of real data sets we gathered from Twitter. The
experiments show that the proposed mention-anomaly-based approaches can detect
new topics at least as early as the conventional term-frequency-based approach,
and sometimes much earlier when the keyword is ill-defined.
| [
"Toshimitsu Takahashi, Ryota Tomioka, Kenji Yamanishi",
"['Toshimitsu Takahashi' 'Ryota Tomioka' 'Kenji Yamanishi']"
] |
math.OC cs.LG | null | 1110.3001 | null | null | http://arxiv.org/pdf/1110.3001v1 | 2011-10-13T17:25:42Z | 2011-10-13T17:25:42Z | Step size adaptation in first-order method for stochastic strongly
convex programming | We propose a first-order method for stochastic strongly convex optimization
that attains an $O(1/n)$ rate of convergence; our analysis shows that the
proposed method is simple, easy to implement, and, in the worst case,
asymptotically four times faster than its peers. We derive this method from
several intuitive observations that are generalized from existing first-order
optimization methods.
| [
"['Peng Cheng']",
"Peng Cheng"
] |
stat.ML cs.LG | null | 1110.3076 | null | null | http://arxiv.org/pdf/1110.3076v1 | 2011-10-13T21:48:04Z | 2011-10-13T21:48:04Z | Efficient Latent Variable Graphical Model Selection via Split Bregman
Method | We consider the problem of covariance matrix estimation in the presence of
latent variables. Under suitable conditions, it is possible to learn the
marginal covariance matrix of the observed variables via a tractable convex
program, where the concentration matrix of the observed variables is decomposed
into a sparse matrix (representing the graphical structure of the observed
variables) and a low rank matrix (representing the marginalization effect of
latent variables). We present an efficient first-order method based on split
Bregman to solve the convex problem. The algorithm is guaranteed to converge
under mild conditions. We show that our algorithm is significantly faster than
the state-of-the-art algorithm on both artificial and real-world data. Applying
the algorithm to gene expression data involving thousands of genes, we show
that most of the correlation between observed variables can be explained by
only a few dozen latent factors.
| [
"['Gui-Bo Ye' 'Yuanfeng Wang' 'Yifei Chen' 'Xiaohui Xie']",
"Gui-Bo Ye, Yuanfeng Wang, Yifei Chen, and Xiaohui Xie"
] |
cs.CV cs.LG | 10.1109/TIP.2014.2375641 | 1110.3109 | null | null | http://arxiv.org/abs/1110.3109v2 | 2013-01-10T23:22:48Z | 2011-10-14T02:05:14Z | Robust Image Analysis by L1-Norm Semi-supervised Learning | This paper presents a novel L1-norm semi-supervised learning algorithm for
robust image analysis by giving new L1-norm formulation of Laplacian
regularization which is the key step of graph-based semi-supervised learning.
Since our L1-norm Laplacian regularization is defined directly over the
eigenvectors of the normalized Laplacian matrix, we successfully formulate
semi-supervised learning as an L1-norm linear reconstruction problem which can
be effectively solved with sparse coding. By working with only a small subset
of eigenvectors, we further develop a fast sparse coding algorithm for our
L1-norm semi-supervised learning. Due to the sparsity induced by sparse coding,
the proposed algorithm can deal with the noise in the data to some extent and
thus has important applications to robust image analysis, such as noise-robust
image classification and noise reduction for visual and textual bag-of-words
(BOW) models. In particular, this paper is the first attempt to obtain robust
image representation by sparse co-refinement of visual and textual BOW models.
The experimental results have shown the promising performance of the proposed
algorithm.
| [
"['Zhiwu Lu' 'Yuxin Peng']",
"Zhiwu Lu and Yuxin Peng"
] |
cs.LG cs.AI stat.ML | null | 1110.3239 | null | null | http://arxiv.org/pdf/1110.3239v1 | 2011-10-12T12:17:51Z | 2011-10-12T12:17:51Z | Improving parameter learning of Bayesian nets from incomplete data | This paper addresses the estimation of parameters of a Bayesian network from
incomplete data. The task is usually tackled by running the
Expectation-Maximization (EM) algorithm several times in order to obtain a high
log-likelihood estimate. We argue that choosing the maximum log-likelihood
estimate (as well as the maximum penalized log-likelihood and the maximum a
posteriori estimate) has severe drawbacks, being affected both by overfitting
and model uncertainty. Two ideas are discussed to overcome these issues: a
maximum entropy approach and a Bayesian model averaging approach. Both ideas
can be easily applied on top of EM, while the entropy idea can be also
implemented in a more sophisticated way, through a dedicated non-linear solver.
A vast set of experiments shows that these ideas produce significantly better
estimates and inferences than the traditional and widely used maximum
(penalized) log-likelihood and maximum a posteriori estimates. In particular,
if EM is adopted as optimization engine, the model averaging approach is the
best performing one; its performance is matched by the entropy approach when
implemented using the non-linear solver. The results suggest that the
applicability of these ideas is immediate (they are easy to implement and to
integrate in currently available inference engines) and that they constitute a
better way to learn Bayesian network parameters.
| [
"Giorgio Corani and Cassio P. De Campos",
"['Giorgio Corani' 'Cassio P. De Campos']"
] |
cs.LG | null | 1110.3347 | null | null | http://arxiv.org/pdf/1110.3347v1 | 2011-10-14T21:47:12Z | 2011-10-14T21:47:12Z | Dynamic Batch Bayesian Optimization | Bayesian optimization (BO) algorithms try to optimize an unknown function
that is expensive to evaluate using a minimum number of
evaluations/experiments. Most of the algorithms proposed in BO are sequential,
where only one experiment is selected at each iteration. This can be
time-inefficient when each experiment takes a long time and more than one
experiment can be run concurrently. On the other hand, requesting a fixed-size
batch of experiments at each iteration causes performance inefficiency in BO
compared to sequential policies. In this paper, we present an algorithm that
requests a batch of experiments at each time step t, where the batch size p_t
is dynamically determined at each step. Our algorithm is based on the
observation that the sequence of experiments selected by the sequential policy
can sometimes be almost independent of each other. Our algorithm identifies
such scenarios and requests those experiments at the same time without
degrading the performance.
We evaluate our proposed method using the Expected Improvement policy and the
results show substantial speedup with little impact on the performance in eight
real and synthetic benchmarks.
| [
"Javad Azimi, Ali Jalali, Xiaoli Fern",
"['Javad Azimi' 'Ali Jalali' 'Xiaoli Fern']"
] |
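Since the evaluation above uses the Expected Improvement policy, a brief sketch of the textbook EI acquisition function (for minimization under a Gaussian posterior) may help orient the reader. This is the standard formula, not the paper's dynamic batch-selection rule, and the posterior mean/std `mu`, `sigma` are assumed to come from a fitted GP elsewhere.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Textbook Expected Improvement for minimization.

    mu, sigma: GP posterior mean and std at candidate points (arrays).
    f_best: best (lowest) objective value observed so far.
    """
    sigma = np.maximum(sigma, 1e-12)          # avoid division by zero
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Toy usage with made-up posterior values at three candidate points.
print(expected_improvement(np.array([0.2, 0.5, 0.9]),
                           np.array([0.3, 0.1, 0.4]), f_best=0.4))
```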
cs.LG cs.DS cs.HC stat.ML | null | 1110.3564 | null | null | http://arxiv.org/pdf/1110.3564v4 | 2013-03-26T07:28:04Z | 2011-10-17T02:52:20Z | Budget-Optimal Task Allocation for Reliable Crowdsourcing Systems | Crowdsourcing systems, in which numerous tasks are electronically distributed
to numerous "information piece-workers", have emerged as an effective paradigm
for human-powered solving of large scale problems in domains such as image
classification, data entry, optical character recognition, recommendation, and
proofreading. Because these low-paid workers can be unreliable, nearly all such
systems must devise schemes to increase confidence in their answers, typically
by assigning each task multiple times and combining the answers in an
appropriate manner, e.g. majority voting.
In this paper, we consider a general model of such crowdsourcing tasks and
pose the problem of minimizing the total price (i.e., number of task
assignments) that must be paid to achieve a target overall reliability. We give
a new algorithm for deciding which tasks to assign to which workers and for
inferring correct answers from the workers' answers. We show that our
algorithm, inspired by belief propagation and low-rank matrix approximation,
significantly outperforms majority voting and, in fact, is optimal through
comparison to an oracle that knows the reliability of every worker. Further, we
compare our approach with a more general class of algorithms which can
dynamically assign tasks. By adaptively deciding which questions to ask to the
next arriving worker, one might hope to reduce uncertainty more efficiently. We
show that, perhaps surprisingly, the minimum price necessary to achieve a
target reliability scales in the same manner under both adaptive and
non-adaptive scenarios. Hence, our non-adaptive approach is order-optimal under
both scenarios. This strongly relies on the fact that workers are fleeting and
cannot be exploited. Therefore, architecturally, our results suggest that
building a reliable worker-reputation system is essential to fully harnessing
the potential of adaptive designs.
| [
"David R. Karger and Sewoong Oh and Devavrat Shah",
"['David R. Karger' 'Sewoong Oh' 'Devavrat Shah']"
] |
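A rough sketch of the flavor of iterative inference described above: message passing on the task-worker answer matrix, which in this simplified form degenerates to a power-iteration-like scheme. This is a reconstruction under a binary-task model, not the paper's exact algorithm; `A` is assumed to hold answers in {-1, +1}, with 0 for unassigned pairs.

```python
import numpy as np

def infer_answers(A, n_iter=20, seed=0):
    """Estimate true binary labels from a task-by-worker answer matrix A.

    Simplified sketch: each worker receives a reliability-like score from
    the tasks it answered, and each task is decided by a score-weighted vote.
    """
    rng = np.random.default_rng(seed)
    n_tasks, n_workers = A.shape
    y = rng.standard_normal(n_workers)        # worker scores, random init
    for _ in range(n_iter):
        x = A @ y                             # task messages: weighted votes
        y = A.T @ x                           # worker messages: agreement
        y /= np.linalg.norm(y) + 1e-12        # keep the iteration stable
    return np.sign(A @ y)                     # weighted-majority decision

def majority_vote(A):
    """Plain majority voting baseline for comparison."""
    return np.sign(A.sum(axis=1))
```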
cs.IT cs.LG math.IT stat.ML | null | 1110.3592 | null | null | http://arxiv.org/pdf/1110.3592v2 | 2011-11-28T06:56:52Z | 2011-10-17T07:51:59Z | Information, learning and falsification | There are (at least) three approaches to quantifying information. The first,
algorithmic information or Kolmogorov complexity, takes events as strings and,
given a universal Turing machine, quantifies the information content of a
string as the length of the shortest program producing it. The second, Shannon
information, takes events as belonging to ensembles and quantifies the
information resulting from observing the given event in terms of the number of
alternate events that have been ruled out. The third, statistical learning
theory, has introduced measures of capacity that control (in part) the expected
risk of classifiers. These capacities quantify the expectations regarding
future data that learning algorithms embed into classifiers.
This note describes a new method of quantifying information, effective
information, that links algorithmic information to Shannon information, and
also links both to capacities arising in statistical learning theory. After
introducing the measure, we show that it provides a non-universal analog of
Kolmogorov complexity. We then apply it to derive basic capacities in
statistical learning theory: empirical VC-entropy and empirical Rademacher
complexity. A nice byproduct of our approach is an interpretation of the
explanatory power of a learning algorithm in terms of the number of hypotheses
it falsifies, counted in two different ways for the two capacities. We also
discuss how effective information relates to information gain, Shannon and
mutual information.
| [
"['David Balduzzi']",
"David Balduzzi"
] |
cs.LG q-bio.QM | 10.1371/journal.pone.0034796 | 1110.3717 | null | null | http://arxiv.org/abs/1110.3717v2 | 2011-10-18T06:29:11Z | 2011-10-17T16:13:15Z | A critical evaluation of network and pathway based classifiers for
outcome prediction in breast cancer | Recently, several classifiers that combine primary tumor data, like gene
expression data, and secondary data sources, such as protein-protein
interaction networks, have been proposed for predicting outcome in breast
cancer. In these approaches, new composite features are typically constructed
by aggregating the expression levels of several genes. The secondary data
sources are employed to guide this aggregation. Although many studies claim
that these approaches improve classification performance over single gene
classifiers, the gain in performance is difficult to assess. This stems mainly
from the fact that different breast cancer data sets and validation procedures
are employed to assess the performance. Here we address these issues by
employing a large cohort of six breast cancer data sets as benchmark set and by
performing an unbiased evaluation of the classification accuracies of the
different approaches. Contrary to previous claims, we find that composite
feature classifiers do not outperform simple single gene classifiers. We
investigate the effect of (1) the number of selected features; (2) the specific
gene set from which features are selected; (3) the size of the training set and
(4) the heterogeneity of the data set on the performance of composite feature
and single gene classifiers. Strikingly, we find that randomization of
secondary data sources, which destroys all biological information in these
sources, does not result in a deterioration in performance of composite feature
classifiers. Finally, we show that when a proper correction for gene set size
is performed, the stability of single gene sets is similar to the stability of
composite feature sets. Based on these results there is currently no reason to
prefer prognostic classifiers based on composite features over single gene
classifiers for predicting outcome in breast cancer.
| [
"['C. Staiger' 'S. Cadot' 'R. Kooter' 'M. Dittrich' 'T. Mueller'\n 'G. W. Klau' 'L. F. A. Wessels']",
"C. Staiger, S. Cadot, R. Kooter, M. Dittrich, T. Mueller, G. W. Klau,\n L. F. A. Wessels"
] |
cs.LG cs.CV cs.DB stat.ML | null | 1110.3741 | null | null | http://arxiv.org/pdf/1110.3741v3 | 2013-01-07T17:18:42Z | 2011-10-17T17:48:22Z | Multi-criteria Anomaly Detection using Pareto Depth Analysis | We consider the problem of identifying patterns in a data set that exhibit
anomalous behavior, often referred to as anomaly detection. In most anomaly
detection algorithms, the dissimilarity between data samples is calculated by a
single criterion, such as Euclidean distance. However, in many cases there may
not exist a single dissimilarity measure that captures all possible anomalous
patterns. In such a case, multiple criteria can be defined, and one can test
for anomalies by scalarizing the multiple criteria using a linear combination
of them. If the importance of the different criteria is not known in advance,
the algorithm may need to be executed multiple times with different choices of
weights in the linear combination. In this paper, we introduce a novel
non-parametric multi-criteria anomaly detection method using Pareto depth
analysis (PDA). PDA uses the concept of Pareto optimality to detect anomalies
under multiple criteria without having to run an algorithm multiple times with
different choices of weights. The proposed PDA approach scales linearly in the
number of criteria and is provably better than linear combinations of the
criteria.
| [
"Ko-Jen Hsiao, Kevin S. Xu, Jeff Calder, and Alfred O. Hero III",
"['Ko-Jen Hsiao' 'Kevin S. Xu' 'Jeff Calder' 'Alfred O. Hero III']"
] |
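For readers new to Pareto optimality: with several dissimilarity criteria (to be minimized, say), a point lies on the first Pareto front if no other point is at least as small in all criteria and strictly smaller in one. A naive O(n^2) sketch follows; the paper's PDA method and its notion of Pareto depth are more involved than this generic check.

```python
import numpy as np

def first_pareto_front(points):
    """Return indices of points on the first Pareto front (minimization).

    points: (n, m) array, one row per item, one column per criterion.
    Naive O(n^2 m) check; illustrative only.
    """
    n = len(points)
    on_front = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(points[j] <= points[i]) \
                      and np.any(points[j] < points[i]):
                on_front[i] = False           # i is dominated by j
                break
    return np.flatnonzero(on_front)

# Toy example with two criteria.
pts = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0], [3.0, 3.0]])
print(first_pareto_front(pts))                # [0 1 2]; [3, 3] is dominated
```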
cs.LG cs.IR | null | 1110.3917 | null | null | http://arxiv.org/pdf/1110.3917v1 | 2011-10-18T09:17:29Z | 2011-10-18T09:17:29Z | How to Evaluate Dimensionality Reduction? - Improving the Co-ranking
Matrix | The growing number of dimensionality reduction methods available for data
visualization has recently inspired the development of quality assessment
measures, in order to evaluate the resulting low-dimensional representation
independently from a method's inherent criteria. Several (existing) quality
measures can be (re)formulated based on the so-called co-ranking matrix, which
subsumes all rank errors (i.e. differences between the ranking of distances
from every point to all others, comparing the low-dimensional representation to
the original data). The measures are often based on the partitioning of the
co-ranking matrix into 4 submatrices, divided at the K-th row and column,
calculating a weighted combination of the sums of each submatrix. Hence, the
evaluation process typically involves plotting a graph over several (or even
all possible) settings of the parameter K. Considering simple artificial
examples, we argue that this parameter controls two notions at once, that need
not necessarily be combined, and that the rectangular shape of submatrices is
disadvantageous for an intuitive interpretation of the parameter. We debate
that quality measures, as general and flexible evaluation tools, should have
parameters with a direct and intuitive interpretation as to which specific
error types are tolerated or penalized. Therefore, we propose to replace K with
two parameters to control these notions separately, and introduce a differently
shaped weighting on the co-ranking matrix. The two new parameters can then
directly be interpreted as a threshold up to which rank errors are tolerated,
and a threshold up to which the rank-distances are significant for the
evaluation. Moreover, we propose a color representation of local quality to
visually support the evaluation process for a given mapping, where every point
in the mapping is colored according to its local contribution to the overall
quality.
| [
"['Wouter Lueks' 'Bassam Mokbel' 'Michael Biehl' 'Barbara Hammer']",
"Wouter Lueks, Bassam Mokbel, Michael Biehl, Barbara Hammer"
] |
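As a concrete companion to the abstract above, here is a direct (quadratic-time, naive tie handling) sketch of computing the co-ranking matrix itself: entry (k, l) counts point pairs whose distance rank is k in the original space and l in the low-dimensional mapping. The proposed reweighting and the two new parameters are not reproduced here.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def coranking_matrix(X_high, X_low):
    """Co-ranking matrix Q with Q[k-1, l-1] = number of pairs (i, j), i != j,
    whose rank of j among i's neighbours is k in the original data and
    l in the low-dimensional representation."""
    def ranks(X):
        D = squareform(pdist(X))
        np.fill_diagonal(D, np.inf)           # a point is not its own neighbour
        return D.argsort(axis=1).argsort(axis=1)  # rank of each j for each i
    Rh, Rl = ranks(X_high), ranks(X_low)
    n = len(X_high)
    Q = np.zeros((n - 1, n - 1), dtype=int)
    for i in range(n):
        for j in range(n):
            if i != j:
                Q[Rh[i, j], Rl[i, j]] += 1
    return Q
```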
cs.LG | null | 1110.4181 | null | null | http://arxiv.org/pdf/1110.4181v1 | 2011-10-19T04:42:33Z | 2011-10-19T04:42:33Z | Injecting External Solutions Into CMA-ES | This report considers how to inject external candidate solutions into the
CMA-ES algorithm. The injected solutions might stem from a gradient or a Newton
step, a surrogate model optimizer or any other oracle or search mechanism. They
can also be the result of a repair mechanism, for example to render infeasible
solutions feasible. Only small modifications to the CMA-ES are necessary to
turn injection into a reliable and effective method: too long steps need to be
tightly renormalized. The main objective of this report is to reveal this
simple mechanism. Depending on the source of the injected solutions,
interesting variants of CMA-ES arise. When the best-ever solution is always
(re-)injected, an elitist variant of CMA-ES with weighted multi-recombination
arises. When \emph{all} solutions are injected from an \emph{external} source,
the resulting algorithm might be viewed as \emph{adaptive encoding} with
step-size control. In first experiments, injected solutions of very good
quality lead to a convergence speed twice as fast as on the (simple) sphere
function without injection. This means that we observe an impressive speed-up
on otherwise difficult to solve functions. Single bad injected solutions on the
other hand do no significant harm.
| [
"Nikolaus Hansen (INRIA Saclay - Ile de France, LRI, MSR - INRIA)",
"['Nikolaus Hansen']"
] |
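The "too long steps need to be tightly renormalized" mechanism from the report above can be sketched as follows: before an injected solution enters the CMA-ES update, clip its step from the current mean in Mahalanobis length. The clipping threshold `c_max` and the plain linear solve below are illustrative assumptions, not the report's exact recipe.

```python
import numpy as np

def clip_injected_step(x_inj, mean, sigma, C, c_max=2.0):
    """Renormalize an injected solution so that its step from the
    distribution mean has Mahalanobis length at most c_max * sqrt(n).

    Sketch only: threshold and solver choice are illustrative assumptions.
    """
    n = len(mean)
    y = (x_inj - mean) / sigma                # step in sigma-normalized coords
    maha = np.sqrt(y @ np.linalg.solve(C, y)) # Mahalanobis length under C
    limit = c_max * np.sqrt(n)
    if maha > limit:
        y *= limit / maha                     # tight renormalization
    return mean + sigma * y
```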
cs.LG stat.ML | null | 1110.4198 | null | null | http://arxiv.org/pdf/1110.4198v3 | 2013-07-12T03:28:17Z | 2011-10-19T07:34:19Z | A Reliable Effective Terascale Linear Learning System | We present a system and a set of techniques for learning linear predictors
with convex losses on terascale datasets, with trillions of features (the
number of features here refers to the number of non-zero entries in the data
matrix), billions of training examples and millions of parameters in an hour
using a cluster of 1000 machines. Individually none of the component techniques
are new, but the careful synthesis required to obtain an efficient
implementation is. The result is, to the best of our knowledge, the most scalable and
efficient linear learning system reported in the literature (as of 2011 when
our experiments were conducted). We describe and thoroughly evaluate the
components of the system, showing the importance of the various design choices.
| [
"['Alekh Agarwal' 'Olivier Chapelle' 'Miroslav Dudik' 'John Langford']",
"Alekh Agarwal, Olivier Chapelle, Miroslav Dudik, John Langford"
] |
cs.LG stat.ML | null | 1110.4322 | null | null | http://arxiv.org/pdf/1110.4322v3 | 2012-02-14T16:14:39Z | 2011-10-19T15:57:27Z | An Optimal Algorithm for Linear Bandits | We provide the first algorithm for online bandit linear optimization whose
regret after $T$ rounds is of order $\sqrt{Td \ln N}$ on any finite class $X$ of $N$
actions in $d$ dimensions, and of order $d\sqrt{T}$ (up to log factors) when $X$ is
infinite. These bounds are not improvable in general. The basic idea utilizes
tools from convex geometry to construct what is essentially an optimal
exploration basis. We also present an application to a model of linear bandits
with expert advice. Interestingly, these results show that bandit linear
optimization with expert advice in d dimensions is no more difficult (in terms
of the achievable regret) than the online d-armed bandit problem with expert
advice (where EXP4 is optimal).
| [
"['Nicolò Cesa-Bianchi' 'Sham Kakade']",
"Nicol\\`o Cesa-Bianchi and Sham Kakade"
] |
cs.GT cs.LG | 10.1137/110852462 | 1110.4412 | null | null | http://arxiv.org/abs/1110.4412v1 | 2011-10-19T22:30:03Z | 2011-10-19T22:30:03Z | Aspiration Learning in Coordination Games | We consider the problem of distributed convergence to efficient outcomes in
coordination games through dynamics based on aspiration learning. Under
aspiration learning, a player continues to play an action as long as the
rewards received exceed a specified aspiration level. Here, the aspiration
level is a fading memory average of past rewards, and these levels also are
subject to occasional random perturbations. A player becomes dissatisfied
whenever a received reward is less than the aspiration level, in which case the
player experiments with a probability proportional to the degree of
dissatisfaction. Our first contribution is the characterization of the
asymptotic behavior of the induced Markov chain of the iterated process in
terms of an equivalent finite-state Markov chain. We then characterize
explicitly the behavior of the proposed aspiration learning in a generalized
version of coordination games, examples of which include network formation and
common-pool games. In particular, we show that in generic coordination games
the frequency at which an efficient action profile is played can be made
arbitrarily large. Although convergence to efficient outcomes is desirable, in
several coordination games, such as common-pool games, attainability of fair
outcomes, i.e., sequences of plays at which players experience highly rewarding
returns with the same frequency, might also be of special interest. To this
end, we demonstrate through analysis and simulations that aspiration learning
also establishes fair outcomes in all symmetric coordination games, including
common-pool games.
| [
"['Georgios C. Chasparis' 'Ari Arapostathis' 'Jeff S. Shamma']",
"Georgios C. Chasparis, Ari Arapostathis and Jeff S. Shamma"
] |
cs.LG | null | 1110.4416 | null | null | http://arxiv.org/pdf/1110.4416v1 | 2011-10-20T00:56:53Z | 2011-10-20T00:56:53Z | Data-dependent kernels in nearly-linear time | We propose a method to efficiently construct data-dependent kernels which can
make use of large quantities of (unlabeled) data. Our construction makes an
approximation in the standard construction of semi-supervised kernels in
Sindhwani et al. 2005. In typical cases these kernels can be computed in
nearly-linear time (in the amount of data), improving on the cubic time of the
standard construction, enabling large scale semi-supervised learning in a
variety of contexts. The methods are validated on semi-supervised and
unsupervised problems on data sets containing up to 64,000 sample points.
| [
"['Guy Lever' 'Tom Diethe' 'John Shawe-Taylor']",
"Guy Lever, Tom Diethe and John Shawe-Taylor"
] |
cs.LG | 10.1117/12.893811 | 1110.4481 | null | null | http://arxiv.org/abs/1110.4481v1 | 2011-10-20T09:50:58Z | 2011-10-20T09:50:58Z | Learning Hierarchical and Topographic Dictionaries with Structured
Sparsity | Recent work in signal processing and statistics have focused on defining new
regularization functions, which not only induce sparsity of the solution, but
also take into account the structure of the problem. We present in this paper a
class of convex penalties introduced in the machine learning community, which
take the form of a sum of $\ell_2$- and $\ell_\infty$-norms over groups of variables.
They extend the classical group-sparsity regularization in the sense that the
groups possibly overlap, allowing more flexibility in the group design. We
review efficient optimization methods to deal with the corresponding inverse
problems, and their application to the problem of learning dictionaries of
natural image patches: On the one hand, dictionary learning has indeed proven
effective for various signal processing tasks. On the other hand, structured
sparsity provides a natural framework for modeling dependencies between
dictionary elements. We thus consider a structured sparse regularization to
learn dictionaries embedded in a particular structure, for instance a tree or a
two-dimensional grid. In the latter case, the results we obtain are similar to
the dictionaries produced by topographic independent component analysis.
| [
"Julien Mairal, Rodolphe Jenatton (LIENS, INRIA Paris - Rocquencourt),\n Guillaume Obozinski (LIENS, INRIA Paris - Rocquencourt), Francis Bach (LIENS,\n INRIA Paris - Rocquencourt)",
"['Julien Mairal' 'Rodolphe Jenatton' 'Guillaume Obozinski' 'Francis Bach']"
] |
cs.LG stat.ML | null | 1110.4713 | null | null | http://arxiv.org/pdf/1110.4713v1 | 2011-10-21T07:29:36Z | 2011-10-21T07:29:36Z | Kernel Topic Models | Latent Dirichlet Allocation models discrete data as a mixture of discrete
distributions, using Dirichlet beliefs over the mixture weights. We study a
variation of this concept, in which the documents' mixture weight beliefs are
replaced with squashed Gaussian distributions. This allows documents to be
associated with elements of a Hilbert space, admitting kernel topic models
(KTM), modelling temporal, spatial, hierarchical, social and other structure
between documents. The main challenge is efficient approximate inference on the
latent Gaussian. We present an approximate algorithm cast around a Laplace
approximation in a transformed basis. The KTM can also be interpreted as a type
of Gaussian process latent variable model, or as a topic model conditional on
document features, uncovering links between earlier work in these areas.
| [
"Philipp Hennig, David Stern, Ralf Herbrich and Thore Graepel",
"['Philipp Hennig' 'David Stern' 'Ralf Herbrich' 'Thore Graepel']"
] |
q-fin.ST cs.LG physics.soc-ph | 10.1371/journal.pone.0040014 | 1110.4784 | null | null | http://arxiv.org/abs/1110.4784v3 | 2012-06-04T15:42:35Z | 2011-10-21T13:15:59Z | Web search queries can predict stock market volumes | We live in a computerized and networked society where many of our actions
leave a digital trace and affect other people's actions. This has led to the
emergence of a new data-driven research field: mathematical methods of computer
science, statistical physics and sociometry provide insights on a wide range of
disciplines ranging from social science to human mobility. A recent important
discovery is that query volumes (i.e., the number of requests submitted by
users to search engines on the www) can be used to track and, in some cases, to
anticipate the dynamics of social phenomena. Successful examples include
unemployment levels, car and home sales, and epidemic spreading. A few recent
works have applied this approach to stock prices and market sentiment. However, it
remains unclear if trends in financial markets can be anticipated by the
collective wisdom of on-line users on the web. Here we show that trading
volumes of stocks traded in NASDAQ-100 are correlated with the volumes of
queries related to the same stocks. In particular, query volumes anticipate in
many cases peaks of trading by one day or more. Our analysis is carried out on
a unique dataset of queries, submitted to an important web search engine, which
enables us to also investigate user behavior. We show that the query volume
dynamics emerges from the collective but seemingly uncoordinated activity of
many users. These findings contribute to the debate on the identification of
early warnings of financial systemic risk, based on the activity of users of
the www.
| [
"Ilaria Bordino, Stefano Battiston, Guido Caldarelli, Matthieu\n Cristelli, Antti Ukkonen, Ingmar Weber",
"['Ilaria Bordino' 'Stefano Battiston' 'Guido Caldarelli'\n 'Matthieu Cristelli' 'Antti Ukkonen' 'Ingmar Weber']"
] |
cs.LG | null | 1110.5051 | null | null | http://arxiv.org/pdf/1110.5051v1 | 2011-10-23T14:41:21Z | 2011-10-23T14:41:21Z | Wikipedia Edit Number Prediction based on Temporal Dynamics Only | In this paper, we describe our approach to the Wikipedia Participation
Challenge which aims to predict the number of edits a Wikipedia editor will
make in the next 5 months. The best submission from our team, "zeditor",
achieved 41.7% improvement over WMF's baseline predictive model and the final
rank of 3rd place among 96 teams. An interesting characteristic of our approach
is that only temporal dynamics features (i.e., how the number of edits changes
in recent periods, etc.) are used in a self-supervised learning framework,
which makes it easy to generalise to other application domains.
| [
"Dell Zhang",
"['Dell Zhang']"
] |
stat.ML cs.LG stat.CO | null | 1110.5383 | null | null | http://arxiv.org/pdf/1110.5383v2 | 2012-02-09T13:54:17Z | 2011-10-24T23:47:21Z | Quilting Stochastic Kronecker Product Graphs to Generate Multiplicative
Attribute Graphs | We describe the first sub-quadratic sampling algorithm for the Multiplicative
Attribute Graph Model (MAGM) of Kim and Leskovec (2010). We exploit the close
connection between MAGM and the Kronecker Product Graph Model (KPGM) of
Leskovec et al. (2010), and show that to sample a graph from a MAGM it suffices
to sample a small number of KPGM graphs and \emph{quilt} them together. Under a
restricted set of technical conditions our algorithm runs in $O((\log_2(n))^3
|E|)$ time, where $n$ is the number of nodes and $|E|$ is the number of edges
in the sampled graph. We demonstrate the scalability of our algorithm via
extensive empirical evaluation; we can sample a MAGM graph with 8 million nodes
and 20 billion edges in under 6 hours.
| [
"Hyokun Yun, S. V. N. Vishwanathan",
"['Hyokun Yun' 'S. V. N. Vishwanathan']"
] |
math.OC cs.LG | null | 1110.5447 | null | null | http://arxiv.org/pdf/1110.5447v1 | 2011-10-25T09:01:15Z | 2011-10-25T09:01:15Z | Optimal discovery with probabilistic expert advice | We consider an original problem that arises from the issue of security
analysis of a power system and that we name optimal discovery with
probabilistic expert advice. We address it with an algorithm based on the
optimistic paradigm and the Good-Turing missing mass estimator. We show that
this strategy uniformly attains the optimal discovery rate in a macroscopic
limit sense, under some assumptions on the probabilistic experts. We also
provide numerical experiments suggesting that this optimal behavior may still
hold under weaker assumptions.
| [
"['Sébastien Bubeck' 'Damien Ernst' 'Aurélien Garivier']",
"S\\'ebastien Bubeck, Damien Ernst, Aur\\'elien Garivier"
] |
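The Good-Turing missing mass estimator used above is simple to state: estimate the total probability of outcomes not yet observed by the fraction of observations that occurred exactly once. A minimal sketch:

```python
from collections import Counter

def good_turing_missing_mass(samples):
    """Good-Turing estimate of the probability mass of unseen outcomes:
    (number of outcomes observed exactly once) / (number of samples)."""
    counts = Counter(samples)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / len(samples)

print(good_turing_missing_mass(list("abracadabra")))  # 'c','d' are singletons -> 2/11
```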
cs.AI cs.LG | null | 1110.5667 | null | null | http://arxiv.org/pdf/1110.5667v1 | 2011-10-25T21:06:39Z | 2011-10-25T21:06:39Z | Inducing Probabilistic Programs by Bayesian Program Merging | This report outlines an approach to learning generative models from data. We
express models as probabilistic programs, which allows us to capture abstract
patterns within the examples. By choosing our language for programs to be an
extension of the algebraic data type of the examples, we can begin with a
program that generates all and only the examples. We then introduce greater
abstraction, and hence generalization, incrementally to the extent that it
improves the posterior probability of the examples given the program. Motivated
by previous approaches to model merging and program induction, we search for
such explanatory abstractions using program transformations. We consider two
types of transformation: Abstraction merges common subexpressions within a
program into new functions (a form of anti-unification). Deargumentation
simplifies functions by reducing the number of arguments. We demonstrate that
this approach finds key patterns in the domain of nested lists, including
parameterized sub-functions and stochastic recursion.
| [
"Irvin Hwang, Andreas Stuhlm\\\"uller, Noah D. Goodman",
"['Irvin Hwang' 'Andreas Stuhlmüller' 'Noah D. Goodman']"
] |
astro-ph.IM astro-ph.CO cs.LG | null | 1110.5688 | null | null | http://arxiv.org/pdf/1110.5688v1 | 2011-10-26T00:22:36Z | 2011-10-26T00:22:36Z | Discussion on "Techniques for Massive-Data Machine Learning in
Astronomy" by A. Gray | Astronomy is increasingly encountering two fundamental truths: (1) The field
is faced with the task of extracting useful information from extremely large,
complex, and high dimensional datasets; (2) The techniques of astroinformatics
and astrostatistics are the only way to make this tractable, and bring the
required level of sophistication to the analysis. Thus, an approach which
provides these tools in a way that scales to these datasets is not just
desirable, it is vital. The expertise required spans not just astronomy, but
also computer science, statistics, and informatics. As a computer scientist and
expert in machine learning, Alex's contribution of expertise and a large number
of fast algorithms designed to scale to large datasets is extremely welcome.
We focus in this discussion on the questions raised by the practical
application of these algorithms to real astronomical datasets. That is, what is
needed to maximally leverage their potential to improve the science return?
This is not a trivial task. While computing and statistical expertise are
required, so is astronomical expertise. Precedent has shown that, to date, the
collaborations most productive in producing astronomical science results (e.g.,
the Sloan Digital Sky Survey), have either involved astronomers expert in
computer science and/or statistics, or astronomers involved in close, long-term
collaborations with experts in those fields. This does not mean that the
astronomers are giving the most important input, but simply that their input is
crucial in guiding the effort in the most fruitful directions, and coping with
the issues raised by real data. Thus, the tools must be usable and
understandable by those whose primary expertise is not computing or statistics,
even though they may have quite extensive knowledge of those fields.
| [
"['Nicholas M. Ball']",
"Nicholas M. Ball (Herzberg Institute of Astrophysics, Victoria, BC,\n Canada)"
] |
math.ST cs.LG stat.ML stat.TH | 10.1214/13-AOS1101 | 1110.6084 | null | null | http://arxiv.org/abs/1110.6084v3 | 2013-05-24T09:35:28Z | 2011-10-27T14:09:12Z | The multi-armed bandit problem with covariates | We consider a multi-armed bandit problem in a setting where each arm produces
a noisy reward realization which depends on an observable random covariate. As
opposed to the traditional static multi-armed bandit problem, this setting
allows for dynamically changing rewards that better describe applications where
side information is available. We adopt a nonparametric model where the
expected rewards are smooth functions of the covariate and where the hardness
of the problem is captured by a margin parameter. To maximize the expected
cumulative reward, we introduce a policy called Adaptively Binned Successive
Elimination (abse) that adaptively decomposes the global problem into suitably
"localized" static bandit problems. This policy constructs an adaptive
partition using a variant of the Successive Elimination (se) policy. Our
results include sharper regret bounds for the se policy in a static bandit
problem and minimax optimal regret bounds for the abse policy in the dynamic
problem.
| [
"['Vianney Perchet' 'Philippe Rigollet']",
"Vianney Perchet, Philippe Rigollet"
] |
cs.LG | null | 1110.6287 | null | null | http://arxiv.org/pdf/1110.6287v1 | 2011-10-28T10:20:25Z | 2011-10-28T10:20:25Z | Deciding of HMM parameters based on number of critical points for
gesture recognition from motion capture data | This paper presents a method of choosing number of states of a HMM based on
number of critical points of the motion capture data. The choice of Hidden
Markov Model (HMM) parameters is crucial for the recognizer's performance as it is
the first step of the training and cannot be corrected automatically within
HMM. In this article we define a predictor of the number of states based on the
number of critical points of the sequence and test its effectiveness on sample data.
| [
"['Michał Cholewa' 'Przemysław Głomb']",
"Micha{\\l} Cholewa and Przemys{\\l}aw G{\\l}omb"
] |
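A minimal sketch of the kind of predictor the abstract describes: count the local extrema (critical points) of a one-dimensional motion channel and size the HMM from that count. The exact mapping from count to number of states is an illustrative assumption, not the paper's formula.

```python
import numpy as np

def count_critical_points(signal):
    """Count interior local extrema of a 1-D sequence (sign changes of the
    first difference); a crude notion of 'critical points'."""
    d = np.sign(np.diff(signal))
    d = d[d != 0]                             # drop flat segments
    return int(np.sum(d[1:] != d[:-1]))

def suggest_num_states(signal, offset=1):
    # Illustrative assumption: one state per critical point, plus an offset.
    return count_critical_points(signal) + offset

t = np.linspace(0, 4 * np.pi, 200)
print(suggest_num_states(np.sin(t)))          # a two-period sine has 4 extrema
```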
cs.LG | null | 1110.6755 | null | null | http://arxiv.org/pdf/1110.6755v2 | 2012-01-30T15:46:58Z | 2011-10-31T11:36:49Z | PAC-Bayes-Bernstein Inequality for Martingales and its Application to
Multiarmed Bandits | We develop a new tool for data-dependent analysis of the
exploration-exploitation trade-off in learning under limited feedback. Our tool
is based on two main ingredients. The first ingredient is a new concentration
inequality that makes it possible to control the concentration of weighted
averages of multiple (possibly uncountably many) simultaneously evolving and
interdependent martingales. The second ingredient is an application of this
inequality to the exploration-exploitation trade-off via importance weighted
sampling. We apply the new tool to the stochastic multiarmed bandit problem,
however, the main importance of this paper is the development and understanding
of the new tool rather than improvement of existing algorithms for stochastic
multiarmed bandits. In the follow-up work we demonstrate that the new tool can
improve over state-of-the-art in structurally richer problems, such as
stochastic multiarmed bandits with side information (Seldin et al., 2011a).
| [
"Yevgeny Seldin, Nicol\\`o Cesa-Bianchi, Peter Auer, Fran\\c{c}ois\n Laviolette, John Shawe-Taylor",
"['Yevgeny Seldin' 'Nicolò Cesa-Bianchi' 'Peter Auer' 'François Laviolette'\n 'John Shawe-Taylor']"
] |
cs.LG cs.IT math.IT stat.ML | null | 1110.6886 | null | null | http://arxiv.org/pdf/1110.6886v3 | 2012-07-30T14:02:53Z | 2011-10-31T18:22:24Z | PAC-Bayesian Inequalities for Martingales | We present a set of high-probability inequalities that control the
concentration of weighted averages of multiple (possibly uncountably many)
simultaneously evolving and interdependent martingales. Our results extend the
PAC-Bayesian analysis in learning theory from the i.i.d. setting to martingales
opening the way for its application to importance weighted sampling,
reinforcement learning, and other interactive learning domains, as well as many
other domains in probability theory and statistics, where martingales are
encountered.
We also present a comparison inequality that bounds the expectation of a
convex function of a martingale difference sequence shifted to the [0,1]
interval by the expectation of the same function of independent Bernoulli
variables. This inequality is applied to derive a tighter analog of
Hoeffding-Azuma's inequality.
| [
"['Yevgeny Seldin' 'François Laviolette' 'Nicolò Cesa-Bianchi'\n 'John Shawe-Taylor' 'Peter Auer']",
"Yevgeny Seldin, Fran\\c{c}ois Laviolette, Nicol\\`o Cesa-Bianchi, John\n Shawe-Taylor, Peter Auer"
] |
cs.SD cs.CR cs.LG eess.AS | 10.5120/3864-5394 | 1111.0024 | null | null | http://arxiv.org/abs/1111.0024v1 | 2011-10-31T20:31:08Z | 2011-10-31T20:31:08Z | Text-Independent Speaker Recognition for Low SNR Environments with
Encryption | Recognition systems are commonly designed to authenticate users at the access
control levels of a system. A number of voice recognition methods have been
developed using a pitch estimation process, which is very vulnerable in low
Signal to Noise Ratio (SNR) environments; thus, these programs fail to provide
the desired level of accuracy and robustness. Also, most text-independent
speaker recognition programs are incapable of coping with unauthorized attempts
to gain access by tampering with the samples or reference database. The
proposed text-independent voice recognition system makes use of multilevel
cryptography to preserve data integrity while in transit or storage. Encryption
and decryption follow a transform based approach layered with pseudorandom
noise addition whereas for pitch detection, a modified version of the
autocorrelation pitch extraction algorithm is used. The experimental results
show that the proposed algorithm can decrypt the signal under test with
exponentially reducing Mean Square Error over an increasing range of SNR.
Further, it outperforms the conventional algorithms in actual identification
tasks even in noisy environments. The recognition rate thus obtained using the
proposed method is compared with other conventional methods used for speaker
identification.
| [
"['Aman Chadha' 'Divya Jyoti' 'M. Mani Roja']",
"Aman Chadha, Divya Jyoti, M. Mani Roja"
] |
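The modified autocorrelation pitch extractor is not spelled out in the abstract; for orientation, here is the plain textbook autocorrelation method it builds on: autocorrelate a frame and take the lag of the strongest peak in a plausible pitch band as the period. The 50-500 Hz search band is an assumed typical speech range.

```python
import numpy as np

def autocorr_pitch(frame, fs, fmin=50.0, fmax=500.0):
    """Estimate the pitch (Hz) of one frame via the autocorrelation method.

    Plain textbook version; the paper uses a modified variant of this idea.
    fmin/fmax bound the search to a typical speech range (an assumption).
    """
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))      # strongest periodicity in band
    return fs / lag

fs = 16000
t = np.arange(int(0.03 * fs)) / fs            # one 30 ms frame
print(autocorr_pitch(np.sin(2 * np.pi * 220 * t), fs))  # ~220 Hz
```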
math.OC cs.IT cs.LG cs.SI math.IT physics.soc-ph | 10.1109/TSP.2012.2198470 | 1111.0034 | null | null | http://arxiv.org/abs/1111.0034v3 | 2012-05-12T23:35:40Z | 2011-10-31T21:16:40Z | Diffusion Adaptation Strategies for Distributed Optimization and
Learning over Networks | We propose an adaptive diffusion mechanism to optimize a global cost function
in a distributed manner over a network of nodes. The cost function is assumed
to consist of a collection of individual components. Diffusion adaptation
allows the nodes to cooperate and diffuse information in real-time; it also
helps alleviate the effects of stochastic gradient noise and measurement noise
through a continuous learning process. We analyze the mean-square-error
performance of the algorithm in some detail, including its transient and
steady-state behavior. We also apply the diffusion algorithm to two problems:
distributed estimation with sparse parameters and distributed localization.
Compared to well-studied incremental methods, diffusion methods do not require
the use of a cyclic path over the nodes and are robust to node and link
failure. Diffusion methods also endow networks with adaptation abilities that
enable the individual nodes to continue learning even when the cost function
changes with time. Examples involving such dynamic cost functions with moving
targets are common in the context of biological networks.
| [
"['Jianshu Chen' 'Ali H. Sayed']",
"Jianshu Chen, Ali H. Sayed"
] |
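A minimal sketch of a diffusion strategy of the kind analyzed above, in the common adapt-then-combine LMS form: each node takes a local stochastic-gradient step, then averages the intermediate estimates of its neighbors. The combination matrix and step size below are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def diffusion_lms_step(W, U, d, A, mu=0.01):
    """One adapt-then-combine diffusion LMS iteration.

    W: (N, M) current estimates, one row per node.
    U: (N, M) current regressors; d: (N,) desired responses.
    A: (N, N) combination matrix; A[l, k] weighs neighbor l's
       intermediate estimate at node k.
    """
    err = d - np.einsum("nm,nm->n", U, W)     # local prediction errors
    Psi = W + mu * err[:, None] * U           # adapt: local LMS step
    return A.T @ Psi                          # combine: neighborhood average

# Toy network of 3 fully connected nodes estimating a common w*.
rng = np.random.default_rng(0)
w_star = rng.standard_normal(4)
W = np.zeros((3, 4))
A = np.full((3, 3), 1 / 3)                    # uniform combination weights
for _ in range(2000):
    U = rng.standard_normal((3, 4))
    d = U @ w_star + 0.01 * rng.standard_normal(3)
    W = diffusion_lms_step(W, U, d, A)
print(np.linalg.norm(W - w_star, axis=1))     # all nodes approach w*
```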
cs.LG stat.ML | null | 1111.0352 | null | null | http://arxiv.org/pdf/1111.0352v2 | 2012-06-14T15:05:55Z | 2011-11-02T00:09:18Z | Revisiting k-means: New Algorithms via Bayesian Nonparametrics | Bayesian models offer great flexibility for clustering
applications---Bayesian nonparametrics can be used for modeling infinite
mixtures, and hierarchical Bayesian models can be utilized for sharing clusters
across multiple data sets. For the most part, such flexibility is lacking in
classical clustering methods such as k-means. In this paper, we revisit the
k-means clustering algorithm from a Bayesian nonparametric viewpoint. Inspired
by the asymptotic connection between k-means and mixtures of Gaussians, we show
that a Gibbs sampling algorithm for the Dirichlet process mixture approaches a
hard clustering algorithm in the limit, and further that the resulting
algorithm monotonically minimizes an elegant underlying k-means-like clustering
objective that includes a penalty for the number of clusters. We generalize
this analysis to the case of clustering multiple data sets through a similar
asymptotic argument with the hierarchical Dirichlet process. We also discuss
further extensions that highlight the benefits of our analysis: i) a spectral
relaxation involving thresholded eigenvectors, and ii) a normalized cut graph
clustering algorithm that does not fix the number of clusters in the graph.
| [
"Brian Kulis and Michael I. Jordan",
"['Brian Kulis' 'Michael I. Jordan']"
] |
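The hard-clustering limit described above is often known as DP-means, and its core loop is k-means with one twist: a point whose squared distance to every current center exceeds a penalty lambda spawns a new cluster. A compact sketch (simplified to spawn at most one new cluster per pass):

```python
import numpy as np

def dp_means(X, lam, n_iter=50):
    """DP-means sketch: k-means-like updates plus a penalty lam that
    controls when a far-away point opens a new cluster.

    Simplified version of the small-variance-asymptotics algorithm the
    paper derives; initialization and stopping are deliberately simple.
    """
    centers = [X.mean(axis=0)]                 # start with one global cluster
    for _ in range(n_iter):
        C = np.asarray(centers)
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        far = np.flatnonzero(d2.min(axis=1) > lam)
        if far.size:                           # spawn one new cluster per pass
            labels[far[0]] = len(centers)
            centers.append(X[far[0]].copy())
        centers = [X[labels == k].mean(axis=0) # recompute non-empty means
                   for k in range(len(centers)) if np.any(labels == k)]
    return np.asarray(centers)

rng = np.random.default_rng(0)
X = np.vstack([rng.standard_normal((50, 2)) + c for c in ([0, 0], [6, 6], [0, 6])])
print(len(dp_means(X, lam=9.0)))               # typically recovers ~3 clusters
```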
cs.LG cs.AI | null | 1111.0432 | null | null | http://arxiv.org/pdf/1111.0432v2 | 2011-11-03T13:33:27Z | 2011-11-02T09:24:26Z | Approximate Stochastic Subgradient Estimation Training for Support
Vector Machines | Subgradient algorithms for training support vector machines have been quite
successful for solving large-scale and online learning problems. However, they
have been restricted to linear kernels and strongly convex formulations. This
paper describes efficient subgradient approaches without such limitations. Our
approaches make use of randomized low-dimensional approximations to nonlinear
kernels, and minimization of a reduced primal formulation using an algorithm
based on robust stochastic approximation, which does not require strong
convexity. Experiments illustrate that our approaches produce solutions of
comparable prediction accuracy with the solutions acquired from existing SVM
solvers, but often in much shorter time. We also suggest efficient prediction
schemes that depend only on the dimension of kernel approximation, not on the
number of support vectors.
| [
"Sangkyun Lee and Stephen J. Wright",
"['Sangkyun Lee' 'Stephen J. Wright']"
] |
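One standard instance of "randomized low-dimensional approximations to nonlinear kernels" is random Fourier features for the Gaussian kernel; the paper's construction may differ, so treat this as a generic sketch. After the feature map, a linear SVM is trained by averaged subgradient descent on the hinge loss, in the spirit of robust stochastic approximation (no strong convexity assumed).

```python
import numpy as np

def rff_map(X, W, b):
    """Random Fourier feature map approximating a Gaussian kernel."""
    return np.sqrt(2.0 / W.shape[1]) * np.cos(X @ W + b)

def train_hinge_sgd(Z, y, n_epochs=5, eta=0.1, seed=0):
    """Averaged subgradient descent on the (non-strongly-convex) hinge loss,
    in the spirit of robust stochastic approximation. Illustrative only."""
    rng = np.random.default_rng(seed)
    w = np.zeros(Z.shape[1])
    w_avg = np.zeros_like(w)
    t = 0
    for _ in range(n_epochs):
        for i in rng.permutation(len(y)):
            t += 1
            if y[i] * (Z[i] @ w) < 1:          # hinge subgradient step
                w += eta / np.sqrt(t) * y[i] * Z[i]
            w_avg += (w - w_avg) / t           # running average of iterates
    return w_avg

# Toy usage: D features approximating a Gaussian kernel with bandwidth gamma.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = np.sign(X[:, 0] + 1e-9)                    # a linearly separable toy label
gamma, D = 0.5, 100
W = rng.standard_normal((5, D)) * np.sqrt(2 * gamma)
b = rng.uniform(0, 2 * np.pi, D)
Z = rff_map(X, W, b)
w = train_hinge_sgd(Z, y)
print((np.sign(Z @ w) == y).mean())            # training accuracy
```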
cs.LG cs.AI | null | 1111.0712 | null | null | http://arxiv.org/pdf/1111.0712v1 | 2011-11-03T01:58:45Z | 2011-11-03T01:58:45Z | Online Learning with Preference Feedback | We propose a new online learning model for learning with preference feedback.
The model is especially suited for applications like web search and recommender
systems, where preference data is readily available from implicit user feedback
(e.g. clicks). In particular, at each time step a potentially structured object
(e.g. a ranking) is presented to the user in response to a context (e.g.
query), providing him or her with some unobserved amount of utility. As
feedback the algorithm receives an improved object that would have provided
higher utility. We propose a learning algorithm with provable regret bounds for
this online learning setting and demonstrate its effectiveness on a web-search
application. The new learning model also applies to many other interactive
learning problems and admits several interesting extensions.
| [
"['Pannagadatta K. Shivaswamy' 'Thorsten Joachims']",
"Pannagadatta K. Shivaswamy and Thorsten Joachims"
] |
cs.DS cs.LG | null | 1111.0952 | null | null | http://arxiv.org/pdf/1111.0952v1 | 2011-11-03T19:15:56Z | 2011-11-03T19:15:56Z | Computing a Nonnegative Matrix Factorization -- Provably | In the Nonnegative Matrix Factorization (NMF) problem we are given an $n
\times m$ nonnegative matrix $M$ and an integer $r > 0$. Our goal is to express
$M$ as $A W$ where $A$ and $W$ are nonnegative matrices of size $n \times r$
and $r \times m$ respectively. In some applications, it makes sense to ask
instead for the product $AW$ to approximate $M$ -- i.e. (approximately)
minimize $\|M - AW\|_F$ where $\|\cdot\|_F$ denotes the Frobenius norm; we
refer to this as Approximate NMF. This problem has a rich history spanning
quantum mechanics, probability theory, data analysis, polyhedral combinatorics,
communication complexity, demography, chemometrics, etc. In the past decade NMF
has become enormously popular in machine learning, where $A$ and $W$ are
computed using a variety of local search heuristics. Vavasis proved that this
problem is NP-complete. We initiate a study of when this problem is solvable in
polynomial time:
1. We give a polynomial-time algorithm for exact and approximate NMF for
every constant $r$. Indeed NMF is most interesting in applications precisely
when $r$ is small.
2. We complement this with a hardness result, that if exact NMF can be solved
in time $(nm)^{o(r)}$, 3-SAT has a sub-exponential time algorithm. This rules
out substantial improvements to the above algorithm.
3. We give an algorithm that runs in time polynomial in $n$, $m$ and $r$
under the separability condition identified by Donoho and Stodden in 2003. The
algorithm may be practical since it is simple and noise tolerant (under benign
assumptions). Separability is believed to hold in many practical settings.
To the best of our knowledge, this last result is the first example of a
polynomial-time algorithm that provably works under a non-trivial condition on
the input and we believe that this will be an interesting and important
direction for future work.
| [
"['Sanjeev Arora' 'Rong Ge' 'Ravi Kannan' 'Ankur Moitra']",
"Sanjeev Arora, Rong Ge, Ravi Kannan, Ankur Moitra"
] |
cs.LG cs.CC | null | 1111.1124 | null | null | http://arxiv.org/pdf/1111.1124v1 | 2011-11-04T13:33:24Z | 2011-11-04T13:33:24Z | Tight Bounds on Proper Equivalence Query Learning of DNF | We prove a new structural lemma for partial Boolean functions $f$, which we
call the seed lemma for DNF. Using the lemma, we give the first subexponential
algorithm for proper learning of DNF in Angluin's Equivalence Query (EQ) model.
The algorithm has time and query complexity $2^{\tilde{O}(\sqrt{n})}$, which
is optimal. We also give a new result on certificates for DNF-size, a simple
algorithm for properly PAC-learning DNF, and new results on EQ-learning $\log
n$-term DNF and decision trees.
| [
"['Lisa Hellerstein' 'Devorah Kletenik' 'Linda Sellie' 'Rocco Servedio']",
"Lisa Hellerstein, Devorah Kletenik, Linda Sellie and Rocco Servedio"
] |
cs.LG cs.IT math.IT | null | 1111.1136 | null | null | http://arxiv.org/pdf/1111.1136v2 | 2011-11-14T21:16:59Z | 2011-11-04T14:18:31Z | Universal MMSE Filtering With Logarithmic Adaptive Regret | We consider the problem of online estimation of a real-valued signal
corrupted by oblivious zero-mean noise using linear estimators. The estimator
is required to iteratively predict the underlying signal based on the current
and several previous noisy observations, and its performance is measured by the
mean-square-error. We describe and analyze an algorithm for this task which: 1.
Achieves logarithmic adaptive regret against the best linear filter in
hindsight. This bound is asymptotically tight, and resolves the question of
Moon and Weissman [1]. 2. Runs in linear time in terms of the number of filter
coefficients. Previous constructions required at least quadratic time.
| [
"['Dan Garber' 'Elad Hazan']",
"Dan Garber, Elad Hazan"
] |
cs.LG astro-ph.IM | 10.1088/0004-637X/756/1/67 | 1111.1315 | null | null | http://arxiv.org/abs/1111.1315v2 | 2012-03-07T02:23:33Z | 2011-11-05T14:27:11Z | Nonparametric Bayesian Estimation of Periodic Functions | Many real world problems exhibit patterns that have periodic behavior. For
example, in astrophysics, periodic variable stars play a pivotal role in
understanding our universe. An important step when analyzing data from such
processes is the problem of identifying the period: estimating the period of a
periodic function based on noisy observations made at irregularly spaced time
points. This problem is still a difficult challenge despite extensive study in
different disciplines. The paper makes several contributions toward solving
this problem. First, we present a nonparametric Bayesian model for period
finding, based on Gaussian Processes (GP), that does not make strong
assumptions on the shape of the periodic function. As our experiments
demonstrate, the new model leads to significantly better results in period
estimation when the target function is non-sinusoidal. Second, we develop a new
algorithm for parameter optimization for GP which is useful when the likelihood
function is very sensitive to the setting of the hyper-parameters and has numerous
local minima, as in the case of period estimation. The algorithm combines
gradient optimization with grid search and incorporates several mechanisms to
overcome the high complexity of inference with GP. Third, we develop a novel
approach for using domain knowledge, in the form of a probabilistic generative
model, and incorporate it into the period estimation algorithm. Experimental
results on astrophysics data validate our approach showing significant
improvement over the state of the art in this domain.
| [
"['Yuyang Wang' 'Roni Khardon' 'Pavlos Protopapas']",
"Yuyang Wang, Roni Khardon, Pavlos Protopapas"
] |
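The combination of grid search with GP likelihood evaluation described above can be illustrated with a minimal sketch using scikit-learn's periodic kernel. The candidate grid, fixed unit length scale, and noise level are our assumptions; the paper's actual algorithm additionally uses gradient optimization and a domain-knowledge prior, which this omits.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ExpSineSquared

def estimate_period(t, y, candidate_periods, noise_std=0.1):
    # Score each candidate period by the log marginal likelihood of a
    # GP with a periodic (ExpSineSquared) kernel whose periodicity is
    # held fixed at the candidate value; return the best-scoring one.
    best_p, best_ll = None, -np.inf
    for p in candidate_periods:
        kernel = ExpSineSquared(length_scale=1.0, periodicity=p,
                                periodicity_bounds="fixed")
        gp = GaussianProcessRegressor(kernel=kernel, alpha=noise_std**2)
        gp.fit(np.asarray(t).reshape(-1, 1), y)
        if gp.log_marginal_likelihood_value_ > best_ll:
            best_p, best_ll = p, gp.log_marginal_likelihood_value_
    return best_p
```

Holding the periodicity fixed per candidate while letting the length scale be optimized is what makes the grid robust to the many local minima the abstract mentions.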
cs.LG | null | 1111.1386 | null | null | http://arxiv.org/pdf/1111.1386v1 | 2011-11-06T08:43:21Z | 2011-11-06T08:43:21Z | Confidence Estimation in Structured Prediction | Structured classification tasks such as sequence labeling and dependency
parsing have attracted much interest from the Natural Language Processing and
machine learning communities. Several online learning algorithms were adapted
for structured tasks, such as the Perceptron, Passive-Aggressive, and the
recently introduced Confidence-Weighted learning. These online algorithms are easy to
implement, fast to train and yield state-of-the-art performance. However,
unlike probabilistic models such as Hidden Markov Models and Conditional Random
Fields, these methods generate models that output merely a prediction with no
additional information regarding confidence in the correctness of the output.
In this work we fill this gap by proposing a few alternatives for computing the
confidence in the output of non-probabilistic algorithms. We show how to compute
confidence estimates for the prediction such that the confidence reflects the
probability that the word is labeled correctly. We then show how to use our
methods to detect mislabeled words, trade recall for precision, and perform
active learning. We evaluate our methods on four noun-phrase chunking and named entity
recognition sequence labeling tasks, and on dependency parsing for 14
languages.
| [
"Avihai Mejer and Koby Crammer",
"['Avihai Mejer' 'Koby Crammer']"
] |
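As a rough illustration of the kind of per-word confidence estimate discussed, here is one generic recipe for a non-probabilistic linear sequence model: decode the sentence under several plausible weight vectors and score each word by agreement. Whether this matches any of the paper's specific methods is an assumption on our part; the function names are hypothetical.

```python
def agreement_confidence(decode, weight_samples, sentence):
    # decode(w, sentence) -> list of predicted labels, one per word.
    # Decode under each sampled weight vector and score each word by
    # how often the decodes agree with the first (reference) decode.
    decodes = [decode(w, sentence) for w in weight_samples]
    reference = decodes[0]
    k = len(decodes)
    return [sum(d[i] == reference[i] for d in decodes) / k
            for i in range(len(reference))]
```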
math.ST cs.LG stat.TH | null | 1111.1418 | null | null | http://arxiv.org/pdf/1111.1418v1 | 2011-11-06T13:34:10Z | 2011-11-06T13:34:10Z | Efficient Nonparametric Conformal Prediction Regions | We investigate and extend the conformal prediction method due to
Vovk, Gammerman and Shafer (2005) to construct nonparametric prediction regions.
These regions have guaranteed distribution free, finite sample coverage,
without any assumptions on the distribution or the bandwidth. Explicit
convergence rates of the loss function are established for such regions under
standard regularity conditions. Approximations for simplifying implementation
and data driven bandwidth selection methods are also discussed. The theoretical
properties of our method are demonstrated through simulations.
| [
"Jing Lei, James Robins and Larry Wasserman",
"['Jing Lei' 'James Robins' 'Larry Wasserman']"
] |
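The distribution-free coverage property can be made concrete with the generic split-conformal recipe from the same framework. This is a minimal sketch: the paper's regions are built from density estimators with data-driven bandwidths, which this simple residual-based version does not implement.

```python
import numpy as np

def split_conformal_interval(model_predict, X_cal, y_cal, x_new, alpha=0.1):
    # model_predict: a regressor already fit on a separate training split.
    # Calibrate the interval radius with absolute residuals on the
    # calibration split; coverage >= 1 - alpha holds for exchangeable
    # data, with no assumptions on the underlying distribution.
    residuals = np.abs(np.asarray(y_cal) - model_predict(X_cal))
    n = len(residuals)
    k = min(n, int(np.ceil((n + 1) * (1 - alpha))))
    q = np.sort(residuals)[k - 1]
    center = model_predict(np.atleast_2d(x_new))[0]
    return center - q, center + q
```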
cs.LG | null | 1111.1422 | null | null | http://arxiv.org/pdf/1111.1422v1 | 2011-11-06T14:01:14Z | 2011-11-06T14:01:14Z | Robust Interactive Learning | In this paper we propose and study a generalization of the standard
active-learning model where a more general type of query, class conditional
query, is allowed. Such queries have been quite useful in applications, but
have lacked theoretical understanding. In this work, we characterize the
power of such queries under two well-known noise models. We give nearly tight
upper and lower bounds on the number of queries needed to learn, both in the
general agnostic setting and in the bounded noise model. We further show that
our methods can be made adaptive to the (unknown) noise rate, with only
negligible loss in query complexity.
| [
"['Maria-Florina Balcan' 'Steve Hanneke']",
"Maria-Florina Balcan and Steve Hanneke"
] |
stat.ML cs.AI cs.LG | null | 1111.1784 | null | null | http://arxiv.org/pdf/1111.1784v2 | 2011-11-13T17:28:34Z | 2011-11-08T02:41:48Z | UPAL: Unbiased Pool Based Active Learning | In this paper we address the problem of pool based active learning, and
provide an algorithm, called UPAL, that works by minimizing the unbiased
estimator of the risk of a hypothesis in a given hypothesis space. For the
space of linear classifiers and the squared loss we show that UPAL is
equivalent to an exponentially weighted average forecaster. Exploiting some
recent results regarding the spectra of random matrices allows us to establish
consistency of UPAL when the true hypothesis is a linear hypothesis. Empirical
comparison with an active learner implementation in Vowpal Wabbit and a
previously proposed pool-based active learner implementation shows good
empirical performance and better scalability.
| [
"Ravi Ganti and Alexander Gray",
"['Ravi Ganti' 'Alexander Gray']"
] |
cs.LG cs.DS | null | 1111.1797 | null | null | http://arxiv.org/pdf/1111.1797v3 | 2012-04-09T10:43:05Z | 2011-11-08T04:27:01Z | Analysis of Thompson Sampling for the multi-armed bandit problem | The multi-armed bandit problem is a popular model for studying
exploration/exploitation trade-off in sequential decision problems. Many
algorithms are now available for this well-studied problem. One of the earliest
algorithms, given by W. R. Thompson, dates back to 1933. This algorithm,
referred to as Thompson Sampling, is a natural Bayesian algorithm. The basic
idea is to choose an arm to play according to its probability of being the best
arm. The Thompson Sampling algorithm has been shown experimentally to be close to
optimal. In addition, it is efficient to implement and exhibits several
desirable properties such as small regret for delayed feedback. However,
theoretical understanding of this algorithm was quite limited. In this paper,
for the first time, we show that the Thompson Sampling algorithm achieves
logarithmic expected regret for the multi-armed bandit problem. More precisely,
for the two-armed bandit problem, the expected regret in time $T$ is
$O(\frac{\ln T}{\Delta} + \frac{1}{\Delta^3})$. For the $N$-armed bandit
problem, the expected regret in time $T$ is $O([(\sum_{i=2}^N
\frac{1}{\Delta_i^2})^2] \ln T)$. Our bounds are optimal except for the
dependence on $\Delta_i$ and the constant factors in the big-Oh notation.
| [
"Shipra Agrawal, Navin Goyal",
"['Shipra Agrawal' 'Navin Goyal']"
] |
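The basic idea described in the abstract, playing each arm with the probability that it is the best arm, is easy to see in the standard Beta-Bernoulli instantiation of Thompson Sampling. A minimal sketch follows; the uniform Beta(1, 1) priors and function signature are our choices.

```python
import numpy as np

def thompson_sampling(pull, n_arms, horizon, seed=0):
    # Keep a Beta(successes + 1, failures + 1) posterior per arm.
    # Each round: sample one value from each posterior and play the
    # arm whose sample is largest -- equivalently, play each arm with
    # its posterior probability of being the best arm.
    rng = np.random.default_rng(seed)
    wins = np.zeros(n_arms)
    losses = np.zeros(n_arms)
    for _ in range(horizon):
        samples = rng.beta(wins + 1, losses + 1)
        arm = int(np.argmax(samples))
        reward = pull(arm)          # Bernoulli reward in {0, 1}
        wins[arm] += reward
        losses[arm] += 1 - reward
    return wins, losses
```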
cs.SI cs.LG | null | 1111.2092 | null | null | http://arxiv.org/pdf/1111.2092v1 | 2011-11-09T03:22:16Z | 2011-11-09T03:22:16Z | Pushing Your Point of View: Behavioral Measures of Manipulation in
Wikipedia | As a major source for information on virtually any topic, Wikipedia plays an
important role in public dissemination and consumption of knowledge. As a
result, it presents tremendous potential for people to promulgate their own
points of view; such efforts may be more subtle than typical vandalism. In this
paper, we introduce new behavioral metrics to quantify the level of controversy
associated with a particular user: a Controversy Score (C-Score) based on the
amount of attention the user focuses on controversial pages, and a Clustered
Controversy Score (CC-Score) that also takes into account topical clustering.
We show that both these measures are useful for identifying people who try to
"push" their points of view, by showing that they are good predictors of which
editors get blocked. The metrics can be used to triage potential POV pushers.
We apply this idea to a dataset of users who requested promotion to
administrator status and easily identify some editors who significantly changed
their behavior upon becoming administrators. At the same time, such behavior is
not rampant. Those who are promoted to administrator status tend to have more
stable behavior than comparable groups of prolific editors. This suggests that
the Adminship process works well, and that the Wikipedia community is not
overwhelmed by users who become administrators to promote their own points of
view.
| [
"Sanmay Das, Allen Lavoie, and Malik Magdon-Ismail",
"['Sanmay Das' 'Allen Lavoie' 'Malik Magdon-Ismail']"
] |
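One plausible reading of the C-Score, purely for illustration, is the fraction of a user's editing attention that lands on controversial pages. The paper's exact definition (and the clustering used by the CC-Score) may differ; this sketch only conveys the idea.

```python
def c_score(user_edits, page_controversy):
    # user_edits: dict page -> number of edits by this user.
    # page_controversy: dict page -> controversy level in [0, 1].
    # Weight each edit by the controversy of the page it touches,
    # normalized by the user's total edit count.
    total = sum(user_edits.values())
    if total == 0:
        return 0.0
    weighted = sum(n * page_controversy.get(page, 0.0)
                   for page, n in user_edits.items())
    return weighted / total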
cs.DS cs.LG | null | 1111.2111 | null | null | http://arxiv.org/pdf/1111.2111v2 | 2011-12-02T02:08:47Z | 2011-11-09T06:39:17Z | Generic Multiplicative Methods for Implementing Machine Learning
Algorithms on MapReduce | In this paper we introduce a generic model for multiplicative algorithms
which is suitable for the MapReduce parallel programming paradigm. We implement
three typical machine learning algorithms to demonstrate how similarity
comparison, gradient descent, the power method, and other classic learning
techniques fit this model well. Two versions of large-scale matrix
multiplication are discussed in this paper, and different methods are developed
for both cases with regard to their unique computational characteristics and
problem settings. In contrast to earlier research, we focus on fundamental
linear algebra techniques that establish a generic approach for a range of
algorithms, rather than specific ways of scaling up algorithms one at a time.
Experiments show promising results when evaluated on both speedup and accuracy.
Compared with a standard implementation with computational complexity $O(m^3)$
in the worst case, the large-scale matrix multiplication experiments prove our
design is considerably more efficient and maintains a good speedup as the
number of cores increases. Algorithm-specific experiments also produce
encouraging results on runtime performance.
| [
"Song Liu, Peter Flach, Nello Cristianini",
"['Song Liu' 'Peter Flach' 'Nello Cristianini']"
] |
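To see how matrix multiplication fits the MapReduce paradigm, here is an in-process simulation of one common element-keyed formulation: the map phase keys every partial product by the output cell it feeds, and the reduce phase sums per key. This is only a sketch of the general pattern, not necessarily either of the two large-scale versions the paper develops.

```python
from collections import defaultdict

def matmul_mapreduce(A, B):
    # A is n x p, B is p x m, both as lists of lists.
    n, p, m = len(A), len(B), len(B[0])
    # Map phase: emit each contribution keyed by its output cell (i, j).
    partials = defaultdict(list)
    for i in range(n):
        for k in range(p):
            for j in range(m):
                partials[(i, j)].append(A[i][k] * B[k][j])
    # Reduce phase: sum the partial products for each output cell.
    C = [[0.0] * m for _ in range(n)]
    for (i, j), vals in partials.items():
        C[i][j] = sum(vals)
    return C
```

In a real deployment the keyed pairs are shuffled across workers between the two phases; the in-process dictionary stands in for that shuffle.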
cs.NE cs.AI cs.LG | null | 1111.2221 | null | null | http://arxiv.org/pdf/1111.2221v1 | 2011-11-09T14:44:58Z | 2011-11-09T14:44:58Z | Scaling Up Estimation of Distribution Algorithms For Continuous
Optimization | Since Estimation of Distribution Algorithms (EDA) were proposed, many
attempts have been made to improve EDAs' performance in the context of global
optimization. So far, the studies or applications of multivariate probabilistic
model based continuous EDAs are still restricted to rather low dimensional
problems (smaller than 100D). Traditional EDAs have difficulties in solving
higher dimensional problems because of the curse of dimensionality and their
rapidly increasing computational cost. However, scaling up continuous EDAs for
higher dimensional optimization is still necessary, which is supported by the
distinctive feature of EDAs: because a probabilistic model is explicitly
estimated, one can discover useful properties or features of the problem from
the learnt model. Besides obtaining a good solution, an understanding of the problem
structure can be of great benefit, especially for black box optimization. We
propose a novel EDA framework with Model Complexity Control (EDA-MCC) to scale
up EDAs. By using Weakly dependent variable Identification (WI) and Subspace
Modeling (SM), EDA-MCC shows significantly better performance than traditional
EDAs on high dimensional problems. Moreover, the computational cost and the
requirement of large population sizes can be reduced in EDA-MCC. In addition to
being able to find a good solution, EDA-MCC can also produce a useful problem
structure characterization. EDA-MCC is the first successful instance of
multivariate model based EDAs that can be effectively applied to a general class
of up to 500D problems. It also outperforms some newly developed algorithms
designed specifically for large scale optimization. In order to understand the
strength and weakness of EDA-MCC, we have carried out extensive computational
studies of EDA-MCC. Our results reveal on what kinds of benchmark functions
EDA-MCC is likely to outperform other methods.
| [
"Weishan Dong, Tianshi Chen, Peter Tino, and Xin Yao",
"['Weishan Dong' 'Tianshi Chen' 'Peter Tino' 'Xin Yao']"
] |
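For readers unfamiliar with the EDA loop the abstract builds on, here is a minimal sketch of a traditional univariate Gaussian EDA, the kind of baseline EDA-MCC improves upon (this is not EDA-MCC itself, and the population sizes and selection fraction are our assumptions).

```python
import numpy as np

def univariate_gaussian_eda(f, dim, pop_size=100, elite_frac=0.3,
                            n_gen=100, seed=0):
    # Repeatedly: sample a population from a factorized Gaussian model,
    # select the best fraction (minimization), and re-estimate the
    # model's mean and standard deviation from the selected points.
    rng = np.random.default_rng(seed)
    mu = np.zeros(dim)
    sigma = np.ones(dim)
    n_elite = max(2, int(pop_size * elite_frac))
    for _ in range(n_gen):
        pop = rng.normal(mu, sigma, size=(pop_size, dim))
        scores = np.apply_along_axis(f, 1, pop)
        elite = pop[np.argsort(scores)[:n_elite]]
        mu = elite.mean(axis=0)
        sigma = elite.std(axis=0) + 1e-12   # avoid model collapse
    return mu
```

The learnt `mu` and `sigma` are exactly the "explicitly estimated model" the abstract refers to; EDA-MCC's contribution is controlling the complexity of a multivariate version of this model in high dimensions.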
cs.LG cs.NA | 10.1109/TIT.2013.2271378 | 1111.2262 | null | null | http://arxiv.org/abs/1111.2262v4 | 2012-07-24T18:34:52Z | 2011-11-09T16:34:55Z | Improved Bound for the Nystrom's Method and its Application to Kernel
Classification | We develop two approaches for analyzing the approximation error bound for the
Nystr\"{o}m method, one based on the concentration inequality of integral
operator, and one based on the compressive sensing theory. We show that the
approximation error, measured in the spectral norm, can be improved from
$O(N/\sqrt{m})$ to $O(N/m^{1 - \rho})$ in the case of large eigengap, where $N$
is the total number of data points, $m$ is the number of sampled data points,
and $\rho \in (0, 1/2)$ is a positive constant that characterizes the eigengap.
When the eigenvalues of the kernel matrix follow a $p$-power law, our analysis
based on compressive sensing theory further improves the bound to $O(N/m^{p -
1})$ under an incoherence assumption, which explains why the Nystr\"{o}m method
works well for kernel matrices with skewed eigenvalues. We present a kernel
classification approach based on the Nystr\"{o}m method and derive its
generalization performance using the improved bound. We show that when the
eigenvalues of the kernel matrix follow a $p$-power law, we can reduce the number
of support vectors to $N^{2p/(p^2 - 1)}$, a number less than $N$ when $p >
1+\sqrt{2}$, without seriously sacrificing its generalization performance.
| [
"['Rong Jin' 'Tianbao Yang' 'Mehrdad Mahdavi' 'Yu-Feng Li' 'Zhi-Hua Zhou']",
"Rong Jin, Tianbao Yang, Mehrdad Mahdavi, Yu-Feng Li, Zhi-Hua Zhou"
] |
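The Nystr\"{o}m approximation being analyzed is itself simple to state: sample m points, and reconstruct the full kernel matrix from its blocks against the sample. A minimal sketch follows; the uniform sampling and pseudoinverse are the standard choices, not anything specific to this paper's analysis.

```python
import numpy as np

def nystrom(X, kernel, m, seed=0):
    # Approximate the N x N kernel matrix as C @ pinv(W) @ C.T, where
    # C is the N x m block of kernel values against the m sampled
    # points and W is the m x m block among the samples themselves.
    rng = np.random.default_rng(seed)
    idx = rng.choice(X.shape[0], size=m, replace=False)
    C = kernel(X, X[idx])          # N x m
    W = C[idx, :]                  # m x m
    return C @ np.linalg.pinv(W) @ C.T
```

The paper's bounds quantify how close this rank-m reconstruction is to the true kernel matrix in spectral norm as a function of m and the eigengap.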
cs.LG cs.GT | null | 1111.2664 | null | null | http://arxiv.org/pdf/1111.2664v1 | 2011-11-11T05:09:33Z | 2011-11-11T05:09:33Z | A Collaborative Mechanism for Crowdsourcing Prediction Problems | Machine Learning competitions such as the Netflix Prize have proven
reasonably successful as a method of "crowdsourcing" prediction tasks. But
these competitions have a number of weaknesses, particularly in the incentive
structure they create for the participants. We propose a new approach, called a
Crowdsourced Learning Mechanism, in which participants collaboratively "learn"
a hypothesis for a given prediction task. The approach draws heavily from the
concept of a prediction market, where traders bet on the likelihood of a future
event. In our framework, the mechanism continues to publish the current
hypothesis, and participants can modify this hypothesis by wagering on an
update. The critical incentive property is that a participant will profit an
amount that scales according to how much her update improves performance on a
released test set.
| [
"Jacob Abernethy and Rafael M. Frongillo",
"['Jacob Abernethy' 'Rafael M. Frongillo']"
] |
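The incentive property described in the abstract can be caricatured in a few lines: a participant's wagered update is published, and her payoff scales with the improvement it brings on a released test set. The paper's actual market-based payoff rule is richer; this sketch, with invented names, only captures the stated incentive.

```python
def wager_round(published_h, proposed_h, test_set, loss, scale=1.0):
    # The mechanism publishes the proposed update; the participant's
    # payoff scales with how much the update reduces average loss on
    # the released test set (negative payoff if the update hurt).
    avg = lambda h: sum(loss(h, x, y) for x, y in test_set) / len(test_set)
    payoff = scale * (avg(published_h) - avg(proposed_h))
    return proposed_h, payoff
```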
cs.LG cs.IR | null | 1111.2948 | null | null | http://arxiv.org/pdf/1111.2948v2 | 2011-11-15T10:20:57Z | 2011-11-12T17:53:10Z | Using Contextual Information as Virtual Items on Top-N Recommender
Systems | Traditionally, recommender systems for the Web deal with applications that
have two dimensions, users and items. Based on access logs that relate these
dimensions, a recommendation model can be built and used to identify a set of N
items that will be of interest to a certain user. In this paper we propose a
method to complement the information in the access logs with contextual
information without changing the recommendation algorithm. The method consists
in representing context as virtual items. We empirically test this method with
two top-N recommender systems, an item-based collaborative filtering technique
and association rules, on three data sets. The results show that our method is
able to take advantage of the context (new dimensions) when it is informative.
| [
"['Marcos A. Domingues' 'Alipio Mario Jorge' 'Carlos Soares']",
"Marcos A. Domingues, Alipio Mario Jorge, Carlos Soares"
] |
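The core trick, representing context as virtual items so the recommendation algorithm itself is unchanged, amounts to a data-augmentation step. A minimal sketch under assumed data structures follows; the paper's exact encoding of context dimensions may differ.

```python
def add_virtual_items(access_log, context_of):
    # access_log: dict session_id -> list of item ids accessed.
    # context_of: dict session_id -> list of context values
    #             (e.g. "weekend", "mobile").
    # Append one pseudo-item per context value, so an unmodified top-N
    # recommender sees context as just another co-occurring item.
    augmented = {}
    for session, items in access_log.items():
        virtual = ["ctx:" + str(v) for v in context_of.get(session, [])]
        augmented[session] = list(items) + virtual
    return augmented
```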
null | null | 1111.3735 | null | null | http://arxiv.org/pdf/1111.3735v1 | 2011-11-16T09:26:14Z | 2011-11-16T09:26:14Z | A Bayesian Model for Plan Recognition in RTS Games applied to StarCraft | The task of keyhole (unobtrusive) plan recognition is central to adaptive game AI. "Tech trees" or "build trees" are the core of real-time strategy (RTS) game strategic (long term) planning. This paper presents a generic and simple Bayesian model for RTS build tree prediction from noisy observations, whose parameters are learned from replays (game logs). This unsupervised machine learning approach involves minimal work for the game developers as it leverages players' data (common in RTS). We applied it to StarCraft and showed that it yields high quality and robust predictions that can feed an adaptive AI. | [
"['Gabriel Synnaeve' 'Pierre Bessière']"
] |
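The Bayesian prediction step described here, a prior over build trees learned from replays combined with a noisy-observation likelihood, can be sketched generically. The factorization and likelihood are left to the caller because the paper's exact model structure is not specified in the abstract; all names below are hypothetical.

```python
import math
from collections import Counter

def predict_build_tree(replay_trees, obs_loglik, observations):
    # Prior over build trees from replay counts, combined with a
    # caller-supplied log likelihood for the noisy observations:
    # P(tree | obs) proportional to P(obs | tree) * P(tree).
    counts = Counter(replay_trees)
    total = sum(counts.values())
    log_post = {t: math.log(c / total) + obs_loglik(t, observations)
                for t, c in counts.items()}
    m = max(log_post.values())                 # stabilize the exponent
    w = {t: math.exp(v - m) for t, v in log_post.items()}
    z = sum(w.values())
    return {t: p / z for t, p in w.items()}
```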
cs.LG cs.IT math.IT | null | 1111.3846 | null | null | http://arxiv.org/pdf/1111.3846v1 | 2011-11-16T16:06:57Z | 2011-11-16T16:06:57Z | No Free Lunch versus Occam's Razor in Supervised Learning | The No Free Lunch theorems are often used to argue that domain specific
knowledge is required to design successful algorithms. We use algorithmic
information theory to argue the case for a universal bias allowing an algorithm
to succeed in all interesting problem domains. Additionally, we give a new
algorithm for off-line classification, inspired by Solomonoff induction, with
good performance on all structured problems under reasonable assumptions. This
includes a proof of the efficacy of the well-known heuristic of randomly
selecting training data in the hope of reducing misclassification rates.
| [
"Tor Lattimore and Marcus Hutter",
"['Tor Lattimore' 'Marcus Hutter']"
] |