title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Complexity Analysis of the Lasso Regularization Path | stat.ML cs.LG math.OC | The regularization path of the Lasso can be shown to be piecewise linear,
making it possible to "follow" and explicitly compute the entire path. We
analyze this popular strategy in this paper and prove that its worst-case
complexity is exponential in the number of variables. We then contrast this
pessimistic result with an (optimistic) approximate analysis: we show that an
approximate path with at most O(1/sqrt(epsilon)) linear segments can always be
obtained, where every point on the path is guaranteed to be optimal up to a
relative epsilon-duality gap. We complete our theoretical analysis with a
practical algorithm to compute these approximate paths.
| Julien Mairal and Bin Yu | null | 1205.0079 | null | null |
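A small illustrative sketch, not the authors' approximate-path algorithm: it traces the Lasso regularization path over a decreasing grid of penalties with scikit-learn's coordinate-descent `lasso_path`, assuming scikit-learn is available; the synthetic data and grid size are arbitrary choices made for the example.

```python
# Illustrative sketch only: traces the Lasso regularization path on synthetic
# data with scikit-learn's coordinate-descent solver. This is NOT the paper's
# approximate-path algorithm; it merely shows the piecewise structure of the
# path over a grid of penalty values.
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.1 * rng.standard_normal(100)

# alphas: decreasing grid of penalties; coefs[j, k]: coefficient j at alphas[k]
alphas, coefs, _ = lasso_path(X, y, n_alphas=50)
for j in range(3):
    first_active = np.argmax(np.abs(coefs[j]) > 0)
    print(f"feature {j} enters the path at alpha ~ {alphas[first_active]:.4f}")
```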
ProPPA: A Fast Algorithm for $\ell_1$ Minimization and Low-Rank Matrix
Completion | cs.LG math.OC | We propose a Projected Proximal Point Algorithm (ProPPA) for solving a class
of optimization problems. The algorithm iteratively computes the proximal point
of the last estimated solution projected into an affine space that is
parallel to, and approaches, the feasible set. We provide a theoretical
convergence analysis of the general algorithm, and then apply it to solving
$\ell_1$-minimization problems and the matrix completion problem. These
problems arise in many applications including machine learning, image and
signal processing. We compare our algorithm with the existing state-of-the-art
algorithms. Experimental results on solving these problems show that our
algorithm is very efficient and competitive.
| Ranch Y.Q. Lai and Pong C. Yuen | null | 1205.0088 | null | null |
A Randomized Mirror Descent Algorithm for Large Scale Multiple Kernel
Learning | cs.LG stat.ML | We consider the problem of simultaneously learning to linearly combine a very
large number of kernels and learn a good predictor based on the learnt kernel.
When the number of kernels $d$ to be combined is very large, multiple kernel
learning methods whose computational cost scales linearly in $d$ are
intractable. We propose a randomized version of the mirror descent algorithm to
overcome this issue, under the objective of minimizing the group $p$-norm
penalized empirical risk. The key to achieving the required exponential speed-up
is the computationally efficient construction of low-variance estimates of the
gradient. We propose importance sampling based estimates, and find that the
ideal distribution samples a coordinate with a probability proportional to the
magnitude of the corresponding gradient. We show the surprising result that in
the case of learning the coefficients of a polynomial kernel, the combinatorial
structure of the base kernels to be combined allows the implementation of
sampling from this distribution to run in $O(\log(d))$ time, making the total
computational cost of the method $O(\log(d)/\epsilon^2)$ to achieve an
$\epsilon$-optimal solution, thereby allowing our method to operate for very
large values of $d$. Experiments with simulated and real data confirm that the
new algorithm is computationally more efficient than its state-of-the-art
alternatives.
| Arash Afkanpour, Andr\'as Gy\"orgy, Csaba Szepesv\'ari, Michael
Bowling | null | 1205.0288 | null | null |
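A minimal numpy sketch of the importance-sampling idea described in the abstract: sample one coordinate with probability proportional to the gradient magnitude and importance-weight it, so the one-sparse vector is an unbiased estimate of the full gradient. The paper's $O(\log(d))$ sampling construction for polynomial kernels is not reproduced here.

```python
# Minimal sketch: pick one coordinate with probability proportional to |g_i|
# and rescale, so the resulting one-sparse vector is an unbiased estimate of
# the full gradient. Not the paper's O(log d) polynomial-kernel construction.
import numpy as np

def sampled_gradient_estimate(grad, rng):
    p = np.abs(grad)
    p = p / p.sum()                      # sampling distribution ~ |g_i|
    i = rng.choice(len(grad), p=p)
    est = np.zeros_like(grad)
    est[i] = grad[i] / p[i]              # importance weighting keeps it unbiased
    return est

rng = np.random.default_rng(0)
g = rng.standard_normal(1000)
avg = np.mean([sampled_gradient_estimate(g, rng) for _ in range(5000)], axis=0)
print("max deviation of averaged estimate from true gradient:",
      np.max(np.abs(avg - g)))
```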
Minimax Classifier for Uncertain Costs | cs.LG | Many studies on cost-sensitive learning assume that a unique cost matrix
is known for a problem. However, this assumption may not hold for many
real-world problems. For example, a classifier might need to be applied in
several circumstances, each of which is associated with a different cost matrix.
Or, different human experts have different opinions about the costs for a given
problem. Motivated by these facts, this study aims to seek the minimax
classifier over multiple cost matrices. In summary, we theoretically prove
that, no matter how many cost matrices are involved, the minimax problem can be
tackled by solving a number of standard cost-sensitive problems and
sub-problems that involve only two cost matrices. As a result, a general
framework for achieving the minimax classifier over multiple cost matrices is
suggested and justified by preliminary empirical studies.
| Rui Wang and Ke Tang | null | 1205.0406 | null | null |
Hypothesis testing using pairwise distances and associated kernels (with
Appendix) | cs.LG stat.ME stat.ML | We provide a unifying framework linking two classes of statistics used in
two-sample and independence testing: on the one hand, the energy distances and
distance covariances from the statistics literature; on the other, distances
between embeddings of distributions to reproducing kernel Hilbert spaces
(RKHS), as established in machine learning. The equivalence holds when energy
distances are computed with semimetrics of negative type, in which case a
kernel may be defined such that the RKHS distance between distributions
corresponds exactly to the energy distance. We determine the class of
probability distributions for which kernels induced by semimetrics are
characteristic (that is, for which embeddings of the distributions to an RKHS
are injective). Finally, we investigate the performance of this family of
kernels in two-sample and independence tests: we show in particular that the
energy distance most commonly employed in statistics is just one member of a
parametric family of kernels, and that other choices from this family can yield
more powerful tests.
| Dino Sejdinovic, Arthur Gretton, Bharath Sriperumbudur, Kenji Fukumizu | null | 1205.0411 | null | null |
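For concreteness, a sketch of the energy-distance two-sample statistic discussed in the abstract, computed from plain pairwise Euclidean distances; the permutation calibration is a standard add-on, not a detail taken from the paper.

```python
# Sketch of the energy-distance two-sample statistic (Euclidean distance is a
# semimetric of negative type, so this coincides with an MMD under the induced
# kernel). The permutation test is just the usual way to calibrate it.
import numpy as np
from scipy.spatial.distance import cdist

def energy_distance(x, y):
    return 2.0 * cdist(x, y).mean() - cdist(x, x).mean() - cdist(y, y).mean()

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(200, 2))
y = rng.normal(0.5, 1.0, size=(200, 2))

stat = energy_distance(x, y)
pooled = np.vstack([x, y])
perm_stats = []
for _ in range(200):
    idx = rng.permutation(len(pooled))
    perm_stats.append(energy_distance(pooled[idx[:200]], pooled[idx[200:]]))
print("statistic:", stat,
      " permutation p-value:", np.mean(np.array(perm_stats) >= stat))
```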
Greedy Multiple Instance Learning via Codebook Learning and Nearest
Neighbor Voting | cs.LG | Multiple instance learning (MIL) has attracted great attention recently in
the machine learning community. However, most MIL algorithms are very slow and
cannot be applied to large datasets. In this paper, we propose a greedy
strategy to speed up the multiple instance learning process. Our contribution
is twofold. First, we propose a density ratio model, and show that maximizing
a density ratio function is a lower bound of the DD model under certain
conditions. Secondly, we make use of a histogram ratio between positive bags
and negative bags to represent the density ratio function and find codebooks
separately for positive bags and negative bags by a greedy strategy. For
testing, we make use of a nearest neighbor strategy to classify new bags. We
test our method on both small benchmark datasets and the large TRECVID MED11
dataset. The experimental results show that our method yields comparable
accuracy to the current state of the art, while being at least one order of
magnitude faster.
| Gang Chen and Jason Corso | null | 1205.0610 | null | null |
Generative Maximum Entropy Learning for Multiclass Classification | cs.IT cs.LG math.IT | The maximum entropy approach to classification is very well studied in applied
statistics and machine learning, and almost all the methods that exist in the
literature are discriminative in nature. In this paper, we introduce a maximum
entropy classification method with feature selection for large dimensional data
such as text datasets that is generative in nature. To tackle the curse of
dimensionality of large data sets, we employ the conditional independence
assumption (Naive Bayes) and we perform feature selection simultaneously, by
enforcing a `maximum discrimination' between estimated class conditional
densities. For two class problems, in the proposed method, we use Jeffreys
($J$) divergence to discriminate the class conditional densities. To extend our
method to the multi-class case, we propose a completely new approach by
considering a multi-distribution divergence: we replace Jeffreys divergence by
Jensen-Shannon ($JS$) divergence to discriminate conditional densities of
multiple classes. In order to reduce computational complexity, we employ a
modified Jensen-Shannon divergence ($JS_{GM}$), based on the AM-GM inequality. We
show that the resulting divergence is a natural generalization of Jeffreys
divergence to the multiple-distribution case. As far as the theoretical
justifications are concerned, we show that when one intends to select the best
features in a generative maximum entropy approach, maximum discrimination using
the $J$-divergence emerges naturally in binary classification. A performance and
comparative study of the proposed algorithms, demonstrated on large-dimensional
text and gene expression datasets, shows that our methods scale up very well
with large-dimensional datasets.
| Ambedkar Dukkipati, Gaurav Pandey, Debarghya Ghoshdastidar, Paramita
Koley, D. M. V. Satya Sriram | null | 1205.0651 | null | null |
Weighted Patterns as a Tool for Improving the Hopfield Model | cond-mat.dis-nn cs.LG cs.NE | We generalize the standard Hopfield model to the case when a weight is
assigned to each input pattern. The weight can be interpreted as the frequency
of the pattern occurrence at the input of the network. In the framework of the
statistical physics approach we obtain the saddle-point equation allowing us to
examine the memory of the network. In the case of unequal weights our model
does not lead to the catastrophic destruction of the memory due to its
overfilling (that is typical for the standard Hopfield model). The real memory
consists only of the patterns with weights exceeding a critical value that is
determined by the weight distribution. We obtain an algorithm allowing us to
find this critical value for an arbitrary distribution of the weights, and
analyze in detail some particular weight distributions. It is shown that the
memory decreases as compared to the case of the standard Hopfield model.
However, in our model the network can learn online without the catastrophic
destruction of the memory.
| Iakov Karandashev, Boris Kryzhanovsky and Leonid Litinskii | 10.1103/PhysRevE.85.041925 | 1205.0908 | null | null |
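A toy sketch of the weighted Hebbian storage rule implied by the abstract: each pattern contributes to the connection matrix in proportion to its weight, followed by plain synchronous sign-updates for recall. The paper's statistical-physics capacity analysis is not reproduced, and the sizes and weights below are arbitrary.

```python
# Sketch: Hopfield network whose Hebbian connection matrix weights each stored
# pattern by its frequency of occurrence, then a plain synchronous recall loop.
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 10
patterns = rng.choice([-1, 1], size=(m, n))
weights = rng.random(m)                       # per-pattern weights (frequencies)

W = np.einsum("k,ki,kj->ij", weights, patterns, patterns) / n
np.fill_diagonal(W, 0.0)

# Recall a noisy version of the most heavily weighted pattern.
target = patterns[np.argmax(weights)]
state = target * np.where(rng.random(n) < 0.1, -1, 1)   # flip ~10% of the bits
for _ in range(20):
    state = np.sign(W @ state)
    state[state == 0] = 1
print("overlap with stored pattern:", float(state @ target) / n)
```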
Variable Selection for Latent Dirichlet Allocation | cs.LG stat.ML | In latent Dirichlet allocation (LDA), topics are multinomial distributions
over the entire vocabulary. However, the vocabulary usually contains many words
that are not relevant in forming the topics. We adopt a variable selection
method widely used in statistical modeling as a dimension reduction tool and
combine it with LDA. In this variable selection model for LDA (vsLDA), topics
are multinomial distributions over a subset of the vocabulary, and by excluding
words that are not informative for finding the latent topic structure of the
corpus, vsLDA finds topics that are more robust and discriminative. We compare
three models, vsLDA, LDA with symmetric priors, and LDA with asymmetric priors,
on heldout likelihood, MCMC chain consistency, and document classification. The
performance of vsLDA is better than symmetric LDA for likelihood and
classification, better than asymmetric LDA for consistency and classification,
and about the same in the other comparisons.
| Dongwoo Kim, Yeonseung Chung, Alice Oh | null | 1205.1053 | null | null |
On the Complexity of Trial and Error | cs.CC cs.DS cs.LG | Motivated by certain applications from physics, biochemistry, economics, and
computer science, in which the objects under investigation are not accessible
because of various limitations, we propose a trial-and-error model to examine
algorithmic issues in such situations. Given a search problem with a hidden
input, we are asked to find a valid solution; to do so, we can propose
candidate solutions (trials) and use observed violations (errors) to prepare
future proposals. In accordance with our motivating applications, we consider
the fairly broad class of constraint satisfaction problems, and assume that
errors are signaled by a verification oracle in the format of the index of a
violated constraint (with the content of the constraint still hidden).
Our discoveries are summarized as follows. On the one hand, despite the
seemingly scant information provided by the verification oracle, efficient
algorithms do exist for a number of important problems. For the Nash, Core,
Stable Matching, and SAT problems, the unknown-input versions are as hard as
the corresponding known-input versions, up to a polynomial factor. We
further give almost tight bounds on the latter two problems' trial
complexities. On the other hand, there are problems whose complexities are
substantially increased in the unknown-input model. In particular, no
time-efficient algorithms exist (under standard hardness assumptions) for the Graph
Isomorphism and Group Isomorphism problems. The tools used to achieve these
results include order theory, strong ellipsoid method, and some non-standard
reductions.
Our model investigates the value of information, and our results demonstrate
that the lack of input information can introduce various levels of extra
difficulty. The model exhibits intimate connections with (and we hope can also
serve as a useful supplement to) certain existing learning and complexity
theories.
| Xiaohui Bei, Ning Chen, Shengyu Zhang | null | 1205.1183 | null | null |
Convex Relaxation for Combinatorial Penalties | stat.ML cs.LG | In this paper, we propose a unifying view of several recently proposed
structured sparsity-inducing norms. We consider the situation of a model
simultaneously (a) penalized by a set-function defined on the support of the
unknown parameter vector which represents prior knowledge on supports, and (b)
regularized in Lp-norm. We show that the natural combinatorial optimization
problems obtained may be relaxed into convex optimization problems and
introduce a notion, the lower combinatorial envelope of a set-function, that
characterizes the tightness of our relaxations. We moreover establish links
with norms based on latent representations including the latent group Lasso and
block-coding, and with norms obtained from submodular functions.
| Guillaume Obozinski (INRIA Paris - Rocquencourt, LIENS), Francis Bach
(INRIA Paris - Rocquencourt, LIENS) | null | 1205.1240 | null | null |
Sparse group lasso and high dimensional multinomial classification | stat.ML cs.LG stat.CO | The sparse group lasso optimization problem is solved using a coordinate
gradient descent algorithm. The algorithm is applicable to a broad class of
convex loss functions. Convergence of the algorithm is established, and the
algorithm is used to investigate the performance of the multinomial sparse
group lasso classifier. On three different real data examples the multinomial
group lasso clearly outperforms multinomial lasso in terms of achieved
classification error rate and in terms of including fewer features for the
classification. The run-time of our sparse group lasso implementation is of the
same order of magnitude as the multinomial lasso algorithm implemented in the R
package glmnet. Our implementation scales well with the problem size. One of
the high dimensional examples considered is a 50 class classification problem
with 10k features, which amounts to estimating 500k parameters. The
implementation is available as the R package msgl.
| Martin Vincent, Niels Richard Hansen | null | 1205.1245 | null | null |
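A small sketch of the sparse group lasso proximal operator, the standard elementwise soft-threshold followed by groupwise shrinkage; it is an illustrative helper with assumed penalty weights `lam1` and `lam2`, not the coordinate gradient descent algorithm implemented in the msgl package.

```python
# Sketch of the sparse group lasso proximal operator: soft-threshold each
# coordinate, then shrink each group's norm. This is the standard building
# block behind proximal/coordinate solvers for this penalty.
import numpy as np

def prox_sparse_group_lasso(beta, groups, lam1, lam2):
    """One prox step for lam1*||b||_1 + lam2*sum_g ||b_g||_2."""
    out = np.sign(beta) * np.maximum(np.abs(beta) - lam1, 0.0)   # l1 part
    for g in groups:                                             # group part
        norm = np.linalg.norm(out[g])
        if norm > 0:
            out[g] *= max(0.0, 1.0 - lam2 / norm)
    return out

beta = np.array([3.0, -0.2, 0.1, 2.5, -2.0, 0.05])
groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]
print(prox_sparse_group_lasso(beta, groups, lam1=0.3, lam2=0.5))
```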
Compressed Sensing for Energy-Efficient Wireless Telemonitoring of
Noninvasive Fetal ECG via Block Sparse Bayesian Learning | stat.ML cs.LG stat.AP | Fetal ECG (FECG) telemonitoring is an important branch in telemedicine. The
design of a telemonitoring system via a wireless body-area network with low
energy consumption for ambulatory use is highly desirable. As an emerging
technique, compressed sensing (CS) shows great promise in
compressing/reconstructing data with low energy consumption. However, due to
some specific characteristics of raw FECG recordings such as non-sparsity and
strong noise contamination, current CS algorithms generally fail in this
application.
This work proposes to use the block sparse Bayesian learning (BSBL) framework
to compress/reconstruct non-sparse raw FECG recordings. Experimental results
show that the framework can reconstruct the raw recordings with high quality.
Especially, the reconstruction does not destroy the interdependence relation
among the multichannel recordings. This ensures that the independent component
analysis decomposition of the reconstructed recordings has high fidelity.
Furthermore, the framework allows the use of a sparse binary sensing matrix
with much fewer nonzero entries to compress recordings. Particularly, each
column of the matrix can contain only two nonzero entries. This shows that the
framework, compared to other algorithms such as current CS algorithms and
wavelet algorithms, can greatly reduce CPU execution in the data compression
stage.
| Zhilin Zhang, Tzyy-Ping Jung, Scott Makeig, Bhaskar D. Rao | 10.1109/TBME.2012.2226175 | 1205.1287 | null | null |
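A sketch of the compression stage described above: a sparse binary sensing matrix with exactly two nonzero entries per column applied to a non-sparse signal. The BSBL reconstruction itself is not sketched, and the signal below is a stand-in rather than real FECG data.

```python
# Sketch: sparse binary sensing matrix with exactly two nonzeros per column,
# so compression y = Phi @ x needs only additions. Reconstruction not shown.
import numpy as np

def sparse_binary_sensing_matrix(m, n, rng):
    phi = np.zeros((m, n))
    for col in range(n):
        rows = rng.choice(m, size=2, replace=False)   # two nonzeros per column
        phi[rows, col] = 1.0
    return phi

rng = np.random.default_rng(0)
n, m = 512, 256                          # original and compressed lengths
x = np.cumsum(rng.standard_normal(n))    # a non-sparse, smooth stand-in signal
Phi = sparse_binary_sensing_matrix(m, n, rng)
y = Phi @ x
print("compressed", n, "samples into", len(y), "measurements;",
      int(Phi.sum()), "nonzeros in Phi")
```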
Detecting Spammers via Aggregated Historical Data Set | cs.CR cs.LG | The battle between email service providers and senders of mass unsolicited
emails (Spam) continues to escalate. Vast numbers of Spam emails are sent
mainly from automatic botnets distributed over the world. One method for
mitigating Spam in a computationally efficient manner is fast and accurate
blacklisting of the senders. In this work we propose a new sender reputation
mechanism that is based on an aggregated historical data-set which encodes the
behavior of mail transfer agents over time. A historical data-set is created
from labeled logs of received emails. We use machine learning algorithms to
build a model that predicts the \emph{spammingness} of mail transfer agents in
the near future. The proposed mechanism is targeted mainly at large enterprises
and email service providers and can be used for updating both the black and the
white lists. We evaluate the proposed mechanism using 9.5M anonymized log
entries obtained from the biggest Internet service provider in Europe.
Experiments show that the proposed method detects more than 94% of the Spam emails
that escaped the blacklist (i.e., TPR), while having less than 0.5%
false alarms. Therefore, the effectiveness of the proposed method is much
higher than that of previously reported reputation mechanisms, which rely on
email logs. In addition, the proposed method, when used for updating both the
black and white lists, eliminated the need for automatic content inspection of
4 out of 5 incoming emails, which resulted in a dramatic reduction in the
filtering computational load.
| Eitan Menahem and Rami Puzis | null | 1205.1357 | null | null |
Dynamic Multi-Relational Chinese Restaurant Process for Analyzing
Influences on Users in Social Media | cs.SI cs.LG physics.soc-ph | We study the problem of analyzing influence of various factors affecting
individual messages posted in social media. The problem is challenging because
of various types of influences propagating through the social media network
that act simultaneously on any user. Additionally, the topic composition of the
influencing factors and the susceptibility of users to these influences evolve
over time. This problem has not been studied before, and off-the-shelf models are
unsuitable for this purpose. To capture the complex interplay of these various
factors, we propose a new non-parametric model called the Dynamic
Multi-Relational Chinese Restaurant Process. This accounts for the user network
for data generation and also allows the parameters to evolve over time.
Designing inference algorithms for this model suited for large scale
social-media data is another challenge. To this end, we propose a scalable and
multi-threaded inference algorithm based on online Gibbs Sampling. Extensive
evaluations on large-scale Twitter and Facebook data show that the extracted
topics when applied to authorship and commenting prediction outperform
state-of-the-art baselines. More importantly, our model produces valuable
insights on topic trends and user personality trends, beyond the capability of
existing approaches.
| Himabindu Lakkaraju, Indrajit Bhattacharya, Chiranjib Bhattacharyya | null | 1205.1456 | null | null |
Risk estimation for matrix recovery with spectral regularization | math.OC cs.IT cs.LG math.IT math.ST stat.ML stat.TH | In this paper, we develop an approach to recursively estimate the quadratic
risk for matrix recovery problems regularized with spectral functions. Toward
this end, in the spirit of the SURE theory, a key step is to compute the (weak)
derivative and divergence of a solution with respect to the observations. As
such a solution is not available in closed form, but rather through a proximal
splitting algorithm, we propose to recursively compute the divergence from the
sequence of iterates. A second challenge that we overcome is the computation of
the (weak) derivative of the proximity operator of a spectral function. To show
the potential applicability of our approach, we exemplify it on a matrix
completion problem to objectively and automatically select the regularization
parameter.
| Charles-Alban Deledalle (CEREMADE), Samuel Vaiter (CEREMADE), Gabriel
Peyr\'e (CEREMADE), Jalal Fadili (GREYC), Charles Dossal (IMB) | null | 1205.1482 | null | null |
Graph-based Learning with Unbalanced Clusters | stat.ML cs.LG | Graph construction is a crucial step in spectral clustering (SC) and
graph-based semi-supervised learning (SSL). Spectral methods applied on
standard graphs such as full-RBF, $\epsilon$-graphs and $k$-NN graphs can lead
to poor performance in the presence of proximal and unbalanced data. This is
because spectral methods based on minimizing RatioCut or normalized cut on
these graphs tend to put more importance on balancing cluster sizes over
reducing cut values. We propose a novel graph construction technique and show
that the RatioCut solution on this new graph is able to handle proximal and
unbalanced data. Our method is based on adaptively modulating the neighborhood
degrees in a $k$-NN graph, which tends to sparsify neighborhoods in low density
regions. Our method adapts to data with varying levels of unbalancedness and
can be naturally used for small cluster detection. We justify our ideas through
limit cut analysis. Unsupervised and semi-supervised experiments on synthetic
and real data sets demonstrate the superiority of our method.
| Jing Qian, Venkatesh Saligrama, Manqi Zhao | null | 1205.1496 | null | null |
Approximate Dynamic Programming By Minimizing Distributionally Robust
Bounds | stat.ML cs.LG | Approximate dynamic programming is a popular method for solving large Markov
decision processes. This paper describes a new class of approximate dynamic
programming (ADP) methods, distributionally robust ADP, that address the curse
of dimensionality by minimizing a pessimistic bound on the policy loss. This
approach turns ADP into an optimization problem, for which we derive new
mathematical program formulations and analyze its properties. DRADP improves on
the theoretical guarantees of existing ADP methods: it guarantees convergence
and L1-norm-based error bounds. The empirical evaluation of DRADP shows that
the theoretical guarantees translate well into good performance on benchmark
problems.
| Marek Petrik | null | 1205.1782 | null | null |
The Natural Gradient by Analogy to Signal Whitening, and Recipes and
Tricks for its Use | cs.LG stat.ML | The natural gradient allows for more efficient gradient descent by removing
dependencies and biases inherent in a function's parameterization. Several
papers present the topic thoroughly and precisely. It remains, however, a very
difficult idea to get your head around. The intent of this note is to provide
simple intuition for the natural gradient and its use. We review how an
ill-conditioned parameter space can undermine learning, introduce the natural
gradient by analogy to the more widely understood concept of signal whitening,
and present tricks and specific prescriptions for applying the natural gradient
to learning problems.
| Jascha Sohl-Dickstein | null | 1205.1828 | null | null |
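A toy numpy illustration of the whitening analogy: on an ill-conditioned quadratic, preconditioning the gradient by the inverse of the metric, which here plays the role of the Fisher matrix, lets much larger steps be taken. This is a sketch of the general idea, not a recipe taken from the note itself.

```python
import numpy as np

# Toy objective: f(theta) = 0.5 * theta^T A theta with a badly conditioned A.
A = np.diag([100.0, 1.0])              # plays the role of the Fisher / metric
theta_plain = np.array([1.0, 1.0])
theta_nat = np.array([1.0, 1.0])

lr_plain = 0.019                       # plain GD step is capped near 2/100
for _ in range(100):
    theta_plain = theta_plain - lr_plain * (A @ theta_plain)

lr_nat = 1.0                           # whitened (natural) steps tolerate a unit step
for _ in range(100):
    grad = A @ theta_nat
    theta_nat = theta_nat - lr_nat * np.linalg.solve(A, grad)   # F^{-1} g

print("plain gradient descent:", theta_plain)   # still far off in the flat direction
print("natural gradient:      ", theta_nat)     # converges immediately here
```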
Hamiltonian Annealed Importance Sampling for partition function
estimation | cs.LG physics.data-an | We introduce an extension to annealed importance sampling that uses
Hamiltonian dynamics to rapidly estimate normalization constants. We
demonstrate this method by computing log likelihoods in directed and undirected
probabilistic image models. We compare the performance of linear generative
models with both Gaussian and Laplace priors, product of experts models with
Laplace and Student's t experts, the mc-RBM, and a bilinear generative model.
We provide code to compare additional models.
| Jascha Sohl-Dickstein and Benjamin J. Culpepper | null | 1205.1925 | null | null |
The representer theorem for Hilbert spaces: a necessary and sufficient
condition | math.FA cs.LG | A family of regularization functionals is said to admit a linear representer
theorem if every member of the family admits minimizers that lie in a fixed
finite dimensional subspace. A recent characterization states that a general
class of regularization functionals with differentiable regularizer admits a
linear representer theorem if and only if the regularization term is a
non-decreasing function of the norm. In this report, we improve on this
result by replacing the differentiability assumption with lower semi-continuity
and deriving a proof that is independent of the dimensionality of the space.
| Francesco Dinuzzo, Bernhard Sch\"olkopf | null | 1205.1928 | null | null |
Hamiltonian Monte Carlo with Reduced Momentum Flips | physics.data-an cs.LG | Hamiltonian Monte Carlo (or hybrid Monte Carlo) with partial momentum
refreshment explores the state space more slowly than it otherwise would due to
the momentum reversals which occur on proposal rejection. These cause
trajectories to double back on themselves, leading to random walk behavior on
timescales longer than the typical rejection time, and leading to slower
mixing. I present a technique by which the number of momentum reversals can be
reduced. This is accomplished by maintaining the net exchange of probability
between states with opposite momenta, but reducing the rate of exchange in both
directions such that it is 0 in one direction. An experiment illustrates these
reduced momentum flips accelerating mixing for a particular distribution.
| Jascha Sohl-Dickstein | null | 1205.1939 | null | null |
Dynamic Behavioral Mixed-Membership Model for Large Evolving Networks | cs.SI cs.LG physics.soc-ph stat.ML | The majority of real-world networks are dynamic and extremely large (e.g.,
Internet Traffic, Twitter, Facebook, ...). To understand the structural
behavior of nodes in these large dynamic networks, it may be necessary to model
the dynamics of behavioral roles representing the main connectivity patterns
over time. In this paper, we propose a dynamic behavioral mixed-membership
model (DBMM) that captures the roles of nodes in the graph and how they evolve
over time. Unlike other node-centric models, our model is scalable for
analyzing large dynamic networks. In addition, DBMM is flexible,
parameter-free, has no functional form or parameterization, and is
interpretable (identifies explainable patterns). The performance results
indicate our approach can be applied to very large networks while the
experimental results show that our model uncovers interesting patterns
underlying the dynamics of these networks.
| Ryan Rossi, Brian Gallagher, Jennifer Neville, Keith Henderson | null | 1205.2056 | null | null |
A Converged Algorithm for Tikhonov Regularized Nonnegative Matrix
Factorization with Automatic Regularization Parameters Determination | cs.LG | We present a converged algorithm for Tikhonov regularized nonnegative matrix
factorization (NMF). We choose this regularization specifically because
Tikhonov regularized least squares (LS) is known to be preferable to
conventional LS for solving linear inverse problems. Because an NMF
problem can be decomposed into LS subproblems, Tikhonov regularized NMF can be
expected to be a more appropriate approach for solving NMF problems.
The algorithm is derived using additive update rules which have been shown to
have convergence guarantee. We equip the algorithm with a mechanism to
automatically determine the regularization parameters based on the L-curve, a
well-known concept in the inverse problems community but rather unknown in NMF
research. The introduction of this algorithm thus solves two inherent
problems in Tikhonov regularized NMF algorithm research, i.e., convergence
guarantee and regularization parameters determination.
| Andri Mirzal | null | 1205.2151 | null | null |
A Generalized Kernel Approach to Structured Output Learning | stat.ML cs.LG | We study the problem of structured output learning from a regression
perspective. We first provide a general formulation of the kernel dependency
estimation (KDE) problem using operator-valued kernels. We show that some of
the existing formulations of this problem are special cases of our framework.
We then propose a covariance-based operator-valued kernel that allows us to
take into account the structure of the kernel feature space. This kernel
operates on the output space and encodes the interactions between the outputs
without any reference to the input space. To address this issue, we introduce a
variant of our KDE method based on the conditional covariance operator that in
addition to the correlation between the outputs takes into account the effects
of the input variables. Finally, we evaluate the performance of our KDE
approach using both covariance and conditional covariance kernels on two
structured output problems, and compare it to the state-of-the-art kernel-based
structured output regression methods.
| Hachem Kadri (INRIA Lille - Nord Europe), Mohammad Ghavamzadeh (INRIA
Lille - Nord Europe), Philippe Preux (INRIA Lille - Nord Europe) | null | 1205.2171 | null | null |
Modularity-Based Clustering for Network-Constrained Trajectories | stat.ML cs.LG physics.data-an | We present a novel clustering approach for moving object trajectories that
are constrained by an underlying road network. The approach builds a similarity
graph based on these trajectories and then uses modularity-optimization hierarchical
graph clustering to regroup trajectories with similar profiles. Our
experimental study shows the superiority of the proposed approach over classic
hierarchical clustering and gives a brief insight into the visualization of the
clustering results.
| Mohamed Khalil El Mahrsi (LTCI), Fabrice Rossi (SAMM) | null | 1205.2172 | null | null |
Efficient Constrained Regret Minimization | cs.LG | Online learning constitutes a compelling mathematical framework to
analyze sequential decision making problems in adversarial environments. The
learner repeatedly chooses an action, the environment responds with an outcome,
and then the learner receives a reward for the played action. The goal of the
learner is to maximize his total reward. However, there are situations in
which, in addition to maximizing the cumulative reward, there are some
additional constraints on the sequence of decisions that must be satisfied on
average by the learner. In this paper we study an extension of online
learning in which the learner aims to maximize the total reward given that some
additional constraints need to be satisfied. By leveraging the theory of the
Lagrangian method in constrained optimization, we propose the Lagrangian
exponentially weighted average (LEWA) algorithm, which is a primal-dual variant
of the well-known exponentially weighted average algorithm, to efficiently
solve constrained online decision making problems. Using novel theoretical
analysis, we establish the regret and the violation of the constraint bounds in
full information and bandit feedback models.
| Mehrdad Mahdavi, Tianbao Yang, Rong Jin | null | 1205.2265 | null | null |
A Discussion on Parallelization Schemes for Stochastic Vector
Quantization Algorithms | stat.ML cs.DC cs.LG | This paper studies parallelization schemes for stochastic Vector Quantization
algorithms in order to obtain time speed-ups using distributed resources. We
show that the most intuitive parallelization scheme does not lead to better
performances than the sequential algorithm. Another distributed scheme is
therefore introduced which obtains the expected speed-ups. Then, it is improved
to fit implementation on distributed architectures where communications are
slow and inter-machines synchronization too costly. The schemes are tested with
simulated distributed architectures and, for the last one, with the Microsoft
Windows Azure platform, obtaining speed-ups with up to 32 Virtual Machines.
| Matthieu Durut (LTCI), Beno\^it Patra (LSTA), Fabrice Rossi (SAMM) | null | 1205.2282 | null | null |
Sparse Approximation via Penalty Decomposition Methods | cs.LG math.OC stat.CO stat.ML | In this paper we consider sparse approximation problems, that is, general
$l_0$ minimization problems with the $l_0$-"norm" of a vector being a part of
constraints or objective function. In particular, we first study the
first-order optimality conditions for these problems. We then propose penalty
decomposition (PD) methods for solving them in which a sequence of penalty
subproblems are solved by a block coordinate descent (BCD) method. Under some
suitable assumptions, we establish that any accumulation point of the sequence
generated by the PD methods satisfies the first-order optimality conditions of
the problems. Furthermore, for the problems in which the $l_0$ part is the only
nonconvex part, we show that such an accumulation point is a local minimizer of
the problems. In addition, we show that any accumulation point of the sequence
generated by the BCD method is a saddle point of the penalty subproblem.
Moreover, for the problems in which the $l_0$ part is the only nonconvex part,
we establish that such an accumulation point is a local minimizer of the
penalty subproblem. Finally, we test the performance of our PD methods by
applying them to sparse logistic regression, sparse inverse covariance
selection, and compressed sensing problems. The computational results
demonstrate that our methods generally outperform the existing methods in terms
of solution quality and/or speed.
| Zhaosong Lu and Yong Zhang | null | 1205.2334 | null | null |
Low Complexity Damped Gauss-Newton Algorithms for CANDECOMP/PARAFAC | cs.NA cs.LG math.OC | The damped Gauss-Newton (dGN) algorithm for CANDECOMP/PARAFAC (CP)
decomposition can handle the challenges of collinearity of factors and
different magnitudes of factors; nevertheless, for factorization of an $N$-D
tensor of size $I_1 \times I_2 \times \cdots \times I_N$ with rank $R$, the algorithm is computationally
demanding due to construction of large approximate Hessian of size $(RT \times
RT)$ and its inversion where $T = \sum_n I_n$. In this paper, we propose a fast
implementation of the dGN algorithm which is based on novel expressions of the
inverse approximate Hessian in block form. The new implementation has lower
computational complexity, besides computation of the gradient (this part is
common to both methods), requiring the inversion of a matrix of size
$NR^2\times NR^2$, which is much smaller than the whole approximate Hessian, if
$T \gg NR$. In addition, the implementation has lower memory requirements,
because neither the Hessian nor its inverse ever needs to be stored in its
entirety. A variant of the algorithm working with complex-valued data is
proposed as well. The complexity and performance of the proposed algorithm are
compared with those of dGN and ALS with line search on examples of difficult
benchmark tensors.
| Anh Huy Phan and Petr Tichavsk\'y and Andrzej Cichocki | null | 1205.2584 | null | null |
On the Identifiability of the Post-Nonlinear Causal Model | stat.ML cs.LG | By taking into account the nonlinear effect of the cause, the inner noise
effect, and the measurement distortion effect in the observed variables, the
post-nonlinear (PNL) causal model has demonstrated its excellent performance in
distinguishing the cause from effect. However, its identifiability has not been
properly addressed, and how to apply it in the case of more than two variables
is also a problem. In this paper, we conduct a systematic investigation on its
identifiability in the two-variable case. We show that this model is
identifiable in most cases; by enumerating all possible situations in which the
model is not identifiable, we provide sufficient conditions for its
identifiability. Simulations are given to support the theoretical results.
Moreover, in the case of more than two variables, we show that the whole causal
structure can be found by applying the PNL causal model to each structure in
the Markov equivalent class and testing if the disturbance is independent of
the direct causes for each variable. In this way the exhaustive search over all
possible causal structures is avoided.
| Kun Zhang, Aapo Hyvarinen | null | 1205.2599 | null | null |
A Uniqueness Theorem for Clustering | cs.LG | Despite the widespread use of Clustering, there is distressingly little
general theory of clustering available. Questions like "What distinguishes a
clustering of data from other data partitioning?", "Are there any principles
governing all clustering paradigms?", "How should a user choose an appropriate
clustering algorithm for a particular task?", etc. are almost completely
unanswered by the existing body of clustering literature. We consider an
axiomatic approach to the theory of Clustering. We adopt the framework of
Kleinberg, [Kle03]. By relaxing one of Kleinberg's clustering axioms, we
sidestep his impossibility result and arrive at a consistent set of axioms. We
suggest to extend these axioms, aiming to provide an axiomatic taxonomy of
clustering paradigms. Such a taxonomy should provide users some guidance
concerning the choice of the appropriate clustering paradigm for a given task.
The main result of this paper is a set of abstract properties that characterize
the Single-Linkage clustering function. This characterization result provides
new insight into the properties of desired data groupings that make
Single-Linkage the appropriate choice. We conclude by considering a taxonomy of
clustering functions based on abstract properties that each satisfies.
| Reza Bosagh Zadeh, Shai Ben-David | null | 1205.2600 | null | null |
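For concreteness, the clustering function characterized above, single linkage, run with SciPy on toy two-cluster data; the axiomatic analysis itself is not something one runs, so this only illustrates the function being characterized.

```python
# Single-linkage clustering on toy data with SciPy's hierarchical clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
x = np.vstack([rng.normal(0, 0.3, size=(30, 2)),
               rng.normal(3, 0.3, size=(30, 2))])

Z = linkage(x, method="single")             # single-linkage merge tree
labels = fcluster(Z, t=2, criterion="maxclust")
print("cluster sizes:", np.bincount(labels)[1:])
```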
The Entire Quantile Path of a Risk-Agnostic SVM Classifier | cs.LG | A quantile binary classifier uses the rule: Classify x as +1 if P(Y = 1|X =
x) >= t, and as -1 otherwise, for a fixed quantile parameter t ∈ [0, 1]. It has
been shown that Support Vector Machines (SVMs) in the limit are quantile
classifiers with t = 1/2 . In this paper, we show that by using asymmetric cost
of misclassification SVMs can be appropriately extended to recover, in the
limit, the quantile binary classifier for any t. We then present a principled
algorithm to solve the extended SVM classifier for all values of t
simultaneously. This has two implications: First, one can recover the entire
conditional distribution P(Y = 1|X = x) = t for t ∈ [0, 1]. Second, we can build
a risk-agnostic SVM classifier where the cost of misclassification need not be
known a priori. Preliminary numerical experiments show the effectiveness of the
proposed algorithm.
| Jin Yu, S. V.N. Vishwanatan, Jian Zhang | null | 1205.2602 | null | null |
The Infinite Latent Events Model | stat.ML cs.LG | We present the Infinite Latent Events Model, a nonparametric hierarchical
Bayesian distribution over infinite dimensional Dynamic Bayesian Networks with
binary state representations and noisy-OR-like transitions. The distribution
can be used to learn structure in discrete timeseries data by simultaneously
inferring a set of latent events, which events fired at each timestep, and how
those events are causally linked. We illustrate the model on a sound
factorization task, a network topology identification task, and a video game
task.
| David Wingate, Noah Goodman, Daniel Roy, Joshua Tenenbaum | null | 1205.2604 | null | null |
Herding Dynamic Weights for Partially Observed Random Field Models | cs.LG stat.ML | Learning the parameters of a (potentially partially observable) random field
model is intractable in general. Instead of focussing on a single optimal
parameter value we propose to treat parameters as dynamical quantities. We
introduce an algorithm to generate complex dynamics for parameters and (both
visible and hidden) state vectors. We show that under certain conditions
averages computed over trajectories of the proposed dynamical system converge
to averages computed over the data. Our "herding dynamics" does not require
expensive operations such as exponentiation and is fully deterministic.
| Max Welling | null | 1205.2605 | null | null |
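A sketch of the basic, fully visible herding update: deterministic weight dynamics whose generated states have trajectory averages matching the data moments. The paper's extension to partially observed random fields is not reproduced; the features here are single-site means only.

```python
# Basic herding sketch: s_t = argmax_s <w, phi(s)>, then w += mu - phi(s_t).
# Trajectory averages of the generated states track the data moments.
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
data = rng.choice([-1, 1], size=(500, 4))
mu = data.mean(axis=0)                   # target moments (single-site means)

states = np.array(list(product([-1, 1], repeat=4)))
w = np.zeros(4)
samples = []
for _ in range(2000):
    s = states[np.argmax(states @ w)]    # state maximizing w . phi(s)
    samples.append(s)
    w = w + mu - s                       # herding weight update (no step size)

print("data moments:   ", np.round(mu, 3))
print("herding moments:", np.round(np.mean(samples, axis=0), 3))
```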
Exploring compact reinforcement-learning representations with linear
regression | cs.LG cs.AI | This paper presents a new algorithm for online linear regression whose
efficiency guarantees satisfy the requirements of the KWIK (Knows What It
Knows) framework. The algorithm improves on the complexity bounds of the
current state-of-the-art procedure in this setting. We explore several
applications of this algorithm for learning compact reinforcement-learning
representations. We show that KWIK linear regression can be used to learn the
reward function of a factored MDP and the probabilities of action outcomes in
Stochastic STRIPS and Object Oriented MDPs, none of which have been proven to
be efficiently learnable in the RL setting before. We also combine KWIK linear
regression with other KWIK learners to learn larger portions of these models,
including experiments on learning factored MDP transition and reward functions
together.
| Thomas J. Walsh, Istvan Szita, Carlos Diuk, Michael L. Littman | null | 1205.2606 | null | null |
Temporal-Difference Networks for Dynamical Systems with Continuous
Observations and Actions | cs.LG stat.ML | Temporal-difference (TD) networks are a class of predictive state
representations that use well-established TD methods to learn models of
partially observable dynamical systems. Previous research with TD networks has
dealt only with dynamical systems with finite sets of observations and actions.
We present an algorithm for learning TD network representations of dynamical
systems with continuous observations and actions. Our results show that the
algorithm is capable of learning accurate and robust models of several noisy
continuous dynamical systems. The algorithm presented here is the first fully
incremental method for learning a predictive representation of a continuous
dynamical system.
| Christopher M. Vigorito | null | 1205.2608 | null | null |
Which Spatial Partition Trees are Adaptive to Intrinsic Dimension? | stat.ML cs.LG | Recent theory work has found that a special type of spatial partition tree -
called a random projection tree - is adaptive to the intrinsic dimension of the
data from which it is built. Here we examine this same question, with a
combination of theory and experiments, for a broader class of trees that
includes k-d trees, dyadic trees, and PCA trees. Our motivation is to get a
feel for (i) the kind of intrinsic low dimensional structure that can be
empirically verified, (ii) the extent to which a spatial partition can exploit
such structure, and (iii) the implications for standard statistical tasks such
as regression, vector quantization, and nearest neighbor search.
| Nakul Verma, Samory Kpotufe, Sanjoy Dasgupta | null | 1205.2609 | null | null |
Probabilistic Structured Predictors | cs.LG | We consider MAP estimators for structured prediction with exponential family
models. In particular, we concentrate on the case that efficient algorithms for
uniform sampling from the output space exist. We show that under this
assumption (i) exact computation of the partition function remains a hard
problem, and (ii) the partition function and the gradient of the log partition
function can be approximated efficiently. Our main result is an approximation
scheme for the partition function based on Markov Chain Monte Carlo theory. We
also show that the efficient uniform sampling assumption holds in several
application settings that are of importance in machine learning.
| Shankar Vembu, Thomas Gartner, Mario Boley | null | 1205.2610 | null | null |
Ordinal Boltzmann Machines for Collaborative Filtering | cs.IR cs.LG | Collaborative filtering is an effective recommendation technique wherein the
preference of an individual can potentially be predicted based on preferences
of other members. Early algorithms often relied on the strong locality in the
preference data, that is, it is enough to predict preference of a user on a
particular item based on a small subset of other users with similar tastes or
of other items with similar properties. More recently, dimensionality reduction
techniques have proved to be equally competitive, and these are based on the
co-occurrence patterns rather than locality. This paper explores and extends a
probabilistic model known as Boltzmann Machine for collaborative filtering
tasks. It seamlessly integrates both the similarity and co-occurrence in a
principled manner. In particular, we study parameterisation options to deal
with the ordinal nature of the preferences, and propose a joint modelling of
both the user-based and item-based processes. Experiments on moderate and
large-scale movie recommendation show that our framework rivals existing
well-known methods.
| Tran The Truyen, Dinh Q. Phung, Svetha Venkatesh | null | 1205.2611 | null | null |
Computing Posterior Probabilities of Structural Features in Bayesian
Networks | cs.LG stat.ML | We study the problem of learning Bayesian network structures from data.
Koivisto and Sood (2004) and Koivisto (2006) presented algorithms that can
compute the exact marginal posterior probability of a subnetwork, e.g., a
single edge, in O(n 2^n) time and the posterior probabilities for all n(n-1)
potential edges in O(n 2^n) total time, assuming that the number of parents per
node or the indegree is bounded by a constant. One main drawback of their
algorithms is the requirement of a special structure prior that is non-uniform
and does not respect Markov equivalence. In this paper, we develop an algorithm
that can compute the exact posterior probability of a subnetwork in O(3^n) time
and the posterior probabilities for all n(n-1) potential edges in O(n 3^n) total
time. Our algorithm also assumes a bounded indegree but allows general
structure priors. We demonstrate the applicability of the algorithm on several
data sets with up to 20 variables.
| Jin Tian, Ru He | null | 1205.2612 | null | null |
Products of Hidden Markov Models: It Takes N>1 to Tango | cs.LG stat.ML | Products of Hidden Markov Models (PoHMMs) are an interesting class of
generative models which have received little attention since their
introduction. This may be due in part to their more computationally expensive
gradient-based learning algorithm, and the intractability of computing the log
likelihood of sequences under the model. In this paper, we demonstrate how the
partition function can be estimated reliably via Annealed Importance Sampling.
We perform experiments using contrastive divergence learning on rainfall data
and data captured from pairs of people dancing. Our results suggest that
advances in learning and evaluation for undirected graphical models and recent
increases in available computing power make PoHMMs worth considering for
complex time-series modeling tasks.
| Graham W Taylor, Geoffrey E. Hinton | null | 1205.2614 | null | null |
Modeling Discrete Interventional Data using Directed Cyclic Graphical
Models | stat.ML cs.LG stat.ME | We outline a representation for discrete multivariate distributions in terms
of interventional potential functions that are globally normalized. This
representation can be used to model the effects of interventions, and the
independence properties encoded in this model can be represented as a directed
graph that allows cycles. In addition to discussing inference and sampling with
this representation, we give an exponential family parametrization that allows
parameter estimation to be stated as a convex optimization problem; we also
give a convex relaxation of the task of simultaneous parameter and structure
learning using group l1-regularization. The model is evaluated on simulated
data and intracellular flow cytometry data.
| Mark Schmidt, Kevin Murphy | null | 1205.2617 | null | null |
BPR: Bayesian Personalized Ranking from Implicit Feedback | cs.IR cs.LG stat.ML | Item recommendation is the task of predicting a personalized ranking on a set
of items (e.g. websites, movies, products). In this paper, we investigate the
most common scenario with implicit feedback (e.g. clicks, purchases). There are
many methods for item recommendation from implicit feedback like matrix
factorization (MF) or adaptive k-nearest-neighbor (kNN). Even though these
methods are designed for the item prediction task of personalized ranking, none
of them is directly optimized for ranking. In this paper we present a generic
optimization criterion BPR-Opt for personalized ranking that is the maximum
posterior estimator derived from a Bayesian analysis of the problem. We also
provide a generic learning algorithm for optimizing models with respect to
BPR-Opt. The learning method is based on stochastic gradient descent with
bootstrap sampling. We show how to apply our method to two state-of-the-art
recommender models: matrix factorization and adaptive kNN. Our experiments
indicate that for the task of personalized ranking our optimization method
outperforms the standard learning techniques for MF and kNN. The results show
the importance of optimizing models for the right criterion.
| Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, Lars
Schmidt-Thieme | null | 1205.2618 | null | null |
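A minimal numpy sketch of BPR-Opt-style stochastic gradient steps for a matrix factorization model, following the abstract's description of sampling (user, positive item, negative item) triples; the hyperparameters and the uniform negative sampling are simplifications for illustration, not the authors' implementation.

```python
# Sketch: BPR-style SGD for matrix factorization. For each sampled triple
# (u, i, j) with i seen and j unseen, ascend log sigmoid(p_u.(q_i - q_j))
# with L2 regularization.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 50, 100, 8
seen = {u: set(rng.choice(n_items, size=10, replace=False)) for u in range(n_users)}
P = 0.1 * rng.standard_normal((n_users, k))     # user factors
Q = 0.1 * rng.standard_normal((n_items, k))     # item factors
lr, reg = 0.05, 0.01

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    u = rng.integers(n_users)
    i = rng.choice(list(seen[u]))                # positive (seen) item
    j = rng.integers(n_items)
    while j in seen[u]:                          # negative (unseen) item
        j = rng.integers(n_items)
    x_uij = P[u] @ (Q[i] - Q[j])
    g = sigmoid(-x_uij)                          # derivative of log sigmoid(x_uij)
    P[u] += lr * (g * (Q[i] - Q[j]) - reg * P[u])
    Q[i] += lr * (g * P[u] - reg * Q[i])
    Q[j] += lr * (-g * P[u] - reg * Q[j])

scores = Q @ P[0]
print("top items for user 0:", np.argsort(-scores)[:5],
      " seen items:", sorted(seen[0])[:5])
```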
Using the Gene Ontology Hierarchy when Predicting Gene Function | cs.LG cs.CE stat.ML | The problem of multilabel classification when the labels are related through
a hierarchical categorization scheme occurs in many application domains such as
computational biology. For example, this problem arises naturally when trying
to automatically assign gene function using controlled vocabularies like the
Gene Ontology. However, most existing approaches for predicting gene functions solve
independent classification problems to predict genes that are involved in a
given function category, independently of the rest. Here, we propose two simple
methods for incorporating information about the hierarchical nature of the
categorization scheme. In the first method, we use information about a gene's
previous annotation to set an initial prior on its label. In a second approach,
we extend a graph-based semi-supervised learning algorithm for predicting gene
function in a hierarchy. We show that we can efficiently solve this problem by
solving a linear system of equations. We compare these approaches with a
previous label reconciliation-based approach. Results show that using the
hierarchy information directly, compared to using reconciliation methods,
improves gene function prediction.
| Sara Mostafavi, Quaid Morris | null | 1205.2622 | null | null |
Virtual Vector Machine for Bayesian Online Classification | cs.LG stat.ML | In a typical online learning scenario, a learner is required to process a
large data stream using a small memory buffer. Such a requirement is usually in
conflict with a learner's primary pursuit of prediction accuracy. To address
this dilemma, we introduce a novel Bayesian online classification algorithm,
called the Virtual Vector Machine. The virtual vector machine allows you to
smoothly trade off prediction accuracy with memory size. The virtual vector
machine summarizes the information contained in the preceding data stream by a
Gaussian distribution over the classification weights plus a constant number of
virtual data points. The virtual data points are designed to add extra
non-Gaussian information about the classification weights. To maintain the
constant number of virtual points, the virtual vector machine adds the current
real data point into the virtual point set, merges two most similar virtual
points into a new virtual point or deletes a virtual point that is far from the
decision boundary. The information lost in this process is absorbed into the
Gaussian distribution. The extra information provided by the virtual points
leads to improved predictive accuracy over previous online classification
algorithms.
| Thomas P. Minka, Rongjing Xiang, Yuan (Alan) Qi | null | 1205.2623 | null | null |
Convexifying the Bethe Free Energy | cs.AI cs.LG | The introduction of loopy belief propagation (LBP) revitalized the
application of graphical models in many domains. Many recent works present
improvements on the basic LBP algorithm in an attempt to overcome convergence
and local optima problems. Notable among these are convexified free energy
approximations that lead to inference procedures with provable convergence and
quality properties. However, empirically LBP still outperforms most of its
convex variants in a variety of settings, as we also demonstrate here.
Motivated by this fact we seek convexified free energies that directly
approximate the Bethe free energy. We show that the proposed approximations
compare favorably with state-of-the-art convex free energy approximations.
| Ofer Meshi, Ariel Jaimovich, Amir Globerson, Nir Friedman | null | 1205.2624 | null | null |
Convergent message passing algorithms - a unifying view | cs.AI cs.LG | Message-passing algorithms have emerged as powerful techniques for
approximate inference in graphical models. When these algorithms converge, they
can be shown to find local (or sometimes even global) optima of variational
formulations to the inference problem. But many of the most popular algorithms
are not guaranteed to converge. This has led to recent interest in convergent
message-passing algorithms. In this paper, we present a unified view of
convergent message-passing algorithms. We present a simple derivation of an
abstract algorithm, tree-consistency bound optimization (TCBO) that is provably
convergent in both its sum and max product forms. We then show that many of the
existing convergent algorithms are instances of our TCBO algorithm, and obtain
novel convergent algorithms "for free" by exchanging maximizations and
summations in existing algorithms. In particular, we show that Wainwright's
non-convergent sum-product algorithm for tree-based variational bounds is
actually convergent with the right update order for the case where trees are
monotonic chains.
| Talya Meltzer, Amir Globerson, Yair Weiss | null | 1205.2625 | null | null |
Group Sparse Priors for Covariance Estimation | stat.ML cs.LG | Recently it has become popular to learn sparse Gaussian graphical models
(GGMs) by imposing l1 or group l1,2 penalties on the elements of the precision
matrix. This penalized likelihood approach results in a tractable convex
optimization problem. In this paper, we reinterpret these results as performing
MAP estimation under a novel prior which we call the group l1 and l1,2
positive-definite matrix distributions. This enables us to build a hierarchical
model in which the l1 regularization terms vary depending on which group the
entries are assigned to, which in turn allows us to learn block structured
sparse GGMs with unknown group assignments. Exact inference in this
hierarchical model is intractable, due to the need to compute the normalization
constant of these matrix distributions. However, we derive upper bounds on the
partition functions, which lets us use fast variational inference (optimizing a
lower bound on the joint posterior). We show that on two real world data sets
(motion capture and financial data), our method which infers the block
structure outperforms a method that uses a fixed block structure, which in turn
outperforms baseline methods that ignore block structure.
| Benjamin Marlin, Mark Schmidt, Kevin Murphy | null | 1205.2626 | null | null |
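A sketch of the building block the abstract starts from: l1-penalized sparse precision estimation, here via scikit-learn's GraphicalLasso on data with a block-diagonal ground-truth precision; the paper's hierarchical prior with unknown block assignments is not reproduced.

```python
# Sparse (l1-penalized) precision estimation with scikit-learn's GraphicalLasso,
# on data whose true precision matrix is block diagonal.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
prec = np.eye(6)                         # two independent groups of variables
prec[0, 1] = prec[1, 0] = 0.4
prec[3, 4] = prec[4, 3] = 0.4
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(6), cov, size=2000)

model = GraphicalLasso(alpha=0.05).fit(X)
print(np.round(model.precision_, 2))     # off-block entries shrink toward zero
```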
Domain Knowledge Uncertainty and Probabilistic Parameter Constraints | cs.LG stat.ML | Incorporating domain knowledge into the modeling process is an effective way
to improve learning accuracy. However, as it is provided by humans, domain
knowledge can only be specified with some degree of uncertainty. We propose to
explicitly model such uncertainty through probabilistic constraints over the
parameter space. In contrast to hard parameter constraints, our approach is
effective even when the domain knowledge is inaccurate and generally results in
superior modeling accuracy. We focus on generative and conditional modeling
where the parameters are assigned a Dirichlet or Gaussian prior and demonstrate
the framework with experiments on both synthetic and real-world data.
| Yi Mao, Guy Lebanon | null | 1205.2627 | null | null |
Multiple Source Adaptation and the Renyi Divergence | cs.LG stat.ML | This paper presents a novel theoretical study of the general problem of
multiple source adaptation using the notion of Renyi divergence. Our results
build on our previous work [12], but significantly broaden the scope of that
work in several directions. We extend previous multiple source loss guarantees
based on distribution weighted combinations to arbitrary target distributions
P, not necessarily mixtures of the source distributions, analyze both known and
unknown target distribution cases, and prove a lower bound. We further extend
our bounds to deal with the case where the learner receives an approximate
distribution for each source instead of the exact one, and show that similar
loss guarantees can be achieved depending on the divergence between the
approximate and true distributions. We also analyze the case where the labeling
functions of the source domains are somewhat different. Finally, we report the
results of experiments with both an artificial data set and a sentiment
analysis task, showing the performance benefits of the distribution weighted
combinations and the quality of our bounds based on the Renyi divergence.
| Yishay Mansour, Mehryar Mohri, Afshin Rostamizadeh | null | 1205.2628 | null | null |
Interpretation and Generalization of Score Matching | cs.LG stat.ML | Score matching is a recently developed parameter learning method that is
particularly effective for complicated high-dimensional density models with
intractable partition functions. In this paper, we study two issues that have
not been completely resolved for score matching. First, we provide a formal
link between maximum likelihood and score matching. Our analysis shows that
score matching finds model parameters that are more robust with noisy training
data. Second, we develop a generalization of score matching. Based on this
generalization, we further demonstrate an extension of score matching to models
of discrete data.
| Siwei Lyu | null | 1205.2629 | null | null |
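As an illustration of the score matching objective discussed in the abstract above, the following sketch fits a one-dimensional Gaussian by minimizing the empirical objective J(theta) = E[0.5 * psi(x; theta)^2 + d psi / dx], where psi is the derivative of the log-density with respect to x. The data, parameter grid, and Gaussian model are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=5000)

def score_matching_objective(mu, var, x):
    psi = -(x - mu) / var          # d/dx log N(x; mu, var)
    dpsi_dx = -1.0 / var           # second derivative of the log-density
    return np.mean(0.5 * psi**2 + dpsi_dx)

# Grid search over parameters; the minimizer should match the sample mean/variance.
mus = np.linspace(0.0, 4.0, 81)
vars_ = np.linspace(0.5, 4.0, 71)
vals = [[score_matching_objective(m, v, x) for v in vars_] for m in mus]
i, j = np.unravel_index(np.argmin(vals), (len(mus), len(vars_)))
print("score matching estimate:", mus[i], vars_[j])
print("sample mean/variance   :", x.mean(), x.var())
```

For this simple model the score matching minimizer coincides with the maximum likelihood estimate, which is consistent with the formal link the abstract refers to.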
Multi-Task Feature Learning Via Efficient l2,1-Norm Minimization | cs.LG cs.CV stat.ML | The problem of joint feature selection across a group of related tasks has
applications in many areas including biomedical informatics and computer
vision. We consider the l2,1-norm regularized regression model for joint
feature selection from multiple tasks, which can be derived in the
probabilistic framework by assuming a suitable prior from the exponential
family. One appealing feature of the l2,1-norm regularization is that it
encourages multiple predictors to share similar sparsity patterns. However, the
resulting optimization problem is challenging to solve due to the
non-smoothness of the l2,1-norm regularization. In this paper, we propose to
accelerate the computation by reformulating it as two equivalent smooth convex
optimization problems which are then solved via Nesterov's method, an
optimal first-order black-box method for smooth convex optimization. A key
building block in solving the reformulations is the Euclidean projection. We
show that the Euclidean projection for the first reformulation can be
analytically computed, while the Euclidean projection for the second one can be
computed in linear time. Empirical evaluations on several data sets verify the
efficiency of the proposed algorithms.
| Jun Liu, Shuiwang Ji, Jieping Ye | null | 1205.2631 | null | null |
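To illustrate why the l2,1-norm penalty discussed above encourages multiple predictors to share sparsity patterns, the following sketch implements its proximal operator (row-wise soft-thresholding). This is a standard textbook operator, not the specific Euclidean projections or reformulations developed in the paper; the toy matrix and regularization strength are made up.

```python
import numpy as np

def prox_l21(W, lam):
    """Proximal operator of lam * ||W||_{2,1}: row-wise soft-thresholding.
    Rows whose l2 norm falls below lam are zeroed jointly across all tasks,
    which is why the penalty encourages shared sparsity patterns."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return scale * W

# Toy example: 5 features x 3 tasks; some rows are set to zero for every task.
rng = np.random.default_rng(1)
W = rng.normal(size=(5, 3))
print(prox_l21(W, lam=1.0))
```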
Improving Compressed Counting | cs.DS cs.LG stat.ML | Compressed Counting (CC) [22] was recently proposed for estimating the a-th
frequency moments of data streams, where 0 < a <= 2. CC can be used for
estimating Shannon entropy, which can be approximated by certain functions of
the a-th frequency moments as a -> 1. Monitoring Shannon entropy for anomaly
detection (e.g., DDoS attacks) in large networks is an important task. This
paper presents a new algorithm for improving CC. The improvement is most
substantial when a -> 1-. For example, when a = 0.99, the new algorithm
reduces the estimation variance roughly by 100-fold. This new algorithm would
make CC considerably more practical for estimating Shannon entropy.
Furthermore, the new algorithm is statistically optimal when a = 0.5.
| Ping Li | null | 1205.2632 | null | null |
Identifying confounders using additive noise models | stat.ML cs.LG | We propose a method for inferring the existence of a latent common cause
('confounder') of two observed random variables. The method assumes that the
two effects of the confounder are (possibly nonlinear) functions of the
confounder plus independent, additive noise. We discuss under which conditions
the model is identifiable (up to an arbitrary reparameterization of the
confounder) from the joint distribution of the effects. We state and prove a
theoretical result that provides evidence for the conjecture that the model is
generically identifiable under suitable technical conditions. In addition, we
propose a practical method to estimate the confounder from a finite i.i.d.
sample of the effects and illustrate that the method works well on both
simulated and real-world data.
| Dominik Janzing, Jonas Peters, Joris Mooij, Bernhard Schoelkopf | null | 1205.2640 | null | null |
Bayesian Discovery of Linear Acyclic Causal Models | stat.ML cs.LG stat.ME | Methods for automated discovery of causal relationships from
non-interventional data have received much attention recently. A widely used
and well understood model family is given by linear acyclic causal models
(recursive structural equation models). For Gaussian data both constraint-based
methods (Spirtes et al., 1993; Pearl, 2000) (which output a single equivalence
class) and Bayesian score-based methods (Geiger and Heckerman, 1994) (which
assign relative scores to the equivalence classes) are available. On the
contrary, all current methods able to utilize non-Gaussianity in the data
(Shimizu et al., 2006; Hoyer et al., 2008) always return only a single graph or
a single equivalence class, and so are fundamentally unable to express the
degree of certainty attached to that output. In this paper we develop a
Bayesian score-based approach able to take advantage of non-Gaussianity when
estimating linear acyclic causal models, and we empirically demonstrate that,
at least on very modest size networks, its accuracy is as good as or better
than existing methods. We provide a complete code package (in R) which
implements all algorithms and performs all of the analysis provided in the
paper, and hope that this will further the application of these methods to
solving causal inference problems.
| Patrik O. Hoyer, Antti Hyttinen | null | 1205.2641 | null | null |
New inference strategies for solving Markov Decision Processes using
reversible jump MCMC | cs.LG cs.SY math.OC stat.CO stat.ML | In this paper we build on previous work which uses inference techniques, in
particular Markov Chain Monte Carlo (MCMC) methods, to solve parameterized
control problems. We propose a number of modifications in order to make this
approach more practical in general, higher-dimensional spaces. We first
introduce a new target distribution which is able to incorporate more reward
information from sampled trajectories. We also show how to break strong
correlations between the policy parameters and sampled trajectories in order to
sample more freely. Finally, we show how to incorporate these techniques in a
principled manner to obtain estimates of the optimal policy.
| Matthias Hoffman, Hendrik Kueck, Nando de Freitas, Arnaud Doucet | null | 1205.2643 | null | null |
Censored Exploration and the Dark Pool Problem | cs.LG cs.GT | We introduce and analyze a natural algorithm for multi-venue exploration from
censored data, which is motivated by the Dark Pool Problem of modern
quantitative finance. We prove that our algorithm converges in polynomial time
to a near-optimal allocation policy; prior results for similar problems in
stochastic inventory control guaranteed only asymptotic convergence and
examined variants in which each venue could be treated independently. Our
analysis bears a strong resemblance to that of efficient exploration/
exploitation schemes in the reinforcement learning literature. We describe an
extensive experimental evaluation of our algorithm on the Dark Pool Problem
using real trading data.
| Kuzman Ganchev, Michael Kearns, Yuriy Nevmyvaka, Jennifer Wortman
Vaughan | null | 1205.2646 | null | null |
Learning Continuous-Time Social Network Dynamics | cs.SI cs.LG physics.soc-ph stat.ML | We demonstrate that a number of sociology models for social network dynamics
can be viewed as continuous time Bayesian networks (CTBNs). A sampling-based
approximate inference method for CTBNs can be used as the basis of an
expectation-maximization procedure that achieves better accuracy in estimating
the parameters of the model than the standard method of moments
algorithm from the sociology literature. We extend the existing social network
models to allow for indirect and asynchronous observations of the links. A
Markov chain Monte Carlo sampling algorithm for this new model permits
estimation and inference. We provide results on both a synthetic network (for
verification) and real social network data.
| Yu Fan, Christian R. Shelton | null | 1205.2648 | null | null |
Correlated Non-Parametric Latent Feature Models | cs.LG stat.ML | We are often interested in explaining data through a set of hidden factors or
features. When the number of hidden features is unknown, the Indian Buffet
Process (IBP) is a nonparametric latent feature model that does not bound the
number of active features in dataset. However, the IBP assumes that all latent
features are uncorrelated, making it inadequate for many realworld problems. We
introduce a framework for correlated nonparametric feature models, generalising
the IBP. We use this framework to generate several specific models and
demonstrate applications on real-world datasets.
| Finale Doshi-Velez, Zoubin Ghahramani | null | 1205.2650 | null | null |
L2 Regularization for Learning Kernels | cs.LG stat.ML | The choice of the kernel is critical to the success of many learning
algorithms but it is typically left to the user. Instead, the training data can
be used to learn the kernel by selecting it out of a given family, such as that
of non-negative linear combinations of p base kernels, constrained by a trace
or L1 regularization. This paper studies the problem of learning kernels with
the same family of kernels but with an L2 regularization instead, and for
regression problems. We analyze the problem of learning kernels with ridge
regression. We derive the form of the solution of the optimization problem and
give an efficient iterative algorithm for computing that solution. We present a
novel theoretical analysis of the problem based on stability and give learning
bounds for orthogonal kernels that contain only an additive term O(sqrt(p)/m) when
compared to the standard kernel ridge regression stability bound. We also
report the results of experiments indicating that L1 regularization can lead to
modest improvements for a small number of kernels, but to performance
degradations in larger-scale cases. In contrast, L2 regularization never
degrades performance and in fact achieves significant improvements with a large
number of kernels.
| Corinna Cortes, Mehryar Mohri, Afshin Rostamizadeh | null | 1205.2653 | null | null |
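The following is a minimal alternating sketch of learning a non-negative, L2-constrained combination of base kernels together with a kernel ridge regression predictor, in the spirit of the abstract above. It is not the authors' closed-form solution or iterative algorithm; the data, base kernels, and constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 60
X = rng.normal(size=(n, 4))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

# Base kernels: Gaussian kernels with different bandwidths (illustrative choices).
widths = [0.25, 0.5, 1.0, 2.0, 4.0]
p = len(widths)
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
Ks = np.stack([np.exp(-D2 / (2 * w**2)) for w in widths])

lam, Lambda = 1e-2, 1.0
mu = np.full(p, 1.0 / np.sqrt(p))               # start on the L2 sphere of radius Lambda
for _ in range(20):
    K = np.tensordot(mu, Ks, axes=1)            # combined kernel
    alpha = np.linalg.solve(K + lam * np.eye(n), y)    # kernel ridge regression step
    v = np.array([alpha @ Kk @ alpha for Kk in Ks])    # per-kernel "alignment" terms
    mu = Lambda * v / np.linalg.norm(v)         # non-negative weights with ||mu||_2 = Lambda
print("learned kernel weights:", np.round(mu, 3))
```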
Convex Coding | cs.LG cs.IT math.IT stat.ML | Inspired by recent work on convex formulations of clustering (Lashkari &
Golland, 2008; Nowozin & Bakir, 2008) we investigate a new formulation of the
Sparse Coding Problem (Olshausen & Field, 1997). In sparse coding we attempt to
simultaneously represent a sequence of data-vectors sparsely (i.e. sparse
approximation (Tropp et al., 2006)) in terms of a 'code' defined by a set of
basis elements, while also finding a code that enables such an approximation.
As existing alternating optimization procedures for sparse coding are
theoretically prone to severe local minima problems, we propose a convex
relaxation of the sparse coding problem and derive a boosting-style algorithm
that, as in (Nowozin & Bakir, 2008), serves as a convex 'master problem' which calls a
(potentially non-convex) sub-problem to identify the next code element to add.
Finally, we demonstrate the properties of our boosted coding algorithm on an
image denoising task.
| David M. Bradley, J Andrew Bagnell | null | 1205.2656 | null | null |
Multilingual Topic Models for Unaligned Text | cs.CL cs.IR cs.LG stat.ML | We develop the multilingual topic model for unaligned text (MuTo), a
probabilistic model of text that is designed to analyze corpora composed of
documents in two languages. From these documents, MuTo uses stochastic EM to
simultaneously discover both a matching between the languages and multilingual
latent topics. We demonstrate that MuTo is able to find shared topics on
real-world multilingual corpora, successfully pairing related documents across
languages. MuTo provides a new framework for creating multilingual topic models
without needing carefully curated parallel corpora and allows applications
built using the topic model formalism to be applied to a much wider class of
corpora.
| Jordan Boyd-Graber, David Blei | null | 1205.2657 | null | null |
Optimization of Structured Mean Field Objectives | stat.ML cs.LG | In intractable, undirected graphical models, an intuitive way of creating
structured mean field approximations is to select an acyclic tractable
subgraph. We show that the hardness of computing the objective function and
gradient of the mean field objective qualitatively depends on a simple graph
property. If the tractable subgraph has this property (we call such subgraphs
v-acyclic), a very fast block coordinate ascent algorithm is possible. If not,
optimization is harder, but we show a new algorithm based on the construction
of an auxiliary exponential family that can be used to make inference possible
in this case as well. We discuss the advantages and disadvantages of each
regime and compare the algorithms empirically.
| Alexandre Bouchard-Cote, Michael I. Jordan | null | 1205.2658 | null | null |
Alternating Projections for Learning with Expectation Constraints | cs.LG stat.ML | We present an objective function for learning with unlabeled data that
utilizes auxiliary expectation constraints. We optimize this objective function
using a procedure that alternates between information and moment projections.
Our method provides an alternate interpretation of the posterior regularization
framework (Graca et al., 2008), maintains uncertainty during optimization
unlike constraint-driven learning (Chang et al., 2007), and is more efficient
than generalized expectation criteria (Mann & McCallum, 2008). Applications of
this framework include minimally supervised learning, semi-supervised learning,
and learning with constraints that are more expressive than the underlying
model. In experiments, we demonstrate comparable accuracy to generalized
expectation criteria for minimally supervised learning, and use expressive
structural constraints to guide semi-supervised learning, providing a 3%-6%
improvement over state-of-the-art constraint-driven learning.
| Kedar Bellare, Gregory Druck, Andrew McCallum | null | 1205.2660 | null | null |
REGAL: A Regularization based Algorithm for Reinforcement Learning in
Weakly Communicating MDPs | cs.LG | We provide an algorithm that achieves the optimal regret rate in an unknown
weakly communicating Markov Decision Process (MDP). The algorithm proceeds in
episodes where, in each episode, it picks a policy using regularization based
on the span of the optimal bias vector. For an MDP with S states and A actions
whose optimal bias vector has span bounded by H, we show a regret bound of
~O(HS sqrt(AT)). We also relate the span to various diameter-like quantities
associated with the MDP, demonstrating how our results improve on previous
regret bounds.
| Peter L. Bartlett, Ambuj Tewari | null | 1205.2661 | null | null |
On Smoothing and Inference for Topic Models | cs.LG stat.ML | Latent Dirichlet analysis, or topic modeling, is a flexible latent variable
framework for modeling high-dimensional sparse count data. Various learning
algorithms have been developed in recent years, including collapsed Gibbs
sampling, variational inference, and maximum a posteriori estimation, and this
variety motivates the need for careful empirical comparisons. In this paper, we
highlight the close connections between these approaches. We find that the main
differences are attributable to the amount of smoothing applied to the counts.
When the hyperparameters are optimized, the differences in performance among
the algorithms diminish significantly. The ability of these algorithms to
achieve solutions of comparable accuracy gives us the freedom to select
computationally efficient approaches. Using the insights gained from this
comparative study, we show how accurate topic models can be learned in several
seconds on text corpora with thousands of documents.
| Arthur Asuncion, Max Welling, Padhraic Smyth, Yee Whye Teh | null | 1205.2662 | null | null |
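To make the role of the smoothing hyperparameters concrete, here is a compact collapsed Gibbs sampler for LDA on synthetic count data; alpha and beta enter the sampling distribution exactly as pseudo-counts added to the topic counts. The corpus, vocabulary size, and hyperparameter values are made-up illustrations, and the code is a sketch rather than an implementation from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
V, K, alpha, beta = 20, 3, 0.1, 0.01
docs = [rng.integers(0, V, size=rng.integers(30, 60)) for _ in range(15)]

ndk = np.zeros((len(docs), K))        # document-topic counts
nkw = np.zeros((K, V))                # topic-word counts
nk = np.zeros(K)                      # topic totals
z = [rng.integers(0, K, size=len(d)) for d in docs]
for d, (doc, zd) in enumerate(zip(docs, z)):
    for w, k in zip(doc, zd):
        ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

for sweep in range(100):              # collapsed Gibbs sweeps
    for d, (doc, zd) in enumerate(zip(docs, z)):
        for i, w in enumerate(doc):
            k = zd[i]
            ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
            # smoothed conditional: counts plus alpha / beta pseudo-counts
            p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
            k = rng.choice(K, p=p / p.sum())
            zd[i] = k
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

print("smoothed topic-word distribution estimate:")
print(np.round((nkw + beta) / (nk[:, None] + V * beta), 3))
```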
A Bayesian Sampling Approach to Exploration in Reinforcement Learning | cs.LG | We present a modular approach to reinforcement learning that uses a Bayesian
representation of the uncertainty over models. The approach, BOSS (Best of
Sampled Set), drives exploration by sampling multiple models from the posterior
and selecting actions optimistically. It extends previous work by providing a
rule for deciding when to resample and how to combine the models. We show that
our algorithm achieves near-optimal reward with high probability with a sample
complexity that is low relative to the speed at which the posterior
distribution converges during learning. We demonstrate that BOSS performs quite
favorably compared to state-of-the-art reinforcement-learning approaches and
illustrate its flexibility by pairing it with a non-parametric model that
generalizes across states.
| John Asmuth, Lihong Li, Michael L. Littman, Ali Nouri, David Wingate | null | 1205.2664 | null | null |
Decoupling Exploration and Exploitation in Multi-Armed Bandits | cs.LG | We consider a multi-armed bandit problem where the decision maker can explore
and exploit different arms at every round. The exploited arm adds to the
decision maker's cumulative reward (without necessarily observing the reward)
while the explored arm reveals its value. We devise algorithms for this setup
and show that the dependence on the number of arms, k, can be much better than
the standard square root of k dependence, depending on the behavior of the
arms' reward sequences. For the important case of piecewise stationary
stochastic bandits, we show a significant improvement over existing algorithms.
Our algorithms are based on a non-uniform sampling policy, which we show is
essential to the success of any algorithm in the adversarial setup. Finally, we
show some simulation results on an ultra-wide band channel selection inspired
setting indicating the applicability of our algorithms.
| Orly Avner, Shie Mannor, Ohad Shamir | null | 1205.2874 | null | null |
Density Sensitive Hashing | cs.IR cs.LG | Nearest neighbors search is a fundamental problem in various research fields
like machine learning, data mining and pattern recognition. Recently,
hashing-based approaches, e.g., Locality Sensitive Hashing (LSH), have proven to
be effective for scalable high dimensional nearest neighbors search. Many
hashing algorithms found their theoretic root in random projection. Since these
algorithms generate the hash tables (projections) randomly, a large number of
hash tables (i.e., long codewords) are required in order to achieve both high
precision and recall. To address this limitation, we propose a novel hashing
algorithm called {\em Density Sensitive Hashing} (DSH) in this paper. DSH can
be regarded as an extension of LSH. By exploring the geometric structure of the
data, DSH avoids the purely random projections selection and uses those
projective functions which best agree with the distribution of the data.
Extensive experimental results on real-world data sets have shown that the
proposed method achieves better performance compared to the state-of-the-art
hashing approaches.
| Yue Lin and Deng Cai and Cheng Li | null | 1205.2930 | null | null |
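For reference, the following sketch shows the purely random sign-projection hashing that the abstract above contrasts with; Density Sensitive Hashing replaces these random directions with projections adapted to the data distribution, a step not implemented here. All sizes and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def hash_codes(X, n_bits, projections=None):
    """Sign-random-projection hashing: one bit per random direction."""
    if projections is None:
        projections = rng.normal(size=(X.shape[1], n_bits))
    return (X @ projections > 0).astype(np.uint8), projections

X = rng.normal(size=(1000, 32))
codes, P = hash_codes(X, n_bits=16)
query_code, _ = hash_codes(X[:1], 16, projections=P)
hamming = (codes ^ query_code).sum(axis=1)     # candidates have small Hamming distance
print("nearest candidates to item 0:", np.argsort(hamming)[:5])
```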
b-Bit Minwise Hashing in Practice: Large-Scale Batch and Online Learning
and Using GPUs for Fast Preprocessing with Simple Hash Functions | cs.IR cs.DB cs.LG | In this paper, we study several critical issues which must be tackled before
one can apply b-bit minwise hashing to the volumes of data often used
industrial applications, especially in the context of search.
1. (b-bit) Minwise hashing requires an expensive preprocessing step that
computes k (e.g., 500) minimal values after applying the corresponding
permutations for each data vector. We developed a parallelization scheme using
GPUs and observed that the preprocessing time can be reduced by a factor of
20-80 and becomes substantially smaller than the data loading time.
2. One major advantage of b-bit minwise hashing is that it can substantially
reduce the amount of memory required for batch learning. However, as online
algorithms become increasingly popular for large-scale learning in the context
of search, it is not clear if b-bit minwise yields significant improvements for
them. This paper demonstrates that $b$-bit minwise hashing provides an
effective data size/dimension reduction scheme and hence it can dramatically
reduce the data loading time for each epoch of the online training process.
This is significant because online learning often requires many (e.g., 10 to
100) epochs to reach a sufficient accuracy.
3. Another critical issue is that for very large data sets it becomes
impossible to store a (fully) random permutation matrix, due to its space
requirements. Our paper is the first study to demonstrate that $b$-bit minwise
hashing implemented using simple hash functions, e.g., the 2-universal (2U) and
4-universal (4U) hash families, can produce very similar learning results as
using fully random permutations. Experiments on datasets of up to 200GB are
presented.
| Ping Li and Anshumali Shrivastava and Arnd Christian Konig | null | 1205.2958 | null | null |
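A minimal sketch of b-bit minwise hashing with a simple 2-universal hash family, assuming h(x) = ((a*x + c) mod prime) mod D and keeping only the lowest b bits of each minimum. The constants, set sizes, and number of hashes k are illustrative choices, not the settings studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
PRIME, D, k, b = (1 << 31) - 1, 1 << 20, 256, 2

a = rng.integers(1, PRIME, size=k)
c = rng.integers(0, PRIME, size=k)

def minhash_bbit(feature_ids):
    ids = np.asarray(list(feature_ids), dtype=np.int64)
    # k "permutations" approximated by 2U hashing; take the minimum per hash
    h = ((a[:, None] * ids[None, :] + c[:, None]) % PRIME) % D
    return (h.min(axis=1) & ((1 << b) - 1)).astype(np.uint8)   # lowest b bits

s1 = set(range(0, 500))
s2 = set(range(100, 600))                       # true Jaccard similarity = 400 / 600
matches = np.mean(minhash_bbit(s1) == minhash_bbit(s2))
# For b-bit minhashes, non-matching minima still collide with probability ~ 1/2^b
j_hat = (matches - 2.0**-b) / (1.0 - 2.0**-b)
print("collision rate:", matches, " estimated Jaccard:", round(j_hat, 3))
```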
Malware Detection Module using Machine Learning Algorithms to Assist in
Centralized Security in Enterprise Networks | cs.CR cs.LG | Malicious software is abundant in a world of innumerable computer users, who
are constantly faced with these threats from various sources like the internet,
local networks and portable drives. Malware is potentially low to high risk and
can cause systems to function incorrectly, steal data and even crash. Malware
may be executable or system library files in the form of viruses, worms,
Trojans, all aimed at breaching the security of the system and compromising
user privacy. Typically, anti-virus software is based on a signature definition
system which keeps updating from the internet and thus keeping track of known
viruses. While this may be sufficient for home-users, a security risk from a
new virus could threaten an entire enterprise network. This paper proposes a
new and more sophisticated antivirus engine that can not only scan files, but
also build knowledge and detect files as potential viruses. This is done by
extracting system API calls made by various normal and harmful executables, and
using machine learning algorithms to classify and hence, rank files on a scale
of security risk. While such a system is processor heavy, it is very effective
when used centrally to protect an enterprise network which may be more prone to
such threats.
| Priyank Singhal, Nataasha Raul | 10.5121/ijnsa.2012.4106 | 1205.3062 | null | null |
Efficient Bayes-Adaptive Reinforcement Learning using Sample-Based
Search | cs.LG cs.AI stat.ML | Bayesian model-based reinforcement learning is a formally elegant approach to
learning optimal behaviour under model uncertainty, trading off exploration and
exploitation in an ideal way. Unfortunately, finding the resulting
Bayes-optimal policies is notoriously taxing, since the search space becomes
enormous. In this paper we introduce a tractable, sample-based method for
approximate Bayes-optimal planning which exploits Monte-Carlo tree search. Our
approach outperformed prior Bayesian model-based RL algorithms by a significant
margin on several well-known benchmark problems -- because it avoids expensive
applications of Bayes rule within the search tree by lazily sampling models
from the current beliefs. We illustrate the advantages of our approach by
showing it working in an infinite state space domain which is qualitatively out
of reach of almost all previous work in Bayesian exploration.
| Arthur Guez and David Silver and Peter Dayan | null | 1205.3109 | null | null |
Unsupervised Discovery of Mid-Level Discriminative Patches | cs.CV cs.AI cs.LG | The goal of this paper is to discover a set of discriminative patches which
can serve as a fully unsupervised mid-level visual representation. The desired
patches need to satisfy two requirements: 1) to be representative, they need to
occur frequently enough in the visual world; 2) to be discriminative, they need
to be different enough from the rest of the visual world. The patches could
correspond to parts, objects, "visual phrases", etc. but are not restricted to
be any one of them. We pose this as an unsupervised discriminative clustering
problem on a huge dataset of image patches. We use an iterative procedure which
alternates between clustering and training discriminative classifiers, while
applying careful cross-validation at each step to prevent overfitting. The
paper experimentally demonstrates the effectiveness of discriminative patches
as an unsupervised mid-level visual representation, suggesting that it could be
used in place of visual words for many tasks. Furthermore, discriminative
patches can also be used in a supervised regime, such as scene classification,
where they demonstrate state-of-the-art performance on the MIT Indoor-67
dataset.
| Saurabh Singh, Abhinav Gupta, Alexei A. Efros | null | 1205.3137 | null | null |
Multiple Identifications in Multi-Armed Bandits | cs.LG stat.ML | We study the problem of identifying the top $m$ arms in a multi-armed bandit
game. Our proposed solution relies on a new algorithm based on successive
rejects of the seemingly bad arms, and successive accepts of the good ones.
This algorithmic contribution allows us to tackle other multiple identifications
settings that were previously out of reach. In particular we show that this
idea of successive accepts and rejects applies to the multi-bandit best arm
identification problem.
| S\'ebastien Bubeck, Tengyao Wang, Nitin Viswanathan | null | 1205.3181 | null | null |
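A simplified sketch of the successive rejects idea for identifying the single best arm (m = 1); the paper's contribution combines successive accepts and rejects to identify the top m arms, which this toy version does not reproduce. The arm means and budget are illustrative, and the phase lengths follow the standard successive rejects schedule.

```python
import numpy as np

rng = np.random.default_rng(6)
means = np.array([0.3, 0.5, 0.45, 0.6, 0.2])    # illustrative Bernoulli arms
K, budget = len(means), 2000

log_bar = 0.5 + sum(1.0 / i for i in range(2, K + 1))
n_prev, pulls, rewards = 0, np.zeros(K), np.zeros(K)
active = list(range(K))
for j in range(1, K):                            # K - 1 phases
    n_j = int(np.ceil((budget - K) / (log_bar * (K + 1 - j))))
    for arm in active:                           # bring every surviving arm up to n_j pulls
        for _ in range(n_j - n_prev):
            rewards[arm] += rng.random() < means[arm]
            pulls[arm] += 1
    n_prev = n_j
    emp = rewards[active] / pulls[active]
    active.remove(active[int(np.argmin(emp))])   # reject the empirically worst arm
print("recommended arm:", active[0], "(true best arm:", int(np.argmax(means)), ")")
```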
Genetic Programming for Multibiometrics | cs.NE cs.CR cs.LG | Biometric systems suffer from some drawbacks: a biometric system can provide
good performance in general, yet fail for some individuals, since its performance
depends heavily on the quality of the capture. One solution to solve some of
these problems is to use multibiometrics where different biometric systems are
combined together (multiple captures of the same biometric modality, multiple
feature extraction algorithms, multiple biometric modalities...). In this
paper, we are interested in the application of score-level fusion functions (i.e., we
use a multibiometric authentication scheme which accepts or denies the claimant
access to an application). In the state of the art, the weighted sum of scores
(which is a linear classifier) and the use of an SVM (which is a non-linear
classifier) on the scores provided by different biometric systems yield some of the best
performances. We present a new method based on the use of genetic programming
giving similar or better performances (depending on the complexity of the
database). We derive a score fusion function by assembling some classical
primitive functions (+, *, -, ...). We have validated the proposed method on
three significant biometric benchmark datasets from the state of the art.
| Romain Giot (GREYC), Christophe Rosenberger (GREYC) | 10.1016/j.eswa.2011.08.066 | 1205.3441 | null | null |
Normalized Maximum Likelihood Coding for Exponential Family with Its
Applications to Optimal Clustering | cs.LG | We are concerned with the issue of how to calculate the normalized maximum
likelihood (NML) code-length. A problem is that the normalization term of
the NML code-length may diverge when the domain is continuous and unbounded, and that a
straightforward computation of it is highly expensive when the data domain is
finite. Previous works have investigated how to calculate the NML
code-length for specific types of distributions. We first propose a general
method for computing the NML code-length for the exponential family. Then we
specifically focus on the Gaussian mixture model (GMM), and propose a new efficient
method for computing the NML code-length for GMMs. We develop it by generalizing Rissanen's
re-normalizing technique. Then we apply this method to the clustering issue, in
which a clustering structure is modeled using a GMM, and the main task is to
estimate the optimal number of clusters on the basis of the NML code-length. We
demonstrate using artificial data sets the superiority of the NML-based
clustering over other criteria such as AIC, BIC in terms of the data size
required for high accuracy rate to be achieved.
| So Hirai and Kenji Yamanishi | null | 1205.3549 | null | null |
Universal Algorithm for Online Trading Based on the Method of
Calibration | cs.LG q-fin.PM | We present a universal algorithm for online trading in Stock Market which
performs asymptotically at least as well as any stationary trading strategy
that computes the investment at each step using a fixed function of the side
information that belongs to a given RKHS (Reproducing Kernel Hilbert Space).
Using a universal kernel, we extend this result for any continuous stationary
strategy. In this learning process, a trader rationally chooses his gambles
using predictions made by a randomized well-calibrated algorithm. Our strategy
is based on Dawid's notion of calibration with more general checking rules and
on some modification of Kakade and Foster's randomized rounding algorithm for
computing the well-calibrated forecasts. We combine the method of randomized
calibration with Vovk's method of defensive forecasting in RKHS. Unlike the
statistical theory, no stochastic assumptions are made about the stock prices.
Our empirical results on historical markets provide strong evidence that this
type of technical trading can "beat the market" if transaction costs are
ignored.
| Vladimir V'yugin and Vladimir Trunov | null | 1205.3767 | null | null |
kLog: A Language for Logical and Relational Learning with Kernels | cs.AI cs.LG cs.PL | We introduce kLog, a novel approach to statistical relational learning.
Unlike standard approaches, kLog does not represent a probability distribution
directly. It is rather a language to perform kernel-based learning on
expressive logical and relational representations. kLog allows users to specify
learning problems declaratively. It builds on simple but powerful concepts:
learning from interpretations, entity/relationship data modeling, logic
programming, and deductive databases. Access by the kernel to the rich
representation is mediated by a technique we call graphicalization: the
relational representation is first transformed into a graph --- in particular,
a grounded entity/relationship diagram. Subsequently, a choice of graph kernel
defines the feature space. kLog supports mixed numerical and symbolic data, as
well as background knowledge in the form of Prolog or Datalog programs as in
inductive logic programming systems. The kLog framework can be applied to
tackle the same range of tasks that has made statistical relational learning so
popular, including classification, regression, multitask learning, and
collective classification. We also report about empirical comparisons, showing
that kLog can be either more accurate, or much faster at the same level of
accuracy, than Tilde and Alchemy. kLog is GPLv3 licensed and is available at
http://klog.dinfo.unifi.it along with tutorials.
| Paolo Frasconi, Fabrizio Costa, Luc De Raedt, Kurt De Grave | 10.1016/j.artint.2014.08.003 | 1205.3981 | null | null |
Constrained Overcomplete Analysis Operator Learning for Cosparse Signal
Modelling | math.NA cs.LG | We consider the problem of learning a low-dimensional signal model from a
collection of training samples. The mainstream approach would be to learn an
overcomplete dictionary to provide good approximations of the training samples
using sparse synthesis coefficients. This famous sparse model has a less well
known counterpart, in analysis form, called the cosparse analysis model. In
this new model, signals are characterised by their parsimony in a transformed
domain using an overcomplete (linear) analysis operator. We propose to learn an
analysis operator from a training corpus using a constrained optimisation
framework based on L1 optimisation. The reason for introducing a constraint in
the optimisation framework is to exclude trivial solutions. Although there is
no definitive answer here as to which constraint is the most relevant, we
investigate some conventional constraints in the model adaptation field and use
the uniformly normalised tight frame (UNTF) for this purpose. We then derive a
practical learning algorithm, based on projected subgradients and
Douglas-Rachford splitting technique, and demonstrate its ability to robustly
recover a ground truth analysis operator, when provided with a clean training
set, of sufficient size. We also find an analysis operator for images, using
some noisy cosparse signals, which is indeed a more realistic experiment. As
the derived optimisation problem is not a convex program, we often find a local
minimum using such variational methods. Some local optimality conditions are
derived for two different settings, providing preliminary theoretical support
for the well-posedness of the learning problem under appropriate conditions.
| Mehrdad Yaghoobi, Sangnam Nam, Remi Gribonval and Mike E. Davies | 10.1109/TSP.2013.2250968 | 1205.4133 | null | null |
Theory of Dependent Hierarchical Normalized Random Measures | cs.LG math.ST stat.ML stat.TH | This paper presents theory for Normalized Random Measures (NRMs), Normalized
Generalized Gammas (NGGs), a particular kind of NRM, and Dependent Hierarchical
NRMs which allow networks of dependent NRMs to be analysed. These have been
used, for instance, for time-dependent topic modelling. In this paper, we first
introduce some mathematical background of completely random measures (CRMs) and
their construction from Poisson processes, and then introduce NRMs and NGGs.
Slice sampling is also introduced for posterior inference. The dependency
operators in Poisson processes and for the corresponding CRMs and NRMs are then
introduced, and posterior inference for the NGG is presented. Finally, we give
dependency and composition results when applying these operators to NRMs so
they can be used in a network with hierarchical and dependent relations.
| Changyou Chen, Wray Buntine and Nan Ding | null | 1205.4159 | null | null |
Online Structured Prediction via Coactive Learning | cs.LG cs.AI cs.IR | We propose Coactive Learning as a model of interaction between a learning
system and a human user, where both have the common goal of providing results
of maximum utility to the user. At each step, the system (e.g. search engine)
receives a context (e.g. query) and predicts an object (e.g. ranking). The user
responds by correcting the system if necessary, providing a slightly improved
-- but not necessarily optimal -- object as feedback. We argue that such
feedback can often be inferred from observable user behavior, for example, from
clicks in web-search. Evaluating predictions by their cardinal utility to the
user, we propose efficient learning algorithms that have ${\cal
O}(\frac{1}{\sqrt{T}})$ average regret, even though the learning algorithm
never observes cardinal utility values as in conventional online learning. We
demonstrate the applicability of our model and learning algorithms on a movie
recommendation task, as well as ranking for web-search.
| Pannaga Shivaswamy and Thorsten Joachims | null | 1205.4213 | null | null |
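A toy sketch of a perceptron-style coactive learning loop consistent with the interaction model above: the system proposes the highest-utility candidate under its current weights, a simulated user returns a slightly better candidate, and the weights are updated with the feature difference. The utility model, candidate sets, and synthetic user are assumptions for illustration, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(7)
d, T = 8, 500
w_true = rng.normal(size=d)                      # hidden user utility
w = np.zeros(d)
regret = 0.0
for t in range(T):
    cands = rng.normal(size=(20, d))             # candidate objects for this context
    y_hat = cands[np.argmax(cands @ w)]          # system's prediction
    # user feedback: any candidate that is a bit better than y_hat under the true utility
    better = cands[cands @ w_true > y_hat @ w_true + 1e-9]
    y_bar = better[np.argmin(better @ w_true)] if len(better) else y_hat
    w += y_bar - y_hat                           # perceptron-style coactive update
    regret += (cands @ w_true).max() - y_hat @ w_true
print("average regret over", T, "rounds:", regret / T)
```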
Thompson Sampling: An Asymptotically Optimal Finite Time Analysis | stat.ML cs.LG | The question of the optimality of Thompson Sampling for solving the
stochastic multi-armed bandit problem had been open since 1933. In this paper
we answer it positively for the case of Bernoulli rewards by providing the
first finite-time analysis that matches the asymptotic rate given in the Lai
and Robbins lower bound for the cumulative regret. The proof is accompanied by
a numerical comparison with other optimal policies, experiments that have been
lacking in the literature until now for the Bernoulli case.
| Emilie Kaufmann, Nathaniel Korda and R\'emi Munos | null | 1205.4217 | null | null |
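For concreteness, here is a standard Beta-Bernoulli Thompson Sampling sketch of the kind analysed in the abstract above; the arm means, horizon, and uniform Beta(1, 1) priors are illustrative choices, not the paper's experimental settings.

```python
import numpy as np

rng = np.random.default_rng(8)
means = np.array([0.10, 0.50, 0.55])             # illustrative Bernoulli arms
K, T = len(means), 10000
successes, failures = np.ones(K), np.ones(K)     # Beta(1, 1) priors
cum_regret = 0.0
for t in range(T):
    theta = rng.beta(successes, failures)        # one posterior sample per arm
    arm = int(np.argmax(theta))                  # play the arm with the best sample
    reward = rng.random() < means[arm]
    successes[arm] += reward
    failures[arm] += 1 - reward
    cum_regret += means.max() - means[arm]
print("cumulative pseudo-regret after", T, "rounds:", round(cum_regret, 1))
```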
Diffusion Adaptation over Networks | cs.MA cs.LG | Adaptive networks are well-suited to perform decentralized information
processing and optimization tasks and to model various types of self-organized
and complex behavior encountered in nature. Adaptive networks consist of a
collection of agents with processing and learning abilities. The agents are
linked together through a connection topology, and they cooperate with each
other through local interactions to solve distributed optimization, estimation,
and inference problems in real-time. The continuous diffusion of information
across the network enables agents to adapt their performance in relation to
streaming data and network conditions; it also results in improved adaptation
and learning performance relative to non-cooperative agents. This article
provides an overview of diffusion strategies for adaptation and learning over
networks. The article is divided into several sections: 1. Motivation; 2.
Mean-Square-Error Estimation; 3. Distributed Optimization via Diffusion
Strategies; 4. Adaptive Diffusion Strategies; 5. Performance of
Steepest-Descent Diffusion Strategies; 6. Performance of Adaptive Diffusion
Strategies; 7. Comparing the Performance of Cooperative Strategies; 8.
Selecting the Combination Weights; 9. Diffusion with Noisy Information
Exchanges; 10. Extensions and Further Considerations; Appendix A: Properties of
Kronecker Products; Appendix B: Graph Laplacian and Network Connectivity;
Appendix C: Stochastic Matrices; Appendix D: Block Maximum Norm; Appendix E:
Comparison with Consensus Strategies; References.
| Ali H. Sayed | null | 1205.4220 | null | null |
Visualization of features of a series of measurements with
one-dimensional cellular structure | cs.LG | This paper describes a method for visualizing periodic components and
instability regions in series of measurements, based on a smoothing
algorithm and the concept of one-dimensional cellular automata. The method can be
used in the analysis of time series related to the volumes of thematic
publications in web space.
| D. V. Lande | null | 1205.4234 | null | null |
Efficient Methods for Unsupervised Learning of Probabilistic Models | cs.LG cs.AI cs.IT cs.NE math.IT physics.data-an | In this thesis I develop a variety of techniques to train, evaluate, and
sample from intractable and high dimensional probabilistic models. Abstract
exceeds arXiv space limitations -- see PDF.
| Jascha Sohl-Dickstein | null | 1205.4295 | null | null |
New Analysis and Algorithm for Learning with Drifting Distributions | cs.LG stat.ML | We present a new analysis of the problem of learning with drifting
distributions in the batch setting using the notion of discrepancy. We prove
learning bounds based on the Rademacher complexity of the hypothesis set and
the discrepancy of distributions both for a drifting PAC scenario and a
tracking scenario. Our bounds are always tighter and in some cases
substantially improve upon previous ones based on the $L_1$ distance. We also
present a generalization of the standard on-line to batch conversion to the
drifting scenario in terms of the discrepancy and arbitrary convex combinations
of hypotheses. We introduce a new algorithm exploiting these learning
guarantees, which we show can be formulated as a simple QP. Finally, we report
the results of preliminary experiments demonstrating the benefits of this
algorithm.
| Mehryar Mohri and Andres Munoz Medina | null | 1205.4343 | null | null |
From Exact Learning to Computing Boolean Functions and Back Again | cs.LG cs.DM | The goal of the paper is to relate complexity measures associated with the
evaluation of Boolean functions (certificate complexity, decision tree
complexity) and learning dimensions used to characterize exact learning
(teaching dimension, extended teaching dimension). The high level motivation is
to discover non-trivial relations between exact learning of an unknown concept
and testing whether an unknown concept is part of a concept class or not.
Concretely, the goal is to provide lower and upper bounds of complexity
measures for one problem type in terms of the other.
| Sergiu Goschin | null | 1205.4349 | null | null |
Sparse Signal Recovery in the Presence of Intra-Vector and Inter-Vector
Correlation | cs.IT cs.LG math.IT stat.ME stat.ML | This work discusses the problem of sparse signal recovery when there is
correlation among the values of non-zero entries. We examine intra-vector
correlation in the context of the block sparse model and inter-vector
correlation in the context of the multiple measurement vector model, as well as
their combination. Algorithms based on sparse Bayesian learning are
presented and the benefits of incorporating correlation at the algorithm level
are discussed. The impact of correlation on the limits of support recovery is
also discussed highlighting the different impact intra-vector and inter-vector
correlations have on such limits.
| Bhaskar D. Rao, Zhilin Zhang, Yuzhe Jin | null | 1205.4471 | null | null |
Soft Rule Ensembles for Statistical Learning | stat.ML cs.LG stat.AP | In this article supervised learning problems are solved using soft rule
ensembles. We first review the importance sampling learning ensembles (ISLE)
approach that is useful for generating hard rules. The soft rules are then
obtained with logistic regression from the corresponding hard rules. In order
to deal with the perfect separation problem related to the logistic regression,
Firth's bias corrected likelihood is used. Various examples and simulation
results show that soft rule ensembles can improve predictive performance over
hard rule ensembles.
| Deniz Akdemir and Nicolas Heslot | null | 1205.4476 | null | null |
Streaming Algorithms for Pattern Discovery over Dynamically Changing
Event Sequences | cs.LG cs.DB | Discovering frequent episodes over event sequences is an important data
mining task. In many applications, events constituting the data sequence arrive
as a stream, at furious rates, and recent trends (or frequent episodes) can
change and drift due to the dynamical nature of the underlying event generation
process. The ability to detect and track such changing sets of frequent
episodes can be valuable in many application scenarios. Current methods for
frequent episode discovery are typically multipass algorithms, making them
unsuitable in the streaming context. In this paper, we propose a new streaming
algorithm for discovering frequent episodes over a window of recent events in
the stream. Our algorithm processes events as they arrive, one batch at a time,
while discovering the top frequent episodes over a window consisting of several
batches in the immediate past. We derive approximation guarantees for our
algorithm under the condition that frequent episodes are approximately
well-separated from infrequent ones in every batch of the window. We present
extensive experimental evaluations of our algorithm on both real and synthetic
data. We also present comparisons with baselines and adaptations of streaming
algorithms from itemset mining literature.
| Debprakash Patnaik and Naren Ramakrishnan and Srivatsan Laxman and
Badrish Chandramouli | null | 1205.4477 | null | null |
Stochastic Smoothing for Nonsmooth Minimizations: Accelerating SGD by
Exploiting Structure | cs.LG stat.CO stat.ML | In this work we consider the stochastic minimization of nonsmooth convex loss
functions, a central problem in machine learning. We propose a novel algorithm
called Accelerated Nonsmooth Stochastic Gradient Descent (ANSGD), which
exploits the structure of common nonsmooth loss functions to achieve optimal
convergence rates for a class of problems including SVMs. It is the first
stochastic algorithm that can achieve the optimal O(1/t) rate for minimizing
nonsmooth loss functions (with strong convexity). The fast rates are confirmed
by empirical comparisons, in which ANSGD significantly outperforms previous
subgradient descent algorithms including SGD.
| Hua Ouyang, Alexander Gray | null | 1205.4481 | null | null |
Conditional mean embeddings as regressors - supplementary | cs.LG stat.ML | We demonstrate an equivalence between reproducing kernel Hilbert space (RKHS)
embeddings of conditional distributions and vector-valued regressors. This
connection introduces a natural regularized loss function which the RKHS
embeddings minimise, providing an intuitive understanding of the embeddings and
a justification for their use. Furthermore, the equivalence allows the
application of vector-valued regression methods and results to the problem of
learning conditional distributions. Using this link we derive a sparse version
of the embedding by considering alternative formulations. Further, by applying
convergence results for vector-valued regression to the embedding problem we
derive minimax convergence rates which are O(\log(n)/n) -- compared to current
state-of-the-art rates of O(n^{-1/4}) -- and are valid under milder and more
intuitive assumptions. These minimax upper rates coincide with lower rates up
to a logarithmic factor, showing that the embedding method achieves nearly
optimal rates. We study our sparse embedding algorithm in a reinforcement
learning task where the algorithm shows significant improvement in sparsity
over an incomplete Cholesky decomposition.
| Steffen Gr\"unew\"alder, Guy Lever, Luca Baldassarre, Sam Patterson,
Arthur Gretton, Massimilano Pontil | null | 1205.4656 | null | null |
The Role of Weight Shrinking in Large Margin Perceptron Learning | cs.LG | We introduce into the classical perceptron algorithm with margin a mechanism
that shrinks the current weight vector as a first step of the update. If the
shrinking factor is constant the resulting algorithm may be regarded as a
margin-error-driven version of NORMA with constant learning rate. In this case
we show that the allowed strength of shrinking depends on the value of the
maximum margin. We also consider variable shrinking factors for which there is
no such dependence. In both cases we obtain new generalizations of the
perceptron with margin able to provably attain in a finite number of steps any
desirable approximation of the maximal margin hyperplane. The new approximate
maximum margin classifiers appear experimentally to be very competitive in
2-norm soft margin tasks involving linear kernels.
| Constantinos Panagiotakopoulos and Petroula Tsampouka | null | 1205.4698 | null | null |
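A sketch of a margin perceptron whose update shrinks the weight vector before the usual additive step, as described above. The constant shrinking factor, margin threshold, and learning rate are illustrative values, not the ones whose admissible range is analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(9)
n, d = 500, 5
w_star = rng.normal(size=d); w_star /= np.linalg.norm(w_star)
X = rng.normal(size=(n, d))
y = np.sign(X @ w_star)                       # linearly separable toy labels

w, eta, lam, margin = np.zeros(d), 1.0, 0.01, 0.1
for epoch in range(50):
    updates = 0
    for xi, yi in zip(X, y):
        if yi * (w @ xi) <= margin:           # margin error
            w = (1.0 - lam) * w               # shrink the weight vector first ...
            w = w + eta * yi * xi             # ... then take the usual perceptron step
            updates += 1
    if updates == 0:                          # no margin errors left
        break
angle = np.degrees(np.arccos(np.clip(w @ w_star / np.linalg.norm(w), -1, 1)))
print("angle between learned and target direction (degrees):", round(angle, 2))
```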
Visual and semantic interpretability of projections of high dimensional
data for classification tasks | cs.HC cs.LG | A number of visual quality measures have been introduced in visual analytics
literature in order to automatically select the best views of high dimensional
data from a large number of candidate data projections. These methods generally
concentrate on the interpretability of the visualization and pay little
attention to the interpretability of the projection axes. In this paper, we
argue that interpretability of the visualizations and the feature
transformation functions are both crucial for visual exploration of high
dimensional labeled data. We present a two-part user study to examine these two
related but orthogonal aspects of interpretability. We first study how humans
judge the quality of 2D scatterplots of various datasets with varying number of
classes and provide comparisons with ten automated measures, including a number
of visual quality measures and related measures from various machine learning
fields. We then investigate how the user perception on interpretability of
mathematical expressions relate to various automated measures of complexity
that can be used to characterize data projection functions. We conclude with a
discussion of how automated measures of visual and semantic interpretability of
data projections can be used together for exploratory analysis in
classification tasks.
| Ilknur Icke and Andrew Rosenberg | 10.1109/VAST.2011.6102474 | 1205.4776 | null | null |
Safe Exploration in Markov Decision Processes | cs.LG | In environments with uncertain dynamics exploration is necessary to learn how
to perform well. Existing reinforcement learning algorithms provide strong
exploration guarantees, but they tend to rely on an ergodicity assumption. The
essence of ergodicity is that any state is eventually reachable from any other
state by following a suitable policy. This assumption allows for exploration
algorithms that operate by simply favoring states that have rarely been visited
before. For most physical systems this assumption is impractical as the systems
would break before any reasonable exploration has taken place, i.e., most
physical systems don't satisfy the ergodicity assumption. In this paper we
address the need for safe exploration methods in Markov decision processes. We
first propose a general formulation of safety through ergodicity. We show that
imposing safety by restricting attention to the resulting set of guaranteed
safe policies is NP-hard. We then present an efficient algorithm for guaranteed
safe, but potentially suboptimal, exploration. At the core is an optimization
formulation in which the constraints restrict attention to a subset of the
guaranteed safe policies and the objective favors exploration policies. Our
framework is compatible with the majority of previously proposed exploration
methods, which rely on an exploration bonus. Our experiments, which include a
Martian terrain exploration problem, show that our method is able to explore
better than classical exploration methods.
| Teodor Mihai Moldovan, Pieter Abbeel | null | 1205.4810 | null | null |
Off-Policy Actor-Critic | cs.LG | This paper presents the first actor-critic algorithm for off-policy
reinforcement learning. Our algorithm is online and incremental, and its
per-time-step complexity scales linearly with the number of learned weights.
Previous work on actor-critic algorithms is limited to the on-policy setting
and does not take advantage of the recent advances in off-policy gradient
temporal-difference learning. Off-policy techniques, such as Greedy-GQ, enable
a target policy to be learned while following and obtaining data from another
(behavior) policy. For many problems, however, actor-critic methods are more
practical than action value methods (like Greedy-GQ) because they explicitly
represent the policy; consequently, the policy can be stochastic and utilize a
large action space. In this paper, we illustrate how to practically combine the
generality and learning potential of off-policy learning with the flexibility
in action selection given by actor-critic methods. We derive an incremental,
linear time and space complexity algorithm that includes eligibility traces,
prove convergence under assumptions similar to previous off-policy algorithms,
and empirically show better or comparable performance to existing algorithms on
standard reinforcement-learning benchmark problems.
| Thomas Degris, Martha White, Richard S. Sutton | null | 1205.4839 | null | null |
Clustering is difficult only when it does not matter | cs.LG cs.DS | Numerous papers ask how difficult it is to cluster data. We suggest that the
more relevant and interesting question is how difficult it is to cluster data
sets {\em that can be clustered well}. More generally, despite the ubiquity and
the great importance of clustering, we still do not have a satisfactory
mathematical theory of clustering. In order to properly understand clustering,
it is clearly necessary to develop a solid theoretical basis for the area. For
example, from the perspective of computational complexity theory the clustering
problem seems very hard. Numerous papers introduce various criteria and
numerical measures to quantify the quality of a given clustering. The resulting
conclusions are pessimistic, since it is computationally difficult to find an
optimal clustering of a given data set, if we go by any of these popular
criteria. In contrast, the practitioners' perspective is much more optimistic.
Our explanation for this disparity of opinions is that complexity theory
concentrates on the worst case, whereas in reality we only care for data sets
that can be clustered well.
We introduce a theoretical framework of clustering in metric spaces that
revolves around a notion of "good clustering". We show that if a good
clustering exists, then in many cases it can be efficiently found. Our
conclusion is that contrary to popular belief, clustering should not be
considered a hard task.
| Amit Daniely and Nati Linial and Michael Saks | null | 1205.4891 | null | null |
On the practically interesting instances of MAXCUT | cs.CC cs.LG | The complexity of a computational problem is traditionally quantified based
on the hardness of its worst case. This approach has many advantages and has
led to a deep and beautiful theory. However, from the practical perspective,
this leaves much to be desired. In application areas, practically interesting
instances very often occupy just a tiny part of an algorithm's space of
instances, and the vast majority of instances are simply irrelevant. Addressing
these issues is a major challenge for theoretical computer science which may
make theory more relevant to the practice of computer science.
Following Bilu and Linial, we apply this perspective to MAXCUT, viewed as a
clustering problem. Using a variety of techniques, we investigate practically
interesting instances of this problem. Specifically, we show how to solve in
polynomial time distinguished, metric, expanding and dense instances of MAXCUT
under mild stability assumptions. In particular, $(1+\epsilon)$-stability
(which is optimal) suffices for metric and dense MAXCUT. We also show how to
solve in polynomial time $\Omega(\sqrt{n})$-stable instances of MAXCUT,
substantially improving the best previously known result.
| Yonatan Bilu and Amit Daniely and Nati Linial and Michael Saks | null | 1205.4893 | null | null |
Learning Mixed Graphical Models | stat.ML cs.CV cs.LG math.OC | We consider the problem of learning the structure of a pairwise graphical
model over continuous and discrete variables. We present a new pairwise model
for graphical models with both continuous and discrete variables that is
amenable to structure learning. In previous work, authors have considered
structure learning of Gaussian graphical models and structure learning of
discrete models. Our approach is a natural generalization of these two lines of
work to the mixed case. The penalization scheme involves a novel symmetric use
of the group-lasso norm and follows naturally from a particular parametrization
of the model.
| Jason D. Lee and Trevor J. Hastie | null | 1205.5012 | null | null |