title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Theano: new features and speed improvements | cs.SC cs.LG | Theano is a linear algebra compiler that optimizes a user's
symbolically-specified mathematical computations to produce efficient low-level
implementations. In this paper, we present new features and efficiency
improvements to Theano, and benchmarks demonstrating Theano's performance
relative to Torch7, a recently introduced machine learning library, and to
RNNLM, a C++ library targeted at recurrent neural networks.
| Fr\'ed\'eric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra,
Ian Goodfellow, Arnaud Bergeron, Nicolas Bouchard, David Warde-Farley, Yoshua
Bengio | null | 1211.5590 | null | null |
Texture Modeling with Convolutional Spike-and-Slab RBMs and Deep
Extensions | cs.LG stat.ML | We apply the spike-and-slab Restricted Boltzmann Machine (ssRBM) to texture
modeling. The ssRBM with tiled-convolution weight sharing (TssRBM) achieves or
surpasses the state-of-the-art on texture synthesis and inpainting by
parametric models. We also develop a novel RBM model with a spike-and-slab
visible layer and binary variables in the hidden layer. This model is designed
to be stacked on top of the TssRBM. We show the resulting deep belief network
(DBN) is a powerful generative model that improves on single-layer models and
is capable of modeling not only single high-resolution and challenging textures
but also multiple textures.
| Heng Luo, Pierre Luc Carrier, Aaron Courville, Yoshua Bengio | null | 1211.5687 | null | null |
Bayesian learning of noisy Markov decision processes | stat.ML cs.LG stat.CO | We consider the inverse reinforcement learning problem, that is, the problem
of learning from, and then predicting or mimicking a controller based on
state/action data. We propose a statistical model for such data, derived from
the structure of a Markov decision process. Adopting a Bayesian approach to
inference, we show how latent variables of the model can be estimated, and how
predictions about actions can be made, in a unified framework. A new Markov
chain Monte Carlo (MCMC) sampler is devised for simulation from the posterior
distribution. This step includes a parameter expansion step, which is shown to
be essential for good convergence properties of the MCMC sampler. As an
illustration, the method is applied to learning a human controller.
| Sumeetpal S. Singh and Nicolas Chopin and Nick Whiteley | null | 1211.5901 | null | null |
Online Stochastic Optimization with Multiple Objectives | cs.LG math.OC | In this paper we propose a general framework to characterize and solve the
stochastic optimization problems with multiple objectives underlying many real
world learning applications. We first propose a projection based algorithm
which attains an $O(T^{-1/3})$ convergence rate. Then, by leveraging the
theory of Lagrangian duality in constrained optimization, we devise a novel primal-dual
stochastic approximation algorithm which attains the optimal convergence rate
of $O(T^{-1/2})$ for general Lipschitz continuous objectives.
| Mehrdad Mahdavi, Tianbao Yang, Rong Jin | null | 1211.6013 | null | null |
Random Projections for Linear Support Vector Machines | cs.LG stat.ML | Let X be a data matrix of rank \rho, whose rows represent n points in
d-dimensional space. The linear support vector machine constructs a hyperplane
separator that maximizes the 1-norm soft margin. We develop a new oblivious
dimension reduction technique which is precomputed and can be applied to any
input matrix X. We prove that, with high probability, the margin and minimum
enclosing ball in the feature space are preserved to within \epsilon-relative
error, ensuring comparable generalization as in the original space in the case
of classification. For regression, we show that the margin is preserved to
\epsilon-relative error with high probability. We present extensive experiments
with real and synthetic data to support our theory.
| Saurabh Paul, Christos Boutsidis, Malik Magdon-Ismail, Petros Drineas | null | 1211.6085 | null | null |
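The entry above concerns oblivious random projections applied before linear SVM training. Below is a minimal sketch of that general idea, assuming scikit-learn's Gaussian random projection and a linear SVM on synthetic data; the paper's specific sketching construction and guarantees are not reproduced, and all sizes and parameters are illustrative assumptions.

```python
# Illustrative sketch (not the authors' exact construction): train a linear SVM
# in a randomly projected feature space and compare accuracy with the original
# space. The projection matrix is data-oblivious, i.e. drawn independently of X.
from sklearn.datasets import make_classification
from sklearn.random_projection import GaussianRandomProjection
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Synthetic high-dimensional classification data (sizes are arbitrary choices).
X, y = make_classification(n_samples=2000, n_features=1000,
                           n_informative=50, random_state=0)

# Baseline: linear SVM in the original d-dimensional space.
base_acc = cross_val_score(LinearSVC(dual=True, max_iter=5000), X, y, cv=5).mean()

# Oblivious dimension reduction: project to k << d dimensions, then train again.
proj = GaussianRandomProjection(n_components=200, random_state=0)
X_low = proj.fit_transform(X)
proj_acc = cross_val_score(LinearSVC(dual=True, max_iter=5000), X_low, y, cv=5).mean()

print(f"original space accuracy:  {base_acc:.3f}")
print(f"projected space accuracy: {proj_acc:.3f}")
```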
The Interplay Between Stability and Regret in Online Learning | cs.LG stat.ML | This paper considers the stability of online learning algorithms and its
implications for learnability (bounded regret). We introduce a novel quantity
called {\em forward regret} that intuitively measures how good an online
learning algorithm is if it is allowed a one-step look-ahead into the future.
We show that given stability, bounded forward regret is equivalent to bounded
regret. We also show that the existence of an algorithm with bounded regret
implies the existence of a stable algorithm with bounded regret and bounded
forward regret. The equivalence results apply to general, possibly non-convex
problems. To the best of our knowledge, our analysis provides the first general
connection between stability and regret in the online setting that is not
restricted to a particular class of algorithms. Our stability-regret connection
provides a simple recipe for analyzing regret incurred by any online learning
algorithm. Using our framework, we analyze several existing online learning
algorithms as well as the "approximate" versions of algorithms like RDA that
solve an optimization problem at each iteration. Our proofs are simpler than
existing analysis for the respective algorithms, show a clear trade-off between
stability and forward regret, and provide tighter regret bounds in some cases.
Furthermore, using our recipe, we analyze "approximate" versions of several
algorithms such as follow-the-regularized-leader (FTRL) that require solving
an optimization problem at each step.
| Ankan Saha and Prateek Jain and Ambuj Tewari | null | 1211.6158 | null | null |
A simple non-parametric Topic Mixture for Authors and Documents | cs.LG stat.ML | This article reviews the Author-Topic Model and presents a new non-parametric
extension based on the Hierarchical Dirichlet Process. The extension is
especially suitable when no prior information about the number of components
necessary is available. A blocked Gibbs sampler is described, with the focus on
staying as close as possible to the original model with only the minimum
theoretical and implementation overhead necessary.
| Arnim Bleier | null | 1211.6248 | null | null |
Duality between subgradient and conditional gradient methods | cs.LG math.OC stat.ML | Given a convex optimization problem and its dual, there are many possible
first-order algorithms. In this paper, we show the equivalence between mirror
descent algorithms and algorithms generalizing the conditional gradient method.
This is done through convex duality, and implies notably that for certain
problems, such as for supervised machine learning problems with non-smooth
losses or problems regularized by non-smooth regularizers, the primal
subgradient method and the dual conditional gradient method are formally
equivalent. The dual interpretation leads to a form of line search for mirror
descent, as well as guarantees of convergence for primal-dual certificates.
| Francis Bach (INRIA Paris - Rocquencourt, LIENS) | null | 1211.6302 | null | null |
An Approach of Improving Students Academic Performance by using k means
clustering algorithm and Decision tree | cs.LG | Improving students' academic performance is not an easy task for the academic
community of higher learning. The academic performance of engineering and
science students during their first year at university is a turning point in
their educational path and usually affects their Grade Point
Average (GPA) in a decisive manner. Student evaluation factors such as class
quizzes, mid-term and final exams, assignments and lab work are studied. It is
recommended that all of this correlated information be conveyed to the class
teacher before the final exam is held. This study will help teachers to
reduce the drop-out ratio to a significant level and improve the performance of
students. In this paper, we present a hybrid procedure based on the Decision
Tree data-mining method and data clustering that enables academicians to
predict students' GPA, based on which the instructor can take the necessary
steps to improve student academic performance.
| Md. Hedayetul Islam Shovon, Mahfuza Haque | null | 1211.6340 | null | null |
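The entry above outlines a hybrid of clustering and decision-tree classification for predicting student performance. The sketch below illustrates that general pipeline with scikit-learn on synthetic marks; the feature names, cluster count, and target definition are hypothetical and not taken from the paper.

```python
# Illustrative sketch of a cluster-then-classify pipeline: k-means groups
# students by assessment profile, then a decision tree uses the cluster label
# as an extra input to predict a (synthetic) GPA band. Data are hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# columns: quiz, midterm, assignment, lab (synthetic marks out of 100)
X = rng.uniform(0, 100, size=(500, 4))
gpa_band = (X.mean(axis=1) > 60).astype(int)  # toy target: "high" vs "low" GPA

# Step 1: k-means clusters students with similar assessment profiles.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Step 2: a decision tree predicts the GPA band from marks plus cluster label.
X_aug = np.column_stack([X, clusters])
X_tr, X_te, y_tr, y_te = train_test_split(X_aug, gpa_band, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", tree.score(X_te, y_te))
```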
Multi-Target Regression via Input Space Expansion: Treating Targets as
Inputs | cs.LG | In many practical applications of supervised learning the task involves the
prediction of multiple target variables from a common set of input variables.
When the prediction targets are binary the task is called multi-label
classification, while when the targets are continuous the task is called
multi-target regression. In both tasks, target variables often exhibit
statistical dependencies and exploiting them in order to improve predictive
accuracy is a core challenge. A family of multi-label classification methods
address this challenge by building a separate model for each target on an
expanded input space where other targets are treated as additional input
variables. Despite the success of these methods in the multi-label
classification domain, their applicability and effectiveness in multi-target
regression has not been studied until now. In this paper, we introduce two new
methods for multi-target regression, called Stacked Single-Target and Ensemble
of Regressor Chains, by adapting two popular multi-label classification methods
of this family. Furthermore, we highlight an inherent problem of these methods
- a discrepancy of the values of the additional input variables between
training and prediction - and develop extensions that use out-of-sample
estimates of the target variables during training in order to tackle this
problem. The results of an extensive experimental evaluation carried out on a
large and diverse collection of datasets show that, when the discrepancy is
appropriately mitigated, the proposed methods attain consistent improvements
over the independent regressions baseline. Moreover, two versions of Ensemble
of Regression Chains perform significantly better than four state-of-the-art
methods including regularization-based multi-task learning methods and a
multi-objective random forest approach.
| Eleftherios Spyromitros-Xioufis, Grigorios Tsoumakas, William Groves,
Ioannis Vlahavas | 10.1007/s10994-016-5546-z | 1211.6581 | null | null |
TACT: A Transfer Actor-Critic Learning Framework for Energy Saving in
Cellular Radio Access Networks | cs.NI cs.AI cs.IT cs.LG math.IT | Recent works have validated the possibility of improving energy efficiency in
radio access networks (RANs), achieved by dynamically turning on/off some base
stations (BSs). In this paper, we extend the research over BS switching
operations, which should match up with traffic load variations. Instead of
depending on the dynamic traffic loads which are still quite challenging to
precisely forecast, we first formulate the traffic variations as a Markov
decision process. Afterwards, in order to foresightedly minimize the energy
consumption of RANs, we design a reinforcement learning framework based BS
switching operation scheme. Furthermore, to avoid the underlying curse of
dimensionality in reinforcement learning, a transfer actor-critic algorithm
(TACT), which utilizes the transferred learning expertise in historical periods
or neighboring regions, is proposed and provably converges. In the end, we
evaluate our proposed scheme by extensive simulations under various practical
configurations and show that the proposed TACT algorithm contributes to a
performance jumpstart and demonstrates the feasibility of significant energy
efficiency improvement at the expense of tolerable delay performance.
| Rongpeng Li, Zhifeng Zhao, Xianfu Chen, Jacques Palicot, Honggang
Zhang | 10.1109/TWC.2014.022014.130840 | 1211.6616 | null | null |
Nonparametric Bayesian Mixed-effect Model: a Sparse Gaussian Process
Approach | cs.LG stat.ML | Multi-task learning models using Gaussian processes (GP) have been developed
and successfully applied in various applications. The main difficulty with this
approach is the computational cost of inference using the union of examples
from all tasks. Therefore sparse solutions, that avoid using the entire data
directly and instead use a set of informative "representatives" are desirable.
The paper investigates this problem for the grouped mixed-effect GP model where
each individual response is given by a fixed-effect, taken from one of a set of
unknown groups, plus a random individual effect function that captures
variations among individuals. Such models have been widely used in previous
work but no sparse solutions have been developed. The paper presents the first
sparse solution for such problems, showing how the sparse approximation can be
obtained by maximizing a variational lower bound on the marginal likelihood,
generalizing ideas from single-task Gaussian processes to handle the
mixed-effect model as well as grouping. Experiments using artificial and real
data validate the approach showing that it can recover the performance of
inference with the full sample, that it outperforms baseline methods, and that
it outperforms state of the art sparse solutions for other multi-task GP
formulations.
| Yuyang Wang, Roni Khardon | 10.1007/978-3-642-33460-3_51 | 1211.6653 | null | null |
Robustness Analysis of Hottopixx, a Linear Programming Model for
Factoring Nonnegative Matrices | stat.ML cs.LG cs.NA math.OC | Although nonnegative matrix factorization (NMF) is NP-hard in general, it has
been shown very recently that it is tractable under the assumption that the
input nonnegative data matrix is close to being separable (separability
requires that all columns of the input matrix belong to the cone spanned by a
small subset of these columns). Since then, several algorithms have been
designed to handle this subclass of NMF problems. In particular, Bittorf,
Recht, R\'e and Tropp (`Factoring nonnegative matrices with linear programs',
NIPS 2012) proposed a linear programming model, referred to as Hottopixx. In
this paper, we provide a new and more general robustness analysis of their
method. In particular, we design a provably more robust variant using a
post-processing strategy which allows us to deal with duplicates and near
duplicates in the dataset.
| Nicolas Gillis | 10.1137/120900629 | 1211.6687 | null | null |
Graph Laplacians on Singular Manifolds: Toward understanding complex
spaces: graph Laplacians on manifolds with singularities and boundaries | cs.AI cs.CG cs.LG | Recently, much of the existing work in manifold learning has been done under
the assumption that the data is sampled from a manifold without boundaries and
singularities or that the functions of interest are evaluated away from such
points. At the same time, it can be argued that singularities and boundaries
are an important aspect of the geometry of realistic data.
In this paper we consider the behavior of graph Laplacians at points at or
near boundaries and two main types of other singularities: intersections, where
different manifolds come together and sharp "edges", where a manifold sharply
changes direction. We show that the behavior of graph Laplacian near these
singularities is quite different from that in the interior of the manifolds. In
fact, a phenomenon somewhat reminiscent of the Gibbs effect in the analysis of
Fourier series can be observed in the behavior of graph Laplacian near such
points. Unlike in the interior of the domain, where graph Laplacian converges
to the Laplace-Beltrami operator, near singularities graph Laplacian tends to a
first-order differential operator, which exhibits different scaling behavior as
a function of the kernel width. One important implication is that while points
near the singularities occupy only a small part of the total volume, the
difference in scaling results in a disproportionately large contribution to the
total behavior. Another significant finding is that while the scaling behavior
of the operator is the same near different types of singularities, they are
very distinct at a more refined level of analysis.
We believe that a comprehensive understanding of these structures in addition
to the standard case of a smooth manifold can take us a long way toward better
methods for analysis of complex non-linear data and can lead to significant
progress in algorithm design.
| Mikhail Belkin and Qichao Que and Yusu Wang and Xueyuan Zhou | null | 1211.6727 | null | null |
On unbiased performance evaluation for protein inference | stat.AP cs.LG q-bio.QM | This letter is a response to the comments of Serang (2012) on Huang and He
(2012) in Bioinformatics. Serang (2012) claimed that the parameters for the
Fido algorithm should be specified using the grid search method in Serang et
al. (2010) so as to generate a deserved accuracy in performance comparison. It
seems that it is an argument on parameter tuning. However, it is indeed the
issue of how to conduct an unbiased performance evaluation for comparing
different protein inference algorithms. In this letter, we explain why we
do not use the grid search for parameter selection in Huang and He (2012) and
show that this procedure may result in an over-estimated performance that is
unfair to competing algorithms. In fact, this issue has also been pointed out
by Li and Radivojac (2012).
| Zengyou He, Ting Huang, Peijun Zhu | null | 1211.6834 | null | null |
Classification Recouvrante Bas\'ee sur les M\'ethodes \`a Noyau (Overlapping Clustering Based on Kernel Methods) | cs.LG stat.CO stat.ME stat.ML | The overlapping clustering problem is an important learning issue in which
clusters are not mutually exclusive and each object may belong simultaneously
to several clusters. This paper presents a kernel-based method that produces
overlapping clusters in a high-dimensional feature space, using Mercer kernel techniques to
improve the separability of input patterns. The proposed method, called
OKM-K(Overlapping $k$-means based kernel method), extends OKM (Overlapping
$k$-means) method to produce overlapping schemes. Experiments are performed on
overlapping dataset and empirical results obtained with OKM-K outperform
results obtained with OKM.
| Chiheb-Eddine Ben N'Cir and Nadia Essoussi | null | 1211.6851 | null | null |
Overlapping clustering based on kernel similarity metric | stat.ML cs.LG stat.ME | Producing overlapping schemes is a major issue in clustering. Recently proposed
overlapping methods rely on the search for an optimal covering and are based
on different metrics, such as Euclidean distance and I-divergence, used to
measure closeness between observations. In this paper, we propose the use of
another measure for overlapping clustering, based on a kernel similarity
metric. We also estimate the number of overlapping clusters using the Gram matrix.
Experiments on both the Iris and EachMovie datasets show the correctness of the
estimated number of clusters and show that the kernel-similarity-based measure
improves precision, recall and F-measure in overlapping
clustering.
| Chiheb-Eddine Ben N'Cir and Nadia Essoussi and Patrice Bertrand | null | 1211.6859 | null | null |
Automating rule generation for grammar checkers | cs.CL cs.LG | In this paper, I describe several approaches to automatic or semi-automatic
development of symbolic rules for grammar checkers from the information
contained in corpora. The rules obtained this way are an important addition to
manually-created rules that seem to dominate in rule-based checkers. However,
the manual process of creation of rules is costly, time-consuming and
error-prone. It seems therefore advisable to use machine-learning algorithms to
create the rules automatically or semi-automatically. The results obtained seem
to corroborate my initial hypothesis that symbolic machine learning algorithms
can be useful for acquiring new rules for grammar checking. It turns out,
however, that for practical uses, error corpora cannot be the sole source of
information used in grammar checking. I suggest therefore that only by using
different approaches, grammar-checkers, or more generally, computer-aided
proofreading tools, will be able to cover most frequent and severe mistakes and
avoid false alarms that seem to distract users.
| Marcin Mi{\l}kowski | null | 1211.6887 | null | null |
On the Use of Non-Stationary Policies for Stationary Infinite-Horizon
Markov Decision Processes | cs.LG cs.AI | We consider infinite-horizon stationary $\gamma$-discounted Markov Decision
Processes, for which it is known that there exists a stationary optimal policy.
Using Value and Policy Iteration with some error $\epsilon$ at each iteration,
it is well-known that one can compute stationary policies that are
$\frac{2\gamma}{(1-\gamma)^2}\epsilon$-optimal. After arguing that this
guarantee is tight, we develop variations of Value and Policy Iteration for
computing non-stationary policies that can be up to
$\frac{2\gamma}{1-\gamma}\epsilon$-optimal, which constitutes a significant
improvement in the usual situation when $\gamma$ is close to 1. Surprisingly,
this shows that the problem of "computing near-optimal non-stationary policies"
is much simpler than that of "computing near-optimal stationary policies".
| Bruno Scherrer (INRIA Nancy - Grand Est / LORIA), Boris Lesner (INRIA
Nancy - Grand Est / LORIA) | null | 1211.6898 | null | null |
Learning-Assisted Automated Reasoning with Flyspeck | cs.AI cs.DL cs.LG cs.LO | The considerable mathematical knowledge encoded by the Flyspeck project is
combined with external automated theorem provers (ATPs) and machine-learning
premise selection methods trained on the proofs, producing an AI system capable
of answering a wide range of mathematical queries automatically. The
performance of this architecture is evaluated in a bootstrapping scenario
emulating the development of Flyspeck from axioms to the last theorem, each
time using only the previous theorems and proofs. It is shown that 39% of the
14185 theorems could be proved in a push-button mode (without any high-level
advice and user interaction) in 30 seconds of real time on a fourteen-CPU
workstation. The necessary work involves: (i) an implementation of sound
translations of the HOL Light logic to ATP formalisms: untyped first-order,
polymorphic typed first-order, and typed higher-order, (ii) export of the
dependency information from HOL Light and ATP proofs for the machine learners,
and (iii) choice of suitable representations and methods for learning from
previous proofs, and their integration as advisors with HOL Light. This work is
described and discussed here, and an initial analysis of the body of proofs
that were found fully automatically is provided.
| Cezary Kaliszyk and Josef Urban | 10.1007/s10817-014-9303-3 | 1211.7012 | null | null |
Orientation Determination from Cryo-EM images Using Least Unsquared
Deviation | cs.LG math.NA math.OC q-bio.BM | A major challenge in single particle reconstruction from cryo-electron
microscopy is to establish a reliable ab-initio three-dimensional model using
two-dimensional projection images with unknown orientations. Common-lines based
methods estimate the orientations without additional geometric information.
However, such methods fail when the detection rate of common-lines is too low
due to the high level of noise in the images. An approximation to the least
squares global self consistency error was obtained using convex relaxation by
semidefinite programming. In this paper we introduce a more robust global self
consistency error and show that the corresponding optimization problem can be
solved via semidefinite relaxation. In order to prevent artificial clustering
of the estimated viewing directions, we further introduce a spectral norm term
that is added as a constraint or as a regularization term to the relaxed
minimization problem. The resulting problems are solved using either the
alternating direction method of multipliers or an iteratively reweighted least
squares procedure. Numerical experiments with both simulated and real images
demonstrate that the proposed methods significantly reduce the orientation
estimation error when the detection rate of common-lines is low.
| Lanhui Wang, Amit Singer, Zaiwen Wen | null | 1211.7045 | null | null |
A recursive divide-and-conquer approach for sparse principal component
analysis | cs.CV cs.LG stat.ML | In this paper, a new method is proposed for sparse PCA based on the recursive
divide-and-conquer methodology. The main idea is to separate the original
sparse PCA problem into a series of much simpler sub-problems, each having a
closed-form solution. By recursively solving these sub-problems in an
analytical way, an efficient algorithm is constructed to solve the sparse PCA
problem. The algorithm only involves simple computations and is thus easy to
implement. The proposed method can also be very easily extended to other sparse
PCA problems with certain constraints, such as the nonnegative sparse PCA
problem. Furthermore, we have shown that the proposed algorithm converges to a
stationary point of the problem, and its computational complexity is
approximately linear in both data size and dimensionality. The effectiveness of
the proposed method is substantiated by extensive experiments implemented on a
series of synthetic and real data in both reconstruction-error-minimization and
data-variance-maximization viewpoints.
| Qian Zhao and Deyu Meng and Zongben Xu | null | 1211.7219 | null | null |
Efficient algorithms for robust recovery of images from compressed data | cs.IT cs.LG math.IT stat.ML | Compressed sensing (CS) is an important theory for sub-Nyquist sampling and
recovery of compressible data. Recently, it has been extended by Pham and
Venkatesh to cope with the case where corruption to the CS data is modeled as
impulsive noise. The new formulation, termed as robust CS, combines robust
statistics and CS into a single framework to suppress outliers in the CS
recovery. To solve the newly formulated robust CS problem, Pham and Venkatesh
suggested a scheme that iteratively solves a number of CS problems, the
solutions from which converge to the true robust compressed sensing solution.
However, this scheme is rather inefficient as it has to use existing CS solvers
as a proxy. To overcome the limitations of the original robust CS algorithm, we
propose in this paper to solve the robust CS problem directly and derive more
computationally efficient algorithms by following the latest advances in
large-scale convex optimization for non-smooth regularization. Furthermore, we
also extend the robust CS formulation to various settings, including additional
affine constraints, $\ell_1$-norm loss function, mixed-norm regularization, and
multi-tasking, so as to further improve robust CS. We also derive simple but
effective algorithms to solve these extensions. We demonstrate that the new
algorithms provide much better computational advantage over the original robust
CS formulation, and effectively solve more sophisticated extensions where the
original methods simply cannot. We demonstrate the usefulness of the extensions
on several CS imaging tasks.
| Duc Son Pham and Svetha Venkatesh | null | 1211.7276 | null | null |
Approximate Rank-Detecting Factorization of Low-Rank Tensors | stat.ML cs.LG math.NA | We present an algorithm, AROFAC2, which detects the (CP-)rank of a degree 3
tensor and calculates its factorization into rank-one components. We provide
generative conditions for the algorithm to work and demonstrate on both
synthetic and real world data that AROFAC2 is a potentially outperforming
alternative to the gold standard PARAFAC over which it has the advantages that
it can intrinsically detect the true rank, avoids spurious components, and is
stable with respect to outliers and non-Gaussian noise.
| Franz J. Kir\'aly and Andreas Ziehe | null | 1211.7369 | null | null |
Cumulative Step-size Adaptation on Linear Functions | cs.LG stat.ML | The CSA-ES is an Evolution Strategy with Cumulative Step size Adaptation,
where the step size is adapted measuring the length of a so-called cumulative
path. The cumulative path is a combination of the previous steps realized by
the algorithm, where the importance of each step decreases with time. This
article studies the CSA-ES on composites of strictly increasing functions with
affine linear functions through the investigation of its underlying Markov
chains. Rigorous results on the change and the variation of the step size are
derived with and without cumulation. The step-size diverges geometrically fast
in most cases. Furthermore, the influence of the cumulation parameter is
studied.
| Alexandre Chotard (INRIA Saclay - Ile de France, LRI), Anne Auger
(INRIA Saclay - Ile de France), Nikolaus Hansen (INRIA Saclay - Ile de
France) | null | 1212.0139 | null | null |
Pedestrian Detection with Unsupervised Multi-Stage Feature Learning | cs.CV cs.LG | Pedestrian detection is a problem of considerable practical interest. Adding
to the list of successful applications of deep learning methods to vision, we
report state-of-the-art and competitive results on all major pedestrian
datasets with a convolutional network model. The model uses a few new twists,
such as multi-stage features, connections that skip layers to integrate global
shape information with local distinctive motif information, and an unsupervised
method based on convolutional sparse coding to pre-train the filters at each
stage.
| Pierre Sermanet and Koray Kavukcuoglu and Soumith Chintala and Yann
LeCun | null | 1212.0142 | null | null |
Message-Passing Algorithms for Quadratic Minimization | cs.IT cs.LG math.IT stat.ML | Gaussian belief propagation (GaBP) is an iterative algorithm for computing
the mean of a multivariate Gaussian distribution, or equivalently, the minimum
of a multivariate positive definite quadratic function. Sufficient conditions,
such as walk-summability, that guarantee the convergence and correctness of
GaBP are known, but GaBP may fail to converge to the correct solution given an
arbitrary positive definite quadratic function. As was observed in previous
work, the GaBP algorithm fails to converge if the computation trees produced by
the algorithm are not positive definite. In this work, we will show that the
failure modes of the GaBP algorithm can be understood via graph covers, and we
prove that a parameterized generalization of the min-sum algorithm can be used
to ensure that the computation trees remain positive definite whenever the
input matrix is positive definite. We demonstrate that the resulting algorithm
is closely related to other iterative schemes for quadratic minimization such
as the Gauss-Seidel and Jacobi algorithms. Finally, we observe, empirically,
that there always exists a choice of parameters such that the above
generalization of the GaBP algorithm converges.
| Nicholas Ruozzi and Sekhar Tatikonda | null | 1212.0171 | null | null |
Hypergraph and protein function prediction with gene expression data | stat.ML cs.LG q-bio.QM | Most network-based protein (or gene) function prediction methods are based on
the assumption that the labels of two adjacent proteins in the network are
likely to be the same. However, the pairwise-relationship assumption is not
complete: it misses the information that groups of genes showing very similar
patterns of expression tend to have similar functions (i.e. functional
modules). A natural way of overcoming this information loss is to represent
the gene expression data as a hypergraph. Thus, in this paper, three hypergraph
Laplacian based semi-supervised learning methods (un-normalized, random walk,
and symmetric normalized), applied to a hypergraph constructed from the gene
expression data in order to predict the functions of yeast proteins, are
introduced. Experimental results show that the average accuracy of these three
hypergraph Laplacian based semi-supervised learning methods is the same.
However, it is much greater than the average accuracy of the un-normalized
graph Laplacian based semi-supervised learning method (the baseline method of
this paper) applied to the gene co-expression network created from the gene
expression data.
| Loc Tran | null | 1212.0388 | null | null |
Nonparametric risk bounds for time-series forecasting | math.ST cs.LG stat.ML stat.TH | We derive generalization error bounds for traditional time-series forecasting
models. Our results hold for many standard forecasting tools including
autoregressive models, moving average models, and, more generally, linear
state-space models. These non-asymptotic bounds need only weak assumptions on
the data-generating process, yet allow forecasters to select among competing
models and to guarantee, with high probability, that their chosen model will
perform well. We motivate our techniques with and apply them to standard
economic and financial forecasting tools---a GARCH model for predicting equity
volatility and a dynamic stochastic general equilibrium model (DSGE), the
standard tool in macroeconomic forecasting. We demonstrate in particular how
our techniques can aid forecasters and policy makers in choosing models which
behave well under uncertainty and mis-specification.
| Daniel J. McDonald and Cosma Rohilla Shalizi and Mark Schervish | null | 1212.0463 | null | null |
Low-rank Matrix Completion using Alternating Minimization | stat.ML cs.LG math.OC | Alternating minimization represents a widely applicable and empirically
successful approach for finding low-rank matrices that best fit the given data.
For example, for the problem of low-rank matrix completion, this method is
believed to be one of the most accurate and efficient, and formed a major
component of the winning entry in the Netflix Challenge.
In the alternating minimization approach, the low-rank target matrix is
written in a bi-linear form, i.e. $X = UV^\dag$; the algorithm then alternates
between finding the best $U$ and the best $V$. Typically, each alternating step
in isolation is convex and tractable. However the overall problem becomes
non-convex and there has been almost no theoretical understanding of when this
approach yields a good result.
In this paper we present the first theoretical analysis of the performance of
alternating minimization for matrix completion, and the related problem of
matrix sensing. For both these problems, celebrated recent results have shown
that they become well-posed and tractable once certain (now standard)
conditions are imposed on the problem. We show that alternating minimization
also succeeds under similar conditions. Moreover, compared to existing results,
our paper shows that alternating minimization guarantees faster (in particular,
geometric) convergence to the true matrix, while allowing a simpler analysis.
| Prateek Jain, Praneeth Netrapalli and Sujay Sanghavi | null | 1212.0467 | null | null |
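The entry above analyzes alternating minimization for matrix completion. Below is a minimal NumPy sketch of the basic bilinear scheme (alternating ridge-regularized least squares over the observed entries); it omits the initialization and sample-splitting steps that the paper's theory relies on, and all sizes and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of alternating minimization for matrix completion:
# fix V and solve a least-squares problem for each row of U, then swap.
# This illustrates the bilinear form X ~ U V^T; it is not the paper's
# exact analyzed procedure.
import numpy as np

def altmin_complete(M, mask, rank, n_iters=50, reg=1e-3):
    """M: full matrix (only entries where mask is True are read)."""
    n, m = M.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((n, rank))
    V = rng.standard_normal((m, rank))
    for _ in range(n_iters):
        # Update each row of U with V fixed (ridge-regularized least squares).
        for i in range(n):
            cols = mask[i]
            A = V[cols]
            U[i] = np.linalg.solve(A.T @ A + reg * np.eye(rank), A.T @ M[i, cols])
        # Update each row of V with U fixed.
        for j in range(m):
            rows = mask[:, j]
            A = U[rows]
            V[j] = np.linalg.solve(A.T @ A + reg * np.eye(rank), A.T @ M[rows, j])
    return U, V

# Toy example: recover a rank-2 matrix from roughly 40% of its entries.
rng = np.random.default_rng(1)
M_true = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 40))
mask = rng.random(M_true.shape) < 0.4
U, V = altmin_complete(M_true, mask, rank=2)
err = np.linalg.norm(U @ V.T - M_true) / np.linalg.norm(M_true)
print(f"relative recovery error: {err:.3e}")
```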
Machine learning prediction of cancer cell sensitivity to drugs based on
genomic and chemical properties | q-bio.GN cs.CE cs.LG q-bio.CB | Predicting the response of a specific cancer to a therapy is a major goal in
modern oncology that should ultimately lead to a personalised treatment.
High-throughput screenings of potentially active compounds against a panel of
genomically heterogeneous cancer cell lines have unveiled multiple
relationships between genomic alterations and drug responses. Various
computational approaches have been proposed to predict sensitivity based on
genomic features, while others have used the chemical properties of the drugs
to ascertain their effect. In an effort to integrate these complementary
approaches, we developed machine learning models to predict the response of
cancer cell lines to drug treatment, quantified through IC50 values, based on
both the genomic features of the cell lines and the chemical properties of the
considered drugs. Models predicted IC50 values in an 8-fold cross-validation and
an independent blind test with coefficient of determination R2 of 0.72 and 0.64
respectively. Furthermore, models were able to predict with comparable accuracy
(R2 of 0.61) IC50s of cell lines from a tissue not used in the training stage.
Our in silico models can be used to optimise the experimental design of
drug-cell screenings by estimating a large proportion of missing IC50 values
rather than experimentally measure them. The implications of our results go
beyond virtual drug screening design: potentially thousands of drugs could be
probed in silico to systematically test their potential efficacy as anti-tumour
agents based on their structure, thus providing a computational framework to
identify new drug repositioning opportunities as well as ultimately be useful
for personalized medicine by linking the genomic traits of patients to drug
sensitivity.
| Michael P. Menden, Francesco Iorio, Mathew Garnett, Ultan McDermott,
Cyril Benes, Pedro J. Ballester, Julio Saez-Rodriguez | 10.1371/journal.pone.0061318 | 1212.0504 | null | null |
An Empirical Evaluation of Portfolios Approaches for solving CSPs | cs.AI cs.LG | Recent research in areas such as SAT solving and Integer Linear Programming
has shown that the performances of a single arbitrarily efficient solver can be
significantly outperformed by a portfolio of possibly slower on-average
solvers. We report an empirical evaluation and comparison of portfolio
approaches applied to Constraint Satisfaction Problems (CSPs). We compared
models developed on top of off-the-shelf machine learning algorithms with
respect to approaches used in the SAT field and adapted for CSPs, considering
different portfolio sizes and using as evaluation metrics the number of solved
problems and the time taken to solve them. Results indicate that the best SAT
approaches have top performances also in the CSP field and are slightly more
competitive than simple models built on top of classification algorithms.
| Roberto Amadini, Maurizio Gabbrielli, Jacopo Mauro | null | 1212.0692 | null | null |
Training Support Vector Machines Using Frank-Wolfe Optimization Methods | cs.LG cs.CV math.OC stat.ML | Training a Support Vector Machine (SVM) requires the solution of a quadratic
programming problem (QP) whose computational complexity becomes prohibitively
expensive for large scale datasets. Traditional optimization methods cannot be
directly applied in these cases, mainly due to memory restrictions.
By adopting a slightly different objective function and under mild conditions
on the kernel used within the model, efficient algorithms to train SVMs have
been devised under the name of Core Vector Machines (CVMs). This framework
exploits the equivalence of the resulting learning problem with the task of
building a Minimal Enclosing Ball (MEB) problem in a feature space, where data
is implicitly embedded by a kernel function.
In this paper, we improve on the CVM approach by proposing two novel methods
to build SVMs based on the Frank-Wolfe algorithm, recently revisited as a fast
method to approximate the solution of a MEB problem. In contrast to CVMs, our
algorithms do not require to compute the solutions of a sequence of
increasingly complex QPs and are defined by using only analytic optimization
steps. Experiments on a large collection of datasets show that our methods
scale better than CVMs in most cases, sometimes at the price of a slightly
lower accuracy. As CVMs, the proposed methods can be easily extended to machine
learning problems other than binary classification. However, effective
classifiers are also obtained using kernels which do not satisfy the condition
required by CVMs and can thus be used for a wider set of problems.
| Emanuele Frandi, Ricardo Nanculef, Maria Grazia Gasparo, Stefano Lodi,
Claudio Sartori | 10.1142/S0218001413600033 | 1212.0695 | null | null |
Dynamic recommender system : using cluster-based biases to improve the
accuracy of the predictions | cs.LG cs.DB cs.IR | It is today accepted that matrix factorization models allow a high quality of
rating prediction in recommender systems. However, a major drawback of matrix
factorization is its static nature, which results in a progressive decline of
the accuracy of the predictions after each factorization. This is due to the
fact that the new obtained ratings are not taken into account until a new
factorization is computed, which can not be done very often because of the high
cost of matrix factorization.
In this paper, aiming at improving the accuracy of recommender systems, we
propose a cluster-based matrix factorization technique that enables online
integration of new ratings. Thus, we significantly enhance the obtained
predictions between two matrix factorizations. We use finer-grained user biases
by clustering similar items into groups, and allocating in these groups a bias
to each user. The experiments we did on large datasets demonstrated the
efficiency of our approach.
| Modou Gueye, Talel Abdessalem, Hubert Naacke | null | 1212.0763 | null | null |
Advances in Optimizing Recurrent Networks | cs.LG | After a more than decade-long period of relatively little research activity
in the area of recurrent neural networks, several new developments will be
reviewed here that have allowed substantial progress both in understanding and
in technical solutions towards more efficient training of recurrent networks.
These advances have been motivated by and related to the optimization issues
surrounding deep learning. Although recurrent networks are extremely powerful
in what they can in principle represent in terms of modelling sequences, their
training is plagued by two aspects of the same issue regarding the learning of
long-term dependencies. Experiments reported here evaluate the use of clipping
gradients, spanning longer time ranges with leaky integration, advanced
momentum techniques, using more powerful output probability models, and
encouraging sparser gradients to help symmetry breaking and credit assignment.
The experiments are performed on text and music data and show off the combined
effects of these techniques in generally improving both training and test
error.
| Yoshua Bengio, Nicolas Boulanger-Lewandowski and Razvan Pascanu | null | 1212.0901 | null | null |
Multiclass Diffuse Interface Models for Semi-Supervised Learning on
Graphs | stat.ML cs.LG math.ST physics.data-an stat.TH | We present a graph-based variational algorithm for multiclass classification
of high-dimensional data, motivated by total variation techniques. The energy
functional is based on a diffuse interface model with a periodic potential. We
augment the model by introducing an alternative measure of smoothness that
preserves symmetry among the class labels. Through this modification of the
standard Laplacian, we construct an efficient multiclass method that allows for
sharp transitions between classes. The experimental results demonstrate that
our approach is competitive with the state of the art among other graph-based
algorithms.
| Cristina Garcia-Cardona, Arjuna Flenner and Allon G. Percus | null | 1212.0945 | null | null |
Evaluating Classifiers Without Expert Labels | cs.LG cs.IR stat.ML | This paper considers the challenge of evaluating a set of classifiers, as
done in shared task evaluations like the KDD Cup or NIST TREC, without expert
labels. While expert labels provide the traditional cornerstone for evaluating
statistical learners, limited or expensive access to experts represents a
practical bottleneck. Instead, we seek methodology for estimating performance
of the classifiers which is more scalable than expert labeling yet preserves
high correlation with evaluation based on expert labels. We consider both: 1)
using only labels automatically generated by the classifiers (blind
evaluation); and 2) using labels obtained via crowdsourcing. While
crowdsourcing methods are lauded for scalability, using such data for
evaluation raises serious concerns given the prevalence of label noise. In
regard to blind evaluation, two broad strategies are investigated: combine &
score and score & combine. Combine & score methods infer a single pseudo-gold
label set by aggregating classifier labels; classifiers are then evaluated
based on this single pseudo-gold label set. On the other hand, score & combine methods: 1)
sample multiple label sets from classifier outputs, 2) evaluate classifiers on
each label set, and 3) average classifier performance across label sets. When
additional crowd labels are also collected, we investigate two alternative
avenues for exploiting them: 1) direct evaluation of classifiers; or 2)
supervision of combine & score methods. To assess generality of our techniques,
classifier performance is measured using four common classification metrics,
with statistical significance tests. Finally, we measure both score and rank
correlations between estimated classifier performance vs. actual performance
according to expert judgments. Rigorous evaluation of classifiers from the TREC
2011 Crowdsourcing Track shows reliable evaluation can be achieved without
reliance on expert labels.
| Hyun Joon Jung and Matthew Lease | null | 1212.0960 | null | null |
Compiling Relational Database Schemata into Probabilistic Graphical
Models | cs.AI cs.DB cs.LG stat.ML | Instead of requiring a domain expert to specify the probabilistic
dependencies of the data, in this work we present an approach that uses the
relational DB schema to automatically construct a Bayesian graphical model for
a database. This resulting model contains customized distributions for columns,
latent variables that cluster the data, and factors that reflect and represent
the foreign key links. Experiments demonstrate the accuracy of the model and
the scalability of inference on synthetic and real-world data.
| Sameer Singh and Thore Graepel | null | 1212.0967 | null | null |
Cost-Sensitive Support Vector Machines | cs.LG stat.ML | A new procedure for learning cost-sensitive SVM(CS-SVM) classifiers is
proposed. The SVM hinge loss is extended to the cost sensitive setting, and the
CS-SVM is derived as the minimizer of the associated risk. The extension of the
hinge loss draws on recent connections between risk minimization and
probability elicitation. These connections are generalized to cost-sensitive
classification, in a manner that guarantees consistency with the cost-sensitive
Bayes risk, and associated Bayes decision rule. This ensures that optimal
decision rules, under the new hinge loss, implement the Bayes-optimal
cost-sensitive classification boundary. Minimization of the new hinge loss is
shown to be a generalization of the classic SVM optimization problem, and can
be solved by identical procedures. The dual problem of CS-SVM is carefully
scrutinized by means of regularization theory and sensitivity analysis and the
CS-SVM algorithm is substantiated. The proposed algorithm is also extended to
cost-sensitive learning with example dependent costs. The minimum cost
sensitive risk is proposed as the performance measure and is connected to ROC
analysis through vector optimization. The resulting algorithm avoids the
shortcomings of previous approaches to cost-sensitive SVM design, and is shown
to have superior experimental performance on a large number of cost sensitive
and imbalanced datasets.
| Hamed Masnadi-Shirazi, Nuno Vasconcelos and Arya Iranmehr | null | 1212.0975 | null | null |
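The entry above derives a cost-sensitive extension of the SVM hinge loss. As a much simpler point of comparison (not the paper's CS-SVM), the sketch below reweights the per-class penalty of a standard linear SVM via scikit-learn's class_weight option, to show how asymmetric misclassification costs shift the decision boundary on an imbalanced synthetic problem; the cost values and data are illustrative assumptions.

```python
# Simple cost-sensitive baseline: reweight each class's hinge-loss penalty.
# This only illustrates the effect of asymmetric costs, not the authors'
# modified hinge loss or consistency analysis.
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Imbalanced synthetic data: class 1 is the rare (costly-to-miss) class.
X, y = make_classification(n_samples=3000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for weights in (None, {0: 1.0, 1: 10.0}):   # assume missing class 1 costs 10x
    clf = LinearSVC(class_weight=weights, max_iter=10000).fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
    print(f"class_weight={weights}: false negatives={fn}, false positives={fp}")
```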
Making Early Predictions of the Accuracy of Machine Learning
Applications | cs.LG cs.AI stat.ML | The accuracy of machine learning systems is a widely studied research topic.
Established techniques such as cross-validation predict the accuracy on unseen
data of the classifier produced by applying a given learning method to a given
training data set. However, they do not predict whether incurring the cost of
obtaining more data and undergoing further training will lead to higher
accuracy. In this paper we investigate techniques for making such early
predictions. We note that when a machine learning algorithm is presented with a
training set the classifier produced, and hence its error, will depend on the
characteristics of the algorithm, on the training set's size, and also on its
specific composition. In particular we hypothesise that if a number of
classifiers are produced, and their observed error is decomposed into bias and
variance terms, then although these components may behave differently, their
behaviour may be predictable.
We test our hypothesis by building models that, given a measurement taken
from the classifier created from a limited number of samples, predict the
values that would be measured from the classifier produced when the full data
set is presented. We create separate models for bias, variance and total error.
Our models are built from the results of applying ten different machine
learning algorithms to a range of data sets, and tested with "unseen"
algorithms and datasets. We analyse the results for various numbers of initial
training samples, and total dataset sizes. Results show that our predictions
are very highly correlated with the values observed after undertaking the extra
training. Finally we consider the more complex case where an ensemble of
heterogeneous classifiers is trained, and show how we can accurately estimate
an upper bound on the accuracy achievable after further training.
| J. E. Smith, P. Caleb-Solly, M. A. Tahir, D. Sannen, H. van-Brussel | null | 1212.1100 | null | null |
On the Convergence Properties of Optimal AdaBoost | cs.LG cs.AI stat.ML | AdaBoost is one of the most popular ML algorithms. It is simple to implement
and often found very effective by practitioners, while still being
mathematically elegant and theoretically sound. AdaBoost's interesting behavior
in practice still puzzles the ML community. We address the algorithm's
stability and establish multiple convergence properties of "Optimal AdaBoost,"
a term coined by Rudin, Daubechies, and Schapire in 2004. We prove, in a
reasonably strong computational sense, the almost universal existence of time
averages, and with that, the convergence of the classifier itself, its
generalization error, and its resulting margins, among many other objects, for
fixed data sets under arguably reasonable conditions. Specifically, we frame
Optimal AdaBoost as a dynamical system and, employing tools from ergodic
theory, prove that, under a condition that Optimal AdaBoost does not have ties
for best weak classifier eventually, a condition for which we provide empirical
evidence from high dimensional real-world datasets, the algorithm's update
behaves like a continuous map. We provide constructive proofs of several
arbitrarily accurate approximations of Optimal AdaBoost; prove that they
exhibit certain cycling behavior in finite time, and that the resulting
dynamical system is ergodic; and establish sufficient conditions for the same
to hold for the actual Optimal-AdaBoost update. We believe that our results
provide reasonably strong evidence for the affirmative answer to two open
conjectures, at least from a broad computational-theory perspective: AdaBoost
always cycles and is an ergodic dynamical system. We present empirical evidence
that cycles are hard to detect while time averages stabilize quickly. Our
results ground future convergence-rate analysis and may help optimize
generalization ability and alleviate a practitioner's burden of deciding how
long to run the algorithm.
| Joshua Belanich and Luis E. Ortiz | null | 1212.1108 | null | null |
Using Wikipedia to Boost SVD Recommender Systems | cs.LG cs.IR stat.ML | Singular Value Decomposition (SVD) has been used successfully in recent years
in the area of recommender systems. In this paper we present how this model can
be extended to consider both user ratings and information from Wikipedia. By
mapping items to Wikipedia pages and quantifying their similarity, we are able
to use this information in order to improve recommendation accuracy, especially
when the sparsity is high. Another advantage of the proposed approach is the
fact that it can be easily integrated into any other SVD implementation,
regardless of additional parameters that may have been added to it. Preliminary
experimental results on the MovieLens dataset are encouraging.
| Gilad Katz, Guy Shani, Bracha Shapira, Lior Rokach | null | 1212.1131 | null | null |
On Some Integrated Approaches to Inference | stat.ML cs.LG | We present arguments for the formulation of unified approach to different
standard continuous inference methods from partial information. It is claimed
that an explicit partition of information into a priori (prior knowledge) and a
posteriori information (data) is an important way of standardizing inference
approaches so that they can be compared on a normative scale, and so that
notions of optimal algorithms become farther-reaching. The inference methods
considered include neural network approaches, information-based complexity, and
Monte Carlo, spline, and regularization methods. The model is an extension of
currently used continuous complexity models, with a class of algorithms in the
form of optimization methods, in which an optimization functional (involving
the data) is minimized. This extends the family of current approaches in
continuous complexity theory, which include the use of interpolatory algorithms
in worst and average case settings.
| Mark A. Kon and Leszek Plaskota | null | 1212.1180 | null | null |
Distributed Adaptive Networks: A Graphical Evolutionary Game-Theoretic
View | cs.GT cs.LG | Distributed adaptive filtering has been considered as an effective approach
for data processing and estimation over distributed networks. Most existing
distributed adaptive filtering algorithms focus on designing different
information diffusion rules, regardless of the natural evolutionary
characteristics of a distributed network. In this paper, we study the adaptive
network from the game theoretic perspective and formulate the distributed
adaptive filtering problem as a graphical evolutionary game. With the proposed
formulation, the nodes in the network are regarded as players and the local
combination of estimation information from different neighbors is regarded as
the selection of different strategies. We show that this graphical evolutionary game
framework is very general and can unify the existing adaptive network
algorithms. Based on this framework, as examples, we further propose two
error-aware adaptive filtering algorithms. Moreover, we use graphical
evolutionary game theory to analyze the information diffusion process over the
adaptive networks and evolutionarily stable strategy of the system. Finally,
simulation results are shown to verify the effectiveness of our analysis and
proposed methods.
| Chunxiao Jiang and Yan Chen and K. J. Ray Liu | 10.1109/TSP.2013.2280444 | 1212.1245 | null | null |
Excess risk bounds for multitask learning with trace norm regularization | stat.ML cs.LG | Trace norm regularization is a popular method of multitask learning. We give
excess risk bounds with explicit dependence on the number of tasks, the number
of examples per task and properties of the data distribution. The bounds are
independent of the dimension of the input space, which may be infinite as in
the case of reproducing kernel Hilbert spaces. A byproduct of the proof are
bounds on the expected norm of sums of random positive semidefinite matrices
with subexponential moments.
| Andreas Maurer and Massimiliano Pontil | null | 1212.1496 | null | null |
Layer-wise learning of deep generative models | cs.NE cs.LG stat.ML | When using deep, multi-layered architectures to build generative models of
data, it is difficult to train all layers at once. We propose a layer-wise
training procedure admitting a performance guarantee compared to the global
optimum. It is based on an optimistic proxy of future performance, the best
latent marginal. We interpret auto-encoders in this setting as generative
models, by showing that they train a lower bound of this criterion. We test the
new learning procedure against a state of the art method (stacked RBMs), and
find it to improve performance. Both theory and experiments highlight the
importance, when training deep architectures, of using an inference model (from
data to hidden variables) richer than the generative model (from hidden
variables to data).
| Ludovic Arnold and Yann Ollivier | null | 1212.1524 | null | null |
Learning Mixtures of Arbitrary Distributions over Large Discrete Domains | cs.LG cs.DS | We give an algorithm for learning a mixture of {\em unstructured}
distributions. This problem arises in various unsupervised learning scenarios,
for example in learning {\em topic models} from a corpus of documents spanning
several topics. We show how to learn the constituents of a mixture of $k$
arbitrary distributions over a large discrete domain $[n]=\{1,2,\dots,n\}$ and
the mixture weights, using $O(n\polylog n)$ samples. (In the topic-model
learning setting, the mixture constituents correspond to the topic
distributions.) This task is information-theoretically impossible for $k>1$
under the usual sampling process from a mixture distribution. However, there
are situations (such as the above-mentioned topic model case) in which each
sample point consists of several observations from the same mixture
constituent. This number of observations, which we call the {\em "sampling
aperture"}, is a crucial parameter of the problem. We obtain the {\em first}
bounds for this mixture-learning problem {\em without imposing any assumptions
on the mixture constituents.} We show that efficient learning is possible
exactly at the information-theoretically least-possible aperture of $2k-1$.
Thus, we achieve near-optimal dependence on $n$ and optimal aperture. While the
sample-size required by our algorithm depends exponentially on $k$, we prove
that such a dependence is {\em unavoidable} when one considers general
mixtures. A sequence of tools contribute to the algorithm, such as
concentration results for random matrices, dimension reduction, moment
estimations, and sensitivity analysis.
| Yuval Rabani, Leonard Schulman, Chaitanya Swamy | null | 1212.1527 | null | null |
Stochastic Gradient Descent for Non-smooth Optimization: Convergence
Results and Optimal Averaging Schemes | cs.LG math.OC stat.ML | Stochastic Gradient Descent (SGD) is one of the simplest and most popular
stochastic optimization methods. While it has already been theoretically
studied for decades, the classical analysis usually required non-trivial
smoothness assumptions, which do not apply to many modern applications of SGD
with non-smooth objective functions such as support vector machines. In this
paper, we investigate the performance of SGD without such smoothness
assumptions, as well as a running average scheme to convert the SGD iterates to
a solution with optimal optimization accuracy. In this framework, we prove that
after T rounds, the suboptimality of the last SGD iterate scales as
O(log(T)/\sqrt{T}) for non-smooth convex objective functions, and O(log(T)/T)
in the non-smooth strongly convex case. To the best of our knowledge, these are
the first bounds of this kind, and almost match the minimax-optimal rates
obtainable by appropriate averaging schemes. We also propose a new and simple
averaging scheme, which not only attains optimal rates, but can also be easily
computed on-the-fly (in contrast, the suffix averaging scheme proposed in
Rakhlin et al. (2011) is not as simple to implement). Finally, we provide some
experimental illustrations.
| Ohad Shamir and Tong Zhang | null | 1212.1824 | null | null |
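As a concrete toy illustration of the averaging discussion above, the sketch below runs subgradient descent on a non-smooth objective and maintains an on-the-fly weighted running average of the iterates. The polynomial-decay weight and the constant `eta` are illustrative assumptions, not necessarily the exact scheme or constants analysed in the paper.

```python
import numpy as np

def sgd_nonsmooth(subgrad, w0, steps, c=1.0, eta=3.0):
    """SGD with 1/sqrt(t) steps and an on-the-fly weighted average of the iterates."""
    w = np.array(w0, dtype=float)
    w_avg = w.copy()
    for t in range(1, steps + 1):
        w = w - (c / np.sqrt(t)) * subgrad(w)
        rho = (eta + 1.0) / (t + eta)          # weight of the newest iterate
        w_avg = (1.0 - rho) * w_avg + rho * w  # running weighted average
    return w, w_avg

# Toy non-smooth example: noisy subgradient of f(w) = ||w||_1.
rng = np.random.default_rng(0)
last_iterate, averaged = sgd_nonsmooth(
    lambda w: np.sign(w) + 0.1 * rng.standard_normal(w.shape),
    w0=[2.0, -3.0], steps=5000)
```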
High-dimensional sequence transduction | cs.LG | We investigate the problem of transforming an input sequence into a
high-dimensional output sequence in order to transcribe polyphonic audio music
into symbolic notation. We introduce a probabilistic model based on a recurrent
neural network that is able to learn realistic output distributions given the
input and we devise an efficient algorithm to search for the global mode of
that distribution. The resulting method produces musically plausible
transcriptions even under high levels of noise and drastically outperforms
previous state-of-the-art approaches on five datasets of synthesized sounds and
real recordings, approximately halving the test error rate.
| Nicolas Boulanger-Lewandowski, Yoshua Bengio and Pascal Vincent | null | 1212.1936 | null | null |
A simpler approach to obtaining an O(1/t) convergence rate for the
projected stochastic subgradient method | cs.LG math.OC stat.ML | In this note, we present a new averaging technique for the projected
stochastic subgradient method. By using a weighted average with a weight of t+1
for each iterate w_t at iteration t, we obtain the convergence rate of O(1/t)
with both an easy proof and an easy implementation. The new scheme is compared
empirically to existing techniques, with similar performance behavior.
| Simon Lacoste-Julien, Mark Schmidt, Francis Bach | null | 1212.2002 | null | null |
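The (t+1)-weighted average mentioned above can be maintained on the fly with a one-line recursive update. In this sketch, the strongly convex step size 1/(lam*(t+1)) and the `subgrad`/`project` callables are assumptions used for illustration rather than a faithful reproduction of the note's algorithm.

```python
import numpy as np

def weighted_avg_psgd(subgrad, project, w0, lam, steps):
    """Projected stochastic subgradient steps with a running average that gives
    iterate w_t weight (t+1); rho below equals (t+1) / sum_{s<=t} (s+1)."""
    w = project(np.array(w0, dtype=float))
    w_bar = w.copy()
    for t in range(steps):
        w = project(w - (1.0 / (lam * (t + 1))) * subgrad(w))
        rho = 2.0 / (t + 2)
        w_bar = (1.0 - rho) * w_bar + rho * w
    return w_bar
```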
A class of random fields on complete graphs with tractable partition
function | cs.LG stat.ML | The aim of this short note is to draw attention to a method by which the
partition function and marginal probabilities for a certain class of random
fields on complete graphs can be computed in polynomial time. This class
includes Ising models with homogeneous pairwise potentials but arbitrary
(inhomogeneous) unary potentials. Similarly, the partition function and
marginal probabilities can be computed in polynomial time for random fields on
complete bipartite graphs, provided they have homogeneous pairwise potentials.
We expect that these tractable classes of large scale random fields can be very
useful for the evaluation of approximation algorithms by providing exact error
estimates.
| Boris Flach | 10.1109/TPAMI.2013.99 | 1212.2136 | null | null |
Bag-of-Words Representation for Biomedical Time Series Classification | cs.LG cs.AI | Automatic analysis of biomedical time series such as electroencephalogram
(EEG) and electrocardiographic (ECG) signals has attracted great interest in
the community of biomedical engineering due to its important applications in
medicine. In this work, a simple yet effective bag-of-words representation that
is able to capture both local and global structure similarity information is
proposed for biomedical time series representation. In particular, similar to
the bag-of-words model used in text document domain, the proposed method treats
a time series as a text document and extracts local segments from the time
series as words. The biomedical time series is then represented as a histogram
of codewords, each entry of which is the number of times a codeword appears in the
time series. Although the temporal order of the local segments is ignored, the
bag-of-words representation is able to capture high-level structural
information because both local and global structural information are well
utilized. The performance of the bag-of-words model is validated on three
datasets extracted from real EEG and ECG signals. The experimental results
demonstrate that the proposed method is not only insensitive to parameters of
the bag-of-words model such as local segment length and codebook size, but also
robust to noise.
| Jin Wang, Ping Liu, Mary F.H. She, Saeid Nahavandi and Abbas
Kouzani | 10.1016/j.bspc.2013.06.004 | 1212.2262 | null | null |
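A minimal sketch of the bag-of-words pipeline for 1-D time series described in the preceding abstract. The segment length, stride, codebook size and the use of k-means as the codebook learner are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def bow_time_series(series_list, seg_len=32, stride=16, n_codewords=64, seed=0):
    """Represent each series as a histogram of codeword counts over local segments."""
    def segments(x):
        return np.array([x[i:i + seg_len]
                         for i in range(0, len(x) - seg_len + 1, stride)])

    all_segs = np.vstack([segments(x) for x in series_list])       # the "words"
    codebook = KMeans(n_clusters=n_codewords, n_init=10,
                      random_state=seed).fit(all_segs)              # the codebook

    histograms = [np.bincount(codebook.predict(segments(x)), minlength=n_codewords)
                  for x in series_list]
    return np.array(histograms), codebook
```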
Runtime Optimizations for Prediction with Tree-Based Models | cs.DB cs.IR cs.LG | Tree-based models have proven to be an effective solution for web ranking as
well as other problems in diverse domains. This paper focuses on optimizing the
runtime performance of applying such models to make predictions, given an
already-trained model. Although exceedingly simple conceptually, most
implementations of tree-based models do not efficiently utilize modern
superscalar processor architectures. By laying out data structures in memory in
a more cache-conscious fashion, removing branches from the execution flow using
a technique called predication, and micro-batching predictions using a
technique called vectorization, we are able to better exploit modern processor
architectures and significantly improve the speed of tree-based models over
hard-coded if-else blocks. Our work contributes to the exploration of
architecture-conscious runtime implementations of machine learning algorithms.
| Nima Asadi, Jimmy Lin, and Arjen P. de Vries | null | 1212.2287 | null | null |
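To illustrate the predication and vectorization ideas in the preceding abstract (in spirit only; the paper's gains come from a cache-conscious compiled implementation), here is a numpy sketch that evaluates one flattened tree over a micro-batch with arithmetic selection instead of per-example if/else branches. The array layout is an assumption.

```python
import numpy as np

def predict_tree_batch(X, feature, threshold, left, right, value, is_leaf, max_depth):
    """Branch-light prediction of a flattened tree over a batch X of shape (n, d).
    feature[] entries at leaf nodes must still be valid column indices (e.g. 0)."""
    nodes = np.zeros(X.shape[0], dtype=np.int64)      # every example starts at the root
    rows = np.arange(X.shape[0])
    for _ in range(max_depth):                        # fixed trip count, no data-dependent exit
        go_right = X[rows, feature[nodes]] > threshold[nodes]
        nxt = np.where(go_right, right[nodes], left[nodes])
        nodes = np.where(is_leaf[nodes], nodes, nxt)  # examples already at a leaf stay put
    return value[nodes]
```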
PAC-Bayesian Learning and Domain Adaptation | stat.ML cs.LG | In machine learning, Domain Adaptation (DA) arises when the distribution
generating the test (target) data differs from the one generating the learning
(source) data. It is well known that DA is a hard task even under strong
assumptions, among which the covariate shift, where the source and target
distributions diverge only in their marginals, i.e. they have the same labeling
function. Another popular approach is to consider a hypothesis class that
moves the two distributions closer while implying a low error for both tasks.
This is a VC-dimension approach that restricts the complexity of a hypothesis class
in order to get good generalization. Instead, we propose a PAC-Bayesian
approach that seeks for suitable weights to be given to each hypothesis in
order to build a majority vote. We prove a new DA bound in the PAC-Bayesian
context. This leads us to design the first DA-PAC-Bayesian algorithm based on
the minimization of the proposed bound. Doing so, we seek for a \rho-weighted
majority vote that takes into account a trade-off between three quantities. The
first two quantities being, as usual in the PAC-Bayesian approach, (a) the
complexity of the majority vote (measured by a Kullback-Leibler divergence) and
(b) its empirical risk (measured by the \rho-average errors on the source
sample). The third quantity is (c) the capacity of the majority vote to
distinguish some structural difference between the source and target samples.
| Pascal Germain, Amaury Habrard (LAHC), Fran\c{c}ois Laviolette, Emilie
Morvant (LIF) | null | 1212.2340 | null | null |
On the complexity of learning a language: An improvement of Block's
algorithm | cs.CL cs.LG | Language learning is thought to be a highly complex process. One of the
hurdles in learning a language is to learn the rules of syntax of the language.
Rules of syntax are often ordered in that before one rule can be applied one must
apply another. It has been thought that to learn the order of n rules one must
go through all n! permutations. Thus to learn the order of 27 rules would
require 27! steps or 1.08889x10^{28} steps. This number is much greater than
the number of seconds since the beginning of the universe! In an insightful
analysis the linguist Block ([Block 86], pp. 62-63, p.238) showed that with the
assumption of transitivity this vast number of learning steps reduces to a mere
377 steps. We present a mathematical analysis of the complexity of Block's
algorithm. The algorithm has a complexity of order n^2 given n rules. In
addition, we improve Block's results exponentially, by introducing an algorithm
that has complexity of order less than n log n.
| Eric Werner | null | 1212.2390 | null | null |
Mining Techniques in Network Security to Enhance Intrusion Detection
Systems | cs.CR cs.LG | In intrusion detection systems, classifiers still suffer from several
drawbacks such as data dimensionality and dominance, different network feature
types, and data impact on the classification. In this paper two significant
enhancements are presented to solve these drawbacks. The first enhancement is
an improved feature selection using sequential backward search and information
gain. This, in turn, extracts valuable features that positively enhance the
detection rate and reduce the false positive rate. The second enhancement is
transforming nominal network features into numeric ones by exploiting the
discrete random variable and the probability mass function to solve the problem
of different feature types, the problem of data dominance, and data impact on
the classification. The latter is combined with known normalization methods to
achieve a significant hybrid normalization approach. Finally, an intensive and
comparative study confirms the efficiency of these enhancements and shows
better performance compared with other proposed methods.
| Maher Salem and Ulrich Buehler | 10.5121/ijnsa | 1212.2414 | null | null |
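For reference, information gain as used in filter-style feature ranking (mentioned in the abstract above) can be computed as follows. This is a textbook sketch, not the paper's combined sequential-backward-search procedure, and it assumes discrete (already binned) feature values.

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature_col, labels):
    """IG(class; feature) = H(class) - sum_v P(feature=v) * H(class | feature=v)."""
    gain = entropy(labels)
    values, counts = np.unique(feature_col, return_counts=True)
    for v, cnt in zip(values, counts):
        gain -= (cnt / len(labels)) * entropy(labels[feature_col == v])
    return gain

def rank_features_by_gain(X, y):
    gains = np.array([information_gain(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(gains)[::-1], gains
```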
Robust Face Recognition using Local Illumination Normalization and
Discriminant Feature Point Selection | cs.LG cs.CV | Face recognition systems must be robust to variations in factors
such as facial expression, illumination, head pose and aging. In particular,
robustness against illumination variation is one of the most important problems
to be solved for the practical use of face recognition systems. The Gabor wavelet
is widely used in face detection and recognition because it makes it possible to
simulate the function of the human visual system. In this paper, we
propose a method for extracting Gabor wavelet features which is stable under
the variation of local illumination and show experiment results demonstrating
its effectiveness.
| Song Han, Jinsong Kim, Cholhun Kim, Jongchol Jo, and Sunam Han | null | 1212.2415 | null | null |
Active Collaborative Filtering | cs.IR cs.LG stat.ML | Collaborative filtering (CF) allows the preferences of multiple users to be
pooled to make recommendations regarding unseen products. We consider in this
paper the problem of online and interactive CF: given the current ratings
associated with a user, what queries (new ratings) would most improve the
quality of the recommendations made? We cast this terms of expected value of
information (EVOI); but the online computational cost of computing optimal
queries is prohibitive. We show how offline prototyping and computation of
bounds on EVOI can be used to dramatically reduce the required online
computation. The framework we develop is general, but we focus on derivations
and empirical study in the specific case of the multiple-cause vector
quantization model.
| Craig Boutilier, Richard S. Zemel, Benjamin Marlin | null | 1212.2442 | null | null |
Bayesian Hierarchical Mixtures of Experts | cs.LG stat.ML | The Hierarchical Mixture of Experts (HME) is a well-known tree-based model
for regression and classification, based on soft probabilistic splits. In its
original formulation it was trained by maximum likelihood, and is therefore
prone to over-fitting. Furthermore the maximum likelihood framework offers no
natural metric for optimizing the complexity and structure of the tree.
Previous attempts to provide a Bayesian treatment of the HME model have relied
either on ad-hoc local Gaussian approximations or have dealt with related
models representing the joint distribution of both input and output variables.
In this paper we describe a fully Bayesian treatment of the HME model based on
variational inference. By combining local and global variational methods we
obtain a rigorous lower bound on the marginal probability of the data under
the model. This bound is optimized during the training phase, and its resulting
value can be used for model order selection. We present results using this
approach for a data set describing robot arm kinematics.
| Christopher M. Bishop, Markus Svensen | null | 1212.2447 | null | null |
The Information Bottleneck EM Algorithm | cs.LG stat.ML | Learning with hidden variables is a central challenge in probabilistic
graphical models that has important implications for many real-life problems.
The classical approach is using the Expectation Maximization (EM) algorithm.
This algorithm, however, can get trapped in local maxima. In this paper we
explore a new approach that is based on the Information Bottleneck principle.
In this approach, we view the learning problem as a tradeoff between two
information theoretic objectives. The first is to make the hidden variables
uninformative about the identity of specific instances. The second is to make
the hidden variables informative about the observed attributes. By exploring
different tradeoffs between these two objectives, we can gradually converge on
a high-scoring solution. As we show, the resulting Information Bottleneck
Expectation Maximization (IB-EM) algorithm manages to find solutions that are
superior to standard EM methods.
| Gal Elidan, Nir Friedman | null | 1212.2460 | null | null |
A New Algorithm for Maximum Likelihood Estimation in Gaussian Graphical
Models for Marginal Independence | stat.ME cs.LG stat.ML | Graphical models with bi-directed edges (<->) represent marginal
independence: the absence of an edge between two vertices indicates that the
corresponding variables are marginally independent. In this paper, we consider
maximum likelihood estimation in the case of continuous variables with a
Gaussian joint distribution, sometimes termed a covariance graph model. We
present a new fitting algorithm which exploits standard regression techniques
and establish its convergence properties. Moreover, we contrast our procedure
to existing estimation methods.
| Mathias Drton, Thomas S. Richardson | null | 1212.2462 | null | null |
A Robust Independence Test for Constraint-Based Learning of Causal
Structure | cs.AI cs.LG stat.ML | Constraint-based (CB) learning is a formalism for learning a causal network
with a database D by performing a series of conditional-independence tests to
infer structural information. This paper considers a new test of independence
that combines ideas from Bayesian learning, Bayesian network inference, and
classical hypothesis testing to produce a more reliable and robust test. The
new test can be calculated in the same asymptotic time and space required for
the standard tests such as the chi-squared test, but it allows the
specification of a prior distribution over parameters and can be used when the
database is incomplete. We prove that the test is correct, and we demonstrate
empirically that, when used with a CB causal discovery algorithm with
noninformative priors, it recovers structural features more reliably and it
produces networks with smaller KL-Divergence, especially as the number of nodes
increases or the number of records decreases. Another benefit is the dramatic
reduction in the probability that a CB algorithm will stall during the search,
providing a remedy for an annoying problem plaguing CB learning when the
database is small.
| Denver Dash, Marek J. Druzdzel | null | 1212.2464 | null | null |
On Information Regularization | cs.LG stat.ML | We formulate a principle for classification with the knowledge of the
marginal distribution over the data points (unlabeled data). The principle is
cast in terms of Tikhonov style regularization where the regularization penalty
articulates the way in which the marginal density should constrain otherwise
unrestricted conditional distributions. Specifically, the regularization
penalty penalizes any information introduced between the examples and labels
beyond what is provided by the available labeled examples. The work extends
Szummer and Jaakkola's information regularization (NIPS 2002) to multiple
dimensions, providing a regularizer independent of the covering of the space
used in the derivation. We show in addition how the information regularizer can
be used as a measure of complexity of the classification task with unlabeled
data and prove a relevant sample-complexity bound. We illustrate the
regularization principle in practice by restricting the class of conditional
distributions to be logistic regression models and constructing the
regularization penalty from a finite set of unlabeled examples.
| Adrian Corduneanu, Tommi S. Jaakkola | null | 1212.2466 | null | null |
Large-Sample Learning of Bayesian Networks is NP-Hard | cs.LG cs.AI stat.ML | In this paper, we provide new complexity results for algorithms that learn
discrete-variable Bayesian networks from data. Our results apply whenever the
learning algorithm uses a scoring criterion that favors the simplest model able
to represent the generative distribution exactly. Our results therefore hold
whenever the learning algorithm uses a consistent scoring criterion and is
applied to a sufficiently large dataset. We show that identifying high-scoring
structures is hard, even when we are given an independence oracle, an inference
oracle, and/or an information oracle. Our negative results also apply to the
learning of discrete-variable Bayesian networks in which each node has at most
k parents, for all k > 3.
| David Maxwell Chickering, Christopher Meek, David Heckerman | null | 1212.2468 | null | null |
Reasoning about Bayesian Network Classifiers | cs.LG cs.AI stat.ML | Bayesian network classifiers are used in many fields, and one common class of
classifiers are naive Bayes classifiers. In this paper, we introduce an
approach for reasoning about Bayesian network classifiers in which we
explicitly convert them into Ordered Decision Diagrams (ODDs), which are then
used to reason about the properties of these classifiers. Specifically, we
present an algorithm for converting any naive Bayes classifier into an ODD, and
we show theoretically and experimentally that this algorithm can give us an ODD
that is tractable in size even given an intractable number of instances. Since
ODDs are tractable representations of classifiers, our algorithm allows us to
efficiently test the equivalence of two naive Bayes classifiers and
characterize discrepancies between them. We also show a number of additional
results including a count of distinct classifiers that can be induced by
changing some CPT in a naive Bayes classifier, and the range of allowable
changes to a CPT which keeps the current classifier unchanged.
| Hei Chan, Adnan Darwiche | null | 1212.2470 | null | null |
Monte Carlo Matrix Inversion Policy Evaluation | cs.LG cs.AI cs.NA | Forsythe and Leibler (1950) introduced a statistical technique for
finding the inverse of a matrix by characterizing the elements of the matrix
inverse as expected values of a sequence of random walks. Barto and Duff (1994)
subsequently showed relations between this technique and standard dynamic
programming and temporal differencing methods. The advantage of the Monte Carlo
matrix inversion (MCMI) approach is that it scales better with respect to
state-space size than alternative techniques. In this paper, we introduce an
algorithm for performing reinforcement learning policy evaluation using MCMI.
We demonstrate that MCMI improves on runtime over a maximum likelihood
model-based policy evaluation approach and on both runtime and accuracy over
the temporal differencing (TD) policy evaluation approach. We further improve
on MCMI policy evaluation by adding an importance sampling technique to our
algorithm to reduce the variance of our estimator. Lastly, we illustrate
techniques for scaling up MCMI to large state spaces in order to perform policy
improvement.
| Fletcher Lu, Dale Schuurmans | null | 1212.2471 | null | null |
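The core idea above, reading off entries of a matrix inverse from random walks, can be illustrated with a plain Monte Carlo rollout estimate of V = (I - gamma*P)^{-1} r. This sketch is generic and omits the paper's importance-sampling refinement; the horizon truncation and walk counts are assumptions.

```python
import numpy as np

def mc_policy_evaluation(P, r, gamma, n_walks=1000, horizon=200, seed=0):
    """Approximate V = (I - gamma * P)^{-1} r by averaging discounted returns of
    random walks drawn from the transition matrix P (rows of P sum to one)."""
    rng = np.random.default_rng(seed)
    n = len(r)
    V = np.zeros(n)
    for s0 in range(n):
        for _ in range(n_walks):
            s, ret, disc = s0, 0.0, 1.0
            for _ in range(horizon):
                ret += disc * r[s]
                disc *= gamma
                s = rng.choice(n, p=P[s])
            V[s0] += ret / n_walks
    return V

# Sanity check against the direct solve: np.linalg.solve(np.eye(n) - gamma * P, r)
```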
Budgeted Learning of Naive-Bayes Classifiers | cs.LG stat.ML | Frequently, acquiring training data has an associated cost. We consider the
situation where the learner may purchase data during training, subject to a
budget. In particular, we examine the case where each feature label has an
associated cost, and the total cost of all feature labels acquired during
training must not exceed the budget. This paper compares methods for choosing
which feature label to purchase next, given the budget and the current belief
state of naive Bayes model parameters. Whereas active learning has traditionally
focused on myopic (greedy) strategies for query selection, this paper presents a
tractable method for incorporating knowledge of the budget into the
decision-making process, which improves performance.
| Daniel J. Lizotte, Omid Madani, Russell Greiner | null | 1212.2472 | null | null |
Learning Riemannian Metrics | cs.LG stat.ML | We propose a solution to the problem of estimating a Riemannian metric
associated with a given differentiable manifold. The metric learning problem is
based on minimizing the relative volume of a given set of points. We derive the
details for a family of metrics on the multinomial simplex. The resulting
metric has applications in text classification and bears some similarity to
the TFIDF representation of text documents.
| Guy Lebanon | null | 1212.2474 | null | null |
Efficient Gradient Estimation for Motor Control Learning | cs.LG cs.SY | The task of estimating the gradient of a function in the presence of noise is
central to several forms of reinforcement learning, including policy search
methods. We present two techniques for reducing gradient estimation errors in
the presence of observable input noise applied to the control signal. The first
method extends the idea of a reinforcement baseline by fitting a local linear
model to the function whose gradient is being estimated; we show how to find
the linear model that minimizes the variance of the gradient estimate, and how
to estimate the model from data. The second method improves this further by
discounting components of the gradient vector that have high variance. These
methods are applied to the problem of motor control learning, where actuator
noise has a significant influence on behavior. In particular, we apply the
techniques to learn locally optimal controllers for a dart-throwing task using
a simulated three-link arm; we demonstrate that proposed methods significantly
improve the reward function gradient estimate and, consequently, the learning
curve, over existing methods.
| Gregory Lawrence, Noah Cowan, Stuart Russell | null | 1212.2475 | null | null |
Approximate Inference and Constrained Optimization | cs.LG cs.AI stat.ML | Loopy and generalized belief propagation are popular algorithms for
approximate inference in Markov random fields and Bayesian networks. Fixed
points of these algorithms correspond to extrema of the Bethe and Kikuchi free
energy. However, belief propagation does not always converge, which explains
the need for approaches that explicitly minimize the Kikuchi/Bethe free energy,
such as CCCP and UPS. Here we describe a class of algorithms that solves this
typically nonconvex constrained minimization of the Kikuchi free energy through
a sequence of convex constrained minimizations of upper bounds on the Kikuchi
free energy. Intuitively one would expect tighter bounds to lead to faster
algorithms, which is indeed convincingly demonstrated in our simulations.
Several ideas are applied to obtain tight convex bounds that yield dramatic
speed-ups over CCCP.
| Tom Heskes, Kees Albers, Hilbert Kappen | null | 1212.2480 | null | null |
Sufficient Dimensionality Reduction with Irrelevant Statistics | cs.LG stat.ML | The problem of finding a reduced dimensionality representation of categorical
variables while preserving their most relevant characteristics is fundamental
for the analysis of complex data. Specifically, given a co-occurrence matrix of
two variables, one often seeks a compact representation of one variable which
preserves information about the other variable. We have recently introduced
``Sufficient Dimensionality Reduction'' [GT-2003], a method that extracts
continuous reduced dimensional features whose measurements (i.e., expectation
values) capture maximal mutual information among the variables. However, such
measurements often capture information that is irrelevant for a given task.
Widely known examples are illumination conditions, which are irrelevant as
features for face recognition, writing style which is irrelevant as a feature
for content classification, and intonation which is irrelevant as a feature for
speech recognition. Such irrelevance cannot be deduced a priori, since it
depends on the details of the task, and is thus inherently ill defined in the
purely unsupervised case. Separating relevant from irrelevant features can be
achieved using additional side data that contains such irrelevant structures.
This approach was taken in [CT-2002], extending the information bottleneck
method, which uses clustering to compress the data. Here we use this
side-information framework to identify features whose measurements are
maximally informative for the original data set, but carry as little
information as possible on a side data set. In statistical terms this can be
understood as extracting statistics which are maximally sufficient for the
original dataset, while simultaneously maximally ancillary for the side
dataset. We formulate this tradeoff as a constrained optimization problem and
characterize its solutions. We then derive a gradient descent algorithm for
this problem, which is based on the Generalized Iterative Scaling method for
finding maximum entropy distributions. The method is demonstrated on synthetic
data, as well as on real face recognition datasets, and is shown to outperform
standard methods such as oriented PCA.
| Amir Globerson, Gal Chechik, Naftali Tishby | null | 1212.2483 | null | null |
Locally Weighted Naive Bayes | cs.LG stat.ML | Despite its simplicity, the naive Bayes classifier has surprised machine
learning researchers by exhibiting good performance on a variety of learning
problems. Encouraged by these results, researchers have looked to overcome
naive Bayes' primary weakness - attribute independence - and improve the
performance of the algorithm. This paper presents a locally weighted version of
naive Bayes that relaxes the independence assumption by learning local models
at prediction time. Experimental results show that locally weighted naive Bayes
rarely degrades accuracy compared to standard naive Bayes and, in many cases,
improves accuracy dramatically. The main advantage of this method compared to
other techniques for enhancing naive Bayes is its conceptual and computational
simplicity.
| Eibe Frank, Mark Hall, Bernhard Pfahringer | null | 1212.2487 | null | null |
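A minimal sketch of the locally weighted naive Bayes idea for continuous features: weight each training instance by a kernel of its distance to the query, then fit per-class weighted means and variances. The Gaussian kernel, bandwidth and variance smoothing used here are illustrative assumptions (the paper itself weights instances via the k nearest neighbours).

```python
import numpy as np

def lwnb_predict(x_query, X, y, bandwidth=1.0):
    """Classify x_query with a Gaussian naive Bayes model fitted on kernel-weighted data."""
    d = np.linalg.norm(X - x_query, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)                  # instance weights
    log_post = {}
    for c in np.unique(y):
        wc, Xc = w[y == c], X[y == c]
        if wc.sum() == 0:
            continue
        mu = np.average(Xc, axis=0, weights=wc)
        var = np.average((Xc - mu) ** 2, axis=0, weights=wc) + 1e-6
        log_lik = -0.5 * np.sum(np.log(2 * np.pi * var) + (x_query - mu) ** 2 / var)
        log_post[c] = np.log(wc.sum() / w.sum()) + log_lik   # weighted prior + likelihood
    return max(log_post, key=log_post.get)
```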
A Distance-Based Branch and Bound Feature Selection Algorithm | cs.LG stat.ML | There is no known efficient method for selecting k Gaussian features from n
which achieve the lowest Bayesian classification error. We show an example of
how greedy algorithms faced with this task are led to give results that are not
optimal. This motivates us to propose a more robust approach. We present a
Branch and Bound algorithm for finding a subset of k independent Gaussian
features which minimizes the naive Bayesian classification error. Our algorithm
uses additive monotonic distance measures to produce bounds for the Bayesian
classification error in order to exclude many feature subsets from evaluation,
while still returning an optimal solution. We test our method on synthetic data
as well as data obtained from gene expression profiling.
| Ari Frank, Dan Geiger, Zohar Yakhini | null | 1212.2488 | null | null |
On the Convergence of Bound Optimization Algorithms | cs.LG stat.ML | Many practitioners who use the EM algorithm complain that it is sometimes
slow. When does this happen, and what can be done about it? In this paper, we
study the general class of bound optimization algorithms - including
Expectation-Maximization, Iterative Scaling and CCCP - and their relationship
to direct optimization algorithms such as gradient-based methods for parameter
learning. We derive a general relationship between the updates performed by
bound optimization methods and those of gradient and second-order methods and
identify analytic conditions under which bound optimization algorithms exhibit
quasi-Newton behavior, and conditions under which they possess poor,
first-order convergence. Based on this analysis, we consider several specific
algorithms, interpret and analyze their convergence properties and provide some
recipes for preprocessing input to these algorithms to yield faster convergence
behavior. We report empirical results supporting our analysis and showing that
simple data preprocessing can result in dramatically improved performance of
bound optimizers in practice.
| Ruslan R Salakhutdinov, Sam T Roweis, Zoubin Ghahramani | null | 1212.2490 | null | null |
Automated Analytic Asymptotic Evaluation of the Marginal Likelihood for
Latent Models | cs.LG stat.ML | We present and implement two algorithms for analytic asymptotic evaluation of
the marginal likelihood of data given a Bayesian network with hidden nodes. As
shown by previous work, this evaluation is particularly hard for latent
Bayesian network models, namely networks that include hidden variables, where
asymptotic approximation deviates from the standard BIC score. Our algorithms
solve two central difficulties in asymptotic evaluation of marginal likelihood
integrals, namely, evaluation of regular dimensionality drop for latent
Bayesian network models and computation of non-standard approximation formulas
for singular statistics for these models. The presented algorithms are
implemented in Matlab and Maple and their usage is demonstrated for marginal
likelihood approximations for Bayesian networks with hidden variables.
| Dmitry Rusakov, Dan Geiger | null | 1212.2491 | null | null |
Learning Generative Models of Similarity Matrices | cs.LG stat.ML | We describe a probabilistic (generative) view of affinity matrices along with
inference algorithms for a subclass of problems associated with data
clustering. This probabilistic view is helpful in understanding different
models and algorithms that are based on affinity functions of the data. In
particular, we show how (greedy) inference for a specific probabilistic model is
equivalent to the spectral clustering algorithm. It also provides a framework
for developing new algorithms and extended models. As one case, we present new
generative data clustering models that allow us to infer the underlying
distance measure suitable for the clustering problem at hand. These models seem
to perform well in a larger class of problems for which other clustering
algorithms (including spectral clustering) usually fail. Experimental
evaluation was performed on a variety of point data sets, showing excellent
performance.
| Romer Rosales, Brendan J. Frey | null | 1212.2494 | null | null |
Learning Continuous Time Bayesian Networks | cs.LG stat.ML | Continuous time Bayesian networks (CTBNs) describe structured stochastic
processes with finitely many states that evolve over continuous time. A CTBN is
a directed (possibly cyclic) dependency graph over a set of variables, each of
which represents a finite state continuous time Markov process whose transition
model is a function of its parents. We address the problem of learning
parameters and structure of a CTBN from fully observed data. We define a
conjugate prior for CTBNs, and show how it can be used both for Bayesian
parameter estimation and as the basis of a Bayesian score for structure
learning. Because acyclicity is not a constraint in CTBNs, we can show that the
structure learning problem is significantly easier, both in theory and in
practice, than structure learning for dynamic Bayesian networks (DBNs).
Furthermore, as CTBNs can tailor the parameters and dependency structure to the
different time granularities of the evolution of different variables, they can
provide a better fit to continuous-time processes than DBNs with a fixed time
granularity.
| Uri Nodelman, Christian R. Shelton, Daphne Koller | null | 1212.2498 | null | null |
On Local Optima in Learning Bayesian Networks | cs.LG cs.AI stat.ML | This paper proposes and evaluates the k-greedy equivalence search algorithm
(KES) for learning Bayesian networks (BNs) from complete data. The main
characteristic of KES is that it allows a trade-off between greediness and
randomness, thus exploring different good local optima. When greediness is set
at maximum, KES corresponds to the greedy equivalence search algorithm (GES).
When greediness is kept at minimum, we prove that under mild assumptions KES
asymptotically returns any inclusion optimal BN with nonzero probability.
Experimental results for both synthetic and real data are reported showing that
KES often finds a better local optimum than GES. Moreover, we use KES to
experimentally confirm that the number of different local optima is often huge.
| Jens D. Nielsen, Tomas Kocka, Jose M. Pena | null | 1212.2500 | null | null |
Efficiently Inducing Features of Conditional Random Fields | cs.LG stat.ML | Conditional Random Fields (CRFs) are undirected graphical models, a special
case of which correspond to conditionally-trained finite state machines. A key
advantage of these models is their great flexibility to include a wide array of
overlapping, multi-granularity, non-independent features of the input. In the face
of this freedom, an important question that remains is, what features should be
used? This paper presents a feature induction method for CRFs. Founded on the
principle of constructing only those feature conjunctions that significantly
increase log-likelihood, the approach is based on that of Della Pietra et al
[1997], but altered to work with conditional rather than joint probabilities,
and with additional modifications for providing tractability specifically for a
sequence model. In comparison with traditional approaches, automated feature
induction offers both improved accuracy and more than an order of magnitude
reduction in feature count; it enables the use of richer, higher-order Markov
models, and offers more freedom to liberally guess about which atomic features
may be relevant to a task. The induction method applies to linear-chain CRFs,
as well as to more arbitrary CRF structures, also known as Relational Markov
Networks [Taskar & Koller, 2002]. We present experimental results on a named
entity extraction task.
| Andrew McCallum | null | 1212.2504 | null | null |
Collaborative Ensemble Learning: Combining Collaborative and
Content-Based Information Filtering via Hierarchical Bayes | cs.LG cs.IR stat.ML | Collaborative filtering (CF) and content-based filtering (CBF) have widely
been used in information filtering applications. Both approaches have their
strengths and weaknesses which is why researchers have developed hybrid
systems. This paper proposes a novel approach to unify CF and CBF in a
probabilistic framework, named collaborative ensemble learning. It uses
probabilistic SVMs to model each user's profile (as CBF does). At the prediction
phase, it combines a society of user profiles, represented by their respective
SVM models, to predict an active user's preferences (the CF idea). The combination
scheme is embedded in a probabilistic framework and retains an intuitive
explanation. Moreover, collaborative ensemble learning does not require a global
training stage and thus can incrementally incorporate new data. We report
results based on two data sets. For the Reuters-21578 text data set, we
simulate user ratings under the assumption that each user is interested in only
one category. In the second experiment, we use users' opinions on a set of 642
art images that were collected through a web-based survey. For both data sets,
collaborative ensemble achieved excellent performance in terms of
recommendation accuracy.
| Kai Yu, Anton Schwaighofer, Volker Tresp, Wei-Ying Ma, HongJiang Zhang | null | 1212.2508 | null | null |
Markov Random Walk Representations with Continuous Distributions | cs.LG stat.ML | Representations based on random walks can exploit discrete data distributions
for clustering and classification. We extend such representations from discrete
to continuous distributions. Transition probabilities are now calculated using
a diffusion equation with a diffusion coefficient that inversely depends on the
data density. We relate this diffusion equation to a path integral and derive
the corresponding path probability measure. The framework is useful for
incorporating continuous data densities and prior knowledge.
| Chen-Hsiang Yeang, Martin Szummer | null | 1212.2510 | null | null |
Stochastic complexity of Bayesian networks | cs.LG stat.ML | Bayesian networks are now being used in an enormous range of fields, for example,
diagnosis of a system, data mining, clustering and so on. In spite of their
wide range of applications, the statistical properties have not yet been
clarified, because the models are nonidentifiable and non-regular. In a
Bayesian network, the set of parameters for a smaller model is an analytic
set with singularities in the space of larger ones. Because of these
singularities, the Fisher information matrices are not positive definite. In
other words, the mathematical foundation for learning was not constructed. In
recent years, however, we have developed a method to analyze non-regular models
using algebraic geometry. This method revealed the relation between a model's
singularities and its statistical properties. In this paper, applying this
method to Bayesian networks with latent variables, we clarify the order of the
stochastic complexities. Our result claims that their upper bound is
smaller than the dimension of the parameter space. This means that the Bayesian
generalization error is also far smaller than that of a regular model, and that
Schwarz's model selection criterion BIC needs to be improved for Bayesian
networks.
| Keisuke Yamazaki, Sumio Watanabe | null | 1212.2511 | null | null |
A Generalized Mean Field Algorithm for Variational Inference in
Exponential Families | cs.LG stat.ML | The mean field methods, which entail approximating intractable probability
distributions variationally with distributions from a tractable family, enjoy
high efficiency, guaranteed convergence, and provide lower bounds on the true
likelihood. But due to the requirement for model-specific derivation of the
optimization equations and unclear inference quality in various models, they are
not widely used as generic approximate inference algorithms. In this paper, we
discuss a generalized mean field theory on variational approximation to a broad
class of intractable distributions using a rich set of tractable distributions
via constrained optimization over distribution spaces. We present a class of
generalized mean field (GMF) algorithms for approximate inference in complex
exponential family models, which entails limiting the optimization over the
class of cluster-factorizable distributions. GMF is a generic method requiring
no model-specific derivations. It factors a complex model into a set of
disjoint variable clusters, and uses a set of canonical fix-point equations to
iteratively update the cluster distributions, and converge to locally optimal
cluster marginals that preserve the original dependency structure within each
cluster, hence fully decomposing the overall inference problem. We empirically
analyzed the effect of different tractable family (clusters of different
granularity) on inference quality, and compared GMF with BP on several
canonical models. Possible extension to higher-order MF approximation is also
discussed.
| Eric P. Xing, Michael I. Jordan, Stuart Russell | null | 1212.2512 | null | null |
Efficient Parametric Projection Pursuit Density Estimation | cs.LG stat.ML | Product models of low dimensional experts are a powerful way to avoid the
curse of dimensionality. We present the ``under-complete product of experts''
(UPoE), where each expert models a one-dimensional projection of the data. The
UPoE is fully tractable and may be interpreted as a parametric probabilistic
model for projection pursuit. Its ML learning rules are identical to the
approximate learning rules proposed before for under-complete ICA. We also
derive an efficient sequential learning algorithm and discuss its relationship
to projection pursuit density estimation and feature induction algorithms for
additive random field models.
| Max Welling, Richard S. Zemel, Geoffrey E. Hinton | null | 1212.2513 | null | null |
Boltzmann Machine Learning with the Latent Maximum Entropy Principle | cs.LG stat.ML | We present a new statistical learning paradigm for Boltzmann machines based
on a new inference principle we have proposed: the latent maximum entropy
principle (LME). LME is different both from Jaynes' maximum entropy principle
and from standard maximum likelihood estimation. We demonstrate the LME
principle by deriving new algorithms for Boltzmann machine parameter
estimation, and show how a robust and fast new variant of the EM algorithm can be
developed. Our experiments show that estimation based on LME generally yields
better results than maximum likelihood estimation, particularly when inferring
hidden units from small amounts of data.
| Shaojun Wang, Dale Schuurmans, Fuchun Peng, Yunxin Zhao | null | 1212.2514 | null | null |
Learning Measurement Models for Unobserved Variables | cs.LG stat.ML | Observed associations in a database may be due in whole or part to variations
in unrecorded (latent) variables. Identifying such variables and their causal
relationships with one another is a principal goal in many scientific and
practical domains. Previous work shows that, given a partition of observed
variables such that members of a class share only a single latent common cause,
standard search algorithms for causal Bayes nets can infer structural relations
between latent variables. We introduce an algorithm for discovering such
partitions when they exist. Uniquely among available procedures, the algorithm
is (asymptotically) correct under standard assumptions in causal Bayes net
search algorithms, requires no prior knowledge of the number of latent
variables, and does not depend on the mathematical form of the relationships
among the latent variables. We evaluate the algorithm on a variety of simulated
data sets.
| Ricardo Silva, Richard Scheines, Clark Glymour, Peter L. Spirtes | null | 1212.2516 | null | null |
Learning Module Networks | cs.LG cs.CE stat.ML | Methods for learning Bayesian network structure can discover dependency
structure between observed variables, and have been shown to be useful in many
applications. However, in domains that involve a large number of variables, the
space of possible network structures is enormous, making it difficult, for both
computational and statistical reasons, to identify a good model. In this paper,
we consider a solution to this problem, suitable for domains where many
variables have similar behavior. Our method is based on a new class of models,
which we call module networks. A module network explicitly represents the
notion of a module - a set of variables that have the same parents in the
network and share the same conditional probability distribution. We define the
semantics of module networks, and describe an algorithm that learns a module
network from data. The algorithm learns both the partitioning of the variables
into modules and the dependency structure between the variables. We evaluate
our algorithm on synthetic data, and on real data in the domains of gene
expression and the stock market. Our results show that module networks
generalize better than Bayesian networks, and that the learned module network
structure reveals regularities that are obscured in learned Bayesian networks.
| Eran Segal, Dana Pe'er, Aviv Regev, Daphne Koller, Nir Friedman | null | 1212.2517 | null | null |
Convex Relaxations for Learning Bounded Treewidth Decomposable Graphs | cs.LG cs.DS stat.ML | We consider the problem of learning the structure of undirected graphical
models with bounded treewidth, within the maximum likelihood framework. This is
an NP-hard problem and most approaches consider local search techniques. In
this paper, we pose it as a combinatorial optimization problem, which is then
relaxed to a convex optimization problem that involves searching over the
forest and hyperforest polytopes with special structures, independently. A
supergradient method is used to solve the dual problem, with a run-time
complexity of $O(k^3 n^{k+2} \log n)$ for each iteration, where $n$ is the
number of variables and $k$ is a bound on the treewidth. We compare our
approach to state-of-the-art methods on synthetic datasets and classical
benchmarks, showing the gains of the novel convex approach.
| K. S. Sesh Kumar (LIENS, INRIA Paris - Rocquencourt), Francis Bach
(LIENS, INRIA Paris - Rocquencourt) | null | 1212.2573 | null | null |
Optimal diagnostic tests for sporadic Creutzfeldt-Jakob disease based on
support vector machine classification of RT-QuIC data | q-bio.QM cs.LG stat.AP | In this work we study numerical construction of optimal clinical diagnostic
tests for detecting sporadic Creutzfeldt-Jakob disease (sCJD). A cerebrospinal
fluid sample (CSF) from a suspected sCJD patient is subjected to a process
which initiates the aggregation of a protein present only in cases of sCJD.
This aggregation is indirectly observed in real-time at regular intervals, so
that a longitudinal set of data is constructed that is then analysed for
evidence of this aggregation. The best existing test is based solely on the
final value of this set of data, which is compared against a threshold to
conclude whether or not aggregation, and thus sCJD, is present. This test
criterion was decided upon by analysing data from a total of 108 sCJD and
non-sCJD samples, but this was done subjectively and there is no supporting
mathematical analysis declaring this criterion to be exploiting the available
data optimally. This paper addresses this deficiency, seeking to validate or
improve the test primarily via support vector machine (SVM) classification.
Besides this, we address a number of additional issues such as i) early
stopping of the measurement process, ii) the possibility of detecting the
particular type of sCJD and iii) the incorporation of additional patient data
such as age, sex, disease duration and timing of CSF sampling into the
construction of the test.
| William Hulme, Peter Richt\'arik, Lynne McGuire and Alison Green | null | 1212.2617 | null | null |
Joint Training of Deep Boltzmann Machines | stat.ML cs.LG | We introduce a new method for training deep Boltzmann machines jointly. Prior
methods require an initial learning pass that trains the deep Boltzmann machine
greedily, one layer at a time, or do not perform well on classification
tasks.
| Ian Goodfellow, Aaron Courville, Yoshua Bengio | null | 1212.2686 | null | null |
Bayesian one-mode projection for dynamic bipartite graphs | stat.ML cond-mat.stat-mech cs.LG | We propose a Bayesian methodology for one-mode projecting a bipartite network
that is being observed across a series of discrete time steps. The resulting
one mode network captures the uncertainty over the presence/absence of each
link and provides a probability distribution over its possible weight values.
Additionally, the incorporation of prior knowledge over previous states makes
the resulting network less sensitive to noise and missing observations that
usually take place during the data collection process. The methodology consists
of computationally inexpensive update rules and is scalable to large problems,
via an appropriate distributed implementation.
| Ioannis Psorakis, Iead Rezek, Zach Frankel, Stephen J. Roberts | null | 1212.2767 | null | null |
Dictionary Subselection Using an Overcomplete Joint Sparsity Model | cs.LG math.OC stat.ML | Many natural signals exhibit a sparse representation, whenever a suitable
describing model is given. Here, a linear generative model is considered, where
many sparsity-based signal processing techniques rely on such a simplified
model. As this model is often unknown for many classes of the signals, we need
to select such a model based on the domain knowledge or using some exemplar
signals. This paper presents a new exemplar based approach for the linear model
(called the dictionary) selection, for such sparse inverse problems. The
problem of dictionary selection, which has also been called the dictionary
learning in this setting, is first reformulated as a joint sparsity model. The
joint sparsity model here differs from the standard joint sparsity model as it
considers an overcompleteness in the representation of each signal, within the
range of selected subspaces. The new dictionary selection paradigm is examined
with some synthetic and realistic simulations.
| Mehrdad Yaghoobi, Laurent Daudet, Michael E. Davies | null | 1212.2834 | null | null |
Cost-Sensitive Feature Selection of Data with Errors | cs.LG | In data mining applications, feature selection is an essential process since
it reduces a model's complexity. The cost of obtaining the feature values must
be taken into consideration in many domains. In this paper, we study the
cost-sensitive feature selection problem on numerical data with measurement
errors, test costs and misclassification costs. The major contributions of this
paper are four-fold. First, a new data model is built to address test costs and
misclassification costs as well as error boundaries. Second, a covering-based
rough set with measurement errors is constructed. Given a confidence interval,
the neighborhood is an ellipse in a two-dimension space, or an ellipsoidal in a
three-dimension space, etc. Third, a new cost-sensitive feature selection
problem is defined on this covering-based rough set. Fourth, both backtracking
and heuristic algorithms are proposed to deal with this new problem. The
algorithms are tested on six UCI (University of California - Irvine) data sets.
Experimental results show that (1) the pruning techniques of the backtracking
algorithm help reduce the number of operations significantly, and (2) the
heuristic algorithm usually obtains optimal results. This study is a step
toward realistic applications of cost-sensitive learning.
| Hong Zhao, Fan Min and William Zhu | null | 1212.3185 | null | null |
Learning Sparse Low-Threshold Linear Classifiers | stat.ML cs.LG | We consider the problem of learning a non-negative linear classifier with a
$1$-norm of at most $k$, and a fixed threshold, under the hinge-loss. This
problem generalizes the problem of learning a $k$-monotone disjunction. We
prove that we can learn efficiently in this setting, at a rate which is linear
in both $k$ and the size of the threshold, and that this is the best possible
rate. We provide an efficient online learning algorithm that achieves the
optimal rate, and show that in the batch case, empirical risk minimization
achieves this rate as well. The rates we show are tighter than the uniform
convergence rate, which grows with $k^2$.
| Sivan Sabato and Shai Shalev-Shwartz and Nathan Srebro and Daniel Hsu
and Tong Zhang | null | 1212.3276 | null | null |
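A hedged sketch of the learning problem described in the preceding abstract: projected online subgradient descent under the hinge loss, with iterates kept in the feasible set {w >= 0, ||w||_1 <= k}. The learning rate and the specific projection routine are assumptions; the paper's own algorithm and rate analysis are not reproduced here.

```python
import numpy as np

def project_nonneg_l1(w, k):
    """Euclidean projection onto {w >= 0, ||w||_1 <= k} (clip, then simplex-project)."""
    v = np.maximum(w, 0.0)
    if v.sum() <= k:
        return v
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > (css - k))[0][-1]
    theta = (css[rho] - k) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def online_hinge_nonneg(X, y, k, threshold=1.0, lr=0.1):
    """One pass of projected online subgradient descent; labels y are in {-1, +1}."""
    w = np.zeros(X.shape[1])
    for x, label in zip(X, y):
        if label * (w @ x - threshold) < 1.0:     # hinge subgradient w.r.t. w is -label * x
            w = project_nonneg_l1(w + lr * label * x, k)
    return w
```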
Know Your Personalization: Learning Topic level Personalization in
Online Services | cs.LG cs.IR | Online service platforms (OSPs), such as search engines, news websites,
ad providers, etc., serve highly personalized content to the user, based on
the profile extracted from his history with the OSP. Although personalization
(generally) leads to a better user experience, it also raises privacy concerns
for the user---he does not know what is present in his profile and, more
importantly, what is being used to personalize content for him. In this paper,
we capture an OSP's personalization for a user in a new data structure called the
personalization vector ($\eta$), which is a weighted vector over a set of
topics, and present techniques to compute it for users of an OSP. Our approach
treats OSPs as black boxes, and extracts $\eta$ by mining only their output,
specifically, the personalized (for a user) and vanilla (without any user
information) contents served, and the differences between these contents. We
formulate a new model called Latent Topic Personalization (LTP) that captures
the personalization vector in a learning framework and present efficient
inference algorithms for it. We perform extensive experiments on search result
personalization using both data from real Google users and synthetic datasets.
Our results show high accuracy (R-pre = 84%) of LTP in finding personalized
topics. For the Google data, our qualitative results show how LTP can also
identify evidence---queries for results on a topic with a high $\eta$ value
were re-ranked. Finally, we show how our approach can be used to build a new
privacy evaluation framework focused on end-user privacy on commercial OSPs.
| Anirban Majumder and Nisheeth Shrivastava | null | 1212.3390 | null | null |
Proceedings Quantities in Formal Methods | cs.LO cs.FL cs.LG cs.SE | This volume contains the proceedings of the Workshop on Quantities in Formal
Methods, QFM 2012, held in Paris, France on 28 August 2012. The workshop was
affiliated with the 18th Symposium on Formal Methods, FM 2012. The focus of the
workshop was on quantities in modeling, verification, and synthesis. Modern
applications of formal methods require to reason formally on quantities such as
time, resources, or probabilities. Standard formal methods and tools have
gotten very good at modeling (and verifying) qualitative properties: whether or
not certain events will occur. During the last years, these methods and tools
have been extended to also cover quantitative aspects, notably leading to tools
like e.g. UPPAAL (for real-time systems), PRISM (for probabilistic systems),
and PHAVer (for hybrid systems). A lot of work remains to be done, however,
before these tools can be used in the industrial applications at which they are
aiming.
| Uli Fahrenberg (Irisa / INRIA Rennes, France), Axel Legay (Irisa /
INRIA Rennes, France), Claus Thrane (Aalborg University, Denmark) | 10.4204/EPTCS.103 | 1212.3454 | null | null |
Machine Learning in Proof General: Interfacing Interfaces | cs.AI cs.LG cs.LO | We present ML4PG - a machine learning extension for Proof General. It allows
users to gather proof statistics related to shapes of goals, sequences of
applied tactics, and proof tree structures from the libraries of interactive
higher-order proofs written in Coq and SSReflect. The gathered data is
clustered using the state-of-the-art machine learning algorithms available in
MATLAB and Weka. ML4PG provides automated interfacing between Proof General and
MATLAB/Weka. The results of clustering are used by ML4PG to provide proof hints
in the process of interactive proof development.
| Ekaterina Komendantskaya (School of Computing, University of Dundee),
J\'onathan Heras (School of Computing, University of Dundee), Gudmund Grov
(School of Mathematical and Computer Sciences, Heriot-Watt University) | 10.4204/EPTCS.118.2 | 1212.3618 | null | null |
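ML4PG delegates clustering to MATLAB and Weka; the snippet below is only a rough Python/scikit-learn analogue of the underlying idea, clustering numeric proof-feature vectors and looking up lemmas that fall in the same cluster as a query. The feature matrix is a random placeholder, and the number of clusters is an arbitrary assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder feature matrix: one row per lemma, columns standing in for
# statistics about goal shapes, tactic sequences and proof-tree structure.
proof_features = np.random.rand(200, 12)

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(proof_features)

def similar_proofs(query_index):
    """Indices of proofs in the same cluster as the query lemma, the raw
    material from which proof hints could be suggested."""
    labels = kmeans.labels_
    return [i for i, lbl in enumerate(labels)
            if lbl == labels[query_index] and i != query_index]
```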
Learning efficient sparse and low rank models | cs.LG | Parsimony, including sparsity and low rank, has been shown to successfully
model data in numerous machine learning and signal processing tasks.
Traditionally, such modeling approaches rely on an iterative algorithm that
minimizes an objective function with parsimony-promoting terms. The inherently
sequential structure and data-dependent complexity and latency of iterative
optimization constitute a major limitation in many applications requiring
real-time performance or involving large-scale data. Another limitation
encountered by these modeling techniques is the difficulty of their inclusion
in discriminative learning scenarios. In this work, we propose to move the
emphasis from the model to the pursuit algorithm, and develop a process-centric
view of parsimonious modeling, in which a learned deterministic
fixed-complexity pursuit process is used in lieu of iterative optimization. We
show a principled way to construct learnable pursuit process architectures for
structured sparse and robust low rank models, derived from the iteration of
proximal descent algorithms. These architectures learn to approximate the exact
parsimonious representation at a fraction of the complexity of the standard
optimization methods. We also show that appropriate training regimes allow
parsimonious models to be naturally extended to discriminative settings.
State-of-the-art results are demonstrated on several challenging problems in
image and audio processing with several orders of magnitude speedup compared to
the exact optimization algorithms.
| Pablo Sprechmann, Alex M. Bronstein and Guillermo Sapiro | null | 1212.3631 | null | null |
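The process-centric view described above is built on unrolling proximal descent iterations into a fixed-depth architecture whose matrices become learnable. The sketch below shows the plain unrolled ISTA iteration for the $\ell_1$ case (in the spirit of LISTA-style encoders), not the paper's exact architectures; here `W_e` and `S` are derived from the dictionary, whereas in a learned pursuit they would be trained parameters.

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of lam * ||.||_1 (elementwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def unrolled_ista(x, D, lam=0.1, depth=5):
    """Fixed-depth unrolling of ISTA for min_z 0.5*||x - D z||^2 + lam*||z||_1.
    A learned pursuit process would treat W_e, S and the threshold as
    trainable parameters instead of computing them from D."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    W_e = D.T / L                            # "encoder" matrix
    S = np.eye(D.shape[1]) - (D.T @ D) / L   # "recurrent" matrix
    z = soft_threshold(W_e @ x, lam / L)
    for _ in range(depth - 1):
        z = soft_threshold(S @ z + W_e @ x, lam / L)
    return z
```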
A metric for software vulnerabilities classification | cs.SE cs.LG | Vulnerability discovery and exploits detection are two wide areas of study in
software engineering. This preliminary work tries to combine existing methods
with machine learning techniques to define a metric classification of
vulnerable computer programs. First, a feature set is defined, and then two
models are tested against real-world vulnerabilities. A relation between the
choice of classifier and the features is also outlined.
| Gabriele Modena | null | 1212.3669 | null | null |
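The abstract does not disclose the feature set or the two models used, so the snippet below is purely a placeholder sketch of the experimental shape described: two off-the-shelf classifiers compared by cross-validation on a labeled feature matrix. The features, labels and model choices here are assumptions, not the paper's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Placeholder data: one row per program, columns standing in for code metrics,
# y = 1 for programs with a known vulnerability.
rng = np.random.default_rng(0)
X = rng.random((500, 10))
y = rng.integers(0, 2, size=500)

for model in (LogisticRegression(max_iter=1000), GaussianNB()):
    scores = cross_val_score(model, X, y, cv=5)
    print(type(model).__name__, round(scores.mean(), 3))
```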
Biologically Inspired Spiking Neurons : Piecewise Linear Models and
Digital Implementation | cs.LG cs.NE q-bio.NC | There has been a strong push recently to examine biological scale simulations
of neuromorphic algorithms to achieve stronger inference capabilities. This
paper presents a set of piecewise linear spiking neuron models, which can
reproduce different behaviors, similar to the biological neuron, both for a
single neuron as well as a network of neurons. The proposed models are
investigated, in terms of digital implementation feasibility and costs,
targeting large scale hardware implementation. Hardware synthesis and physical
implementations on FPGA show that the proposed models can produce precise
neural behaviors with higher performance and considerably lower implementation
costs compared with the original model. Accordingly, a compact structure of the
models, which can be trained with supervised and unsupervised learning
algorithms, has been developed. Using this structure and a spike-rate coding
scheme, a character recognition case study has been implemented and tested.
| Hamid Soleimani, Arash Ahmadi and Mohammad Bavandpour | 10.1109/TCSI.2012.2206463 | 1212.3765 | null | null |
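The paper's piecewise linear models approximate established spiking-neuron dynamics with hardware-friendly arithmetic; their exact equations are not given in the abstract. The sketch below simulates a generic adaptive integrate-and-fire neuron whose update is piecewise linear in the state (constant-coefficient adds, multiplies, comparisons, plus a spike-and-reset), merely to illustrate the class of updates that maps cheaply onto fixed-point FPGA logic; all constants are illustrative.

```python
import numpy as np

def simulate_pwl_neuron(I=2.0, dt=0.1, steps=5000):
    """Euler simulation of an adaptive integrate-and-fire neuron: membrane
    potential v, adaptation variable u, spike-and-reset at threshold."""
    v_rest, v_th, v_reset = -65.0, -40.0, -65.0
    tau, a, b, d = 20.0, 0.02, 0.2, 2.0
    v, u = v_rest, 0.0
    spike_times = []
    for t in range(steps):
        dv = (v_rest - v - u) / tau + I       # linear leak, adaptation, input
        du = a * (b * (v - v_rest) - u)       # slow adaptation variable
        v += dt * dv
        u += dt * du
        if v >= v_th:                         # threshold crossing: spike + reset
            spike_times.append(t * dt)
            v, u = v_reset, u + d
    return np.array(spike_times)

print(len(simulate_pwl_neuron()), "spikes in 500 ms")
```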