title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Using Temporal Data for Making Recommendations | cs.IR cs.AI cs.LG | We treat collaborative filtering as a univariate time series estimation
problem: given a user's previous votes, predict the next vote. We describe two
families of methods for transforming data to encode time order in ways amenable
to off-the-shelf classification and density estimation tools, and examine the
results of using these approaches on several real-world data sets. The
improvements in predictive accuracy we realize recommend the use of other
predictive algorithms that exploit the temporal order of data.
| Andrew Zimdars, David Maxwell Chickering, Christopher Meek | null | 1301.2320 | null | null |
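To make the data transformation concrete, here is a minimal sketch (not the authors' exact encodings) of turning an ordered vote sequence into fixed-length lagged windows so that an off-the-shelf classifier can predict the next vote; the window length, toy votes, and the choice of logistic regression are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def lagged_windows(votes, window=3):
    """Encode one user's ordered vote sequence as (previous `window` votes, next vote) pairs."""
    X, y = [], []
    for t in range(window, len(votes)):
        X.append(votes[t - window:t])
        y.append(votes[t])
    return np.array(X), np.array(y)

# Toy vote sequence (1 = liked, 0 = disliked), purely illustrative.
votes = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0]
X, y = lagged_windows(votes, window=3)

clf = LogisticRegression().fit(X, y)   # any off-the-shelf classifier would do
print(clf.predict(X[-1:]))             # predicted next vote given the latest window
```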
Planning by Prioritized Sweeping with Small Backups | cs.AI cs.LG | Efficient planning plays a crucial role in model-based reinforcement
learning. Traditionally, the main planning operation is a full backup based on
the current estimates of the successor states. Consequently, its computation
time is proportional to the number of successor states. In this paper, we
introduce a new planning backup that uses only the current value of a single
successor state and has a computation time independent of the number of
successor states. This new backup, which we call a small backup, opens the door
to a new class of model-based reinforcement learning methods that exhibit much
finer control over their planning process than traditional methods. We
empirically demonstrate that this increased flexibility allows for more
efficient planning by showing that an implementation of prioritized sweeping
based on small backups achieves a substantial performance improvement over
classical implementations.
| Harm van Seijen and Richard S. Sutton | null | 1301.2343 | null | null |
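A minimal sketch of the contrast the abstract draws, under the assumption that action values are cached as Q(s,a) = R(s,a) + gamma * sum_s' P(s'|s,a) V(s'): a full backup recomputes the sum over all successors, while a "small-backup"-style update adjusts the cache in O(1) when a single successor value changes. This illustrates the idea only; it is not the authors' prioritized-sweeping algorithm, and the toy model below is invented.

```python
import numpy as np

# Toy model: P[s, a, s'] transition probabilities, R[s, a] expected rewards.
n_states, n_actions, gamma = 4, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.random((n_states, n_actions))
V = np.zeros(n_states)

def full_backup(Q, s, a):
    """Recompute Q(s,a) from scratch: cost grows with the number of successors."""
    Q[s, a] = R[s, a] + gamma * P[s, a] @ V

def small_backup(Q, s, a, s_next, delta_v):
    """Adjust a cached Q(s,a) when V(s_next) changed by delta_v: O(1) cost."""
    Q[s, a] += gamma * P[s, a, s_next] * delta_v

Q = R + gamma * P @ V          # initialize the cache with full backups
delta = 0.5                    # suppose V[2] just increased by 0.5
V[2] += delta
for s in range(n_states):
    for a in range(n_actions):
        small_backup(Q, s, a, s_next=2, delta_v=delta)

# The cheap single-successor updates agree with recomputing everything from scratch.
assert np.allclose(Q, R + gamma * P @ V)
```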
Robust subspace clustering | cs.LG cs.IT math.IT math.OC math.ST stat.ML stat.TH | Subspace clustering refers to the task of finding a multi-subspace
representation that best fits a collection of points taken from a
high-dimensional space. This paper introduces an algorithm inspired by sparse
subspace clustering (SSC) [In IEEE Conference on Computer Vision and Pattern
Recognition, CVPR (2009) 2790-2797] to cluster noisy data, and develops some
novel theory demonstrating its correctness. In particular, the theory uses
ideas from geometric functional analysis to show that the algorithm can
accurately recover the underlying subspaces under minimal requirements on their
orientation, and on the number of samples per subspace. Synthetic as well as
real data experiments complement our theoretical study, illustrating our
approach and demonstrating its effectiveness.
| Mahdi Soltanolkotabi, Ehsan Elhamifar, Emmanuel J. Cand\`es | 10.1214/13-AOS1199 | 1301.2603 | null | null |
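For orientation, a rough sketch of the basic SSC recipe the paper builds on (not the robust variant or the theory developed here): write each point as a sparse combination of the others, symmetrize the coefficients into an affinity matrix, and spectrally cluster. The regularization weight and the toy two-line dataset are arbitrary choices.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def ssc(X, n_clusters, alpha=0.01):
    """Basic sparse subspace clustering sketch: X is (n_samples, n_features)."""
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        # Sparse self-expression: x_i ~ sum_j c_ij x_j over j != i
        lasso = Lasso(alpha=alpha, max_iter=10000).fit(X[others].T, X[i])
        C[i, others] = lasso.coef_
    affinity = np.abs(C) + np.abs(C).T            # symmetric affinity matrix
    sc = SpectralClustering(n_clusters=n_clusters, affinity='precomputed')
    return sc.fit_predict(affinity)

# Toy data: two 1-D subspaces (lines) in R^3 with a little noise.
rng = np.random.default_rng(0)
u, v = rng.normal(size=3), rng.normal(size=3)
X = np.vstack([np.outer(rng.normal(size=20), u),
               np.outer(rng.normal(size=20), v)]) + 0.01 * rng.normal(size=(40, 3))
print(ssc(X, n_clusters=2))
```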
Learning to Optimize Via Posterior Sampling | cs.LG | This paper considers the use of a simple posterior sampling algorithm to
balance between exploration and exploitation when learning to optimize actions
such as in multi-armed bandit problems. The algorithm, also known as Thompson
Sampling, offers significant advantages over the popular upper confidence bound
(UCB) approach, and can be applied to problems with finite or infinite action
spaces and complicated relationships among action rewards. We make two
theoretical contributions. The first establishes a connection between posterior
sampling and UCB algorithms. This result lets us convert regret bounds
developed for UCB algorithms into Bayesian regret bounds for posterior
sampling. Our second theoretical contribution is a Bayesian regret bound for
posterior sampling that applies broadly and can be specialized to many model
classes. This bound depends on a new notion we refer to as the eluder
dimension, which measures the degree of dependence among action rewards.
Compared to Bayesian regret bounds for UCB algorithms on specific model classes,
our general bound matches the best available for linear models and is stronger
than the best available for generalized linear models. Further, our analysis
provides insight into performance advantages of posterior sampling, which are
highlighted through simulation results that demonstrate performance surpassing
recently proposed UCB algorithms.
| Daniel Russo and Benjamin Van Roy | null | 1301.2609 | null | null |
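As a concrete reference point, a minimal Thompson (posterior) sampling loop for Bernoulli bandits, the simplest setting covered by the abstract; the arm means and horizon below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = [0.3, 0.5, 0.7]          # unknown to the learner; illustrative values
alpha = np.ones(3)                     # Beta posterior: 1 + number of successes per arm
beta = np.ones(3)                      # Beta posterior: 1 + number of failures per arm

for t in range(2000):
    samples = rng.beta(alpha, beta)    # draw one plausible mean per arm from the posterior
    arm = int(np.argmax(samples))      # act greedily with respect to the sampled means
    reward = rng.random() < true_means[arm]
    alpha[arm] += reward
    beta[arm] += 1 - reward

print(alpha / (alpha + beta))          # posterior means concentrate on the best arm
```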
Robust Text Detection in Natural Scene Images | cs.CV cs.IR cs.LG | Text detection in natural scene images is an important prerequisite for many
content-based image analysis tasks. In this paper, we propose an accurate and
robust method for detecting texts in natural scene images. A fast and effective
pruning algorithm is designed to extract Maximally Stable Extremal Regions
(MSERs) as character candidates using the strategy of minimizing regularized
variations. Character candidates are grouped into text candidates by the
single-link clustering algorithm, where the distance weights and threshold of the
clustering algorithm are learned automatically by a novel self-training
distance metric learning algorithm. The posterior probabilities of text
candidates corresponding to non-text are estimated with a character
classifier; text candidates with high non-text probabilities are then eliminated and
finally texts are identified with a text classifier. The proposed system is
evaluated on the ICDAR 2011 Robust Reading Competition dataset; the f measure
is over 76% and is significantly better than the state-of-the-art performance
of 71%. Experimental results on a publicly available multilingual dataset also
show that our proposed method can outperform the other competitive method with
an f-measure increase of over 9 percent. Finally, we have set up an online demo
of our proposed scene text detection system at
http://kems.ustb.edu.cn/learning/yin/dtext.
| Xu-Cheng Yin, Xuwang Yin, Kaizhu Huang, Hong-Wei Hao | 10.1109/TPAMI.2013.182 | 1301.2628 | null | null |
Functional Regularized Least Squares Classification with Operator-valued
Kernels | cs.LG stat.ML | Although operator-valued kernels have recently received increasing interest
in various machine learning and functional data analysis problems such as
multi-task learning or functional regression, little attention has been paid to
the understanding of their associated feature spaces. In this paper, we explore
the potential of adopting an operator-valued kernel feature space perspective
for the analysis of functional data. We then extend the Regularized Least
Squares Classification (RLSC) algorithm to cover situations where there are
multiple functions per observation. Experiments on a sound recognition problem
show that the proposed method outperforms the classical RLSC algorithm.
| Hachem Kadri (INRIA Lille - Nord Europe), Asma Rabaoui (IMS), Philippe
Preux (INRIA Lille - Nord Europe, LIFL), Emmanuel Duflos (INRIA Lille - Nord
Europe, LAGIS), Alain Rakotomamonjy (LITIS) | null | 1301.2655 | null | null |
Multiple functional regression with both discrete and continuous
covariates | stat.ML cs.LG | In this paper we present a nonparametric method for extending functional
regression methodology to the situation where more than one functional
covariate is used to predict a functional response. Borrowing the idea from
Kadri et al. (2010a), the method, which supports mixed discrete and continuous
explanatory variables, is based on estimating a function-valued function in
reproducing kernel Hilbert spaces by virtue of positive operator-valued
kernels.
| Hachem Kadri (INRIA Lille - Nord Europe), Philippe Preux (INRIA Lille
- Nord Europe, LIFL), Emmanuel Duflos (INRIA Lille - Nord Europe, LAGIS),
St\'ephane Canu (LITIS) | null | 1301.2656 | null | null |
A Triclustering Approach for Time Evolving Graphs | cs.LG cs.SI stat.ML | This paper introduces a novel technique to track structures in time evolving
graphs. The method is based on a parameter free approach for three-dimensional
co-clustering of the source vertices, the target vertices and the time. All
these features are simultaneously segmented in order to build time segments and
clusters of vertices whose edge distributions are similar and evolve in the
same way over the time segments. The main novelty of this approach is that
the time segments are directly inferred from the evolution of the edge
distribution between the vertices, thus not requiring the user to make an a
priori discretization. Experiments conducted on a synthetic dataset illustrate
the good behaviour of the technique, and a study of a real-life dataset shows
the potential of the proposed approach for exploratory data analysis.
| Romain Guigour\`es, Marc Boull\'e, Fabrice Rossi (SAMM) | 10.1109/ICDMW.2012.61 | 1301.2659 | null | null |
BliStr: The Blind Strategymaker | cs.AI cs.LG cs.LO | BliStr is a system that automatically develops strategies for E prover on a
large set of problems. The main idea is to interleave (i) iterated
low-timelimit local search for new strategies on small sets of similar easy
problems with (ii) higher-timelimit evaluation of the new strategies on all
problems. The accumulated results of the global higher-timelimit runs are used
to define and evolve the notion of "similar easy problems", and to control the
selection of the next strategy to be improved. The technique was used to
significantly strengthen the set of E strategies used by the MaLARea, PS-E,
E-MaLeS, and E systems in the CASC@Turing 2012 competition, particularly in the
Mizar division. Similar improvement was obtained on the problems created from
the Flyspeck corpus.
| Josef Urban | null | 1301.2683 | null | null |
Robust High Dimensional Sparse Regression and Matching Pursuit | stat.ML cs.IT cs.LG math.IT math.ST stat.TH | We consider high dimensional sparse regression, and develop strategies able
to deal with arbitrary -- possibly, severe or coordinated -- errors in the
covariate matrix $X$. These may come from corrupted data, persistent
experimental errors, or malicious respondents in surveys/recommender systems,
etc. Such non-stochastic error-in-variables problems are notoriously difficult
to treat, and as we demonstrate, the problem is particularly pronounced in
high-dimensional settings where the primary goal is {\em support recovery} of
the sparse regressor. We develop algorithms for support recovery in sparse
regression, when some number $n_1$ out of $n+n_1$ total covariate/response
pairs are {\it arbitrarily (possibly maliciously) corrupted}. We are interested
in understanding how many outliers, $n_1$, we can tolerate, while identifying
the correct support. To the best of our knowledge, neither standard outlier
rejection techniques, nor recently developed robust regression algorithms (that
focus only on corrupted response variables), nor recent algorithms for dealing
with stochastic noise or erasures, can provide guarantees on support recovery.
Perhaps surprisingly, we also show that the natural brute force algorithm that
searches over all subsets of $n$ covariate/response pairs, and all subsets of
possible support coordinates in order to minimize regression error, is
remarkably poor, unable to correctly identify the support with even $n_1 =
O(n/k)$ corrupted points, where $k$ is the sparsity. This is true even in the
basic setting we consider, where all authentic measurements and noise are
independent and sub-Gaussian. In this setting, we provide a simple algorithm --
no more computationally taxing than OMP -- that gives stronger performance
guarantees, recovering the support with up to $n_1 = O(n/(\sqrt{k} \log p))$
corrupted points, where $p$ is the dimension of the signal to be recovered.
| Yudong Chen, Constantine Caramanis, Shie Mannor | null | 1301.2725 | null | null |
A comparison of SVM and RVM for Document Classification | cs.IR cs.LG | Document classification is a task of assigning a new unclassified document to
one of the predefined set of classes. The content based document classification
uses the content of the document with some weighting criteria to assign it to
one of the predefined classes. It is a major task in library science,
electronic document management systems and information sciences. This paper
investigates document classification by using two different classification
techniques (1) Support Vector Machine (SVM) and (2) Relevance Vector Machine
(RVM). SVM is a supervised machine learning technique that can be used for
classification task. In its basic form, SVM represents the instances of the
data in space and tries to separate the distinct classes by a maximally wide
gap (hyperplane). On the other hand, RVM uses a probabilistic measure to define
this separation space. RVM uses Bayesian inference to obtain a succinct
solution, and thus uses significantly
fewer basis functions. Experimental studies on three standard text
classification datasets reveal that although RVM takes more training time, its
classification is much better as compared to SVM.
| Muhammad Rafi, Mohammad Shahid Shaikh | null | 1301.2785 | null | null |
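A small hedged example of the SVM side of this comparison using scikit-learn rather than the authors' setup: TF-IDF features plus a linear SVM on an invented four-document corpus with invented labels.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

docs = ["stock markets rallied today", "the team won the championship",
        "central bank raises interest rates", "player scores winning goal"]
labels = ["finance", "sports", "finance", "sports"]   # toy classes

clf = make_pipeline(TfidfVectorizer(), LinearSVC())   # TF-IDF features + max-margin classifier
clf.fit(docs, labels)
print(clf.predict(["interest rates fell sharply"]))   # expected: ['finance'] given shared terms
```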
Unsupervised Feature Learning for low-level Local Image Descriptors | cs.CV cs.LG stat.ML | Unsupervised feature learning has shown impressive results for a wide range
of input modalities, in particular for object classification tasks in computer
vision. Using a large amount of unlabeled data, unsupervised feature learning
methods are utilized to construct high-level representations that are
discriminative enough for subsequently trained supervised classification
algorithms. However, it has never been \emph{quantitatively} investigated yet
how well unsupervised learning methods can find \emph{low-level
representations} for image patches without any additional supervision. In this
paper we examine the performance of pure unsupervised methods on a low-level
correspondence task, a problem that is central to many Computer Vision
applications. We find that a special type of Restricted Boltzmann Machines
(RBMs) performs comparably to hand-crafted descriptors. Additionally, a simple
binarization scheme produces compact representations that perform better than
several state-of-the-art descriptors.
| Christian Osendorfer and Justin Bayer and Sebastian Urban and Patrick
van der Smagt | null | 1301.2840 | null | null |
Matrix Approximation under Local Low-Rank Assumption | cs.LG stat.ML | Matrix approximation is a common tool in machine learning for building
accurate prediction models for recommendation systems, text mining, and
computer vision. A prevalent assumption in constructing matrix approximations
is that the partially observed matrix is of low-rank. We propose a new matrix
approximation model where we assume instead that the matrix is only locally of
low-rank, leading to a representation of the observed matrix as a weighted sum
of low-rank matrices. We analyze the accuracy of the proposed local low-rank
modeling. Our experiments show improvements in prediction accuracy in
recommendation tasks.
| Joonseok Lee, Seungyeon Kim, Guy Lebanon, Yoram Singer | null | 1301.3192 | null | null |
Learning Graphical Model Parameters with Approximate Marginal Inference | cs.LG cs.CV | Likelihood-based learning of graphical models faces challenges of
computational complexity and robustness to model mis-specification. This paper
studies methods that fit parameters directly to maximize a measure of the
accuracy of predicted marginals, taking into account both model and inference
approximations at training time. Experiments on imaging problems suggest
marginalization-based learning performs better than likelihood-based
approximations on difficult problems where the model being fit is approximate
in nature.
| Justin Domke | 10.1109/TPAMI.2013.31 | 1301.3193 | null | null |
Efficient Learning of Domain-invariant Image Representations | cs.LG | We present an algorithm that learns representations which explicitly
compensate for domain mismatch and which can be efficiently realized as linear
classifiers. Specifically, we form a linear transformation that maps features
from the target (test) domain to the source (training) domain as part of
training the classifier. We optimize both the transformation and classifier
parameters jointly, and introduce an efficient cost function based on
misclassification loss. Our method combines several features previously
unavailable in a single algorithm: multi-class adaptation through
representation learning, ability to map across heterogeneous feature spaces,
and scalability to large datasets. We present experiments on several image
datasets that demonstrate improved accuracy and computational advantages
compared to previous approaches.
| Judy Hoffman, Erik Rodner, Jeff Donahue, Trevor Darrell, Kate Saenko | null | 1301.3224 | null | null |
The Expressive Power of Word Embeddings | cs.LG cs.CL stat.ML | We seek to better understand the difference in quality of the several
publicly released embeddings. We propose several tasks that help to distinguish
the characteristics of different embeddings. Our evaluation of sentiment
polarity and synonym/antonym relations shows that embeddings are able to
capture surprisingly nuanced semantics even in the absence of sentence
structure. Moreover, benchmarking the embeddings shows great variance in
quality and characteristics of the semantics captured by the tested embeddings.
Finally, we show the impact of varying the number of dimensions and the
resolution of each dimension on the effective useful features captured by the
embedding space. Our contributions highlight the importance of embeddings for
NLP tasks and the effect of their quality on the final results.
| Yanqing Chen, Bryan Perozzi, Rami Al-Rfou, Steven Skiena | null | 1301.3226 | null | null |
Auto-pooling: Learning to Improve Invariance of Image Features from
Image Sequences | cs.CV cs.LG | Learning invariant representations from images is one of the hardest
challenges facing computer vision. Spatial pooling is widely used to create
invariance to spatial shifting, but it is restricted to convolutional models.
In this paper, we propose a novel pooling method that can learn soft clustering
of features from image sequences. It is trained to improve the temporal
coherence of features, while keeping the information loss at minimum. Our
method does not use spatial information, so it can be used with
non-convolutional models too. Experiments on images extracted from natural
videos showed that our method can cluster similar features together. When
trained by convolutional features, auto-pooling outperformed traditional
spatial pooling on an image classification task, even though it does not use
the spatial topology of features.
| Sainbayar Sukhbaatar, Takaki Makino and Kazuyuki Aihara | null | 1301.3323 | null | null |
Barnes-Hut-SNE | cs.LG cs.CV stat.ML | The paper presents an O(N log N)-implementation of t-SNE -- an embedding
technique that is commonly used for the visualization of high-dimensional data
in scatter plots and that normally runs in O(N^2). The new implementation uses
vantage-point trees to compute sparse pairwise similarities between the input
data objects, and it uses a variant of the Barnes-Hut algorithm - an algorithm
used by astronomers to perform N-body simulations - to approximate the forces
between the corresponding points in the embedding. Our experiments show that
the new algorithm, called Barnes-Hut-SNE, leads to substantial computational
advantages over standard t-SNE, and that it makes it possible to learn
embeddings of data sets with millions of objects.
| Laurens van der Maaten | null | 1301.3342 | null | null |
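The Barnes-Hut approximation described above was later adopted by common libraries; assuming scikit-learn is available, the O(N log N) variant can be tried via `method='barnes_hut'` in its TSNE implementation (this is not the author's original code, and the placeholder data below is random).

```python
import numpy as np
from sklearn.manifold import TSNE

X = np.random.default_rng(0).normal(size=(1000, 50))        # placeholder high-dimensional data
emb = TSNE(n_components=2, method='barnes_hut', perplexity=30).fit_transform(X)
print(emb.shape)                                             # -> (1000, 2)
```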
Multi-agent learning using Fictitious Play and Extended Kalman Filter | cs.MA cs.LG math.OC stat.ML | Decentralised optimisation tasks are important components of multi-agent
systems. These tasks can be interpreted as n-player potential games: therefore
game-theoretic learning algorithms can be used to solve decentralised
optimisation tasks. Fictitious play is the canonical example of these
algorithms. Nevertheless fictitious play implicitly assumes that players have
stationary strategies. We present a novel variant of fictitious play where
players predict their opponents' strategies using Extended Kalman filters and
use their predictions to update their strategies.
We show that in 2 by 2 games with at least one pure Nash equilibrium and in
potential games where players have two available actions, the proposed
algorithm converges to the pure Nash equilibrium. The performance of the
proposed algorithm was empirically tested, in two strategic form games and an
ad-hoc sensor network surveillance problem. The proposed algorithm performs
better than the classic fictitious play algorithm in these games and therefore
improves the performance of game-theoretical learning in decentralised
optimisation.
| Michalis Smyrnakis | null | 1301.3347 | null | null |
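For context, a bare-bones classical fictitious-play loop on an invented 2x2 coordination game with a pure Nash equilibrium: each player best-responds to the empirical frequency of the opponent's past actions. The Extended-Kalman-filter prediction step proposed in the paper is omitted.

```python
import numpy as np

# Payoff matrices of a 2x2 coordination game: A for player 1, B for player 2.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
B = np.array([[2.0, 0.0], [0.0, 1.0]])

counts1 = np.ones(2)   # observed action counts of player 1 (smoothed)
counts2 = np.ones(2)   # observed action counts of player 2 (smoothed)

for t in range(200):
    belief2 = counts2 / counts2.sum()      # player 1's belief about player 2's strategy
    belief1 = counts1 / counts1.sum()      # player 2's belief about player 1's strategy
    a1 = int(np.argmax(A @ belief2))       # best response of player 1
    a2 = int(np.argmax(B.T @ belief1))     # best response of player 2
    counts1[a1] += 1
    counts2[a2] += 1

# Empirical play concentrates on the payoff-dominant equilibrium (action 0, action 0).
print(counts1 / counts1.sum(), counts2 / counts2.sum())
```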
The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization | cs.NA cs.LG | Non-negative matrix factorization (NMF) has become a popular machine learning
approach to many problems in text mining, speech and image processing,
bio-informatics and seismic data analysis to name a few. In NMF, a matrix of
non-negative data is approximated by the low-rank product of two matrices with
non-negative entries. In this paper, the approximation quality is measured by
the Kullback-Leibler divergence between the data and its low-rank
reconstruction. The existence of the simple multiplicative update (MU)
algorithm for computing the matrix factors has contributed to the success of
NMF. Despite the availability of algorithms showing faster convergence, MU
remains popular due to its simplicity. In this paper, a diagonalized Newton
algorithm (DNA) is proposed showing faster convergence while the implementation
remains simple and suitable for high-rank problems. The DNA algorithm is
applied to various publicly available data sets, showing a substantial speed-up
on modern hardware.
| Hugo Van hamme | null | 1301.3389 | null | null |
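For reference, a compact numpy version of the multiplicative-update (MU) baseline the abstract compares against for KL-divergence NMF; the proposed diagonalized Newton algorithm itself is not reproduced here, and the matrix sizes and iteration count are arbitrary.

```python
import numpy as np

def nmf_kl_mu(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Classic multiplicative updates minimizing KL(V || WH) for non-negative V."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.T @ np.ones_like(V) + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (np.ones_like(V) @ H.T + eps)
    return W, H

V = np.abs(np.random.default_rng(1).normal(size=(20, 30)))   # toy non-negative data
W, H = nmf_kl_mu(V, rank=5)
print(np.linalg.norm(V - W @ H))   # reconstruction error after the MU iterations
```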
Feature grouping from spatially constrained multiplicative interaction | cs.LG | We present a feature learning model that learns to encode relationships
between images. The model is defined as a Gated Boltzmann Machine, which is
constrained such that hidden units that are nearby in space can gate each
other's connections. We show how frequency/orientation "columns" as well as
topographic filter maps follow naturally from training the model on image
pairs. The model also helps explain why square-pooling models yield feature
groups with similar grouping properties. Experimental results on synthetic
image transformations show that spatially constrained gating is an effective
way to reduce the number of parameters and thereby to regularize a
transformation-learning model.
| Felix Bauer, Roland Memisevic | null | 1301.3391 | null | null |
Factorized Topic Models | cs.LG cs.CV cs.IR | In this paper we present a modification to a latent topic model, which makes
the model exploit supervision to produce a factorized representation of the
observed data. The structured parameterization separately encodes variance that
is shared between classes from variance that is private to each class by the
introduction of a new prior over the topic space. The approach allows for a
more efficient inference and provides an intuitive interpretation of the data
in terms of an informative signal together with structured noise. The
factorized representation is shown to enhance inference performance for image,
text, and video classification.
| Cheng Zhang and Carl Henrik Ek and Andreas Damianou and Hedvig
Kjellstrom | null | 1301.3461 | null | null |
Boltzmann Machines and Denoising Autoencoders for Image Denoising | stat.ML cs.CV cs.LG | Image denoising based on a probabilistic model of local image patches has
been employed by various researchers, and recently a deep (denoising)
autoencoder has been proposed by Burger et al. [2012] and Xie et al. [2012] as
a good model for this. In this paper, we propose that another popular family of
models in the field of deep learning, called Boltzmann machines, can perform
image denoising as well as, or in certain cases with high levels of noise, better
than denoising autoencoders. We empirically evaluate the two models on three
different sets of images with different types and levels of noise. Throughout
the experiments we also examine the effect of the depth of the models. The
experiments confirmed our claim and revealed that the performance can be
improved by adding more hidden layers, especially when the level of noise is
high.
| Kyunghyun Cho | null | 1301.3468 | null | null |
Pushing Stochastic Gradient towards Second-Order Methods --
Backpropagation Learning with Transformations in Nonlinearities | cs.LG cs.CV stat.ML | Recently, we proposed to transform the outputs of each hidden neuron in a
multi-layer perceptron network to have zero output and zero slope on average,
and use separate shortcut connections to model the linear dependencies instead.
We continue the work by firstly introducing a third transformation to normalize
the scale of the outputs of each hidden neuron, and secondly by analyzing the
connections to second order optimization methods. We show that the
transformations make a simple stochastic gradient behave closer to second-order
optimization methods and thus speed up learning. This is shown both in theory
and with experiments. The experiments on the third transformation show that
while it further increases the speed of learning, it can also hurt performance
by converging to a worse local optimum, where both the inputs and outputs of
many hidden neurons are close to zero.
| Tommi Vatanen, Tapani Raiko, Harri Valpola, Yann LeCun | null | 1301.3476 | null | null |
A Semantic Matching Energy Function for Learning with Multi-relational
Data | cs.LG | Large-scale relational learning becomes crucial for handling the huge amounts
of structured data generated daily in many application domains ranging from
computational biology and information retrieval to natural language processing.
In this paper, we present a new neural network architecture designed to embed
multi-relational graphs into a flexible continuous vector space in which the
original data is kept and enhanced. The network is trained to encode the
semantics of these graphs in order to assign high probabilities to plausible
components. We empirically show that it reaches competitive performance in link
prediction on standard datasets from the literature.
| Xavier Glorot and Antoine Bordes and Jason Weston and Yoshua Bengio | null | 1301.3485 | null | null |
Learnable Pooling Regions for Image Classification | cs.CV cs.LG | Biologically inspired, from the early HMAX model to Spatial Pyramid Matching,
pooling has played an important role in visual recognition pipelines. Spatial
pooling, by grouping of local codes, equips these methods with a certain degree
of robustness to translation and deformation yet preserving important spatial
information. Despite the predominance of this approach in current recognition
systems, we have seen little progress to fully adapt the pooling strategy to
the task at hand. This paper proposes a model for learning task dependent
pooling scheme -- including previously proposed hand-crafted pooling schemes as
a particular instantiation. In our work, we investigate the role of different
regularization terms showing that the smooth regularization term is crucial to
achieve strong performance using the presented architecture. Finally, we
propose an efficient and parallel method to train the model. Our experiments
show improved performance over hand-crafted pooling schemes on the CIFAR-10 and
CIFAR-100 datasets -- in particular improving the state-of-the-art to 56.29% on
the latter.
| Mateusz Malinowski and Mario Fritz | null | 1301.3516 | null | null |
How good is the Electricity benchmark for evaluating concept drift
adaptation | cs.LG | In this correspondence, we will point out a problem with testing adaptive
classifiers on autocorrelated data. In such a case random change alarms may
boost the accuracy figures. Hence, we cannot be sure if the adaptation is
working well.
| Indre Zliobaite | null | 1301.3524 | null | null |
Block Coordinate Descent for Sparse NMF | cs.LG cs.NA | Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data
analysis. An important variant is the sparse NMF problem which arises when we
explicitly require the learnt features to be sparse. A natural measure of
sparsity is the L$_0$ norm; however, its optimization is NP-hard. Mixed norms,
such as the L$_1$/L$_2$ measure, have been shown to model sparsity robustly, based
on intuitive attributes that such measures need to satisfy. This is in contrast
to computationally cheaper alternatives such as the plain L$_1$ norm. However,
present algorithms designed for optimizing the mixed norm L$_1$/L$_2$ are slow
and other formulations for sparse NMF have been proposed such as those based on
L$_1$ and L$_0$ norms. Our proposed algorithm allows us to solve the mixed norm
sparsity constraints while not sacrificing computation time. We present
experimental evidence on real-world datasets that shows our new algorithm
performs an order of magnitude faster compared to the current state-of-the-art
solvers optimizing the mixed norm and is suitable for large-scale datasets.
| Vamsi K. Potluru, Sergey M. Plis, Jonathan Le Roux, Barak A.
Pearlmutter, Vince D. Calhoun, Thomas P. Hayes | null | 1301.3527 | null | null |
An Efficient Sufficient Dimension Reduction Method for Identifying
Genetic Variants of Clinical Significance | q-bio.GN cs.LG stat.ML | Fast and cheaper next generation sequencing technologies will generate
unprecedentedly massive and highly-dimensional genomic and epigenomic variation
data. In the near future, a routine part of medical record will include the
sequenced genomes. A fundamental question is how to efficiently extract genomic
and epigenomic variants of clinical utility which will provide information for
optimal wellness and interference strategies. The traditional paradigm for
identifying variants of clinical validity is to test the association of the
variants. However, significantly associated genetic variants may or may not be
useful for the diagnosis and prognosis of diseases. An alternative to association
studies for finding genetic variants of predictive utility is to systematically
search for variants that contain sufficient information for phenotype prediction.
To achieve this, we introduce concepts of sufficient dimension reduction and
coordinate hypothesis which project the original high dimensional data to very
low dimensional space while preserving all information on response phenotypes.
We then formulate the clinically significant genetic variant discovery problem as a
sparse SDR problem and develop algorithms that can select significant genetic
variants from millions, or even tens of millions, of predictors, with the aid of
dividing the SDR for the whole genome into a number of sub-SDR problems defined
over genomic regions. The sparse SDR is in turn formulated as a sparse optimal
scoring problem, but with a penalty which can remove row vectors from the basis matrix.
To speed up computation, we develop a modified alternating direction method
of multipliers to solve the sparse optimal scoring problem, which can easily be
implemented in parallel. To illustrate its application, the proposed method is
applied to simulation data and the NHLBI's Exome Sequencing Project dataset
| Momiao Xiong and Long Ma | null | 1301.3528 | null | null |
The Neural Representation Benchmark and its Evaluation on Brain and
Machine | cs.NE cs.CV cs.LG q-bio.NC | A key requirement for the development of effective learning representations
is their evaluation and comparison to representations we know to be effective.
In natural sensory domains, the community has viewed the brain as a source of
inspiration and as an implicit benchmark for success. However, it has not been
possible to test representational learning algorithms directly against
the representations contained in neural systems. Here, we propose a new
benchmark for visual representations on which we have directly tested the
neural representation in multiple visual cortical areas in macaque (utilizing
data from [Majaj et al., 2012]), and on which any computer vision algorithm
that produces a feature space can be tested. The benchmark measures the
effectiveness of the neural or machine representation by computing the
classification loss on the ordered eigendecomposition of a kernel matrix
[Montavon et al., 2011]. In our analysis we find that the neural representation
in visual area IT is superior to visual area V4. In our analysis of
representational learning algorithms, we find that three-layer models approach
the representational performance of V4 and the algorithm in [Le et al., 2012]
surpasses the performance of V4. Impressively, we find that a recent supervised
algorithm [Krizhevsky et al., 2012] achieves performance comparable to that of
IT for an intermediate level of image variation difficulty, and surpasses IT at
a higher difficulty level. We believe this result represents a major milestone:
it is the first learning algorithm we have found that exceeds our current
estimate of IT representation performance. We hope that this benchmark will
assist the community in matching the representational performance of visual
cortex and will serve as an initial rallying point for further correspondence
between representations derived in brains and machines.
| Charles F. Cadieu, Ha Hong, Dan Yamins, Nicolas Pinto, Najib J. Majaj,
James J. DiCarlo | null | 1301.3530 | null | null |
Sparse Penalty in Deep Belief Networks: Using the Mixed Norm Constraint | cs.NE cs.LG stat.ML | Deep Belief Networks (DBN) have been successfully applied on popular machine
learning tasks. Specifically, when applied on hand-written digit recognition,
DBNs have achieved approximate accuracy rates of 98.8%. In an effort to
optimize the data representation achieved by the DBN and maximize their
descriptive power, recent advances have focused on inducing sparse constraints
at each layer of the DBN. In this paper we present a theoretical approach for
sparse constraints in the DBN using the mixed norm for both non-overlapping and
overlapping groups. We explore how these constraints affect the classification
accuracy for digit recognition in three different datasets (MNIST, USPS, RIMES)
and provide initial estimations of their usefulness by altering different
parameters such as the group size and overlap percentage.
| Xanadu Halkias, Sebastien Paris, Herve Glotin | null | 1301.3533 | null | null |
Learning Features with Structure-Adapting Multi-view Exponential Family
Harmoniums | cs.LG | We propose a graphical model for multi-view feature extraction that
automatically adapts its structure to achieve a better representation of the data
distribution. The proposed model, the structure-adapting multi-view harmonium
(SA-MVH), has switch parameters that control the connections between hidden nodes
and input views, and learns the switch parameters during training. Numerical
experiments on synthetic and real-world datasets demonstrate the useful
behavior of the SA-MVH, compared to existing multi-view feature extraction
methods.
| Yoonseop Kang and Seungjin Choi | null | 1301.3539 | null | null |
Deep Predictive Coding Networks | cs.LG cs.CV stat.ML | The quality of data representation in deep learning methods is directly
related to the prior model imposed on the representations; however, generally
used fixed priors are not capable of adjusting to the context in the data. To
address this issue, we propose deep predictive coding networks, a hierarchical
generative model that empirically alters priors on the latent representations
in a dynamic and context-sensitive manner. This model captures the temporal
dependencies in time-varying signals and uses top-down information to modulate
the representation in lower layers. The centerpiece of our model is a novel
procedure to infer sparse states of a dynamic model which is used for feature
extraction. We also extend this feature extraction block to introduce a pooling
function that captures locally invariant representations. When applied to
natural video data, we show that our method is able to learn high-level visual
features. We also demonstrate the role of the top-down connections by showing
the robustness of the proposed model to structured noise.
| Rakesh Chalasani and Jose C. Principe | null | 1301.3541 | null | null |
Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines | cs.LG cs.NE stat.ML | This paper introduces the Metric-Free Natural Gradient (MFNG) algorithm for
training Boltzmann Machines. Similar in spirit to the Hessian-Free method of
Martens [8], our algorithm belongs to the family of truncated Newton methods
and exploits an efficient matrix-vector product to avoid explicitly storing
the natural gradient metric $L$. This metric is shown to be the expected second
derivative of the log-partition function (under the model distribution), or
equivalently, the variance of the vector of partial derivatives of the energy
function. We evaluate our method on the task of joint-training a 3-layer Deep
Boltzmann Machine and show that MFNG does indeed have faster per-epoch
convergence compared to Stochastic Maximum Likelihood with centering, though
wall-clock performance is currently not competitive.
| Guillaume Desjardins, Razvan Pascanu, Aaron Courville and Yoshua
Bengio | null | 1301.3545 | null | null |
Information Theoretic Learning with Infinitely Divisible Kernels | cs.LG cs.CV | In this paper, we develop a framework for information theoretic learning
based on infinitely divisible matrices. We formulate an entropy-like functional
on positive definite matrices based on Renyi's axiomatic definition of entropy
and examine some key properties of this functional that lead to the concept of
infinite divisibility. The proposed formulation avoids the plug-in estimation
of density and brings along the representation power of reproducing kernel
Hilbert spaces. As an application example, we derive a supervised metric
learning algorithm using a matrix based analogue to conditional entropy
achieving results comparable with the state of the art.
| Luis G. Sanchez Giraldo and Jose C. Principe | null | 1301.3551 | null | null |
Stochastic Pooling for Regularization of Deep Convolutional Neural
Networks | cs.LG cs.NE stat.ML | We introduce a simple and effective method for regularizing large
convolutional neural networks. We replace the conventional deterministic
pooling operations with a stochastic procedure, randomly picking the activation
within each pooling region according to a multinomial distribution, given by
the activities within the pooling region. The approach is hyper-parameter free
and can be combined with other regularization approaches, such as dropout and
data augmentation. We achieve state-of-the-art performance on four image
datasets, relative to other approaches that do not utilize data augmentation.
| Matthew D. Zeiler and Rob Fergus | null | 1301.3557 | null | null |
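A small numpy sketch of the pooling rule described above, assuming non-negative (e.g., post-ReLU) activations: within each pooling region one activation is sampled with probability proportional to its value. The region size and input are illustrative; this is not the authors' implementation.

```python
import numpy as np

def stochastic_pool(feature_map, size=2, rng=None):
    """feature_map: (H, W) non-negative activations. Returns a (H//size, W//size) map
    where each output is one activation sampled from its pooling region with
    probability proportional to its value."""
    if rng is None:
        rng = np.random.default_rng()
    H, W = feature_map.shape
    out = np.zeros((H // size, W // size))
    for i in range(0, H, size):
        for j in range(0, W, size):
            region = feature_map[i:i + size, j:j + size].ravel()
            total = region.sum()
            p = region / total if total > 0 else np.full(region.size, 1.0 / region.size)
            out[i // size, j // size] = rng.choice(region, p=p)
    return out

fmap = np.maximum(np.random.default_rng(0).normal(size=(4, 4)), 0)   # ReLU-like activations
print(stochastic_pool(fmap, size=2, rng=np.random.default_rng(1)))
```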
Joint Training Deep Boltzmann Machines for Classification | stat.ML cs.LG | We introduce a new method for training deep Boltzmann machines jointly. Prior
methods of training DBMs require an initial learning pass that trains the model
greedily, one layer at a time, or do not perform well on classification tasks.
In our approach, we train all layers of the DBM simultaneously, using a novel
training procedure called multi-prediction training. The resulting model can
either be interpreted as a single generative model trained to maximize a
variational approximation to the generalized pseudolikelihood, or as a family
of recurrent networks that share parameters and may be approximately averaged
together using a novel technique we call the multi-inference trick. We show
that our approach performs competitively for classification and outperforms
previous methods in terms of accuracy of approximate inference and
classification with missing inputs.
| Ian J. Goodfellow and Aaron Courville and Yoshua Bengio | null | 1301.3568 | null | null |
Kernelized Locality-Sensitive Hashing for Semi-Supervised Agglomerative
Clustering | cs.LG cs.CV stat.ML | Large scale agglomerative clustering is hindered by computational burdens. We
propose a novel scheme where exact inter-instance distance calculation is
replaced by the Hamming distance between Kernelized Locality-Sensitive Hashing
(KLSH) hashed values. This results in a method that drastically decreases
computation time. Additionally, we take advantage of certain labeled data
points via distance metric learning to achieve a competitive precision and
recall comparing to K-Means but in much less computation time.
| Boyi Xie, Shuheng Zheng | null | 1301.3575 | null | null |
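To illustrate the hashing step, here is a plain random-hyperplane (sign-random-projection) LSH sketch, used only as a simple stand-in for the kernelized variant in the paper: Hamming distance between binary codes replaces exact inter-instance distances. The code length and toy data are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))          # toy feature vectors
planes = rng.normal(size=(16, 32))      # 32 random hyperplanes -> 32-bit codes
codes = (X @ planes > 0)                # sign pattern on each side of the hyperplanes

def hamming(a, b):
    """Number of differing bits between two binary codes."""
    return int(np.count_nonzero(a != b))

# Cheap surrogate for the exact distance between points 0 and 1:
print(hamming(codes[0], codes[1]))
```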
Saturating Auto-Encoders | cs.LG | We introduce a simple new regularizer for auto-encoders whose hidden-unit
activation functions contain at least one zero-gradient (saturated) region.
This regularizer explicitly encourages activations in the saturated region(s)
of the corresponding activation function. We call these Saturating
Auto-Encoders (SATAE). We show that the saturation regularizer explicitly
limits the SATAE's ability to reconstruct inputs which are not near the data
manifold. Furthermore, we show that a wide variety of features can be learned
when different activation functions are used. Finally, connections are
established with the Contractive and Sparse Auto-Encoders.
| Rostislav Goroshin and Yann LeCun | null | 1301.3577 | null | null |
Big Neural Networks Waste Capacity | cs.LG cs.CV | This article exposes the failure of some big neural networks to leverage
added capacity to reduce underfitting. Past research suggests diminishing
returns when increasing the size of neural networks. Our experiments on
ImageNet LSVRC-2010 show that this may be due to the fact that there are highly
diminishing returns for capacity in terms of training error, leading to
underfitting. This suggests that the optimization method - first-order gradient
descent - fails in this regime. Directly attacking this problem, either through
the optimization method or the choice of parametrization, may allow us to improve
the generalization error on large datasets, for which a large capacity is
required.
| Yann N. Dauphin, Yoshua Bengio | null | 1301.3583 | null | null |
Revisiting Natural Gradient for Deep Networks | cs.LG cs.NA | We evaluate natural gradient, an algorithm originally proposed in Amari
(1997), for learning deep models. The contributions of this paper are as
follows. We show the connection between natural gradient and three other
recently proposed methods for training deep models: Hessian-Free (Martens,
2010), Krylov Subspace Descent (Vinyals and Povey, 2012) and TONGA (Le Roux et
al., 2008). We describe how one can use unlabeled data to improve the
generalization error obtained by natural gradient and empirically evaluate the
robustness of the algorithm to the ordering of the training set compared to
stochastic gradient descent. Finally we extend natural gradient to incorporate
second order information alongside the manifold information and provide a
benchmark of the new algorithm using a truncated Newton approach for inverting
the metric matrix instead of using a diagonal approximation of it.
| Razvan Pascanu and Yoshua Bengio | null | 1301.3584 | null | null |
Deep Learning for Detecting Robotic Grasps | cs.LG cs.CV cs.RO | We consider the problem of detecting robotic grasps in an RGB-D view of a
scene containing objects. In this work, we apply a deep learning approach to
solve this problem, which avoids time-consuming hand-design of features. This
presents two main challenges. First, we need to evaluate a huge number of
candidate grasps. In order to make detection fast, as well as robust, we
present a two-step cascaded structure with two deep networks, where the top
detections from the first are re-evaluated by the second. The first network has
fewer features, is faster to run, and can effectively prune out unlikely
candidate grasps. The second, with more features, is slower but has to run only
on the top few detections. Second, we need to handle multimodal inputs well,
for which we present a method to apply structured regularization on the weights
based on multimodal group regularization. We demonstrate that our method
outperforms the previous state-of-the-art methods in robotic grasp detection,
and can be used to successfully execute grasps on two different robotic
platforms.
| Ian Lenz and Honglak Lee and Ashutosh Saxena | null | 1301.3592 | null | null |
Feature Learning in Deep Neural Networks - Studies on Speech Recognition
Tasks | cs.LG cs.CL cs.NE eess.AS | Recent studies have shown that deep neural networks (DNNs) perform
significantly better than shallow networks and Gaussian mixture models (GMMs)
on large vocabulary speech recognition tasks. In this paper, we argue that the
improved accuracy achieved by the DNNs is the result of their ability to
extract discriminative internal representations that are robust to the many
sources of variability in speech signals. We show that these representations
become increasingly insensitive to small perturbations in the input with
increasing network depth, which leads to better speech recognition performance
with deeper networks. We also show that DNNs cannot extrapolate to test samples
that are substantially different from the training examples. If the training
data are sufficiently representative, however, internal features learned by the
DNN are relatively stable with respect to speaker differences, bandwidth
differences, and environment distortion. This enables DNN-based recognizers to
perform as well or better than state-of-the-art systems based on GMMs or
shallow networks without the need for explicit model adaptation or feature
normalization.
| Dong Yu, Michael L. Seltzer, Jinyu Li, Jui-Ting Huang, Frank Seide | null | 1301.3605 | null | null |
Learning New Facts From Knowledge Bases With Neural Tensor Networks and
Semantic Word Vectors | cs.CL cs.LG | Knowledge bases provide applications with the benefit of easily accessible,
systematic relational knowledge but often suffer in practice from their
incompleteness and lack of knowledge of new entities and relations. Much work
has focused on building or extending them by finding patterns in large
unannotated text corpora. In contrast, here we mainly aim to complete a
knowledge base by predicting additional true relationships between entities,
based on generalizations that can be discerned in the given knowledge base. We
introduce a neural tensor network (NTN) model which predicts new relationship
entries that can be added to the database. This model can be improved by
initializing entity representations with word vectors learned in an
unsupervised fashion from text, and when doing this, existing relations can
even be queried for entities that were not present in the database. Our model
generalizes and outperforms existing models for this problem, and can classify
unseen relationships in WordNet with an accuracy of 75.8%.
| Danqi Chen, Richard Socher, Christopher D. Manning, Andrew Y. Ng | null | 1301.3618 | null | null |
Two SVDs produce more focal deep learning representations | cs.CL cs.LG | A key characteristic of work on deep learning and neural networks in general
is that it relies on representations of the input that support generalization,
robust inference, domain adaptation and other desirable functionalities. Much
recent progress in the field has focused on efficient and effective methods for
computing representations. In this paper, we propose an alternative method that
is more efficient than prior work and produces representations that have a
property we call focality -- a property we hypothesize to be important for
neural network representations. The method consists of a simple application of
two consecutive SVDs and is inspired by Anandkumar (2012).
| Hinrich Schuetze, Christian Scheible | null | 1301.3627 | null | null |
Behavior Pattern Recognition using A New Representation Model | cs.LG | We study the use of inverse reinforcement learning (IRL) as a tool for the
recognition of agents' behavior on the basis of observing their sequential
decision behavior while interacting with the environment. We model the problem faced
by the agents as a Markov decision process (MDP) and model the observed
behavior of the agents in terms of forward planning for the MDP. We use IRL to
learn reward functions and then use these reward functions as the basis for
clustering or classification models. Experimental studies with GridWorld, a
navigation problem, and the secretary problem, an optimal stopping problem,
suggest reward vectors found from IRL can be a good basis for behavior pattern
recognition problems. Empirical comparisons of our method with several existing
IRL algorithms and with direct methods that use feature statistics observed in
state-action space suggest it may be superior for recognition problems.
| Qifeng Qiao and Peter A. Beling | null | 1301.3630 | null | null |
Training Neural Networks with Stochastic Hessian-Free Optimization | cs.LG cs.NE stat.ML | Hessian-free (HF) optimization has been successfully used for training deep
autoencoders and recurrent networks. HF uses the conjugate gradient algorithm
to construct update directions through curvature-vector products that can be
computed on the same order of time as gradients. In this paper we exploit this
property and study stochastic HF with gradient and curvature mini-batches
independent of the dataset size. We modify Martens' HF for these settings and
integrate dropout, a method for preventing co-adaptation of feature detectors,
to guard against overfitting. Stochastic Hessian-free optimization gives an
intermediary between SGD and HF that achieves competitive performance on both
classification and deep autoencoder experiments.
| Ryan Kiros | null | 1301.3641 | null | null |
Regularized Discriminant Embedding for Visual Descriptor Learning | cs.CV cs.LG | Images can vary according to changes in viewpoint, resolution, noise, and
illumination. In this paper, we aim to learn representations for an image,
which are robust to wide changes in such environmental conditions, using
training pairs of matching and non-matching local image patches that are
collected under various environmental conditions. We present a regularized
discriminant analysis that emphasizes two challenging categories among the
given training pairs: (1) matching, but far apart pairs and (2) non-matching,
but close pairs in the original feature space (e.g., SIFT feature space).
Compared to existing work on metric learning and discriminant analysis, our
method can better distinguish relevant images from irrelevant, but look-alike
images.
| Kye-Hyeon Kim, Rui Cai, Lei Zhang, Seungjin Choi | null | 1301.3644 | null | null |
Zero-Shot Learning Through Cross-Modal Transfer | cs.CV cs.LG | This work introduces a model that can recognize objects in images even if no
training data is available for the objects. The only necessary knowledge about
the unseen categories comes from unsupervised large text corpora. In our
zero-shot framework distributional information in language can be seen as
spanning a semantic basis for understanding what objects look like. Most
previous zero-shot learning models can only differentiate between unseen
classes. In contrast, our model can both obtain state of the art performance on
classes that have thousands of training images and obtain reasonable
performance on unseen classes. This is achieved by first using outlier
detection in the semantic space and then two separate recognition models.
Furthermore, our model does not require any manually defined semantic features
for either words or images.
| Richard Socher, Milind Ganjoo, Hamsa Sridhar, Osbert Bastani,
Christopher D. Manning, Andrew Y. Ng | null | 1301.3666 | null | null |
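A toy sketch of the zero-shot decision rule in its simplest form: project an image feature into the word-vector space with a linear map (assumed already trained here) and label it with the nearest class word vector by cosine similarity. The vectors, the map, and the class names are all invented, and the paper's outlier-detection step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
word_vecs = {"cat": rng.normal(size=50), "truck": rng.normal(size=50)}  # stand-in word embeddings
W = rng.normal(size=(50, 128))        # stands in for a trained image-to-semantic-space map
image_feature = rng.normal(size=128)  # stands in for an image descriptor

def zero_shot_label(x, W, word_vecs):
    """Project into semantic space and return the class with the most similar word vector."""
    z = W @ x
    sims = {c: float(z @ v / (np.linalg.norm(z) * np.linalg.norm(v)))
            for c, v in word_vecs.items()}
    return max(sims, key=sims.get)

print(zero_shot_label(image_feature, W, word_vecs))
```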
The IBMAP approach for Markov networks structure learning | cs.AI cs.LG | In this work we consider the problem of learning the structure of Markov
networks from data. We present an approach for tackling this problem called
IBMAP, together with an efficient instantiation of the approach: the IBMAP-HC
algorithm, designed to avoid important limitations of existing
independence-based algorithms. These algorithms proceed by performing
statistical independence tests on data, trusting completely the outcome of each
test. In practice tests may be incorrect, resulting in potential cascading
errors and the consequent reduction in the quality of the structures learned.
IBMAP contemplates this uncertainty in the outcome of the tests through a
probabilistic maximum-a-posteriori approach. The approach is instantiated in
the IBMAP-HC algorithm, a structure selection strategy that performs a
polynomial heuristic local search in the space of possible structures. We
present an extensive empirical evaluation on synthetic and real data, showing
that our algorithm outperforms significantly the current independence-based
algorithms, in terms of data efficiency and quality of learned structures, with
equivalent computational complexities. We also show the performance of IBMAP-HC
in a real-world application of knowledge discovery: EDAs, which are
evolutionary algorithms that use structure learning on each generation for
modeling the distribution of populations. The experiments show that when
IBMAP-HC is used to learn the structure, EDAs improve the convergence to the
optimum.
| Federico Schl\"uter and Facundo Bromberg and Alejandro Edera | 10.1007/s10472-014-9419-5 | 1301.3720 | null | null |
Switched linear encoding with rectified linear autoencoders | cs.LG | Several recent results in machine learning have established formal
connections between autoencoders---artificial neural network models that
attempt to reproduce their inputs---and other coding models like sparse coding
and K-means. This paper explores in depth an autoencoder model that is
constructed using rectified linear activations on its hidden units. Our
analysis builds on recent results to further unify the world of sparse linear
coding models. We provide an intuitive interpretation of the behavior of these
coding models and demonstrate this intuition using small, artificial datasets
with known distributions.
| Leif Johnson and Craig Corcoran | null | 1301.3753 | null | null |
Adaptive learning rates and parallelization for stochastic, sparse,
non-smooth gradients | cs.LG cs.AI stat.ML | Recent work has established an empirically successful framework for adapting
learning rates for stochastic gradient descent (SGD). This effectively removes
all needs for tuning, while automatically reducing learning rates over time on
stationary problems, and permitting learning rates to grow appropriately in
non-stationary tasks. Here, we extend the idea in three directions, addressing
proper minibatch parallelization, including reweighted updates for sparse or
orthogonal gradients, improving robustness on non-smooth loss functions, in the
process replacing the diagonal Hessian estimation procedure that may not always
be available by a robust finite-difference approximation. The final algorithm
integrates all these components, has linear complexity and is hyper-parameter
free.
| Tom Schaul, Yann LeCun | null | 1301.3764 | null | null |
Discriminative Recurrent Sparse Auto-Encoders | cs.LG cs.CV | We present the discriminative recurrent sparse auto-encoder model, comprising
a recurrent encoder of rectified linear units, unrolled for a fixed number of
iterations, and connected to two linear decoders that reconstruct the input and
predict its supervised classification. Training via
backpropagation-through-time initially minimizes an unsupervised sparse
reconstruction error; the loss function is then augmented with a discriminative
term on the supervised classification. The depth implicit in the
temporally-unrolled form allows the system to exhibit all the power of deep
networks, while substantially reducing the number of trainable parameters.
From an initially unstructured network the hidden units differentiate into
categorical-units, each of which represents an input prototype with a
well-defined class; and part-units representing deformations of these
prototypes. The learned organization of the recurrent encoder is hierarchical:
part-units are driven directly by the input, whereas the activity of
categorical-units builds up over time through interactions with the part-units.
Even using a small number of hidden units per layer, discriminative recurrent
sparse auto-encoders achieve excellent performance on MNIST.
| Jason Tyler Rolfe and Yann LeCun | null | 1301.3775 | null | null |
Learning Output Kernels for Multi-Task Problems | cs.LG | Simultaneously solving multiple related learning tasks is beneficial under a
variety of circumstances, but the prior knowledge necessary to correctly model
task relationships is rarely available in practice. In this paper, we develop a
novel kernel-based multi-task learning technique that automatically reveals
structural inter-task relationships. Building over the framework of output
kernel learning (OKL), we introduce a method that jointly learns multiple
functions and a low-rank multi-task kernel by solving a non-convex
regularization problem. Optimization is carried out via a block coordinate
descent strategy, where each subproblem is solved using suitable conjugate
gradient (CG) type iterative methods for linear operator equations. The
effectiveness of the proposed approach is demonstrated on pharmacological and
collaborative filtering data.
| Francesco Dinuzzo | 10.1016/j.neucom.2013.02.024 | 1301.3816 | null | null |
Reversible Jump MCMC Simulated Annealing for Neural Networks | cs.LG cs.NE stat.ML | We propose a novel reversible jump Markov chain Monte Carlo (MCMC) simulated
annealing algorithm to optimize radial basis function (RBF) networks. This
algorithm enables us to maximize the joint posterior distribution of the
network parameters and the number of basis functions. It performs a global
search in the joint space of the parameters and number of parameters, thereby
surmounting the problem of local minima. We also show that by calibrating a
Bayesian model, we can obtain the classical AIC, BIC and MDL model selection
criteria within a penalized likelihood framework. Finally, we show
theoretically and empirically that the algorithm converges to the modes of the
full posterior distribution in an efficient way.
| Christophe Andrieu, Nando de Freitas, Arnaud Doucet | null | 1301.3833 | null | null |
Dynamic Bayesian Multinets | cs.LG cs.AI stat.ML | In this work, dynamic Bayesian multinets are introduced where a Markov chain
state at time t determines conditional independence patterns between random
variables lying within a local time window surrounding t. It is shown how
information-theoretic criterion functions can be used to induce sparse,
discriminative, and class-conditional network structures that yield an optimal
approximation to the class posterior probability, and therefore are useful for
the classification task. Using a new structure learning heuristic, the
resulting models are tested on a medium-vocabulary isolated-word speech
recognition task. It is demonstrated that these discriminatively structured
dynamic Bayesian multinets, when trained in a maximum likelihood setting using
EM, can outperform both HMMs and other dynamic Bayesian networks with a similar
number of parameters.
| Jeff A. Bilmes | null | 1301.3837 | null | null |
Variational Relevance Vector Machines | cs.LG stat.ML | The Support Vector Machine (SVM) of Vapnik (1998) has become widely
established as one of the leading approaches to pattern recognition and machine
learning. It expresses predictions in terms of a linear combination of kernel
functions centred on a subset of the training data, known as support vectors.
Despite its widespread success, the SVM suffers from some important
limitations, one of the most significant being that it makes point predictions
rather than generating predictive distributions. Recently Tipping (1999) has
formulated the Relevance Vector Machine (RVM), a probabilistic model whose
functional form is equivalent to the SVM. It achieves comparable recognition
accuracy to the SVM, yet provides a full predictive distribution, and also
requires substantially fewer kernel functions.
The original treatment of the RVM relied on the use of type II maximum
likelihood (the `evidence framework') to provide point estimates of the
hyperparameters which govern model sparsity. In this paper we show how the RVM
can be formulated and solved within a completely Bayesian paradigm through the
use of variational inference, thereby giving a posterior distribution over both
parameters and hyperparameters. We demonstrate the practicality and performance
of the variational RVM using both synthetic and real world examples.
| Christopher M. Bishop, Michael Tipping | null | 1301.3838 | null | null |
Utilities as Random Variables: Density Estimation and Structure
Discovery | cs.AI cs.LG | Decision theory does not traditionally include uncertainty over utility
functions. We argue that a person's utility value for a given outcome can
be treated as we treat other domain attributes: as a random variable with a
density function over its possible values. We show that we can apply
statistical density estimation techniques to learn such a density function from
a database of partially elicited utility functions. In particular, we define a
Bayesian learning framework for this problem, assuming the distribution over
utilities is a mixture of Gaussians, where the mixture components represent
statistically coherent subpopulations. We can also extend our techniques to the
problem of discovering generalized additivity structure in the utility
functions in the population. We define a Bayesian model selection criterion for
utility function structure and a search procedure over structures. The
factorization of the utilities in the learned model, and the generalization
obtained from density estimation, allows us to provide robust estimates of
utilities using a significantly smaller number of utility elicitation
questions. We experiment with our technique on synthetic utility data and on a
real database of utility functions in the domain of prenatal diagnosis.
| Urszula Chajewska, Daphne Koller | null | 1301.3840 | null | null |
Bayesian Classification and Feature Selection from Finite Data Sets | cs.LG stat.ML | Feature selection aims to select the smallest subset of features for a
specified level of performance. The optimal achievable classification
performance on a feature subset is summarized by its Receiver Operating Curve
(ROC). When infinite data is available, the Neyman-Pearson (NP) design
procedure provides the most efficient way of obtaining this curve. In practice
the design procedure is applied to density estimates from finite data sets. We
perform a detailed statistical analysis of the resulting error propagation on
finite alphabets. We show that the estimated performance curve (EPC) produced
by the design procedure is arbitrarily accurate given sufficient data,
independent of the size of the feature set. However, the underlying likelihood
ranking procedure is highly sensitive to errors that reduce the probability
that the EPC is in fact the ROC. In the worst case, guaranteeing that the EPC
is equal to the ROC may require data sizes exponential in the size of the
feature set. These results imply that in theory the NP design approach may only
be valid for characterizing relatively small feature subsets, even when the
performance of any given classifier can be estimated very accurately. We
discuss the practical limitations for on-line methods that ensure that the NP
procedure operates in a statistically valid region.
| Frans Coetzee, Steve Lawrence, C. Lee Giles | null | 1301.3843 | null | null |
Experiments with Random Projection | cs.LG stat.ML | Recent theoretical work has identified random projection as a promising
dimensionality reduction technique for learning mixtures of Gaussians. Here we
summarize these results and illustrate them by a wide variety of experiments on
synthetic and real data.
| Sanjoy Dasgupta | null | 1301.3849 | null | null |
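The basic operation can be illustrated in a few lines: project a synthetic, well-separated mixture of Gaussians with a random Gaussian matrix and check that the class means remain far apart. The dimensions and data below are made up:

```python
# Illustrative random projection of a synthetic Gaussian mixture.
import numpy as np

rng = np.random.default_rng(0)
d, k_proj, n = 500, 20, 300
centers = rng.normal(scale=10.0, size=(3, d))         # three well-separated means
labels = rng.integers(0, 3, size=n)
X = centers[labels] + rng.normal(size=(n, d))          # mixture samples

R = rng.normal(size=(d, k_proj)) / np.sqrt(k_proj)     # random projection matrix
Z = X @ R                                              # projected data

means = np.array([Z[labels == c].mean(axis=0) for c in range(3)])
print("projected mean separations:",
      np.linalg.norm(means[0] - means[1]), np.linalg.norm(means[0] - means[2]))
```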
A Two-round Variant of EM for Gaussian Mixtures | cs.LG stat.ML | Given a set of possible models (e.g., Bayesian network structures) and a data
sample, in the unsupervised model selection problem the task is to choose the
most accurate model with respect to the domain joint probability distribution.
In contrast to this, in supervised model selection it is a priori known that
the chosen model will be used in the future for prediction tasks involving more
"focused" predictive distributions. Although focused predictive distributions
can be produced from the joint probability distribution by marginalization, in
practice the best model in the unsupervised sense does not necessarily perform
well in supervised domains. In particular, the standard marginal likelihood
score is a criterion for the unsupervised task, and, although frequently used
for supervised model selection also, does not perform well in such tasks. In
this paper we study the performance of the marginal likelihood score
empirically in supervised Bayesian network selection tasks by using a large
number of publicly available classification data sets, and compare the results
to those obtained by alternative model selection criteria, including empirical
cross-validation methods, an approximation of a supervised marginal likelihood
measure, and a supervised version of Dawid's prequential (predictive sequential)
principle. The results demonstrate that the marginal likelihood score does not
perform well for supervised model selection, while the best results are
obtained by using Dawid's prequential approach.
| Sanjoy Dasgupta, Leonard Schulman | null | 1301.3850 | null | null |
Minimum Message Length Clustering Using Gibbs Sampling | cs.LG stat.ML | The K-Mean and EM algorithms are popular in clustering and mixture modeling,
due to their simplicity and ease of implementation. However, they have several
significant limitations. Both converge to a local optimum of their respective
objective functions (ignoring the uncertainty in the model space), require the
a priori specification of the number of classes/clusters, and are inconsistent.
In this work we overcome these limitations by using the Minimum Message Length
(MML) principle and a variation to the K-Means/EM observation assignment and
parameter calculation scheme. We maintain the simplicity of these approaches
while constructing a Bayesian mixture modeling tool that samples/searches the
model space using a Markov Chain Monte Carlo (MCMC) sampler known as a Gibbs
sampler. Gibbs sampling allows us to visit each model according to its
posterior probability. Therefore, if the model space is multi-modal we will
visit all models and not get stuck in local optima. We call our approach
multiple chains at equilibrium (MCE) MML sampling.
| Ian Davidson | null | 1301.3851 | null | null |
Mix-nets: Factored Mixtures of Gaussians in Bayesian Networks With Mixed
Continuous And Discrete Variables | cs.LG cs.AI stat.ML | Recently developed techniques have made it possible to quickly learn accurate
probability density functions from data in low-dimensional continuous space. In
particular, mixtures of Gaussians can be fitted to data very quickly using an
accelerated EM algorithm that employs multiresolution kd-trees (Moore, 1999).
In this paper, we propose a kind of Bayesian network in which low-dimensional
mixtures of Gaussians over different subsets of the domain's variables are
combined into a coherent joint probability model over the entire domain. The
network is also capable of modeling complex dependencies between discrete
variables and continuous variables without requiring discretization of the
continuous variables. We present efficient heuristic algorithms for
automatically learning these networks from data, and perform comparative
experiments illustrating how well these networks model real scientific data and
synthetic data. We also briefly discuss some possible improvements to the
networks, as well as possible applications.
| Scott Davies, Andrew Moore | null | 1301.3852 | null | null |
Rao-Blackwellised Particle Filtering for Dynamic Bayesian Networks | cs.LG cs.AI stat.CO | Particle filters (PFs) are powerful sampling-based inference/learning
algorithms for dynamic Bayesian networks (DBNs). They allow us to treat, in a
principled way, any type of probability distribution, nonlinearity and
non-stationarity. They have appeared in several fields under such names as
"condensation", "sequential Monte Carlo" and "survival of the fittest". In this
paper, we show how we can exploit the structure of the DBN to increase the
efficiency of particle filtering, using a technique known as
Rao-Blackwellisation. Essentially, this samples some of the variables, and
marginalizes out the rest exactly, using the Kalman filter, HMM filter,
junction tree algorithm, or any other finite dimensional optimal filter. We
show that Rao-Blackwellised particle filters (RBPFs) lead to more accurate
estimates than standard PFs. We demonstrate RBPFs on two problems, namely
non-stationary online regression with radial basis function networks and robot
localization and map building. We also discuss other potential application
areas and provide references to some finite dimensional optimal filters.
| Arnaud Doucet, Nando de Freitas, Kevin Murphy, Stuart Russell | null | 1301.3853 | null | null |
Learning Graphical Models of Images, Videos and Their Spatial
Transformations | cs.CV cs.LG stat.ML | Mixtures of Gaussians, factor analyzers (probabilistic PCA) and hidden Markov
models are staples of static and dynamic data modeling and image and video
modeling in particular. We show how topographic transformations in the input,
such as translation and shearing in images, can be accounted for in these
models by including a discrete transformation variable. The resulting models
perform clustering, dimensionality reduction and time-series analysis in a way
that is invariant to transformations in the input. Using the EM algorithm,
these transformation-invariant models can be fit to static data and time
series. We give results on filtering microscopy images, face and facial pose
clustering, handwritten digit modeling and recognition, video clustering,
object tracking, and removal of distractions from video sequences.
| Brendan J. Frey, Nebojsa Jojic | null | 1301.3854 | null | null |
Being Bayesian about Network Structure | cs.LG cs.AI stat.ML | In many domains, we are interested in analyzing the structure of the
underlying distribution, e.g., whether one variable is a direct parent of the
other. Bayesian model-selection attempts to find the MAP model and use its
structure to answer these questions. However, when the amount of available data
is modest, there might be many models that have non-negligible posterior. Thus,
we want to compute the Bayesian posterior of a feature, i.e., the total posterior
probability of all models that contain it. In this paper, we propose a new
approach for this task. We first show how to efficiently compute a sum over the
exponential number of networks that are consistent with a fixed ordering over
network variables. This allows us to compute, for a given ordering, both the
marginal probability of the data and the posterior of a feature. We then use
this result as the basis for an algorithm that approximates the Bayesian
posterior of a feature. Our approach uses a Markov Chain Monte Carlo (MCMC)
method, but over orderings rather than over network structures. The space of
orderings is much smaller and more regular than the space of structures, and
has a smoother posterior `landscape'. We present empirical results on synthetic
and real-life datasets that compare our approach to full model averaging (when
possible), to MCMC over network structures, and to a non-Bayesian bootstrap
approach.
| Nir Friedman, Daphne Koller | null | 1301.3856 | null | null |
Gaussian Process Networks | cs.AI cs.LG stat.ML | In this paper we address the problem of learning the structure of a Bayesian
network in domains with continuous variables. This task requires a procedure
for comparing different candidate structures. In the Bayesian framework, this
is done by evaluating the marginal likelihood of the data given a
candidate structure. This term can be computed in closed-form for standard
parametric families (e.g., Gaussians), and can be approximated, at some
computational cost, for some semi-parametric families (e.g., mixtures of
Gaussians).
We present a new family of continuous variable probabilistic networks that
are based on Gaussian Process priors. These priors are semi-parametric in
nature and can learn almost arbitrary noisy functional relations. Using these
priors, we can directly compute marginal likelihoods for structure learning.
The resulting method can discover a wide range of functional dependencies in
multivariate data. We develop the Bayesian score of Gaussian Process Networks
and describe how to learn them from data. We present empirical results on
artificial data as well as on real-life domains with non-linear dependencies.
| Nir Friedman, Iftach Nachman | null | 1301.3857 | null | null |
Inference for Belief Networks Using Coupling From the Past | cs.AI cs.LG | Inference for belief networks using Gibbs sampling produces a distribution
for unobserved variables that differs from the correct distribution by a
(usually) unknown error, since convergence to the right distribution occurs
only asymptotically. The method of "coupling from the past" samples from
exactly the correct distribution by (conceptually) running dependent Gibbs
sampling simulations from every possible starting state from a time far enough
in the past that all runs reach the same state at time t=0. Explicitly
considering every possible state is intractable for large networks, however. We
propose a method for layered noisy-or networks that uses a compact, but often
imprecise, summary of a set of states. This method samples from exactly the
correct distribution, and requires only about twice the time per step as
ordinary Gibbs sampling, but it may require more simulation steps than would be
needed if chains were tracked exactly.
| Michael Harvey, Radford M. Neal | null | 1301.3861 | null | null |
Dependency Networks for Collaborative Filtering and Data Visualization | cs.AI cs.IR cs.LG | We describe a graphical model for probabilistic relationships---an
alternative to the Bayesian network---called a dependency network. The graph of
a dependency network, unlike a Bayesian network, is potentially cyclic. The
probability component of a dependency network, like a Bayesian network, is a
set of conditional distributions, one for each node given its parents. We
identify several basic properties of this representation and describe a
computationally efficient procedure for learning the graph and probability
components from data. We describe the application of this representation to
probabilistic inference, collaborative filtering (the task of predicting
preferences), and the visualization of acausal predictive relationships.
| David Heckerman, David Maxwell Chickering, Christopher Meek, Robert
Rounthwaite, Carl Kadie | null | 1301.3862 | null | null |
Feature Selection and Dualities in Maximum Entropy Discrimination | cs.LG stat.ML | Incorporating feature selection into a classification or regression method
often carries a number of advantages. In this paper we formalize feature
selection specifically from a discriminative perspective of improving
classification/regression accuracy. The feature selection method is developed
as an extension to the recently proposed maximum entropy discrimination (MED)
framework. We describe MED as a flexible (Bayesian) regularization approach
that subsumes, e.g., support vector classification, regression and exponential
family models. For brevity, we restrict ourselves primarily to feature
selection in the context of linear classification/regression methods and
demonstrate that the proposed approach indeed carries substantial improvements
in practice. Moreover, we discuss and develop various extensions of feature
selection, including the problem of dealing with example specific but
unobserved degrees of freedom -- alignments or invariants.
| Tony S. Jebara, Tommi S. Jaakkola | null | 1301.3865 | null | null |
Tractable Bayesian Learning of Tree Belief Networks | cs.LG cs.AI stat.ML | In this paper we present decomposable priors, a family of priors over
structure and parameters of tree belief nets for which Bayesian learning with
complete observations is tractable, in the sense that the posterior is also
decomposable and can be completely determined analytically in polynomial time.
This follows from two main results: First, we show that factored distributions
over spanning trees in a graph can be integrated in closed form. Second, we
examine priors over tree parameters and show that a set of assumptions similar
to (Heckerman et al. 1995) constrain the tree parameter priors to be a
compactly parameterized product of Dirichlet distributions. Besides allowing for
exact Bayesian learning, these results permit us to formulate a new class of
tractable latent variable models in which the likelihood of a data point is
computed through an ensemble average over tree structures.
| Marina Meila, Tommi S. Jaakkola | null | 1301.3875 | null | null |
The Anchors Hierarchy: Using the triangle inequality to survive high
dimensional data | cs.LG cs.DS stat.ML | This paper is about metric data structures in high-dimensional or
non-Euclidean space that permit cached sufficient statistics accelerations of
learning algorithms.
It has recently been shown that for less than about 10 dimensions, decorating
kd-trees with additional "cached sufficient statistics" such as first and
second moments and contingency tables can provide satisfying acceleration for a
very wide range of statistical learning tasks such as kernel regression,
locally weighted regression, k-means clustering, mixture modeling and Bayes Net
learning.
In this paper, we begin by defining the anchors hierarchy - a fast data
structure and algorithm for localizing data based only on a
triangle-inequality-obeying distance metric. We show how this, in its own
right, gives a fast and effective clustering of data. But more importantly we
show how it can produce a well-balanced structure similar to a Ball-Tree
(Omohundro, 1991) or a kind of metric tree (Uhlmann, 1991; Ciaccia, Patella, &
Zezula, 1997) in a way that is neither "top-down" nor "bottom-up" but instead
"middle-out". We then show how this structure, decorated with cached sufficient
statistics, allows a wide variety of statistical learning algorithms to be
accelerated even in thousands of dimensions.
| Andrew Moore | null | 1301.3877 | null | null |
PEGASUS: A Policy Search Method for Large MDPs and POMDPs | cs.AI cs.LG | We propose a new approach to the problem of searching a space of policies for
a Markov decision process (MDP) or a partially observable Markov decision
process (POMDP), given a model. Our approach is based on the following
observation: Any (PO)MDP can be transformed into an "equivalent" POMDP in which
all state transitions (given the current state and action) are deterministic.
This reduces the general problem of policy search to one in which we need only
consider POMDPs with deterministic transitions. We give a natural way of
estimating the value of all policies in these transformed POMDPs. Policy search
is then simply performed by searching for a policy with high estimated value.
We also establish conditions under which our value estimates will be good,
recovering theoretical results similar to those of Kearns, Mansour and Ng
(1999), but with "sample complexity" bounds that have only a polynomial rather
than exponential dependence on the horizon time. Our method applies to
arbitrary POMDPs, including ones with infinite state and action spaces. We also
present empirical results for our approach on a small discrete problem, and on
a complex continuous state/continuous action problem involving learning to ride
a bicycle.
| Andrew Y. Ng, Michael I. Jordan | null | 1301.3878 | null | null |
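A sketch of the key transformation on a made-up one-dimensional control problem: the random numbers ("scenarios") are drawn once and reused, so every policy is evaluated on the same deterministic transformed system and a simple search over policies becomes possible. All dynamics and parameters below are illustrative:

```python
# PEGASUS-style fixed-scenario policy evaluation on a toy 1-D problem.
import numpy as np

rng = np.random.default_rng(0)
horizon, n_scenarios = 20, 50
scenarios = rng.normal(size=(n_scenarios, horizon))   # noise drawn once, then fixed

def estimated_value(gain):
    """Deterministic value estimate of the policy a = -gain * s."""
    total = 0.0
    for eps in scenarios:
        s = 1.0
        for t in range(horizon):
            a = -gain * s
            s = s + a + 0.1 * eps[t]      # noisy transition reuses the fixed noise
            total += -s ** 2              # reward: keep the state near zero
    return total / n_scenarios

# Because the estimate is deterministic in `gain`, a plain grid search suffices.
gains = np.linspace(0.0, 1.5, 31)
best = max(gains, key=estimated_value)
print("best gain:", best)
```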
Adaptive Importance Sampling for Estimation in Structured Domains | cs.AI cs.LG stat.ML | Sampling is an important tool for estimating large, complex sums and
integrals over high dimensional spaces. For instance, importance sampling has
been used as an alternative to exact methods for inference in belief networks.
Ideally, we want to have a sampling distribution that provides optimal-variance
estimators. In this paper, we present methods that improve the sampling
distribution by systematically adapting it as we obtain information from the
samples. We present a stochastic-gradient-descent method for sequentially
updating the sampling distribution based on the direct minimization of the
variance. We also present other stochastic-gradient-descent methods based on
the minimization of typical notions of distance between the current sampling
distribution and approximations of the target, optimal distribution. We finally
validate and compare the different methods empirically by applying them to the
problem of action evaluation in influence diagrams.
| Luis E. Ortiz, Leslie Pack Kaelbling | null | 1301.3882 | null | null |
Monte Carlo Inference via Greedy Importance Sampling | cs.LG stat.CO stat.ML | We present a new method for conducting Monte Carlo inference in graphical
models which combines explicit search with generalized importance sampling. The
idea is to reduce the variance of importance sampling by searching for
significant points in the target distribution. We prove that it is possible to
introduce search and still maintain unbiasedness. We then demonstrate our
procedure on a few simple inference tasks and show that it can improve the
inference quality of standard MCMC methods, including Gibbs sampling,
Metropolis sampling, and Hybrid Monte Carlo. This paper extends previous work
which showed how greedy importance sampling could be correctly realized in the
one-dimensional case.
| Dale Schuurmans, Finnegan Southey | null | 1301.3890 | null | null |
Combining Feature and Prototype Pruning by Uncertainty Minimization | cs.LG stat.ML | We focus in this paper on dataset reduction techniques for use in k-nearest
neighbor classification. In such a context, feature and prototype selections
have always been independently treated by the standard storage reduction
algorithms. While this is theoretically justified by the fact that
each subproblem is NP-hard, we assume in this paper that a joint storage
reduction is in fact more intuitive and can in practice provide better results
than two independent processes. Moreover, it avoids a lot of distance
calculations by progressively removing useless instances during the feature
pruning. While standard selection algorithms often optimize the accuracy to
discriminate the set of solutions, we use in this paper a criterion based on an
uncertainty measure within a nearest-neighbor graph. This choice comes from
recent results that have proven that accuracy is not always a suitable
criterion to optimize. In our approach, a feature or an instance is removed if
its deletion improves information of the graph. Numerous experiments are
presented in this paper and a statistical analysis shows the relevance of our
approach, and its tolerance in the presence of noise.
| Marc Sebban, Richard Nock | null | 1301.3891 | null | null |
Dynamic Trees: A Structured Variational Method Giving Efficient
Propagation Rules | cs.LG cs.AI stat.ML | Dynamic trees are mixtures of tree structured belief networks. They solve
some of the problems of fixed tree networks at the cost of making exact
inference intractable. For this reason approximate methods such as sampling or
mean field approaches have been used. However, mean field approximations assume
a factorized distribution over node states. Such a distribution seems unlikely
in the posterior, as nodes are highly correlated in the prior. Here a
structured variational approach is used, where the posterior distribution over
the non-evidential nodes is itself approximated by a dynamic tree. It turns out
that this form can be used tractably and efficiently. The result is a set of
update rules which can propagate information through the network to obtain both
a full variational approximation, and the relevant marginals. The propagation
rules are more efficient than the mean field approach and give noticeable
quantitative and qualitative improvement in the inference. The marginals
calculated give better approximations to the posterior than loopy propagation
on a small toy problem.
| Amos J. Storkey | null | 1301.3895 | null | null |
An Uncertainty Framework for Classification | cs.LG stat.ML | We define a generalized likelihood function based on uncertainty measures and
show that maximizing such a likelihood function for different measures induces
different types of classifiers. In the probabilistic framework, we obtain
classifiers that optimize the cross-entropy function. In the possibilistic
framework, we obtain classifiers that maximize the interclass margin.
Furthermore, we show that the support vector machine is a sub-class of these
maximum-margin classifiers.
| Loo-Nin Teow, Kia-Fock Loe | null | 1301.3896 | null | null |
A Branch-and-Bound Algorithm for MDL Learning Bayesian Networks | cs.AI cs.LG stat.ML | This paper extends the work in [Suzuki, 1996] and presents an efficient
depth-first branch-and-bound algorithm for learning Bayesian network
structures, based on the minimum description length (MDL) principle, for a
given (consistent) variable ordering. The algorithm exhaustively searches
through all network structures and guarantees to find the network with the best
MDL score. Preliminary experiments show that the algorithm is efficient, and
that the time complexity grows slowly with the sample size. The algorithm is
useful for empirically studying both the performance of suboptimal heuristic
search algorithms and the adequacy of the MDL principle in learning Bayesian
networks.
| Jin Tian | null | 1301.3897 | null | null |
Model-Based Hierarchical Clustering | cs.LG cs.AI stat.ML | We present an approach to model-based hierarchical clustering by formulating
an objective function based on a Bayesian analysis. This model organizes the
data into a cluster hierarchy while specifying a complex feature-set
partitioning that is a key component of our model. Features can have either a
unique distribution in every cluster or a common distribution over some (or
even all) of the clusters. The cluster subsets over which these features have
such a common distribution correspond to the nodes (clusters) of the tree
representing the hierarchy. We apply this general model to the problem of
document clustering for which we use a multinomial likelihood function and
Dirichlet priors. Our algorithm consists of a two-stage process wherein we
first perform a flat clustering followed by a modified hierarchical
agglomerative merging process that includes determining the features that will
have common distributions over the merged clusters. The regularization induced
by using the marginal likelihood automatically determines the optimal model
structure including number of clusters, the depth of the tree and the subset of
features to be modeled as having a common distribution at each node. We present
experimental results on both synthetic data and a real document collection.
| Shivakumar Vaithyanathan, Byron E Dom | null | 1301.3899 | null | null |
Variational Approximations between Mean Field Theory and the Junction
Tree Algorithm | cs.LG cs.AI stat.ML | Recently, variational approximations such as the mean field approximation
have received much interest. We extend the standard mean field method by using
an approximating distribution that factorises into cluster potentials. This
includes undirected graphs, directed acyclic graphs and junction trees. We
derive generalized mean field equations to optimize the cluster potentials. We
show that the method bridges the gap between the standard mean field
approximation and the exact junction tree algorithm. In addition, we address
the problem of how to choose the graphical structure of the approximating
distribution. From the generalised mean field equations we derive rules to
simplify the structure of the approximating distribution in advance without
affecting the quality of the approximation. We also show how the method fits
into some other variational approximations that are currently popular.
| Wim Wiegerinck | null | 1301.3901 | null | null |
Efficient Sample Reuse in Policy Gradients with Parameter-based
Exploration | cs.LG stat.ML | The policy gradient approach is a flexible and powerful reinforcement
learning method particularly for problems with continuous actions such as robot
control. A common challenge in this scenario is how to reduce the variance of
policy gradient estimates for reliable policy updates. In this paper, we
combine the following three ideas and give a highly effective policy gradient
method: (a) the policy gradients with parameter based exploration, which is a
recently proposed policy search method with low variance of gradient estimates,
(b) an importance sampling technique, which allows us to reuse previously
gathered data in a consistent way, and (c) an optimal baseline, which minimizes
the variance of gradient estimates with their unbiasedness being maintained.
For the proposed method, we give theoretical analysis of the variance of
gradient estimates and show its usefulness through extensive experiments.
| Tingting Zhao, Hirotaka Hachiya, Voot Tangkaratt, Jun Morimoto,
Masashi Sugiyama | null | 1301.3966 | null | null |
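A minimal sketch of the parameter-based exploration part (PGPE) on a toy task; the sample reuse via importance sampling and the optimal baseline described above are replaced here by a simple moving-average baseline, and the task itself is made up:

```python
# PGPE sketch: sample policy parameters from N(mu, sigma^2), run an episode,
# and ascend the return with respect to mu and sigma.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = np.zeros(3), np.ones(3)          # hyper-policy over policy parameters
theta_star = np.array([1.0, -2.0, 0.5])      # unknown optimum of the toy task
lr = 0.02

def rollout_return(theta):
    # stand-in for running an episode with policy parameters theta
    return -np.sum((theta - theta_star) ** 2) + rng.normal(scale=0.1)

baseline = rollout_return(mu)                # initialize the baseline sensibly
for _ in range(3000):
    theta = mu + sigma * rng.normal(size=3)  # parameter-based exploration
    R = rollout_return(theta)
    baseline += 0.01 * (R - baseline)        # moving-average baseline
    adv = R - baseline
    diff = theta - mu
    mu += lr * adv * diff / sigma ** 2                         # grad of E[R] w.r.t. mu
    sigma += lr * adv * (diff ** 2 - sigma ** 2) / sigma ** 3  # grad w.r.t. sigma
    sigma = np.clip(sigma, 0.1, None)

print("learned mean:", np.round(mu, 2))
```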
Knowledge Matters: Importance of Prior Information for Optimization | cs.LG cs.CV cs.NE stat.ML | We explore the effect of introducing prior information into the intermediate
level of neural networks for a learning task on which all the state-of-the-art
machine learning algorithms tested failed to learn. We motivate our work from
the hypothesis that humans learn such intermediate concepts from other
individuals via a form of supervision or guidance using a curriculum. The
experiments we have conducted provide positive evidence in favor of this
hypothesis. In our experiments, a two-tiered MLP architecture is trained on a
dataset of 64x64 binary input images, each image with three sprites. The
final task is to decide whether all the sprites are the same or one of them is
different. Sprites are pentomino tetris shapes and they are placed in an image
with different locations using scaling and rotation transformations. The first
part of the two-tiered MLP is pre-trained with intermediate-level targets being
the presence of sprites at each location, while the second part takes the
output of the first part as input and predicts the final task's target binary
event. The two-tiered MLP architecture, with a few tens of thousand examples,
was able to learn the task perfectly, whereas all other algorithms (including
unsupervised pre-training, but also traditional algorithms like SVMs, decision
trees and boosting) perform no better than chance. We hypothesize that the
optimization difficulty involved when the intermediate pre-training is not
performed is due to the {\em composition} of two highly non-linear tasks. Our
findings are also consistent with hypotheses on cultural learning inspired by
the observations of optimization problems with deep learning, presumably
because of effective local minima.
| \c{C}a\u{g}lar G\"ul\c{c}ehre and Yoshua Bengio | null | 1301.4083 | null | null |
On the Product Rule for Classification Problems | cs.LG cs.CV stat.ML | We discuss theoretical aspects of the product rule for classification
problems in supervised machine learning for the case of combining classifiers.
We show that (1) the product rule arises from the MAP classifier supposing
equivalent priors and conditional independence given a class; (2) under some
conditions, the product rule is equivalent to minimizing the sum of the squared
distances to the respective centers of the classes related with different
features, such distances being weighted by the spread of the classes; (3)
under certain hypotheses, the product rule is equivalent to concatenating the
vectors of features.
| Marcelo Cicconet | null | 1301.4157 | null | null |
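Point (1) can be illustrated directly: under equal treatment of the priors and conditional independence given the class, the combined posterior is the product of the per-feature posteriors divided by the prior raised to K-1. The numbers below are made up:

```python
# Product rule for combining two classifiers over three classes.
import numpy as np

prior = np.array([0.5, 0.3, 0.2])                  # p(c)
posteriors = [np.array([0.7, 0.2, 0.1]),           # p(c | x_1)
              np.array([0.6, 0.3, 0.1])]           # p(c | x_2)

combined = np.prod(posteriors, axis=0) / prior ** (len(posteriors) - 1)
combined /= combined.sum()                         # renormalize
print("combined posterior:", np.round(combined, 3), "-> class", combined.argmax())
```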
Herded Gibbs Sampling | cs.LG stat.CO stat.ML | The Gibbs sampler is one of the most popular algorithms for inference in
statistical models. In this paper, we introduce a herding variant of this
algorithm, called herded Gibbs, that is entirely deterministic. We prove that
herded Gibbs has an $O(1/T)$ convergence rate for models with independent
variables and for fully connected probabilistic graphical models. Herded Gibbs
is shown to outperform Gibbs in the tasks of image denoising with MRFs and
named entity recognition with CRFs. However, the convergence for herded Gibbs
for sparsely connected probabilistic graphical models is still an open problem.
| Luke Bornn, Yutian Chen, Nando de Freitas, Mareija Eskelin, Jing Fang,
Max Welling | null | 1301.4168 | null | null |
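A minimal sketch of the herding update on a two-variable binary model: the random conditional draw of ordinary Gibbs is replaced by a deterministic threshold on a weight kept per variable and per state of the other variable. The joint distribution below is made up, and the empirical frequencies only approximate it:

```python
# Herded Gibbs sketch on a fully connected two-variable binary model.
import numpy as np

joint = np.array([[0.30, 0.10],                    # joint p(x1, x2), indexed [x1, x2]
                  [0.15, 0.45]])
p_x1_given_x2 = joint[1, :] / joint.sum(axis=0)    # p(x1 = 1 | x2)
p_x2_given_x1 = joint[:, 1] / joint.sum(axis=1)    # p(x2 = 1 | x1)
conditionals = [p_x1_given_x2, p_x2_given_x1]      # conditionals[i][value of other variable]

w = [np.zeros(2), np.zeros(2)]                     # one herding weight per (variable, neighbour value)
x = [0, 0]
counts = np.zeros((2, 2))
T = 20000
for _ in range(T):
    for i in (0, 1):
        other = x[1 - i]
        p = conditionals[i][other]
        x[i] = int(w[i][other] > 0)                # deterministic replacement of the random draw
        w[i][other] += p - x[i]                    # herding weight update
    counts[x[0], x[1]] += 1

print("empirical joint:\n", counts / T)
print("target joint:\n", joint)
```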
Affinity Weighted Embedding | cs.IR cs.LG stat.ML | Supervised (linear) embedding models like Wsabie and PSI have proven
successful at ranking, recommendation and annotation tasks. However, despite
being scalable to large datasets they do not take full advantage of the extra
data due to their linear nature, and typically underfit. We propose a new class
of models which aim to provide improved performance while retaining many of the
benefits of the existing class of embedding models. Our new approach works by
iteratively learning a linear embedding model where the next iteration's
features and labels are reweighted as a function of the previous iteration. We
describe several variants of the family, and give some initial results.
| Jason Weston, Ron Weiss, Hector Yee | null | 1301.4171 | null | null |
Latent Relation Representations for Universal Schemas | cs.LG stat.ML | Traditional relation extraction predicts relations within some fixed and
finite target schema. Machine learning approaches to this task require either
manual annotation or, in the case of distant supervision, existing structured
sources of the same schema. The need for existing datasets can be avoided by
using a universal schema: the union of all involved schemas (surface form
predicates as in OpenIE, and relations in the schemas of pre-existing
databases). This schema has an almost unlimited set of relations (due to
surface forms), and supports integration with existing structured data (through
the relation types of existing databases). To populate a database of such
schema we present a family of matrix factorization models that predict affinity
between database tuples and relations. We show that this achieves substantially
higher accuracy than the traditional classification approach. More importantly,
by operating simultaneously on relations observed in text and in pre-existing
structured DBs such as Freebase, we are able to reason about unstructured and
structured data in mutually-supporting ways. By doing so our approach
outperforms state-of-the-art distant supervision systems.
| Sebastian Riedel, Limin Yao, Andrew McCallum | null | 1301.4293 | null | null |
A Linearly Convergent Conditional Gradient Algorithm with Applications
to Online and Stochastic Optimization | cs.LG math.OC stat.ML | Linear optimization is many times algorithmically simpler than non-linear
convex optimization. Linear optimization over matroid polytopes, matching
polytopes and path polytopes are examples of problems for which we have simple
and efficient combinatorial algorithms, but whose non-linear convex counterpart
is harder and admits significantly less efficient algorithms. This motivates
the computational model of convex optimization, including the offline, online
and stochastic settings, using a linear optimization oracle. In this
computational model we give several new results that improve over the previous
state-of-the-art. Our main result is a novel conditional gradient algorithm for
smooth and strongly convex optimization over polyhedral sets that performs only
a single linear optimization step over the domain on each iteration and enjoys
a linear convergence rate. This gives an exponential improvement in convergence
rate over previous results.
Based on this new conditional gradient algorithm we give the first algorithms
for online convex optimization over polyhedral sets that perform only a single
linear optimization step over the domain while having optimal regret
guarantees, answering an open question of Kalai and Vempala, and Hazan and
Kale. Our online algorithms also imply conditional gradient algorithms for
non-smooth and stochastic convex optimization with the same convergence rates
as projected (sub)gradient methods.
| Dan Garber, Elad Hazan | null | 1301.4666 | null | null |
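For reference, the basic conditional gradient (Frank-Wolfe) template that the linearly convergent variant above refines, shown over the probability simplex where the linear oracle is a single coordinate argmin; the least-squares problem is made up:

```python
# Basic Frank-Wolfe over the probability simplex for 0.5 * ||A x - b||^2.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))
b = rng.normal(size=30)

def grad(x):
    return A.T @ (A @ x - b)

x = np.full(10, 0.1)               # start at the simplex barycenter
for t in range(1, 500):
    g = grad(x)
    s = np.zeros(10)
    s[np.argmin(g)] = 1.0          # linear oracle: best vertex of the simplex
    gamma = 2.0 / (t + 2.0)        # standard step size
    x = (1 - gamma) * x + gamma * s

print("objective:", 0.5 * np.sum((A @ x - b) ** 2), " sum(x) =", x.sum())
```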
Cellular Tree Classifiers | stat.ML cs.LG math.ST stat.TH | The cellular tree classifier model addresses a fundamental problem in the
design of classifiers for a parallel or distributed computing world: Given a
data set, is it sufficient to apply a majority rule for classification, or
shall one split the data into two or more parts and send each part to a
potentially different computer (or cell) for further processing? At first
sight, it seems impossible to define with this paradigm a consistent classifier
as no cell knows the "original data size", $n$. However, we show that this is
not so by exhibiting two different consistent classifiers. The consistency is
universal but is only shown for distributions with nonatomic marginals.
| G\'erard Biau (LPMA, LSTA, DMA, INRIA Paris - Rocquencourt), Luc
Devroye (SOCS) | null | 1301.4679 | null | null |
Pattern Matching for Self- Tuning of MapReduce Jobs | cs.DC cs.AI cs.LG | In this paper, we study CPU utilization time patterns of several MapReduce
applications. After extracting running patterns of several applications, they
are saved in a reference database to be later used to tweak system parameters
to efficiently execute unknown applications in the future. To achieve this goal,
CPU utilization patterns of new applications are compared with the already
known ones in the reference database to find/predict their most probable
execution patterns. Because of different patterns lengths, the Dynamic Time
Warping (DTW) is utilized for such comparison; a correlation analysis is then
applied to DTWs outcomes to produce feasible similarity patterns. Three real
applications (WordCount, Exim Mainlog parsing and Terasort) are used to
evaluate our hypothesis in tweaking system parameters in executing similar
applications. Results were very promising and showed effectiveness of our
approach on pseudo-distributed MapReduce platforms.
| Nikzad Babaii Rizvandi, Javid Taheri, Albert Y.Zomaya | 10.1109/ISPA.2011.24 | 1301.4753 | null | null |
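A standard dynamic time warping distance, as used above to compare CPU-utilization traces of different lengths; the traces below are made up, not taken from the evaluated applications:

```python
# Textbook DTW distance between two traces of different lengths.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

trace_a = np.array([10, 40, 80, 80, 30, 10], dtype=float)     # illustrative CPU pattern
trace_b = np.array([12, 35, 78, 82, 79, 28, 9], dtype=float)  # similar but longer pattern
print("DTW distance:", dtw_distance(trace_a, trace_b))
```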
A Linear Time Active Learning Algorithm for Link Classification -- Full
Version -- | cs.LG cs.SI stat.ML | We present very efficient active learning algorithms for link classification
in signed networks. Our algorithms are motivated by a stochastic model in which
edge labels are obtained through perturbations of an initial sign assignment
consistent with a two-clustering of the nodes. We provide a theoretical
analysis within this model, showing that we can achieve an optimal (to within
a constant factor) number of mistakes on any graph G = (V,E) such that |E| =
\Omega(|V|^{3/2}) by querying O(|V|^{3/2}) edge labels. More generally, we show
an algorithm that achieves optimality to within a factor of O(k) by querying at
most order of |V| + (|V|/k)^{3/2} edge labels. The running time of this
algorithm is at most of order |E| + |V|\log|V|.
| Nicolo Cesa-Bianchi, Claudio Gentile, Fabio Vitale, Giovanni Zappella | null | 1301.4767 | null | null |
A Correlation Clustering Approach to Link Classification in Signed
Networks -- Full Version -- | cs.LG cs.DS stat.ML | Motivated by social balance theory, we develop a theory of link
classification in signed networks using the correlation clustering index as
measure of label regularity. We derive learning bounds in terms of correlation
clustering within three fundamental transductive learning settings: online,
batch and active. Our main algorithmic contribution is in the active setting,
where we introduce a new family of efficient link classifiers based on covering
the input graph with small circuits. These are the first active algorithms for
link classification with mistake bounds that hold for arbitrary signed
networks.
| Nicolo Cesa-Bianchi, Claudio Gentile, Fabio Vitale, Giovanni Zappella | null | 1301.4769 | null | null |
Active Learning of Inverse Models with Intrinsically Motivated Goal
Exploration in Robots | cs.LG cs.AI cs.CV cs.NE cs.RO | We introduce the Self-Adaptive Goal Generation - Robust Intelligent Adaptive
Curiosity (SAGG-RIAC) architecture as an intrinsically motivated goal
exploration mechanism which allows active learning of inverse models in
high-dimensional redundant robots. This allows a robot to efficiently and
actively learn distributions of parameterized motor skills/policies that solve
a corresponding distribution of parameterized tasks/goals. The architecture
makes the robot sample actively novel parameterized tasks in the task space,
based on a measure of competence progress, each of which triggers low-level
goal-directed learning of the motor policy parameters that allow solving it.
For both learning and generalization, the system leverages regression
techniques which allow to infer the motor policy parameters corresponding to a
given novel parameterized task, and based on the previously learnt
correspondences between policy and task parameters. We present experiments with
high-dimensional continuous sensorimotor spaces in three different robotic
setups: 1) learning the inverse kinematics in a highly-redundant robotic arm,
2) learning omnidirectional locomotion with motor primitives in a quadruped
robot, 3) an arm learning to control a fishing rod with a flexible wire. We
show that 1) exploration in the task space can be a lot faster than exploration
in the actuator space for learning inverse models in redundant robots; 2)
selecting goals maximizing competence progress creates developmental
trajectories driving the robot to progressively focus on tasks of increasing
complexity and is statistically significantly more efficient than selecting
tasks randomly, as well as more efficient than different standard active motor
babbling methods; 3) this architecture allows the robot to actively discover
which parts of its task space it can learn to reach and which part it cannot.
| Adrien Baranes and Pierre-Yves Oudeyer | 10.1016/j.robot.2012.05.008 | 1301.4862 | null | null |
Dirichlet draws are sparse with high probability | cs.LG math.PR stat.ML | This note provides an elementary proof of the folklore fact that draws from a
Dirichlet distribution (with parameters less than 1) are typically sparse (most
coordinates are small).
| Matus Telgarsky | null | 1301.4917 | null | null |
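The statement is easy to check empirically; the parameter values below are illustrative:

```python
# Empirical check: Dirichlet draws with all parameters below 1 are sparse.
import numpy as np

rng = np.random.default_rng(0)
alpha = np.full(100, 0.05)                 # symmetric parameters < 1
draws = rng.dirichlet(alpha, size=1000)
frac_small = np.mean(draws < 0.01)         # fraction of tiny coordinates
print(f"{frac_small:.1%} of coordinates fall below 0.01")
```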
Evaluation of a Supervised Learning Approach for Stock Market Operations | stat.ML cs.LG stat.AP | Data mining methods have been widely applied in financial markets, with the
purpose of providing suitable tools for prices forecasting and automatic
trading. Particularly, learning methods aim to identify patterns in time series
and, based on such patterns, to recommend buy/sell operations. The objective of
this work is to evaluate the performance of Random Forests, a supervised
learning method based on ensembles of decision trees, for decision support in
stock markets. Preliminary results indicate good rates of successful operations
and good rates of return per operation, providing a strong motivation for
further research in this topic.
| Marcelo S. Lauretto, Barbara B. C. Silva and Pablo M. Andrade | null | 1301.4944 | null | null |
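A hypothetical sketch of such an evaluation with scikit-learn: a random forest predicting the sign of the next return from a window of lagged returns. The data, features and parameters are made up and are not those used in the paper:

```python
# Random forest on lagged synthetic returns (illustrative setup only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
returns = rng.normal(scale=0.01, size=2000)               # synthetic daily returns
window = 10
X = np.array([returns[i:i + window] for i in range(len(returns) - window)])
y = (returns[window:] > 0).astype(int)                    # 1 = "buy", 0 = "sell"

split = int(0.8 * len(X))                                 # time-ordered train/test split
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[:split], y[:split])
print("out-of-sample hit rate:", clf.score(X[split:], y[split:]))
```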
Heteroscedastic Conditional Ordinal Random Fields for Pain Intensity
Estimation from Facial Images | cs.CV cs.LG stat.ML | We propose a novel method for automatic pain intensity estimation from facial
images based on the framework of kernel Conditional Ordinal Random Fields
(KCORF). We extend this framework to account for heteroscedasticity on the
output labels (i.e., pain intensity scores) and introduce novel dynamic
features, dynamic ranks, that impose temporal ordinal constraints on the static
ranks (i.e., intensity scores). Our experimental results show that the proposed
approach outperforms state-of-the-art methods for sequence classification with
ordinal data and other ordinal regression models. The approach performs
significantly better than other models in terms of Intra-Class Correlation
measure, which is the most accepted evaluation measure in the tasks of facial
behaviour intensity estimation.
| Ognjen Rudovic, Maja Pantic, Vladimir Pavlovic | null | 1301.5063 | null | null |
Piecewise Linear Multilayer Perceptrons and Dropout | stat.ML cs.LG | We propose a new type of hidden layer for a multilayer perceptron, and
demonstrate that it obtains the best reported performance for an MLP on the
MNIST dataset.
| Ian J. Goodfellow | null | 1301.5088 | null | null |
Active Learning on Trees and Graphs | cs.LG stat.ML | We investigate the problem of active learning on a given tree whose nodes are
assigned binary labels in an adversarial way. Inspired by recent results by
Guillory and Bilmes, we characterize (up to constant factors) the optimal
placement of queries so as to minimize the mistakes made on the non-queried nodes.
Our query selection algorithm is extremely efficient, and the optimal number of
mistakes on the non-queried nodes is achieved by a simple and efficient mincut
classifier. Through a simple modification of the query selection algorithm we
also show optimality (up to constant factors) with respect to the trade-off
between number of queries and number of mistakes on non-queried nodes. By using
spanning trees, our algorithms can be efficiently applied to general graphs,
although the problem of finding optimal and efficient active learning
algorithms for general graphs remains open. Towards this end, we provide a
lower bound on the number of mistakes made on arbitrary graphs by any active
learning algorithm using a number of queries which is up to a constant fraction
of the graph size.
| Nicolo Cesa-Bianchi, Claudio Gentile, Fabio Vitale, Giovanni Zappella | null | 1301.5112 | null | null |
See the Tree Through the Lines: The Shazoo Algorithm -- Full Version -- | cs.LG | Predicting the nodes of a given graph is a fascinating theoretical problem
with applications in several domains. Since graph sparsification via spanning
trees retains enough information while making the task much easier, trees are
an important special case of this problem. Although it is known how to predict
the nodes of an unweighted tree in a nearly optimal way, in the weighted case a
fully satisfactory algorithm is not available yet. We fill this hole and
introduce an efficient node predictor, Shazoo, which is nearly optimal on any
weighted tree. Moreover, we show that Shazoo can be viewed as a common
nontrivial generalization of both previous approaches for unweighted trees and
weighted lines. Experiments on real-world datasets confirm that Shazoo performs
well in that it fully exploits the structure of the input tree, and gets very
close to (and sometimes better than) less scalable energy minimization methods.
| Fabio Vitale, Nicolo Cesa-Bianchi, Claudio Gentile, Giovanni Zappella | null | 1301.5160 | null | null |
Properties of the Least Squares Temporal Difference learning algorithm | stat.ML cs.LG | This paper presents four different ways of looking at the well-known Least
Squares Temporal Differences (LSTD) algorithm for computing the value function
of a Markov Reward Process, each of them leading to different insights: the
operator-theory approach via the Galerkin method, the statistical approach via
instrumental variables, the linear dynamical system view as well as the limit
of the TD iteration. We also give a geometric view of the algorithm as an
oblique projection. Furthermore, there is an extensive comparison of the
optimization problem solved by LSTD as compared to Bellman Residual
Minimization (BRM). We then review several schemes for the regularization of
the LSTD solution. We then proceed to treat the modification of LSTD for the
case of episodic Markov Reward Processes.
| Kamil Ciosek | null | 1301.5220 | null | null |
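The batch LSTD solution itself is a few lines: accumulate A = sum phi(s)(phi(s) - gamma*phi(s'))^T and b = sum phi(s)*r over observed transitions, then solve A w = b. The toy chain below is made up:

```python
# Batch LSTD on a toy 5-state random-walk reward process with tabular features.
import numpy as np

rng = np.random.default_rng(0)
n_states, gamma = 5, 0.9
phi = np.eye(n_states)                      # tabular features

A = np.zeros((n_states, n_states))
b = np.zeros(n_states)
s = 0
for _ in range(20000):                      # random walk on the chain
    s_next = min(max(s + rng.choice([-1, 1]), 0), n_states - 1)
    r = 1.0 if s_next == n_states - 1 else 0.0
    A += np.outer(phi[s], phi[s] - gamma * phi[s_next])
    b += phi[s] * r
    s = s_next

w = np.linalg.solve(A, b)                   # value-function weights (here, one per state)
print("estimated values:", np.round(w, 2))
```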