title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Belief Propagation for Continuous State Spaces: Stochastic
Message-Passing with Quantitative Guarantees | cs.IT cs.LG math.IT stat.ML | The sum-product or belief propagation (BP) algorithm is a widely used
message-passing technique for computing approximate marginals in graphical
models. We introduce a new technique, called stochastic orthogonal series
message-passing (SOSMP), for computing the BP fixed point in models with
continuous random variables. It is based on a deterministic approximation of
the messages via orthogonal series expansion, and a stochastic approximation
via Monte Carlo estimates of the integral updates of the basis coefficients. We
prove that the SOSMP iterates converge to a \delta-neighborhood of the unique
BP fixed point for any tree-structured graph, and for any graph with cycles in
which the BP updates satisfy a contractivity condition. In addition, we
demonstrate how to choose the number of basis coefficients as a function of the
desired approximation accuracy \delta and smoothness of the compatibility
functions. We illustrate our theory with simulated examples and with an
application to optical flow estimation.
| Nima Noorshams and Martin J. Wainwright | null | 1212.3850 | null | null |
Learning Markov Decision Processes for Model Checking | cs.LG cs.LO cs.SE | Constructing an accurate system model for formal model verification can be
both resource demanding and time-consuming. To alleviate this shortcoming,
algorithms have been proposed for automatically learning system models based on
observed system behaviors. In this paper we extend an algorithm for learning
probabilistic automata to reactive systems, where the observed system behavior
is in the form of alternating sequences of inputs and outputs. We propose an
algorithm for automatically learning a deterministic labeled Markov decision
process model from the observed behavior of a reactive system. The proposed
learning algorithm is adapted from algorithms for learning deterministic
probabilistic finite automata, and extended to include both probabilistic and
nondeterministic transitions. The algorithm is empirically analyzed and
evaluated by learning system models of slot machines. The evaluation is
performed by analyzing the probabilistic linear temporal logic properties of
the system as well as by analyzing the schedulers, in particular the optimal
schedulers, induced by the learned models.
| Hua Mao (AAU), Yingke Chen (AAU), Manfred Jaeger (AAU), Thomas D.
Nielsen (AAU), Kim G. Larsen (AAU), Brian Nielsen (AAU) | 10.4204/EPTCS.103.6 | 1212.3873 | null | null |
A Tutorial on Probabilistic Latent Semantic Analysis | stat.ML cs.LG | In this tutorial, I discuss in detail how Probabilistic Latent
Semantic Analysis (PLSA) is formalized and how different learning algorithms
have been proposed to learn the model.
| Liangjie Hong | null | 1212.3900 | null | null |
Group Component Analysis for Multiblock Data: Common and Individual
Feature Extraction | cs.CV cs.LG | Very often data we encounter in practice is a collection of matrices rather
than a single matrix. These multi-block data are naturally linked and hence
often share some common features and at the same time they have their own
individual features, due to the background in which they are measured and
collected. In this study we propose a new scheme of common and individual
feature analysis (CIFA) that processes multi-block data in a linked way, aiming
at discovering and separating their common and individual features. According
to whether the number of common features is given or not, two efficient
algorithms are proposed to extract the common basis shared by all
data. Then feature extraction is performed on the common and the individual
spaces separately by incorporating techniques such as dimensionality
reduction and blind source separation. We also discuss how the proposed CIFA
can significantly improve the performance of classification and clustering
tasks by exploiting common and individual features of samples respectively. Our
experimental results show some encouraging features of the proposed methods in
comparison to the state-of-the-art methods on synthetic and real data.
| Guoxu Zhou and Andrzej Cichocki and Yu Zhang and Danilo Mandic | 10.1109/TNNLS.2015.2487364 | 1212.3913 | null | null |
Alternating Maximization: Unifying Framework for 8 Sparse PCA
Formulations and Efficient Parallel Codes | stat.ML cs.LG math.OC | Given a multivariate data set, sparse principal component analysis (SPCA)
aims to extract several linear combinations of the variables that together
explain the variance in the data as much as possible, while controlling the
number of nonzero loadings in these combinations. In this paper we consider 8
different optimization formulations for computing a single sparse loading
vector; these are obtained by combining the following factors: we employ two
norms for measuring variance (L2, L1) and two sparsity-inducing norms (L0, L1),
which are used in two different ways (constraint, penalty). Three of our
formulations, notably the one with L0 constraint and L1 variance, have not been
considered in the literature. We give a unifying reformulation which we propose
to solve via a natural alternating maximization (AM) method. We show that the AM
method is nontrivially equivalent to GPower (Journ\'{e}e et al; JMLR
11:517--553, 2010) for all our formulations. Besides this, we provide 24
efficient parallel SPCA implementations: 3 codes (multi-core, GPU and cluster)
for each of the 8 problems. Parallelism in the methods is aimed at i) speeding
up computations (our GPU code can be 100 times faster than an efficient serial
code written in C++), ii) obtaining solutions explaining more variance and iii)
dealing with big data problems (our cluster code is able to solve a 357 GB
problem in about a minute).
| Peter Richt\'arik, Majid Jahani, Selin Damla Ahipa\c{s}ao\u{g}lu,
Martin Tak\'a\v{c} | null | 1212.4137 | null | null |
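The alternating maximization idea above can be illustrated with a small sketch. The snippet below implements only one of the paper's eight formulations (L2 variance with an L0 constraint) as a truncated-power-style alternating update; the function name `sparse_pca_am` and all parameter choices are illustrative assumptions, and this is not the authors' parallel GPU/cluster code.

```python
import numpy as np

def sparse_pca_am(X, k, n_iter=100, seed=0):
    """Single sparse loading vector by alternating maximization.

    Illustrates the L0-constrained / L2-variance formulation:
    maximize ||X w||_2 subject to ||w||_2 = 1 and ||w||_0 <= k,
    alternating between the auxiliary variable y = Xw/||Xw|| and a
    hard-thresholded update of w (truncated-power-method style).
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    w = rng.standard_normal(p)
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        y = X @ w
        y /= np.linalg.norm(y) + 1e-12        # update auxiliary variable
        g = X.T @ y                           # ascent direction for w
        idx = np.argsort(np.abs(g))[-k:]      # keep the k largest entries
        w = np.zeros(p)
        w[idx] = g[idx]
        w /= np.linalg.norm(w) + 1e-12        # back to the unit sphere
    return w

X = np.random.default_rng(1).standard_normal((200, 50))
w = sparse_pca_am(X, k=5)
print("nonzeros:", np.count_nonzero(w),
      "captured norm:", round(float(np.linalg.norm(X @ w)), 2))
```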
Feature Clustering for Accelerating Parallel Coordinate Descent | stat.ML cs.DC cs.LG math.OC | Large-scale L1-regularized loss minimization problems arise in
high-dimensional applications such as compressed sensing and high-dimensional
supervised learning, including classification and regression problems.
High-performance algorithms and implementations are critical to efficiently
solving these problems. Building upon previous work on coordinate descent
algorithms for L1-regularized problems, we introduce a novel family of
algorithms called block-greedy coordinate descent that includes, as special
cases, several existing algorithms such as SCD, Greedy CD, Shotgun, and
Thread-Greedy. We give a unified convergence analysis for the family of
block-greedy algorithms. The analysis suggests that block-greedy coordinate
descent can better exploit parallelism if features are clustered so that the
maximum inner product between features in different blocks is small. Our
theoretical convergence analysis is supported with experimental results using
data from diverse real-world applications. We hope that the algorithmic approaches
and convergence analysis we provide will not only advance the field, but will
also encourage researchers to systematically explore the design space of
algorithms for solving large-scale L1-regularization problems.
| Chad Scherrer, Ambuj Tewari, Mahantesh Halappanavar, David Haglin | null | 1212.4174 | null | null |
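As a point of reference for the coordinate-descent family discussed above, here is a minimal serial sketch of greedy coordinate descent on an L1-regularized least-squares objective. It is not the authors' block-greedy parallel implementation: the greedy selection rule and all constants below are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def greedy_cd_lasso(X, y, lam, n_iter=300):
    """Greedy coordinate descent for 0.5*||y - Xw||^2 + lam*||w||_1.

    Serial sketch: each step updates the single coordinate whose proposed
    move is largest; block-greedy/parallel variants instead update one
    coordinate per feature block concurrently.
    """
    n, p = X.shape
    col_sq = (X ** 2).sum(axis=0)
    w = np.zeros(p)
    r = y.copy()                                   # residual y - Xw
    for _ in range(n_iter):
        rho = X.T @ r + col_sq * w                 # coordinate-wise statistics
        w_new = soft_threshold(rho, lam) / col_sq  # proposed minimizers
        delta = w_new - w
        j = int(np.argmax(np.abs(delta) * np.sqrt(col_sq)))   # greedy pick
        r -= X[:, j] * delta[j]
        w[j] = w_new[j]
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
beta = np.zeros(20)
beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + 0.1 * rng.standard_normal(100)
print(np.round(greedy_cd_lasso(X, y, lam=5.0), 2))   # mass on the first three coordinates
```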
Bayesian Group Nonnegative Matrix Factorization for EEG Analysis | cs.LG stat.ML | We propose a generative model for group EEG analysis, based on appropriate
kernel assumptions on EEG data. We derive the variational inference update rule
using various approximation techniques. The proposed model outperforms the
current state-of-the-art algorithms in terms of common pattern extraction. The
validity of the proposed model is tested on the BCI competition dataset.
| Bonggun Shin, Alice Oh | null | 1212.4347 | null | null |
Variational Optimization | stat.ML cs.LG cs.NA | We discuss a general technique that can be used to form a differentiable
bound on the optima of non-differentiable or discrete objective functions. We
form a unified description of these methods and consider under which
circumstances the bound is concave. In particular we consider two concrete
applications of the method, namely sparse learning and support vector
classification.
| Joe Staines and David Barber | null | 1212.4507 | null | null |
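A minimal sketch of the general idea above: replace a non-differentiable objective f(x) by the differentiable surrogate E_{x~N(mu, sigma^2 I)}[f(x)] >= min_x f(x) and descend on mu. The score-function gradient estimator used here is one standard choice and an assumption on my part, not necessarily the estimator analyzed in the paper.

```python
import numpy as np

def variational_optimization(f, dim, sigma=0.5, lr=0.05, n_iter=300, n_samples=64, seed=0):
    """Descend on mu for the bound E_{x~N(mu, sigma^2 I)}[f(x)] >= min_x f(x).

    The expectation is differentiable in mu even when f is not; its gradient
    is estimated with the score-function estimator
    grad = E[f(x) * (x - mu) / sigma^2], with the sample mean of f as baseline.
    """
    rng = np.random.default_rng(seed)
    mu = np.zeros(dim)
    for _ in range(n_iter):
        eps = rng.standard_normal((n_samples, dim))
        x = mu + sigma * eps
        fx = np.array([f(xi) for xi in x])
        grad = ((fx - fx.mean())[:, None] * eps).mean(axis=0) / sigma
        mu -= lr * grad
    return mu

# piecewise-constant objective, minimized whenever every coordinate rounds to 3
f = lambda x: np.sum(np.abs(np.round(x) - 3))
print(np.round(variational_optimization(f, dim=5), 1))   # drifts toward x = 3
```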
A Multi-View Embedding Space for Modeling Internet Images, Tags, and
their Semantics | cs.CV cs.IR cs.LG cs.MM | This paper investigates the problem of modeling Internet images and
associated text or tags for tasks such as image-to-image search, tag-to-image
search, and image-to-tag search (image annotation). We start with canonical
correlation analysis (CCA), a popular and successful approach for mapping
visual and textual features to the same latent space, and incorporate a third
view capturing high-level image semantics, represented either by a single
category or multiple non-mutually-exclusive concepts. We present two ways to
train the three-view embedding: supervised, with the third view coming from
ground-truth labels or search keywords; and unsupervised, with semantic themes
automatically obtained by clustering the tags. To ensure high accuracy for
retrieval tasks while keeping the learning process scalable, we combine
multiple strong visual features and use explicit nonlinear kernel mappings to
efficiently approximate kernel CCA. To perform retrieval, we use a specially
designed similarity function in the embedded space, which substantially
outperforms the Euclidean distance. The resulting system produces compelling
qualitative results and outperforms a number of two-view baselines on retrieval
tasks on three large-scale Internet image datasets.
| Yunchao Gong and Qifa Ke and Michael Isard and Svetlana Lazebnik | null | 1212.4522 | null | null |
Analysis of Large-scale Traffic Dynamics using Non-negative Tensor
Factorization | cs.LG | In this paper, we present our work on clustering and prediction of temporal
dynamics of global congestion configurations in large-scale road networks.
Instead of looking into temporal traffic state variation of individual links,
or of small areas, we focus on spatial congestion configurations of the whole
network. In our work, we aim at describing the typical temporal dynamic
patterns of this network-level traffic state and achieving long-term prediction
of the large-scale traffic dynamics, in a unified data-mining framework. To
this end, we formulate this joint task using Non-negative Tensor Factorization
(NTF), which has been shown to be a useful decomposition tool for multivariate
data sequences. Clustering and prediction are performed based on the compact
tensor factorization results. Experiments on large-scale simulated data
illustrate the effectiveness of our method, with promising results for long-term
forecast of traffic evolution.
| Yufei Han (INRIA Rocquencourt), Fabien Moutarde (CAOR) | null | 1212.4675 | null | null |
Role Mining with Probabilistic Models | cs.CR cs.LG stat.ML | Role mining tackles the problem of finding a role-based access control (RBAC)
configuration, given an access-control matrix assigning users to access
permissions as input. Most role mining approaches work by constructing a large
set of candidate roles and use a greedy selection strategy to iteratively pick
a small subset such that the differences between the resulting RBAC
configuration and the access control matrix are minimized. In this paper, we
advocate an alternative approach that recasts role mining as an inference
problem rather than a lossy compression problem. Instead of using combinatorial
algorithms to minimize the number of roles needed to represent the
access-control matrix, we derive probabilistic models to learn the RBAC
configuration that most likely underlies the given matrix.
Our models are generative in that they reflect the way that permissions are
assigned to users in a given RBAC configuration. We additionally model how
user-permission assignments that conflict with an RBAC configuration emerge and
we investigate the influence of constraints on role hierarchies and on the
number of assignments. In experiments with access-control matrices from
real-world enterprises, we compare our proposed models with other role mining
methods. Our results show that our probabilistic models infer roles that
generalize well to new system users for a wide variety of data, while other
models' generalization abilities depend on the dataset given.
| Mario Frank, Joachim M. Buhmann, David Basin | null | 1212.4775 | null | null |
A Practical Algorithm for Topic Modeling with Provable Guarantees | cs.LG cs.DS stat.ML | Topic models provide a useful method for dimensionality reduction and
exploratory data analysis in large text corpora. Most approaches to topic model
inference have been based on a maximum likelihood objective. Efficient
algorithms exist that approximate this objective, but they have no provable
guarantees. Recently, algorithms have been introduced that provide provable
bounds, but these algorithms are not practical because they are inefficient and
not robust to violations of model assumptions. In this paper we present an
algorithm for topic model inference that is both provable and practical. The
algorithm produces results comparable to the best MCMC implementations while
running orders of magnitude faster.
| Sanjeev Arora, Rong Ge, Yoni Halpern, David Mimno, Ankur Moitra, David
Sontag, Yichen Wu, Michael Zhu | null | 1212.4777 | null | null |
Maximally Informative Observables and Categorical Perception | cs.LG cs.SD | We formulate the problem of perception in the framework of information
theory, and prove that categorical perception is equivalent to the existence of
an observable that has the maximum possible information on the target of
perception. We call such an observable maximally informative. Regardless of
whether categorical perception is real, maximally informative observables can
form the basis of a theory of perception. We conclude with the implications of
such a theory for the problem of speech perception.
| Elaine Tsiang | null | 1212.5091 | null | null |
Hybrid Fuzzy-ART based K-Means Clustering Methodology to Cellular
Manufacturing Using Operational Time | cs.LG | This paper presents a new hybrid Fuzzy-ART based K-Means Clustering technique
to solve the part machine grouping problem in cellular manufacturing systems
considering operational time. The performance of the proposed technique is
tested with problems from open literature and the results are compared to the
existing clustering models such as simple K-means algorithm and modified ART1
algorithm using an efficient modified performance measure known as modified
grouping efficiency (MGE) as found in the literature. The results support the
better performance of the proposed algorithm. The novelty of this study lies in
its simple and efficient methodology, which produces quick solutions for
shop-floor managers with minimal computational effort and time.
| Sourav Sengupta, Tamal Ghosh, Pranab K Dan, Manojit Chattopadhyay | null | 1212.5101 | null | null |
Nonparametric ridge estimation | math.ST cs.LG stat.ML stat.TH | We study the problem of estimating the ridges of a density function. Ridge
estimation is an extension of mode finding and is useful for understanding the
structure of a density. It can also be used to find hidden structure in point
cloud data. We show that, under mild regularity conditions, the ridges of the
kernel density estimator consistently estimate the ridges of the true density.
When the data are noisy measurements of a manifold, we show that the ridges are
close and topologically similar to the hidden manifold. To find the estimated
ridges in practice, we adapt the modified mean-shift algorithm proposed by
Ozertem and Erdogmus [J. Mach. Learn. Res. 12 (2011) 1249-1286]. Some numerical
experiments verify that the algorithm is accurate.
| Christopher R. Genovese, Marco Perone-Pacifico, Isabella Verdinelli,
Larry Wasserman | 10.1214/14-AOS1218 | 1212.5156 | null | null |
Fuzzy soft rough K-Means clustering approach for gene expression data | cs.LG cs.CE | Clustering is one of the widely used data mining techniques for medical
diagnosis. Clustering can be considered as the most important unsupervised
learning technique. Most of the clustering methods group data based on distance
and few methods cluster data based on similarity. The clustering algorithms
classify gene expression data into clusters and the functionally related genes
are grouped together in an efficient manner. The groupings are constructed such
that the degree of relationship is strong among members of the same cluster and
weak among members of different clusters. In this work, we focus on a
similarity relationship among genes with similar expression patterns so that a
consequential and simple analytical decision can be made from the proposed
Fuzzy Soft Rough K-Means algorithm. The algorithm is developed based on Fuzzy
Soft sets and Rough sets. A comparative analysis of the proposed work is made
with benchmark algorithms such as K-Means and Rough K-Means, and the efficiency of the
proposed algorithm is illustrated in this work by using various cluster
validity measures such as DB index and Xie-Beni index.
| K. Dhanalakshmi, H. Hannah Inbarani | null | 1212.5359 | null | null |
Soft Set Based Feature Selection Approach for Lung Cancer Images | cs.LG cs.CE | Lung cancer is the deadliest type of cancer for both men and women. Feature
selection plays a vital role in cancer classification. This paper investigates
the feature selection process in Computed Tomographic (CT) lung cancer images
using soft set theory. We propose a new soft set based unsupervised feature
selection algorithm. Nineteen features are extracted from the segmented lung
images using the gray level co-occurrence matrix (GLCM) and the gray level difference
matrix (GLDM). In this paper, an efficient Unsupervised Soft Set based Quick
Reduct (SSUSQR) algorithm is presented. This method is used to select features
from the data set and compared with existing rough set based unsupervised
feature selection methods. Then K-Means and Self Organizing Map (SOM)
clustering algorithms are used to cluster the data. The performance of the
feature selection algorithms is evaluated based on performance of clustering
techniques. The results show that the proposed method effectively removes
redundant features.
| G. Jothi, H. Hannah Inbarani | null | 1212.5391 | null | null |
Reinforcement learning for port-Hamiltonian systems | cs.SY cs.LG | Passivity-based control (PBC) for port-Hamiltonian systems provides an
intuitive way of achieving stabilization by rendering a system passive with
respect to a desired storage function. However, in most instances the control
law is obtained without any performance considerations and it has to be
calculated by solving a complex partial differential equation (PDE). In order
to address these issues we introduce a reinforcement learning approach into the
energy-balancing passivity-based control (EB-PBC) method, which is a form of
PBC in which the closed-loop energy is equal to the difference between the
stored and supplied energies. We propose a technique to parameterize EB-PBC
that preserves the system's PDE matching conditions, does not require the
specification of a global desired Hamiltonian, includes performance criteria,
and is robust to extra non-linearities such as control input saturation. The
parameters of the control law are found using actor-critic reinforcement
learning, enabling learning near-optimal control policies satisfying a desired
closed-loop energy landscape. The advantages are that near-optimal controllers
can be generated using standard energy shaping techniques and that the
solutions learned can be interpreted in terms of energy shaping and damping
injection, which makes it possible to numerically assess stability using
passivity theory. From the reinforcement learning perspective, our proposal
allows for the class of port-Hamiltonian systems to be incorporated in the
actor-critic framework, speeding up the learning thanks to the resulting
parameterization of the policy. The method has been successfully applied to the
pendulum swing-up problem in simulations and real-life experiments.
| Olivier Sprangers and Gabriel A. D. Lopes and Robert Babuska | 10.1109/TCYB.2014.2343194 | 1212.5524 | null | null |
Random Spanning Trees and the Prediction of Weighted Graphs | cs.LG stat.ML | We investigate the problem of sequentially predicting the binary labels on
the nodes of an arbitrary weighted graph. We show that, under a suitable
parametrization of the problem, the optimal number of prediction mistakes can
be characterized (up to logarithmic factors) by the cutsize of a random
spanning tree of the graph. The cutsize is induced by the unknown adversarial
labeling of the graph nodes. In deriving our characterization, we obtain a
simple randomized algorithm achieving in expectation the optimal mistake bound
on any polynomially connected weighted graph. Our algorithm draws a random
spanning tree of the original graph and then predicts the nodes of this tree in
constant expected amortized time and linear space. Experiments on real-world
datasets show that our method compares well to both global (Perceptron) and
local (label propagation) methods, while being generally faster in practice.
| Nicolo' Cesa-Bianchi, Claudio Gentile, Fabio Vitale, Giovanni Zappella | null | 1212.5637 | null | null |
ADADELTA: An Adaptive Learning Rate Method | cs.LG | We present a novel per-dimension learning rate method for gradient descent
called ADADELTA. The method dynamically adapts over time using only first order
information and has minimal computational overhead beyond vanilla stochastic
gradient descent. The method requires no manual tuning of a learning rate and
appears robust to noisy gradient information, different model architecture
choices, various data modalities and selection of hyperparameters. We show
promising results compared to other methods on the MNIST digit classification
task using a single machine and on a large scale voice dataset in a distributed
cluster environment.
| Matthew D. Zeiler | null | 1212.5701 | null | null |
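A minimal sketch of the per-dimension update rule described in the abstract above (accumulate decaying averages of squared gradients and squared updates, and scale each step by their ratio); this is an illustrative reimplementation, not the author's code, and the toy quadratic below is purely for demonstration.

```python
import numpy as np

def adadelta_step(grad, state, rho=0.95, eps=1e-6):
    """One ADADELTA update, following the rule stated in the paper:

        E[g^2]  <- rho*E[g^2]  + (1 - rho)*g^2
        dx       = -RMS(dx_prev)/RMS(g) * g
        E[dx^2] <- rho*E[dx^2] + (1 - rho)*dx^2

    with RMS(z) = sqrt(E[z^2] + eps); no global learning rate is needed.
    """
    state["Eg2"] = rho * state["Eg2"] + (1 - rho) * grad ** 2
    dx = -np.sqrt(state["Edx2"] + eps) / np.sqrt(state["Eg2"] + eps) * grad
    state["Edx2"] = rho * state["Edx2"] + (1 - rho) * dx ** 2
    return dx

# toy demonstration: minimize ||x - 3||^2
x = np.zeros(4)
state = {"Eg2": np.zeros_like(x), "Edx2": np.zeros_like(x)}
for _ in range(2000):
    g = 2.0 * (x - 3.0)            # gradient of the quadratic
    x += adadelta_step(g, state)
print(np.round(x, 2))              # drifts toward the minimizer at [3, 3, 3, 3]
```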
Data complexity measured by principal graphs | cs.LG cs.IT math.IT | How to measure the complexity of a finite set of vectors embedded in a
multidimensional space? This is a non-trivial question which can be approached
in many different ways. Here we suggest a set of data complexity measures using
universal approximators, principal cubic complexes. Principal cubic complexes
generalise the notion of principal manifolds for datasets with non-trivial
topologies. The type of the principal cubic complex is determined by its
dimension and a grammar of elementary graph transformations. The simplest
grammar produces principal trees.
We introduce three natural types of data complexity: 1) geometric (deviation
of the data's approximator from some "idealized" configuration, such as
deviation from harmonicity); 2) structural (how many elements of a principal
graph are needed to approximate the data), and 3) construction complexity (how
many applications of elementary graph transformations are needed to construct
the principal object starting from the simplest one).
We compute these measures for several simulated and real-life data
distributions and show them in the "accuracy-complexity" plots, helping to
optimize the accuracy/complexity ratio. We discuss various issues connected
with measuring data complexity. Software for computing data complexity measures
from principal cubic complexes is provided as well.
| Andrei Zinovyev and Evgeny Mirkes | 10.1016/j.camwa.2012.12.009 | 1212.5841 | null | null |
A short note on the tail bound of Wishart distribution | math.ST cs.LG stat.TH | We study the tail bound of the empirical covariance of the multivariate normal
distribution. Following the work of (Gittens & Tropp, 2011), we provide a tail
bound with a small constant.
| Shenghuo Zhu | null | 1212.5860 | null | null |
Distributed optimization of deeply nested systems | cs.LG cs.NE math.OC stat.ML | In science and engineering, intelligent processing of complex signals such as
images, sound or language is often performed by a parameterized hierarchy of
nonlinear processing layers, sometimes biologically inspired. Hierarchical
systems (or, more generally, nested systems) offer a way to generate complex
mappings using simple stages. Each layer performs a different operation and
achieves an ever more sophisticated representation of the input, as, for
example, in a deep artificial neural network, an object recognition cascade in
computer vision, or speech front-end processing. Joint estimation of the
parameters of all the layers and selection of an optimal architecture is widely
considered to be a difficult numerical nonconvex optimization problem,
difficult to parallelize for execution in a distributed computation
environment, and requiring significant human expert effort, which leads to
suboptimal systems in practice. We describe a general mathematical strategy to
learn the parameters and, to some extent, the architecture of nested systems,
called the method of auxiliary coordinates (MAC). This replaces the original
problem involving a deeply nested function with a constrained problem involving
a different function in an augmented space without nesting. The constrained
problem may be solved with penalty-based methods using alternating optimization
over the parameters and the auxiliary coordinates. MAC has provable
convergence, is easy to implement reusing existing algorithms for single
layers, can be parallelized trivially and massively, applies even when
parameter derivatives are not available or not desirable, and is competitive
with state-of-the-art nonlinear optimizers even in the serial computation
setting, often providing reasonable models within a few iterations.
| Miguel \'A. Carreira-Perpi\~n\'an and Weiran Wang | null | 1212.5921 | null | null |
Fully scalable online-preprocessing algorithm for short oligonucleotide
microarray atlases | q-bio.QM cs.CE cs.LG q-bio.GN stat.AP stat.ML | Accumulation of standardized data collections is opening up novel
opportunities for holistic characterization of genome function. The limited
scalability of current preprocessing techniques has, however, formed a
bottleneck for full utilization of contemporary microarray collections. While
short oligonucleotide arrays constitute a major source of genome-wide profiling
data, scalable probe-level preprocessing algorithms have been available only
for few measurement platforms based on pre-calculated model parameters from
restricted reference training sets. To overcome these key limitations, we
introduce a fully scalable online-learning algorithm that provides tools to
process large microarray atlases including tens of thousands of arrays. Unlike
the alternatives, the proposed algorithm scales up in linear time with respect
to sample size and is readily applicable to all short oligonucleotide
platforms. This is the only available preprocessing algorithm that can learn
probe-level parameters based on sequential hyperparameter updates at small,
consecutive batches of data, thus circumventing the extensive memory
requirements of the standard approaches and opening up novel opportunities to
take full advantage of contemporary microarray data collections. Moreover,
using the most comprehensive data collections to estimate probe-level effects
can assist in pinpointing individual probes affected by various biases and
provide new tools to guide array design and quality control. The implementation
is freely available in R/Bioconductor at
http://www.bioconductor.org/packages/devel/bioc/html/RPA.html
| Leo Lahti, Aurora Torrente, Laura L. Elo, Alvis Brazma, Johan Rung | 10.1093/nar/gkt229 | 1212.5932 | null | null |
Exponentially Weighted Moving Average Charts for Detecting Concept Drift | stat.ML cs.LG stat.AP | Classifying streaming data requires the development of methods which are
computationally efficient and able to cope with changes in the underlying
distribution of the stream, a phenomenon known in the literature as concept
drift. We propose a new method for detecting concept drift which uses an
Exponentially Weighted Moving Average (EWMA) chart to monitor the
misclassification rate of a streaming classifier. Our approach is modular and
can hence be run in parallel with any underlying classifier to provide an
additional layer of concept drift detection. Moreover our method is
computationally efficient with overhead O(1) and works in a fully online manner
with no need to store data points in memory. Unlike many existing approaches to
concept drift detection, our method allows the rate of false positive
detections to be controlled and kept constant over time.
| Gordon J. Ross, Niall M. Adams, Dimitris K. Tasoulis, David J. Hand | 10.1016/j.patrec.2011.08.019 | 1212.6018 | null | null |
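A minimal sketch of the idea above: track an EWMA of the 0/1 error stream and flag drift when it rises well above the estimated baseline error rate. The control-limit constant L and the smoothing parameter below are illustrative assumptions; the paper instead calibrates the limit so that the false-positive rate is controlled.

```python
import numpy as np

class EWMADriftDetector:
    """EWMA chart on a classifier's 0/1 error stream.

    Maintains an exponentially weighted moving average of the
    misclassification indicator and flags drift when it exceeds the running
    error-rate estimate by L asymptotic standard deviations of the EWMA.
    """
    def __init__(self, lam=0.01, L=3.0, warmup=30):
        self.lam, self.L, self.warmup = lam, L, warmup
        self.n, self.mean, self.z = 0, 0.0, 0.0

    def add(self, error):                 # error: 1 for a mistake, 0 otherwise
        self.n += 1
        self.mean += (error - self.mean) / self.n      # running error rate
        self.z = (1 - self.lam) * self.z + self.lam * error
        p = self.mean
        sigma_z = np.sqrt(self.lam / (2 - self.lam) * p * (1 - p))
        return self.n > self.warmup and self.z > p + self.L * sigma_z

# error rate jumps from 5% to 40% at t = 500
rng = np.random.default_rng(0)
stream = np.r_[rng.random(500) < 0.05, rng.random(500) < 0.40].astype(int)
det = EWMADriftDetector()
for t, e in enumerate(stream):
    if det.add(e):
        print("drift flagged at t =", t)   # typically a few dozen steps after 500
        break
```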
Tangent Bundle Manifold Learning via Grassmann&Stiefel Eigenmaps | cs.LG | One of the ultimate goals of Manifold Learning (ML) is to reconstruct an
unknown nonlinear low-dimensional manifold embedded in a high-dimensional
observation space by a given set of data points from the manifold. We derive a
local lower bound for the maximum reconstruction error in a small neighborhood
of an arbitrary point. The lower bound is defined in terms of the distance
between tangent spaces to the original manifold and the estimated manifold at
the considered point and reconstructed point, respectively. We propose an
amplification of the ML, called Tangent Bundle ML, in which the proximity not
only between the original manifold and its estimator but also between their
tangent spaces is required. We present a new algorithm that solves this problem
and also provides a new solution to the original ML problem.
| Alexander V. Bernstein and Alexander P. Kuleshov | null | 1212.6031 | null | null |
Hyperplane Arrangements and Locality-Sensitive Hashing with Lift | cs.LG cs.IR stat.ML | Locality-sensitive hashing converts high-dimensional feature vectors, such as
image and speech, into bit arrays and allows high-speed similarity calculation
with the Hamming distance. There is a hashing scheme that maps feature vectors
to bit arrays depending on the signs of the inner products between feature
vectors and the normal vectors of hyperplanes placed in the feature space. This
hashing can be seen as a discretization of the feature space by hyperplanes. If
labels for data are given, one can determine the hyperplanes by using learning
algorithms. However, many proposed learning methods do not consider the
hyperplanes' offsets. Not doing so decreases the number of partitioned regions,
and the correlation between Hamming distances and Euclidean distances becomes
small. In this paper, we propose a lift map that converts learning algorithms
without offsets into ones that take the offsets into account. With this
method, learning algorithms that ignore offsets yield discretizations of the
space as if the offsets had been taken into account. To evaluate the proposed
method, we applied it to several high-dimensional feature data sets and studied
the relationship between the statistical characteristics of the data, the number
of hyperplanes, and the effect of the proposed method.
| Makiko Konoshima and Yui Noma | null | 1212.6110 | null | null |
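A minimal sketch of hyperplane hashing with offsets, the setting the lift map operates in. The hyperplanes and offsets below are drawn at random rather than learned, so this only illustrates the encoding, not the paper's learning procedure.

```python
import numpy as np

def random_hyperplanes(dim, n_bits, seed=0):
    """Hyperplane normals W and offsets b for sign-based hashing."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((n_bits, dim)), 0.1 * rng.standard_normal(n_bits)

def hash_codes(X, W, b):
    return (X @ W.T + b > 0).astype(np.uint8)   # one bit per (hyperplane, offset)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

# nearby points receive nearby codes under the Hamming distance
rng = np.random.default_rng(1)
x = rng.standard_normal(64)
near = x + 0.05 * rng.standard_normal(64)
far = rng.standard_normal(64)
W, b = random_hyperplanes(64, n_bits=32)
cx, cn, cf = hash_codes(np.vstack([x, near, far]), W, b)
print(hamming(cx, cn), "<", hamming(cx, cf))    # the first is usually much smaller
```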
Transfer Learning Using Logistic Regression in Credit Scoring | cs.LG cs.CE | Credit scoring risk management is a fast-growing field due to consumers'
credit requests. Credit requests, of new and existing customers, are often
evaluated by classical discrimination rules based on customer information.
However, these kinds of strategies have serious limits and do not take into
account the differences in characteristics between current customers and future
ones. The aim of this paper is to measure creditworthiness for non-customer
borrowers and to model potential risk given a heterogeneous population formed
by borrowers who are customers of the bank and others who are not. We build on
previous work on generalized Gaussian discrimination and transpose it to the
logistic model to derive efficient discrimination rules for the non-customer
subpopulation.
Therefore we obtain several simple models of connection between parameters of
both logistic models associated respectively with the two subpopulations. The
German credit data set is used to evaluate and compare these models.
Experimental results show that using the links between the two subpopulations
improves the classification accuracy for new loan applicants.
| Farid Beninel, Waad Bouaguel, Ghazi Belmufti | null | 1212.6167 | null | null |
Gaussian Process Regression with Heteroscedastic or Non-Gaussian
Residuals | stat.ML cs.LG | Gaussian Process (GP) regression models typically assume that residuals are
Gaussian and have the same variance for all observations. However, applications
with input-dependent noise (heteroscedastic residuals) frequently arise in
practice, as do applications in which the residuals do not have a Gaussian
distribution. In this paper, we propose a GP Regression model with a latent
variable that serves as an additional unobserved covariate for the regression.
This model (which we call GPLC) allows for heteroscedasticity since it allows
the function to have a changing partial derivative with respect to this
unobserved covariate. With a suitable covariance function, our GPLC model can
handle (a) Gaussian residuals with input-dependent variance, or (b)
non-Gaussian residuals with input-dependent variance, or (c) Gaussian residuals
with constant variance. We compare our model, using synthetic datasets, with a
model proposed by Goldberg, Williams and Bishop (1998), which we refer to as
GPLV, which only deals with case (a), as well as a standard GP model which can
handle only case (c). Markov Chain Monte Carlo methods are developed for both
models. Experiments show that when the data is heteroscedastic, both GPLC and
GPLV give better results (smaller mean squared error and negative
log-probability density) than standard GP regression. In addition, when the
residuals are Gaussian, our GPLC model is generally nearly as good as GPLV,
while when the residuals are non-Gaussian, our GPLC model is better than GPLV.
| Chunyi Wang and Radford M. Neal | null | 1212.6246 | null | null |
Echo State Queueing Network: a new reservoir computing learning tool | cs.NE cs.AI cs.LG | In the last decade, a new computational paradigm was introduced in the field
of Machine Learning, under the name of Reservoir Computing (RC). RC models are
neural networks with a recurrent part (the reservoir) that does not
participate in the learning process and a readout part in which no
recurrence (no neural circuit) occurs. This approach has grown rapidly due to
its success in solving learning tasks and other computational applications.
Some success was also observed with another recently proposed neural network
designed using Queueing Theory, the Random Neural Network (RandNN). Both
approaches have good properties and identified drawbacks. In this paper, we
propose a new RC model called Echo State Queueing Network (ESQN), where we use
ideas coming from RandNNs for the design of the reservoir. ESQNs are ESNs
whose reservoir has new dynamics inspired by recurrent RandNNs. The
paper positions ESQNs in the broader Machine Learning area and provides
examples of their use and performance. We show on widely used benchmarks that
ESQNs are very accurate tools, and we illustrate how they compare with standard
ESNs.
| Sebasti\'an Basterrech and Gerardo Rubino | 10.1109/CCNC.2013.6488435 | 1212.6276 | null | null |
On-line relational SOM for dissimilarity data | stat.ML cs.LG | In some applications and in order to address real world situations better,
data may be more complex than simple vectors. In some examples, they can be
known through their pairwise dissimilarities only. Several variants of the Self
Organizing Map algorithm were introduced to generalize the original algorithm
to this framework. Whereas median SOM is based on a rough representation of the
prototypes, relational SOM allows representing these prototypes by a virtual
combination of all elements in the data set. However, this latter approach
suffers from two main drawbacks. First, its complexity can be large. Second,
only a batch version of this algorithm has been studied so far and it often
provides results with poor topographic organization. In this article, an
on-line version of relational SOM is described and justified. The algorithm is
tested on several datasets, including categorical data and graphs, and compared
with the batch version and with other SOM algorithms for non-vector data.
| Madalina Olteanu (SAMM), Nathalie Villa-Vialaneix (SAMM), Marie
Cottrell (SAMM) | null | 1212.6316 | null | null |
Focus of Attention for Linear Predictors | stat.ML cs.AI cs.LG | We present a method to stop the evaluation of a prediction process when the
result of the full evaluation is obvious. This trait is highly desirable in
prediction tasks where a predictor evaluates all its features for every example
in large datasets. We observe that some examples are easier to classify than
others, a phenomenon which is characterized by the event when most of the
features agree on the class of an example. By stopping the feature evaluation
when encountering an easy-to-classify example, the predictor can achieve
substantial gains in computation. Our method provides a natural attention
mechanism for linear predictors where the predictor concentrates most of its
computation on hard-to-classify examples and quickly discards easy-to-classify
ones. By modifying a linear prediction algorithm such as an SVM or AdaBoost to
include our attentive method we prove that the average number of features
computed is O(sqrt(n log 1/sqrt(delta))) where n is the original number of
features, and delta is the error rate incurred due to early stopping. We
demonstrate the effectiveness of Attentive Prediction on MNIST, Real-sim,
Gisette, and synthetic datasets.
| Raphael Pelossof and Zhiliang Ying | null | 1212.6659 | null | null |
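A minimal sketch of the early-stopping idea described above: evaluate features one at a time and stop once the partial score is confidently far from zero. The fixed threshold and feature ordering below are illustrative assumptions, not the calibrated stopping rule derived in the paper.

```python
import numpy as np

def attentive_predict(x, w, order, threshold):
    """Evaluate a linear predictor feature by feature, stopping early.

    Features are visited in a fixed order and evaluation stops once the
    partial score's magnitude exceeds a confidence threshold, so examples
    on which most features agree are classified after very few lookups.
    """
    score = 0.0
    for k, j in enumerate(order, start=1):
        score += w[j] * x[j]
        if abs(score) > threshold:           # confident enough: stop early
            return np.sign(score), k
    return np.sign(score), len(order)

rng = np.random.default_rng(0)
n = 200
w = rng.standard_normal(n) / np.sqrt(n)
order = np.argsort(-np.abs(w))               # most informative features first
x_easy = 10.0 * np.sign(w)                   # every feature votes the same way
x_hard = rng.standard_normal(n)              # features disagree
for name, x in [("easy", x_easy), ("hard", x_hard)]:
    label, used = attentive_predict(x, w, order, threshold=2.0)
    print(name, "-> label", label, "using", used, "features")
```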
Maximizing a Nonnegative, Monotone, Submodular Function Constrained to
Matchings | cs.DS cs.AI cs.CC cs.LG stat.ML | Submodular functions have many applications. Matchings have many
applications. The bitext word alignment problem can be modeled as the problem
of maximizing a nonnegative, monotone, submodular function constrained to
matchings in a complete bipartite graph where each vertex corresponds to a word
in the two input sentences and each edge represents a potential word-to-word
translation. We propose a more general problem of maximizing a nonnegative,
monotone, submodular function defined on the edge set of a complete graph
constrained to matchings; we call this problem the CSM-Matching problem.
CSM-Matching also generalizes the maximum-weight matching problem, which has a
polynomial-time algorithm; however, we show that it is NP-hard to approximate
CSM-Matching within a factor of e/(e-1) by reducing the max k-cover problem to
it. Our main result is a simple, greedy, 3-approximation algorithm for
CSM-Matching. Then we reduce CSM-Matching to maximizing a nonnegative,
monotone, submodular function over two matroids, i.e., CSM-2-Matroids.
CSM-2-Matroids has a (2+epsilon)-approximation algorithm - called LSV2. We show
that we can find a (4+epsilon)-approximate solution to CSM-Matching using LSV2.
We extend this approach to similar problems.
| Sagar Kale | null | 1212.6846 | null | null |
Training a Functional Link Neural Network Using an Artificial Bee Colony
for Solving Classification Problems | cs.NE cs.LG | Artificial Neural Networks have emerged as an important tool for
classification and have been widely used to classify a non-linear separable
pattern. The most popular artificial neural networks model is a Multilayer
Perceptron (MLP) as it is able to perform classification task with significant
success. However, the complexity of the MLP structure, together with problems
such as local-minima trapping, overfitting and weight interference, has made
neural network training difficult. Thus, an easy way to avoid these problems is to
remove the hidden layers. This paper presents the ability of Functional Link
Neural Network (FLNN) to overcome the complexity structure of MLP by using
single layer architecture and propose an Artificial Bee Colony (ABC)
optimization for training the FLNN. The proposed technique is expected to
provide a better learning scheme for the classifier and thus more accurate
classification results.
| Yana Mazwin Mohmad Hassim and Rozaida Ghazali | null | 1212.6922 | null | null |
Fast Solutions to Projective Monotone Linear Complementarity Problems | cs.LG math.OC | We present a new interior-point potential-reduction algorithm for solving
monotone linear complementarity problems (LCPs) that have a particular special
structure: their matrix $M\in{\mathbb R}^{n\times n}$ can be decomposed as
$M=\Phi U + \Pi_0$, where the rank of $\Phi$ is $k<n$, and $\Pi_0$ denotes
Euclidean projection onto the nullspace of $\Phi^\top$. We call such LCPs
projective. Our algorithm solves a monotone projective LCP to relative accuracy
$\epsilon$ in $O(\sqrt n \ln(1/\epsilon))$ iterations, with each iteration
requiring $O(nk^2)$ flops. This complexity compares favorably with
interior-point algorithms for general monotone LCPs: these algorithms also
require $O(\sqrt n \ln(1/\epsilon))$ iterations, but each iteration needs to
solve an $n\times n$ system of linear equations, a much higher cost than our
algorithm when $k\ll n$. Our algorithm works even though the solution to a
projective LCP is not restricted to lie in any low-rank subspace.
| Geoffrey J. Gordon | null | 1212.6958 | null | null |
Bethe Bounds and Approximating the Global Optimum | cs.LG stat.ML | Inference in general Markov random fields (MRFs) is NP-hard, though
identifying the maximum a posteriori (MAP) configuration of pairwise MRFs with
submodular cost functions is efficiently solvable using graph cuts. Marginal
inference, however, even for this restricted class, is in #P. We prove new
formulations of derivatives of the Bethe free energy, provide bounds on the
derivatives and bracket the locations of stationary points, introducing a new
technique called Bethe bound propagation. Several results apply to pairwise
models whether associative or not. Applying these to discretized
pseudo-marginals in the associative case we present a polynomial time
approximation scheme for global optimization provided the maximum degree is
$O(\log n)$, and discuss several extensions.
| Adrian Weller and Tony Jebara | null | 1301.0015 | null | null |
On Distributed Online Classification in the Midst of Concept Drifts | math.OC cs.DC cs.LG cs.SI physics.soc-ph | In this work, we analyze the generalization ability of distributed online
learning algorithms under stationary and non-stationary environments. We derive
bounds for the excess-risk attained by each node in a connected network of
learners and study the performance advantage that diffusion strategies have
over individual non-cooperative processing. We conduct extensive simulations to
illustrate the results.
| Zaid J. Towfic, Jianshu Chen, Ali H. Sayed | 10.1016/j.neucom.2012.12.043 | 1301.0047 | null | null |
CloudSVM : Training an SVM Classifier in Cloud Computing Systems | cs.LG cs.DC | In conventional methods, distributed support vector machine (SVM) algorithms
are trained over pre-configured intranet/internet environments to find an
optimal classifier. These methods are very complicated and costly for large
datasets. Hence, we propose a method, referred to as the Cloud SVM training
mechanism (CloudSVM), that operates in a cloud computing environment with the
MapReduce technique for distributed machine learning applications. Accordingly,
(i) the SVM algorithm is trained on distributed cloud storage servers that work
concurrently; (ii) the support vectors from every trained cloud node are
merged; and (iii) these two steps are iterated until the SVM converges to the
optimal classifier function. Large-scale data sets cannot be trained with the
SVM algorithm on a single computer. The results of this study are important for
the training of large-scale data sets for machine learning applications. We
show that iterative training of the split data set in a cloud computing
environment using SVM converges to a global optimal classifier in a finite
number of iterations.
| F. Ozgur Catak and M. Erdal Balaban | null | 1301.0082 | null | null |
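A minimal single-process sketch of the iterate-and-merge scheme described above, using scikit-learn's SVC; the per-node fits would run as MapReduce tasks in the actual CloudSVM setting, and the partitioning and round counts below are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def distributed_svm(X, y, n_nodes=4, n_rounds=5, seed=0):
    """Iterative support-vector merging across data partitions.

    Each "node" trains an SVM on its partition plus the current pool of
    global support vectors; the resulting support vectors are merged and the
    process repeats. Here the per-node fits run in a plain loop instead of
    MapReduce tasks.
    """
    rng = np.random.default_rng(seed)
    parts = np.array_split(rng.permutation(len(y)), n_nodes)
    sv_X = np.empty((0, X.shape[1]))
    sv_y = np.empty(0, dtype=y.dtype)
    for _ in range(n_rounds):
        new_X, new_y = [], []
        for idx in parts:                                  # "map": one fit per node
            Xi = np.vstack([X[idx], sv_X])
            yi = np.concatenate([y[idx], sv_y])
            clf = SVC(kernel="linear").fit(Xi, yi)
            new_X.append(Xi[clf.support_])
            new_y.append(yi[clf.support_])
        sv_X, sv_y = np.vstack(new_X), np.concatenate(new_y)   # "reduce": merge SVs
    return SVC(kernel="linear").fit(sv_X, sv_y)

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
print("training accuracy:", distributed_svm(X, y).score(X, y))
```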
Policy Evaluation with Variance Related Risk Criteria in Markov Decision
Processes | cs.LG stat.ML | In this paper we extend temporal difference policy evaluation algorithms to
performance criteria that include the variance of the cumulative reward. Such
criteria are useful for risk management, and are important in domains such as
finance and process control. We propose both TD(0) and LSTD(lambda) variants
with linear function approximation, prove their convergence, and demonstrate
their utility in a 4-dimensional continuous state space problem.
| Aviv Tamar, Dotan Di Castro, Shie Mannor | null | 1301.0104 | null | null |
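For context, a minimal sketch of standard TD(0) policy evaluation with linear function approximation, the first-moment baseline that the paper above extends; the variance-tracking second update proposed in the paper is not implemented here.

```python
import numpy as np

def td0_linear(episodes, n_features, gamma=0.99, alpha=0.05):
    """TD(0) policy evaluation with linear function approximation.

    Each episode is a list of (phi, reward, phi_next) transitions, with
    phi_next = None at termination; w parameterizes V(s) ~= w . phi(s).
    """
    w = np.zeros(n_features)
    for episode in episodes:
        for phi, r, phi_next in episode:
            v_next = 0.0 if phi_next is None else w @ phi_next
            delta = r + gamma * v_next - w @ phi        # TD error
            w += alpha * delta * phi
    return w

# two-state chain: s0 -> s1 -> terminal, with rewards 1 then 2
phi0, phi1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
episodes = [[(phi0, 1.0, phi1), (phi1, 2.0, None)]] * 500
print(np.round(td0_linear(episodes, n_features=2), 2))   # about [1 + 0.99*2, 2]
```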
Semi-Supervised Domain Adaptation with Non-Parametric Copulas | stat.ML cs.LG | A new framework based on the theory of copulas is proposed to address
semi-supervised domain adaptation problems. The presented method factorizes any
multivariate density into a product of marginal distributions and bivariate
copula functions. Therefore, changes in each of these factors can be detected
and corrected to adapt a density model across different learning domains.
Importantly, we introduce a novel vine copula model, which allows for this
factorization in a non-parametric manner. Experimental results on regression
problems with real-world data illustrate the efficacy of the proposed approach
when compared to state-of-the-art techniques.
| David Lopez-Paz, Jos\'e Miguel Hern\'andez-Lobato, Bernhard
Sch\"olkopf | null | 1301.0142 | null | null |
A Novel Design Specification Distance (DSD) Based K-Mean Clustering
Performance Evaluation on Engineering Materials Database | cs.LG | Organizing data into semantically meaningful groups is one of the fundamental
modes of understanding and learning. Cluster analysis is the formal study of
methods for such understanding and of algorithms for learning. The K-means
clustering algorithm is one of the most fundamental and simplest clustering
algorithms. When there is no prior knowledge about the distribution of data
sets, K-means is the first choice for clustering with an initial number of
clusters. In this paper a novel distance metric called the Design Specification
(DS) distance measure function is integrated with the K-means clustering
algorithm to improve cluster accuracy. The K-means algorithm with the proposed
distance measure maximizes the cluster accuracy to 99.98% at P = 1.525, which
is determined through an iterative procedure. The performance of the Design
Specification (DS) distance measure function with the K-means algorithm is
compared with the performance of other standard distance functions such as the
Euclidean, squared Euclidean, City Block, and Chebyshev similarity measures
deployed with the K-means algorithm. The proposed method is evaluated on the
engineering materials database. The experiments on cluster analysis and outlier
profiling show that there is an excellent improvement in the performance of the
proposed method.
| Doreswamy, K. S. Hemanth | null | 1301.0179 | null | null |
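A minimal sketch of K-means with a pluggable point-to-centroid distance, which is where a measure such as the Design Specification (DS) distance would be substituted; since the abstract does not give the DS formula, plain Euclidean distance is used below as a stand-in.

```python
import numpy as np

def kmeans_custom(X, k, distance, n_iter=100, seed=0):
    """Lloyd-style K-means with a pluggable point-to-centroid distance."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        d = np.array([[distance(x, c) for c in centroids] for x in X])
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Euclidean distance is a placeholder where the DS distance would be plugged in
euclidean = lambda x, c: np.linalg.norm(x - c)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, size=(50, 2)) for m in (0.0, 3.0, 6.0)])
labels, centroids = kmeans_custom(X, k=3, distance=euclidean)
print(np.round(centroids, 2))    # cluster centers found under the placeholder metric
```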
Follow the Leader If You Can, Hedge If You Must | cs.LG stat.ML | Follow-the-Leader (FTL) is an intuitive sequential prediction strategy that
guarantees constant regret in the stochastic setting, but has terrible
performance for worst-case data. Other hedging strategies have better
worst-case guarantees but may perform much worse than FTL if the data are not
maximally adversarial. We introduce the FlipFlop algorithm, which is the first
method that provably combines the best of both worlds.
As part of our construction, we develop AdaHedge, which is a new way of
dynamically tuning the learning rate in Hedge without using the doubling trick.
AdaHedge refines a method by Cesa-Bianchi, Mansour and Stoltz (2007), yielding
slightly improved worst-case guarantees. By interleaving AdaHedge and FTL, the
FlipFlop algorithm achieves regret within a constant factor of the FTL regret,
without sacrificing AdaHedge's worst-case guarantees.
AdaHedge and FlipFlop do not need to know the range of the losses in advance;
moreover, unlike earlier methods, both have the intuitive property that the
issued weights are invariant under rescaling and translation of the losses. The
losses are also allowed to be negative, in which case they may be interpreted
as gains.
| Steven de Rooij, Tim van Erven, Peter D. Gr\"unwald, Wouter M. Koolen | null | 1301.0534 | null | null |
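For context, a minimal sketch of the basic exponential-weights (Hedge) forecaster with a fixed learning rate; AdaHedge and FlipFlop, described above, differ precisely in how the learning rate is tuned and interleaved with Follow-the-Leader, and those refinements are not reproduced here.

```python
import numpy as np

def hedge(loss_matrix, eta):
    """Exponential-weights (Hedge) forecaster over a fixed pool of experts.

    Uses a fixed learning rate eta; returns the forecaster's cumulative
    (expected) loss and the loss of the best single expert in hindsight.
    """
    T, K = loss_matrix.shape
    L = np.zeros(K)                          # cumulative expert losses
    total = 0.0
    for t in range(T):
        w = np.exp(-eta * (L - L.min()))     # subtract min for numerical stability
        w /= w.sum()
        total += float(w @ loss_matrix[t])
        L += loss_matrix[t]
    return total, float(L.min())

rng = np.random.default_rng(0)
losses = np.clip(rng.random((1000, 10)) - 0.2 * (np.arange(10) == 3), 0.0, 1.0)
alg_loss, best_loss = hedge(losses, eta=0.1)
print("regret:", round(alg_loss - best_loss, 2))   # small relative to T = 1000
```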
Learning Hierarchical Object Maps Of Non-Stationary Environments with
mobile robots | cs.LG cs.RO stat.ML | Building models, or maps, of robot environments is a highly active research
area; however, most existing techniques construct unstructured maps and assume
static environments. In this paper, we present an algorithm for learning object
models of non-stationary objects found in office-type environments. Our
algorithm exploits the fact that many objects found in office environments look
alike (e.g., chairs, recycling bins). It does so through a two-level
hierarchical representation, which links individual objects with generic shape
templates of object classes. We derive an approximate EM algorithm for learning
shape parameters at both levels of the hierarchy, using local occupancy grid
maps for representing shape. Additionally, we develop a Bayesian model
selection algorithm that enables the robot to estimate the total number of
objects and object templates in the environment. Experimental results using a
real robot equipped with a laser range finder indicate that our approach
performs well at learning object-based maps of simple office environments. The
approach outperforms a previously developed non-hierarchical algorithm that
models objects but lacks class templates.
| Dragomir Anguelov, Rahul Biswas, Daphne Koller, Benson Limketkai,
Sebastian Thrun | null | 1301.0551 | null | null |
Tree-dependent Component Analysis | cs.LG stat.ML | We present a generalization of independent component analysis (ICA), where
instead of looking for a linear transform that makes the data components
independent, we look for a transform that makes the data components well fit by
a tree-structured graphical model. Treating the problem as a semiparametric
statistical problem, we show that the optimal transform is found by minimizing
a contrast function based on mutual information, a function that directly
extends the contrast function used for classical ICA. We provide two
approximations of this contrast function, one using kernel density estimation,
and another using kernel generalized variance. This tree-dependent component
analysis framework leads naturally to an efficient general multivariate density
estimation technique where only bivariate density estimation needs to be
performed.
| Francis R. Bach, Michael I. Jordan | null | 1301.0554 | null | null |
Learning with Scope, with Application to Information Extraction and
Classification | cs.LG cs.IR stat.ML | In probabilistic approaches to classification and information extraction, one
typically builds a statistical model of words under the assumption that future
data will exhibit the same regularities as the training data. In many data
sets, however, there are scope-limited features whose predictive power is only
applicable to a certain subset of the data. For example, in information
extraction from web pages, word formatting may be indicative of extraction
category in different ways on different web pages. The difficulty with using
such features is capturing and exploiting the new regularities encountered in
previously unseen data. In this paper, we propose a hierarchical probabilistic
model that uses both local/scope-limited features, such as word formatting, and
global features, such as word content. The local regularities are modeled as an
unobserved random parameter which is drawn once for each local data set. This
random parameter is estimated during the inference process and then used to
perform classification with both the local and global features--- a procedure
which is akin to automatically retuning the classifier to the local
regularities on each newly encountered web page. Exact inference is intractable
and we present approximations via point estimates and variational methods.
Empirical results on large collections of web data demonstrate that this method
significantly improves performance from traditional models of global features
alone.
| David Blei, J Andrew Bagnell, Andrew McCallum | null | 1301.0556 | null | null |
Continuation Methods for Mixing Heterogenous Sources | cs.LG stat.ML | A number of modern learning tasks involve estimation from heterogeneous
information sources. This includes classification with labeled and unlabeled
data as well as other problems with analogous structure such as competitive
(game theoretic) problems. The associated estimation problems can be typically
reduced to solving a set of fixed point equations (consistency conditions). We
introduce a general method for combining a preferred information source with
another in this setting by evolving continuous paths of fixed points at
intermediate allocations. We explicitly identify critical points along the
unique paths to either increase the stability of estimation or to ensure a
significant departure from the initial source. The homotopy continuation
approach is guaranteed to terminate at the second source, and involves no
combinatorial effort. We illustrate the power of these ideas both in
classification tasks with labeled and unlabeled data, as well as in the context
of a competitive (min-max) formulation of DNA sequence motif discovery.
| Adrian Corduneanu, Tommi S. Jaakkola | null | 1301.0562 | null | null |
Interpolating Conditional Density Trees | cs.LG cs.AI stat.ML | Joint distributions over many variables are frequently modeled by decomposing
them into products of simpler, lower-dimensional conditional distributions,
such as in sparsely connected Bayesian networks. However, automatically
learning such models can be very computationally expensive when there are many
datapoints and many continuous variables with complex nonlinear relationships,
particularly when no good ways of decomposing the joint distribution are known
a priori. In such situations, previous research has generally focused on the
use of discretization techniques in which each continuous variable has a single
discretization that is used throughout the entire network. In this paper, we
present and compare a wide variety of tree-based algorithms for learning and
evaluating conditional density estimates over continuous variables. These trees
can be thought of as discretizations that vary according to the particular
interactions being modeled; however, the density within a given leaf of the
tree need not be assumed constant, and we show that such nonuniform leaf
densities lead to more accurate density estimation. We have developed Bayesian
network structure-learning algorithms that employ these tree-based conditional
density representations, and we show that they can be used to practically learn
complex joint probability models over dozens of continuous variables from
thousands of datapoints. We focus on finding models that are simultaneously
accurate, fast to learn, and fast to evaluate once they are learned.
| Scott Davies, Andrew Moore | null | 1301.0563 | null | null |
An Information-Theoretic External Cluster-Validity Measure | cs.LG stat.ML | In this paper we propose a measure of clustering quality or accuracy that is
appropriate in situations where it is desirable to evaluate a clustering
algorithm by somehow comparing the clusters it produces with ``ground truth''
consisting of classes assigned to the patterns by manual means or some other
means in whose veracity there is confidence. Such measures are referred to as
``external''. Our measure also has the characteristic of allowing clusterings
with different numbers of clusters to be compared in a quantitative and
principled way. Our evaluation scheme quantitatively measures how useful the
cluster labels of the patterns are as predictors of their class labels. In
cases where all clusterings to be compared have the same number of clusters,
the measure is equivalent to the mutual information between the cluster labels
and the class labels. In cases where the numbers of clusters are different,
however, it computes the reduction in the number of bits that would be required
to encode (compress) the class labels if both the encoder and decoder have free
access to the cluster labels. To achieve this encoding, the estimated
conditional probabilities of the class labels given the cluster labels must
also be encoded. These estimated probabilities can be seen as a model for the
class labels and their associated code length as a model cost.
| Byron E Dom | null | 1301.0565 | null | null |
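A minimal sketch of the equal-cluster-count special case described above, where the measure reduces to the mutual information between cluster labels and class labels; the toy labels and the use of scikit-learn are illustrative assumptions, and the general code-length form is not shown.

```python
# Equal-cluster-count special case: the validity measure reduces to the mutual
# information between cluster labels and class labels (converted to bits here).
import numpy as np
from sklearn.metrics import mutual_info_score

class_labels   = np.array([0, 0, 1, 1, 2, 2, 2, 0])   # classes assigned by "ground truth"
cluster_labels = np.array([1, 1, 0, 0, 2, 2, 0, 1])   # labels produced by some clustering

mi_bits = mutual_info_score(class_labels, cluster_labels) / np.log(2)
print(f"I(cluster; class) = {mi_bits:.3f} bits")
```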
The Thing That We Tried Didn't Work Very Well: Deictic Representation
in Reinforcement Learning | cs.LG cs.AI | Most reinforcement learning methods operate on propositional representations
of the world state. Such representations are often intractably large and
generalize poorly. Using a deictic representation is believed to be a viable
alternative: they promise generalization while allowing the use of existing
reinforcement-learning methods. Yet, there are few experiments on learning with
deictic representations reported in the literature. In this paper we explore
the effectiveness of two forms of deictic representation and a na\"{i}ve
propositional representation in a simple blocks-world domain. We find,
empirically, that the deictic representations actually worsen learning
performance. We conclude with a discussion of possible causes of these results
and strategies for more effective learning in domains with objects.
| Sarah Finney, Natalia Gardiol, Leslie Pack Kaelbling, Tim Oates | null | 1301.0567 | null | null |
Dimension Correction for Hierarchical Latent Class Models | cs.LG stat.ML | Model complexity is an important factor to consider when selecting among
graphical models. When all variables are observed, the complexity of a model
can be measured by its standard dimension, i.e. the number of independent
parameters. When hidden variables are present, however, standard dimension
might no longer be appropriate. One should instead use effective dimension
(Geiger et al. 1996). This paper is concerned with the computation of effective
dimension. First we present an upper bound on the effective dimension of a
latent class (LC) model. This bound is tight and its computation is easy. We
then consider a generalization of LC models called hierarchical latent class
(HLC) models (Zhang 2002). We show that the effective dimension of an HLC model
can be obtained from the effective dimensions of some related LC models. We
also demonstrate empirically that using effective dimension in place of
standard dimension improves the quality of models learned from data.
| Tomas Kocka, Nevin Lianwen Zhang | null | 1301.0578 | null | null |
Almost-everywhere algorithmic stability and generalization error | cs.LG stat.ML | We explore in some detail the notion of algorithmic stability as a viable
framework for analyzing the generalization error of learning algorithms. We
introduce the new notion of training stability of a learning algorithm and show
that, in a general setting, it is sufficient for good bounds on generalization
error. In the PAC setting, training stability is both necessary and sufficient
for learnability. The approach based on training stability makes no reference
to VC dimension or VC entropy. There is no need to prove uniform convergence,
and generalization error is bounded directly via an extended McDiarmid
inequality. As a result it potentially allows us to deal with a broader class
of learning algorithms than Empirical Risk Minimization. We also explore the
relationships among VC dimension, generalization error, and various notions of
stability. Several examples of learning algorithms are considered.
| Samuel Kutin, Partha Niyogi | null | 1301.0579 | null | null |
Decayed MCMC Filtering | cs.AI cs.LG cs.SY | Filtering---estimating the state of a partially observable Markov process
from a sequence of observations---is one of the most widely studied problems in
control theory, AI, and computational statistics. Exact computation of the
posterior distribution is generally intractable for large discrete systems and
for nonlinear continuous systems, so a good deal of effort has gone into
developing robust approximation algorithms. This paper describes a simple
stochastic approximation algorithm for filtering called \emph{decayed MCMC}. The
algorithm applies Markov chain Monte Carlo sampling to the space of state
trajectories using a proposal distribution that favours flips of more recent
state variables. The formal analysis of the algorithm involves a generalization
of standard coupling arguments for MCMC convergence. We prove that for any
ergodic underlying Markov process, the convergence time of decayed MCMC with
inverse-polynomial decay remains bounded as the length of the observation
sequence grows. We show experimentally that decayed MCMC is at least
competitive with other approximation algorithms such as particle filtering.
| Bhaskara Marthi, Hanna Pasula, Stuart Russell, Yuval Peres | null | 1301.0584 | null | null |
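A hedged sketch of the proposal's key ingredient: when updating a trajectory of length T, the time index to resample is drawn with probability decaying polynomially in its age, so recent state variables are revisited most often. The decay exponent is an illustrative choice, and the model-specific resampling step is omitted.

```python
# Draw the index of the state variable to update, favouring recent time steps
# with an inverse-polynomial decay (exponent chosen for illustration only).
import numpy as np

rng = np.random.default_rng(0)

def sample_update_index(T, alpha=1.5):
    ages = np.arange(T, 0, -1)                 # age 1 = most recent time step
    probs = ages ** (-alpha)
    return rng.choice(T, p=probs / probs.sum())

counts = np.bincount([sample_update_index(50) for _ in range(10000)], minlength=50)
print(counts[:5], counts[-5:])                 # recent indices are proposed far more often
```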
Staged Mixture Modelling and Boosting | cs.LG stat.ML | In this paper, we introduce and evaluate a data-driven staged mixture
modeling technique for building density, regression, and classification models.
Our basic approach is to sequentially add components to a finite mixture model
using the structural expectation maximization (SEM) algorithm. We show that our
technique is qualitatively similar to boosting. This correspondence is a
natural byproduct of the fact that we use the SEM algorithm to sequentially fit
the mixture model. Finally, in our experimental evaluation, we demonstrate the
effectiveness of our approach on a variety of prediction and density estimation
tasks using real-world data.
| Christopher Meek, Bo Thiesson, David Heckerman | null | 1301.0586 | null | null |
Optimal Time Bounds for Approximate Clustering | cs.DS cs.LG stat.ML | Clustering is a fundamental problem in unsupervised learning, and has been
studied widely both as a problem of learning mixture models and as an
optimization problem. In this paper, we study clustering with respect to the
\emph{k-median} objective function, a natural formulation of clustering in which
we attempt to minimize the average distance to cluster centers. One of the main
contributions of this paper is a simple but powerful sampling technique that we
call \emph{successive sampling} that could be of independent interest. We show
that our sampling procedure can rapidly identify a small set of points (of size
just O(k log(n/k))) that summarize the input points for the purpose of
clustering. Using successive sampling, we develop an algorithm for the k-median
problem that runs in O(nk) time for a wide range of values of k and is
guaranteed, with high probability, to return a solution with cost at most a
constant factor times optimal. We also establish a lower bound of Omega(nk) on
any randomized constant-factor approximation algorithm for the k-median problem
that succeeds with even a negligible (say 1/100) probability. Thus we establish
a tight time bound of Theta(nk) for the k-median problem for a wide range of
values of k. The best previous upper bound for the problem was O(nk), where the
O-notation hides polylogarithmic factors in n and k. The best previous lower
bound of Omega(nk) applied only to deterministic k-median algorithms. While we
focus our presentation on the k-median objective, all our upper bounds are
valid for the k-means objective as well. In this context our algorithm compares
favorably to the widely used k-means heuristic, which requires O(nk) time for
just one iteration and provides no useful approximation guarantees.
| Ramgopal Mettu, Greg Plaxton | null | 1301.0587 | null | null |
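A rough, heavily hedged rendering of the successive-sampling idea sketched in the abstract: repeatedly draw a small random sample, discard the half of the remaining points that the sample already covers well, and return the union of samples as a summary for clustering. Sample sizes and the discard fraction are illustrative choices, not the authors' constants or guarantees.

```python
import numpy as np

def successive_sample(points, k, rng=np.random.default_rng(0)):
    """Return a small summary set of points via successive sampling (sketch)."""
    n = len(points)
    sample_size = max(1, int(k * np.log(max(n / k, 2))))    # roughly O(k log(n/k))
    remaining, summary = points, []
    while len(remaining) > 2 * sample_size:
        idx = rng.choice(len(remaining), size=sample_size, replace=False)
        sample = remaining[idx]
        summary.append(sample)
        # distance from each remaining point to its nearest sampled point
        d = np.min(np.linalg.norm(remaining[:, None] - sample[None], axis=2), axis=1)
        remaining = remaining[d > np.median(d)]              # drop the better-covered half
    summary.append(remaining)
    return np.vstack(summary)

pts = np.random.default_rng(1).normal(size=(2000, 2))
print(successive_sample(pts, k=5).shape)                     # summary is much smaller than n
```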
Expectation-Propagation for the Generative Aspect Model | cs.LG cs.IR stat.ML | The generative aspect model is an extension of the multinomial model for text
that allows word probabilities to vary stochastically across documents.
Previous results with aspect models have been promising, but hindered by the
computational difficulty of carrying out inference and learning. This paper
demonstrates that the simple variational methods of Blei et al (2001) can lead
to inaccurate inferences and biased learning for the generative aspect model.
We develop an alternative approach that leads to higher accuracy at comparable
cost. An extension of Expectation-Propagation is used for inference and then
embedded in an EM algorithm for learning. Experimental results are presented
for both synthetic and real data sets.
| Thomas P. Minka, John Lafferty | null | 1301.0588 | null | null |
Bayesian Network Classifiers in a High Dimensional Framework | cs.LG stat.ML | We present a growing dimension asymptotic formalism. The perspective in this
paper is classification theory and we show that it can accommodate
probabilistic networks classifiers, including naive Bayes model and its
augmented version. When represented as a Bayesian network these classifiers
have an important advantage: The corresponding discriminant function turns out
to be a specialized case of a generalized additive model, which makes it
possible to get closed form expressions for the asymptotic misclassification
probabilities used here as a measure of classification accuracy. Moreover, in
this paper we propose a new quantity for assessing the discriminative power of
a set of features which is then used to elaborate the augmented naive Bayes
classifier. The result is a weighted form of the augmented naive Bayes that
distributes weights among the sets of features according to their
discriminative power. We derive the asymptotic distribution of the sample based
discriminative power and show that it is seriously overestimated in a high
dimensional case. We then apply this result to find the optimal, in a sense of
minimum misclassification probability, type of weighting.
| Tatjana Pavlenko, Dietrich von Rosen | null | 1301.0593 | null | null |
Asymptotic Model Selection for Naive Bayesian Networks | cs.AI cs.LG | We develop a closed form asymptotic formula to compute the marginal
likelihood of data given a naive Bayesian network model with two hidden states
and binary features. This formula deviates from the standard BIC score. Our
work provides a concrete example that the BIC score is generally not valid for
statistical models that belong to a stratified exponential family. This stands
in contrast to linear and curved exponential families, where the BIC score has
been proven to provide a correct approximation for the marginal likelihood.
| Dmitry Rusakov, Dan Geiger | null | 1301.0598 | null | null |
Advances in Boosting (Invited Talk) | cs.LG stat.ML | Boosting is a general method of generating many simple classification rules
and combining them into a single, highly accurate rule. In this talk, I will
review the AdaBoost boosting algorithm and some of its underlying theory, and
then look at how this theory has helped us to face some of the challenges of
applying AdaBoost in two domains: In the first of these, we used boosting for
predicting and modeling the uncertainty of prices in complicated, interacting
auctions. The second application was to the classification of caller utterances
in a telephone spoken-dialogue system where we faced two challenges: the need
to incorporate prior knowledge to compensate for initially insufficient data;
and a later need to filter the large stream of unlabeled examples being
collected to select the ones whose labels are likely to be most informative.
| Robert E. Schapire | null | 1301.0599 | null | null |
An MDP-based Recommender System | cs.LG cs.AI cs.IR | Typical Recommender systems adopt a static view of the recommendation process
and treat it as a prediction problem. We argue that it is more appropriate to
view the problem of generating recommendations as a sequential decision problem
and, consequently, that Markov decision processes (MDP) provide a more
appropriate model for Recommender systems. MDPs introduce two benefits: they
take into account the long-term effects of each recommendation, and they take
into account the expected value of each recommendation. To succeed in practice,
an MDP-based Recommender system must employ a strong initial model; and the
bulk of this paper is concerned with the generation of such a model. In
particular, we suggest the use of an n-gram predictive model for generating the
initial MDP. Our n-gram model induces a Markov-chain model of user behavior
whose predictive accuracy is greater than that of existing predictive models.
We describe our predictive model in detail and evaluate its performance on real
data. In addition, we show how the model can be used in an MDP-based
Recommender system.
| Guy Shani, Ronen I. Brafman, David Heckerman | null | 1301.0600 | null | null |
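A minimal sketch of the predictive component: a Markov chain over items estimated from user sequences, used to propose the most probable next item. This is a plain bigram model with add-one smoothing; the paper's n-gram model and its MDP wrapper use richer smoothing and sequential decision making.

```python
from collections import defaultdict

sequences = [["a", "b", "c"], ["a", "c"], ["b", "c", "a"]]   # toy user histories
items = sorted({i for s in sequences for i in s})

counts = defaultdict(lambda: defaultdict(int))
for seq in sequences:
    for prev, nxt in zip(seq, seq[1:]):
        counts[prev][nxt] += 1                               # bigram transition counts

def next_item_probs(prev, alpha=1.0):
    total = sum(counts[prev].values()) + alpha * len(items)  # add-one smoothing
    return {i: (counts[prev][i] + alpha) / total for i in items}

print(max(next_item_probs("a").items(), key=lambda kv: kv[1]))  # recommended next item
```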
Reinforcement Learning with Partially Known World Dynamics | cs.LG stat.ML | Reinforcement learning would enjoy better success on real-world problems if
domain knowledge could be imparted to the algorithm by the modelers. Most
problems have both hidden state and unknown dynamics. Partially observable
Markov decision processes (POMDPs) allow for the modeling of both.
Unfortunately, they do not provide a natural framework in which to specify
knowledge about the domain dynamics. The designer must either admit to knowing
nothing about the dynamics or completely specify the dynamics (thereby turning
it into a planning problem). We propose a new framework called a partially
known Markov decision process (PKMDP) which allows the designer to specify
known dynamics while still leaving portions of the environment's dynamics
unknown. The model represents not only the environment dynamics but also the
agent's knowledge of the dynamics. We present a reinforcement learning algorithm
for this model based on importance sampling. The algorithm incorporates
planning based on the known dynamics and learning about the unknown dynamics.
Our results clearly demonstrate the ability to add domain knowledge and the
resulting benefits for learning.
| Christian R. Shelton | null | 1301.0601 | null | null |
Unsupervised Active Learning in Large Domains | cs.LG stat.ML | Active learning is a powerful approach to analyzing data effectively. We show
that the feasibility of active learning depends crucially on the choice of
measure with respect to which the query is being optimized. The standard
information gain, for example, does not permit an accurate evaluation with a
small committee, a representative subset of the model space. We propose a
surrogate measure requiring only a small committee and discuss the properties
of this new measure. We devise, in addition, a bootstrap approach for committee
selection. The advantages of this approach are illustrated in the context of
recovering (regulatory) network models.
| Harald Steck, Tommi S. Jaakkola | null | 1301.0602 | null | null |
Discriminative Probabilistic Models for Relational Data | cs.LG cs.AI stat.ML | In many supervised learning tasks, the entities to be labeled are related to
each other in complex ways and their labels are not independent. For example,
in hypertext classification, the labels of linked pages are highly correlated.
A standard approach is to classify each entity independently, ignoring the
correlations between them. Recently, Probabilistic Relational Models, a
relational version of Bayesian networks, were used to define a joint
probabilistic model for a collection of related entities. In this paper, we
present an alternative framework that builds on (conditional) Markov networks
and addresses two limitations of the previous approach. First, undirected
models do not impose the acyclicity constraint that hinders representation of
many important relational dependencies in directed models. Second, undirected
models are well suited for discriminative training, where we optimize the
conditional likelihood of the labels given the features, which generally
improves classification accuracy. We show how to train these models
effectively, and how to use approximate probabilistic inference over the
learned model for collective classification of multiple related entities. We
provide experimental results on a webpage classification task, showing that
accuracy can be significantly improved by modeling relational dependencies.
| Ben Taskar, Pieter Abbeel, Daphne Koller | null | 1301.0604 | null | null |
A New Class of Upper Bounds on the Log Partition Function | cs.LG stat.ML | Bounds on the log partition function are important in a variety of contexts,
including approximate inference, model fitting, decision theory, and large
deviations analysis. We introduce a new class of upper bounds on the log
partition function, based on convex combinations of distributions in the
exponential domain, that is applicable to an arbitrary undirected graphical
model. In the special case of convex combinations of tree-structured
distributions, we obtain a family of variational problems, similar to the Bethe
free energy, but distinguished by the following desirable properties: i. they
are convex, and have a unique global minimum; and ii. the global minimum gives
an upper bound on the log partition function. The global minimum is defined by
stationary conditions very similar to those defining fixed points of belief
propagation or tree-based reparameterization (Wainwright et al., 2001). As with
BP fixed points, the elements of the minimizing argument can be used as
approximations to the marginals of the original model. The analysis described
here can be extended to structures of higher treewidth (e.g., hypertrees),
thereby making connections with more advanced approximations (e.g., Kikuchi and
variants; Yedidia et al., 2001; Minka, 2001).
| Martin Wainwright, Tommi S. Jaakkola, Alan Willsky | null | 1301.0610 | null | null |
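In symbols (notation assumed here rather than quoted from the paper): because the log partition function $A(\theta)$ is convex in the exponential parameters, any distribution $\rho$ over spanning trees with tree-structured parameters $\theta(T)$ satisfying $\sum_T \rho(T)\,\theta(T) = \theta$ gives, by Jensen's inequality,
\[
  A(\theta) \;=\; A\Big(\sum_T \rho(T)\,\theta(T)\Big) \;\le\; \sum_T \rho(T)\, A\big(\theta(T)\big),
\]
where each $A(\theta(T))$ is tractable because it is the log partition function of a tree-structured model.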
IPF for Discrete Chain Factor Graphs | cs.LG cs.AI stat.ML | Iterative Proportional Fitting (IPF), combined with EM, is commonly used as
an algorithm for likelihood maximization in undirected graphical models. In
this paper, we present two iterative algorithms that generalize upon IPF. The
first one is for likelihood maximization in discrete chain factor graphs, which
we define as a wide class of discrete variable models including undirected
graphical models and Bayesian networks, but also chain graphs and sigmoid
belief networks. The second one is for conditional likelihood maximization in
standard undirected models and Bayesian networks. In both algorithms, the
iteration steps are expressed in closed form. Numerical simulations show that
the algorithms are competitive with state of the art methods.
| Wim Wiegerinck, Tom Heskes | null | 1301.0613 | null | null |
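For reference, the classical IPF operation that this work generalizes, run on a toy 2x2 table to match fixed row and column marginals (this is textbook IPF, not the paper's chain-factor algorithm).

```python
import numpy as np

p = np.full((2, 2), 0.25)             # initial joint table
row_target = np.array([0.7, 0.3])     # desired row marginals
col_target = np.array([0.4, 0.6])     # desired column marginals

for _ in range(50):
    p *= (row_target / p.sum(axis=1))[:, None]   # rescale rows
    p *= (col_target / p.sum(axis=0))[None, :]   # rescale columns

print(p.round(4), p.sum(axis=1).round(4), p.sum(axis=0).round(4))
```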
The Sum-over-Forests density index: identifying dense regions in a graph | cs.LG stat.ML | This work introduces a novel nonparametric density index defined on graphs,
the Sum-over-Forests (SoF) density index. It is based on a clear and intuitive
idea: high-density regions in a graph are characterized by the fact that they
contain a large amount of low-cost trees with high outdegrees while low-density
regions contain few ones. Therefore, a Boltzmann probability distribution on
the countable set of forests in the graph is defined so that large (high-cost)
forests occur with a low probability while short (low-cost) forests occur with
a high probability. Then, the SoF density index of a node is defined as the
expected outdegree of this node in a non-trivial tree of the forest, thus
providing a measure of density around that node. Following the matrix-forest
theorem, and a statistical physics framework, it is shown that the SoF density
index can be easily computed in closed form through a simple matrix inversion.
Experiments on artificial and real data sets show that the proposed index
performs well on finding dense regions, for graphs of various origins.
| Mathieu Senelle, Silvia Garcia-Diez, Amin Mantrach, Masashi Shimbo,
Marco Saerens, Fran\c{c}ois Fouss | null | 1301.0725 | null | null |
Borrowing strength in hierarchical Bayes: Posterior concentration of the
Dirichlet base measure | math.ST cs.LG math.PR stat.TH | This paper studies posterior concentration behavior of the base probability
measure of a Dirichlet measure, given observations associated with the sampled
Dirichlet processes, as the number of observations tends to infinity. The base
measure itself is endowed with another Dirichlet prior, a construction known as
the hierarchical Dirichlet processes (Teh et al. [J. Amer. Statist. Assoc. 101
(2006) 1566-1581]). Convergence rates are established in transportation
distances (i.e., Wasserstein metrics) under various conditions on the geometry
of the support of the true base measure. As a consequence of the theory, we
demonstrate the benefit of "borrowing strength" in the inference of multiple
groups of data - a powerful insight often invoked to motivate hierarchical
modeling. In certain settings, the gain in efficiency due to the latent
hierarchy can be dramatic, improving from a standard nonparametric rate to a
parametric rate of convergence. Tools developed include transportation
distances for nonparametric Bayesian hierarchies of random measures, the
existence of tests for Dirichlet measures, and geometric properties of the
support of Dirichlet measures.
| XuanLong Nguyen | 10.3150/15-BEJ703 | 1301.0802 | null | null |
Finding the True Frequent Itemsets | cs.LG cs.DB cs.DS stat.ML | Frequent Itemsets (FIs) mining is a fundamental primitive in data mining. It
requires to identify all itemsets appearing in at least a fraction $\theta$ of
a transactional dataset $\mathcal{D}$. Often though, the ultimate goal of
mining $\mathcal{D}$ is not an analysis of the dataset \emph{per se}, but the
understanding of the underlying process that generated it. Specifically, in
many applications $\mathcal{D}$ is a collection of samples obtained from an
unknown probability distribution $\pi$ on transactions, and by extracting the
FIs in $\mathcal{D}$ one attempts to infer itemsets that are frequently (i.e.,
with probability at least $\theta$) generated by $\pi$, which we call the True
Frequent Itemsets (TFIs). Due to the inherently stochastic nature of the
generative process, the set of FIs is only a rough approximation of the set of
TFIs, as it often contains a huge number of \emph{false positives}, i.e.,
spurious itemsets that are not among the TFIs. In this work we design and
analyze an algorithm to identify a threshold $\hat{\theta}$ such that the
collection of itemsets with frequency at least $\hat{\theta}$ in $\mathcal{D}$
contains only TFIs with probability at least $1-\delta$, for some
user-specified $\delta$. Our method uses results from statistical learning
theory involving the (empirical) VC-dimension of the problem at hand. This
allows us to identify almost all the TFIs without including any false positive.
We also experimentally compare our method with the direct mining of
$\mathcal{D}$ at frequency $\theta$ and with techniques based on widely-used
standard bounds (i.e., the Chernoff bounds) of the binomial distribution, and
show that our algorithm outperforms these methods and achieves even better
results than what is guaranteed by the theoretical analysis.
| Matteo Riondato and Fabio Vandin | null | 1301.1218 | null | null |
Dynamical Models and Tracking Regret in Online Convex Programming | stat.ML cs.LG | This paper describes a new online convex optimization method which
incorporates a family of candidate dynamical models and establishes novel
tracking regret bounds that scale with the comparator's deviation from the best
dynamical model in this family. Previous online optimization methods are
designed to have a total accumulated loss comparable to that of the best
comparator sequence, and existing tracking or shifting regret bounds scale with
the overall variation of the comparator sequence. In many practical scenarios,
however, the environment is nonstationary and comparator sequences with small
variation are quite weak, resulting in large losses. The proposed Dynamic
Mirror Descent method, in contrast, can yield low regret relative to highly
variable comparator sequences by both tracking the best dynamical model and
forming predictions based on that model. This concept is demonstrated
empirically in the context of sequential compressive observations of a dynamic
scene and tracking a dynamic social network.
| Eric C. Hall and Rebecca M. Willett | null | 1301.1254 | null | null |
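A hedged sketch of the Euclidean special case of the idea: take a gradient step on the current loss, then advance the iterate through a candidate dynamical model. The paper's Dynamic Mirror Descent works with general Bregman divergences and a family of models; the dynamics and losses below are hypothetical.

```python
import numpy as np

def dmd_euclidean(grads, Phi, theta0, eta=0.1):
    theta, path = theta0, [theta0]
    for g in grads:
        theta_tilde = theta - eta * g(theta)   # descent step (Euclidean special case)
        theta = Phi(theta_tilde)               # advance through the dynamical model
        path.append(theta)
    return path

Phi = lambda th: 0.9 * th + 0.1                # hypothetical linear dynamics
grads = [lambda th, t=t: 2.0 * (th - np.sin(0.1 * t)) for t in range(100)]  # drifting target
print(dmd_euclidean(grads, Phi, theta0=0.0)[-1])
```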
Automated Variational Inference in Probabilistic Programming | stat.ML cs.AI cs.LG | We present a new algorithm for approximate inference in probabilistic
programs, based on a stochastic gradient for variational programs. This method
is efficient without restrictions on the probabilistic program; it is
particularly practical for distributions which are not analytically tractable,
including highly structured distributions that arise in probabilistic programs.
We show how to automatically derive mean-field probabilistic programs and
optimize them, and demonstrate that our perspective improves inference
efficiency over other algorithms.
| David Wingate, Theophane Weber | null | 1301.1299 | null | null |
Coupled Neural Associative Memories | cs.NE cs.IT cs.LG math.IT | We propose a novel architecture to design a neural associative memory that is
capable of learning a large number of patterns and recalling them later in
presence of noise. It is based on dividing the neurons into local clusters and
parallel plains, very similar to the architecture of the visual cortex of
macaque brain. The common features of our proposed architecture with those of
spatially-coupled codes enable us to show that the performance of such networks
in eliminating noise is drastically better than the previous approaches while
maintaining the ability of learning an exponentially large number of patterns.
Previous work either failed in providing good performance during the recall
phase or in offering large pattern retrieval (storage) capacities. We also
present computational experiments that lend additional support to the
theoretical analysis.
| Amin Karbasi, Amir Hesam Salavati, and Amin Shokrollahi | null | 1301.1555 | null | null |
An Efficient Algorithm for Upper Bound on the Partition Function of
Nucleic Acids | q-bio.BM cs.LG | It has been shown that minimum free energy structure for RNAs and RNA-RNA
interaction is often incorrect due to inaccuracies in the energy parameters and
inherent limitations of the energy model. In contrast, ensemble based
quantities such as melting temperature and equilibrium concentrations can be
more reliably predicted. Even structure prediction by sampling from the
ensemble and clustering those structures by Sfold [7] has proven to be more
reliable than minimum free energy structure prediction. The main obstacle for
ensemble based approaches is the computational complexity of the partition
function and base pairing probabilities. For instance, the space complexity of
the partition function for RNA-RNA interaction is $O(n^4)$ and the time
complexity is $O(n^6)$ which are prohibitively large [4,12]. Our goal in this
paper is to give a fast algorithm, based on sparse folding, to calculate an
upper bound on the partition function. Our work is based on the recent
algorithm of Hazan and Jaakkola [10]. The space complexity of our algorithm is
the same as that of sparse folding algorithms, and the time complexity of our
algorithm is $O(MFE(n)\ell)$ for single RNA and $O(MFE(m, n)\ell)$ for RNA-RNA
interaction in practice, in which $MFE$ is the running time of sparse folding
and $\ell \leq n$ ($\ell \leq n + m$) is a sequence dependent parameter.
| Hamidreza Chitsaz and Elmirasadat Forouzmand and Gholamreza Haffari | null | 1301.1590 | null | null |
The RNA Newton Polytope and Learnability of Energy Parameters | q-bio.BM cs.CE cs.LG | Despite nearly two scores of research on RNA secondary structure and RNA-RNA
interaction prediction, the accuracy of the state-of-the-art algorithms is
still far from satisfactory. Researchers have proposed increasingly complex
energy models and improved parameter estimation methods in anticipation of
endowing their methods with enough power to solve the problem. The output has
disappointingly been only modest improvements, not matching the expectations.
Even recent massively featured machine learning approaches were not able to
break the barrier. In this paper, we introduce the notion of learnability of
the parameters of an energy model as a measure of its inherent capability. We
say that the parameters of an energy model are learnable iff there exists at
least one set of such parameters that renders every known RNA structure to date
the minimum free energy structure. We derive a necessary condition for the
learnability and give a dynamic programming algorithm to assess it. Our
algorithm computes the convex hull of the feature vectors of all feasible
structures in the ensemble of a given input sequence. Interestingly, that
convex hull coincides with the Newton polytope of the partition function as a
polynomial in energy parameters. We demonstrated the application of our theory
to a simple energy model consisting of a weighted count of A-U and C-G base
pairs. Our results show that this simple energy model satisfies the necessary
condition for less than one third of the input unpseudoknotted
sequence-structure pairs chosen from the RNA STRAND v2.0 database. For another
one third, the necessary condition is barely violated, which suggests that
augmenting this simple energy model with more features such as the Turner loops
may solve the problem. The necessary condition is severely violated for 8%,
which provides a small set of hard cases that require further investigation.
| Elmirasadat Forouzmand and Hamidreza Chitsaz | null | 1301.1608 | null | null |
Linear Bandits in High Dimension and Recommendation Systems | cs.LG stat.ML | A large number of online services provide automated recommendations to help
users to navigate through a large collection of items. New items (products,
videos, songs, advertisements) are suggested on the basis of the user's past
history and --when available-- her demographic profile. Recommendations have to
satisfy the dual goal of helping the user to explore the space of available
items, while allowing the system to probe the user's preferences.
We model this trade-off using linearly parametrized multi-armed bandits,
propose a policy and prove upper and lower bounds on the cumulative "reward"
that coincide up to constants in the data poor (high-dimensional) regime. Prior
work on linear bandits has focused on the data rich (low-dimensional) regime
and used cumulative "risk" as the figure of merit. For this data rich regime,
we provide a simple modification for our policy that achieves near-optimal risk
performance under more restrictive assumptions on the geometry of the problem.
We test (a variation of) the scheme used for establishing achievability on the
Netflix and MovieLens datasets and obtain good agreement with the qualitative
predictions of the theory we develop.
| Yash Deshpande and Andrea Montanari | null | 1301.1722 | null | null |
Risk-Aversion in Multi-armed Bandits | cs.LG | Stochastic multi-armed bandits solve the Exploration-Exploitation dilemma and
ultimately maximize the expected reward. Nonetheless, in many practical
problems, maximizing the expected reward is not the most desirable objective.
In this paper, we introduce a novel setting based on the principle of
risk-aversion where the objective is to compete against the arm with the best
risk-return trade-off. This setting proves to be intrinsically more difficult
than the standard multi-arm bandit setting due in part to an exploration risk
which introduces a regret associated with the variability of an algorithm. Using
variance as a measure of risk, we introduce two new algorithms, investigate
their theoretical guarantees, and report preliminary empirical results.
| Amir Sani (INRIA Lille - Nord Europe), Alessandro Lazaric (INRIA Lille
- Nord Europe), R\'emi Munos (INRIA Lille - Nord Europe) | null | 1301.1936 | null | null |
Bayesian Optimization in a Billion Dimensions via Random Embeddings | stat.ML cs.LG | Bayesian optimization techniques have been successfully applied to robotics,
planning, sensor placement, recommendation, advertising, intelligent user
interfaces and automatic algorithm configuration. Despite these successes, the
approach is restricted to problems of moderate dimension, and several workshops
on Bayesian optimization have identified its scaling to high-dimensions as one
of the holy grails of the field. In this paper, we introduce a novel random
embedding idea to attack this problem. The resulting Random EMbedding Bayesian
Optimization (REMBO) algorithm is very simple, has important invariance
properties, and applies to domains with both categorical and continuous
variables. We present a thorough theoretical analysis of REMBO. Empirical
results confirm that REMBO can effectively solve problems with billions of
dimensions, provided the intrinsic dimensionality is low. They also show that
REMBO achieves state-of-the-art performance in optimizing the 47 discrete
parameters of a popular mixed integer linear programming solver.
| Ziyu Wang, Frank Hutter, Masrour Zoghi, David Matheson, Nando de
Freitas | null | 1301.1942 | null | null |
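A minimal sketch of the random-embedding trick on which REMBO rests: optimize over a low-dimensional variable y and evaluate the objective at x = A y for a random Gaussian matrix A. Plain random search stands in for the Bayesian optimizer here, and the objective, dimensions, and box bounds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 10_000, 2                               # ambient vs. effective dimension
important = rng.choice(D, size=d, replace=False)

def f(x):                                      # hypothetical objective: depends on 2 coords
    return float(np.sum((x[important] - 0.5) ** 2))

A = rng.normal(size=(D, d))                    # random embedding matrix
best = np.inf
for _ in range(200):                           # stand-in for Bayesian optimization over y
    y = rng.uniform(-1.0, 1.0, size=d)
    best = min(best, f(np.clip(A @ y, -1.0, 1.0)))
print(best)
```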
Error Correction in Learning using SVMs | cs.LG | This paper is concerned with learning binary classifiers under adversarial
label-noise. We introduce the problem of error-correction in learning where the
goal is to recover the original clean data from a label-manipulated version of
it, given (i) no constraints on the adversary other than an upper-bound on the
number of errors, and (ii) some regularity properties for the original data. We
present a simple and practical error-correction algorithm called SubSVMs that
learns individual SVMs on several small-size (log-size), class-balanced, random
subsets of the data and then reclassifies the training points using a majority
vote. Our analysis reveals the need for the two main ingredients of SubSVMs,
namely class-balanced sampling and subsampled bagging. Experimental results on
synthetic as well as benchmark UCI data demonstrate the effectiveness of our
approach. In addition to noise-tolerance, log-size subsampled bagging also
yields significant run-time benefits over standard SVMs.
| Srivatsan Laxman, Sushil Mittal and Ramarathnam Venkatesan | null | 1301.2012 | null | null |
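A sketch of the SubSVMs recipe as described in the abstract: fit linear SVMs on small, class-balanced random subsets and relabel the training set by majority vote. Subset size, the number of rounds, and the binary {0,1} label assumption are illustrative choices.

```python
import numpy as np
from sklearn.svm import LinearSVC

def subsvms_relabel(X, y, n_rounds=51, per_class=20, rng=np.random.default_rng(0)):
    votes = np.zeros(len(y))
    for _ in range(n_rounds):
        idx = np.concatenate([
            rng.choice(np.flatnonzero(y == c), size=per_class, replace=False)
            for c in np.unique(y)
        ])                                              # small, class-balanced subset
        votes += LinearSVC(max_iter=5000).fit(X[idx], y[idx]).predict(X)
    return (votes > n_rounds / 2).astype(int)           # majority vote over rounds

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
y_clean = (X[:, 0] + X[:, 1] > 0).astype(int)
y_noisy = y_clean.copy()
flip = rng.choice(500, size=50, replace=False)          # adversarial-style label flips
y_noisy[flip] = 1 - y_noisy[flip]
print((subsvms_relabel(X, y_noisy) == y_clean).mean())  # fraction of labels recovered
```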
Heteroscedastic Relevance Vector Machine | stat.ML cs.LG | In this work we propose a heteroscedastic generalization to RVM, a fast
Bayesian framework for regression, based on some recent similar works. We use
variational approximation and expectation propagation to tackle the problem.
The work is still in progress, and we are examining the results and comparing
them with previous works.
| Daniel Khashabi, Mojtaba Ziyadi, Feng Liang | null | 1301.2015 | null | null |
Training Effective Node Classifiers for Cascade Classification | cs.CV cs.LG stat.ML | Cascade classifiers are widely used in real-time object detection. Different
from conventional classifiers that are designed for a low overall
classification error rate, a classifier in each node of the cascade is required
to achieve an extremely high detection rate and moderate false positive rate.
Although there are a few reported methods addressing this requirement in the
context of object detection, there is no principled feature selection method
that explicitly takes into account this asymmetric node learning objective. We
provide such an algorithm here. We show that a special case of the biased
minimax probability machine has the same formulation as the linear asymmetric
classifier (LAC) of Wu et al (2005). We then design a new boosting algorithm
that directly optimizes the cost function of LAC. The resulting
totally-corrective boosting algorithm is implemented by the column generation
technique in convex optimization. Experimental results on object detection
verify the effectiveness of the proposed boosting algorithm as a node
classifier in cascade object detection, and show performance better than that
of the current state-of-the-art.
| Chunhua Shen and Peng Wang and Sakrapee Paisitkriangkrai and Anton van
den Hengel | null | 1301.2032 | null | null |
Domain Generalization via Invariant Feature Representation | stat.ML cs.LG | This paper investigates domain generalization: How to take knowledge acquired
from an arbitrary number of related domains and apply it to previously unseen
domains? We propose Domain-Invariant Component Analysis (DICA), a kernel-based
optimization algorithm that learns an invariant transformation by minimizing
the dissimilarity across domains, whilst preserving the functional relationship
between input and output variables. A learning-theoretic analysis shows that
reducing dissimilarity improves the expected generalization ability of
classifiers on new domains, motivating the proposed algorithm. Experimental
results on synthetic and real-world datasets demonstrate that DICA successfully
learns invariant features and improves classifier performance in practice.
| Krikamol Muandet, David Balduzzi, Bernhard Sch\"olkopf | null | 1301.2115 | null | null |
Network-based clustering with mixtures of L1-penalized Gaussian
graphical models: an empirical investigation | stat.ML cs.LG stat.ME | In many applications, multivariate samples may harbor previously unrecognized
heterogeneity at the level of conditional independence or network structure.
For example, in cancer biology, disease subtypes may differ with respect to
subtype-specific interplay between molecular components. Then, both subtype
discovery and estimation of subtype-specific networks present important and
related challenges. To enable such analyses, we put forward a mixture model
whose components are sparse Gaussian graphical models. This brings together
model-based clustering and graphical modeling to permit simultaneous estimation
of cluster assignments and cluster-specific networks. We carry out estimation
within an L1-penalized framework, and investigate several specific penalization
regimes. We present empirical results on simulated data and provide general
recommendations for the formulation and use of mixtures of L1-penalized
Gaussian graphical models.
| Steven M. Hill and Sach Mukherjee | null | 1301.2194 | null | null |
Conditions Under Which Conditional Independence and Scoring Methods Lead
to Identical Selection of Bayesian Network Models | cs.AI cs.LG stat.ML | It is often stated in papers tackling the task of inferring Bayesian network
structures from data that there are these two distinct approaches: (i) Apply
conditional independence tests when testing for the presence or otherwise of
edges; (ii) Search the model space using a scoring metric. Here I argue that
for complete data and a given node ordering this division is a myth, by showing
that cross entropy methods for checking conditional independence are
mathematically identical to methods based upon discriminating between models by
their overall goodness-of-fit logarithmic scores.
| Robert G. Cowell | null | 1301.2262 | null | null |
Variational MCMC | cs.LG stat.CO stat.ML | We propose a new class of learning algorithms that combines variational
approximation and Markov chain Monte Carlo (MCMC) simulation. Naive algorithms
that use the variational approximation as proposal distribution can perform
poorly because this approximation tends to underestimate the true variance and
other features of the data. We solve this problem by introducing more
sophisticated MCMC algorithms. One of these algorithms is a mixture of two MCMC
kernels: a random walk Metropolis kernel and a block Metropolis-Hastings (MH)
kernel with a variational approximation as proposal distribution. The MH kernel
allows one to locate regions of high probability efficiently. The Metropolis
kernel allows us to explore the vicinity of these regions. This algorithm
outperforms variational approximations because it yields slightly better
estimates of the mean and considerably better estimates of higher moments, such
as covariances. It also outperforms standard MCMC algorithms because it locates
the regions of high probability quickly, thus speeding up convergence. We
demonstrate this algorithm on the problem of Bayesian parameter estimation for
logistic (sigmoid) belief networks.
| Nando de Freitas, Pedro Hojen-Sorensen, Michael I. Jordan, Stuart
Russell | null | 1301.2266 | null | null |
Incorporating Expressive Graphical Models in Variational Approximations:
Chain-Graphs and Hidden Variables | cs.AI cs.LG | Global variational approximation methods in graphical models allow efficient
approximate inference of complex posterior distributions by using a simpler
model. The choice of the approximating model determines a tradeoff between the
complexity of the approximation procedure and the quality of the approximation.
In this paper, we consider variational approximations based on two classes of
models that are richer than standard Bayesian networks, Markov networks or
mixture models. As such, these classes allow to find better tradeoffs in the
spectrum of approximations. The first class of models are chain graphs, which
capture distributions that are partially directed. The second class of models
are directed graphs (Bayesian networks) with additional latent variables. Both
classes allow representation of multi-variable dependencies that cannot be
easily represented within a Bayesian network.
| Tal El-Hay, Nir Friedman | null | 1301.2268 | null | null |
Learning the Dimensionality of Hidden Variables | cs.LG cs.AI stat.ML | A serious problem in learning probabilistic models is the presence of hidden
variables. These variables are not observed, yet interact with several of the
observed variables. Detecting hidden variables poses two problems: determining
the relations to other variables in the model and determining the number of
states of the hidden variable. In this paper, we address the latter problem in
the context of Bayesian networks. We describe an approach that utilizes a
score-based agglomerative state-clustering. As we show, this approach allows us
to efficiently evaluate models with a range of cardinalities for the hidden
variable. We show how to extend this procedure to deal with multiple
interacting hidden variables. We demonstrate the effectiveness of this approach
by evaluating it on synthetic and real-life data. We show that our approach
learns models with hidden variables that generalize better and have better
structure than previous approaches.
| Gal Elidan, Nir Friedman | null | 1301.2269 | null | null |
Multivariate Information Bottleneck | cs.LG cs.AI stat.ML | The Information bottleneck method is an unsupervised non-parametric data
organization technique. Given a joint distribution P(A,B), this method
constructs a new variable T that extracts partitions, or clusters, over the
values of A that are informative about B. The information bottleneck has
already been applied to document classification, gene expression, neural code,
and spectral analysis. In this paper, we introduce a general principled
framework for multivariate extensions of the information bottleneck method.
This allows us to consider multiple systems of data partitions that are
inter-related. Our approach utilizes Bayesian networks for specifying the
systems of clusters and what information each captures. We show that this
construction provides insight about bottleneck variations and enables us to
characterize solutions of these variations. We also present a general framework
for iterative algorithms for constructing solutions, and apply it to several
examples.
| Nir Friedman, Ori Mosenzon, Noam Slonim, Naftali Tishby | null | 1301.2270 | null | null |
Discovering Multiple Constraints that are Frequently Approximately
Satisfied | cs.LG stat.ML | Some high-dimensional data sets can be modelled by assuming that there are
many different linear constraints, each of which is Frequently Approximately
Satisfied (FAS) by the data. The probability of a data vector under the model
is then proportional to the product of the probabilities of its constraint
violations. We describe three methods of learning products of constraints using
a heavy-tailed probability distribution for the violations.
| Geoffrey E. Hinton, Yee Whye Teh | null | 1301.2278 | null | null |
Estimating Well-Performing Bayesian Networks using Bernoulli Mixtures | cs.LG cs.AI stat.ML | A novel method for estimating Bayesian network (BN) parameters from data is
presented which provides improved performance on test data. Previous research
has shown the value of representing conditional probability distributions
(CPDs) via neural networks (Neal 1992), noisy-OR gates (Neal 1992, Diez 1993) and
decision trees (Friedman and Goldszmidt 1996). The Bernoulli mixture network
(BMN) explicitly represents the CPDs of discrete BN nodes as mixtures of local
distributions, each having a different set of parents. This increases the space
of possible structures which can be considered, enabling the CPDs to have
finer-grained dependencies. The resulting estimation procedure induces a
model that is better able to emulate the underlying interactions occurring in
the data than conventional conditional Bernoulli network models. The results for
artificially generated data indicate that overfitting is best reduced by
restricting the complexity of candidate mixture substructures local to each
node. Furthermore, mixtures of very simple substructures can perform almost as
well as more complex ones. The BMN is also applied to data collected from an
online adventure game with an application to keyhole plan recognition. The
results show that the BMN-based model brings a dramatic improvement in
performance over a conventional BN model.
| Geoff A. Jarrad | null | 1301.2280 | null | null |
Improved learning of Bayesian networks | cs.LG cs.AI stat.ML | The search space of Bayesian Network structures is usually defined as Acyclic
Directed Graphs (DAGs) and the search is done by local transformations of DAGs.
But the space of Bayesian Networks is ordered by DAG Markov model inclusion and
it is natural to consider that a good search policy should take this into
account. The first attempt to do this (Chickering 1996) used equivalence
classes of DAGs instead of DAGs themselves. This approach produces better results
but it is significantly slower. We present a compromise between these two
approaches. It uses DAGs to search the space in such a way that the ordering by
inclusion is taken into account. This is achieved by repetitive usage of local
moves within the equivalence class of DAGs. We show that this new approach
produces better results than the original DAGs approach without substantial
change in time complexity. We present empirical results, within the framework
of heuristic search and Markov Chain Monte Carlo, provided through the Alarm
dataset.
| Tomas Kocka, Robert Castelo | null | 1301.2283 | null | null |
Classifier Learning with Supervised Marginal Likelihood | cs.LG stat.ML | It has been argued that in supervised classification tasks, in practice it
may be more sensible to perform model selection with respect to some more
focused model selection score, like the supervised (conditional) marginal
likelihood, than with respect to the standard marginal likelihood criterion.
However, for most Bayesian network models, computing the supervised marginal
likelihood score takes exponential time with respect to the amount of observed
data. In this paper, we consider diagnostic Bayesian network classifiers where
the significant model parameters represent conditional distributions for the
class variable, given the values of the predictor variables, in which case the
supervised marginal likelihood can be computed in linear time with respect to
the data. As the number of model parameters grows in this case exponentially
with respect to the number of predictors, we focus on simple diagnostic models
where the number of relevant predictors is small, and suggest two approaches
for applying this type of models in classification. The first approach is based
on mixtures of simple diagnostic models, while in the second approach we apply
the small predictor sets of the simple diagnostic models for augmenting the
Naive Bayes classifier.
| Petri Kontkanen, Petri Myllymaki, Henry Tirri | null | 1301.2284 | null | null |
Iterative Markov Chain Monte Carlo Computation of Reference Priors and
Minimax Risk | cs.LG stat.ML | We present an iterative Markov chain Monte Carlo algorithm for
computing reference priors and minimax risk for general parametric families.
Our approach uses MCMC techniques based on the Blahut-Arimoto algorithm
for computing channel capacity in information theory. We give a statistical
analysis of the algorithm, bounding the number of samples required for the
stochastic algorithm to closely approximate the deterministic algorithm in each
iteration. Simulations are presented for several examples from exponential
families. Although we focus on applications to reference priors and minimax risk,
the methods and analysis we develop are applicable to a much broader class of
optimization problems and iterative algorithms.
| John Lafferty, Larry A. Wasserman | null | 1301.2286 | null | null |
A Bayesian Multiresolution Independence Test for Continuous Variables | cs.AI cs.LG | In this paper we present a method of computing the posterior probability
of conditional independence of two or more continuous variables from
data, examined at several resolutions. Our approach is motivated by
the observation that the appearance of continuous data varies widely at various
resolutions, producing very different independence estimates between the
variables involved. Therefore, it is difficult to ascertain independence
without examining data at several carefully selected resolutions. In our paper,
we accomplish this using the exact computation of the posterior probability of
independence, calculated analytically given a resolution. At each examined
resolution, we assume a multinomial distribution with Dirichlet priors for the
discretized table parameters, and compute the posterior using Bayesian
integration. Across resolutions, we use a search procedure to approximate the
Bayesian integral of probability over an exponential number of possible
histograms. Our method generalizes to an arbitrary number of variables in a
straightforward manner. The test is suitable for Bayesian network learning
algorithms that use independence tests to infer the network structure, in domains
that contain any mix of continuous, ordinal and categorical variables.
| Dimitris Margaritis, Sebastian Thrun | null | 1301.2292 | null | null |
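A single-resolution version of the computation described above: with symmetric Dirichlet priors on the discretized table, the marginal likelihoods of the independent and dependent models are available in closed form, giving a posterior probability of independence. The prior values are illustrative, and the paper's search over resolutions is not shown.

```python
import numpy as np
from scipy.special import gammaln

def log_dirichlet_multinomial(counts, alpha=1.0):
    counts = np.asarray(counts, dtype=float).ravel()
    A = alpha * counts.size
    return (gammaln(A) - gammaln(A + counts.sum())
            + np.sum(gammaln(alpha + counts) - gammaln(alpha)))

def posterior_independence(table, prior_indep=0.5):
    table = np.asarray(table, dtype=float)
    log_dep = log_dirichlet_multinomial(table)                    # joint multinomial model
    log_ind = (log_dirichlet_multinomial(table.sum(axis=1))       # product of marginals
               + log_dirichlet_multinomial(table.sum(axis=0)))
    log_odds = np.log(prior_indep) - np.log(1 - prior_indep) + log_ind - log_dep
    return 1.0 / (1.0 + np.exp(-log_odds))

print(posterior_independence([[30, 5], [4, 31]]))    # strongly dependent table -> near 0
print(posterior_independence([[18, 17], [16, 19]]))  # near-independent table -> larger
```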
Expectation Propagation for approximate Bayesian inference | cs.AI cs.LG | This paper presents a new deterministic approximation technique in Bayesian
networks. This method, "Expectation Propagation", unifies two previous
techniques: assumed-density filtering, an extension of the Kalman filter, and
loopy belief propagation, an extension of belief propagation in Bayesian
networks. All three algorithms try to recover an approximate distribution which
is close in KL divergence to the true distribution. Loopy belief propagation,
because it propagates exact belief states, is useful for a limited class of
belief networks, such as those which are purely discrete. Expectation
Propagation approximates the belief states by only retaining certain
expectations, such as mean and variance, and iterates until these expectations
are consistent throughout the network. This makes it applicable to hybrid
networks with discrete and continuous nodes. Expectation Propagation also
extends belief propagation in the opposite direction - it can propagate richer
belief states that incorporate correlations between nodes. Experiments with
Gaussian mixture models show Expectation Propagation to be convincingly better
than methods with similar computational cost: Laplace's method, variational
Bayes, and Monte Carlo. Expectation Propagation also provides an efficient
algorithm for training Bayes point machine classifiers.
| Thomas P. Minka | null | 1301.2294 | null | null |
Probabilistic Models for Unified Collaborative and Content-Based
Recommendation in Sparse-Data Environments | cs.IR cs.LG stat.ML | Recommender systems leverage product and community information to target
products to consumers. Researchers have developed collaborative recommenders,
content-based recommenders, and (largely ad-hoc) hybrid systems. We propose a
unified probabilistic framework for merging collaborative and content-based
recommendations. We extend Hofmann's [1999] aspect model to incorporate
three-way co-occurrence data among users, items, and item content. The relative
influence of collaboration data versus content data is not imposed as an
exogenous parameter, but rather emerges naturally from the given data sources.
Global probabilistic models coupled with standard Expectation Maximization (EM)
learning algorithms tend to drastically overfit in sparse-data situations, as
is typical in recommendation applications. We show that secondary content
information can often be used to overcome sparsity. Experiments on data from
the ResearchIndex library of Computer Science publications show that
appropriate mixture models incorporating secondary data produce significantly
better quality recommenders than k-nearest neighbors (k-NN). Global
probabilistic models also allow more general inferences than local methods like
k-NN.
| Alexandrin Popescul, Lyle H. Ungar, David M Pennock, Steve Lawrence | null | 1301.2303 | null | null |
Symmetric Collaborative Filtering Using the Noisy Sensor Model | cs.IR cs.LG | Collaborative filtering is the process of making recommendations regarding
the potential preference of a user, for example shopping on the Internet, based
on the preference ratings of the user and a number of other users for various
items. This paper considers collaborative filtering based on
explicitmulti-valued ratings. To evaluate the algorithms, weconsider only {em
pure} collaborative filtering, using ratings exclusively, and no other
information about the people or items.Our approach is to predict a user's
preferences regarding a particularitem by using other people who rated that
item and other items ratedby the user as noisy sensors. The noisy sensor model
uses Bayes' theorem to compute the probability distribution for the
user'srating of a new item. We give two variant models: in one, we learn a{em
classical normal linear regression} model of how users rate items; in
another,we assume different users rate items the same, but the accuracy of
thesensors needs to be learned. We compare these variant models
withstate-of-the-art techniques and show how they are significantly
better,whether a user has rated only two items or many. We reportempirical
results using the EachMovie database
footnote{http://research.compaq.com/SRC/eachmovie/} of movie ratings. Wealso
show that by considering items similarity along with theusers similarity, the
accuracy of the prediction increases.
| Rita Sharma, David L Poole | null | 1301.2309 | null | null |
Policy Improvement for POMDPs Using Normalized Importance Sampling | cs.AI cs.LG | We present a new method for estimating the expected return of a POMDP from
experience. The method does not assume any knowledge of the POMDP and allows
the experience to be gathered from an arbitrary sequence of policies. The
return is estimated for any new policy of the POMDP. We motivate the estimator
from function-approximation and importance sampling points-of-view and derive
its theoretical properties. Although the estimator is biased, it has low
variance and the bias is often irrelevant when the estimator is used for
pair-wise comparisons. We conclude by extending the estimator to policies with
memory and compare its performance in a greedy search algorithm to REINFORCE
algorithms showing an order of magnitude reduction in the number of trials
required.
| Christian R. Shelton | null | 1301.2310 | null | null |
Maximum Likelihood Bounded Tree-Width Markov Networks | cs.LG cs.AI stat.ML | Chow and Liu (1968) studied the problem of learning a maximum likelihood
Markov tree. We generalize their work to more complex Markov networks by
considering the problem of learning a maximum likelihood Markov network of
bounded complexity. We discuss how tree-width is in many ways the appropriate
measure of complexity and thus analyze the problem of learning a maximum
likelihood Markov network of bounded tree-width. Similar to the work of Chow and
Liu, we are able to formalize the learning problem as a combinatorial
optimization problem on graphs. We show that learning a maximum likelihood
Markov network of bounded tree-width is equivalent to finding a maximum weight
hypertree. This equivalence gives rise to global, integer-programming based,
approximation algorithms with provable performance guarantees for
the learning problem. This contrasts with heuristic local-search algorithms which
were previously suggested (e.g. by Malvestuto 1991). The equivalence also allows
us to study the computational hardness of the learning problem. We show that
learning a maximum likelihood Markov network of bounded tree-width is NP-hard,
and discuss the hardness of approximation.
| Nathan Srebro | null | 1301.2311 | null | null |
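The tree-width-1 special case that this abstract generalizes (Chow & Liu, 1968) is easy to sketch: the maximum likelihood Markov tree is the maximum-weight spanning tree under empirical pairwise mutual information. The bounded tree-width generalization via maximum weight hypertrees is much harder (indeed NP-hard, per the abstract) and is not implemented below; function names are illustrative.

```python
import numpy as np
from itertools import combinations

def mutual_information(x, y):
    """Empirical mutual information between two discrete columns."""
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            p_ab = np.mean((x == a) & (y == b))
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (np.mean(x == a) * np.mean(y == b)))
    return mi

def chow_liu_tree(data):
    """data: (n_samples, n_vars) array of discrete values; returns the edge list
    of the maximum likelihood Markov tree, i.e. the maximum-weight spanning
    tree under empirical pairwise mutual information."""
    n_vars = data.shape[1]
    weight = {(i, j): mutual_information(data[:, i], data[:, j])
              for i, j in combinations(range(n_vars), 2)}
    in_tree, edges = {0}, []
    while len(in_tree) < n_vars:               # Prim's algorithm, maximizing weight
        i, j = max((e for e in weight if (e[0] in in_tree) ^ (e[1] in in_tree)),
                   key=lambda e: weight[e])
        edges.append((i, j))
        in_tree.update((i, j))
    return edges
```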
The Optimal Reward Baseline for Gradient-Based Reinforcement Learning | cs.LG cs.AI stat.ML | There exist a number of reinforcement learning algorithms which learn by
climbing the gradient of expected reward. Their long-run convergence has been
proved, even in partially observable environments with non-deterministic
actions, and without the need for a system model. However, the variance of the
gradient estimator has been found to be a significant practical problem. Recent
approaches have discounted future rewards, introducing a bias-variance
trade-off into the gradient estimate. We incorporate a reward baseline into the
learning system, and show that it affects variance without introducing further
bias. In particular, as we approach the zero-bias, high-variance
parameterization, the optimal (or variance minimizing) constant reward baseline
is equal to the long-term average expected reward. Modified policy-gradient
algorithms are presented, and a number of experiments demonstrate their
improvement over previous work.
| Lex Weaver, Nigel Tao | null | 1301.2315 | null | null |
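The following toy sketch illustrates only the general baseline effect behind the abstract above: for a likelihood-ratio (REINFORCE-style) gradient estimator, subtracting a constant baseline leaves the expectation unchanged while altering the variance. The one-parameter Bernoulli "policy" and the payoff values are invented for the example and do not reproduce the paper's POMDP setting or its optimality result.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.3                                   # policy parameter: P(action = 1)

def rollout(n):
    actions = rng.random(n) < theta
    rewards = np.where(actions, 1.0, 0.2)     # action 1 pays 1.0, action 0 pays 0.2
    # d/dtheta log pi(a): 1/theta for a = 1, -1/(1 - theta) for a = 0
    scores = np.where(actions, 1.0 / theta, -1.0 / (1.0 - theta))
    return rewards, scores

rewards, scores = rollout(100_000)
grad_plain = scores * rewards
grad_baselined = scores * (rewards - rewards.mean())   # constant average-reward baseline

print("means (agree up to sampling noise):", grad_plain.mean(), grad_baselined.mean())
print("variance without baseline:", grad_plain.var())
print("variance with baseline:   ", grad_baselined.var())
```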
Cross-covariance modelling via DAGs with hidden variables | cs.LG stat.ML | DAG models with hidden variables present many difficulties that are not
present when all nodes are observed. In particular, fully observed DAG models
are identified and correspond to well-defined sets of distributions, whereas
this is not true if nodes are unobserved. In this paper we characterize exactly
the set of distributions given by a class of one-dimensional Gaussian latent
variable models. These models relate two blocks of observed variables, modeling
only the cross-covariance matrix. We describe the relation of this model to the
singular value decomposition of the cross-covariance matrix. We show that,
although the model is underidentified, useful information may be extracted. We
further consider an alternative parametrization in which one latent variable is
associated with each block. Our analysis leads to some novel covariance
equivalence results for Gaussian hidden variable models.
| Jacob A. Wegelin, Thomas S. Richardson | null | 1301.2316 | null | null |
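To illustrate the relation the abstract mentions between the single-latent-variable model and the singular value decomposition of the cross-covariance matrix, here is a small simulation sketch; the generating weights are random and the paper's identifiability analysis is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, q = 500, 3, 4
z = rng.normal(size=n)                          # shared one-dimensional latent variable
X = np.outer(z, rng.normal(size=p)) + rng.normal(size=(n, p))
Y = np.outer(z, rng.normal(size=q)) + rng.normal(size=(n, q))

# Empirical cross-covariance between the two blocks and its SVD; under the
# one-latent-variable model, a single singular value should dominate.
cross_cov = (X - X.mean(0)).T @ (Y - Y.mean(0)) / (n - 1)
U, s, Vt = np.linalg.svd(cross_cov, full_matrices=False)
print("singular values:", s.round(3))
```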
Belief Optimization for Binary Networks: A Stable Alternative to Loopy
Belief Propagation | cs.AI cs.LG | We present a novel inference algorithm for arbitrary, binary, undirected
graphs. Unlike loopy belief propagation, which iterates fixed point equations,
we directly descend on the Bethe free energy. The algorithm consists of two
phases, first we update the pairwise probabilities, given the marginal
probabilities at each unit, using an analytic expression. Next, we update the
marginal probabilities, given the pairwise probabilities by following the
negative gradient of the Bethe free energy. Both steps are guaranteed to
decrease the Bethe free energy, and since it is lower bounded, the algorithm is
guaranteed to converge to a local minimum. We also show that the Bethe free
energy is equal to the TAP free energy up to second order in the weights. In
experiments we confirm that when belief propagation converges it usually finds
identical solutions as our belief optimization method. However, in cases where
belief propagation fails to converge, belief optimization continues to converge
to reasonable beliefs. The stable nature of belief optimization makes it
ideally suited for learning graphical models from data.
| Max Welling, Yee Whye Teh | null | 1301.2317 | null | null |
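As a point of reference for the abstract above, the sketch below only evaluates the Bethe free energy of a binary pairwise model at given singleton and pairwise beliefs; the paper's algorithm additionally alternates an analytic pairwise update with gradient descent on the singleton marginals, which is not shown. The function and argument names are illustrative.

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def bethe_free_energy(node_pot, edge_pot, node_bel, edge_bel, edges):
    """Bethe free energy F = U - H_Bethe for a pairwise model.

    node_pot[i] : length-2 positive potentials; edge_pot[(i, j)] : 2x2 potentials
    node_bel[i] : length-2 beliefs;             edge_bel[(i, j)] : 2x2 beliefs
    edges       : list of (i, j) pairs
    """
    degree = {i: 0 for i in node_pot}
    for i, j in edges:
        degree[i] += 1
        degree[j] += 1
    energy, h_bethe = 0.0, 0.0
    for e in edges:
        energy -= np.sum(edge_bel[e] * np.log(edge_pot[e]))   # pairwise average energy
        h_bethe += entropy(edge_bel[e])
    for i in node_pot:
        energy -= np.sum(node_bel[i] * np.log(node_pot[i]))   # singleton average energy
        h_bethe -= (degree[i] - 1) * entropy(node_bel[i])     # Bethe entropy correction
    return energy - h_bethe
```

Because every term is a sum over small local tables, gradients with respect to the singleton beliefs are cheap to evaluate, which is what makes a direct-descent scheme of the kind described above practical.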
Statistical Modeling in Continuous Speech Recognition (CSR) (Invited
Talk) | cs.LG cs.AI stat.ML | Automatic continuous speech recognition (CSR) is sufficiently mature that a
variety of real world applications are now possible including large vocabulary
transcription and interactive spoken dialogues. This paper reviews the
evolution of the statistical modelling techniques which underlie current-day
systems, specifically hidden Markov models (HMMs) and N-grams. Starting from a
description of the speech signal and its parameterisation, the various
modelling assumptions and their consequences are discussed. It then describes
various techniques by which the effects of these assumptions can be mitigated.
Despite the progress that has been made, the limitations of current modelling
techniques are still evident. The paper therefore concludes with a brief review
of some of the more fundamental modelling work now in progress.
| Steve Young | null | 1301.2318 | null | null |
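The core likelihood computation behind the HMM-based recognizers surveyed above can be sketched with a discrete-observation forward recursion. Real CSR systems use continuous (e.g. Gaussian mixture) emission densities and large decoding graphs, so this is only meant to show the recursion itself; the function name is illustrative.

```python
import numpy as np

def forward_log_likelihood(obs, log_pi, log_A, log_B):
    """Log-likelihood of an observation sequence under a discrete HMM.

    obs    : sequence of observation symbol indices
    log_pi : (S,) log initial state probabilities
    log_A  : (S, S) log transition probabilities, log_A[i, j] = log P(j | i)
    log_B  : (S, V) log emission probabilities
    """
    alpha = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        # alpha'_j = logsumexp_i(alpha_i + log_A[i, j]) + log_B[j, o]
        alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, o]
    return float(np.logaddexp.reduce(alpha))
```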