categories | doi | id | year | venue | link | updated | published | title | abstract | authors |
string | string | string | float64 | string | string | string | string | string | string | sequence |
---|---|---|---|---|---|---|---|---|---|---|
cs.LG cs.AI stat.ML | null | 1301.3764 | null | null | http://arxiv.org/pdf/1301.3764v2 | 2013-03-27T18:30:41Z | 2013-01-16T17:48:38Z | Adaptive learning rates and parallelization for stochastic, sparse,
non-smooth gradients | Recent work has established an empirically successful framework for adapting
learning rates for stochastic gradient descent (SGD). This effectively removes
the need for tuning, while automatically reducing learning rates over time on
stationary problems, and permitting learning rates to grow appropriately in
non-stationary tasks. Here, we extend the idea in three directions, addressing
proper minibatch parallelization, including reweighted updates for sparse or
orthogonal gradients, improving robustness on non-smooth loss functions, in the
process replacing the diagonal Hessian estimation procedure that may not always
be available by a robust finite-difference approximation. The final algorithm
integrates all these components, has linear complexity and is hyper-parameter
free.
| [
"['Tom Schaul' 'Yann LeCun']",
"Tom Schaul, Yann LeCun"
] |
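The abstract above builds on adaptive per-parameter learning rates for SGD. Below is a minimal, hypothetical sketch of that general idea (a per-parameter rate derived from running gradient moments) on a toy quadratic; the decay constant, base rate, and objective are illustrative assumptions, not the exact update rule from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
c = np.array([1.0, 10.0])      # per-coordinate curvature of a toy quadratic
w = np.array([5.0, -5.0])      # parameters to optimize
g_bar = np.zeros_like(w)       # running mean of gradients
g2_bar = np.ones_like(w)       # running mean of squared gradients
decay, base_rate, eps = 0.1, 0.05, 1e-8

for t in range(2000):
    grad = c * w + rng.normal(scale=0.5, size=w.shape)   # noisy gradient
    g_bar = (1 - decay) * g_bar + decay * grad
    g2_bar = (1 - decay) * g2_bar + decay * grad ** 2
    # Per-parameter rate: close to base_rate when gradients agree,
    # shrinks automatically when they are dominated by noise.
    rate = base_rate * g_bar ** 2 / (g2_bar + eps)
    w = w - rate * grad

print("final parameters:", w)   # should settle near the optimum at 0
```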
cs.LG cs.CV | null | 1301.3775 | null | null | http://arxiv.org/pdf/1301.3775v4 | 2013-03-19T18:43:29Z | 2013-01-16T18:07:01Z | Discriminative Recurrent Sparse Auto-Encoders | We present the discriminative recurrent sparse auto-encoder model, comprising
a recurrent encoder of rectified linear units, unrolled for a fixed number of
iterations, and connected to two linear decoders that reconstruct the input and
predict its supervised classification. Training via
backpropagation-through-time initially minimizes an unsupervised sparse
reconstruction error; the loss function is then augmented with a discriminative
term on the supervised classification. The depth implicit in the
temporally-unrolled form allows the system to exhibit all the power of deep
networks, while substantially reducing the number of trainable parameters.
From an initially unstructured network the hidden units differentiate into
categorical-units, each of which represents an input prototype with a
well-defined class; and part-units representing deformations of these
prototypes. The learned organization of the recurrent encoder is hierarchical:
part-units are driven directly by the input, whereas the activity of
categorical-units builds up over time through interactions with the part-units.
Even using a small number of hidden units per layer, discriminative recurrent
sparse auto-encoders achieve excellent performance on MNIST.
| [
"Jason Tyler Rolfe and Yann LeCun",
"['Jason Tyler Rolfe' 'Yann LeCun']"
] |
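As a rough illustration of the architecture described above (a recurrent ReLU encoder unrolled for a fixed number of steps, feeding two linear decoders), here is a forward-pass-only sketch in NumPy. All sizes, weight initializations, and variable names are assumptions; training with the sparse reconstruction and discriminative losses is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_classes, n_steps = 784, 100, 10, 11    # illustrative sizes

# Randomly initialized weights stand in for trained parameters.
E = rng.normal(scale=0.01, size=(n_hidden, n_in))        # encoder: input -> hidden
S = rng.normal(scale=0.01, size=(n_hidden, n_hidden))    # recurrent hidden -> hidden
D = rng.normal(scale=0.01, size=(n_in, n_hidden))        # decoder 1: reconstruction
C = rng.normal(scale=0.01, size=(n_classes, n_hidden))   # decoder 2: classification

def relu(a):
    return np.maximum(a, 0.0)

x = rng.random(n_in)           # a stand-in input image, flattened
z = np.zeros(n_hidden)
for _ in range(n_steps):       # recurrent encoder unrolled a fixed number of steps
    z = relu(E @ x + S @ z)

reconstruction = D @ z         # linear decoder reconstructing the input
class_scores = C @ z           # linear decoder predicting the label
print(reconstruction.shape, class_scores.shape)
```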
cs.LG | 10.1016/j.neucom.2013.02.024 | 1301.3816 | null | null | http://arxiv.org/abs/1301.3816v1 | 2013-01-16T20:16:02Z | 2013-01-16T20:16:02Z | Learning Output Kernels for Multi-Task Problems | Simultaneously solving multiple related learning tasks is beneficial under a
variety of circumstances, but the prior knowledge necessary to correctly model
task relationships is rarely available in practice. In this paper, we develop a
novel kernel-based multi-task learning technique that automatically reveals
structural inter-task relationships. Building on the framework of output
kernel learning (OKL), we introduce a method that jointly learns multiple
functions and a low-rank multi-task kernel by solving a non-convex
regularization problem. Optimization is carried out via a block coordinate
descent strategy, where each subproblem is solved using suitable conjugate
gradient (CG) type iterative methods for linear operator equations. The
effectiveness of the proposed approach is demonstrated on pharmacological and
collaborative filtering data.
| [
"['Francesco Dinuzzo']",
"Francesco Dinuzzo"
] |
cs.LG cs.NE stat.ML | null | 1301.3833 | null | null | http://arxiv.org/pdf/1301.3833v1 | 2013-01-16T15:48:42Z | 2013-01-16T15:48:42Z | Reversible Jump MCMC Simulated Annealing for Neural Networks | We propose a novel reversible jump Markov chain Monte Carlo (MCMC) simulated
annealing algorithm to optimize radial basis function (RBF) networks. This
algorithm enables us to maximize the joint posterior distribution of the
network parameters and the number of basis functions. It performs a global
search in the joint space of the parameters and number of parameters, thereby
surmounting the problem of local minima. We also show that by calibrating a
Bayesian model, we can obtain the classical AIC, BIC and MDL model selection
criteria within a penalized likelihood framework. Finally, we show
theoretically and empirically that the algorithm converges to the modes of the
full posterior distribution in an efficient way.
| [
"Christophe Andrieu, Nando de Freitas, Arnaud Doucet",
"['Christophe Andrieu' 'Nando de Freitas' 'Arnaud Doucet']"
] |
cs.LG cs.AI stat.ML | null | 1301.3837 | null | null | http://arxiv.org/pdf/1301.3837v1 | 2013-01-16T15:48:59Z | 2013-01-16T15:48:59Z | Dynamic Bayesian Multinets | In this work, dynamic Bayesian multinets are introduced where a Markov chain
state at time t determines conditional independence patterns between random
variables lying within a local time window surrounding t. It is shown how
information-theoretic criterion functions can be used to induce sparse,
discriminative, and class-conditional network structures that yield an optimal
approximation to the class posterior probability, and therefore are useful for
the classification task. Using a new structure learning heuristic, the
resulting models are tested on a medium-vocabulary isolated-word speech
recognition task. It is demonstrated that these discriminatively structured
dynamic Bayesian multinets, when trained in a maximum likelihood setting using
EM, can outperform both HMMs and other dynamic Bayesian networks with a similar
number of parameters.
| [
"Jeff A. Bilmes",
"['Jeff A. Bilmes']"
] |
cs.LG stat.ML | null | 1301.3838 | null | null | http://arxiv.org/pdf/1301.3838v1 | 2013-01-16T15:49:03Z | 2013-01-16T15:49:03Z | Variational Relevance Vector Machines | The Support Vector Machine (SVM) of Vapnik (1998) has become widely
established as one of the leading approaches to pattern recognition and machine
learning. It expresses predictions in terms of a linear combination of kernel
functions centred on a subset of the training data, known as support vectors.
Despite its widespread success, the SVM suffers from some important
limitations, one of the most significant being that it makes point predictions
rather than generating predictive distributions. Recently Tipping (1999) has
formulated the Relevance Vector Machine (RVM), a probabilistic model whose
functional form is equivalent to the SVM. It achieves comparable recognition
accuracy to the SVM, yet provides a full predictive distribution, and also
requires substantially fewer kernel functions.
The original treatment of the RVM relied on the use of type II maximum
likelihood (the 'evidence framework') to provide point estimates of the
hyperparameters which govern model sparsity. In this paper we show how the RVM
can be formulated and solved within a completely Bayesian paradigm through the
use of variational inference, thereby giving a posterior distribution over both
parameters and hyperparameters. We demonstrate the practicality and performance
of the variational RVM using both synthetic and real world examples.
| [
"['Christopher M. Bishop' 'Michael Tipping']",
"Christopher M. Bishop, Michael Tipping"
] |
cs.AI cs.LG | null | 1301.3840 | null | null | http://arxiv.org/pdf/1301.3840v1 | 2013-01-16T15:49:11Z | 2013-01-16T15:49:11Z | Utilities as Random Variables: Density Estimation and Structure
Discovery | Decision theory does not traditionally include uncertainty over utility
functions. We argue that a person's utility value for a given outcome can
be treated as we treat other domain attributes: as a random variable with a
density function over its possible values. We show that we can apply
statistical density estimation techniques to learn such a density function from
a database of partially elicited utility functions. In particular, we define a
Bayesian learning framework for this problem, assuming the distribution over
utilities is a mixture of Gaussians, where the mixture components represent
statistically coherent subpopulations. We can also extend our techniques to the
problem of discovering generalized additivity structure in the utility
functions in the population. We define a Bayesian model selection criterion for
utility function structure and a search procedure over structures. The
factorization of the utilities in the learned model, and the generalization
obtained from density estimation, allows us to provide robust estimates of
utilities using a significantly smaller number of utility elicitation
questions. We experiment with our technique on synthetic utility data and on a
real database of utility functions in the domain of prenatal diagnosis.
| [
"['Urszula Chajewska' 'Daphne Koller']",
"Urszula Chajewska, Daphne Koller"
] |
cs.LG stat.ML | null | 1301.3843 | null | null | http://arxiv.org/pdf/1301.3843v1 | 2013-01-16T15:49:23Z | 2013-01-16T15:49:23Z | Bayesian Classification and Feature Selection from Finite Data Sets | Feature selection aims to select the smallest subset of features for a
specified level of performance. The optimal achievable classification
performance on a feature subset is summarized by its Receiver Operating Curve
(ROC). When infinite data is available, the Neyman-Pearson (NP) design
procedure provides the most efficient way of obtaining this curve. In practice
the design procedure is applied to density estimates from finite data sets. We
perform a detailed statistical analysis of the resulting error propagation on
finite alphabets. We show that the estimated performance curve (EPC) produced
by the design procedure is arbitrarily accurate given sufficient data,
independent of the size of the feature set. However, the underlying likelihood
ranking procedure is highly sensitive to errors, which reduces the probability
that the EPC is in fact the ROC. In the worst case, guaranteeing that the EPC
is equal to the ROC may require data sizes exponential in the size of the
feature set. These results imply that in theory the NP design approach may only
be valid for characterizing relatively small feature subsets, even when the
performance of any given classifier can be estimated very accurately. We
discuss the practical limitations of on-line methods that ensure that the NP
procedure operates in a statistically valid region.
| [
"['Frans Coetzee' 'Steve Lawrence' 'C. Lee Giles']",
"Frans Coetzee, Steve Lawrence, C. Lee Giles"
] |
cs.LG stat.ML | null | 1301.3849 | null | null | http://arxiv.org/pdf/1301.3849v1 | 2013-01-16T15:49:46Z | 2013-01-16T15:49:46Z | Experiments with Random Projection | Recent theoretical work has identified random projection as a promising
dimensionality reduction technique for learning mixtures of Gaussians. Here we
summarize these results and illustrate them by a wide variety of experiments on
synthetic and real data.
| [
"Sanjoy Dasgupta",
"['Sanjoy Dasgupta']"
] |
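A small sketch of the random projection experiments described above: project synthetic Gaussian-mixture data through a scaled random matrix and check that pairwise distances are roughly preserved. Dimensions and the mixture parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 500, 1000, 20        # samples, original dimension, target dimension

# Synthetic data: a two-component Gaussian mixture in d dimensions.
means = np.stack([np.zeros(d), np.full(d, 3.0 / np.sqrt(d))])
labels = rng.integers(0, 2, size=n)
X = means[labels] + rng.normal(size=(n, d))

# Random projection: a Gaussian matrix scaled so distances are preserved in expectation.
R = rng.normal(size=(d, k)) / np.sqrt(k)
X_low = X @ R

# Pairwise distances are approximately preserved after projection.
i, j = 0, 1
print(np.linalg.norm(X[i] - X[j]), np.linalg.norm(X_low[i] - X_low[j]))
```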
cs.LG stat.ML | null | 1301.3850 | null | null | http://arxiv.org/pdf/1301.3850v1 | 2013-01-16T15:49:50Z | 2013-01-16T15:49:50Z | A Two-round Variant of EM for Gaussian Mixtures | Given a set of possible models (e.g., Bayesian network structures) and a data
sample, in the unsupervised model selection problem the task is to choose the
most accurate model with respect to the domain joint probability distribution.
In contrast to this, in supervised model selection it is a priori known that
the chosen model will be used in the future for prediction tasks involving more
'focused' predictive distributions. Although focused predictive distributions
can be produced from the joint probability distribution by marginalization, in
practice the best model in the unsupervised sense does not necessarily perform
well in supervised domains. In particular, the standard marginal likelihood
score is a criterion for the unsupervised task, and, although frequently used
for supervised model selection also, does not perform well in such tasks. In
this paper we study the performance of the marginal likelihood score
empirically in supervised Bayesian network selection tasks by using a large
number of publicly available classification data sets, and compare the results
to those obtained by alternative model selection criteria, including empirical
crossvalidation methods, an approximation of a supervised marginal likelihood
measure, and a supervised version of Dawid's prequential (predictive sequential)
principle. The results demonstrate that the marginal likelihood score does not
perform well for supervised model selection, while the best results are
obtained by using Dawid's prequential approach.
| [
"Sanjoy Dasgupta, Leonard Schulman",
"['Sanjoy Dasgupta' 'Leonard Schulman']"
] |
cs.LG stat.ML | null | 1301.3851 | null | null | http://arxiv.org/pdf/1301.3851v1 | 2013-01-16T15:49:54Z | 2013-01-16T15:49:54Z | Minimum Message Length Clustering Using Gibbs Sampling | The K-Mean and EM algorithms are popular in clustering and mixture modeling,
due to their simplicity and ease of implementation. However, they have several
significant limitations. Both converge to a local optimum of their respective
objective functions (ignoring the uncertainty in the model space), require the
a priori specification of the number of classes/clusters, and are inconsistent.
In this work we overcome these limitations by using the Minimum Message Length
(MML) principle and a variation to the K-Means/EM observation assignment and
parameter calculation scheme. We maintain the simplicity of these approaches
while constructing a Bayesian mixture modeling tool that samples/searches the
model space using a Markov Chain Monte Carlo (MCMC) sampler known as a Gibbs
sampler. Gibbs sampling allows us to visit each model according to its
posterior probability. Therefore, if the model space is multi-modal we will
visit all models and not get stuck in local optima. We call our approach
multiple chains at equilibrium (MCE) MML sampling.
| [
"Ian Davidson",
"['Ian Davidson']"
] |
cs.LG cs.AI stat.ML | null | 1301.3852 | null | null | http://arxiv.org/pdf/1301.3852v1 | 2013-01-16T15:49:57Z | 2013-01-16T15:49:57Z | Mix-nets: Factored Mixtures of Gaussians in Bayesian Networks With Mixed
Continuous And Discrete Variables | Recently developed techniques have made it possible to quickly learn accurate
probability density functions from data in low-dimensional continuous space. In
particular, mixtures of Gaussians can be fitted to data very quickly using an
accelerated EM algorithm that employs multiresolution kd-trees (Moore, 1999).
In this paper, we propose a kind of Bayesian network in which low-dimensional
mixtures of Gaussians over different subsets of the domain's variables are
combined into a coherent joint probability model over the entire domain. The
network is also capable of modeling complex dependencies between discrete
variables and continuous variables without requiring discretization of the
continuous variables. We present efficient heuristic algorithms for
automatically learning these networks from data, and perform comparative
experiments illustrating how well these networks model real scientific data and
synthetic data. We also briefly discuss some possible improvements to the
networks, as well as possible applications.
| [
"Scott Davies, Andrew Moore",
"['Scott Davies' 'Andrew Moore']"
] |
cs.LG cs.AI stat.CO | null | 1301.3853 | null | null | http://arxiv.org/pdf/1301.3853v1 | 2013-01-16T15:50:01Z | 2013-01-16T15:50:01Z | Rao-Blackwellised Particle Filtering for Dynamic Bayesian Networks | Particle filters (PFs) are powerful sampling-based inference/learning
algorithms for dynamic Bayesian networks (DBNs). They allow us to treat, in a
principled way, any type of probability distribution, nonlinearity and
non-stationarity. They have appeared in several fields under such names as
"condensation", "sequential Monte Carlo" and "survival of the fittest". In this
paper, we show how we can exploit the structure of the DBN to increase the
efficiency of particle filtering, using a technique known as
Rao-Blackwellisation. Essentially, this samples some of the variables, and
marginalizes out the rest exactly, using the Kalman filter, HMM filter,
junction tree algorithm, or any other finite dimensional optimal filter. We
show that Rao-Blackwellised particle filters (RBPFs) lead to more accurate
estimates than standard PFs. We demonstrate RBPFs on two problems, namely
non-stationary online regression with radial basis function networks and robot
localization and map building. We also discuss other potential application
areas and provide references to some finite dimensional optimal filters.
| [
"Arnaud Doucet, Nando de Freitas, Kevin Murphy, Stuart Russell",
"['Arnaud Doucet' 'Nando de Freitas' 'Kevin Murphy' 'Stuart Russell']"
] |
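For context on the abstract above, here is a plain bootstrap particle filter on a hypothetical 1-D nonlinear state-space model; it shows only the generic PF (propagate, weight, resample), not the Rao-Blackwellised variant in which part of the state would be marginalized exactly with a Kalman or HMM filter. The model and constants are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 50, 500                       # time steps, particles
q, r = 0.5, 0.5                      # process and observation noise std

# Simulate a trajectory and noisy observations.
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + np.sin(t / 5.0) + q * rng.normal()
y = x_true + r * rng.normal(size=T)

particles = rng.normal(size=N)
estimates = np.zeros(T)
for t in range(T):
    if t > 0:                        # propagate through the transition model
        particles = 0.9 * particles + np.sin(t / 5.0) + q * rng.normal(size=N)
    # Weight by the observation likelihood, estimate, then resample.
    w = np.exp(-0.5 * ((y[t] - particles) / r) ** 2)
    w /= w.sum()
    estimates[t] = np.sum(w * particles)
    particles = particles[rng.choice(N, size=N, p=w)]

print("mean absolute filtering error:", np.abs(estimates - x_true).mean())
```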
cs.CV cs.LG stat.ML | null | 1301.3854 | null | null | http://arxiv.org/pdf/1301.3854v1 | 2013-01-16T15:50:06Z | 2013-01-16T15:50:06Z | Learning Graphical Models of Images, Videos and Their Spatial
Transformations | Mixtures of Gaussians, factor analyzers (probabilistic PCA) and hidden Markov
models are staples of static and dynamic data modeling and image and video
modeling in particular. We show how topographic transformations in the input,
such as translation and shearing in images, can be accounted for in these
models by including a discrete transformation variable. The resulting models
perform clustering, dimensionality reduction and time-series analysis in a way
that is invariant to transformations in the input. Using the EM algorithm,
these transformation-invariant models can be fit to static data and time
series. We give results on filtering microscopy images, face and facial pose
clustering, handwritten digit modeling and recognition, video clustering,
object tracking, and removal of distractions from video sequences.
| [
"['Brendan J. Frey' 'Nebojsa Jojic']",
"Brendan J. Frey, Nebojsa Jojic"
] |
cs.LG cs.AI stat.ML | null | 1301.3856 | null | null | http://arxiv.org/pdf/1301.3856v1 | 2013-01-16T15:50:14Z | 2013-01-16T15:50:14Z | Being Bayesian about Network Structure | In many domains, we are interested in analyzing the structure of the
underlying distribution, e.g., whether one variable is a direct parent of the
other. Bayesian model-selection attempts to find the MAP model and use its
structure to answer these questions. However, when the amount of available data
is modest, there might be many models that have non-negligible posterior. Thus,
we want to compute the Bayesian posterior of a feature, i.e., the total posterior
probability of all models that contain it. In this paper, we propose a new
approach for this task. We first show how to efficiently compute a sum over the
exponential number of networks that are consistent with a fixed ordering over
network variables. This allows us to compute, for a given ordering, both the
marginal probability of the data and the posterior of a feature. We then use
this result as the basis for an algorithm that approximates the Bayesian
posterior of a feature. Our approach uses a Markov Chain Monte Carlo (MCMC)
method, but over orderings rather than over network structures. The space of
orderings is much smaller and more regular than the space of structures, and
has a smoother posterior 'landscape'. We present empirical results on synthetic
and real-life datasets that compare our approach to full model averaging (when
possible), to MCMC over network structures, and to a non-Bayesian bootstrap
approach.
| [
"['Nir Friedman' 'Daphne Koller']",
"Nir Friedman, Daphne Koller"
] |
cs.AI cs.LG stat.ML | null | 1301.3857 | null | null | http://arxiv.org/pdf/1301.3857v1 | 2013-01-16T15:50:18Z | 2013-01-16T15:50:18Z | Gaussian Process Networks | In this paper we address the problem of learning the structure of a Bayesian
network in domains with continuous variables. This task requires a procedure
for comparing different candidate structures. In the Bayesian framework, this
is done by evaluating the marginal likelihood of the data given a
candidate structure. This term can be computed in closed-form for standard
parametric families (e.g., Gaussians), and can be approximated, at some
computational cost, for some semi-parametric families (e.g., mixtures of
Gaussians).
We present a new family of continuous variable probabilistic networks that
are based on Gaussian Process priors. These priors are semi-parametric in
nature and can learn almost arbitrary noisy functional relations. Using these
priors, we can directly compute marginal likelihoods for structure learning.
The resulting method can discover a wide range of functional dependencies in
multivariate data. We develop the Bayesian score of Gaussian Process Networks
and describe how to learn them from data. We present empirical results on
artificial data as well as on real-life domains with non-linear dependencies.
| [
"Nir Friedman, Iftach Nachman",
"['Nir Friedman' 'Iftach Nachman']"
] |
cs.AI cs.LG | null | 1301.3861 | null | null | http://arxiv.org/pdf/1301.3861v1 | 2013-01-16T15:50:34Z | 2013-01-16T15:50:34Z | Inference for Belief Networks Using Coupling From the Past | Inference for belief networks using Gibbs sampling produces a distribution
for unobserved variables that differs from the correct distribution by a
(usually) unknown error, since convergence to the right distribution occurs
only asymptotically. The method of "coupling from the past" samples from
exactly the correct distribution by (conceptually) running dependent Gibbs
sampling simulations from every possible starting state from a time far enough
in the past that all runs reach the same state at time t=0. Explicitly
considering every possible state is intractable for large networks, however. We
propose a method for layered noisy-or networks that uses a compact, but often
imprecise, summary of a set of states. This method samples from exactly the
correct distribution, and requires only about twice the time per step as
ordinary Gibbs sampling, but it may require more simulation steps than would be
needed if chains were tracked exactly.
| [
"Michael Harvey, Radford M. Neal",
"['Michael Harvey' 'Radford M. Neal']"
] |
cs.AI cs.IR cs.LG | null | 1301.3862 | null | null | http://arxiv.org/pdf/1301.3862v1 | 2013-01-16T15:50:38Z | 2013-01-16T15:50:38Z | Dependency Networks for Collaborative Filtering and Data Visualization | We describe a graphical model for probabilistic relationships---an
alternative to the Bayesian network---called a dependency network. The graph of
a dependency network, unlike a Bayesian network, is potentially cyclic. The
probability component of a dependency network, like a Bayesian network, is a
set of conditional distributions, one for each node given its parents. We
identify several basic properties of this representation and describe a
computationally efficient procedure for learning the graph and probability
components from data. We describe the application of this representation to
probabilistic inference, collaborative filtering (the task of predicting
preferences), and the visualization of acausal predictive relationships.
| [
"['David Heckerman' 'David Maxwell Chickering' 'Christopher Meek'\n 'Robert Rounthwaite' 'Carl Kadie']",
"David Heckerman, David Maxwell Chickering, Christopher Meek, Robert\n Rounthwaite, Carl Kadie"
] |
cs.LG stat.ML | null | 1301.3865 | null | null | http://arxiv.org/pdf/1301.3865v1 | 2013-01-16T15:50:50Z | 2013-01-16T15:50:50Z | Feature Selection and Dualities in Maximum Entropy Discrimination | Incorporating feature selection into a classification or regression method
often carries a number of advantages. In this paper we formalize feature
selection specifically from a discriminative perspective of improving
classification/regression accuracy. The feature selection method is developed
as an extension to the recently proposed maximum entropy discrimination (MED)
framework. We describe MED as a flexible (Bayesian) regularization approach
that subsumes, e.g., support vector classification, regression and exponential
family models. For brevity, we restrict ourselves primarily to feature
selection in the context of linear classification/regression methods and
demonstrate that the proposed approach indeed carries substantial improvements
in practice. Moreover, we discuss and develop various extensions of feature
selection, including the problem of dealing with example specific but
unobserved degrees of freedom -- alignments or invariants.
| [
"Tony S. Jebara, Tommi S. Jaakkola",
"['Tony S. Jebara' 'Tommi S. Jaakkola']"
] |
cs.LG cs.AI stat.ML | null | 1301.3875 | null | null | http://arxiv.org/pdf/1301.3875v1 | 2013-01-16T15:51:30Z | 2013-01-16T15:51:30Z | Tractable Bayesian Learning of Tree Belief Networks | In this paper we present decomposable priors, a family of priors over
structure and parameters of tree belief nets for which Bayesian learning with
complete observations is tractable, in the sense that the posterior is also
decomposable and can be completely determined analytically in polynomial time.
This follows from two main results: First, we show that factored distributions
over spanning trees in a graph can be integrated in closed form. Second, we
examine priors over tree parameters and show that a set of assumptions similar
to (Heckerman et al. 1995) constrain the tree parameter priors to be a
compactly parameterized product of Dirichlet distributions. Besides allowing for
exact Bayesian learning, these results permit us to formulate a new class of
tractable latent variable models in which the likelihood of a data point is
computed through an ensemble average over tree structures.
| [
"Marina Meila, Tommi S. Jaakkola",
"['Marina Meila' 'Tommi S. Jaakkola']"
] |
cs.LG cs.DS stat.ML | null | 1301.3877 | null | null | http://arxiv.org/pdf/1301.3877v1 | 2013-01-16T15:51:38Z | 2013-01-16T15:51:38Z | The Anchors Hierarchy: Using the triangle inequality to survive high
dimensional data | This paper is about metric data structures in high-dimensional or
non-Euclidean space that permit cached sufficient statistics accelerations of
learning algorithms.
It has recently been shown that for less than about 10 dimensions, decorating
kd-trees with additional "cached sufficient statistics" such as first and
second moments and contingency tables can provide satisfying acceleration for a
very wide range of statistical learning tasks such as kernel regression,
locally weighted regression, k-means clustering, mixture modeling and Bayes Net
learning.
In this paper, we begin by defining the anchors hierarchy - a fast data
structure and algorithm for localizing data based only on a
triangle-inequality-obeying distance metric. We show how this, in its own
right, gives a fast and effective clustering of data. But more importantly we
show how it can produce a well-balanced structure similar to a Ball-Tree
(Omohundro, 1991) or a kind of metric tree (Uhlmann, 1991; Ciaccia, Patella, &
Zezula, 1997) in a way that is neither "top-down" nor "bottom-up" but instead
"middle-out". We then show how this structure, decorated with cached sufficient
statistics, allows a wide variety of statistical learning algorithms to be
accelerated even in thousands of dimensions.
| [
"Andrew Moore",
"['Andrew Moore']"
] |
cs.AI cs.LG | null | 1301.3878 | null | null | http://arxiv.org/pdf/1301.3878v1 | 2013-01-16T15:51:42Z | 2013-01-16T15:51:42Z | PEGASUS: A Policy Search Method for Large MDPs and POMDPs | We propose a new approach to the problem of searching a space of policies for
a Markov decision process (MDP) or a partially observable Markov decision
process (POMDP), given a model. Our approach is based on the following
observation: Any (PO)MDP can be transformed into an "equivalent" POMDP in which
all state transitions (given the current state and action) are deterministic.
This reduces the general problem of policy search to one in which we need only
consider POMDPs with deterministic transitions. We give a natural way of
estimating the value of all policies in these transformed POMDPs. Policy search
is then simply performed by searching for a policy with high estimated value.
We also establish conditions under which our value estimates will be good,
recovering theoretical results similar to those of Kearns, Mansour and Ng
(1999), but with "sample complexity" bounds that have only a polynomial rather
than exponential dependence on the horizon time. Our method applies to
arbitrary POMDPs, including ones with infinite state and action spaces. We also
present empirical results for our approach on a small discrete problem, and on
a complex continuous state/continuous action problem involving learning to ride
a bicycle.
| [
"['Andrew Y. Ng' 'Michael I. Jordan']",
"Andrew Y. Ng, Michael I. Jordan"
] |
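The core PEGASUS observation, evaluating every policy on the same pre-drawn random numbers so that each value estimate becomes a deterministic function of the policy, can be sketched on a toy 1-D control problem. The environment, the linear policy class, and all constants below are illustrative assumptions, not the experiments from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
H, M = 20, 100                      # horizon, number of scenarios
# Pre-draw all randomness once; every policy is evaluated on these same numbers,
# so policy search reduces to deterministic optimization.
noise = rng.normal(size=(M, H))

def evaluate(gain, noise):
    """Average return of a linear policy a = -gain * s on a toy 1-D control task."""
    returns = np.zeros(noise.shape[0])
    for m in range(noise.shape[0]):
        s, total = 1.0, 0.0
        for t in range(noise.shape[1]):
            a = -gain * s                      # policy action
            s = s + a + 0.1 * noise[m, t]      # deterministic given the scenario
            total += -s ** 2                   # reward: keep the state near zero
        returns[m] = total
    return returns.mean()

gains = np.linspace(0.0, 1.5, 16)
values = [evaluate(g, noise) for g in gains]
print("best gain:", gains[int(np.argmax(values))])
```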
cs.AI cs.LG stat.ML | null | 1301.3882 | null | null | http://arxiv.org/pdf/1301.3882v1 | 2013-01-16T15:51:58Z | 2013-01-16T15:51:58Z | Adaptive Importance Sampling for Estimation in Structured Domains | Sampling is an important tool for estimating large, complex sums and
integrals over high dimensional spaces. For instance, importance sampling has
been used as an alternative to exact methods for inference in belief networks.
Ideally, we want to have a sampling distribution that provides optimal-variance
estimators. In this paper, we present methods that improve the sampling
distribution by systematically adapting it as we obtain information from the
samples. We present a stochastic-gradient-descent method for sequentially
updating the sampling distribution based on the direct minimization of the
variance. We also present other stochastic-gradient-descent methods based on
the minimization of typical notions of distance between the current sampling
distribution and approximations of the target, optimal distribution. We finally
validate and compare the different methods empirically by applying them to the
problem of action evaluation in influence diagrams.
| [
"Luis E. Ortiz, Leslie Pack Kaelbling",
"['Luis E. Ortiz' 'Leslie Pack Kaelbling']"
] |
cs.LG stat.CO stat.ML | null | 1301.3890 | null | null | http://arxiv.org/pdf/1301.3890v1 | 2013-01-16T15:52:30Z | 2013-01-16T15:52:30Z | Monte Carlo Inference via Greedy Importance Sampling | We present a new method for conducting Monte Carlo inference in graphical
models which combines explicit search with generalized importance sampling. The
idea is to reduce the variance of importance sampling by searching for
significant points in the target distribution. We prove that it is possible to
introduce search and still maintain unbiasedness. We then demonstrate our
procedure on a few simple inference tasks and show that it can improve the
inference quality of standard MCMC methods, including Gibbs sampling,
Metropolis sampling, and Hybrid Monte Carlo. This paper extends previous work
which showed how greedy importance sampling could be correctly realized in the
one-dimensional case.
| [
"Dale Schuurmans, Finnegan Southey",
"['Dale Schuurmans' 'Finnegan Southey']"
] |
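As background for the abstract above, this sketch shows ordinary self-normalized importance sampling (estimating an expectation under a target using weighted draws from a proposal); the paper's greedy-search modification is not implemented here, and the target, proposal, and sample size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20000

# Target: standard normal. Proposal: a wider normal we can sample from.
# Estimate E[x**2] under the target with self-normalized importance weights.
samples = rng.normal(scale=2.0, size=N)                  # draws from the proposal
log_w = (-0.5 * samples ** 2) - (-0.5 * (samples / 2.0) ** 2 - np.log(2.0))
w = np.exp(log_w)
estimate = np.sum(w * samples ** 2) / np.sum(w)
print(estimate)   # should be close to 1.0, the true second moment
```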
cs.LG stat.ML | null | 1301.3891 | null | null | http://arxiv.org/pdf/1301.3891v1 | 2013-01-16T15:52:33Z | 2013-01-16T15:52:33Z | Combining Feature and Prototype Pruning by Uncertainty Minimization | We focus in this paper on dataset reduction techniques for use in k-nearest
neighbor classification. In such a context, feature and prototype selections
have always been independently treated by the standard storage reduction
algorithms. While this independent treatment is theoretically justified by the fact that
each subproblem is NP-hard, we assume in this paper that a joint storage
reduction is in fact more intuitive and can in practice provide better results
than two independent processes. Moreover, it avoids a lot of distance
calculations by progressively removing useless instances during the feature
pruning. While standard selection algorithms often optimize the accuracy to
discriminate the set of solutions, we use in this paper a criterion based on an
uncertainty measure within a nearest-neighbor graph. This choice comes from
recent results that have proven that accuracy is not always the suitable
criterion to optimize. In our approach, a feature or an instance is removed if
its deletion improves information of the graph. Numerous experiments are
presented in this paper and a statistical analysis shows the relevance of our
approach, and its tolerance in the presence of noise.
| [
"Marc Sebban, Richard Nock",
"['Marc Sebban' 'Richard Nock']"
] |
cs.LG cs.AI stat.ML | null | 1301.3895 | null | null | http://arxiv.org/pdf/1301.3895v1 | 2013-01-16T15:52:49Z | 2013-01-16T15:52:49Z | Dynamic Trees: A Structured Variational Method Giving Efficient
Propagation Rules | Dynamic trees are mixtures of tree structured belief networks. They solve
some of the problems of fixed tree networks at the cost of making exact
inference intractable. For this reason approximate methods such as sampling or
mean field approaches have been used. However, mean field approximations assume
a factorized distribution over node states. Such a distribution seems unlikely
in the posterior, as nodes are highly correlated in the prior. Here a
structured variational approach is used, where the posterior distribution over
the non-evidential nodes is itself approximated by a dynamic tree. It turns out
that this form can be used tractably and efficiently. The result is a set of
update rules which can propagate information through the network to obtain both
a full variational approximation, and the relevant marginals. The propagation
rules are more efficient than the mean field approach and give noticeable
quantitative and qualitative improvement in the inference. The marginals
calculated give better approximations to the posterior than loopy propagation
on a small toy problem.
| [
"['Amos J. Storkey']",
"Amos J. Storkey"
] |
cs.LG stat.ML | null | 1301.3896 | null | null | http://arxiv.org/pdf/1301.3896v1 | 2013-01-16T15:52:53Z | 2013-01-16T15:52:53Z | An Uncertainty Framework for Classification | We define a generalized likelihood function based on uncertainty measures and
show that maximizing such a likelihood function for different measures induces
different types of classifiers. In the probabilistic framework, we obtain
classifiers that optimize the cross-entropy function. In the possibilistic
framework, we obtain classifiers that maximize the interclass margin.
Furthermore, we show that the support vector machine is a sub-class of these
maximum-margin classifiers.
| [
"Loo-Nin Teow, Kia-Fock Loe",
"['Loo-Nin Teow' 'Kia-Fock Loe']"
] |
cs.AI cs.LG stat.ML | null | 1301.3897 | null | null | http://arxiv.org/pdf/1301.3897v1 | 2013-01-16T15:52:56Z | 2013-01-16T15:52:56Z | A Branch-and-Bound Algorithm for MDL Learning Bayesian Networks | This paper extends the work in [Suzuki, 1996] and presents an efficient
depth-first branch-and-bound algorithm for learning Bayesian network
structures, based on the minimum description length (MDL) principle, for a
given (consistent) variable ordering. The algorithm exhaustively searches
through all network structures and guarantees to find the network with the best
MDL score. Preliminary experiments show that the algorithm is efficient, and
that the time complexity grows slowly with the sample size. The algorithm is
useful for empirically studying both the performance of suboptimal heuristic
search algorithms and the adequacy of the MDL principle in learning Bayesian
networks.
| [
"['Jin Tian']",
"Jin Tian"
] |
cs.LG cs.AI stat.ML | null | 1301.3899 | null | null | http://arxiv.org/pdf/1301.3899v1 | 2013-01-16T15:53:05Z | 2013-01-16T15:53:05Z | Model-Based Hierarchical Clustering | We present an approach to model-based hierarchical clustering by formulating
an objective function based on a Bayesian analysis. This model organizes the
data into a cluster hierarchy while specifying a complex feature-set
partitioning that is a key component of our model. Features can have either a
unique distribution in every cluster or a common distribution over some (or
even all) of the clusters. The cluster subsets over which these features have
such a common distribution correspond to the nodes (clusters) of the tree
representing the hierarchy. We apply this general model to the problem of
document clustering for which we use a multinomial likelihood function and
Dirichlet priors. Our algorithm consists of a two-stage process wherein we
first perform a flat clustering followed by a modified hierarchical
agglomerative merging process that includes determining the features that will
have common distributions over the merged clusters. The regularization induced
by using the marginal likelihood automatically determines the optimal model
structure including number of clusters, the depth of the tree and the subset of
features to be modeled as having a common distribution at each node. We present
experimental results on both synthetic data and a real document collection.
| [
"['Shivakumar Vaithyanathan' 'Byron E Dom']",
"Shivakumar Vaithyanathan, Byron E Dom"
] |
cs.LG cs.AI stat.ML | null | 1301.3901 | null | null | http://arxiv.org/pdf/1301.3901v1 | 2013-01-16T15:53:17Z | 2013-01-16T15:53:17Z | Variational Approximations between Mean Field Theory and the Junction
Tree Algorithm | Recently, variational approximations such as the mean field approximation
have received much interest. We extend the standard mean field method by using
an approximating distribution that factorises into cluster potentials. This
includes undirected graphs, directed acyclic graphs and junction trees. We
derive generalized mean field equations to optimize the cluster potentials. We
show that the method bridges the gap between the standard mean field
approximation and the exact junction tree algorithm. In addition, we address
the problem of how to choose the graphical structure of the approximating
distribution. From the generalised mean field equations we derive rules to
simplify the structure of the approximating distribution in advance without
affecting the quality of the approximation. We also show how the method fits
into some other variational approximations that are currently popular.
| [
"Wim Wiegerinck",
"['Wim Wiegerinck']"
] |
cs.LG stat.ML | null | 1301.3966 | null | null | http://arxiv.org/pdf/1301.3966v1 | 2013-01-17T02:15:49Z | 2013-01-17T02:15:49Z | Efficient Sample Reuse in Policy Gradients with Parameter-based
Exploration | The policy gradient approach is a flexible and powerful reinforcement
learning method particularly for problems with continuous actions such as robot
control. A common challenge in this scenario is how to reduce the variance of
policy gradient estimates for reliable policy updates. In this paper, we
combine the following three ideas and give a highly effective policy gradient
method: (a) policy gradients with parameter-based exploration, which is a
recently proposed policy search method with low variance of gradient estimates,
(b) an importance sampling technique, which allows us to reuse previously
gathered data in a consistent way, and (c) an optimal baseline, which minimizes
the variance of gradient estimates with their unbiasedness being maintained.
For the proposed method, we give theoretical analysis of the variance of
gradient estimates and show its usefulness through extensive experiments.
| [
"['Tingting Zhao' 'Hirotaka Hachiya' 'Voot Tangkaratt' 'Jun Morimoto'\n 'Masashi Sugiyama']",
"Tingting Zhao, Hirotaka Hachiya, Voot Tangkaratt, Jun Morimoto,\n Masashi Sugiyama"
] |
cs.LG cs.CV cs.NE stat.ML | null | 1301.4083 | null | null | http://arxiv.org/pdf/1301.4083v6 | 2013-07-13T16:38:36Z | 2013-01-17T13:06:52Z | Knowledge Matters: Importance of Prior Information for Optimization | We explore the effect of introducing prior information into the intermediate
level of neural networks for a learning task on which all the state-of-the-art
machine learning algorithms tested failed to learn. We motivate our work from
the hypothesis that humans learn such intermediate concepts from other
individuals via a form of supervision or guidance using a curriculum. The
experiments we have conducted provide positive evidence in favor of this
hypothesis. In our experiments, a two-tiered MLP architecture is trained on a
dataset with 64x64 binary input images, each image containing three sprites. The
final task is to decide whether all the sprites are the same or one of them is
different. Sprites are pentomino tetris shapes and they are placed in an image
with different locations using scaling and rotation transformations. The first
part of the two-tiered MLP is pre-trained with intermediate-level targets being
the presence of sprites at each location, while the second part takes the
output of the first part as input and predicts the final task's target binary
event. The two-tiered MLP architecture, with a few tens of thousand examples,
was able to learn the task perfectly, whereas all other algorithms (including
unsupervised pre-training, but also traditional algorithms like SVMs, decision
trees and boosting) perform no better than chance. We hypothesize that the
optimization difficulty involved when the intermediate pre-training is not
performed is due to the {\em composition} of two highly non-linear tasks. Our
findings are also consistent with hypotheses on cultural learning inspired by
the observations of optimization problems with deep learning, presumably
because of effective local minima.
| [
"\\c{C}a\\u{g}lar G\\\"ul\\c{c}ehre and Yoshua Bengio",
"['Çağlar Gülçehre' 'Yoshua Bengio']"
] |
cs.LG cs.CV stat.ML | null | 1301.4157 | null | null | http://arxiv.org/pdf/1301.4157v1 | 2013-01-17T16:48:46Z | 2013-01-17T16:48:46Z | On the Product Rule for Classification Problems | We discuss theoretical aspects of the product rule for classification
problems in supervised machine learning for the case of combining classifiers.
We show that (1) the product rule arises from the MAP classifier supposing
equivalent priors and conditional independence given a class; (2) under some
conditions, the product rule is equivalent to minimizing the sum of the squared
distances to the respective centers of the classes related with different
features, such distances being weighted by the spread of the classes; (3)
under certain hypotheses, the product rule is equivalent to concatenating the
vectors of features.
| [
"Marcelo Cicconet",
"['Marcelo Cicconet']"
] |
cs.LG stat.CO stat.ML | null | 1301.4168 | null | null | http://arxiv.org/pdf/1301.4168v2 | 2013-03-16T01:55:06Z | 2013-01-17T17:37:56Z | Herded Gibbs Sampling | The Gibbs sampler is one of the most popular algorithms for inference in
statistical models. In this paper, we introduce a herding variant of this
algorithm, called herded Gibbs, that is entirely deterministic. We prove that
herded Gibbs has an $O(1/T)$ convergence rate for models with independent
variables and for fully connected probabilistic graphical models. Herded Gibbs
is shown to outperform Gibbs in the tasks of image denoising with MRFs and
named entity recognition with CRFs. However, the convergence for herded Gibbs
for sparsely connected probabilistic graphical models is still an open problem.
| [
"Luke Bornn, Yutian Chen, Nando de Freitas, Mareija Eskelin, Jing Fang,\n Max Welling",
"['Luke Bornn' 'Yutian Chen' 'Nando de Freitas' 'Mareija Eskelin'\n 'Jing Fang' 'Max Welling']"
] |
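For the independent-variables case mentioned above, the deterministic herding update is easy to illustrate: the empirical frequency of each binary variable tracks its target probability with O(1/T) error. The marginals and horizon below are illustrative choices, not taken from the paper.

```python
import numpy as np

# Herding for independent binary variables: a deterministic update whose
# empirical frequencies converge to the target probabilities at rate O(1/T).
p = np.array([0.2, 0.5, 0.9])     # target marginals for three independent variables
w = p.copy()                      # herding weights, initialized at the marginals
T = 1000
counts = np.zeros_like(p)

for t in range(T):
    x = (w >= 0.5).astype(float)  # deterministic "sample"
    counts += x
    w += p - x                    # weight update keeps the running error bounded

print("empirical frequencies:", counts / T)   # close to p, with O(1/T) error
```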
cs.IR cs.LG stat.ML | null | 1301.4171 | null | null | http://arxiv.org/pdf/1301.4171v1 | 2013-01-17T17:46:27Z | 2013-01-17T17:46:27Z | Affinity Weighted Embedding | Supervised (linear) embedding models like Wsabie and PSI have proven
successful at ranking, recommendation and annotation tasks. However, despite
being scalable to large datasets they do not take full advantage of the extra
data due to their linear nature, and typically underfit. We propose a new class
of models which aim to provide improved performance while retaining many of the
benefits of the existing class of embedding models. Our new approach works by
iteratively learning a linear embedding model where the next iteration's
features and labels are reweighted as a function of the previous iteration. We
describe several variants of the family, and give some initial results.
| [
"['Jason Weston' 'Ron Weiss' 'Hector Yee']",
"Jason Weston, Ron Weiss, Hector Yee"
] |
cs.LG stat.ML | null | 1301.4293 | null | null | http://arxiv.org/pdf/1301.4293v2 | 2013-01-28T20:10:21Z | 2013-01-18T04:37:30Z | Latent Relation Representations for Universal Schemas | Traditional relation extraction predicts relations within some fixed and
finite target schema. Machine learning approaches to this task require either
manual annotation or, in the case of distant supervision, existing structured
sources of the same schema. The need for existing datasets can be avoided by
using a universal schema: the union of all involved schemas (surface form
predicates as in OpenIE, and relations in the schemas of pre-existing
databases). This schema has an almost unlimited set of relations (due to
surface forms), and supports integration with existing structured data (through
the relation types of existing databases). To populate a database of such
schema we present a family of matrix factorization models that predict affinity
between database tuples and relations. We show that this achieves substantially
higher accuracy than the traditional classification approach. More importantly,
by operating simultaneously on relations observed in text and in pre-existing
structured DBs such as Freebase, we are able to reason about unstructured and
structured data in mutually-supporting ways. By doing so our approach
outperforms state-of-the-art distant supervision systems.
| [
"['Sebastian Riedel' 'Limin Yao' 'Andrew McCallum']",
"Sebastian Riedel, Limin Yao, Andrew McCallum"
] |
cs.LG math.OC stat.ML | null | 1301.4666 | null | null | http://arxiv.org/pdf/1301.4666v6 | 2015-08-14T18:02:18Z | 2013-01-20T15:54:22Z | A Linearly Convergent Conditional Gradient Algorithm with Applications
to Online and Stochastic Optimization | Linear optimization is many times algorithmically simpler than non-linear
convex optimization. Linear optimization over matroid polytopes, matching
polytopes and path polytopes are example of problems for which we have simple
and efficient combinatorial algorithms, but whose non-linear convex counterpart
is harder and admits significantly less efficient algorithms. This motivates
the computational model of convex optimization, including the offline, online
and stochastic settings, using a linear optimization oracle. In this
computational model we give several new results that improve over the previous
state-of-the-art. Our main result is a novel conditional gradient algorithm for
smooth and strongly convex optimization over polyhedral sets that performs only
a single linear optimization step over the domain on each iteration and enjoys
a linear convergence rate. This gives an exponential improvement in convergence
rate over previous results.
Based on this new conditional gradient algorithm we give the first algorithms
for online convex optimization over polyhedral sets that perform only a single
linear optimization step over the domain while having optimal regret
guarantees, answering an open question of Kalai and Vempala, and Hazan and
Kale. Our online algorithms also imply conditional gradient algorithms for
non-smooth and stochastic convex optimization with the same convergence rates
as projected (sub)gradient methods.
| [
"['Dan Garber' 'Elad Hazan']",
"Dan Garber, Elad Hazan"
] |
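For reference, this is the classic conditional gradient (Frank-Wolfe) loop over the probability simplex, where each iteration calls only a linear optimization oracle; it is the baseline method, not the linearly convergent variant proposed above. The quadratic objective and the standard step-size schedule are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
target = rng.random(d)
target /= target.sum()            # a point inside the probability simplex

def grad(x):
    return 2.0 * (x - target)     # gradient of f(x) = ||x - target||^2

# Each iteration calls only a linear optimization oracle over the feasible set;
# for the simplex, the oracle returns the vertex with the smallest gradient entry.
x = np.ones(d) / d
for t in range(1, 201):
    g = grad(x)
    vertex = np.zeros(d)
    vertex[np.argmin(g)] = 1.0    # linear minimizer over the simplex
    step = 2.0 / (t + 2.0)        # standard Frank-Wolfe step-size schedule
    x = (1 - step) * x + step * vertex

print("objective:", np.sum((x - target) ** 2))   # small after a few hundred steps
```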
stat.ML cs.LG math.ST stat.TH | null | 1301.4679 | null | null | http://arxiv.org/pdf/1301.4679v2 | 2013-06-25T06:17:24Z | 2013-01-20T20:01:54Z | Cellular Tree Classifiers | The cellular tree classifier model addresses a fundamental problem in the
design of classifiers for a parallel or distributed computing world: Given a
data set, is it sufficient to apply a majority rule for classification, or
shall one split the data into two or more parts and send each part to a
potentially different computer (or cell) for further processing? At first
sight, it seems impossible to define with this paradigm a consistent classifier
as no cell knows the "original data size", $n$. However, we show that this is
not so by exhibiting two different consistent classifiers. The consistency is
universal but is only shown for distributions with nonatomic marginals.
| [
"G\\'erard Biau (LPMA, LSTA, DMA, INRIA Paris - Rocquencourt), Luc\n Devroye (SOCS)",
"['Gérard Biau' 'Luc Devroye']"
] |
cs.DC cs.AI cs.LG | 10.1109/ISPA.2011.24 | 1301.4753 | null | null | http://arxiv.org/abs/1301.4753v1 | 2013-01-21T04:57:05Z | 2013-01-21T04:57:05Z | Pattern Matching for Self- Tuning of MapReduce Jobs | In this paper, we study CPU utilization time patterns of several MapReduce
applications. After extracting running patterns of several applications, they
are saved in a reference database to be later used to tweak system parameters
to efficiently execute unknown applications in future. To achieve this goal,
CPU utilization patterns of new applications are compared with the already
known ones in the reference database to find/predict their most probable
execution patterns. Because the patterns have different lengths, Dynamic Time
Warping (DTW) is utilized for such comparison; a correlation analysis is then
applied to the DTW outcomes to produce feasible similarity patterns. Three real
applications (WordCount, Exim Mainlog parsing and Terasort) are used to
evaluate our hypothesis in tweaking system parameters in executing similar
applications. Results were very promising and showed effectiveness of our
approach on pseudo-distributed MapReduce platforms.
| [
"['Nikzad Babaii Rizvandi' 'Javid Taheri' 'Albert Y. Zomaya']",
"Nikzad Babaii Rizvandi, Javid Taheri, Albert Y.Zomaya"
] |
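A minimal sketch of the DTW comparison step described above: the standard dynamic-programming distance between two sequences of different lengths. The example traces are made-up stand-ins for CPU utilization patterns.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences, O(len(a)*len(b))."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Two CPU-utilization-like traces of different lengths (illustrative values).
trace_a = np.array([10, 20, 80, 85, 90, 40, 20], dtype=float)
trace_b = np.array([12, 25, 78, 88, 35, 22], dtype=float)
print(dtw_distance(trace_a, trace_b))
```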
cs.LG cs.SI stat.ML | null | 1301.4767 | null | null | http://arxiv.org/pdf/1301.4767v2 | 2013-02-28T17:57:53Z | 2013-01-21T07:02:50Z | A Linear Time Active Learning Algorithm for Link Classification -- Full
Version -- | We present very efficient active learning algorithms for link classification
in signed networks. Our algorithms are motivated by a stochastic model in which
edge labels are obtained through perturbations of an initial sign assignment
consistent with a two-clustering of the nodes. We provide a theoretical
analysis within this model, showing that we can achieve an optimal (to within
a constant factor) number of mistakes on any graph G = (V,E) such that |E| =
\Omega(|V|^{3/2}) by querying O(|V|^{3/2}) edge labels. More generally, we show
an algorithm that achieves optimality to within a factor of O(k) by querying at
most order of |V| + (|V|/k)^{3/2} edge labels. The running time of this
algorithm is at most of order |E| + |V|\log|V|.
| [
"['Nicolo Cesa-Bianchi' 'Claudio Gentile' 'Fabio Vitale'\n 'Giovanni Zappella']",
"Nicolo Cesa-Bianchi, Claudio Gentile, Fabio Vitale, Giovanni Zappella"
] |
cs.LG cs.DS stat.ML | null | 1301.4769 | null | null | http://arxiv.org/pdf/1301.4769v2 | 2013-02-28T17:44:24Z | 2013-01-21T07:28:44Z | A Correlation Clustering Approach to Link Classification in Signed
Networks -- Full Version -- | Motivated by social balance theory, we develop a theory of link
classification in signed networks using the correlation clustering index as
measure of label regularity. We derive learning bounds in terms of correlation
clustering within three fundamental transductive learning settings: online,
batch and active. Our main algorithmic contribution is in the active setting,
where we introduce a new family of efficient link classifiers based on covering
the input graph with small circuits. These are the first active algorithms for
link classification with mistake bounds that hold for arbitrary signed
networks.
| [
"['Nicolo Cesa-Bianchi' 'Claudio Gentile' 'Fabio Vitale'\n 'Giovanni Zappella']",
"Nicolo Cesa-Bianchi, Claudio Gentile, Fabio Vitale, Giovanni Zappella"
] |
cs.LG cs.AI cs.CV cs.NE cs.RO | 10.1016/j.robot.2012.05.008 | 1301.4862 | null | null | http://arxiv.org/abs/1301.4862v1 | 2013-01-21T13:26:07Z | 2013-01-21T13:26:07Z | Active Learning of Inverse Models with Intrinsically Motivated Goal
Exploration in Robots | We introduce the Self-Adaptive Goal Generation - Robust Intelligent Adaptive
Curiosity (SAGG-RIAC) architecture as an intrinsically motivated goal
exploration mechanism which allows active learning of inverse models in
high-dimensional redundant robots. This allows a robot to efficiently and
actively learn distributions of parameterized motor skills/policies that solve
a corresponding distribution of parameterized tasks/goals. The architecture
makes the robot actively sample novel parameterized tasks in the task space,
based on a measure of competence progress, each of which triggers low-level
goal-directed learning of the motor policy parameters that allow solving it.
For both learning and generalization, the system leverages regression
techniques which infer the motor policy parameters corresponding to a
given novel parameterized task, based on the previously learnt
correspondences between policy and task parameters. We present experiments with
high-dimensional continuous sensorimotor spaces in three different robotic
setups: 1) learning the inverse kinematics in a highly-redundant robotic arm,
2) learning omnidirectional locomotion with motor primitives in a quadruped
robot, 3) an arm learning to control a fishing rod with a flexible wire. We
show that 1) exploration in the task space can be a lot faster than exploration
in the actuator space for learning inverse models in redundant robots; 2)
selecting goals maximizing competence progress creates developmental
trajectories driving the robot to progressively focus on tasks of increasing
complexity and is statistically significantly more efficient than selecting
tasks randomly, as well as more efficient than different standard active motor
babbling methods; 3) this architecture allows the robot to actively discover
which parts of its task space it can learn to reach and which part it cannot.
| [
"Adrien Baranes and Pierre-Yves Oudeyer",
"['Adrien Baranes' 'Pierre-Yves Oudeyer']"
] |
cs.LG math.PR stat.ML | null | 1301.4917 | null | null | http://arxiv.org/pdf/1301.4917v1 | 2013-01-21T16:27:17Z | 2013-01-21T16:27:17Z | Dirichlet draws are sparse with high probability | This note provides an elementary proof of the folklore fact that draws from a
Dirichlet distribution (with parameters less than 1) are typically sparse (most
coordinates are small).
| [
"['Matus Telgarsky']",
"Matus Telgarsky"
] |
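A quick numerical check of the claim above: with concentration parameters below 1, most coordinates of a Dirichlet draw are small. The dimension, concentration, and threshold are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
k, alpha = 100, 0.05              # dimension and concentration parameter (< 1)

draws = rng.dirichlet(np.full(k, alpha), size=1000)
# Count coordinates carrying non-negligible mass in each draw.
large = (draws > 0.01).sum(axis=1)
print("average number of coordinates above 0.01:", large.mean(), "out of", k)
```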
stat.ML cs.LG stat.AP | null | 1301.4944 | null | null | http://arxiv.org/pdf/1301.4944v1 | 2013-01-21T18:17:05Z | 2013-01-21T18:17:05Z | Evaluation of a Supervised Learning Approach for Stock Market Operations | Data mining methods have been widely applied in financial markets, with the
purpose of providing suitable tools for prices forecasting and automatic
trading. Particularly, learning methods aim to identify patterns in time series
and, based on such patterns, to recommend buy/sell operations. The objective of
this work is to evaluate the performance of Random Forests, a supervised
learning method based on ensembles of decision trees, for decision support in
stock markets. Preliminary results indicate good rates of successful operations
and good rates of return per operation, providing a strong motivation for
further research in this topic.
| [
"['Marcelo S. Lauretto' 'Barbara B. C. Silva' 'Pablo M. Andrade']",
"Marcelo S. Lauretto, Barbara B. C. Silva and Pablo M. Andrade"
] |
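A hypothetical sketch of the workflow described above, using scikit-learn's RandomForestClassifier on synthetic stand-in features with a time-ordered train/test split; the features, labels, and parameters are illustrative, not the paper's data or setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Illustrative stand-ins: each row is a vector of technical indicators for one
# day, and the label marks whether a buy operation would have paid off.
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=1000) > 0).astype(int)

# Keep time order when splitting: train on the past, evaluate on the "future".
X_train, X_test = X[:800], X[800:]
y_train, y_test = y[:800], y[800:]

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("rate of successful operations (accuracy):", model.score(X_test, y_test))
```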
cs.CV cs.LG stat.ML | null | 1301.5063 | null | null | http://arxiv.org/pdf/1301.5063v2 | 2013-04-03T18:43:47Z | 2013-01-22T03:40:52Z | Heteroscedastic Conditional Ordinal Random Fields for Pain Intensity
Estimation from Facial Images | We propose a novel method for automatic pain intensity estimation from facial
images based on the framework of kernel Conditional Ordinal Random Fields
(KCORF). We extend this framework to account for heteroscedasticity on the
output labels (i.e., pain intensity scores) and introduce novel dynamic
features, dynamic ranks, that impose temporal ordinal constraints on the static
ranks (i.e., intensity scores). Our experimental results show that the proposed
approach outperforms state-of-the-art methods for sequence classification with
ordinal data and other ordinal regression models. The approach performs
significantly better than other models in terms of Intra-Class Correlation
measure, which is the most accepted evaluation measure in the tasks of facial
behaviour intensity estimation.
| [
"Ognjen Rudovic, Maja Pantic, Vladimir Pavlovic",
"['Ognjen Rudovic' 'Maja Pantic' 'Vladimir Pavlovic']"
] |
stat.ML cs.LG | null | 1301.5088 | null | null | http://arxiv.org/pdf/1301.5088v1 | 2013-01-22T07:10:34Z | 2013-01-22T07:10:34Z | Piecewise Linear Multilayer Perceptrons and Dropout | We propose a new type of hidden layer for a multilayer perceptron, and
demonstrate that it obtains the best reported performance for an MLP on the
MNIST dataset.
| [
"['Ian J. Goodfellow']",
"Ian J. Goodfellow"
] |
cs.LG stat.ML | null | 1301.5112 | null | null | http://arxiv.org/pdf/1301.5112v1 | 2013-01-22T09:00:28Z | 2013-01-22T09:00:28Z | Active Learning on Trees and Graphs | We investigate the problem of active learning on a given tree whose nodes are
assigned binary labels in an adversarial way. Inspired by recent results by
Guillory and Bilmes, we characterize (up to constant factors) the optimal
placement of queries so to minimize the mistakes made on the non-queried nodes.
Our query selection algorithm is extremely efficient, and the optimal number of
mistakes on the non-queried nodes is achieved by a simple and efficient mincut
classifier. Through a simple modification of the query selection algorithm we
also show optimality (up to constant factors) with respect to the trade-off
between number of queries and number of mistakes on non-queried nodes. By using
spanning trees, our algorithms can be efficiently applied to general graphs,
although the problem of finding optimal and efficient active learning
algorithms for general graphs remains open. Towards this end, we provide a
lower bound on the number of mistakes made on arbitrary graphs by any active
learning algorithm using a number of queries which is up to a constant fraction
of the graph size.
| [
"['Nicolo Cesa-Bianchi' 'Claudio Gentile' 'Fabio Vitale'\n 'Giovanni Zappella']",
"Nicolo Cesa-Bianchi, Claudio Gentile, Fabio Vitale, Giovanni Zappella"
] |
cs.LG | null | 1301.5160 | null | null | http://arxiv.org/pdf/1301.5160v2 | 2013-02-28T17:31:08Z | 2013-01-22T11:59:04Z | See the Tree Through the Lines: The Shazoo Algorithm -- Full Version -- | Predicting the nodes of a given graph is a fascinating theoretical problem
with applications in several domains. Since graph sparsification via spanning
trees retains enough information while making the task much easier, trees are
an important special case of this problem. Although it is known how to predict
the nodes of an unweighted tree in a nearly optimal way, in the weighted case a
fully satisfactory algorithm is not available yet. We fill this gap and
introduce an efficient node predictor, Shazoo, which is nearly optimal on any
weighted tree. Moreover, we show that Shazoo can be viewed as a common
nontrivial generalization of both previous approaches for unweighted trees and
weighted lines. Experiments on real-world datasets confirm that Shazoo performs
well in that it fully exploits the structure of the input tree, and gets very
close to (and sometimes better than) less scalable energy minimization methods.
| [
"Fabio Vitale, Nicolo Cesa-Bianchi, Claudio Gentile, Giovanni Zappella",
"['Fabio Vitale' 'Nicolo Cesa-Bianchi' 'Claudio Gentile'\n 'Giovanni Zappella']"
] |
stat.ML cs.LG | null | 1301.5220 | null | null | http://arxiv.org/pdf/1301.5220v2 | 2015-04-03T09:29:02Z | 2013-01-22T16:11:33Z | Properties of the Least Squares Temporal Difference learning algorithm | This paper presents four different ways of looking at the well-known Least
Squares Temporal Differences (LSTD) algorithm for computing the value function
of a Markov Reward Process, each of them leading to different insights: the
operator-theory approach via the Galerkin method, the statistical approach via
instrumental variables, the linear dynamical system view as well as the limit
of the TD iteration. We also give a geometric view of the algorithm as an
oblique projection. Furthermore, there is an extensive comparison of the
optimization problem solved by LSTD as compared to Bellman Residual
Minimization (BRM). We then review several schemes for the regularization of
the LSTD solution. Finally, we treat the modification of LSTD for the case of
episodic Markov Reward Processes.
| [
"['Kamil Ciosek']",
"Kamil Ciosek"
] |
stat.ML cs.LG math.ST stat.TH | null | 1301.5288 | null | null | http://arxiv.org/pdf/1301.5288v3 | 2013-07-17T15:11:46Z | 2013-01-22T19:19:38Z | The connection between Bayesian estimation of a Gaussian random field
and RKHS | Reconstruction of a function from noisy data is often formulated as a
regularized optimization problem over an infinite-dimensional reproducing
kernel Hilbert space (RKHS). The solution describes the observed data and has a
small RKHS norm. When the data fit is measured using a quadratic loss, this
estimator has a known statistical interpretation. Given the noisy measurements,
the RKHS estimate represents the posterior mean (minimum variance estimate) of
a Gaussian random field with covariance proportional to the kernel associated
with the RKHS. In this paper, we provide a statistical interpretation when more
general losses are used, such as absolute value, Vapnik or Huber. Specifically,
for any finite set of sampling locations (including where the data were
collected), the MAP estimate for the signal samples is given by the RKHS
estimate evaluated at these locations.
| [
"Aleksandr Y. Aravkin and Bradley M. Bell and James V. Burke and\n Gianluigi Pillonetto",
"['Aleksandr Y. Aravkin' 'Bradley M. Bell' 'James V. Burke'\n 'Gianluigi Pillonetto']"
] |
stat.ML cs.LG | null | 1301.5332 | null | null | http://arxiv.org/pdf/1301.5332v1 | 2013-01-22T21:10:53Z | 2013-01-22T21:10:53Z | Online Learning with Pairwise Loss Functions | Efficient online learning with pairwise loss functions is a crucial component
in building large-scale learning systems that maximize the area under the
Receiver Operator Characteristic (ROC) curve. In this paper we investigate the
generalization performance of online learning algorithms with pairwise loss
functions. We show that the existing proof techniques for generalization bounds
of online algorithms with a univariate loss cannot be directly applied to
pairwise losses. We derive the first result providing
data-dependent bounds for the average risk of the sequence of hypotheses
generated by an arbitrary online learner in terms of an easily computable
statistic, and show how to extract a low risk hypothesis from the sequence. We
demonstrate the generality of our results by applying it to two important
problems in machine learning. First, we analyze two online algorithms for
bipartite ranking; one being a natural extension of the perceptron algorithm
and the other using online convex optimization. Secondly, we provide an
analysis for the risk bound for an online algorithm for supervised metric
learning.
| [
"['Yuyang Wang' 'Roni Khardon' 'Dmitry Pechyony' 'Rosie Jones']",
"Yuyang Wang, Roni Khardon, Dmitry Pechyony, Rosie Jones"
] |
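A hedged sketch of the kind of learner analysed in the record above: a perceptron-style online algorithm for bipartite ranking under a pairwise loss. This is my own illustration of the general protocol (pair each new example with previously seen examples of the opposite class), not the authors' exact algorithm or buffering policy:

```python
import numpy as np

def pairwise_perceptron(stream, dim):
    """Online perceptron on pairwise comparisons; labels y must be +1 or -1."""
    w = np.zeros(dim)
    seen = {+1: [], -1: []}                  # past examples, kept per class
    mistakes = 0
    for x, y in stream:
        for x_opp in seen[-y]:               # pair the new point with the opposite class
            pos, neg = (x, x_opp) if y == +1 else (x_opp, x)
            if w @ (pos - neg) <= 0:         # mis-ranked pair -> perceptron update
                w += pos - neg
                mistakes += 1
        seen[y].append(x)
    return w, mistakes

# toy usage on a roughly separable stream
rng = np.random.default_rng(1)
stream = [(rng.normal(size=5) + y, int(y)) for y in rng.choice([-1, 1], size=100)]
w, m = pairwise_perceptron(stream, dim=5)
print("mis-ranked pairs seen during training:", m)
```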
cs.LG cs.CV | null | 1301.5348 | null | null | http://arxiv.org/pdf/1301.5348v2 | 2013-04-16T00:17:54Z | 2013-01-15T21:36:06Z | Why Size Matters: Feature Coding as Nystrom Sampling | Recently, the computer vision and machine learning community has been in
favor of feature extraction pipelines that rely on a coding step followed by a
linear classifier, due to their overall simplicity, well understood properties
of linear classifiers, and their computational efficiency. In this paper we
propose a novel view of this pipeline based on kernel methods and Nystrom
sampling. In particular, we focus on the coding of a data point with a local
representation based on a dictionary with fewer elements than the number of
data points, and view it as an approximation to the actual function that would
compute pair-wise similarity to all data points (often too many to compute in
practice), followed by a Nystrom sampling step to select a subset of all data
points.
Furthermore, since bounds are known on the approximation power of Nystrom
sampling as a function of how many samples (i.e. dictionary size) we consider,
we can derive bounds on the approximation of the exact (but expensive to
compute) kernel matrix, and use it as a proxy to predict accuracy as a function
of the dictionary size, which has been observed to increase but also to
saturate as we increase its size. This model may help explain the positive
effect of the codebook size and justify the need to stack more layers (often
referred to as deep learning), as flat models empirically saturate as we add
more complexity.
| [
"Oriol Vinyals, Yangqing Jia, Trevor Darrell",
"['Oriol Vinyals' 'Yangqing Jia' 'Trevor Darrell']"
] |
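A small sketch of the Nystrom view described above (purely illustrative; the RBF kernel, data size and dictionary sizes are arbitrary assumptions): a dictionary of m landmark points gives a low-rank approximation of the full kernel matrix whose error shrinks, and eventually saturates, as m grows.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))

K = rbf(X, X)                                    # exact (expensive) kernel matrix
for m in (10, 50, 150):                          # "dictionary" size
    idx = rng.choice(len(X), size=m, replace=False)
    C, W = rbf(X, X[idx]), rbf(X[idx], X[idx])
    K_hat = C @ np.linalg.pinv(W) @ C.T          # Nystrom approximation of K
    err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
    print(f"m = {m:3d}   relative Frobenius error = {err:.3f}")
```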
cs.LG cs.AI stat.ML | null | 1301.5488 | null | null | http://arxiv.org/pdf/1301.5488v1 | 2013-01-23T12:54:09Z | 2013-01-23T12:54:09Z | Multi-class Generalized Binary Search for Active Inverse Reinforcement
Learning | This paper addresses the problem of learning a task from demonstration. We
adopt the framework of inverse reinforcement learning, where tasks are
represented in the form of a reward function. Our contribution is a novel
active learning algorithm that enables the learning agent to query the expert
for more informative demonstrations, thus leading to more sample-efficient
learning. For this novel algorithm (Generalized Binary Search for Inverse
Reinforcement Learning, or GBS-IRL), we provide a theoretical bound on sample
complexity and illustrate its applicability on several different tasks. To our
knowledge, GBS-IRL is the first active IRL algorithm with provable sample
complexity bounds. We also discuss our method in light of other existing
methods in the literature and its general applicability in multi-class
classification problems. Finally, motivated by recent work on learning from
demonstration in robots, we also discuss how different forms of human feedback
can be integrated in a transparent manner in our learning framework.
| [
"Francisco Melo and Manuel Lopes",
"['Francisco Melo' 'Manuel Lopes']"
] |
stat.ML cs.LG | null | 1301.5650 | null | null | http://arxiv.org/pdf/1301.5650v2 | 2013-06-20T14:30:04Z | 2013-01-23T21:18:07Z | Regularization and nonlinearities for neural language models: when are
they needed? | Neural language models (LMs) based on recurrent neural networks (RNN) are
some of the most successful word and character-level LMs. Why do they work so
well, in particular better than linear neural LMs? Possible explanations are
that RNNs have an implicitly better regularization or that RNNs have a higher
capacity for storing patterns due to their nonlinearities or both. Here we
argue for the first explanation in the limit of little training data and the
second explanation for large amounts of text data. We show state-of-the-art
performance on the popular and small Penn dataset when RNN LMs are regularized
with random dropout. Nonetheless, we show even better performance from a
simplified, much less expressive linear RNN model without off-diagonal entries
in the recurrent matrix. We call this model an impulse-response LM (IRLM).
Using random dropout, column normalization and annealed learning rates, IRLMs
develop neurons that keep a memory of up to 50 words in the past and achieve a
perplexity of 102.5 on the Penn dataset. On two large datasets however, the
same regularization methods are unsuccessful for both models and the RNN's
expressivity allows it to overtake the IRLM by 10 and 20 percent perplexity,
respectively. Despite the perplexity gap, IRLMs still outperform RNNs on the
Microsoft Research Sentence Completion (MRSC) task. We develop a slightly
modified IRLM that separates long-context units (LCUs) from short-context units
and show that the LCUs alone achieve a state-of-the-art performance on the MRSC
task of 60.8%. Our analysis indicates that a fruitful direction of research for
neural LMs lies in developing more accessible internal representations, and
suggests an optimization regime of very high momentum terms for effectively
training such models.
| [
"Marius Pachitariu and Maneesh Sahani",
"['Marius Pachitariu' 'Maneesh Sahani']"
] |
cs.CL cs.LG stat.ML | null | 1301.5686 | null | null | http://arxiv.org/pdf/1301.5686v2 | 2013-01-26T18:00:19Z | 2013-01-24T02:02:13Z | Transfer Topic Modeling with Ease and Scalability | The increasing volume of short texts generated on social media sites, such as
Twitter or Facebook, creates a great demand for effective and efficient topic
modeling approaches. While latent Dirichlet allocation (LDA) can be applied, it
is not optimal due to its weakness in handling short texts with fast-changing
topics and scalability concerns. In this paper, we propose a transfer learning
approach that utilizes abundant labeled documents from other domains (such as
Yahoo! News or Wikipedia) to improve topic modeling, with better model fitting
and result interpretation. Specifically, we develop Transfer Hierarchical LDA
(thLDA) model, which incorporates the label information from other domains via
informative priors. In addition, we develop a parallel implementation of our
model for large-scale applications. We demonstrate the effectiveness of our
thLDA model on both a microblogging dataset and standard text collections
including AP and RCV1 datasets.
| [
"['Jeon-Hyung Kang' 'Jun Ma' 'Yan Liu']",
"Jeon-Hyung Kang, Jun Ma, Yan Liu"
] |
math.OC cs.LG math.PR | null | 1301.5734 | null | null | http://arxiv.org/pdf/1301.5734v1 | 2013-01-24T09:19:00Z | 2013-01-24T09:19:00Z | Reinforcement learning from comparisons: Three alternatives is enough,
two is not | The paper deals with the problem of finding the best alternatives on the
basis of pairwise comparisons when these comparisons need not be transitive. In
this setting, we study a reinforcement urn model. We prove convergence to the
optimal solution when reinforcement of a winning alternative occurs each time
after considering three random alternatives. The simpler process, which
reinforces the winner of a random pair, does not always converge: it may cycle.
| [
"Benoit Laslier and Jean-Francois Laslier",
"['Benoit Laslier' 'Jean-Francois Laslier']"
] |
cs.IT cond-mat.stat-mech cs.LG math.IT | 10.1109/ISIT.2013.6620308 | 1301.5898 | null | null | http://arxiv.org/abs/1301.5898v1 | 2013-01-24T20:57:35Z | 2013-01-24T20:57:35Z | Phase Diagram and Approximate Message Passing for Blind Calibration and
Dictionary Learning | We consider dictionary learning and blind calibration for signals and
matrices created from a random ensemble. We study the mean-squared error in the
limit of large signal dimension using the replica method and unveil the
appearance of phase transitions delimiting impossible, possible-but-hard and
possible inference regions. We also introduce an approximate message passing
algorithm that asymptotically matches the theoretical performance, and show
through numerical tests that it performs very well, for the calibration
problem, for tractable system sizes.
| [
"['Florent Krzakala' 'Marc Mézard' 'Lenka Zdeborová']",
"Florent Krzakala, Marc M\\'ezard, Lenka Zdeborov\\'a"
] |
cs.AI cs.LG cs.LO | null | 1301.6039 | null | null | http://arxiv.org/pdf/1301.6039v4 | 2014-03-07T12:30:49Z | 2013-01-25T13:29:29Z | Recycling Proof Patterns in Coq: Case Studies | Development of Interactive Theorem Provers has led to the creation of big
libraries and varied infrastructures for formal proofs. However, despite (or
perhaps due to) their sophistication, the re-use of libraries by non-experts or
across domains is a challenge. In this paper, we provide detailed case studies
and evaluate the machine-learning tool ML4PG built to interactively data-mine
the electronic libraries of proofs, and to provide user guidance on the basis
of proof patterns found in the existing libraries.
| [
"J\\'onathan Heras and Ekaterina Komendantskaya",
"['Jónathan Heras' 'Ekaterina Komendantskaya']"
] |
cs.LG | null | 1301.6058 | null | null | http://arxiv.org/pdf/1301.6058v1 | 2013-01-25T15:09:39Z | 2013-01-25T15:09:39Z | Weighted Last-Step Min-Max Algorithm with Improved Sub-Logarithmic
Regret | In online learning the performance of an algorithm is typically compared to
the performance of a fixed function from some class, with a quantity called
regret. Forster proposed a last-step min-max algorithm which was somewhat
simpler than the algorithm of Vovk, yet with the same regret. In fact the
algorithm he analyzed assumed that the choices of the adversary are bounded,
yielding artificially only the two extreme cases. We fix this problem by
weighing the examples in such a way that the min-max problem will be well
defined, and provide analysis with logarithmic regret that may have better
multiplicative factor than both bounds of Forster and Vovk. We also derive a
new bound that may be sub-logarithmic, as a recent bound of Orabona et al., but
may have better multiplicative factor. Finally, we analyze the algorithm in a
weak-type of non-stationary setting, and show a bound that is sub-linear if the
non-stationarity is sub-linear as well.
| [
"Edward Moroshko, Koby Crammer",
"['Edward Moroshko' 'Koby Crammer']"
] |
cs.LG cond-mat.dis-nn cond-mat.stat-mech cs.IT math.IT | null | 1301.6199 | null | null | http://arxiv.org/pdf/1301.6199v2 | 2014-02-05T13:21:56Z | 2013-01-26T01:27:46Z | Sample Complexity of Bayesian Optimal Dictionary Learning | We consider a learning problem of identifying a dictionary matrix D (M times
N dimension) from a sample set of M dimensional vectors Y = N^{-1/2} DX, where
X is a sparse matrix (N times P dimension) in which the density of non-zero
entries is 0 < rho < 1. In particular, we focus on the minimum sample size P_c
(sample complexity) necessary for perfectly identifying D of the optimal
learning scheme when D and X are independently generated from certain
distributions. By using the replica method of statistical mechanics, we show
that P_c = O(N) holds as long as alpha = M/N > rho is satisfied in the limit of N
to infinity. Our analysis also implies that the posterior distribution given Y
is condensed only at the correct dictionary D when the compression rate alpha
is greater than a certain critical value alpha_M(rho). This suggests that
belief propagation may allow us to learn D with a low computational complexity
using O(N) samples.
| [
"['Ayaka Sakata' 'Yoshiyuki Kabashima']",
"Ayaka Sakata and Yoshiyuki Kabashima"
] |
cs.SI cs.IR cs.LG | null | 1301.6277 | null | null | http://arxiv.org/pdf/1301.6277v1 | 2013-01-26T18:26:36Z | 2013-01-26T18:26:36Z | LA-LDA: A Limited Attention Topic Model for Social Recommendation | Social media users have finite attention which limits the number of incoming
messages from friends they can process. Moreover, they pay more attention to
the opinions and recommendations of some friends than to those of others. In this paper,
we propose LA-LDA, a latent topic model which incorporates limited,
non-uniformly divided attention in the diffusion process by which opinions and
information spread on the social network. We show that our proposed model is
able to learn more accurate user models from users' social network and item
adoption behavior than models which do not take limited attention into account.
We analyze voting on news items on the social news aggregator Digg and show
that our proposed model is better able to predict held out votes than
alternative models. Our study demonstrates that psycho-socially motivated
models have better ability to describe and predict observed behavior than
models which only consider topics.
| [
"['Jeon-Hyung Kang' 'Kristina Lerman' 'Lise Getoor']",
"Jeon-Hyung Kang, Kristina Lerman, Lise Getoor"
] |
cs.LG q-bio.QM stat.ML | null | 1301.6314 | null | null | http://arxiv.org/pdf/1301.6314v2 | 2013-08-14T20:51:50Z | 2013-01-27T03:45:30Z | Equitability Analysis of the Maximal Information Coefficient, with
Comparisons | A measure of dependence is said to be equitable if it gives similar scores to
equally noisy relationships of different types. Equitability is important in
data exploration when the goal is to identify a relatively small set of
strongest associations within a dataset as opposed to finding as many non-zero
associations as possible, which often are too many to sift through. Thus an
equitable statistic, such as the maximal information coefficient (MIC), can be
useful for analyzing high-dimensional data sets. Here, we explore both
equitability and the properties of MIC, and discuss several aspects of the
theory and practice of MIC. We begin by presenting an intuition behind the
equitability of MIC through the exploration of the maximization and
normalization steps in its definition. We then examine the speed and optimality
of the approximation algorithm used to compute MIC, and suggest some directions
for improving both. Finally, we demonstrate in a range of noise models and
sample sizes that MIC is more equitable than natural alternatives, such as
mutual information estimation and distance correlation.
| [
"David Reshef (1), Yakir Reshef (1), Michael Mitzenmacher (2), Pardis\n Sabeti (2) (1, 2 - contributed equally)",
"['David Reshef' 'Yakir Reshef' 'Michael Mitzenmacher' 'Pardis Sabeti']"
] |
cs.LG | null | 1301.6316 | null | null | http://arxiv.org/pdf/1301.6316v3 | 2013-03-18T18:37:37Z | 2013-01-27T04:51:21Z | Hierarchical Data Representation Model - Multi-layer NMF | In this paper, we propose a data representation model that demonstrates
hierarchical feature learning using nsNMF. We extend the unit algorithm into
several layers. Experiments with document and image data successfully
discovered feature hierarchies. We also show that the proposed method results
in much better classification and reconstruction performance, especially for a
small number of features.
| [
"Hyun Ah Song, Soo-Young Lee",
"['Hyun Ah Song' 'Soo-Young Lee']"
] |
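The record above does not spell out the stacking rule, so the following is only a plausible reading, sketched with scikit-learn's standard NMF in place of nsNMF: each layer re-factorizes the code matrix produced by the previous layer, yielding progressively more abstract features.

```python
import numpy as np
from sklearn.decomposition import NMF

def multilayer_nmf(X, layer_sizes, seed=0):
    """Stack NMF layers; returns per-layer basis matrices and the top-level codes."""
    bases, H = [], X
    for k in layer_sizes:
        model = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=seed)
        W = model.fit_transform(H)       # samples x k codes for this layer
        bases.append(model.components_)  # k x (previous layer width) basis
        H = W                            # feed this layer's codes into the next layer
    return bases, H

rng = np.random.default_rng(0)
X = rng.random((200, 100))               # nonnegative toy data
bases, codes = multilayer_nmf(X, layer_sizes=[40, 15, 5])
print([b.shape for b in bases], codes.shape)
```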
cs.CV cs.LG stat.ML | null | 1301.6324 | null | null | http://arxiv.org/pdf/1301.6324v1 | 2013-01-27T06:55:55Z | 2013-01-27T06:55:55Z | An improvement to k-nearest neighbor classifier | K-Nearest neighbor classifier (k-NNC) is simple to use and has little design
time, such as choosing the value of k, which makes it suitable for dynamically
varying data-sets. There exist some
fundamental improvements over the basic k-NNC, like weighted k-nearest
neighbors classifier (where weights to nearest neighbors are given based on
linear interpolation), using artificially generated training set called
bootstrapped training set, etc. These improvements are orthogonal to space
reduction and classification time reduction techniques, hence can be coupled
with any of them. The paper proposes another improvement to the basic k-NNC
where the weights to nearest neighbors are given based on Gaussian distribution
(instead of linear interpolation as done in weighted k-NNC) which is also
independent of any space reduction and classification time reduction technique.
We formally show that our proposed method is closely related to non-parametric
density estimation using a Gaussian kernel. We experimentally demonstrate using
various standard data-sets that the proposed method is better than the existing
ones in most cases.
| [
"['T. Hitendra Sarma' 'P. Viswanath' 'D. Sai Koti Reddy' 'S. Sri Raghava']",
"T. Hitendra Sarma, P. Viswanath, D. Sai Koti Reddy and S. Sri Raghava"
] |
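A minimal sketch of the idea as described in the abstract above (illustrative only; the bandwidth sigma and the toy data are my own choices): the k nearest neighbours vote with weights given by a Gaussian function of their distance rather than by linear interpolation.

```python
import numpy as np

def gaussian_knn_predict(X_train, y_train, x, k=5, sigma=1.0):
    d = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(d)[:k]                        # k nearest neighbours
    w = np.exp(-d[nn] ** 2 / (2 * sigma ** 2))    # Gaussian weight per neighbour
    votes = {}
    for i, weight in zip(nn, w):
        votes[y_train[i]] = votes.get(y_train[i], 0.0) + weight
    return max(votes, key=votes.get)              # class with the largest weighted vote

# toy usage: two Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(gaussian_knn_predict(X, y, np.array([2.5, 2.5])))
```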
cs.LG cs.DB stat.ML | null | 1301.6626 | null | null | http://arxiv.org/pdf/1301.6626v1 | 2013-01-28T18:00:33Z | 2013-01-28T18:00:33Z | Discriminative Feature Selection for Uncertain Graph Classification | Mining discriminative features for graph data has attracted much attention in
recent years due to its important role in constructing graph classifiers,
generating graph indices, etc. Most measurement of interestingness of
discriminative subgraph features are defined on certain graphs, where the
structure of graph objects are certain, and the binary edges within each graph
represent the "presence" of linkages among the nodes. In many real-world
applications, however, the linkage structure of the graphs is inherently
uncertain. Therefore, existing measurements of interestingness based upon
certain graphs are unable to capture the structural uncertainty in these
applications effectively. In this paper, we study the problem of discriminative
subgraph feature selection from uncertain graphs. This problem is challenging
and different from conventional subgraph mining problems because both the
structure of the graph objects and the discrimination score of each subgraph
feature are uncertain. To address these challenges, we propose a novel
discriminative subgraph feature selection method, DUG, which can find
discriminative subgraph features in uncertain graphs based upon different
statistical measures including expectation, median, mode and phi-probability.
We first compute the probability distribution of the discrimination scores for
each subgraph feature based on dynamic programming. Then a branch-and-bound
algorithm is proposed to search for discriminative subgraphs efficiently.
Extensive experiments on various neuroimaging applications (i.e., Alzheimer's
Disease, ADHD and HIV) have been performed to analyze the gain in performance
by taking into account structural uncertainties in identifying discriminative
subgraph features for graph classification.
| [
"Xiangnan Kong, Philip S. Yu, Xue Wang, Ann B. Ragin",
"['Xiangnan Kong' 'Philip S. Yu' 'Xue Wang' 'Ann B. Ragin']"
] |
cs.SI cs.LG physics.soc-ph | null | 1301.6630 | null | null | http://arxiv.org/pdf/1301.6630v2 | 2013-02-08T14:33:18Z | 2013-01-28T18:17:22Z | Political Disaffection: a case study on the Italian Twitter community | In our work we analyse the political disaffection or "the subjective feeling
of powerlessness, cynicism, and lack of confidence in the political process,
politicians, and democratic institutions, but with no questioning of the
political regime" by exploiting Twitter data through machine learning
techniques. In order to validate the quality of the time-series generated by
the Twitter data, we highlight the relations of these data with political
disaffection as measured by means of public opinion surveys. Moreover, we show
that important political news of Italian newspapers are often correlated with
the highest peaks of the produced time-series.
| [
"Corrado Monti, Alessandro Rozza, Giovanni Zappella, Matteo Zignani,\n Adam Arvidsson, Monica Poletti",
"['Corrado Monti' 'Alessandro Rozza' 'Giovanni Zappella' 'Matteo Zignani'\n 'Adam Arvidsson' 'Monica Poletti']"
] |
cs.LG | null | 1301.6659 | null | null | http://arxiv.org/pdf/1301.6659v4 | 2013-08-01T22:06:49Z | 2013-01-28T20:01:57Z | Clustering-Based Matrix Factorization | Recommender systems are emerging technologies that nowadays can be found in
many applications such as Amazon, Netflix, and so on. These systems help users
to find relevant information, recommendations, and their preferred items.
A slight improvement in the accuracy of these recommenders can strongly affect
the quality of recommendations. Matrix Factorization is a popular method in
Recommendation Systems showing promising results in accuracy and complexity. In
this paper we propose an extension of matrix factorization which adds general
neighborhood information on the recommendation model. Users and items are
clustered into different categories to see how these categories share
preferences. We then employ these shared interests of categories in a fusion by
Biased Matrix Factorization to achieve more accurate recommendations. This is a
complement for the current neighborhood aware matrix factorization models which
rely on using direct neighborhood information of users and items. The proposed
model is tested on two well-known recommendation system datasets: Movielens100k
and Netflix. Our experiment shows applying the general latent features of
categories into factorized recommender models improves the accuracy of
recommendations. The current neighborhood-aware models need a great number of
neighbors to achieve good accuracies. To the best of our knowledge, the
proposed model is better than or comparable with the current neighborhood-aware
models when they consider fewer neighbors.
| [
"Nima Mirbakhsh and Charles X. Ling",
"['Nima Mirbakhsh' 'Charles X. Ling']"
] |
cs.LG stat.ML | null | 1301.6676 | null | null | http://arxiv.org/pdf/1301.6676v1 | 2013-01-23T15:56:44Z | 2013-01-23T15:56:44Z | Inferring Parameters and Structure of Latent Variable Models by
Variational Bayes | Current methods for learning graphical models with latent variables and a
fixed structure estimate optimal values for the model parameters. Whereas this
approach usually produces overfitting and suboptimal generalization
performance, carrying out the Bayesian program of computing the full posterior
distributions over the parameters remains a difficult problem. Moreover,
learning the structure of models with latent variables, for which the Bayesian
approach is crucial, is yet a harder problem. In this paper I present the
Variational Bayes framework, which provides a solution to these problems. This
approach approximates full posterior distributions over model parameters and
structures, as well as latent variables, in an analytical manner without
resorting to sampling methods. Unlike in the Laplace approximation, these
posteriors are generally non-Gaussian and no Hessian needs to be computed. The
resulting algorithm generalizes the standard Expectation Maximization
algorithm, and its convergence is guaranteed. I demonstrate that this algorithm
can be applied to a large class of models in several domains, including
unsupervised clustering and blind source separation.
| [
"Hagai Attias",
"['Hagai Attias']"
] |
cs.LG stat.ML | null | 1301.6677 | null | null | http://arxiv.org/pdf/1301.6677v1 | 2013-01-23T15:56:48Z | 2013-01-23T15:56:48Z | Relative Loss Bounds for On-line Density Estimation with the Exponential
Family of Distributions | We consider on-line density estimation with a parameterized density from the
exponential family. The on-line algorithm receives one example at a time and
maintains a parameter that is essentially an average of the past examples.
After receiving an example the algorithm incurs a loss which is the negative
log-likelihood of the example w.r.t. the past parameter of the algorithm. An
off-line algorithm can choose the best parameter based on all the examples. We
prove bounds on the additional total loss of the on-line algorithm over the
total loss of the off-line algorithm. These relative loss bounds hold for an
arbitrary sequence of examples. The goal is to design algorithms with the best
possible relative loss bounds. We use a certain divergence to derive and
analyze the algorithms. This divergence is a relative entropy between two
exponential distributions.
| [
"Katy S. Azoury, Manfred K. Warmuth",
"['Katy S. Azoury' 'Manfred K. Warmuth']"
] |
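An illustrative instance of the protocol in the record above, for a unit-variance Gaussian (one member of the exponential family); the choice of distribution and the comparison against the best fixed parameter in hindsight are my own framing of the setup:

```python
import numpy as np

def online_gaussian_density(xs, mu0=0.0):
    """Parameter = running average of past examples; loss = -log N(x; mu, 1)."""
    mu, n, online_loss = mu0, 0, 0.0
    for x in xs:
        online_loss += 0.5 * np.log(2 * np.pi) + 0.5 * (x - mu) ** 2  # loss w.r.t. past parameter
        n += 1
        mu += (x - mu) / n                                            # update to the new average
    return mu, online_loss

rng = np.random.default_rng(0)
xs = rng.normal(loc=2.0, scale=1.0, size=1000)
mu_hat, on_loss = online_gaussian_density(xs)
best_mu = xs.mean()                                                   # off-line comparator
off_loss = (0.5 * np.log(2 * np.pi) + 0.5 * (xs - best_mu) ** 2).sum()
print(f"relative (regret) loss over the off-line algorithm: {on_loss - off_loss:.2f}")
```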
cs.AI cs.LG | null | 1301.6683 | null | null | http://arxiv.org/pdf/1301.6683v1 | 2013-01-23T15:57:10Z | 2013-01-23T15:57:10Z | Discovering the Hidden Structure of Complex Dynamic Systems | Dynamic Bayesian networks provide a compact and natural representation for
complex dynamic systems. However, in many cases, there is no expert available
from whom a model can be elicited. Learning provides an alternative approach
for constructing models of dynamic systems. In this paper, we address some of
the crucial computational aspects of learning the structure of dynamic systems,
particularly those where some relevant variables are partially observed or even
entirely unknown. Our approach is based on the Structural Expectation
Maximization (SEM) algorithm. The main computational cost of the SEM algorithm
is the gathering of expected sufficient statistics. We propose a novel
approximation scheme that allows these sufficient statistics to be computed
efficiently. We also investigate the fundamental problem of discovering the
existence of hidden variables without exhaustive and expensive search. Our
approach is based on the observation that, in dynamic systems, ignoring a
hidden variable typically results in a violation of the Markov property. Thus,
our algorithm searches for such violations in the data, and introduces hidden
variables to explain them. We provide empirical results showing that the
algorithm is able to learn the dynamics of complex systems in a computationally
tractable way.
| [
"['Xavier Boyen' 'Nir Friedman' 'Daphne Koller']",
"Xavier Boyen, Nir Friedman, Daphne Koller"
] |
cs.LG cs.AI stat.ML | null | 1301.6684 | null | null | http://arxiv.org/pdf/1301.6684v1 | 2013-01-23T15:57:14Z | 2013-01-23T15:57:14Z | Comparing Bayesian Network Classifiers | In this paper, we empirically evaluate algorithms for learning four types of
Bayesian network (BN) classifiers - Naive-Bayes, tree augmented Naive-Bayes, BN
augmented Naive-Bayes and general BNs, where the latter two are learned using
two variants of a conditional-independence (CI) based BN-learning algorithm.
Experimental results show the obtained classifiers, learned using the CI based
algorithms, are competitive with (or superior to) the best known classifiers,
based on both Bayesian networks and other formalisms; and that the
computational time for learning and using these classifiers is relatively
small. Moreover, these results also suggest a way to learn yet more effective
classifiers; we demonstrate empirically that this new algorithm does work as
expected. Collectively, these results argue that BN classifiers deserve more
attention in machine learning and data mining communities.
| [
"Jie Cheng, Russell Greiner",
"['Jie Cheng' 'Russell Greiner']"
] |
cs.LG stat.ML | null | 1301.6685 | null | null | http://arxiv.org/pdf/1301.6685v2 | 2015-05-16T23:09:53Z | 2013-01-23T15:57:18Z | Fast Learning from Sparse Data | We describe two techniques that significantly improve the running time of
several standard machine-learning algorithms when data is sparse. The first
technique is an algorithm that efficiently extracts one-way and two-way
counts -- either real or expected -- from discrete data. Extracting such counts is
a fundamental step in learning algorithms for constructing a variety of models
including decision trees, decision graphs, Bayesian networks, and naive-Bayes
clustering models. The second technique is an algorithm that efficiently
performs the E-step of the EM algorithm (i.e. inference) when applied to a
naive-Bayes clustering model. Using real-world data sets, we demonstrate a
dramatic decrease in running time for algorithms that incorporate these
techniques.
| [
"['David Maxwell Chickering' 'David Heckerman']",
"David Maxwell Chickering, David Heckerman"
] |
cs.AI cs.LG | null | 1301.6688 | null | null | http://arxiv.org/pdf/1301.6688v1 | 2013-01-23T15:57:30Z | 2013-01-23T15:57:30Z | Learning Polytrees | We consider the task of learning the maximum-likelihood polytree from data.
Our first result is a performance guarantee establishing that the optimal
branching (or Chow-Liu tree), which can be computed very easily, constitutes a
good approximation to the best polytree. We then show that it is not possible
to do very much better, since the learning problem is NP-hard even to
approximately solve within some constant factor.
| [
"Sanjoy Dasgupta",
"['Sanjoy Dasgupta']"
] |
cs.AI cs.LG | null | 1301.6690 | null | null | http://arxiv.org/pdf/1301.6690v1 | 2013-01-23T15:57:38Z | 2013-01-23T15:57:38Z | Model-Based Bayesian Exploration | Reinforcement learning systems are often concerned with balancing exploration
of untested actions against exploitation of actions that are known to be good.
The benefit of exploration can be estimated using the classical notion of Value
of Information - the expected improvement in future decision quality arising
from the information acquired by exploration. Estimating this quantity requires
an assessment of the agent's uncertainty about its current value estimates for
states. In this paper we investigate ways of representing and reasoning about
this uncertainty in algorithms where the system attempts to learn a model of
its environment. We explicitly represent uncertainty about the parameters of
the model and build probability distributions over Q-values based on these.
These distributions are used to compute a myopic approximation to the value of
information for each action and hence to select the action that best balances
exploration and exploitation.
| [
"['Richard Dearden' 'Nir Friedman' 'David Andre']",
"Richard Dearden, Nir Friedman, David Andre"
] |
cs.LG cs.AI stat.ML | null | 1301.6695 | null | null | http://arxiv.org/pdf/1301.6695v1 | 2013-01-23T15:58:00Z | 2013-01-23T15:58:00Z | Data Analysis with Bayesian Networks: A Bootstrap Approach | In recent years there has been significant progress in algorithms and methods
for inducing Bayesian networks from data. However, in complex data analysis
problems, we need to go beyond being satisfied with inducing networks with high
scores. We need to provide confidence measures on features of these networks:
Is the existence of an edge between two nodes warranted? Is the Markov blanket
of a given node robust? Can we say something about the ordering of the
variables? We should be able to address these questions, even when the amount
of data is not enough to induce a high scoring network. In this paper we
propose Efron's Bootstrap as a computationally efficient approach for answering
these questions. In addition, we propose to use these confidence measures to
induce better structures from the data, and to detect the presence of latent
variables.
| [
"['Nir Friedman' 'Moises Goldszmidt' 'Abraham Wyner']",
"Nir Friedman, Moises Goldszmidt, Abraham Wyner"
] |
cs.LG cs.AI stat.ML | null | 1301.6696 | null | null | http://arxiv.org/pdf/1301.6696v1 | 2013-01-23T15:58:05Z | 2013-01-23T15:58:05Z | Learning Bayesian Network Structure from Massive Datasets: The "Sparse
Candidate" Algorithm | Learning Bayesian networks is often cast as an optimization problem, where
the computational task is to find a structure that maximizes a statistically
motivated score. By and large, existing learning tools address this
optimization problem using standard heuristic search techniques. Since the
search space is extremely large, such search procedures can spend most of the
time examining candidates that are extremely unreasonable. This problem becomes
critical when we deal with data sets that are large either in the number of
instances, or the number of attributes. In this paper, we introduce an
algorithm that achieves faster learning by restricting the search space. This
iterative algorithm restricts the parents of each variable to belong to a small
subset of candidates. We then search for a network that satisfies these
constraints. The learned network is then used for selecting better candidates
for the next iteration. We evaluate this algorithm both on synthetic and
real-life data. Our results show that it is significantly faster than
alternative search procedures without loss of quality in the learned
structures.
| [
"Nir Friedman, Iftach Nachman, Dana Pe'er",
"['Nir Friedman' 'Iftach Nachman' \"Dana Pe'er\"]"
] |
cs.LG stat.ML | null | 1301.6697 | null | null | null | null | null | Parameter Priors for Directed Acyclic Graphical Models and the
Characterization of Several Probability Distributions | We show that the only parameter prior for complete Gaussian DAG models that
satisfies global parameter independence, complete model equivalence, and some
weak regularity assumptions, is the normal-Wishart distribution. Our analysis
is based on the following new characterization of the Wishart distribution: let
W be an n x n, n >= 3, positive-definite symmetric matrix of random variables
and f(W) be a pdf of W. Then, f(W) is a Wishart distribution if and only if
W_{11}-W_{12}W_{22}^{-1}W_{12}' is independent of {W_{12}, W_{22}} for every
block partitioning W_{11}, W_{12}, W_{12}', W_{22} of W. Similar
characterizations of the normal and normal-Wishart distributions are provided
as well. We also show how to construct a prior for every DAG model over X from
the prior of a single regression model.
| [
"Dan Geiger, David Heckerman"
] |
cs.LG cs.IR stat.ML | null | 1301.6705 | null | null | http://arxiv.org/pdf/1301.6705v1 | 2013-01-23T15:58:43Z | 2013-01-23T15:58:43Z | Probabilistic Latent Semantic Analysis | Probabilistic Latent Semantic Analysis is a novel statistical technique for
the analysis of two-mode and co-occurrence data, which has applications in
information retrieval and filtering, natural language processing, machine
learning from text, and in related areas. Compared to standard Latent Semantic
Analysis which stems from linear algebra and performs a Singular Value
Decomposition of co-occurrence tables, the proposed method is based on a
mixture decomposition derived from a latent class model. This results in a more
principled approach which has a solid foundation in statistics. In order to
avoid overfitting, we propose a widely applicable generalization of maximum
likelihood model fitting by tempered EM. Our approach yields substantial and
consistent improvements over Latent Semantic Analysis in a number of
experiments.
| [
"Thomas Hofmann",
"['Thomas Hofmann']"
] |
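A compact sketch of PLSA fitted by tempered EM as summarized above (illustration only; the topic count, iteration budget and temperature beta are arbitrary, and no held-out control of beta or early stopping is included):

```python
import numpy as np

def plsa(N, n_topics=5, n_iter=50, beta=0.9, seed=0):
    """N: documents x words count matrix. Returns P(w|z) and P(z|d)."""
    rng = np.random.default_rng(seed)
    D, W = N.shape
    p_w_z = rng.random((n_topics, W)); p_w_z /= p_w_z.sum(1, keepdims=True)
    p_z_d = rng.random((D, n_topics)); p_z_d /= p_z_d.sum(1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(z|d,w), tempered by the exponent beta
        post = (p_z_d[:, :, None] * p_w_z[None, :, :]) ** beta          # D x Z x W
        post /= post.sum(1, keepdims=True) + 1e-12
        # M-step: re-estimate the multinomials from expected counts
        counts = N[:, None, :] * post                                   # D x Z x W
        p_w_z = counts.sum(0); p_w_z /= p_w_z.sum(1, keepdims=True) + 1e-12
        p_z_d = counts.sum(2); p_z_d /= p_z_d.sum(1, keepdims=True) + 1e-12
    return p_w_z, p_z_d

N = np.random.default_rng(1).integers(0, 5, size=(30, 80)).astype(float)  # toy term counts
p_w_z, p_z_d = plsa(N)
print(p_w_z.shape, p_z_d.shape)
```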
cs.LG stat.ML | null | 1301.6710 | null | null | http://arxiv.org/pdf/1301.6710v1 | 2013-01-23T15:59:02Z | 2013-01-23T15:59:02Z | On Supervised Selection of Bayesian Networks | Given a set of possible models (e.g., Bayesian network structures) and a data
sample, in the unsupervised model selection problem the task is to choose the
most accurate model with respect to the domain joint probability distribution.
In contrast to this, in supervised model selection it is a priori known that
the chosen model will be used in the future for prediction tasks involving more
"focused" predictive distributions. Although focused predictive distributions
can be produced from the joint probability distribution by marginalization, in
practice the best model in the unsupervised sense does not necessarily perform
well in supervised domains. In particular, the standard marginal likelihood
score is a criterion for the unsupervised task, and, although frequently used
for supervised model selection also, does not perform well in such tasks. In
this paper we study the performance of the marginal likelihood score
empirically in supervised Bayesian network selection tasks by using a large
number of publicly available classification data sets, and compare the results
to those obtained by alternative model selection criteria, including empirical
crossvalidation methods, an approximation of a supervised marginal likelihood
measure, and a supervised version of Dawid's prequential (predictive
sequential) principle. The results demonstrate that the marginal likelihood
score does not perform well for supervised model selection, while the best
results are obtained by using Dawid's prequential approach.
| [
"Petri Kontkanen, Petri Myllymaki, Tomi Silander, Henry Tirri",
"['Petri Kontkanen' 'Petri Myllymaki' 'Tomi Silander' 'Henry Tirri']"
] |
cs.LG cs.AI stat.ML | null | 1301.6723 | null | null | http://arxiv.org/pdf/1301.6723v1 | 2013-01-23T15:59:54Z | 2013-01-23T15:59:54Z | A Bayesian Network Classifier that Combines a Finite Mixture Model and a
Naive Bayes Model | In this paper we present a new Bayesian network model for classification that
combines the naive-Bayes (NB) classifier and the finite-mixture (FM)
classifier. The resulting classifier aims at relaxing the strong assumptions on
which the two component models are based, in an attempt to improve on their
classification performance, both in terms of accuracy and in terms of
calibration of the estimated probabilities. The proposed classifier is obtained
by superimposing a finite mixture model on the set of feature variables of a
naive Bayes model. We present experimental results that compare the predictive
performance on real datasets of the new classifier with the predictive
performance of the NB classifier and the FM classifier.
| [
"Stefano Monti, Gregory F. Cooper",
"['Stefano Monti' 'Gregory F. Cooper']"
] |
cs.AI cs.LG stat.ML | null | 1301.6724 | null | null | http://arxiv.org/pdf/1301.6724v1 | 2013-01-23T15:59:58Z | 2013-01-23T15:59:58Z | A Variational Approximation for Bayesian Networks with Discrete and
Continuous Latent Variables | We show how to use a variational approximation to the logistic function to
perform approximate inference in Bayesian networks containing discrete nodes
with continuous parents. Essentially, we convert the logistic function to a
Gaussian, which facilitates exact inference, and then iteratively adjust the
variational parameters to improve the quality of the approximation. We
demonstrate experimentally that this approximation is faster and potentially
more accurate than sampling. We also introduce a simple new technique for
handling evidence, which allows us to handle arbitrary distributions on
observed nodes, as well as achieving a significant speedup in networks with
discrete variables of large cardinality.
| [
"Kevin Murphy",
"['Kevin Murphy']"
] |
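The abstract above converts the logistic function to a Gaussian; a standard tool for doing so, assumed here since the abstract does not name it, is the Jaakkola-Jordan lower bound, which is exponential-quadratic in its argument and therefore conjugate to Gaussian messages. A quick numerical check of the bound:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def jj_lower_bound(x, xi):
    """sigma(x) >= sigma(xi) * exp((x - xi)/2 - lam(xi) * (x^2 - xi^2))."""
    lam = np.tanh(xi / 2.0) / (4.0 * xi)
    return sigmoid(xi) * np.exp((x - xi) / 2.0 - lam * (x ** 2 - xi ** 2))

x = np.linspace(-6.0, 6.0, 1001)
for xi in (0.5, 2.0, 4.0):
    gap = sigmoid(x) - jj_lower_bound(x, xi)
    print(f"xi = {xi}: smallest gap = {gap.min():.2e} (non-negative up to rounding, tight at |x| = xi)")
```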
cs.AI cs.LG | null | 1301.6725 | null | null | http://arxiv.org/pdf/1301.6725v1 | 2013-01-23T16:00:02Z | 2013-01-23T16:00:02Z | Loopy Belief Propagation for Approximate Inference: An Empirical Study | Recently, researchers have demonstrated that loopy belief propagation - the
use of Pearl's polytree algorithm in a Bayesian network with loops - can
perform well in the context of error-correcting codes. The most dramatic
instance of this is the near Shannon-limit performance of Turbo Codes - codes
whose decoding algorithm is equivalent to loopy belief propagation in a
chain-structured Bayesian network. In this paper we ask: is there something
special about the error-correcting code context, or does loopy propagation work
as an approximate inference scheme in a more general setting? We compare the
marginals computed using loopy propagation to the exact ones in four Bayesian
network architectures, including two real-world networks: ALARM and QMR. We
find that the loopy beliefs often converge and when they do, they give a good
approximation to the correct marginals. However, on the QMR network, the loopy
beliefs oscillated and had no obvious relationship to the correct posteriors.
We present some initial investigations into the cause of these oscillations,
and show that some simple methods of preventing them lead to the wrong results.
| [
"Kevin Murphy, Yair Weiss, Michael I. Jordan",
"['Kevin Murphy' 'Yair Weiss' 'Michael I. Jordan']"
] |
cs.AI cs.LG | null | 1301.6726 | null | null | http://arxiv.org/pdf/1301.6726v1 | 2013-01-23T16:00:06Z | 2013-01-23T16:00:06Z | Learning Bayesian Networks from Incomplete Data with Stochastic Search
Algorithms | This paper describes stochastic search approaches, including a new stochastic
algorithm and an adaptive mutation operator, for learning Bayesian networks
from incomplete data. This problem is characterized by a huge solution space
with a highly multimodal landscape. State-of-the-art approaches all involve
using deterministic approaches such as the expectation-maximization algorithm.
These approaches are guaranteed to find local maxima, but do not explore the
landscape for other modes. Our approach evolves structure and the missing data.
We compare our stochastic algorithms and show they all produce accurate
results.
| [
"James W. Myers, Kathryn Blackmond Laskey, Tod S. Levitt",
"['James W. Myers' 'Kathryn Blackmond Laskey' 'Tod S. Levitt']"
] |
cs.AI cs.LG stat.ML | null | 1301.6727 | null | null | http://arxiv.org/pdf/1301.6727v1 | 2013-01-23T16:00:10Z | 2013-01-23T16:00:10Z | Learning Bayesian Networks with Restricted Causal Interactions | A major problem for the learning of Bayesian networks (BNs) is the
exponential number of parameters needed for conditional probability tables.
Recent research reduces this complexity by modeling local structure in the
probability tables. We examine the use of log-linear local models. While
log-linear models in this context are not new (Whittaker, 1990; Buntine, 1991;
Neal, 1992; Heckerman and Meek, 1997), for structure learning they are
generally subsumed under a naive Bayes model. We describe an alternative
interpretation, and use a Minimum Message Length (MML) (Wallace, 1987) metric
for structure learning of networks exhibiting causal independence, which we
term first-order networks (FONs). We also investigate local model selection on
a node-by-node basis.
| [
"Julian R. Neil, Chris S. Wallace, Kevin B. Korb",
"['Julian R. Neil' 'Chris S. Wallace' 'Kevin B. Korb']"
] |
cs.LG stat.ML | null | 1301.6730 | null | null | http://arxiv.org/pdf/1301.6730v1 | 2013-01-23T16:00:21Z | 2013-01-23T16:00:21Z | Accelerating EM: An Empirical Study | Many applications require that we learn the parameters of a model from data.
EM is a method used to learn the parameters of probabilistic models for which
the data for some of the variables in the models is either missing or hidden.
There are instances in which this method is slow to converge. Therefore,
several accelerations have been proposed to improve the method. None of the
proposed acceleration methods are theoretically dominant and experimental
comparisons are lacking. In this paper, we present the different proposed
accelerations and try to compare them experimentally. From the results of the
experiments, we argue that some acceleration of EM is always possible, but that
which acceleration is superior depends on properties of the problem.
| [
"Luis E. Ortiz, Leslie Pack Kaelbling",
"['Luis E. Ortiz' 'Leslie Pack Kaelbling']"
] |
cs.LG stat.ML | null | 1301.6731 | null | null | http://arxiv.org/pdf/1301.6731v1 | 2013-01-23T16:00:25Z | 2013-01-23T16:00:25Z | Variational Learning in Mixed-State Dynamic Graphical Models | Many real-valued stochastic time-series are locally linear (Gaussian), but
globally non-linear. For example, the trajectory of a human hand gesture can be
viewed as a linear dynamic system driven by a nonlinear dynamic system that
represents muscle actions. We present a mixed-state dynamic graphical model in
which a hidden Markov model drives a linear dynamic system. This combination
allows us to model both the discrete and continuous causes of trajectories such
as human gestures. The number of computations needed for exact inference is
exponential in the sequence length, so we derive an approximate variational
inference technique that can also be used to learn the parameters of the
discrete and continuous models. We show how the mixed-state model and the
variational technique can be used to classify human hand gestures made with a
computer mouse.
| [
"['Vladimir Pavlovic' 'Brendan J. Frey' 'Thomas S. Huang']",
"Vladimir Pavlovic, Brendan J. Frey, Thomas S. Huang"
] |
cs.LG stat.ML | null | 1301.6738 | null | null | http://arxiv.org/pdf/1301.6738v1 | 2013-01-23T16:00:53Z | 2013-01-23T16:00:53Z | Approximate Learning in Complex Dynamic Bayesian Networks | In this paper we extend the work of Smith and Papamichail (1999) and present
fast approximate Bayesian algorithms for learning in complex scenarios where at
any time frame, the relationships between explanatory state space variables can
be described by a Bayesian network that evolve dynamically over time and the
observations taken are not necessarily Gaussian. It uses recent developments in
approximate Bayesian forecasting methods in combination with more familiar
Gaussian propagation algorithms on junction trees. The procedure for learning
state parameters from data is given explicitly for common sampling
distributions and the methodology is illustrated through a real application.
The efficiency of the dynamic approximation is explored by using the Hellinger
divergence measure and theoretical bounds for the efficacy of such a procedure
are discussed.
| [
"['Raffaella Settimi' 'Jim Q. Smith' 'A. S. Gargoum']",
"Raffaella Settimi, Jim Q. Smith, A. S. Gargoum"
] |
cs.IR cs.LG stat.ML | null | 1301.6770 | null | null | http://arxiv.org/pdf/1301.6770v1 | 2013-01-28T21:04:45Z | 2013-01-28T21:04:45Z | An alternative text representation to TF-IDF and Bag-of-Words | In text mining, information retrieval, and machine learning, text documents
are commonly represented through variants of sparse Bag of Words (sBoW) vectors
(e.g. TF-IDF). Although simple and intuitive, sBoW style representations suffer
from their inherent over-sparsity and fail to capture word-level synonymy and
polysemy. Especially when labeled data is limited (e.g. in document
classification), or the text documents are short (e.g. emails or abstracts),
many features are rarely observed within the training corpus. This leads to
overfitting and reduced generalization accuracy. In this paper we propose Dense
Cohort of Terms (dCoT), an unsupervised algorithm to learn improved sBoW
document features. dCoT explicitly models absent words by removing and
reconstructing random sub-sets of words in the unlabeled corpus. With this
approach, dCoT learns to reconstruct frequent words from co-occurring
infrequent words and maps the high dimensional sparse sBoW vectors into a
low-dimensional dense representation. We show that the feature removal can be
marginalized out and that the reconstruction can be solved for in closed-form.
We demonstrate empirically, on several benchmark datasets, that dCoT features
significantly improve the classification accuracy across several document
classification tasks.
| [
"['Zhixiang' 'Xu' 'Minmin Chen' 'Kilian Q. Weinberger' 'Fei Sha']",
"Zhixiang (Eddie) Xu, Minmin Chen, Kilian Q. Weinberger, Fei Sha"
] |
cs.IT cs.CV cs.LG math.IT | null | 1301.6791 | null | null | http://arxiv.org/pdf/1301.6791v6 | 2013-10-11T16:47:24Z | 2013-01-28T22:01:22Z | Guarantees of Total Variation Minimization for Signal Recovery | In this paper, we consider using total variation minimization to recover
signals whose gradients have a sparse support, from a small number of
measurements. We establish the proof for the performance guarantee of total
variation (TV) minimization in recovering \emph{one-dimensional} signal with
sparse gradient support. This partially answers the open problem of proving the
fidelity of total variation minimization in such a setting \cite{TVMulti}. In
particular, we have shown that the recoverable gradient sparsity can grow
linearly with the signal dimension when TV minimization is used. Recoverable
sparsity thresholds of TV minimization are explicitly computed for
1-dimensional signal by using the Grassmann angle framework. We also extend our
results to TV minimization for multidimensional signals. Stability of
recovering the signal itself using 1-D TV minimization has also been established
through a property called "almost Euclidean property for 1-dimensional TV
norm". We further give a lower bound on the number of random Gaussian
measurements for recovering 1-dimensional signal vectors with $N$ elements and
$K$-sparse gradients. Interestingly, the number of needed measurements is lower
bounded by $\Omega((NK)^{\frac{1}{2}})$, rather than the $O(K\log(N/K))$ bound
frequently appearing in recovering $K$-sparse signal vectors.
| [
"Jian-Feng Cai and Weiyu Xu",
"['Jian-Feng Cai' 'Weiyu Xu']"
] |
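An illustrative 1-D recovery experiment in the spirit of the setting above, sketched with the cvxpy modelling library (an outside dependency; the signal length, gradient sparsity and measurement count are arbitrary choices):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N, K, M = 200, 5, 80                                # signal length, jumps, measurements

g = np.zeros(N)
g[rng.choice(N - 1, size=K, replace=False) + 1] = rng.normal(size=K)
x_true = np.cumsum(g)                               # piecewise-constant: K-sparse gradient

A = rng.normal(size=(M, N)) / np.sqrt(M)            # random Gaussian measurements
y = A @ x_true

x = cp.Variable(N)
cp.Problem(cp.Minimize(cp.tv(x)), [A @ x == y]).solve()   # TV minimization
err = np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error: {err:.2e}")
```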
cs.CL cs.LG | null | 1301.6939 | null | null | http://arxiv.org/pdf/1301.6939v2 | 2013-01-30T12:01:23Z | 2013-01-29T14:59:34Z | Multi-Step Regression Learning for Compositional Distributional
Semantics | We present a model for compositional distributional semantics related to the
framework of Coecke et al. (2010), and emulating formal semantics by
representing functions as tensors and arguments as vectors. We introduce a new
learning method for tensors, generalising the approach of Baroni and Zamparelli
(2010). We evaluate it on two benchmark data sets, and find it to outperform
existing leading methods. We argue in our analysis that the nature of this
learning method also renders it suitable for solving more subtle problems
compositional distributional models might face.
| [
"Edward Grefenstette, Georgiana Dinu, Yao-Zhong Zhang, Mehrnoosh\n Sadrzadeh and Marco Baroni",
"['Edward Grefenstette' 'Georgiana Dinu' 'Yao-Zhong Zhang'\n 'Mehrnoosh Sadrzadeh' 'Marco Baroni']"
] |
stat.ML cs.LG | null | 1301.6944 | null | null | http://arxiv.org/pdf/1301.6944v1 | 2013-01-29T15:09:56Z | 2013-01-29T15:09:56Z | On the Consistency of the Bootstrap Approach for Support Vector Machines
and Related Kernel Based Methods | It is shown that bootstrap approximations of support vector machines (SVMs)
based on a general convex and smooth loss function and on a general kernel are
consistent. This result is useful to approximate the unknown finite sample
distribution of SVMs by the bootstrap approach.
| [
"Andreas Christmann and Robert Hable",
"['Andreas Christmann' 'Robert Hable']"
] |
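A rough illustration of the bootstrap mechanics discussed above, assuming scikit-learn (this is not the paper's construction, and the standard SVC hinge loss is not smooth, so it only loosely matches the smooth-loss setting of the consistency result): refit the SVM on bootstrap resamples and use the empirical distribution of the decision values as an approximation to its unknown finite-sample distribution.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.utils import resample

# Toy data and a query point at which we study the decision function.
X, y = make_classification(n_samples=300, n_features=5, random_state=0)
x_test = X[:1]

# Refit the SVM on bootstrap resamples and collect decision values at x_test.
decision_values = []
for b in range(200):
    Xb, yb = resample(X, y, random_state=b)   # n samples drawn with replacement
    clf = SVC(kernel="rbf", C=1.0).fit(Xb, yb)
    decision_values.append(clf.decision_function(x_test)[0])

# The spread of these values is the bootstrap approximation to the sampling
# variability of the SVM's decision function at x_test.
print("bootstrap mean:", np.mean(decision_values))
print("bootstrap std :", np.std(decision_values))
```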
stat.ML cs.LG cs.SI | null | 1301.7047 | null | null | http://arxiv.org/pdf/1301.7047v1 | 2013-01-29T20:22:46Z | 2013-01-29T20:22:46Z | Link prediction for partially observed networks | Link prediction is one of the fundamental problems in network analysis. In
many applications, notably in genetics, a partially observed network may not
contain any negative examples of absent edges, which creates a difficulty for
many existing supervised learning approaches. We develop a new method which
treats the observed network as a sample of the true network with different
sampling rates for positive and negative examples. We obtain a relative ranking
of potential links by their probabilities, utilizing information on node
covariates as well as on network topology. Empirically, the method performs
well under many settings, including when the observed network is sparse. We
apply the method to a protein-protein interaction network and a school
friendship network.
| [
"Yunpeng Zhao, Elizaveta Levina and Ji Zhu",
"['Yunpeng Zhao' 'Elizaveta Levina' 'Ji Zhu']"
] |
cs.IR cs.LG | null | 1301.7363 | null | null | http://arxiv.org/pdf/1301.7363v1 | 2013-01-30T15:02:44Z | 2013-01-30T15:02:44Z | Empirical Analysis of Predictive Algorithms for Collaborative Filtering | Collaborative filtering or recommender systems use a database about user
preferences to predict additional topics or products a new user might like. In
this paper we describe several algorithms designed for this task, including
techniques based on correlation coefficients, vector-based similarity
calculations, and statistical Bayesian methods. We compare the predictive
accuracy of the various methods in a set of representative problem domains. We
use two basic classes of evaluation metrics. The first characterizes accuracy
over a set of individual predictions in terms of average absolute deviation.
The second estimates the utility of a ranked list of suggested items. This
metric uses an estimate of the probability that a user will see a
recommendation in an ordered list. Experiments were run for datasets associated
with 3 application areas, 4 experimental protocols, and the 2 evaluation
metrics for the various algorithms. Results indicate that for a wide range of
conditions, Bayesian networks with decision trees at each node and correlation
methods outperform Bayesian-clustering and vector-similarity methods. Between
correlation and Bayesian networks, the preferred method depends on the nature
of the dataset, nature of the application (ranked versus one-by-one
presentation), and the availability of votes with which to make predictions.
Other considerations include the size of the database, speed of predictions, and
learning time.
| [
"John S. Breese, David Heckerman, Carl Kadie",
"['John S. Breese' 'David Heckerman' 'Carl Kadie']"
] |
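A minimal sketch of the correlation-based (memory-based) prediction rule mentioned above: the active user's vote is predicted as their mean vote plus a Pearson-weighted average of other users' deviations from their means. The normalization follows the standard formulation rather than any specific variant from the paper, and the rating matrix is a made-up toy example with NaNs marking missing votes.

```python
import numpy as np

def predict(ratings, active, item):
    """Correlation-weighted prediction of the active user's vote on `item`.
    `ratings` is a users x items array; np.nan marks missing votes."""
    means = np.nanmean(ratings, axis=1)
    num, den = 0.0, 0.0
    for u in range(ratings.shape[0]):
        if u == active or np.isnan(ratings[u, item]):
            continue
        both = ~np.isnan(ratings[active]) & ~np.isnan(ratings[u])  # co-rated items
        if both.sum() < 2:
            continue
        a = ratings[active, both] - means[active]
        b = ratings[u, both] - means[u]
        denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
        w = (a * b).sum() / denom if denom > 0 else 0.0            # Pearson weight
        num += w * (ratings[u, item] - means[u])
        den += abs(w)
    return means[active] + num / den if den > 0 else means[active]

R = np.array([[5, 3, np.nan, 1],
              [4, np.nan, np.nan, 1],
              [1, 1, np.nan, 5],
              [1, np.nan, 5, 4]], dtype=float)
print(predict(R, active=0, item=2))
```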
cs.LG cs.AI stat.ML | null | 1301.7373 | null | null | http://arxiv.org/pdf/1301.7373v1 | 2013-01-30T15:03:37Z | 2013-01-30T15:03:37Z | The Bayesian Structural EM Algorithm | In recent years there has been a flurry of works on learning Bayesian
networks from data. One of the hard problems in this area is how to effectively
learn the structure of a belief network from incomplete data; that is, in the
presence of missing values or hidden variables. In a recent paper, I introduced
an algorithm called Structural EM that combines the standard Expectation
Maximization (EM) algorithm, which optimizes parameters, with structure search
for model selection. That algorithm learns networks based on penalized
likelihood scores, which include the BIC/MDL score and various approximations
to the Bayesian score. In this paper, I extend Structural EM to deal directly
with Bayesian model selection. I prove the convergence of the resulting
algorithm and show how to apply it for learning a large class of probabilistic
models, including Bayesian networks and some variants thereof.
| [
"Nir Friedman",
"['Nir Friedman']"
] |
cs.AI cs.LG | null | 1301.7374 | null | null | http://arxiv.org/pdf/1301.7374v1 | 2013-01-30T15:03:42Z | 2013-01-30T15:03:42Z | Learning the Structure of Dynamic Probabilistic Networks | Dynamic probabilistic networks are a compact representation of complex
stochastic processes. In this paper we examine how to learn the structure of a
DPN from data. We extend structure scoring rules for standard probabilistic
networks to the dynamic case, and show how to search for structure when some of
the variables are hidden. Finally, we examine two applications where such a
technology might be useful: predicting and classifying dynamic behaviors, and
learning causal orderings in biological processes. We provide empirical results
that demonstrate the applicability of our methods in both domains.
| [
"Nir Friedman, Kevin Murphy, Stuart Russell",
"['Nir Friedman' 'Kevin Murphy' 'Stuart Russell']"
] |
cs.LG stat.ML | null | 1301.7375 | null | null | http://arxiv.org/pdf/1301.7375v1 | 2013-01-30T15:03:47Z | 2013-01-30T15:03:47Z | Learning by Transduction | We describe a method for predicting a classification of an object given
classifications of the objects in the training set, assuming that the pairs
object/classification are generated by an i.i.d. process from a continuous
probability distribution. Our method is a modification of Vapnik's
support-vector machine; its main novelty is that it gives not only the
prediction itself but also a practicable measure of the evidence found in
support of that prediction. We also describe a procedure for assigning degrees
of confidence to predictions made by the support vector machine. Some
experimental results are presented, and possible extensions of the algorithms
are discussed.
| [
"['Alex Gammerman' 'Volodya Vovk' 'Vladimir Vapnik']",
"Alex Gammerman, Volodya Vovk, Vladimir Vapnik"
] |
cs.LG stat.ML | null | 1301.7376 | null | null | http://arxiv.org/pdf/1301.7376v1 | 2013-01-30T15:03:52Z | 2013-01-30T15:03:52Z | Graphical Models and Exponential Families | We provide a classification of graphical models according to their
representation as subfamilies of exponential families. Undirected graphical
models with no hidden variables are linear exponential families (LEFs),
directed acyclic graphical models and chain graphs with no hidden variables,
including Bayesian networks with several families of local distributions, are
curved exponential families (CEFs) and graphical models with hidden variables
are stratified exponential families (SEFs). An SEF is a finite union of CEFs
satisfying a frontier condition. In addition, we illustrate how one can
automatically generate independence and non-independence constraints on the
distributions over the observable variables implied by a Bayesian network with
hidden variables. The relevance of these results for model selection is
examined.
| [
"['Dan Geiger' 'Christopher Meek']",
"Dan Geiger, Christopher Meek"
] |
cs.LG stat.ML | null | 1301.7378 | null | null | http://arxiv.org/pdf/1301.7378v1 | 2013-01-30T15:04:02Z | 2013-01-30T15:04:02Z | Minimum Encoding Approaches for Predictive Modeling | We analyze differences between two information-theoretically motivated
approaches to statistical inference and model selection: the Minimum
Description Length (MDL) principle, and the Minimum Message Length (MML)
principle. Based on this analysis, we present two revised versions of MML: a
pointwise estimator which gives the MML-optimal single parameter model, and a
volumewise estimator which gives the MML-optimal region in the parameter space.
Our empirical results suggest that with small data sets, the MDL approach
yields more accurate predictions than the MML estimators. The empirical results
also demonstrate that the revised MML estimators introduced here perform better
than the original MML estimator suggested by Wallace and Freeman.
| [
"['Peter D Grunwald' 'Petri Kontkanen' 'Petri Myllymaki' 'Tomi Silander'\n 'Henry Tirri']",
"Peter D Grunwald, Petri Kontkanen, Petri Myllymaki, Tomi Silander,\n Henry Tirri"
] |
cs.LG stat.ML | null | 1301.7390 | null | null | http://arxiv.org/pdf/1301.7390v1 | 2013-01-30T15:04:59Z | 2013-01-30T15:04:59Z | Hierarchical Mixtures-of-Experts for Exponential Family Regression
Models with Generalized Linear Mean Functions: A Survey of Approximation and
Consistency Results | We investigate a class of hierarchical mixtures-of-experts (HME) models where
exponential family regression models with generalized linear mean functions of
the form $\psi(\alpha+\mathbf{x}^T\boldsymbol{\beta})$ are mixed. Here
$\psi(\cdot)$ is the inverse link function. Suppose the true response $y$
follows an exponential family regression model with mean function belonging to
a class of smooth functions of the form $\psi(h(\mathbf{x}))$ where
$h(\cdot)\in W_2^{\infty}$ (a Sobolev class over $[0,1]^{s}$). It is shown that
the HME probability density functions can approximate the true density, at a
rate of $O(m^{-2/s})$ in $L_p$ norm, and at a rate of $O(m^{-4/s})$ in
Kullback-Leibler divergence. These rates can be achieved within the family of
HME structures with no more than $s$ layers, where $s$ is the dimension of the
predictor $\mathbf{x}$. It is also shown that likelihood-based inference based
on HME is consistent in recovering the truth, in the sense that as the sample
size $n$ and the number of experts $m$ both increase, the mean square error of
the predicted mean response goes to zero. Conditions for such results to hold
are stated and discussed.
| [
"['Wenxin Jiang' 'Martin A. Tanner']",
"Wenxin Jiang, Martin A. Tanner"
] |
cs.LG stat.ML | null | 1301.7392 | null | null | http://arxiv.org/pdf/1301.7392v1 | 2013-01-30T15:05:09Z | 2013-01-30T15:05:09Z | Large Deviation Methods for Approximate Probabilistic Inference | We study two-layer belief networks of binary random variables in which the
conditional probabilities Pr[child|parents] depend monotonically on weighted
sums of the parents. In large networks where exact probabilistic inference is
intractable, we show how to compute upper and lower bounds on many
probabilities of interest. In particular, using methods from large deviation
theory, we derive rigorous bounds on marginal probabilities such as
Pr[children] and prove rates of convergence for the accuracy of our bounds as a
function of network size. Our results apply to networks with generic transfer
function parameterizations of the conditional probability tables, such as
sigmoid and noisy-OR. They also explicitly illustrate the types of averaging
behavior that can simplify the problem of inference in large networks.
| [
"['Michael Kearns' 'Lawrence Saul']",
"Michael Kearns, Lawrence Saul"
] |
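As a small concrete example of the transfer-function parameterizations mentioned in the abstract above (a generic noisy-OR illustration, not the paper's bounds), the probability that a child fires is an explicit monotone function of which parents are active; the activation strengths `q` and the `leak` term are arbitrary example values.

```python
import numpy as np

def noisy_or(parents, q, leak=0.0):
    """Pr[child = 1 | parents]: each active parent i independently fails to
    activate the child with probability 1 - q[i]; `leak` is a background cause."""
    parents, q = np.asarray(parents), np.asarray(q)
    return 1.0 - (1.0 - leak) * np.prod((1.0 - q) ** parents)

# Child probability grows monotonically with the set of active parents.
print(noisy_or([1, 0, 1], q=[0.8, 0.5, 0.3]))   # 1 - 0.2 * 0.7 = 0.86
```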