categories | doi | id | year | venue | link | updated | published | title | abstract | authors
string | string | string | float64 | string | string | string | string | string | string | sequence
---|---|---|---|---|---|---|---|---|---|---|
cs.LG stat.ML | null | 1206.6431 | null | null | http://arxiv.org/pdf/1206.6431v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Exact Maximum Margin Structure Learning of Bayesian Networks | Recently, there has been much interest in finding globally optimal Bayesian
network structures. These techniques were developed for generative scores and
can not be directly extended to discriminative scores, as desired for
classification. In this paper, we propose an exact method for finding network
structures maximizing the probabilistic soft margin, a successfully applied
discriminative score. Our method is based on branch-and-bound techniques within
a linear programming framework and maintains an any-time solution, together
with worst-case sub-optimality bounds. We apply a set of order constraints for
enforcing the network structure to be acyclic, which allows a compact problem
representation and the use of general-purpose optimization techniques. In
classification experiments, our methods clearly outperform generatively trained
network structures and compete with support vector machines.
| [
"Robert Peharz (Graz University of Technology), Franz Pernkopf (Graz\n University of Technology)",
"['Robert Peharz' 'Franz Pernkopf']"
] |
cs.LG cs.CE stat.ML | null | 1206.6432 | null | null | http://arxiv.org/pdf/1206.6432v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Sparse Support Vector Infinite Push | In this paper, we address the problem of embedded feature selection for
ranking on top of the list problems. We pose this problem as a regularized
empirical risk minimization with $p$-norm push loss function ($p=\infty$) and
sparsity-inducing regularizers. We address the issues related to this
challenging optimization problem by considering an alternating direction method
of multipliers algorithm which is built upon proximal operators of the loss
function and the regularizer. Our main technical contribution is thus to
provide a numerical scheme for computing the infinite push loss function
proximal operator. Experimental results on toy, DNA microarray and BCI problems
show how our novel algorithm compares favorably to competitors for ranking on
top while using fewer variables in the scoring function.
| [
"['Alain Rakotomamonjy']",
"Alain Rakotomamonjy (Universite de Rouen)"
] |
stat.ME cs.LG stat.ML | null | 1206.6433 | null | null | http://arxiv.org/pdf/1206.6433v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Copula Mixture Model for Dependency-seeking Clustering | We introduce a copula mixture model to perform dependency-seeking clustering
when co-occurring samples from different data sources are available. The model
takes advantage of the great flexibility offered by the copulas framework to
extend mixtures of Canonical Correlation Analysis to multivariate data with
arbitrary continuous marginal densities. We formulate our model as a
non-parametric Bayesian mixture, while providing efficient MCMC inference.
Experiments on synthetic and real data demonstrate that the increased
flexibility of the copula mixture significantly improves the clustering and the
interpretability of the results.
| [
"Melanie Rey (University of Basel), Volker Roth (University of Basel)",
"['Melanie Rey' 'Volker Roth']"
] |
cs.LG stat.ML | null | 1206.6434 | null | null | http://arxiv.org/pdf/1206.6434v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | A Generative Process for Sampling Contractive Auto-Encoders | The contractive auto-encoder learns a representation of the input data that
captures the local manifold structure around each data point, through the
leading singular vectors of the Jacobian of the transformation from input to
representation. The corresponding singular values specify how much local
variation is plausible in directions associated with the corresponding singular
vectors, while remaining in a high-density region of the input space. This
paper proposes a procedure for generating samples that are consistent with the
local structure captured by a contractive auto-encoder. The associated
stochastic process defines a distribution from which one can sample, and which
experimentally appears to converge quickly and mix well between modes, compared
to Restricted Boltzmann Machines and Deep Belief Networks. The intuitions
behind this procedure can also be used to train the second layer of contraction
that pools lower-level features and learns to be invariant to the local
directions of variation discovered in the first layer. We show that this can
help learn and represent invariances present in the data and improve
classification error.
| [
"Salah Rifai (Universite de Montreal), Yoshua Bengio (Universite de\n Montreal), Yann Dauphin (Universite de Montreal), Pascal Vincent (Universite\n de Montreal)",
"['Salah Rifai' 'Yoshua Bengio' 'Yann Dauphin' 'Pascal Vincent']"
] |
cs.LG stat.ML | null | 1206.6435 | null | null | http://arxiv.org/pdf/1206.6435v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Rethinking Collapsed Variational Bayes Inference for LDA | We propose a novel interpretation of the collapsed variational Bayes
inference with a zero-order Taylor expansion approximation, called CVB0
inference, for latent Dirichlet allocation (LDA). We clarify the properties of
the CVB0 inference by using the alpha-divergence. We show that the CVB0
inference is composed of two different divergence projections: alpha=1 and -1.
This interpretation will help shed light on how CVB0 works.
| [
"['Issei Sato' 'Hiroshi Nakagawa']",
"Issei Sato (The University of Tokyo), Hiroshi Nakagawa (The University\n of Tokyo)"
] |
cs.LG stat.ML | null | 1206.6436 | null | null | http://arxiv.org/pdf/1206.6436v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Efficient Structured Prediction with Latent Variables for General
Graphical Models | In this paper we propose a unified framework for structured prediction with
latent variables which includes hidden conditional random fields and latent
structured support vector machines as special cases. We describe a local
entropy approximation for this general formulation using duality, and derive an
efficient message passing algorithm that is guaranteed to converge. We
demonstrate its effectiveness in the tasks of image segmentation as well as 3D
indoor scene understanding from single images, showing that our approach is
superior to latent structured support vector machines and hidden conditional
random fields.
| [
"['Alexander Schwing' 'Tamir Hazan' 'Marc Pollefeys' 'Raquel Urtasun']",
"Alexander Schwing (ETH Zurich), Tamir Hazan (TTIC), Marc Pollefeys\n (ETH Zurich), Raquel Urtasun (TTIC)"
] |
cs.CV cs.LG stat.ML | null | 1206.6437 | null | null | http://arxiv.org/pdf/1206.6437v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Large Scale Variational Bayesian Inference for Structured Scale Mixture
Models | Natural image statistics exhibit hierarchical dependencies across multiple
scales. Representing such prior knowledge in non-factorial latent tree models
can boost performance of image denoising, inpainting, deconvolution or
reconstruction substantially, beyond standard factorial "sparse" methodology.
We derive a large scale approximate Bayesian inference algorithm for linear
models with non-factorial (latent tree-structured) scale mixture priors.
Experimental results on a range of denoising and inpainting problems
demonstrate substantially improved performance compared to MAP estimation or to
inference with factorial priors.
| [
"['Young Jun Ko' 'Matthias Seeger']",
"Young Jun Ko (Ecole Polytechnique Federale de Lausanne), Matthias\n Seeger (Ecole Polytechnique Federale de Lausanne)"
] |
cs.LG stat.ML | null | 1206.6438 | null | null | http://arxiv.org/pdf/1206.6438v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Information-Theoretical Learning of Discriminative Clusters for
Unsupervised Domain Adaptation | We study the problem of unsupervised domain adaptation, which aims to adapt
classifiers trained on a labeled source domain to an unlabeled target domain.
Many existing approaches first learn domain-invariant features and then
construct classifiers with them. We propose a novel approach that jointly learns
both. Specifically, while the method identifies a feature space where data
in the source and the target domains are similarly distributed, it also learns
the feature space discriminatively, optimizing an information-theoretic metric
as a proxy to the expected misclassification error on the target domain. We
show how this optimization can be effectively carried out with simple
gradient-based methods and how hyperparameters can be cross-validated without
demanding any labeled data from the target domain. Empirical studies on
benchmark tasks of object recognition and sentiment analysis validated our
modeling assumptions and demonstrated significant improvement of our method
over competing ones in classification accuracies.
| [
"Yuan Shi (University of Southern California), Fei Sha (University of\n Southern California)",
"['Yuan Shi' 'Fei Sha']"
] |
cs.CE cs.LG stat.AP | null | 1206.6439 | null | null | http://arxiv.org/pdf/1206.6439v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Gap Filling in the Plant Kingdom---Trait Prediction Using Hierarchical
Probabilistic Matrix Factorization | Plant traits are a key to understanding and predicting the adaptation of
ecosystems to environmental changes, which motivates the TRY project aiming at
constructing a global database for plant traits and becoming a standard
resource for the ecological community. Despite its unprecedented coverage, a
large percentage of missing data substantially constrains joint trait analysis.
Meanwhile, the trait data is characterized by the hierarchical phylogenetic
structure of the plant kingdom. While factorization based matrix completion
techniques have been widely used to address the missing data problem,
traditional matrix factorization methods are unable to leverage the
phylogenetic structure. We propose hierarchical probabilistic matrix
factorization (HPMF), which effectively uses hierarchical phylogenetic
information for trait prediction. We demonstrate HPMF's high accuracy,
effectiveness of incorporating hierarchical structure and ability to capture
trait correlation through experiments.
| [
"Hanhuai Shan (University of Minnesota), Jens Kattge (Max Planck\n Institute for Biogeochemistry), Peter Reich (University of Minnesota),\n Arindam Banerjee (University of Minnesota), Franziska Schrodt (University of\n Minnesota), Markus Reichstein (Max Planck Institute for Biogeochemistry)",
"['Hanhuai Shan' 'Jens Kattge' 'Peter Reich' 'Arindam Banerjee'\n 'Franziska Schrodt' 'Markus Reichstein']"
] |
cs.LG stat.ML | null | 1206.6440 | null | null | http://arxiv.org/pdf/1206.6440v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Predicting Preference Flips in Commerce Search | Traditional approaches to ranking in web search follow the paradigm of
rank-by-score: a learned function gives each query-URL combination an absolute
score and URLs are ranked according to this score. This paradigm ensures that
if the score of one URL is better than another then one will always be ranked
higher than the other. Scoring contradicts prior work in behavioral economics
that showed that users' preferences between two items depend not only on the
items but also on the presented alternatives. Thus, for the same query, users'
preference between items A and B depends on the presence/absence of item C. We
propose a new model of ranking, the Random Shopper Model, that allows and
explains such behavior. In this model, each feature is viewed as a Markov chain
over the items to be ranked, and the goal is to find a weighting of the
features that best reflects their importance. We show that our model can be
learned under the empirical risk minimization framework, and give an efficient
learning algorithm. Experiments on commerce search logs demonstrate that our
algorithm outperforms scoring-based approaches including regression and
listwise ranking.
| [
"['Or Sheffet' 'Nina Mishra' 'Samuel Ieong']",
"Or Sheffet (Carnegie Mellon University), Nina Mishra (Microsoft\n Research), Samuel Ieong (Microsoft Research)"
] |
cs.LG cs.IR stat.ML | null | 1206.6441 | null | null | http://arxiv.org/pdf/1206.6441v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | A Topic Model for Melodic Sequences | We examine the problem of learning a probabilistic model for melody directly
from musical sequences belonging to the same genre. This is a challenging task
as one needs to capture not only the rich temporal structure evident in music,
but also the complex statistical dependencies among different music components.
To address this problem we introduce the Variable-gram Topic Model, which
couples the latent topic formalism with a systematic model for contextual
information. We evaluate the model on next-step prediction. Additionally, we
present a novel way of model evaluation, where we directly compare model
samples with data sequences using the Maximum Mean Discrepancy of string
kernels, to assess how close the model distribution is to the data
distribution. We show that the model has the highest performance under both
evaluation measures when compared to LDA, the Topic Bigram and related
non-topic models.
| [
"['Athina Spiliopoulou' 'Amos Storkey']",
"Athina Spiliopoulou (University of Edinburgh), Amos Storkey\n (University of Edinburgh)"
] |
cs.LG stat.ML | null | 1206.6442 | null | null | http://arxiv.org/pdf/1206.6442v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Minimizing The Misclassification Error Rate Using a Surrogate Convex
Loss | We carefully study how well minimizing convex surrogate loss functions
corresponds to minimizing the misclassification error rate for the problem of
binary classification with linear predictors. In particular, we show that
amongst all convex surrogate losses, the hinge loss gives essentially the best
possible bound for the misclassification error
rate of the resulting linear predictor in terms of the best possible margin
error rate. We also provide lower bounds for specific convex surrogates that
show how different commonly used losses qualitatively differ from each other.
| [
"Shai Ben-David (University of Waterloo), David Loker (University of\n Waterloo), Nathan Srebro (TTIC), Karthik Sridharan (University of\n Pennsylvania)",
"['Shai Ben-David' 'David Loker' 'Nathan Srebro' 'Karthik Sridharan']"
] |
cs.LG cs.GT stat.ML | null | 1206.6443 | null | null | http://arxiv.org/pdf/1206.6443v2 | 2012-09-04T17:50:18Z | 2012-06-27T19:59:59Z | Isoelastic Agents and Wealth Updates in Machine Learning Markets | Recently, prediction markets have shown considerable promise for developing
flexible mechanisms for machine learning. In this paper, agents with isoelastic
utilities are considered. It is shown that the costs associated with
homogeneous markets of agents with isoelastic utilities produce equilibrium
prices corresponding to alpha-mixtures, with a particular form of mixing
component relating to each agent's wealth. We also demonstrate that wealth
accumulation for logarithmic and other isoelastic agents (through payoffs on
prediction of training targets) can implement both Bayesian model updates and
mixture weight updates by imposing different market payoff structures. An
iterative algorithm is given for market equilibrium computation. We demonstrate
that inhomogeneous markets of agents with isoelastic utilities outperform state
of the art aggregate classifiers such as random forests, as well as single
classifiers (neural networks, decision trees) on a number of machine learning
benchmarks, and show that isoelastic combination methods are generally better
than their logarithmic counterparts.
| [
"Amos Storkey (University of Edinburgh), Jono Millin (University of\n Edinburgh), Krzysztof Geras (University of Edinburgh)",
"['Amos Storkey' 'Jono Millin' 'Krzysztof Geras']"
] |
cs.LG stat.ML | null | 1206.6444 | null | null | http://arxiv.org/pdf/1206.6444v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Statistical Linear Estimation with Penalized Estimators: an Application
to Reinforcement Learning | Motivated by value function estimation in reinforcement learning, we study
statistical linear inverse problems, i.e., problems where the coefficients of a
linear system to be solved are observed in noise. We consider penalized
estimators, where performance is evaluated using a matrix-weighted two-norm of
the defect of the estimator measured with respect to the true, unknown
coefficients. Two objective functions are considered depending on whether the
error of the defect measured with respect to the noisy coefficients is squared
or unsquared. We propose simple, yet novel and theoretically well-founded
data-dependent choices for the regularization parameters for both cases that
avoid data-splitting. A distinguishing feature of our analysis is that we
derive deterministic error bounds in terms of the error of the coefficients,
thus allowing the complete separation of the analysis of the stochastic
properties of these errors. We show that our results lead to new insights and
bounds for linear value function estimation in reinforcement learning.
| [
"Bernardo Avila Pires (University of Alberta), Csaba Szepesvari\n (University of Alberta)",
"['Bernardo Avila Pires' 'Csaba Szepesvari']"
] |
cs.CV cs.LG stat.ML | null | 1206.6445 | null | null | http://arxiv.org/pdf/1206.6445v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Deep Lambertian Networks | Visual perception is a challenging problem in part due to illumination
variations. A possible solution is to first estimate an illumination invariant
representation before using it for recognition. The object albedo and surface
normals are examples of such representations. In this paper, we introduce a
multilayer generative model where the latent variables include the albedo,
surface normals, and the light source. Combining Deep Belief Nets with the
Lambertian reflectance assumption, our model can learn good priors over the
albedo from 2D images. Illumination variations can be explained by changing
only the lighting latent variable in our model. By transferring learned
knowledge from similar objects, albedo and surface normals estimation from a
single image is possible in our model. Experiments demonstrate that our model
is able to generalize as well as improve over standard baselines in one-shot
face recognition.
| [
"Yichuan Tang (University of Toronto), Ruslan Salakhutdinov (University\n of Toronto), Geoffrey Hinton (University of Toronto)",
"['Yichuan Tang' 'Ruslan Salakhutdinov' 'Geoffrey Hinton']"
] |
cs.LG stat.ML | null | 1206.6446 | null | null | http://arxiv.org/pdf/1206.6446v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Agglomerative Bregman Clustering | This manuscript develops the theory of agglomerative clustering with Bregman
divergences. Geometric smoothing techniques are developed to deal with
degenerate clusters. To allow for cluster models based on exponential families
with overcomplete representations, Bregman divergences are developed for
nondifferentiable convex functions.
| [
"['Matus Telgarsky' 'Sanjoy Dasgupta']",
"Matus Telgarsky (UCSD), Sanjoy Dasgupta (UCSD)"
] |
cs.LG cs.CV stat.AP stat.ML | null | 1206.6447 | null | null | http://arxiv.org/pdf/1206.6447v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Small-sample Brain Mapping: Sparse Recovery on Spatially Correlated
Designs with Randomization and Clustering | Functional neuroimaging can measure the brain's response to an external
stimulus. It is used to perform brain mapping: identifying from these
observations the brain regions involved. This problem can be cast into a linear
supervised learning task where the neuroimaging data are used as predictors for
the stimulus. Brain mapping is then seen as a support recovery problem. On
functional MRI (fMRI) data, this problem is particularly challenging as i) the
number of samples is small due to limited acquisition time and ii) the
variables are strongly correlated. We propose to overcome these difficulties
using sparse regression models over new variables obtained by clustering of the
original variables. The use of randomization techniques, e.g. bootstrap
samples, and clustering of the variables improves the recovery properties of
sparse methods. We demonstrate the benefit of our approach on an extensive
simulation study as well as two fMRI datasets.
| [
"Gael Varoquaux (INRIA), Alexandre Gramfort (INRIA), Bertrand Thirion\n (INRIA)",
"['Gael Varoquaux' 'Alexandre Gramfort' 'Bertrand Thirion']"
] |
cs.LG stat.ML | null | 1206.6448 | null | null | http://arxiv.org/pdf/1206.6448v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Online Alternating Direction Method | Online optimization has emerged as a powerful tool in large scale optimization.
In this paper, we introduce efficient online algorithms based on the
alternating directions method (ADM). We introduce a new proof technique for ADM
in the batch setting, which yields the O(1/T) convergence rate of ADM and forms
the basis of regret analysis in the online setting. We consider two scenarios
in the online setting, based on whether the solution needs to lie in the
feasible set or not. In both settings, we establish regret bounds for both the
objective function as well as constraint violation for general and strongly
convex functions. Preliminary results are presented to illustrate the
performance of the proposed algorithms.
| [
"Huahua Wang (University of Minnesota), Arindam Banerjee (University of\n Minnesota)",
"['Huahua Wang' 'Arindam Banerjee']"
] |
cs.LG stat.ML | null | 1206.6449 | null | null | http://arxiv.org/pdf/1206.6449v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Monte Carlo Bayesian Reinforcement Learning | Bayesian reinforcement learning (BRL) encodes prior knowledge of the world in
a model and represents uncertainty in model parameters by maintaining a
probability distribution over them. This paper presents Monte Carlo BRL
(MC-BRL), a simple and general approach to BRL. MC-BRL samples a priori a
finite set of hypotheses for the model parameter values and forms a discrete
partially observable Markov decision process (POMDP) whose state space is a
cross product of the state space for the reinforcement learning task and the
sampled model parameter space. The POMDP does not require conjugate
distributions for belief representation, as earlier works do, and can be solved
relatively easily with point-based approximation algorithms. MC-BRL naturally
handles both fully and partially observable worlds. Theoretical and
experimental results show that the discrete POMDP approximates the underlying
BRL task well with guaranteed performance.
| [
"['Yi Wang' 'Kok Sung Won' 'David Hsu' 'Wee Sun Lee']",
"Yi Wang (NUS), Kok Sung Won (NUS), David Hsu (NUS), Wee Sun Lee (NUS)"
] |
cs.LG stat.ML | null | 1206.6450 | null | null | http://arxiv.org/pdf/1206.6450v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Conditional Sparse Coding and Grouped Multivariate Regression | We study the problem of multivariate regression where the data are naturally
grouped, and a regression matrix is to be estimated for each group. We propose
an approach in which a dictionary of low rank parameter matrices is estimated
across groups, and a sparse linear combination of the dictionary elements is
estimated to form a model within each group. We refer to the method as
conditional sparse coding since it is a coding procedure for the response
vectors Y conditioned on the covariate vectors X. This approach captures the
shared information across the groups while adapting to the structure within
each group. It exploits the same intuition behind sparse coding that has been
successfully developed in computer vision and computational neuroscience. We
propose an algorithm for conditional sparse coding, analyze its theoretical
properties in terms of predictive accuracy, and present the results of
simulation and brain imaging experiments that compare the new technique to
reduced rank regression.
| [
"['Min Xu' 'John Lafferty']",
"Min Xu (Carnegie Mellon University), John Lafferty (University of\n Chicago)"
] |
cs.LG stat.ML | null | 1206.6451 | null | null | http://arxiv.org/pdf/1206.6451v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | The Greedy Miser: Learning under Test-time Budgets | As machine learning algorithms enter applications in industrial settings,
there is increased interest in controlling their cpu-time during testing. The
cpu-time consists of the running time of the algorithm and the extraction time
of the features. The latter can vary drastically when the feature set is
diverse. In this paper, we propose an algorithm, the Greedy Miser, that
incorporates the feature extraction cost during training to explicitly minimize
the cpu-time during testing. The algorithm is a straightforward extension of
stage-wise regression and is equally suitable for regression or multi-class
classification. Compared to prior work, it is significantly more cost-effective
and scales to larger data sets.
| [
"Zhixiang Xu (Washington University, St. Louis), Kilian Weinberger\n (Washington University, St. Louis), Olivier Chapelle (Criteo)",
"['Zhixiang Xu' 'Kilian Weinberger' 'Olivier Chapelle']"
] |
cs.LG math.OC stat.ML | null | 1206.6452 | null | null | http://arxiv.org/pdf/1206.6452v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Smoothness and Structure Learning by Proxy | As data sets grow in size, the ability of learning methods to find structure
in them is increasingly hampered by the time needed to search the large spaces
of possibilities and generate a score for each that takes all of the observed
data into account. For instance, Bayesian networks, the model chosen in this
paper, have a super-exponentially large search space for a fixed number of
variables. One possible method to alleviate this problem is to use a proxy,
such as a Gaussian Process regressor, in place of the true scoring function,
training it on a selection of sampled networks. We prove here that the use of
such a proxy is well-founded, as we can bound the smoothness of a commonly-used
scoring function for Bayesian network structure learning. We show here that,
compared to an identical search strategy using the network's exact scores, our
proxy-based search is able to get equivalent or better scores on a number of
data sets in a fraction of the time.
| [
"['Benjamin Yackley' 'Terran Lane']",
"Benjamin Yackley (University of New Mexico), Terran Lane (University\n of New Mexico)"
] |
cs.LG stat.ML | null | 1206.6453 | null | null | http://arxiv.org/pdf/1206.6453v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Adaptive Canonical Correlation Analysis Based On Matrix Manifolds | In this paper, we formulate the Canonical Correlation Analysis (CCA) problem
on matrix manifolds. This framework provides a natural way for dealing with
matrix constraints and tools for building efficient algorithms even in an
adaptive setting. Finally, an adaptive CCA algorithm is proposed and applied to
a change detection problem in EEG signals.
| [
"Florian Yger (LITIS), Maxime Berar (LITIS), Gilles Gasso (INSA de\n Rouen), Alain Rakotomamonjy (INSA de Rouen)",
"['Florian Yger' 'Maxime Berar' 'Gilles Gasso' 'Alain Rakotomamonjy']"
] |
cs.LG stat.ML | null | 1206.6454 | null | null | http://arxiv.org/pdf/1206.6454v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Hierarchical Exploration for Accelerating Contextual Bandits | Contextual bandit learning is an increasingly popular approach to optimizing
recommender systems via user feedback, but can be slow to converge in practice
due to the need for exploring a large feature space. In this paper, we propose
a coarse-to-fine hierarchical approach for encoding prior knowledge that
drastically reduces the amount of exploration required. Intuitively, user
preferences can be reasonably embedded in a coarse low-dimensional feature
space that can be explored efficiently, requiring exploration in the
high-dimensional space only as necessary. We introduce a bandit algorithm that
explores within this coarse-to-fine spectrum, and prove performance guarantees
that depend on how well the coarse space captures the user's preferences. We
demonstrate substantial improvement over conventional bandit algorithms through
extensive simulation as well as a live user study in the setting of
personalized news recommendation.
| [
"Yisong Yue (Carnegie Mellon University), Sue Ann Hong (Carnegie Mellon\n University), Carlos Guestrin (Carnegie Mellon University)",
"['Yisong Yue' 'Sue Ann Hong' 'Carlos Guestrin']"
] |
cs.LG stat.ML | null | 1206.6455 | null | null | http://arxiv.org/pdf/1206.6455v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Regularizers versus Losses for Nonlinear Dimensionality Reduction: A
Factored View with New Convex Relaxations | We demonstrate that almost all non-parametric dimensionality reduction
methods can be expressed by a simple procedure: regularized loss minimization
plus singular value truncation. By distinguishing the role of the loss and
regularizer in such a process, we recover a factored perspective that reveals
some gaps in the current literature. Beyond identifying a useful new loss for
manifold unfolding, a key contribution is to derive new convex regularizers
that combine distance maximization with rank reduction. These regularizers can
be applied to any loss.
| [
"Yaoliang Yu (University of Alberta), James Neufeld (University of\n Alberta), Ryan Kiros (University of Alberta), Xinhua Zhang (University of\n Alberta), Dale Schuurmans (University of Alberta)",
"['Yaoliang Yu' 'James Neufeld' 'Ryan Kiros' 'Xinhua Zhang'\n 'Dale Schuurmans']"
] |
stat.AP cs.LG stat.ME | null | 1206.6456 | null | null | http://arxiv.org/pdf/1206.6456v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Lognormal and Gamma Mixed Negative Binomial Regression | In regression analysis of counts, a lack of simple and efficient algorithms
for posterior computation has made Bayesian approaches appear unattractive and
thus underdeveloped. We propose a lognormal and gamma mixed negative binomial
(NB) regression model for counts, and present efficient closed-form Bayesian
inference; unlike conventional Poisson models, the proposed approach has two
free parameters to include two different kinds of random effects, and allows
the incorporation of prior information, such as sparsity in the regression
coefficients. By placing a gamma distribution prior on the NB dispersion
parameter r, and connecting a lognormal distribution prior with the logit of
the NB probability parameter p, efficient Gibbs sampling and variational Bayes
inference are both developed. The closed-form updates are obtained by
exploiting conditional conjugacy via both a compound Poisson representation and
a Polya-Gamma distribution based data augmentation approach. The proposed
Bayesian inference can be implemented routinely, while being easily
generalizable to more complex settings involving multivariate dependence
structures. The algorithms are illustrated using real examples.
| [
"['Mingyuan Zhou' 'Lingbo Li' 'David Dunson' 'Lawrence Carin']",
"Mingyuan Zhou (Duke University), Lingbo Li (Duke University), David\n Dunson (Duke University), Lawrence Carin (Duke University)"
] |
cs.LG stat.ML | null | 1206.6457 | null | null | http://arxiv.org/pdf/1206.6457v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Exponential Regret Bounds for Gaussian Process Bandits with
Deterministic Observations | This paper analyzes the problem of Gaussian process (GP) bandits with
deterministic observations. The analysis uses a branch and bound algorithm that
is related to the UCB algorithm of (Srinivas et al, 2010). For GPs with
Gaussian observation noise, with variance strictly greater than zero, Srinivas
et al proved that the regret vanishes at the approximate rate of
$O(1/\sqrt{t})$, where t is the number of observations. To complement their
result, we attack the deterministic case and attain a much faster exponential
convergence rate. Under some regularity assumptions, we show that the regret
decreases asymptotically according to $O(e^{-\frac{\tau t}{(\ln t)^{d/4}}})$
with high probability. Here, d is the dimension of the search space and tau is
a constant that depends on the behaviour of the objective function near its
global maximum.
| [
"['Nando de Freitas' 'Alex Smola' 'Masrour Zoghi']",
"Nando de Freitas (University of British Columbia), Alex Smola (Yahoo!\n Research), Masrour Zoghi (University of British Columbia)"
] |
cs.LG stat.ML | null | 1206.6458 | null | null | http://arxiv.org/pdf/1206.6458v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Batch Active Learning via Coordinated Matching | Most prior work on active learning of classifiers has focused on sequentially
selecting one unlabeled example at a time to be labeled in order to reduce the
overall labeling effort. In many scenarios, however, it is desirable to label
an entire batch of examples at once, for example, when labels can be acquired
in parallel. This motivates us to study batch active learning, which
iteratively selects batches of $k>1$ examples to be labeled. We propose a novel
batch active learning method that leverages the availability of high-quality
and efficient sequential active-learning policies by attempting to approximate
their behavior when applied for $k$ steps. Specifically, our algorithm first
uses Monte-Carlo simulation to estimate the distribution of unlabeled examples
selected by a sequential policy over $k$ step executions. The algorithm then
attempts to select a set of $k$ examples that best matches this distribution,
leading to a combinatorial optimization problem that we term "bounded
coordinated matching". While we show this problem is NP-hard in general, we
give an efficient greedy solution, which inherits approximation bounds from
supermodular minimization theory. Our experimental results on eight benchmark
datasets show that the proposed approach is highly effective.
| [
"['Javad Azimi' 'Alan Fern' 'Xiaoli Zhang-Fern' 'Glencora Borradaile'\n 'Brent Heeringa']",
"Javad Azimi (Oregon State University), Alan Fern (Oregon State\n University), Xiaoli Zhang-Fern (Oregon State University), Glencora Borradaile\n (Oregon State University), Brent Heeringa (Williams College)"
] |
cs.CE cs.LG stat.ME | null | 1206.6459 | null | null | http://arxiv.org/pdf/1206.6459v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Bayesian Conditional Cointegration | Cointegration is an important topic for time-series, and describes a
relationship between two series in which a linear combination is stationary.
Classically, the test for cointegration is based on a two stage process in
which first the linear relation between the series is estimated by Ordinary
Least Squares. Subsequently a unit root test is performed on the residuals. A
well-known deficiency of this classical approach is that it can lead to
erroneous conclusions about the presence of cointegration. As an alternative,
we present a framework for estimating whether cointegration exists using
Bayesian inference which is empirically superior to the classical approach.
Finally, we apply our technique to model segmented cointegration in which
cointegration may exist only for limited time. In contrast to previous
approaches our model makes no restriction on the number of possible
cointegration segments.
| [
"Chris Bracegirdle (University College London), David Barber\n (University College London)",
"['Chris Bracegirdle' 'David Barber']"
] |
cs.LG cs.AI stat.ML | null | 1206.6460 | null | null | http://arxiv.org/pdf/1206.6460v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Output Space Search for Structured Prediction | We consider a framework for structured prediction based on search in the
space of complete structured outputs. Given a structured input, an output is
produced by running a time-bounded search procedure guided by a learned cost
function, and then returning the least cost output uncovered during the search.
This framework can be instantiated for a wide range of search spaces and search
procedures, and easily incorporates arbitrary structured-prediction loss
functions. In this paper, we make two main technical contributions. First, we
define the limited-discrepancy search space over structured outputs, which is
able to leverage powerful classification learning algorithms to improve the
search space quality. Second, we give a generic cost function learning
approach, where the key idea is to learn a cost function that attempts to mimic
the behavior of conducting searches guided by the true loss function. Our
experiments on six benchmark domains demonstrate that using our framework with
only a small amount of search is sufficient for significantly improving on
state-of-the-art structured-prediction performance.
| [
"Janardhan Rao Doppa (Oregon State University), Alan Fern (Oregon State\n University), Prasad Tadepalli (Oregon State University)",
"['Janardhan Rao Doppa' 'Alan Fern' 'Prasad Tadepalli']"
] |
cs.LG stat.ML | null | 1206.6461 | null | null | http://arxiv.org/pdf/1206.6461v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | On the Sample Complexity of Reinforcement Learning with a Generative
Model | We consider the problem of learning the optimal action-value function in the
discounted-reward Markov decision processes (MDPs). We prove a new PAC bound on
the sample-complexity of model-based value iteration algorithm in the presence
of the generative model, which indicates that for an MDP with N state-action
pairs and the discount factor \gamma\in[0,1) only
O(N\log(N/\delta)/((1-\gamma)^3\epsilon^2)) samples are required to find an
\epsilon-optimal estimation of the action-value function with the probability
1-\delta. We also prove a matching lower bound of \Theta
(N\log(N/\delta)/((1-\gamma)^3\epsilon^2)) on the sample complexity of
estimating the optimal action-value function by every RL algorithm. To the best
of our knowledge, this is the first matching result on the sample complexity of
estimating the optimal (action-) value function in which the upper bound
matches the lower bound of RL in terms of N, \epsilon, \delta and 1/(1-\gamma).
Also, both our lower bound and our upper bound significantly improve on the
state-of-the-art in terms of 1/(1-\gamma).
| [
"Mohammad Gheshlaghi Azar (Radboud University), Remi Munos (INRIA\n Lille), Bert Kappen (Radboud University)",
"['Mohammad Gheshlaghi Azar' 'Remi Munos' 'Bert Kappen']"
] |
cs.LG cs.CV cs.RO stat.ML | null | 1206.6462 | null | null | http://arxiv.org/pdf/1206.6462v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Learning Object Arrangements in 3D Scenes using Human Context | We consider the problem of learning object arrangements in a 3D scene. The
key idea here is to learn how objects relate to human poses based on their
affordances, ease of use and reachability. In contrast to modeling
object-object relationships, modeling human-object relationships scales
linearly in the number of objects. We design appropriate density functions
based on 3D spatial features to capture this. We learn the distribution of
human poses in a scene using a variant of the Dirichlet process mixture model
that allows sharing of the density function parameters across the same object
types. Then we can reason about arrangements of the objects in the room based
on these meaningful human poses. In our extensive experiments on 20 different
rooms with a total of 47 objects, our algorithm predicted correct placements
with an average error of 1.6 meters from ground truth. In arranging five real
scenes, it received a score of 4.3/5 compared to 3.7 for the best baseline
method.
| [
"['Yun Jiang' 'Marcus Lim' 'Ashutosh Saxena']",
"Yun Jiang (Cornell University), Marcus Lim (Cornell University),\n Ashutosh Saxena (Cornell University)"
] |
cs.LG stat.ML | null | 1206.6463 | null | null | http://arxiv.org/pdf/1206.6463v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | An Iterative Locally Linear Embedding Algorithm | Locally Linear Embedding (LLE) is a popular dimension reduction method. In this
paper, we first show LLE with nonnegative constraint is equivalent to the
widely used Laplacian embedding. We further propose to iterate the two steps in
LLE repeatedly to improve the results. Thirdly, we relax the kNN constraint of
LLE and present a sparse similarity learning algorithm. The final Iterative LLE
combines these three improvements. Extensive experimental results show that the
iterative LLE algorithm significantly improves both classification and
clustering results.
| [
"Deguang Kong (The University of Texas at Arlington), Chris H.Q. Ding\n (The University of Texas at Arlington), Heng Huang (The University of Texas\n at Arlington), Feiping Nie (The University of Texas at Arlington)",
"['Deguang Kong' 'Chris H. Q. Ding' 'Heng Huang' 'Feiping Nie']"
] |
cs.LG stat.ML | null | 1206.6464 | null | null | http://arxiv.org/pdf/1206.6464v2 | 2012-09-04T18:32:03Z | 2012-06-27T19:59:59Z | Estimating the Hessian by Back-propagating Curvature | In this work we develop Curvature Propagation (CP), a general technique for
efficiently computing unbiased approximations of the Hessian of any function
that is computed using a computational graph. At the cost of roughly two
gradient evaluations, CP can give a rank-1 approximation of the whole Hessian,
and can be repeatedly applied to give increasingly precise unbiased estimates
of any or all of the entries of the Hessian. Of particular interest is the
diagonal of the Hessian, for which no general approach is known to exist that
is both efficient and accurate. We show in experiments that CP turns out to
work well in practice, giving very accurate estimates of the Hessian of neural
networks, for example, with a relatively small amount of work. We also apply CP
to Score Matching, where a diagonal of a Hessian plays an integral role in the
Score Matching objective, and where it is usually computed exactly using
inefficient algorithms which do not scale to larger and more complex models.
| [
"['James Martens' 'Ilya Sutskever' 'Kevin Swersky']",
"James Martens (University of Toronto), Ilya Sutskever (University of\n Toronto), Kevin Swersky (University of Toronto)"
] |
cs.LG stat.ML | null | 1206.6465 | null | null | http://arxiv.org/pdf/1206.6465v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Bayesian Efficient Multiple Kernel Learning | Multiple kernel learning algorithms are proposed to combine kernels in order
to obtain a better similarity measure or to integrate feature representations
coming from different data sources. Most of the previous research on such
methods is focused on the computational efficiency issue. However, it is still
not feasible to combine many kernels using existing Bayesian approaches due to
their high time complexity. We propose a fully conjugate Bayesian formulation
and derive a deterministic variational approximation, which allows us to
combine hundreds or thousands of kernels very efficiently. We briefly explain
how the proposed method can be extended for multiclass learning and
semi-supervised learning. Experiments with large numbers of kernels on
benchmark data sets show that our inference method is quite fast, requiring
less than a minute. On one bioinformatics and three image recognition data
sets, our method outperforms previously reported results with better
generalization performance.
| [
"['Mehmet Gonen']",
"Mehmet Gonen (Aalto University)"
] |
cs.LG stat.ML | null | 1206.6467 | null | null | http://arxiv.org/pdf/1206.6467v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Semi-Supervised Collective Classification via Hybrid Label
Regularization | Many classification problems involve data instances that are interlinked with
each other, such as webpages connected by hyperlinks. Techniques for
"collective classification" (CC) often increase accuracy for such data graphs,
but usually require a fully-labeled training graph. In contrast, we examine how
to improve the semi-supervised learning of CC models when given only a
sparsely-labeled graph, a common situation. We first describe how to use novel
combinations of classifiers to exploit the different characteristics of the
relational features vs. the non-relational features. We also extend the ideas
of "label regularization" to such hybrid classifiers, enabling them to leverage
the unlabeled data to bias the learning process. We find that these techniques,
which are efficient and easy to implement, significantly increase accuracy on
three real datasets. In addition, our results explain conflicting findings from
prior related studies.
| [
"['Luke McDowell' 'David Aha']",
"Luke McDowell (U.S. Naval Academy), David Aha (U.S. Naval Research\n Laboratory)"
] |
cs.LG cs.SD stat.ML | null | 1206.6468 | null | null | http://arxiv.org/pdf/1206.6468v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Variational Inference in Non-negative Factorial Hidden Markov Models for
Efficient Audio Source Separation | The past decade has seen substantial work on the use of non-negative matrix
factorization and its probabilistic counterparts for audio source separation.
Although able to capture audio spectral structure well, these models neglect
the non-stationarity and temporal dynamics that are important properties of
audio. The recently proposed non-negative factorial hidden Markov model
(N-FHMM) introduces a temporal dimension and improves source separation
performance. However, the factorial nature of this model makes the complexity
of inference exponential in the number of sound sources. Here, we present a
Bayesian variant of the N-FHMM suited to an efficient variational inference
algorithm, whose complexity is linear in the number of sound sources. Our
algorithm performs comparably to exact inference in the original N-FHMM but is
significantly faster. In typical configurations of the N-FHMM, our method
achieves around a 30x increase in speed.
| [
"['Gautham Mysore' 'Maneesh Sahani']",
"Gautham Mysore (Adobe Systems), Maneesh Sahani (University College\n London)"
] |
cs.LG stat.ML | null | 1206.6469 | null | null | http://arxiv.org/pdf/1206.6469v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Inferring Latent Structure From Mixed Real and Categorical Relational
Data | We consider analysis of relational data (a matrix), in which the rows
correspond to subjects (e.g., people) and the columns correspond to attributes.
The elements of the matrix may be a mix of real and categorical. Each subject
and attribute is characterized by a latent binary feature vector, and an
inferred matrix maps each row-column pair of binary feature vectors to an
observed matrix element. The latent binary features of the rows are modeled via
a multivariate Gaussian distribution with low-rank covariance matrix, and the
Gaussian random variables are mapped to latent binary features via a probit
link. The same type of construction is applied jointly to the columns. The model
infers latent, low-dimensional binary features associated with each row and
each column, as well as correlation structure between all rows and between all
columns.
| [
"Esther Salazar (Duke University), Matthew Cain (Duke University),\n Elise Darling (Duke University), Stephen Mitroff (Duke University), Lawrence\n Carin (Duke University)",
"['Esther Salazar' 'Matthew Cain' 'Elise Darling' 'Stephen Mitroff'\n 'Lawrence Carin']"
] |
cs.LG cs.DM cs.NA stat.ML | null | 1206.6470 | null | null | http://arxiv.org/pdf/1206.6470v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | A Combinatorial Algebraic Approach for the Identifiability of Low-Rank
Matrix Completion | In this paper, we review the problem of matrix completion and expose its
intimate relations with algebraic geometry, combinatorics and graph theory. We
present the first necessary and sufficient combinatorial conditions for
matrices of arbitrary rank to be identifiable from a set of matrix entries,
yielding theoretical constraints and new algorithms for the problem of matrix
completion. We conclude by algorithmically evaluating the tightness of the
given conditions and algorithms for practically relevant matrix sizes, showing
that the algebraic-combinatoric approach can lead to improvements over
state-of-the-art matrix completion methods.
| [
"['Franz Kiraly' 'Ryota Tomioka']",
"Franz Kiraly (TU Berlin), Ryota Tomioka (University of Tokyo)"
] |
cs.LG stat.ML | null | 1206.6471 | null | null | http://arxiv.org/pdf/1206.6471v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | On Causal and Anticausal Learning | We consider the problem of function estimation in the case where an
underlying causal model can be inferred. This has implications for popular
scenarios such as covariate shift, concept drift, transfer learning and
semi-supervised learning. We argue that causal knowledge may facilitate some
approaches for a given problem, and rule out others. In particular, we
formulate a hypothesis for when semi-supervised learning can help, and
corroborate it with empirical results.
| [
"['Bernhard Schoelkopf' 'Dominik Janzing' 'Jonas Peters' 'Eleni Sgouritsa'\n 'Kun Zhang' 'Joris Mooij']",
"Bernhard Schoelkopf (Max Planck Institute for Intelligent Systems),\n Dominik Janzing (Max Planck Institute for Intelligent Systems), Jonas Peters\n (Max Planck Institute for Intelligent Systems), Eleni Sgouritsa (Max Planck\n Institute for Intelligent Systems), Kun Zhang (Max Planck Institute for\n Intelligent Systems), Joris Mooij (Radboud University)"
] |
cs.LG stat.ML | null | 1206.6472 | null | null | http://arxiv.org/pdf/1206.6472v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | An Efficient Approach to Sparse Linear Discriminant Analysis | We present a novel approach to the formulation and the resolution of sparse
Linear Discriminant Analysis (LDA). Our proposal is based on penalized Optimal
Scoring. It has an exact equivalence with penalized LDA, contrary to the
multi-class approaches based on the regression of class indicator that have
been proposed so far. Sparsity is obtained thanks to a group-Lasso penalty that
selects the same features in all discriminant directions. Our experiments
demonstrate that this approach generates extremely parsimonious models without
compromising prediction performances. Besides prediction, the resulting sparse
discriminant directions are also amenable to low-dimensional representations of
data. Our algorithm is highly efficient for medium to large number of
variables, and is thus particularly well suited to the analysis of gene
expression data.
| [
"Luis Francisco Sanchez Merchante (UTC/CNRS), Yves Grandvalet\n (UTC/CNRS), Gerrad Govaert (UTC/CNRS)",
"['Luis Francisco Sanchez Merchante' 'Yves Grandvalet' 'Gerrad Govaert']"
] |
cs.AI cs.LG | null | 1206.6473 | null | null | http://arxiv.org/pdf/1206.6473v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Compositional Planning Using Optimal Option Models | In this paper we introduce a framework for option model composition. Option
models are temporal abstractions that, like macro-operators in classical
planning, jump directly from a start state to an end state. Prior work has
focused on constructing option models from primitive actions, by intra-option
model learning; or on using option models to construct a value function, by
inter-option planning. We present a unified view of intra- and inter-option
model learning, based on a major generalisation of the Bellman equation. Our
fundamental operation is the recursive composition of option models into other
option models. This key idea enables compositional planning over many levels of
abstraction. We illustrate our framework using a dynamic programming algorithm
that simultaneously constructs optimal option models for multiple subgoals, and
also searches over those option models to provide rapid progress towards other
subgoals.
| [
"['David Silver' 'Kamil Ciosek']",
"David Silver (University College London), Kamil Ciosek (University\n College London)"
] |
cs.DS cs.LG cs.NA stat.ML | null | 1206.6474 | null | null | http://arxiv.org/pdf/1206.6474v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Estimation of Simultaneously Sparse and Low Rank Matrices | The paper introduces a penalized matrix estimation procedure aiming at
solutions which are sparse and low-rank at the same time. Such structures arise
in the context of social networks or protein interactions where underlying
graphs have adjacency matrices which are block-diagonal in the appropriate
basis. We introduce a convex mixed penalty which involves $\ell_1$-norm and
trace norm simultaneously. We obtain an oracle inequality which indicates how
the two effects interact according to the nature of the target matrix. We bound
generalization error in the link prediction problem. We also develop proximal
descent strategies to solve the optimization problem efficiently and evaluate
performance on synthetic and real data sets.
| [
"['Emile Richard' 'Pierre-Andre Savalle' 'Nicolas Vayatis']",
"Emile Richard (ENS Cachan), Pierre-Andre Savalle (Ecole Centrale de\n Paris), Nicolas Vayatis (ENS Cachan)"
] |
cs.LG stat.ML | null | 1206.6475 | null | null | http://arxiv.org/pdf/1206.6475v2 | 2012-09-04T17:42:41Z | 2012-06-27T19:59:59Z | A Split-Merge Framework for Comparing Clusterings | Clustering evaluation measures are frequently used to evaluate the
performance of algorithms. However, most measures are not properly normalized
and ignore some information in the inherent structure of clusterings. We model
the relation between two clusterings as a bipartite graph and propose a general
component-based decomposition formula based on the components of the graph.
Most existing measures are examples of this formula. In order to satisfy
consistency in the component, we further propose a split-merge framework for
comparing clusterings of different data sets. Our framework gives measures that
are conditionally normalized, and it can make use of data point information,
such as feature vectors and pairwise distances. We use an entropy-based
instance of the framework and a coreference resolution data set to demonstrate
empirically the utility of our framework over other measures.
| [
"['Qiaoliang Xiang' 'Qi Mao' 'Kian Ming Chai' 'Hai Leong Chieu'\n 'Ivor Tsang' 'Zhendong Zhao']",
"Qiaoliang Xiang (Nanyang Technological University), Qi Mao (Nanyang\n Technological University), Kian Ming Chai (DSO National Laboratories), Hai\n Leong Chieu (DSO National Laboratories), Ivor Tsang (Nanyang Technological\n University), Zhendong Zhao (Macquarie University)"
] |
cs.LG cs.AI stat.ML | null | 1206.6476 | null | null | http://arxiv.org/pdf/1206.6476v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Similarity Learning for Provably Accurate Sparse Linear Classification | In recent years, the crucial importance of metrics in machine learning
algorithms has led to an increasing interest for optimizing distance and
similarity functions. Most of the state of the art focuses on learning
Mahalanobis distances (requiring to fulfill a constraint of positive
semi-definiteness) for use in a local k-NN algorithm. However, no theoretical
link is established between the learned metrics and their performance in
classification. In this paper, we make use of the formal framework of good
similarities introduced by Balcan et al. to design an algorithm for learning a
non PSD linear similarity optimized in a nonlinear feature space, which is then
used to build a global linear classifier. We show that our approach has uniform
stability and derive a generalization bound on the classification error.
Experiments performed on various datasets confirm the effectiveness of our
approach compared to state-of-the-art methods and provide evidence that (i) it
is fast, (ii) robust to overfitting and (iii) produces very sparse classifiers.
| [
"Aurelien Bellet (University of Saint-Etienne), Amaury Habrard\n (University of Saint-Etienne), Marc Sebban (University of Saint-Etienne)",
"['Aurelien Bellet' 'Amaury Habrard' 'Marc Sebban']"
] |
cs.LG stat.ML | null | 1206.6477 | null | null | http://arxiv.org/pdf/1206.6477v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Discovering Support and Affiliated Features from Very High Dimensions | In this paper, a novel learning paradigm is presented to automatically
identify groups of informative and correlated features from very high
dimensions. Specifically, we explicitly incorporate correlation measures as
constraints and then propose an efficient embedded feature selection method
using a recently developed cutting plane strategy. The benefits of the proposed
algorithm are twofold. First, it can identify the optimal discriminative and
uncorrelated feature subset to the output labels, denoted here as Support
Features, which brings about significant improvements in prediction performance
over other state of the art feature selection methods considered in the paper.
Second, during the learning process, the underlying group structures of
correlated features associated with each support feature, denoted as Affiliated
Features, can also be discovered without any additional cost. These affiliated
features serve to improve the interpretations on the learning tasks. Extensive
empirical studies on both synthetic and very high dimensional real-world
datasets verify the validity and efficiency of the proposed method.
| [
"['Yiteng Zhai' 'Mingkui Tan' 'Ivor Tsang' 'Yew Soon Ong']",
"Yiteng Zhai (Nanyang Technological University), Mingkui Tan (Nanyang\n Technological University), Ivor Tsang (Nanyang Technological University), Yew\n Soon Ong (Nanyang Technological University)"
] |
cs.LG stat.ML | null | 1206.6478 | null | null | http://arxiv.org/pdf/1206.6478v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Maximum Margin Output Coding | In this paper we study output coding for multi-label prediction. For a
multi-label output coding to be discriminative, it is important that codewords
for different label vectors are significantly different from each other. In the
meantime, unlike in traditional coding theory, codewords in output coding are
to be predicted from the input, so it is also critical to have a predictable
label encoding.
To find output codes that are both discriminative and predictable, we first
propose a max-margin formulation that naturally captures these two properties.
We then convert it to a metric learning formulation, but with an exponentially
large number of constraints as commonly encountered in structured prediction
problems. Without a label structure for tractable inference, we use
overgenerating (i.e., relaxation) techniques combined with the cutting plane
method for optimization.
In our empirical study, the proposed output coding scheme outperforms a
variety of existing multi-label prediction methods for image, text and music
classification.
| [
"['Yi Zhang' 'Jeff Schneider']",
"Yi Zhang (Carnegie Mellon University), Jeff Schneider (Carnegie Mellon\n University)"
] |
cs.LG stat.ML | null | 1206.6479 | null | null | http://arxiv.org/pdf/1206.6479v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | The Landmark Selection Method for Multiple Output Prediction | Conditional modeling x \to y is a central problem in machine learning. A
substantial research effort is devoted to such modeling when x is high
dimensional. We consider, instead, the case of a high dimensional y, where x is
either low dimensional or high dimensional. Our approach is based on selecting
a small subset y_L of the dimensions of y, and proceeds by modeling (i) x \to
y_L and (ii) y_L \to y. Composing these two models, we obtain a conditional
model x \to y that possesses convenient statistical properties. Multi-label
classification and multivariate regression experiments on several datasets show
that this model outperforms the one vs. all approach as well as several
sophisticated multiple output prediction methods.
| [
"Krishnakumar Balasubramanian (Georgia Institute of Technology), Guy\n Lebanon (Georgia Institute of Technology)",
"['Krishnakumar Balasubramanian' 'Guy Lebanon']"
] |
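The landmark idea above decomposes a high-dimensional output prediction into x -> y_L followed by y_L -> y. The sketch below only makes that composition concrete on synthetic linear data; the random choice of landmark dimensions, the ridge regressors, and the data-generating process are illustrative assumptions, not the paper's actual selection procedure.

```python
# Minimal sketch of the two-stage landmark composition for high-dimensional outputs.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n, d_x, d_y, d_L = 500, 10, 200, 15
X = rng.normal(size=(n, d_x))
W = rng.normal(size=(d_x, d_y))
Y = X @ W + 0.1 * rng.normal(size=(n, d_y))

landmarks = rng.choice(d_y, size=d_L, replace=False)  # assumed: random landmarks

stage1 = Ridge(alpha=1.0).fit(X, Y[:, landmarks])     # model x -> y_L
stage2 = Ridge(alpha=1.0).fit(Y[:, landmarks], Y)     # model y_L -> y

X_new = rng.normal(size=(5, d_x))
Y_hat = stage2.predict(stage1.predict(X_new))          # composed model x -> y
print(Y_hat.shape)  # (5, 200)
```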
cs.LG stat.ML | null | 1206.6480 | null | null | http://arxiv.org/pdf/1206.6480v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | A Dantzig Selector Approach to Temporal Difference Learning | LSTD is a popular algorithm for value function approximation. Whenever the
number of features is larger than the number of samples, it must be paired with
some form of regularization. In particular, L1-regularization methods tend to
perform feature selection by promoting sparsity, and thus, are well-suited for
high-dimensional problems. However, since LSTD is not a simple regression
algorithm but instead solves a fixed-point problem, its integration with
L1-regularization is not straightforward and might come with some drawbacks
(e.g., the P-matrix assumption for LASSO-TD). In this paper, we introduce a
novel algorithm obtained by integrating LSTD with the Dantzig Selector. We
investigate the performance of the proposed algorithm and its relationship with
the existing regularized approaches, and show how it addresses some of their
drawbacks.
| [
"['Matthieu Geist' 'Bruno Scherrer' 'Alessandro Lazaric'\n 'Mohammad Ghavamzadeh']",
"Matthieu Geist (Supelec), Bruno Scherrer (INRIA Nancy), Alessandro\n Lazaric (INRIA Lille), Mohammad Ghavamzadeh (INRIA Lille)"
] |
cs.CL cs.IR cs.LG | null | 1206.6481 | null | null | http://arxiv.org/pdf/1206.6481v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Cross Language Text Classification via Subspace Co-Regularized
Multi-View Learning | In many multilingual text classification problems, the documents in different
languages often share the same set of categories. To reduce the labeling cost
of training a classification model for each individual language, it is
important to transfer the label knowledge gained from one language to another
language by conducting cross language classification. In this paper we develop
a novel subspace co-regularized multi-view learning method for cross language
text classification. This method is built on parallel corpora produced by
machine translation. It jointly minimizes the training error of each classifier
in each language while penalizing the distance between the subspace
representations of parallel documents. Our empirical study on a large set of
cross language text classification tasks shows the proposed method consistently
outperforms a number of inductive methods, domain adaptation methods, and
multi-view learning methods.
| [
"Yuhong Guo (Temple University), Min Xiao (Temple University)",
"['Yuhong Guo' 'Min Xiao']"
] |
cs.CV cs.LG stat.ML | null | 1206.6482 | null | null | http://arxiv.org/pdf/1206.6482v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Modeling Images using Transformed Indian Buffet Processes | Latent feature models are attractive for image modeling, since images
generally contain multiple objects. However, many latent feature models ignore
that objects can appear at different locations or require pre-segmentation of
images. While the transformed Indian buffet process (tIBP) provides a method
for modeling transformation-invariant features in unsegmented binary images,
its current form is inappropriate for real images because of its computational
cost and modeling assumptions. We combine the tIBP with likelihoods appropriate
for real images and develop an efficient inference, using the cross-correlation
between images and features, that is theoretically and empirically faster than
existing inference techniques. Our method discovers reasonable components and
achieves effective image reconstruction in natural images.
| [
"Ke Zhai (University of Maryland), Yuening Hu (University of Maryland),\n Sinead Williamson (Carnegie Mellon University), Jordan Boyd-Graber\n (University of Maryland)",
"['Ke Zhai' 'Yuening Hu' 'Sinead Williamson' 'Jordan Boyd-Graber']"
] |
cs.LG stat.ML | null | 1206.6483 | null | null | http://arxiv.org/pdf/1206.6483v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Subgraph Matching Kernels for Attributed Graphs | We propose graph kernels based on subgraph matchings, i.e.
structure-preserving bijections between subgraphs. While recently proposed
kernels based on common subgraphs (Wale et al., 2008; Shervashidze et al.,
2009) in general cannot be applied to attributed graphs, our approach allows
rating mappings of subgraphs by a flexible scoring scheme comparing vertex and
edge attributes by kernels. We show that subgraph matching kernels generalize
several known kernels. To compute the kernel we propose a graph-theoretical
algorithm inspired by a classical relation between common subgraphs of two
graphs and cliques in their product graph observed by Levi (1973). Encouraging
experimental results on a classification task of real-world graphs are
presented.
| [
"['Nils Kriege' 'Petra Mutzel']",
"Nils Kriege (TU Dortmund), Petra Mutzel (TU Dortmund)"
] |
cs.LG cs.AI stat.ML | null | 1206.6484 | null | null | http://arxiv.org/pdf/1206.6484v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Apprenticeship Learning for Model Parameters of Partially Observable
Environments | We consider apprenticeship learning, i.e., having an agent learn a task by
observing an expert demonstrating the task in a partially observable
environment when the model of the environment is uncertain. This setting is
useful in applications where the explicit modeling of the environment is
difficult, such as a dialogue system. We show that we can extract information
about the environment model by inferring action selection process behind the
demonstration, under the assumption that the expert is choosing optimal actions
based on knowledge of the true model of the target environment. Proposed
algorithms can achieve more accurate estimates of POMDP parameters and better
policies from a short demonstration, compared to methods that learn only from
the environment's reactions.
| [
"Takaki Makino (University of Tokyo), Johane Takeuchi (Honda Research\n Institute Japan)",
"['Takaki Makino' 'Johane Takeuchi']"
] |
cs.LG stat.ML | null | 1206.6485 | null | null | http://arxiv.org/pdf/1206.6485v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Greedy Algorithms for Sparse Reinforcement Learning | Feature selection and regularization are becoming increasingly prominent
tools in the efforts of the reinforcement learning (RL) community to expand the
reach and applicability of RL. One approach to the problem of feature selection
is to impose a sparsity-inducing form of regularization on the learning method.
Recent work on $L_1$ regularization has adapted techniques from the supervised
learning literature for use with RL. Another approach that has received renewed
attention in the supervised learning community is that of using a simple
algorithm that greedily adds new features. Such algorithms have many of the
good properties of the $L_1$ regularization methods, while also being extremely
efficient and, in some cases, allowing theoretical guarantees on recovery of
the true form of a sparse target function from sampled data. This paper
considers variants of orthogonal matching pursuit (OMP) applied to
reinforcement learning. The resulting algorithms are analyzed and compared
experimentally with existing $L_1$ regularized approaches. We demonstrate that
perhaps the most natural scenario in which one might hope to achieve sparse
recovery fails; however, one variant, OMP-BRM, provides promising theoretical
guarantees under certain assumptions on the feature dictionary. Another
variant, OMP-TD, empirically outperforms prior methods both in approximation
accuracy and efficiency on several benchmark problems.
| [
"Christopher Painter-Wakefield (Duke University), Ronald Parr (Duke\n University)",
"['Christopher Painter-Wakefield' 'Ronald Parr']"
] |
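The abstract above builds on orthogonal matching pursuit as its greedy feature-selection primitive. The sketch below shows plain OMP for ordinary regression only, which OMP-BRM and OMP-TD adapt to reinforcement-learning targets; the synthetic problem and all sizes are assumptions for illustration.

```python
# Minimal sketch of orthogonal matching pursuit (OMP) for regression: greedily
# add the feature most correlated with the residual, then refit least squares
# on the selected set (the "orthogonal" step).
import numpy as np

def omp(Phi, y, k):
    n, p = Phi.shape
    residual = y.copy()
    selected = []
    w = np.zeros(p)
    for _ in range(k):
        corr = np.abs(Phi.T @ residual)
        corr[selected] = -np.inf          # do not re-select a feature
        selected.append(int(np.argmax(corr)))
        w_sel, *_ = np.linalg.lstsq(Phi[:, selected], y, rcond=None)
        w = np.zeros(p)
        w[selected] = w_sel
        residual = y - Phi @ w
    return w, selected

rng = np.random.default_rng(0)
Phi = rng.normal(size=(200, 50))
w_true = np.zeros(50); w_true[[3, 17, 42]] = [2.0, -1.5, 1.0]
y = Phi @ w_true + 0.05 * rng.normal(size=200)
w_hat, sel = omp(Phi, y, k=3)
print(sorted(sel))  # typically recovers [3, 17, 42]
```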
cs.LG stat.ML | null | 1206.6486 | null | null | http://arxiv.org/pdf/1206.6486v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Flexible Modeling of Latent Task Structures in Multitask Learning | Multitask learning algorithms are typically designed assuming some fixed, a
priori known latent structure shared by all the tasks. However, it is usually
unclear what type of latent task structure is the most appropriate for a given
multitask learning problem. Ideally, the "right" latent task structure should
be learned in a data-driven manner. We present a flexible, nonparametric
Bayesian model that posits a mixture of factor analyzers structure on the
tasks. The nonparametric aspect makes the model expressive enough to subsume
many existing models of latent task structures (e.g., mean-regularized tasks,
clustered tasks, low-rank or linear/non-linear subspace assumption on tasks,
etc.). Moreover, it can also learn more general task structures, addressing the
shortcomings of such models. We present a variational inference algorithm for
our model. Experimental results on synthetic and real-world datasets, on both
regression and classification problems, demonstrate the effectiveness of the
proposed method.
| [
"Alexandre Passos (UMass Amherst), Piyush Rai (University of Utah),\n Jacques Wainer (University of Campinas), Hal Daume III (University of\n Maryland)",
"['Alexandre Passos' 'Piyush Rai' 'Jacques Wainer' 'Hal Daume III']"
] |
cs.LG cs.GT stat.ML | null | 1206.6487 | null | null | http://arxiv.org/pdf/1206.6487v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | An Adaptive Algorithm for Finite Stochastic Partial Monitoring | We present a new anytime algorithm that achieves near-optimal regret for any
instance of finite stochastic partial monitoring. In particular, the new
algorithm achieves the minimax regret, within logarithmic factors, for both
"easy" and "hard" problems. For easy problems, it additionally achieves
logarithmic individual regret. Most importantly, the algorithm is adaptive in
the sense that if the opponent strategy is in an "easy region" of the strategy
space then the regret grows as if the problem was easy. As an implication, we
show that under some reasonable additional assumptions, the algorithm enjoys an
O(\sqrt{T}) regret in Dynamic Pricing, proven to be hard by Bartok et al.
(2011).
| [
"['Gabor Bartok' 'Navid Zolghadr' 'Csaba Szepesvari']",
"Gabor Bartok (University of Alberta), Navid Zolghadr (University of\n Alberta), Csaba Szepesvari (University of Alberta)"
] |
stat.ME cs.LG stat.ML | null | 1206.6488 | null | null | http://arxiv.org/pdf/1206.6488v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | The Nonparanormal SKEPTIC | We propose a semiparametric approach, named nonparanormal skeptic, for
estimating high dimensional undirected graphical models. In terms of modeling,
we consider the nonparanormal family proposed by Liu et al. (2009). In terms of
estimation, we exploit nonparametric rank-based correlation coefficient
estimators including the Spearman's rho and Kendall's tau. In high dimensional
settings, we prove that the nonparanormal skeptic achieves the optimal
parametric rate of convergence in both graph and parameter estimation. This
result suggests that the nonparanormal graphical models are a safe replacement
for the Gaussian graphical models, even when the data are Gaussian.
| [
"['Han Liu' 'Fang Han' 'Ming Yuan' 'John Lafferty' 'Larry Wasserman']",
"Han Liu (Johns Hopkins University), Fang Han (Johns Hopkins\n University), Ming Yuan (Georgia Institute of Technology), John Lafferty\n (University of Chicago), Larry Wasserman (Carnegie Mellon University)"
] |
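The estimator above plugs rank-based correlations into a graphical-model estimator. The sketch below illustrates one common version of that pipeline: compute Kendall's tau, map it through the sin(pi*tau/2) transform used for Gaussian copula models, and hand the result to a standard sparse precision estimator. The exact transform, the choice of graphical lasso, and the toy data are assumptions for illustration and do not reproduce the paper's theory.

```python
# Minimal sketch of a rank-based plug-in for sparse precision estimation.
import numpy as np
from scipy.stats import kendalltau
from sklearn.covariance import graphical_lasso

rng = np.random.default_rng(0)
cov = np.array([[1, .6, 0, 0],
                [.6, 1, 0, 0],
                [0, 0, 1, .4],
                [0, 0, .4, 1]])
X = rng.multivariate_normal(mean=np.zeros(4), cov=cov, size=500)
X = np.exp(X)  # monotone marginal transform: data are no longer Gaussian

d = X.shape[1]
S = np.eye(d)
for i in range(d):
    for j in range(i + 1, d):
        tau, _ = kendalltau(X[:, i], X[:, j])
        S[i, j] = S[j, i] = np.sin(np.pi * tau / 2.0)  # assumed Kendall transform

cov_est, prec_est = graphical_lasso(S, alpha=0.05)
print(np.round(prec_est, 2))
```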
cs.LG stat.ML | null | 1206.6813 | null | null | http://arxiv.org/pdf/1206.6813v1 | 2012-06-27T15:36:47Z | 2012-06-27T15:36:47Z | A concentration theorem for projections | X in R^D has mean zero and finite second moments. We show that there is a
precise sense in which almost all linear projections of X into R^d (for d < D)
look like a scale-mixture of spherical Gaussians -- specifically, a mixture of
distributions N(0, sigma^2 I_d) where the weight of the particular sigma
component is P(|X|^2 = sigma^2 D). The extent of this effect depends upon
the ratio of d to D, and upon a particular coefficient of eccentricity of X's
distribution. We explore this result in a variety of experiments.
| [
"Sanjoy Dasgupta, Daniel Hsu, Nakul Verma",
"['Sanjoy Dasgupta' 'Daniel Hsu' 'Nakul Verma']"
] |
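The theorem above says that almost all low-dimensional projections of a high-dimensional, zero-mean distribution look like a scale-mixture of Gaussians with mixing weights given by the distribution of |X|^2/D. The sketch below is only a numerical sanity check of that flavor on one assumed data distribution (uniform on a hypercube) and one random direction; it does not reproduce the paper's bounds or experiments.

```python
# Sanity check: compare quantiles of a random 1-D projection of high-dimensional
# data against the predicted scale-mixture of Gaussians with sigma^2 = |X|^2 / D.
import numpy as np

rng = np.random.default_rng(0)
D, n = 300, 5000
X = rng.uniform(-1, 1, size=(n, D))          # mean zero, finite second moments

u = rng.normal(size=D); u /= np.linalg.norm(u)   # random unit projection direction
proj = X @ u

sigma2 = (X ** 2).sum(axis=1) / D            # per-point scale of the mixture
mix = rng.normal(size=n) * np.sqrt(sigma2)   # draw from the predicted mixture

for q in [0.1, 0.25, 0.5, 0.75, 0.9]:
    print(q, np.quantile(proj, q).round(3), np.quantile(mix, q).round(3))
```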
cs.AI cs.LG | null | 1206.6814 | null | null | http://arxiv.org/pdf/1206.6814v1 | 2012-06-27T15:37:14Z | 2012-06-27T15:37:14Z | An Empirical Comparison of Algorithms for Aggregating Expert Predictions | Predicting the outcomes of future events is a challenging problem for which a
variety of solution methods have been explored and attempted. We present an
empirical comparison of a variety of online and offline adaptive algorithms for
aggregating experts' predictions of the outcomes of five years of US National
Football League games (1319 games) using expert probability elicitations
obtained from an Internet contest called ProbabilitySports. We find that it is
difficult to improve over simple averaging of the predictions in terms of
prediction accuracy, but that there is room for improvement in quadratic loss.
Somewhat surprisingly, a Bayesian estimation algorithm which estimates the
variance of each expert's prediction exhibits the most consistent superior
performance over simple averaging among our collection of algorithms.
| [
"Varsha Dani, Omid Madani, David M Pennock, Sumit Sanghai, Brian\n Galebach",
"['Varsha Dani' 'Omid Madani' 'David M Pennock' 'Sumit Sanghai'\n 'Brian Galebach']"
] |
cs.LG stat.ML | null | 1206.6815 | null | null | http://arxiv.org/pdf/1206.6815v1 | 2012-06-27T15:38:14Z | 2012-06-27T15:38:14Z | Discriminative Learning via Semidefinite Probabilistic Models | Discriminative linear models are a popular tool in machine learning. These
can be generally divided into two types: The first is linear classifiers, such
as support vector machines, which are well studied and provide state-of-the-art
results. One shortcoming of these models is that their output (known as the
'margin') is not calibrated, and cannot be translated naturally into a
distribution over the labels. Thus, it is difficult to incorporate such models
as components of larger systems, unlike probabilistic based approaches. The
second type of approach constructs class conditional distributions using a
nonlinearity (e.g. log-linear models), but is occasionally worse in terms of
classification error. We propose a supervised learning method which combines
the best of both approaches. Specifically, our method provides a distribution
over the labels, which is a linear function of the model parameters. As a
consequence, differences between probabilities are linear functions, a property
which most probabilistic models (e.g. log-linear) do not have.
Our model assumes that classes correspond to linear subspaces (rather than to
half spaces). Using a relaxed projection operator, we construct a measure which
evaluates the degree to which a given vector 'belongs' to a subspace, resulting
in a distribution over labels. Interestingly, this view is closely related to
similar concepts in quantum detection theory. The resulting models can be
trained either to maximize the margin or to optimize average likelihood
measures. The corresponding optimization problems are semidefinite programs
which can be solved efficiently. We illustrate the performance of our algorithm
on real world datasets, and show that it outperforms 2nd order kernel methods.
| [
"Koby Crammer, Amir Globerson",
"['Koby Crammer' 'Amir Globerson']"
] |
cs.LG cs.CE stat.ML | null | 1206.6824 | null | null | http://arxiv.org/pdf/1206.6824v1 | 2012-06-27T15:41:07Z | 2012-06-27T15:41:07Z | Gene Expression Time Course Clustering with Countably Infinite Hidden
Markov Models | Most existing approaches to clustering gene expression time course data treat
the different time points as independent dimensions and are invariant to
permutations, such as reversal, of the experimental time course. Approaches
utilizing HMMs have been shown to be helpful in this regard, but are hampered
by having to choose model architectures with appropriate complexities. Here we
propose for a clustering application an HMM with a countably infinite state
space; inference in this model is possible by recasting it in the hierarchical
Dirichlet process (HDP) framework (Teh et al. 2006), and hence we call it the
HDP-HMM. We show that the infinite model outperforms model selection methods
over finite models, and traditional time-independent methods, as measured by a
variety of external and internal indices for clustering on two large publicly
available data sets. Moreover, we show that the infinite models utilize more
hidden states and employ richer architectures (e.g. state-to-state transitions)
without the damaging effects of overfitting.
| [
"Matthew Beal, Praveen Krishnamurthy",
"['Matthew Beal' 'Praveen Krishnamurthy']"
] |
cs.LG cs.AI stat.ML | null | 1206.6828 | null | null | http://arxiv.org/pdf/1206.6828v1 | 2012-06-27T16:15:14Z | 2012-06-27T16:15:14Z | Advances in exact Bayesian structure discovery in Bayesian networks | We consider a Bayesian method for learning the Bayesian network structure
from complete data. Recently, Koivisto and Sood (2004) presented an algorithm
that for any single edge computes its marginal posterior probability in O(n
2^n) time, where n is the number of attributes; the number of parents per
attribute is bounded by a constant. In this paper we show that the posterior
probabilities for all the n (n - 1) potential edges can be computed in O(n 2^n)
total time. This result is achieved by a forward-backward technique and fast
Moebius transform algorithms, which are of independent interest. The resulting
speedup by a factor of about n^2 allows us to experimentally study the
statistical power of learning moderate-size networks. We report results from a
simulation study that covers data sets with 20 to 10,000 records over 5 to 25
discrete attributes.
| [
"Mikko Koivisto",
"['Mikko Koivisto']"
] |
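The O(n 2^n) result above rests on fast Moebius/zeta transforms over subsets. The sketch below shows only that combinatorial primitive, the fast zeta transform with subsets encoded as bitmasks; the forward-backward computation of all edge posteriors from the paper is not reproduced.

```python
# Minimal sketch of the fast zeta transform over subsets, in O(n * 2^n).
def fast_zeta_transform(f, n):
    """Return g with g[S] = sum of f[T] over all subsets T of S (bitmask-indexed)."""
    g = list(f)
    for i in range(n):
        bit = 1 << i
        for mask in range(1 << n):
            if mask & bit:
                g[mask] += g[mask ^ bit]
    return g

# Example with n = 3: f puts mass 1 on {0} and 2 on {1, 2}.
n = 3
f = [0] * (1 << n)
f[0b001] = 1
f[0b110] = 2
g = fast_zeta_transform(f, n)
print(g[0b111])  # 3: both supported subsets are contained in {0, 1, 2}
print(g[0b011])  # 1: only {0} is contained in {0, 1}
```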
stat.ME cs.AI cs.LG | null | 1206.6830 | null | null | http://arxiv.org/pdf/1206.6830v1 | 2012-06-27T16:15:42Z | 2012-06-27T16:15:42Z | The AI&M Procedure for Learning from Incomplete Data | We investigate methods for parameter learning from incomplete data that is
not missing at random. Likelihood-based methods then require the optimization
of a profile likelihood that takes all possible missingness mechanisms into
account. Optimizing this profile likelihood poses two main difficulties:
multiple (local) maxima, and its very high-dimensional parameter space. In this
paper a new method is presented for optimizing the profile likelihood that
addresses the second difficulty: in the proposed AI&M (adjusting imputation and
maximization) procedure the optimization is performed by operations in the
space of data completions, rather than directly in the parameter space of the
profile likelihood. We apply the AI&M method to learning parameters for
Bayesian networks. The method is compared against conservative inference, which
takes into account each possible data completion, and against EM. The results
indicate that likelihood-based inference is still feasible in the case of
unknown missingness mechanisms, and that conservative inference is
unnecessarily weak. On the other hand, our results also provide evidence that
the EM algorithm is still quite effective when the data is not missing at
random.
| [
"Manfred Jaeger",
"['Manfred Jaeger']"
] |
cs.LG stat.ML | null | 1206.6832 | null | null | http://arxiv.org/pdf/1206.6832v1 | 2012-06-27T16:17:52Z | 2012-06-27T16:17:52Z | Convex Structure Learning for Bayesian Networks: Polynomial Feature
Selection and Approximate Ordering | We present a new approach to learning the structure and parameters of a
Bayesian network based on regularized estimation in an exponential family
representation. Here we show that, given a fixed variable order, the optimal
structure and parameters can be learned efficiently, even without restricting
the size of the parent sets. We then consider the problem of optimizing the
variable order for a given set of features. This is still a computationally
hard problem, but we present a convex relaxation that yields an optimal 'soft'
ordering in polynomial time. One novel aspect of the approach is that we do not
perform a discrete search over DAG structures, nor over variable orders, but
instead solve a continuous relaxation that can then be rounded to obtain a
valid network structure. We conduct an experimental comparison against standard
structure search procedures over standard objectives, which cope with local
minima, and evaluate the advantages of using convex relaxations that reduce the
effects of local minima.
| [
"Yuhong Guo, Dale Schuurmans",
"['Yuhong Guo' 'Dale Schuurmans']"
] |
cs.LG cs.CE cs.NA stat.ML | null | 1206.6833 | null | null | http://arxiv.org/pdf/1206.6833v1 | 2012-06-27T16:18:05Z | 2012-06-27T16:18:05Z | Matrix Tile Analysis | Many tasks require finding groups of elements in a matrix of numbers, symbols
or class likelihoods. One approach is to use efficient bi- or tri-linear
factorization techniques including PCA, ICA, sparse matrix factorization and
plaid analysis. These techniques are not appropriate when addition and
multiplication of matrix elements are not sensibly defined. More directly,
methods like bi-clustering can be used to classify matrix elements, but these
methods make the overly-restrictive assumption that the class of each element
is a function of a row class and a column class. We introduce a general
computational problem, `matrix tile analysis' (MTA), which consists of
decomposing a matrix into a set of non-overlapping tiles, each of which is
defined by a subset of usually nonadjacent rows and columns. MTA does not
require an algebra for combining tiles, but must search over discrete
combinations of tile assignments. Exact MTA is a computationally intractable
integer programming problem, but we describe an approximate iterative technique
and a computationally efficient sum-product relaxation of the integer program.
We compare the effectiveness of these methods to PCA and plaid on hundreds of
randomly generated tasks. Using double-gene-knockout data, we show that MTA
finds groups of interacting yeast genes that have biologically-related
functions.
| [
"Inmar Givoni, Vincent Cheung, Brendan J. Frey",
"['Inmar Givoni' 'Vincent Cheung' 'Brendan J. Frey']"
] |
cs.AI cs.LG | null | 1206.6838 | null | null | http://arxiv.org/pdf/1206.6838v1 | 2012-06-27T16:19:16Z | 2012-06-27T16:19:16Z | Continuous Time Markov Networks | A central task in many applications is reasoning about processes that change
in a continuous time. The mathematical framework of Continuous Time Markov
Processes provides the basic foundations for modeling such systems. Recently,
Nodelman et al introduced continuous time Bayesian networks (CTBNs), which
allow a compact representation of continuous-time processes over a factored
state space. In this paper, we introduce continuous time Markov networks
(CTMNs), an alternative representation language that represents a different
type of continuous-time dynamics. In many real life processes, such as
biological and chemical systems, the dynamics of the process can be naturally
described as an interplay between two forces - the tendency of each entity to
change its state, and the overall fitness or energy function of the entire
system. In our model, the first force is described by a continuous-time
proposal process that suggests possible local changes to the state of the
system at different rates. The second force is represented by a Markov network
that encodes the fitness, or desirability, of different states; a proposed
local change is then accepted with a probability that is a function of the
change in the fitness distribution. We show that the fitness distribution is
also the stationary distribution of the Markov process, so that this
representation provides a characterization of a temporal process whose
stationary distribution has a compact graphical representation. This allows us
to naturally capture a different type of structure in complex dynamical
processes, such as evolving biological sequences. We describe the semantics of
the representation, its basic properties, and how it compares to CTBNs. We also
provide algorithms for learning such models from data, and discuss its
applicability to biological sequence evolution.
| [
"['Tal El-Hay' 'Nir Friedman' 'Daphne Koller' 'Raz Kupferman']",
"Tal El-Hay, Nir Friedman, Daphne Koller, Raz Kupferman"
] |
cs.LG cs.AI stat.ML | null | 1206.6842 | null | null | http://arxiv.org/pdf/1206.6842v1 | 2012-06-27T16:20:30Z | 2012-06-27T16:20:30Z | Chi-square Tests Driven Method for Learning the Structure of Factored
MDPs | SDYNA is a general framework designed to address large stochastic
reinforcement learning problems. Unlike previous model based methods in FMDPs,
it incrementally learns the structure and the parameters of a RL problem using
supervised learning techniques. Then, it integrates decision-theoretic planning
algorithms based on FMDPs to compute its policy. SPITI is an instantiation of
SDYNA that exploits ITI, an incremental decision tree algorithm, to learn the
reward function and the Dynamic Bayesian Networks with local structures
representing the transition function of the problem. These representations are
used by an incremental version of the Structured Value Iteration algorithm. In
order to learn the structure, SPITI uses Chi-Square tests to detect the
independence between two probability distributions. Thus, we study the relation
between the threshold used in the Chi-Square test, the size of the model built
and the relative error of the value function of the induced policy with respect
to the optimal value. We show that, on stochastic problems, one can tune the
threshold so as to generate both a compact model and an efficient policy. Then,
we show that SPITI, while keeping its model compact, uses the generalization
property of its learning method to perform better than a stochastic classical
tabular algorithm in large RL problems with an unknown structure. We also
introduce a new measure based on Chi-Square to qualify the accuracy of the
model learned by SPITI. We qualitatively show that the generalization property
in SPITI within the FMDP framework may prevent an exponential growth of the
time required to learn the structure of large stochastic RL problems.
| [
"Thomas Degris, Olivier Sigaud, Pierre-Henri Wuillemin",
"['Thomas Degris' 'Olivier Sigaud' 'Pierre-Henri Wuillemin']"
] |
stat.ME cs.LG stat.ML | null | 1206.6845 | null | null | http://arxiv.org/pdf/1206.6845v1 | 2012-06-27T16:21:35Z | 2012-06-27T16:21:35Z | Gibbs Sampling for (Coupled) Infinite Mixture Models in the Stick
Breaking Representation | Nonparametric Bayesian approaches to clustering, information retrieval,
language modeling and object recognition have recently shown great promise as a
new paradigm for unsupervised data analysis. Most contributions have focused on
the Dirichlet process mixture models or extensions thereof for which efficient
Gibbs samplers exist. In this paper we explore Gibbs samplers for infinite
complexity mixture models in the stick breaking representation. The advantage
of this representation is improved modeling flexibility. For instance, one can
design the prior distribution over cluster sizes or couple multiple infinite
mixture models (e.g. over time) at the level of their parameters (i.e. the
dependent Dirichlet process model). However, Gibbs samplers for infinite
mixture models (as recently introduced in the statistics literature) seem to
mix poorly over cluster labels. Among other issues, this can have the adverse
effect that labels for the same cluster in coupled mixture models are mixed up.
We introduce additional moves in these samplers to improve mixing over cluster
labels and to bring clusters into correspondence. An application to modeling of
storm trajectories is used to illustrate these ideas.
| [
"Ian Porteous, Alexander T. Ihler, Padhraic Smyth, Max Welling",
"['Ian Porteous' 'Alexander T. Ihler' 'Padhraic Smyth' 'Max Welling']"
] |
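The samplers above operate in the stick-breaking representation of infinite mixtures. The sketch below shows only that representation: mixture weights obtained by breaking off Beta(1, alpha) fractions of the remaining stick. The concentration parameter, truncation level, and everything else are illustrative assumptions; none of the paper's Gibbs moves or coupling mechanisms are reproduced.

```python
# Minimal sketch of the stick-breaking construction of Dirichlet process weights.
import numpy as np

def stick_breaking_weights(alpha, truncation, rng):
    v = rng.beta(1.0, alpha, size=truncation)                # break proportions
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - v)[:-1]])
    return v * remaining                                      # pi_k = v_k * prod_{j<k}(1 - v_j)

rng = np.random.default_rng(0)
pi = stick_breaking_weights(alpha=2.0, truncation=20, rng=rng)
print(pi.round(3), "sum =", pi.sum().round(3))  # sum < 1; the remainder sits in the tail
```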
cs.LG cs.AI stat.ML | null | 1206.6846 | null | null | http://arxiv.org/pdf/1206.6846v1 | 2012-06-27T16:23:17Z | 2012-06-27T16:23:17Z | Approximate Separability for Weak Interaction in Dynamic Systems | One approach to monitoring a dynamic system relies on decomposition of the
system into weakly interacting subsystems. An earlier paper introduced a notion
of weak interaction called separability, and showed that it leads to exact
propagation of marginals for prediction. This paper addresses two questions
left open by the earlier paper: can we define a notion of approximate
separability that occurs naturally in practice, and do separability and
approximate separability lead to accurate monitoring? The answer to both
questions is affirmative. The paper also analyzes the structure of approximately
separable decompositions, and provides some explanation as to why these models
perform well.
| [
"Avi Pfeffer",
"['Avi Pfeffer']"
] |
cs.LG cs.AI stat.ML | null | 1206.6847 | null | null | http://arxiv.org/pdf/1206.6847v1 | 2012-06-27T16:23:41Z | 2012-06-27T16:23:41Z | Identifying the Relevant Nodes Without Learning the Model | We propose a method to identify all the nodes that are relevant to compute
all the conditional probability distributions for a given set of nodes. Our
method is simple, efficient, consistent, and does not require learning a
Bayesian network first. Therefore, our method can be applied to
high-dimensional databases, e.g. gene expression databases.
| [
"Jose M. Pena, Roland Nilsson, Johan Bj\\\"orkegren, Jesper Tegn\\'er",
"['Jose M. Pena' 'Roland Nilsson' 'Johan Björkegren' 'Jesper Tegnér']"
] |
cs.LG cs.AI stat.ML | null | 1206.6851 | null | null | http://arxiv.org/pdf/1206.6851v1 | 2012-06-27T16:24:43Z | 2012-06-27T16:24:43Z | A compact, hierarchical Q-function decomposition | Previous work in hierarchical reinforcement learning has faced a dilemma:
either ignore the values of different possible exit states from a subroutine,
thereby risking suboptimal behavior, or represent those values explicitly
thereby incurring a possibly large representation cost because exit values
refer to nonlocal aspects of the world (i.e., all subsequent rewards). This
paper shows that, in many cases, one can avoid both of these problems. The
solution is based on recursively decomposing the exit value function in terms
of Q-functions at higher levels of the hierarchy. This leads to an intuitively
appealing runtime architecture in which a parent subroutine passes to its child
a value function on the exit states and the child reasons about how its choices
affect the exit value. We also identify structural conditions on the value
function and transition distributions that allow much more concise
representations of exit state distributions, leading to further state
abstraction. In essence, the only variables whose exit values need be
considered are those that the parent cares about and the child affects. We
demonstrate the utility of our algorithms on a series of increasingly complex
environments.
| [
"Bhaskara Marthi, Stuart Russell, David Andre",
"['Bhaskara Marthi' 'Stuart Russell' 'David Andre']"
] |
cs.LG cs.AI stat.ML | null | 1206.6852 | null | null | http://arxiv.org/pdf/1206.6852v1 | 2012-06-27T16:24:57Z | 2012-06-27T16:24:57Z | Structured Priors for Structure Learning | Traditional approaches to Bayes net structure learning typically assume
little regularity in graph structure other than sparseness. However, in many
cases, we expect more systematicity: variables in real-world systems often
group into classes that predict the kinds of probabilistic dependencies they
participate in. Here we capture this form of prior knowledge in a hierarchical
Bayesian framework, and exploit it to enable structure learning and type
discovery from small datasets. Specifically, we present a nonparametric
generative model for directed acyclic graphs as a prior for Bayes net structure
learning. Our model assumes that variables come in one or more classes and that
the prior probability of an edge existing between two variables is a function
only of their classes. We derive an MCMC algorithm for simultaneous inference
of the number of classes, the class assignments of variables, and the Bayes net
structure over variables. For several realistic, sparse datasets, we show that
the bias towards systematicity of connections provided by our model yields more
accurate learned networks than a traditional, uniform prior approach, and that
the classes found by our model are appropriate.
| [
"['Vikash Mansinghka' 'Charles Kemp' 'Thomas Griffiths' 'Joshua Tenenbaum']",
"Vikash Mansinghka, Charles Kemp, Thomas Griffiths, Joshua Tenenbaum"
] |
cs.LG cs.NA stat.ML | null | 1206.6857 | null | null | http://arxiv.org/pdf/1206.6857v1 | 2012-06-27T16:26:27Z | 2012-06-27T16:26:27Z | Faster Gaussian Summation: Theory and Experiment | We provide faster algorithms for the problem of Gaussian summation, which
occurs in many machine learning methods. We develop two new extensions - an
O(Dp) Taylor expansion for the Gaussian kernel with rigorous error bounds and a
new error control scheme integrating any arbitrary approximation method -
within the best discrete algorithmic framework using adaptive hierarchical data
structures. We rigorously evaluate these techniques empirically in the context
of optimal bandwidth selection in kernel density estimation, revealing the
strengths and weaknesses of current state-of-the-art approaches for the first
time. Our results demonstrate that the new error control scheme yields improved
performance, whereas the series expansion approach is only effective in low
dimensions (five or less).
| [
"Dongryeol Lee, Alexander G. Gray",
"['Dongryeol Lee' 'Alexander G. Gray']"
] |
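For reference, the Gaussian summation problem the abstract accelerates is the brute-force kernel sum below, which costs O(N_ref * N_query). The data, bandwidth, and sizes are arbitrary assumptions; the paper's hierarchical and series-expansion approximations with error control are not shown.

```python
# Brute-force Gaussian summation: for each query point, sum the Gaussian kernel
# over all reference points.
import numpy as np

def naive_gaussian_sum(queries, references, bandwidth):
    d2 = ((queries[:, None, :] - references[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2)).sum(axis=1)

rng = np.random.default_rng(0)
refs = rng.normal(size=(2000, 3))
qs = rng.normal(size=(5, 3))
print(naive_gaussian_sum(qs, refs, bandwidth=0.5))
```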
cs.IR cs.LG | null | 1206.6858 | null | null | http://arxiv.org/pdf/1206.6858v1 | 2012-06-27T16:26:46Z | 2012-06-27T16:26:46Z | Sequential Document Representations and Simplicial Curves | The popular bag of words assumption represents a document as a histogram of
word occurrences. While computationally efficient, such a representation is
unable to maintain any sequential information. We present a continuous and
differentiable sequential document representation that goes beyond the bag of
words assumption, and yet is efficient and effective. This representation
employs smooth curves in the multinomial simplex to account for sequential
information. We discuss the representation and its geometric properties and
demonstrate its applicability for the task of text classification.
| [
"Guy Lebanon",
"['Guy Lebanon']"
] |
cs.LG stat.ML | null | 1206.6860 | null | null | http://arxiv.org/pdf/1206.6860v1 | 2012-06-27T16:27:25Z | 2012-06-27T16:27:25Z | Predicting Conditional Quantiles via Reduction to Classification | We show how to reduce the process of predicting general order statistics (and
the median in particular) to solving classification. The accompanying
theoretical statement shows that the regret of the classifier bounds the regret
of the quantile regression under a quantile loss. We also test this reduction
empirically against existing quantile regression methods on large real-world
datasets and discover that it provides state-of-the-art performance.
| [
"['John Langford' 'Roberto Oliveira' 'Bianca Zadrozny']",
"John Langford, Roberto Oliveira, Bianca Zadrozny"
] |
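The abstract above reduces conditional quantile prediction to classification. The sketch below conveys only the flavor of such a reduction: train binary classifiers for the events {y <= theta} over a grid of thresholds and report the smallest threshold whose predicted probability reaches the target level. The grid construction, logistic classifiers, and synthetic data are assumptions for illustration, not the paper's exact reduction or theory.

```python
# Illustrative threshold-classifier construction for a conditional quantile.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 4000
x = rng.uniform(-2, 2, size=(n, 1))
y = 2.0 * x[:, 0] + rng.normal(scale=1.0 + np.abs(x[:, 0]), size=n)  # heteroscedastic noise

q = 0.5                                                   # target quantile (the median)
thresholds = np.quantile(y, np.linspace(0.02, 0.98, 40))  # assumed threshold grid
classifiers = [LogisticRegression().fit(x, (y <= t).astype(int)) for t in thresholds]

def predict_quantile(x_new):
    probs = np.array([c.predict_proba(x_new)[:, 1] for c in classifiers])  # (T, m)
    idx = np.argmax(probs >= q, axis=0)   # smallest theta with P(y <= theta | x) >= q
    return thresholds[idx]

x_test = np.array([[-1.5], [0.0], [1.5]])
print(predict_quantile(x_test))  # roughly 2 * x, the conditional median
```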
cs.LG cs.AI stat.ML | null | 1206.6862 | null | null | http://arxiv.org/pdf/1206.6862v1 | 2012-06-27T16:28:06Z | 2012-06-27T16:28:06Z | On the Number of Samples Needed to Learn the Correct Structure of a
Bayesian Network | Bayesian Networks (BNs) are useful tools giving a natural and compact
representation of joint probability distributions. In many applications one
needs to learn a Bayesian Network (BN) from data. In this context, it is
important to understand the number of samples needed in order to guarantee a
successful learning. Previous work have studied BNs sample complexity, yet it
mainly focused on the requirement that the learned distribution will be close
to the original distribution which generated the data. In this work, we study a
different aspect of the learning, namely the number of samples needed in order
to learn the correct structure of the network. We give both asymptotic results,
valid in the large sample limit, and experimental results, demonstrating the
learning behavior for feasible sample sizes. We show that structure learning is
a more difficult task, compared to approximating the correct distribution, in
the sense that it requires a much larger number of samples, regardless of the
computational power available for the learner.
| [
"['Or Zuk' 'Shiri Margel' 'Eytan Domany']",
"Or Zuk, Shiri Margel, Eytan Domany"
] |
cs.LG stat.ML | null | 1206.6863 | null | null | http://arxiv.org/pdf/1206.6863v1 | 2012-06-27T16:28:18Z | 2012-06-27T16:28:18Z | Bayesian Multicategory Support Vector Machines | We show that the multi-class support vector machine (MSVM) proposed by Lee
et al. (2004) can be viewed as a MAP estimation procedure under an
appropriate probabilistic interpretation of the classifier. We also show that
this interpretation can be extended to a hierarchical Bayesian architecture and
to a fully-Bayesian inference procedure for multi-class classification based on
data augmentation. We present empirical results that show that the advantages
of the Bayesian formalism are obtained without a loss in classification
accuracy.
| [
"Zhihua Zhang, Michael I. Jordan",
"['Zhihua Zhang' 'Michael I. Jordan']"
] |
cs.AI cs.DB cs.LG | null | 1206.6864 | null | null | http://arxiv.org/pdf/1206.6864v1 | 2012-06-27T16:28:29Z | 2012-06-27T16:28:29Z | Infinite Hidden Relational Models | In many cases it makes sense to model a relationship symmetrically, not
implying any particular directionality. Consider the classical example of a
recommendation system where the rating of an item by a user should
symmetrically be dependent on the attributes of both the user and the item. The
attributes of the (known) relationships are also relevant for predicting
attributes of entities and for predicting attributes of new relations. In
recommendation systems, the exploitation of relational attributes is often
referred to as collaborative filtering. Again, in many applications one might
prefer to model the collaborative effect in a symmetrical way. In this paper we
present a relational model, which is completely symmetrical. The key innovation
is that we introduce for each entity (or object) an infinite-dimensional latent
variable as part of a Dirichlet process (DP) model. We discuss inference in the
model, which is based on a DP Gibbs sampler, i.e., the Chinese restaurant
process. We extend the Chinese restaurant process to be applicable to
relational modeling. Our approach is evaluated in three applications. One is a
recommendation system based on the MovieLens data set. The second application
concerns the prediction of the function of yeast genes/proteins on the data set
of KDD Cup 2001 using a multi-relational model. The third application involves
a relational medical domain. The experimental results show that our model gives
significantly improved estimates of attributes describing relationships or
entities in complex relational models.
| [
"Zhao Xu, Volker Tresp, Kai Yu, Hans-Peter Kriegel",
"['Zhao Xu' 'Volker Tresp' 'Kai Yu' 'Hans-Peter Kriegel']"
] |
cs.LG cs.AI stat.ML | null | 1206.6865 | null | null | http://arxiv.org/pdf/1206.6865v1 | 2012-06-27T16:28:41Z | 2012-06-27T16:28:41Z | A Non-Parametric Bayesian Method for Inferring Hidden Causes | We present a non-parametric Bayesian approach to structure learning with
hidden causes. Previous Bayesian treatments of this problem define a prior over
the number of hidden causes and use algorithms such as reversible jump Markov
chain Monte Carlo to move between solutions. In contrast, we assume that the
number of hidden causes is unbounded, but only a finite number influence
observable variables. This makes it possible to use a Gibbs sampler to
approximate the distribution over causal structures. We evaluate the
performance of both approaches in discovering hidden causes in simulated data,
and use our non-parametric approach to discover hidden causes in a real medical
dataset.
| [
"Frank Wood, Thomas Griffiths, Zoubin Ghahramani",
"['Frank Wood' 'Thomas Griffiths' 'Zoubin Ghahramani']"
] |
cs.LG stat.ML | null | 1206.6868 | null | null | http://arxiv.org/pdf/1206.6868v1 | 2012-06-27T16:29:18Z | 2012-06-27T16:29:18Z | Bayesian Random Fields: The Bethe-Laplace Approximation | While learning the maximum likelihood value of parameters of an undirected
graphical model is hard, modelling the posterior distribution over parameters
given data is harder. Yet, undirected models are ubiquitous in computer vision
and text modelling (e.g. conditional random fields). But where Bayesian
approaches for directed models have been very successful, a proper Bayesian
treatment of undirected models in still in its infant stages. We propose a new
method for approximating the posterior of the parameters given data based on
the Laplace approximation. This approximation requires the computation of the
covariance matrix over features which we compute using the linear response
approximation based in turn on loopy belief propagation. We develop the theory
for conditional and 'unconditional' random fields with or without hidden
variables. In the conditional setting we introduce a new variant of bagging
suitable for structured domains. Here we run the loopy max-product algorithm on
a 'super-graph' composed of graphs for individual models sampled from the
posterior and connected by constraints. Experiments on real world data validate
the proposed methods.
| [
"Max Welling, Sridevi Parise",
"['Max Welling' 'Sridevi Parise']"
] |
cs.LG cs.AI stat.ML | null | 1206.6870 | null | null | http://arxiv.org/pdf/1206.6870v1 | 2012-06-27T16:29:41Z | 2012-06-27T16:29:41Z | Incremental Model-based Learners With Formal Learning-Time Guarantees | Model-based learning algorithms have been shown to use experience efficiently
when learning to solve Markov Decision Processes (MDPs) with finite state and
action spaces. However, their high computational cost due to repeatedly solving
an internal model inhibits their use in large-scale problems. We propose a
method based on real-time dynamic programming (RTDP) to speed up two
model-based algorithms, RMAX and MBIE (model-based interval estimation),
resulting in computationally much faster algorithms with little loss compared
to existing bounds. Specifically, our two new learning algorithms, RTDP-RMAX
and RTDP-IE, have considerably smaller computational demands than RMAX and
MBIE. We develop a general theoretical framework that allows us to prove that
both are efficient learners in a PAC (probably approximately correct) sense. We
also present an experimental evaluation of these new algorithms that helps
quantify the tradeoff between computational and experience demands.
| [
"['Alexander L. Strehl' 'Lihong Li' 'Michael L. Littman']",
"Alexander L. Strehl, Lihong Li, Michael L. Littman"
] |
cs.LG stat.ML | null | 1206.6871 | null | null | http://arxiv.org/pdf/1206.6871v1 | 2012-06-27T16:29:52Z | 2012-06-27T16:29:52Z | Ranking by Dependence - A Fair Criteria | Estimating the dependences between random variables, and ranking them
accordingly, is a prevalent problem in machine learning. Pursuing frequentist
and information-theoretic approaches, we first show that the p-value and the
mutual information can fail even in simplistic situations. We then propose two
conditions for regularizing an estimator of dependence, which leads to a simple
yet effective new measure. We discuss its advantages and compare it to
well-established model-selection criteria. Apart from that, we derive a simple
constraint for regularizing parameter estimates in a graphical model. This
results in an analytical approximation for the optimal value of the equivalent
sample size, which agrees very well with the more involved Bayesian approach in
our experiments.
| [
"['Harald Steck']",
"Harald Steck"
] |
cs.CV cs.LG cs.RO | null | 1206.6872 | null | null | http://arxiv.org/pdf/1206.6872v1 | 2012-06-27T16:30:05Z | 2012-06-27T16:30:05Z | A Self-Supervised Terrain Roughness Estimator for Off-Road Autonomous
Driving | We present a machine learning approach for estimating the second derivative
of a drivable surface, its roughness. Robot perception generally focuses on the
first derivative, obstacle detection. However, the second derivative is also
important due to its direct relation (with speed) to the shock the vehicle
experiences. Knowing the second derivative allows a vehicle to slow down in
advance of rough terrain. Estimating the second derivative is challenging due
to uncertainty. For example, at range, laser readings may be so sparse that
significant information about the surface is missing. Also, a high degree of
precision is required in projecting laser readings. This precision may be
unavailable due to latency or error in the pose estimation. We model these
sources of error as a multivariate polynomial. Its coefficients are learned
using the shock data as ground truth -- the accelerometers are used to train
the lasers. The resulting classifier operates on individual laser readings from
a road surface described by a 3D point cloud. The classifier identifies
sections of road where the second derivative is likely to be large. Thus, the
vehicle can slow down in advance, reducing the shock it experiences. The
algorithm is an evolution of one we used in the 2005 DARPA Grand Challenge. We
analyze it using data from that route.
| [
"['David Stavens' 'Sebastian Thrun']",
"David Stavens, Sebastian Thrun"
] |
cs.LG stat.ML | null | 1206.6873 | null | null | http://arxiv.org/pdf/1206.6873v1 | 2012-06-27T16:30:17Z | 2012-06-27T16:30:17Z | Variable noise and dimensionality reduction for sparse Gaussian
processes | The sparse pseudo-input Gaussian process (SPGP) is a new approximation method
for speeding up GP regression in the case of a large number of data points N.
The approximation is controlled by the gradient optimization of a small set of
M `pseudo-inputs', thereby reducing complexity from N^3 to NM^2. One limitation
of the SPGP is that this optimization space becomes impractically big for high
dimensional data sets. This paper addresses this limitation by performing
automatic dimensionality reduction. A projection of the input space to a low
dimensional space is learned in a supervised manner, alongside the
pseudo-inputs, which now live in this reduced space. The paper also
investigates the suitability of the SPGP for modeling data with input-dependent
noise. A further extension of the model is made to make it even more powerful
in this regard - we learn an uncertainty parameter for each pseudo-input. The
combination of sparsity, reduced dimension, and input-dependent noise makes it
possible to apply GPs to much larger and more complex data sets than was
previously practical. We demonstrate the benefits of these methods on several
synthetic and real world problems.
| [
"['Edward Snelson' 'Zoubin Ghahramani']",
"Edward Snelson, Zoubin Ghahramani"
] |
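The SPGP discussed above reduces GP regression cost from N^3 to N M^2 via M pseudo-inputs. The sketch below is a minimal FITC-style prediction with randomly chosen pseudo-inputs and fixed hyperparameters, purely to show the N M^2 structure; the paper's gradient optimization of pseudo-inputs, supervised dimensionality reduction, and per-pseudo-input noise are not reproduced.

```python
# Minimal FITC-style sparse GP regression sketch with M pseudo-inputs.
import numpy as np

def rbf(A, B, ell=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

rng = np.random.default_rng(0)
N, M, noise = 500, 20, 0.1
X = rng.uniform(-3, 3, size=(N, 1))
y = np.sin(X[:, 0]) + noise * rng.normal(size=N)
Z = X[rng.choice(N, size=M, replace=False)]            # pseudo-inputs (not optimized here)

Kmm = rbf(Z, Z) + 1e-6 * np.eye(M)
Knm = rbf(X, Z)
Kmm_inv = np.linalg.inv(Kmm)
q_diag = np.einsum("nm,mk,nk->n", Knm, Kmm_inv, Knm)    # diag of Knm Kmm^-1 Kmn
lam = 1.0 - q_diag + noise ** 2                         # diag(Knn) = 1 for this RBF

Sigma = Kmm + Knm.T @ (Knm / lam[:, None])              # only an M x M system to solve
alpha = np.linalg.solve(Sigma, Knm.T @ (y / lam))

X_star = np.linspace(-3, 3, 7)[:, None]
mean_star = rbf(X_star, Z) @ alpha                      # FITC predictive mean
print(np.round(mean_star, 2), np.round(np.sin(X_star[:, 0]), 2))
```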
cs.LG | null | 1206.6883 | null | null | http://arxiv.org/pdf/1206.6883v1 | 2012-06-28T18:57:01Z | 2012-06-28T18:57:01Z | Learning Neighborhoods for Metric Learning | Metric learning methods have been shown to perform well on different learning
tasks. Many of them rely on target neighborhood relationships that are computed
in the original feature space and remain fixed throughout learning. As a
result, the learned metric reflects the original neighborhood relations. We
propose a novel formulation of the metric learning problem in which, in
addition to the metric, the target neighborhood relations are also learned in a
two-step iterative approach. The new formulation can be seen as a
generalization of many existing metric learning methods. The formulation
includes a target neighbor assignment rule that assigns different numbers of
neighbors to instances according to their quality; `high quality' instances get
more neighbors. We experiment with two of its instantiations that correspond to
the metric learning algorithms LMNN and MCML and compare it to other metric
learning methods on a number of datasets. The experimental results show
state-of-the-art performance and provide evidence that learning the
neighborhood relations does improve predictive performance.
| [
"['Jun Wang' 'Adam Woznica' 'Alexandros Kalousis']",
"Jun Wang, Adam Woznica, Alexandros Kalousis"
] |
cs.LG cs.IR stat.ML | null | 1206.7112 | null | null | http://arxiv.org/pdf/1206.7112v1 | 2012-06-29T19:33:47Z | 2012-06-29T19:33:47Z | A Hybrid Method for Distance Metric Learning | We consider the problem of learning a measure of distance among vectors in a
feature space and propose a hybrid method that simultaneously learns from
similarity ratings assigned to pairs of vectors and class labels assigned to
individual vectors. Our method is based on a generative model in which class
labels can provide information that is not encoded in feature vectors yet
relates to perceived similarity between objects. Experiments with synthetic
data as well as a real medical image retrieval problem demonstrate that
leveraging class labels through use of our method improves retrieval
performance significantly.
| [
"Yi-Hao Kao and Benjamin Van Roy and Daniel Rubin and Jiajing Xu and\n Jessica Faruque and Sandy Napel",
"['Yi-Hao Kao' 'Benjamin Van Roy' 'Daniel Rubin' 'Jiajing Xu'\n 'Jessica Faruque' 'Sandy Napel']"
] |
cs.LG stat.ML | null | 1207.0057 | null | null | http://arxiv.org/pdf/1207.0057v1 | 2012-06-30T07:45:11Z | 2012-06-30T07:45:11Z | Implicit Density Estimation by Local Moment Matching to Sample from
Auto-Encoders | Recent work suggests that some auto-encoder variants do a good job of
capturing the local manifold structure of the unknown data generating density.
This paper contributes to the mathematical understanding of this phenomenon and
helps define better justified sampling algorithms for deep learning based on
auto-encoder variants. We consider an MCMC in which each step samples from a
Gaussian whose mean and covariance matrix depend on the previous state; this
chain defines a target density through its asymptotic distribution. First, we show that good
choices (in the sense of consistency) for these mean and covariance functions
are the local expected value and local covariance under that target density.
Then we show that an auto-encoder with a contractive penalty captures
estimators of these local moments in its reconstruction function and its
Jacobian. A contribution of this work is thus a novel alternative to
maximum-likelihood density estimation, which we call local moment matching. It
also justifies a recently proposed sampling algorithm for the Contractive
Auto-Encoder and extends it to the Denoising Auto-Encoder.
| [
"['Yoshua Bengio' 'Guillaume Alain' 'Salah Rifai']",
"Yoshua Bengio and Guillaume Alain and Salah Rifai"
] |
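The abstract above studies a Markov chain whose each step draws from a Gaussian centered at a learned reconstruction of the previous state. The toy sketch below fits a denoising reconstruction r(x) with a small MLP and runs such a chain with a fixed isotropic covariance; the isotropic step, the MLP, and the two-cluster data are simplifying assumptions made for brevity, whereas the paper ties the local covariance to the model's Jacobian.

```python
# Toy sketch: denoising reconstruction r(x) plus a Gaussian MCMC x_{t+1} ~ N(r(x_t), sigma^2 I).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
data = np.vstack([rng.normal([-2, 0], 0.3, size=(500, 2)),
                  rng.normal([2, 0], 0.3, size=(500, 2))])   # 2-D data on two clusters
sigma = 0.4                                                  # corruption / step scale
noisy = data + sigma * rng.normal(size=data.shape)

r = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
r.fit(noisy, data)                                           # denoising reconstruction r(x)

x = np.zeros((1, 2))
samples = []
for _ in range(2000):
    x = r.predict(x).reshape(1, 2) + sigma * rng.normal(size=(1, 2))
    samples.append(x[0])
samples = np.array(samples)
print("sample mean:", samples.mean(axis=0).round(2))
print("fraction in left / right half-plane:",
      np.mean(samples[:, 0] < 0).round(2), np.mean(samples[:, 0] > 0).round(2))
```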
cs.LG stat.ML | null | 1207.0099 | null | null | http://arxiv.org/pdf/1207.0099v1 | 2012-06-30T14:21:46Z | 2012-06-30T14:21:46Z | Density-Difference Estimation | We address the problem of estimating the difference between two probability
densities. A naive approach is a two-step procedure of first estimating two
densities separately and then computing their difference. However, such a
two-step procedure does not necessarily work well because the first step is
performed without regard to the second step and thus a small error incurred in
the first stage can cause a big error in the second stage. In this paper, we
propose a single-shot procedure for directly estimating the density difference
without separately estimating two densities. We derive a non-parametric
finite-sample error bound for the proposed single-shot density-difference
estimator and show that it achieves the optimal convergence rate. The
usefulness of the proposed method is also demonstrated experimentally.
| [
"Masashi Sugiyama, Takafumi Kanamori, Taiji Suzuki, Marthinus\n Christoffel du Plessis, Song Liu, Ichiro Takeuchi",
"['Masashi Sugiyama' 'Takafumi Kanamori' 'Taiji Suzuki'\n 'Marthinus Christoffel du Plessis' 'Song Liu' 'Ichiro Takeuchi']"
] |
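The abstract above argues against estimating each density separately and then subtracting. The sketch below only makes that naive two-step baseline concrete with kernel density estimates on toy data; the paper's single-shot density-difference estimator and its error bound are not reproduced.

```python
# Naive two-step baseline: estimate p and q separately by KDE, then subtract.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
x_p = rng.normal(loc=0.0, scale=1.0, size=500)   # samples from p
x_q = rng.normal(loc=0.5, scale=1.0, size=500)   # samples from q

p_hat = gaussian_kde(x_p)
q_hat = gaussian_kde(x_q)

grid = np.linspace(-4, 4, 9)
diff_two_step = p_hat(grid) - q_hat(grid)        # estimate of f(x) = p(x) - q(x)
print(np.round(diff_two_step, 3))
```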
cs.CV cs.LG | null | 1207.0151 | null | null | http://arxiv.org/pdf/1207.0151v1 | 2012-06-30T21:04:13Z | 2012-06-30T21:04:13Z | Differentiable Pooling for Hierarchical Feature Learning | We introduce a parametric form of pooling, based on a Gaussian, which can be
optimized alongside the features in a single global objective function. By
contrast, existing pooling schemes are based on heuristics (e.g. local maximum)
and have no clear link to the cost function of the model. Furthermore, the
variables of the Gaussian explicitly store location information, distinct from
the appearance captured by the features, thus providing a what/where
decomposition of the input signal. Although the differentiable pooling scheme
can be incorporated in a wide range of hierarchical models, we demonstrate it
in the context of a Deconvolutional Network model (Zeiler et al. ICCV 2011). We
also explore a number of secondary issues within this model and present
detailed experiments on MNIST digits.
| [
"['Matthew D. Zeiler' 'Rob Fergus']",
"Matthew D. Zeiler and Rob Fergus"
] |
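A minimal sketch of Gaussian-weighted pooling over a single feature-map patch, differentiable in the Gaussian's mean ("where") and log-precision; the parameterization and normalization here are assumptions and do not reproduce the paper's Deconvolutional Network formulation.

```python
import numpy as np

def gaussian_pool(patch, mu, log_prec):
    """Pool a k x k feature-map patch with 2-D Gaussian weights.

    mu       : (2,) Gaussian mean in patch coordinates (the "where" variables)
    log_prec : scalar log-precision controlling how peaked the pooling is
    Returns the pooled value; everything is differentiable in (mu, log_prec)."""
    k = patch.shape[0]
    ys, xs = np.meshgrid(np.arange(k), np.arange(k), indexing="ij")
    coords = np.stack([ys, xs], axis=-1).astype(float)
    prec = np.exp(log_prec)
    sq = ((coords - mu) ** 2).sum(-1)
    w = np.exp(-0.5 * prec * sq)
    w = w / w.sum()                      # normalized soft pooling weights
    return (w * patch).sum()             # the "what": Gaussian-weighted average

# Large precision approaches selecting the single location mu (max-like pooling);
# small precision recovers plain average pooling.
rng = np.random.default_rng(0)
print(gaussian_pool(rng.random((4, 4)), mu=np.array([1.5, 2.0]), log_prec=1.0))
```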
cs.LG | null | 1207.0166 | null | null | http://arxiv.org/pdf/1207.0166v3 | 2013-01-16T19:19:34Z | 2012-06-30T23:07:03Z | On Multilabel Classification and Ranking with Partial Feedback | We present a novel multilabel/ranking algorithm working in partial
information settings. The algorithm is based on 2nd-order descent methods, and
relies on upper-confidence bounds to trade off exploration and exploitation. We
analyze this algorithm in a partial adversarial setting, where covariates can
be adversarial, but multilabel probabilities are ruled by (generalized) linear
models. We show O(T^{1/2} log T) regret bounds, which improve in several ways
on the existing results. We test the effectiveness of our upper-confidence
scheme by contrasting against full-information baselines on real-world
multilabel datasets, often obtaining comparable performance.
| [
"Claudio Gentile and Francesco Orabona",
"['Claudio Gentile' 'Francesco Orabona']"
] |
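The sketch below illustrates, under simplifying assumptions, how a per-label upper-confidence score can be built from second-order (ridge-style) statistics; it is a generic LinUCB-flavored construction, not the paper's algorithm or its regret analysis.

```python
import numpy as np

class UCBLabelScorer:
    """Per-label upper-confidence scores for partial-feedback multilabel ranking.

    For each label, keep a second-order matrix A and vector b; the score adds an
    exploration bonus proportional to the confidence width sqrt(x^T A^{-1} x).
    Illustrative sketch only."""
    def __init__(self, n_labels, dim, alpha=1.0):
        self.alpha = alpha
        self.A = np.stack([np.eye(dim) for _ in range(n_labels)])
        self.b = np.zeros((n_labels, dim))

    def scores(self, x):
        s = np.empty(len(self.b))
        for k in range(len(self.b)):
            A_inv = np.linalg.inv(self.A[k])
            w = A_inv @ self.b[k]
            s[k] = x @ w + self.alpha * np.sqrt(x @ A_inv @ x)  # exploit + explore
        return s                         # rank labels by decreasing score

    def update(self, x, label, reward):
        """Partial feedback: only the played label's statistics are updated."""
        self.A[label] += np.outer(x, x)
        self.b[label] += reward * x
```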
cs.LG stat.ML | null | 1207.0268 | null | null | http://arxiv.org/pdf/1207.0268v1 | 2012-07-02T02:57:30Z | 2012-07-02T02:57:30Z | Surrogate Regret Bounds for Bipartite Ranking via Strongly Proper Losses | The problem of bipartite ranking, where instances are labeled positive or
negative and the goal is to learn a scoring function that minimizes the
probability of mis-ranking a pair of positive and negative instances (or
equivalently, that maximizes the area under the ROC curve), has been widely
studied in recent years. A dominant theoretical and algorithmic framework for
the problem has been to reduce bipartite ranking to pairwise classification; in
particular, it is well known that the bipartite ranking regret can be
formulated as a pairwise classification regret, which in turn can be upper
bounded using usual regret bounds for classification problems. Recently,
Kotlowski et al. (2011) showed regret bounds for bipartite ranking in terms of
the regret associated with balanced versions of the standard (non-pairwise)
logistic and exponential losses. In this paper, we show that such
(non-pairwise) surrogate regret bounds for bipartite ranking can be obtained in
terms of a broad class of proper (composite) losses that we term strongly
proper. Our proof technique is much simpler than that of Kotlowski et al.
(2011), and relies on properties of proper (composite) losses as elucidated
recently by Reid and Williamson (2010, 2011) and others. Our result yields
explicit surrogate bounds (with no hidden balancing terms) in terms of a
variety of strongly proper losses, including for example logistic, exponential,
squared and squared hinge losses as special cases. We also obtain tighter
surrogate bounds under certain low-noise conditions via a recent result of
Clemencon and Robbiano (2011).
| [
"['Shivani Agarwal']",
"Shivani Agarwal"
] |
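A small illustration of the practical takeaway, on arbitrary synthetic data and hyperparameters: train a scorer with an ordinary (non-pairwise) strongly proper loss such as logistic loss, then measure the pairwise bipartite-ranking objective (AUC).

```python
import numpy as np

def train_logistic_scorer(X, y, lr=0.1, epochs=500):
    """Minimize the (non-pairwise, strongly proper) logistic loss by gradient
    descent; y in {0, 1}. Returns a scoring function f(x) = x @ w."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return lambda Z: Z @ w

def auc(scores, y):
    """Empirical AUC: probability that a positive is scored above a negative."""
    pos, neg = scores[y == 1], scores[y == 0]
    diffs = pos[:, None] - neg[None, :]
    return (diffs > 0).mean() + 0.5 * (diffs == 0).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=400) > 0).astype(float)
f = train_logistic_scorer(X, y)
print("AUC of logistic-trained scorer:", round(auc(f(X), y), 3))
```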
cs.CL cs.LG | null | 1207.0396 | null | null | http://arxiv.org/pdf/1207.0396v1 | 2012-07-02T14:19:21Z | 2012-07-02T14:19:21Z | Applying Deep Belief Networks to Word Sense Disambiguation | In this paper, we applied a novel learning algorithm, namely, Deep Belief
Networks (DBN) to word sense disambiguation (WSD). DBN is a probabilistic
generative model composed of multiple layers of hidden units. DBN uses
Restricted Boltzmann Machines (RBMs) to greedily pretrain the network layer by
layer. Then, a separate fine-tuning step is employed to improve the
discriminative power. We compared DBN with various state-of-the-art supervised
learning algorithms in WSD such as Support Vector Machine (SVM), Maximum
Entropy model (MaxEnt), Naive Bayes classifier (NB) and Kernel Principal
Component Analysis (KPCA). We used all words in the given paragraph,
surrounding context words and part-of-speech of surrounding words as our
knowledge sources. We conducted our experiment on the SENSEVAL-2 data set. We
observed that DBN outperformed all other learning algorithms.
| [
"['Peratham Wiriyathammabhum' 'Boonserm Kijsirikul' 'Hiroya Takamura'\n 'Manabu Okumura']",
"Peratham Wiriyathammabhum, Boonserm Kijsirikul, Hiroya Takamura,\n Manabu Okumura"
] |
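A minimal sketch of the RBM building block used in such greedy layer-wise pretraining: a single contrastive-divergence (CD-1) update for a binary RBM. Layer sizes, learning rate, and the omitted fine-tuning stage are placeholders rather than the paper's setup.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_update(W, b_vis, b_hid, v0, lr=0.05, rng=None):
    """One CD-1 step for a binary RBM on a batch v0 (shape: batch x n_visible)."""
    rng = rng or np.random.default_rng(0)
    h0_prob = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    v1_prob = sigmoid(h0 @ W.T + b_vis)            # one-step reconstruction
    h1_prob = sigmoid(v1_prob @ W + b_hid)
    # Positive-phase minus negative-phase statistics
    W += lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / len(v0)
    b_vis += lr * (v0 - v1_prob).mean(axis=0)
    b_hid += lr * (h0_prob - h1_prob).mean(axis=0)
    return W, b_vis, b_hid

# Greedy pretraining feeds each trained layer's hidden activations to the next RBM.
```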
cs.DS cs.LG | null | 1207.0560 | null | null | http://arxiv.org/pdf/1207.0560v4 | 2013-08-24T05:35:11Z | 2012-07-03T01:25:10Z | Algorithms for Approximate Minimization of the Difference Between
Submodular Functions, with Applications | We extend the work of Narasimhan and Bilmes [30] for minimizing set functions
representable as a difference between submodular functions. Similar to [30],
our new algorithms are guaranteed to monotonically reduce the objective
function at every step. We empirically and theoretically show that the
per-iteration cost of our algorithms is much lower than that of [30], and our algorithms
can be used to efficiently minimize a difference between submodular functions
under various combinatorial constraints, a problem not previously addressed. We
provide computational bounds and a hardness result on the multiplicative
inapproximability of minimizing the difference between submodular functions. We
show, however, that it is possible to give worst-case additive bounds by
providing a polynomial time computable lower-bound on the minima. Finally we
show how a number of machine learning problems can be modeled as minimizing the
difference between submodular functions. We experimentally show the validity of
our algorithms by testing them on the problem of feature selection with
submodular cost features.
| [
"Rishabh Iyer and Jeff Bilmes",
"['Rishabh Iyer' 'Jeff Bilmes']"
] |
stat.ML cs.LG | null | 1207.0577 | null | null | http://arxiv.org/pdf/1207.0577v2 | 2013-10-10T17:19:31Z | 2012-07-03T06:07:13Z | Robust Dequantized Compressive Sensing | We consider the reconstruction problem in compressed sensing in which the
observations are recorded in a finite number of bits. They may thus contain
quantization errors (from being rounded to the nearest representable value) and
saturation errors (from being outside the range of representable values). Our
formulation has an objective of weighted $\ell_2$-$\ell_1$ type, along with
constraints that account explicitly for quantization and saturation errors, and
is solved with an augmented Lagrangian method. We prove a consistency result
for the recovered solution, stronger than those that have appeared to date in
the literature, showing in particular that asymptotic consistency can be
obtained without oversampling. We present extensive computational comparisons
with formulations proposed previously, and variants thereof.
| [
"['Ji Liu' 'Stephen J. Wright']",
"Ji Liu and Stephen J. Wright"
] |
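One plausible reading of such a formulation, written as a hedged sketch (the weighting matrix D, the bin width Δ, the one-sided saturation level G, and the exact constraint sets are assumptions, not the authors' precise model):

```latex
\min_{x}\; \lambda \|x\|_{1} + \tfrac{1}{2}\,\|D(\Phi x - y)\|_{2}^{2}
\qquad \text{subject to} \qquad
\begin{cases}
|(\Phi x)_i - y_i| \le \Delta/2, & i \in \mathcal{U} \text{ (unsaturated, quantized measurements)},\\
(\Phi x)_i \ge G, & i \in \mathcal{S} \text{ (measurements saturated at the upper level } G\text{)},
\end{cases}
```

where Φ is the sensing matrix and y the recorded finite-bit observations; an augmented Lagrangian method, as in the paper, handles such constraints explicitly.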
cs.NE cs.CV cs.LG | null | 1207.0580 | null | null | http://arxiv.org/pdf/1207.0580v1 | 2012-07-03T06:35:15Z | 2012-07-03T06:35:15Z | Improving neural networks by preventing co-adaptation of feature
detectors | When a large feedforward neural network is trained on a small training set,
it typically performs poorly on held-out test data. This "overfitting" is
greatly reduced by randomly omitting half of the feature detectors on each
training case. This prevents complex co-adaptations in which a feature detector
is only helpful in the context of several other specific feature detectors.
Instead, each neuron learns to detect a feature that is generally helpful for
producing the correct answer given the combinatorially large variety of
internal contexts in which it must operate. Random "dropout" gives big
improvements on many benchmark tasks and sets new records for speech and object
recognition.
| [
"['Geoffrey E. Hinton' 'Nitish Srivastava' 'Alex Krizhevsky'\n 'Ilya Sutskever' 'Ruslan R. Salakhutdinov']",
"Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya\n Sutskever, Ruslan R. Salakhutdinov"
] |
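A minimal sketch of the mechanism described above: drop each hidden unit with probability one half at training time and, at test time, keep all units but halve their contribution (equivalent to halving the outgoing weights). The surrounding network is omitted and the shapes are placeholders.

```python
import numpy as np

def dropout_forward(h, p_drop=0.5, train=True, rng=None):
    """Randomly omit feature detectors during training; rescale at test time."""
    if train:
        rng = rng or np.random.default_rng(0)
        mask = (rng.random(h.shape) >= p_drop).astype(h.dtype)
        return h * mask                   # roughly half the units are silenced
    return h * (1.0 - p_drop)             # test time: all units, halved output

rng = np.random.default_rng(1)
hidden = np.maximum(0, rng.normal(size=(4, 8)))   # some hidden activations
print(dropout_forward(hidden, train=True))
print(dropout_forward(hidden, train=False))
```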
cs.LG cs.CV | null | 1207.0677 | null | null | http://arxiv.org/pdf/1207.0677v1 | 2012-07-03T13:52:19Z | 2012-07-03T13:52:19Z | Local Water Diffusion Phenomenon Clustering From High Angular Resolution
Diffusion Imaging (HARDI) | Understanding neurodegenerative diseases undoubtedly requires studying
human brain white matter fiber tracts. To date, diffusion magnetic
resonance imaging (dMRI) is the only technique that can provide information about
the neural architecture of the human brain, thus permitting the study of white
matter connections and their integrity. However, a remaining challenge of the
dMRI community is to better characterize complex fiber crossing configurations,
where diffusion tensor imaging (DTI) is limited but high angular resolution
diffusion imaging (HARDI) now brings solutions. This paper investigates the
development of both an identification and a classification process for the local
water diffusion phenomenon based on HARDI data to automatically detect imaging
voxels where there are single and crossing fiber bundle populations. The
technique is based on knowledge extraction processes and is validated on a dMRI
phantom dataset with ground truth.
| [
"['Romain Giot' 'Christophe Charrier' 'Maxime Descoteaux']",
"Romain Giot (GREYC), Christophe Charrier (GREYC), Maxime Descoteaux\n (SCIL)"
] |
cs.AI cs.CL cs.LG | null | 1207.0742 | null | null | http://arxiv.org/pdf/1207.0742v1 | 2012-07-03T16:35:48Z | 2012-07-03T16:35:48Z | The OS* Algorithm: a Joint Approach to Exact Optimization and Sampling | Most current sampling algorithms for high-dimensional distributions are based
on MCMC techniques and are approximate in the sense that they are valid only
asymptotically. Rejection sampling, on the other hand, produces valid samples,
but is unrealistically slow in high-dimensional spaces. The OS* algorithm that we
propose is a unified approach to exact optimization and sampling, based on
incremental refinements of a functional upper bound, which combines ideas of
adaptive rejection sampling and of A* optimization search. We show that the
choice of the refinement can be done in a way that ensures tractability in
high-dimensional spaces, and we present first experiments in two different
settings: inference in high-order HMMs and in large discrete graphical models.
| [
"['Marc Dymetman' 'Guillaume Bouchard' 'Simon Carter']",
"Marc Dymetman and Guillaume Bouchard and Simon Carter"
] |
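A one-dimensional toy sketch of the shared rejection-plus-refinement idea: sample exactly from an unnormalized density through a piecewise-constant upper bound that is tightened at every rejected point. The monotone target p(x) = exp(-x) makes the per-interval bound trivial (its left-endpoint value); this bound oracle, and the restriction to 1-D, are simplifying assumptions far from the HMM and graphical-model settings of the paper.

```python
import numpy as np

def p(x):                        # unnormalized target (monotone decreasing here)
    return np.exp(-x)

def os_star_sample(n, rng=None):
    """Exact samples via rejection from a piecewise-constant envelope q >= p,
    refined (interval split) at every rejected point."""
    rng = rng or np.random.default_rng(0)
    cuts = [0.0, 1.0]                    # interval boundaries of the envelope
    out = []
    while len(out) < n:
        lo, hi = np.array(cuts[:-1]), np.array(cuts[1:])
        heights = p(lo)                  # valid upper bound: p is decreasing
        mass = heights * (hi - lo)
        k = rng.choice(len(mass), p=mass / mass.sum())   # pick an interval
        x = rng.uniform(lo[k], hi[k])                    # uniform within it
        if rng.random() < p(x) / heights[k]:             # exact accept test
            out.append(x)
        else:                             # refine: split the interval at x
            cuts = sorted(set(cuts) | {x})
    return np.array(out)

print(os_star_sample(5))
```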
cs.LG | null | 1207.0783 | null | null | http://arxiv.org/pdf/1207.0783v1 | 2012-07-03T19:12:13Z | 2012-07-03T19:12:13Z | Hybrid Template Update System for Unimodal Biometric Systems | Semi-supervised template update systems make it possible to automatically take into
account the intra-class variability of the biometric data over time. Such
systems can be inefficient if they include too many impostor samples or skip
too many genuine samples. In the first case, the biometric reference drifts
from the real biometric data and more often attracts impostors. In the second
case, the biometric reference does not evolve quickly enough and also
progressively drifts from the real biometric data. We propose a hybrid system
using several biometric sub-references in order to increase performance of
self-update systems by reducing the previously cited errors. The proposition is
validated for a keystroke-dynamics authentication system (this modality
suffers from high variability over time) on two consecutive datasets from the
state of the art.
| [
"Romain Giot (GREYC), Christophe Rosenberger (GREYC), Bernadette\n Dorizzi (EPH, SAMOVAR)",
"['Romain Giot' 'Christophe Rosenberger' 'Bernadette Dorizzi']"
] |
cs.LG | null | 1207.0784 | null | null | http://arxiv.org/pdf/1207.0784v1 | 2012-07-03T19:12:56Z | 2012-07-03T19:12:56Z | Web-Based Benchmark for Keystroke Dynamics Biometric Systems: A
Statistical Analysis | Most keystroke dynamics studies have been evaluated using a specific kind of
dataset in which users type an imposed login and password. Moreover, these
studies are optimistic since most of them use different acquisition protocols,
private datasets, controlled environment, etc. In order to enhance the accuracy
of keystroke dynamics' performance, the main contribution of this paper is
twofold. First, we provide a new kind of dataset in which users have typed both
an imposed and a chosen pair of logins and passwords. In addition, the
keystroke dynamics samples are collected in a web-based uncontrolled
environment (OS, keyboards, browser, etc.). Such a dataset is important
since it yields more realistic results on keystroke dynamics' performance
than those reported in the literature (controlled environment, etc.). Second, we
present a statistical analysis of well-known assertions such as the
relationship between performance and password size, impact of fusion schemes on
system overall performance, and others such as the relationship between
performance and entropy. In this paper, we highlight some new results
on keystroke dynamics in realistic conditions.
| [
"['Romain Giot' 'Mohamad El-Abed' 'Christophe Rosenberger']",
"Romain Giot (GREYC), Mohamad El-Abed (GREYC), Christophe Rosenberger\n (GREYC)"
] |
stat.ML cs.CV cs.LG cs.MM | null | 1207.1019 | null | null | http://arxiv.org/pdf/1207.1019v1 | 2012-07-04T15:09:05Z | 2012-07-04T15:09:05Z | PAC-Bayesian Majority Vote for Late Classifier Fusion | A lot of attention has been devoted to multimedia indexing over the past few
years. In the literature, two kinds of fusion schemes are often considered:
early fusion and late fusion. In this paper we focus on late classifier
fusion, where one combines the scores of each modality at the decision level.
To tackle this problem, we investigate a recent, elegant and well-founded
quadratic program named MinCq coming from the Machine Learning PAC-Bayes
theory. MinCq looks for the weighted combination, over a set of real-valued
functions seen as voters, leading to the lowest misclassification rate, while
making use of the voters' diversity. We provide evidence that this method is
naturally adapted to the late fusion procedure. We propose an extension of MinCq by
adding an order-preserving pairwise loss for ranking, helping to improve the
Mean Average Precision measure. We confirm the good behavior of the MinCq-based
fusion approaches with experiments on a real image benchmark.
| [
"Emilie Morvant (LIF), Amaury Habrard (LAHC), St\\'ephane Ayache (LIF)",
"['Emilie Morvant' 'Amaury Habrard' 'Stéphane Ayache']"
] |
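A MinCq-flavored sketch of late fusion, under simplifying assumptions: find convex weights over per-modality scores that fix the first moment of the margin and minimize its second moment. The real MinCq program operates over a self-complemented, quasi-uniformly constrained voter set, so this is an illustration of the flavor of the quadratic program, not the method itself.

```python
import numpy as np
from scipy.optimize import minimize

def fuse_scores(H, y, mu=0.05):
    """Late-fusion weight sketch. H is an (n_samples x n_voters) matrix of
    per-modality scores in [-1, 1]; y is in {-1, +1}. Finds convex weights that
    fix the first moment of the margin to mu while minimizing its second moment."""
    n, m = H.shape

    def second_moment(q):
        return np.mean((H @ q) ** 2)

    def first_moment_gap(q):
        return np.mean(y * (H @ q)) - mu

    res = minimize(second_moment, np.full(m, 1.0 / m), method="SLSQP",
                   bounds=[(0.0, 1.0)] * m,
                   constraints=[{"type": "eq", "fun": first_moment_gap},
                                {"type": "eq", "fun": lambda q: q.sum() - 1.0}])
    return res.x      # weights used to combine modality scores at decision level
```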