title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Compositional Planning Using Optimal Option Models | cs.AI cs.LG | In this paper we introduce a framework for option model composition. Option
models are temporal abstractions that, like macro-operators in classical
planning, jump directly from a start state to an end state. Prior work has
focused on constructing option models from primitive actions, by intra-option
model learning; or on using option models to construct a value function, by
inter-option planning. We present a unified view of intra- and inter-option
model learning, based on a major generalisation of the Bellman equation. Our
fundamental operation is the recursive composition of option models into other
option models. This key idea enables compositional planning over many levels of
abstraction. We illustrate our framework using a dynamic programming algorithm
that simultaneously constructs optimal option models for multiple subgoals, and
also searches over those option models to provide rapid progress towards other
subgoals.
| David Silver (University College London), Kamil Ciosek (University
College London) | null | 1206.6473 | null | null |
Estimation of Simultaneously Sparse and Low Rank Matrices | cs.DS cs.LG cs.NA stat.ML | The paper introduces a penalized matrix estimation procedure aiming at
solutions which are sparse and low-rank at the same time. Such structures arise
in the context of social networks or protein interactions where underlying
graphs have adjacency matrices which are block-diagonal in the appropriate
basis. We introduce a convex mixed penalty which involves $\ell_1$-norm and
trace norm simultaneously. We obtain an oracle inequality which indicates how
the two effects interact according to the nature of the target matrix. We bound
generalization error in the link prediction problem. We also develop proximal
descent strategies to solve the optimization problem efficiently and evaluate
performance on synthetic and real data sets.
| Emile Richard (ENS Cachan), Pierre-Andre Savalle (Ecole Centrale de
Paris), Nicolas Vayatis (ENS Cachan) | null | 1206.6474 | null | null |
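The abstract above penalizes a matrix with the $\ell_1$-norm and the trace norm simultaneously and solves the problem by proximal descent. As a minimal sketch, not the paper's exact scheme, both penalties have closed-form proximal maps; composing them sequentially inside a proximal gradient step is a common heuristic (the names `prox_l1`, `prox_trace`, and `proximal_step` are illustrative):

```python
import numpy as np

def prox_l1(A, t):
    # Entrywise soft-thresholding: proximal map of t * ||A||_1.
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def prox_trace(A, t):
    # Singular-value soft-thresholding: proximal map of t * ||A||_*.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt

def proximal_step(A, grad, step, tau_l1, tau_trace):
    # One proximal gradient step on a smooth loss with the mixed penalty.
    # Applying the two proximal maps in sequence approximates the joint
    # proximal map; it is a heuristic, not the paper's exact algorithm.
    B = A - step * grad
    return prox_trace(prox_l1(B, step * tau_l1), step * tau_trace)
```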
A Split-Merge Framework for Comparing Clusterings | cs.LG stat.ML | Clustering evaluation measures are frequently used to assess the
performance of algorithms. However, most measures are not properly normalized
and ignore some information in the inherent structure of clusterings. We model
the relation between two clusterings as a bipartite graph and propose a general
component-based decomposition formula based on the components of the graph.
Most existing measures are examples of this formula. In order to satisfy
consistency in the component, we further propose a split-merge framework for
comparing clusterings of different data sets. Our framework gives measures that
are conditionally normalized, and it can make use of data point information,
such as feature vectors and pairwise distances. We use an entropy-based
instance of the framework and a coreference resolution data set to demonstrate
empirically the utility of our framework over other measures.
| Qiaoliang Xiang (Nanyang Technological University), Qi Mao (Nanyang
Technological University), Kian Ming Chai (DSO National Laboratories), Hai
Leong Chieu (DSO National Laboratories), Ivor Tsang (Nanyang Technological
University), Zhendong Zhao (Macquarie University) | null | 1206.6475 | null | null |
Similarity Learning for Provably Accurate Sparse Linear Classification | cs.LG cs.AI stat.ML | In recent years, the crucial importance of metrics in machine learning
algorithms has led to increasing interest in optimizing distance and
similarity functions. Most state-of-the-art methods focus on learning
Mahalanobis distances (which must satisfy a positive semi-definiteness
constraint) for use in a local k-NN algorithm. However, no theoretical
link is established between the learned metrics and their performance in
classification. In this paper, we make use of the formal framework of good
similarities introduced by Balcan et al. to design an algorithm for learning a
non PSD linear similarity optimized in a nonlinear feature space, which is then
used to build a global linear classifier. We show that our approach has uniform
stability and derive a generalization bound on the classification error.
Experiments performed on various datasets confirm the effectiveness of our
approach compared to state-of-the-art methods and provide evidence that (i) it
is fast, (ii) robust to overfitting and (iii) produces very sparse classifiers.
| Aurelien Bellet (University of Saint-Etienne), Amaury Habrard
(University of Saint-Etienne), Marc Sebban (University of Saint-Etienne) | null | 1206.6476 | null | null |
Discovering Support and Affiliated Features from Very High Dimensions | cs.LG stat.ML | In this paper, a novel learning paradigm is presented to automatically
identify groups of informative and correlated features from very high
dimensions. Specifically, we explicitly incorporate correlation measures as
constraints and then propose an efficient embedded feature selection method
using recently developed cutting plane strategy. The benefits of the proposed
algorithm are twofold. First, it can identify the optimal discriminative and
uncorrelated feature subset with respect to the output labels, denoted here as Support
Features, which brings about significant improvements in prediction performance
over other state of the art feature selection methods considered in the paper.
Second, during the learning process, the underlying group structures of
correlated features associated with each support feature, denoted as Affiliated
Features, can also be discovered without any additional cost. These affiliated
features serve to improve the interpretations on the learning tasks. Extensive
empirical studies on both synthetic and very high dimensional real-world
datasets verify the validity and efficiency of the proposed method.
| Yiteng Zhai (Nanyang Technological University), Mingkui Tan (Nanyang
Technological University), Ivor Tsang (Nanyang Technological University), Yew
Soon Ong (Nanyang Technological University) | null | 1206.6477 | null | null |
Maximum Margin Output Coding | cs.LG stat.ML | In this paper we study output coding for multi-label prediction. For a
multi-label output coding to be discriminative, it is important that codewords
for different label vectors are significantly different from each other. At the
same time, unlike in traditional coding theory, codewords in output coding are
to be predicted from the input, so it is also critical to have a predictable
label encoding.
To find output codes that are both discriminative and predictable, we first
propose a max-margin formulation that naturally captures these two properties.
We then convert it to a metric learning formulation, but with an exponentially
large number of constraints as commonly encountered in structured prediction
problems. Without a label structure for tractable inference, we use
overgenerating (i.e., relaxation) techniques combined with the cutting plane
method for optimization.
In our empirical study, the proposed output coding scheme outperforms a
variety of existing multi-label prediction methods for image, text and music
classification.
| Yi Zhang (Carnegie Mellon University), Jeff Schneider (Carnegie Mellon
University) | null | 1206.6478 | null | null |
The Landmark Selection Method for Multiple Output Prediction | cs.LG stat.ML | Conditional modeling x \to y is a central problem in machine learning. A
substantial research effort is devoted to such modeling when x is high
dimensional. We consider, instead, the case of a high dimensional y, where x is
either low dimensional or high dimensional. Our approach is based on selecting
a small subset y_L of the dimensions of y, and then modeling (i) x \to
y_L and (ii) y_L \to y. Composing these two models, we obtain a conditional
model x \to y that possesses convenient statistical properties. Multi-label
classification and multivariate regression experiments on several datasets show
that this model outperforms the one vs. all approach as well as several
sophisticated multiple output prediction methods.
| Krishnakumar Balasubramanian (Georgia Institute of Technology), Guy
Lebanon (Georgia Institute of Technology) | null | 1206.6479 | null | null |
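The two-stage construction in the abstract above is simple to sketch once the landmark dimensions are fixed. Choosing them well is the paper's contribution; in this hedged sketch `landmark_idx` is assumed to be given, and ridge regression stands in for whichever base learners the paper uses:

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_landmark_model(X, Y, landmark_idx):
    # Stage (i): model x -> y_L on the selected landmark dimensions.
    stage1 = Ridge().fit(X, Y[:, landmark_idx])
    # Stage (ii): model y_L -> y, recovering the full output.
    stage2 = Ridge().fit(Y[:, landmark_idx], Y)
    return stage1, stage2

def predict_landmark_model(stage1, stage2, X):
    # Compose the two models to obtain the conditional model x -> y.
    return stage2.predict(stage1.predict(X))
```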
A Dantzig Selector Approach to Temporal Difference Learning | cs.LG stat.ML | LSTD is a popular algorithm for value function approximation. Whenever the
number of features is larger than the number of samples, it must be paired with
some form of regularization. In particular, L1-regularization methods tend to
perform feature selection by promoting sparsity, and thus, are well-suited for
high-dimensional problems. However, since LSTD is not a simple regression
algorithm but rather solves a fixed-point problem, its integration with
L1-regularization is not straightforward and might come with some drawbacks
(e.g., the P-matrix assumption for LASSO-TD). In this paper, we introduce a
novel algorithm obtained by integrating LSTD with the Dantzig Selector. We
investigate the performance of the proposed algorithm and its relationship with
the existing regularized approaches, and show how it addresses some of their
drawbacks.
| Matthieu Geist (Supelec), Bruno Scherrer (INRIA Nancy), Alessandro
Lazaric (INRIA Lille), Mohammad Ghavamzadeh (INRIA Lille) | null | 1206.6480 | null | null |
Cross Language Text Classification via Subspace Co-Regularized
Multi-View Learning | cs.CL cs.IR cs.LG | In many multilingual text classification problems, the documents in different
languages often share the same set of categories. To reduce the labeling cost
of training a classification model for each individual language, it is
important to transfer the label knowledge gained from one language to another
language by conducting cross language classification. In this paper we develop
a novel subspace co-regularized multi-view learning method for cross language
text classification. This method is built on parallel corpora produced by
machine translation. It jointly minimizes the training error of each classifier
in each language while penalizing the distance between the subspace
representations of parallel documents. Our empirical study on a large set of
cross language text classification tasks shows the proposed method consistently
outperforms a number of inductive methods, domain adaptation methods, and
multi-view learning methods.
| Yuhong Guo (Temple University), Min Xiao (Temple University) | null | 1206.6481 | null | null |
Modeling Images using Transformed Indian Buffet Processes | cs.CV cs.LG stat.ML | Latent feature models are attractive for image modeling, since images
generally contain multiple objects. However, many latent feature models ignore
that objects can appear at different locations or require pre-segmentation of
images. While the transformed Indian buffet process (tIBP) provides a method
for modeling transformation-invariant features in unsegmented binary images,
its current form is inappropriate for real images because of its computational
cost and modeling assumptions. We combine the tIBP with likelihoods appropriate
for real images and develop an efficient inference scheme, using the cross-correlation
between images and features, that is theoretically and empirically faster than
existing inference techniques. Our method discovers reasonable components and
achieves effective image reconstruction in natural images.
| Ke Zhai (University of Maryland), Yuening Hu (University of Maryland),
Sinead Williamson (Carnegie Mellon University), Jordan Boyd-Graber
(University of Maryland) | null | 1206.6482 | null | null |
Subgraph Matching Kernels for Attributed Graphs | cs.LG stat.ML | We propose graph kernels based on subgraph matchings, i.e.
structure-preserving bijections between subgraphs. While recently proposed
kernels based on common subgraphs (Wale et al., 2008; Shervashidze et al.,
2009) in general cannot be applied to attributed graphs, our approach rates
mappings of subgraphs with a flexible scoring scheme that compares vertex and
edge attributes by kernels. We show that subgraph matching kernels generalize
several known kernels. To compute the kernel we propose a graph-theoretical
algorithm inspired by a classical relation between common subgraphs of two
graphs and cliques in their product graph observed by Levi (1973). Encouraging
experimental results on a classification task of real-world graphs are
presented.
| Nils Kriege (TU Dortmund), Petra Mutzel (TU Dortmund) | null | 1206.6483 | null | null |
Apprenticeship Learning for Model Parameters of Partially Observable
Environments | cs.LG cs.AI stat.ML | We consider apprenticeship learning, i.e., having an agent learn a task by
observing an expert demonstrating the task in a partially observable
environment when the model of the environment is uncertain. This setting is
useful in applications where the explicit modeling of the environment is
difficult, such as a dialogue system. We show that we can extract information
about the environment model by inferring the action selection process behind the
demonstration, under the assumption that the expert is choosing optimal actions
based on knowledge of the true model of the target environment. The proposed
algorithms achieve more accurate estimates of POMDP parameters and better
policies from a short demonstration, compared to methods that learn only from
the environment's reactions.
| Takaki Makino (University of Tokyo), Johane Takeuchi (Honda Research
Institute Japan) | null | 1206.6484 | null | null |
Greedy Algorithms for Sparse Reinforcement Learning | cs.LG stat.ML | Feature selection and regularization are becoming increasingly prominent
tools in the efforts of the reinforcement learning (RL) community to expand the
reach and applicability of RL. One approach to the problem of feature selection
is to impose a sparsity-inducing form of regularization on the learning method.
Recent work on $L_1$ regularization has adapted techniques from the supervised
learning literature for use with RL. Another approach that has received renewed
attention in the supervised learning community is that of using a simple
algorithm that greedily adds new features. Such algorithms have many of the
good properties of the $L_1$ regularization methods, while also being extremely
efficient and, in some cases, allowing theoretical guarantees on recovery of
the true form of a sparse target function from sampled data. This paper
considers variants of orthogonal matching pursuit (OMP) applied to
reinforcement learning. The resulting algorithms are analyzed and compared
experimentally with existing $L_1$ regularized approaches. We demonstrate that
perhaps the most natural scenario in which one might hope to achieve sparse
recovery fails; however, one variant, OMP-BRM, provides promising theoretical
guarantees under certain assumptions on the feature dictionary. Another
variant, OMP-TD, empirically outperforms prior methods both in approximation
accuracy and efficiency on several benchmark problems.
| Christopher Painter-Wakefield (Duke University), Ronald Parr (Duke
University) | null | 1206.6485 | null | null |
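For reference, plain orthogonal matching pursuit for regression is sketched below; OMP-BRM and OMP-TD adapt it by replacing the regression target with Bellman-residual or temporal-difference quantities, a detail this sketch omits:

```python
import numpy as np

def omp(Phi, y, k):
    # Greedily select up to k columns of the feature matrix Phi to fit y.
    support, residual, w = [], y.copy(), None
    for _ in range(k):
        # Pick the feature most correlated with the current residual.
        scores = np.abs(Phi.T @ residual)
        scores[support] = -np.inf  # never re-select a chosen feature
        support.append(int(np.argmax(scores)))
        # Refit least squares on the selected support (the "orthogonal" step).
        w, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ w
    return support, w
```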
Flexible Modeling of Latent Task Structures in Multitask Learning | cs.LG stat.ML | Multitask learning algorithms are typically designed assuming some fixed, a
priori known latent structure shared by all the tasks. However, it is usually
unclear what type of latent task structure is the most appropriate for a given
multitask learning problem. Ideally, the "right" latent task structure should
be learned in a data-driven manner. We present a flexible, nonparametric
Bayesian model that posits a mixture of factor analyzers structure on the
tasks. The nonparametric aspect makes the model expressive enough to subsume
many existing models of latent task structures (e.g., mean-regularized tasks,
clustered tasks, low-rank or linear/non-linear subspace assumption on tasks,
etc.). Moreover, it can also learn more general task structures, addressing the
shortcomings of such models. We present a variational inference algorithm for
our model. Experimental results on synthetic and real-world datasets, on both
regression and classification problems, demonstrate the effectiveness of the
proposed method.
| Alexandre Passos (UMass Amherst), Piyush Rai (University of Utah),
Jacques Wainer (University of Campinas), Hal Daume III (University of
Maryland) | null | 1206.6486 | null | null |
An Adaptive Algorithm for Finite Stochastic Partial Monitoring | cs.LG cs.GT stat.ML | We present a new anytime algorithm that achieves near-optimal regret for any
instance of finite stochastic partial monitoring. In particular, the new
algorithm achieves the minimax regret, within logarithmic factors, for both
"easy" and "hard" problems. For easy problems, it additionally achieves
logarithmic individual regret. Most importantly, the algorithm is adaptive in
the sense that if the opponent strategy is in an "easy region" of the strategy
space then the regret grows as if the problem was easy. As an implication, we
show that under some reasonable additional assumptions, the algorithm enjoys an
O(\sqrt{T}) regret in Dynamic Pricing, proven to be hard by Bartok et al.
(2011).
| Gabor Bartok (University of Alberta), Navid Zolghadr (University of
Alberta), Csaba Szepesvari (University of Alberta) | null | 1206.6487 | null | null |
The Nonparanormal SKEPTIC | stat.ME cs.LG stat.ML | We propose a semiparametric approach, named nonparanormal skeptic, for
estimating high dimensional undirected graphical models. In terms of modeling,
we consider the nonparanormal family proposed by Liu et al (2009). In terms of
estimation, we exploit nonparametric rank-based correlation coefficient
estimators, including Spearman's rho and Kendall's tau. In high dimensional
settings, we prove that the nonparanormal skeptic achieves the optimal
parametric rate of convergence in both graph and parameter estimation. This
result suggests that the nonparanormal graphical models are a safe replacement
for the Gaussian graphical models, even when the data are Gaussian.
| Han Liu (Johns Hopkins University), Fang Han (Johns Hopkins
University), Ming Yuan (Georgia Institute of Technology), John Lafferty
(University of Chicago), Larry Wasserman (Carnegie Mellon University) | null | 1206.6488 | null | null |
A concentration theorem for projections | cs.LG stat.ML | Suppose X in R^D has mean zero and finite second moments. We show that there is a
precise sense in which almost all linear projections of X into R^d (for d < D)
look like a scale-mixture of spherical Gaussians -- specifically, a mixture of
distributions N(0, sigma^2 I_d), where the weight of a particular sigma
component is P(|X|^2 = sigma^2 D). The extent of this effect depends upon
the ratio of d to D, and upon a particular coefficient of eccentricity of X's
distribution. We explore this result in a variety of experiments.
| Sanjoy Dasgupta, Daniel Hsu, Nakul Verma | null | 1206.6813 | null | null |
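The effect described above is easy to reproduce numerically. A minimal sketch: draw a decidedly non-Gaussian X (uniform on a centered hypercube in R^D), project onto a random direction, and check that the projection's moments are nearly Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)
D, n = 500, 200_000

# Mean-zero, non-Gaussian source: uniform on [-1, 1]^D.
X = rng.uniform(-1.0, 1.0, size=(n, D))

# Project onto a random unit-norm direction in R^D (d = 1).
u = rng.standard_normal(D)
u /= np.linalg.norm(u)
y = X @ u

# A Gaussian has skewness 0 and kurtosis 3.
skew = (y**3).mean() / (y**2).mean() ** 1.5
kurt = (y**4).mean() / (y**2).mean() ** 2
print(f"skewness ~ 0: {skew:.3f}, kurtosis ~ 3: {kurt:.3f}")
```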
An Empirical Comparison of Algorithms for Aggregating Expert Predictions | cs.AI cs.LG | Predicting the outcomes of future events is a challenging problem for which a
variety of solution methods have been explored and attempted. We present an
empirical comparison of a variety of online and offline adaptive algorithms for
aggregating experts' predictions of the outcomes of five years of US National
Football League games (1319 games) using expert probability elicitations
obtained from an Internet contest called ProbabilitySports. We find that it is
difficult to improve over simple averaging of the predictions in terms of
prediction accuracy, but that there is room for improvement in quadratic loss.
Somewhat surprisingly, a Bayesian estimation algorithm which estimates the
variance of each expert's prediction exhibits the most consistent superior
performance over simple averaging among our collection of algorithms.
| Varsha Dani, Omid Madani, David M Pennock, Sumit Sanghai, Brian
Galebach | null | 1206.6814 | null | null |
Discriminative Learning via Semidefinite Probabilistic Models | cs.LG stat.ML | Discriminative linear models are a popular tool in machine learning. These
can be generally divided into two types: The first is linear classifiers, such
as support vector machines, which are well studied and provide state-of-the-art
results. One shortcoming of these models is that their output (known as the
'margin') is not calibrated, and cannot be translated naturally into a
distribution over the labels. Thus, it is difficult to incorporate such models
as components of larger systems, unlike probabilistic based approaches. The
second type of approach constructs class conditional distributions using a
nonlinearity (e.g. log-linear models), but is occasionally worse in terms of
classification error. We propose a supervised learning method which combines
the best of both approaches. Specifically, our method provides a distribution
over the labels, which is a linear function of the model parameters. As a
consequence, differences between probabilities are linear functions, a property
which most probabilistic models (e.g. log-linear) do not have.
Our model assumes that classes correspond to linear subspaces (rather than to
half spaces). Using a relaxed projection operator, we construct a measure which
evaluates the degree to which a given vector 'belongs' to a subspace, resulting
in a distribution over labels. Interestingly, this view is closely related to
similar concepts in quantum detection theory. The resulting models can be
trained either to maximize the margin or to optimize average likelihood
measures. The corresponding optimization problems are semidefinite programs
which can be solved efficiently. We illustrate the performance of our algorithm
on real world datasets, and show that it outperforms 2nd order kernel methods.
| Koby Crammer, Amir Globerson | null | 1206.6815 | null | null |
Gene Expression Time Course Clustering with Countably Infinite Hidden
Markov Models | cs.LG cs.CE stat.ML | Most existing approaches to clustering gene expression time course data treat
the different time points as independent dimensions and are invariant to
permutations, such as reversal, of the experimental time course. Approaches
utilizing HMMs have been shown to be helpful in this regard, but are hampered
by having to choose model architectures with appropriate complexities. Here we
propose for a clustering application an HMM with a countably infinite state
space; inference in this model is possible by recasting it in the hierarchical
Dirichlet process (HDP) framework (Teh et al. 2006), and hence we call it the
HDP-HMM. We show that the infinite model outperforms model selection methods
over finite models, and traditional time-independent methods, as measured by a
variety of external and internal indices for clustering on two large publicly
available data sets. Moreover, we show that the infinite models utilize more
hidden states and employ richer architectures (e.g. state-to-state transitions)
without the damaging effects of overfitting.
| Matthew Beal, Praveen Krishnamurthy | null | 1206.6824 | null | null |
Advances in exact Bayesian structure discovery in Bayesian networks | cs.LG cs.AI stat.ML | We consider a Bayesian method for learning the Bayesian network structure
from complete data. Recently, Koivisto and Sood (2004) presented an algorithm
that for any single edge computes its marginal posterior probability in O(n
2^n) time, where n is the number of attributes; the number of parents per
attribute is bounded by a constant. In this paper we show that the posterior
probabilities for all the n (n - 1) potential edges can be computed in O(n 2^n)
total time. This result is achieved by a forward-backward technique and fast
Moebius transform algorithms, which are of independent interest. The resulting
speedup by a factor of about n^2 allows us to experimentally study the
statistical power of learning moderate-size networks. We report results from a
simulation study that covers data sets with 20 to 10,000 records over 5 to 25
discrete attributes.
| Mikko Koivisto | null | 1206.6828 | null | null |
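The fast Moebius transform the abstract relies on is short enough to sketch. Over the lattice of subsets of an n-element set, the zeta (subset-sum) transform computes all sums g(S) = Σ_{T ⊆ S} f(T) in O(n 2^n) time instead of O(3^n), and subtracting instead of adding inverts it:

```python
def zeta_transform(f, n):
    # g[S] = sum of f[T] over all subsets T of S, computed in O(n * 2^n).
    # Subsets are encoded as bitmasks; f has length 2**n.
    g = list(f)
    for i in range(n):
        for S in range(1 << n):
            if S & (1 << i):
                g[S] += g[S ^ (1 << i)]
    return g

def moebius_transform(g, n):
    # Inverse of the zeta transform: recovers f from its subset sums.
    f = list(g)
    for i in range(n):
        for S in range(1 << n):
            if S & (1 << i):
                f[S] -= f[S ^ (1 << i)]
    return f
```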
The AI&M Procedure for Learning from Incomplete Data | stat.ME cs.AI cs.LG | We investigate methods for parameter learning from incomplete data that is
not missing at random. Likelihood-based methods then require the optimization
of a profile likelihood that takes all possible missingness mechanisms into
account. Optimizing this profile likelihood poses two main difficulties:
multiple (local) maxima, and its very high-dimensional parameter space. In this
paper a new method is presented for optimizing the profile likelihood that
addresses the second difficulty: in the proposed AI&M (adjusting imputation and
maximization) procedure the optimization is performed by operations in the
space of data completions, rather than directly in the parameter space of the
profile likelihood. We apply the AI&M method to learning parameters for
Bayesian networks. The method is compared against conservative inference, which
takes into account each possible data completion, and against EM. The results
indicate that likelihood-based inference is still feasible in the case of
unknown missingness mechanisms, and that conservative inference is
unnecessarily weak. On the other hand, our results also provide evidence that
the EM algorithm is still quite effective when the data is not missing at
random.
| Manfred Jaeger | null | 1206.6830 | null | null |
Convex Structure Learning for Bayesian Networks: Polynomial Feature
Selection and Approximate Ordering | cs.LG stat.ML | We present a new approach to learning the structure and parameters of a
Bayesian network based on regularized estimation in an exponential family
representation. Here we show that, given a fixed variable order, the optimal
structure and parameters can be learned efficiently, even without restricting
the size of the parent sets. We then consider the problem of optimizing the
variable order for a given set of features. This is still a computationally
hard problem, but we present a convex relaxation that yields an optimal 'soft'
ordering in polynomial time. One novel aspect of the approach is that we do not
perform a discrete search over DAG structures, nor over variable orders, but
instead solve a continuous relaxation that can then be rounded to obtain a
valid network structure. We conduct an experimental comparison against standard
structure search procedures over standard objectives, which cope with local
minima, and evaluate the advantages of using convex relaxations that reduce the
effects of local minima.
| Yuhong Guo, Dale Schuurmans | null | 1206.6832 | null | null |
Matrix Tile Analysis | cs.LG cs.CE cs.NA stat.ML | Many tasks require finding groups of elements in a matrix of numbers, symbols
or class likelihoods. One approach is to use efficient bi- or tri-linear
factorization techniques including PCA, ICA, sparse matrix factorization and
plaid analysis. These techniques are not appropriate when addition and
multiplication of matrix elements are not sensibly defined. More directly,
methods like bi-clustering can be used to classify matrix elements, but these
methods make the overly-restrictive assumption that the class of each element
is a function of a row class and a column class. We introduce a general
computational problem, `matrix tile analysis' (MTA), which consists of
decomposing a matrix into a set of non-overlapping tiles, each of which is
defined by a subset of usually nonadjacent rows and columns. MTA does not
require an algebra for combining tiles, but must search over discrete
combinations of tile assignments. Exact MTA is a computationally intractable
integer programming problem, but we describe an approximate iterative technique
and a computationally efficient sum-product relaxation of the integer program.
We compare the effectiveness of these methods to PCA and plaid on hundreds of
randomly generated tasks. Using double-gene-knockout data, we show that MTA
finds groups of interacting yeast genes that have biologically-related
functions.
| Inmar Givoni, Vincent Cheung, Brendan J. Frey | null | 1206.6833 | null | null |
Continuous Time Markov Networks | cs.AI cs.LG | A central task in many applications is reasoning about processes that change
in a continuous time. The mathematical framework of Continuous Time Markov
Processes provides the basic foundations for modeling such systems. Recently,
Nodelman et al. introduced continuous time Bayesian networks (CTBNs), which
allow a compact representation of continuous-time processes over a factored
state space. In this paper, we introduce continuous time Markov networks
(CTMNs), an alternative representation language that represents a different
type of continuous-time dynamics. In many real life processes, such as
biological and chemical systems, the dynamics of the process can be naturally
described as an interplay between two forces - the tendency of each entity to
change its state, and the overall fitness or energy function of the entire
system. In our model, the first force is described by a continuous-time
proposal process that suggests possible local changes to the state of the
system at different rates. The second force is represented by a Markov network
that encodes the fitness, or desirability, of different states; a proposed
local change is then accepted with a probability that is a function of the
change in the fitness distribution. We show that the fitness distribution is
also the stationary distribution of the Markov process, so that this
representation provides a characterization of a temporal process whose
stationary distribution has a compact graphical representation. This allows us
to naturally capture a different type of structure in complex dynamical
processes, such as evolving biological sequences. We describe the semantics of
the representation, its basic properties, and how it compares to CTBNs. We also
provide algorithms for learning such models from data, and discuss its
applicability to biological sequence evolution.
| Tal El-Hay, Nir Friedman, Daphne Koller, Raz Kupferman | null | 1206.6838 | null | null |
Chi-square Tests Driven Method for Learning the Structure of Factored
MDPs | cs.LG cs.AI stat.ML | SDYNA is a general framework designed to address large stochastic
reinforcement learning problems. Unlike previous model-based methods for FMDPs,
it incrementally learns the structure and the parameters of an RL problem using
supervised learning techniques. Then, it integrates decision-theoretic planning
algorithms based on FMDPs to compute its policy. SPITI is an instantiation of
SDYNA that exploits ITI, an incremental decision tree algorithm, to learn the
reward function and the Dynamic Bayesian Networks with local structures
representing the transition function of the problem. These representations are
used by an incremental version of the Structured Value Iteration algorithm. In
order to learn the structure, SPITI uses Chi-Square tests to detect the
independence between two probability distributions. Thus, we study the relation
between the threshold used in the Chi-Square test, the size of the model built
and the relative error of the value function of the induced policy with respect
to the optimal value. We show that, on stochastic problems, one can tune the
threshold so as to generate both a compact model and an efficient policy. Then,
we show that SPITI, while keeping its model compact, uses the generalization
property of its learning method to perform better than a stochastic classical
tabular algorithm in large RL problems with an unknown structure. We also
introduce a new measure based on Chi-Square to qualify the accuracy of the
model learned by SPITI. We qualitatively show that the generalization property
in SPITI within the FMDP framework may prevent an exponential growth of the
time required to learn the structure of large stochastic RL problems.
| Thomas Degris, Olivier Sigaud, Pierre-Henri Wuillemin | null | 1206.6842 | null | null |
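The chi-square independence test at the core of SPITI can be sketched with a standard contingency-table test; `distributions_match` and the default threshold are illustrative, and the exact statistic SPITI computes may differ:

```python
import numpy as np
from scipy.stats import chi2_contingency

def distributions_match(counts, threshold=0.05):
    # counts: a contingency table of observed counts, e.g. one row per value
    # of a candidate parent variable and one column per observed outcome.
    # A p-value above the threshold means no dependence was detected, so the
    # two empirical distributions are treated as identical.
    _, p_value, _, _ = chi2_contingency(np.asarray(counts))
    return p_value > threshold
```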
Gibbs Sampling for (Coupled) Infinite Mixture Models in the Stick
Breaking Representation | stat.ME cs.LG stat.ML | Nonparametric Bayesian approaches to clustering, information retrieval,
language modeling and object recognition have recently shown great promise as a
new paradigm for unsupervised data analysis. Most contributions have focused on
the Dirichlet process mixture models or extensions thereof for which efficient
Gibbs samplers exist. In this paper we explore Gibbs samplers for infinite
complexity mixture models in the stick breaking representation. The advantage
of this representation is improved modeling flexibility. For instance, one can
design the prior distribution over cluster sizes or couple multiple infinite
mixture models (e.g. over time) at the level of their parameters (i.e. the
dependent Dirichlet process model). However, Gibbs samplers for infinite
mixture models (as recently introduced in the statistics literature) seem to
mix poorly over cluster labels. Among other issues, this can have the adverse
effect that labels for the same cluster in coupled mixture models are mixed up.
We introduce additional moves in these samplers to improve mixing over cluster
labels and to bring clusters into correspondence. An application to modeling of
storm trajectories is used to illustrate these ideas.
| Ian Porteous, Alexander T. Ihler, Padhraic Smyth, Max Welling | null | 1206.6845 | null | null |
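As a reference point for the representation used above, a truncated stick-breaking draw of Dirichlet process mixture weights takes a few lines; replacing the Beta(1, alpha) draw is exactly the kind of flexibility (e.g. shaping the prior over cluster sizes) the abstract highlights:

```python
import numpy as np

def stick_breaking_weights(alpha, K, rng=None):
    # Truncated stick-breaking construction: v_k ~ Beta(1, alpha) and
    # pi_k = v_k * prod_{j<k} (1 - v_j) for k = 1, ..., K.
    rng = rng or np.random.default_rng()
    v = rng.beta(1.0, alpha, size=K)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return v * remaining  # sums to < 1; the rest is the unbroken stick
```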
Approximate Separability for Weak Interaction in Dynamic Systems | cs.LG cs.AI stat.ML | One approach to monitoring a dynamic system relies on decomposition of the
system into weakly interacting subsystems. An earlier paper introduced a notion
of weak interaction called separability, and showed that it leads to exact
propagation of marginals for prediction. This paper addresses two questions
left open by the earlier paper: can we define a notion of approximate
separability that occurs naturally in practice, and do separability and
approximate separability lead to accurate monitoring? The answer to both
questions is affirmative. The paper also analyzes the structure of approximately
separable decompositions, and provides some explanation as to why these models
perform well.
| Avi Pfeffer | null | 1206.6846 | null | null |
Identifying the Relevant Nodes Without Learning the Model | cs.LG cs.AI stat.ML | We propose a method to identify all the nodes that are relevant to compute
all the conditional probability distributions for a given set of nodes. Our
method is simple, efficient, consistent, and does not require learning a
Bayesian network first. Therefore, our method can be applied to
high-dimensional databases, e.g. gene expression databases.
| Jose M. Pena, Roland Nilsson, Johan Bj\"orkegren, Jesper Tegn\'er | null | 1206.6847 | null | null |
A compact, hierarchical Q-function decomposition | cs.LG cs.AI stat.ML | Previous work in hierarchical reinforcement learning has faced a dilemma:
either ignore the values of different possible exit states from a subroutine,
thereby risking suboptimal behavior, or represent those values explicitly
thereby incurring a possibly large representation cost because exit values
refer to nonlocal aspects of the world (i.e., all subsequent rewards). This
paper shows that, in many cases, one can avoid both of these problems. The
solution is based on recursively decomposing the exit value function in terms
of Q-functions at higher levels of the hierarchy. This leads to an intuitively
appealing runtime architecture in which a parent subroutine passes to its child
a value function on the exit states and the child reasons about how its choices
affect the exit value. We also identify structural conditions on the value
function and transition distributions that allow much more concise
representations of exit state distributions, leading to further state
abstraction. In essence, the only variables whose exit values need be
considered are those that the parent cares about and the child affects. We
demonstrate the utility of our algorithms on a series of increasingly complex
environments.
| Bhaskara Marthi, Stuart Russell, David Andre | null | 1206.6851 | null | null |
Structured Priors for Structure Learning | cs.LG cs.AI stat.ML | Traditional approaches to Bayes net structure learning typically assume
little regularity in graph structure other than sparseness. However, in many
cases, we expect more systematicity: variables in real-world systems often
group into classes that predict the kinds of probabilistic dependencies they
participate in. Here we capture this form of prior knowledge in a hierarchical
Bayesian framework, and exploit it to enable structure learning and type
discovery from small datasets. Specifically, we present a nonparametric
generative model for directed acyclic graphs as a prior for Bayes net structure
learning. Our model assumes that variables come in one or more classes and that
the prior probability of an edge existing between two variables is a function
only of their classes. We derive an MCMC algorithm for simultaneous inference
of the number of classes, the class assignments of variables, and the Bayes net
structure over variables. For several realistic, sparse datasets, we show that
the bias towards systematicity of connections provided by our model yields more
accurate learned networks than a traditional, uniform prior approach, and that
the classes found by our model are appropriate.
| Vikash Mansinghka, Charles Kemp, Thomas Griffiths, Joshua Tenenbaum | null | 1206.6852 | null | null |
Faster Gaussian Summation: Theory and Experiment | cs.LG cs.NA stat.ML | We provide faster algorithms for the problem of Gaussian summation, which
occurs in many machine learning methods. We develop two new extensions - an
O(D^p) Taylor expansion for the Gaussian kernel with rigorous error bounds and a
new error control scheme integrating any arbitrary approximation method -
within the best discrete algorithmic framework using adaptive hierarchical data
structures. We rigorously evaluate these techniques empirically in the context
of optimal bandwidth selection in kernel density estimation, revealing the
strengths and weaknesses of current state-of-the-art approaches for the first
time. Our results demonstrate that the new error control scheme yields improved
performance, whereas the series expansion approach is only effective in low
dimensions (five or less).
| Dongryeol Lee, Alexander G. Gray | null | 1206.6857 | null | null |
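For context, the quantity being accelerated above is the kernel sum below; this exact O(NM) baseline is what the tree-based series expansions approximate with rigorous error bounds (a sketch of the baseline, not of the fast algorithm):

```python
import numpy as np

def gaussian_summation(queries, refs, h):
    # Exact, naive O(N * M) Gaussian kernel summation:
    # G(q) = sum over reference points r of exp(-||q - r||^2 / (2 h^2)).
    d2 = ((queries[:, None, :] - refs[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * h * h)).sum(axis=1)
```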
Sequential Document Representations and Simplicial Curves | cs.IR cs.LG | The popular bag of words assumption represents a document as a histogram of
word occurrences. While computationally efficient, such a representation is
unable to maintain any sequential information. We present a continuous and
differentiable sequential document representation that goes beyond the bag of
words assumption, and yet is efficient and effective. This representation
employs smooth curves in the multinomial simplex to account for sequential
information. We discuss the representation and its geometric properties and
demonstrate its applicability for the task of text classification.
| Guy Lebanon | null | 1206.6858 | null | null |
Predicting Conditional Quantiles via Reduction to Classification | cs.LG stat.ML | We show how to reduce the process of predicting general order statistics (and
the median in particular) to solving classification. The accompanying
theoretical statement shows that the regret of the classifier bounds the regret
of the quantile regression under a quantile loss. We also test this reduction
empirically against existing quantile regression methods on large real-world
datasets and discover that it provides state-of-the-art performance.
| John Langford, Roberto Oliveira, Bianca Zadrozny | null | 1206.6860 | null | null |
On the Number of Samples Needed to Learn the Correct Structure of a
Bayesian Network | cs.LG cs.AI stat.ML | Bayesian Networks (BNs) are useful tools giving a natural and compact
representation of joint probability distributions. In many applications one
needs to learn a Bayesian Network (BN) from data. In this context, it is
important to understand the number of samples needed in order to guarantee a
successful learning. Previous work has studied the sample complexity of BNs, yet it
mainly focused on the requirement that the learned distribution will be close
to the original distribution which generated the data. In this work, we study a
different aspect of the learning, namely the number of samples needed in order
to learn the correct structure of the network. We give both asymptotic results,
valid in the large sample limit, and experimental results, demonstrating the
learning behavior for feasible sample sizes. We show that structure learning is
a more difficult task, compared to approximating the correct distribution, in
the sense that it requires a much larger number of samples, regardless of the
computational power available for the learner.
| Or Zuk, Shiri Margel, Eytan Domany | null | 1206.6862 | null | null |
Bayesian Multicategory Support Vector Machines | cs.LG stat.ML | We show that the multi-class support vector machine (MSVM) proposed by Lee
et al. (2004) can be viewed as a MAP estimation procedure under an
appropriate probabilistic interpretation of the classifier. We also show that
this interpretation can be extended to a hierarchical Bayesian architecture and
to a fully-Bayesian inference procedure for multi-class classification based on
data augmentation. We present empirical results that show that the advantages
of the Bayesian formalism are obtained without a loss in classification
accuracy.
| Zhihua Zhang, Michael I. Jordan | null | 1206.6863 | null | null |
Infinite Hidden Relational Models | cs.AI cs.DB cs.LG | In many cases it makes sense to model a relationship symmetrically, not
implying any particular directionality. Consider the classical example of a
recommendation system where the rating of an item by a user should
symmetrically be dependent on the attributes of both the user and the item. The
attributes of the (known) relationships are also relevant for predicting
attributes of entities and for predicting attributes of new relations. In
recommendation systems, the exploitation of relational attributes is often
referred to as collaborative filtering. Again, in many applications one might
prefer to model the collaborative effect in a symmetrical way. In this paper we
present a relational model, which is completely symmetrical. The key innovation
is that we introduce for each entity (or object) an infinite-dimensional latent
variable as part of a Dirichlet process (DP) model. We discuss inference in the
model, which is based on a DP Gibbs sampler, i.e., the Chinese restaurant
process. We extend the Chinese restaurant process to be applicable to
relational modeling. Our approach is evaluated in three applications. One is a
recommendation system based on the MovieLens data set. The second application
concerns the prediction of the function of yeast genes/proteins on the data set
of KDD Cup 2001 using a multi-relational model. The third application involves
a relational medical domain. The experimental results show that our model gives
significantly improved estimates of attributes describing relationships or
entities in complex relational models.
| Zhao Xu, Volker Tresp, Kai Yu, Hans-Peter Kriegel | null | 1206.6864 | null | null |
A Non-Parametric Bayesian Method for Inferring Hidden Causes | cs.LG cs.AI stat.ML | We present a non-parametric Bayesian approach to structure learning with
hidden causes. Previous Bayesian treatments of this problem define a prior over
the number of hidden causes and use algorithms such as reversible jump Markov
chain Monte Carlo to move between solutions. In contrast, we assume that the
number of hidden causes is unbounded, but only a finite number influence
observable variables. This makes it possible to use a Gibbs sampler to
approximate the distribution over causal structures. We evaluate the
performance of both approaches in discovering hidden causes in simulated data,
and use our non-parametric approach to discover hidden causes in a real medical
dataset.
| Frank Wood, Thomas Griffiths, Zoubin Ghahramani | null | 1206.6865 | null | null |
Bayesian Random Fields: The Bethe-Laplace Approximation | cs.LG stat.ML | While learning the maximum likelihood value of parameters of an undirected
graphical model is hard, modelling the posterior distribution over parameters
given data is harder. Yet, undirected models are ubiquitous in computer vision
and text modelling (e.g. conditional random fields). But where Bayesian
approaches for directed models have been very successful, a proper Bayesian
treatment of undirected models is still in its infancy. We propose a new
method for approximating the posterior of the parameters given data based on
the Laplace approximation. This approximation requires the computation of the
covariance matrix over features which we compute using the linear response
approximation based in turn on loopy belief propagation. We develop the theory
for conditional and 'unconditional' random fields with or without hidden
variables. In the conditional setting we introduce a new variant of bagging
suitable for structured domains. Here we run the loopy max-product algorithm on
a 'super-graph' composed of graphs for individual models sampled from the
posterior and connected by constraints. Experiments on real world data validate
the proposed methods.
| Max Welling, Sridevi Parise | null | 1206.6868 | null | null |
Incremental Model-based Learners With Formal Learning-Time Guarantees | cs.LG cs.AI stat.ML | Model-based learning algorithms have been shown to use experience efficiently
when learning to solve Markov Decision Processes (MDPs) with finite state and
action spaces. However, their high computational cost due to repeatedly solving
an internal model inhibits their use in large-scale problems. We propose a
method based on real-time dynamic programming (RTDP) to speed up two
model-based algorithms, RMAX and MBIE (model-based interval estimation),
resulting in computationally much faster algorithms with little loss compared
to existing bounds. Specifically, our two new learning algorithms, RTDP-RMAX
and RTDP-IE, have considerably smaller computational demands than RMAX and
MBIE. We develop a general theoretical framework that allows us to prove that
both are efficient learners in a PAC (probably approximately correct) sense. We
also present an experimental evaluation of these new algorithms that helps
quantify the tradeoff between computational and experience demands.
| Alexander L. Strehl, Lihong Li, Michael L. Littman | null | 1206.6870 | null | null |
Ranking by Dependence - A Fair Criteria | cs.LG stat.ML | Estimating the dependences between random variables, and ranking them
accordingly, is a prevalent problem in machine learning. Pursuing frequentist
and information-theoretic approaches, we first show that the p-value and the
mutual information can fail even in simplistic situations. We then propose two
conditions for regularizing an estimator of dependence, which leads to a simple
yet effective new measure. We discuss its advantages and compare it to
well-established model-selection criteria. Apart from that, we derive a simple
constraint for regularizing parameter estimates in a graphical model. This
results in an analytical approximation for the optimal value of the equivalent
sample size, which agrees very well with the more involved Bayesian approach in
our experiments.
| Harald Steck | null | 1206.6871 | null | null |
A Self-Supervised Terrain Roughness Estimator for Off-Road Autonomous
Driving | cs.CV cs.LG cs.RO | We present a machine learning approach for estimating the second derivative
of a drivable surface, its roughness. Robot perception generally focuses on the
first derivative, obstacle detection. However, the second derivative is also
important due to its direct relation (with speed) to the shock the vehicle
experiences. Knowing the second derivative allows a vehicle to slow down in
advance of rough terrain. Estimating the second derivative is challenging due
to uncertainty. For example, at range, laser readings may be so sparse that
significant information about the surface is missing. Also, a high degree of
precision is required in projecting laser readings. This precision may be
unavailable due to latency or error in the pose estimation. We model these
sources of error as a multivariate polynomial. Its coefficients are learned
using the shock data as ground truth -- the accelerometers are used to train
the lasers. The resulting classifier operates on individual laser readings from
a road surface described by a 3D point cloud. The classifier identifies
sections of road where the second derivative is likely to be large. Thus, the
vehicle can slow down in advance, reducing the shock it experiences. The
algorithm is an evolution of one we used in the 2005 DARPA Grand Challenge. We
analyze it using data from that route.
| David Stavens, Sebastian Thrun | null | 1206.6872 | null | null |
Variable noise and dimensionality reduction for sparse Gaussian
processes | cs.LG stat.ML | The sparse pseudo-input Gaussian process (SPGP) is a new approximation method
for speeding up GP regression in the case of a large number of data points N.
The approximation is controlled by the gradient optimization of a small set of
M `pseudo-inputs', thereby reducing complexity from N^3 to NM^2. One limitation
of the SPGP is that this optimization space becomes impractically big for high
dimensional data sets. This paper addresses this limitation by performing
automatic dimensionality reduction. A projection of the input space to a low
dimensional space is learned in a supervised manner, alongside the
pseudo-inputs, which now live in this reduced space. The paper also
investigates the suitability of the SPGP for modeling data with input-dependent
noise. A further extension of the model is made to make it even more powerful
in this regard - we learn an uncertainty parameter for each pseudo-input. The
combination of sparsity, reduced dimension, and input-dependent noise makes it
possible to apply GPs to much larger and more complex data sets than was
previously practical. We demonstrate the benefits of these methods on several
synthetic and real world problems.
| Edward Snelson, Zoubin Ghahramani | null | 1206.6873 | null | null |
Learning Neighborhoods for Metric Learning | cs.LG | Metric learning methods have been shown to perform well on different learning
tasks. Many of them rely on target neighborhood relationships that are computed
in the original feature space and remain fixed throughout learning. As a
result, the learned metric reflects the original neighborhood relations. We
propose a novel formulation of the metric learning problem in which, in
addition to the metric, the target neighborhood relations are also learned in a
two-step iterative approach. The new formulation can be seen as a
generalization of many existing metric learning methods. The formulation
includes a target neighbor assignment rule that assigns different numbers of
neighbors to instances according to their quality; `high quality' instances get
more neighbors. We experiment with two of its instantiations that correspond to
the metric learning algorithms LMNN and MCML and compare it to other metric
learning methods on a number of datasets. The experimental results show
state-of-the-art performance and provide evidence that learning the
neighborhood relations does improve predictive performance.
| Jun Wang, Adam Woznica, Alexandros Kalousis | null | 1206.6883 | null | null |
A Hybrid Method for Distance Metric Learning | cs.LG cs.IR stat.ML | We consider the problem of learning a measure of distance among vectors in a
feature space and propose a hybrid method that simultaneously learns from
similarity ratings assigned to pairs of vectors and class labels assigned to
individual vectors. Our method is based on a generative model in which class
labels can provide information that is not encoded in feature vectors but yet
relates to perceived similarity between objects. Experiments with synthetic
data as well as a real medical image retrieval problem demonstrate that
leveraging class labels through use of our method improves retrieval
performance significantly.
| Yi-Hao Kao and Benjamin Van Roy and Daniel Rubin and Jiajing Xu and
Jessica Faruque and Sandy Napel | null | 1206.7112 | null | null |
Implicit Density Estimation by Local Moment Matching to Sample from
Auto-Encoders | cs.LG stat.ML | Recent work suggests that some auto-encoder variants do a good job of
capturing the local manifold structure of the unknown data generating density.
This paper contributes to the mathematical understanding of this phenomenon and
helps define better justified sampling algorithms for deep learning based on
auto-encoder variants. We consider an MCMC where each step samples from a
Gaussian whose mean and covariance matrix depend on the previous state, and
which defines a target density through its asymptotic distribution. First, we
show that good
choices (in the sense of consistency) for these mean and covariance functions
are the local expected value and local covariance under that target density.
Then we show that an auto-encoder with a contractive penalty captures
estimators of these local moments in its reconstruction function and its
Jacobian. A contribution of this work is thus a novel alternative to
maximum-likelihood density estimation, which we call local moment matching. It
also justifies a recently proposed sampling algorithm for the Contractive
Auto-Encoder and extends it to the Denoising Auto-Encoder.
| Yoshua Bengio and Guillaume Alain and Salah Rifai | null | 1207.0057 | null | null |
Density-Difference Estimation | cs.LG stat.ML | We address the problem of estimating the difference between two probability
densities. A naive approach is a two-step procedure of first estimating two
densities separately and then computing their difference. However, such a
two-step procedure does not necessarily work well because the first step is
performed without regard to the second step and thus a small error incurred in
the first stage can cause a big error in the second stage. In this paper, we
propose a single-shot procedure for directly estimating the density difference
without separately estimating two densities. We derive a non-parametric
finite-sample error bound for the proposed single-shot density-difference
estimator and show that it achieves the optimal convergence rate. The
usefulness of the proposed method is also demonstrated experimentally.
| Masashi Sugiyama, Takafumi Kanamori, Taiji Suzuki, Marthinus
Christoffel du Plessis, Song Liu, Ichiro Takeuchi | null | 1207.0099 | null | null |
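The two-step baseline the abstract argues against can be stated in one line; a one-dimensional sketch using Gaussian KDEs (the paper's single-shot estimator instead fits the difference p1 - p2 directly, which this sketch does not do):

```python
import numpy as np
from scipy.stats import gaussian_kde

def density_difference_two_step(x1, x2, grid):
    # Estimate each density separately, then subtract. Errors made in the
    # first step propagate uncontrolled into the difference, which is the
    # failure mode motivating the single-shot estimator.
    return gaussian_kde(x1)(grid) - gaussian_kde(x2)(grid)
```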
Differentiable Pooling for Hierarchical Feature Learning | cs.CV cs.LG | We introduce a parametric form of pooling, based on a Gaussian, which can be
optimized alongside the features in a single global objective function. By
contrast, existing pooling schemes are based on heuristics (e.g. local maximum)
and have no clear link to the cost function of the model. Furthermore, the
variables of the Gaussian explicitly store location information, distinct from
the appearance captured by the features, thus providing a what/where
decomposition of the input signal. Although the differentiable pooling scheme
can be incorporated in a wide range of hierarchical models, we demonstrate it
in the context of a Deconvolutional Network model (Zeiler et al. ICCV 2011). We
also explore a number of secondary issues within this model and present
detailed experiments on MNIST digits.
| Matthew D. Zeiler and Rob Fergus | null | 1207.0151 | null | null |
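A minimal sketch of Gaussian pooling over a single k x k patch, assuming a learnable center mu and width sigma (the function name and single-patch scope are illustrative, not the paper's interface): unlike a hard max, the soft window is differentiable in mu and sigma, and mu stores the "where" while the pooled value stores the "what":

```python
import numpy as np

def gaussian_pool(patch, mu, sigma):
    # Weighted average of a k x k patch under a normalized Gaussian window
    # centered at mu = (row, col); differentiable in both mu and sigma.
    k = patch.shape[0]
    ys, xs = np.mgrid[0:k, 0:k]
    w = np.exp(-((ys - mu[0]) ** 2 + (xs - mu[1]) ** 2) / (2.0 * sigma ** 2))
    w /= w.sum()
    return float((w * patch).sum())
```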
On Multilabel Classification and Ranking with Partial Feedback | cs.LG | We present a novel multilabel/ranking algorithm working in partial
information settings. The algorithm is based on 2nd-order descent methods, and
relies on upper-confidence bounds to trade-off exploration and exploitation. We
analyze this algorithm in a partial adversarial setting, where covariates can
be adversarial, but multilabel probabilities are ruled by (generalized) linear
models. We show O(T^{1/2} log T) regret bounds, which improve in several ways
on the existing results. We test the effectiveness of our upper-confidence
scheme by contrasting against full-information baselines on real-world
multilabel datasets, often obtaining comparable performance.
| Claudio Gentile and Francesco Orabona | null | 1207.0166 | null | null |
Surrogate Regret Bounds for Bipartite Ranking via Strongly Proper Losses | cs.LG stat.ML | The problem of bipartite ranking, where instances are labeled positive or
negative and the goal is to learn a scoring function that minimizes the
probability of mis-ranking a pair of positive and negative instances (or
equivalently, that maximizes the area under the ROC curve), has been widely
studied in recent years. A dominant theoretical and algorithmic framework for
the problem has been to reduce bipartite ranking to pairwise classification; in
particular, it is well known that the bipartite ranking regret can be
formulated as a pairwise classification regret, which in turn can be upper
bounded using usual regret bounds for classification problems. Recently,
Kotlowski et al. (2011) showed regret bounds for bipartite ranking in terms of
the regret associated with balanced versions of the standard (non-pairwise)
logistic and exponential losses. In this paper, we show that such
(non-pairwise) surrogate regret bounds for bipartite ranking can be obtained in
terms of a broad class of proper (composite) losses that we term as strongly
proper. Our proof technique is much simpler than that of Kotlowski et al.
(2011), and relies on properties of proper (composite) losses as elucidated
recently by Reid and Williamson (2010, 2011) and others. Our result yields
explicit surrogate bounds (with no hidden balancing terms) in terms of a
variety of strongly proper losses, including for example logistic, exponential,
squared and squared hinge losses as special cases. We also obtain tighter
surrogate bounds under certain low-noise conditions via a recent result of
Clemencon and Robbiano (2011).
| Shivani Agarwal | null | 1207.0268 | null | null |
Applying Deep Belief Networks to Word Sense Disambiguation | cs.CL cs.LG | In this paper, we applied a novel learning algorithm, namely, Deep Belief
Networks (DBN) to word sense disambiguation (WSD). DBN is a probabilistic
generative model composed of multiple layers of hidden units. DBN uses
Restricted Boltzmann Machine (RBM) to greedily train layer by layer as a
pretraining. Then, a separate fine tuning step is employed to improve the
discriminative power. We compared DBN with various state-of-the-art supervised
learning algorithms in WSD such as Support Vector Machine (SVM), Maximum
Entropy model (MaxEnt), Naive Bayes classifier (NB) and Kernel Principal
Component Analysis (KPCA). We used all words in the given paragraph,
surrounding context words and part-of-speech of surrounding words as our
knowledge sources. We conducted our experiment on the SENSEVAL-2 data set. We
observed that DBN outperformed all other learning algorithms.
| Peratham Wiriyathammabhum, Boonserm Kijsirikul, Hiroya Takamura,
Manabu Okumura | null | 1207.0396 | null | null |
Algorithms for Approximate Minimization of the Difference Between
Submodular Functions, with Applications | cs.DS cs.LG | We extend the work of Narasimhan and Bilmes [30] for minimizing set functions
representable as a difference between submodular functions. Similar to [30],
our new algorithms are guaranteed to monotonically reduce the objective
function at every step. We empirically and theoretically show that the
per-iteration cost of our algorithms is much less than [30], and our algorithms
can be used to efficiently minimize a difference between submodular functions
under various combinatorial constraints, a problem not previously addressed. We
provide computational bounds and a hardness result on the multiplicative
inapproximability of minimizing the difference between submodular functions. We
show, however, that it is possible to give worst-case additive bounds by
providing a polynomial time computable lower-bound on the minima. Finally we
show how a number of machine learning problems can be modeled as minimizing the
difference between submodular functions. We experimentally show the validity of
our algorithms by testing them on the problem of feature selection with
submodular cost features.
| Rishabh Iyer and Jeff Bilmes | null | 1207.0560 | null | null |
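A toy sketch of the shared idea behind these procedures: replace one submodular function by a modular bound that is tight at the current set, then minimize exactly. The brute-force inner step below stands in for a real submodular minimizer and only works on tiny ground sets.

```python
from itertools import combinations

def modular_lower_bound(g, order):
    """Permutation-based modular lower bound of submodular g, tight on the
    prefixes of `order` (hence at the current iterate if it comes first)."""
    h, prev, prev_val = {}, frozenset(), g(frozenset())
    for v in order:
        cur = prev | {v}
        h[v] = g(cur) - prev_val
        prev, prev_val = cur, g(cur)
    return lambda S: g(frozenset()) + sum(h[v] for v in S)

def diff_submodular_min(f, g, V, iters=20):
    """Heuristic for min_S f(S) - g(S): each step provably does not increase
    the objective, since h <= g everywhere and h(S) = g(S) at the iterate."""
    subsets = [frozenset(T) for r in range(len(V) + 1)
               for T in combinations(V, r)]
    S = frozenset()
    for _ in range(iters):
        order = list(S) + [v for v in V if v not in S]
        h = modular_lower_bound(g, order)
        S_new = min(subsets, key=lambda T: f(T) - h(T))  # brute-force inner step
        if f(S_new) - g(S_new) >= f(S) - g(S):
            break
        S = S_new
    return S
```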
Robust Dequantized Compressive Sensing | stat.ML cs.LG | We consider the reconstruction problem in compressed sensing in which the
observations are recorded in a finite number of bits. They may thus contain
quantization errors (from being rounded to the nearest representable value) and
saturation errors (from being outside the range of representable values). Our
formulation has an objective of weighted $\ell_2$-$\ell_1$ type, along with
constraints that account explicitly for quantization and saturation errors, and
is solved with an augmented Lagrangian method. We prove a consistency result
for the recovered solution, stronger than those that have appeared to date in
the literature, showing in particular that asymptotic consistency can be
obtained without oversampling. We present extensive computational comparisons
with formulations proposed previously, and variants thereof.
| Ji Liu and Stephen J. Wright | null | 1207.0577 | null | null |
Improving neural networks by preventing co-adaptation of feature
detectors | cs.NE cs.CV cs.LG | When a large feedforward neural network is trained on a small training set,
it typically performs poorly on held-out test data. This "overfitting" is
greatly reduced by randomly omitting half of the feature detectors on each
training case. This prevents complex co-adaptations in which a feature detector
is only helpful in the context of several other specific feature detectors.
Instead, each neuron learns to detect a feature that is generally helpful for
producing the correct answer given the combinatorially large variety of
internal contexts in which it must operate. Random "dropout" gives big
improvements on many benchmark tasks and sets new records for speech and object
recognition.
| Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya
Sutskever, Ruslan R. Salakhutdinov | null | 1207.0580 | null | null |
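A minimal numpy sketch of the training-time behavior described above, assuming a single ReLU hidden layer (the layer type is an illustrative choice); at test time the mean-network trick scales activations rather than sampling masks.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(X, W1, W2, train=True, p_drop=0.5):
    """Feedforward pass with random omission of hidden feature detectors."""
    H = np.maximum(X @ W1, 0.0)                  # hidden feature detectors
    if train:
        H = H * (rng.random(H.shape) >= p_drop)  # drop ~half on each case
    else:
        H = H * (1.0 - p_drop)                   # match expected activation
    return H @ W2
```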
Local Water Diffusion Phenomenon Clustering From High Angular Resolution
Diffusion Imaging (HARDI) | cs.LG cs.CV | The understanding of neurodegenerative diseases undoubtedly passes through
the study of human brain white matter fiber tracts. To date, diffusion magnetic
resonance imaging (dMRI) is the only technique to obtain information about
the neural architecture of the human brain, thus permitting the study of white
matter connections and their integrity. However, a remaining challenge of the
dMRI community is to better characterize complex fiber crossing configurations,
where diffusion tensor imaging (DTI) is limited but high angular resolution
diffusion imaging (HARDI) now brings solutions. This paper investigates the
development of both the identification and classification processes of the local
water diffusion phenomenon based on HARDI data to automatically detect imaging
voxels where there are single and crossing fiber bundle populations. The
technique is based on knowledge extraction processes and is validated on a dMRI
phantom dataset with ground truth.
| Romain Giot (GREYC), Christophe Charrier (GREYC), Maxime Descoteaux
(SCIL) | null | 1207.0677 | null | null |
The OS* Algorithm: a Joint Approach to Exact Optimization and Sampling | cs.AI cs.CL cs.LG | Most current sampling algorithms for high-dimensional distributions are based
on MCMC techniques and are approximate in the sense that they are valid only
asymptotically. Rejection sampling, on the other hand, produces valid samples,
but is unrealistically slow in high-dimensional spaces. The OS* algorithm that we
propose is a unified approach to exact optimization and sampling, based on
incremental refinements of a functional upper bound, which combines ideas of
adaptive rejection sampling and of A* optimization search. We show that the
choice of the refinement can be done in a way that ensures tractability in
high-dimensional spaces, and we present first experiments in two different
settings: inference in high-order HMMs and in large discrete graphical models.
| Marc Dymetman and Guillaume Bouchard and Simon Carter | null | 1207.0742 | null | null |
Hybrid Template Update System for Unimodal Biometric Systems | cs.LG | Semi-supervised template update systems make it possible to automatically take into
account the intra-class variability of the biometric data over time. Such
systems can be inefficient by including too many impostors' samples or skipping
too many genuine samples. In the first case, the biometric reference drifts
from the real biometric data and attracts more often impostors. In the second
case, the biometric reference does not evolve quickly enough and also
progressively drifts from the real biometric data. We propose a hybrid system
using several biometric sub-references in order to increase the performance of
self-update systems by reducing the previously cited errors. The proposition is
validated for a keystroke-dynamics authentication system (this modality
suffers from high variability over time) on two consecutive datasets from the
state of the art.
| Romain Giot (GREYC), Christophe Rosenberger (GREYC), Bernadette
Dorizzi (EPH, SAMOVAR) | null | 1207.0783 | null | null |
Web-Based Benchmark for Keystroke Dynamics Biometric Systems: A
Statistical Analysis | cs.LG | Most keystroke dynamics studies have been evaluated using a specific kind of
dataset in which users type an imposed login and password. Moreover, these
studies are optimistic since most of them use different acquisition protocols,
private datasets, controlled environments, etc. In order to enhance the accuracy
of keystroke dynamics' performance, the main contribution of this paper is
twofold. First, we provide a new kind of dataset in which users have typed both
an imposed and a chosen pair of logins and passwords. In addition, the
keystroke dynamics samples are collected in a web-based uncontrolled
environment (OS, keyboards, browsers, etc.). Such a dataset is important
since it provides more realistic results on keystroke dynamics' performance
than the literature (controlled environments, etc.). Second, we
present a statistical analysis of well known assertions such as the
relationship between performance and password size, impact of fusion schemes on
system overall performance, and others such as the relationship between
performance and entropy. In this paper we highlight some new results
on keystroke dynamics under realistic conditions.
| Romain Giot (GREYC), Mohamad El-Abed (GREYC), Christophe Rosenberger
(GREYC) | null | 1207.0784 | null | null |
PAC-Bayesian Majority Vote for Late Classifier Fusion | stat.ML cs.CV cs.LG cs.MM | A lot of attention has been devoted to multimedia indexing over the past few
years. In the literature, we often consider two kinds of fusion schemes: The
early fusion and the late fusion. In this paper we focus on late classifier
fusion, where one combines the scores of each modality at the decision level.
To tackle this problem, we investigate a recent, elegant and well-founded
quadratic program named MinCq coming from the Machine Learning PAC-Bayes
theory. MinCq looks for the weighted combination, over a set of real-valued
functions seen as voters, leading to the lowest misclassification rate, while
making use of the voters' diversity. We provide evidence that this method is
naturally adapted to the late fusion procedure. We propose an extension of MinCq by
adding an order-preserving pairwise loss for ranking, which helps to improve the
Mean Average Precision measure. We confirm the good behavior of the MinCq-based
fusion approaches with experiments on a real image benchmark.
| Emilie Morvant (LIF), Amaury Habrard (LAHC), St\'ephane Ayache (LIF) | null | 1207.1019 | null | null |
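Schematically, and under our reading of MinCq (the paper restricts the feasible set of weightings $q$ further), the program trades individual margins against voter correlations:

```latex
\min_{q}\; \sum_{j,k} q_j \, q_k \, \mathbb{E}_{x}\!\big[f_j(x)\, f_k(x)\big]
\quad \text{s.t.} \quad \sum_{j} q_j \, \mathbb{E}_{(x,y)}\!\big[y\, f_j(x)\big] = \mu
```

That is, among combinations whose first margin moment is pinned at $\mu$, pick the one with the smallest second moment, which favors diverse, weakly correlated voters.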
Inferring land use from mobile phone activity | stat.ML cs.LG physics.data-an physics.soc-ph | Understanding the spatiotemporal distribution of people within a city is
crucial to many planning applications. Obtaining the data needed to create such
knowledge currently involves costly survey methods. At the same time,
ubiquitous mobile sensors from personal GPS devices to mobile phones are
collecting massive amounts of data on urban systems. The locations,
communications, and activities of millions of people are recorded and stored by
new information technologies. This work utilizes novel dynamic data, generated
by mobile phone users, to measure spatiotemporal changes in population. In the
process, we identify the relationship between land use and dynamic population
over the course of a typical week. A machine learning classification algorithm
is used to identify clusters of locations with similar zoned uses and mobile
phone activity patterns. It is shown that the mobile phone data is capable of
delivering useful information on actual land use that supplements zoning
regulations.
| Jameson L. Toole, Michael Ulm, Dietmar Bauer, Marta C. Gonzalez | null | 1207.1115 | null | null |
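A hedged sketch of the classification step: cluster locations by their normalized weekly activity profiles. The file name, hour-of-week resolution, and k=5 are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

activity = np.load("tower_activity.npy")   # assumed: (n_locations, 168) counts
profiles = activity / activity.sum(axis=1, keepdims=True)  # remove volume effects
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(profiles)
# Locations sharing a label have similar weekly rhythms; these clusters are
# then compared against zoned land uses.
```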
Unsupervised spectral learning | cs.LG stat.ML | In spectral clustering and spectral image segmentation, the data is partitioned
starting from a given matrix of pairwise similarities S. The matrix S is
constructed by hand, or learned on a separate training set. In this paper we
show how to achieve spectral clustering in unsupervised mode. Our algorithm
starts with a set of observed pairwise features, which are possible components
of an unknown, parametric similarity function. This function is learned
iteratively, at the same time as the clustering of the data. The algorithm
shows promising results on synthetic and real data.
| Susan Shortreed, Marina Meila | null | 1207.1358 | null | null |
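For reference, the fixed-S clustering step that the algorithm alternates with similarity learning looks like standard normalized spectral clustering. A sketch; the paper's contribution, re-estimating the parametric similarity, is not shown.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clusters(S, k):
    """Embed with the k smallest eigenvectors of the normalized Laplacian
    of similarity matrix S, then run k-means on the embedding."""
    d = S.sum(axis=1)
    L = np.eye(len(S)) - S / np.sqrt(np.outer(d, d))
    _, vecs = np.linalg.eigh(L)                    # eigenvalues ascending
    U = vecs[:, :k]
    U = U / np.linalg.norm(U, axis=1, keepdims=True)
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(U)
```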
Learning from Sparse Data by Exploiting Monotonicity Constraints | cs.LG stat.ML | When training data is sparse, more domain knowledge must be incorporated into
the learning algorithm in order to reduce the effective size of the hypothesis
space. This paper builds on previous work in which knowledge about qualitative
monotonicities was formally represented and incorporated into learning
algorithms (e.g., Clark & Matwin's work with the CN2 rule learning algorithm).
We show how to interpret knowledge of qualitative influences, and in particular
of monotonicities, as constraints on probability distributions, and to
incorporate this knowledge into Bayesian network learning algorithms. We show
that this yields improved accuracy, particularly with very small training sets
(e.g. less than 10 examples).
| Eric E. Altendorf, Angelo C. Restificar, Thomas G. Dietterich | null | 1207.1364 | null | null |
Learning Factor Graphs in Polynomial Time & Sample Complexity | cs.LG stat.ML | We study computational and sample complexity of parameter and structure
learning in graphical models. Our main result shows that the class of factor
graphs with bounded factor size and bounded connectivity can be learned in
polynomial time and with a polynomial number of samples, assuming that the data is
generated by a network in this class. This result covers both parameter
estimation for a known network structure and structure learning. It implies as
a corollary that we can learn factor graphs for both Bayesian networks and
Markov networks of bounded degree, in polynomial time and sample complexity.
Unlike maximum likelihood estimation, our method does not require inference in
the underlying network, and so applies to networks where inference is
intractable. We also show that the error of our learned model degrades
gracefully when the generating distribution is not a member of the target class
of networks.
| Pieter Abbeel, Daphne Koller, Andrew Y. Ng | null | 1207.1366 | null | null |
On the Detection of Concept Changes in Time-Varying Data Stream by
Testing Exchangeability | cs.LG stat.ML | A martingale framework for concept change detection based on testing data
exchangeability was recently proposed (Ho, 2005). In this paper, we describe
the proposed change-detection test based on the Doob's Maximal Inequality and
show that it is an approximation of the sequential probability ratio test
(SPRT). The relationship between the threshold value used in the proposed test
and its size and power is deduced from the approximation. The mean delay time
before a change is detected is estimated using the average sample number of a
SPRT. The performance of the test using various threshold values is examined on
five different data stream scenarios simulated using two synthetic data sets.
Finally, experimental results show that the test is effective in detecting
changes in time-varying data streams simulated using three benchmark data sets.
| Shen-Shyang Ho, Harry Wechsler | null | 1207.1379 | null | null |
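A sketch of the randomized power martingale at the heart of the test; the p-values are assumed to come from a conformal strangeness measure against past data, and eps = 0.92 and the threshold lam are conventional illustrative settings.

```python
import numpy as np

def martingale_alarm(pvalues, eps=0.92, lam=20.0):
    """Under exchangeability the p-values are (super)uniform and M_n stays
    small; by Doob's maximal inequality, P(sup M_n >= lam) <= 1/lam."""
    logM = 0.0
    for n, p in enumerate(pvalues, start=1):
        logM += np.log(eps) + (eps - 1.0) * np.log(max(p, 1e-12))
        if logM > np.log(lam):
            return n                # change declared at observation n
    return None                     # no change detected
```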
Bayes Blocks: An Implementation of the Variational Bayesian Building
Blocks Framework | cs.MS cs.LG stat.ML | A software library for constructing and learning probabilistic models is
presented. The library offers a set of building blocks from which a large
variety of static and dynamic models can be built. These include hierarchical
models for variances of other variables and many nonlinear models. The
underlying variational Bayesian machinery, providing for fast and robust
estimation but being mathematically rather involved, is almost completely
hidden from the user thus making it very easy to use the library. The building
blocks include Gaussian, rectified Gaussian and mixture-of-Gaussians variables
and computational nodes which can be combined rather freely.
| Markus Harva, Tapani Raiko, Antti Honkela, Harri Valpola, Juha
Karhunen | null | 1207.1380 | null | null |
Maximum Margin Bayesian Networks | cs.LG stat.ML | We consider the problem of learning Bayesian network classifiers that
maximize the margin over a set of classification variables. We find that this
problem is harder for Bayesian networks than for undirected graphical models
like maximum margin Markov networks. The main difficulty is that the parameters
in a Bayesian network must satisfy additional normalization constraints that an
undirected graphical model need not respect. These additional constraints
complicate the optimization task. Nevertheless, we derive an effective training
algorithm that solves the maximum margin training problem for a range of
Bayesian network topologies, and converges to an approximate solution for
arbitrary network topologies. Experimental results show that the method can
demonstrate improved generalization performance over Markov networks when the
directed graphical structure encodes relevant knowledge. In practice, the
training technique allows one to combine prior knowledge expressed as a
directed (causal) model with state of the art discriminative learning methods.
| Yuhong Guo, Dana Wilkinson, Dale Schuurmans | null | 1207.1382 | null | null |
Learning Bayesian Network Parameters with Prior Knowledge about
Context-Specific Qualitative Influences | cs.AI cs.LG stat.ML | We present a method for learning the parameters of a Bayesian network with
prior knowledge about the signs of influences between variables. Our method
accommodates not just the standard signs, but provides for context-specific
signs as well. We show how the various signs translate into order constraints
on the network parameters and how isotonic regression can be used to compute
order-constrained estimates from the available data. Our experimental results
show that taking prior knowledge about the signs of influences into account
leads to an improved fit of the true distribution, especially when only a small
sample of data is available. Moreover, the computed estimates are guaranteed to
be consistent with the specified signs, thereby resulting in a network that is
more likely to be accepted by experts in its domain of application.
| Ad Feelders, Linda C. van der Gaag | null | 1207.1387 | null | null |
Learning about individuals from group statistics | cs.LG stat.ML | We propose a new problem formulation which is similar to, but more
informative than, the binary multiple-instance learning problem. In this
setting, we are given groups of instances (described by feature vectors) along
with estimates of the fraction of positively-labeled instances per group. The
task is to learn an instance level classifier from this information. That is,
we are trying to estimate the unknown binary labels of individuals from
knowledge of group statistics. We propose a principled probabilistic model to
solve this problem that accounts for uncertainty in the parameters and in the
unknown individual labels. This model is trained with an efficient MCMC
algorithm. Its performance is demonstrated on both synthetic and real-world
data arising in general object recognition.
| Hendrik Kuck, Nando de Freitas | null | 1207.1393 | null | null |
Toward Practical N^2 Monte Carlo: the Marginal Particle Filter | stat.CO cs.LG stat.ML | Sequential Monte Carlo techniques are useful for state estimation in
non-linear, non-Gaussian dynamic models. These methods allow us to approximate
the joint posterior distribution using sequential importance sampling. In this
framework, the dimension of the target distribution grows with each time step,
thus it is necessary to introduce some resampling steps to ensure that the
estimates provided by the algorithm have a reasonable variance. In many
applications, we are only interested in the marginal filtering distribution
which is defined on a space of fixed dimension. We present a Sequential Monte
Carlo algorithm called the Marginal Particle Filter which operates directly on
the marginal distribution, hence avoiding having to perform importance sampling
on a space of growing dimension. Using this idea, we also derive an improved
version of the auxiliary particle filter. We show theoretic and empirical
results which demonstrate a reduction in variance over conventional particle
filtering, and present techniques for reducing the cost of the marginal
particle filter with N particles from O(N^2) to O(N log N).
| Mike Klaas, Nando de Freitas, Arnaud Doucet | null | 1207.1396 | null | null |
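A one-dimensional sketch of a marginal-particle-filter step, assuming a Gaussian random-walk model. With the prior mixture as proposal the predictive term cancels, so the new weights depend only on the likelihood and not on any particular ancestor; the general version evaluates the N-term mixture explicitly, which is the O(N^2) cost the paper then reduces.

```python
import numpy as np

rng = np.random.default_rng(0)

def mpf_step(x, w, y, sig_f=1.0, sig_g=0.5):
    """One step for x_t ~ N(x_{t-1}, sig_f^2), y_t ~ N(x_t, sig_g^2),
    using the predictive mixture sum_j w_j N(x_j, sig_f^2) as proposal."""
    idx = rng.choice(len(x), size=len(x), p=w)         # pick mixture components
    x_new = x[idx] + sig_f * rng.standard_normal(len(x))
    w_new = np.exp(-0.5 * ((y - x_new) / sig_g) ** 2)  # likelihood-only weights
    return x_new, w_new / w_new.sum()
```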
Obtaining Calibrated Probabilities from Boosting | cs.LG stat.ML | Boosted decision trees typically yield good accuracy, precision, and ROC
area. However, because the outputs from boosting are not well calibrated
posterior probabilities, boosting yields poor squared error and cross-entropy.
We empirically demonstrate why AdaBoost predicts distorted probabilities and
examine three calibration methods for correcting this distortion: Platt
Scaling, Isotonic Regression, and Logistic Correction. We also experiment with
boosting using log-loss instead of the usual exponential loss. Experiments show
that Logistic Correction and boosting with log-loss work well when boosting
weak models such as decision stumps, but yield poor performance when boosting
more complex models such as full decision trees. Platt Scaling and Isotonic
Regression, however, significantly improve the probabilities predicted by both
boosted stumps and boosted trees.
| Alexandru Niculescu-Mizil, Richard A. Caruana | null | 1207.1403 | null | null |
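The two calibration methods found to work are standard enough to sketch with scikit-learn; this uses the library's cross-validated wrapper rather than the paper's own protocol.

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
# method="sigmoid" gives Platt Scaling; "isotonic" gives Isotonic Regression.
clf = CalibratedClassifierCV(AdaBoostClassifier(n_estimators=100),
                             method="isotonic", cv=3)
clf.fit(Xtr, ytr)
proba = clf.predict_proba(Xte)[:, 1]   # calibrated posterior estimates
```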
A submodular-supermodular procedure with applications to discriminative
structure learning | cs.LG cs.DS stat.ML | In this paper, we present an algorithm for minimizing the difference between
two submodular functions using a variational framework which is based on (an
extension of) the concave-convex procedure [17]. Because several commonly used
metrics in machine learning, like mutual information and conditional mutual
information, are submodular, the problem of minimizing the difference of two
submodular problems arises naturally in many machine learning applications. Two
such applications are learning discriminatively structured graphical models and
feature selection under computational complexity constraints. A commonly used
metric for measuring discriminative capacity is the EAR measure which is the
difference between two conditional mutual information terms. Feature selection
taking complexity considerations into account also falls into this framework
because both the information that a set of features provide and the cost of
computing and using the features can be modeled as submodular functions. This
problem is NP-hard, and we give a polynomial time heuristic for it. We also
present results on synthetic data to show that classifiers based on
discriminative graphical models using this algorithm can significantly
outperform classifiers based on generative graphical models.
| Mukund Narasimhan, Jeff A. Bilmes | null | 1207.1404 | null | null |
A Conditional Random Field for Discriminatively-trained Finite-state
String Edit Distance | cs.LG cs.AI | The need to measure sequence similarity arises in information extraction,
object identity, data mining, biological sequence analysis, and other domains.
This paper presents discriminative string-edit CRFs, a finite-state conditional
random field model for edit sequences between strings. Conditional random
fields have advantages over generative approaches to this problem, such as pair
HMMs or the work of Ristad and Yianilos, because as conditionally-trained
methods, they enable the use of complex, arbitrary actions and features of the
input strings. As in generative models, the training data does not have to
specify the edit sequences between the given string pairs. Unlike generative
models, however, our model is trained on both positive and negative instances
of string pairs. We present positive experimental results on several data sets.
| Andrew McCallum, Kedar Bellare, Fernando Pereira | null | 1207.1406 | null | null |
Piecewise Training for Undirected Models | cs.LG stat.ML | For many large undirected models that arise in real-world applications, exact
maximum-likelihood training is intractable, because it requires computing
marginal distributions of the model. Conditional training is even more
difficult, because the partition function depends not only on the parameters,
but also on the observed input, requiring repeated inference over each training
example. An appealing idea for such models is to independently train a local
undirected classifier over each clique, afterwards combining the learned
weights into a single global model. In this paper, we show that this piecewise
method can be justified as minimizing a new family of upper bounds on the log
partition function. On three natural-language data sets, piecewise training is
more accurate than pseudolikelihood, and often performs comparably to global
training using belief propagation.
| Charles Sutton, Andrew McCallum | null | 1207.1409 | null | null |
Discovery of non-gaussian linear causal models using ICA | cs.LG cs.MS stat.ML | In recent years, several methods have been proposed for the discovery of
causal structure from non-experimental data (Spirtes et al. 2000; Pearl 2000).
Such methods make various assumptions on the data generating process to
facilitate its identification from purely observational data. Continuing this
line of research, we show how to discover the complete causal structure of
continuous-valued data, under the assumptions that (a) the data generating
process is linear, (b) there are no unobserved confounders, and (c) disturbance
variables have non-gaussian distributions of non-zero variances. The solution
relies on the use of the statistical method known as independent component
analysis (ICA), and does not require any pre-specified time-ordering of the
variables. We provide a complete Matlab package for performing this LiNGAM
analysis (short for Linear Non-Gaussian Acyclic Model), and demonstrate the
effectiveness of the method using artificially generated data.
| Shohei Shimizu, Aapo Hyvarinen, Yutaka Kano, Patrik O. Hoyer | null | 1207.1413 | null | null |
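A two-variable sketch of the LiNGAM recipe (a recent scikit-learn version is assumed for the whiten argument): run ICA, permute the unmixing matrix to a dominant diagonal, rescale it, and read off the connection matrix.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
e = rng.laplace(size=(5000, 2))              # non-Gaussian disturbances
x1 = e[:, 0]
x2 = 0.8 * x1 + e[:, 1]                      # true structure: x1 -> x2
X = np.column_stack([x1, x2])

W = FastICA(whiten="unit-variance", random_state=0).fit(X).components_
_, col = linear_sum_assignment(-np.abs(W))   # row permutation: dominant diagonal
W = W[np.argsort(col)]
W = W / np.diag(W)[:, None]                  # fix ICA scale/sign indeterminacy
B = np.eye(2) - W                            # estimated connection matrix
print(np.round(B, 2))                        # B[1, 0] near 0.8, B[0, 1] near 0
```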
Two-Way Latent Grouping Model for User Preference Prediction | cs.IR cs.LG stat.ML | We introduce a novel latent grouping model for predicting the relevance of a
new document to a user. The model assumes a latent group structure for both
users and documents. We compared the model against a state-of-the-art method,
the User Rating Profile model, where only users have a latent group structure.
We estimate both models by Gibbs sampling. The new method predicts relevance
more accurately for new documents that have few known ratings. The reason is
that generalization over documents then becomes necessary and hence the two-way
grouping is profitable.
| Eerika Savia, Kai Puolamaki, Janne Sinkkonen, Samuel Kaski | null | 1207.1414 | null | null |
The DLR Hierarchy of Approximate Inference | cs.LG stat.ML | We propose a hierarchy for approximate inference based on the Dobrushin,
Lanford, Ruelle (DLR) equations. This hierarchy includes existing algorithms,
such as belief propagation, and also motivates novel algorithms such as
factorized neighbors (FN) algorithms and variants of mean field (MF)
algorithms. In particular, we show that extrema of the Bethe free energy
correspond to approximate solutions of the DLR equations. In addition, we
demonstrate a close connection between these approximate algorithms and Gibbs
sampling. Finally, we compare and contrast several of the algorithms in the DLR
hierarchy on spin-glass problems. The experiments show that algorithms higher
up in the hierarchy give more accurate results when they converge but tend to
be less stable.
| Michal Rosen-Zvi, Michael I. Jordan, Alan Yuille | null | 1207.1417 | null | null |
A Function Approximation Approach to Estimation of Policy Gradient for
POMDP with Structured Policies | cs.LG stat.ML | We consider the estimation of the policy gradient in partially observable
Markov decision processes (POMDP) with a special class of structured policies
that are finite-state controllers. We show that the gradient estimation can be
done in the Actor-Critic framework, by making the critic compute a "value"
function that does not depend on the states of POMDP. This function is the
conditional mean of the true value function that depends on the states. We show
that the critic can be implemented using temporal difference (TD) methods with
linear function approximations, and the analytical results on TD and
Actor-Critic can be transferred to this case. Although Actor-Critic algorithms
have been used extensively in Markov decision processes (MDP), up to now they
have not been proposed for POMDP as an alternative to the earlier proposal
GPOMDP algorithm, an actor-only method. Furthermore, we show that the same idea
applies to semi-Markov problems with a subset of finite-state controllers.
| Huizhen Yu | null | 1207.1421 | null | null |
Mining Associated Text and Images with Dual-Wing Harmoniums | cs.LG cs.DB stat.ML | We propose a multi-wing harmonium model for mining multimedia data that
extends and improves on earlier models based on two-layer random fields, which
capture bidirectional dependencies between hidden topic aspects and observed
inputs. This model can be viewed as an undirected counterpart of the two-layer
directed models such as LDA for similar tasks, but bears significant difference
in inference/learning cost tradeoffs, latent topic representations, and topic
mixing mechanisms. In particular, our model facilitates efficient inference and
robust topic mixing, and potentially provides high flexibilities in modeling
the latent topic spaces. A contrastive divergence and a variational algorithm
are derived for learning. We specialized our model to a dual-wing harmonium for
captioned images, incorporating a multivariate Poisson for word-counts and a
multivariate Gaussian for color histogram. We present empirical results on the
applications of this model to classification, retrieval and image annotation on
news video collections, and we report an extensive comparison with various
extant models.
| Eric P. Xing, Rong Yan, Alexander G. Hauptmann | null | 1207.1423 | null | null |
Ordering-Based Search: A Simple and Effective Algorithm for Learning
Bayesian Networks | cs.LG cs.AI stat.ML | One of the basic tasks for Bayesian networks (BNs) is that of learning a
network structure from data. The BN-learning problem is NP-hard, so the
standard solution is heuristic search. Many approaches have been proposed for
this task, but only a very small number outperform the baseline of greedy
hill-climbing with tabu lists; moreover, many of the proposed algorithms are
quite complex and hard to implement. In this paper, we propose a very simple
and easy-to-implement method for addressing this task. Our approach is based on
the well-known fact that the best network (of bounded in-degree) consistent
with a given node ordering can be found very efficiently. We therefore propose
a search not over the space of structures, but over the space of orderings,
selecting for each ordering the best network consistent with it. This search
space is much smaller, makes more global search steps, has a lower branching
factor, and avoids costly acyclicity checks. We present results for this
algorithm on both synthetic and real data sets, evaluating both the score of
the network found and the running time. We show that ordering-based search
outperforms the standard baseline, and is competitive with recent algorithms
that are much harder to implement.
| Marc Teyssier, Daphne Koller | null | 1207.1429 | null | null |
Robust Online Hamiltonian Learning | quant-ph cs.LG | In this work we combine two distinct machine learning methodologies,
sequential Monte Carlo and Bayesian experimental design, and apply them to the
problem of inferring the dynamical parameters of a quantum system. We design
the algorithm with practicality in mind by including parameters that control
trade-offs between the requirements on computational and experimental
resources. The algorithm can be implemented online (during experimental data
collection), avoiding the need for storage and post-processing. Most
importantly, our algorithm is capable of learning Hamiltonian parameters even
when the parameters change from experiment-to-experiment, and also when
additional noise processes are present and unknown. The algorithm also
numerically estimates the Cramer-Rao lower bound, certifying its own
performance.
| Christopher E. Granade, Christopher Ferrie, Nathan Wiebe, D. G. Cory | 10.1088/1367-2630/14/10/103013 | 1207.1655 | null | null |
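A stripped-down sketch of the sequential Monte Carlo core, on the standard toy problem of learning a single precession frequency with likelihood P(0 | omega, t) = cos^2(omega t / 2); the Bayesian experiment design for t, resampling, and Cramer-Rao estimation steps are omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
true_omega = 0.7
omegas = rng.uniform(0, 1, 2000)             # particles over the parameter
w = np.full_like(omegas, 1.0 / len(omegas))

for _ in range(50):
    t = rng.uniform(0, 20)                   # random here; the paper designs t
    datum = rng.random() > np.cos(true_omega * t / 2) ** 2  # simulated 0/1 outcome
    like0 = np.cos(omegas * t / 2) ** 2
    w = w * (1 - like0 if datum else like0)  # Bayes update of particle weights
    w = w / w.sum()

print(np.sum(w * omegas))                    # posterior mean, close to 0.7
```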
Forecasting electricity consumption by aggregating specialized experts | stat.ML cs.LG stat.AP | We consider the setting of sequential prediction of arbitrary sequences based
on specialized experts. We first provide a review of the relevant literature
and present two theoretical contributions: a general analysis of the specialist
aggregation rule of Freund et al. (1997) and an adaptation of fixed-share rules
of Herbster and Warmuth (1998) in this setting. We then apply these rules to
the sequential short-term (one-day-ahead) forecasting of electricity
consumption; to do so, we consider two data sets, a Slovakian one and a French
one, respectively concerned with hourly and half-hourly predictions. We follow
a general methodology to perform the stated empirical studies and detail in
particular tuning issues of the learning parameters. The introduced aggregation
rules demonstrate an improved accuracy on the data sets at hand; the
improvements lie in a reduced mean squared error but also in a more robust
behavior with respect to large occasional errors.
| Marie Devaine (DMA), Pierre Gaillard (DMA, INRIA Paris -
Rocquencourt), Yannig Goude, Gilles Stoltz (DMA, INRIA Paris - Rocquencourt,
GREGH) | null | 1207.1965 | null | null |
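A sketch of the specialist aggregation rule analyzed here, with squared loss. The convention of charging sleeping experts the mixture's own loss (which leaves their relative weight unchanged) follows Freund et al. (1997); the fixed-share variants are omitted.

```python
import numpy as np

def aggregate_specialists(preds, y, eta=0.1):
    """preds[t, i] is expert i's forecast at time t, np.nan when it abstains."""
    T, N = preds.shape
    w = np.ones(N)
    out = np.empty(T)
    for t in range(T):
        active = ~np.isnan(preds[t])
        out[t] = np.sum(w[active] * preds[t, active]) / w[active].sum()
        loss = (preds[t] - y[t]) ** 2
        loss[~active] = (out[t] - y[t]) ** 2  # sleepers charged the mixture loss
        w = w * np.exp(-eta * loss)
        w = w / w.sum()                       # rescaling only aids stability
    return out
```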
Estimating a Causal Order among Groups of Variables in Linear Models | stat.ML cs.LG stat.ME | The machine learning community has recently devoted much attention to the
problem of inferring causal relationships from statistical data. Most of this
work has focused on uncovering connections among scalar random variables. We
generalize existing methods to apply to collections of multi-dimensional random
vectors, focusing on techniques applicable to linear models. The performance of
the resulting algorithms is evaluated and compared in simulations, which show
that our methods can, in many cases, provide useful information on causal
relationships even for relatively small sample sizes.
| Doris Entner, Patrik O. Hoyer | null | 1207.1977 | null | null |
Pseudo-likelihood methods for community detection in large sparse
networks | cs.SI cs.LG math.ST physics.soc-ph stat.ML stat.TH | Many algorithms have been proposed for fitting network models with
communities, but most of them do not scale well to large networks, and often
fail on sparse networks. Here we propose a new fast pseudo-likelihood method
for fitting the stochastic block model for networks, as well as a variant that
allows for an arbitrary degree distribution by conditioning on degrees. We show
that the algorithms perform well under a range of settings, including on very
sparse networks, and illustrate on the example of a network of political blogs.
We also propose spectral clustering with perturbations, a method of independent
interest, which works well on sparse networks where regular spectral clustering
fails, and use it to provide an initial value for pseudo-likelihood. We prove
that pseudo-likelihood provides consistent estimates of the communities under a
mild condition on the starting value, for the case of a block model with two
communities.
| Arash A. Amini, Aiyou Chen, Peter J. Bickel, Elizaveta Levina | 10.1214/13-AOS1138 | 1207.2340 | null | null |
Kernelized Supervised Dictionary Learning | cs.CV cs.LG | In this paper, we propose supervised dictionary learning (SDL) by
incorporating information on class labels into the learning of the dictionary.
To this end, we propose to learn the dictionary in a space where the dependency
between the signals and their corresponding labels is maximized. To maximize
this dependency, the recently introduced Hilbert Schmidt independence criterion
(HSIC) is used. One of the main advantages of this novel approach for SDL is
that it can be easily kernelized by incorporating a kernel, particularly a
data-derived kernel such as normalized compression distance, into the
formulation. The learned dictionary is compact and the proposed approach is
fast. We show that it outperforms other unsupervised and supervised dictionary
learning approaches in the literature, using real-world data.
| Mehrdad J. Gangeh, Ali Ghodsi, Mohamed S. Kamel | 10.1109/TSP.2013.2274276 | 1207.2488 | null | null |
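The dependence measure being maximized has a simple empirical form, sketched below; how it enters the dictionary update is the paper's contribution and is not shown.

```python
import numpy as np

def hsic(K, L):
    """Empirical HSIC between kernel matrices K (signals) and L (labels):
    trace(K H L H) / (n - 1)^2, with H the centering matrix."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```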
A Spectral Learning Approach to Range-Only SLAM | cs.LG cs.RO stat.ML | We present a novel spectral learning algorithm for simultaneous localization
and mapping (SLAM) from range data with known correspondences. This algorithm
is an instance of a general spectral system identification framework, from
which it inherits several desirable properties, including statistical
consistency and no local optima. Compared with popular batch optimization or
multiple-hypothesis tracking (MHT) methods for range-only SLAM, our spectral
approach offers guaranteed low computational requirements and good tracking
performance. Compared with popular extended Kalman filter (EKF) or extended
information filter (EIF) approaches, and many MHT ones, our approach does not
need to linearize a transition or measurement model; such linearizations can
cause severe errors in EKFs and EIFs, and to a lesser extent MHT, particularly
for the highly non-Gaussian posteriors encountered in range-only SLAM. We
provide a theoretical analysis of our method, including finite-sample error
bounds. Finally, we demonstrate on a real-world robotic SLAM problem that our
algorithm is not only theoretically justified, but works well in practice: in a
comparison of multiple methods, the lowest errors come from a combination of
our algorithm with batch optimization, but our method alone produces nearly as
good a result at far lower computational cost.
| Byron Boots and Geoffrey J. Gordon | null | 1207.2491 | null | null |
Near-Optimal Algorithms for Differentially-Private Principal Components | stat.ML cs.CR cs.LG | Principal components analysis (PCA) is a standard tool for identifying good
low-dimensional approximations to data in high dimension. Many data sets of
interest contain private or sensitive information about individuals. Algorithms
which operate on such data should be sensitive to the privacy risks in
publishing their outputs. Differential privacy is a framework for developing
tradeoffs between privacy and the utility of these outputs. In this paper we
investigate the theory and empirical performance of differentially private
approximations to PCA and propose a new method which explicitly optimizes the
utility of the output. We show that the sample complexity of the proposed
method differs from the existing procedure in the scaling with the data
dimension, and that our method is nearly optimal in terms of this scaling. We
furthermore illustrate our results, showing that on real data there is a large
performance gap between the existing method and our method.
| Kamalika Chaudhuri, Anand D. Sarwate, Kaushik Sinha | null | 1207.2812 | null | null |
Expectation Propagation in Gaussian Process Dynamical Systems: Extended
Version | stat.ML cs.LG cs.SY | Rich and complex time-series data, such as those generated from engineering
systems, financial markets, videos or neural recordings, are now a common
feature of modern data analysis. Explaining the phenomena underlying these
diverse data sets requires flexible and accurate models. In this paper, we
promote Gaussian process dynamical systems (GPDS) as a rich model class that is
appropriate for such analysis. In particular, we present a message passing
algorithm for approximate inference in GPDSs based on expectation propagation.
By posing inference as a general message passing problem, we iterate
forward-backward smoothing. Thus, we obtain more accurate posterior
distributions over latent structures, resulting in improved predictive
performance compared to state-of-the-art GPDS smoothers, which are special
cases of our general message passing algorithm. Hence, we provide a unifying
approach within which to contextualize message passing in GPDSs.
| Marc Peter Deisenroth and Shakir Mohamed | null | 1207.2940 | null | null |
Optimal rates for first-order stochastic convex optimization under
Tsybakov noise condition | cs.LG stat.ML | We focus on the problem of minimizing a convex function $f$ over a convex set
$S$ given $T$ queries to a stochastic first order oracle. We argue that the
complexity of convex minimization is only determined by the rate of growth of
the function around its minimizer $x^*_{f,S}$, as quantified by a Tsybakov-like
noise condition. Specifically, we prove that if $f$ grows at least as fast as
$\|x-x^*_{f,S}\|^\kappa$ around its minimum, for some $\kappa > 1$, then the
optimal rate of learning $f(x^*_{f,S})$ is
$\Theta(T^{-\frac{\kappa}{2\kappa-2}})$. The classic rate $\Theta(1/\sqrt T)$
for convex functions and $\Theta(1/T)$ for strongly convex functions are
special cases of our result for $\kappa \rightarrow \infty$ and $\kappa=2$, and
even faster rates are attained for $\kappa <2$. We also derive tight bounds for
the complexity of learning $x_{f,S}^*$, where the optimal rate is
$\Theta(T^{-\frac{1}{2\kappa-2}})$. Interestingly, these precise rates for
convex optimization also characterize the complexity of active learning and our
results further strengthen the connections between the two fields, both of
which rely on feedback-driven queries.
| Aaditya Ramdas and Aarti Singh | null | 1207.3012 | null | null |
Distributed Strongly Convex Optimization | cs.DC cs.LG stat.ML | A lot of effort has been invested into characterizing the convergence rates
of gradient based algorithms for non-linear convex optimization. Recently,
motivated by large datasets and problems in machine learning, the interest has
shifted towards distributed optimization. In this work we present a distributed
algorithm for strongly convex constrained optimization. Each node in a network
of n computers converges to the optimum of a strongly convex, L-Lipschitz
continuous, separable objective at a rate O(log (sqrt(n) T) / T) where T is the
number of iterations. This rate is achieved in the online setting where the
data is revealed one at a time to the nodes, and in the batch setting where
each node has access to its full local dataset from the start. The same
convergence rate is achieved in expectation when the subgradients used at each
node are corrupted with additive zero-mean noise.
| Konstantinos I. Tsianos and Michael G. Rabbat | null | 1207.3031 | null | null |
Supervised Texture Classification Using a Novel Compression-Based
Similarity Measure | cs.CV cs.LG | Supervised pixel-based texture classification is usually performed in the
feature space. We propose to perform this task in (dis)similarity space by
introducing a new compression-based (dis)similarity measure. The proposed
measure utilizes two dimensional MPEG-1 encoder, which takes into consideration
the spatial locality and connectivity of pixels in the images. The proposed
formulation has been carefully designed based on MPEG encoder functionality. To
this end, by design, it solely uses P-frame coding to find the (dis)similarity
among patches/images. We show that the proposed measure works properly on both
small and large patch sizes. Experimental results show that the proposed
approach significantly improves the performance of supervised pixel-based
texture classification on Brodatz and outdoor images compared to other
compression-based dissimilarity measures as well as approaches performed in
feature space. It also improves the computation speed by about 40% compared to
its rivals.
| Mehrdad J. Gangeh, Ali Ghodsi, and Mohamed S. Kamel | null | 1207.3071 | null | null |
Tracking Tetrahymena Pyriformis Cells using Decision Trees | cs.CV cs.LG eess.IV q-bio.CB stat.ML | Matching cells over time has long been the most difficult step in cell
tracking. In this paper, we approach this problem by recasting it as a
classification problem. We construct a feature set for each cell, and compute a
feature difference vector between a cell in the current frame and a cell in a
previous frame. Then we determine whether the two cells represent the same cell
over time by training decision trees as our binary classifiers. With the output
of decision trees, we are able to formulate an assignment problem for our cell
association task and solve it using a modified version of the Hungarian
algorithm.
| Quan Wang, Yan Ou, A. Agung Julius, Kim L. Boyer, Min Jun Kim | null | 1207.3127 | null | null |
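A sketch of the association step, assuming a decision tree already trained on labelled feature-difference vectors; scipy's standard assignment solver is used for brevity where the paper modifies the Hungarian algorithm to handle appearing and disappearing cells.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tree, feats_prev, feats_cur):
    """Match cells across frames by classifying feature-difference vectors."""
    m, k = len(feats_prev), len(feats_cur)
    diff = feats_cur[None, :, :] - feats_prev[:, None, :]          # (m, k, d)
    p_same = tree.predict_proba(diff.reshape(m * k, -1))[:, 1].reshape(m, k)
    rows, cols = linear_sum_assignment(-p_same)   # maximize total match score
    return list(zip(rows, cols))                  # (previous, current) pairs
```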
The Price of Privacy in Untrusted Recommendation Engines | cs.LG cs.IT math.IT | Recent increase in online privacy concerns prompts the following question:
can a recommender system be accurate if users do not entrust it with their
private data? To answer this, we study the problem of learning item-clusters
under local differential privacy, a powerful, formal notion of data privacy. We
develop bounds on the sample-complexity of learning item-clusters from
privatized user inputs. Significantly, our results identify a sample-complexity
separation between learning in an information-rich and an information-scarce
regime, thereby highlighting the interaction between privacy and the amount of
information (ratings) available to each user.
In the information-rich regime, where each user rates at least a constant
fraction of items, a spectral clustering approach is shown to achieve a
sample-complexity lower bound derived from a simple information-theoretic
argument based on Fano's inequality. However, the information-scarce regime,
where each user rates only a vanishing fraction of items, is found to require a
fundamentally different approach both for lower bounds and algorithms. To this
end, we develop new techniques for bounding mutual information under a notion
of channel-mismatch, and also propose a new algorithm, MaxSense, and show that
it achieves optimal sample-complexity in this setting.
The techniques we develop for bounding mutual information may be of broader
interest. To illustrate this, we show their applicability to $(i)$ learning
based on 1-bit sketches, and $(ii)$ adaptive learning, where queries can be
adapted based on answers to past queries.
| Siddhartha Banerjee, Nidhi Hegde and Laurent Massouli\'e | null | 1207.3269 | null | null |
Incremental Learning of 3D-DCT Compact Representations for Robust Visual
Tracking | cs.CV cs.LG | Visual tracking usually requires an object appearance model that is robust to
changing illumination, pose and other factors encountered in video. In this
paper, we construct an appearance model using the 3D discrete cosine transform
(3D-DCT). The 3D-DCT is based on a set of cosine basis functions, which are
determined by the dimensions of the 3D signal and thus independent of the input
video data. In addition, the 3D-DCT can generate a compact energy spectrum
whose high-frequency coefficients are sparse if the appearance samples are
similar. By discarding these high-frequency coefficients, we simultaneously
obtain a compact 3D-DCT based object representation and a signal
reconstruction-based similarity measure (reflecting the information loss from
signal reconstruction). To efficiently update the object representation, we
propose an incremental 3D-DCT algorithm, which decomposes the 3D-DCT into
successive operations of the 2D discrete cosine transform (2D-DCT) and 1D
discrete cosine transform (1D-DCT) on the input video data.
| Xi Li and Anthony Dick and Chunhua Shen and Anton van den Hengel and
Hanzi Wang | null | 1207.3389 | null | null |
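A sketch of the compact representation and its reconstruction-based similarity, assuming scipy's multidimensional DCT; the incremental 2D-DCT/1D-DCT decomposition for efficient updates is omitted.

```python
import numpy as np
from scipy.fft import dctn, idctn

def similarity(patches, candidate, keep=8):
    """Score a candidate patch against stacked appearance samples by the
    information lost when high-frequency 3D-DCT coefficients are discarded.
    Assumes keep <= every dimension of the stacked cube."""
    cube = np.concatenate([patches, candidate[None]], axis=0).astype(float)
    C = dctn(cube, norm="ortho")
    mask = np.zeros_like(C)
    mask[:keep, :keep, :keep] = 1.0          # keep the compact low-frequency part
    recon = idctn(C * mask, norm="ortho")
    return -np.linalg.norm(recon[-1] - candidate)   # higher = more similar
```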
Dimension Reduction by Mutual Information Feature Extraction | cs.LG cs.CV | During the past decades, to study high-dimensional data in a large variety of
problems, researchers have proposed many Feature Extraction algorithms. One of
the most effective approaches for optimal feature extraction is based on mutual
information (MI). However it is not always easy to get an accurate estimation
for high dimensional MI. In terms of MI, optimal feature extraction creates a
feature set from the data whose features jointly have the largest dependency
on the target class and minimum redundancy. In this paper, a
component-by-component gradient ascent method is proposed for feature
extraction which is based on one-dimensional MI estimates. We will refer to
this algorithm as Mutual Information Feature Extraction (MIFX). The performance
of this proposed method is evaluated using UCI databases. The results indicate
that MIFX provides robust performance across different data sets and is
almost always the best or comparable to the best methods.
| Ali Shadvar | null | 1207.3394 | null | null |
MahNMF: Manhattan Non-negative Matrix Factorization | stat.ML cs.LG cs.NA | Non-negative matrix factorization (NMF) approximates a non-negative matrix
$X$ by a product of two non-negative low-rank factor matrices $W$ and $H$. NMF
and its extensions minimize either the Kullback-Leibler divergence or the
Euclidean distance between $X$ and $W^T H$ to model the Poisson noise or the
Gaussian noise. In practice, when the noise distribution is heavy tailed, they
cannot perform well. This paper presents Manhattan NMF (MahNMF) which minimizes
the Manhattan distance between $X$ and $W^T H$ for modeling the heavy tailed
Laplacian noise. Similar to sparse and low-rank matrix decompositions, MahNMF
robustly estimates the low-rank part and the sparse part of a non-negative
matrix and thus performs effectively when data are contaminated by outliers. We
extend MahNMF for various practical applications by developing box-constrained
MahNMF, manifold regularized MahNMF, group sparse MahNMF, elastic net inducing
MahNMF, and symmetric MahNMF. The major contribution of this paper lies in two
fast optimization algorithms for MahNMF and its extensions: the rank-one
residual iteration (RRI) method and Nesterov's smoothing method. In particular,
by approximating the residual matrix by the outer product of one row of $W$ and
one row of $H$ in MahNMF, we develop an RRI method to iteratively update each
variable of $W$ and $H$ in a closed form solution. Although RRI is efficient
for small scale MahNMF and some of its extensions, it is neither scalable to
large scale matrices nor flexible enough to optimize all MahNMF extensions.
Since the objective functions of MahNMF and its extensions are neither convex
nor smooth, we apply Nesterov's smoothing method to recursively optimize one
factor matrix with another matrix fixed. By setting the smoothing parameter
inversely proportional to the iteration number, we improve the approximation
accuracy iteratively for both MahNMF and its extensions.
| Naiyang Guan, Dacheng Tao, Zhigang Luo, John Shawe-Taylor | null | 1207.3438 | null | null |
Improved brain pattern recovery through ranking approaches | cs.LG stat.ML | Inferring the functional specificity of brain regions from functional
Magnetic Resonance Images (fMRI) data is a challenging statistical problem.
While the General Linear Model (GLM) remains the standard approach for brain
mapping, supervised learning techniques (a.k.a. decoding) have proven to be
useful to capture multivariate statistical effects distributed across voxels
and brain regions. Up to now, much effort has been made to improve decoding by
incorporating prior knowledge in the form of a particular regularization term.
In this paper we demonstrate that further improvement can be made by accounting
for non-linearities using a ranking approach rather than the commonly used
least-square regression. Through simulation, we compare the recovery properties
of our approach to linear models commonly used in fMRI based decoding. We
demonstrate the superiority of ranking with a real fMRI dataset.
| Fabian Pedregosa (INRIA Paris - Rocquencourt), Alexandre Gramfort
(LNAO, INRIA Saclay - Ile de France), Ga\"el Varoquaux (LNAO, INRIA Saclay -
Ile de France), Bertrand Thirion (INRIA Saclay - Ile de France), Christophe
Pallier (NEUROSPIN), Elodie Cauvet (NEUROSPIN) | null | 1207.3520 | null | null |
Diagnosing client faults using SVM-based intelligent inference from TCP
packet traces | cs.NI cs.AI cs.LG | We present the Intelligent Automated Client Diagnostic (IACD) system, which
only relies on inference from Transmission Control Protocol (TCP) packet traces
for rapid diagnosis of client device problems that cause network performance
issues. Using soft-margin Support Vector Machine (SVM) classifiers, the system
(i) distinguishes link problems from client problems, and (ii) identifies
characteristics unique to client faults to report the root cause of the client
device problem. Experimental evaluation demonstrated the capability of the IACD
system to distinguish between faulty and healthy links and to diagnose the
client faults with 98% accuracy in healthy links. The system can perform fault
diagnosis independent of the client's specific TCP implementation, enabling
diagnosis capability on a diverse range of client computers.
| Chathuranga Widanapathirana, Y. Ahmet Sekercioglu, Paul G.
Fitzpatrick, Milosh V. Ivanovich, Jonathan C. Li | 10.1109/IB2Com.2011.6217894 | 1207.3560 | null | null |
Learning to rank from medical imaging data | cs.LG cs.CV | Medical images can be used to predict a clinical score coding for the
severity of a disease, a pain level or the complexity of a cognitive task. In
all these cases, the predicted variable has a natural order. While a standard
classifier discards this information, we would like to take it into account in
order to improve prediction performance. A standard linear regression does
model such information; however, the linearity assumption is unlikely to be
satisfied when predicting from pixel intensities in an image. In this paper we
address these modeling challenges with a supervised learning procedure where
the model aims to order or rank images. We use a linear model for its
robustness in high dimension and its possible interpretation. We show on
simulations and two fMRI datasets that this approach is able to predict the
correct ordering on pairs of images, yielding higher prediction accuracy than
standard regression and multiclass classification techniques.
| Fabian Pedregosa (INRIA Paris - Rocquencourt, INRIA Saclay - Ile de
France), Alexandre Gramfort (INRIA Saclay - Ile de France, LNAO), Ga\"el
Varoquaux (INRIA Saclay - Ile de France, LNAO), Elodie Cauvet (NEUROSPIN),
Christophe Pallier (NEUROSPIN), Bertrand Thirion (INRIA Saclay - Ile de
France) | null | 1207.3598 | null | null |
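A generic RankSVM-style sketch of the pairwise idea (not the authors' exact loss): learn a linear scoring function by classifying difference vectors of image pairs according to which one has the higher clinical score.

```python
import numpy as np
from sklearn.svm import LinearSVC

def fit_pairwise_ranker(X, y, C=1.0):
    """Return w such that X @ w orders images consistently with labels y."""
    i, j = np.triu_indices(len(y), k=1)
    keep = y[i] != y[j]                          # only comparable pairs
    Xd = X[i[keep]] - X[j[keep]]
    yd = np.sign(y[i[keep]] - y[j[keep]])
    return LinearSVC(C=C).fit(Xd, yd).coef_.ravel()

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 30))
y = (X[:, 0] > 0).astype(int) + (X[:, 1] > 0)    # toy ordinal target in {0,1,2}
w = fit_pairwise_ranker(X, y)
```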
Fusing image representations for classification using support vector
machines | cs.CV cs.LG | In order to improve classification accuracy different image representations
are usually combined. This can be done by using two different fusing schemes.
In feature level fusion schemes, image representations are combined before the
classification process. In classifier fusion, the decisions taken separately
based on individual representations are fused to make a decision. In this paper
the main methods derived for both strategies are evaluated. Our experimental
results show that classifier fusion performs better. Specifically Bayes belief
integration is the best performing strategy for the image classification task.
| Can Demirkesen (BIT Lab, LJK), Hocine Cherifi (BIT Lab, Le2i) | 10.1109/IVCNZ.2009.5378367 | 1207.3607 | null | null |
Towards a Self-Organized Agent-Based Simulation Model for Exploration of
Human Synaptic Connections | cs.NE cs.AI cs.LG nlin.AO | In this paper, we present the early design of our self-organized agent-based
simulation model for the exploration of synaptic connections, which faithfully
generates what is observed in natural situations. While we take inspiration from
neuroscience, our intent is not to create a veridical model of processes in
neurodevelopmental biology, nor to represent a real biological system. Instead,
our goal is to design a simulation model that learns to act in the same way as
the human nervous system, using findings obtained from human subjects via reflex
methodologies in order to estimate unknown connections.
| \"Onder G\"urcan, Carole Bernon, Kemal S. T\"urker | null | 1207.3760 | null | null |