categories | doi | id | year | venue | link | updated | published | title | abstract | authors
string | string | string | float64 | string | string | string | string | string | string | sequence
---|---|---|---|---|---|---|---|---|---|---|
cs.AI cs.LG cs.SY | null | 1301.0584 | null | null | http://arxiv.org/pdf/1301.0584v1 | 2012-12-12T15:57:19Z | 2012-12-12T15:57:19Z | Decayed MCMC Filtering | Filtering---estimating the state of a partially observable Markov process
from a sequence of observations---is one of the most widely studied problems in
control theory, AI, and computational statistics. Exact computation of the
posterior distribution is generally intractable for large discrete systems and
for nonlinear continuous systems, so a good deal of effort has gone into
developing robust approximation algorithms. This paper describes a simple
stochastic approximation algorithm for filtering called {\em decayed MCMC}. The
algorithm applies Markov chain Monte Carlo sampling to the space of state
trajectories using a proposal distribution that favours flips of more recent
state variables. The formal analysis of the algorithm involves a generalization
of standard coupling arguments for MCMC convergence. We prove that for any
ergodic underlying Markov process, the convergence time of decayed MCMC with
inverse-polynomial decay remains bounded as the length of the observation
sequence grows. We show experimentally that decayed MCMC is at least
competitive with other approximation algorithms such as particle filtering.
| [
"['Bhaskara Marthi' 'Hanna Pasula' 'Stuart Russell' 'Yuval Peres']",
"Bhaskara Marthi, Hanna Pasula, Stuart Russell, Yuval Peres"
] |
cs.LG stat.ML | null | 1301.0586 | null | null | http://arxiv.org/pdf/1301.0586v1 | 2012-12-12T15:57:27Z | 2012-12-12T15:57:27Z | Staged Mixture Modelling and Boosting | In this paper, we introduce and evaluate a data-driven staged mixture
modeling technique for building density, regression, and classification models.
Our basic approach is to sequentially add components to a finite mixture model
using the structural expectation maximization (SEM) algorithm. We show that our
technique is qualitatively similar to boosting. This correspondence is a
natural byproduct of the fact that we use the SEM algorithm to sequentially fit
the mixture model. Finally, in our experimental evaluation, we demonstrate the
effectiveness of our approach on a variety of prediction and density estimation
tasks using real-world data.
| [
"Christopher Meek, Bo Thiesson, David Heckerman",
"['Christopher Meek' 'Bo Thiesson' 'David Heckerman']"
] |
cs.DS cs.LG stat.ML | null | 1301.0587 | null | null | http://arxiv.org/pdf/1301.0587v1 | 2012-12-12T15:57:31Z | 2012-12-12T15:57:31Z | Optimal Time Bounds for Approximate Clustering | Clustering is a fundamental problem in unsupervised learning, and has been
studied widely both as a problem of learning mixture models and as an
optimization problem. In this paper, we study clustering with respect to the
\emph{k-median} objective function, a natural formulation of clustering in which
we attempt to minimize the average distance to cluster centers. One of the main
contributions of this paper is a simple but powerful sampling technique that we
call \emph{successive sampling} that could be of independent interest. We show
that our sampling procedure can rapidly identify a small set of points (of size
just O(k log(n/k))) that summarize the input points for the purpose of
clustering. Using successive sampling, we develop an algorithm for the k-median
problem that runs in O(nk) time for a wide range of values of k and is
guaranteed, with high probability, to return a solution with cost at most a
constant factor times optimal. We also establish a lower bound of Omega(nk) on
any randomized constant-factor approximation algorithm for the k-median problem
that succeeds with even a negligible (say 1/100) probability. Thus we establish
a tight time bound of Theta(nk) for the k-median problem for a wide range of
values of k. The best previous upper bound for the problem was O(nk), where the
O-notation hides polylogarithmic factors in n and k. The best previous lower
bound of Omega(nk) applied only to deterministic k-median algorithms. While we
focus our presentation on the k-median objective, all our upper bounds are
valid for the k-means objective as well. In this context our algorithm compares
favorably to the widely used k-means heuristic, which requires O(nk) time for
just one iteration and provides no useful approximation guarantees.
| [
"['Ramgopal Mettu' 'Greg Plaxton']",
"Ramgopal Mettu, Greg Plaxton"
] |
cs.LG cs.IR stat.ML | null | 1301.0588 | null | null | http://arxiv.org/pdf/1301.0588v1 | 2012-12-12T15:57:35Z | 2012-12-12T15:57:35Z | Expectation-Propogation for the Generative Aspect Model | The generative aspect model is an extension of the multinomial model for text
that allows word probabilities to vary stochastically across documents.
Previous results with aspect models have been promising, but hindered by the
computational difficulty of carrying out inference and learning. This paper
demonstrates that the simple variational methods of Blei et al (2001) can lead
to inaccurate inferences and biased learning for the generative aspect model.
We develop an alternative approach that leads to higher accuracy at comparable
cost. An extension of Expectation-Propagation is used for inference and then
embedded in an EM algorithm for learning. Experimental results are presented
for both synthetic and real data sets.
| [
"Thomas P. Minka, John Lafferty",
"['Thomas P. Minka' 'John Lafferty']"
] |
cs.LG stat.ML | null | 1301.0593 | null | null | http://arxiv.org/pdf/1301.0593v1 | 2012-12-12T15:57:54Z | 2012-12-12T15:57:54Z | Bayesian Network Classifiers in a High Dimensional Framework | We present a growing dimension asymptotic formalism. The perspective in this
paper is classification theory and we show that it can accommodate
probabilistic network classifiers, including the naive Bayes model and its
augmented version. When represented as a Bayesian network these classifiers
have an important advantage: The corresponding discriminant function turns out
to be a specialized case of a generalized additive model, which makes it
possible to get closed form expressions for the asymptotic misclassification
probabilities used here as a measure of classification accuracy. Moreover, in
this paper we propose a new quantity for assessing the discriminative power of
a set of features which is then used to elaborate the augmented naive Bayes
classifier. The result is a weighted form of the augmented naive Bayes that
distributes weights among the sets of features according to their
discriminative power. We derive the asymptotic distribution of the sample based
discriminative power and show that it is seriously overestimated in a high
dimensional case. We then apply this result to find the optimal, in the sense of
minimum misclassification probability, type of weighting.
| [
"Tatjana Pavlenko, Dietrich von Rosen",
"['Tatjana Pavlenko' 'Dietrich von Rosen']"
] |
cs.AI cs.LG | null | 1301.0598 | null | null | http://arxiv.org/pdf/1301.0598v1 | 2012-12-12T15:58:13Z | 2012-12-12T15:58:13Z | Asymptotic Model Selection for Naive Bayesian Networks | We develop a closed form asymptotic formula to compute the marginal
likelihood of data given a naive Bayesian network model with two hidden states
and binary features. This formula deviates from the standard BIC score. Our
work provides a concrete example that the BIC score is generally not valid for
statistical models that belong to a stratified exponential family. This stands
in contrast to linear and curved exponential families, where the BIC score has
been proven to provide a correct approximation for the marginal likelihood.
| [
"['Dmitry Rusakov' 'Dan Geiger']",
"Dmitry Rusakov, Dan Geiger"
] |
cs.LG stat.ML | null | 1301.0599 | null | null | http://arxiv.org/pdf/1301.0599v1 | 2012-12-12T15:58:17Z | 2012-12-12T15:58:17Z | Advances in Boosting (Invited Talk) | Boosting is a general method of generating many simple classification rules
and combining them into a single, highly accurate rule. In this talk, I will
review the AdaBoost boosting algorithm and some of its underlying theory, and
then look at how this theory has helped us to face some of the challenges of
applying AdaBoost in two domains: In the first of these, we used boosting for
predicting and modeling the uncertainty of prices in complicated, interacting
auctions. The second application was to the classification of caller utterances
in a telephone spoken-dialogue system where we faced two challenges: the need
to incorporate prior knowledge to compensate for initially insufficient data;
and a later need to filter the large stream of unlabeled examples being
collected to select the ones whose labels are likely to be most informative.
| [
"['Robert E. Schapire']",
"Robert E. Schapire"
] |
cs.LG cs.AI cs.IR | null | 1301.0600 | null | null | http://arxiv.org/pdf/1301.0600v2 | 2015-05-16T23:00:34Z | 2012-12-12T15:58:21Z | An MDP-based Recommender System | Typical Recommender systems adopt a static view of the recommendation process
and treat it as a prediction problem. We argue that it is more appropriate to
view the problem of generating recommendations as a sequential decision problem
and, consequently, that Markov decision processes (MDP) provide a more
appropriate model for Recommender systems. MDPs introduce two benefits: they
take into account the long-term effects of each recommendation, and they take
into account the expected value of each recommendation. To succeed in practice,
an MDP-based Recommender system must employ a strong initial model; and the
bulk of this paper is concerned with the generation of such a model. In
particular, we suggest the use of an n-gram predictive model for generating the
initial MDP. Our n-gram model induces a Markov-chain model of user behavior
whose predictive accuracy is greater than that of existing predictive models.
We describe our predictive model in detail and evaluate its performance on real
data. In addition, we show how the model can be used in an MDP-based
Recommender system.
| [
"Guy Shani, Ronen I. Brafman, David Heckerman",
"['Guy Shani' 'Ronen I. Brafman' 'David Heckerman']"
] |
cs.LG stat.ML | null | 1301.0601 | null | null | http://arxiv.org/pdf/1301.0601v1 | 2012-12-12T15:58:25Z | 2012-12-12T15:58:25Z | Reinforcement Learning with Partially Known World Dynamics | Reinforcement learning would enjoy better success on real-world problems if
domain knowledge could be imparted to the algorithm by the modelers. Most
problems have both hidden state and unknown dynamics. Partially observable
Markov decision processes (POMDPs) allow for the modeling of both.
Unfortunately, they do not provide a natural framework in which to specify
knowledge about the domain dynamics. The designer must either admit to knowing
nothing about the dynamics or completely specify the dynamics (thereby turning
it into a planning problem). We propose a new framework called a partially
known Markov decision process (PKMDP) which allows the designer to specify
known dynamics while still leaving portions of the environment's dynamics
unknown. The model represents not only the environment dynamics but also the
agent's knowledge of the dynamics. We present a reinforcement learning algorithm
for this model based on importance sampling. The algorithm incorporates
planning based on the known dynamics and learning about the unknown dynamics.
Our results clearly demonstrate the ability to add domain knowledge and the
resulting benefits for learning.
| [
"Christian R. Shelton",
"['Christian R. Shelton']"
] |
cs.LG stat.ML | null | 1301.0602 | null | null | http://arxiv.org/pdf/1301.0602v1 | 2012-12-12T15:58:30Z | 2012-12-12T15:58:30Z | Unsupervised Active Learning in Large Domains | Active learning is a powerful approach to analyzing data effectively. We show
that the feasibility of active learning depends crucially on the choice of
measure with respect to which the query is being optimized. The standard
information gain, for example, does not permit an accurate evaluation with a
small committee, a representative subset of the model space. We propose a
surrogate measure requiring only a small committee and discuss the properties
of this new measure. We devise, in addition, a bootstrap approach for committee
selection. The advantages of this approach are illustrated in the context of
recovering (regulatory) network models.
| [
"['Harald Steck' 'Tommi S. Jaakkola']",
"Harald Steck, Tommi S. Jaakkola"
] |
cs.LG cs.AI stat.ML | null | 1301.0604 | null | null | http://arxiv.org/pdf/1301.0604v1 | 2012-12-12T15:58:38Z | 2012-12-12T15:58:38Z | Discriminative Probabilistic Models for Relational Data | In many supervised learning tasks, the entities to be labeled are related to
each other in complex ways and their labels are not independent. For example,
in hypertext classification, the labels of linked pages are highly correlated.
A standard approach is to classify each entity independently, ignoring the
correlations between them. Recently, Probabilistic Relational Models, a
relational version of Bayesian networks, were used to define a joint
probabilistic model for a collection of related entities. In this paper, we
present an alternative framework that builds on (conditional) Markov networks
and addresses two limitations of the previous approach. First, undirected
models do not impose the acyclicity constraint that hinders representation of
many important relational dependencies in directed models. Second, undirected
models are well suited for discriminative training, where we optimize the
conditional likelihood of the labels given the features, which generally
improves classification accuracy. We show how to train these models
effectively, and how to use approximate probabilistic inference over the
learned model for collective classification of multiple related entities. We
provide experimental results on a webpage classification task, showing that
accuracy can be significantly improved by modeling relational dependencies.
| [
"['Ben Taskar' 'Pieter Abbeel' 'Daphne Koller']",
"Ben Taskar, Pieter Abbeel, Daphne Koller"
] |
cs.LG stat.ML | null | 1301.0610 | null | null | http://arxiv.org/pdf/1301.0610v1 | 2012-12-12T15:59:01Z | 2012-12-12T15:59:01Z | A New Class of Upper Bounds on the Log Partition Function | Bounds on the log partition function are important in a variety of contexts,
including approximate inference, model fitting, decision theory, and large
deviations analysis. We introduce a new class of upper bounds on the log
partition function, based on convex combinations of distributions in the
exponential domain, that is applicable to an arbitrary undirected graphical
model. In the special case of convex combinations of tree-structured
distributions, we obtain a family of variational problems, similar to the Bethe
free energy, but distinguished by the following desirable properties: (i) they
are convex, and have a unique global minimum; and (ii) the global minimum gives
an upper bound on the log partition function. The global minimum is defined by
stationary conditions very similar to those defining fixed points of belief
propagation or tree-based reparameterization (Wainwright et al., 2001). As with
BP fixed points, the elements of the minimizing argument can be used as
approximations to the marginals of the original model. The analysis described
here can be extended to structures of higher treewidth (e.g., hypertrees),
thereby making connections with more advanced approximations (e.g., Kikuchi and
variants; Yedidia et al., 2001; Minka, 2001).
| [
"['Martin Wainwright' 'Tommi S. Jaakkola' 'Alan Willsky']",
"Martin Wainwright, Tommi S. Jaakkola, Alan Willsky"
] |
cs.LG cs.AI stat.ML | null | 1301.0613 | null | null | http://arxiv.org/pdf/1301.0613v1 | 2012-12-12T15:59:15Z | 2012-12-12T15:59:15Z | IPF for Discrete Chain Factor Graphs | Iterative Proportional Fitting (IPF), combined with EM, is commonly used as
an algorithm for likelihood maximization in undirected graphical models. In
this paper, we present two iterative algorithms that generalize upon IPF. The
first one is for likelihood maximization in discrete chain factor graphs, which
we define as a wide class of discrete variable models including undirected
graphical models and Bayesian networks, but also chain graphs and sigmoid
belief networks. The second one is for conditional likelihood maximization in
standard undirected models and Bayesian networks. In both algorithms, the
iteration steps are expressed in closed form. Numerical simulations show that
the algorithms are competitive with state of the art methods.
| [
"['Wim Wiegerinck' 'Tom Heskes']",
"Wim Wiegerinck, Tom Heskes"
] |
cs.LG stat.ML | null | 1301.0725 | null | null | http://arxiv.org/pdf/1301.0725v1 | 2013-01-04T13:56:25Z | 2013-01-04T13:56:25Z | The Sum-over-Forests density index: identifying dense regions in a graph | This work introduces a novel nonparametric density index defined on graphs,
the Sum-over-Forests (SoF) density index. It is based on a clear and intuitive
idea: high-density regions in a graph are characterized by the fact that they
contain a large number of low-cost trees with high outdegrees, while low-density
regions contain only a few. Therefore, a Boltzmann probability distribution on
the countable set of forests in the graph is defined so that large (high-cost)
forests occur with a low probability while short (low-cost) forests occur with
a high probability. Then, the SoF density index of a node is defined as the
expected outdegree of this node in a non-trivial tree of the forest, thus
providing a measure of density around that node. Following the matrix-forest
theorem, and a statistical physics framework, it is shown that the SoF density
index can be easily computed in closed form through a simple matrix inversion.
Experiments on artificial and real data sets show that the proposed index
performs well on finding dense regions, for graphs of various origins.
| [
"Mathieu Senelle, Silvia Garcia-Diez, Amin Mantrach, Masashi Shimbo,\n Marco Saerens, Fran\\c{c}ois Fouss",
"['Mathieu Senelle' 'Silvia Garcia-Diez' 'Amin Mantrach' 'Masashi Shimbo'\n 'Marco Saerens' 'François Fouss']"
] |
math.ST cs.LG math.PR stat.TH | 10.3150/15-BEJ703 | 1301.0802 | null | null | http://arxiv.org/abs/1301.0802v4 | 2016-03-24T14:26:50Z | 2013-01-04T18:55:41Z | Borrowing strength in hierarchical Bayes: Posterior concentration of the
Dirichlet base measure | This paper studies posterior concentration behavior of the base probability
measure of a Dirichlet measure, given observations associated with the sampled
Dirichlet processes, as the number of observations tends to infinity. The base
measure itself is endowed with another Dirichlet prior, a construction known as
the hierarchical Dirichlet processes (Teh et al. [J. Amer. Statist. Assoc. 101
(2006) 1566-1581]). Convergence rates are established in transportation
distances (i.e., Wasserstein metrics) under various conditions on the geometry
of the support of the true base measure. As a consequence of the theory, we
demonstrate the benefit of "borrowing strength" in the inference of multiple
groups of data - a powerful insight often invoked to motivate hierarchical
modeling. In certain settings, the gain in efficiency due to the latent
hierarchy can be dramatic, improving from a standard nonparametric rate to a
parametric rate of convergence. Tools developed include transportation
distances for nonparametric Bayesian hierarchies of random measures, the
existence of tests for Dirichlet measures, and geometric properties of the
support of Dirichlet measures.
| [
"['XuanLong Nguyen']",
"XuanLong Nguyen"
] |
cs.LG cs.DB cs.DS stat.ML | null | 1301.1218 | null | null | http://arxiv.org/pdf/1301.1218v3 | 2014-01-22T16:38:44Z | 2013-01-07T15:04:43Z | Finding the True Frequent Itemsets | Frequent Itemsets (FIs) mining is a fundamental primitive in data mining. It
requires identifying all itemsets appearing in at least a fraction $\theta$ of
a transactional dataset $\mathcal{D}$. Often though, the ultimate goal of
mining $\mathcal{D}$ is not an analysis of the dataset \emph{per se}, but the
understanding of the underlying process that generated it. Specifically, in
many applications $\mathcal{D}$ is a collection of samples obtained from an
unknown probability distribution $\pi$ on transactions, and by extracting the
FIs in $\mathcal{D}$ one attempts to infer itemsets that are frequently (i.e.,
with probability at least $\theta$) generated by $\pi$, which we call the True
Frequent Itemsets (TFIs). Due to the inherently stochastic nature of the
generative process, the set of FIs is only a rough approximation of the set of
TFIs, as it often contains a huge number of \emph{false positives}, i.e.,
spurious itemsets that are not among the TFIs. In this work we design and
analyze an algorithm to identify a threshold $\hat{\theta}$ such that the
collection of itemsets with frequency at least $\hat{\theta}$ in $\mathcal{D}$
contains only TFIs with probability at least $1-\delta$, for some
user-specified $\delta$. Our method uses results from statistical learning
theory involving the (empirical) VC-dimension of the problem at hand. This
allows us to identify almost all the TFIs without including any false positive.
We also experimentally compare our method with the direct mining of
$\mathcal{D}$ at frequency $\theta$ and with techniques based on widely-used
standard bounds (i.e., the Chernoff bounds) of the binomial distribution, and
show that our algorithm outperforms these methods and achieves even better
results than what is guaranteed by the theoretical analysis.
| [
"['Matteo Riondato' 'Fabio Vandin']",
"Matteo Riondato and Fabio Vandin"
] |
stat.ML cs.LG | null | 1301.1254 | null | null | http://arxiv.org/pdf/1301.1254v1 | 2013-01-07T16:39:09Z | 2013-01-07T16:39:09Z | Dynamical Models and Tracking Regret in Online Convex Programming | This paper describes a new online convex optimization method which
incorporates a family of candidate dynamical models and establishes novel
tracking regret bounds that scale with the comparator's deviation from the best
dynamical model in this family. Previous online optimization methods are
designed to have a total accumulated loss comparable to that of the best
comparator sequence, and existing tracking or shifting regret bounds scale with
the overall variation of the comparator sequence. In many practical scenarios,
however, the environment is nonstationary and comparator sequences with small
variation are quite weak, resulting in large losses. The proposed Dynamic
Mirror Descent method, in contrast, can yield low regret relative to highly
variable comparator sequences by both tracking the best dynamical model and
forming predictions based on that model. This concept is demonstrated
empirically in the context of sequential compressive observations of a dynamic
scene and tracking a dynamic social network.
| [
"['Eric C. Hall' 'Rebecca M. Willett']",
"Eric C. Hall and Rebecca M. Willett"
] |
stat.ML cs.AI cs.LG | null | 1301.1299 | null | null | http://arxiv.org/pdf/1301.1299v1 | 2013-01-07T18:48:02Z | 2013-01-07T18:48:02Z | Automated Variational Inference in Probabilistic Programming | We present a new algorithm for approximate inference in probabilistic
programs, based on a stochastic gradient for variational programs. This method
is efficient without restrictions on the probabilistic program; it is
particularly practical for distributions which are not analytically tractable,
including highly structured distributions that arise in probabilistic programs.
We show how to automatically derive mean-field probabilistic programs and
optimize them, and demonstrate that our perspective improves inference
efficiency over other algorithms.
| [
"['David Wingate' 'Theophane Weber']",
"David Wingate, Theophane Weber"
] |
cs.NE cs.IT cs.LG math.IT | null | 1301.1555 | null | null | http://arxiv.org/pdf/1301.1555v5 | 2013-08-23T14:26:16Z | 2013-01-08T14:55:45Z | Coupled Neural Associative Memories | We propose a novel architecture to design a neural associative memory that is
capable of learning a large number of patterns and recalling them later in
the presence of noise. It is based on dividing the neurons into local clusters
and parallel planes, very similar to the architecture of the visual cortex of
the macaque brain. The common features of our proposed architecture with those of
spatially-coupled codes enable us to show that the performance of such networks
in eliminating noise is drastically better than the previous approaches while
maintaining the ability of learning an exponentially large number of patterns.
Previous work either failed in providing good performance during the recall
phase or in offering large pattern retrieval (storage) capacities. We also
present computational experiments that lend additional support to the
theoretical analysis.
| [
"['Amin Karbasi' 'Amir Hesam Salavati' 'Amin Shokrollahi']",
"Amin Karbasi, Amir Hesam Salavati, and Amin Shokrollahi"
] |
q-bio.BM cs.LG | null | 1301.1590 | null | null | http://arxiv.org/pdf/1301.1590v1 | 2013-01-08T16:58:28Z | 2013-01-08T16:58:28Z | An Efficient Algorithm for Upper Bound on the Partition Function of
Nucleic Acids | It has been shown that the minimum free energy structure for RNAs and RNA-RNA
interaction is often incorrect due to inaccuracies in the energy parameters and
inherent limitations of the energy model. In contrast, ensemble based
quantities such as melting temperature and equilibrium concentrations can be
more reliably predicted. Even structure prediction by sampling from the
ensemble and clustering those structures by Sfold [7] has proven to be more
reliable than minimum free energy structure prediction. The main obstacle for
ensemble based approaches is the computational complexity of the partition
function and base pairing probabilities. For instance, the space complexity of
the partition function for RNA-RNA interaction is $O(n^4)$ and the time
complexity is $O(n^6)$ which are prohibitively large [4,12]. Our goal in this
paper is to give a fast algorithm, based on sparse folding, to calculate an
upper bound on the partition function. Our work is based on the recent
algorithm of Hazan and Jaakkola [10]. The space complexity of our algorithm is
the same as that of sparse folding algorithms, and the time complexity of our
algorithm is $O(MFE(n)\ell)$ for single RNA and $O(MFE(m, n)\ell)$ for RNA-RNA
interaction in practice, in which $MFE$ is the running time of sparse folding
and $\ell \leq n$ ($\ell \leq n + m$) is a sequence dependent parameter.
| [
"Hamidreza Chitsaz and Elmirasadat Forouzmand and Gholamreza Haffari",
"['Hamidreza Chitsaz' 'Elmirasadat Forouzmand' 'Gholamreza Haffari']"
] |
q-bio.BM cs.CE cs.LG | null | 1301.1608 | null | null | http://arxiv.org/pdf/1301.1608v1 | 2013-01-08T17:43:08Z | 2013-01-08T17:43:08Z | The RNA Newton Polytope and Learnability of Energy Parameters | Despite nearly two scores of research on RNA secondary structure and RNA-RNA
interaction prediction, the accuracy of the state-of-the-art algorithms is
still far from satisfactory. Researchers have proposed increasingly complex
energy models and improved parameter estimation methods in anticipation of
endowing their methods with enough power to solve the problem. The output has
disappointingly been only modest improvements, not matching the expectations.
Even recent massively featured machine learning approaches were not able to
break the barrier. In this paper, we introduce the notion of learnability of
the parameters of an energy model as a measure of its inherent capability. We
say that the parameters of an energy model are learnable iff there exists at
least one set of such parameters that renders every known RNA structure to date
the minimum free energy structure. We derive a necessary condition for the
learnability and give a dynamic programming algorithm to assess it. Our
algorithm computes the convex hull of the feature vectors of all feasible
structures in the ensemble of a given input sequence. Interestingly, that
convex hull coincides with the Newton polytope of the partition function as a
polynomial in energy parameters. We demonstrated the application of our theory
to a simple energy model consisting of a weighted count of A-U and C-G base
pairs. Our results show that this simple energy model satisfies the necessary
condition for less than one third of the input unpseudoknotted
sequence-structure pairs chosen from the RNA STRAND v2.0 database. For another
one third, the necessary condition is barely violated, which suggests that
augmenting this simple energy model with more features such as the Turner loops
may solve the problem. The necessary condition is severely violated for 8%,
which provides a small set of hard cases that require further investigation.
| [
"Elmirasadat Forouzmand and Hamidreza Chitsaz",
"['Elmirasadat Forouzmand' 'Hamidreza Chitsaz']"
] |
cs.LG stat.ML | null | 1301.1722 | null | null | http://arxiv.org/pdf/1301.1722v1 | 2013-01-08T23:45:06Z | 2013-01-08T23:45:06Z | Linear Bandits in High Dimension and Recommendation Systems | A large number of online services provide automated recommendations to help
users to navigate through a large collection of items. New items (products,
videos, songs, advertisements) are suggested on the basis of the user's past
history and --when available-- her demographic profile. Recommendations have to
satisfy the dual goal of helping the user to explore the space of available
items, while allowing the system to probe the user's preferences.
We model this trade-off using linearly parametrized multi-armed bandits,
propose a policy and prove upper and lower bounds on the cumulative "reward"
that coincide up to constants in the data poor (high-dimensional) regime. Prior
work on linear bandits has focused on the data rich (low-dimensional) regime
and used cumulative "risk" as the figure of merit. For this data rich regime,
we provide a simple modification for our policy that achieves near-optimal risk
performance under more restrictive assumptions on the geometry of the problem.
We test (a variation of) the scheme used for establishing achievability on the
Netflix and MovieLens datasets and obtain good agreement with the qualitative
predictions of the theory we develop.
| [
"Yash Deshpande and Andrea Montanari",
"['Yash Deshpande' 'Andrea Montanari']"
] |
cs.LG | null | 1301.1936 | null | null | http://arxiv.org/pdf/1301.1936v1 | 2013-01-09T18:02:54Z | 2013-01-09T18:02:54Z | Risk-Aversion in Multi-armed Bandits | Stochastic multi-armed bandits solve the Exploration-Exploitation dilemma and
ultimately maximize the expected reward. Nonetheless, in many practical
problems, maximizing the expected reward is not the most desirable objective.
In this paper, we introduce a novel setting based on the principle of
risk-aversion where the objective is to compete against the arm with the best
risk-return trade-off. This setting proves to be intrinsically more difficult
than the standard multi-arm bandit setting due in part to an exploration risk
which introduces a regret associated to the variability of an algorithm. Using
variance as a measure of risk, we introduce two new algorithms, investigate
their theoretical guarantees, and report preliminary empirical results.
| [
"Amir Sani (INRIA Lille - Nord Europe), Alessandro Lazaric (INRIA Lille\n - Nord Europe), R\\'emi Munos (INRIA Lille - Nord Europe)",
"['Amir Sani' 'Alessandro Lazaric' 'Rémi Munos']"
] |
stat.ML cs.LG | null | 1301.1942 | null | null | http://arxiv.org/pdf/1301.1942v2 | 2016-01-10T16:01:22Z | 2013-01-09T18:26:56Z | Bayesian Optimization in a Billion Dimensions via Random Embeddings | Bayesian optimization techniques have been successfully applied to robotics,
planning, sensor placement, recommendation, advertising, intelligent user
interfaces and automatic algorithm configuration. Despite these successes, the
approach is restricted to problems of moderate dimension, and several workshops
on Bayesian optimization have identified its scaling to high-dimensions as one
of the holy grails of the field. In this paper, we introduce a novel random
embedding idea to attack this problem. The resulting Random EMbedding Bayesian
Optimization (REMBO) algorithm is very simple, has important invariance
properties, and applies to domains with both categorical and continuous
variables. We present a thorough theoretical analysis of REMBO. Empirical
results confirm that REMBO can effectively solve problems with billions of
dimensions, provided the intrinsic dimensionality is low. They also show that
REMBO achieves state-of-the-art performance in optimizing the 47 discrete
parameters of a popular mixed integer linear programming solver.
| [
"Ziyu Wang, Frank Hutter, Masrour Zoghi, David Matheson, Nando de\n Freitas",
"['Ziyu Wang' 'Frank Hutter' 'Masrour Zoghi' 'David Matheson'\n 'Nando de Freitas']"
] |
cs.LG | null | 1301.2012 | null | null | http://arxiv.org/pdf/1301.2012v1 | 2013-01-10T00:47:21Z | 2013-01-10T00:47:21Z | Error Correction in Learning using SVMs | This paper is concerned with learning binary classifiers under adversarial
label-noise. We introduce the problem of error-correction in learning where the
goal is to recover the original clean data from a label-manipulated version of
it, given (i) no constraints on the adversary other than an upper-bound on the
number of errors, and (ii) some regularity properties for the original data. We
present a simple and practical error-correction algorithm called SubSVMs that
learns individual SVMs on several small-size (log-size), class-balanced, random
subsets of the data and then reclassifies the training points using a majority
vote. Our analysis reveals the need for the two main ingredients of SubSVMs,
namely class-balanced sampling and subsampled bagging. Experimental results on
synthetic as well as benchmark UCI data demonstrate the effectiveness of our
approach. In addition to noise-tolerance, log-size subsampled bagging also
yields significant run-time benefits over standard SVMs.
| [
"['Srivatsan Laxman' 'Sushil Mittal' 'Ramarathnam Venkatesan']",
"Srivatsan Laxman, Sushil Mittal and Ramarathnam Venkatesan"
] |
stat.ML cs.LG | null | 1301.2015 | null | null | http://arxiv.org/pdf/1301.2015v1 | 2013-01-10T02:02:01Z | 2013-01-10T02:02:01Z | Heteroscedastic Relevance Vector Machine | In this work we propose a heteroscedastic generalization to RVM, a fast
Bayesian framework for regression, based on some recent similar works. We use
variational approximation and expectation propagation to tackle the problem.
The work is still in progress, and we are examining the results and comparing
with the previous works.
| [
"['Daniel Khashabi' 'Mojtaba Ziyadi' 'Feng Liang']",
"Daniel Khashabi, Mojtaba Ziyadi, Feng Liang"
] |
cs.CV cs.LG stat.ML | null | 1301.2032 | null | null | http://arxiv.org/pdf/1301.2032v1 | 2013-01-10T05:26:18Z | 2013-01-10T05:26:18Z | Training Effective Node Classifiers for Cascade Classification | Cascade classifiers are widely used in real-time object detection. Different
from conventional classifiers that are designed for a low overall
classification error rate, a classifier in each node of the cascade is required
to achieve an extremely high detection rate and moderate false positive rate.
Although there are a few reported methods addressing this requirement in the
context of object detection, there is no principled feature selection method
that explicitly takes into account this asymmetric node learning objective. We
provide such an algorithm here. We show that a special case of the biased
minimax probability machine has the same formulation as the linear asymmetric
classifier (LAC) of Wu et al (2005). We then design a new boosting algorithm
that directly optimizes the cost function of LAC. The resulting
totally-corrective boosting algorithm is implemented by the column generation
technique in convex optimization. Experimental results on object detection
verify the effectiveness of the proposed boosting algorithm as a node
classifier in cascade object detection, and show performance better than that
of the current state-of-the-art.
| [
"['Chunhua Shen' 'Peng Wang' 'Sakrapee Paisitkriangkrai'\n 'Anton van den Hengel']",
"Chunhua Shen and Peng Wang and Sakrapee Paisitkriangkrai and Anton van\n den Hengel"
] |
stat.ML cs.LG | null | 1301.2115 | null | null | http://arxiv.org/pdf/1301.2115v1 | 2013-01-10T13:29:17Z | 2013-01-10T13:29:17Z | Domain Generalization via Invariant Feature Representation | This paper investigates domain generalization: How to take knowledge acquired
from an arbitrary number of related domains and apply it to previously unseen
domains? We propose Domain-Invariant Component Analysis (DICA), a kernel-based
optimization algorithm that learns an invariant transformation by minimizing
the dissimilarity across domains, whilst preserving the functional relationship
between input and output variables. A learning-theoretic analysis shows that
reducing dissimilarity improves the expected generalization ability of
classifiers on new domains, motivating the proposed algorithm. Experimental
results on synthetic and real-world datasets demonstrate that DICA successfully
learns invariant features and improves classifier performance in practice.
| [
"Krikamol Muandet, David Balduzzi, Bernhard Sch\\\"olkopf",
"['Krikamol Muandet' 'David Balduzzi' 'Bernhard Schölkopf']"
] |
stat.ML cs.LG stat.ME | null | 1301.2194 | null | null | http://arxiv.org/pdf/1301.2194v1 | 2013-01-10T17:23:11Z | 2013-01-10T17:23:11Z | Network-based clustering with mixtures of L1-penalized Gaussian
graphical models: an empirical investigation | In many applications, multivariate samples may harbor previously unrecognized
heterogeneity at the level of conditional independence or network structure.
For example, in cancer biology, disease subtypes may differ with respect to
subtype-specific interplay between molecular components. Then, both subtype
discovery and estimation of subtype-specific networks present important and
related challenges. To enable such analyses, we put forward a mixture model
whose components are sparse Gaussian graphical models. This brings together
model-based clustering and graphical modeling to permit simultaneous estimation
of cluster assignments and cluster-specific networks. We carry out estimation
within an L1-penalized framework, and investigate several specific penalization
regimes. We present empirical results on simulated data and provide general
recommendations for the formulation and use of mixtures of L1-penalized
Gaussian graphical models.
| [
"Steven M. Hill and Sach Mukherjee",
"['Steven M. Hill' 'Sach Mukherjee']"
] |
cs.AI cs.LG stat.ML | null | 1301.2262 | null | null | http://arxiv.org/pdf/1301.2262v1 | 2013-01-10T16:23:01Z | 2013-01-10T16:23:01Z | Conditions Under Which Conditional Independence and Scoring Methods Lead
to Identical Selection of Bayesian Network Models | It is often stated in papers tackling the task of inferring Bayesian network
structures from data that there are these two distinct approaches: (i) Apply
conditional independence tests when testing for the presence or otherwise of
edges; (ii) Search the model space using a scoring metric. Here I argue that
for complete data and a given node ordering this division is a myth, by showing
that cross entropy methods for checking conditional independence are
mathematically identical to methods based upon discriminating between models by
their overall goodness-of-fit logarithmic scores.
| [
"Robert G. Cowell",
"['Robert G. Cowell']"
] |
cs.LG stat.CO stat.ML | null | 1301.2266 | null | null | http://arxiv.org/pdf/1301.2266v1 | 2013-01-10T16:23:18Z | 2013-01-10T16:23:18Z | Variational MCMC | We propose a new class of learning algorithms that combines variational
approximation and Markov chain Monte Carlo (MCMC) simulation. Naive algorithms
that use the variational approximation as proposal distribution can perform
poorly because this approximation tends to underestimate the true variance and
other features of the data. We solve this problem by introducing more
sophisticated MCMC algorithms. One of these algorithms is a mixture of two MCMC
kernels: a random walk Metropolis kernel and a block Metropolis-Hastings (MH)
kernel with a variational approximation as proposal distribution. The MH kernel
allows one to locate regions of high probability efficiently. The Metropolis
kernel allows us to explore the vicinity of these regions. This algorithm
outperforms variational approximations because it yields slightly better
estimates of the mean and considerably better estimates of higher moments, such
as covariances. It also outperforms standard MCMC algorithms because it locates
the regions of high probability quickly, thus speeding up convergence. We
demonstrate this algorithm on the problem of Bayesian parameter estimation for
logistic (sigmoid) belief networks.
| [
"Nando de Freitas, Pedro Hojen-Sorensen, Michael I. Jordan, Stuart\n Russell",
"['Nando de Freitas' 'Pedro Hojen-Sorensen' 'Michael I. Jordan'\n 'Stuart Russell']"
] |
cs.AI cs.LG | null | 1301.2268 | null | null | http://arxiv.org/pdf/1301.2268v1 | 2013-01-10T16:23:26Z | 2013-01-10T16:23:26Z | Incorporating Expressive Graphical Models in Variational Approximations:
Chain-Graphs and Hidden Variables | Global variational approximation methods in graphical models allow efficient
approximate inference of complex posterior distributions by using a simpler
model. The choice of the approximating model determines a tradeoff between the
complexity of the approximation procedure and the quality of the approximation.
In this paper, we consider variational approximations based on two classes of
models that are richer than standard Bayesian networks, Markov networks or
mixture models. As such, these classes allow us to find better tradeoffs in the
spectrum of approximations. The first class of models are chain graphs, which
capture distributions that are partially directed. The second class of models
are directed graphs (Bayesian networks) with additional latent variables. Both
classes allow representation of multi-variable dependencies that cannot be
easily represented within a Bayesian network.
| [
"['Tal El-Hay' 'Nir Friedman']",
"Tal El-Hay, Nir Friedman"
] |
cs.LG cs.AI stat.ML | null | 1301.2269 | null | null | http://arxiv.org/pdf/1301.2269v1 | 2013-01-10T16:23:30Z | 2013-01-10T16:23:30Z | Learning the Dimensionality of Hidden Variables | A serious problem in learning probabilistic models is the presence of hidden
variables. These variables are not observed, yet interact with several of the
observed variables. Detecting hidden variables poses two problems: determining
the relations to other variables in the model and determining the number of
states of the hidden variable. In this paper, we address the latter problem in
the context of Bayesian networks. We describe an approach that utilizes a
score-based agglomerative state-clustering. As we show, this approach allows us
to efficiently evaluate models with a range of cardinalities for the hidden
variable. We show how to extend this procedure to deal with multiple
interacting hidden variables. We demonstrate the effectiveness of this approach
by evaluating it on synthetic and real-life data. We show that our approach
learns models with hidden variables that generalize better and have better
structure than previous approaches.
| [
"['Gal Elidan' 'Nir Friedman']",
"Gal Elidan, Nir Friedman"
] |
cs.LG cs.AI stat.ML | null | 1301.2270 | null | null | http://arxiv.org/pdf/1301.2270v1 | 2013-01-10T16:23:36Z | 2013-01-10T16:23:36Z | Multivariate Information Bottleneck | The Information bottleneck method is an unsupervised non-parametric data
organization technique. Given a joint distribution P(A,B), this method
constructs a new variable T that extracts partitions, or clusters, over the
values of A that are informative about B. The information bottleneck has
already been applied to document classification, gene expression, neural code,
and spectral analysis. In this paper, we introduce a general principled
framework for multivariate extensions of the information bottleneck method.
This allows us to consider multiple systems of data partitions that are
inter-related. Our approach utilizes Bayesian networks for specifying the
systems of clusters and what information each captures. We show that this
construction provides insight about bottleneck variations and enables us to
characterize solutions of these variations. We also present a general framework
for iterative algorithms for constructing solutions, and apply it to several
examples.
| [
"['Nir Friedman' 'Ori Mosenzon' 'Noam Slonim' 'Naftali Tishby']",
"Nir Friedman, Ori Mosenzon, Noam Slonim, Naftali Tishby"
] |
cs.LG stat.ML | null | 1301.2278 | null | null | http://arxiv.org/pdf/1301.2278v1 | 2013-01-10T16:24:10Z | 2013-01-10T16:24:10Z | Discovering Multiple Constraints that are Frequently Approximately
Satisfied | Some high-dimensional data sets can be modelled by assuming that there are
many different linear constraints, each of which is Frequently Approximately
Satisfied (FAS) by the data. The probability of a data vector under the model
is then proportional to the product of the probabilities of its constraint
violations. We describe three methods of learning products of constraints using
a heavy-tailed probability distribution for the violations.
| [
"Geoffrey E. Hinton, Yee Whye Teh",
"['Geoffrey E. Hinton' 'Yee Whye Teh']"
] |
cs.LG cs.AI stat.ML | null | 1301.2280 | null | null | http://arxiv.org/pdf/1301.2280v1 | 2013-01-10T16:24:19Z | 2013-01-10T16:24:19Z | Estimating Well-Performing Bayesian Networks using Bernoulli Mixtures | A novel method for estimating Bayesian network (BN) parameters from data is
presented which provides improved performance on test data. Previous research
has shown the value of representing conditional probability distributions
(CPDs) via neural networks (Neal 1992), noisy-OR gates (Neal 1992, Diez 1993) and
decision trees (Friedman and Goldszmidt 1996). The Bernoulli mixture network
(BMN) explicitly represents the CPDs of discrete BN nodes as mixtures of local
distributions, each having a different set of parents. This increases the space
of possible structures which can be considered, enabling the CPDs to have
finer-grained dependencies. The resulting estimation procedure induces a
model that is better able to emulate the underlying interactions occurring in
the data than conventional conditional Bernoulli network models. The results for
artificially generated data indicate that overfitting is best reduced by
restricting the complexity of candidate mixture substructures local to each
node. Furthermore, mixtures of very simple substructures can perform almost as
well as more complex ones. The BMN is also applied to data collected from an
online adventure game with an application to keyhole plan recognition. The
results show that the BMN-based model brings a dramatic improvement in
performance over a conventional BN model.
| [
"Geoff A. Jarrad",
"['Geoff A. Jarrad']"
] |
cs.LG cs.AI stat.ML | null | 1301.2283 | null | null | http://arxiv.org/pdf/1301.2283v1 | 2013-01-10T16:24:32Z | 2013-01-10T16:24:32Z | Improved learning of Bayesian networks | The search space of Bayesian Network structures is usually defined as Acyclic
Directed Graphs (DAGs) and the search is done by local transformations of DAGs.
But the space of Bayesian Networks is ordered by DAG Markov model inclusion and
it is natural to consider that a good search policy should take this into
account. The first attempt to do this (Chickering 1996) used equivalence
classes of DAGs instead of DAGs themselves. This approach produces better results
but it is significantly slower. We present a compromise between these two
approaches. It uses DAGs to search the space in such a way that the ordering by
inclusion is taken into account. This is achieved by repetitive usage of local
moves within the equivalence class of DAGs. We show that this new approach
produces better results than the original DAGs approach without substantial
change in time complexity. We present empirical results, within the framework
of heuristic search and Markov Chain Monte Carlo, provided through the Alarm
dataset.
| [
"['Tomas Kocka' 'Robert Castelo']",
"Tomas Kocka, Robert Castelo"
] |
cs.LG stat.ML | null | 1301.2284 | null | null | http://arxiv.org/pdf/1301.2284v1 | 2013-01-10T16:24:36Z | 2013-01-10T16:24:36Z | Classifier Learning with Supervised Marginal Likelihood | It has been argued that in supervised classification tasks, in practice it
may be more sensible to perform model selection with respect to some more
focused model selection score, like the supervised (conditional) marginal
likelihood, than with respect to the standard marginal likelihood criterion.
However, for most Bayesian network models, computing the supervised marginal
likelihood score takes exponential time with respect to the amount of observed
data. In this paper, we consider diagnostic Bayesian network classifiers where
the significant model parameters represent conditional distributions for the
class variable, given the values of the predictor variables, in which case the
supervised marginal likelihood can be computed in linear time with respect to
the data. As the number of model parameters grows in this case exponentially
with respect to the number of predictors, we focus on simple diagnostic models
where the number of relevant predictors is small, and suggest two approaches
for applying this type of models in classification. The first approach is based
on mixtures of simple diagnostic models, while in the second approach we apply
the small predictor sets of the simple diagnostic models for augmenting the
Naive Bayes classifier.
| [
"Petri Kontkanen, Petri Myllymaki, Henry Tirri",
"['Petri Kontkanen' 'Petri Myllymaki' 'Henry Tirri']"
] |
cs.LG stat.ML | null | 1301.2286 | null | null | http://arxiv.org/pdf/1301.2286v1 | 2013-01-10T16:24:45Z | 2013-01-10T16:24:45Z | Iterative Markov Chain Monte Carlo Computation of Reference Priors and
Minimax Risk | We present an iterative Markov chain Monte Carlo algorithm for
computing reference priors and minimax risk for general parametric families.
Our approach uses MCMC techniques based on the Blahut-Arimoto algorithm for
computing channel capacity in information theory. We give a statistical
analysis of the algorithm, bounding the number of samples required for the
stochastic algorithm to closely approximate the deterministic algorithm in each
iteration. Simulations are presented for several examples from exponential
families. Although we focus on applications to reference priors and minimax
risk, the methods and analysis we develop are applicable to a much broader
class of optimization problems and iterative algorithms.
| [
"John Lafferty, Larry A. Wasserman",
"['John Lafferty' 'Larry A. Wasserman']"
] |
cs.AI cs.LG | null | 1301.2292 | null | null | http://arxiv.org/pdf/1301.2292v1 | 2013-01-10T16:25:12Z | 2013-01-10T16:25:12Z | A Bayesian Multiresolution Independence Test for Continuous Variables | In this paper we present a method of computing the posterior probability
of conditional independence of two or more continuous variables from data,
examined at several resolutions. Our approach is motivated by the observation
that the appearance of continuous data varies widely at various resolutions,
producing very different independence estimates between the variables involved.
Therefore, it is difficult to ascertain independence without examining data at
several carefully selected resolutions. In our paper, we accomplish this using
the exact computation of the posterior probability of independence, calculated
analytically given a resolution. At each examined resolution, we assume a
multinomial distribution with Dirichlet priors for the discretized table
parameters, and compute the posterior using Bayesian integration. Across
resolutions, we use a search procedure to approximate the Bayesian integral of
probability over an exponential number of possible histograms. Our method
generalizes to an arbitrary number of variables in a straightforward manner.
The test is suitable for Bayesian network learning algorithms that use
independence tests to infer the network structure, in domains that contain any
mix of continuous, ordinal and categorical variables.
| [
"Dimitris Margaritis, Sebastian Thrun",
"['Dimitris Margaritis' 'Sebastian Thrun']"
] |
cs.AI cs.LG | null | 1301.2294 | null | null | http://arxiv.org/pdf/1301.2294v1 | 2013-01-10T16:25:20Z | 2013-01-10T16:25:20Z | Expectation Propagation for approximate Bayesian inference | This paper presents a new deterministic approximation technique in Bayesian
networks. This method, "Expectation Propagation", unifies two previous
techniques: assumed-density filtering, an extension of the Kalman filter, and
loopy belief propagation, an extension of belief propagation in Bayesian
networks. All three algorithms try to recover an approximate distribution which
is close in KL divergence to the true distribution. Loopy belief propagation,
because it propagates exact belief states, is useful for a limited class of
belief networks, such as those which are purely discrete. Expectation
Propagation approximates the belief states by only retaining certain
expectations, such as mean and variance, and iterates until these expectations
are consistent throughout the network. This makes it applicable to hybrid
networks with discrete and continuous nodes. Expectation Propagation also
extends belief propagation in the opposite direction - it can propagate richer
belief states that incorporate correlations between nodes. Experiments with
Gaussian mixture models show Expectation Propagation to be convincingly better
than methods with similar computational cost: Laplace's method, variational
Bayes, and Monte Carlo. Expectation Propagation also provides an efficient
algorithm for training Bayes point machine classifiers.
| [
"['Thomas P. Minka']",
"Thomas P. Minka"
] |
cs.IR cs.LG stat.ML | null | 1301.2303 | null | null | http://arxiv.org/pdf/1301.2303v1 | 2013-01-10T16:25:59Z | 2013-01-10T16:25:59Z | Probabilistic Models for Unified Collaborative and Content-Based
Recommendation in Sparse-Data Environments | Recommender systems leverage product and community information to target
products to consumers. Researchers have developed collaborative recommenders,
content-based recommenders, and (largely ad-hoc) hybrid systems. We propose a
unified probabilistic framework for merging collaborative and content-based
recommendations. We extend Hofmann's [1999] aspect model to incorporate
three-way co-occurrence data among users, items, and item content. The relative
influence of collaboration data versus content data is not imposed as an
exogenous parameter, but rather emerges naturally from the given data sources.
Global probabilistic models coupled with standard Expectation Maximization (EM)
learning algorithms tend to drastically overfit in sparse-data situations, as
is typical in recommendation applications. We show that secondary content
information can often be used to overcome sparsity. Experiments on data from
the ResearchIndex library of Computer Science publications show that
appropriate mixture models incorporating secondary data produce significantly
better quality recommenders than k-nearest neighbors (k-NN). Global
probabilistic models also allow more general inferences than local methods like
k-NN.
| [
"['Alexandrin Popescul' 'Lyle H. Ungar' 'David M Pennock' 'Steve Lawrence']",
"Alexandrin Popescul, Lyle H. Ungar, David M Pennock, Steve Lawrence"
] |
cs.IR cs.LG | null | 1301.2309 | null | null | http://arxiv.org/pdf/1301.2309v1 | 2013-01-10T16:26:26Z | 2013-01-10T16:26:26Z | Symmetric Collaborative Filtering Using the Noisy Sensor Model | Collaborative filtering is the process of making recommendations regarding
the potential preference of a user, for example shopping on the Internet, based
on the preference ratings of the user and a number of other users for various
items. This paper considers collaborative filtering based on
explicit multi-valued ratings. To evaluate the algorithms, we consider only
{\em pure} collaborative filtering, using ratings exclusively, and no other
information about the people or items. Our approach is to predict a user's
preferences regarding a particular item by using other people who rated that
item and other items rated by the user as noisy sensors. The noisy sensor model
uses Bayes' theorem to compute the probability distribution for the user's
rating of a new item. We give two variant models: in one, we learn a {\em
classical normal linear regression} model of how users rate items; in another,
we assume different users rate items the same, but the accuracy of the sensors
needs to be learned. We compare these variant models with state-of-the-art
techniques and show how they are significantly better, whether a user has rated
only two items or many. We report empirical results using the EachMovie
database \footnote{http://research.compaq.com/SRC/eachmovie/} of movie ratings.
We also show that by considering items similarity along with the users
similarity, the accuracy of the prediction increases.
| [
"['Rita Sharma' 'David L Poole']",
"Rita Sharma, David L Poole"
] |
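The noisy-sensor idea in the abstract above can be illustrated with a minimal sketch: assuming each "sensor" (another user's rating of the item) reads the true rating corrupted by independent Gaussian noise of known variance, and assuming a flat prior, Bayes' theorem reduces to a precision-weighted average. This illustrates only the combination rule; the paper's regression and equal-rating variants, and how the noise variances are learned, are not reproduced here. The function name and example numbers are made up.

```python
import numpy as np

def noisy_sensor_posterior(readings, variances):
    # Each reading = true rating + independent Gaussian noise with the given
    # variance.  With a flat prior, the posterior over the rating is Gaussian
    # with a precision-weighted mean and the combined precision as 1/variance.
    readings = np.asarray(readings, dtype=float)
    precisions = 1.0 / np.asarray(variances, dtype=float)
    post_var = 1.0 / precisions.sum()
    post_mean = post_var * float(precisions @ readings)
    return post_mean, post_var

# Three other users rated the item 4, 5 and 3; the first two are assumed
# to be more reliable (lower noise variance) than the third.
mean, var = noisy_sensor_posterior([4.0, 5.0, 3.0], [0.5, 0.5, 2.0])
print(round(mean, 2), round(var, 2))   # -> 4.33 0.22
```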
cs.AI cs.LG | null | 1301.2310 | null | null | http://arxiv.org/pdf/1301.2310v1 | 2013-01-10T16:26:30Z | 2013-01-10T16:26:30Z | Policy Improvement for POMDPs Using Normalized Importance Sampling | We present a new method for estimating the expected return of a POMDP from
experience. The method does not assume any knowledge of the POMDP and allows
the experience to be gathered from an arbitrary sequence of policies. The
return is estimated for any new policy of the POMDP. We motivate the estimator
from function-approximation and importance sampling points-of-view and derive
its theoretical properties. Although the estimator is biased, it has low
variance and the bias is often irrelevant when the estimator is used for
pair-wise comparisons. We conclude by extending the estimator to policies with
memory and compare its performance in a greedy search algorithm to REINFORCE
algorithms showing an order of magnitude reduction in the number of trials
required.
| [
"Christian R. Shelton",
"['Christian R. Shelton']"
] |
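A hedged sketch of the estimator family the abstract above draws on: standard self-normalized (weighted) importance sampling of trajectory returns, where each trajectory is reweighted by the ratio of action probabilities under the evaluated policy and the behaviour policy. The trajectory format, field names, and the `new_policy_prob` callable are illustrative assumptions; the paper's memory-policy extension and function-approximation derivation are not shown.

```python
import numpy as np

def normalized_is_return(trajectories, new_policy_prob):
    """Self-normalized importance-sampling estimate of a policy's return.

    trajectories: list of dicts with keys 'obs', 'acts', 'beh_prob'
    (behaviour-policy probabilities of the taken actions) and 'return'.
    new_policy_prob(obs, act) -> probability the evaluated policy assigns
    to `act` given `obs`.
    """
    weights, returns = [], []
    for traj in trajectories:
        w = 1.0
        for obs, act, b in zip(traj['obs'], traj['acts'], traj['beh_prob']):
            w *= new_policy_prob(obs, act) / b
        weights.append(w)
        returns.append(traj['return'])
    weights = np.asarray(weights)
    returns = np.asarray(returns)
    # Normalizing by the sum of weights (rather than the number of
    # trajectories) introduces a bias but typically reduces variance.
    return float(np.dot(weights, returns) / weights.sum())

# Tiny illustration: two logged trajectories and a policy that always
# picks action 1 with probability 0.8.
trajs = [
    {'obs': [0, 0], 'acts': [1, 0], 'beh_prob': [0.5, 0.5], 'return': 1.0},
    {'obs': [0, 0], 'acts': [1, 1], 'beh_prob': [0.5, 0.5], 'return': 3.0},
]
pi = lambda obs, act: 0.8 if act == 1 else 0.2
print(normalized_is_return(trajs, pi))   # -> 2.6
```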
cs.LG cs.AI stat.ML | null | 1301.2311 | null | null | http://arxiv.org/pdf/1301.2311v1 | 2013-01-10T16:26:35Z | 2013-01-10T16:26:35Z | Maximum Likelihood Bounded Tree-Width Markov Networks | Chow and Liu (1968) studied the problem of learning a maximum likelihood
Markov tree. We generalize their work to more complex Markov networks by
considering the problem of learning a maximum likelihood Markov network of
bounded complexity. We discuss how tree-width is in many ways the appropriate
measure of complexity and thus analyze the problem of learning a maximum
likelihood Markov network of bounded tree-width. Similar to the work of Chow and
Liu, we are able to formalize the learning problem as a combinatorial
optimization problem on graphs. We show that learning a maximum likelihood
Markov network of bounded tree-width is equivalent to finding a maximum weight
hypertree. This equivalence gives rise to global, integer-programming
based, approximation algorithms with provable performance guarantees, for
the learning problem. This contrasts with heuristic local-search algorithms which
were previously suggested (e.g. by Malvestuto 1991). The equivalence also allows
us to study the computational hardness of the learning problem. We show that
learning a maximum likelihood Markov network of bounded tree-width is NP-hard,
and discuss the hardness of approximation.
| [
"['Nathan Srebro']",
"Nathan Srebro"
] |
cs.LG cs.AI stat.ML | null | 1301.2315 | null | null | http://arxiv.org/pdf/1301.2315v1 | 2013-01-10T16:26:53Z | 2013-01-10T16:26:53Z | The Optimal Reward Baseline for Gradient-Based Reinforcement Learning | There exist a number of reinforcement learning algorithms which learn by
climbing the gradient of expected reward. Their long-run convergence has been
proved, even in partially observable environments with non-deterministic
actions, and without the need for a system model. However, the variance of the
gradient estimator has been found to be a significant practical problem. Recent
approaches have discounted future rewards, introducing a bias-variance
trade-off into the gradient estimate. We incorporate a reward baseline into
the learning system, and show that it affects variance without
introducing further bias. In particular, as we approach the
zero-bias, high-variance parameterization, the optimal (or variance
minimizing) constant reward baseline is equal to the long-term average
expected reward. Modified policy-gradient algorithms are presented, and a number
of experiments demonstrate their improvement over previous work.
| [
"Lex Weaver, Nigel Tao",
"['Lex Weaver' 'Nigel Tao']"
] |
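A toy sketch of the idea in the abstract above, using a bandit instead of a full partially observable environment: a softmax policy is updated by REINFORCE with a constant baseline tracked as a running average of the reward, which the abstract identifies as (approximately) the variance-minimizing constant baseline. The learning rates and the problem instance are made-up illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 3-armed bandit with Bernoulli rewards; softmax policy with one
# parameter per arm.  The baseline is a running estimate of the
# average reward.
true_means = np.array([0.2, 0.5, 0.8])
theta = np.zeros(3)
baseline, lr, beta = 0.0, 0.1, 0.05

for step in range(5000):
    probs = np.exp(theta - theta.max())
    probs /= probs.sum()
    a = rng.choice(3, p=probs)
    r = float(rng.random() < true_means[a])

    # grad of log pi(a) for a softmax policy: indicator(a) - probs
    grad_log = -probs
    grad_log[a] += 1.0

    theta += lr * (r - baseline) * grad_log      # baselined REINFORCE step
    baseline += beta * (r - baseline)            # running average reward

print(np.argmax(theta))   # usually arm 2, the highest-mean arm
```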
cs.LG stat.ML | null | 1301.2316 | null | null | http://arxiv.org/pdf/1301.2316v1 | 2013-01-10T16:26:57Z | 2013-01-10T16:26:57Z | Cross-covariance modelling via DAGs with hidden variables | DAG models with hidden variables present many difficulties that are not
present when all nodes are observed. In particular, fully observed DAG models
are identified and correspond to well-defined sets of distributions, whereas
this is not true if nodes are unobserved. In this paper we characterize exactly
the set of distributions given by a class of one-dimensional Gaussian latent
variable models. These models relate two blocks of observed variables, modeling
only the cross-covariance matrix. We describe the relation of this model to the
singular value decomposition of the cross-covariance matrix. We show that,
although the model is underidentified, useful information may be extracted. We
further consider an alternative parametrization in which one latent variable is
associated with each block. Our analysis leads to some novel covariance
equivalence results for Gaussian hidden variable models.
| [
"Jacob A. Wegelin, Thomas S. Richardson",
"['Jacob A. Wegelin' 'Thomas S. Richardson']"
] |
cs.AI cs.LG | null | 1301.2317 | null | null | http://arxiv.org/pdf/1301.2317v1 | 2013-01-10T16:27:02Z | 2013-01-10T16:27:02Z | Belief Optimization for Binary Networks: A Stable Alternative to Loopy
Belief Propagation | We present a novel inference algorithm for arbitrary, binary, undirected
graphs. Unlike loopy belief propagation, which iterates fixed point equations,
we directly descend on the Bethe free energy. The algorithm consists of two
phases: first, we update the pairwise probabilities, given the marginal
probabilities at each unit, using an analytic expression. Next, we update the
marginal probabilities, given the pairwise probabilities by following the
negative gradient of the Bethe free energy. Both steps are guaranteed to
decrease the Bethe free energy, and since it is lower bounded, the algorithm is
guaranteed to converge to a local minimum. We also show that the Bethe free
energy is equal to the TAP free energy up to second order in the weights. In
experiments we confirm that when belief propagation converges it usually finds
identical solutions as our belief optimization method. However, in cases where
belief propagation fails to converge, belief optimization continues to converge
to reasonable beliefs. The stable nature of belief optimization makes it
ideally suited for learning graphical models from data.
| [
"Max Welling, Yee Whye Teh",
"['Max Welling' 'Yee Whye Teh']"
] |
cs.LG cs.AI stat.ML | null | 1301.2318 | null | null | http://arxiv.org/pdf/1301.2318v1 | 2013-01-10T16:27:07Z | 2013-01-10T16:27:07Z | Statistical Modeling in Continuous Speech Recognition (CSR)(Invited
Talk) | Automatic continuous speech recognition (CSR) is sufficiently mature that a
variety of real world applications are now possible including large vocabulary
transcription and interactive spoken dialogues. This paper reviews the
evolution of the statistical modelling techniques which underlie current-day
systems, specifically hidden Markov models (HMMs) and N-grams. Starting from a
description of the speech signal and its parameterisation, the various
modelling assumptions and their consequences are discussed. It then describes
various techniques by which the effects of these assumptions can be mitigated.
Despite the progress that has been made, the limitations of current modelling
techniques are still evident. The paper therefore concludes with a brief review
of some of the more fundamental modelling work now in progress.
| [
"Steve Young",
"['Steve Young']"
] |
cs.IR cs.AI cs.LG | null | 1301.2320 | null | null | http://arxiv.org/pdf/1301.2320v1 | 2013-01-10T16:27:15Z | 2013-01-10T16:27:15Z | Using Temporal Data for Making Recommendations | We treat collaborative filtering as a univariate time series estimation
problem: given a user's previous votes, predict the next vote. We describe two
families of methods for transforming data to encode time order in ways amenable
to off-the-shelf classification and density estimation tools, and examine the
results of using these approaches on several real-world data sets. The
improvements in predictive accuracy we realize recommend the use of other
predictive algorithms that exploit the temporal order of data.
| [
"Andrew Zimdars, David Maxwell Chickering, Christopher Meek",
"['Andrew Zimdars' 'David Maxwell Chickering' 'Christopher Meek']"
] |
cs.AI cs.LG | null | 1301.2343 | null | null | http://arxiv.org/pdf/1301.2343v1 | 2013-01-10T21:54:42Z | 2013-01-10T21:54:42Z | Planning by Prioritized Sweeping with Small Backups | Efficient planning plays a crucial role in model-based reinforcement
learning. Traditionally, the main planning operation is a full backup based on
the current estimates of the successor states. Consequently, its computation
time is proportional to the number of successor states. In this paper, we
introduce a new planning backup that uses only the current value of a single
successor state and has a computation time independent of the number of
successor states. This new backup, which we call a small backup, opens the door
to a new class of model-based reinforcement learning methods that exhibit much
finer control over their planning process than traditional methods. We
empirically demonstrate that this increased flexibility allows for more
efficient planning by showing that an implementation of prioritized sweeping
based on small backups achieves a substantial performance improvement over
classical implementations.
| [
"['Harm van Seijen' 'Richard S. Sutton']",
"Harm van Seijen and Richard S. Sutton"
] |
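A simplified sketch of the contrast the abstract above describes, assuming a known transition model: a full backup recomputes a state's value from all successors, while a small backup touches a single successor and propagates only the change in that successor's value, using a cached copy of the value it last contributed. This is an illustration of the incremental principle only, not the paper's exact algorithm or its prioritized-sweeping bookkeeping; the toy model below is invented.

```python
# Known transition model P, expected immediate rewards R, values V.
# U[s][s2] caches the successor value that V[s] currently reflects.
gamma = 0.9
P = {0: {1: 0.7, 2: 0.3}, 1: {2: 1.0}, 2: {}}
R = {0: 0.0, 1: 1.0, 2: 0.0}
V = {s: 0.0 for s in P}
U = {s: {s2: 0.0 for s2 in succ} for s, succ in P.items()}

def full_backup(s):
    # Cost proportional to the number of successors of s.
    V[s] = R[s] + gamma * sum(p * V[s2] for s2, p in P[s].items())
    for s2 in P[s]:
        U[s][s2] = V[s2]

def small_backup(s, s2):
    # Cost independent of the number of successors of s: only the change
    # in V[s2] since the last time it was used is propagated into V[s].
    V[s] += gamma * P[s][s2] * (V[s2] - U[s][s2])
    U[s][s2] = V[s2]

full_backup(1)          # V[1] becomes 1.0
small_backup(0, 1)      # propagate only the change coming from state 1
print(V[0])             # approx 0.63 = R[0] + 0.9 * 0.7 * V[1]
```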
cs.LG cs.IT math.IT math.OC math.ST stat.ML stat.TH | 10.1214/13-AOS1199 | 1301.2603 | null | null | http://arxiv.org/abs/1301.2603v3 | 2014-05-23T13:19:54Z | 2013-01-11T21:05:23Z | Robust subspace clustering | Subspace clustering refers to the task of finding a multi-subspace
representation that best fits a collection of points taken from a
high-dimensional space. This paper introduces an algorithm inspired by sparse
subspace clustering (SSC) [In IEEE Conference on Computer Vision and Pattern
Recognition, CVPR (2009) 2790-2797] to cluster noisy data, and develops some
novel theory demonstrating its correctness. In particular, the theory uses
ideas from geometric functional analysis to show that the algorithm can
accurately recover the underlying subspaces under minimal requirements on their
orientation, and on the number of samples per subspace. Synthetic as well as
real data experiments complement our theoretical study, illustrating our
approach and demonstrating its effectiveness.
| [
"Mahdi Soltanolkotabi, Ehsan Elhamifar, Emmanuel J. Cand\\`es",
"['Mahdi Soltanolkotabi' 'Ehsan Elhamifar' 'Emmanuel J. Candès']"
] |
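A rough sketch of the sparse-subspace-clustering recipe the abstract above builds on: every point is regressed, with an L1 penalty, onto the remaining points, and the sparse coefficients are symmetrized into an affinity matrix for spectral clustering. The regularization value and the synthetic two-line example are illustrative assumptions; the paper's noise analysis and tuning are not reproduced.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def ssc_like_clustering(X, n_clusters, lam=0.01):
    # X: (n_samples, n_features).  Express each point as a sparse
    # combination of the other points, then spectrally cluster the
    # symmetrized coefficient magnitudes.
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        A = X[idx].T                      # columns are the other points
        coef = Lasso(alpha=lam, fit_intercept=False,
                     max_iter=5000).fit(A, X[i]).coef_
        C[i, idx] = coef
    W = np.abs(C) + np.abs(C).T           # symmetric affinity matrix
    return SpectralClustering(n_clusters=n_clusters,
                              affinity='precomputed').fit_predict(W)

# Two noisy 1-D subspaces (lines) in 3-D:
rng = np.random.default_rng(0)
t = np.linspace(0.5, 2.0, 30)
d1, d2 = np.array([1., 0., 0.]), np.array([0., 1., 1.])
X = np.vstack([np.outer(t, d1), np.outer(t, d2)]) \
    + 0.01 * rng.normal(size=(60, 3))
print(ssc_like_clustering(X, n_clusters=2))
```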
cs.LG | null | 1301.2609 | null | null | http://arxiv.org/pdf/1301.2609v5 | 2014-02-03T06:57:28Z | 2013-01-11T21:24:11Z | Learning to Optimize Via Posterior Sampling | This paper considers the use of a simple posterior sampling algorithm to
balance between exploration and exploitation when learning to optimize actions
such as in multi-armed bandit problems. The algorithm, also known as Thompson
Sampling, offers significant advantages over the popular upper confidence bound
(UCB) approach, and can be applied to problems with finite or infinite action
spaces and complicated relationships among action rewards. We make two
theoretical contributions. The first establishes a connection between posterior
sampling and UCB algorithms. This result lets us convert regret bounds
developed for UCB algorithms into Bayesian regret bounds for posterior
sampling. Our second theoretical contribution is a Bayesian regret bound for
posterior sampling that applies broadly and can be specialized to many model
classes. This bound depends on a new notion we refer to as the eluder
dimension, which measures the degree of dependence among action rewards.
Compared to Bayesian regret bounds developed for UCB algorithms on specific model classes,
our general bound matches the best available for linear models and is stronger
than the best available for generalized linear models. Further, our analysis
provides insight into performance advantages of posterior sampling, which are
highlighted through simulation results that demonstrate performance surpassing
recently proposed UCB algorithms.
| [
"['Daniel Russo' 'Benjamin Van Roy']",
"Daniel Russo and Benjamin Van Roy"
] |
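For concreteness, a standard special case of the posterior sampling scheme analysed in the abstract above: Thompson sampling for a Bernoulli bandit with independent Beta priors. The arm means, horizon, and prior are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample a mean-reward vector from the posterior, act greedily with
# respect to the sample, then update the posterior with the observed reward.
true_means = np.array([0.3, 0.5, 0.7])
successes = np.ones(3)   # Beta(1, 1) priors
failures = np.ones(3)

for t in range(2000):
    sampled = rng.beta(successes, failures)   # one posterior draw per arm
    arm = int(np.argmax(sampled))
    reward = float(rng.random() < true_means[arm])
    successes[arm] += reward
    failures[arm] += 1.0 - reward

print(successes + failures - 2)   # pull counts; most pulls go to arm 2
```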
cs.CV cs.IR cs.LG | 10.1109/TPAMI.2013.182 | 1301.2628 | null | null | http://arxiv.org/abs/1301.2628v3 | 2013-06-02T16:27:49Z | 2013-01-11T23:08:15Z | Robust Text Detection in Natural Scene Images | Text detection in natural scene images is an important prerequisite for many
content-based image analysis tasks. In this paper, we propose an accurate and
robust method for detecting texts in natural scene images. A fast and effective
pruning algorithm is designed to extract Maximally Stable Extremal Regions
(MSERs) as character candidates using the strategy of minimizing regularized
variations. Character candidates are grouped into text candidates by the
single-link clustering algorithm, where the distance weights and threshold of the
clustering algorithm are learned automatically by a novel self-training
distance metric learning algorithm. The posterior probabilities of text
candidates corresponding to non-text are estimated with a character
classifier; text candidates with high probabilities are then eliminated and
finally texts are identified with a text classifier. The proposed system is
evaluated on the ICDAR 2011 Robust Reading Competition dataset; the f measure
is over 76% and is significantly better than the state-of-the-art performance
of 71%. Experimental results on a publicly available multilingual dataset also
show that our proposed method can outperform the other competitive method with
an f measure increase of over 9 percent. Finally, we have set up an online demo
of our proposed scene text detection system at
http://kems.ustb.edu.cn/learning/yin/dtext.
| [
"Xu-Cheng Yin, Xuwang Yin, Kaizhu Huang, Hong-Wei Hao",
"['Xu-Cheng Yin' 'Xuwang Yin' 'Kaizhu Huang' 'Hong-Wei Hao']"
] |
cs.LG stat.ML | null | 1301.2655 | null | null | http://arxiv.org/pdf/1301.2655v1 | 2013-01-12T07:46:24Z | 2013-01-12T07:46:24Z | Functional Regularized Least Squares Classification with Operator-valued
Kernels | Although operator-valued kernels have recently received increasing interest
in various machine learning and functional data analysis problems such as
multi-task learning or functional regression, little attention has been paid to
the understanding of their associated feature spaces. In this paper, we explore
the potential of adopting an operator-valued kernel feature space perspective
for the analysis of functional data. We then extend the Regularized Least
Squares Classification (RLSC) algorithm to cover situations where there are
multiple functions per observation. Experiments on a sound recognition problem
show that the proposed method outperforms the classical RLSC algorithm.
| [
"['Hachem Kadri' 'Asma Rabaoui' 'Philippe Preux' 'Emmanuel Duflos'\n 'Alain Rakotomamonjy']",
"Hachem Kadri (INRIA Lille - Nord Europe), Asma Rabaoui (IMS), Philippe\n Preux (INRIA Lille - Nord Europe, LIFL), Emmanuel Duflos (INRIA Lille - Nord\n Europe, LAGIS), Alain Rakotomamonjy (LITIS)"
] |
stat.ML cs.LG | null | 1301.2656 | null | null | http://arxiv.org/pdf/1301.2656v1 | 2013-01-12T07:46:56Z | 2013-01-12T07:46:56Z | Multiple functional regression with both discrete and continuous
covariates | In this paper we present a nonparametric method for extending functional
regression methodology to the situation where more than one functional
covariate is used to predict a functional response. Borrowing the idea from
Kadri et al. (2010a), the method, which supports mixed discrete and continuous
explanatory variables, is based on estimating a function-valued function in
reproducing kernel Hilbert spaces by virtue of positive operator-valued
kernels.
| [
"Hachem Kadri (INRIA Lille - Nord Europe), Philippe Preux (INRIA Lille\n - Nord Europe, LIFL), Emmanuel Duflos (INRIA Lille - Nord Europe, LAGIS),\n St\\'ephane Canu (LITIS)",
"['Hachem Kadri' 'Philippe Preux' 'Emmanuel Duflos' 'Stéphane Canu']"
] |
cs.LG cs.SI stat.ML | 10.1109/ICDMW.2012.61 | 1301.2659 | null | null | http://arxiv.org/abs/1301.2659v1 | 2013-01-12T07:51:14Z | 2013-01-12T07:51:14Z | A Triclustering Approach for Time Evolving Graphs | This paper introduces a novel technique to track structures in time evolving
graphs. The method is based on a parameter free approach for three-dimensional
co-clustering of the source vertices, the target vertices and the time. All
these features are simultaneously segmented in order to build time segments and
clusters of vertices whose edge distributions are similar and evolve in the
same way over the time segments. The main novelty of this approach lies in that
the time segments are directly inferred from the evolution of the edge
distribution between the vertices, thus not requiring the user to make an a
priori discretization. Experiments conducted on a synthetic dataset illustrate
the good behaviour of the technique, and a study of a real-life dataset shows
the potential of the proposed approach for exploratory data analysis.
| [
"['Romain Guigourès' 'Marc Boullé' 'Fabrice Rossi']",
"Romain Guigour\\`es, Marc Boull\\'e, Fabrice Rossi (SAMM)"
] |
cs.AI cs.LG cs.LO | null | 1301.2683 | null | null | http://arxiv.org/pdf/1301.2683v2 | 2014-05-28T12:54:41Z | 2013-01-12T13:02:21Z | BliStr: The Blind Strategymaker | BliStr is a system that automatically develops strategies for E prover on a
large set of problems. The main idea is to interleave (i) iterated
low-timelimit local search for new strategies on small sets of similar easy
problems with (ii) higher-timelimit evaluation of the new strategies on all
problems. The accumulated results of the global higher-timelimit runs are used
to define and evolve the notion of "similar easy problems", and to control the
selection of the next strategy to be improved. The technique was used to
significantly strengthen the set of E strategies used by the MaLARea, PS-E,
E-MaLeS, and E systems in the CASC@Turing 2012 competition, particularly in the
Mizar division. Similar improvement was obtained on the problems created from
the Flyspeck corpus.
| [
"['Josef Urban']",
"Josef Urban"
] |
stat.ML cs.IT cs.LG math.IT math.ST stat.TH | null | 1301.2725 | null | null | http://arxiv.org/pdf/1301.2725v1 | 2013-01-12T22:39:56Z | 2013-01-12T22:39:56Z | Robust High Dimensional Sparse Regression and Matching Pursuit | We consider high dimensional sparse regression, and develop strategies able
to deal with arbitrary -- possibly, severe or coordinated -- errors in the
covariance matrix $X$. These may come from corrupted data, persistent
experimental errors, or malicious respondents in surveys/recommender systems,
etc. Such non-stochastic error-in-variables problems are notoriously difficult
to treat, and as we demonstrate, the problem is particularly pronounced in
high-dimensional settings where the primary goal is {\em support recovery} of
the sparse regressor. We develop algorithms for support recovery in sparse
regression, when some number $n_1$ out of $n+n_1$ total covariate/response
pairs are {\it arbitrarily (possibly maliciously) corrupted}. We are interested
in understanding how many outliers, $n_1$, we can tolerate, while identifying
the correct support. To the best of our knowledge, neither standard outlier
rejection techniques, nor recently developed robust regression algorithms (that
focus only on corrupted response variables), nor recent algorithms for dealing
with stochastic noise or erasures, can provide guarantees on support recovery.
Perhaps surprisingly, we also show that the natural brute force algorithm that
searches over all subsets of $n$ covariate/response pairs, and all subsets of
possible support coordinates in order to minimize regression error, is
remarkably poor, unable to correctly identify the support with even $n_1 =
O(n/k)$ corrupted points, where $k$ is the sparsity. This is true even in the
basic setting we consider, where all authentic measurements and noise are
independent and sub-Gaussian. In this setting, we provide a simple algorithm --
no more computationally taxing than OMP -- that gives stronger performance
guarantees, recovering the support with up to $n_1 = O(n/(\sqrt{k} \log p))$
corrupted points, where $p$ is the dimension of the signal to be recovered.
| [
"['Yudong Chen' 'Constantine Caramanis' 'Shie Mannor']",
"Yudong Chen, Constantine Caramanis, Shie Mannor"
] |
cs.IR cs.LG | null | 1301.2785 | null | null | http://arxiv.org/pdf/1301.2785v1 | 2013-01-13T15:58:09Z | 2013-01-13T15:58:09Z | A comparison of SVM and RVM for Document Classification | Document classification is a task of assigning a new unclassified document to
one of a predefined set of classes. Content-based document classification
uses the content of the document with some weighting criteria to assign it to
one of the predefined classes. It is a major task in library science,
electronic document management systems and information sciences. This paper
investigates document classification by using two different classification
techniques (1) Support Vector Machine (SVM) and (2) Relevance Vector Machine
(RVM). SVM is a supervised machine learning technique that can be used for
classification tasks. In its basic form, SVM represents the instances of the
data into space and tries to separate the distinct classes by a maximum
possible wide gap (hyper plane) that separates the classes. On the other hand
RVM uses a probabilistic measure to define this separation space. RVM uses
Bayesian inference to obtain a succinct solution; thus RVM uses significantly
fewer basis functions. Experimental studies on three standard text
classification datasets reveal that although RVM takes more training time, its
classification accuracy is much better than that of SVM.
| [
"['Muhammad Rafi' 'Mohammad Shahid Shaikh']",
"Muhammad Rafi, Mohammad Shahid Shaikh"
] |
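A minimal sketch of the SVM side of the comparison above, using scikit-learn's TF-IDF features and a linear SVM. The toy documents and labels are invented for illustration and are not the datasets used in the paper; the RVM side is not shown here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# TF-IDF features feeding a linear SVM, the standard content-based setup.
docs = ["the goalkeeper saved the penalty",
        "the striker scored a late goal",
        "parliament passed the new budget",
        "the senate debated the tax bill"]
labels = ["sport", "sport", "politics", "politics"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(docs, labels)
print(clf.predict(["the minister proposed a budget amendment"]))
```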
cs.CV cs.LG stat.ML | null | 1301.2840 | null | null | http://arxiv.org/pdf/1301.2840v4 | 2013-04-25T14:26:04Z | 2013-01-14T01:34:17Z | Unsupervised Feature Learning for low-level Local Image Descriptors | Unsupervised feature learning has shown impressive results for a wide range
of input modalities, in particular for object classification tasks in computer
vision. Using a large amount of unlabeled data, unsupervised feature learning
methods are utilized to construct high-level representations that are
discriminative enough for subsequently trained supervised classification
algorithms. However, it has never been \emph{quantitatively} investigated yet
how well unsupervised learning methods can find \emph{low-level
representations} for image patches without any additional supervision. In this
paper we examine the performance of pure unsupervised methods on a low-level
correspondence task, a problem that is central to many Computer Vision
applications. We find that a special type of Restricted Boltzmann Machines
(RBMs) performs comparably to hand-crafted descriptors. Additionally, a simple
binarization scheme produces compact representations that perform better than
several state-of-the-art descriptors.
| [
"Christian Osendorfer and Justin Bayer and Sebastian Urban and Patrick\n van der Smagt",
"['Christian Osendorfer' 'Justin Bayer' 'Sebastian Urban'\n 'Patrick van der Smagt']"
] |
cs.LG stat.ML | null | 1301.3192 | null | null | http://arxiv.org/pdf/1301.3192v1 | 2013-01-15T00:54:38Z | 2013-01-15T00:54:38Z | Matrix Approximation under Local Low-Rank Assumption | Matrix approximation is a common tool in machine learning for building
accurate prediction models for recommendation systems, text mining, and
computer vision. A prevalent assumption in constructing matrix approximations
is that the partially observed matrix is of low-rank. We propose a new matrix
approximation model where we assume instead that the matrix is only locally of
low-rank, leading to a representation of the observed matrix as a weighted sum
of low-rank matrices. We analyze the accuracy of the proposed local low-rank
modeling. Our experiments show improvements in prediction accuracy in
recommendation tasks.
| [
"Joonseok Lee, Seungyeon Kim, Guy Lebanon, Yoram Singer",
"['Joonseok Lee' 'Seungyeon Kim' 'Guy Lebanon' 'Yoram Singer']"
] |
cs.LG cs.CV | 10.1109/TPAMI.2013.31 | 1301.3193 | null | null | http://arxiv.org/abs/1301.3193v1 | 2013-01-15T01:07:14Z | 2013-01-15T01:07:14Z | Learning Graphical Model Parameters with Approximate Marginal Inference | Likelihood-based learning of graphical models faces challenges of
computational complexity and robustness to model mis-specification. This paper
studies methods that fit parameters directly to maximize a measure of the
accuracy of predicted marginals, taking into account both model and inference
approximations at training time. Experiments on imaging problems suggest
marginalization-based learning performs better than likelihood-based
approximations on difficult problems where the model being fit is approximate
in nature.
| [
"Justin Domke",
"['Justin Domke']"
] |
cs.LG | null | 1301.3224 | null | null | http://arxiv.org/pdf/1301.3224v5 | 2013-04-09T01:10:49Z | 2013-01-15T04:39:32Z | Efficient Learning of Domain-invariant Image Representations | We present an algorithm that learns representations which explicitly
compensate for domain mismatch and which can be efficiently realized as linear
classifiers. Specifically, we form a linear transformation that maps features
from the target (test) domain to the source (training) domain as part of
training the classifier. We optimize both the transformation and classifier
parameters jointly, and introduce an efficient cost function based on
misclassification loss. Our method combines several features previously
unavailable in a single algorithm: multi-class adaptation through
representation learning, ability to map across heterogeneous feature spaces,
and scalability to large datasets. We present experiments on several image
datasets that demonstrate improved accuracy and computational advantages
compared to previous approaches.
| [
"Judy Hoffman, Erik Rodner, Jeff Donahue, Trevor Darrell, Kate Saenko",
"['Judy Hoffman' 'Erik Rodner' 'Jeff Donahue' 'Trevor Darrell'\n 'Kate Saenko']"
] |
cs.LG cs.CL stat.ML | null | 1301.3226 | null | null | http://arxiv.org/pdf/1301.3226v4 | 2013-05-29T21:06:09Z | 2013-01-15T04:52:10Z | The Expressive Power of Word Embeddings | We seek to better understand the difference in quality of the several
publicly released embeddings. We propose several tasks that help to distinguish
the characteristics of different embeddings. Our evaluation of sentiment
polarity and synonym/antonym relations shows that embeddings are able to
capture surprisingly nuanced semantics even in the absence of sentence
structure. Moreover, benchmarking the embeddings shows great variance in
quality and characteristics of the semantics captured by the tested embeddings.
Finally, we show the impact of varying the number of dimensions and the
resolution of each dimension on the effective useful features captured by the
embedding space. Our contributions highlight the importance of embeddings for
NLP tasks and the effect of their quality on the final results.
| [
"['Yanqing Chen' 'Bryan Perozzi' 'Rami Al-Rfou' 'Steven Skiena']",
"Yanqing Chen, Bryan Perozzi, Rami Al-Rfou, Steven Skiena"
] |
cs.CV cs.LG | null | 1301.3323 | null | null | http://arxiv.org/pdf/1301.3323v4 | 2013-03-18T07:19:31Z | 2013-01-15T12:47:39Z | Auto-pooling: Learning to Improve Invariance of Image Features from
Image Sequences | Learning invariant representations from images is one of the hardest
challenges facing computer vision. Spatial pooling is widely used to create
invariance to spatial shifting, but it is restricted to convolutional models.
In this paper, we propose a novel pooling method that can learn soft clustering
of features from image sequences. It is trained to improve the temporal
coherence of features, while keeping the information loss at minimum. Our
method does not use spatial information, so it can be used with
non-convolutional models too. Experiments on images extracted from natural
videos showed that our method can cluster similar features together. When
trained by convolutional features, auto-pooling outperformed traditional
spatial pooling on an image classification task, even though it does not use
the spatial topology of features.
| [
"['Sainbayar Sukhbaatar' 'Takaki Makino' 'Kazuyuki Aihara']",
"Sainbayar Sukhbaatar, Takaki Makino and Kazuyuki Aihara"
] |
cs.LG cs.CV stat.ML | null | 1301.3342 | null | null | http://arxiv.org/pdf/1301.3342v2 | 2013-03-08T11:00:32Z | 2013-01-15T13:44:18Z | Barnes-Hut-SNE | The paper presents an O(N log N)-implementation of t-SNE -- an embedding
technique that is commonly used for the visualization of high-dimensional data
in scatter plots and that normally runs in O(N^2). The new implementation uses
vantage-point trees to compute sparse pairwise similarities between the input
data objects, and it uses a variant of the Barnes-Hut algorithm - an algorithm
used by astronomers to perform N-body simulations - to approximate the forces
between the corresponding points in the embedding. Our experiments show that
the new algorithm, called Barnes-Hut-SNE, leads to substantial computational
advantages over standard t-SNE, and that it makes it possible to learn
embeddings of data sets with millions of objects.
| [
"['Laurens van der Maaten']",
"Laurens van der Maaten"
] |
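As a usage-level illustration of the abstract above: scikit-learn exposes a Barnes-Hut variant of t-SNE, so the O(N log N) behaviour can be tried without reimplementing the tree structures. This is not the authors' original code, and the synthetic two-cluster data are an illustrative assumption.

```python
import numpy as np
from sklearn.manifold import TSNE

# Two well-separated Gaussian clusters in 50 dimensions.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=0.0, size=(200, 50)),
               rng.normal(loc=4.0, size=(200, 50))])

# method="barnes_hut" selects the approximate O(N log N) gradient.
emb = TSNE(n_components=2, method="barnes_hut", perplexity=30,
           init="pca", random_state=0).fit_transform(X)
print(emb.shape)   # (400, 2)
```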
cs.MA cs.LG math.OC stat.ML | null | 1301.3347 | null | null | http://arxiv.org/pdf/1301.3347v1 | 2013-01-15T14:00:55Z | 2013-01-15T14:00:55Z | Multi-agent learning using Fictitious Play and Extended Kalman Filter | Decentralised optimisation tasks are important components of multi-agent
systems. These tasks can be interpreted as n-player potential games: therefore
game-theoretic learning algorithms can be used to solve decentralised
optimisation tasks. Fictitious play is the canonical example of these
algorithms. Nevertheless fictitious play implicitly assumes that players have
stationary strategies. We present a novel variant of fictitious play where
players predict their opponents' strategies using Extended Kalman filters and
use their predictions to update their strategies.
We show that in 2 by 2 games with at least one pure Nash equilibrium and in
potential games where players have two available actions, the proposed
algorithm converges to the pure Nash equilibrium. The performance of the
proposed algorithm was empirically tested, in two strategic form games and an
ad-hoc sensor network surveillance problem. The proposed algorithm performs
better than the classic fictitious play algorithm in these games and therefore
improves the performance of game-theoretical learning in decentralised
optimisation.
| [
"['Michalis Smyrnakis']",
"Michalis Smyrnakis"
] |
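For reference, a sketch of classic fictitious play on a tiny potential game (an identical-payoff 2x2 coordination game): each player best responds to the empirical frequency of the opponent's past actions. The paper's contribution, predicting opponent strategies with an Extended Kalman filter, is not reproduced; the payoff matrices below are illustrative.

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])        # row player's payoffs
B = A.copy()                      # identical payoffs: a potential game

# counts[0]: row player's past actions, counts[1]: column player's,
# both smoothed with a count of one per action.
counts = [np.ones(2), np.ones(2)]
for t in range(200):
    beliefs = [c / c.sum() for c in counts]
    a_row = int(np.argmax(A @ beliefs[1]))    # best response to column's mix
    a_col = int(np.argmax(B.T @ beliefs[0]))  # best response to row's mix
    counts[0][a_row] += 1
    counts[1][a_col] += 1

# Empirical strategies converge to the pure equilibrium (action 0, action 0).
print(counts[0] / counts[0].sum(), counts[1] / counts[1].sum())
```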
cs.NA cs.LG | null | 1301.3389 | null | null | http://arxiv.org/pdf/1301.3389v2 | 2013-03-18T09:15:29Z | 2013-01-15T15:59:46Z | The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization | Non-negative matrix factorization (NMF) has become a popular machine learning
approach to many problems in text mining, speech and image processing,
bio-informatics and seismic data analysis to name a few. In NMF, a matrix of
non-negative data is approximated by the low-rank product of two matrices with
non-negative entries. In this paper, the approximation quality is measured by
the Kullback-Leibler divergence between the data and its low-rank
reconstruction. The existence of the simple multiplicative update (MU)
algorithm for computing the matrix factors has contributed to the success of
NMF. Despite the availability of algorithms showing faster convergence, MU
remains popular due to its simplicity. In this paper, a diagonalized Newton
algorithm (DNA) is proposed showing faster convergence while the implementation
remains simple and suitable for high-rank problems. The DNA algorithm is
applied to various publicly available data sets, showing a substantial speed-up
on modern hardware.
| [
"['Hugo Van hamme']",
"Hugo Van hamme"
] |
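For context, a sketch of the baseline the abstract above compares against: the classic multiplicative updates for NMF under the generalized Kullback-Leibler divergence. The proposed diagonalized Newton iterations themselves are not reproduced; the rank, iteration count, and random test matrix are illustrative.

```python
import numpy as np

def kl_nmf_mu(V, rank, n_iter=200, eps=1e-9, seed=0):
    # Lee-Seung multiplicative updates minimizing D_KL(V || WH).
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)
    return W, H

V = np.random.default_rng(1).random((20, 30))
W, H = kl_nmf_mu(V, rank=5)
print(np.linalg.norm(V - W @ H))   # reconstruction error after the updates
```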
cs.LG | null | 1301.3391 | null | null | http://arxiv.org/pdf/1301.3391v3 | 2013-03-11T15:38:05Z | 2013-01-15T16:06:11Z | Feature grouping from spatially constrained multiplicative interaction | We present a feature learning model that learns to encode relationships
between images. The model is defined as a Gated Boltzmann Machine, which is
constrained such that hidden units that are nearby in space can gate each
other's connections. We show how frequency/orientation "columns" as well as
topographic filter maps follow naturally from training the model on image
pairs. The model also helps explain why square-pooling models yield feature
groups with similar grouping properties. Experimental results on synthetic
image transformations show that spatially constrained gating is an effective
way to reduce the number of parameters and thereby to regularize a
transformation-learning model.
| [
"['Felix Bauer' 'Roland Memisevic']",
"Felix Bauer, Roland Memisevic"
] |
cs.LG cs.CV cs.IR | null | 1301.3461 | null | null | http://arxiv.org/pdf/1301.3461v7 | 2013-04-23T08:13:55Z | 2013-01-15T19:32:20Z | Factorized Topic Models | In this paper we present a modification to a latent topic model, which makes
the model exploit supervision to produce a factorized representation of the
observed data. The structured parameterization separately encodes variance that
is shared between classes from variance that is private to each class by the
introduction of a new prior over the topic space. The approach allows for a
more efficient inference and provides an intuitive interpretation of the data
in terms of an informative signal together with structured noise. The
factorized representation is shown to enhance inference performance for image,
text, and video classification.
| [
"['Cheng Zhang' 'Carl Henrik Ek' 'Andreas Damianou' 'Hedvig Kjellstrom']",
"Cheng Zhang and Carl Henrik Ek and Andreas Damianou and Hedvig\n Kjellstrom"
] |
stat.ML cs.CV cs.LG | null | 1301.3468 | null | null | http://arxiv.org/pdf/1301.3468v6 | 2013-03-04T10:41:34Z | 2013-01-15T19:45:27Z | Boltzmann Machines and Denoising Autoencoders for Image Denoising | Image denoising based on a probabilistic model of local image patches has
been employed by various researchers, and recently a deep (denoising)
autoencoder has been proposed by Burger et al. [2012] and Xie et al. [2012] as
a good model for this. In this paper, we propose that another popular family of
models in the field of deep learning, called Boltzmann machines, can perform
image denoising as well as, or in certain cases with high levels of noise, better
than denoising autoencoders. We empirically evaluate the two models on three
different sets of images with different types and levels of noise. Throughout
the experiments we also examine the effect of the depth of the models. The
experiments confirmed our claim and revealed that the performance can be
improved by adding more hidden layers, especially when the level of noise is
high.
| [
"['Kyunghyun Cho']",
"Kyunghyun Cho"
] |
cs.LG cs.CV stat.ML | null | 1301.3476 | null | null | http://arxiv.org/pdf/1301.3476v3 | 2013-03-11T18:00:00Z | 2013-01-15T20:21:54Z | Pushing Stochastic Gradient towards Second-Order Methods --
Backpropagation Learning with Transformations in Nonlinearities | Recently, we proposed to transform the outputs of each hidden neuron in a
multi-layer perceptron network to have zero output and zero slope on average,
and use separate shortcut connections to model the linear dependencies instead.
We continue the work by firstly introducing a third transformation to normalize
the scale of the outputs of each hidden neuron, and secondly by analyzing the
connections to second order optimization methods. We show that the
transformations make a simple stochastic gradient behave closer to second-order
optimization methods and thus speed up learning. This is shown both in theory
and with experiments. The experiments on the third transformation show that
while it further increases the speed of learning, it can also hurt performance
by converging to a worse local optimum, where both the inputs and outputs of
many hidden neurons are close to zero.
| [
"['Tommi Vatanen' 'Tapani Raiko' 'Harri Valpola' 'Yann LeCun']",
"Tommi Vatanen, Tapani Raiko, Harri Valpola, Yann LeCun"
] |
cs.LG | null | 1301.3485 | null | null | http://arxiv.org/pdf/1301.3485v2 | 2013-03-21T17:02:48Z | 2013-01-15T20:52:50Z | A Semantic Matching Energy Function for Learning with Multi-relational
Data | Large-scale relational learning becomes crucial for handling the huge amounts
of structured data generated daily in many application domains ranging from
computational biology or information retrieval, to natural language processing.
In this paper, we present a new neural network architecture designed to embed
multi-relational graphs into a flexible continuous vector space in which the
original data is kept and enhanced. The network is trained to encode the
semantics of these graphs in order to assign high probabilities to plausible
components. We empirically show that it reaches competitive performance in link
prediction on standard datasets from the literature.
| [
"['Xavier Glorot' 'Antoine Bordes' 'Jason Weston' 'Yoshua Bengio']",
"Xavier Glorot and Antoine Bordes and Jason Weston and Yoshua Bengio"
] |
cs.CV cs.LG | null | 1301.3516 | null | null | http://arxiv.org/pdf/1301.3516v3 | 2015-05-05T18:12:46Z | 2013-01-15T22:15:06Z | Learnable Pooling Regions for Image Classification | Biologically inspired, from the early HMAX model to Spatial Pyramid Matching,
pooling has played an important role in visual recognition pipelines. Spatial
pooling, by grouping of local codes, equips these methods with a certain degree
of robustness to translation and deformation yet preserving important spatial
information. Despite the predominance of this approach in current recognition
systems, we have seen little progress to fully adapt the pooling strategy to
the task at hand. This paper proposes a model for learning task dependent
pooling scheme -- including previously proposed hand-crafted pooling schemes as
a particular instantiation. In our work, we investigate the role of different
regularization terms showing that the smooth regularization term is crucial to
achieve strong performance using the presented architecture. Finally, we
propose an efficient and parallel method to train the model. Our experiments
show improved performance over hand-crafted pooling schemes on the CIFAR-10 and
CIFAR-100 datasets -- in particular improving the state-of-the-art to 56.29% on
the latter.
| [
"['Mateusz Malinowski' 'Mario Fritz']",
"Mateusz Malinowski and Mario Fritz"
] |
cs.LG | null | 1301.3524 | null | null | http://arxiv.org/pdf/1301.3524v1 | 2013-01-15T22:51:40Z | 2013-01-15T22:51:40Z | How good is the Electricity benchmark for evaluating concept drift
adaptation | In this correspondence, we will point out a problem with testing adaptive
classifiers on autocorrelated data. In such a case random change alarms may
boost the accuracy figures. Hence, we cannot be sure if the adaptation is
working well.
| [
"['Indre Zliobaite']",
"Indre Zliobaite"
] |
cs.LG cs.NA | null | 1301.3527 | null | null | http://arxiv.org/pdf/1301.3527v2 | 2013-03-18T22:42:11Z | 2013-01-15T23:11:05Z | Block Coordinate Descent for Sparse NMF | Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data
analysis. An important variant is the sparse NMF problem which arises when we
explicitly require the learnt features to be sparse. A natural measure of
sparsity is the L$_0$ norm, however its optimization is NP-hard. Mixed norms,
such as L$_1$/L$_2$ measure, have been shown to model sparsity robustly, based
on intuitive attributes that such measures need to satisfy. This is in contrast
to computationally cheaper alternatives such as the plain L$_1$ norm. However,
present algorithms designed for optimizing the mixed norm L$_1$/L$_2$ are slow
and other formulations for sparse NMF have been proposed such as those based on
L$_1$ and L$_0$ norms. Our proposed algorithm allows us to solve the mixed norm
sparsity constraints while not sacrificing computation time. We present
experimental evidence on real-world datasets that shows our new algorithm
performs an order of magnitude faster compared to the current state-of-the-art
solvers optimizing the mixed norm and is suitable for large-scale datasets.
| [
"Vamsi K. Potluru, Sergey M. Plis, Jonathan Le Roux, Barak A.\n Pearlmutter, Vince D. Calhoun, Thomas P. Hayes",
"['Vamsi K. Potluru' 'Sergey M. Plis' 'Jonathan Le Roux'\n 'Barak A. Pearlmutter' 'Vince D. Calhoun' 'Thomas P. Hayes']"
] |
q-bio.GN cs.LG stat.ML | null | 1301.3528 | null | null | http://arxiv.org/pdf/1301.3528v1 | 2013-01-15T23:19:14Z | 2013-01-15T23:19:14Z | An Efficient Sufficient Dimension Reduction Method for Identifying
Genetic Variants of Clinical Significance | Faster and cheaper next-generation sequencing technologies will generate
unprecedentedly massive and high-dimensional genomic and epigenomic variation
data. In the near future, a routine part of medical record will include the
sequenced genomes. A fundamental question is how to efficiently extract genomic
and epigenomic variants of clinical utility which will provide information for
optimal wellness and intervention strategies. The traditional paradigm for
identifying variants of clinical validity is to test association of the
variants. However, significantly associated genetic variants may or may not be
useful for diagnosis and prognosis of diseases. An alternative to association
studies for finding genetic variants of predictive utility is to systematically
search for variants that contain sufficient information for phenotype prediction.
To achieve this, we introduce concepts of sufficient dimension reduction and
coordinate hypothesis which project the original high dimensional data to very
low dimensional space while preserving all information on response phenotypes.
We then formulate the clinically significant genetic variant discovery problem as a
sparse SDR problem and develop algorithms that can select significant genetic
variants from up to, or even more than, ten million predictors, with the aid of dividing
the SDR for the whole genome into a number of subSDR problems defined over genomic
regions. The sparse SDR is in turn formulated as a sparse optimal scoring
problem, but with a penalty which can remove row vectors from the basis matrix.
To speed up computation, we develop a modified alternating direction method of
multipliers to solve the sparse optimal scoring problem, which can easily be
implemented in parallel. To illustrate its application, the proposed method is
applied to simulation data and the NHLBI's Exome Sequencing Project dataset.
| [
"['Momiao Xiong' 'Long Ma']",
"Momiao Xiong and Long Ma"
] |
cs.NE cs.CV cs.LG q-bio.NC | null | 1301.3530 | null | null | http://arxiv.org/pdf/1301.3530v2 | 2013-01-25T20:39:46Z | 2013-01-15T23:42:21Z | The Neural Representation Benchmark and its Evaluation on Brain and
Machine | A key requirement for the development of effective learning representations
is their evaluation and comparison to representations we know to be effective.
In natural sensory domains, the community has viewed the brain as a source of
inspiration and as an implicit benchmark for success. However, it has not been
possible to test representational learning algorithms directly against
the representations contained in neural systems. Here, we propose a new
benchmark for visual representations on which we have directly tested the
neural representation in multiple visual cortical areas in macaque (utilizing
data from [Majaj et al., 2012]), and on which any computer vision algorithm
that produces a feature space can be tested. The benchmark measures the
effectiveness of the neural or machine representation by computing the
classification loss on the ordered eigendecomposition of a kernel matrix
[Montavon et al., 2011]. In our analysis we find that the neural representation
in visual area IT is superior to visual area V4. In our analysis of
representational learning algorithms, we find that three-layer models approach
the representational performance of V4 and the algorithm in [Le et al., 2012]
surpasses the performance of V4. Impressively, we find that a recent supervised
algorithm [Krizhevsky et al., 2012] achieves performance comparable to that of
IT for an intermediate level of image variation difficulty, and surpasses IT at
a higher difficulty level. We believe this result represents a major milestone:
it is the first learning algorithm we have found that exceeds our current
estimate of IT representation performance. We hope that this benchmark will
assist the community in matching the representational performance of visual
cortex and will serve as an initial rallying point for further correspondence
between representations derived in brains and machines.
| [
"Charles F. Cadieu, Ha Hong, Dan Yamins, Nicolas Pinto, Najib J. Majaj,\n James J. DiCarlo",
"['Charles F. Cadieu' 'Ha Hong' 'Dan Yamins' 'Nicolas Pinto'\n 'Najib J. Majaj' 'James J. DiCarlo']"
] |
cs.NE cs.LG stat.ML | null | 1301.3533 | null | null | http://arxiv.org/pdf/1301.3533v2 | 2013-02-22T10:18:15Z | 2013-01-16T00:12:21Z | Sparse Penalty in Deep Belief Networks: Using the Mixed Norm Constraint | Deep Belief Networks (DBN) have been successfully applied on popular machine
learning tasks. Specifically, when applied on hand-written digit recognition,
DBNs have achieved approximate accuracy rates of 98.8%. In an effort to
optimize the data representation achieved by the DBN and maximize its
descriptive power, recent advances have focused on inducing sparse constraints
at each layer of the DBN. In this paper we present a theoretical approach for
sparse constraints in the DBN using the mixed norm for both non-overlapping and
overlapping groups. We explore how these constraints affect the classification
accuracy for digit recognition in three different datasets (MNIST, USPS, RIMES)
and provide initial estimations of their usefulness by altering different
parameters such as the group size and overlap percentage.
| [
"Xanadu Halkias, Sebastien Paris, Herve Glotin",
"['Xanadu Halkias' 'Sebastien Paris' 'Herve Glotin']"
] |
cs.LG | null | 1301.3539 | null | null | http://arxiv.org/pdf/1301.3539v1 | 2013-01-16T01:07:38Z | 2013-01-16T01:07:38Z | Learning Features with Structure-Adapting Multi-view Exponential Family
Harmoniums | We propose a graphical model for multi-view feature extraction that
automatically adapts its structure to achieve a better representation of the data
distribution. The proposed model, the structure-adapting multi-view harmonium
(SA-MVH), has switch parameters that control the connections between hidden nodes
and input views, and learns the switch parameters during training. Numerical
experiments on synthetic and a real-world dataset demonstrate the useful
behavior of the SA-MVH, compared to existing multi-view feature extraction
methods.
| [
"['Yoonseop Kang' 'Seungjin Choi']",
"Yoonseop Kang and Seungjin Choi"
] |
cs.LG cs.CV stat.ML | null | 1301.3541 | null | null | http://arxiv.org/pdf/1301.3541v3 | 2013-03-15T19:25:30Z | 2013-01-16T01:27:15Z | Deep Predictive Coding Networks | The quality of data representation in deep learning methods is directly
related to the prior model imposed on the representations; however, generally
used fixed priors are not capable of adjusting to the context in the data. To
address this issue, we propose deep predictive coding networks, a hierarchical
generative model that empirically alters priors on the latent representations
in a dynamic and context-sensitive manner. This model captures the temporal
dependencies in time-varying signals and uses top-down information to modulate
the representation in lower layers. The centerpiece of our model is a novel
procedure to infer sparse states of a dynamic model which is used for feature
extraction. We also extend this feature extraction block to introduce a pooling
function that captures locally invariant representations. When applied to
natural video data, we show that our method is able to learn high-level visual
features. We also demonstrate the role of the top-down connections by showing
the robustness of the proposed model to structured noise.
| [
"['Rakesh Chalasani' 'Jose C. Principe']",
"Rakesh Chalasani and Jose C. Principe"
] |
cs.LG cs.NE stat.ML | null | 1301.3545 | null | null | http://arxiv.org/pdf/1301.3545v2 | 2013-03-16T16:07:12Z | 2013-01-16T01:40:20Z | Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines | This paper introduces the Metric-Free Natural Gradient (MFNG) algorithm for
training Boltzmann Machines. Similar in spirit to the Hessian-Free method of
Martens [8], our algorithm belongs to the family of truncated Newton methods
and exploits an efficient matrix-vector product to avoid explicitly storing
the natural gradient metric $L$. This metric is shown to be the expected second
derivative of the log-partition function (under the model distribution), or
equivalently, the variance of the vector of partial derivatives of the energy
function. We evaluate our method on the task of joint-training a 3-layer Deep
Boltzmann Machine and show that MFNG does indeed have faster per-epoch
convergence compared to Stochastic Maximum Likelihood with centering, though
wall-clock performance is currently not competitive.
| [
"Guillaume Desjardins, Razvan Pascanu, Aaron Courville and Yoshua\n Bengio",
"['Guillaume Desjardins' 'Razvan Pascanu' 'Aaron Courville' 'Yoshua Bengio']"
] |
cs.LG cs.CV | null | 1301.3551 | null | null | http://arxiv.org/pdf/1301.3551v6 | 2013-06-04T04:42:39Z | 2013-01-16T01:49:52Z | Information Theoretic Learning with Infinitely Divisible Kernels | In this paper, we develop a framework for information theoretic learning
based on infinitely divisible matrices. We formulate an entropy-like functional
on positive definite matrices based on Renyi's axiomatic definition of entropy
and examine some key properties of this functional that lead to the concept of
infinite divisibility. The proposed formulation avoids the plug-in estimation
of the density and brings along the representation power of reproducing kernel
Hilbert spaces. As an application example, we derive a supervised metric
learning algorithm using a matrix based analogue to conditional entropy
achieving results comparable with the state of the art.
| [
"['Luis G. Sanchez Giraldo' 'Jose C. Principe']",
"Luis G. Sanchez Giraldo and Jose C. Principe"
] |
cs.LG cs.NE stat.ML | null | 1301.3557 | null | null | http://arxiv.org/pdf/1301.3557v1 | 2013-01-16T02:12:07Z | 2013-01-16T02:12:07Z | Stochastic Pooling for Regularization of Deep Convolutional Neural
Networks | We introduce a simple and effective method for regularizing large
convolutional neural networks. We replace the conventional deterministic
pooling operations with a stochastic procedure, randomly picking the activation
within each pooling region according to a multinomial distribution, given by
the activities within the pooling region. The approach is hyper-parameter free
and can be combined with other regularization approaches, such as dropout and
data augmentation. We achieve state-of-the-art performance on four image
datasets, relative to other approaches that do not utilize data augmentation.
| [
"['Matthew D. Zeiler' 'Rob Fergus']",
"Matthew D. Zeiler and Rob Fergus"
] |
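A minimal sketch of the training-time rule described in the abstract above: within a pooling region of non-negative activations, one activation is sampled with probability proportional to its value. The function name is illustrative and the all-zero fallback is an assumption; at test time the paper instead uses a probability-weighted average of the activations.

```python
import numpy as np

def stochastic_pool(acts, rng):
    # Activations are assumed non-negative (e.g. post-ReLU).  One of them
    # is sampled with probability proportional to its value.
    acts = np.asarray(acts, dtype=float).ravel()
    total = acts.sum()
    if total <= 0:                      # all-zero region: nothing to pick
        return 0.0
    probs = acts / total                # multinomial over the region
    return float(rng.choice(acts, p=probs))

rng = np.random.default_rng(0)
region = [0.1, 0.0, 0.6, 0.3]
print([stochastic_pool(region, rng) for _ in range(5)])
```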
stat.ML cs.LG | null | 1301.3568 | null | null | http://arxiv.org/pdf/1301.3568v3 | 2013-05-01T04:48:20Z | 2013-01-16T03:21:27Z | Joint Training Deep Boltzmann Machines for Classification | We introduce a new method for training deep Boltzmann machines jointly. Prior
methods of training DBMs require an initial learning pass that trains the model
greedily, one layer at a time, or do not perform well on classification tasks.
In our approach, we train all layers of the DBM simultaneously, using a novel
training procedure called multi-prediction training. The resulting model can
either be interpreted as a single generative model trained to maximize a
variational approximation to the generalized pseudolikelihood, or as a family
of recurrent networks that share parameters and may be approximately averaged
together using a novel technique we call the multi-inference trick. We show
that our approach performs competitively for classification and outperforms
previous methods in terms of accuracy of approximate inference and
classification with missing inputs.
| [
"['Ian J. Goodfellow' 'Aaron Courville' 'Yoshua Bengio']",
"Ian J. Goodfellow and Aaron Courville and Yoshua Bengio"
] |
cs.LG cs.CV stat.ML | null | 1301.3575 | null | null | http://arxiv.org/pdf/1301.3575v1 | 2013-01-16T03:52:09Z | 2013-01-16T03:52:09Z | Kernelized Locality-Sensitive Hashing for Semi-Supervised Agglomerative
Clustering | Large scale agglomerative clustering is hindered by computational burdens. We
propose a novel scheme where exact inter-instance distance calculation is
replaced by the Hamming distance between Kernelized Locality-Sensitive Hashing
(KLSH) hashed values. This results in a method that drastically decreases
computation time. Additionally, we take advantage of certain labeled data
points via distance metric learning to achieve a competitive precision and
recall compared to K-Means, but in much less computation time.
| [
"Boyi Xie, Shuheng Zheng",
"['Boyi Xie' 'Shuheng Zheng']"
] |
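A rough sketch of the speed-up idea above under simplifying assumptions: binary codes from plain random-hyperplane hashing (rather than kernelized LSH), Hamming distances between codes in place of exact distances, and standard average-linkage agglomerative clustering on top. The metric-learning step from labeled points is omitted; the data sizes and bit counts are illustrative.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(50, 20)),
               rng.normal(5, 1, size=(50, 20))])

# Random-hyperplane hashing: each bit is the sign of a random projection.
n_bits = 32
hyperplanes = rng.normal(size=(X.shape[1], n_bits))
codes = (X @ hyperplanes > 0).astype(np.uint8)

# Agglomerative clustering on Hamming distances between the hash codes
# instead of exact inter-instance distances.
ham = pdist(codes, metric="hamming")          # condensed distance vector
Z = linkage(ham, method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print(np.bincount(labels))                    # roughly 50 / 50
```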
cs.LG | null | 1301.3577 | null | null | http://arxiv.org/pdf/1301.3577v3 | 2013-03-20T15:37:33Z | 2013-01-16T04:07:46Z | Saturating Auto-Encoders | We introduce a simple new regularizer for auto-encoders whose hidden-unit
activation functions contain at least one zero-gradient (saturated) region.
This regularizer explicitly encourages activations in the saturated region(s)
of the corresponding activation function. We call these Saturating
Auto-Encoders (SATAE). We show that the saturation regularizer explicitly
limits the SATAE's ability to reconstruct inputs which are not near the data
manifold. Furthermore, we show that a wide variety of features can be learned
when different activation functions are used. Finally, connections are
established with the Contractive and Sparse Auto-Encoders.
| [
"['Rostislav Goroshin' 'Yann LeCun']",
"Rostislav Goroshin and Yann LeCun"
] |
cs.LG cs.CV | null | 1301.3583 | null | null | http://arxiv.org/pdf/1301.3583v4 | 2013-03-14T20:49:20Z | 2013-01-16T04:45:29Z | Big Neural Networks Waste Capacity | This article exposes the failure of some big neural networks to leverage
added capacity to reduce underfitting. Past research suggests diminishing
returns when increasing the size of neural networks. Our experiments on
ImageNet LSVRC-2010 show that this may be due to the fact that there are highly
diminishing returns for capacity in terms of training error, leading to
underfitting. This suggests that the optimization method - first order gradient
descent - fails in this regime. Directly attacking this problem, either through
the optimization method or the choice of parametrization, may make it possible to improve
the generalization error on large datasets, for which a large capacity is
required.
| [
"['Yann N. Dauphin' 'Yoshua Bengio']",
"Yann N. Dauphin, Yoshua Bengio"
] |
cs.LG cs.NA | null | 1301.3584 | null | null | http://arxiv.org/pdf/1301.3584v7 | 2014-02-17T16:29:27Z | 2013-01-16T04:47:02Z | Revisiting Natural Gradient for Deep Networks | We evaluate natural gradient, an algorithm originally proposed in Amari
(1997), for learning deep models. The contributions of this paper are as
follows. We show the connection between natural gradient and three other
recently proposed methods for training deep models: Hessian-Free (Martens,
2010), Krylov Subspace Descent (Vinyals and Povey, 2012) and TONGA (Le Roux et
al., 2008). We describe how one can use unlabeled data to improve the
generalization error obtained by natural gradient and empirically evaluate the
robustness of the algorithm to the ordering of the training set compared to
stochastic gradient descent. Finally we extend natural gradient to incorporate
second order information alongside the manifold information and provide a
benchmark of the new algorithm using a truncated Newton approach for inverting
the metric matrix instead of using a diagonal approximation of it.
| [
"['Razvan Pascanu' 'Yoshua Bengio']",
"Razvan Pascanu and Yoshua Bengio"
] |
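The truncated-Newton step mentioned at the end of the abstract above amounts to solving F d = g with conjugate gradients, using only metric-vector products rather than the metric itself. A sketch, assuming a user-supplied fvp callable (anything that returns F @ v, e.g. built from per-example gradients or double backprop); fvp is an assumption, not code from the paper.

```python
import torch

def conjugate_gradient(fvp, g, iters=20, tol=1e-6):
    """Solve F d = g approximately; returns the natural-gradient direction d."""
    d = torch.zeros_like(g)
    r = g.clone()
    p = r.clone()
    rs = r @ r
    for _ in range(iters):
        Fp = fvp(p)
        alpha = rs / (p @ Fp + 1e-12)
        d = d + alpha * p
        r = r - alpha * Fp
        rs_new = r @ r
        if rs_new.sqrt() < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return d

# Usage sketch: theta <- theta - lr * conjugate_gradient(fvp, flat_gradient)
```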
cs.LG cs.CV cs.RO | null | 1301.3592 | null | null | http://arxiv.org/pdf/1301.3592v6 | 2014-08-21T18:17:37Z | 2013-01-16T05:33:56Z | Deep Learning for Detecting Robotic Grasps | We consider the problem of detecting robotic grasps in an RGB-D view of a
scene containing objects. In this work, we apply a deep learning approach to
solve this problem, which avoids time-consuming hand-design of features. This
presents two main challenges. First, we need to evaluate a huge number of
candidate grasps. In order to make detection fast, as well as robust, we
present a two-step cascaded structure with two deep networks, where the top
detections from the first are re-evaluated by the second. The first network has
fewer features, is faster to run, and can effectively prune out unlikely
candidate grasps. The second, with more features, is slower but has to run only
on the top few detections. Second, we need to handle multimodal inputs well,
for which we present a method to apply structured regularization on the weights
based on multimodal group regularization. We demonstrate that our method
outperforms the previous state-of-the-art methods in robotic grasp detection,
and can be used to successfully execute grasps on two different robotic
platforms.
| [
"['Ian Lenz' 'Honglak Lee' 'Ashutosh Saxena']",
"Ian Lenz and Honglak Lee and Ashutosh Saxena"
] |
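A toy sketch of the two-stage cascade described in the abstract above, with made-up feature dimensions and layer sizes; both networks are untrained placeholders, and the feature extraction from RGB-D patches is not shown.

```python
import torch
import torch.nn as nn

D = 24 * 24 * 7   # hypothetical per-candidate feature size (patch pixels x channels)

small_net = nn.Sequential(nn.Linear(D, 64), nn.ReLU(), nn.Linear(64, 1))
large_net = nn.Sequential(nn.Linear(D, 512), nn.ReLU(),
                          nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 1))

def detect_grasp(candidates, top_k=100):
    """candidates: (N, D) features for N candidate grasp rectangles."""
    with torch.no_grad():
        coarse = small_net(candidates).squeeze(-1)        # cheap pass over everything
        keep = coarse.topk(min(top_k, coarse.numel())).indices
        fine = large_net(candidates[keep]).squeeze(-1)    # expensive pass on survivors
        return keep[fine.argmax()]                        # index of the best candidate
```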
cs.LG cs.CL cs.NE eess.AS | null | 1301.3605 | null | null | http://arxiv.org/pdf/1301.3605v3 | 2013-03-08T19:42:37Z | 2013-01-16T07:23:19Z | Feature Learning in Deep Neural Networks - Studies on Speech Recognition
Tasks | Recent studies have shown that deep neural networks (DNNs) perform
significantly better than shallow networks and Gaussian mixture models (GMMs)
on large vocabulary speech recognition tasks. In this paper, we argue that the
improved accuracy achieved by the DNNs is the result of their ability to
extract discriminative internal representations that are robust to the many
sources of variability in speech signals. We show that these representations
become increasingly insensitive to small perturbations in the input with
increasing network depth, which leads to better speech recognition performance
with deeper networks. We also show that DNNs cannot extrapolate to test samples
that are substantially different from the training examples. If the training
data are sufficiently representative, however, internal features learned by the
DNN are relatively stable with respect to speaker differences, bandwidth
differences, and environment distortion. This enables DNN-based recognizers to
perform as well or better than state-of-the-art systems based on GMMs or
shallow networks without the need for explicit model adaptation or feature
normalization.
| [
"Dong Yu, Michael L. Seltzer, Jinyu Li, Jui-Ting Huang, Frank Seide",
"['Dong Yu' 'Michael L. Seltzer' 'Jinyu Li' 'Jui-Ting Huang' 'Frank Seide']"
] |
cs.CL cs.LG | null | 1301.3618 | null | null | http://arxiv.org/pdf/1301.3618v2 | 2013-03-16T03:23:26Z | 2013-01-16T08:05:35Z | Learning New Facts From Knowledge Bases With Neural Tensor Networks and
Semantic Word Vectors | Knowledge bases provide applications with the benefit of easily accessible,
systematic relational knowledge but often suffer in practice from their
incompleteness and lack of knowledge of new entities and relations. Much work
has focused on building or extending them by finding patterns in large
unannotated text corpora. In contrast, here we mainly aim to complete a
knowledge base by predicting additional true relationships between entities,
based on generalizations that can be discerned in the given knowledge base. We
introduce a neural tensor network (NTN) model which predicts new relationship
entries that can be added to the database. This model can be improved by
initializing entity representations with word vectors learned in an
unsupervised fashion from text, and when doing this, existing relations can
even be queried for entities that were not present in the database. Our model
generalizes and outperforms existing models for this problem, and can classify
unseen relationships in WordNet with an accuracy of 75.8%.
| [
"Danqi Chen, Richard Socher, Christopher D. Manning, Andrew Y. Ng",
"['Danqi Chen' 'Richard Socher' 'Christopher D. Manning' 'Andrew Y. Ng']"
] |
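A sketch of a neural tensor scoring layer in the spirit of the abstract above, using the common form g(e1, R, e2) = u^T tanh(e1^T W^[1:k] e2 + V[e1; e2] + b); the exact parametrization and the ranking loss used for training may differ from the paper, and the dimensions and initialization below are placeholders.

```python
import torch
import torch.nn as nn

class NTNRelation(nn.Module):
    """One relation's scorer: higher output means (e1, R, e2) is more plausible."""
    def __init__(self, d, k):
        super().__init__()
        self.W = nn.Parameter(torch.randn(k, d, d) * 0.01)   # bilinear tensor slices
        self.V = nn.Parameter(torch.randn(k, 2 * d) * 0.01)  # standard linear path
        self.b = nn.Parameter(torch.zeros(k))
        self.u = nn.Parameter(torch.randn(k) * 0.01)

    def forward(self, e1, e2):
        # e1, e2: (batch, d) entity vectors, e.g. initialized from word vectors
        bilinear = torch.einsum('bd,kde,be->bk', e1, self.W, e2)
        linear = torch.cat([e1, e2], dim=1) @ self.V.t()
        return torch.tanh(bilinear + linear + self.b) @ self.u
```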
cs.CL cs.LG | null | 1301.3627 | null | null | http://arxiv.org/pdf/1301.3627v2 | 2013-05-11T12:17:44Z | 2013-01-16T08:37:39Z | Two SVDs produce more focal deep learning representations | A key characteristic of work on deep learning and neural networks in general
is that it relies on representations of the input that support generalization,
robust inference, domain adaptation and other desirable functionalities. Much
recent progress in the field has focused on efficient and effective methods for
computing representations. In this paper, we propose an alternative method that
is more efficient than prior work and produces representations that have a
property we call focality -- a property we hypothesize to be important for
neural network representations. The method consists of a simple application of
two consecutive SVDs and is inspired by Anandkumar (2012).
| [
"Hinrich Schuetze, Christian Scheible",
"['Hinrich Schuetze' 'Christian Scheible']"
] |
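A heavily hedged sketch of "two consecutive SVDs": reduce a data (or co-occurrence) matrix with one truncated SVD, then apply a second SVD to the resulting representation. How the paper wires the two stages together is not spelled out here, so treat this purely as an illustration of the computational pattern.

```python
import numpy as np

def two_svd_representation(X, k1=100, k2=50):
    """X: (n_items, n_features) matrix; requires k2 <= k1 <= min(X.shape)."""
    U1, s1, _ = np.linalg.svd(X, full_matrices=False)
    Z1 = U1[:, :k1] * s1[:k1]          # first-pass reduced representation
    U2, s2, _ = np.linalg.svd(Z1, full_matrices=False)
    return U2[:, :k2] * s2[:k2]        # second-pass representation
```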
cs.LG | null | 1301.3630 | null | null | http://arxiv.org/pdf/1301.3630v4 | 2013-03-20T21:18:07Z | 2013-01-16T09:01:47Z | Behavior Pattern Recognition using A New Representation Model | We study the use of inverse reinforcement learning (IRL) as a tool for the
recognition of agents' behavior from observations of their sequential
decision-making while interacting with the environment. We model the problem faced
by the agents as a Markov decision process (MDP) and model the observed
behavior of the agents in terms of forward planning for the MDP. We use IRL to
learn reward functions and then use these reward functions as the basis for
clustering or classification models. Experimental studies with GridWorld, a
navigation problem, and the secretary problem, an optimal stopping problem,
suggest that reward vectors recovered by IRL can be a good basis for behavior pattern
recognition problems. Empirical comparisons of our method with several existing
IRL algorithms and with direct methods that use feature statistics observed in
state-action space suggest it may be superior for recognition problems.
| [
"['Qifeng Qiao' 'Peter A. Beling']",
"Qifeng Qiao and Peter A. Beling"
] |
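The recognition pipeline in the abstract above can be sketched as: run an IRL routine per observed agent to recover a reward weight vector, then feed those vectors to an off-the-shelf clustering or classification model. The irl_solver below is a placeholder for whichever IRL algorithm is used; it is not part of the paper's code.

```python
import numpy as np
from sklearn.cluster import KMeans

def recognize_behavior_patterns(trajectories, irl_solver, n_patterns=3):
    """trajectories: list of per-agent state-action sequences.
    irl_solver: callable returning a reward weight vector for one agent."""
    rewards = np.stack([irl_solver(traj) for traj in trajectories])
    km = KMeans(n_clusters=n_patterns, n_init=10).fit(rewards)
    return km.labels_, rewards
```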
cs.LG cs.NE stat.ML | null | 1301.3641 | null | null | http://arxiv.org/pdf/1301.3641v3 | 2013-05-01T06:57:50Z | 2013-01-16T10:10:23Z | Training Neural Networks with Stochastic Hessian-Free Optimization | Hessian-free (HF) optimization has been successfully used for training deep
autoencoders and recurrent networks. HF uses the conjugate gradient algorithm
to construct update directions through curvature-vector products that can be
computed on the same order of time as gradients. In this paper we exploit this
property and study stochastic HF with gradient and curvature mini-batches
independent of the dataset size. We modify Martens' HF for these settings and
integrate dropout, a method for preventing co-adaptation of feature detectors,
to guard against overfitting. Stochastic Hessian-free optimization provides an
intermediate approach between SGD and HF that achieves competitive performance
on both classification and deep autoencoder experiments.
| [
"['Ryan Kiros']",
"Ryan Kiros"
] |
cs.CV cs.LG | null | 1301.3644 | null | null | http://arxiv.org/pdf/1301.3644v1 | 2013-01-16T10:12:37Z | 2013-01-16T10:12:37Z | Regularized Discriminant Embedding for Visual Descriptor Learning | Images can vary according to changes in viewpoint, resolution, noise, and
illumination. In this paper, we aim to learn representations for an image,
which are robust to wide changes in such environmental conditions, using
training pairs of matching and non-matching local image patches that are
collected under various environmental conditions. We present a regularized
discriminant analysis that emphasizes two challenging categories among the
given training pairs: (1) matching, but far apart pairs and (2) non-matching,
but close pairs in the original feature space (e.g., SIFT feature space).
Compared to existing work on metric learning and discriminant analysis, our
method can better distinguish relevant images from irrelevant, but look-alike
images.
| [
"['Kye-Hyeon Kim' 'Rui Cai' 'Lei Zhang' 'Seungjin Choi']",
"Kye-Hyeon Kim, Rui Cai, Lei Zhang, Seungjin Choi"
] |
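One plausible reading of the regularized discriminant idea in the abstract above, sketched as a generalized eigenproblem over pair-difference scatter matrices in which the two hard categories (matching-but-far and non-matching-but-close pairs) receive extra weight; the weighting scheme, regularizer, and constants are assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.linalg import eigh

def rde_projection(X, pairs, same, dim=16, boost=5.0, hard_frac=0.2, reg=1e-3):
    # X: (n, d) descriptors (e.g. SIFT); pairs: (m, 2) index pairs; same: (m,) bool
    diffs = X[pairs[:, 0]] - X[pairs[:, 1]]
    dists = np.linalg.norm(diffs, axis=1)
    w = np.ones(len(pairs))
    k = max(1, int(hard_frac * same.sum()))
    w[np.argsort(-dists * same)[:k]] = boost                 # matching, far apart
    k = max(1, int(hard_frac * (~same).sum()))
    w[np.argsort(dists + same * dists.max())[:k]] = boost    # non-matching, close
    Sw = (diffs[same].T * w[same]) @ diffs[same]             # within-pair scatter
    Sb = (diffs[~same].T * w[~same]) @ diffs[~same]          # between-pair scatter
    Sw += reg * np.eye(X.shape[1])
    _, vecs = eigh(Sb, Sw)                                   # maximize between/within ratio
    return vecs[:, ::-1][:, :dim]                            # (d, dim) projection matrix
```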
cs.CV cs.LG | null | 1301.3666 | null | null | http://arxiv.org/pdf/1301.3666v2 | 2013-03-20T00:44:08Z | 2013-01-16T12:01:34Z | Zero-Shot Learning Through Cross-Modal Transfer | This work introduces a model that can recognize objects in images even if no
training data is available for the objects. The only necessary knowledge about
the unseen categories comes from unsupervised large text corpora. In our
zero-shot framework, distributional information in language can be seen as
spanning a semantic basis for understanding what objects look like. Most
previous zero-shot learning models can only differentiate between unseen
classes. In contrast, our model can both obtain state-of-the-art performance on
classes that have thousands of training images and obtain reasonable
performance on unseen classes. This is achieved by first using outlier
detection in the semantic space and then two separate recognition models.
Furthermore, our model does not require any manually defined semantic features
for either words or images.
| [
"Richard Socher, Milind Ganjoo, Hamsa Sridhar, Osbert Bastani,\n Christopher D. Manning, Andrew Y. Ng",
"['Richard Socher' 'Milind Ganjoo' 'Hamsa Sridhar' 'Osbert Bastani'\n 'Christopher D. Manning' 'Andrew Y. Ng']"
] |
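The decision rule described in the abstract above (outlier detection in the semantic word-vector space, then one of two recognition paths) can be sketched as follows; the paper uses learned probabilistic outlier models and a trained image-to-word-vector mapping, whereas this sketch substitutes a plain distance threshold and assumes the mapping has already been applied.

```python
import numpy as np

def zero_shot_predict(img_vec, seen_classifier, seen_word_vecs, unseen_word_vecs,
                      outlier_threshold=0.5):
    """img_vec: image already mapped into word-vector space.
    seen_classifier: callable mapping img_vec to a seen-class label (placeholder)."""
    d_seen = np.linalg.norm(seen_word_vecs - img_vec, axis=1).min()
    if d_seen < outlier_threshold:                    # looks like a seen class
        return "seen", seen_classifier(img_vec)
    d_unseen = np.linalg.norm(unseen_word_vecs - img_vec, axis=1)
    return "unseen", int(d_unseen.argmin())           # nearest unseen class word vector
```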
cs.AI cs.LG | 10.1007/s10472-014-9419-5 | 1301.3720 | null | null | http://arxiv.org/abs/1301.3720v2 | 2014-02-25T17:32:50Z | 2013-01-16T15:21:19Z | The IBMAP approach for Markov networks structure learning | In this work we consider the problem of learning the structure of Markov
networks from data. We present an approach for tackling this problem called
IBMAP, together with an efficient instantiation of the approach: the IBMAP-HC
algorithm, designed for avoiding important limitations of existing
independence-based algorithms. These algorithms proceed by performing
statistical independence tests on data, completely trusting the outcome of each
test. In practice, tests may be incorrect, resulting in potential cascading
errors and a consequent reduction in the quality of the learned structures.
IBMAP contemplates this uncertainty in the outcome of the tests through a
probabilistic maximum-a-posteriori approach. The approach is instantiated in
the IBMAP-HC algorithm, a structure selection strategy that performs a
polynomial heuristic local search in the space of possible structures. We
present an extensive empirical evaluation on synthetic and real data, showing
that our algorithm significantly outperforms the current independence-based
algorithms, in terms of data efficiency and quality of learned structures, with
equivalent computational complexities. We also show the performance of IBMAP-HC
in a real-world application of knowledge discovery: EDAs, which are
evolutionary algorithms that use structure learning on each generation for
modeling the distribution of populations. The experiments show that when
IBMAP-HC is used to learn the structure, EDAs improve the convergence to the
optimum.
| [
"Federico Schl\\\"uter and Facundo Bromberg and Alejandro Edera",
"['Federico Schlüter' 'Facundo Bromberg' 'Alejandro Edera']"
] |
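The key point of the abstract above, not trusting each independence test outright, can be illustrated with a toy scoring function that accumulates log-probabilities of the test outcomes implied by a candidate structure; the mapping from a test's p-value to a probability below is a crude stand-in for the paper's posterior computation, and the hill-climbing search over structures is omitted.

```python
import math

def ibmap_style_score(implied_independencies, test_pvalue):
    """implied_independencies: iterable of (a, b, cond_set, is_independent) tuples
    implied by a candidate structure.
    test_pvalue: placeholder callable running a conditional-independence test."""
    score = 0.0
    for a, b, cond, is_independent in implied_independencies:
        # Crude proxy for P(independent | data); the paper's computation differs.
        p = min(max(test_pvalue(a, b, cond), 1e-6), 1 - 1e-6)
        score += math.log(p if is_independent else 1.0 - p)
    return score   # pick the candidate structure with the highest score
```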
cs.LG | null | 1301.3753 | null | null | http://arxiv.org/pdf/1301.3753v2 | 2013-01-19T19:38:36Z | 2013-01-16T17:04:10Z | Switched linear encoding with rectified linear autoencoders | Several recent results in machine learning have established formal
connections between autoencoders---artificial neural network models that
attempt to reproduce their inputs---and other coding models like sparse coding
and K-means. This paper explores in depth an autoencoder model that is
constructed using rectified linear activations on its hidden units. Our
analysis builds on recent results to further unify the world of sparse linear
coding models. We provide an intuitive interpretation of the behavior of these
coding models and demonstrate this intuition using small, artificial datasets
with known distributions.
| [
"Leif Johnson and Craig Corcoran",
"['Leif Johnson' 'Craig Corcoran']"
] |